xeon auggy

2 x Intel Xeon Platinum 8380 testing with an Intel M50CYP2SB2U motherboard (SE5C6200.86B.0022.D08.2103221623 BIOS) and ASPEED graphics on Ubuntu 22.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2308067-NE-XEONAUGGY65

Tests in this result file by category:

- C++ Boost Tests: 2 tests
- CPU Massive: 4 tests
- Creator Workloads: 7 tests
- Database Test Suite: 3 tests
- Encoding: 3 tests
- Game Development: 2 tests
- HPC - High Performance Computing: 2 tests
- Multi-Core: 7 tests
- NVIDIA GPU Compute: 2 tests
- Intel oneAPI: 3 tests
- OpenMPI Tests: 3 tests
- Python Tests: 2 tests
- Renderers: 2 tests
- Software Defined Radio: 2 tests
- Server: 3 tests
- Server CPU Tests: 4 tests
- Video Encoding: 2 tests


Test runs:

a: August 06 2023, 3 Hours, 24 Minutes
b: August 06 2023, 3 Hours, 43 Minutes
Average run duration: 3 Hours, 33 Minutes


xeon auggy (OpenBenchmarking.org, Phoronix Test Suite)

Processor: 2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads)
Motherboard: Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS)
Chipset: Intel Ice Lake IEH
Memory: 512GB
Disk: 7682GB INTEL SSDPF2KX076TZ
Graphics: ASPEED
Monitor: VE228
Network: 2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP
OS: Ubuntu 22.10
Kernel: 6.2.0-rc5-phx-dodt (x86_64)
Desktop: GNOME Shell 43.0
Display Server: X Server 1.21.1.3
Vulkan: 1.3.224
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 1920x1080

Xeon Auggy Benchmarks, System Logs:
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_pstate performance (EPP: performance)
- CPU Microcode: 0xd000389
- OpenJDK Runtime Environment (build 11.0.19+7-post-Ubuntu-0ubuntu122.10.1)
- Python 3.10.7
- Security: dodt: Mitigation of DOITM + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

[a vs. b comparison chart: per-test percentage deltas between the two runs, covering Stress-NG, Apache IoTDB, NCNN, HeFFTe, Liquid-DSP, and libxsmm results; the axis extends to +84.4%. The largest swings were Stress-NG Cloning (22.9%) and Stress-NG Pipe (21.8%); most other tests differed by only a few percent.]

[Flattened full-results table for runs a and b, covering every test in this file: libxsmm, Stress-NG, Apache IoTDB, NCNN, HeFFTe, Liquid-DSP, VVenC, Remhos, Z3, Embree, OSPRay, Intel Open Image Denoise, Blender, QuantLib, srsRAN, GPAW, dav1d, CouchDB, Opus encoding, and a GCC compile. The per-test results below present the same data in readable form.]

libxsmm

Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm supports making use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.
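To make the GFLOPS/s figures below concrete, here is a minimal pure-Python sketch of how a GEMM benchmark derives that metric: time a dense matrix multiply and divide the operation count (2*n^3 for an n x n GEMM) by the elapsed time. This is only an illustration of the metric; libxsmm itself JIT-generates architecture-specific kernels rather than running a naive loop.

```python
import time

def gemm(a, b, n):
    # Naive n x n matrix multiply (illustration only).
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            aik = a[i][k]
            row = b[k]
            ci = c[i]
            for j in range(n):
                ci[j] += aik * row[j]
    return c

def gemm_gflops(n, seconds):
    # A dense n x n GEMM performs 2*n^3 floating-point operations
    # (one multiply and one add per inner-loop step).
    return 2.0 * n ** 3 / seconds / 1e9

n = 64  # small size so the pure-Python loop stays quick
a = [[1.0] * n for _ in range(n)]
b = [[1.0] * n for _ in range(n)]
t0 = time.perf_counter()
c = gemm(a, b, n)
elapsed = time.perf_counter() - t0
print(f"{gemm_gflops(n, elapsed):.4f} GFLOP/s")
```

A tuned library reports orders of magnitude more GFLOPS/s than this naive loop precisely because of the vectorized kernels the description mentions.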

libxsmm 2-1.17-3645, M N K: 128 (GFLOPS/s, more is better):
a: 1055.31 (SE ±54.55, N = 2)
b: 946.71
(CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.
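The "Bogo Ops/s" unit in the Stress-NG results below is simply the number of completed stressor iterations per second. A minimal sketch, assuming nothing about stress-ng's internals, of a pipe-style stressor: push a buffer through an OS pipe in a tight loop for a fixed duration and count round trips.

```python
import os
import time

def stress_pipe(duration=0.2, payload=b"x" * 4096):
    # Minimal 'bogo ops' pipe stressor sketch: write/read one buffer
    # through an OS pipe in a loop and count completed round trips.
    # (Illustration only, not the stress-ng implementation.)
    r, w = os.pipe()
    ops = 0
    deadline = time.perf_counter() + duration
    try:
        while time.perf_counter() < deadline:
            os.write(w, payload)
            os.read(r, len(payload))
            ops += 1
    finally:
        os.close(r)
        os.close(w)
    return ops / duration  # bogo ops per second

rate = stress_pipe()
print(f"{rate:.0f} ops/s")
```

The real Pipe stressor saturates all CPUs with many such loops at once, which is why its counts reach tens of millions of ops/s on this 160-thread machine.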

Stress-NG 0.15.10, Test: Cloning (Bogo Ops/s, more is better):
a: 16195.03 (SE ±3270.70, N = 2; min 12924.33 / max 19465.73)
b: 13172.81 (SE ±654.66, N = 2; min 12518.15 / max 13827.47)
(CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG 0.15.10, Test: Pipe (Bogo Ops/s, more is better):
a: 40500166.81 (SE ±2523742.74, N = 2; min 37976424.07 / max 43023909.54)
b: 49325396.97 (SE ±6369572.71, N = 2; min 42955824.26 / max 55694969.67)

Apache IoTDB

Apache IoTDB 1.1.2, Device Count: 100, Batch Size Per Write: 100, Sensor Count: 500:
Average Latency (fewer is better): a: 109.40 (max 2142.92); b: 96.18 (max 1249.92)
Throughput, point/sec (more is better): a: 39562245.22; b: 43021501.40

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
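The NCNN results below report per-inference latency in milliseconds with min/avg/max spreads. A small, library-agnostic sketch of that measurement pattern: warm up, then time repeated forward passes and keep the distribution. The `fake_forward` workload is a hypothetical stand-in, not an NCNN call.

```python
import time
import statistics

def time_inference(fn, warmup=3, runs=20):
    # Benchmark harness in the style of inference latency tests:
    # discard warmup runs, then record per-run wall time in ms.
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1000.0)
    return min(samples), statistics.mean(samples), max(samples)

def fake_forward():
    # Hypothetical stand-in for a model forward pass (e.g. resnet18).
    total = 0
    for i in range(10000):
        total += i * i
    return total

lo, mean, hi = time_inference(fake_forward)
print(f"min {lo:.2f} ms / avg {mean:.2f} ms / max {hi:.2f} ms")
```

The warmup step matters: the first runs after model load are routinely slower (caches, JIT, frequency scaling), which is also why the MAX values in the NCNN results far exceed the averages.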

NCNN 20230517, Target: CPU, Model: resnet18 (ms, fewer is better):
a: 10.31 (SE ±1.04, N = 2; min 9.27 / max 11.34)
b: 9.63 (SE ±0.31, N = 2; min 9.32 / max 9.93)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Apache IoTDB

Apache IoTDB 1.1.2, Device Count: 500, Batch Size Per Write: 1, Sensor Count: 200, Average Latency (fewer is better): a: 13.25 (max 896.77); b: 14.12 (max 878.17)

HeFFTe - Highly Efficient FFT for Exascale

HeFFTe is the Highly Efficient FFT for Exascale software developed as part of the Exascale Computing Project. This test profile uses HeFFTe's built-in speed benchmarks under a variety of configuration options and currently catering to CPU/processor testing. Learn more via the OpenBenchmarking.org test page.
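The HeFFTe results below are reported in GFLOP/s. For context, a minimal sketch of both pieces: a radix-2 Cooley-Tukey FFT (the transform HeFFTe distributes across nodes) and the conventional 5*N*log2(N) flop estimate that FFT benchmarks use to convert runtime into GFLOP/s. This is an assumption about the standard convention, not HeFFTe's internal accounting.

```python
import cmath
import math

def fft(x):
    # Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two.
    n = len(x)
    if n == 1:
        return [complex(x[0])]
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def fft_gflops(nx, ny, nz, seconds):
    # Conventional flop estimate for a complex 3-D FFT: 5 * N * log2(N)
    # with N = nx*ny*nz, as commonly used in FFT benchmark reports.
    n = nx * ny * nz
    return 5.0 * n * math.log2(n) / seconds / 1e9

# Two impulses -> alternating spectrum magnitudes [2, 0, 2, 0, ...]
spectrum = fft([1, 0, 0, 0, 1, 0, 0, 0])
print([round(abs(v), 6) for v in spectrum])
```

Under that convention, a 128^3 complex transform is about 0.22 GFLOP of work, so the ~150 GFLOP/s figures below correspond to transforms completing in a couple of milliseconds.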

HeFFTe 2.3, Test: r2c, Backend: FFTW, Precision: double, X Y Z: 128 (GFLOP/s, more is better):
a: 156.22
b: 148.18 (SE ±2.45, N = 2; min 145.73 / max 150.62)
(CXX) g++ options: -O3

Apache IoTDB

Apache IoTDB 1.1.2, Device Count: 500, Batch Size Per Write: 1, Sensor Count: 200, point/sec (more is better): a: 1199743.22; b: 1141859.25

NCNN


NCNN 20230517, Target: CPU, Model: FastestDet (ms, fewer is better):
a: 9.62 (SE ±0.05, N = 2; min 9.57 / max 9.67)
b: 10.01 (SE ±0.28, N = 2; min 9.72 / max 10.29)

NCNN 20230517, Target: CPU, Model: alexnet (ms, fewer is better):
a: 5.65 (SE ±0.43, N = 2; min 5.22 / max 6.08)
b: 5.44 (SE ±0.22, N = 2; min 5.22 / max 5.65)

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
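The Liquid-DSP results below stream sample buffers through FIR filters (e.g. 256-sample buffers, 57-tap filters) and report filtered samples per second. A minimal sketch of the underlying operation, a direct-form FIR filter, where each output is the dot product of the tap vector with the most recent inputs. This is an illustration of the DSP primitive, not the liquid API.

```python
def fir_filter(taps, samples):
    # Direct-form FIR filter: y[n] = sum_k taps[k] * x[n-k].
    n = len(taps)
    history = [0.0] * n  # most recent sample first
    out = []
    for s in samples:
        history = [s] + history[:-1]
        out.append(sum(t * h for t, h in zip(taps, history)))
    return out

# 3-tap moving average over a constant signal: output ramps up to 3.0
y = fir_filter([1 / 3, 1 / 3, 1 / 3], [3.0, 3.0, 3.0, 3.0])
print(y)
```

Each output sample costs one multiply-accumulate per tap, so a 57-tap filter at ~2.5 billion samples/s (the 128-thread result below) is on the order of 10^11 multiply-accumulates per second across the machine.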

Liquid-DSP 1.6, Threads: 128, Buffer Length: 256, Filter Length: 57 (samples/s, more is better):
a: 2519200000
b: 2426350000 (SE ±13350000, N = 2; min 2413000000 / max 2439700000)
(CC) gcc options: -O3 -pthread -lm -lc -lliquid

HeFFTe - Highly Efficient FFT for Exascale


HeFFTe 2.3, Test: r2c, Backend: FFTW, Precision: float, X Y Z: 256 (GFLOP/s, more is better):
a: 222.22
b: 230.54 (SE ±3.31, N = 2; min 227.23 / max 233.85)

HeFFTe 2.3, Test: c2c, Backend: FFTW, Precision: float, X Y Z: 256 (GFLOP/s, more is better):
a: 102.28
b: 98.83 (SE ±0.00, N = 2)

NCNN


NCNN 20230517, Target: CPU, Model: resnet50 (ms, fewer is better):
a: 17.57 (SE ±0.36, N = 2; min 17.21 / max 17.93)
b: 18.15 (SE ±0.83, N = 2; min 17.32 / max 18.98)

NCNN 20230517, Target: CPU, Model: regnety_400m (ms, fewer is better):
a: 38.20 (SE ±0.86, N = 2; min 37.34 / max 39.06)
b: 39.37 (SE ±1.10, N = 2; min 38.27 / max 40.46)

Apache IoTDB

Apache IoTDB 1.1.2, Average Latency (fewer is better):
Device Count: 200, Batch Size Per Write: 1, Sensor Count: 200: a: 14.84 (max 605.55); b: 14.42 (max 596.84)
Device Count: 100, Batch Size Per Write: 1, Sensor Count: 200: a: 17.54 (max 680.16); b: 18.04 (max 597.99)

HeFFTe - Highly Efficient FFT for Exascale


HeFFTe 2.3, Test: c2c, Backend: FFTW, Precision: double, X Y Z: 128 (GFLOP/s, more is better):
a: 94.45
b: 91.90 (SE ±1.46, N = 2; min 90.45 / max 93.36)

NCNN


NCNN 20230517, Target: CPU, Model: blazeface (ms, fewer is better):
a: 4.49 (SE ±0.09, N = 2; min 4.40 / max 4.57)
b: 4.61 (SE ±0.01, N = 2; min 4.60 / max 4.61)

HeFFTe - Highly Efficient FFT for Exascale


HeFFTe 2.3, Test: c2c, Backend: Stock, Precision: float, X Y Z: 256 (GFLOP/s, more is better):
a: 101.80
b: 104.35 (SE ±0.34, N = 2; min 104.00 / max 104.69)

NCNN


NCNN 20230517, Target: CPU, Model: mnasnet (ms, fewer is better):
a: 7.61 (SE ±0.07, N = 2; min 7.54 / max 7.67)
b: 7.43 (SE ±0.01, N = 2; min 7.42 / max 7.44)

HeFFTe - Highly Efficient FFT for Exascale


HeFFTe 2.3, Test: c2c, Backend: Stock, Precision: double, X Y Z: 128 (GFLOP/s, more is better):
a: 69.48
b: 67.95 (SE ±1.01, N = 2; min 66.94 / max 68.95)

NCNN


NCNN 20230517, Target: CPU, Model: squeezenet_ssd (ms, fewer is better):
a: 15.78 (SE ±0.07, N = 2; min 15.71 / max 15.85)
b: 16.13 (SE ±0.41, N = 2; min 15.72 / max 16.54)

Apache IoTDB

Apache IoTDB 1.1.2, Device Count: 500, Batch Size Per Write: 1, Sensor Count: 500, point/sec (more is better): a: 1343156.56; b: 1372429.58

NCNN


NCNN 20230517, Target: CPU, Model: vision_transformer (ms, fewer is better):
a: 46.50 (SE ±2.46, N = 2; min 44.04 / max 48.95)
b: 45.56 (SE ±1.19, N = 2; min 44.37 / max 46.75)

NCNN 20230517, Target: CPU, Model: googlenet (ms, fewer is better):
a: 17.06 (SE ±1.05, N = 2; min 16.01 / max 18.10)
b: 16.72 (SE ±0.37, N = 2; min 16.35 / max 17.09)

Stress-NG


Stress-NG 0.15.10, Test: Pthread (Bogo Ops/s, more is better):
a: 92131.54 (SE ±279.15, N = 2; min 91852.39 / max 92410.69)
b: 90361.70 (SE ±894.90, N = 2; min 89466.80 / max 91256.60)

Apache IoTDB

Apache IoTDB 1.1.2, Device Count: 500, Batch Size Per Write: 1, Sensor Count: 500, Average Latency (fewer is better): a: 33.38 (max 934.86); b: 32.75 (max 992.49)

HeFFTe - Highly Efficient FFT for Exascale


HeFFTe 2.3, Test: r2c, Backend: Stock, Precision: float, X Y Z: 128 (GFLOP/s, more is better):
a: 185.45
b: 182.03 (SE ±0.86, N = 2; min 181.17 / max 182.89)

Z3 Theorem Prover

The Z3 Theorem Prover / SMT solver is developed by Microsoft Research under the MIT license. Learn more via the OpenBenchmarking.org test page.
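For intuition about what the solver is doing in the timed runs below: an SMT solver decides whether a set of constraints has a satisfying assignment. A toy baseline, assuming nothing about Z3's internals, is exhaustive search over a tiny finite domain; Z3's value is that it avoids this enumeration through theory-aware reasoning, which is what makes the .smt2 inputs above tractable at all.

```python
from itertools import product

def brute_force_sat(constraint, domain, nvars):
    # Decide a constraint by exhaustive search over domain^nvars.
    # Returns a satisfying assignment or None. (Toy illustration;
    # real SMT inputs are far beyond enumeration.)
    for assignment in product(domain, repeat=nvars):
        if constraint(*assignment):
            return assignment
    return None

# Example: is there x, y in 0..7 with x + y == 5 and x * y == 6?
model = brute_force_sat(lambda x, y: x + y == 5 and x * y == 6, range(8), 2)
print(model)
```

The search visits assignments in lexicographic order, so it returns the first model it finds, here x = 2, y = 3.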

Z3 Theorem Prover 4.12.1, SMT File: 1.smt2 (Seconds, fewer is better):
a: 25.71 (SE ±0.03, N = 2; min 25.69 / max 25.74)
b: 25.25 (SE ±0.07, N = 2; min 25.18 / max 25.32)
(CXX) g++ options: -lpthread -std=c++17 -fvisibility=hidden -mfpmath=sse -msse -msse2 -O3 -fPIC

Apache IoTDB

Apache IoTDB 1.1.2, Device Count: 200, Batch Size Per Write: 1, Sensor Count: 200, point/sec (more is better): a: 904320.60; b: 920435.77

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.
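The core primitive inside a path tracer like the one benchmarked below is an intersection test between a ray and scene geometry. A minimal sketch of the closest-hit ray/sphere test, solving |o + t*d - c|^2 = r^2 for the smallest positive t; Embree's contribution is executing vast batches of such tests with SSE/AVX/AVX-512 vectorization, which this scalar sketch does not attempt.

```python
import math

def ray_sphere(origin, direction, center, radius):
    # Closest-hit ray/sphere intersection via the quadratic formula.
    # Returns the smallest positive ray parameter t, or None on a miss.
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None  # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0 else None

# Unit sphere centered at z = 5, ray along +z: hits the near surface at t = 4.
t = ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
print(t)
```

At the ~70 frames per second Embree reports on the Crown model below, the kernels are resolving hundreds of millions of such ray queries per second.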

Embree 4.1, Binary: Pathtracer, Model: Crown (Frames Per Second, more is better):
a: 72.04
b: 70.79 (SE ±0.12, N = 2; min 70.67 / max 70.91)

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12, Benchmark: particle_volume/scivis/real_time (Items Per Second, more is better):
a: 24.36
b: 24.78 (SE ±0.03, N = 2; min 24.75 / max 24.82)

HeFFTe - Highly Efficient FFT for Exascale


HeFFTe 2.3, Test: r2c, Backend: Stock, Precision: double, X Y Z: 512 (GFLOP/s, more is better):
a: 94.26 (SE ±0.13, N = 2; min 94.14 / max 94.39)
b: 92.72 (SE ±0.73, N = 2; min 91.98 / max 93.45)

Apache IoTDB

Apache IoTDB 1.1.2, Device Count: 100, Batch Size Per Write: 1, Sensor Count: 200, point/sec (more is better): a: 638644.35; b: 628202.55

OSPRay


OSPRay 2.12, Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, more is better):
a: 20.81
b: 20.48 (SE ±0.05, N = 2; min 20.43 / max 20.53)

Apache IoTDB

Apache IoTDB 1.1.2, Device Count: 100, Batch Size Per Write: 100, Sensor Count: 200, point/sec (more is better): a: 34266143.85; b: 34807016.85

HeFFTe - Highly Efficient FFT for Exascale


HeFFTe 2.3, Test: c2c, Backend: FFTW, Precision: double, X Y Z: 512 (GFLOP/s, more is better):
a: 49.44 (SE ±0.29, N = 2; min 49.15 / max 49.73)
b: 48.72 (SE ±0.39, N = 2; min 48.33 / max 49.11)

HeFFTe 2.3, Test: r2c, Backend: Stock, Precision: float, X Y Z: 512 (GFLOP/s, more is better):
a: 176.63 (SE ±0.68, N = 2; min 175.95 / max 177.31)
b: 174.11 (SE ±0.12, N = 2; min 173.99 / max 174.24)

Apache IoTDB

Apache IoTDB 1.1.2, Device Count: 100, Batch Size Per Write: 100, Sensor Count: 200, Average Latency (fewer is better): a: 42.79 (max 855.16); b: 42.19 (max 784.56)

Liquid-DSP


Liquid-DSP 1.6 - Threads: 16 - Buffer Length: 256 - Filter Length: 512 (samples/s, More Is Better)
  a: 201615000 (SE +/- 795000, N = 2; Min: 200820000 / Max: 202410000)
  b: 198790000 (SE +/- 650000, N = 2; Min: 198140000 / Max: 199440000)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Faster (Frames Per Second, More Is Better)
  a: 10.28 (SE +/- 0.03, N = 2; Min: 10.26 / Max: 10.31)
  b: 10.43 (SE +/- 0.10, N = 2; Min: 10.33 / Max: 10.53)
  1. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

Remhos

Remhos (REMap High-Order Solver) is a miniapp that solves the pure advection equations that are used to perform monotonic and conservative discontinuous field interpolation (remap) as part of the Eulerian phase in Arbitrary Lagrangian Eulerian (ALE) simulations. Learn more via the OpenBenchmarking.org test page.
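The key property of Remhos's remap phase is that it moves field mass between cells monotonically and conservatively — nothing is created or destroyed. The toy sketch below illustrates conservation with a first-order upwind step on a periodic 1-D grid; it is far simpler than Remhos's high-order discontinuous Galerkin machinery and stands in only for the invariant.

```python
def upwind_advect(u, c):
    """One conservative first-order upwind step for du/dt + a*du/dx = 0
    on a periodic 1-D grid; c = a*dt/dx is the CFL number in [0, 1]."""
    n = len(u)
    return [u[i] - c * (u[i] - u[(i - 1) % n]) for i in range(n)]

u = [0.0, 1.0, 0.0, 0.0]
for _ in range(8):
    u = upwind_advect(u, 0.5)
# Mass is only transferred between neighboring cells, never created
# or destroyed, so the total stays exactly 1.0.
print(round(sum(u), 12))
```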

Remhos 1.0 - Test: Sample Remap Example (Seconds, Fewer Is Better)
  a: 12.20
  b: 12.37 (SE +/- 0.10, N = 2; Min: 12.26 / Max: 12.47)
  1. (CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.
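Each NCNN entry below reduces repeated timed inference runs to the Avg, run Min/Max, and standard-error (SE) figures shown. A sketch of that reduction, using run "a"'s two efficientnet-b0 times from the entry that follows:

```python
import statistics

def summarize_runs(runs_ms):
    """Collapse repeated latency measurements into the Avg/Min/Max and
    standard-error (SE) figures shown in these results."""
    n = len(runs_ms)
    avg = sum(runs_ms) / n
    se = statistics.stdev(runs_ms) / n ** 0.5 if n > 1 else 0.0
    return {"avg": avg, "min": min(runs_ms), "max": max(runs_ms),
            "se": se, "n": n}

# Run "a" of efficientnet-b0: 11.25 ms and 11.70 ms
# -> Avg 11.475 (reported as 11.48), SE 0.225 (reported as +/- 0.23).
print(summarize_runs([11.25, 11.70]))
```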

NCNN 20230517 - Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better)
  a: 11.48 (SE +/- 0.23, N = 2; Min: 11.25 / Max: 11.7; MIN: 10.9 / MAX: 56.34)
  b: 11.64 (SE +/- 0.26, N = 2; Min: 11.38 / Max: 11.9; MIN: 10.85 / MAX: 37.54)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Liquid-DSP

Liquid-DSP 1.6 - Threads: 16 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
  a: 615105000 (SE +/- 3155000, N = 2; Min: 611950000 / Max: 618260000)
  b: 623535000 (SE +/- 11305000, N = 2; Min: 612230000 / Max: 634840000)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

HeFFTe - Highly Efficient FFT for Exascale

HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: c2c - Backend: FFTW - Precision: double - X Y Z: 256 (GFLOP/s, More Is Better)
  a: 45.85
  b: 46.46 (SE +/- 0.55, N = 2; Min: 45.91 / Max: 47.01)
  1. (CXX) g++ options: -O3

Liquid-DSP

Liquid-DSP 1.6 - Threads: 160 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
  a: 2602300000
  b: 2636450000 (SE +/- 17250000, N = 2; Min: 2619200000 / Max: 2653700000)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, More Is Better)
  a: 21.21
  b: 20.95 (SE +/- 0.22, N = 2; Min: 20.73 / Max: 21.16)

libxsmm

Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm supports making use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.
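A dense M×N×K matrix multiply costs 2·M·N·K flops (one multiply plus one add per inner-product term), which is how GFLOPS figures like those below are derived from timings. The reference sketch below only shows the math being counted — libxsmm itself JIT-compiles vectorized AVX-512/AMX kernels for this operation.

```python
def matmul(A, B):
    """Naive dense matrix multiply -- the reference semantics for the
    operation libxsmm JIT-compiles into SIMD kernels."""
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][p] * B[p][j] for p in range(inner))
             for j in range(cols)] for i in range(rows)]

def gemm_gflops(m, n, k, seconds):
    """2*M*N*K flops per GEMM: one multiply plus one add per term."""
    return 2.0 * m * n * k / seconds / 1e9

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# At M = N = K = 256, a sustained ~600 GFLOPS implies each multiply
# completes in roughly 56 microseconds.
```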

libxsmm 2-1.17-3645 - M N K: 256 (GFLOPS/s, More Is Better)
  a: 599.8
  b: 592.5 (SE +/- 2.65, N = 2; Min: 589.8 / Max: 595.1)
  1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

NCNN

NCNN 20230517 - Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better)
  a: 24.77 (SE +/- 0.65, N = 2; Min: 24.12 / Max: 25.41; MIN: 22.68 / MAX: 208.18)
  b: 24.48 (SE +/- 0.64, N = 2; Min: 23.84 / Max: 25.11; MIN: 22.66 / MAX: 47.54)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

HeFFTe - Highly Efficient FFT for Exascale

HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: c2c - Backend: Stock - Precision: float - X Y Z: 512 (GFLOP/s, More Is Better)
  a: 93.33 (SE +/- 0.40, N = 2; Min: 92.93 / Max: 93.74)
  b: 92.26 (SE +/- 0.24, N = 2; Min: 92.02 / Max: 92.5)
  1. (CXX) g++ options: -O3

Liquid-DSP

Liquid-DSP 1.6 - Threads: 64 - Buffer Length: 256 - Filter Length: 32 (samples/s, More Is Better)
  a: 1805000000
  b: 1825450000 (SE +/- 2750000, N = 2; Min: 1822700000 / Max: 1828200000)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP 1.6 - Threads: 32 - Buffer Length: 256 - Filter Length: 512 (samples/s, More Is Better)
  a: 400730000
  b: 396265000 (SE +/- 2775000, N = 2; Min: 393490000 / Max: 399040000)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.
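Stress-NG reports "Bogo Ops/s": a deliberately bogus absolute number that is still comparable between runs of the same stressor, counting stressor iterations completed per second of wall time. A minimal sketch of that metric follows, with a stand-in floating-point workload rather than Stress-NG's actual vector FP stressor.

```python
import time

def bogo_ops_per_sec(op, duration=0.1):
    """Count completions of `op` within a fixed wall-clock window and
    report the rate -- the idea behind Stress-NG's bogo ops/s metric."""
    deadline = time.perf_counter() + duration
    ops = 0
    while time.perf_counter() < deadline:
        op()
        ops += 1
    return ops / duration

# Toy floating-point workload standing in for a real stressor.
rate = bogo_ops_per_sec(lambda: sum(x * 1.000001 for x in range(100)))
print(rate > 0.0)
```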

Stress-NG 0.15.10 - Test: Vector Floating Point (Bogo Ops/s, More Is Better)
  a: 132479.08 (SE +/- 872.22, N = 2; Min: 131606.86 / Max: 133351.3)
  b: 131100.25 (SE +/- 235.00, N = 2; Min: 130865.24 / Max: 131335.25)
  1. (CXX) g++ options: -O2 -std=gnu99 -lc

Liquid-DSP

Liquid-DSP 1.6 - Threads: 32 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
  a: 1197700000
  b: 1185500000 (SE +/- 20600000, N = 2; Min: 1164900000 / Max: 1206100000)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

NCNN

NCNN 20230517 - Target: CPU - Model: mobilenet (ms, Fewer Is Better)
  a: 16.06 (SE +/- 0.82, N = 2; Min: 15.24 / Max: 16.88; MIN: 14.92 / MAX: 25.43)
  b: 15.90 (SE +/- 0.28, N = 2; Min: 15.62 / Max: 16.18; MIN: 15.2 / MAX: 39.56)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 2.0 - Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec, More Is Better)
  a: 3.04
  b: 3.01 (SE +/- 0.00, N = 2; Min: 3 / Max: 3.01)

libxsmm

libxsmm 2-1.17-3645 - M N K: 32 (GFLOPS/s, More Is Better)
  a: 633.2
  b: 639.3 (SE +/- 2.35, N = 2; Min: 636.9 / Max: 641.6)
  1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

Liquid-DSP

Liquid-DSP 1.6 - Threads: 16 - Buffer Length: 256 - Filter Length: 32 (samples/s, More Is Better)
  a: 493660000 (SE +/- 1500000, N = 2; Min: 492160000 / Max: 495160000)
  b: 498410000 (SE +/- 830000, N = 2; Min: 497580000 / Max: 499240000)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Z3 Theorem Prover

The Z3 Theorem Prover / SMT solver is developed by Microsoft Research under the MIT license. Learn more via the OpenBenchmarking.org test page.
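For context on what the timings below measure: an SMT query asks whether a set of constraints is jointly satisfiable. The brute-force check below illustrates the question on pure boolean clauses only — Z3 answers it with CDCL search plus theory solvers rather than enumeration, which is how it handles large .smt2 files like these in minutes instead of never.

```python
from itertools import product

def brute_force_sat(n_vars, clauses):
    """Brute-force satisfiability over n boolean variables. A clause is
    a list of ints: k asserts variable k, -k asserts its negation.
    (Illustration only -- Z3 uses CDCL + theory solvers, not
    exhaustive enumeration.)"""
    for assign in product([False, True], repeat=n_vars):
        if all(any(assign[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assign
    return None

# (x1 or x2) and (not x1 or x2) is satisfiable with x2 = True:
print(brute_force_sat(2, [[1, 2], [-1, 2]]))
```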

Z3 Theorem Prover 4.12.1 - SMT File: 2.smt2 (Seconds, Fewer Is Better)
  a: 88.00 (SE +/- 0.02, N = 2; Min: 87.98 / Max: 88.02)
  b: 87.18 (SE +/- 0.05, N = 2; Min: 87.13 / Max: 87.23)
  1. (CXX) g++ options: -lpthread -std=c++17 -fvisibility=hidden -mfpmath=sse -msse -msse2 -O3 -fPIC

VVenC

VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Fast (Frames Per Second, More Is Better)
  a: 5.672 (SE +/- 0.019, N = 2; Min: 5.65 / Max: 5.69)
  b: 5.717 (SE +/- 0.063, N = 2; Min: 5.65 / Max: 5.78)
  1. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

NCNN

NCNN 20230517 - Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better)
  a: 9.82 (SE +/- 0.03, N = 2; Min: 9.79 / Max: 9.85; MIN: 9.6 / MAX: 12.61)
  b: 9.75 (SE +/- 0.06, N = 2; Min: 9.69 / Max: 9.81; MIN: 9.56 / MAX: 13.68)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 - Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better)
  a: 8.71 (SE +/- 0.14, N = 2; Min: 8.57 / Max: 8.85; MIN: 8.43 / MAX: 9.8)
  b: 8.77 (SE +/- 0.03, N = 2; Min: 8.74 / Max: 8.8; MIN: 8.59 / MAX: 32.78)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Intel Open Image Denoise

Intel Open Image Denoise 2.0 - Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only (Images / Sec, More Is Better)
  a: 1.47
  b: 1.46 (SE +/- 0.00, N = 2; Min: 1.45 / Max: 1.46)

Intel Open Image Denoise 2.0 - Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec, More Is Better)
  a: 3.05
  b: 3.03 (SE +/- 0.01, N = 2; Min: 3.02 / Max: 3.05)

Liquid-DSP

Liquid-DSP 1.6 - Threads: 64 - Buffer Length: 256 - Filter Length: 512 (samples/s, More Is Better)
  a: 725840000
  b: 730310000 (SE +/- 1250000, N = 2; Min: 729060000 / Max: 731560000)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Blender

Blender 3.6 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds, Fewer Is Better)
  a: 30.59 (SE +/- 0.01, N = 2; Min: 30.58 / Max: 30.59)
  b: 30.77 (SE +/- 0.02, N = 2; Min: 30.75 / Max: 30.79)

QuantLib

QuantLib is an open-source library/framework for quantitative finance, covering modeling, trading, and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.30 (MFLOPS, More Is Better)
  a: 2622.9 (SE +/- 2.00, N = 2; Min: 2620.9 / Max: 2624.9)
  b: 2607.6 (SE +/- 1.80, N = 2; Min: 2605.8 / Max: 2609.4)
  1. (CXX) g++ options: -O3 -march=native -fPIE -pie

OSPRay

OSPRay 2.12 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, More Is Better)
  a: 22.70
  b: 22.58 (SE +/- 0.00, N = 2; Min: 22.57 / Max: 22.58)

Liquid-DSP

Liquid-DSP 1.6 - Threads: 128 - Buffer Length: 256 - Filter Length: 32 (samples/s, More Is Better)
  a: 2961100000
  b: 2945150000 (SE +/- 6350000, N = 2; Min: 2938800000 / Max: 2951500000)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

HeFFTe - Highly Efficient FFT for Exascale

HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: c2c - Backend: FFTW - Precision: float - X Y Z: 512 (GFLOP/s, More Is Better)
  a: 94.83 (SE +/- 1.11, N = 2; Min: 93.72 / Max: 95.95)
  b: 94.34 (SE +/- 0.95, N = 2; Min: 93.39 / Max: 95.3)
  1. (CXX) g++ options: -O3

HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: r2c - Backend: Stock - Precision: double - X Y Z: 256 (GFLOP/s, More Is Better)
  a: 101.94
  b: 102.43 (SE +/- 0.72, N = 2; Min: 101.7 / Max: 103.15)
  1. (CXX) g++ options: -O3

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500 (Average Latency, Fewer Is Better)
  b: 36.16 (MAX: 769.5)
  a: 35.99 (MAX: 724.8)

HeFFTe - Highly Efficient FFT for Exascale

HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: c2c - Backend: Stock - Precision: float - X Y Z: 128 (GFLOP/s, More Is Better)
  a: 107.45
  b: 107.95 (SE +/- 1.54, N = 2; Min: 106.41 / Max: 109.49)
  1. (CXX) g++ options: -O3

srsRAN Project

srsRAN Project is a complete ORAN-native 5G RAN solution created by Software Radio Systems (SRS). The srsRAN Project radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN Project 23.5 - Test: PUSCH Processor Benchmark, Throughput Total (Mbps, More Is Better)
  a: 9800.5 (SE +/- 47.35, N = 2; Min: 9753.1 / Max: 9847.8)
  b: 9756.7 (SE +/- 33.95, N = 2; Min: 9722.7 / Max: 9790.6)
  1. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest

Liquid-DSP

Liquid-DSP 1.6 - Threads: 128 - Buffer Length: 256 - Filter Length: 512 (samples/s, More Is Better)
  a: 949400000
  b: 945190000 (SE +/- 2180000, N = 2; Min: 943010000 / Max: 947370000)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

GPAW

GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.

GPAW 23.6 - Input: Carbon Nanotube (Seconds, Fewer Is Better)
  a: 45.82 (SE +/- 0.02, N = 2; Min: 45.81 / Max: 45.84)
  b: 45.64 (SE +/- 0.03, N = 2; Min: 45.61 / Max: 45.66)
  1. (CC) gcc options: -shared -fwrapv -O2 -lxc -lblas -lmpi

HeFFTe - Highly Efficient FFT for Exascale

HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: c2c - Backend: FFTW - Precision: float - X Y Z: 128 (GFLOP/s, More Is Better)
  a: 159.34
  b: 158.71 (SE +/- 0.21, N = 2; Min: 158.5 / Max: 158.92)
  1. (CXX) g++ options: -O3

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.
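The frames-per-second figures below are driven by how fast ray/primitive intersection tests run. Embree's kernels are heavily vectorized C++ (SSE/AVX/AVX-512 across ray packets); the scalar core of one such test, Möller-Trumbore ray/triangle intersection, is sketched below for illustration.

```python
def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection. Returns the hit
    distance t along direction d, or None on a miss."""
    sub = lambda a, b: [a[i] - b[i] for i in range(3)]
    dot = lambda a, b: sum(a[i] * b[i] for i in range(3))
    cross = lambda a, b: [a[1]*b[2] - a[2]*b[1],
                          a[2]*b[0] - a[0]*b[2],
                          a[0]*b[1] - a[1]*b[0]]
    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(d, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:          # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) * inv   # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(d, qvec) * inv      # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) * inv
    return t if t > eps else None

# Ray cast straight down onto the unit right triangle in the z = 0 plane:
print(ray_triangle([0.25, 0.25, 1.0], [0.0, 0.0, -1.0],
                   [0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))
```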

Embree 4.1 - Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second, More Is Better)
  a: 76.96 (SE +/- 0.03, N = 2; Min: 76.94 / Max: 76.99; MIN: 75.53 / MAX: 82.14)
  b: 77.26 (SE +/- 0.03, N = 2; Min: 77.23 / Max: 77.29; MIN: 75.78 / MAX: 81.08)

NCNN

NCNN 20230517 - Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better)
  a: 7.91 (SE +/- 0.13, N = 2; Min: 7.78 / Max: 8.04; MIN: 7.68 / MAX: 9.6)
  b: 7.94 (SE +/- 0.02, N = 2; Min: 7.92 / Max: 7.96; MIN: 7.81 / MAX: 10.99)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

HeFFTe - Highly Efficient FFT for Exascale

HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: r2c - Backend: FFTW - Precision: double - X Y Z: 512 (GFLOP/s, More Is Better)
  a: 90.57 (SE +/- 1.08, N = 2; Min: 89.49 / Max: 91.66)
  b: 90.24 (SE +/- 1.17, N = 2; Min: 89.07 / Max: 91.4)
  1. (CXX) g++ options: -O3

Liquid-DSP

Liquid-DSP 1.6 - Threads: 64 - Buffer Length: 256 - Filter Length: 57 (samples/s, More Is Better)
  a: 2069200000
  b: 2076650000 (SE +/- 13650000, N = 2; Min: 2063000000 / Max: 2090300000)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

HeFFTe - Highly Efficient FFT for Exascale

HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: c2c - Backend: Stock - Precision: double - X Y Z: 256 (GFLOP/s, More Is Better)
  a: 46.66
  b: 46.51 (SE +/- 0.08, N = 2; Min: 46.42 / Max: 46.59)
  1. (CXX) g++ options: -O3

VVenC

VVenC 1.9 - Video Input: Bosphorus 1080p - Video Preset: Faster (Frames Per Second, More Is Better)
  a: 29.08 (SE +/- 0.37, N = 2; Min: 28.71 / Max: 29.44)
  b: 29.18 (SE +/- 0.23, N = 2; Min: 28.95 / Max: 29.4)
  1. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

libxsmm

libxsmm 2-1.17-3645 - M N K: 64 (GFLOPS/s, More Is Better)
  a: 1219.9
  b: 1216.0 (SE +/- 1.25, N = 2; Min: 1214.7 / Max: 1217.2)
  1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

NCNN

NCNN 20230517 - Target: CPU - Model: vgg16 (ms, Fewer Is Better)
  a: 26.27 (SE +/- 0.84, N = 2; Min: 25.43 / Max: 27.1; MIN: 24.05 / MAX: 301.35)
  b: 26.19 (SE +/- 0.34, N = 2; Min: 25.85 / Max: 26.53; MIN: 24.19 / MAX: 341.6)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Blender

Blender 3.6 - Blend File: BMW27 - Compute: CPU-Only (Seconds, Fewer Is Better)
  a: 23.69 (SE +/- 0.09, N = 2; Min: 23.6 / Max: 23.78)
  b: 23.62 (SE +/- 0.06, N = 2; Min: 23.56 / Max: 23.68)

Liquid-DSP

Liquid-DSP 1.6, Threads: 160 - Buffer Length: 256 - Filter Length: 32 (samples/s, more is better)
  a: 3390700000
  b: 3381950000 (SE +/- 9150000.00, N = 2; Min: 3372800000 / Avg: 3381950000 / Max: 3391100000)
  (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Blender

Blender 3.6, Blend File: Classroom - Compute: CPU-Only (Seconds, fewer is better)
  a: 62.35 (SE +/- 0.04, N = 2; Min: 62.31 / Avg: 62.35 / Max: 62.38)
  b: 62.51 (SE +/- 0.19, N = 2; Min: 62.32 / Avg: 62.51 / Max: 62.7)

Apache IoTDB

Apache IoTDB 1.1.2, Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500 (point/sec, more is better)
  b: 1137612.61
  a: 1134736.54

Liquid-DSP

Liquid-DSP 1.6, Threads: 1 - Buffer Length: 256 - Filter Length: 512 (samples/s, more is better)
  a: 13323000 (SE +/- 1000.00, N = 2; Min: 13322000 / Avg: 13323000 / Max: 13324000)
  b: 13291000 (SE +/- 34000.00, N = 2; Min: 13257000 / Avg: 13291000 / Max: 13325000)
  (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Apache IoTDB

Apache IoTDB 1.1.2, Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500 (point/sec, more is better)
  b: 992909.69
  a: 995259.68

HeFFTe - Highly Efficient FFT for Exascale

HeFFTe is the Highly Efficient FFT for Exascale software developed as part of the Exascale Computing Project. This test profile uses HeFFTe's built-in speed benchmarks across a variety of configuration options, here exercising the CPU backends. Learn more via the OpenBenchmarking.org test page.
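The HeFFTe configurations below distinguish r2c (real-to-complex) from c2c (complex-to-complex) 3-D transforms at 128/256/512 points per dimension. NumPy equivalents, just to show why an r2c output is roughly half the size of a c2c output (Hermitian symmetry of real-input spectra); HeFFTe's own Stock and FFTW backends are what is actually timed:

```python
import numpy as np

# r2c vs c2c 3-D FFTs. A tiny grid stands in for the 128/256/512
# per-dimension sizes used in the HeFFTe runs.
n = 8
real_grid = np.random.rand(n, n, n)
r2c = np.fft.rfftn(real_grid)                  # shape (n, n, n//2 + 1)
c2c = np.fft.fftn(real_grid.astype(complex))   # shape (n, n, n)
print(r2c.shape, c2c.shape)
```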

HeFFTe 2.3, Test: r2c - Backend: Stock - Precision: float - X Y Z: 256 (GFLOP/s, more is better)
  a: 236.67
  b: 236.12 (SE +/- 2.75, N = 2; Min: 233.37 / Avg: 236.12 / Max: 238.87)
  (CXX) g++ options: -O3

HeFFTe 2.3, Test: c2c - Backend: Stock - Precision: double - X Y Z: 512 (GFLOP/s, more is better)
  a: 47.28 (SE +/- 0.14, N = 2; Min: 47.14 / Avg: 47.28 / Max: 47.42)
  b: 47.39 (SE +/- 0.07, N = 2; Min: 47.31 / Avg: 47.39 / Max: 47.46)
  (CXX) g++ options: -O3

Liquid-DSP

Liquid-DSP 1.6, Threads: 1 - Buffer Length: 256 - Filter Length: 32 (samples/s, more is better)
  a: 32338000 (SE +/- 0.00, N = 2; flat across both runs)
  b: 32267000 (SE +/- 0.00, N = 2; flat across both runs)
  (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Apache IoTDB

Apache IoTDB 1.1.2, Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500 (Average Latency, fewer is better)
  b: 36.82 (MAX: 691.5)
  a: 36.74 (MAX: 793.88)

Blender

Blender 3.6, Blend File: Barbershop - Compute: CPU-Only (Seconds, fewer is better)
  a: 239.55 (SE +/- 0.32, N = 2; Min: 239.23 / Avg: 239.55 / Max: 239.86)
  b: 239.03 (SE +/- 1.32, N = 2; Min: 237.71 / Avg: 239.03 / Max: 240.34)

HeFFTe - Highly Efficient FFT for Exascale

HeFFTe 2.3, Test: r2c - Backend: FFTW - Precision: double - X Y Z: 256 (GFLOP/s, more is better)
  a: 93.01
  b: 92.82 (SE +/- 1.52, N = 2; Min: 91.29 / Avg: 92.82 / Max: 94.34)
  (CXX) g++ options: -O3

Liquid-DSP

Liquid-DSP 1.6, Threads: 160 - Buffer Length: 256 - Filter Length: 512 (samples/s, more is better)
  a: 1013200000
  b: 1011200000 (SE +/- 1800000.00, N = 2; Min: 1009400000 / Avg: 1011200000 / Max: 1013000000)
  (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Timed GCC Compilation

This test times how long it takes to build the GNU Compiler Collection (GCC) open-source compiler. Learn more via the OpenBenchmarking.org test page.

Timed GCC Compilation 13.2, Time To Compile (Seconds, fewer is better)
  a: 957.95
  b: 956.13 (SE +/- 1.97, N = 2; Min: 954.16 / Avg: 956.13 / Max: 958.09)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.
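The "kernels" being benchmarked are ray/primitive intersection tests executed over acceleration structures. A toy ray-sphere intersection (not the Embree API, which batches such tests over BVHs with SIMD/ISPC) illustrates the core operation:

```python
import math

# Toy illustration of the core operation Embree optimizes: testing a
# ray against geometry. Plain ray-sphere intersection; direction is
# assumed unit-length, so the quadratic's leading coefficient is 1.
def ray_sphere_t(origin, direction, center, radius):
    # Solve |o + t*d - c|^2 = r^2 for the nearest positive t, or None.
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None                      # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2       # nearer root
    return t if t > 0 else None

print(ray_sphere_t((0, 0, -5), (0, 0, 1), (0, 0, 0), 1.0))  # -> 4.0
```

A path tracer like the benchmarked binaries fires millions of such queries per frame, which is why the frames-per-second scores below track intersection throughput.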

Embree 4.1, Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, more is better)
  a: 89.84 (SE +/- 0.06, N = 2; Min: 89.78 / Avg: 89.84 / Max: 89.91; MIN: 87.68 / MAX: 94.71)
  b: 90.01 (SE +/- 0.05, N = 2; Min: 89.96 / Avg: 90.01 / Max: 90.07; MIN: 87.6 / MAX: 94.43)

HeFFTe - Highly Efficient FFT for Exascale

HeFFTe 2.3, Test: r2c - Backend: Stock - Precision: double - X Y Z: 128 (GFLOP/s, more is better)
  a: 117.01
  b: 116.84 (SE +/- 4.04, N = 2; Min: 112.8 / Avg: 116.84 / Max: 120.88)
  (CXX) g++ options: -O3

HeFFTe 2.3, Test: r2c - Backend: FFTW - Precision: float - X Y Z: 512 (GFLOP/s, more is better)
  a: 170.91 (SE +/- 1.25, N = 2; Min: 169.66 / Avg: 170.91 / Max: 172.16)
  b: 171.13 (SE +/- 1.39, N = 2; Min: 169.74 / Avg: 171.13 / Max: 172.52)
  (CXX) g++ options: -O3

Embree

Embree 4.1, Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, more is better)
  a: 104.41 (SE +/- 0.32, N = 2; Min: 104.1 / Avg: 104.41 / Max: 104.73; MIN: 101.88 / MAX: 109.22)
  b: 104.55 (SE +/- 0.24, N = 2; Min: 104.31 / Avg: 104.55 / Max: 104.79; MIN: 102.2 / MAX: 108.91)

Embree 4.1, Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, more is better)
  a: 85.24 (SE +/- 0.04, N = 2; Min: 85.2 / Avg: 85.24 / Max: 85.29; MIN: 83.75 / MAX: 89.99)
  b: 85.13 (SE +/- 0.14, N = 2; Min: 84.99 / Avg: 85.13 / Max: 85.27; MIN: 83.65 / MAX: 90.45)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.15.10, Test: Fused Multiply-Add (Bogo Ops/s, more is better)
  a: 181083180.47 (SE +/- 118686.48, N = 2; Min: 180964493.98 / Avg: 181083180.47 / Max: 181201866.95)
  b: 181314757.42 (SE +/- 92010.25, N = 2; Min: 181222747.17 / Avg: 181314757.42 / Max: 181406767.67)
  (CXX) g++ options: -O2 -std=gnu99 -lc

dav1d

Dav1d is an open-source, speedy AV1 video decoder supporting modern SIMD CPU features. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.2.1, Video Input: Summer Nature 1080p (FPS, more is better)
  a: 699.97
  b: 699.09 (SE +/- 0.50, N = 2; Min: 698.59 / Avg: 699.09 / Max: 699.59)
  (CC) gcc options: -pthread -lm

HeFFTe - Highly Efficient FFT for Exascale

HeFFTe 2.3, Test: r2c - Backend: FFTW - Precision: float - X Y Z: 128 (GFLOP/s, more is better)
  a: 199.10
  b: 199.33 (SE +/- 0.39, N = 2; Min: 198.94 / Avg: 199.33 / Max: 199.72)
  (CXX) g++ options: -O3

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. It makes use of SIMD Everywhere (SIMDe) and is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9, Video Input: Bosphorus 1080p - Video Preset: Fast (Frames Per Second, more is better)
  a: 15.71 (SE +/- 0.06, N = 2; Min: 15.65 / Avg: 15.71 / Max: 15.76)
  b: 15.72 (SE +/- 0.04, N = 2; Min: 15.68 / Avg: 15.72 / Max: 15.76)
  (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

Liquid-DSP

Liquid-DSP 1.6, Threads: 32 - Buffer Length: 256 - Filter Length: 32 (samples/s, more is better)
  a: 992540000
  b: 993445000 (SE +/- 2085000.00, N = 2; Min: 991360000 / Avg: 993445000 / Max: 995530000)
  (CC) gcc options: -O3 -pthread -lm -lc -lliquid

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12, Benchmark: particle_volume/pathtracer/real_time (Items Per Second, more is better)
  a: 151.14
  b: 151.27 (SE +/- 0.63, N = 2; Min: 150.64 / Avg: 151.27 / Max: 151.91)

OSPRay 2.12, Benchmark: particle_volume/ao/real_time (Items Per Second, more is better)
  a: 24.64
  b: 24.62 (SE +/- 0.09, N = 2; Min: 24.53 / Avg: 24.62 / Max: 24.71)

dav1d

dav1d 1.2.1, Video Input: Chimera 1080p (FPS, more is better)
  a: 516.17
  b: 516.50 (SE +/- 0.06, N = 2; Min: 516.44 / Avg: 516.5 / Max: 516.56)
  (CC) gcc options: -pthread -lm

srsRAN Project

srsRAN Project is a complete ORAN-native 5G RAN solution created by Software Radio Systems (SRS). The srsRAN Project radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.

srsRAN Project 23.5, Test: PUSCH Processor Benchmark, Throughput Thread (Mbps, more is better)
  a: 164.8 (SE +/- 1.70, N = 2; Min: 163.1 / Avg: 164.8 / Max: 166.5)
  b: 164.7 (SE +/- 0.90, N = 2; Min: 163.8 / Avg: 164.7 / Max: 165.6)
  (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest

srsRAN Project 23.5, Test: Downlink Processor Benchmark (Mbps, more is better)
  a: 556.5 (SE +/- 0.70, N = 2; Min: 555.8 / Avg: 556.5 / Max: 557.2)
  b: 556.8 (SE +/- 1.25, N = 2; Min: 555.5 / Avg: 556.75 / Max: 558)
  (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest

Stress-NG

Stress-NG 0.15.10, Test: Vector Shuffle (Bogo Ops/s, more is better)
  a: 48054.48 (SE +/- 1.26, N = 2; Min: 48053.21 / Avg: 48054.48 / Max: 48055.74)
  b: 48076.78 (SE +/- 42.22, N = 2; Min: 48034.56 / Avg: 48076.78 / Max: 48118.99)
  (CXX) g++ options: -O2 -std=gnu99 -lc

dav1d

dav1d 1.2.1, Video Input: Summer Nature 4K (FPS, more is better)
  a: 282.53
  b: 282.65 (SE +/- 0.08, N = 2; Min: 282.57 / Avg: 282.65 / Max: 282.73)
  (CC) gcc options: -pthread -lm

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.15.10, Test: Wide Vector Math (Bogo Ops/s, more is better)
  a: 2195391.41 (SE +/- 1200.16, N = 2; Min: 2194191.25 / Avg: 2195391.41 / Max: 2196591.57)
  b: 2196242.21 (SE +/- 497.69, N = 2; Min: 2195744.52 / Avg: 2196242.21 / Max: 2196739.89)
  (CXX) g++ options: -O2 -std=gnu99 -lc

Opus Codec Encoding

Opus is an open, lossy audio codec designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus five times. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.4, WAV To Opus Encode (Seconds, fewer is better)
  a: 36.74
  b: 36.73 (SE +/- 0.01, N = 2; Min: 36.71 / Avg: 36.73 / Max: 36.74)
  (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.15.10, Test: AVL Tree (Bogo Ops/s, more is better)
  a: 610.69 (SE +/- 0.08, N = 2; Min: 610.6 / Avg: 610.69 / Max: 610.77)
  b: 610.83 (SE +/- 0.44, N = 2; Min: 610.39 / Avg: 610.83 / Max: 611.26)
  (CXX) g++ options: -O2 -std=gnu99 -lc

Liquid-DSP

Liquid-DSP 1.6, Threads: 1 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better)
  a: 53918500 (SE +/- 500.00, N = 2; Min: 53918000 / Avg: 53918500 / Max: 53919000)
  b: 53926500 (SE +/- 1500.00, N = 2; Min: 53925000 / Avg: 53926500 / Max: 53928000)
  (CC) gcc options: -O3 -pthread -lm -lc -lliquid

dav1d

dav1d 1.2.1, Video Input: Chimera 1080p 10-bit (FPS, more is better)
  a: 476.82
  b: 476.77 (SE +/- 0.41, N = 2; Min: 476.36 / Avg: 476.77 / Max: 477.18)
  (CC) gcc options: -pthread -lm

Stress-NG

Stress-NG 0.15.10, Test: Matrix 3D Math (Bogo Ops/s, more is better)
  a: 12743.81 (SE +/- 9.80, N = 2; Min: 12734.01 / Avg: 12743.81 / Max: 12753.6)
  b: 12742.70 (SE +/- 5.06, N = 2; Min: 12737.64 / Avg: 12742.7 / Max: 12747.76)
  (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG 0.15.10, Test: Floating Point (Bogo Ops/s, more is better)
  a: 21134.81 (SE +/- 6.86, N = 2; Min: 21127.95 / Avg: 21134.81 / Max: 21141.67)
  b: 21133.02 (SE +/- 9.14, N = 2; Min: 21123.88 / Avg: 21133.02 / Max: 21142.15)
  (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG 0.15.10, Test: Zlib (Bogo Ops/s, more is better)
  a: 6879.86 (SE +/- 8.83, N = 2; Min: 6871.03 / Avg: 6879.86 / Max: 6888.69)
  b: 6880.22 (SE +/- 4.39, N = 2; Min: 6875.83 / Avg: 6880.22 / Max: 6884.61)
  (CXX) g++ options: -O2 -std=gnu99 -lc

Embree

Embree 4.1, Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, more is better)
  a: 87.93 (MIN: 85.27 / MAX: 92.58)
  b: 87.93 (MIN: 84.73 / MAX: 92.37)
  SE +/- 0.10, N = 2; Min: 87.83 / Avg: 87.93 / Max: 88.03

Apache CouchDB

This is a bulk insertion benchmark of Apache CouchDB. CouchDB is a document-oriented NoSQL database implemented in Erlang. Learn more via the OpenBenchmarking.org test page.
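CouchDB bulk insertion goes through the `_bulk_docs` endpoint: the client POSTs a JSON body whose `docs` array holds the whole batch. A sketch of constructing one bulk-size-500 payload (the actual test harness and document contents are the test profile's own; the `doc-{i}` ids here are purely illustrative):

```python
import json

# Build one _bulk_docs payload of the "Bulk Size: 500" shape.
# CouchDB accepts it via: POST /<db>/_bulk_docs
# with header Content-Type: application/json.
bulk_size = 500
payload = {"docs": [{"_id": f"doc-{i}", "value": i} for i in range(bulk_size)]}
body = json.dumps(payload)
print(len(payload["docs"]), len(body) > 0)
```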

Apache CouchDB 3.3.2 (Seconds, fewer is better)
  Bulk Size: 500 - Inserts: 1000 - Rounds: 30, a: 1090.42
  Bulk Size: 300 - Inserts: 1000 - Rounds: 30, a: 152.46
  Bulk Size: 100 - Inserts: 1000 - Rounds: 30, a: 94.83
  (CXX) g++ options: -std=c++17 -lmozjs-78 -lm -lei -fPIC -MMD

Dragonflydb

Dragonfly is an open-source in-memory database server billed as a "modern Redis replacement," aiming to be the fastest memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark, a NoSQL Redis/Memcached traffic generation and benchmarking tool developed by Redis Labs, is used. Learn more via the OpenBenchmarking.org test page.
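Every Dragonfly configuration below failed with connection errors, so no scores follow; for reference, an invocation of the kind the test profile drives looks roughly like the following sketch (server address, port, and thread count are illustrative assumptions, not the test profile's exact command line):

```shell
# Sketch of a memtier_benchmark run matching one config below:
# 50 clients per thread, SET:GET ratio 1:10, against a local server.
# All such runs in this result file were refused at connect time.
memtier_benchmark --server=127.0.0.1 --port=6379 \
    --clients=50 --ratio=1:10
```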

Clients Per Thread: 60 - Set To Get Ratio: 1:100

b: The test run did not produce a result. E: Connection error: Connection refused

a: The test run did not produce a result. E: Connection error: Connection refused

Clients Per Thread: 50 - Set To Get Ratio: 1:100

b: The test run did not produce a result. E: Connection error: Connection refused

a: The test run did not produce a result. E: Connection error: Connection refused

Clients Per Thread: 20 - Set To Get Ratio: 1:100

b: The test run did not produce a result. E: Connection error: Connection refused

a: The test run did not produce a result. E: Connection error: Connection refused

Clients Per Thread: 10 - Set To Get Ratio: 1:100

b: The test run did not produce a result. E: Connection error: Connection reset by peer

a: The test run did not produce a result. E: Connection error: Connection reset by peer

Clients Per Thread: 60 - Set To Get Ratio: 1:10

b: The test run did not produce a result. E: Connection error: Connection refused

a: The test run did not produce a result. E: Connection error: Connection refused

Clients Per Thread: 50 - Set To Get Ratio: 1:10

b: The test run did not produce a result. E: Connection error: Connection refused

a: The test run did not produce a result. E: Connection error: Connection refused

Clients Per Thread: 20 - Set To Get Ratio: 1:10

b: The test run did not produce a result. E: Connection error: Connection refused

a: The test run did not produce a result. E: Connection error: Connection refused

Clients Per Thread: 10 - Set To Get Ratio: 1:10

b: The test run did not produce a result. E: Connection error: Connection reset by peer

a: The test run did not produce a result. E: Connection error: Connection reset by peer

Clients Per Thread: 60 - Set To Get Ratio: 1:5

b: The test run did not produce a result. E: Connection error: Connection refused

a: The test run did not produce a result. E: Connection error: Connection refused

Clients Per Thread: 50 - Set To Get Ratio: 1:5

b: The test run did not produce a result. E: Connection error: Connection refused

a: The test run did not produce a result. E: Connection error: Connection refused

Clients Per Thread: 20 - Set To Get Ratio: 1:5

b: The test run did not produce a result. E: Connection error: Connection refused

a: The test run did not produce a result. E: Connection error: Connection refused

Clients Per Thread: 10 - Set To Get Ratio: 1:5

b: The test run did not produce a result. E: Connection error: Connection reset by peer

a: The test run did not produce a result. E: Connection error: Connection reset by peer

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500

a: Test failed to run.

Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200

a: Test failed to run.

Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500

a: Test failed to run.

Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200

a: Test failed to run.

Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500

a: Test failed to run.

Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200

a: Test failed to run.

Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500

a: Test failed to run.

Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200

a: Test failed to run.

130 Results Shown

libxsmm
Stress-NG:
  Cloning
  Pipe
Apache IoTDB:
  100 - 100 - 500:
    Average Latency
    point/sec
NCNN
Apache IoTDB
HeFFTe - Highly Efficient FFT for Exascale
Apache IoTDB
NCNN:
  CPU - FastestDet
  CPU - alexnet
Liquid-DSP
HeFFTe - Highly Efficient FFT for Exascale:
  r2c - FFTW - float - 256
  c2c - FFTW - float - 256
NCNN:
  CPU - resnet50
  CPU - regnety_400m
Apache IoTDB:
  200 - 1 - 200
  100 - 1 - 200
HeFFTe - Highly Efficient FFT for Exascale
NCNN
HeFFTe - Highly Efficient FFT for Exascale
NCNN
HeFFTe - Highly Efficient FFT for Exascale
NCNN
Apache IoTDB
NCNN:
  CPU - vision_transformer
  CPU - googlenet
Stress-NG
Apache IoTDB
HeFFTe - Highly Efficient FFT for Exascale
Z3 Theorem Prover
Apache IoTDB
Embree
OSPRay
HeFFTe - Highly Efficient FFT for Exascale
Apache IoTDB
OSPRay
Apache IoTDB
HeFFTe - Highly Efficient FFT for Exascale:
  c2c - FFTW - double - 512
  r2c - Stock - float - 512
Apache IoTDB
Liquid-DSP
VVenC
Remhos
NCNN
Liquid-DSP
HeFFTe - Highly Efficient FFT for Exascale
Liquid-DSP
OSPRay
libxsmm
NCNN
HeFFTe - Highly Efficient FFT for Exascale
Liquid-DSP:
  64 - 256 - 32
  32 - 256 - 512
Stress-NG
Liquid-DSP
NCNN
Intel Open Image Denoise
libxsmm
Liquid-DSP
Z3 Theorem Prover
VVenC
NCNN:
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
Intel Open Image Denoise:
  RTLightmap.hdr.4096x4096 - CPU-Only
  RT.ldr_alb_nrm.3840x2160 - CPU-Only
Liquid-DSP
Blender
QuantLib
OSPRay
Liquid-DSP
HeFFTe - Highly Efficient FFT for Exascale:
  c2c - FFTW - float - 512
  r2c - Stock - double - 256
Apache IoTDB
HeFFTe - Highly Efficient FFT for Exascale
srsRAN Project
Liquid-DSP
GPAW
HeFFTe - Highly Efficient FFT for Exascale
Embree
NCNN
HeFFTe - Highly Efficient FFT for Exascale
Liquid-DSP
HeFFTe - Highly Efficient FFT for Exascale
VVenC
libxsmm
NCNN
Blender
Liquid-DSP
Blender
Apache IoTDB
Liquid-DSP
Apache IoTDB
HeFFTe - Highly Efficient FFT for Exascale:
  r2c - Stock - float - 256
  c2c - Stock - double - 512
Liquid-DSP
Apache IoTDB
Blender
HeFFTe - Highly Efficient FFT for Exascale
Liquid-DSP
Timed GCC Compilation
Embree
HeFFTe - Highly Efficient FFT for Exascale:
  r2c - Stock - double - 128
  r2c - FFTW - float - 512
Embree:
  Pathtracer ISPC - Asian Dragon
  Pathtracer - Asian Dragon
Stress-NG
dav1d
HeFFTe - Highly Efficient FFT for Exascale
VVenC
Liquid-DSP
OSPRay:
  particle_volume/pathtracer/real_time
  particle_volume/ao/real_time
dav1d
srsRAN Project:
  PUSCH Processor Benchmark, Throughput Thread
  Downlink Processor Benchmark
Stress-NG
dav1d
Stress-NG
Opus Codec Encoding
Stress-NG
Liquid-DSP
dav1d
Stress-NG:
  Matrix 3D Math
  Floating Point
  Zlib
Embree
Apache CouchDB:
  500 - 1000 - 30
  300 - 1000 - 30
  100 - 1000 - 30