ddddx

AMD Ryzen Threadripper PRO 5965WX 24-Cores testing with an ASUS Pro WS WRX80E-SAGE SE WIFI (1201 BIOS) and ASUS NVIDIA NV106 2GB on Ubuntu 23.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2403218-NE-DDDDX513530
Test suites represented in this result file: C/C++ Compiler Tests (2), CPU Massive (8), Creator Workloads (12), Encoding (3), HPC - High Performance Computing (3), Imaging (2), Machine Learning (3), Multi-Core (11), Intel oneAPI (4), Python Tests (2), Raytracing (2), Renderers (3), Server CPU Tests (4), Video Encoding (2).


Run Management

Result Identifier   Date       Test Duration
a                   March 20   2 Hours, 33 Minutes
b                   March 20   2 Hours, 33 Minutes
c                   March 21   7 Hours, 54 Minutes
d                   March 21   2 Hours, 37 Minutes
Average                        3 Hours, 54 Minutes


Ddddx Benchmarks - System Details

Processor: AMD Ryzen Threadripper PRO 5965WX 24-Cores @ 3.80GHz (24 Cores / 48 Threads)
Motherboard: ASUS Pro WS WRX80E-SAGE SE WIFI (1201 BIOS)
Chipset: AMD Starship/Matisse
Memory: 8 x 16GB DDR4-2133MT/s Corsair CMK32GX4M2E3200C16
Disk: 2048GB SOLIDIGM SSDPFKKW020X7
Graphics: ASUS NVIDIA NV106 2GB
Audio: AMD Starship/Matisse
Monitor: VA2431
Network: 2 x Intel X550 + Intel Wi-Fi 6 AX200
OS: Ubuntu 23.10
Kernel: 6.5.0-15-generic (x86_64)
Desktop: GNOME Shell 45.0
Display Server: X Server + Wayland
Display Driver: nouveau
OpenGL: 4.3 Mesa 23.2.1-1ubuntu3
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs:
- Transparent Huge Pages: madvise
- Compiler configure: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled)
- CPU Microcode: 0xa008205
- Python 3.11.6
- Security: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_rstack_overflow: Mitigation of safe RET no microcode; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected
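The CPU security status lines above come from the kernel's sysfs interface, which exposes one file per known vulnerability. A minimal sketch of collecting and formatting them follows; the helper names are illustrative, not part of the Phoronix Test Suite itself:

```python
from pathlib import Path

# Standard sysfs location on Linux for per-CPU vulnerability status files.
VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def format_vulns(entries):
    """Join {name: status} pairs in the 'name: status + ...' style seen above."""
    return " + ".join(f"{name}: {status}" for name, status in sorted(entries.items()))

def read_vulns(vuln_dir=VULN_DIR):
    """Read one status string per vulnerability file exposed by the kernel."""
    return {f.name: f.read_text().strip() for f in vuln_dir.iterdir() if f.is_file()}

if __name__ == "__main__":
    print(format_vulns(read_vulns()))
```

On the system tested here, this would print statuses such as "meltdown: Not affected" for each entry the running kernel knows about.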

Result Overview (runs a/b/c/d, normalized 100% to 112%), per test family: Stockfish, JPEG-XL Decoding libjxl, JPEG-XL libjxl, Parallel BZIP2 Compression, BRL-CAD, Primesieve, Timed Linux Kernel Compilation, oneDNN, srsRAN Project, Chaos Group V-RAY, RocksDB, VVenC, OSPRay, OpenVINO, Neural Magic DeepSparse, WavPack Audio Encoding, SVT-AV1, OSPRay Studio, Google Draco.
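The overview chart expresses each run relative to the best result per test and condenses many tests into one figure with a geometric mean. A minimal sketch of that normalization, simplified relative to the actual Phoronix Test Suite statistics code and using hypothetical example numbers:

```python
from math import prod

def normalize(value, best, lower_is_better):
    """Return a percentage where the best run scores 100%."""
    ratio = best / value if lower_is_better else value / best
    return ratio * 100.0

def geometric_mean(percentages):
    """Condense per-test percentages into one overall figure."""
    return prod(percentages) ** (1.0 / len(percentages))

# Hypothetical two-test example: a run time (lower is better) and a score
# (higher is better), each compared against the best run's result.
runtime_pct = normalize(110.0, best=100.0, lower_is_better=True)   # about 90.9%
score_pct = normalize(80.0, best=100.0, lower_is_better=False)     # 80.0%
overall = geometric_mean([runtime_pct, score_pct])                 # overall index
```

The geometric mean is used rather than the arithmetic mean so that a single outlier test cannot dominate the combined index.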
Timed Linux Kernel Compilation

Timed Linux Kernel Compilation 6.8 - Build: allmodconfig (Seconds, Fewer Is Better)
a: 597.03, b: 597.90, c: 596.51, d: 597.83
SE +/- 0.92, N = 3; run c: Min 594.94 / Avg 596.51 / Max 598.13
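The SE and Min/Avg/Max figures reported for run c can be reproduced from its three trials. Only the Min, Avg, and Max are published; the middle trial below is inferred from the published average (3 x 596.51 - 594.94 - 598.13 = 596.46) and is therefore an assumption:

```python
from math import sqrt
from statistics import mean, stdev

# Run c trials for the allmodconfig build: Min and Max are published;
# the middle value is reconstructed from the published Avg (assumption).
trials = [594.94, 596.46, 598.13]

avg = mean(trials)
# Standard error of the mean: sample standard deviation over sqrt(N).
se = stdev(trials) / sqrt(len(trials))
```

Rounded to two decimals this gives Avg 596.51 and SE 0.92, matching the figures reported above.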

BRL-CAD

BRL-CAD 7.38.2 - VGR Performance Metric (VGR Performance Metric, More Is Better)
a: 430387, b: 424641, c: 420528, d: 425778
1. (CXX) g++ options: -std=c++17 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lnetpbm -lregex_brl -lz_brl -lassimp -ldl -lm -ltk8.6

Stockfish

Stockfish 16.1 - Chess Benchmark (Nodes Per Second, More Is Better)
a: 52607528, b: 61008270, c: 55237573, d: 53082452
SE +/- 1129127.95, N = 15; run c: Min 51266277 / Avg 55237572.8 / Max 64934356
1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -funroll-loops -msse -msse3 -mpopcnt -mavx2 -mbmi -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto-partition=one -flto=jobserver

OSPRay Studio

OSPRay Studio 1.0 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
a: 177445, b: 175496, c: 175767, d: 175129
SE +/- 268.88, N = 3; run c: Min 175445 / Avg 175767 / Max 176301

OSPRay Studio 1.0 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
a: 152653, b: 153552, c: 152510, d: 151620
SE +/- 88.33, N = 3; run c: Min 152368 / Avg 152510 / Max 152672

OSPRay Studio 1.0 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
a: 150375, b: 150040, c: 150381, d: 150694
SE +/- 64.70, N = 3; run c: Min 150259 / Avg 150381.33 / Max 150479

OSPRay

OSPRay 3.1 - Benchmark: particle_volume/scivis/real_time (Items Per Second, More Is Better)
a: 10.14, b: 10.14, c: 10.15, d: 10.18
SE +/- 0.01, N = 3; run c: Min 10.13 / Avg 10.15 / Max 10.16

OSPRay 3.1 - Benchmark: particle_volume/pathtracer/real_time (Items Per Second, More Is Better)
a: 156.40, b: 155.37, c: 155.83, d: 155.05
SE +/- 0.22, N = 3; run c: Min 155.45 / Avg 155.83 / Max 156.22

OSPRay 3.1 - Benchmark: particle_volume/ao/real_time (Items Per Second, More Is Better)
a: 10.28, b: 10.25, c: 10.28, d: 10.30
SE +/- 0.01, N = 3; run c: Min 10.26 / Avg 10.28 / Max 10.3

OSPRay Studio

OSPRay Studio 1.0 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
a: 90362, b: 90131, c: 90083, d: 90380
SE +/- 228.07, N = 3; run c: Min 89632 / Avg 90082.67 / Max 90369

JPEG-XL libjxl

JPEG-XL libjxl 0.10.1 - Input: PNG - Quality: 90 (MP/s, More Is Better)
a: 39.40, b: 37.44, c: 38.51, d: 38.27
SE +/- 0.27, N = 15; run c: Min 36.64 / Avg 38.51 / Max 39.77
1. (CXX) g++ options: -fno-rtti -O3 -fPIE -pie -lm

JPEG-XL libjxl 0.10.1 - Input: JPEG - Quality: 90 (MP/s, More Is Better)
a: 42.50, b: 39.89, c: 40.76, d: 39.60
SE +/- 0.39, N = 15; run c: Min 38.69 / Avg 40.76 / Max 43.42
1. (CXX) g++ options: -fno-rtti -O3 -fPIE -pie -lm

OSPRay Studio

OSPRay Studio 1.0 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
a: 78788, b: 78304, c: 78919, d: 78354
SE +/- 110.73, N = 3; run c: Min 78760 / Avg 78919 / Max 79132

OSPRay Studio 1.0 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
a: 78061, b: 76995, c: 77511, d: 77073
SE +/- 155.86, N = 3; run c: Min 77330 / Avg 77510.67 / Max 77821

VVenC

VVenC 1.11 - Video Input: Bosphorus 4K - Video Preset: Fast (Frames Per Second, More Is Better)
a: 7.060, b: 7.075, c: 7.066, d: 7.062
SE +/- 0.031, N = 3; run c: Min 7.02 / Avg 7.07 / Max 7.13
1. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

OSPRay Studio

OSPRay Studio 1.0 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
a: 5330, b: 5342, c: 5341, d: 5331
SE +/- 4.04, N = 3; run c: Min 5336 / Avg 5341 / Max 5349

OSPRay Studio 1.0 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
a: 4616, b: 4609, c: 4615, d: 4618
SE +/- 7.22, N = 3; run c: Min 4602 / Avg 4614.67 / Max 4627

OSPRay Studio 1.0 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
a: 21449, b: 21398, c: 21409, d: 21385
SE +/- 35.14, N = 3; run c: Min 21340 / Avg 21409.33 / Max 21454

OSPRay Studio 1.0 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
a: 4528, b: 4521, c: 4535, d: 4528
SE +/- 4.41, N = 3; run c: Min 4527 / Avg 4535.33 / Max 4542

Primesieve

Primesieve 12.1 - Length: 1e13 (Seconds, Fewer Is Better)
a: 77.15, b: 76.92, c: 77.21, d: 77.06
SE +/- 0.05, N = 3; run c: Min 77.11 / Avg 77.21 / Max 77.29
1. (CXX) g++ options: -O3

oneDNN

oneDNN 3.4 - Harness: Recurrent Neural Network Training - Engine: CPU (ms, Fewer Is Better)
a: 1254.68, b: 1255.67, c: 1256.43, d: 1254.43
Per-run MIN: a 1250.31, b 1250.99, c 1249.44, d 1249.27
SE +/- 0.77, N = 3; run c: Min 1254.9 / Avg 1256.43 / Max 1257.35
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Chaos Group V-RAY

Chaos Group V-RAY 6.0 - Mode: CPU (vsamples, More Is Better)
a: 44287, b: 44634, c: 44375, d: 44633
SE +/- 195.74, N = 3; run c: Min 44038 / Avg 44374.67 / Max 44716

oneDNN

oneDNN 3.4 - Harness: Recurrent Neural Network Inference - Engine: CPU (ms, Fewer Is Better)
a: 638.13, b: 636.77, c: 637.17, d: 642.19
Per-run MIN: a 634.67, b 632.58, c 632.47, d 633.4
SE +/- 0.39, N = 3; run c: Min 636.44 / Avg 637.17 / Max 637.78
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OSPRay

OSPRay 3.1 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, More Is Better)
a: 4.59882, b: 4.57706, c: 4.59266, d: 4.58081
SE +/- 0.00963, N = 3; run c: Min 4.57 / Avg 4.59 / Max 4.6

OSPRay 3.1 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, More Is Better)
a: 4.90257, b: 4.88053, c: 4.88608, d: 4.89928
SE +/- 0.01074, N = 3; run c: Min 4.87 / Avg 4.89 / Max 4.91

OSPRay Studio

OSPRay Studio 1.0 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
a: 18543, b: 18471, c: 18579, d: 18488
SE +/- 16.33, N = 3; run c: Min 18549 / Avg 18579.33 / Max 18605

Neural Magic DeepSparse

Neural Magic DeepSparse 1.7 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
a: 13.17, b: 13.13, c: 13.14, d: 13.20
SE +/- 0.02, N = 3; run c: Min 13.11 / Avg 13.14 / Max 13.19

Neural Magic DeepSparse 1.7 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
a: 75.88, b: 76.12, c: 76.06, d: 75.70
SE +/- 0.14, N = 3; run c: Min 75.8 / Avg 76.06 / Max 76.27

OSPRay Studio

OSPRay Studio 1.0 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
a: 18203, b: 18181, c: 18229, d: 18177
SE +/- 39.50, N = 3; run c: Min 18152 / Avg 18229.33 / Max 18282

OSPRay Studio 1.0 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
a: 1336, b: 1338, c: 1333, d: 1330
SE +/- 4.41, N = 3; run c: Min 1326 / Avg 1332.67 / Max 1341

OSPRay Studio 1.0 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
a: 1158, b: 1150, c: 1152, d: 1153
SE +/- 0.67, N = 3; run c: Min 1151 / Avg 1152.33 / Max 1153

OSPRay Studio 1.0 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
a: 1138, b: 1138, c: 1138, d: 1134
SE +/- 1.00, N = 3; run c: Min 1137 / Avg 1138 / Max 1140

OpenVINO

OpenVINO 2024.0 - Model: Face Detection FP16 - Device: CPU (ms, Fewer Is Better)
a: 1558.81, b: 1547.31, c: 1563.15, d: 1553.54
Per-run MIN / MAX: a 1416.22 / 1644.19, b 1403.59 / 1636.72, c 1369.79 / 1663.37, d 1365.71 / 1635.16
SE +/- 0.80, N = 3; run c: Min 1561.57 / Avg 1563.15 / Max 1564.13
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO 2024.0 - Model: Face Detection FP16 - Device: CPU (FPS, More Is Better)
a: 7.60, b: 7.68, c: 7.59, d: 7.64
SE +/- 0.01, N = 3; run c: Min 7.58 / Avg 7.59 / Max 7.61
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

Neural Magic DeepSparse

Neural Magic DeepSparse 1.7 - Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
a: 5873.27, b: 5904.28, c: 5896.59, d: 5897.28
SE +/- 9.69, N = 3; run c: Min 5877.27 / Avg 5896.59 / Max 5907.44

Neural Magic DeepSparse 1.7 - Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
a: 1.8772, b: 1.8677, c: 1.8705, d: 1.8701
SE +/- 0.0031, N = 3; run c: Min 1.87 / Avg 1.87 / Max 1.88

OpenVINO

OpenVINO 2024.0 - Model: Face Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better)
a: 715.22, b: 713.72, c: 716.16, d: 713.64
Per-run MIN / MAX: a 664.62 / 729.04, b 658.61 / 738.08, c 661.6 / 732.11, d 667.56 / 731.59
SE +/- 0.26, N = 3; run c: Min 715.7 / Avg 716.16 / Max 716.61
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2024.0Model: Face Detection FP16-INT8 - Device: CPUdcba48121620SE +/- 0.01, N = 316.7416.6616.7316.661. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2024.0Model: Face Detection FP16-INT8 - Device: CPUdcba48121620Min: 16.65 / Avg: 16.66 / Max: 16.671. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OSPRay

OSPRay 3.1, Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, more is better)
  d: 7.45220 | c: 7.43821 | b: 7.44837 | a: 7.46614 (SE +/- 0.00473, N = 3)
  c: Min 7.43 / Avg 7.44 / Max 7.44

OpenVINO

OpenVINO 2024.0, Model: Person Detection FP16, Device: CPU (ms, fewer is better)
  d: 172.18 | c: 171.67 | b: 171.43 | a: 171.22 (SE +/- 0.11, N = 3)
  run min/max: d 138.53-224.8, c 132.19-231.16, b 140.26-224.54, a 130.32-233.99
  c: Min 171.52 / Avg 171.67 / Max 171.89

OpenVINO 2024.0, Model: Person Detection FP16, Device: CPU (FPS, more is better)
  d: 69.61 | c: 69.83 | b: 69.95 | a: 70.01 (SE +/- 0.04, N = 3)
  c: Min 69.75 / Avg 69.83 / Max 69.88

OpenVINO 2024.0, Model: Person Detection FP32, Device: CPU (ms, fewer is better)
  d: 171.54 | c: 171.29 | b: 171.16 | a: 171.92 (SE +/- 0.12, N = 3)
  run min/max: d 135.7-226.04, c 134.25-225.4, b 129.54-225.82, a 129.51-227.57
  c: Min 171.14 / Avg 171.29 / Max 171.52

OpenVINO 2024.0, Model: Person Detection FP32, Device: CPU (FPS, more is better)
  d: 69.86 | c: 69.99 | b: 70.01 | a: 69.73 (SE +/- 0.06, N = 3)
  c: Min 69.88 / Avg 69.99 / Max 70.06

OpenVINO 2024.0, Model: Machine Translation EN To DE FP16, Device: CPU (ms, fewer is better)
  d: 135.96 | c: 136.15 | b: 135.70 | a: 136.00 (SE +/- 0.06, N = 3)
  run min/max: d 110.6-155.59, c 73.11-161.85, b 109.99-155.71, a 118.61-153.61
  c: Min 136.06 / Avg 136.15 / Max 136.25

OpenVINO 2024.0, Model: Machine Translation EN To DE FP16, Device: CPU (FPS, more is better)
  d: 88.14 | c: 88.05 | b: 88.33 | a: 88.13 (SE +/- 0.03, N = 3)
  c: Min 87.99 / Avg 88.05 / Max 88.1

OpenVINO 2024.0, Model: Person Vehicle Bike Detection FP16, Device: CPU (ms, fewer is better)
  d: 13.40 | c: 13.39 | b: 13.47 | a: 13.38 (SE +/- 0.05, N = 3)
  run min/max: d 8.15-34.65, c 7.1-31.9, b 7.33-31.02, a 7.23-35.43
  c: Min 13.29 / Avg 13.39 / Max 13.47

OpenVINO 2024.0, Model: Person Vehicle Bike Detection FP16, Device: CPU (FPS, more is better)
  d: 894.41 | c: 894.57 | b: 889.36 | a: 895.25 (SE +/- 3.73, N = 3)
  c: Min 889.08 / Avg 894.57 / Max 901.69

OpenVINO 2024.0, Model: Road Segmentation ADAS FP16-INT8, Device: CPU (ms, fewer is better)
  d: 27.98 | c: 28.08 | b: 28.00 | a: 27.99 (SE +/- 0.02, N = 3)
  run min/max: d 14.99-46.26, c 14.29-41.57, b 18.81-38.86, a 18.95-38.76
  c: Min 28.04 / Avg 28.08 / Max 28.12

OpenVINO 2024.0, Model: Road Segmentation ADAS FP16-INT8, Device: CPU (FPS, more is better)
  d: 428.55 | c: 427.02 | b: 428.18 | a: 428.43 (SE +/- 0.36, N = 3)
  c: Min 426.35 / Avg 427.02 / Max 427.57

OpenVINO 2024.0, Model: Noise Suppression Poconet-Like FP16, Device: CPU (ms, fewer is better)
  d: 11.44 | c: 11.46 | b: 11.41 | a: 11.54 (SE +/- 0.03, N = 3)
  run min/max: d 9.06-31.81, c 6.19-32.08, b 7.86-31.96, a 6.61-30.79
  c: Min 11.4 / Avg 11.46 / Max 11.5

OpenVINO 2024.0, Model: Noise Suppression Poconet-Like FP16, Device: CPU (FPS, more is better)
  d: 1043.82 | c: 1041.67 | b: 1046.30 | a: 1033.67 (SE +/- 3.03, N = 3)
  c: Min 1038.29 / Avg 1041.67 / Max 1047.71

OpenVINO 2024.0, Model: Person Re-Identification Retail FP16, Device: CPU (ms, fewer is better)
  d: 9.97 | c: 9.97 | b: 9.98 | a: 10.00 (SE +/- 0.01, N = 3)
  run min/max: d 5.63-25.55, c 5.91-41.34, b 5.6-24.1, a 6.87-16.1
  c: Min 9.96 / Avg 9.97 / Max 9.98

OpenVINO 2024.0, Model: Person Re-Identification Retail FP16, Device: CPU (FPS, more is better)
  d: 1202.22 | c: 1202.09 | b: 1201.22 | a: 1198.64 (SE +/- 0.87, N = 3)
  c: Min 1200.38 / Avg 1202.09 / Max 1203.24

OpenVINO 2024.0, Model: Road Segmentation ADAS FP16, Device: CPU (ms, fewer is better)
  d: 70.76 | c: 70.66 | b: 70.23 | a: 70.48 (SE +/- 0.02, N = 3)
  run min/max: d 41.53-122.26, c 24.84-127.05, b 43.23-122.75, a 43.7-128.66
  c: Min 70.63 / Avg 70.66 / Max 70.68

OpenVINO 2024.0, Model: Road Segmentation ADAS FP16, Device: CPU (FPS, more is better)
  d: 169.41 | c: 169.65 | b: 170.68 | a: 170.10 (SE +/- 0.04, N = 3)
  c: Min 169.6 / Avg 169.65 / Max 169.72

OpenVINO 2024.0, Model: Handwritten English Recognition FP16-INT8, Device: CPU (ms, fewer is better)
  d: 56.90 | c: 56.64 | b: 56.30 | a: 56.87 (SE +/- 0.01, N = 3)
  run min/max: d 51.72-72.17, c 36.21-73.18, b 52.17-69.13, a 35.77-72.74
  c: Min 56.62 / Avg 56.64 / Max 56.66

OpenVINO 2024.0, Model: Handwritten English Recognition FP16-INT8, Device: CPU (FPS, more is better)
  d: 421.54 | c: 423.46 | b: 426.03 | a: 421.72 (SE +/- 0.10, N = 3)
  c: Min 423.27 / Avg 423.46 / Max 423.61

OpenVINO 2024.0, Model: Handwritten English Recognition FP16, Device: CPU (ms, fewer is better)
  d: 63.54 | c: 63.40 | b: 63.08 | a: 63.46 (SE +/- 0.11, N = 3)
  run min/max: d 41.89-89.71, c 37.56-90.19, b 42.84-83.6, a 45.07-85.2
  c: Min 63.26 / Avg 63.4 / Max 63.61

OpenVINO 2024.0, Model: Handwritten English Recognition FP16, Device: CPU (FPS, more is better)
  d: 377.49 | c: 378.34 | b: 380.22 | a: 377.84 (SE +/- 0.65, N = 3)
  c: Min 377.05 / Avg 378.34 / Max 379.13

RocksDB

RocksDB 9.0, Test: Random Fill Sync (Op/s, more is better)
  d: 46299 | c: 46701 | b: 45974 | a: 45528 (SE +/- 32.60, N = 3)
  c: Min 46636 / Avg 46701 / Max 46738
  All RocksDB results built with: (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenVINO

OpenVINO 2024.0, Model: Vehicle Detection FP16-INT8, Device: CPU (ms, fewer is better)
  d: 10.86 | c: 10.82 | b: 10.83 | a: 10.86 (SE +/- 0.02, N = 3)
  run min/max: d 6.39-25.76, c 5.79-27.55, b 6.67-24.71, a 7.32-25.15
  c: Min 10.79 / Avg 10.82 / Max 10.84

OpenVINO 2024.0, Model: Vehicle Detection FP16-INT8, Device: CPU (FPS, more is better)
  d: 1103.60 | c: 1107.64 | b: 1106.44 | a: 1104.03 (SE +/- 1.89, N = 3)
  c: Min 1105.36 / Avg 1107.64 / Max 1111.38

OpenVINO 2024.0, Model: Face Detection Retail FP16-INT8, Device: CPU (ms, fewer is better)
  d: 3.71 | c: 3.71 | b: 3.69 | a: 3.71 (SE +/- 0.01, N = 3)
  run min/max: d 2.38-15.94, c 2.23-18, b 2.28-15.95, a 2.14-26.73
  c: Min 3.7 / Avg 3.71 / Max 3.72

OpenVINO 2024.0, Model: Face Detection Retail FP16-INT8, Device: CPU (FPS, more is better)
  d: 3230.25 | c: 3228.19 | b: 3241.35 | a: 3224.64 (SE +/- 3.42, N = 3)
  c: Min 3221.38 / Avg 3228.19 / Max 3232.04

OpenVINO 2024.0, Model: Age Gender Recognition Retail 0013 FP16-INT8, Device: CPU (ms, fewer is better)
  d: 0.53 | c: 0.53 | b: 0.53 | a: 0.53 (SE +/- 0.00, N = 3)
  run min/max: d 0.31-12.66, c 0.3-13.83, b 0.3-12.53, a 0.3-12.95
  c: Min 0.53 / Avg 0.53 / Max 0.53

OpenVINO 2024.0, Model: Age Gender Recognition Retail 0013 FP16-INT8, Device: CPU (FPS, more is better)
  d: 44596.35 | c: 44306.10 | b: 44461.02 | a: 44518.20 (SE +/- 27.92, N = 3)
  c: Min 44250.9 / Avg 44306.1 / Max 44340.99

OpenVINO 2024.0, Model: Vehicle Detection FP16, Device: CPU (ms, fewer is better)
  d: 19.84 | c: 20.21 | b: 20.04 | a: 19.94 (SE +/- 0.01, N = 3)
  run min/max: d 11.38-34.24, c 9.28-49.09, b 12.81-37.51, a 8.87-42.42
  c: Min 20.19 / Avg 20.21 / Max 20.23

OpenVINO 2024.0, Model: Vehicle Detection FP16, Device: CPU (FPS, more is better)
  d: 603.96 | c: 592.96 | b: 598.03 | a: 600.94 (SE +/- 0.39, N = 3)
  c: Min 592.33 / Avg 592.96 / Max 593.67

OpenVINO 2024.0, Model: Weld Porosity Detection FP16-INT8, Device: CPU (ms, fewer is better)
  d: 14.51 | c: 14.56 | b: 14.49 | a: 14.52 (SE +/- 0.01, N = 3)
  run min/max: d 8.58-29.8, c 8.28-27.62, b 8.36-28.02, a 8.14-28.07
  c: Min 14.55 / Avg 14.56 / Max 14.57

OpenVINO 2024.0, Model: Weld Porosity Detection FP16-INT8, Device: CPU (FPS, more is better)
  d: 1652.91 | c: 1647.13 | b: 1654.72 | a: 1652.05 (SE +/- 0.87, N = 3)
  c: Min 1645.81 / Avg 1647.13 / Max 1648.78

OpenVINO 2024.0, Model: Face Detection Retail FP16, Device: CPU (ms, fewer is better)
  d: 5.45 | c: 5.45 | b: 5.45 | a: 5.46 (SE +/- 0.01, N = 3)
  run min/max: d 2.8-28.34, c 2.8-22.36, b 2.89-21.85, a 2.82-21.64
  c: Min 5.43 / Avg 5.45 / Max 5.47

OpenVINO 2024.0, Model: Face Detection Retail FP16, Device: CPU (FPS, more is better)
  d: 2198.11 | c: 2197.98 | b: 2197.50 | a: 2192.09 (SE +/- 4.26, N = 3)
  c: Min 2190.56 / Avg 2197.98 / Max 2205.3

OpenVINO 2024.0, Model: Age Gender Recognition Retail 0013 FP16, Device: CPU (ms, fewer is better)
  d: 0.97 | c: 0.98 | b: 0.97 | a: 0.97 (SE +/- 0.00, N = 3)
  run min/max: d 0.55-13.67, c 0.53-16.95, b 0.57-14.08, a 0.66-15.15
  c: Min 0.98 / Avg 0.98 / Max 0.98

OpenVINO 2024.0, Model: Age Gender Recognition Retail 0013 FP16, Device: CPU (FPS, more is better)
  d: 24496.84 | c: 24329.21 | b: 24434.51 | a: 24478.31 (SE +/- 23.00, N = 3)
  c: Min 24295.82 / Avg 24329.21 / Max 24373.31

OpenVINO 2024.0, Model: Weld Porosity Detection FP16, Device: CPU (ms, fewer is better)
  d: 16.72 | c: 16.76 | b: 16.65 | a: 16.69 (SE +/- 0.01, N = 3)
  run min/max: d 9.04-25.27, c 8.74-34.17, b 10-33.75, a 13.14-33.56
  c: Min 16.73 / Avg 16.76 / Max 16.78

OpenVINO 2024.0, Model: Weld Porosity Detection FP16, Device: CPU (FPS, more is better)
  d: 716.84 | c: 715.28 | b: 719.93 | a: 718.20 (SE +/- 0.61, N = 3)
  c: Min 714.29 / Avg 715.28 / Max 716.4

RocksDB

RocksDB 9.0, Test: Update Random (Op/s, more is better)
  d: 658520 | c: 660978 | b: 666715 | a: 671121 (SE +/- 1458.06, N = 3)
  c: Min 659328 / Avg 660977.67 / Max 663885

RocksDB 9.0, Test: Overwrite (Op/s, more is better)
  d: 762175 | c: 777804 | b: 779236 | a: 781166 (SE +/- 4178.44, N = 3)
  c: Min 769618 / Avg 777803.67 / Max 783354

RocksDB 9.0, Test: Read Random Write Random (Op/s, more is better)
  d: 2814513 | c: 2868681 | b: 2859339 | a: 2879787 (SE +/- 5471.11, N = 3)
  c: Min 2860470 / Avg 2868680.67 / Max 2879050

RocksDB 9.0, Test: Random Fill (Op/s, more is better)
  d: 777115 | c: 774384 | b: 783967 | a: 790603 (SE +/- 1329.24, N = 3)
  c: Min 771897 / Avg 774384 / Max 776441

RocksDB 9.0, Test: Read While Writing (Op/s, more is better)
  d: 5633613 | c: 5592760 | b: 5586539 | a: 5546801 (SE +/- 38492.46, N = 3)
  c: Min 5534477 / Avg 5592760.33 / Max 5665460

RocksDB 9.0, Test: Random Read (Op/s, more is better)
  d: 145516874 | c: 144921763 | b: 145893753 | a: 145962891 (SE +/- 374266.35, N = 3)
  c: Min 144174460 / Avg 144921763 / Max 145332557

OSPRay Studio

OSPRay Studio 1.0, Camera: 3, Resolution: 1080p, Samples Per Pixel: 32, Renderer: Path Tracer, Acceleration: CPU (ms, fewer is better)
  d: 47597 | c: 47686 | b: 47827 | a: 47495 (SE +/- 129.36, N = 3)
  c: Min 47510 / Avg 47685.67 / Max 47938

Timed Linux Kernel Compilation

Timed Linux Kernel Compilation 6.8, Build: defconfig (Seconds, fewer is better)
  d: 54.16 | c: 52.81 | b: 54.21 | a: 54.13 (SE +/- 0.59, N = 3)
  c: Min 52.22 / Avg 52.81 / Max 53.99

RocksDB

RocksDB 9.0, Test: Sequential Fill (Op/s, more is better)
  d: 919148 | c: 908882 | b: 908177 | a: 920430 (SE +/- 4090.17, N = 3)
  c: Min 900706 / Avg 908882 / Max 913201

OSPRay Studio

OSPRay Studio 1.0, Camera: 2, Resolution: 1080p, Samples Per Pixel: 32, Renderer: Path Tracer, Acceleration: CPU (ms, fewer is better)
  d: 41959 | c: 42050 | b: 41598 | a: 41839 (SE +/- 250.02, N = 3)
  c: Min 41747 / Avg 42050 / Max 42546

OSPRay Studio 1.0, Camera: 1, Resolution: 1080p, Samples Per Pixel: 32, Renderer: Path Tracer, Acceleration: CPU (ms, fewer is better)
  d: 41111 | c: 41189 | b: 41125 | a: 41167 (SE +/- 58.89, N = 3)
  c: Min 41087 / Avg 41189.33 / Max 41291

Neural Magic DeepSparse

Neural Magic DeepSparse 1.7, Model: Llama2 Chat 7b Quantized, Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
  d: 165.61 | c: 165.47 | b: 165.55 | a: 165.40 (SE +/- 0.11, N = 3)
  c: Min 165.3 / Avg 165.47 / Max 165.66

Neural Magic DeepSparse 1.7, Model: Llama2 Chat 7b Quantized, Scenario: Synchronous Single-Stream (items/sec, more is better)
  d: 6.0373 | c: 6.0425 | b: 6.0393 | a: 6.0448 (SE +/- 0.0038, N = 3)
  c: Min 6.04 / Avg 6.04 / Max 6.05

JPEG-XL libjxl

JPEG-XL libjxl 0.10.1, Input: PNG, Quality: 80 (MP/s, more is better)
  d: 42.18 | c: 39.90 | b: 43.25 | a: 44.75 (SE +/- 0.29, N = 3)
  c: Min 39.32 / Avg 39.9 / Max 40.24
  All JPEG-XL results built with: (CXX) g++ options: -fno-rtti -O3 -fPIE -pie -lm

Neural Magic DeepSparse

Neural Magic DeepSparse 1.7, Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8, Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
  d: 5.1408 | c: 5.1787 | b: 5.2115 | a: 5.1435 (SE +/- 0.0247, N = 3)
  c: Min 5.13 / Avg 5.18 / Max 5.21

Neural Magic DeepSparse 1.7, Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8, Scenario: Synchronous Single-Stream (items/sec, more is better)
  d: 194.38 | c: 192.98 | b: 191.76 | a: 194.29 (SE +/- 0.92, N = 3)
  c: Min 191.96 / Avg 192.98 / Max 194.82

Neural Magic DeepSparse 1.7, Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8, Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
  d: 17.48 | c: 17.52 | b: 17.49 | a: 17.51 (SE +/- 0.01, N = 3)
  c: Min 17.51 / Avg 17.52 / Max 17.54

Neural Magic DeepSparse 1.7, Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8, Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  d: 685.82 | c: 684.02 | b: 685.45 | a: 684.57 (SE +/- 0.35, N = 3)
  c: Min 683.46 / Avg 684.02 / Max 684.67

JPEG-XL libjxl

JPEG-XL libjxl 0.10.1, Input: JPEG, Quality: 80 (MP/s, more is better)
  d: 43.50 | c: 42.35 | b: 42.66 | a: 46.58 (SE +/- 0.32, N = 3)
  c: Min 41.72 / Avg 42.35 / Max 42.71

Neural Magic DeepSparse

Neural Magic DeepSparse 1.7 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
  d: 35.81  c: 35.81  b: 35.54  a: 35.85  [SE +/- 0.06, N = 3; Min: 35.69 / Avg: 35.81 / Max: 35.91]

Neural Magic DeepSparse 1.7 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  d: 334.72  c: 334.75  b: 337.45  a: 334.39  [SE +/- 0.53, N = 3; Min: 333.85 / Avg: 334.75 / Max: 335.68]

Neural Magic DeepSparse 1.7 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
  d: 448.93  c: 449.12  b: 447.65  a: 448.71  [SE +/- 0.38, N = 3; Min: 448.44 / Avg: 449.12 / Max: 449.75]

Neural Magic DeepSparse 1.7 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  d: 26.69  c: 26.62  b: 26.60  a: 26.69  [SE +/- 0.04, N = 3; Min: 26.54 / Avg: 26.62 / Max: 26.67]

Neural Magic DeepSparse 1.7 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
  d: 447.17  c: 447.58  b: 446.24  a: 446.84  [SE +/- 0.66, N = 3; Min: 446.33 / Avg: 447.58 / Max: 448.55]

Neural Magic DeepSparse 1.7 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  d: 26.81  c: 26.70  b: 26.86  a: 26.74  [SE +/- 0.03, N = 3; Min: 26.64 / Avg: 26.7 / Max: 26.74]

Neural Magic DeepSparse 1.7 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
  d: 54.34  c: 54.33  b: 54.01  a: 54.42  [SE +/- 0.05, N = 3; Min: 54.24 / Avg: 54.33 / Max: 54.42]

Neural Magic DeepSparse 1.7 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, more is better)
  d: 18.40  c: 18.40  b: 18.51  a: 18.37  [SE +/- 0.02, N = 3; Min: 18.37 / Avg: 18.4 / Max: 18.43]

Neural Magic DeepSparse 1.7 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
  d: 54.10  c: 54.08  b: 53.97  a: 54.07  [SE +/- 0.02, N = 3; Min: 54.04 / Avg: 54.08 / Max: 54.12]

Neural Magic DeepSparse 1.7 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, more is better)
  d: 18.48  c: 18.49  b: 18.52  a: 18.49  [SE +/- 0.01, N = 3; Min: 18.47 / Avg: 18.49 / Max: 18.5]
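For the synchronous single-stream scenarios, the ms/batch and items/sec figures are two views of the same workload: with a batch size of 1 (an assumption for illustration), throughput is approximately the reciprocal of latency. Since the suite reports each metric from its own measurement, the reciprocal only approximates the listed items/sec value:

```python
def items_per_sec(ms_per_batch, batch_size=1):
    """Single-stream throughput implied by per-batch latency.

    batch_size=1 is an illustrative assumption, not taken from the results.
    """
    return 1000.0 / ms_per_batch * batch_size

# e.g. identifier "a" in the SST2 single-stream entry: 5.1435 ms/batch
print(round(items_per_sec(5.1435), 2))  # -> 194.42 (reported: 194.29)
```

The small gap between the implied and reported throughput is expected: averaging latencies and then inverting is not the same as averaging the per-run throughputs.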

VVenC

VVenC 1.11 - Video Input: Bosphorus 4K - Video Preset: Faster (Frames Per Second, more is better)
  d: 14.86  c: 14.78  b: 14.82  a: 14.57  [SE +/- 0.03, N = 3; Min: 14.75 / Avg: 14.78 / Max: 14.83]
  (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

Neural Magic DeepSparse

Neural Magic DeepSparse 1.7 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
  d: 393.39  c: 393.82  b: 393.08  a: 393.66  [SE +/- 0.32, N = 3; Min: 393.34 / Avg: 393.82 / Max: 394.42]

Neural Magic DeepSparse 1.7 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  d: 30.49  c: 30.39  b: 30.49  a: 30.47  [SE +/- 0.09, N = 3; Min: 30.21 / Avg: 30.39 / Max: 30.48]

Neural Magic DeepSparse 1.7 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
  d: 46.02  c: 46.08  b: 45.96  a: 46.06  [SE +/- 0.03, N = 3; Min: 46.02 / Avg: 46.08 / Max: 46.12]

Neural Magic DeepSparse 1.7 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (items/sec, more is better)
  d: 21.72  c: 21.69  b: 21.75  a: 21.70  [SE +/- 0.01, N = 3; Min: 21.67 / Avg: 21.69 / Max: 21.72]

JPEG-XL Decoding libjxl

JPEG-XL Decoding libjxl 0.10.1 - CPU Threads: 1 (MP/s, more is better)
  d: 62.82  c: 63.04  b: 63.30  a: 64.03  [SE +/- 0.12, N = 3; Min: 62.87 / Avg: 63.04 / Max: 63.27]

Neural Magic DeepSparse

Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
  d: 53.63  c: 53.76  b: 53.47  a: 53.48  [SE +/- 0.05, N = 3; Min: 53.67 / Avg: 53.76 / Max: 53.83]

Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  d: 223.55  c: 223.00  b: 224.20  a: 224.16  [SE +/- 0.20, N = 3; Min: 222.68 / Avg: 223 / Max: 223.37]

Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
  d: 8.9625  c: 8.9189  b: 8.9317  a: 8.8921  [SE +/- 0.0057, N = 3; Min: 8.91 / Avg: 8.92 / Max: 8.93]

Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec, more is better)
  d: 111.45  c: 112.00  b: 111.84  a: 112.33  [SE +/- 0.07, N = 3; Min: 111.87 / Avg: 112 / Max: 112.12]

Neural Magic DeepSparse 1.7 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
  d: 5.9417  c: 5.9888  b: 5.8898  a: 6.0030  [SE +/- 0.0186, N = 3; Min: 5.96 / Avg: 5.99 / Max: 6.02]

Neural Magic DeepSparse 1.7 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  d: 2014.58  c: 1998.61  b: 2032.96  a: 1994.39  [SE +/- 6.40, N = 3; Min: 1986.99 / Avg: 1998.61 / Max: 2009.06]

Neural Magic DeepSparse 1.7 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
  d: 79.51  c: 79.25  b: 78.82  a: 79.45  [SE +/- 0.16, N = 3; Min: 79.03 / Avg: 79.25 / Max: 79.57]

Neural Magic DeepSparse 1.7 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  d: 150.73  c: 151.26  b: 151.98  a: 150.83  [SE +/- 0.31, N = 3; Min: 150.64 / Avg: 151.26 / Max: 151.68]

Neural Magic DeepSparse 1.7 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
  d: 10.09  c: 10.12  b: 10.07  a: 10.06  [SE +/- 0.01, N = 3; Min: 10.11 / Avg: 10.12 / Max: 10.12]

Neural Magic DeepSparse 1.7 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, more is better)
  d: 99.05  c: 98.80  b: 99.22  a: 99.33  [SE +/- 0.05, N = 3; Min: 98.72 / Avg: 98.8 / Max: 98.89]

Neural Magic DeepSparse 1.7 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
  d: 39.06  c: 39.07  b: 39.05  a: 39.05  [SE +/- 0.00, N = 3; Min: 39.06 / Avg: 39.07 / Max: 39.07]

Neural Magic DeepSparse 1.7 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  d: 306.94  c: 306.92  b: 307.02  a: 307.00  [SE +/- 0.00, N = 3; Min: 306.91 / Avg: 306.92 / Max: 306.92]

Neural Magic DeepSparse 1.7 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better)
  d: 39.07  c: 39.06  b: 39.03  a: 39.05  [SE +/- 0.00, N = 3; Min: 39.05 / Avg: 39.06 / Max: 39.06]

Neural Magic DeepSparse 1.7 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, more is better)
  d: 306.90  c: 306.95  b: 307.09  a: 306.97  [SE +/- 0.03, N = 3; Min: 306.91 / Avg: 306.95 / Max: 307]

Neural Magic DeepSparse 1.7 - Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
  d: 6.4001  c: 6.4155  b: 6.4286  a: 6.3927  [SE +/- 0.0148, N = 3; Min: 6.39 / Avg: 6.42 / Max: 6.43]

Neural Magic DeepSparse 1.7 - Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream (items/sec, more is better)
  d: 156.04  c: 155.68  b: 155.34  a: 156.21  [SE +/- 0.37, N = 3; Min: 155.2 / Avg: 155.68 / Max: 156.4]

Neural Magic DeepSparse 1.7 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
  d: 6.4059  c: 6.4103  b: 6.4293  a: 6.3846  [SE +/- 0.0061, N = 3; Min: 6.4 / Avg: 6.41 / Max: 6.42]

Neural Magic DeepSparse 1.7 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, more is better)
  d: 155.90  c: 155.79  b: 155.31  a: 156.43  [SE +/- 0.15, N = 3; Min: 155.53 / Avg: 155.79 / Max: 156.06]

Neural Magic DeepSparse 1.7 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better)
  d: 1.2559  c: 1.2478  b: 1.2468  a: 1.2491  [SE +/- 0.0030, N = 3; Min: 1.24 / Avg: 1.25 / Max: 1.25]

Neural Magic DeepSparse 1.7 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, more is better)
  d: 794.39  c: 799.42  b: 800.05  a: 798.47  [SE +/- 1.90, N = 3; Min: 796.71 / Avg: 799.42 / Max: 803.09]

VVenC

VVenC 1.11 - Video Input: Bosphorus 1080p - Video Preset: Fast (Frames Per Second, more is better)
  d: 19.53  c: 19.46  b: 19.49  a: 19.62  [SE +/- 0.03, N = 3; Min: 19.41 / Avg: 19.46 / Max: 19.49]
  (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

SVT-AV1

SVT-AV1 2.0 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, more is better)
  d: 6.660  c: 6.711  b: 6.787  a: 6.647  [SE +/- 0.005, N = 3; Min: 6.7 / Avg: 6.71 / Max: 6.72]
  (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

oneDNN

oneDNN 3.4 - Harness: Deconvolution Batch shapes_1d - Engine: CPU (ms, fewer is better)
  d: 5.03641 (MIN: 3.91)  c: 5.37367 (MIN: 3.84)  b: 5.33929 (MIN: 3.86)  a: 5.51082 (MIN: 3.9)  [SE +/- 0.03672, N = 3; Min: 5.32 / Avg: 5.37 / Max: 5.44]
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

JPEG-XL Decoding libjxl

JPEG-XL Decoding libjxl 0.10.1 - CPU Threads: All (MP/s, more is better)
  d: 439.61  c: 470.16  b: 482.58  a: 483.42  [SE +/- 1.28, N = 3; Min: 468.17 / Avg: 470.16 / Max: 472.55]

srsRAN Project

srsRAN Project 23.10.1-20240219 - Test: PUSCH Processor Benchmark, Throughput Total (Mbps, more is better)
  d: 1857.0 (MIN: 1145.4)  c: 1886.9 (MIN: 1135 / MAX: 1889.2)  b: 1889.2 (MIN: 1137.6)  a: 1888.2 (MIN: 1136.3)  [SE +/- 1.62, N = 3; Min: 1883.8 / Avg: 1886.93 / Max: 1889.2]
  (CXX) g++ options: -march=native -mavx2 -mavx -msse4.1 -mfma -O3 -fno-trapping-math -fno-math-errno -ldl

JPEG-XL libjxl

JPEG-XL libjxl 0.10.1 - Input: PNG - Quality: 100 (MP/s, more is better)
  d: 27.34  c: 27.49  b: 27.57  a: 27.65  [SE +/- 0.01, N = 3; Min: 27.48 / Avg: 27.49 / Max: 27.5]
  (CXX) g++ options: -fno-rtti -O3 -fPIE -pie -lm

JPEG-XL libjxl 0.10.1 - Input: JPEG - Quality: 100 (MP/s, more is better)
  d: 27.32  c: 27.32  b: 27.32  a: 27.53  [SE +/- 0.03, N = 3; Min: 27.26 / Avg: 27.32 / Max: 27.36]
  (CXX) g++ options: -fno-rtti -O3 -fPIE -pie -lm

VVenC

VVenC 1.11 - Video Input: Bosphorus 1080p - Video Preset: Faster (Frames Per Second, more is better)
  d: 41.00  c: 40.92  b: 40.46  a: 41.28  [SE +/- 0.15, N = 3; Min: 40.61 / Avg: 40.92 / Max: 41.11]
  (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

oneDNN

oneDNN 3.4 - Harness: IP Shapes 1D - Engine: CPU (ms, fewer is better)
  d: 1.31333 (MIN: 1.27)  c: 1.32743 (MIN: 1.26)  b: 1.31101 (MIN: 1.27)  a: 1.31474 (MIN: 1.27)  [SE +/- 0.00673, N = 3; Min: 1.32 / Avg: 1.33 / Max: 1.34]
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

srsRAN Project

srsRAN Project 23.10.1-20240219 - Test: PUSCH Processor Benchmark, Throughput Thread (Mbps, more is better)
  d: 177.9 (MIN: 110.9)  c: 178.6 (MIN: 112.6 / MAX: 179.4)  b: 178.7 (MIN: 112.2)  a: 177.6 (MIN: 113.3)  [SE +/- 0.70, N = 3; Min: 177.2 / Avg: 178.6 / Max: 179.4]
  (CXX) g++ options: -march=native -mavx2 -mavx -msse4.1 -mfma -O3 -fno-trapping-math -fno-math-errno -ldl

SVT-AV1

SVT-AV1 2.0 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, more is better)
  d: 62.93  c: 62.50  b: 63.89  a: 62.95  [SE +/- 0.26, N = 3; Min: 61.98 / Avg: 62.5 / Max: 62.78]
  (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

srsRAN Project

srsRAN Project 23.10.1-20240219 - Test: PDSCH Processor Benchmark, Throughput Total (Mbps, more is better)
  d: 11654.9  c: 11723.5  b: 12063.4  a: 11710.3  [SE +/- 27.21, N = 3; Min: 11669.4 / Avg: 11723.47 / Max: 11755.8]
  (CXX) g++ options: -march=native -mavx2 -mavx -msse4.1 -mfma -O3 -fno-trapping-math -fno-math-errno -ldl

SVT-AV1

SVT-AV1 2.0 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, more is better)
  d: 19.32  c: 19.17  b: 18.92  a: 19.21  [SE +/- 0.04, N = 3; Min: 19.12 / Avg: 19.17 / Max: 19.24]
  (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

oneDNN

oneDNN 3.4 - Harness: IP Shapes 3D - Engine: CPU (ms, fewer is better)
  d: 3.53851 (MIN: 3.49)  c: 3.53323 (MIN: 3.47)  b: 3.52584 (MIN: 3.48)  a: 3.52569 (MIN: 3.48)  [SE +/- 0.00386, N = 3; Min: 3.53 / Avg: 3.53 / Max: 3.54]
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Google Draco

Google Draco 1.5.6 - Model: Church Facade (ms, fewer is better)
  d: 7096  c: 7092  b: 7034  a: 7023  [SE +/- 9.91, N = 3; Min: 7076 / Avg: 7091.67 / Max: 7110]
  (CXX) g++ options: -O3

Parallel BZIP2 Compression

Parallel BZIP2 Compression 1.1.13 - FreeBSD-13.0-RELEASE-amd64-memstick.img Compression (Seconds, fewer is better)
  d: 3.221993  c: 3.285082  b: 3.314749  a: 3.230305  [SE +/- 0.043942, N = 12; Min: 3 / Avg: 3.29 / Max: 3.47]
  (CXX) g++ options: -O2 -pthread -lbz2 -lpthread

Google Draco

Google Draco 1.5.6 - Model: Lion (ms, fewer is better)
  d: 5347  c: 5363  b: 5365  a: 5328  [SE +/- 15.72, N = 3; Min: 5342 / Avg: 5363.33 / Max: 5394]
  (CXX) g++ options: -O3

SVT-AV1

SVT-AV1 2.0 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, more is better)
  d: 133.78  c: 133.27  b: 131.93  a: 131.72  [SE +/- 0.72, N = 3; Min: 132.38 / Avg: 133.27 / Max: 134.7]
  (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

oneDNN

oneDNN 3.4 - Harness: Convolution Batch Shapes Auto - Engine: CPU (ms, fewer is better)
  d: 2.73352 (MIN: 2.67)  c: 2.73192 (MIN: 2.66)  b: 2.71135 (MIN: 2.65)  a: 2.73804 (MIN: 2.66)  [SE +/- 0.00421, N = 3; Min: 2.72 / Avg: 2.73 / Max: 2.74]
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Primesieve

Primesieve 12.1 - Length: 1e12 (Seconds, fewer is better)
  d: 6.123  c: 6.113  b: 6.081  a: 6.244  [SE +/- 0.056, N = 3; Min: 6.04 / Avg: 6.11 / Max: 6.22]
  (CXX) g++ options: -O3

SVT-AV1

SVT-AV1 2.0 - Encoder Mode: Preset 13 - Input: Bosphorus 4K (Frames Per Second, more is better)
  d: 152.47  c: 150.52  b: 145.97  a: 151.96  [SE +/- 1.34, N = 3; Min: 147.95 / Avg: 150.52 / Max: 152.44]
  (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1 2.0 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, more is better)
  d: 151.64  c: 150.13  b: 150.81  a: 154.17  [SE +/- 0.99, N = 3; Min: 148.26 / Avg: 150.13 / Max: 151.65]
  (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

WavPack Audio Encoding

WavPack Audio Encoding 5.7 - WAV To WavPack (Seconds, fewer is better)
  d: 4.442  c: 4.435  b: 4.438  a: 4.433  [SE +/- 0.000, N = 5; Min: 4.43 / Avg: 4.43 / Max: 4.44]

srsRAN Project

srsRAN Project 23.10.1-20240219 - Test: PDSCH Processor Benchmark, Throughput Thread (Mbps, more is better)
  d: 607.8  c: 605.5  b: 600.2  a: 603.7  [SE +/- 1.71, N = 3; Min: 602.4 / Avg: 605.53 / Max: 608.3]
  (CXX) g++ options: -march=native -mavx2 -mavx -msse4.1 -mfma -O3 -fno-trapping-math -fno-math-errno -ldl

oneDNN

oneDNN 3.4 - Harness: Deconvolution Batch shapes_3d - Engine: CPU (ms, fewer is better)
  d: 2.34346 (MIN: 2.25)  c: 2.36138 (MIN: 2.25)  b: 2.34602 (MIN: 2.28)  a: 2.36511 (MIN: 2.28)  [SE +/- 0.00819, N = 3; Min: 2.35 / Avg: 2.36 / Max: 2.37]
  (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

SVT-AV1

SVT-AV1 2.0 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second, more is better)
  d: 488.52  c: 487.66  b: 499.14  a: 489.43  [SE +/- 6.37, N = 3; Min: 478.08 / Avg: 487.66 / Max: 499.73]
  (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1 2.0 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p (Frames Per Second, more is better)
  d: 587.97  c: 606.36  b: 602.97  a: 587.48  [SE +/- 4.87, N = 3; Min: 599.87 / Avg: 606.36 / Max: 615.89]
  (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
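The result viewer's statistics options include "Show Overall Geometric Mean", which condenses many heterogeneous benchmarks into a single figure per run. A hedged sketch of how such a summary is typically computed; the normalization step below (dividing each result by run "a", after orienting everything so higher is better) is illustrative and not necessarily the suite's exact procedure:

```python
import math

def geometric_mean(values):
    """n-th root of the product, computed in log space for numerical stability."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical ratios for run "b" relative to run "a" across four tests,
# each oriented so that > 1.0 means "b" was faster.
normalized_b = [1.02, 0.97, 1.01, 0.99]
print(round(geometric_mean(normalized_b), 4))  # -> 0.9973
```

A geometric mean is preferred over an arithmetic mean here because the inputs are ratios on different scales; it is also invariant to which run is chosen as the normalization baseline.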

158 Results Shown

Timed Linux Kernel Compilation
BRL-CAD
Stockfish
OSPRay Studio:
  3 - 4K - 32 - Path Tracer - CPU
  2 - 4K - 32 - Path Tracer - CPU
  1 - 4K - 32 - Path Tracer - CPU
OSPRay:
  particle_volume/scivis/real_time
  particle_volume/pathtracer/real_time
  particle_volume/ao/real_time
OSPRay Studio
JPEG-XL libjxl:
  PNG - 90
  JPEG - 90
OSPRay Studio:
  2 - 4K - 16 - Path Tracer - CPU
  1 - 4K - 16 - Path Tracer - CPU
VVenC
OSPRay Studio:
  3 - 4K - 1 - Path Tracer - CPU
  2 - 4K - 1 - Path Tracer - CPU
  3 - 1080p - 16 - Path Tracer - CPU
  1 - 4K - 1 - Path Tracer - CPU
Primesieve
oneDNN
Chaos Group V-RAY
oneDNN
OSPRay:
  gravity_spheres_volume/dim_512/scivis/real_time
  gravity_spheres_volume/dim_512/ao/real_time
OSPRay Studio
Neural Magic DeepSparse:
  BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream:
    ms/batch
    items/sec
OSPRay Studio:
  1 - 1080p - 16 - Path Tracer - CPU
  3 - 1080p - 1 - Path Tracer - CPU
  2 - 1080p - 1 - Path Tracer - CPU
  1 - 1080p - 1 - Path Tracer - CPU
OpenVINO:
  Face Detection FP16 - CPU:
    ms
    FPS
Neural Magic DeepSparse:
  Llama2 Chat 7b Quantized - Asynchronous Multi-Stream:
    ms/batch
    items/sec
OpenVINO:
  Face Detection FP16-INT8 - CPU:
    ms
    FPS
OSPRay
OpenVINO:
  Person Detection FP16 - CPU:
    ms
    FPS
  Person Detection FP32 - CPU:
    ms
    FPS
  Machine Translation EN To DE FP16 - CPU:
    ms
    FPS
  Person Vehicle Bike Detection FP16 - CPU:
    ms
    FPS
  Road Segmentation ADAS FP16-INT8 - CPU:
    ms
    FPS
  Noise Suppression Poconet-Like FP16 - CPU:
    ms
    FPS
  Person Re-Identification Retail FP16 - CPU:
    ms
    FPS
  Road Segmentation ADAS FP16 - CPU:
    ms
    FPS
  Handwritten English Recognition FP16-INT8 - CPU:
    ms
    FPS
  Handwritten English Recognition FP16 - CPU:
    ms
    FPS
RocksDB
OpenVINO:
  Vehicle Detection FP16-INT8 - CPU:
    ms
    FPS
  Face Detection Retail FP16-INT8 - CPU:
    ms
    FPS
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    ms
    FPS
  Vehicle Detection FP16 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16-INT8 - CPU:
    ms
    FPS
  Face Detection Retail FP16 - CPU:
    ms
    FPS
  Age Gender Recognition Retail 0013 FP16 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16 - CPU:
    ms
    FPS
RocksDB:
  Update Rand
  Overwrite
  Read Rand Write Rand
  Rand Fill
  Read While Writing
  Rand Read
OSPRay Studio
Timed Linux Kernel Compilation
RocksDB
OSPRay Studio:
  2 - 1080p - 32 - Path Tracer - CPU
  1 - 1080p - 32 - Path Tracer - CPU
Neural Magic DeepSparse:
  Llama2 Chat 7b Quantized - Synchronous Single-Stream:
    ms/batch
    items/sec
JPEG-XL libjxl
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
JPEG-XL libjxl
Neural Magic DeepSparse:
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    ms/batch
    items/sec
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    ms/batch
    items/sec
VVenC
Neural Magic DeepSparse:
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream:
    ms/batch
    items/sec
JPEG-XL Decoding libjxl
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    ms/batch
    items/sec
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream:
    ms/batch
    items/sec
  ResNet-50, Baseline - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    ms/batch
    items/sec
  ResNet-50, Baseline - Synchronous Single-Stream:
    ms/batch
    items/sec
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    ms/batch
    items/sec
  ResNet-50, Sparse INT8 - Synchronous Single-Stream:
    ms/batch
    items/sec
VVenC
SVT-AV1
oneDNN
JPEG-XL Decoding libjxl
srsRAN Project
JPEG-XL libjxl:
  PNG - 100
  JPEG - 100
VVenC
oneDNN
srsRAN Project
SVT-AV1
srsRAN Project
SVT-AV1
oneDNN
Google Draco
Parallel BZIP2 Compression
Google Draco
SVT-AV1
oneDNN
Primesieve
SVT-AV1:
  Preset 13 - Bosphorus 4K
  Preset 12 - Bosphorus 4K
WavPack Audio Encoding
srsRAN Project
oneDNN
SVT-AV1:
  Preset 12 - Bosphorus 1080p
  Preset 13 - Bosphorus 1080p