ddddx

AMD Ryzen Threadripper PRO 5965WX 24-Cores testing with a ASUS Pro WS WRX80E-SAGE SE WIFI (1201 BIOS) and ASUS NVIDIA NV106 2GB on Ubuntu 23.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2403218-NE-DDDDX513530
Test categories represented in this result file:

C/C++ Compiler Tests: 2 tests
CPU Massive: 8 tests
Creator Workloads: 12 tests
Encoding: 3 tests
HPC - High Performance Computing: 3 tests
Imaging: 2 tests
Machine Learning: 3 tests
Multi-Core: 11 tests
Intel oneAPI: 4 tests
Python Tests: 2 tests
Raytracing: 2 tests
Renderers: 3 tests
Server CPU Tests: 4 tests
Video Encoding: 2 tests


Run Management

Run  Date      Test Duration
a    March 20  2 Hours, 33 Minutes
b    March 20  2 Hours, 33 Minutes
c    March 21  7 Hours, 54 Minutes
d    March 21  2 Hours, 37 Minutes

Average test duration: 3 Hours, 54 Minutes


ddddx Benchmarks - System Details:

Processor: AMD Ryzen Threadripper PRO 5965WX 24-Cores @ 3.80GHz (24 Cores / 48 Threads)
Motherboard: ASUS Pro WS WRX80E-SAGE SE WIFI (1201 BIOS)
Chipset: AMD Starship/Matisse
Memory: 8 x 16GB DDR4-2133MT/s Corsair CMK32GX4M2E3200C16
Disk: 2048GB SOLIDIGM SSDPFKKW020X7
Graphics: ASUS NVIDIA NV106 2GB
Audio: AMD Starship/Matisse
Monitor: VA2431
Network: 2 x Intel X550 + Intel Wi-Fi 6 AX200
OS: Ubuntu 23.10
Kernel: 6.5.0-15-generic (x86_64)
Desktop: GNOME Shell 45.0
Display Server: X Server + Wayland
Display Driver: nouveau
OpenGL: 4.3 Mesa 23.2.1-1ubuntu3
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs:

- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled)
- CPU Microcode: 0xa008205
- Python 3.11.6
- Security mitigations: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_rstack_overflow: Mitigation of safe RET, no microcode; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines, IBPB: conditional, IBRS_FW, STIBP: always-on, RSB filling, PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected

Result Overview (relative performance of runs a/b/c/d, 100% to 112% scale), covering: Stockfish, JPEG-XL Decoding libjxl, JPEG-XL libjxl, Parallel BZIP2 Compression, BRL-CAD, Primesieve, Timed Linux Kernel Compilation, oneDNN, srsRAN Project, Chaos Group V-RAY, RocksDB, VVenC, OSPRay, OpenVINO, Neural Magic DeepSparse, WavPack Audio Encoding, SVT-AV1, OSPRay Studio, Google Draco
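Overview charts like this one aggregate many tests into one relative score per run; a common approach, and a minimal sketch of it here, is to normalize each test to its best run and combine the ratios with a geometric mean so no single test dominates. The test names and values below are illustrative only, not taken from this result file:

```python
from math import prod

def normalized_geomean(results: dict[str, dict[str, float]]) -> dict[str, float]:
    """Aggregate per-test scores into one relative score per run.

    results maps test name -> {run id -> score}; all scores are assumed
    higher-is-better (lower-is-better metrics would be inverted first).
    Each test is normalized against its best run, then the per-run ratios
    are combined with a geometric mean, expressed as percent of best.
    """
    runs = sorted({run for per_run in results.values() for run in per_run})
    out = {}
    for run in runs:
        ratios = [per_run[run] / max(per_run.values()) for per_run in results.values()]
        out[run] = prod(ratios) ** (1.0 / len(ratios)) * 100.0
    return out

# Illustrative values only (not from this result file):
scores = {
    "encode": {"a": 100.0, "b": 110.0},
    "render": {"a": 50.0, "b": 45.0},
}
print(normalized_geomean(scores))
```

Because the mean is geometric, a run that wins one test by 10% and loses another by 10% lands close to parity rather than being skewed by the test with the larger raw numbers.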

[Condensed results table: flattened per-test values for runs a/b/c/d across JPEG-XL libjxl, JPEG-XL Decoding libjxl, srsRAN Project, SVT-AV1, VVenC, OSPRay, OSPRay Studio, Stockfish, Timed Linux Kernel Compilation, Parallel BZIP2 Compression, Primesieve, oneDNN, Neural Magic DeepSparse, Google Draco, OpenVINO, RocksDB, WavPack Audio Encoding, BRL-CAD, and Chaos Group V-RAY; the run-to-run mapping is not reliably reconstructible from this dump. Individual per-test results follow.]

JPEG-XL libjxl

JPEG-XL libjxl 0.10.1 (MP/s, more is better). All runs compiled with: (CXX) g++ options: -fno-rtti -O3 -fPIE -pie -lm

Input: PNG - Quality: 80
  a: 44.75  b: 43.25  c: 39.90  d: 42.18  (SE +/- 0.29, N = 3; run c: Min 39.32 / Avg 39.9 / Max 40.24)

Input: PNG - Quality: 90
  a: 39.40  b: 37.44  c: 38.51  d: 38.27  (SE +/- 0.27, N = 15; run c: Min 36.64 / Avg 38.51 / Max 39.77)

Input: JPEG - Quality: 80
  a: 46.58  b: 42.66  c: 42.35  d: 43.50  (SE +/- 0.32, N = 3; run c: Min 41.72 / Avg 42.35 / Max 42.71)

Input: JPEG - Quality: 90
  a: 42.50  b: 39.89  c: 40.76  d: 39.60  (SE +/- 0.39, N = 15; run c: Min 38.69 / Avg 40.76 / Max 43.42)

Input: PNG - Quality: 100
  a: 27.65  b: 27.57  c: 27.49  d: 27.34  (SE +/- 0.01, N = 3; run c: Min 27.48 / Avg 27.49 / Max 27.5)

Input: JPEG - Quality: 100
  a: 27.53  b: 27.32  c: 27.32  d: 27.32  (SE +/- 0.03, N = 3; run c: Min 27.26 / Avg 27.32 / Max 27.36)
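The SE figures quoted with each result are standard errors of the mean over N trials (sample standard deviation divided by the square root of N); a minimal sketch of that computation, with illustrative trial values not taken from this result file:

```python
from math import sqrt
from statistics import stdev

def standard_error(samples: list[float]) -> float:
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    return stdev(samples) / sqrt(len(samples))

# Illustrative trial values only (not from this result file):
trials = [44.1, 44.5, 44.9]
print(round(standard_error(trials), 4))  # -> 0.2309
```

A smaller SE relative to the spread between runs a/b/c/d is what makes a run-to-run difference meaningful rather than measurement noise.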

JPEG-XL Decoding libjxl

JPEG-XL Decoding libjxl 0.10.1 (MP/s, more is better).

CPU Threads: 1
  a: 64.03  b: 63.30  c: 63.04  d: 62.82  (SE +/- 0.12, N = 3; run c: Min 62.87 / Avg 63.04 / Max 63.27)

CPU Threads: All
  a: 483.42  b: 482.58  c: 470.16  d: 439.61  (SE +/- 1.28, N = 3; run c: Min 468.17 / Avg 470.16 / Max 472.55)

srsRAN Project

srsRAN Project 23.10.1-20240219 (Mbps, more is better). All runs compiled with: (CXX) g++ options: -march=native -mavx2 -mavx -msse4.1 -mfma -O3 -fno-trapping-math -fno-math-errno -ldl

Test: PDSCH Processor Benchmark, Throughput Total
  a: 11710.3  b: 12063.4  c: 11723.5  d: 11654.9  (SE +/- 27.21, N = 3; run c: Min 11669.4 / Avg 11723.47 / Max 11755.8)

Test: PUSCH Processor Benchmark, Throughput Total
  a: 1888.2  b: 1889.2  c: 1886.9  d: 1857.0  (SE +/- 1.62, N = 3; run c: Min 1883.8 / Avg 1886.93 / Max 1889.2)
  Reported extremes: a MIN 1136.3; b MIN 1137.6; c MIN 1135 / MAX 1889.2; d MIN 1145.4

Test: PDSCH Processor Benchmark, Throughput Thread
  a: 603.7  b: 600.2  c: 605.5  d: 607.8  (SE +/- 1.71, N = 3; run c: Min 602.4 / Avg 605.53 / Max 608.3)

Test: PUSCH Processor Benchmark, Throughput Thread
  a: 177.6  b: 178.7  c: 178.6  d: 177.9  (SE +/- 0.70, N = 3; run c: Min 177.2 / Avg 178.6 / Max 179.4)
  Reported extremes: a MIN 113.3; b MIN 112.2; c MIN 112.6 / MAX 179.4; d MIN 110.9

SVT-AV1

SVT-AV1 2.0 (Frames Per Second, more is better). All runs compiled with: (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Encoder Mode: Preset 4 - Input: Bosphorus 4K
  a: 6.647  b: 6.787  c: 6.711  d: 6.660  (SE +/- 0.005, N = 3; run c: Min 6.7 / Avg 6.71 / Max 6.72)

Encoder Mode: Preset 8 - Input: Bosphorus 4K
  a: 62.95  b: 63.89  c: 62.50  d: 62.93  (SE +/- 0.26, N = 3; run c: Min 61.98 / Avg 62.5 / Max 62.78)

Encoder Mode: Preset 12 - Input: Bosphorus 4K
  a: 154.17  b: 150.81  c: 150.13  d: 151.64  (SE +/- 0.99, N = 3; run c: Min 148.26 / Avg 150.13 / Max 151.65)

Encoder Mode: Preset 13 - Input: Bosphorus 4K
  a: 151.96  b: 145.97  c: 150.52  d: 152.47  (SE +/- 1.34, N = 3; run c: Min 147.95 / Avg 150.52 / Max 152.44)

Encoder Mode: Preset 4 - Input: Bosphorus 1080p
  a: 19.21  b: 18.92  c: 19.17  d: 19.32  (SE +/- 0.04, N = 3; run c: Min 19.12 / Avg 19.17 / Max 19.24)

Encoder Mode: Preset 8 - Input: Bosphorus 1080p
  a: 131.72  b: 131.93  c: 133.27  d: 133.78  (SE +/- 0.72, N = 3; run c: Min 132.38 / Avg 133.27 / Max 134.7)

Encoder Mode: Preset 12 - Input: Bosphorus 1080p
  a: 489.43  b: 499.14  c: 487.66  d: 488.52  (SE +/- 6.37, N = 3; run c: Min 478.08 / Avg 487.66 / Max 499.73)

Encoder Mode: Preset 13 - Input: Bosphorus 1080p
  a: 587.48  b: 602.97  c: 606.36  d: 587.97  (SE +/- 4.87, N = 3; run c: Min 599.87 / Avg 606.36 / Max 615.89)

VVenC

VVenC 1.11 (Frames Per Second, more is better). All runs compiled with: (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

Video Input: Bosphorus 4K - Video Preset: Fast
  a: 7.060  b: 7.075  c: 7.066  d: 7.062  (SE +/- 0.031, N = 3; run c: Min 7.02 / Avg 7.07 / Max 7.13)

Video Input: Bosphorus 4K - Video Preset: Faster
  a: 14.57  b: 14.82  c: 14.78  d: 14.86  (SE +/- 0.03, N = 3; run c: Min 14.75 / Avg 14.78 / Max 14.83)

Video Input: Bosphorus 1080p - Video Preset: Fast
  a: 19.62  b: 19.49  c: 19.46  d: 19.53  (SE +/- 0.03, N = 3; run c: Min 19.41 / Avg 19.46 / Max 19.49)

Video Input: Bosphorus 1080p - Video Preset: Faster
  a: 41.28  b: 40.46  c: 40.92  d: 41.00  (SE +/- 0.15, N = 3; run c: Min 40.61 / Avg 40.92 / Max 41.11)

OSPRay

OSPRay 3.1 (Items Per Second, more is better).

Benchmark: particle_volume/ao/real_time
  a: 10.28  b: 10.25  c: 10.28  d: 10.30  (SE +/- 0.01, N = 3; run c: Min 10.26 / Avg 10.28 / Max 10.3)

Benchmark: particle_volume/scivis/real_time
  a: 10.14  b: 10.14  c: 10.15  d: 10.18  (SE +/- 0.01, N = 3; run c: Min 10.13 / Avg 10.15 / Max 10.16)

Benchmark: particle_volume/pathtracer/real_time
  a: 156.40  b: 155.37  c: 155.83  d: 155.05  (SE +/- 0.22, N = 3; run c: Min 155.45 / Avg 155.83 / Max 156.22)

Benchmark: gravity_spheres_volume/dim_512/ao/real_time
  a: 4.90257  b: 4.88053  c: 4.88608  d: 4.89928  (SE +/- 0.01074, N = 3; run c: Min 4.87 / Avg 4.89 / Max 4.91)

Benchmark: gravity_spheres_volume/dim_512/scivis/real_time
  a: 4.59882  b: 4.57706  c: 4.59266  d: 4.58081  (SE +/- 0.00963, N = 3; run c: Min 4.57 / Avg 4.59 / Max 4.6)

Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time
  a: 7.46614  b: 7.44837  c: 7.43821  d: 7.45220  (SE +/- 0.00473, N = 3; run c: Min 7.43 / Avg 7.44 / Max 7.44)

Stockfish

Stockfish 16.1, Chess Benchmark (Nodes Per Second, more is better). Compiled with: (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -funroll-loops -msse -msse3 -mpopcnt -mavx2 -mbmi -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto-partition=one -flto=jobserver

  a: 52607528  b: 61008270  c: 55237573  d: 53082452  (SE +/- 1129127.95, N = 15; run c: Min 51266277 / Avg 55237572.8 / Max 64934356)

Timed Linux Kernel Compilation

Timed Linux Kernel Compilation 6.8 (Seconds, fewer is better).

Build: defconfig
  a: 54.13  b: 54.21  c: 52.81  d: 54.16  (SE +/- 0.59, N = 3; run c: Min 52.22 / Avg 52.81 / Max 53.99)

Build: allmodconfig
  a: 597.03  b: 597.90  c: 596.51  d: 597.83  (SE +/- 0.92, N = 3; run c: Min 594.94 / Avg 596.51 / Max 598.13)

Parallel BZIP2 Compression

Parallel BZIP2 Compression 1.1.13, FreeBSD-13.0-RELEASE-amd64-memstick.img Compression (Seconds, fewer is better). Compiled with: (CXX) g++ options: -O2 -pthread -lbz2 -lpthread

  a: 3.230305  b: 3.314749  c: 3.285082  d: 3.221993  (SE +/- 0.043942, N = 12; run c: Min 3 / Avg 3.29 / Max 3.47)

Primesieve

Primesieve 12.1 (Seconds, fewer is better). Compiled with: (CXX) g++ options: -O3

Length: 1e12
  a: 6.244  b: 6.081  c: 6.113  d: 6.123  (SE +/- 0.056, N = 3; run c: Min 6.04 / Avg 6.11 / Max 6.22)

Length: 1e13
  a: 77.15  b: 76.92  c: 77.21  d: 77.06  (SE +/- 0.05, N = 3; run c: Min 77.11 / Avg 77.21 / Max 77.29)

oneDNN

oneDNN 3.4 - Harness: IP Shapes 1D - Engine: CPU (ms, Fewer Is Better)
  a: 1.31474 (MIN: 1.27) | b: 1.31101 (MIN: 1.27) | c: 1.32743 (MIN: 1.26) | d: 1.31333 (MIN: 1.27)
  SE +/- 0.00673, N = 3; Min: 1.32 / Avg: 1.33 / Max: 1.34

oneDNN 3.4 - Harness: IP Shapes 3D - Engine: CPU (ms, Fewer Is Better)
  a: 3.52569 (MIN: 3.48) | b: 3.52584 (MIN: 3.48) | c: 3.53323 (MIN: 3.47) | d: 3.53851 (MIN: 3.49)
  SE +/- 0.00386, N = 3; Min: 3.53 / Avg: 3.53 / Max: 3.54

oneDNN 3.4 - Harness: Convolution Batch Shapes Auto - Engine: CPU (ms, Fewer Is Better)
  a: 2.73804 (MIN: 2.66) | b: 2.71135 (MIN: 2.65) | c: 2.73192 (MIN: 2.66) | d: 2.73352 (MIN: 2.67)
  SE +/- 0.00421, N = 3; Min: 2.72 / Avg: 2.73 / Max: 2.74

oneDNN 3.4 - Harness: Deconvolution Batch shapes_1d - Engine: CPU (ms, Fewer Is Better)
  a: 5.51082 (MIN: 3.9) | b: 5.33929 (MIN: 3.86) | c: 5.37367 (MIN: 3.84) | d: 5.03641 (MIN: 3.91)
  SE +/- 0.03672, N = 3; Min: 5.32 / Avg: 5.37 / Max: 5.44

oneDNN 3.4 - Harness: Deconvolution Batch shapes_3d - Engine: CPU (ms, Fewer Is Better)
  a: 2.36511 (MIN: 2.28) | b: 2.34602 (MIN: 2.28) | c: 2.36138 (MIN: 2.25) | d: 2.34346 (MIN: 2.25)
  SE +/- 0.00819, N = 3; Min: 2.35 / Avg: 2.36 / Max: 2.37

oneDNN 3.4 - Harness: Recurrent Neural Network Training - Engine: CPU (ms, Fewer Is Better)
  a: 1254.68 (MIN: 1250.31) | b: 1255.67 (MIN: 1250.99) | c: 1256.43 (MIN: 1249.44) | d: 1254.43 (MIN: 1249.27)
  SE +/- 0.77, N = 3; Min: 1254.9 / Avg: 1256.43 / Max: 1257.35

oneDNN 3.4 - Harness: Recurrent Neural Network Inference - Engine: CPU (ms, Fewer Is Better)
  a: 638.13 (MIN: 634.67) | b: 636.77 (MIN: 632.58) | c: 637.17 (MIN: 632.47) | d: 642.19 (MIN: 633.4)
  SE +/- 0.39, N = 3; Min: 636.44 / Avg: 637.17 / Max: 637.78

1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
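This result file's view options can hide results with little run-to-run change or spread; the Deconvolution Batch shapes_1d harness is the kind of result such a filter keeps. A small sketch (a hypothetical helper, not part of the Phoronix Test Suite) computes the worst-to-best spread across its four runs:

```python
# Four per-run times (ms) for oneDNN Deconvolution Batch shapes_1d,
# taken from the result table above (runs a, b, c, d).
runs = [5.51082, 5.33929, 5.37367, 5.03641]

# Worst-to-best spread as a percentage of the best (lowest) time.
spread_pct = (max(runs) - min(runs)) / min(runs) * 100
print(round(spread_pct, 1))  # 9.4 (% spread between fastest and slowest run)
```

A roughly 9% spread stands out against the other oneDNN harnesses here, which mostly vary by well under 1% between runs.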

OSPRay Studio

OSPRay Studio 1.0 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  a: 4528 | b: 4521 | c: 4535 | d: 4528
  SE +/- 4.41, N = 3; Min: 4527 / Avg: 4535.33 / Max: 4542

OSPRay Studio 1.0 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  a: 4616 | b: 4609 | c: 4615 | d: 4618
  SE +/- 7.22, N = 3; Min: 4602 / Avg: 4614.67 / Max: 4627

OSPRay Studio 1.0 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  a: 5330 | b: 5342 | c: 5341 | d: 5331
  SE +/- 4.04, N = 3; Min: 5336 / Avg: 5341 / Max: 5349

OSPRay Studio 1.0 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  a: 78061 | b: 76995 | c: 77511 | d: 77073
  SE +/- 155.86, N = 3; Min: 77330 / Avg: 77510.67 / Max: 77821

OSPRay Studio 1.0 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  a: 150375 | b: 150040 | c: 150381 | d: 150694
  SE +/- 64.70, N = 3; Min: 150259 / Avg: 150381.33 / Max: 150479

OSPRay Studio 1.0 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  a: 78788 | b: 78304 | c: 78919 | d: 78354
  SE +/- 110.73, N = 3; Min: 78760 / Avg: 78919 / Max: 79132

OSPRay Studio 1.0 - Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  a: 152653 | b: 153552 | c: 152510 | d: 151620
  SE +/- 88.33, N = 3; Min: 152368 / Avg: 152510 / Max: 152672

OSPRay Studio 1.0 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  a: 90362 | b: 90131 | c: 90083 | d: 90380
  SE +/- 228.07, N = 3; Min: 89632 / Avg: 90082.67 / Max: 90369

OSPRay Studio 1.0 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  a: 177445 | b: 175496 | c: 175767 | d: 175129
  SE +/- 268.88, N = 3; Min: 175445 / Avg: 175767 / Max: 176301

OSPRay Studio 1.0 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  a: 1138 | b: 1138 | c: 1138 | d: 1134
  SE +/- 1.00, N = 3; Min: 1137 / Avg: 1138 / Max: 1140

OSPRay Studio 1.0 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  a: 1158 | b: 1150 | c: 1152 | d: 1153
  SE +/- 0.67, N = 3; Min: 1151 / Avg: 1152.33 / Max: 1153

OSPRay Studio 1.0 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  a: 1336 | b: 1338 | c: 1333 | d: 1330
  SE +/- 4.41, N = 3; Min: 1326 / Avg: 1332.67 / Max: 1341

OSPRay Studio 1.0 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  a: 18203 | b: 18181 | c: 18229 | d: 18177
  SE +/- 39.50, N = 3; Min: 18152 / Avg: 18229.33 / Max: 18282

OSPRay Studio 1.0 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  a: 41167 | b: 41125 | c: 41189 | d: 41111
  SE +/- 58.89, N = 3; Min: 41087 / Avg: 41189.33 / Max: 41291

OSPRay Studio 1.0 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  a: 18543 | b: 18471 | c: 18579 | d: 18488
  SE +/- 16.33, N = 3; Min: 18549 / Avg: 18579.33 / Max: 18605

OSPRay Studio 1.0 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  a: 41839 | b: 41598 | c: 42050 | d: 41959
  SE +/- 250.02, N = 3; Min: 41747 / Avg: 42050 / Max: 42546

OSPRay Studio 1.0 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  a: 21449 | b: 21398 | c: 21409 | d: 21385
  SE +/- 35.14, N = 3; Min: 21340 / Avg: 21409.33 / Max: 21454

OSPRay Studio 1.0 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU (ms, Fewer Is Better)
  a: 47495 | b: 47827 | c: 47686 | d: 47597
  SE +/- 129.36, N = 3; Min: 47510 / Avg: 47685.67 / Max: 47938

Neural Magic DeepSparse

Neural Magic DeepSparse 1.7 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  a: 26.69 | b: 26.60 | c: 26.62 | d: 26.69
  SE +/- 0.04, N = 3; Min: 26.54 / Avg: 26.62 / Max: 26.67

Neural Magic DeepSparse 1.7 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 448.71 | b: 447.65 | c: 449.12 | d: 448.93
  SE +/- 0.38, N = 3; Min: 448.44 / Avg: 449.12 / Max: 449.75

Neural Magic DeepSparse 1.7 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  a: 18.37 | b: 18.51 | c: 18.40 | d: 18.40
  SE +/- 0.02, N = 3; Min: 18.37 / Avg: 18.4 / Max: 18.43

Neural Magic DeepSparse 1.7 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  a: 54.42 | b: 54.01 | c: 54.33 | d: 54.34
  SE +/- 0.05, N = 3; Min: 54.24 / Avg: 54.33 / Max: 54.42
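DeepSparse reports each single-stream scenario twice, as throughput (items/sec) and as latency (ms/batch); at batch size 1 the two should be near-reciprocal. A quick consistency check against run a of the oBERT IMDB single-stream result (batch size 1 is an assumption here, not stated in the result file):

```python
# Run "a", oBERT base uncased on IMDB, Synchronous Single-Stream:
throughput = 18.37        # items/sec, from the throughput result
reported_latency = 54.42  # ms/batch, from the latency result

# At batch size 1, latency in ms should be ~1000 / throughput.
derived = 1000.0 / throughput
print(round(derived, 1))  # 54.4, in close agreement with 54.42
```

The small residual difference is expected, since the two graphs are computed from the same runs but rounded independently.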

Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  a: 684.57 | b: 685.45 | c: 684.02 | d: 685.82
  SE +/- 0.35, N = 3; Min: 683.46 / Avg: 684.02 / Max: 684.67

Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 17.51 | b: 17.49 | c: 17.52 | d: 17.48
  SE +/- 0.01, N = 3; Min: 17.51 / Avg: 17.52 / Max: 17.54

Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  a: 194.29 | b: 191.76 | c: 192.98 | d: 194.38
  SE +/- 0.92, N = 3; Min: 191.96 / Avg: 192.98 / Max: 194.82

Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  a: 5.1435 | b: 5.2115 | c: 5.1787 | d: 5.1408
  SE +/- 0.0247, N = 3; Min: 5.13 / Avg: 5.18 / Max: 5.21

Neural Magic DeepSparse 1.7 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  a: 307.00 | b: 307.02 | c: 306.92 | d: 306.94
  SE +/- 0.00, N = 3; Min: 306.91 / Avg: 306.92 / Max: 306.92

Neural Magic DeepSparse 1.7 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 39.05 | b: 39.05 | c: 39.07 | d: 39.06
  SE +/- 0.00, N = 3; Min: 39.06 / Avg: 39.07 / Max: 39.07

Neural Magic DeepSparse 1.7 - Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  a: 156.21 | b: 155.34 | c: 155.68 | d: 156.04
  SE +/- 0.37, N = 3; Min: 155.2 / Avg: 155.68 / Max: 156.4

Neural Magic DeepSparse 1.7 - Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  a: 6.3927 | b: 6.4286 | c: 6.4155 | d: 6.4001
  SE +/- 0.0148, N = 3; Min: 6.39 / Avg: 6.42 / Max: 6.43

Neural Magic DeepSparse 1.7 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  a: 1994.39 | b: 2032.96 | c: 1998.61 | d: 2014.58
  SE +/- 6.40, N = 3; Min: 1986.99 / Avg: 1998.61 / Max: 2009.06

Neural Magic DeepSparse 1.7 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 6.0030 | b: 5.8898 | c: 5.9888 | d: 5.9417
  SE +/- 0.0186, N = 3; Min: 5.96 / Avg: 5.99 / Max: 6.02

Neural Magic DeepSparse 1.7 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  a: 798.47 | b: 800.05 | c: 799.42 | d: 794.39
  SE +/- 1.90, N = 3; Min: 796.71 / Avg: 799.42 / Max: 803.09

Neural Magic DeepSparse 1.7 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  a: 1.2491 | b: 1.2468 | c: 1.2478 | d: 1.2559
  SE +/- 0.0030, N = 3; Min: 1.24 / Avg: 1.25 / Max: 1.25

Neural Magic DeepSparse 1.7 - Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  a: 1.8772 | b: 1.8677 | c: 1.8705 | d: 1.8701
  SE +/- 0.0031, N = 3; Min: 1.87 / Avg: 1.87 / Max: 1.88

Neural Magic DeepSparse 1.7 - Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 5873.27 | b: 5904.28 | c: 5896.59 | d: 5897.28
  SE +/- 9.69, N = 3; Min: 5877.27 / Avg: 5896.59 / Max: 5907.44

Neural Magic DeepSparse 1.7 - Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  a: 6.0448 | b: 6.0393 | c: 6.0425 | d: 6.0373
  SE +/- 0.0038, N = 3; Min: 6.04 / Avg: 6.04 / Max: 6.05

Neural Magic DeepSparse 1.7 - Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  a: 165.40 | b: 165.55 | c: 165.47 | d: 165.61
  SE +/- 0.11, N = 3; Min: 165.3 / Avg: 165.47 / Max: 165.66

Neural Magic DeepSparse 1.7 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  a: 306.97 | b: 307.09 | c: 306.95 | d: 306.90
  SE +/- 0.03, N = 3; Min: 306.91 / Avg: 306.95 / Max: 307

Neural Magic DeepSparse 1.7 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 39.05 | b: 39.03 | c: 39.06 | d: 39.07
  SE +/- 0.00, N = 3; Min: 39.05 / Avg: 39.06 / Max: 39.06

Neural Magic DeepSparse 1.7 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  a: 156.43 | b: 155.31 | c: 155.79 | d: 155.90
  SE +/- 0.15, N = 3; Min: 155.53 / Avg: 155.79 / Max: 156.06

Neural Magic DeepSparse 1.7 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  a: 6.3846 | b: 6.4293 | c: 6.4103 | d: 6.4059
  SE +/- 0.0061, N = 3; Min: 6.4 / Avg: 6.41 / Max: 6.42

Neural Magic DeepSparse 1.7 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  a: 150.83 | b: 151.98 | c: 151.26 | d: 150.73
  SE +/- 0.31, N = 3; Min: 150.64 / Avg: 151.26 / Max: 151.68

Neural Magic DeepSparse 1.7 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 79.45 | b: 78.82 | c: 79.25 | d: 79.51
  SE +/- 0.16, N = 3; Min: 79.03 / Avg: 79.25 / Max: 79.57

Neural Magic DeepSparse 1.7 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  a: 99.33 | b: 99.22 | c: 98.80 | d: 99.05
  SE +/- 0.05, N = 3; Min: 98.72 / Avg: 98.8 / Max: 98.89

Neural Magic DeepSparse 1.7 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  a: 10.06 | b: 10.07 | c: 10.12 | d: 10.09
  SE +/- 0.01, N = 3; Min: 10.11 / Avg: 10.12 / Max: 10.12

Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  a: 224.16 | b: 224.20 | c: 223.00 | d: 223.55
  SE +/- 0.20, N = 3; Min: 222.68 / Avg: 223 / Max: 223.37

Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 53.48 | b: 53.47 | c: 53.76 | d: 53.63
  SE +/- 0.05, N = 3; Min: 53.67 / Avg: 53.76 / Max: 53.83

Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  a: 112.33 | b: 111.84 | c: 112.00 | d: 111.45
  SE +/- 0.07, N = 3; Min: 111.87 / Avg: 112 / Max: 112.12

Neural Magic DeepSparse 1.7 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  a: 8.8921 | b: 8.9317 | c: 8.9189 | d: 8.9625
  SE +/- 0.0057, N = 3; Min: 8.91 / Avg: 8.92 / Max: 8.93

Neural Magic DeepSparse 1.7 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  a: 30.47 | b: 30.49 | c: 30.39 | d: 30.49
  SE +/- 0.09, N = 3; Min: 30.21 / Avg: 30.39 / Max: 30.48

Neural Magic DeepSparse 1.7 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 393.66 | b: 393.08 | c: 393.82 | d: 393.39
  SE +/- 0.32, N = 3; Min: 393.34 / Avg: 393.82 / Max: 394.42

Neural Magic DeepSparse 1.7 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  a: 21.70 | b: 21.75 | c: 21.69 | d: 21.72
  SE +/- 0.01, N = 3; Min: 21.67 / Avg: 21.69 / Max: 21.72

Neural Magic DeepSparse 1.7 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  a: 46.06 | b: 45.96 | c: 46.08 | d: 46.02
  SE +/- 0.03, N = 3; Min: 46.02 / Avg: 46.08 / Max: 46.12

Neural Magic DeepSparse 1.7 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  a: 334.39 | b: 337.45 | c: 334.75 | d: 334.72
  SE +/- 0.53, N = 3; Min: 333.85 / Avg: 334.75 / Max: 335.68

Neural Magic DeepSparse 1.7 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 35.85 | b: 35.54 | c: 35.81 | d: 35.81
  SE +/- 0.06, N = 3; Min: 35.69 / Avg: 35.81 / Max: 35.91

Neural Magic DeepSparse 1.7 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  a: 75.88 | b: 76.12 | c: 76.06 | d: 75.70
  SE +/- 0.14, N = 3; Min: 75.8 / Avg: 76.06 / Max: 76.27

Neural Magic DeepSparse 1.7 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  a: 13.17 | b: 13.13 | c: 13.14 | d: 13.20
  SE +/- 0.02, N = 3; Min: 13.11 / Avg: 13.14 / Max: 13.19

Neural Magic DeepSparse 1.7 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
  a: 26.74 | b: 26.86 | c: 26.70 | d: 26.81
  SE +/- 0.03, N = 3; Min: 26.64 / Avg: 26.7 / Max: 26.74

Neural Magic DeepSparse 1.7 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
  a: 446.84 | b: 446.24 | c: 447.58 | d: 447.17
  SE +/- 0.66, N = 3; Min: 446.33 / Avg: 447.58 / Max: 448.55

Neural Magic DeepSparse 1.7 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, More Is Better)
  a: 18.49 | b: 18.52 | c: 18.49 | d: 18.48
  SE +/- 0.01, N = 3; Min: 18.47 / Avg: 18.49 / Max: 18.5

Neural Magic DeepSparse 1.7 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, Fewer Is Better)
  a: 54.07 | b: 53.97 | c: 54.08 | d: 54.10
  SE +/- 0.02, N = 3; Min: 54.04 / Avg: 54.08 / Max: 54.12

Google Draco

Google Draco 1.5.6 - Model: Lion (ms, Fewer Is Better)
  a: 5328 | b: 5365 | c: 5363 | d: 5347
  SE +/- 15.72, N = 3; Min: 5342 / Avg: 5363.33 / Max: 5394

Google Draco 1.5.6 - Model: Church Facade (ms, Fewer Is Better)
  a: 7023 | b: 7034 | c: 7092 | d: 7096
  SE +/- 9.91, N = 3; Min: 7076 / Avg: 7091.67 / Max: 7110

1. (CXX) g++ options: -O3

OpenVINO

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2024.0Model: Face Detection FP16 - Device: CPUabcd246810SE +/- 0.01, N = 37.607.687.597.641. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2024.0Model: Face Detection FP16 - Device: CPUabcd3691215Min: 7.58 / Avg: 7.59 / Max: 7.611. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2024.0Model: Face Detection FP16 - Device: CPUabcd30060090012001500SE +/- 0.80, N = 31558.811547.311563.151553.54MIN: 1416.22 / MAX: 1644.19MIN: 1403.59 / MAX: 1636.72MIN: 1369.79 / MAX: 1663.37MIN: 1365.71 / MAX: 1635.161. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2024.0Model: Face Detection FP16 - Device: CPUabcd30060090012001500Min: 1561.57 / Avg: 1563.15 / Max: 1564.131. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2024.0Model: Person Detection FP16 - Device: CPUabcd1632486480SE +/- 0.04, N = 370.0169.9569.8369.611. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2024.0Model: Person Detection FP16 - Device: CPUabcd1428425670Min: 69.75 / Avg: 69.83 / Max: 69.881. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO 2024.0 - Model: Person Detection FP16 - Device: CPU (ms, fewer is better)
  a: 171.22 (MIN: 130.32 / MAX: 233.99)
  b: 171.43 (MIN: 140.26 / MAX: 224.54)
  c: 171.67 (MIN: 132.19 / MAX: 231.16)
  d: 172.18 (MIN: 138.53 / MAX: 224.8)
  c: SE +/- 0.11, N = 3 (Min: 171.52 / Avg: 171.67 / Max: 171.89)

OpenVINO 2024.0 - Model: Person Detection FP32 - Device: CPU (FPS, more is better)
  a: 69.73   b: 70.01   c: 69.99   d: 69.86
  c: SE +/- 0.06, N = 3 (Min: 69.88 / Avg: 69.99 / Max: 70.06)

OpenVINO 2024.0 - Model: Person Detection FP32 - Device: CPU (ms, fewer is better)
  a: 171.92 (MIN: 129.51 / MAX: 227.57)
  b: 171.16 (MIN: 129.54 / MAX: 225.82)
  c: 171.29 (MIN: 134.25 / MAX: 225.4)
  d: 171.54 (MIN: 135.7 / MAX: 226.04)
  c: SE +/- 0.12, N = 3 (Min: 171.14 / Avg: 171.29 / Max: 171.52)

OpenVINO 2024.0 - Model: Vehicle Detection FP16 - Device: CPU (FPS, more is better)
  a: 600.94   b: 598.03   c: 592.96   d: 603.96
  c: SE +/- 0.39, N = 3 (Min: 592.33 / Avg: 592.96 / Max: 593.67)

OpenVINO 2024.0 - Model: Vehicle Detection FP16 - Device: CPU (ms, fewer is better)
  a: 19.94 (MIN: 8.87 / MAX: 42.42)
  b: 20.04 (MIN: 12.81 / MAX: 37.51)
  c: 20.21 (MIN: 9.28 / MAX: 49.09)
  d: 19.84 (MIN: 11.38 / MAX: 34.24)
  c: SE +/- 0.01, N = 3 (Min: 20.19 / Avg: 20.21 / Max: 20.23)

OpenVINO 2024.0 - Model: Face Detection FP16-INT8 - Device: CPU (FPS, more is better)
  a: 16.66   b: 16.73   c: 16.66   d: 16.74
  c: SE +/- 0.01, N = 3 (Min: 16.65 / Avg: 16.66 / Max: 16.67)

OpenVINO 2024.0 - Model: Face Detection FP16-INT8 - Device: CPU (ms, fewer is better)
  a: 715.22 (MIN: 664.62 / MAX: 729.04)
  b: 713.72 (MIN: 658.61 / MAX: 738.08)
  c: 716.16 (MIN: 661.6 / MAX: 732.11)
  d: 713.64 (MIN: 667.56 / MAX: 731.59)
  c: SE +/- 0.26, N = 3 (Min: 715.7 / Avg: 716.16 / Max: 716.61)

OpenVINO 2024.0 - Model: Face Detection Retail FP16 - Device: CPU (FPS, more is better)
  a: 2192.09   b: 2197.50   c: 2197.98   d: 2198.11
  c: SE +/- 4.26, N = 3 (Min: 2190.56 / Avg: 2197.98 / Max: 2205.3)

OpenVINO 2024.0 - Model: Face Detection Retail FP16 - Device: CPU (ms, fewer is better)
  a: 5.46 (MIN: 2.82 / MAX: 21.64)
  b: 5.45 (MIN: 2.89 / MAX: 21.85)
  c: 5.45 (MIN: 2.8 / MAX: 22.36)
  d: 5.45 (MIN: 2.8 / MAX: 28.34)
  c: SE +/- 0.01, N = 3 (Min: 5.43 / Avg: 5.45 / Max: 5.47)
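The paired throughput and latency numbers above are mutually consistent: by Little's law, throughput times latency approximates the number of requests in flight, and each FP16 model here works out to roughly 12. A small sketch over run a's values; reading this as the async inference-request concurrency is an interpretation on my part, not something the result file states:

```python
# (FPS, ms) pairs for run "a", taken from the OpenVINO tables above.
pairs = {
    "Face Detection FP16":        (7.60, 1558.81),
    "Person Detection FP16":      (70.01, 171.22),
    "Vehicle Detection FP16":     (600.94, 19.94),
    "Face Detection Retail FP16": (2192.09, 5.46),
}
for model, (fps, ms) in pairs.items():
    # Little's law: in-flight requests ~= throughput (req/s) * latency (s).
    in_flight = fps * ms / 1000.0
    print(f"{model}: ~{in_flight:.1f} requests in flight")
```

All four models land between 11.8 and 12.0, which is why the latency figures are far larger than 1000/FPS would suggest.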

OpenVINO 2024.0 - Model: Road Segmentation ADAS FP16 - Device: CPU (FPS, more is better)
  a: 170.10   b: 170.68   c: 169.65   d: 169.41
  c: SE +/- 0.04, N = 3 (Min: 169.6 / Avg: 169.65 / Max: 169.72)

OpenVINO 2024.0 - Model: Road Segmentation ADAS FP16 - Device: CPU (ms, fewer is better)
  a: 70.48 (MIN: 43.7 / MAX: 128.66)
  b: 70.23 (MIN: 43.23 / MAX: 122.75)
  c: 70.66 (MIN: 24.84 / MAX: 127.05)
  d: 70.76 (MIN: 41.53 / MAX: 122.26)
  c: SE +/- 0.02, N = 3 (Min: 70.63 / Avg: 70.66 / Max: 70.68)

OpenVINO 2024.0 - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, more is better)
  a: 1104.03   b: 1106.44   c: 1107.64   d: 1103.60
  c: SE +/- 1.89, N = 3 (Min: 1105.36 / Avg: 1107.64 / Max: 1111.38)

OpenVINO 2024.0 - Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, fewer is better)
  a: 10.86 (MIN: 7.32 / MAX: 25.15)
  b: 10.83 (MIN: 6.67 / MAX: 24.71)
  c: 10.82 (MIN: 5.79 / MAX: 27.55)
  d: 10.86 (MIN: 6.39 / MAX: 25.76)
  c: SE +/- 0.02, N = 3 (Min: 10.79 / Avg: 10.82 / Max: 10.84)

OpenVINO 2024.0 - Model: Weld Porosity Detection FP16 - Device: CPU (FPS, more is better)
  a: 718.20   b: 719.93   c: 715.28   d: 716.84
  c: SE +/- 0.61, N = 3 (Min: 714.29 / Avg: 715.28 / Max: 716.4)

OpenVINO 2024.0 - Model: Weld Porosity Detection FP16 - Device: CPU (ms, fewer is better)
  a: 16.69 (MIN: 13.14 / MAX: 33.56)
  b: 16.65 (MIN: 10 / MAX: 33.75)
  c: 16.76 (MIN: 8.74 / MAX: 34.17)
  d: 16.72 (MIN: 9.04 / MAX: 25.27)
  c: SE +/- 0.01, N = 3 (Min: 16.73 / Avg: 16.76 / Max: 16.78)

OpenVINO 2024.0 - Model: Face Detection Retail FP16-INT8 - Device: CPU (FPS, more is better)
  a: 3224.64   b: 3241.35   c: 3228.19   d: 3230.25
  c: SE +/- 3.42, N = 3 (Min: 3221.38 / Avg: 3228.19 / Max: 3232.04)

OpenVINO 2024.0 - Model: Face Detection Retail FP16-INT8 - Device: CPU (ms, fewer is better)
  a: 3.71 (MIN: 2.14 / MAX: 26.73)
  b: 3.69 (MIN: 2.28 / MAX: 15.95)
  c: 3.71 (MIN: 2.23 / MAX: 18)
  d: 3.71 (MIN: 2.38 / MAX: 15.94)
  c: SE +/- 0.01, N = 3 (Min: 3.7 / Avg: 3.71 / Max: 3.72)

OpenVINO 2024.0 - Model: Road Segmentation ADAS FP16-INT8 - Device: CPU (FPS, more is better)
  a: 428.43   b: 428.18   c: 427.02   d: 428.55
  c: SE +/- 0.36, N = 3 (Min: 426.35 / Avg: 427.02 / Max: 427.57)

OpenVINO 2024.0 - Model: Road Segmentation ADAS FP16-INT8 - Device: CPU (ms, fewer is better)
  a: 27.99 (MIN: 18.95 / MAX: 38.76)
  b: 28.00 (MIN: 18.81 / MAX: 38.86)
  c: 28.08 (MIN: 14.29 / MAX: 41.57)
  d: 27.98 (MIN: 14.99 / MAX: 46.26)
  c: SE +/- 0.02, N = 3 (Min: 28.04 / Avg: 28.08 / Max: 28.12)
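A useful derived view of these tables is the INT8 quantization speedup: dividing each FP16-INT8 throughput by its FP16 counterpart (run a values from the entries above) shows gains ranging from about 1.5x to 2.5x depending on the model. A minimal sketch:

```python
# Run "a" throughput (FPS) from the tables above, per model.
fp16 = {"Face Detection": 7.60, "Vehicle Detection": 600.94,
        "Face Detection Retail": 2192.09, "Road Segmentation ADAS": 170.10}
int8 = {"Face Detection": 16.66, "Vehicle Detection": 1104.03,
        "Face Detection Retail": 3224.64, "Road Segmentation ADAS": 428.43}
for model in fp16:
    # Speedup = FP16-INT8 FPS / FP16 FPS.
    print(f"{model}: {int8[model] / fp16[model]:.2f}x")
```

The spread (about 1.47x for Face Detection Retail versus about 2.52x for Road Segmentation ADAS) reflects how much of each network's runtime is in INT8-friendly layers.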

OpenVINO 2024.0 - Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, more is better)
  a: 88.13   b: 88.33   c: 88.05   d: 88.14
  SE +/- 0.03, N = 3