ddddx

AMD Ryzen Threadripper PRO 5965WX 24-Cores testing with an ASUS Pro WS WRX80E-SAGE SE WIFI (1201 BIOS) and ASUS NVIDIA NV106 2GB on Ubuntu 23.10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2403218-NE-DDDDX513530&rdt&rro.
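For reference, a local side-by-side comparison against this result can usually be generated with the Phoronix Test Suite by passing the result ID from the URL above, assuming phoronix-test-suite is installed and the public result is still available:

phoronix-test-suite benchmark 2403218-NE-DDDDX513530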

ddddx - System Details (configurations a, b, c, d; identical test system)

Processor: AMD Ryzen Threadripper PRO 5965WX 24-Cores @ 3.80GHz (24 Cores / 48 Threads)
Motherboard: ASUS Pro WS WRX80E-SAGE SE WIFI (1201 BIOS)
Chipset: AMD Starship/Matisse
Memory: 8 x 16GB DDR4-2133MT/s Corsair CMK32GX4M2E3200C16
Disk: 2048GB SOLIDIGM SSDPFKKW020X7
Graphics: ASUS NVIDIA NV106 2GB
Audio: AMD Starship/Matisse
Monitor: VA2431
Network: 2 x Intel X550 + Intel Wi-Fi 6 AX200
OS: Ubuntu 23.10
Kernel: 6.5.0-15-generic (x86_64)
Desktop: GNOME Shell 45.0
Display Server: X Server + Wayland
Display Driver: nouveau
OpenGL: 4.3 Mesa 23.2.1-1ubuntu3
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa008205
Python Details: Python 3.11.6
Security Details: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of safe RET no microcode + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

ddddx - Result Summary (OpenBenchmarking.org): side-by-side data table for configurations a, b, c, and d covering JPEG-XL libjxl encode and decode, srsRAN Project, SVT-AV1, VVenC, OSPRay, Stockfish, Timed Linux Kernel Compilation, Parallel BZIP2 Compression, Primesieve, oneDNN, OSPRay Studio, Neural Magic DeepSparse, Google Draco, OpenVINO, RocksDB, WavPack audio encoding, BRL-CAD, and Chaos Group V-RAY. The per-test values are presented in the individual result sections below.

JPEG-XL libjxl

Input: PNG - Quality: 80

OpenBenchmarking.org. MP/s, More Is Better. JPEG-XL libjxl 0.10.1, Input: PNG - Quality: 80. d: 42.18, c: 39.90, b: 43.25, a: 44.75. SE +/- 0.29, N = 3. 1. (CXX) g++ options: -fno-rtti -O3 -fPIE -pie -lm

JPEG-XL libjxl

Input: PNG - Quality: 90

OpenBenchmarking.org. MP/s, More Is Better. JPEG-XL libjxl 0.10.1, Input: PNG - Quality: 90. d: 38.27, c: 38.51, b: 37.44, a: 39.40. SE +/- 0.27, N = 15. 1. (CXX) g++ options: -fno-rtti -O3 -fPIE -pie -lm

JPEG-XL libjxl

Input: JPEG - Quality: 80

OpenBenchmarking.org. MP/s, More Is Better. JPEG-XL libjxl 0.10.1, Input: JPEG - Quality: 80. d: 43.50, c: 42.35, b: 42.66, a: 46.58. SE +/- 0.32, N = 3. 1. (CXX) g++ options: -fno-rtti -O3 -fPIE -pie -lm

JPEG-XL libjxl

Input: JPEG - Quality: 90

OpenBenchmarking.org. MP/s, More Is Better. JPEG-XL libjxl 0.10.1, Input: JPEG - Quality: 90. d: 39.60, c: 40.76, b: 39.89, a: 42.50. SE +/- 0.39, N = 15. 1. (CXX) g++ options: -fno-rtti -O3 -fPIE -pie -lm

JPEG-XL libjxl

Input: PNG - Quality: 100

OpenBenchmarking.org. MP/s, More Is Better. JPEG-XL libjxl 0.10.1, Input: PNG - Quality: 100. d: 27.34, c: 27.49, b: 27.57, a: 27.65. SE +/- 0.01, N = 3. 1. (CXX) g++ options: -fno-rtti -O3 -fPIE -pie -lm

JPEG-XL libjxl

Input: JPEG - Quality: 100

OpenBenchmarking.org. MP/s, More Is Better. JPEG-XL libjxl 0.10.1, Input: JPEG - Quality: 100. d: 27.32, c: 27.32, b: 27.32, a: 27.53. SE +/- 0.03, N = 3. 1. (CXX) g++ options: -fno-rtti -O3 -fPIE -pie -lm

JPEG-XL Decoding libjxl

CPU Threads: 1

OpenBenchmarking.org. MP/s, More Is Better. JPEG-XL Decoding libjxl 0.10.1, CPU Threads: 1. d: 62.82, c: 63.04, b: 63.30, a: 64.03. SE +/- 0.12, N = 3.

JPEG-XL Decoding libjxl

CPU Threads: All

OpenBenchmarking.org. MP/s, More Is Better. JPEG-XL Decoding libjxl 0.10.1, CPU Threads: All. d: 439.61, c: 470.16, b: 482.58, a: 483.42. SE +/- 1.28, N = 3.

srsRAN Project

Test: PDSCH Processor Benchmark, Throughput Total

OpenBenchmarking.org. Mbps, More Is Better. srsRAN Project 23.10.1-20240219, Test: PDSCH Processor Benchmark, Throughput Total. d: 11654.9, c: 11723.5, b: 12063.4, a: 11710.3. SE +/- 27.21, N = 3. 1. (CXX) g++ options: -march=native -mavx2 -mavx -msse4.1 -mfma -O3 -fno-trapping-math -fno-math-errno -ldl

srsRAN Project

Test: PUSCH Processor Benchmark, Throughput Total

OpenBenchmarking.org. Mbps, More Is Better. srsRAN Project 23.10.1-20240219, Test: PUSCH Processor Benchmark, Throughput Total. d: 1857.0 (MIN 1145.4), c: 1886.9 (MIN 1135 / MAX 1889.2), b: 1889.2 (MIN 1137.6), a: 1888.2 (MIN 1136.3). SE +/- 1.62, N = 3. 1. (CXX) g++ options: -march=native -mavx2 -mavx -msse4.1 -mfma -O3 -fno-trapping-math -fno-math-errno -ldl

srsRAN Project

Test: PDSCH Processor Benchmark, Throughput Thread

OpenBenchmarking.org. Mbps, More Is Better. srsRAN Project 23.10.1-20240219, Test: PDSCH Processor Benchmark, Throughput Thread. d: 607.8, c: 605.5, b: 600.2, a: 603.7. SE +/- 1.71, N = 3. 1. (CXX) g++ options: -march=native -mavx2 -mavx -msse4.1 -mfma -O3 -fno-trapping-math -fno-math-errno -ldl

srsRAN Project

Test: PUSCH Processor Benchmark, Throughput Thread

OpenBenchmarking.org. Mbps, More Is Better. srsRAN Project 23.10.1-20240219, Test: PUSCH Processor Benchmark, Throughput Thread. d: 177.9 (MIN 110.9), c: 178.6 (MIN 112.6 / MAX 179.4), b: 178.7 (MIN 112.2), a: 177.6 (MIN 113.3). SE +/- 0.70, N = 3. 1. (CXX) g++ options: -march=native -mavx2 -mavx -msse4.1 -mfma -O3 -fno-trapping-math -fno-math-errno -ldl

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 4K

OpenBenchmarking.org. Frames Per Second, More Is Better. SVT-AV1 2.0, Encoder Mode: Preset 4 - Input: Bosphorus 4K. d: 6.660, c: 6.711, b: 6.787, a: 6.647. SE +/- 0.005, N = 3. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 4K

OpenBenchmarking.org. Frames Per Second, More Is Better. SVT-AV1 2.0, Encoder Mode: Preset 8 - Input: Bosphorus 4K. d: 62.93, c: 62.50, b: 63.89, a: 62.95. SE +/- 0.26, N = 3. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 4K

OpenBenchmarking.org. Frames Per Second, More Is Better. SVT-AV1 2.0, Encoder Mode: Preset 12 - Input: Bosphorus 4K. d: 151.64, c: 150.13, b: 150.81, a: 154.17. SE +/- 0.99, N = 3. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 4K

OpenBenchmarking.org. Frames Per Second, More Is Better. SVT-AV1 2.0, Encoder Mode: Preset 13 - Input: Bosphorus 4K. d: 152.47, c: 150.52, b: 145.97, a: 151.96. SE +/- 1.34, N = 3. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 1080p

OpenBenchmarking.org. Frames Per Second, More Is Better. SVT-AV1 2.0, Encoder Mode: Preset 4 - Input: Bosphorus 1080p. d: 19.32, c: 19.17, b: 18.92, a: 19.21. SE +/- 0.04, N = 3. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 1080p

OpenBenchmarking.org. Frames Per Second, More Is Better. SVT-AV1 2.0, Encoder Mode: Preset 8 - Input: Bosphorus 1080p. d: 133.78, c: 133.27, b: 131.93, a: 131.72. SE +/- 0.72, N = 3. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 1080p

OpenBenchmarking.org. Frames Per Second, More Is Better. SVT-AV1 2.0, Encoder Mode: Preset 12 - Input: Bosphorus 1080p. d: 488.52, c: 487.66, b: 499.14, a: 489.43. SE +/- 6.37, N = 3. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 1080p

OpenBenchmarking.org. Frames Per Second, More Is Better. SVT-AV1 2.0, Encoder Mode: Preset 13 - Input: Bosphorus 1080p. d: 587.97, c: 606.36, b: 602.97, a: 587.48. SE +/- 4.87, N = 3. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

VVenC

Video Input: Bosphorus 4K - Video Preset: Fast

OpenBenchmarking.org. Frames Per Second, More Is Better. VVenC 1.11, Video Input: Bosphorus 4K - Video Preset: Fast. d: 7.062, c: 7.066, b: 7.075, a: 7.060. SE +/- 0.031, N = 3. 1. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

VVenC

Video Input: Bosphorus 4K - Video Preset: Faster

OpenBenchmarking.org. Frames Per Second, More Is Better. VVenC 1.11, Video Input: Bosphorus 4K - Video Preset: Faster. d: 14.86, c: 14.78, b: 14.82, a: 14.57. SE +/- 0.03, N = 3. 1. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

VVenC

Video Input: Bosphorus 1080p - Video Preset: Fast

OpenBenchmarking.org. Frames Per Second, More Is Better. VVenC 1.11, Video Input: Bosphorus 1080p - Video Preset: Fast. d: 19.53, c: 19.46, b: 19.49, a: 19.62. SE +/- 0.03, N = 3. 1. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

VVenC

Video Input: Bosphorus 1080p - Video Preset: Faster

OpenBenchmarking.org. Frames Per Second, More Is Better. VVenC 1.11, Video Input: Bosphorus 1080p - Video Preset: Faster. d: 41.00, c: 40.92, b: 40.46, a: 41.28. SE +/- 0.15, N = 3. 1. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

OSPRay

Benchmark: particle_volume/ao/real_time

OpenBenchmarking.org. Items Per Second, More Is Better. OSPRay 3.1, Benchmark: particle_volume/ao/real_time. d: 10.30, c: 10.28, b: 10.25, a: 10.28. SE +/- 0.01, N = 3.

OSPRay

Benchmark: particle_volume/scivis/real_time

OpenBenchmarking.org. Items Per Second, More Is Better. OSPRay 3.1, Benchmark: particle_volume/scivis/real_time. d: 10.18, c: 10.15, b: 10.14, a: 10.14. SE +/- 0.01, N = 3.

OSPRay

Benchmark: particle_volume/pathtracer/real_time

OpenBenchmarking.org. Items Per Second, More Is Better. OSPRay 3.1, Benchmark: particle_volume/pathtracer/real_time. d: 155.05, c: 155.83, b: 155.37, a: 156.40. SE +/- 0.22, N = 3.

OSPRay

Benchmark: gravity_spheres_volume/dim_512/ao/real_time

OpenBenchmarking.org. Items Per Second, More Is Better. OSPRay 3.1, Benchmark: gravity_spheres_volume/dim_512/ao/real_time. d: 4.89928, c: 4.88608, b: 4.88053, a: 4.90257. SE +/- 0.01074, N = 3.

OSPRay

Benchmark: gravity_spheres_volume/dim_512/scivis/real_time

OpenBenchmarking.org. Items Per Second, More Is Better. OSPRay 3.1, Benchmark: gravity_spheres_volume/dim_512/scivis/real_time. d: 4.58081, c: 4.59266, b: 4.57706, a: 4.59882. SE +/- 0.00963, N = 3.

OSPRay

Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time

OpenBenchmarking.org. Items Per Second, More Is Better. OSPRay 3.1, Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time. d: 7.45220, c: 7.43821, b: 7.44837, a: 7.46614. SE +/- 0.00473, N = 3.

Stockfish

Chess Benchmark

OpenBenchmarking.org. Nodes Per Second, More Is Better. Stockfish 16.1, Chess Benchmark. d: 53082452, c: 55237573, b: 61008270, a: 52607528. SE +/- 1129127.95, N = 15. 1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -funroll-loops -msse -msse3 -mpopcnt -mavx2 -mbmi -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto-partition=one -flto=jobserver

Timed Linux Kernel Compilation

Build: defconfig

OpenBenchmarking.org. Seconds, Fewer Is Better. Timed Linux Kernel Compilation 6.8, Build: defconfig. d: 54.16, c: 52.81, b: 54.21, a: 54.13. SE +/- 0.59, N = 3.

Timed Linux Kernel Compilation

Build: allmodconfig

OpenBenchmarking.org. Seconds, Fewer Is Better. Timed Linux Kernel Compilation 6.8, Build: allmodconfig. d: 597.83, c: 596.51, b: 597.90, a: 597.03. SE +/- 0.92, N = 3.

Parallel BZIP2 Compression

FreeBSD-13.0-RELEASE-amd64-memstick.img Compression

OpenBenchmarking.org. Seconds, Fewer Is Better. Parallel BZIP2 Compression 1.1.13, FreeBSD-13.0-RELEASE-amd64-memstick.img Compression. d: 3.221993, c: 3.285082, b: 3.314749, a: 3.230305. SE +/- 0.043942, N = 12. 1. (CXX) g++ options: -O2 -pthread -lbz2 -lpthread

Primesieve

Length: 1e12

OpenBenchmarking.org. Seconds, Fewer Is Better. Primesieve 12.1, Length: 1e12. d: 6.123, c: 6.113, b: 6.081, a: 6.244. SE +/- 0.056, N = 3. 1. (CXX) g++ options: -O3

Primesieve

Length: 1e13

OpenBenchmarking.org. Seconds, Fewer Is Better. Primesieve 12.1, Length: 1e13. d: 77.06, c: 77.21, b: 76.92, a: 77.15. SE +/- 0.05, N = 3. 1. (CXX) g++ options: -O3

oneDNN

Harness: IP Shapes 1D - Engine: CPU

OpenBenchmarking.org. ms, Fewer Is Better. oneDNN 3.4, Harness: IP Shapes 1D - Engine: CPU. d: 1.31333 (MIN 1.27), c: 1.32743 (MIN 1.26), b: 1.31101 (MIN 1.27), a: 1.31474 (MIN 1.27). SE +/- 0.00673, N = 3. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 3D - Engine: CPU

OpenBenchmarking.org. ms, Fewer Is Better. oneDNN 3.4, Harness: IP Shapes 3D - Engine: CPU. d: 3.53851 (MIN 3.49), c: 3.53323 (MIN 3.47), b: 3.52584 (MIN 3.48), a: 3.52569 (MIN 3.48). SE +/- 0.00386, N = 3. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Engine: CPU

OpenBenchmarking.org. ms, Fewer Is Better. oneDNN 3.4, Harness: Convolution Batch Shapes Auto - Engine: CPU. d: 2.73352 (MIN 2.67), c: 2.73192 (MIN 2.66), b: 2.71135 (MIN 2.65), a: 2.73804 (MIN 2.66). SE +/- 0.00421, N = 3. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Engine: CPU

OpenBenchmarking.org. ms, Fewer Is Better. oneDNN 3.4, Harness: Deconvolution Batch shapes_1d - Engine: CPU. d: 5.03641 (MIN 3.91), c: 5.37367 (MIN 3.84), b: 5.33929 (MIN 3.86), a: 5.51082 (MIN 3.9). SE +/- 0.03672, N = 3. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Engine: CPU

OpenBenchmarking.org. ms, Fewer Is Better. oneDNN 3.4, Harness: Deconvolution Batch shapes_3d - Engine: CPU. d: 2.34346 (MIN 2.25), c: 2.36138 (MIN 2.25), b: 2.34602 (MIN 2.28), a: 2.36511 (MIN 2.28). SE +/- 0.00819, N = 3. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Training - Engine: CPU

OpenBenchmarking.org. ms, Fewer Is Better. oneDNN 3.4, Harness: Recurrent Neural Network Training - Engine: CPU. d: 1254.43 (MIN 1249.27), c: 1256.43 (MIN 1249.44), b: 1255.67 (MIN 1250.99), a: 1254.68 (MIN 1250.31). SE +/- 0.77, N = 3. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Engine: CPU

OpenBenchmarking.org. ms, Fewer Is Better. oneDNN 3.4, Harness: Recurrent Neural Network Inference - Engine: CPU. d: 642.19 (MIN 633.4), c: 637.17 (MIN 632.47), b: 636.77 (MIN 632.58), a: 638.13 (MIN 634.67). SE +/- 0.39, N = 3. 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OSPRay Studio

Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OSPRay Studio 1.0, Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU. d: 4528, c: 4535, b: 4521, a: 4528. SE +/- 4.41, N = 3.

OSPRay Studio

Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OSPRay Studio 1.0, Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU. d: 4618, c: 4615, b: 4609, a: 4616. SE +/- 7.22, N = 3.

OSPRay Studio

Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OSPRay Studio 1.0, Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU. d: 5331, c: 5341, b: 5342, a: 5330. SE +/- 4.04, N = 3.

OSPRay Studio

Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OSPRay Studio 1.0, Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU. d: 77073, c: 77511, b: 76995, a: 78061. SE +/- 155.86, N = 3.

OSPRay Studio

Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OSPRay Studio 1.0, Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU. d: 150694, c: 150381, b: 150040, a: 150375. SE +/- 64.70, N = 3.

OSPRay Studio

Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OSPRay Studio 1.0, Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU. d: 78354, c: 78919, b: 78304, a: 78788. SE +/- 110.73, N = 3.

OSPRay Studio

Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OSPRay Studio 1.0, Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU. d: 151620, c: 152510, b: 153552, a: 152653. SE +/- 88.33, N = 3.

OSPRay Studio

Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OSPRay Studio 1.0, Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU. d: 90380, c: 90083, b: 90131, a: 90362. SE +/- 228.07, N = 3.

OSPRay Studio

Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OSPRay Studio 1.0, Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU. d: 175129, c: 175767, b: 175496, a: 177445. SE +/- 268.88, N = 3.

OSPRay Studio

Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OSPRay Studio 1.0, Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU. d: 1134, c: 1138, b: 1138, a: 1138. SE +/- 1.00, N = 3.

OSPRay Studio

Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OSPRay Studio 1.0, Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU. d: 1153, c: 1152, b: 1150, a: 1158. SE +/- 0.67, N = 3.

OSPRay Studio

Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OSPRay Studio 1.0, Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU. d: 1330, c: 1333, b: 1338, a: 1336. SE +/- 4.41, N = 3.

OSPRay Studio

Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OSPRay Studio 1.0, Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU. d: 18177, c: 18229, b: 18181, a: 18203. SE +/- 39.50, N = 3.

OSPRay Studio

Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OSPRay Studio 1.0, Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU. d: 41111, c: 41189, b: 41125, a: 41167. SE +/- 58.89, N = 3.

OSPRay Studio

Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OSPRay Studio 1.0, Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU. d: 18488, c: 18579, b: 18471, a: 18543. SE +/- 16.33, N = 3.

OSPRay Studio

Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OSPRay Studio 1.0, Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU. d: 41959, c: 42050, b: 41598, a: 41839. SE +/- 250.02, N = 3.

OSPRay Studio

Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OSPRay Studio 1.0, Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU. d: 21385, c: 21409, b: 21398, a: 21449. SE +/- 35.14, N = 3.

OSPRay Studio

Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OSPRay Studio 1.0, Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU. d: 47597, c: 47686, b: 47827, a: 47495. SE +/- 129.36, N = 3.

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org. items/sec, More Is Better. Neural Magic DeepSparse 1.7, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream. d: 26.69, c: 26.62, b: 26.60, a: 26.69. SE +/- 0.04, N = 3.

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org. ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.7, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream. d: 448.93, c: 449.12, b: 447.65, a: 448.71. SE +/- 0.38, N = 3.

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

OpenBenchmarking.org. items/sec, More Is Better. Neural Magic DeepSparse 1.7, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream. d: 18.40, c: 18.40, b: 18.51, a: 18.37. SE +/- 0.02, N = 3.

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

OpenBenchmarking.org. ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.7, Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream. d: 54.34, c: 54.33, b: 54.01, a: 54.42. SE +/- 0.05, N = 3.

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org. items/sec, More Is Better. Neural Magic DeepSparse 1.7, Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream. d: 685.82, c: 684.02, b: 685.45, a: 684.57. SE +/- 0.35, N = 3.

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org. ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.7, Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream. d: 17.48, c: 17.52, b: 17.49, a: 17.51. SE +/- 0.01, N = 3.

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org. items/sec, More Is Better. Neural Magic DeepSparse 1.7, Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream. d: 194.38, c: 192.98, b: 191.76, a: 194.29. SE +/- 0.92, N = 3.

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org. ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.7, Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream. d: 5.1408, c: 5.1787, b: 5.2115, a: 5.1435. SE +/- 0.0247, N = 3.

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org. items/sec, More Is Better. Neural Magic DeepSparse 1.7, Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream. d: 306.94, c: 306.92, b: 307.02, a: 307.00. SE +/- 0.00, N = 3.

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org. ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.7, Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream. d: 39.06, c: 39.07, b: 39.05, a: 39.05. SE +/- 0.00, N = 3.

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream

OpenBenchmarking.org. items/sec, More Is Better. Neural Magic DeepSparse 1.7, Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream. d: 156.04, c: 155.68, b: 155.34, a: 156.21. SE +/- 0.37, N = 3.

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream

OpenBenchmarking.org. ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.7, Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream. d: 6.4001, c: 6.4155, b: 6.4286, a: 6.3927. SE +/- 0.0148, N = 3.

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org. items/sec, More Is Better. Neural Magic DeepSparse 1.7, Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream. d: 2014.58, c: 1998.61, b: 2032.96, a: 1994.39. SE +/- 6.40, N = 3.

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org. ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.7, Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream. d: 5.9417, c: 5.9888, b: 5.8898, a: 6.0030. SE +/- 0.0186, N = 3.

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org. items/sec, More Is Better. Neural Magic DeepSparse 1.7, Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream. d: 794.39, c: 799.42, b: 800.05, a: 798.47. SE +/- 1.90, N = 3.

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org. ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.7, Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream. d: 1.2559, c: 1.2478, b: 1.2468, a: 1.2491. SE +/- 0.0030, N = 3.

Neural Magic DeepSparse

Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org. items/sec, More Is Better. Neural Magic DeepSparse 1.7, Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream. d: 1.8701, c: 1.8705, b: 1.8677, a: 1.8772. SE +/- 0.0031, N = 3.

Neural Magic DeepSparse

Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org. ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.7, Model: Llama2 Chat 7b Quantized - Scenario: Asynchronous Multi-Stream. d: 5897.28, c: 5896.59, b: 5904.28, a: 5873.27. SE +/- 9.69, N = 3.

Neural Magic DeepSparse

Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream

OpenBenchmarking.org. items/sec, More Is Better. Neural Magic DeepSparse 1.7, Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream. d: 6.0373, c: 6.0425, b: 6.0393, a: 6.0448. SE +/- 0.0038, N = 3.

Neural Magic DeepSparse

Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream

OpenBenchmarking.org. ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.7, Model: Llama2 Chat 7b Quantized - Scenario: Synchronous Single-Stream. d: 165.61, c: 165.47, b: 165.55, a: 165.40. SE +/- 0.11, N = 3.

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org. items/sec, More Is Better. Neural Magic DeepSparse 1.7, Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream. d: 306.90, c: 306.95, b: 307.09, a: 306.97. SE +/- 0.03, N = 3.

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org. ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.7, Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream. d: 39.07, c: 39.06, b: 39.03, a: 39.05. SE +/- 0.00, N = 3.

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

OpenBenchmarking.org. items/sec, More Is Better. Neural Magic DeepSparse 1.7, Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream. d: 155.90, c: 155.79, b: 155.31, a: 156.43. SE +/- 0.15, N = 3.

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

OpenBenchmarking.org. ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.7, Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream. d: 6.4059, c: 6.4103, b: 6.4293, a: 6.3846. SE +/- 0.0061, N = 3.

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org. items/sec, More Is Better. Neural Magic DeepSparse 1.7, Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream. d: 150.73, c: 151.26, b: 151.98, a: 150.83. SE +/- 0.31, N = 3.

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org. ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.7, Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream. d: 79.51, c: 79.25, b: 78.82, a: 79.45. SE +/- 0.16, N = 3.

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org. items/sec, More Is Better. Neural Magic DeepSparse 1.7, Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream. d: 99.05, c: 98.80, b: 99.22, a: 99.33. SE +/- 0.05, N = 3.

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org. ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.7, Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream. d: 10.09, c: 10.12, b: 10.07, a: 10.06. SE +/- 0.01, N = 3.

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org. items/sec, More Is Better. Neural Magic DeepSparse 1.7, Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream. d: 223.55, c: 223.00, b: 224.20, a: 224.16. SE +/- 0.20, N = 3.

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org. ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.7, Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream. d: 53.63, c: 53.76, b: 53.47, a: 53.48. SE +/- 0.05, N = 3.

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

OpenBenchmarking.org. items/sec, More Is Better. Neural Magic DeepSparse 1.7, Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream. d: 111.45, c: 112.00, b: 111.84, a: 112.33. SE +/- 0.07, N = 3.

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

OpenBenchmarking.org. ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.7, Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream. d: 8.9625, c: 8.9189, b: 8.9317, a: 8.8921. SE +/- 0.0057, N = 3.

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org. items/sec, More Is Better. Neural Magic DeepSparse 1.7, Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream. d: 30.49, c: 30.39, b: 30.49, a: 30.47. SE +/- 0.09, N = 3.

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org. ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.7, Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream. d: 393.39, c: 393.82, b: 393.08, a: 393.66. SE +/- 0.32, N = 3.

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

OpenBenchmarking.org. items/sec, More Is Better. Neural Magic DeepSparse 1.7, Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream. d: 21.72, c: 21.69, b: 21.75, a: 21.70. SE +/- 0.01, N = 3.

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream

OpenBenchmarking.org. ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.7, Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream. d: 46.02, c: 46.08, b: 45.96, a: 46.06. SE +/- 0.03, N = 3.

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org. items/sec, More Is Better. Neural Magic DeepSparse 1.7, Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream. d: 334.72, c: 334.75, b: 337.45, a: 334.39. SE +/- 0.53, N = 3.

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org. ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.7, Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream. d: 35.81, c: 35.81, b: 35.54, a: 35.85. SE +/- 0.06, N = 3.

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org. items/sec, More Is Better. Neural Magic DeepSparse 1.7, Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream. d: 75.70, c: 76.06, b: 76.12, a: 75.88. SE +/- 0.14, N = 3.

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org. ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.7, Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream. d: 13.20, c: 13.14, b: 13.13, a: 13.17. SE +/- 0.02, N = 3.

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org. items/sec, More Is Better. Neural Magic DeepSparse 1.7, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream. d: 26.81, c: 26.70, b: 26.86, a: 26.74. SE +/- 0.03, N = 3.

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org. ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.7, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream. d: 447.17, c: 447.58, b: 446.24, a: 446.84. SE +/- 0.66, N = 3.

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org. items/sec, More Is Better. Neural Magic DeepSparse 1.7, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream. d: 18.48, c: 18.49, b: 18.52, a: 18.49. SE +/- 0.01, N = 3.

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org. ms/batch, Fewer Is Better. Neural Magic DeepSparse 1.7, Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream. d: 54.10, c: 54.08, b: 53.97, a: 54.07. SE +/- 0.02, N = 3.

Google Draco

Model: Lion

OpenBenchmarking.org. ms, Fewer Is Better. Google Draco 1.5.6, Model: Lion. d: 5347, c: 5363, b: 5365, a: 5328. SE +/- 15.72, N = 3. 1. (CXX) g++ options: -O3

Google Draco

Model: Church Facade

OpenBenchmarking.org. ms, Fewer Is Better. Google Draco 1.5.6, Model: Church Facade. d: 7096, c: 7092, b: 7034, a: 7023. SE +/- 9.91, N = 3. 1. (CXX) g++ options: -O3

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenBenchmarking.org. FPS, More Is Better. OpenVINO 2024.0, Model: Face Detection FP16 - Device: CPU. d: 7.64, c: 7.59, b: 7.68, a: 7.60. SE +/- 0.01, N = 3. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OpenVINO 2024.0, Model: Face Detection FP16 - Device: CPU. d: 1553.54 (MIN 1365.71 / MAX 1635.16), c: 1563.15 (MIN 1369.79 / MAX 1663.37), b: 1547.31 (MIN 1403.59 / MAX 1636.72), a: 1558.81 (MIN 1416.22 / MAX 1644.19). SE +/- 0.80, N = 3. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenBenchmarking.org. FPS, More Is Better. OpenVINO 2024.0, Model: Person Detection FP16 - Device: CPU. d: 69.61, c: 69.83, b: 69.95, a: 70.01. SE +/- 0.04, N = 3. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OpenVINO 2024.0, Model: Person Detection FP16 - Device: CPU. d: 172.18 (MIN 138.53 / MAX 224.8), c: 171.67 (MIN 132.19 / MAX 231.16), b: 171.43 (MIN 140.26 / MAX 224.54), a: 171.22 (MIN 130.32 / MAX 233.99). SE +/- 0.11, N = 3. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenBenchmarking.org. FPS, More Is Better. OpenVINO 2024.0, Model: Person Detection FP32 - Device: CPU. d: 69.86, c: 69.99, b: 70.01, a: 69.73. SE +/- 0.06, N = 3. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OpenVINO 2024.0, Model: Person Detection FP32 - Device: CPU. d: 171.54 (MIN 135.7 / MAX 226.04), c: 171.29 (MIN 134.25 / MAX 225.4), b: 171.16 (MIN 129.54 / MAX 225.82), a: 171.92 (MIN 129.51 / MAX 227.57). SE +/- 0.12, N = 3. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenBenchmarking.org. FPS, More Is Better. OpenVINO 2024.0, Model: Vehicle Detection FP16 - Device: CPU. d: 603.96, c: 592.96, b: 598.03, a: 600.94. SE +/- 0.39, N = 3. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OpenVINO 2024.0, Model: Vehicle Detection FP16 - Device: CPU. d: 19.84 (MIN 11.38 / MAX 34.24), c: 20.21 (MIN 9.28 / MAX 49.09), b: 20.04 (MIN 12.81 / MAX 37.51), a: 19.94 (MIN 8.87 / MAX 42.42). SE +/- 0.01, N = 3. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org. FPS, More Is Better. OpenVINO 2024.0, Model: Face Detection FP16-INT8 - Device: CPU. d: 16.74, c: 16.66, b: 16.73, a: 16.66. SE +/- 0.01, N = 3. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OpenVINO 2024.0, Model: Face Detection FP16-INT8 - Device: CPU. d: 713.64 (MIN 667.56 / MAX 731.59), c: 716.16 (MIN 661.6 / MAX 732.11), b: 713.72 (MIN 658.61 / MAX 738.08), a: 715.22 (MIN 664.62 / MAX 729.04). SE +/- 0.26, N = 3. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Face Detection Retail FP16 - Device: CPU

OpenBenchmarking.org. FPS, More Is Better. OpenVINO 2024.0, Model: Face Detection Retail FP16 - Device: CPU. d: 2198.11, c: 2197.98, b: 2197.50, a: 2192.09. SE +/- 4.26, N = 3. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Face Detection Retail FP16 - Device: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OpenVINO 2024.0, Model: Face Detection Retail FP16 - Device: CPU. d: 5.45 (MIN 2.8 / MAX 28.34), c: 5.45 (MIN 2.8 / MAX 22.36), b: 5.45 (MIN 2.89 / MAX 21.85), a: 5.46 (MIN 2.82 / MAX 21.64). SE +/- 0.01, N = 3. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Road Segmentation ADAS FP16 - Device: CPU

OpenBenchmarking.org. FPS, More Is Better. OpenVINO 2024.0, Model: Road Segmentation ADAS FP16 - Device: CPU. d: 169.41, c: 169.65, b: 170.68, a: 170.10. SE +/- 0.04, N = 3. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Road Segmentation ADAS FP16 - Device: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OpenVINO 2024.0, Model: Road Segmentation ADAS FP16 - Device: CPU. d: 70.76 (MIN 41.53 / MAX 122.26), c: 70.66 (MIN 24.84 / MAX 127.05), b: 70.23 (MIN 43.23 / MAX 122.75), a: 70.48 (MIN 43.7 / MAX 128.66). SE +/- 0.02, N = 3. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org. FPS, More Is Better. OpenVINO 2024.0, Model: Vehicle Detection FP16-INT8 - Device: CPU. d: 1103.60, c: 1107.64, b: 1106.44, a: 1104.03. SE +/- 1.89, N = 3. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OpenVINO 2024.0, Model: Vehicle Detection FP16-INT8 - Device: CPU. d: 10.86 (MIN 6.39 / MAX 25.76), c: 10.82 (MIN 5.79 / MAX 27.55), b: 10.83 (MIN 6.67 / MAX 24.71), a: 10.86 (MIN 7.32 / MAX 25.15). SE +/- 0.02, N = 3. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenBenchmarking.org. FPS, More Is Better. OpenVINO 2024.0, Model: Weld Porosity Detection FP16 - Device: CPU. d: 716.84, c: 715.28, b: 719.93, a: 718.20. SE +/- 0.61, N = 3. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OpenVINO 2024.0, Model: Weld Porosity Detection FP16 - Device: CPU. d: 16.72 (MIN 9.04 / MAX 25.27), c: 16.76 (MIN 8.74 / MAX 34.17), b: 16.65 (MIN 10 / MAX 33.75), a: 16.69 (MIN 13.14 / MAX 33.56). SE +/- 0.01, N = 3. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

OpenBenchmarking.org. FPS, More Is Better. OpenVINO 2024.0, Model: Face Detection Retail FP16-INT8 - Device: CPU. d: 3230.25, c: 3228.19, b: 3241.35, a: 3224.64. SE +/- 3.42, N = 3. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OpenVINO 2024.0, Model: Face Detection Retail FP16-INT8 - Device: CPU. d: 3.71 (MIN 2.38 / MAX 15.94), c: 3.71 (MIN 2.23 / MAX 18), b: 3.69 (MIN 2.28 / MAX 15.95), a: 3.71 (MIN 2.14 / MAX 26.73). SE +/- 0.01, N = 3. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

OpenBenchmarking.org. FPS, More Is Better. OpenVINO 2024.0, Model: Road Segmentation ADAS FP16-INT8 - Device: CPU. d: 428.55, c: 427.02, b: 428.18, a: 428.43. SE +/- 0.36, N = 3. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OpenVINO 2024.0, Model: Road Segmentation ADAS FP16-INT8 - Device: CPU. d: 27.98 (MIN 14.99 / MAX 46.26), c: 28.08 (MIN 14.29 / MAX 41.57), b: 28.00 (MIN 18.81 / MAX 38.86), a: 27.99 (MIN 18.95 / MAX 38.76). SE +/- 0.02, N = 3. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenBenchmarking.org. FPS, More Is Better. OpenVINO 2024.0, Model: Machine Translation EN To DE FP16 - Device: CPU. d: 88.14, c: 88.05, b: 88.33, a: 88.13. SE +/- 0.03, N = 3. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenBenchmarking.org. ms, Fewer Is Better. OpenVINO 2024.0, Model: Machine Translation EN To DE FP16 - Device: CPU. d: 135.96 (MIN 110.6 / MAX 155.59), c: 136.15 (MIN 73.11 / MAX 161.85), b: 135.70 (MIN 109.99 / MAX 155.71), a: 136.00 (MIN 118.61 / MAX 153.61). SE +/- 0.06, N = 3. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org. FPS, More Is Better. OpenVINO 2024.0, Model: Weld Porosity Detection FP16-INT8 - Device: CPU. d: 1652.91, c: 1647.13, b: 1654.72, a: 1652.05. SE +/- 0.87, N = 3. 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org - OpenVINO 2024.0 - ms, Fewer Is Better
SE +/- 0.01, N = 3
d: 14.51 (MIN: 8.58 / MAX: 29.8)
c: 14.56 (MIN: 8.28 / MAX: 27.62)
b: 14.49 (MIN: 8.36 / MAX: 28.02)
a: 14.52 (MIN: 8.14 / MAX: 28.07)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2024.0 - FPS, More Is Better
SE +/- 3.73, N = 3
d: 894.41
c: 894.57
b: 889.36
a: 895.25
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2024.0 - ms, Fewer Is Better
SE +/- 0.05, N = 3
d: 13.40 (MIN: 8.15 / MAX: 34.65)
c: 13.39 (MIN: 7.1 / MAX: 31.9)
b: 13.47 (MIN: 7.33 / MAX: 31.02)
a: 13.38 (MIN: 7.23 / MAX: 35.43)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Noise Suppression Poconet-Like FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2024.0 - FPS, More Is Better
SE +/- 3.03, N = 3
d: 1043.82
c: 1041.67
b: 1046.30
a: 1033.67
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Noise Suppression Poconet-Like FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2024.0 - ms, Fewer Is Better
SE +/- 0.03, N = 3
d: 11.44 (MIN: 9.06 / MAX: 31.81)
c: 11.46 (MIN: 6.19 / MAX: 32.08)
b: 11.41 (MIN: 7.86 / MAX: 31.96)
a: 11.54 (MIN: 6.61 / MAX: 30.79)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Handwritten English Recognition FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2024.0 - FPS, More Is Better
SE +/- 0.65, N = 3
d: 377.49
c: 378.34
b: 380.22
a: 377.84
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Handwritten English Recognition FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2024.0 - ms, Fewer Is Better
SE +/- 0.11, N = 3
d: 63.54 (MIN: 41.89 / MAX: 89.71)
c: 63.40 (MIN: 37.56 / MAX: 90.19)
b: 63.08 (MIN: 42.84 / MAX: 83.6)
a: 63.46 (MIN: 45.07 / MAX: 85.2)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Person Re-Identification Retail FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2024.0 - FPS, More Is Better
SE +/- 0.87, N = 3
d: 1202.22
c: 1202.09
b: 1201.22
a: 1198.64
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Person Re-Identification Retail FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2024.0 - ms, Fewer Is Better
SE +/- 0.01, N = 3
d: 9.97 (MIN: 5.63 / MAX: 25.55)
c: 9.97 (MIN: 5.91 / MAX: 41.34)
b: 9.98 (MIN: 5.6 / MAX: 24.1)
a: 10.00 (MIN: 6.87 / MAX: 16.1)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2024.0 - FPS, More Is Better
SE +/- 23.00, N = 3
d: 24496.84
c: 24329.21
b: 24434.51
a: 24478.31
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2024.0 - ms, Fewer Is Better
SE +/- 0.00, N = 3
d: 0.97 (MIN: 0.55 / MAX: 13.67)
c: 0.98 (MIN: 0.53 / MAX: 16.95)
b: 0.97 (MIN: 0.57 / MAX: 14.08)
a: 0.97 (MIN: 0.66 / MAX: 15.15)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

OpenBenchmarking.org - OpenVINO 2024.0 - FPS, More Is Better
SE +/- 0.10, N = 3
d: 421.54
c: 423.46
b: 426.03
a: 421.72
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

OpenBenchmarking.org - OpenVINO 2024.0 - ms, Fewer Is Better
SE +/- 0.01, N = 3
d: 56.90 (MIN: 51.72 / MAX: 72.17)
c: 56.64 (MIN: 36.21 / MAX: 73.18)
b: 56.30 (MIN: 52.17 / MAX: 69.13)
a: 56.87 (MIN: 35.77 / MAX: 72.74)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenBenchmarking.org - OpenVINO 2024.0 - FPS, More Is Better
SE +/- 27.92, N = 3
d: 44596.35
c: 44306.10
b: 44461.02
a: 44518.20
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenBenchmarking.org - OpenVINO 2024.0 - ms, Fewer Is Better
SE +/- 0.00, N = 3
d: 0.53 (MIN: 0.31 / MAX: 12.66)
c: 0.53 (MIN: 0.3 / MAX: 13.83)
b: 0.53 (MIN: 0.3 / MAX: 12.53)
a: 0.53 (MIN: 0.3 / MAX: 12.95)
1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

RocksDB

Test: Overwrite

OpenBenchmarking.org - RocksDB 9.0 - Op/s, More Is Better
SE +/- 4178.44, N = 3
d: 762175
c: 777804
b: 779236
a: 781166
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Random Fill

OpenBenchmarking.org - RocksDB 9.0 - Op/s, More Is Better
SE +/- 1329.24, N = 3
d: 777115
c: 774384
b: 783967
a: 790603
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Random Read

OpenBenchmarking.org - RocksDB 9.0 - Op/s, More Is Better
SE +/- 374266.35, N = 3
d: 145516874
c: 144921763
b: 145893753
a: 145962891
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Update Random

OpenBenchmarking.org - RocksDB 9.0 - Op/s, More Is Better
SE +/- 1458.06, N = 3
d: 658520
c: 660978
b: 666715
a: 671121
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Sequential Fill

OpenBenchmarking.org - RocksDB 9.0 - Op/s, More Is Better
SE +/- 4090.17, N = 3
d: 919148
c: 908882
b: 908177
a: 920430
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Random Fill Sync

OpenBenchmarking.org - RocksDB 9.0 - Op/s, More Is Better
SE +/- 32.60, N = 3
d: 46299
c: 46701
b: 45974
a: 45528
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Read While Writing

OpenBenchmarking.org - RocksDB 9.0 - Op/s, More Is Better
SE +/- 38492.46, N = 3
d: 5633613
c: 5592760
b: 5586539
a: 5546801
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Read Random Write Random

OpenBenchmarking.org - RocksDB 9.0 - Op/s, More Is Better
SE +/- 5471.11, N = 3
d: 2814513
c: 2868681
b: 2859339
a: 2879787
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

WavPack Audio Encoding

WAV To WavPack

OpenBenchmarking.org - WavPack Audio Encoding 5.7 - Seconds, Fewer Is Better
SE +/- 0.000, N = 5
d: 4.442
c: 4.435
b: 4.438
a: 4.433

BRL-CAD

VGR Performance Metric

OpenBenchmarking.org - BRL-CAD 7.38.2 - VGR Performance Metric, More Is Better
d: 425778
c: 420528
b: 424641
a: 430387
1. (CXX) g++ options: -std=c++17 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lnetpbm -lregex_brl -lz_brl -lassimp -ldl -lm -ltk8.6

Chaos Group V-RAY

Mode: CPU

OpenBenchmarking.org - Chaos Group V-RAY 6.0 - vsamples, More Is Better
SE +/- 195.74, N = 3
d: 44633
c: 44375
b: 44634
a: 44287


Phoronix Test Suite v10.8.5