new-tests

Tests for a future article. AMD EPYC 8324P 32-Core testing with an AMD Cinnabar (RCB1009C BIOS) motherboard and ASPEED graphics on Ubuntu 23.10 via the Phoronix Test Suite, compared against AMD EPYC 8534PN and AMD EPYC 7601 (Zen 1) reference runs.

HTML result view exported from: https://openbenchmarking.org/result/2401110-NE-NEWTESTS900&grs&sro&rro.
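The exported view linked above uses OpenBenchmarking.org's geometric-mean result ordering (the "grs" parameter). As a rough, illustrative sketch of how the per-test numbers in the sections below can be rolled up into a single relative figure per configuration, the following Python snippet normalizes each result against the "32" run and takes a geometric mean. The two sample values are copied from the tables below; the normalization scheme is a common benchmarking convention and an assumption here, not a description of how OpenBenchmarking.org computes its own summary.

import math

# Two sample results copied from the tables below; "lower_is_better" mirrors
# the "Fewer Is Better" / "More Is Better" labels used throughout this file.
results = {
    "Y-Cruncher 1B (Seconds)":   {"lower_is_better": True,  "32": 11.68, "32 z": 11.60},
    "SVT-AV1 Preset 8 4K (FPS)": {"lower_is_better": False, "32": 48.45, "32 z": 58.72},
}

def relative_score(entry, config, baseline="32"):
    # Returns a value above 1.0 when `config` is faster than the baseline on this test.
    ratio = entry[config] / entry[baseline]
    return 1.0 / ratio if entry["lower_is_better"] else ratio

def geo_mean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

for config in ("32", "32 z"):
    scores = [relative_score(entry, config) for entry in results.values()]
    print(f"{config}: {geo_mean(scores):.3f}x relative to the 32 run")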

Tested configurations: Zen 1 - EPYC 7601, b, c, 32, 32 z, 32 c, 32 d

System details (components as reported by the Phoronix Test Suite):
- Processor: AMD EPYC 7601 32-Core @ 2.20GHz (32 Cores / 64 Threads) [Zen 1 - EPYC 7601]; AMD EPYC 8534PN 64-Core @ 2.00GHz (64 Cores / 128 Threads) [b]; AMD EPYC 8534PN 32-Core @ 2.05GHz (32 Cores / 64 Threads) [c]; AMD EPYC 8324P 32-Core @ 2.65GHz (32 Cores / 64 Threads) [32, 32 z, 32 c, 32 d]
- Motherboard: TYAN B8026T70AE24HR (V1.02.B10 BIOS) [Zen 1 - EPYC 7601]; AMD Cinnabar (RCB1009C BIOS) [others]
- Chipset: AMD 17h [Zen 1 - EPYC 7601]; AMD Device 14a4 [others]
- Memory: 128GB [Zen 1 - EPYC 7601]; 6 x 32 GB DRAM-4800MT/s Samsung M321R4GA0BB0-CQKMG [others]
- Disk: 280GB INTEL SSDPE21D280GA + 1000GB INTEL SSDPE2KX010T8 [Zen 1 - EPYC 7601]; 1000GB INTEL SSDPE2KX010T8 [others]
- Graphics: llvmpipe [Zen 1 - EPYC 7601]; ASPEED [others]
- Monitor: VE228
- Network: 2 x Broadcom NetXtreme BCM5720 PCIe
- OS: Ubuntu 23.10
- Kernel: 6.6.9-060609-generic (x86_64)
- Desktop: GNOME Shell 45.0
- Display Server: X Server 1.21.1.7
- OpenGL: 4.5 Mesa 23.2.1-1ubuntu3.1 (LLVM 15.0.7 256 bits)
- Compiler: GCC 13.2.0
- File-System: ext4
- Screen Resolution: 1920x1080 [Zen 1 - EPYC 7601]; 1920x1200 [others]

OpenBenchmarking.org notes:

Kernel Details
- Transparent Huge Pages: madvise

Compiler Details
- --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details
- Zen 1 - EPYC 7601: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0x800126e
- b, c, 32, 32 z, 32 c, 32 d: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xaa00212

Security Details
- Zen 1 - EPYC 7601: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT vulnerable + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: disabled RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
- b, c, 32, 32 z, 32 c, 32 d (identical for all six): gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Java Details
- 32, 32 z, 32 c, 32 d: OpenJDK Runtime Environment (build 11.0.21+9-post-Ubuntu-0ubuntu123.10)

Python Details
- 32, 32 z, 32 c, 32 d: Python 3.11.6
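The Processor Details above record the scaling governor (acpi-cpufreq performance) and the CPU microcode revision for each system. A minimal Python sketch for reading those two values back on a Linux host follows, assuming the cpufreq sysfs interface is exposed; it is a generic sanity check, not part of the Phoronix Test Suite itself.

from pathlib import Path

# Read the scaling governor reported for cpu0 (assumes cpufreq sysfs is present).
governor = Path("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor").read_text().strip()
print("scaling governor:", governor)

# Read the microcode revision from /proc/cpuinfo (x86 field name "microcode").
for line in Path("/proc/cpuinfo").read_text().splitlines():
    if line.startswith("microcode"):
        print("cpu microcode:", line.split(":", 1)[1].strip())
        break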

[Overview table from the original export: side-by-side per-test results for the seven configurations (Zen 1 - EPYC 7601, b, c, 32, 32 z, 32 c, 32 d). The individual per-test results are listed in the sections below.]

Y-Cruncher

Pi Digits To Calculate: 1B

OpenBenchmarking.org: Y-Cruncher 0.8.3; Seconds, Fewer Is Better; SE +/- 0.09, N = 3
c: 10.48 | b: 10.42 | Zen 1 - EPYC 7601: 33.92 | 32 z: 11.60 | 32 d: 11.98 | 32 c: 11.90 | 32: 11.68

Y-Cruncher

Pi Digits To Calculate: 500M

OpenBenchmarking.org: Y-Cruncher 0.8.3; Seconds, Fewer Is Better; SE +/- 0.118, N = 3
c: 5.213 | b: 5.202 | Zen 1 - EPYC 7601: 15.693 | 32 z: 5.685 | 32 d: 5.751 | 32 c: 5.783 | 32: 5.656

Quicksilver

Input: CORAL2 P1

OpenBenchmarking.org: Quicksilver 20230818; Figure Of Merit, More Is Better; SE +/- 66916.20, N = 3
c: 21250000 | b: 21180000 | Zen 1 - EPYC 7601: 12996667 | 32 z: 18760000 | 32 d: 18840000 | 32 c: 1040000 | 32: 18790000
1. (CXX) g++ options: -fopenmp -O3 -march=native

Quicksilver

Input: CTS2

OpenBenchmarking.org: Quicksilver 20230818; Figure Of Merit, More Is Better; SE +/- 16666.67, N = 3
c: 16260000 | b: 16270000 | Zen 1 - EPYC 7601: 11426667 | 32 z: 14290000 | 32 d: 14280000 | 32 c: 14430000 | 32: 14320000
1. (CXX) g++ options: -fopenmp -O3 -march=native

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 4K

OpenBenchmarking.org: SVT-AV1 1.8; Frames Per Second, More Is Better
32 z: 58.72 | 32 d: 58.64 | 32 c: 47.25 | 32: 48.45
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

RocksDB

Test: Random Read

OpenBenchmarking.org: RocksDB 8.0; Op/s, More Is Better
32 z: 177167636 | 32 d: 160707812 | 32 c: 160665305 | 32: 176770468
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

DaCapo Benchmark

Java Test: PMD Source Code Analyzer

OpenBenchmarking.org: DaCapo Benchmark 23.11; msec, Fewer Is Better
32 z: 1820 | 32 d: 1833 | 32 c: 1966 | 32: 1784

Speedb

Test: Random Read

OpenBenchmarking.org: Speedb 2.7; Op/s, More Is Better
32 z: 179434924 | 32 d: 163512432 | 32 c: 163202721 | 32: 179685954
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

DaCapo Benchmark

Java Test: FOP Print Formatter

OpenBenchmarking.org: DaCapo Benchmark 23.11; msec, Fewer Is Better
32 z: 696 | 32 d: 758 | 32 c: 764 | 32: 751

Speedb

Test: Read While Writing

OpenBenchmarking.org: Speedb 2.7; Op/s, More Is Better
32 z: 7210235 | 32 d: 7105602 | 32 c: 7746346 | 32: 7457600
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

QuantLib

Configuration: Multi-Threaded

OpenBenchmarking.org: QuantLib 1.32; MFLOPS, More Is Better
32 z: 107381.6 | 32 d: 98618.7 | 32 c: 98916.2 | 32: 107079.2
1. (CXX) g++ options: -O3 -march=native -fPIE -pie

OpenFOAM

Input: drivaerFastback, Small Mesh Size - Mesh Time

OpenBenchmarking.org: OpenFOAM 10; Seconds, Fewer Is Better
32 z: 30.75 | 32 d: 30.72 | 32 c: 30.54 | 32: 28.37
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; ms, Fewer Is Better
32 z: 43.71 | 32 d: 46.28 | 32 c: 46.17 | 32: 42.87
MIN / MAX: 35.06 / 153.84 | 30.15 / 108.49 | 39.81 / 161.92 | 35.14 / 107.5
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; FPS, More Is Better
32 z: 730.82 | 32 d: 690.24 | 32 c: 692.02 | 32: 745.00
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Blender

Blend File: Fishy Cat - Compute: CPU-Only

OpenBenchmarking.org: Blender 4.0; Seconds, Fewer Is Better
32 z: 55.54 | 32 d: 59.79 | 32 c: 59.58 | 32: 55.65

Quicksilver

Input: CORAL2 P2

OpenBenchmarking.org: Quicksilver 20230818; Figure Of Merit, More Is Better; SE +/- 37118.43, N = 3
c: 16150000 | b: 16140000 | Zen 1 - EPYC 7601: 15013333 | 32 z: 15230000 | 32 d: 15100000 | 32 c: 15180000 | 32: 15350000
1. (CXX) g++ options: -fopenmp -O3 -march=native

Timed Gem5 Compilation

Time To Compile

OpenBenchmarking.org: Timed Gem5 Compilation 23.0.1; Seconds, Fewer Is Better
32 z: 272.61 | 32 d: 258.93 | 32 c: 258.31 | 32: 254.01

Blender

Blend File: Pabellon Barcelona - Compute: CPU-Only

OpenBenchmarking.org: Blender 4.0; Seconds, Fewer Is Better
32 z: 138.60 | 32 d: 148.56 | 32 c: 148.74 | 32: 139.09

DaCapo Benchmark

Java Test: Zxing 1D/2D Barcode Image Processing

OpenBenchmarking.org: DaCapo Benchmark 23.11; msec, Fewer Is Better
32 z: 599 | 32 d: 599 | 32 c: 569 | 32: 609

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; ms, Fewer Is Better
32 z: 9.56 | 32 d: 10.21 | 32 c: 10.22 | 32: 9.56
MIN / MAX: 5.09 / 75.37 | 5.17 / 61.15 | 5.48 / 68.07 | 5.1 / 77.12
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Blender

Blend File: Classroom - Compute: CPU-Only

OpenBenchmarking.org: Blender 4.0; Seconds, Fewer Is Better
32 z: 112.09 | 32 d: 119.57 | 32 c: 119.72 | 32: 112.03

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; ms, Fewer Is Better
32 z: 5.42 | 32 d: 5.78 | 32 c: 5.78 | 32: 5.41
MIN / MAX: 3.15 / 67.23 | 3.37 / 65.27 | 3.21 / 58.78 | 3.17 / 57.08
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Blender

Blend File: BMW27 - Compute: CPU-Only

OpenBenchmarking.org: Blender 4.0; Seconds, Fewer Is Better
32 z: 44.48 | 32 d: 47.41 | 32 c: 47.52 | 32: 44.73

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; FPS, More Is Better
32 z: 3299.93 | 32 d: 3100.95 | 32 c: 3099.20 | 32: 3300.99
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; FPS, More Is Better
32 z: 5751.58 | 32 d: 5423.13 | 32 c: 5416.31 | 32: 5747.65
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

DaCapo Benchmark

Java Test: H2O In-Memory Platform For Machine Learning

OpenBenchmarking.org: DaCapo Benchmark 23.11; msec, Fewer Is Better
32 z: 3868 | 32 d: 3755 | 32 c: 3979 | 32: 3974

OpenVINO

Model: Handwritten English Recognition FP16 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; ms, Fewer Is Better
32 z: 35.59 | 32 d: 37.61 | 32 c: 37.40 | 32: 35.51
MIN / MAX: 24.72 / 147.24 | 24.11 / 127.49 | 27.33 / 92.33 | 22.8 / 100.53
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Handwritten English Recognition FP16 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; FPS, More Is Better
32 z: 896.69 | 32 d: 848.62 | 32 c: 853.38 | 32: 898.60
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; ms, Fewer Is Better
32 z: 8.05 | 32 d: 8.52 | 32 c: 8.52 | 32: 8.07
MIN / MAX: 4.56 / 76.48 | 4.8 / 75.53 | 4.97 / 67.6 | 4.55 / 69.24
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; FPS, More Is Better
32 z: 1964.99 | 32 d: 1862.24 | 32 c: 1860.99 | 32: 1960.18
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; ms, Fewer Is Better
32 z: 23.95 | 32 d: 25.16 | 32 c: 25.22 | 32: 23.95
MIN / MAX: 15.19 / 90.71 | 19.24 / 86.7 | 21.61 / 89.16 | 13.94 / 114.01
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

DaCapo Benchmark

Java Test: H2 Database Engine

OpenBenchmarking.org: DaCapo Benchmark 23.11; msec, Fewer Is Better
32 z: 2655 | 32 d: 2634 | 32 c: 2773 | 32: 2675

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; FPS, More Is Better
32 z: 666.30 | 32 d: 634.50 | 32 c: 632.92 | 32: 666.22
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; FPS, More Is Better
32 z: 32.81 | 32 d: 31.20 | 32 c: 31.22 | 32: 32.82
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; ms, Fewer Is Better
32 z: 486.03 | 32 d: 510.90 | 32 c: 510.79 | 32: 486.65
MIN / MAX: 454.31 / 580.9 | 470.7 / 595.97 | 473.86 / 584.54 | 465.68 / 570.73
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

DaCapo Benchmark

Java Test: Tradesoap

OpenBenchmarking.org: DaCapo Benchmark 23.11; msec, Fewer Is Better
32 z: 5168 | 32 d: 5149 | 32 c: 5366 | 32: 5403

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; ms, Fewer Is Better
32 z: 18.69 | 32 d: 19.56 | 32 c: 19.58 | 32: 18.69
MIN / MAX: 9.78 / 86.93 | 13.73 / 73.6 | 10.24 / 83.63 | 9.97 / 81.33
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; FPS, More Is Better
32 z: 1704.02 | 32 d: 1628.91 | 32 c: 1627.93 | 32: 1704.26
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Road Segmentation ADAS FP16 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; ms, Fewer Is Better
32 z: 27.53 | 32 d: 28.82 | 32 c: 28.77 | 32: 27.69
MIN / MAX: 18.86 / 82.58 | 19.39 / 99.16 | 17.12 / 135.79 | 18.56 / 147.54
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Road Segmentation ADAS FP16 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; FPS, More Is Better
32 z: 579.41 | 32 d: 553.65 | 32 c: 554.68 | 32: 576.18
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Timed Linux Kernel Compilation

Build: allmodconfig

OpenBenchmarking.org: Timed Linux Kernel Compilation 6.1; Seconds, Fewer Is Better
32 z: 434.19 | 32 d: 452.61 | 32 c: 453.69 | 32: 433.79

TensorFlow

Device: CPU - Batch Size: 1 - Model: GoogLeNet

OpenBenchmarking.org: TensorFlow 2.12; images/sec, More Is Better
32 z: 28.71 | 32 d: 28.79 | 32 c: 27.73 | 32: 28.99

OSPRay Studio

Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org: OSPRay Studio 0.13; ms, Fewer Is Better
32 z: 61430 | 32 d: 63336 | 32 c: 62802 | 32: 60673

RocksDB

Test: Read While Writing

OpenBenchmarking.org: RocksDB 8.0; Op/s, More Is Better
32 z: 4364996 | 32 d: 4244478 | 32 c: 4419497 | 32: 4284691
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; ms, Fewer Is Better
32 z: 927.57 | 32 d: 965.35 | 32 c: 964.20 | 32: 929.23
MIN / MAX: 895.6 / 1019.94 | 922.7 / 1047.5 | 905.78 / 1053.38 | 907.01 / 1013.02
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; FPS, More Is Better
32 z: 17.18 | 32 d: 16.54 | 32 c: 16.51 | 32: 17.17
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

PyTorch

Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l

OpenBenchmarking.org: PyTorch 2.1; batches/sec, More Is Better
32 z: 9.82 | 32 d: 10.21 | 32 c: 10.04 | 32: 9.85
MIN / MAX: 5.63 / 10.05 | 5.69 / 10.32 | 5.86 / 10.23 | 5.1 / 9.99

DaCapo Benchmark

Java Test: Apache Lucene Search Engine

OpenBenchmarking.org: DaCapo Benchmark 23.11; msec, Fewer Is Better
32 z: 1425 | 32 d: 1433 | 32 c: 1379 | 32: 1402

Blender

Blend File: Barbershop - Compute: CPU-Only

OpenBenchmarking.org: Blender 4.0; Seconds, Fewer Is Better
32 z: 410.43 | 32 d: 426.37 | 32 c: 426.30 | 32: 410.61

TensorFlow

Device: CPU - Batch Size: 1 - Model: AlexNet

OpenBenchmarking.org: TensorFlow 2.12; images/sec, More Is Better
32 z: 31.92 | 32 d: 33.02 | 32 c: 33.14 | 32: 32.12

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org: Neural Magic DeepSparse 1.6; ms/batch, Fewer Is Better
32 z: 397.96 | 32 d: 410.33 | 32 c: 411.34 | 32: 396.29

Timed FFmpeg Compilation

Time To Compile

OpenBenchmarking.org: Timed FFmpeg Compilation 6.1; Seconds, Fewer Is Better
32 z: 23.76 | 32 d: 24.30 | 32 c: 24.45 | 32: 23.56

Embree

Binary: Pathtracer - Model: Crown

OpenBenchmarking.org: Embree 4.3; Frames Per Second, More Is Better
32 z: 37.25 | 32 d: 36.28 | 32 c: 35.91 | 32: 36.96
MIN / MAX: 36.89 / 37.75 | 35.88 / 37.13 | 35.53 / 37.08 | 36.61 / 37.43

DaCapo Benchmark

Java Test: Spring Boot

OpenBenchmarking.org: DaCapo Benchmark 23.11; msec, Fewer Is Better
32 z: 2460 | 32 d: 2452 | 32 c: 2533 | 32: 2444

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org: Neural Magic DeepSparse 1.6; items/sec, More Is Better
32 z: 39.95 | 32 d: 38.83 | 32 c: 38.77 | 32: 40.17

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; FPS, More Is Better
32 z: 201.15 | 32 d: 195.05 | 32 c: 194.21 | 32: 199.90
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; ms, Fewer Is Better
32 z: 79.39 | 32 d: 81.87 | 32 c: 82.18 | 32: 79.82
MIN / MAX: 43.97 / 186.13 | 52.13 / 175.84 | 58.39 / 175.7 | 42.02 / 179.47
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

DaCapo Benchmark

Java Test: GraphChi

OpenBenchmarking.org: DaCapo Benchmark 23.11; msec, Fewer Is Better
32 z: 3630 | 32 d: 3656 | 32 c: 3538 | 32: 3536

OpenVINO

Model: Face Detection Retail FP16 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; ms, Fewer Is Better
32 z: 3.90 | 32 d: 4.03 | 32 c: 4.03 | 32: 3.91
MIN / MAX: 2.18 / 64.81 | 2.23 / 62.26 | 2.23 / 54.09 | 2.2 / 72.73
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

DaCapo Benchmark

Java Test: Avrora AVR Simulation Framework

OpenBenchmarking.org: DaCapo Benchmark 23.11; msec, Fewer Is Better
32 z: 5441 | 32 d: 5572 | 32 c: 5561 | 32: 5613

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 4K

OpenBenchmarking.org: SVT-AV1 1.8; Frames Per Second, More Is Better
32 z: 185.56 | 32 d: 186.37 | 32 c: 180.96 | 32: 186.63
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Timed Linux Kernel Compilation

Build: defconfig

OpenBenchmarking.org: Timed Linux Kernel Compilation 6.1; Seconds, Fewer Is Better
32 z: 52.01 | 32 d: 53.63 | 32 c: 53.62 | 32: 52.13

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; ms, Fewer Is Better
32 z: 0.66 | 32 d: 0.67 | 32 c: 0.67 | 32: 0.65
MIN / MAX: 0.36 / 65.79 | 0.36 / 50.74 | 0.36 / 62.87 | 0.36 / 51.48
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 4K

OpenBenchmarking.org: SVT-AV1 1.8; Frames Per Second, More Is Better
32 z: 5.899 | 32 d: 5.977 | 32 c: 5.829 | 32: 5.801
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

TensorFlow

Device: CPU - Batch Size: 16 - Model: VGG-16

OpenBenchmarking.org: TensorFlow 2.12; images/sec, More Is Better
32 z: 25.20 | 32 d: 24.51 | 32 c: 24.47 | 32: 25.15

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; ms, Fewer Is Better
32 z: 9.16 | 32 d: 9.37 | 32 c: 9.39 | 32: 9.12
MIN / MAX: 5.99 / 67.91 | 6.07 / 71.06 | 5.95 / 68.66 | 6.22 / 56.95
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; FPS, More Is Better
32 z: 1735.64 | 32 d: 1696.50 | 32 c: 1694.01 | 32: 1741.57
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OSPRay Studio

Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org: OSPRay Studio 0.13; ms, Fewer Is Better
32 z: 3406 | 32 d: 3499 | 32 c: 3493 | 32: 3404

OSPRay Studio

Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org: OSPRay Studio 0.13; ms, Fewer Is Better
32 z: 116972 | 32 d: 119783 | 32 c: 118980 | 32: 116566

OSPRay Studio

Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org: OSPRay Studio 0.13; ms, Fewer Is Better
32 z: 71495 | 32 d: 73329 | 32 c: 73024 | 32: 71361

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; ms, Fewer Is Better
32 z: 13.29 | 32 d: 13.65 | 32 c: 13.65 | 32: 13.36
MIN / MAX: 8.3 / 73.59 | 6.73 / 75.18 | 9.08 / 67.03 | 7.26 / 78.85
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OSPRay Studio

Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org: OSPRay Studio 0.13; ms, Fewer Is Better
32 z: 115669 | 32 d: 118802 | 32 c: 118221 | 32: 116377

OSPRay Studio

Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org: OSPRay Studio 0.13; ms, Fewer Is Better
32 z: 4048 | 32 d: 4132 | 32 c: 4157 | 32: 4049

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; FPS, More Is Better
32 z: 1197.46 | 32 d: 1166.83 | 32 c: 1166.56 | 32: 1190.42
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

DaCapo Benchmark

Java Test: Tradebeans

OpenBenchmarking.org: DaCapo Benchmark 23.11; msec, Fewer Is Better
32 z: 8600 | 32 d: 8380 | 32 c: 8520 | 32: 8561

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org: Neural Magic DeepSparse 1.6; items/sec, More Is Better
32 z: 835.26 | 32 d: 815.98 | 32 c: 816.28 | 32: 836.42

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org: Neural Magic DeepSparse 1.6; ms/batch, Fewer Is Better
32 z: 19.13 | 32 d: 19.59 | 32 c: 19.58 | 32: 19.11

OSPRay Studio

Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org: OSPRay Studio 0.13; ms, Fewer Is Better
32 z: 136312 | 32 d: 139445 | 32 c: 139685 | 32: 136464

Xmrig

Variant: GhostRider - Hash Count: 1M

OpenBenchmarking.org: Xmrig 6.21; H/s, More Is Better
32 z: 4038.6 | 32 d: 4095.7 | 32 c: 4136.3 | 32: 4067.4
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

DaCapo Benchmark

Java Test: Jython

OpenBenchmarking.org: DaCapo Benchmark 23.11; msec, Fewer Is Better
32 z: 6773 | 32 d: 6769 | 32 c: 6865 | 32: 6703

OSPRay Studio

Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org: OSPRay Studio 0.13; ms, Fewer Is Better
32 z: 62113 | 32 d: 62787 | 32 c: 63402 | 32: 61987

OpenFOAM

Input: drivaerFastback, Small Mesh Size - Execution Time

OpenBenchmarking.org: OpenFOAM 10; Seconds, Fewer Is Better
32 z: 71.20 | 32 d: 72.31 | 32 c: 72.38 | 32: 72.81
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

DaCapo Benchmark

Java Test: Apache Xalan XSLT

OpenBenchmarking.org: DaCapo Benchmark 23.11; msec, Fewer Is Better
32 z: 859 | 32 d: 861 | 32 c: 852 | 32: 871

OSPRay Studio

Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org: OSPRay Studio 0.13; ms, Fewer Is Better
32 z: 3446 | 32 d: 3522 | 32 c: 3515 | 32: 3451

Xmrig

Variant: Wownero - Hash Count: 1M

OpenBenchmarking.org: Xmrig 6.21; H/s, More Is Better
32 z: 25943.7 | 32 d: 25396.8 | 32 c: 25385.9 | 32: 25814.4
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; ms, Fewer Is Better
32 z: 0.47 | 32 d: 0.48 | 32 c: 0.48 | 32: 0.48
MIN / MAX: 0.27 / 64.47 | 0.27 / 65.55 | 0.27 / 50.17 | 0.27 / 50.11
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

TensorFlow

Device: CPU - Batch Size: 1 - Model: ResNet-50

OpenBenchmarking.org: TensorFlow 2.12; images/sec, More Is Better
32 z: 8.77 | 32 d: 8.59 | 32 c: 8.61 | 32: 8.74

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org: Neural Magic DeepSparse 1.6; items/sec, More Is Better
32 z: 21.27 | 32 d: 21.09 | 32 c: 20.87 | 32: 21.29

Embree

Binary: Pathtracer ISPC - Model: Crown

OpenBenchmarking.org: Embree 4.3; Frames Per Second, More Is Better
32 z: 37.68 | 32 d: 36.94 | 32 c: 37.00 | 32: 37.30
MIN / MAX: 37.25 / 38.37 | 36.46 / 37.76 | 36.53 / 38.11 | 36.86 / 38.04

RocksDB

Test: Read Random Write Random

OpenBenchmarking.org: RocksDB 8.0; Op/s, More Is Better
32 z: 2361270 | 32 d: 2351568 | 32 c: 2327800 | 32: 2373654
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Read Random Write Random

OpenBenchmarking.org: Speedb 2.7; Op/s, More Is Better
32 z: 2259344 | 32 d: 2215896 | 32 c: 2229494 | 32: 2231403
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-152

OpenBenchmarking.org: PyTorch 2.1; batches/sec, More Is Better
32 z: 15.51 | 32 d: 15.35 | 32 c: 15.32 | 32: 15.61
MIN / MAX: 7.3 / 15.63 | 8.86 / 15.52 | 6.91 / 15.45 | 6.89 / 15.74

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

OpenBenchmarking.org: Embree 4.3; Frames Per Second, More Is Better
32 z: 46.31 | 32 d: 45.65 | 32 c: 45.46 | 32: 45.94
MIN / MAX: 46.05 / 46.74 | 45.37 / 46.89 | 45.22 / 46.6 | 45.66 / 46.38

TensorFlow

Device: CPU - Batch Size: 16 - Model: GoogLeNet

OpenBenchmarking.org: TensorFlow 2.12; images/sec, More Is Better
32 z: 155.77 | 32 d: 158.08 | 32 c: 157.60 | 32: 158.47

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-50

OpenBenchmarking.org: PyTorch 2.1; batches/sec, More Is Better
32 z: 52.78 | 32 d: 53.30 | 32 c: 53.00 | 32: 52.44
MIN / MAX: 17.43 / 53.32 | 50.97 / 53.84 | 50.62 / 53.51 | 15.02 / 53.14

Embree

Binary: Pathtracer - Model: Asian Dragon Obj

OpenBenchmarking.org: Embree 4.3; Frames Per Second, More Is Better
32 z: 36.86 | 32 d: 37.41 | 32 c: 37.44 | 32: 37.28
MIN / MAX: 36.67 / 37.11 | 37.22 / 37.69 | 37.24 / 37.71 | 37.09 / 37.7

DaCapo Benchmark

Java Test: Apache Tomcat

OpenBenchmarking.org: DaCapo Benchmark 23.11; msec, Fewer Is Better
32 z: 2082 | 32 d: 2112 | 32 c: 2094 | 32: 2107

OpenVINO

Model: Face Detection Retail FP16 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; FPS, More Is Better
32 z: 3924.86 | 32 d: 3869.70 | 32 c: 3877.91 | 32: 3921.50
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; FPS, More Is Better
32 z: 40101.80 | 32 d: 39843.05 | 32 c: 39562.87 | 32: 40123.62
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

DaCapo Benchmark

Java Test: Eclipse

OpenBenchmarking.org: DaCapo Benchmark 23.11; msec, Fewer Is Better
32 z: 12735 | 32 d: 12768 | 32 c: 12826 | 32: 12656

Speedb

Test: Update Random

OpenBenchmarking.org: Speedb 2.7; Op/s, More Is Better
32 z: 314114 | 32 d: 313683 | 32 c: 317758 | 32: 314123
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

TensorFlow

Device: CPU - Batch Size: 16 - Model: AlexNet

OpenBenchmarking.org: TensorFlow 2.12; images/sec, More Is Better
32 z: 274.97 | 32 d: 276.19 | 32 c: 274.97 | 32: 272.93

Llama.cpp

Model: llama-2-13b.Q4_0.gguf

OpenBenchmarking.org: Llama.cpp b1808; Tokens Per Second, More Is Better
32 z: 17.87 | 32 d: 18.08 | 32 c: 17.87 | 32: 17.94
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -march=native -mtune=native -lopenblas

Xmrig

Variant: CryptoNight-Heavy - Hash Count: 1M

OpenBenchmarking.org: Xmrig 6.21; H/s, More Is Better
32 z: 18936.5 | 32 d: 18924.0 | 32 c: 18783.9 | 32: 19004.5
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org: Neural Magic DeepSparse 1.6; items/sec, More Is Better
32 z: 21.29 | 32 d: 21.07 | 32 c: 21.04 | 32: 21.23

DaCapo Benchmark

Java Test: Batik SVG Toolkit

OpenBenchmarking.org: DaCapo Benchmark 23.11; msec, Fewer Is Better
32 z: 1723 | 32 d: 1738 | 32 c: 1718 | 32: 1733

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org: Neural Magic DeepSparse 1.6; items/sec, More Is Better
32 z: 26.08 | 32 d: 25.79 | 32 c: 25.82 | 32: 26.06

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org: Neural Magic DeepSparse 1.6; ms/batch, Fewer Is Better
32 z: 745.18 | 32 d: 751.21 | 32 c: 753.12 | 32: 747.07

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org: Neural Magic DeepSparse 1.6; ms/batch, Fewer Is Better
32 z: 87.33 | 32 d: 88.23 | 32 c: 88.20 | 32: 87.49

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org: Neural Magic DeepSparse 1.6; items/sec, More Is Better
32 z: 385.65 | 32 d: 381.78 | 32 c: 384.32 | 32: 383.97

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org: Neural Magic DeepSparse 1.6; items/sec, More Is Better
32 z: 122.96 | 32 d: 121.80 | 32 c: 122.33 | 32: 123.02

PyTorch

Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l

OpenBenchmarking.org: PyTorch 2.1; batches/sec, More Is Better
32 z: 7.11 | 32 d: 7.15 | 32 c: 7.18 | 32: 7.17
MIN / MAX: 4.25 / 7.26 | 4.34 / 7.3 | 4.37 / 7.37 | 4.45 / 7.33

Xmrig

Variant: KawPow - Hash Count: 1M

OpenBenchmarking.org: Xmrig 6.21; H/s, More Is Better
32 z: 18961.3 | 32 d: 18901.1 | 32 c: 18947.3 | 32: 18777.2
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org: Neural Magic DeepSparse 1.6; ms/batch, Fewer Is Better
32 z: 41.44 | 32 d: 41.84 | 32 c: 41.59 | 32: 41.63

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 4K

OpenBenchmarking.org: SVT-AV1 1.8; Frames Per Second, More Is Better
32 z: 184.98 | 32 d: 184.10 | 32 c: 183.90 | 32: 185.67
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-152

OpenBenchmarking.org: PyTorch 2.1; batches/sec, More Is Better
32 z: 18.92 | 32 d: 18.86 | 32 c: 18.86 | 32: 19.04
MIN / MAX: 7.59 / 19.04 | 7.91 / 19.03 | 10.78 / 19.02 | 6.89 / 19.18

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; FPS, More Is Better
32 z: 150.06 | 32 d: 151.25 | 32 c: 150.07 | 32: 151.45
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org: Neural Magic DeepSparse 1.6; items/sec, More Is Better
32 z: 182.76 | 32 d: 181.12 | 32 c: 181.10 | 32: 182.51

RocksDB

Test: Update Random

OpenBenchmarking.org: RocksDB 8.0; Op/s, More Is Better
32 z: 636242 | 32 d: 630478 | 32 c: 633688 | 32: 630575
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenBenchmarking.org: OpenVINO 2023.2.dev; ms, Fewer Is Better
32 z: 106.44 | 32 d: 105.64 | 32 c: 106.43 | 32: 105.48
MIN / MAX: 81.71 / 196.1 | 54.2 / 154.42 | 80.87 / 199.77 | 82.05 / 167.92
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-50

OpenBenchmarking.org: PyTorch 2.1; batches/sec, More Is Better
32 z: 39.96 | 32 d: 40.31 | 32 c: 40.32 | 32: 40.19
MIN / MAX: 15.13 / 40.53 | 15.27 / 40.73 | 15.51 / 40.87 | 15.55 / 40.67

7-Zip Compression

Test: Compression Rating

OpenBenchmarking.org: 7-Zip Compression 22.01; MIPS, More Is Better
32 z: 242399 | 32 d: 241191 | 32 c: 240287 | 32: 241545
1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org: Neural Magic DeepSparse 1.6; items/sec, More Is Better
32 z: 2199.49 | 32 d: 2189.07 | 32 c: 2195.92 | 32: 2208.15

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org: Neural Magic DeepSparse 1.6; ms/batch, Fewer Is Better
32 z: 7.2610 | 32 d: 7.2896 | 32 c: 7.2738 | 32: 7.2332

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org: Neural Magic DeepSparse 1.6; ms/batch, Fewer Is Better
32 z: 746.13 | 32 d: 750.40 | 32 c: 751.93 | 32: 747.31

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org: Neural Magic DeepSparse 1.6; items/sec, More Is Better
32 z: 123.79 | 32 d: 122.93 | 32 c: 123.15 | 32: 123.88

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org: Neural Magic DeepSparse 1.6; ms/batch, Fewer Is Better
32 z: 128.85 | 32 d: 129.81 | 32 c: 129.54 | 32: 128.82

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org: Neural Magic DeepSparse 1.6; ms/batch, Fewer Is Better
32 z: 129.80 | 32 d: 130.79 | 32 c: 130.48 | 32: 129.87

CacheBench

Test: Read / Modify / Write

OpenBenchmarking.org: CacheBench; MB/s, More Is Better
32 z: 87218.21 | 32 d: 87854.12 | 32 c: 87238.01 | 32: 87227.59
MIN / MAX: 65721.62 / 90703.93 | 72077.93 / 90708.03 | 65732.92 / 90706.91 | 65739.52 / 90694.35
1. (CC) gcc options: -O3 -lrt

DaCapo Benchmark

Java Test: Apache Lucene Search Index

OpenBenchmarking.org: DaCapo Benchmark 23.11; msec, Fewer Is Better
32 z: 4589 | 32 d: 4602 | 32 c: 4580 | 32: 4613

Xmrig

Variant: Monero - Hash Count: 1M

OpenBenchmarking.org: Xmrig 6.21; H/s, More Is Better
32 z: 18763.8 | 32 d: 18866.1 | 32 c: 18897.5 | 32: 18845.5
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org: Neural Magic DeepSparse 1.6; items/sec, More Is Better
32 z: 267.84 | 32 d: 266.28 | 32 c: 266.03 | 32: 266.88

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
32 z: 59.67
32 d: 60.03
32 c: 60.06
32: 59.88

Embree

Binary: Pathtracer - Model: Asian Dragon

Embree 4.3 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, More Is Better)
32 z: 41.82 (MIN: 41.6 / MAX: 42.16)
32 d: 41.56 (MIN: 41.33 / MAX: 41.84)
32 c: 41.57 (MIN: 41.37 / MAX: 41.9)
32: 41.60 (MIN: 41.36 / MAX: 41.86)

DaCapo Benchmark

Java Test: BioJava Biological Data Framework

DaCapo Benchmark 23.11 - Java Test: BioJava Biological Data Framework (msec, Fewer Is Better)
32 z: 7858
32 d: 7907
32 c: 7904
32: 7874

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
32 z: 608.13
32 d: 611.44
32 c: 611.60
32: 607.94

Llama.cpp

Model: llama-2-7b.Q4_0.gguf

Llama.cpp b1808 - Model: llama-2-7b.Q4_0.gguf (Tokens Per Second, More Is Better)
32 z: 29.90
32 d: 29.85
32 c: 29.74
32: 29.75
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -march=native -mtune=native -lopenblas

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

Embree 4.3 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, More Is Better)
32 z: 39.11 (MIN: 38.88 / MAX: 39.43)
32 d: 39.14 (MIN: 38.92 / MAX: 39.84)
32 c: 39.00 (MIN: 38.78 / MAX: 39.64)
32: 38.94 (MIN: 38.69 / MAX: 39.29)

FFmpeg

Encoder: libx265 - Scenario: Video On Demand

FFmpeg 6.1 - Encoder: libx265 - Scenario: Video On Demand (FPS, More Is Better)
32 z: 45.08
32 d: 45.10
32 c: 44.95
32: 45.18
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Live

FFmpeg 6.1 - Encoder: libx265 - Scenario: Live (FPS, More Is Better)
32 z: 110.37
32 d: 110.29
32 c: 110.02
32: 109.84
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Xmrig

Variant: CryptoNight-Femto UPX2 - Hash Count: 1M

Xmrig 6.21 - Variant: CryptoNight-Femto UPX2 - Hash Count: 1M (H/s, More Is Better)
32 z: 18909.0
32 d: 18818.6
32 c: 18887.5
32: 18860.1
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

DaCapo Benchmark

Java Test: Apache Cassandra

DaCapo Benchmark 23.11 - Java Test: Apache Cassandra (msec, Fewer Is Better)
32 z: 5938
32 d: 5927
32 c: 5955
32: 5946

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec, More Is Better)
32 z: 51.57
32 d: 51.49
32 c: 51.56
32: 51.34

TensorFlow

Device: CPU - Batch Size: 1 - Model: VGG-16

TensorFlow 2.12 - Device: CPU - Batch Size: 1 - Model: VGG-16 (images/sec, More Is Better)
32 z: 9.75
32 d: 9.75
32 c: 9.77
32: 9.73

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenVINO 2023.2.dev - Model: Person Detection FP32 - Device: CPU (FPS, More Is Better)
32 z: 150.37
32 d: 150.25
32 c: 150.84
32: 150.80
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

7-Zip Compression

Test: Decompression Rating

7-Zip Compression 22.01 - Test: Decompression Rating (MIPS, More Is Better)
32 z: 211584
32 d: 211383
32 c: 211815
32: 212209
1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenVINO 2023.2.dev - Model: Person Detection FP32 - Device: CPU (ms, Fewer Is Better)
32 z: 106.24 (MIN: 81.06 / MAX: 185.99)
32 d: 106.32 (MIN: 81.37 / MAX: 177.41)
32 c: 105.91 (MIN: 82.12 / MAX: 188.16)
32: 105.97 (MIN: 81.88 / MAX: 218.45)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

FFmpeg

Encoder: libx265 - Scenario: Upload

FFmpeg 6.1 - Encoder: libx265 - Scenario: Upload (FPS, More Is Better)
32 z: 22.20
32 d: 22.22
32 c: 22.21
32: 22.28
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Platform

FFmpeg 6.1 - Encoder: libx265 - Scenario: Platform (FPS, More Is Better)
32 z: 45.05
32 d: 44.97
32 c: 45.13
32: 45.13
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Llama.cpp

Model: llama-2-70b-chat.Q5_0.gguf

Llama.cpp b1808 - Model: llama-2-70b-chat.Q5_0.gguf (Tokens Per Second, More Is Better)
32 z: 3.41
32 d: 3.42
32 c: 3.42
32: 3.42
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -march=native -mtune=native -lopenblas

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, More Is Better)
32 z: 52475.39
32 d: 52344.60
32 c: 52382.31
32: 52441.94
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

DaCapo Benchmark

Java Test: Apache Kafka

DaCapo Benchmark 23.11 - Java Test: Apache Kafka (msec, Fewer Is Better)
32 z: 5121
32 d: 5114
32 c: 5111
32: 5110

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (ms/batch, Fewer Is Better)
32 z: 59.87
32 d: 59.97
32 c: 59.90
32: 59.86

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (items/sec, More Is Better)
32 z: 266.98
32 d: 266.53
32 c: 266.84
32: 266.86

DaCapo Benchmark

Java Test: jMonkeyEngine

DaCapo Benchmark 23.11 - Java Test: jMonkeyEngine (msec, Fewer Is Better)
32 z: 6917
32 d: 6916
32 c: 6917
32: 6914

CacheBench

Test: Write

CacheBench - Test: Write (MB/s, More Is Better)
32 z: 45646.82 (MIN: 45482.27 / MAX: 45698.03)
32 d: 45643.04 (MIN: 45482.26 / MAX: 45696.12)
32 c: 45645.09 (MIN: 45483.02 / MAX: 45696.19)
32: 45646.09 (MIN: 45484.29 / MAX: 45698.11)
1. (CC) gcc options: -O3 -lrt

CacheBench

Test: Read

CacheBench - Test: Read (MB/s, More Is Better)
32 z: 7616.33 (MIN: 7615.95 / MAX: 7616.74)
32 d: 7615.83 (MIN: 7615.4 / MAX: 7616.44)
32 c: 7615.95 (MIN: 7615.46 / MAX: 7616.35)
32: 7616.09 (MIN: 7615.65 / MAX: 7616.54)
1. (CC) gcc options: -O3 -lrt

Meta Performance Per Watts

Performance Per Watts

Meta Performance Per Watts - Performance Per Watts (Performance Per Watts, More Is Better)
Zen 1 - EPYC 7601: 13064001.66

CPU Power Consumption Monitor

Phoronix Test Suite System Monitoring

CPU Power Consumption Monitor - Phoronix Test Suite System Monitoring (Watts)
Zen 1 - EPYC 7601: Min: 242.58 / Avg: 585.92 / Max: 718

Y-Cruncher

CPU Power Consumption Monitor

Y-Cruncher 0.8.3 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
Zen 1 - EPYC 7601: Min: 263 / Avg: 602 / Max: 718

Y-Cruncher

CPU Power Consumption Monitor

Y-Cruncher 0.8.3 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
Zen 1 - EPYC 7601: Min: 262 / Avg: 543 / Max: 712

Quicksilver

CPU Power Consumption Monitor

Quicksilver 20230818 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
Zen 1 - EPYC 7601: Min: 258.88 / Avg: 624.15 / Max: 662.04

Quicksilver

Input: CTS2

Quicksilver 20230818 - Input: CTS2 (Figure Of Merit Per Watt, More Is Better)
Zen 1 - EPYC 7601: 18307.66

Quicksilver

CPU Power Consumption Monitor

Quicksilver 20230818 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
Zen 1 - EPYC 7601: Min: 255.2 / Avg: 553.7 / Max: 594.9

Quicksilver

Input: CORAL2 P2

Quicksilver 20230818 - Input: CORAL2 P2 (Figure Of Merit Per Watt, More Is Better)
Zen 1 - EPYC 7601: 27116.87

Quicksilver

CPU Power Consumption Monitor

Quicksilver 20230818 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
Zen 1 - EPYC 7601: Min: 243 / Avg: 584 / Max: 648

Quicksilver

Input: CORAL2 P1

Quicksilver 20230818 - Input: CORAL2 P1 (Figure Of Merit Per Watt, More Is Better)
Zen 1 - EPYC 7601: 22248.55
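The per-watt figures above pair a raw benchmark result with the power samples collected by the CPU power consumption monitor. As a rough illustration of that relationship only (a minimal sketch, not the Phoronix Test Suite's internal code; the function name per_watt and the sample numbers are invented for the example), a per-watt figure is the raw result divided by the average measured power draw over the run:

# Minimal sketch: derive a "per watt" figure from a raw result and sampled power readings.
# Illustrative only; names and numbers are hypothetical, not taken from the results above.

def per_watt(result: float, power_samples_w: list[float]) -> float:
    """Divide a 'more is better' result by the average sampled power in Watts."""
    avg_power = sum(power_samples_w) / len(power_samples_w)
    return result / avg_power

# Example with made-up numbers: a figure of merit of 15,000,000 at an
# average draw of 600 W yields 25,000 figure-of-merit per Watt.
print(per_watt(15_000_000, [590.0, 600.0, 610.0]))  # -> 25000.0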


Phoronix Test Suite v10.8.5