new-tests

Tests for a future article. AMD EPYC 8324P 32-Core testing with an AMD Cinnabar (RCB1009C BIOS) motherboard and ASPEED graphics on Ubuntu 23.10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2401110-NE-NEWTESTS900&grr&sro.
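For reference, a public result like this can normally be pulled down or re-run locally with the Phoronix Test Suite client using the result ID from the URL above. A minimal sketch, assuming phoronix-test-suite is installed and the result ID remains publicly accessible:

  # download the uploaded result file for local viewing and comparison
  phoronix-test-suite clone 2401110-NE-NEWTESTS900
  # or run the same test selection on local hardware, merging against the public result
  phoronix-test-suite benchmark 2401110-NE-NEWTESTS900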

new-tests system configurations (Processor / Motherboard / Chipset / Memory / Disk / Graphics / Monitor / Network / OS / Kernel / Desktop / Display Server / OpenGL / Compiler / File-System / Screen Resolution):

Zen 1 - EPYC 7601: AMD EPYC 7601 32-Core @ 2.20GHz (32 Cores / 64 Threads), TYAN B8026T70AE24HR (V1.02.B10 BIOS), AMD 17h chipset, 128GB memory, 280GB INTEL SSDPE21D280GA + 1000GB INTEL SSDPE2KX010T8, llvmpipe graphics, VE228 monitor, 2 x Broadcom NetXtreme BCM5720 PCIe, Ubuntu 23.10, 6.6.9-060609-generic kernel (x86_64), GNOME Shell 45.0, X Server 1.21.1.7, OpenGL 4.5 Mesa 23.2.1-1ubuntu3.1 (LLVM 15.0.7 256 bits), GCC 13.2.0, ext4, 1920x1080 screen resolution.

b: AMD EPYC 8534PN 64-Core @ 2.00GHz (64 Cores / 128 Threads), AMD Cinnabar (RCB1009C BIOS), AMD Device 14a4 chipset, 6 x 32 GB DRAM-4800MT/s Samsung M321R4GA0BB0-CQKMG, 1000GB INTEL SSDPE2KX010T8, 1920x1200 screen resolution; remaining components as listed above.

c: AMD EPYC 8534PN 32-Core @ 2.05GHz (32 Cores / 64 Threads), ASPEED graphics; remaining components as listed above.

32, 32 z, 32 c, 32 d: AMD EPYC 8324P 32-Core @ 2.65GHz (32 Cores / 64 Threads); remaining components as listed above.

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details:
- Zen 1 - EPYC 7601: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0x800126e
- b, c, 32, 32 z, 32 c, 32 d: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xaa00212

Security Details:
- Zen 1 - EPYC 7601: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT vulnerable + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: disabled RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
- b, c, 32, 32 z, 32 c, 32 d (identical on all six): gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Java Details:
- 32, 32 z, 32 c, 32 d: OpenJDK Runtime Environment (build 11.0.21+9-post-Ubuntu-0ubuntu123.10)

Python Details:
- 32, 32 z, 32 c, 32 d: Python 3.11.6

[Consolidated new-tests result table covering every test across the Zen 1 - EPYC 7601, b, c, 32, 32 z, 32 c and 32 d configurations; the same results are broken out per test in the sections that follow.]

Quicksilver

Input: CTS2

Quicksilver 20230818 (Figure Of Merit, More Is Better): 32: 14320000; 32 c: 14430000; 32 d: 14280000; 32 z: 14290000; Zen 1 - EPYC 7601: 11426667; b: 16270000; c: 16260000. SE +/- 16666.67, N = 3. 1. (CXX) g++ options: -fopenmp -O3 -march=native

Timed Linux Kernel Compilation

Build: allmodconfig

Timed Linux Kernel Compilation 6.1 (Seconds, Fewer Is Better): 32: 433.79; 32 c: 453.69; 32 d: 452.61; 32 z: 434.19

Blender

Blend File: Barbershop - Compute: CPU-Only

Blender 4.0 (Seconds, Fewer Is Better): 32: 410.61; 32 c: 426.30; 32 d: 426.37; 32 z: 410.43

Quicksilver

Input: CORAL2 P2

Quicksilver 20230818 (Figure Of Merit, More Is Better): 32: 15350000; 32 c: 15180000; 32 d: 15100000; 32 z: 15230000; Zen 1 - EPYC 7601: 15013333; b: 16140000; c: 16150000. SE +/- 37118.43, N = 3. 1. (CXX) g++ options: -fopenmp -O3 -march=native

PyTorch

Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l

PyTorch 2.1 (batches/sec, More Is Better): 32: 7.17 (MIN: 4.45 / MAX: 7.33); 32 c: 7.18 (MIN: 4.37 / MAX: 7.37); 32 d: 7.15 (MIN: 4.34 / MAX: 7.3); 32 z: 7.11 (MIN: 4.25 / MAX: 7.26)

Timed Gem5 Compilation

Time To Compile

Timed Gem5 Compilation 23.0.1 (Seconds, Fewer Is Better): 32: 254.01; 32 c: 258.31; 32 d: 258.93; 32 z: 272.61

Xmrig

Variant: GhostRider - Hash Count: 1M

Xmrig 6.21 (H/s, More Is Better): 32: 4067.4; 32 c: 4136.3; 32 d: 4095.7; 32 z: 4038.6. 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Quicksilver

Input: CORAL2 P1

Quicksilver 20230818 (Figure Of Merit, More Is Better): 32: 18790000; 32 c: 1040000; 32 d: 18840000; 32 z: 18760000; Zen 1 - EPYC 7601: 12996667; b: 21180000; c: 21250000. SE +/- 66916.20, N = 3. 1. (CXX) g++ options: -fopenmp -O3 -march=native

FFmpeg

Encoder: libx265 - Scenario: Upload

FFmpeg 6.1 (FPS, More Is Better): 32: 22.28; 32 c: 22.21; 32 d: 22.22; 32 z: 22.20. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Platform

FFmpeg 6.1 (FPS, More Is Better): 32: 45.13; 32 c: 45.13; 32 d: 44.97; 32 z: 45.05. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Video On Demand

FFmpeg 6.1 (FPS, More Is Better): 32: 45.18; 32 c: 44.95; 32 d: 45.10; 32 z: 45.08. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OSPRay Studio

Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 (ms, Fewer Is Better): 32: 136464; 32 c: 139685; 32 d: 139445; 32 z: 136312

Llama.cpp

Model: llama-2-70b-chat.Q5_0.gguf

Llama.cpp b1808 (Tokens Per Second, More Is Better): 32: 3.42; 32 c: 3.42; 32 d: 3.42; 32 z: 3.41. 1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -march=native -mtune=native -lopenblas

Blender

Blend File: Pabellon Barcelona - Compute: CPU-Only

Blender 4.0 (Seconds, Fewer Is Better): 32: 139.09; 32 c: 148.74; 32 d: 148.56; 32 z: 138.60

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-152

PyTorch 2.1 (batches/sec, More Is Better): 32: 15.61 (MIN: 6.89 / MAX: 15.74); 32 c: 15.32 (MIN: 6.91 / MAX: 15.45); 32 d: 15.35 (MIN: 8.86 / MAX: 15.52); 32 z: 15.51 (MIN: 7.3 / MAX: 15.63)

OSPRay Studio

Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 (ms, Fewer Is Better): 32: 116566; 32 c: 118980; 32 d: 119783; 32 z: 116972

OSPRay Studio

Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 (ms, Fewer Is Better): 32: 116377; 32 c: 118221; 32 d: 118802; 32 z: 115669

CacheBench

Test: Read / Modify / Write

CacheBench (MB/s, More Is Better): 32: 87227.59 (MIN: 65739.52 / MAX: 90694.35); 32 c: 87238.01 (MIN: 65732.92 / MAX: 90706.91); 32 d: 87854.12 (MIN: 72077.93 / MAX: 90708.03); 32 z: 87218.21 (MIN: 65721.62 / MAX: 90703.93). 1. (CC) gcc options: -O3 -lrt

CacheBench

Test: Write

CacheBench (MB/s, More Is Better): 32: 45646.09 (MIN: 45484.29 / MAX: 45698.11); 32 c: 45645.09 (MIN: 45483.02 / MAX: 45696.19); 32 d: 45643.04 (MIN: 45482.26 / MAX: 45696.12); 32 z: 45646.82 (MIN: 45482.27 / MAX: 45698.03). 1. (CC) gcc options: -O3 -lrt

CacheBench

Test: Read

CacheBench (MB/s, More Is Better): 32: 7616.09 (MIN: 7615.65 / MAX: 7616.54); 32 c: 7615.95 (MIN: 7615.46 / MAX: 7616.35); 32 d: 7615.83 (MIN: 7615.4 / MAX: 7616.44); 32 z: 7616.33 (MIN: 7615.95 / MAX: 7616.74). 1. (CC) gcc options: -O3 -lrt

Blender

Blend File: Classroom - Compute: CPU-Only

Blender 4.0 (Seconds, Fewer Is Better): 32: 112.03; 32 c: 119.72; 32 d: 119.57; 32 z: 112.09

PyTorch

Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l

PyTorch 2.1 (batches/sec, More Is Better): 32: 9.85 (MIN: 5.1 / MAX: 9.99); 32 c: 10.04 (MIN: 5.86 / MAX: 10.23); 32 d: 10.21 (MIN: 5.69 / MAX: 10.32); 32 z: 9.82 (MIN: 5.63 / MAX: 10.05)

OpenFOAM

Input: drivaerFastback, Small Mesh Size - Execution Time

OpenFOAM 10 (Seconds, Fewer Is Better): 32: 72.81; 32 c: 72.38; 32 d: 72.31; 32 z: 71.20. 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OpenFOAM

Input: drivaerFastback, Small Mesh Size - Mesh Time

OpenFOAM 10 (Seconds, Fewer Is Better): 32: 28.37; 32 c: 30.54; 32 d: 30.72; 32 z: 30.75. 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OSPRay Studio

Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 (ms, Fewer Is Better): 32: 71361; 32 c: 73024; 32 d: 73329; 32 z: 71495

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): 32: 607.94; 32 c: 611.60; 32 d: 611.44; 32 z: 608.13

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): 32: 26.06; 32 c: 25.82; 32 d: 25.79; 32 z: 26.08

OSPRay Studio

Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 (ms, Fewer Is Better): 32: 4049; 32 c: 4157; 32 d: 4132; 32 z: 4048

OSPRay Studio

Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 (ms, Fewer Is Better): 32: 3451; 32 c: 3515; 32 d: 3522; 32 z: 3446

OSPRay Studio

Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 (ms, Fewer Is Better): 32: 3404; 32 c: 3493; 32 d: 3499; 32 z: 3406

OSPRay Studio

Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 (ms, Fewer Is Better): 32: 61987; 32 c: 63402; 32 d: 62787; 32 z: 62113

OSPRay Studio

Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OSPRay Studio 0.13 (ms, Fewer Is Better): 32: 60673; 32 c: 62802; 32 d: 63336; 32 z: 61430

TensorFlow

Device: CPU - Batch Size: 16 - Model: VGG-16

TensorFlow 2.12 (images/sec, More Is Better): 32: 25.15; 32 c: 24.47; 32 d: 24.51; 32 z: 25.20

FFmpeg

Encoder: libx265 - Scenario: Live

FFmpeg 6.1 (FPS, More Is Better): 32: 109.84; 32 c: 110.02; 32 d: 110.29; 32 z: 110.37. 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better): 32: 929.23 (MIN: 907.01 / MAX: 1013.02); 32 c: 964.20 (MIN: 905.78 / MAX: 1053.38); 32 d: 965.35 (MIN: 922.7 / MAX: 1047.5); 32 z: 927.57 (MIN: 895.6 / MAX: 1019.94). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better): 32: 17.17; 32 c: 16.51; 32 d: 16.54; 32 z: 17.18. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better): 32: 486.65 (MIN: 465.68 / MAX: 570.73); 32 c: 510.79 (MIN: 473.86 / MAX: 584.54); 32 d: 510.90 (MIN: 470.7 / MAX: 595.97); 32 z: 486.03 (MIN: 454.31 / MAX: 580.9). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better): 32: 32.82; 32 c: 31.22; 32 d: 31.20; 32 z: 32.81. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

QuantLib

Configuration: Multi-Threaded

QuantLib 1.32 (MFLOPS, More Is Better): 32: 107079.2; 32 c: 98916.2; 32 d: 98618.7; 32 z: 107381.6. 1. (CXX) g++ options: -O3 -march=native -fPIE -pie

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-152

PyTorch 2.1 (batches/sec, More Is Better): 32: 19.04 (MIN: 6.89 / MAX: 19.18); 32 c: 18.86 (MIN: 10.78 / MAX: 19.02); 32 d: 18.86 (MIN: 7.91 / MAX: 19.03); 32 z: 18.92 (MIN: 7.59 / MAX: 19.04)

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better): 32: 105.48 (MIN: 82.05 / MAX: 167.92); 32 c: 106.43 (MIN: 80.87 / MAX: 199.77); 32 d: 105.64 (MIN: 54.2 / MAX: 154.42); 32 z: 106.44 (MIN: 81.71 / MAX: 196.1). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better): 32: 151.45; 32 c: 150.07; 32 d: 151.25; 32 z: 150.06. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better): 32: 105.97 (MIN: 81.88 / MAX: 218.45); 32 c: 105.91 (MIN: 82.12 / MAX: 188.16); 32 d: 106.32 (MIN: 81.37 / MAX: 177.41); 32 z: 106.24 (MIN: 81.06 / MAX: 185.99). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better): 32: 150.80; 32 c: 150.84; 32 d: 150.25; 32 z: 150.37. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better): 32: 79.82 (MIN: 42.02 / MAX: 179.47); 32 c: 82.18 (MIN: 58.39 / MAX: 175.7); 32 d: 81.87 (MIN: 52.13 / MAX: 175.84); 32 z: 79.39 (MIN: 43.97 / MAX: 186.13). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better): 32: 199.90; 32 c: 194.21; 32 d: 195.05; 32 z: 201.15. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better): 32: 9.12 (MIN: 6.22 / MAX: 56.95); 32 c: 9.39 (MIN: 5.95 / MAX: 68.66); 32 d: 9.37 (MIN: 6.07 / MAX: 71.06); 32 z: 9.16 (MIN: 5.99 / MAX: 67.91). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better): 32: 1741.57; 32 c: 1694.01; 32 d: 1696.50; 32 z: 1735.64. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better): 32: 23.95 (MIN: 13.94 / MAX: 114.01); 32 c: 25.22 (MIN: 21.61 / MAX: 89.16); 32 d: 25.16 (MIN: 19.24 / MAX: 86.7); 32 z: 23.95 (MIN: 15.19 / MAX: 90.71). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better): 32: 666.22; 32 c: 632.92; 32 d: 634.50; 32 z: 666.30. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Road Segmentation ADAS FP16 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better): 32: 27.69 (MIN: 18.56 / MAX: 147.54); 32 c: 28.77 (MIN: 17.12 / MAX: 135.79); 32 d: 28.82 (MIN: 19.39 / MAX: 99.16); 32 z: 27.53 (MIN: 18.86 / MAX: 82.58). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Road Segmentation ADAS FP16 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better): 32: 576.18; 32 c: 554.68; 32 d: 553.65; 32 z: 579.41. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better): 32: 5.41 (MIN: 3.17 / MAX: 57.08); 32 c: 5.78 (MIN: 3.21 / MAX: 58.78); 32 d: 5.78 (MIN: 3.37 / MAX: 65.27); 32 z: 5.42 (MIN: 3.15 / MAX: 67.23). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better): 32: 5747.65; 32 c: 5416.31; 32 d: 5423.13; 32 z: 5751.58. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better): 32: 42.87 (MIN: 35.14 / MAX: 107.5); 32 c: 46.17 (MIN: 39.81 / MAX: 161.92); 32 d: 46.28 (MIN: 30.15 / MAX: 108.49); 32 z: 43.71 (MIN: 35.06 / MAX: 153.84). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better): 32: 745.00; 32 c: 692.02; 32 d: 690.24; 32 z: 730.82. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Handwritten English Recognition FP16 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better): 32: 35.51 (MIN: 22.8 / MAX: 100.53); 32 c: 37.40 (MIN: 27.33 / MAX: 92.33); 32 d: 37.61 (MIN: 24.11 / MAX: 127.49); 32 z: 35.59 (MIN: 24.72 / MAX: 147.24). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Handwritten English Recognition FP16 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better): 32: 898.60; 32 c: 853.38; 32 d: 848.62; 32 z: 896.69. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better): 32: 8.07 (MIN: 4.55 / MAX: 69.24); 32 c: 8.52 (MIN: 4.97 / MAX: 67.6); 32 d: 8.52 (MIN: 4.8 / MAX: 75.53); 32 z: 8.05 (MIN: 4.56 / MAX: 76.48). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better): 32: 1960.18; 32 c: 1860.99; 32 d: 1862.24; 32 z: 1964.99. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better): 32: 0.48 (MIN: 0.27 / MAX: 50.11); 32 c: 0.48 (MIN: 0.27 / MAX: 50.17); 32 d: 0.48 (MIN: 0.27 / MAX: 65.55); 32 z: 0.47 (MIN: 0.27 / MAX: 64.47). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better): 32: 52441.94; 32 c: 52382.31; 32 d: 52344.60; 32 z: 52475.39. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better): 32: 0.65 (MIN: 0.36 / MAX: 51.48); 32 c: 0.67 (MIN: 0.36 / MAX: 62.87); 32 d: 0.67 (MIN: 0.36 / MAX: 50.74); 32 z: 0.66 (MIN: 0.36 / MAX: 65.79). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better): 32: 40123.62; 32 c: 39562.87; 32 d: 39843.05; 32 z: 40101.80. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better): 32: 13.36 (MIN: 7.26 / MAX: 78.85); 32 c: 13.65 (MIN: 9.08 / MAX: 67.03); 32 d: 13.65 (MIN: 6.73 / MAX: 75.18); 32 z: 13.29 (MIN: 8.3 / MAX: 73.59). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better): 32: 1190.42; 32 c: 1166.56; 32 d: 1166.83; 32 z: 1197.46. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better): 32: 18.69 (MIN: 9.97 / MAX: 81.33); 32 c: 19.58 (MIN: 10.24 / MAX: 83.63); 32 d: 19.56 (MIN: 13.73 / MAX: 73.6); 32 z: 18.69 (MIN: 9.78 / MAX: 86.93). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better): 32: 1704.26; 32 c: 1627.93; 32 d: 1628.91; 32 z: 1704.02. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Speedb

Test: Update Random

Speedb 2.7 (Op/s, More Is Better): 32: 314123; 32 c: 317758; 32 d: 313683; 32 z: 314114. 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenVINO

Model: Face Detection Retail FP16 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better): 32: 3.91 (MIN: 2.2 / MAX: 72.73); 32 c: 4.03 (MIN: 2.23 / MAX: 54.09); 32 d: 4.03 (MIN: 2.23 / MAX: 62.26); 32 z: 3.90 (MIN: 2.18 / MAX: 64.81). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection Retail FP16 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better): 32: 3921.50; 32 c: 3877.91; 32 d: 3869.70; 32 z: 3924.86. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better): 32: 9.56 (MIN: 5.1 / MAX: 77.12); 32 c: 10.22 (MIN: 5.48 / MAX: 68.07); 32 d: 10.21 (MIN: 5.17 / MAX: 61.15); 32 z: 9.56 (MIN: 5.09 / MAX: 75.37). 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better): 32: 3300.99; 32 c: 3099.20; 32 d: 3100.95; 32 z: 3299.93. 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Speedb

Test: Read While Writing

Speedb 2.7 (Op/s, More Is Better): 32: 7457600; 32 c: 7746346; 32 d: 7105602; 32 z: 7210235. 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Update Random

RocksDB 8.0 (Op/s, More Is Better): 32: 630575; 32 c: 633688; 32 d: 630478; 32 z: 636242. 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Read Random Write Random

Speedb 2.7 (Op/s, More Is Better): 32: 2231403; 32 c: 2229494; 32 d: 2215896; 32 z: 2259344. 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Random Read

Speedb 2.7 (Op/s, More Is Better): 32: 179685954; 32 c: 163202721; 32 d: 163512432; 32 z: 179434924. 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Read Random Write Random

RocksDB 8.0 (Op/s, More Is Better): 32: 2373654; 32 c: 2327800; 32 d: 2351568; 32 z: 2361270. 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Read While Writing

RocksDB 8.0 (Op/s, More Is Better): 32: 4284691; 32 c: 4419497; 32 d: 4244478; 32 z: 4364996. 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Random Read

RocksDB 8.0 (Op/s, More Is Better): 32: 176770468; 32 c: 160665305; 32 d: 160707812; 32 z: 177167636. 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

DaCapo Benchmark

Java Test: Apache Cassandra

DaCapo Benchmark 23.11 (msec, Fewer Is Better): 32: 5946; 32 c: 5955; 32 d: 5927; 32 z: 5938

Blender

Blend File: Fishy Cat - Compute: CPU-Only

Blender 4.0 (Seconds, Fewer Is Better): 32: 55.65; 32 c: 59.58; 32 d: 59.79; 32 z: 55.54

Xmrig

Variant: Monero - Hash Count: 1M

Xmrig 6.21 (H/s, More Is Better): 32: 18845.5; 32 c: 18897.5; 32 d: 18866.1; 32 z: 18763.8. 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: CryptoNight-Femto UPX2 - Hash Count: 1M

Xmrig 6.21 (H/s, More Is Better): 32: 18860.1; 32 c: 18887.5; 32 d: 18818.6; 32 z: 18909.0. 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: KawPow - Hash Count: 1M

Xmrig 6.21 (H/s, More Is Better): 32: 18777.2; 32 c: 18947.3; 32 d: 18901.1; 32 z: 18961.3. 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: CryptoNight-Heavy - Hash Count: 1M

Xmrig 6.21 (H/s, More Is Better): 32: 19004.5; 32 c: 18783.9; 32 d: 18924.0; 32 z: 18936.5. 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Timed Linux Kernel Compilation

Build: defconfig

Timed Linux Kernel Compilation 6.1 (Seconds, Fewer Is Better): 32: 52.13; 32 c: 53.62; 32 d: 53.63; 32 z: 52.01

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-50

PyTorch 2.1 (batches/sec, More Is Better): 32: 40.19 (MIN: 15.55 / MAX: 40.67); 32 c: 40.32 (MIN: 15.51 / MAX: 40.87); 32 d: 40.31 (MIN: 15.27 / MAX: 40.73); 32 z: 39.96 (MIN: 15.13 / MAX: 40.53)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): 32: 19.11; 32 c: 19.58; 32 d: 19.59; 32 z: 19.13

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): 32: 836.42; 32 c: 816.28; 32 d: 815.98; 32 z: 835.26

DaCapo Benchmark

Java Test: Eclipse

DaCapo Benchmark 23.11 (msec, Fewer Is Better): 32: 12656; 32 c: 12826; 32 d: 12768; 32 z: 12735

Blender

Blend File: BMW27 - Compute: CPU-Only

Blender 4.0 (Seconds, Fewer Is Better): 32: 44.73; 32 c: 47.52; 32 d: 47.41; 32 z: 44.48

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): 32: 747.07; 32 c: 753.12; 32 d: 751.21; 32 z: 745.18

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): 32: 21.29; 32 c: 20.87; 32 d: 21.09; 32 z: 21.27

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): 32: 41.63; 32 c: 41.59; 32 d: 41.84; 32 z: 41.44

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): 32: 383.97; 32 c: 384.32; 32 d: 381.78; 32 z: 385.65

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): 32: 747.31; 32 c: 751.93; 32 d: 750.40; 32 z: 746.13

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): 32: 21.23; 32 c: 21.04; 32 d: 21.07; 32 z: 21.29

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): 32: 396.29; 32 c: 411.34; 32 d: 410.33; 32 z: 397.96

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): 32: 40.17; 32 c: 38.77; 32 d: 38.83; 32 z: 39.95

DaCapo Benchmark

Java Test: Apache Lucene Search Index

DaCapo Benchmark 23.11 (msec, Fewer Is Better): 32: 4613; 32 c: 4580; 32 d: 4602; 32 z: 4589

Xmrig

Variant: Wownero - Hash Count: 1M

Xmrig 6.21 (H/s, More Is Better): 32: 25814.4; 32 c: 25385.9; 32 d: 25396.8; 32 z: 25943.7. 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

DaCapo Benchmark

Java Test: H2 Database Engine

DaCapo Benchmark 23.11 (msec, Fewer Is Better): 32: 2675; 32 c: 2773; 32 d: 2634; 32 z: 2655

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): 32: 87.49; 32 c: 88.20; 32 d: 88.23; 32 z: 87.33

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): 32: 182.51; 32 c: 181.10; 32 d: 181.12; 32 z: 182.76

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

TensorFlow 2.12 (images/sec, More Is Better): 32: 51.34; 32 c: 51.56; 32 d: 51.49; 32 z: 51.57

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): 32: 129.87; 32 c: 130.48; 32 d: 130.79; 32 z: 129.80

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): 32: 123.02; 32 c: 122.33; 32 d: 121.80; 32 z: 122.96

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): 32: 128.82; 32 c: 129.54; 32 d: 129.81; 32 z: 128.85

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): 32: 123.88; 32 c: 123.15; 32 d: 122.93; 32 z: 123.79

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): 32: 59.86; 32 c: 59.90; 32 d: 59.97; 32 z: 59.87

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): 32: 266.86; 32 c: 266.84; 32 d: 266.53; 32 z: 266.98

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): 32: 7.2332; 32 c: 7.2738; 32 d: 7.2896; 32 z: 7.2610

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): 32: 2208.15; 32 c: 2195.92; 32 d: 2189.07; 32 z: 2199.49

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better): 32: 59.88; 32 c: 60.06; 32 d: 60.03; 32 z: 59.67

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better): 32: 266.88; 32 c: 266.03; 32 d: 266.28; 32 z: 267.84

DaCapo Benchmark

Java Test: Tradebeans

DaCapo Benchmark 23.11 (msec, Fewer Is Better): 32: 8561; 32 c: 8520; 32 d: 8380; 32 z: 8600

7-Zip Compression

Test: Decompression Rating

7-Zip Compression 22.01 (MIPS, More Is Better): 32: 212209; 32 c: 211815; 32 d: 211383; 32 z: 211584. 1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression

Test: Compression Rating

7-Zip Compression 22.01 (MIPS, More Is Better): 32: 241545; 32 c: 240287; 32 d: 241191; 32 z: 242399. 1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 4K

SVT-AV1 1.8 (Frames Per Second, More Is Better): 32: 5.801; 32 c: 5.829; 32 d: 5.977; 32 z: 5.899. 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Embree

Binary: Pathtracer - Model: Asian Dragon Obj

Embree 4.3 (Frames Per Second, More Is Better): 32: 37.28 (MIN: 37.09 / MAX: 37.7); 32 c: 37.44 (MIN: 37.24 / MAX: 37.71); 32 d: 37.41 (MIN: 37.22 / MAX: 37.69); 32 z: 36.86 (MIN: 36.67 / MAX: 37.11)

Llama.cpp

Model: llama-2-13b.Q4_0.gguf

Llama.cpp b1808 (Tokens Per Second, More Is Better): 32: 17.94; 32 c: 17.87; 32 d: 18.08; 32 z: 17.87. 1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -march=native -mtune=native -lopenblas

Y-Cruncher

Pi Digits To Calculate: 1B

Y-Cruncher 0.8.3 (Seconds, Fewer Is Better): 32: 11.68; 32 c: 11.90; 32 d: 11.98; 32 z: 11.60; Zen 1 - EPYC 7601: 33.92; b: 10.42; c: 10.48. SE +/- 0.09, N = 3

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

Embree 4.3 (Frames Per Second, More Is Better): 32: 38.94 (MIN: 38.69 / MAX: 39.29); 32 c: 39.00 (MIN: 38.78 / MAX: 39.64); 32 d: 39.14 (MIN: 38.92 / MAX: 39.84); 32 z: 39.11 (MIN: 38.88 / MAX: 39.43)

DaCapo Benchmark

Java Test: Tradesoap

DaCapo Benchmark 23.11 (msec, Fewer Is Better): 32: 5403; 32 c: 5366; 32 d: 5149; 32 z: 5168

DaCapo Benchmark

Java Test: BioJava Biological Data Framework

DaCapo Benchmark 23.11 (msec, Fewer Is Better): 32: 7874; 32 c: 7904; 32 d: 7907; 32 z: 7858

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-50

PyTorch 2.1 (batches/sec, More Is Better): 32: 52.44 (MIN: 15.02 / MAX: 53.14); 32 c: 53.00 (MIN: 50.62 / MAX: 53.51); 32 d: 53.30 (MIN: 50.97 / MAX: 53.84); 32 z: 52.78 (MIN: 17.43 / MAX: 53.32)

Timed FFmpeg Compilation

Time To Compile

Timed FFmpeg Compilation 6.1 - Time To Compile (Seconds, Fewer Is Better)
32: 23.56 | 32 c: 24.45 | 32 d: 24.30 | 32 z: 23.76

DaCapo Benchmark

Java Test: Jython

DaCapo Benchmark 23.11 - Java Test: Jython (msec, Fewer Is Better)
32: 6703 | 32 c: 6865 | 32 d: 6769 | 32 z: 6773

DaCapo Benchmark

Java Test: jMonkeyEngine

DaCapo Benchmark 23.11 - Java Test: jMonkeyEngine (msec, Fewer Is Better)
32: 6914 | 32 c: 6917 | 32 d: 6916 | 32 z: 6917

DaCapo Benchmark

Java Test: GraphChi

DaCapo Benchmark 23.11 - Java Test: GraphChi (msec, Fewer Is Better)
32: 3536 | 32 c: 3538 | 32 d: 3656 | 32 z: 3630

Embree

Binary: Pathtracer - Model: Crown

Embree 4.3 - Binary: Pathtracer - Model: Crown (Frames Per Second, More Is Better)
32: 36.96 (MIN: 36.61 / MAX: 37.43) | 32 c: 35.91 (MIN: 35.53 / MAX: 37.08) | 32 d: 36.28 (MIN: 35.88 / MAX: 37.13) | 32 z: 37.25 (MIN: 36.89 / MAX: 37.75)

Embree

Binary: Pathtracer ISPC - Model: Crown

Embree 4.3 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, More Is Better)
32: 37.30 (MIN: 36.86 / MAX: 38.04) | 32 c: 37.00 (MIN: 36.53 / MAX: 38.11) | 32 d: 36.94 (MIN: 36.46 / MAX: 37.76) | 32 z: 37.68 (MIN: 37.25 / MAX: 38.37)

DaCapo Benchmark

Java Test: H2O In-Memory Platform For Machine Learning

DaCapo Benchmark 23.11 - Java Test: H2O In-Memory Platform For Machine Learning (msec, Fewer Is Better)
32: 3974 | 32 c: 3979 | 32 d: 3755 | 32 z: 3868

Llama.cpp

Model: llama-2-7b.Q4_0.gguf

Llama.cpp b1808 - Model: llama-2-7b.Q4_0.gguf (Tokens Per Second, More Is Better)
32: 29.75 | 32 c: 29.74 | 32 d: 29.85 | 32 z: 29.90
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -march=native -mtune=native -lopenblas

DaCapo Benchmark

Java Test: Apache Kafka

DaCapo Benchmark 23.11 - Java Test: Apache Kafka (msec, Fewer Is Better)
32: 5110 | 32 c: 5111 | 32 d: 5114 | 32 z: 5121

Embree

Binary: Pathtracer - Model: Asian Dragon

Embree 4.3 - Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, More Is Better)
32: 41.60 (MIN: 41.36 / MAX: 41.86) | 32 c: 41.57 (MIN: 41.37 / MAX: 41.9) | 32 d: 41.56 (MIN: 41.33 / MAX: 41.84) | 32 z: 41.82 (MIN: 41.6 / MAX: 42.16)

TensorFlow

Device: CPU - Batch Size: 1 - Model: ResNet-50

TensorFlow 2.12 - Device: CPU - Batch Size: 1 - Model: ResNet-50 (images/sec, More Is Better)
32: 8.74 | 32 c: 8.61 | 32 d: 8.59 | 32 z: 8.77

DaCapo Benchmark

Java Test: Avrora AVR Simulation Framework

DaCapo Benchmark 23.11 - Java Test: Avrora AVR Simulation Framework (msec, Fewer Is Better)
32: 5613 | 32 c: 5561 | 32 d: 5572 | 32 z: 5441

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

Embree 4.3 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, More Is Better)
32: 45.94 (MIN: 45.66 / MAX: 46.38) | 32 c: 45.46 (MIN: 45.22 / MAX: 46.6) | 32 d: 45.65 (MIN: 45.37 / MAX: 46.89) | 32 z: 46.31 (MIN: 46.05 / MAX: 46.74)

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 4K

SVT-AV1 1.8 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
32: 48.45 | 32 c: 47.25 | 32 d: 58.64 | 32 z: 58.72
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Y-Cruncher

Pi Digits To Calculate: 500M

Y-Cruncher 0.8.3 - Pi Digits To Calculate: 500M (Seconds, Fewer Is Better)
32: 5.656 | 32 c: 5.783 | 32 d: 5.751 | 32 z: 5.685 | Zen 1 - EPYC 7601: 15.693 | b: 5.202 | c: 5.213
SE +/- 0.118, N = 3
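The "SE +/- 0.118, N = 3" annotation indicates a standard error computed over three benchmark runs. As a point of reference only, a standard error of the mean is the sample standard deviation divided by the square root of the run count; the Python sketch below uses invented per-run times, since the individual run samples behind this figure are not part of this export.

import math
import statistics

# Hypothetical per-run times in seconds for a 3-run benchmark; these are NOT
# the actual samples behind the reported "SE +/- 0.118, N = 3" value.
runs = [5.56, 5.70, 5.79]

n = len(runs)
mean = statistics.mean(runs)
stdev = statistics.stdev(runs)   # sample standard deviation (N - 1 denominator)
se = stdev / math.sqrt(n)        # standard error of the mean

print(f"N = {n}, mean = {mean:.3f} s, SE +/- {se:.3f}")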

DaCapo Benchmark

Java Test: Spring Boot

DaCapo Benchmark 23.11 - Java Test: Spring Boot (msec, Fewer Is Better)
32: 2444 | 32 c: 2533 | 32 d: 2452 | 32 z: 2460

TensorFlow

Device: CPU - Batch Size: 16 - Model: GoogLeNet

TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: GoogLeNet (images/sec, More Is Better)
32: 158.47 | 32 c: 157.60 | 32 d: 158.08 | 32 z: 155.77

TensorFlow

Device: CPU - Batch Size: 1 - Model: VGG-16

TensorFlow 2.12 - Device: CPU - Batch Size: 1 - Model: VGG-16 (images/sec, More Is Better)
32: 9.73 | 32 c: 9.77 | 32 d: 9.75 | 32 z: 9.75

DaCapo Benchmark

Java Test: Apache Tomcat

DaCapo Benchmark 23.11 - Java Test: Apache Tomcat (msec, Fewer Is Better)
32: 2107 | 32 c: 2094 | 32 d: 2112 | 32 z: 2082

DaCapo Benchmark

Java Test: Apache Lucene Search Engine

DaCapo Benchmark 23.11 - Java Test: Apache Lucene Search Engine (msec, Fewer Is Better)
32: 1402 | 32 c: 1379 | 32 d: 1433 | 32 z: 1425

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 4K

SVT-AV1 1.8 - Encoder Mode: Preset 13 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
32: 185.67 | 32 c: 183.90 | 32 d: 184.10 | 32 z: 184.98
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 4K

SVT-AV1 1.8 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, More Is Better)
32: 186.63 | 32 c: 180.96 | 32 d: 186.37 | 32 z: 185.56
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

DaCapo Benchmark

Java Test: PMD Source Code Analyzer

DaCapo Benchmark 23.11 - Java Test: PMD Source Code Analyzer (msec, Fewer Is Better)
32: 1784 | 32 c: 1966 | 32 d: 1833 | 32 z: 1820

TensorFlow

Device: CPU - Batch Size: 16 - Model: AlexNet

TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: AlexNet (images/sec, More Is Better)
32: 272.93 | 32 c: 274.97 | 32 d: 276.19 | 32 z: 274.97

DaCapo Benchmark

Java Test: Batik SVG Toolkit

DaCapo Benchmark 23.11 - Java Test: Batik SVG Toolkit (msec, Fewer Is Better)
32: 1733 | 32 c: 1718 | 32 d: 1738 | 32 z: 1723

TensorFlow

Device: CPU - Batch Size: 1 - Model: GoogLeNet

TensorFlow 2.12 - Device: CPU - Batch Size: 1 - Model: GoogLeNet (images/sec, More Is Better)
32: 28.99 | 32 c: 27.73 | 32 d: 28.79 | 32 z: 28.71

TensorFlow

Device: CPU - Batch Size: 1 - Model: AlexNet

TensorFlow 2.12 - Device: CPU - Batch Size: 1 - Model: AlexNet (images/sec, More Is Better)
32: 32.12 | 32 c: 33.14 | 32 d: 33.02 | 32 z: 31.92

DaCapo Benchmark

Java Test: FOP Print Formatter

DaCapo Benchmark 23.11 - Java Test: FOP Print Formatter (msec, Fewer Is Better)
32: 751 | 32 c: 764 | 32 d: 758 | 32 z: 696

DaCapo Benchmark

Java Test: Apache Xalan XSLT

DaCapo Benchmark 23.11 - Java Test: Apache Xalan XSLT (msec, Fewer Is Better)
32: 871 | 32 c: 852 | 32 d: 861 | 32 z: 859

DaCapo Benchmark

Java Test: Zxing 1D/2D Barcode Image Processing

DaCapo Benchmark 23.11 - Java Test: Zxing 1D/2D Barcode Image Processing (msec, Fewer Is Better)
32: 609 | 32 c: 569 | 32 d: 599 | 32 z: 599

CPU Power Consumption Monitor

Phoronix Test Suite System Monitoring

CPU Power Consumption Monitor - Phoronix Test Suite System Monitoring (Watts)
Zen 1 - EPYC 7601: Min: 242.58 / Avg: 585.92 / Max: 718
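The power monitor entries in this result reduce a stream of CPU power sensor samples to minimum, average, and maximum watts. A minimal Python sketch of that reduction is below; the sample values are invented, since the raw sensor log is not included in this export.

# Hypothetical CPU power readings in watts collected at a fixed polling
# interval during a run; the real sensor samples are not part of this export.
samples = [242.6, 488.3, 590.1, 612.4, 585.9, 601.7, 718.0, 564.2]

summary = {
    "Min": min(samples),
    "Avg": sum(samples) / len(samples),
    "Max": max(samples),
}

print(" / ".join(f"{label}: {value:.2f}" for label, value in summary.items()))
# Prints: Min: 242.60 / Avg: 550.40 / Max: 718.00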

Meta Performance Per Watts

Performance Per Watts

Meta Performance Per Watts - Performance Per Watts (More Is Better)
Zen 1 - EPYC 7601: 13064001.66

Y-Cruncher

CPU Power Consumption Monitor

Y-Cruncher 0.8.3 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
Zen 1 - EPYC 7601: Min: 263 / Avg: 602 / Max: 718

Y-Cruncher

CPU Power Consumption Monitor

Y-Cruncher 0.8.3 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
Zen 1 - EPYC 7601: Min: 262 / Avg: 543 / Max: 712

Quicksilver

CPU Power Consumption Monitor

Quicksilver 20230818 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
Zen 1 - EPYC 7601: Min: 258.88 / Avg: 624.15 / Max: 662.04

Quicksilver

Input: CTS2

Quicksilver 20230818 - Input: CTS2 (Figure Of Merit Per Watt, More Is Better)
Zen 1 - EPYC 7601: 18307.66

Quicksilver

CPU Power Consumption Monitor

Quicksilver 20230818 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
Zen 1 - EPYC 7601: Min: 255.2 / Avg: 553.7 / Max: 594.9

Quicksilver

Input: CORAL2 P2

Quicksilver 20230818 - Input: CORAL2 P2 (Figure Of Merit Per Watt, More Is Better)
Zen 1 - EPYC 7601: 27116.87

Quicksilver

CPU Power Consumption Monitor

Quicksilver 20230818 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
Zen 1 - EPYC 7601: Min: 243 / Avg: 584 / Max: 648

Quicksilver

Input: CORAL2 P1

Quicksilver 20230818 - Input: CORAL2 P1 (Figure Of Merit Per Watt, More Is Better)
Zen 1 - EPYC 7601: 22248.55
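Each Quicksilver Figure Of Merit Per Watt result above is paired with a CPU power consumption monitor for what appears to be the same run. Assuming the per-watt figure is simply the raw figure of merit divided by the average monitored power, which is a common way such efficiency figures are formed but is not spelled out in this export, the raw figure of merit can be backed out as in the sketch below.

# Reported values for the Zen 1 - EPYC 7601 Quicksilver CORAL2 P1 run above.
fom_per_watt = 22248.55   # Figure Of Merit Per Watt result
avg_watts = 584.0         # Avg from the preceding CPU power consumption monitor

# Assumption: figure-of-merit-per-watt = raw figure of merit / average watts.
implied_fom = fom_per_watt * avg_watts
print(f"Implied raw figure of merit: {implied_fom:,.0f}")  # roughly 13.0 million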


Phoronix Test Suite v10.8.4