new-tests

Tests for a future article. AMD EPYC 8324P 32-Core testing with a AMD Cinnabar (RCB1009C BIOS) and ASPEED on Ubuntu 23.10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2401110-NE-NEWTESTS900&grr.

new-testsProcessorMotherboardChipsetMemoryDiskGraphicsMonitorNetworkOSKernelDesktopDisplay ServerOpenGLCompilerFile-SystemScreen ResolutionZen 1 - EPYC 7601bc3232 z32 c32 dAMD EPYC 7601 32-Core @ 2.20GHz (32 Cores / 64 Threads)TYAN B8026T70AE24HR (V1.02.B10 BIOS)AMD 17h128GB280GB INTEL SSDPE21D280GA + 1000GB INTEL SSDPE2KX010T8llvmpipeVE2282 x Broadcom NetXtreme BCM5720 PCIeUbuntu 23.106.6.9-060609-generic (x86_64)GNOME Shell 45.0X Server 1.21.1.74.5 Mesa 23.2.1-1ubuntu3.1 (LLVM 15.0.7 256 bits)GCC 13.2.0ext41920x1080AMD EPYC 8534PN 64-Core @ 2.00GHz (64 Cores / 128 Threads)AMD Cinnabar (RCB1009C BIOS)AMD Device 14a46 x 32 GB DRAM-4800MT/s Samsung M321R4GA0BB0-CQKMG1000GB INTEL SSDPE2KX010T81920x1200AMD EPYC 8534PN 32-Core @ 2.05GHz (32 Cores / 64 Threads)ASPEEDAMD EPYC 8324P 32-Core @ 2.65GHz (32 Cores / 64 Threads)OpenBenchmarking.orgKernel Details- Transparent Huge Pages: madviseCompiler Details- --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v Processor Details- Zen 1 - EPYC 7601: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0x800126e- b: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xaa00212- c: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xaa00212- 32: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xaa00212- 32 z: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xaa00212- 32 c: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xaa00212- 32 d: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xaa00212Security Details- Zen 1 - EPYC 7601: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT vulnerable + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: disabled RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected - b: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of 
usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected - c: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected - 32: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected - 32 z: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected - 32 c: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected - 32 d: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected Java Details- 32, 32 z, 32 c, 32 d: OpenJDK Runtime Environment (build 11.0.21+9-post-Ubuntu-0ubuntu123.10)Python Details- 32, 32 z, 32 c, 32 d: Python 3.11.6

new-testsquicksilver: CTS2build-linux-kernel: allmodconfigblender: Barbershop - CPU-Onlyquicksilver: CORAL2 P2pytorch: CPU - 16 - Efficientnet_v2_lbuild-gem5: Time To Compilexmrig: GhostRider - 1Mquicksilver: CORAL2 P1ffmpeg: libx265 - Uploadffmpeg: libx265 - Platformffmpeg: libx265 - Video On Demandospray-studio: 3 - 4K - 32 - Path Tracer - CPUllama-cpp: llama-2-70b-chat.Q5_0.ggufblender: Pabellon Barcelona - CPU-Onlypytorch: CPU - 16 - ResNet-152ospray-studio: 2 - 4K - 32 - Path Tracer - CPUospray-studio: 1 - 4K - 32 - Path Tracer - CPUcachebench: Read / Modify / Writecachebench: Writecachebench: Readblender: Classroom - CPU-Onlypytorch: CPU - 1 - Efficientnet_v2_lopenfoam: drivaerFastback, Small Mesh Size - Execution Timeopenfoam: drivaerFastback, Small Mesh Size - Mesh Timeospray-studio: 3 - 4K - 16 - Path Tracer - CPUdeepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Streamdeepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Streamospray-studio: 3 - 4K - 1 - Path Tracer - CPUospray-studio: 2 - 4K - 1 - Path Tracer - CPUospray-studio: 1 - 4K - 1 - Path Tracer - CPUospray-studio: 2 - 4K - 16 - Path Tracer - CPUospray-studio: 1 - 4K - 16 - Path Tracer - CPUtensorflow: CPU - 16 - VGG-16ffmpeg: libx265 - Liveopenvino: Face Detection FP16 - CPUopenvino: Face Detection FP16 - CPUopenvino: Face Detection FP16-INT8 - CPUopenvino: Face Detection FP16-INT8 - CPUquantlib: Multi-Threadedpytorch: CPU - 1 - ResNet-152openvino: Person Detection FP16 - CPUopenvino: Person Detection FP16 - CPUopenvino: Person Detection FP32 - CPUopenvino: Person Detection FP32 - CPUopenvino: Machine Translation EN To DE FP16 - CPUopenvino: Machine Translation EN To DE FP16 - CPUopenvino: Person Vehicle Bike Detection FP16 - CPUopenvino: Person Vehicle Bike Detection FP16 - CPUopenvino: Road Segmentation ADAS FP16-INT8 - CPUopenvino: Road Segmentation ADAS FP16-INT8 - CPUopenvino: Road Segmentation ADAS FP16 - CPUopenvino: Road Segmentation ADAS FP16 - CPUopenvino: Face Detection Retail FP16-INT8 - CPUopenvino: Face Detection Retail FP16-INT8 - CPUopenvino: Handwritten English Recognition FP16-INT8 - CPUopenvino: Handwritten English Recognition FP16-INT8 - CPUopenvino: Handwritten English Recognition FP16 - CPUopenvino: Handwritten English Recognition FP16 - CPUopenvino: Vehicle Detection FP16-INT8 - CPUopenvino: Vehicle Detection FP16-INT8 - CPUopenvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPUopenvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPUopenvino: Age Gender Recognition Retail 0013 FP16 - CPUopenvino: Age Gender Recognition Retail 0013 FP16 - CPUopenvino: Vehicle Detection FP16 - CPUopenvino: Vehicle Detection FP16 - CPUopenvino: Weld Porosity Detection FP16 - CPUopenvino: Weld Porosity Detection FP16 - CPUspeedb: Update Randopenvino: Face Detection Retail FP16 - CPUopenvino: Face Detection Retail FP16 - CPUopenvino: Weld Porosity Detection FP16-INT8 - CPUopenvino: Weld Porosity Detection FP16-INT8 - CPUspeedb: Read While Writingrocksdb: Update Randspeedb: Read Rand Write Randspeedb: Rand Readrocksdb: Read Rand Write Randrocksdb: Read While Writingrocksdb: Rand Readdacapobench: Apache Cassandrablender: Fishy Cat - CPU-Onlyxmrig: Monero - 1Mxmrig: CryptoNight-Femto UPX2 - 1Mxmrig: KawPow - 1Mxmrig: CryptoNight-Heavy - 1Mbuild-linux-kernel: defconfigpytorch: CPU - 16 - ResNet-50deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Streamdeepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous 
Multi-Streamdacapobench: Eclipseblender: BMW27 - CPU-Onlydeepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Streamdeepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Streamdeepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Streamdeepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Streamdeepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Streamdeepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Streamdeepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Streamdeepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Streamdacapobench: Apache Lucene Search Indexxmrig: Wownero - 1Mdacapobench: H2 Database Enginedeepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Streamdeepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Streamtensorflow: CPU - 16 - ResNet-50deepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Streamdeepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Streamdeepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Streamdeepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Streamdeepsparse: ResNet-50, Baseline - Asynchronous Multi-Streamdeepsparse: ResNet-50, Baseline - Asynchronous Multi-Streamdeepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Streamdeepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Streamdeepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Streamdeepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Streamdacapobench: Tradebeanscompress-7zip: Decompression Ratingcompress-7zip: Compression Ratingsvt-av1: Preset 4 - Bosphorus 4Kembree: Pathtracer - Asian Dragon Objllama-cpp: llama-2-13b.Q4_0.ggufy-cruncher: 1Bembree: Pathtracer ISPC - Asian Dragon Objdacapobench: Tradesoapdacapobench: BioJava Biological Data Frameworkpytorch: CPU - 1 - ResNet-50build-ffmpeg: Time To Compiledacapobench: Jythondacapobench: jMonkeyEnginedacapobench: GraphChiembree: Pathtracer - Crownembree: Pathtracer ISPC - Crowndacapobench: H2O In-Memory Platform For Machine Learningllama-cpp: llama-2-7b.Q4_0.ggufdacapobench: Apache Kafkaembree: Pathtracer - Asian Dragontensorflow: CPU - 1 - ResNet-50dacapobench: Avrora AVR Simulation Frameworkembree: Pathtracer ISPC - Asian Dragonsvt-av1: Preset 8 - Bosphorus 4Ky-cruncher: 500Mdacapobench: Spring Boottensorflow: CPU - 16 - GoogLeNettensorflow: CPU - 1 - VGG-16dacapobench: Apache Tomcatdacapobench: Apache Lucene Search Enginesvt-av1: Preset 13 - Bosphorus 4Ksvt-av1: Preset 12 - Bosphorus 4Kdacapobench: PMD Source Code Analyzertensorflow: CPU - 16 - AlexNetdacapobench: Batik SVG Toolkittensorflow: CPU - 1 - GoogLeNettensorflow: CPU - 1 - AlexNetdacapobench: FOP Print Formatterdacapobench: Apache Xalan XSLTdacapobench: Zxing 1D/2D Barcode Image ProcessingZen 1 - EPYC 7601bc3232 z32 c32 
d11426667150133331299666733.92315.69316270000161400002118000010.4165.20216260000161500002125000010.4765.21314320000433.789410.61153500007.17254.014067.41879000022.2845.1345.181364643.42139.0915.6111656611637787227.58771345646.0913537616.087334112.039.8572.80728828.37258371361607.93526.0566404934513404619876067325.15109.84929.2317.17486.6532.82107079.219.04105.48151.45105.97150.879.82199.99.121741.5723.95666.2227.69576.185.415747.6542.8774535.51898.68.071960.180.4852441.940.6540123.6213.361190.4218.691704.263141233.913921.59.563300.997457600630575223140317968595423736544284691176770468594655.6518845.518860.118777.219004.552.13340.1919.1107836.42141265644.73747.067421.293341.6314383.9746747.31421.2278396.291440.1688461325814.4267587.4908182.508551.34129.8749123.0157128.8158123.881759.8638266.85747.23322208.153759.8833266.879985612122092415455.80137.28417.9411.67638.93785403787452.4423.55767036914353636.958437.2967397429.75511041.59588.74561345.937448.4515.6562444158.479.7321071402185.665186.6251784272.93173328.9932.1275187160914290000434.187410.43152300007.11272.614038.61876000022.2045.0545.081363123.41138.615.5111697211566987218.21097445646.8161077616.334142112.099.8271.20128530.7547271495608.132626.0768404834463406621136143025.2110.37927.5717.18486.0332.81107381.618.92106.44150.06106.24150.3779.39201.159.161735.6423.95666.327.53579.415.425751.5843.71730.8235.59896.698.051964.990.4752475.390.6640101.813.291197.4618.691704.023141143.93924.869.563299.937210235636242225934417943492423612704364996177167636593855.5418763.81890918961.318936.552.01239.9619.1338835.2621273544.48745.180621.271141.4377385.6481746.12821.289397.959339.9467458925943.7265587.325182.764351.57129.8035122.955128.845123.78559.8673266.97617.2612199.494159.6674267.841786002115842423995.89936.858617.8711.59539.1075168785852.7823.75967736917363037.254537.6791386829.9512141.81988.77544146.308858.7155.6852460155.779.7520821425184.981185.5621820274.97172328.7131.9269685959914430000453.693426.3151800007.18258.3074136.3104000022.2145.1344.951396853.42148.7415.3211898011822187238.01319745645.0911337615.948086119.7210.0472.38400730.53759173024611.602625.8175415735153493634026280224.47110.02964.216.51510.7931.2298916.218.86106.43150.07105.91150.8482.18194.219.391694.0125.22632.9228.77554.685.785416.3146.17692.0237.4853.388.521860.990.4852382.310.6739562.8713.651166.5619.581627.933177584.033877.9110.223099.27746346633688222949416320272123278004419497160665305595559.5818897.518887.518947.318783.953.61540.3219.5831816.27851282647.52753.122920.872941.5889384.3164751.925921.0419411.343538.7708458025385.9277388.1952181.104351.56130.4755122.3307129.5421123.146959.9016266.84287.27382195.919860.0613266.03485202118152402875.82937.440517.8711.90239.00465366790453.0024.44668656917353835.914736.9967397929.74511141.56968.61556145.464847.2535.7832533157.69.7720941379183.899180.9551966274.97171827.7333.1476485256914280000452.606426.37151000007.15258.9344095.71884000022.2244.9745.101394453.42148.5615.3511978311880287854.11767245643.0387137615.833145119.5710.2172.30583630.72419473329611.443925.7874413235223499627876333624.51110.29965.3516.54510.931.298618.718.86105.64151.25106.32150.2581.87195.059.371696.525.16634.528.82553.655.785423.1346.28690.2437.61848.628.521862.240.4852344.60.6739843.0513.651166.8319.561628.913136834.033869.710.213100.957105602630478221589616351243223515684244478160707812592759.7918866.118818.618901.11892453.63240.3119.5858815.97681276847.41751.211721.093241.8438381.7839750.399721.0667410.326738.8343460225396.8263488.2278181.115551.
49130.7937121.8001129.8101122.931259.9698266.53437.28962189.065560.0284266.277683802113832411915.97737.405618.0811.97539.14215149790753.3024.367696916365636.281236.9369375529.85511441.5578.59557245.648258.6425.7512452158.089.7521121433184.099186.3681833276.19173828.7933.02758861599OpenBenchmarking.org

Quicksilver

Input: CTS2

OpenBenchmarking.orgFigure Of Merit, More Is BetterQuicksilver 20230818Input: CTS2Zen 1 - EPYC 7601bc3232 z32 c32 d3M6M9M12M15MSE +/- 16666.67, N = 3114266671627000016260000143200001429000014430000142800001. (CXX) g++ options: -fopenmp -O3 -march=native

Timed Linux Kernel Compilation

Build: allmodconfig

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 6.1Build: allmodconfig3232 z32 c32 d100200300400500433.79434.19453.69452.61

Blender

Blend File: Barbershop - Compute: CPU-Only

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 4.0Blend File: Barbershop - Compute: CPU-Only3232 z32 c32 d90180270360450410.61410.43426.30426.37

Quicksilver

Input: CORAL2 P2

OpenBenchmarking.orgFigure Of Merit, More Is BetterQuicksilver 20230818Input: CORAL2 P2Zen 1 - EPYC 7601bc3232 z32 c32 d3M6M9M12M15MSE +/- 37118.43, N = 3150133331614000016150000153500001523000015180000151000001. (CXX) g++ options: -fopenmp -O3 -march=native

PyTorch

Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l3232 z32 c32 d2468107.177.117.187.15MIN: 4.45 / MAX: 7.33MIN: 4.25 / MAX: 7.26MIN: 4.37 / MAX: 7.37MIN: 4.34 / MAX: 7.3

Timed Gem5 Compilation

Time To Compile

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Gem5 Compilation 23.0.1Time To Compile3232 z32 c32 d60120180240300254.01272.61258.31258.93

Xmrig

Variant: GhostRider - Hash Count: 1M

OpenBenchmarking.orgH/s, More Is BetterXmrig 6.21Variant: GhostRider - Hash Count: 1M3232 z32 c32 d90018002700360045004067.44038.64136.34095.71. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Quicksilver

Input: CORAL2 P1

OpenBenchmarking.orgFigure Of Merit, More Is BetterQuicksilver 20230818Input: CORAL2 P1Zen 1 - EPYC 7601bc3232 z32 c32 d5M10M15M20M25MSE +/- 66916.20, N = 312996667211800002125000018790000187600001040000188400001. (CXX) g++ options: -fopenmp -O3 -march=native

FFmpeg

Encoder: libx265 - Scenario: Upload

OpenBenchmarking.orgFPS, More Is BetterFFmpeg 6.1Encoder: libx265 - Scenario: Upload3232 z32 c32 d51015202522.2822.2022.2122.221. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Platform

OpenBenchmarking.orgFPS, More Is BetterFFmpeg 6.1Encoder: libx265 - Scenario: Platform3232 z32 c32 d102030405045.1345.0545.1344.971. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Video On Demand

OpenBenchmarking.orgFPS, More Is BetterFFmpeg 6.1Encoder: libx265 - Scenario: Video On Demand3232 z32 c32 d102030405045.1845.0844.9545.101. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OSPRay Studio

Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU3232 z32 c32 d30K60K90K120K150K136464136312139685139445

Llama.cpp

Model: llama-2-70b-chat.Q5_0.gguf

OpenBenchmarking.orgTokens Per Second, More Is BetterLlama.cpp b1808Model: llama-2-70b-chat.Q5_0.gguf3232 z32 c32 d0.76951.5392.30853.0783.84753.423.413.423.421. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -march=native -mtune=native -lopenblas

Blender

Blend File: Pabellon Barcelona - Compute: CPU-Only

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 4.0Blend File: Pabellon Barcelona - Compute: CPU-Only3232 z32 c32 d306090120150139.09138.60148.74148.56

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-152

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 16 - Model: ResNet-1523232 z32 c32 d4812162015.6115.5115.3215.35MIN: 6.89 / MAX: 15.74MIN: 7.3 / MAX: 15.63MIN: 6.91 / MAX: 15.45MIN: 8.86 / MAX: 15.52

OSPRay Studio

Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU3232 z32 c32 d30K60K90K120K150K116566116972118980119783

OSPRay Studio

Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU3232 z32 c32 d30K60K90K120K150K116377115669118221118802

CacheBench

Test: Read / Modify / Write

OpenBenchmarking.orgMB/s, More Is BetterCacheBenchTest: Read / Modify / Write3232 z32 c32 d20K40K60K80K100K87227.5987218.2187238.0187854.12MIN: 65739.52 / MAX: 90694.35MIN: 65721.62 / MAX: 90703.93MIN: 65732.92 / MAX: 90706.91MIN: 72077.93 / MAX: 90708.031. (CC) gcc options: -O3 -lrt

CacheBench

Test: Write

OpenBenchmarking.orgMB/s, More Is BetterCacheBenchTest: Write3232 z32 c32 d10K20K30K40K50K45646.0945646.8245645.0945643.04MIN: 45484.29 / MAX: 45698.11MIN: 45482.27 / MAX: 45698.03MIN: 45483.02 / MAX: 45696.19MIN: 45482.26 / MAX: 45696.121. (CC) gcc options: -O3 -lrt

CacheBench

Test: Read

OpenBenchmarking.orgMB/s, More Is BetterCacheBenchTest: Read3232 z32 c32 d160032004800640080007616.097616.337615.957615.83MIN: 7615.65 / MAX: 7616.54MIN: 7615.95 / MAX: 7616.74MIN: 7615.46 / MAX: 7616.35MIN: 7615.4 / MAX: 7616.441. (CC) gcc options: -O3 -lrt

Blender

Blend File: Classroom - Compute: CPU-Only

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 4.0Blend File: Classroom - Compute: CPU-Only3232 z32 c32 d306090120150112.03112.09119.72119.57

PyTorch

Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l3232 z32 c32 d36912159.859.8210.0410.21MIN: 5.1 / MAX: 9.99MIN: 5.63 / MAX: 10.05MIN: 5.86 / MAX: 10.23MIN: 5.69 / MAX: 10.32

OpenFOAM

Input: drivaerFastback, Small Mesh Size - Execution Time

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 10Input: drivaerFastback, Small Mesh Size - Execution Time3232 z32 c32 d163248648072.8171.2072.3872.311. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OpenFOAM

Input: drivaerFastback, Small Mesh Size - Mesh Time

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 10Input: drivaerFastback, Small Mesh Size - Mesh Time3232 z32 c32 d71421283528.3730.7530.5430.721. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OSPRay Studio

Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU3232 z32 c32 d16K32K48K64K80K71361714957302473329

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream3232 z32 c32 d130260390520650607.94608.13611.60611.44

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream3232 z32 c32 d61218243026.0626.0825.8225.79

OSPRay Studio

Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU3232 z32 c32 d90018002700360045004049404841574132

OSPRay Studio

Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU3232 z32 c32 d80016002400320040003451344635153522

OSPRay Studio

Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU3232 z32 c32 d80016002400320040003404340634933499

OSPRay Studio

Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU3232 z32 c32 d14K28K42K56K70K61987621136340262787

OSPRay Studio

Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.orgms, Fewer Is BetterOSPRay Studio 0.13Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU3232 z32 c32 d14K28K42K56K70K60673614306280263336

TensorFlow

Device: CPU - Batch Size: 16 - Model: VGG-16

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 16 - Model: VGG-163232 z32 c32 d61218243025.1525.2024.4724.51

FFmpeg

Encoder: libx265 - Scenario: Live

OpenBenchmarking.orgFPS, More Is BetterFFmpeg 6.1Encoder: libx265 - Scenario: Live3232 z32 c32 d20406080100109.84110.37110.02110.291. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Face Detection FP16 - Device: CPU3232 z32 c32 d2004006008001000929.23927.57964.20965.35MIN: 907.01 / MAX: 1013.02MIN: 895.6 / MAX: 1019.94MIN: 905.78 / MAX: 1053.38MIN: 922.7 / MAX: 1047.51. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Face Detection FP16 - Device: CPU3232 z32 c32 d4812162017.1717.1816.5116.541. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Face Detection FP16-INT8 - Device: CPU3232 z32 c32 d110220330440550486.65486.03510.79510.90MIN: 465.68 / MAX: 570.73MIN: 454.31 / MAX: 580.9MIN: 473.86 / MAX: 584.54MIN: 470.7 / MAX: 595.971. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Face Detection FP16-INT8 - Device: CPU3232 z32 c32 d81624324032.8232.8131.2231.201. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

QuantLib

Configuration: Multi-Threaded

OpenBenchmarking.orgMFLOPS, More Is BetterQuantLib 1.32Configuration: Multi-Threaded3232 z32 c32 d20K40K60K80K100K107079.2107381.698916.298618.71. (CXX) g++ options: -O3 -march=native -fPIE -pie

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-152

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 1 - Model: ResNet-1523232 z32 c32 d51015202519.0418.9218.8618.86MIN: 6.89 / MAX: 19.18MIN: 7.59 / MAX: 19.04MIN: 10.78 / MAX: 19.02MIN: 7.91 / MAX: 19.03

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Person Detection FP16 - Device: CPU3232 z32 c32 d20406080100105.48106.44106.43105.64MIN: 82.05 / MAX: 167.92MIN: 81.71 / MAX: 196.1MIN: 80.87 / MAX: 199.77MIN: 54.2 / MAX: 154.421. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Person Detection FP16 - Device: CPU3232 z32 c32 d306090120150151.45150.06150.07151.251. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Person Detection FP32 - Device: CPU3232 z32 c32 d20406080100105.97106.24105.91106.32MIN: 81.88 / MAX: 218.45MIN: 81.06 / MAX: 185.99MIN: 82.12 / MAX: 188.16MIN: 81.37 / MAX: 177.411. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Person Detection FP32 - Device: CPU3232 z32 c32 d306090120150150.80150.37150.84150.251. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Machine Translation EN To DE FP16 - Device: CPU3232 z32 c32 d2040608010079.8279.3982.1881.87MIN: 42.02 / MAX: 179.47MIN: 43.97 / MAX: 186.13MIN: 58.39 / MAX: 175.7MIN: 52.13 / MAX: 175.841. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Machine Translation EN To DE FP16 - Device: CPU3232 z32 c32 d4080120160200199.90201.15194.21195.051. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Person Vehicle Bike Detection FP16 - Device: CPU3232 z32 c32 d36912159.129.169.399.37MIN: 6.22 / MAX: 56.95MIN: 5.99 / MAX: 67.91MIN: 5.95 / MAX: 68.66MIN: 6.07 / MAX: 71.061. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Person Vehicle Bike Detection FP16 - Device: CPU3232 z32 c32 d4008001200160020001741.571735.641694.011696.501. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Road Segmentation ADAS FP16-INT8 - Device: CPU3232 z32 c32 d61218243023.9523.9525.2225.16MIN: 13.94 / MAX: 114.01MIN: 15.19 / MAX: 90.71MIN: 21.61 / MAX: 89.16MIN: 19.24 / MAX: 86.71. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Road Segmentation ADAS FP16-INT8 - Device: CPU3232 z32 c32 d140280420560700666.22666.30632.92634.501. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Road Segmentation ADAS FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Road Segmentation ADAS FP16 - Device: CPU3232 z32 c32 d71421283527.6927.5328.7728.82MIN: 18.56 / MAX: 147.54MIN: 18.86 / MAX: 82.58MIN: 17.12 / MAX: 135.79MIN: 19.39 / MAX: 99.161. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Road Segmentation ADAS FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Road Segmentation ADAS FP16 - Device: CPU3232 z32 c32 d130260390520650576.18579.41554.68553.651. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Face Detection Retail FP16-INT8 - Device: CPU3232 z32 c32 d1.30052.6013.90155.2026.50255.415.425.785.78MIN: 3.17 / MAX: 57.08MIN: 3.15 / MAX: 67.23MIN: 3.21 / MAX: 58.78MIN: 3.37 / MAX: 65.271. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Face Detection Retail FP16-INT8 - Device: CPU3232 z32 c32 d120024003600480060005747.655751.585416.315423.131. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Handwritten English Recognition FP16-INT8 - Device: CPU3232 z32 c32 d102030405042.8743.7146.1746.28MIN: 35.14 / MAX: 107.5MIN: 35.06 / MAX: 153.84MIN: 39.81 / MAX: 161.92MIN: 30.15 / MAX: 108.491. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Handwritten English Recognition FP16-INT8 - Device: CPU3232 z32 c32 d160320480640800745.00730.82692.02690.241. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Handwritten English Recognition FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Handwritten English Recognition FP16 - Device: CPU3232 z32 c32 d91827364535.5135.5937.4037.61MIN: 22.8 / MAX: 100.53MIN: 24.72 / MAX: 147.24MIN: 27.33 / MAX: 92.33MIN: 24.11 / MAX: 127.491. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Handwritten English Recognition FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Handwritten English Recognition FP16 - Device: CPU3232 z32 c32 d2004006008001000898.60896.69853.38848.621. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Vehicle Detection FP16-INT8 - Device: CPU3232 z32 c32 d2468108.078.058.528.52MIN: 4.55 / MAX: 69.24MIN: 4.56 / MAX: 76.48MIN: 4.97 / MAX: 67.6MIN: 4.8 / MAX: 75.531. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Vehicle Detection FP16-INT8 - Device: CPU3232 z32 c32 d4008001200160020001960.181964.991860.991862.241. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU3232 z32 c32 d0.1080.2160.3240.4320.540.480.470.480.48MIN: 0.27 / MAX: 50.11MIN: 0.27 / MAX: 64.47MIN: 0.27 / MAX: 50.17MIN: 0.27 / MAX: 65.551. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU3232 z32 c32 d11K22K33K44K55K52441.9452475.3952382.3152344.601. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Age Gender Recognition Retail 0013 FP16 - Device: CPU3232 z32 c32 d0.15080.30160.45240.60320.7540.650.660.670.67MIN: 0.36 / MAX: 51.48MIN: 0.36 / MAX: 65.79MIN: 0.36 / MAX: 62.87MIN: 0.36 / MAX: 50.741. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Age Gender Recognition Retail 0013 FP16 - Device: CPU3232 z32 c32 d9K18K27K36K45K40123.6240101.8039562.8739843.051. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Vehicle Detection FP16 - Device: CPU3232 z32 c32 d4812162013.3613.2913.6513.65MIN: 7.26 / MAX: 78.85MIN: 8.3 / MAX: 73.59MIN: 9.08 / MAX: 67.03MIN: 6.73 / MAX: 75.181. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Vehicle Detection FP16 - Device: CPU3232 z32 c32 d300600900120015001190.421197.461166.561166.831. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Weld Porosity Detection FP16 - Device: CPU3232 z32 c32 d51015202518.6918.6919.5819.56MIN: 9.97 / MAX: 81.33MIN: 9.78 / MAX: 86.93MIN: 10.24 / MAX: 83.63MIN: 13.73 / MAX: 73.61. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Weld Porosity Detection FP16 - Device: CPU3232 z32 c32 d4008001200160020001704.261704.021627.931628.911. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Speedb

Test: Update Random

OpenBenchmarking.orgOp/s, More Is BetterSpeedb 2.7Test: Update Random3232 z32 c32 d70K140K210K280K350K3141233141143177583136831. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenVINO

Model: Face Detection Retail FP16 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Face Detection Retail FP16 - Device: CPU3232 z32 c32 d0.90681.81362.72043.62724.5343.913.904.034.03MIN: 2.2 / MAX: 72.73MIN: 2.18 / MAX: 64.81MIN: 2.23 / MAX: 54.09MIN: 2.23 / MAX: 62.261. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection Retail FP16 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Face Detection Retail FP16 - Device: CPU3232 z32 c32 d80016002400320040003921.503924.863877.913869.701. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.2.devModel: Weld Porosity Detection FP16-INT8 - Device: CPU3232 z32 c32 d36912159.569.5610.2210.21MIN: 5.1 / MAX: 77.12MIN: 5.09 / MAX: 75.37MIN: 5.48 / MAX: 68.07MIN: 5.17 / MAX: 61.151. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.2.devModel: Weld Porosity Detection FP16-INT8 - Device: CPU3232 z32 c32 d70014002100280035003300.993299.933099.203100.951. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Speedb

Test: Read While Writing

OpenBenchmarking.orgOp/s, More Is BetterSpeedb 2.7Test: Read While Writing3232 z32 c32 d1.7M3.4M5.1M6.8M8.5M74576007210235774634671056021. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Update Random

OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Update Random3232 z32 c32 d140K280K420K560K700K6305756362426336886304781. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Read Random Write Random

OpenBenchmarking.orgOp/s, More Is BetterSpeedb 2.7Test: Read Random Write Random3232 z32 c32 d500K1000K1500K2000K2500K22314032259344222949422158961. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Random Read

OpenBenchmarking.orgOp/s, More Is BetterSpeedb 2.7Test: Random Read3232 z32 c32 d40M80M120M160M200M1796859541794349241632027211635124321. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Read Random Write Random

OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Read Random Write Random3232 z32 c32 d500K1000K1500K2000K2500K23736542361270232780023515681. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Read While Writing

OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Read While Writing3232 z32 c32 d900K1800K2700K3600K4500K42846914364996441949742444781. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Random Read

OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Random Read3232 z32 c32 d40M80M120M160M200M1767704681771676361606653051607078121. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

DaCapo Benchmark

Java Test: Apache Cassandra

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: Apache Cassandra3232 z32 c32 d130026003900520065005946593859555927

Blender

Blend File: Fishy Cat - Compute: CPU-Only

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 4.0Blend File: Fishy Cat - Compute: CPU-Only3232 z32 c32 d132639526555.6555.5459.5859.79

Xmrig

Variant: Monero - Hash Count: 1M

OpenBenchmarking.orgH/s, More Is BetterXmrig 6.21Variant: Monero - Hash Count: 1M3232 z32 c32 d4K8K12K16K20K18845.518763.818897.518866.11. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: CryptoNight-Femto UPX2 - Hash Count: 1M

OpenBenchmarking.orgH/s, More Is BetterXmrig 6.21Variant: CryptoNight-Femto UPX2 - Hash Count: 1M3232 z32 c32 d4K8K12K16K20K18860.118909.018887.518818.61. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: KawPow - Hash Count: 1M

OpenBenchmarking.orgH/s, More Is BetterXmrig 6.21Variant: KawPow - Hash Count: 1M3232 z32 c32 d4K8K12K16K20K18777.218961.318947.318901.11. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: CryptoNight-Heavy - Hash Count: 1M

OpenBenchmarking.orgH/s, More Is BetterXmrig 6.21Variant: CryptoNight-Heavy - Hash Count: 1M3232 z32 c32 d4K8K12K16K20K19004.518936.518783.918924.01. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Timed Linux Kernel Compilation

Build: defconfig

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 6.1Build: defconfig3232 z32 c32 d122436486052.1352.0153.6253.63

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-50

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 16 - Model: ResNet-503232 z32 c32 d91827364540.1939.9640.3240.31MIN: 15.55 / MAX: 40.67MIN: 15.13 / MAX: 40.53MIN: 15.51 / MAX: 40.87MIN: 15.27 / MAX: 40.73

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream3232 z32 c32 d51015202519.1119.1319.5819.59

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream3232 z32 c32 d2004006008001000836.42835.26816.28815.98

DaCapo Benchmark

Java Test: Eclipse

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: Eclipse3232 z32 c32 d3K6K9K12K15K12656127351282612768

Blender

Blend File: BMW27 - Compute: CPU-Only

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 4.0Blend File: BMW27 - Compute: CPU-Only3232 z32 c32 d112233445544.7344.4847.5247.41

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream3232 z32 c32 d160320480640800747.07745.18753.12751.21

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream3232 z32 c32 d51015202521.2921.2720.8721.09

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream3232 z32 c32 d102030405041.6341.4441.5941.84

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream3232 z32 c32 d80160240320400383.97385.65384.32381.78

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream3232 z32 c32 d160320480640800747.31746.13751.93750.40

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream3232 z32 c32 d51015202521.2321.2921.0421.07

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream3232 z32 c32 d90180270360450396.29397.96411.34410.33

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream3232 z32 c32 d91827364540.1739.9538.7738.83

DaCapo Benchmark

Java Test: Apache Lucene Search Index

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: Apache Lucene Search Index3232 z32 c32 d100020003000400050004613458945804602

Xmrig

Variant: Wownero - Hash Count: 1M

OpenBenchmarking.orgH/s, More Is BetterXmrig 6.21Variant: Wownero - Hash Count: 1M3232 z32 c32 d6K12K18K24K30K25814.425943.725385.925396.81. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

DaCapo Benchmark

Java Test: H2 Database Engine

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: H2 Database Engine3232 z32 c32 d60012001800240030002675265527732634

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream3232 z32 c32 d2040608010087.4987.3388.2088.23

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream3232 z32 c32 d4080120160200182.51182.76181.10181.12

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 16 - Model: ResNet-503232 z32 c32 d122436486051.3451.5751.5651.49

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream3232 z32 c32 d306090120150129.87129.80130.48130.79

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream3232 z32 c32 d306090120150123.02122.96122.33121.80

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream3232 z32 c32 d306090120150128.82128.85129.54129.81

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream3232 z32 c32 d306090120150123.88123.79123.15122.93

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream3232 z32 c32 d132639526559.8659.8759.9059.97

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream3232 z32 c32 d60120180240300266.86266.98266.84266.53

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream3232 z32 c32 d2468107.23327.26107.27387.2896

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream3232 z32 c32 d50010001500200025002208.152199.492195.922189.07

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.6Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream3232 z32 c32 d132639526559.8859.6760.0660.03

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.6Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream3232 z32 c32 d60120180240300266.88267.84266.03266.28

DaCapo Benchmark

Java Test: Tradebeans

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: Tradebeans3232 z32 c32 d2K4K6K8K10K8561860085208380

7-Zip Compression

Test: Decompression Rating

OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Decompression Rating3232 z32 c32 d50K100K150K200K250K2122092115842118152113831. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression

Test: Compression Rating

OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Compression Rating3232 z32 c32 d50K100K150K200K250K2415452423992402872411911. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 4K

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.8Encoder Mode: Preset 4 - Input: Bosphorus 4K3232 z32 c32 d1.34482.68964.03445.37926.7245.8015.8995.8295.9771. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Embree

Binary: Pathtracer - Model: Asian Dragon Obj

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer - Model: Asian Dragon Obj3232 z32 c32 d91827364537.2836.8637.4437.41MIN: 37.09 / MAX: 37.7MIN: 36.67 / MAX: 37.11MIN: 37.24 / MAX: 37.71MIN: 37.22 / MAX: 37.69

Llama.cpp

Model: llama-2-13b.Q4_0.gguf

OpenBenchmarking.orgTokens Per Second, More Is BetterLlama.cpp b1808Model: llama-2-13b.Q4_0.gguf3232 z32 c32 d4812162017.9417.8717.8718.081. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -march=native -mtune=native -lopenblas

Y-Cruncher

Pi Digits To Calculate: 1B

OpenBenchmarking.orgSeconds, Fewer Is BetterY-Cruncher 0.8.3Pi Digits To Calculate: 1BZen 1 - EPYC 7601bc3232 z32 c32 d816243240SE +/- 0.09, N = 333.9210.4210.4811.6811.6011.9011.98

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.3Binary: Pathtracer ISPC - Model: Asian Dragon Obj3232 z32 c32 d91827364538.9439.1139.0039.14MIN: 38.69 / MAX: 39.29MIN: 38.88 / MAX: 39.43MIN: 38.78 / MAX: 39.64MIN: 38.92 / MAX: 39.84

DaCapo Benchmark

Java Test: Tradesoap

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: Tradesoap3232 z32 c32 d120024003600480060005403516853665149

DaCapo Benchmark

Java Test: BioJava Biological Data Framework

OpenBenchmarking.orgmsec, Fewer Is BetterDaCapo Benchmark 23.11Java Test: BioJava Biological Data Framework3232 z32 c32 d2K4K6K8K10K7874785879047907

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-50

OpenBenchmarking.orgbatches/sec, More Is BetterPyTorch 2.1Device: CPU - Batch Size: 1 - Model: ResNet-503232 z32 c32 d122436486052.4452.7853.0053.30MIN: 15.02 / MAX: 53.14MIN: 17.43 / MAX: 53.32MIN: 50.62 / MAX: 53.51MIN: 50.97 / MAX: 53.84

Timed FFmpeg Compilation

Time To Compile

Timed FFmpeg Compilation 6.1 - Seconds, Fewer Is Better
  32:    23.56
  32 z:  23.76
  32 c:  24.45
  32 d:  24.30

DaCapo Benchmark

Java Test: Jython

DaCapo Benchmark 23.11 - msec, Fewer Is Better
  32:    6703
  32 z:  6773
  32 c:  6865
  32 d:  6769

DaCapo Benchmark

Java Test: jMonkeyEngine

DaCapo Benchmark 23.11 - msec, Fewer Is Better
  32:    6914
  32 z:  6917
  32 c:  6917
  32 d:  6916

DaCapo Benchmark

Java Test: GraphChi

DaCapo Benchmark 23.11 - msec, Fewer Is Better
  32:    3536
  32 z:  3630
  32 c:  3538
  32 d:  3656

Embree

Binary: Pathtracer - Model: Crown

Embree 4.3 - Frames Per Second, More Is Better
  32:    36.96 (MIN: 36.61 / MAX: 37.43)
  32 z:  37.25 (MIN: 36.89 / MAX: 37.75)
  32 c:  35.91 (MIN: 35.53 / MAX: 37.08)
  32 d:  36.28 (MIN: 35.88 / MAX: 37.13)

Embree

Binary: Pathtracer ISPC - Model: Crown

Embree 4.3 - Frames Per Second, More Is Better
  32:    37.30 (MIN: 36.86 / MAX: 38.04)
  32 z:  37.68 (MIN: 37.25 / MAX: 38.37)
  32 c:  37.00 (MIN: 36.53 / MAX: 38.11)
  32 d:  36.94 (MIN: 36.46 / MAX: 37.76)

DaCapo Benchmark

Java Test: H2O In-Memory Platform For Machine Learning

DaCapo Benchmark 23.11 - msec, Fewer Is Better
  32:    3974
  32 z:  3868
  32 c:  3979
  32 d:  3755

Llama.cpp

Model: llama-2-7b.Q4_0.gguf

Llama.cpp b1808 - Tokens Per Second, More Is Better
  32:    29.75
  32 z:  29.90
  32 c:  29.74
  32 d:  29.85
  1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -march=native -mtune=native -lopenblas

DaCapo Benchmark

Java Test: Apache Kafka

DaCapo Benchmark 23.11 - msec, Fewer Is Better
  32:    5110
  32 z:  5121
  32 c:  5111
  32 d:  5114

Embree

Binary: Pathtracer - Model: Asian Dragon

Embree 4.3 - Frames Per Second, More Is Better
  32:    41.60 (MIN: 41.36 / MAX: 41.86)
  32 z:  41.82 (MIN: 41.6 / MAX: 42.16)
  32 c:  41.57 (MIN: 41.37 / MAX: 41.9)
  32 d:  41.56 (MIN: 41.33 / MAX: 41.84)

TensorFlow

Device: CPU - Batch Size: 1 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better
  32:    8.74
  32 z:  8.77
  32 c:  8.61
  32 d:  8.59

DaCapo Benchmark

Java Test: Avrora AVR Simulation Framework

DaCapo Benchmark 23.11 - msec, Fewer Is Better
  32:    5613
  32 z:  5441
  32 c:  5561
  32 d:  5572

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

Embree 4.3 - Frames Per Second, More Is Better
  32:    45.94 (MIN: 45.66 / MAX: 46.38)
  32 z:  46.31 (MIN: 46.05 / MAX: 46.74)
  32 c:  45.46 (MIN: 45.22 / MAX: 46.6)
  32 d:  45.65 (MIN: 45.37 / MAX: 46.89)

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 4K

SVT-AV1 1.8 - Frames Per Second, More Is Better
  32:    48.45
  32 z:  58.72
  32 c:  47.25
  32 d:  58.64
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Y-Cruncher

Pi Digits To Calculate: 500M

Y-Cruncher 0.8.3 - Seconds, Fewer Is Better
  Zen 1 - EPYC 7601:  15.693 (SE +/- 0.118, N = 3)
  b:     5.202
  c:     5.213
  32:    5.656
  32 z:  5.685
  32 c:  5.783
  32 d:  5.751

DaCapo Benchmark

Java Test: Spring Boot

DaCapo Benchmark 23.11 - msec, Fewer Is Better
  32:    2444
  32 z:  2460
  32 c:  2533
  32 d:  2452

TensorFlow

Device: CPU - Batch Size: 16 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better
  32:    158.47
  32 z:  155.77
  32 c:  157.60
  32 d:  158.08

TensorFlow

Device: CPU - Batch Size: 1 - Model: VGG-16

TensorFlow 2.12 - images/sec, More Is Better
  32:    9.73
  32 z:  9.75
  32 c:  9.77
  32 d:  9.75

DaCapo Benchmark

Java Test: Apache Tomcat

DaCapo Benchmark 23.11 - msec, Fewer Is Better
  32:    2107
  32 z:  2082
  32 c:  2094
  32 d:  2112

DaCapo Benchmark

Java Test: Apache Lucene Search Engine

DaCapo Benchmark 23.11 - msec, Fewer Is Better
  32:    1402
  32 z:  1425
  32 c:  1379
  32 d:  1433

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 4K

SVT-AV1 1.8 - Frames Per Second, More Is Better
  32:    185.67
  32 z:  184.98
  32 c:  183.90
  32 d:  184.10
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 4K

SVT-AV1 1.8 - Frames Per Second, More Is Better
  32:    186.63
  32 z:  185.56
  32 c:  180.96
  32 d:  186.37
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

DaCapo Benchmark

Java Test: PMD Source Code Analyzer

DaCapo Benchmark 23.11 - msec, Fewer Is Better
  32:    1784
  32 z:  1820
  32 c:  1966
  32 d:  1833

TensorFlow

Device: CPU - Batch Size: 16 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better
  32:    272.93
  32 z:  274.97
  32 c:  274.97
  32 d:  276.19

DaCapo Benchmark

Java Test: Batik SVG Toolkit

DaCapo Benchmark 23.11 - msec, Fewer Is Better
  32:    1733
  32 z:  1723
  32 c:  1718
  32 d:  1738

TensorFlow

Device: CPU - Batch Size: 1 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better
  32:    28.99
  32 z:  28.71
  32 c:  27.73
  32 d:  28.79

TensorFlow

Device: CPU - Batch Size: 1 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better
  32:    32.12
  32 z:  31.92
  32 c:  33.14
  32 d:  33.02

DaCapo Benchmark

Java Test: FOP Print Formatter

DaCapo Benchmark 23.11 - msec, Fewer Is Better
  32:    751
  32 z:  696
  32 c:  764
  32 d:  758

DaCapo Benchmark

Java Test: Apache Xalan XSLT

DaCapo Benchmark 23.11 - msec, Fewer Is Better
  32:    871
  32 z:  859
  32 c:  852
  32 d:  861

DaCapo Benchmark

Java Test: Zxing 1D/2D Barcode Image Processing

DaCapo Benchmark 23.11 - msec, Fewer Is Better
  32:    609
  32 z:  599
  32 c:  569
  32 d:  599

CPU Power Consumption Monitor

Phoronix Test Suite System Monitoring

CPU Power Consumption Monitor (Phoronix Test Suite System Monitoring) - Watts
  Zen 1 - EPYC 7601:  Min: 242.58 / Avg: 585.92 / Max: 718
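The monitor line above condenses a stream of periodic wattage readings, taken while the tests ran, into a minimum, average, and maximum. A minimal sketch of that summarization, assuming simple per-interval samples (the readings below are hypothetical):

    def summarize_power(samples_watts):
        # Collapse periodic wattage readings into the Min / Avg / Max form shown above.
        return min(samples_watts), sum(samples_watts) / len(samples_watts), max(samples_watts)

    # Hypothetical readings polled once per second during a run:
    print(summarize_power([242.6, 512.3, 640.0, 718.0, 655.7]))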

Meta Performance Per Watts

Performance Per Watts

Meta Performance Per Watts - Performance Per Watts, More Is Better
  Zen 1 - EPYC 7601:  13064001.66
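The composite performance-per-watt figure relates aggregate benchmark performance to the average power recorded by the monitor above. A minimal sketch, assuming it is simply a "More Is Better" score divided by average watts (the score below is hypothetical, not the one used for this result):

    def performance_per_watt(score, average_watts):
        # Generic efficiency metric: higher is better for a 'More Is Better' score.
        return score / average_watts

    # Hypothetical composite score against a roughly 586 W average draw:
    print(round(performance_per_watt(6_000_000_000, 586.0), 2))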

Y-Cruncher

CPU Power Consumption Monitor

Y-Cruncher 0.8.3 - CPU Power Consumption Monitor - Watts, Fewer Is Better
  Zen 1 - EPYC 7601:  Min: 263 / Avg: 602 / Max: 718

Y-Cruncher

CPU Power Consumption Monitor

Y-Cruncher 0.8.3 - CPU Power Consumption Monitor - Watts, Fewer Is Better
  Zen 1 - EPYC 7601:  Min: 262 / Avg: 543 / Max: 712

Quicksilver

CPU Power Consumption Monitor

Quicksilver 20230818 - CPU Power Consumption Monitor - Watts, Fewer Is Better
  Zen 1 - EPYC 7601:  Min: 258.88 / Avg: 624.15 / Max: 662.04

Quicksilver

Input: CTS2

Quicksilver 20230818 - Figure Of Merit Per Watt, More Is Better
  Zen 1 - EPYC 7601:  18307.66

Quicksilver

CPU Power Consumption Monitor

Quicksilver 20230818 - CPU Power Consumption Monitor - Watts, Fewer Is Better
  Zen 1 - EPYC 7601:  Min: 255.2 / Avg: 553.7 / Max: 594.9

Quicksilver

Input: CORAL2 P2

Quicksilver 20230818 - Figure Of Merit Per Watt, More Is Better
  Zen 1 - EPYC 7601:  27116.87

Quicksilver

CPU Power Consumption Monitor

Quicksilver 20230818 - CPU Power Consumption Monitor - Watts, Fewer Is Better
  Zen 1 - EPYC 7601:  Min: 243 / Avg: 584 / Max: 648

Quicksilver

Input: CORAL2 P1

Quicksilver 20230818 - Figure Of Merit Per Watt, More Is Better
  Zen 1 - EPYC 7601:  22248.55
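The three Quicksilver entries above are efficiency figures: each input's Figure Of Merit set against the power drawn while it ran. A hedged sketch of that relationship, assuming the divisor is the average wattage from the matching monitor line (the pairing and the raw FOM below are illustrative, not stated in this export):

    def figure_of_merit_per_watt(raw_fom, average_watts):
        # Efficiency: Quicksilver figure of merit divided by average package power.
        return raw_fom / average_watts

    # Illustrative: a raw FOM around 1.6e7 at roughly 590 W would land near the values above.
    print(round(figure_of_merit_per_watt(1.6e7, 590.0), 2))  # about 27118.64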


Phoronix Test Suite v10.8.4