new-tests

Tests for a future article. AMD EPYC 8324P 32-Core testing with an AMD Cinnabar (RCB1009C BIOS) and ASPEED on Ubuntu 23.10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2401110-NE-NEWTESTS900&sor&grr.
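The result ID in the URL above can also be used to rerun this comparison locally through the Phoronix Test Suite. The snippet below is a minimal sketch of that workflow and is not part of the exported result; it assumes phoronix-test-suite is installed and on the PATH, and simply shells out to it from Python.

    import subprocess

    # Public result ID taken from the openbenchmarking.org URL above.
    RESULT_ID = "2401110-NE-NEWTESTS900"

    # "phoronix-test-suite benchmark <result-id>" fetches the matching test
    # profiles and runs them locally for comparison against this result.
    subprocess.run(["phoronix-test-suite", "benchmark", RESULT_ID], check=True)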

Systems under test: Zen 1 - EPYC 7601, b, c, 32, 32 z, 32 c, 32 d

Zen 1 - EPYC 7601: AMD EPYC 7601 32-Core @ 2.20GHz (32 Cores / 64 Threads), TYAN B8026T70AE24HR (V1.02.B10 BIOS), AMD 17h chipset, 128GB memory, 280GB INTEL SSDPE21D280GA + 1000GB INTEL SSDPE2KX010T8, llvmpipe graphics, VE228 monitor, 2 x Broadcom NetXtreme BCM5720 PCIe network, 1920x1080 screen resolution.

b: AMD EPYC 8534PN 64-Core @ 2.00GHz (64 Cores / 128 Threads). c: AMD EPYC 8534PN 32-Core @ 2.05GHz (32 Cores / 64 Threads). 32, 32 z, 32 c, 32 d: AMD EPYC 8324P 32-Core @ 2.65GHz (32 Cores / 64 Threads). These six configurations share the AMD Cinnabar (RCB1009C BIOS) motherboard, AMD Device 14a4 chipset, 6 x 32 GB DRAM-4800MT/s Samsung M321R4GA0BB0-CQKMG memory, 1000GB INTEL SSDPE2KX010T8 disk and a 1920x1200 screen resolution; ASPEED graphics is listed for c, 32, 32 z, 32 c and 32 d.

All configurations: Ubuntu 23.10, 6.6.9-060609-generic (x86_64) kernel, GNOME Shell 45.0, X Server 1.21.1.7, OpenGL 4.5 Mesa 23.2.1-1ubuntu3.1 (LLVM 15.0.7 256 bits), GCC 13.2.0, ext4 file-system.

Kernel Details
- Transparent Huge Pages: madvise

Compiler Details
- --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details
- Zen 1 - EPYC 7601: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0x800126e
- b, c, 32, 32 z, 32 c, 32 d: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xaa00212

Security Details
- Zen 1 - EPYC 7601: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT vulnerable + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: disabled RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
- b, c, 32, 32 z, 32 c, 32 d (identical for all six): gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Java Details
- 32, 32 z, 32 c, 32 d: OpenJDK Runtime Environment (build 11.0.21+9-post-Ubuntu-0ubuntu123.10)

Python Details
- 32, 32 z, 32 c, 32 d: Python 3.11.6
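The mitigation strings in the Security Details above are the kernel's own reporting from sysfs. As an illustrative sketch (not part of the original export), the same strings can be collected on any recent Linux kernel like this, assuming Python 3 is available:

    from pathlib import Path

    # Each file under this directory is named after a CPU vulnerability
    # (meltdown, spectre_v1, spectre_v2, retbleed, ...) and holds the kernel's
    # one-line mitigation status, as quoted in the Security Details section.
    VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

    for entry in sorted(VULN_DIR.iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")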

[Condensed results overview table: one row per test from quicksilver, build-linux-kernel, blender, pytorch, build-gem5, xmrig, ffmpeg, ospray-studio, llama-cpp, cachebench, openfoam, deepsparse, tensorflow, openvino, quantlib, speedb, rocksdb, dacapobench, compress-7zip, svt-av1, embree, y-cruncher and build-ffmpeg, with the figure for each configuration. The same data appears in the individual per-test entries below.]
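To condense a side-by-side comparison like the one above into a single score per configuration, results are usually normalized so that higher-is-better and lower-is-better metrics can be combined, and a geometric mean is taken over the normalized ratios. The sketch below only illustrates that arithmetic with two values quoted from the per-test entries further down (Quicksilver CTS2 and the Timed Linux Kernel allmodconfig build for the "32" and "32 c" runs); it is a toy illustration, not output from the Phoronix Test Suite.

    from math import prod

    # (value for "32", value for "32 c", True if more is better), taken from
    # the Quicksilver CTS2 and Timed Linux Kernel allmodconfig entries below.
    results = [
        (14320000, 14430000, True),   # Quicksilver CTS2, Figure Of Merit
        (433.79, 453.69, False),      # Kernel allmodconfig build, seconds
    ]

    def ratio(a, b, more_is_better):
        # Relative score of "32" versus "32 c"; > 1.0 means "32" is faster.
        return a / b if more_is_better else b / a

    scores = [ratio(a, b, m) for a, b, m in results]
    geomean = prod(scores) ** (1 / len(scores))
    print(f"'32' vs '32 c' geometric mean over {len(scores)} tests: {geomean:.3f}")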

Quicksilver

Input: CTS2

OpenBenchmarking.org - Quicksilver 20230818 - Figure Of Merit, More Is Better - SE +/- 16666.67, N = 3
b: 16270000 | c: 16260000 | 32 c: 14430000 | 32: 14320000 | 32 z: 14290000 | 32 d: 14280000 | Zen 1 - EPYC 7601: 11426667
1. (CXX) g++ options: -fopenmp -O3 -march=native

Timed Linux Kernel Compilation

Build: allmodconfig

OpenBenchmarking.org - Timed Linux Kernel Compilation 6.1 - Seconds, Fewer Is Better
32: 433.79 | 32 z: 434.19 | 32 d: 452.61 | 32 c: 453.69

Blender

Blend File: Barbershop - Compute: CPU-Only

OpenBenchmarking.org - Blender 4.0 - Seconds, Fewer Is Better
32 z: 410.43 | 32: 410.61 | 32 c: 426.30 | 32 d: 426.37

Quicksilver

Input: CORAL2 P2

OpenBenchmarking.org - Quicksilver 20230818 - Figure Of Merit, More Is Better - SE +/- 37118.43, N = 3
c: 16150000 | b: 16140000 | 32: 15350000 | 32 z: 15230000 | 32 c: 15180000 | 32 d: 15100000 | Zen 1 - EPYC 7601: 15013333
1. (CXX) g++ options: -fopenmp -O3 -march=native

PyTorch

Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l

OpenBenchmarking.org - PyTorch 2.1 - batches/sec, More Is Better
32 c: 7.18 (MIN: 4.37 / MAX: 7.37) | 32: 7.17 (MIN: 4.45 / MAX: 7.33) | 32 d: 7.15 (MIN: 4.34 / MAX: 7.3) | 32 z: 7.11 (MIN: 4.25 / MAX: 7.26)

Timed Gem5 Compilation

Time To Compile

OpenBenchmarking.org - Timed Gem5 Compilation 23.0.1 - Seconds, Fewer Is Better
32: 254.01 | 32 c: 258.31 | 32 d: 258.93 | 32 z: 272.61

Xmrig

Variant: GhostRider - Hash Count: 1M

OpenBenchmarking.org - Xmrig 6.21 - H/s, More Is Better
32 c: 4136.3 | 32 d: 4095.7 | 32: 4067.4 | 32 z: 4038.6
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Quicksilver

Input: CORAL2 P1

OpenBenchmarking.org - Quicksilver 20230818 - Figure Of Merit, More Is Better - SE +/- 66916.20, N = 3
c: 21250000 | b: 21180000 | 32 d: 18840000 | 32: 18790000 | 32 z: 18760000 | Zen 1 - EPYC 7601: 12996667 | 32 c: 10400000
1. (CXX) g++ options: -fopenmp -O3 -march=native

FFmpeg

Encoder: libx265 - Scenario: Upload

OpenBenchmarking.org - FFmpeg 6.1 - FPS, More Is Better
32: 22.28 | 32 d: 22.22 | 32 c: 22.21 | 32 z: 22.20
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Platform

OpenBenchmarking.org - FFmpeg 6.1 - FPS, More Is Better
32 c: 45.13 | 32: 45.13 | 32 z: 45.05 | 32 d: 44.97
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Video On Demand

OpenBenchmarking.org - FFmpeg 6.1 - FPS, More Is Better
32: 45.18 | 32 d: 45.10 | 32 z: 45.08 | 32 c: 44.95
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OSPRay Studio

Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org - OSPRay Studio 0.13 - ms, Fewer Is Better
32 z: 136312 | 32: 136464 | 32 d: 139445 | 32 c: 139685

Llama.cpp

Model: llama-2-70b-chat.Q5_0.gguf

OpenBenchmarking.org - Llama.cpp b1808 - Tokens Per Second, More Is Better
32 d: 3.42 | 32 c: 3.42 | 32: 3.42 | 32 z: 3.41
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -march=native -mtune=native -lopenblas

Blender

Blend File: Pabellon Barcelona - Compute: CPU-Only

OpenBenchmarking.org - Blender 4.0 - Seconds, Fewer Is Better
32 z: 138.60 | 32: 139.09 | 32 d: 148.56 | 32 c: 148.74

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-152

OpenBenchmarking.org - PyTorch 2.1 - batches/sec, More Is Better
32: 15.61 (MIN: 6.89 / MAX: 15.74) | 32 z: 15.51 (MIN: 7.3 / MAX: 15.63) | 32 d: 15.35 (MIN: 8.86 / MAX: 15.52) | 32 c: 15.32 (MIN: 6.91 / MAX: 15.45)

OSPRay Studio

Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org - OSPRay Studio 0.13 - ms, Fewer Is Better
32: 116566 | 32 z: 116972 | 32 c: 118980 | 32 d: 119783

OSPRay Studio

Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org - OSPRay Studio 0.13 - ms, Fewer Is Better
32 z: 115669 | 32: 116377 | 32 c: 118221 | 32 d: 118802

CacheBench

Test: Read / Modify / Write

OpenBenchmarking.org - CacheBench - MB/s, More Is Better
32 d: 87854.12 (MIN: 72077.93 / MAX: 90708.03) | 32 c: 87238.01 (MIN: 65732.92 / MAX: 90706.91) | 32: 87227.59 (MIN: 65739.52 / MAX: 90694.35) | 32 z: 87218.21 (MIN: 65721.62 / MAX: 90703.93)
1. (CC) gcc options: -O3 -lrt

CacheBench

Test: Write

OpenBenchmarking.org - CacheBench - MB/s, More Is Better
32 z: 45646.82 (MIN: 45482.27 / MAX: 45698.03) | 32: 45646.09 (MIN: 45484.29 / MAX: 45698.11) | 32 c: 45645.09 (MIN: 45483.02 / MAX: 45696.19) | 32 d: 45643.04 (MIN: 45482.26 / MAX: 45696.12)
1. (CC) gcc options: -O3 -lrt

CacheBench

Test: Read

OpenBenchmarking.org - CacheBench - MB/s, More Is Better
32 z: 7616.33 (MIN: 7615.95 / MAX: 7616.74) | 32: 7616.09 (MIN: 7615.65 / MAX: 7616.54) | 32 c: 7615.95 (MIN: 7615.46 / MAX: 7616.35) | 32 d: 7615.83 (MIN: 7615.4 / MAX: 7616.44)
1. (CC) gcc options: -O3 -lrt

Blender

Blend File: Classroom - Compute: CPU-Only

OpenBenchmarking.org - Blender 4.0 - Seconds, Fewer Is Better
32: 112.03 | 32 z: 112.09 | 32 d: 119.57 | 32 c: 119.72

PyTorch

Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l

OpenBenchmarking.org - PyTorch 2.1 - batches/sec, More Is Better
32 d: 10.21 (MIN: 5.69 / MAX: 10.32) | 32 c: 10.04 (MIN: 5.86 / MAX: 10.23) | 32: 9.85 (MIN: 5.1 / MAX: 9.99) | 32 z: 9.82 (MIN: 5.63 / MAX: 10.05)

OpenFOAM

Input: drivaerFastback, Small Mesh Size - Execution Time

OpenBenchmarking.org - OpenFOAM 10 - Seconds, Fewer Is Better
32 z: 71.20 | 32 d: 72.31 | 32 c: 72.38 | 32: 72.81
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OpenFOAM

Input: drivaerFastback, Small Mesh Size - Mesh Time

OpenBenchmarking.org - OpenFOAM 10 - Seconds, Fewer Is Better
32: 28.37 | 32 c: 30.54 | 32 d: 30.72 | 32 z: 30.75
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OSPRay Studio

Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org - OSPRay Studio 0.13 - ms, Fewer Is Better
32: 71361 | 32 z: 71495 | 32 c: 73024 | 32 d: 73329

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
32: 607.94 | 32 z: 608.13 | 32 d: 611.44 | 32 c: 611.60

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - Neural Magic DeepSparse 1.6 - items/sec, More Is Better
32 z: 26.08 | 32: 26.06 | 32 c: 25.82 | 32 d: 25.79

OSPRay Studio

Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org - OSPRay Studio 0.13 - ms, Fewer Is Better
32 z: 4048 | 32: 4049 | 32 d: 4132 | 32 c: 4157

OSPRay Studio

Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org - OSPRay Studio 0.13 - ms, Fewer Is Better
32 z: 3446 | 32: 3451 | 32 c: 3515 | 32 d: 3522

OSPRay Studio

Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org - OSPRay Studio 0.13 - ms, Fewer Is Better
32: 3404 | 32 z: 3406 | 32 c: 3493 | 32 d: 3499

OSPRay Studio

Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org - OSPRay Studio 0.13 - ms, Fewer Is Better
32: 61987 | 32 z: 62113 | 32 d: 62787 | 32 c: 63402

OSPRay Studio

Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org - OSPRay Studio 0.13 - ms, Fewer Is Better
32: 60673 | 32 z: 61430 | 32 c: 62802 | 32 d: 63336

TensorFlow

Device: CPU - Batch Size: 16 - Model: VGG-16

OpenBenchmarking.org - TensorFlow 2.12 - images/sec, More Is Better
32 z: 25.20 | 32: 25.15 | 32 d: 24.51 | 32 c: 24.47

FFmpeg

Encoder: libx265 - Scenario: Live

OpenBenchmarking.org - FFmpeg 6.1 - FPS, More Is Better
32 z: 110.37 | 32 d: 110.29 | 32 c: 110.02 | 32: 109.84
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - ms, Fewer Is Better
32 z: 927.57 (MIN: 895.6 / MAX: 1019.94) | 32: 929.23 (MIN: 907.01 / MAX: 1013.02) | 32 c: 964.20 (MIN: 905.78 / MAX: 1053.38) | 32 d: 965.35 (MIN: 922.7 / MAX: 1047.5)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - FPS, More Is Better
32 z: 17.18 | 32: 17.17 | 32 d: 16.54 | 32 c: 16.51
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - ms, Fewer Is Better
32 z: 486.03 (MIN: 454.31 / MAX: 580.9) | 32: 486.65 (MIN: 465.68 / MAX: 570.73) | 32 c: 510.79 (MIN: 473.86 / MAX: 584.54) | 32 d: 510.90 (MIN: 470.7 / MAX: 595.97)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - FPS, More Is Better
32: 32.82 | 32 z: 32.81 | 32 c: 31.22 | 32 d: 31.20
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

QuantLib

Configuration: Multi-Threaded

OpenBenchmarking.org - QuantLib 1.32 - MFLOPS, More Is Better
32 z: 107381.6 | 32: 107079.2 | 32 c: 98916.2 | 32 d: 98618.7
1. (CXX) g++ options: -O3 -march=native -fPIE -pie

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-152

OpenBenchmarking.org - PyTorch 2.1 - batches/sec, More Is Better
32: 19.04 (MIN: 6.89 / MAX: 19.18) | 32 z: 18.92 (MIN: 7.59 / MAX: 19.04) | 32 d: 18.86 (MIN: 7.91 / MAX: 19.03) | 32 c: 18.86 (MIN: 10.78 / MAX: 19.02)

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - ms, Fewer Is Better
32: 105.48 (MIN: 82.05 / MAX: 167.92) | 32 d: 105.64 (MIN: 54.2 / MAX: 154.42) | 32 c: 106.43 (MIN: 80.87 / MAX: 199.77) | 32 z: 106.44 (MIN: 81.71 / MAX: 196.1)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - FPS, More Is Better
32: 151.45 | 32 d: 151.25 | 32 c: 150.07 | 32 z: 150.06
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - ms, Fewer Is Better
32 c: 105.91 (MIN: 82.12 / MAX: 188.16) | 32: 105.97 (MIN: 81.88 / MAX: 218.45) | 32 z: 106.24 (MIN: 81.06 / MAX: 185.99) | 32 d: 106.32 (MIN: 81.37 / MAX: 177.41)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - FPS, More Is Better
32 c: 150.84 | 32: 150.80 | 32 z: 150.37 | 32 d: 150.25
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - ms, Fewer Is Better
32 z: 79.39 (MIN: 43.97 / MAX: 186.13) | 32: 79.82 (MIN: 42.02 / MAX: 179.47) | 32 d: 81.87 (MIN: 52.13 / MAX: 175.84) | 32 c: 82.18 (MIN: 58.39 / MAX: 175.7)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - FPS, More Is Better
32 z: 201.15 | 32: 199.90 | 32 d: 195.05 | 32 c: 194.21
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - ms, Fewer Is Better
32: 9.12 (MIN: 6.22 / MAX: 56.95) | 32 z: 9.16 (MIN: 5.99 / MAX: 67.91) | 32 d: 9.37 (MIN: 6.07 / MAX: 71.06) | 32 c: 9.39 (MIN: 5.95 / MAX: 68.66)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - FPS, More Is Better
32: 1741.57 | 32 z: 1735.64 | 32 d: 1696.50 | 32 c: 1694.01
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - ms, Fewer Is Better
32: 23.95 (MIN: 13.94 / MAX: 114.01) | 32 z: 23.95 (MIN: 15.19 / MAX: 90.71) | 32 d: 25.16 (MIN: 19.24 / MAX: 86.7) | 32 c: 25.22 (MIN: 21.61 / MAX: 89.16)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - FPS, More Is Better
32 z: 666.30 | 32: 666.22 | 32 d: 634.50 | 32 c: 632.92
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Road Segmentation ADAS FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - ms, Fewer Is Better
32 z: 27.53 (MIN: 18.86 / MAX: 82.58) | 32: 27.69 (MIN: 18.56 / MAX: 147.54) | 32 c: 28.77 (MIN: 17.12 / MAX: 135.79) | 32 d: 28.82 (MIN: 19.39 / MAX: 99.16)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Road Segmentation ADAS FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - FPS, More Is Better
32 z: 579.41 | 32: 576.18 | 32 c: 554.68 | 32 d: 553.65
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - ms, Fewer Is Better
32: 5.41 (MIN: 3.17 / MAX: 57.08) | 32 z: 5.42 (MIN: 3.15 / MAX: 67.23) | 32 c: 5.78 (MIN: 3.21 / MAX: 58.78) | 32 d: 5.78 (MIN: 3.37 / MAX: 65.27)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - FPS, More Is Better
32 z: 5751.58 | 32: 5747.65 | 32 d: 5423.13 | 32 c: 5416.31
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - ms, Fewer Is Better
32: 42.87 (MIN: 35.14 / MAX: 107.5) | 32 z: 43.71 (MIN: 35.06 / MAX: 153.84) | 32 c: 46.17 (MIN: 39.81 / MAX: 161.92) | 32 d: 46.28 (MIN: 30.15 / MAX: 108.49)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - FPS, More Is Better
32: 745.00 | 32 z: 730.82 | 32 c: 692.02 | 32 d: 690.24
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Handwritten English Recognition FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - ms, Fewer Is Better
32: 35.51 (MIN: 22.8 / MAX: 100.53) | 32 z: 35.59 (MIN: 24.72 / MAX: 147.24) | 32 c: 37.40 (MIN: 27.33 / MAX: 92.33) | 32 d: 37.61 (MIN: 24.11 / MAX: 127.49)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Handwritten English Recognition FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - FPS, More Is Better
32: 898.60 | 32 z: 896.69 | 32 c: 853.38 | 32 d: 848.62
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - ms, Fewer Is Better
32 z: 8.05 (MIN: 4.56 / MAX: 76.48) | 32: 8.07 (MIN: 4.55 / MAX: 69.24) | 32 c: 8.52 (MIN: 4.97 / MAX: 67.6) | 32 d: 8.52 (MIN: 4.8 / MAX: 75.53)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - FPS, More Is Better
32 z: 1964.99 | 32: 1960.18 | 32 d: 1862.24 | 32 c: 1860.99
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - ms, Fewer Is Better
32 z: 0.47 (MIN: 0.27 / MAX: 64.47) | 32: 0.48 (MIN: 0.27 / MAX: 50.11) | 32 c: 0.48 (MIN: 0.27 / MAX: 50.17) | 32 d: 0.48 (MIN: 0.27 / MAX: 65.55)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - FPS, More Is Better
32 z: 52475.39 | 32: 52441.94 | 32 c: 52382.31 | 32 d: 52344.60
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - ms, Fewer Is Better
32: 0.65 (MIN: 0.36 / MAX: 51.48) | 32 z: 0.66 (MIN: 0.36 / MAX: 65.79) | 32 c: 0.67 (MIN: 0.36 / MAX: 62.87) | 32 d: 0.67 (MIN: 0.36 / MAX: 50.74)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - FPS, More Is Better
32: 40123.62 | 32 z: 40101.80 | 32 d: 39843.05 | 32 c: 39562.87
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - ms, Fewer Is Better
32 z: 13.29 (MIN: 8.3 / MAX: 73.59) | 32: 13.36 (MIN: 7.26 / MAX: 78.85) | 32 c: 13.65 (MIN: 9.08 / MAX: 67.03) | 32 d: 13.65 (MIN: 6.73 / MAX: 75.18)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - FPS, More Is Better
32 z: 1197.46 | 32: 1190.42 | 32 d: 1166.83 | 32 c: 1166.56
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - ms, Fewer Is Better
32: 18.69 (MIN: 9.97 / MAX: 81.33) | 32 z: 18.69 (MIN: 9.78 / MAX: 86.93) | 32 d: 19.56 (MIN: 13.73 / MAX: 73.6) | 32 c: 19.58 (MIN: 10.24 / MAX: 83.63)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - FPS, More Is Better
32: 1704.26 | 32 z: 1704.02 | 32 d: 1628.91 | 32 c: 1627.93
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Speedb

Test: Update Random

OpenBenchmarking.org - Speedb 2.7 - Op/s, More Is Better
32 c: 317758 | 32: 314123 | 32 z: 314114 | 32 d: 313683
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenVINO

Model: Face Detection Retail FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - ms, Fewer Is Better
32 z: 3.90 (MIN: 2.18 / MAX: 64.81) | 32: 3.91 (MIN: 2.2 / MAX: 72.73) | 32 c: 4.03 (MIN: 2.23 / MAX: 54.09) | 32 d: 4.03 (MIN: 2.23 / MAX: 62.26)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection Retail FP16 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - FPS, More Is Better
32 z: 3924.86 | 32: 3921.50 | 32 c: 3877.91 | 32 d: 3869.70
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - ms, Fewer Is Better
32: 9.56 (MIN: 5.1 / MAX: 77.12) | 32 z: 9.56 (MIN: 5.09 / MAX: 75.37) | 32 d: 10.21 (MIN: 5.17 / MAX: 61.15) | 32 c: 10.22 (MIN: 5.48 / MAX: 68.07)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org - OpenVINO 2023.2.dev - FPS, More Is Better
32: 3300.99 | 32 z: 3299.93 | 32 d: 3100.95 | 32 c: 3099.20
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Speedb

Test: Read While Writing

OpenBenchmarking.org - Speedb 2.7 - Op/s, More Is Better
32 c: 7746346 | 32: 7457600 | 32 z: 7210235 | 32 d: 7105602
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Update Random

OpenBenchmarking.org - RocksDB 8.0 - Op/s, More Is Better
32 z: 636242 | 32 c: 633688 | 32: 630575 | 32 d: 630478
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Read Random Write Random

OpenBenchmarking.org - Speedb 2.7 - Op/s, More Is Better
32 z: 2259344 | 32: 2231403 | 32 c: 2229494 | 32 d: 2215896
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Random Read

OpenBenchmarking.org - Speedb 2.7 - Op/s, More Is Better
32: 179685954 | 32 z: 179434924 | 32 d: 163512432 | 32 c: 163202721
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Read Random Write Random

OpenBenchmarking.org - RocksDB 8.0 - Op/s, More Is Better
32: 2373654 | 32 z: 2361270 | 32 d: 2351568 | 32 c: 2327800
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Read While Writing

OpenBenchmarking.org - RocksDB 8.0 - Op/s, More Is Better
32 c: 4419497 | 32 z: 4364996 | 32: 4284691 | 32 d: 4244478
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Random Read

OpenBenchmarking.org - RocksDB 8.0 - Op/s, More Is Better
32 z: 177167636 | 32: 176770468 | 32 d: 160707812 | 32 c: 160665305
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

DaCapo Benchmark

Java Test: Apache Cassandra

OpenBenchmarking.org - DaCapo Benchmark 23.11 - msec, Fewer Is Better
32 d: 5927 | 32 z: 5938 | 32: 5946 | 32 c: 5955

Blender

Blend File: Fishy Cat - Compute: CPU-Only

OpenBenchmarking.org - Blender 4.0 - Seconds, Fewer Is Better
32 z: 55.54 | 32: 55.65 | 32 c: 59.58 | 32 d: 59.79

Xmrig

Variant: Monero - Hash Count: 1M

OpenBenchmarking.org - Xmrig 6.21 - H/s, More Is Better
32 c: 18897.5 | 32 d: 18866.1 | 32: 18845.5 | 32 z: 18763.8
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: CryptoNight-Femto UPX2 - Hash Count: 1M

OpenBenchmarking.org - Xmrig 6.21 - H/s, More Is Better
32 z: 18909.0 | 32 c: 18887.5 | 32: 18860.1 | 32 d: 18818.6
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: KawPow - Hash Count: 1M

OpenBenchmarking.org - Xmrig 6.21 - H/s, More Is Better
32 z: 18961.3 | 32 c: 18947.3 | 32 d: 18901.1 | 32: 18777.2
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: CryptoNight-Heavy - Hash Count: 1M

OpenBenchmarking.org - Xmrig 6.21 - H/s, More Is Better
32: 19004.5 | 32 z: 18936.5 | 32 d: 18924.0 | 32 c: 18783.9
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Timed Linux Kernel Compilation

Build: defconfig

OpenBenchmarking.org - Timed Linux Kernel Compilation 6.1 - Seconds, Fewer Is Better
32 z: 52.01 | 32: 52.13 | 32 c: 53.62 | 32 d: 53.63

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-50

OpenBenchmarking.org - PyTorch 2.1 - batches/sec, More Is Better
32 c: 40.32 (MIN: 15.51 / MAX: 40.87) | 32 d: 40.31 (MIN: 15.27 / MAX: 40.73) | 32: 40.19 (MIN: 15.55 / MAX: 40.67) | 32 z: 39.96 (MIN: 15.13 / MAX: 40.53)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
32: 19.11 | 32 z: 19.13 | 32 c: 19.58 | 32 d: 19.59

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - Neural Magic DeepSparse 1.6 - items/sec, More Is Better
32: 836.42 | 32 z: 835.26 | 32 c: 816.28 | 32 d: 815.98

DaCapo Benchmark

Java Test: Eclipse

OpenBenchmarking.org - DaCapo Benchmark 23.11 - msec, Fewer Is Better
32: 12656 | 32 z: 12735 | 32 d: 12768 | 32 c: 12826

Blender

Blend File: BMW27 - Compute: CPU-Only

OpenBenchmarking.org - Blender 4.0 - Seconds, Fewer Is Better
32 z: 44.48 | 32: 44.73 | 32 d: 47.41 | 32 c: 47.52

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
32 z: 745.18 | 32: 747.07 | 32 d: 751.21 | 32 c: 753.12

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - Neural Magic DeepSparse 1.6 - items/sec, More Is Better
32: 21.29 | 32 z: 21.27 | 32 d: 21.09 | 32 c: 20.87

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
32 z: 41.44 | 32 c: 41.59 | 32: 41.63 | 32 d: 41.84

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - Neural Magic DeepSparse 1.6 - items/sec, More Is Better
32 z: 385.65 | 32 c: 384.32 | 32: 383.97 | 32 d: 381.78

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
32 z: 746.13 | 32: 747.31 | 32 d: 750.40 | 32 c: 751.93

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - Neural Magic DeepSparse 1.6 - items/sec, More Is Better
32 z: 21.29 | 32: 21.23 | 32 d: 21.07 | 32 c: 21.04

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
32: 396.29 | 32 z: 397.96 | 32 d: 410.33 | 32 c: 411.34

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - Neural Magic DeepSparse 1.6 - items/sec, More Is Better
32: 40.17 | 32 z: 39.95 | 32 d: 38.83 | 32 c: 38.77

DaCapo Benchmark

Java Test: Apache Lucene Search Index

OpenBenchmarking.org - DaCapo Benchmark 23.11 - msec, Fewer Is Better
32 c: 4580 | 32 z: 4589 | 32 d: 4602 | 32: 4613

Xmrig

Variant: Wownero - Hash Count: 1M

OpenBenchmarking.org - Xmrig 6.21 - H/s, More Is Better
32 z: 25943.7 | 32: 25814.4 | 32 d: 25396.8 | 32 c: 25385.9
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

DaCapo Benchmark

Java Test: H2 Database Engine

OpenBenchmarking.org - DaCapo Benchmark 23.11 - msec, Fewer Is Better
32 d: 2634 | 32 z: 2655 | 32: 2675 | 32 c: 2773

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
32 z: 87.33 | 32: 87.49 | 32 c: 88.20 | 32 d: 88.23

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - Neural Magic DeepSparse 1.6 - items/sec, More Is Better
32 z: 182.76 | 32: 182.51 | 32 d: 181.12 | 32 c: 181.10

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

OpenBenchmarking.org - TensorFlow 2.12 - images/sec, More Is Better
32 z: 51.57 | 32 c: 51.56 | 32 d: 51.49 | 32: 51.34

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
32 z: 129.80 | 32: 129.87 | 32 c: 130.48 | 32 d: 130.79

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - Neural Magic DeepSparse 1.6 - items/sec, More Is Better
32: 123.02 | 32 z: 122.96 | 32 c: 122.33 | 32 d: 121.80

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
32: 128.82 | 32 z: 128.85 | 32 c: 129.54 | 32 d: 129.81

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - Neural Magic DeepSparse 1.6 - items/sec, More Is Better
32: 123.88 | 32 z: 123.79 | 32 c: 123.15 | 32 d: 122.93

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
32: 59.86 | 32 z: 59.87 | 32 c: 59.90 | 32 d: 59.97

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - Neural Magic DeepSparse 1.6 - items/sec, More Is Better
32 z: 266.98 | 32: 266.86 | 32 c: 266.84 | 32 d: 266.53

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
32: 7.2332 | 32 z: 7.2610 | 32 c: 7.2738 | 32 d: 7.2896

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - Neural Magic DeepSparse 1.6 - items/sec, More Is Better
32: 2208.15 | 32 z: 2199.49 | 32 c: 2195.92 | 32 d: 2189.07

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - Neural Magic DeepSparse 1.6 - ms/batch, Fewer Is Better
32 z: 59.67 | 32: 59.88 | 32 d: 60.03 | 32 c: 60.06

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - Neural Magic DeepSparse 1.6 - items/sec, More Is Better
32 z: 267.84 | 32: 266.88 | 32 d: 266.28 | 32 c: 266.03

DaCapo Benchmark

Java Test: Tradebeans

OpenBenchmarking.org - DaCapo Benchmark 23.11 - msec, Fewer Is Better
32 d: 8380 | 32 c: 8520 | 32: 8561 | 32 z: 8600

7-Zip Compression

Test: Decompression Rating

OpenBenchmarking.org - 7-Zip Compression 22.01 - MIPS, More Is Better
32: 212209 | 32 c: 211815 | 32 z: 211584 | 32 d: 211383
1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression

Test: Compression Rating

OpenBenchmarking.org - 7-Zip Compression 22.01 - MIPS, More Is Better
32 z: 242399 | 32: 241545 | 32 d: 241191 | 32 c: 240287
1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 4K

OpenBenchmarking.org - SVT-AV1 1.8 - Frames Per Second, More Is Better
32 d: 5.977 | 32 z: 5.899 | 32 c: 5.829 | 32: 5.801
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Embree

Binary: Pathtracer - Model: Asian Dragon Obj

OpenBenchmarking.org - Embree 4.3 - Frames Per Second, More Is Better
32 c: 37.44 (MIN: 37.24 / MAX: 37.71) | 32 d: 37.41 (MIN: 37.22 / MAX: 37.69) | 32: 37.28 (MIN: 37.09 / MAX: 37.7) | 32 z: 36.86 (MIN: 36.67 / MAX: 37.11)

Llama.cpp

Model: llama-2-13b.Q4_0.gguf

OpenBenchmarking.org - Llama.cpp b1808 - Tokens Per Second, More Is Better
32 d: 18.08 | 32: 17.94 | 32 c: 17.87 | 32 z: 17.87
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -march=native -mtune=native -lopenblas

Y-Cruncher

Pi Digits To Calculate: 1B

OpenBenchmarking.org - Y-Cruncher 0.8.3 - Seconds, Fewer Is Better - SE +/- 0.09, N = 3
b: 10.42 | c: 10.48 | 32 z: 11.60 | 32: 11.68 | 32 c: 11.90 | 32 d: 11.98 | Zen 1 - EPYC 7601: 33.92

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

OpenBenchmarking.org - Embree 4.3 - Frames Per Second, More Is Better
32 d: 39.14 (MIN: 38.92 / MAX: 39.84) | 32 z: 39.11 (MIN: 38.88 / MAX: 39.43) | 32 c: 39.00 (MIN: 38.78 / MAX: 39.64) | 32: 38.94 (MIN: 38.69 / MAX: 39.29)

DaCapo Benchmark

Java Test: Tradesoap

OpenBenchmarking.org - DaCapo Benchmark 23.11 - msec, Fewer Is Better
32 d: 5149 | 32 z: 5168 | 32 c: 5366 | 32: 5403

DaCapo Benchmark

Java Test: BioJava Biological Data Framework

OpenBenchmarking.org - DaCapo Benchmark 23.11 - msec, Fewer Is Better
32 z: 7858 | 32: 7874 | 32 c: 7904 | 32 d: 7907

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-50

OpenBenchmarking.org - PyTorch 2.1 - batches/sec, More Is Better
32 d: 53.30 (MIN: 50.97 / MAX: 53.84) | 32 c: 53.00 (MIN: 50.62 / MAX: 53.51) | 32 z: 52.78 (MIN: 17.43 / MAX: 53.32) | 32: 52.44 (MIN: 15.02 / MAX: 53.14)

Timed FFmpeg Compilation

Time To Compile

Timed FFmpeg Compilation 6.1 - Seconds, Fewer Is Better
32: 23.56 | 32 z: 23.76 | 32 d: 24.30 | 32 c: 24.45

DaCapo Benchmark

Java Test: Jython

DaCapo Benchmark 23.11 - msec, Fewer Is Better
32: 6703 | 32 d: 6769 | 32 z: 6773 | 32 c: 6865

DaCapo Benchmark

Java Test: jMonkeyEngine

DaCapo Benchmark 23.11 - msec, Fewer Is Better
32: 6914 | 32 d: 6916 | 32 z: 6917 | 32 c: 6917

DaCapo Benchmark

Java Test: GraphChi

DaCapo Benchmark 23.11 - msec, Fewer Is Better
32: 3536 | 32 c: 3538 | 32 z: 3630 | 32 d: 3656

Embree

Binary: Pathtracer - Model: Crown

Embree 4.3 - Frames Per Second, More Is Better
32 z: 37.25 (MIN: 36.89 / MAX: 37.75) | 32: 36.96 (MIN: 36.61 / MAX: 37.43) | 32 d: 36.28 (MIN: 35.88 / MAX: 37.13) | 32 c: 35.91 (MIN: 35.53 / MAX: 37.08)

Embree

Binary: Pathtracer ISPC - Model: Crown

Embree 4.3 - Frames Per Second, More Is Better
32 z: 37.68 (MIN: 37.25 / MAX: 38.37) | 32: 37.30 (MIN: 36.86 / MAX: 38.04) | 32 c: 37.00 (MIN: 36.53 / MAX: 38.11) | 32 d: 36.94 (MIN: 36.46 / MAX: 37.76)

DaCapo Benchmark

Java Test: H2O In-Memory Platform For Machine Learning

DaCapo Benchmark 23.11 - msec, Fewer Is Better
32 d: 3755 | 32 z: 3868 | 32: 3974 | 32 c: 3979

Llama.cpp

Model: llama-2-7b.Q4_0.gguf

Llama.cpp b1808 - Tokens Per Second, More Is Better
32 z: 29.90 | 32 d: 29.85 | 32: 29.75 | 32 c: 29.74
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -march=native -mtune=native -lopenblas

DaCapo Benchmark

Java Test: Apache Kafka

DaCapo Benchmark 23.11 - msec, Fewer Is Better
32: 5110 | 32 c: 5111 | 32 d: 5114 | 32 z: 5121

Embree

Binary: Pathtracer - Model: Asian Dragon

Embree 4.3 - Frames Per Second, More Is Better
32 z: 41.82 (MIN: 41.6 / MAX: 42.16) | 32: 41.60 (MIN: 41.36 / MAX: 41.86) | 32 c: 41.57 (MIN: 41.37 / MAX: 41.9) | 32 d: 41.56 (MIN: 41.33 / MAX: 41.84)

TensorFlow

Device: CPU - Batch Size: 1 - Model: ResNet-50

TensorFlow 2.12 - images/sec, More Is Better
32 z: 8.77 | 32: 8.74 | 32 c: 8.61 | 32 d: 8.59

DaCapo Benchmark

Java Test: Avrora AVR Simulation Framework

DaCapo Benchmark 23.11 - msec, Fewer Is Better
32 z: 5441 | 32 c: 5561 | 32 d: 5572 | 32: 5613

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

Embree 4.3 - Frames Per Second, More Is Better
32 z: 46.31 (MIN: 46.05 / MAX: 46.74) | 32: 45.94 (MIN: 45.66 / MAX: 46.38) | 32 d: 45.65 (MIN: 45.37 / MAX: 46.89) | 32 c: 45.46 (MIN: 45.22 / MAX: 46.6)

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 4K

SVT-AV1 1.8 - Frames Per Second, More Is Better
32 z: 58.72 | 32 d: 58.64 | 32: 48.45 | 32 c: 47.25
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Y-Cruncher

Pi Digits To Calculate: 500M

Y-Cruncher 0.8.3 - Seconds, Fewer Is Better (SE +/- 0.118, N = 3)
b: 5.202 | c: 5.213 | 32: 5.656 | 32 z: 5.685 | 32 d: 5.751 | 32 c: 5.783 | Zen 1 - EPYC 7601: 15.693

DaCapo Benchmark

Java Test: Spring Boot

DaCapo Benchmark 23.11 - msec, Fewer Is Better
32: 2444 | 32 d: 2452 | 32 z: 2460 | 32 c: 2533

TensorFlow

Device: CPU - Batch Size: 16 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better
32: 158.47 | 32 d: 158.08 | 32 c: 157.60 | 32 z: 155.77

TensorFlow

Device: CPU - Batch Size: 1 - Model: VGG-16

TensorFlow 2.12 - images/sec, More Is Better
32 c: 9.77 | 32 d: 9.75 | 32 z: 9.75 | 32: 9.73

DaCapo Benchmark

Java Test: Apache Tomcat

DaCapo Benchmark 23.11 - msec, Fewer Is Better
32 z: 2082 | 32 c: 2094 | 32: 2107 | 32 d: 2112

DaCapo Benchmark

Java Test: Apache Lucene Search Engine

DaCapo Benchmark 23.11 - msec, Fewer Is Better
32 c: 1379 | 32: 1402 | 32 z: 1425 | 32 d: 1433

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 4K

SVT-AV1 1.8 - Frames Per Second, More Is Better
32: 185.67 | 32 z: 184.98 | 32 d: 184.10 | 32 c: 183.90
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 4K

SVT-AV1 1.8 - Frames Per Second, More Is Better
32: 186.63 | 32 d: 186.37 | 32 z: 185.56 | 32 c: 180.96
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

DaCapo Benchmark

Java Test: PMD Source Code Analyzer

DaCapo Benchmark 23.11 - msec, Fewer Is Better
32: 1784 | 32 z: 1820 | 32 d: 1833 | 32 c: 1966

TensorFlow

Device: CPU - Batch Size: 16 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better
32 d: 276.19 | 32 c: 274.97 | 32 z: 274.97 | 32: 272.93

DaCapo Benchmark

Java Test: Batik SVG Toolkit

DaCapo Benchmark 23.11 - msec, Fewer Is Better
32 c: 1718 | 32 z: 1723 | 32: 1733 | 32 d: 1738

TensorFlow

Device: CPU - Batch Size: 1 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better
32: 28.99 | 32 d: 28.79 | 32 z: 28.71 | 32 c: 27.73

TensorFlow

Device: CPU - Batch Size: 1 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better
32 c: 33.14 | 32 d: 33.02 | 32: 32.12 | 32 z: 31.92

DaCapo Benchmark

Java Test: FOP Print Formatter

DaCapo Benchmark 23.11 - msec, Fewer Is Better
32 z: 696 | 32: 751 | 32 d: 758 | 32 c: 764

DaCapo Benchmark

Java Test: Apache Xalan XSLT

DaCapo Benchmark 23.11 - msec, Fewer Is Better
32 c: 852 | 32 z: 859 | 32 d: 861 | 32: 871

DaCapo Benchmark

Java Test: Zxing 1D/2D Barcode Image Processing

DaCapo Benchmark 23.11 - msec, Fewer Is Better
32 c: 569 | 32 z: 599 | 32 d: 599 | 32: 609

CPU Power Consumption Monitor

Phoronix Test Suite System Monitoring

CPU Power Consumption Monitor - Watts
Zen 1 - EPYC 7601: Min: 242.58 / Avg: 585.92 / Max: 718
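
The power figures in this and the following sections come from the Phoronix Test Suite's sensor monitoring, which samples power draw while each test runs. A minimal sketch of driving such a run from Python, assuming the MONITOR environment variable and a cpu.power sensor are usable on this system (sensor names vary by platform, and the test profile named below is only illustrative):

    # Launch a benchmark with PTS sensor monitoring enabled.
    # MONITOR and the "cpu.power" sensor name are assumptions about this setup;
    # check the PTS documentation for the sensors your system actually exposes.
    import os
    import subprocess

    env = dict(os.environ, MONITOR="cpu.power")  # sample CPU power draw during the run
    subprocess.run(
        ["phoronix-test-suite", "benchmark", "pts/quicksilver"],  # illustrative test profile
        env=env,
        check=True,
    )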

Meta Performance Per Watts

Performance Per Watts

Meta Performance Per Watts - Performance Per Watts, More Is Better
Zen 1 - EPYC 7601: 13064001.66

Y-Cruncher

CPU Power Consumption Monitor

Y-Cruncher 0.8.3 - CPU Power Consumption Monitor - Watts, Fewer Is Better
Zen 1 - EPYC 7601: Min: 263 / Avg: 602 / Max: 718

Y-Cruncher

CPU Power Consumption Monitor

Y-Cruncher 0.8.3 - CPU Power Consumption Monitor - Watts, Fewer Is Better
Zen 1 - EPYC 7601: Min: 262 / Avg: 543 / Max: 712

Quicksilver

CPU Power Consumption Monitor

Quicksilver 20230818 - CPU Power Consumption Monitor - Watts, Fewer Is Better
Zen 1 - EPYC 7601: Min: 258.88 / Avg: 624.15 / Max: 662.04

Quicksilver

Input: CTS2

Quicksilver 20230818 - Figure Of Merit Per Watt, More Is Better
Zen 1 - EPYC 7601: 18307.66

Quicksilver

CPU Power Consumption Monitor

Quicksilver 20230818 - CPU Power Consumption Monitor - Watts, Fewer Is Better
Zen 1 - EPYC 7601: Min: 255.2 / Avg: 553.7 / Max: 594.9

Quicksilver

Input: CORAL2 P2

Quicksilver 20230818 - Figure Of Merit Per Watt, More Is Better
Zen 1 - EPYC 7601: 27116.87

Quicksilver

CPU Power Consumption Monitor

Quicksilver 20230818 - CPU Power Consumption Monitor - Watts, Fewer Is Better
Zen 1 - EPYC 7601: Min: 243 / Avg: 584 / Max: 648

Quicksilver

Input: CORAL2 P1

Quicksilver 20230818 - Figure Of Merit Per Watt, More Is Better
Zen 1 - EPYC 7601: 22248.55
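
A per-watt figure like the one above is the ratio of a test's raw result to the average power drawn while it ran. As a rough cross-check, this sketch recovers the implied raw Quicksilver figure of merit from the CORAL2 P1 entry, assuming the power-monitor summary directly above (Avg: 584 W) corresponds to that run and that per-watt = raw figure of merit / average watts:

    # Recover the implied raw figure of merit from the per-watt value.
    # Assumes the Avg: 584 W monitor reading belongs to the CORAL2 P1 run.
    fom_per_watt = 22248.55   # CORAL2 P1, Figure Of Merit Per Watt (from this result file)
    avg_watts = 584           # average CPU power during the Quicksilver run above
    implied_fom = fom_per_watt * avg_watts
    print(f"Implied figure of merit: {implied_fom:,.0f}")  # roughly 13.0 million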


Phoronix Test Suite v10.8.4