new-tests

Tests for a future article. AMD EPYC 8324P 32-Core testing with an AMD Cinnabar (RCB1009C BIOS) and ASPEED on Ubuntu 23.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2401110-NE-NEWTESTS900
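The comparison step above can be scripted. A minimal sketch, assuming only that the Phoronix Test Suite is installed separately and on PATH (the result ID is the public OpenBenchmarking.org identifier from this page; nothing else is assumed):

```shell
#!/bin/sh
# Public result-file identifier taken from this page.
RESULT_ID="2401110-NE-NEWTESTS900"

# Print the reproduction command; run it directly (or pipe to `sh`) on a
# machine with the Phoronix Test Suite installed to benchmark your own
# system against this result file.
echo "phoronix-test-suite benchmark ${RESULT_ID}"
```

Running the printed command fetches the referenced test profiles and appends your system's results to the comparison.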
Run Management

Result Identifier | Date | Test Duration
Zen 1 - EPYC 7601 | January 07 | 46 Minutes
b | January 10 | 12 Minutes
c | January 10 | 12 Minutes
32 | January 11 | 2 Hours, 56 Minutes
32 z | January 11 | 2 Hours, 56 Minutes
32 c | January 11 | 3 Hours, 14 Minutes
32 d | January 11 | 2 Hours, 55 Minutes




HTML result view exported from: https://openbenchmarking.org/result/2401110-NE-NEWTESTS900&grr&sro&rro.

System Table
Runs: Zen 1 - EPYC 7601, b, c, 32, 32 z, 32 c, 32 d
Fields: Processor, Motherboard, Chipset, Memory, Disk, Graphics, Monitor, Network, OS, Kernel, Desktop, Display Server, OpenGL, Compiler, File-System, Screen Resolution

Zen 1 - EPYC 7601: AMD EPYC 7601 32-Core @ 2.20GHz (32 Cores / 64 Threads); TYAN B8026T70AE24HR (V1.02.B10 BIOS); AMD 17h; 128GB; 280GB INTEL SSDPE21D280GA + 1000GB INTEL SSDPE2KX010T8; llvmpipe; VE228; 2 x Broadcom NetXtreme BCM5720 PCIe; Ubuntu 23.10; 6.6.9-060609-generic (x86_64); GNOME Shell 45.0; X Server 1.21.1.7; 4.5 Mesa 23.2.1-1ubuntu3.1 (LLVM 15.0.7 256 bits); GCC 13.2.0; ext4; 1920x1080

Remaining runs (the export lists only values that differ from the row above): AMD EPYC 8534PN 64-Core @ 2.00GHz (64 Cores / 128 Threads); AMD Cinnabar (RCB1009C BIOS); AMD Device 14a4; 6 x 32 GB DRAM-4800MT/s Samsung M321R4GA0BB0-CQKMG; 1000GB INTEL SSDPE2KX010T8; 1920x1200; AMD EPYC 8534PN 32-Core @ 2.05GHz (32 Cores / 64 Threads); ASPEED; AMD EPYC 8324P 32-Core @ 2.65GHz (32 Cores / 64 Threads)

Kernel Details
- Transparent Huge Pages: madvise

Compiler Details
- --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details
- Zen 1 - EPYC 7601: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0x800126e
- b, c, 32, 32 z, 32 c, 32 d: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xaa00212

Security Details
- Zen 1 - EPYC 7601: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT vulnerable + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: disabled RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
- b, c, 32, 32 z, 32 c, 32 d (identical): gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Java Details
- 32, 32 z, 32 c, 32 d: OpenJDK Runtime Environment (build 11.0.21+9-post-Ubuntu-0ubuntu123.10)

Python Details
- 32, 32 z, 32 c, 32 d: Python 3.11.6

new-tests: side-by-side results table (flattened beyond recovery in the HTML export). Tests covered: Quicksilver, Timed Linux Kernel Compilation, Blender, PyTorch, Timed Gem5 Compilation, Xmrig, FFmpeg, OSPRay Studio, Llama.cpp, CacheBench, OpenFOAM, Neural Magic DeepSparse, TensorFlow, OpenVINO, QuantLib, Speedb, RocksDB, DaCapo, 7-Zip Compression, SVT-AV1, Embree, Y-Cruncher, Timed FFmpeg Compilation, and others. Per-test values appear in the result graphs below.
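Multi-test comparisons like this one are commonly summarized with a geometric mean of per-test scores (the overall statistic OpenBenchmarking.org offers for result files). A minimal sketch of that calculation; the three input values are illustrative placeholders, not figures from this result file:

```shell
# Geometric mean of a column of positive scores: exp(mean(log(x))).
# Inputs 2, 8, 4 are illustrative; geomean(2, 8, 4) = cbrt(64) = 4.
printf '2\n8\n4\n' | awk '{ s += log($1); n++ } END { printf "%.2f\n", exp(s / n) }'
# prints 4.00
```

Unlike an arithmetic mean, the geometric mean is not dominated by tests whose raw scores happen to be numerically large, which is why it is the conventional aggregate for heterogeneous benchmark suites.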

Quicksilver

Input: CTS2

OpenBenchmarking.org | Figure Of Merit, More Is Better | Quicksilver 20230818 | Input: CTS2
c: 16260000
b: 16270000
Zen 1 - EPYC 7601: 11426667
32 z: 14290000
32 d: 14280000
32 c: 14430000
32: 14320000
SE +/- 16666.67, N = 3
1. (CXX) g++ options: -fopenmp -O3 -march=native

Timed Linux Kernel Compilation

Build: allmodconfig

OpenBenchmarking.org | Seconds, Fewer Is Better | Timed Linux Kernel Compilation 6.1 | Build: allmodconfig
32 z: 434.19
32 d: 452.61
32 c: 453.69
32: 433.79

Blender

Blend File: Barbershop - Compute: CPU-Only

OpenBenchmarking.org | Seconds, Fewer Is Better | Blender 4.0 | Blend File: Barbershop - Compute: CPU-Only
32 z: 410.43
32 d: 426.37
32 c: 426.30
32: 410.61

Quicksilver

Input: CORAL2 P2

OpenBenchmarking.org | Figure Of Merit, More Is Better | Quicksilver 20230818 | Input: CORAL2 P2
c: 16150000
b: 16140000
Zen 1 - EPYC 7601: 15013333
32 z: 15230000
32 d: 15100000
32 c: 15180000
32: 15350000
SE +/- 37118.43, N = 3
1. (CXX) g++ options: -fopenmp -O3 -march=native

PyTorch

Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l

OpenBenchmarking.org | batches/sec, More Is Better | PyTorch 2.1 | Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l
32 z: 7.11 (MIN: 4.25 / MAX: 7.26)
32 d: 7.15 (MIN: 4.34 / MAX: 7.3)
32 c: 7.18 (MIN: 4.37 / MAX: 7.37)
32: 7.17 (MIN: 4.45 / MAX: 7.33)

Timed Gem5 Compilation

Time To Compile

OpenBenchmarking.org | Seconds, Fewer Is Better | Timed Gem5 Compilation 23.0.1 | Time To Compile
32 z: 272.61
32 d: 258.93
32 c: 258.31
32: 254.01

Xmrig

Variant: GhostRider - Hash Count: 1M

OpenBenchmarking.org | H/s, More Is Better | Xmrig 6.21 | Variant: GhostRider - Hash Count: 1M
32 z: 4038.6
32 d: 4095.7
32 c: 4136.3
32: 4067.4
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Quicksilver

Input: CORAL2 P1

OpenBenchmarking.org | Figure Of Merit, More Is Better | Quicksilver 20230818 | Input: CORAL2 P1
c: 21250000
b: 21180000
Zen 1 - EPYC 7601: 12996667
32 z: 18760000
32 d: 18840000
32 c: 1040000
32: 18790000
SE +/- 66916.20, N = 3
1. (CXX) g++ options: -fopenmp -O3 -march=native

FFmpeg

Encoder: libx265 - Scenario: Upload

OpenBenchmarking.org | FPS, More Is Better | FFmpeg 6.1 | Encoder: libx265 - Scenario: Upload
32 z: 22.20
32 d: 22.22
32 c: 22.21
32: 22.28
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Platform

OpenBenchmarking.org | FPS, More Is Better | FFmpeg 6.1 | Encoder: libx265 - Scenario: Platform
32 z: 45.05
32 d: 44.97
32 c: 45.13
32: 45.13
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

FFmpeg

Encoder: libx265 - Scenario: Video On Demand

OpenBenchmarking.org | FPS, More Is Better | FFmpeg 6.1 | Encoder: libx265 - Scenario: Video On Demand
32 z: 45.08
32 d: 45.10
32 c: 44.95
32: 45.18
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OSPRay Studio

Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org | ms, Fewer Is Better | OSPRay Studio 0.13 | Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU
32 z: 136312
32 d: 139445
32 c: 139685
32: 136464

Llama.cpp

Model: llama-2-70b-chat.Q5_0.gguf

OpenBenchmarking.org | Tokens Per Second, More Is Better | Llama.cpp b1808 | Model: llama-2-70b-chat.Q5_0.gguf
32 z: 3.41
32 d: 3.42
32 c: 3.42
32: 3.42
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -march=native -mtune=native -lopenblas

Blender

Blend File: Pabellon Barcelona - Compute: CPU-Only

OpenBenchmarking.org | Seconds, Fewer Is Better | Blender 4.0 | Blend File: Pabellon Barcelona - Compute: CPU-Only
32 z: 138.60
32 d: 148.56
32 c: 148.74
32: 139.09

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-152

OpenBenchmarking.org | batches/sec, More Is Better | PyTorch 2.1 | Device: CPU - Batch Size: 16 - Model: ResNet-152
32 z: 15.51 (MIN: 7.3 / MAX: 15.63)
32 d: 15.35 (MIN: 8.86 / MAX: 15.52)
32 c: 15.32 (MIN: 6.91 / MAX: 15.45)
32: 15.61 (MIN: 6.89 / MAX: 15.74)

OSPRay Studio

Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org | ms, Fewer Is Better | OSPRay Studio 0.13 | Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU
32 z: 116972
32 d: 119783
32 c: 118980
32: 116566

OSPRay Studio

Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org | ms, Fewer Is Better | OSPRay Studio 0.13 | Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU
32 z: 115669
32 d: 118802
32 c: 118221
32: 116377

CacheBench

Test: Read / Modify / Write

OpenBenchmarking.org | MB/s, More Is Better | CacheBench | Test: Read / Modify / Write
32 z: 87218.21 (MIN: 65721.62 / MAX: 90703.93)
32 d: 87854.12 (MIN: 72077.93 / MAX: 90708.03)
32 c: 87238.01 (MIN: 65732.92 / MAX: 90706.91)
32: 87227.59 (MIN: 65739.52 / MAX: 90694.35)
1. (CC) gcc options: -O3 -lrt

CacheBench

Test: Write

OpenBenchmarking.org | MB/s, More Is Better | CacheBench | Test: Write
32 z: 45646.82 (MIN: 45482.27 / MAX: 45698.03)
32 d: 45643.04 (MIN: 45482.26 / MAX: 45696.12)
32 c: 45645.09 (MIN: 45483.02 / MAX: 45696.19)
32: 45646.09 (MIN: 45484.29 / MAX: 45698.11)
1. (CC) gcc options: -O3 -lrt

CacheBench

Test: Read

OpenBenchmarking.org | MB/s, More Is Better | CacheBench | Test: Read
32 z: 7616.33 (MIN: 7615.95 / MAX: 7616.74)
32 d: 7615.83 (MIN: 7615.4 / MAX: 7616.44)
32 c: 7615.95 (MIN: 7615.46 / MAX: 7616.35)
32: 7616.09 (MIN: 7615.65 / MAX: 7616.54)
1. (CC) gcc options: -O3 -lrt

Blender

Blend File: Classroom - Compute: CPU-Only

OpenBenchmarking.org | Seconds, Fewer Is Better | Blender 4.0 | Blend File: Classroom - Compute: CPU-Only
32 z: 112.09
32 d: 119.57
32 c: 119.72
32: 112.03

PyTorch

Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l

OpenBenchmarking.org | batches/sec, More Is Better | PyTorch 2.1 | Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l
32 z: 9.82 (MIN: 5.63 / MAX: 10.05)
32 d: 10.21 (MIN: 5.69 / MAX: 10.32)
32 c: 10.04 (MIN: 5.86 / MAX: 10.23)
32: 9.85 (MIN: 5.1 / MAX: 9.99)

OpenFOAM

Input: drivaerFastback, Small Mesh Size - Execution Time

OpenBenchmarking.org | Seconds, Fewer Is Better | OpenFOAM 10 | Input: drivaerFastback, Small Mesh Size - Execution Time
32 z: 71.20
32 d: 72.31
32 c: 72.38
32: 72.81
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OpenFOAM

Input: drivaerFastback, Small Mesh Size - Mesh Time

OpenBenchmarking.org | Seconds, Fewer Is Better | OpenFOAM 10 | Input: drivaerFastback, Small Mesh Size - Mesh Time
32 z: 30.75
32 d: 30.72
32 c: 30.54
32: 28.37
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm

OSPRay Studio

Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org | ms, Fewer Is Better | OSPRay Studio 0.13 | Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU
32 z: 71495
32 d: 73329
32 c: 73024
32: 71361

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org | ms/batch, Fewer Is Better | Neural Magic DeepSparse 1.6 | Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream
32 z: 608.13
32 d: 611.44
32 c: 611.60
32: 607.94

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org | items/sec, More Is Better | Neural Magic DeepSparse 1.6 | Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream
32 z: 26.08
32 d: 25.79
32 c: 25.82
32: 26.06

OSPRay Studio

Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org | ms, Fewer Is Better | OSPRay Studio 0.13 | Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU
32 z: 4048
32 d: 4132
32 c: 4157
32: 4049

OSPRay Studio

Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org | ms, Fewer Is Better | OSPRay Studio 0.13 | Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU
32 z: 3446
32 d: 3522
32 c: 3515
32: 3451

OSPRay Studio

Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org | ms, Fewer Is Better | OSPRay Studio 0.13 | Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU
32 z: 3406
32 d: 3499
32 c: 3493
32: 3404

OSPRay Studio

Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org | ms, Fewer Is Better | OSPRay Studio 0.13 | Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU
32 z: 62113
32 d: 62787
32 c: 63402
32: 61987

OSPRay Studio

Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU

OpenBenchmarking.org | ms, Fewer Is Better | OSPRay Studio 0.13 | Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU
32 z: 61430
32 d: 63336
32 c: 62802
32: 60673

TensorFlow

Device: CPU - Batch Size: 16 - Model: VGG-16

OpenBenchmarking.org | images/sec, More Is Better | TensorFlow 2.12 | Device: CPU - Batch Size: 16 - Model: VGG-16
32 z: 25.20
32 d: 24.51
32 c: 24.47
32: 25.15

FFmpeg

Encoder: libx265 - Scenario: Live

OpenBenchmarking.org | FPS, More Is Better | FFmpeg 6.1 | Encoder: libx265 - Scenario: Live
32 z: 110.37
32 d: 110.29
32 c: 110.02
32: 109.84
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenBenchmarking.org | ms, Fewer Is Better | OpenVINO 2023.2.dev | Model: Face Detection FP16 - Device: CPU
32 z: 927.57 (MIN: 895.6 / MAX: 1019.94)
32 d: 965.35 (MIN: 922.7 / MAX: 1047.5)
32 c: 964.20 (MIN: 905.78 / MAX: 1053.38)
32: 929.23 (MIN: 907.01 / MAX: 1013.02)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection FP16 - Device: CPU

OpenBenchmarking.org | FPS, More Is Better | OpenVINO 2023.2.dev | Model: Face Detection FP16 - Device: CPU
32 z: 17.18
32 d: 16.54
32 c: 16.51
32: 17.17
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org | ms, Fewer Is Better | OpenVINO 2023.2.dev | Model: Face Detection FP16-INT8 - Device: CPU
32 z: 486.03 (MIN: 454.31 / MAX: 580.9)
32 d: 510.90 (MIN: 470.7 / MAX: 595.97)
32 c: 510.79 (MIN: 473.86 / MAX: 584.54)
32: 486.65 (MIN: 465.68 / MAX: 570.73)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection FP16-INT8 - Device: CPU

OpenBenchmarking.org | FPS, More Is Better | OpenVINO 2023.2.dev | Model: Face Detection FP16-INT8 - Device: CPU
32 z: 32.81
32 d: 31.20
32 c: 31.22
32: 32.82
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

QuantLib

Configuration: Multi-Threaded

OpenBenchmarking.org | MFLOPS, More Is Better | QuantLib 1.32 | Configuration: Multi-Threaded
32 z: 107381.6
32 d: 98618.7
32 c: 98916.2
32: 107079.2
1. (CXX) g++ options: -O3 -march=native -fPIE -pie

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-152

OpenBenchmarking.org | batches/sec, More Is Better | PyTorch 2.1 | Device: CPU - Batch Size: 1 - Model: ResNet-152
32 z: 18.92 (MIN: 7.59 / MAX: 19.04)
32 d: 18.86 (MIN: 7.91 / MAX: 19.03)
32 c: 18.86 (MIN: 10.78 / MAX: 19.02)
32: 19.04 (MIN: 6.89 / MAX: 19.18)

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenBenchmarking.org | ms, Fewer Is Better | OpenVINO 2023.2.dev | Model: Person Detection FP16 - Device: CPU
32 z: 106.44 (MIN: 81.71 / MAX: 196.1)
32 d: 105.64 (MIN: 54.2 / MAX: 154.42)
32 c: 106.43 (MIN: 80.87 / MAX: 199.77)
32: 105.48 (MIN: 82.05 / MAX: 167.92)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Person Detection FP16 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better)
32 z: 150.06 | 32 d: 151.25 | 32 c: 150.07 | 32: 151.45
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better)
32 z: 106.24 (MIN: 81.06 / MAX: 185.99) | 32 d: 106.32 (MIN: 81.37 / MAX: 177.41) | 32 c: 105.91 (MIN: 82.12 / MAX: 188.16) | 32: 105.97 (MIN: 81.88 / MAX: 218.45)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Person Detection FP32 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better)
32 z: 150.37 | 32 d: 150.25 | 32 c: 150.84 | 32: 150.80
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better)
32 z: 79.39 (MIN: 43.97 / MAX: 186.13) | 32 d: 81.87 (MIN: 52.13 / MAX: 175.84) | 32 c: 82.18 (MIN: 58.39 / MAX: 175.7) | 32: 79.82 (MIN: 42.02 / MAX: 179.47)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Machine Translation EN To DE FP16 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better)
32 z: 201.15 | 32 d: 195.05 | 32 c: 194.21 | 32: 199.90
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better)
32 z: 9.16 (MIN: 5.99 / MAX: 67.91) | 32 d: 9.37 (MIN: 6.07 / MAX: 71.06) | 32 c: 9.39 (MIN: 5.95 / MAX: 68.66) | 32: 9.12 (MIN: 6.22 / MAX: 56.95)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Person Vehicle Bike Detection FP16 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better)
32 z: 1735.64 | 32 d: 1696.50 | 32 c: 1694.01 | 32: 1741.57
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better)
32 z: 23.95 (MIN: 15.19 / MAX: 90.71) | 32 d: 25.16 (MIN: 19.24 / MAX: 86.7) | 32 c: 25.22 (MIN: 21.61 / MAX: 89.16) | 32: 23.95 (MIN: 13.94 / MAX: 114.01)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Road Segmentation ADAS FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better)
32 z: 666.30 | 32 d: 634.50 | 32 c: 632.92 | 32: 666.22
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Road Segmentation ADAS FP16 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better)
32 z: 27.53 (MIN: 18.86 / MAX: 82.58) | 32 d: 28.82 (MIN: 19.39 / MAX: 99.16) | 32 c: 28.77 (MIN: 17.12 / MAX: 135.79) | 32: 27.69 (MIN: 18.56 / MAX: 147.54)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Road Segmentation ADAS FP16 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better)
32 z: 579.41 | 32 d: 553.65 | 32 c: 554.68 | 32: 576.18
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better)
32 z: 5.42 (MIN: 3.15 / MAX: 67.23) | 32 d: 5.78 (MIN: 3.37 / MAX: 65.27) | 32 c: 5.78 (MIN: 3.21 / MAX: 58.78) | 32: 5.41 (MIN: 3.17 / MAX: 57.08)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection Retail FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better)
32 z: 5751.58 | 32 d: 5423.13 | 32 c: 5416.31 | 32: 5747.65
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better)
32 z: 43.71 (MIN: 35.06 / MAX: 153.84) | 32 d: 46.28 (MIN: 30.15 / MAX: 108.49) | 32 c: 46.17 (MIN: 39.81 / MAX: 161.92) | 32: 42.87 (MIN: 35.14 / MAX: 107.5)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Handwritten English Recognition FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better)
32 z: 730.82 | 32 d: 690.24 | 32 c: 692.02 | 32: 745.00
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Handwritten English Recognition FP16 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better)
32 z: 35.59 (MIN: 24.72 / MAX: 147.24) | 32 d: 37.61 (MIN: 24.11 / MAX: 127.49) | 32 c: 37.40 (MIN: 27.33 / MAX: 92.33) | 32: 35.51 (MIN: 22.8 / MAX: 100.53)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Handwritten English Recognition FP16 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better)
32 z: 896.69 | 32 d: 848.62 | 32 c: 853.38 | 32: 898.60
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better)
32 z: 8.05 (MIN: 4.56 / MAX: 76.48) | 32 d: 8.52 (MIN: 4.8 / MAX: 75.53) | 32 c: 8.52 (MIN: 4.97 / MAX: 67.6) | 32: 8.07 (MIN: 4.55 / MAX: 69.24)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Vehicle Detection FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better)
32 z: 1964.99 | 32 d: 1862.24 | 32 c: 1860.99 | 32: 1960.18
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better)
32 z: 0.47 (MIN: 0.27 / MAX: 64.47) | 32 d: 0.48 (MIN: 0.27 / MAX: 65.55) | 32 c: 0.48 (MIN: 0.27 / MAX: 50.17) | 32: 0.48 (MIN: 0.27 / MAX: 50.11)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better)
32 z: 52475.39 | 32 d: 52344.60 | 32 c: 52382.31 | 32: 52441.94
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better)
32 z: 0.66 (MIN: 0.36 / MAX: 65.79) | 32 d: 0.67 (MIN: 0.36 / MAX: 50.74) | 32 c: 0.67 (MIN: 0.36 / MAX: 62.87) | 32: 0.65 (MIN: 0.36 / MAX: 51.48)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better)
32 z: 40101.80 | 32 d: 39843.05 | 32 c: 39562.87 | 32: 40123.62
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better)
32 z: 13.29 (MIN: 8.3 / MAX: 73.59) | 32 d: 13.65 (MIN: 6.73 / MAX: 75.18) | 32 c: 13.65 (MIN: 9.08 / MAX: 67.03) | 32: 13.36 (MIN: 7.26 / MAX: 78.85)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Vehicle Detection FP16 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better)
32 z: 1197.46 | 32 d: 1166.83 | 32 c: 1166.56 | 32: 1190.42
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better)
32 z: 18.69 (MIN: 9.78 / MAX: 86.93) | 32 d: 19.56 (MIN: 13.73 / MAX: 73.6) | 32 c: 19.58 (MIN: 10.24 / MAX: 83.63) | 32: 18.69 (MIN: 9.97 / MAX: 81.33)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Weld Porosity Detection FP16 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better)
32 z: 1704.02 | 32 d: 1628.91 | 32 c: 1627.93 | 32: 1704.26
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Speedb

Test: Update Random

Speedb 2.7 (Op/s, More Is Better)
32 z: 314114 | 32 d: 313683 | 32 c: 317758 | 32: 314123
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

OpenVINO

Model: Face Detection Retail FP16 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better)
32 z: 3.90 (MIN: 2.18 / MAX: 64.81) | 32 d: 4.03 (MIN: 2.23 / MAX: 62.26) | 32 c: 4.03 (MIN: 2.23 / MAX: 54.09) | 32: 3.91 (MIN: 2.2 / MAX: 72.73)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Face Detection Retail FP16 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better)
32 z: 3924.86 | 32 d: 3869.70 | 32 c: 3877.91 | 32: 3921.50
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev (ms, Fewer Is Better)
32 z: 9.56 (MIN: 5.09 / MAX: 75.37) | 32 d: 10.21 (MIN: 5.17 / MAX: 61.15) | 32 c: 10.22 (MIN: 5.48 / MAX: 68.07) | 32: 9.56 (MIN: 5.1 / MAX: 77.12)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

OpenVINO

Model: Weld Porosity Detection FP16-INT8 - Device: CPU

OpenVINO 2023.2.dev (FPS, More Is Better)
32 z: 3299.93 | 32 d: 3100.95 | 32 c: 3099.20 | 32: 3300.99
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie

Speedb

Test: Read While Writing

Speedb 2.7 (Op/s, More Is Better)
32 z: 7210235 | 32 d: 7105602 | 32 c: 7746346 | 32: 7457600
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Update Random

RocksDB 8.0 (Op/s, More Is Better)
32 z: 636242 | 32 d: 630478 | 32 c: 633688 | 32: 630575
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Read Random Write Random

Speedb 2.7 (Op/s, More Is Better)
32 z: 2259344 | 32 d: 2215896 | 32 c: 2229494 | 32: 2231403
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Speedb

Test: Random Read

Speedb 2.7 (Op/s, More Is Better)
32 z: 179434924 | 32 d: 163512432 | 32 c: 163202721 | 32: 179685954
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Read Random Write Random

RocksDB 8.0 (Op/s, More Is Better)
32 z: 2361270 | 32 d: 2351568 | 32 c: 2327800 | 32: 2373654
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Read While Writing

RocksDB 8.0 (Op/s, More Is Better)
32 z: 4364996 | 32 d: 4244478 | 32 c: 4419497 | 32: 4284691
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

RocksDB

Test: Random Read

RocksDB 8.0 (Op/s, More Is Better)
32 z: 177167636 | 32 d: 160707812 | 32 c: 160665305 | 32: 176770468
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

DaCapo Benchmark

Java Test: Apache Cassandra

DaCapo Benchmark 23.11 (msec, Fewer Is Better)
32 z: 5938 | 32 d: 5927 | 32 c: 5955 | 32: 5946

Blender

Blend File: Fishy Cat - Compute: CPU-Only

Blender 4.0 (Seconds, Fewer Is Better)
32 z: 55.54 | 32 d: 59.79 | 32 c: 59.58 | 32: 55.65

Xmrig

Variant: Monero - Hash Count: 1M

Xmrig 6.21 (H/s, More Is Better)
32 z: 18763.8 | 32 d: 18866.1 | 32 c: 18897.5 | 32: 18845.5
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: CryptoNight-Femto UPX2 - Hash Count: 1M

Xmrig 6.21 (H/s, More Is Better)
32 z: 18909.0 | 32 d: 18818.6 | 32 c: 18887.5 | 32: 18860.1
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: KawPow - Hash Count: 1M

Xmrig 6.21 (H/s, More Is Better)
32 z: 18961.3 | 32 d: 18901.1 | 32 c: 18947.3 | 32: 18777.2
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Xmrig

Variant: CryptoNight-Heavy - Hash Count: 1M

Xmrig 6.21 (H/s, More Is Better)
32 z: 18936.5 | 32 d: 18924.0 | 32 c: 18783.9 | 32: 19004.5
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

Timed Linux Kernel Compilation

Build: defconfig

Timed Linux Kernel Compilation 6.1 (Seconds, Fewer Is Better)
32 z: 52.01 | 32 d: 53.63 | 32 c: 53.62 | 32: 52.13

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-50

PyTorch 2.1 (batches/sec, More Is Better)
32 z: 39.96 (MIN: 15.13 / MAX: 40.53) | 32 d: 40.31 (MIN: 15.27 / MAX: 40.73) | 32 c: 40.32 (MIN: 15.51 / MAX: 40.87) | 32: 40.19 (MIN: 15.55 / MAX: 40.67)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better)
32 z: 19.13 | 32 d: 19.59 | 32 c: 19.58 | 32: 19.11

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better)
32 z: 835.26 | 32 d: 815.98 | 32 c: 816.28 | 32: 836.42

DaCapo Benchmark

Java Test: Eclipse

DaCapo Benchmark 23.11 (msec, Fewer Is Better)
32 z: 12735 | 32 d: 12768 | 32 c: 12826 | 32: 12656

Blender

Blend File: BMW27 - Compute: CPU-Only

Blender 4.0 (Seconds, Fewer Is Better)
32 z: 44.48 | 32 d: 47.41 | 32 c: 47.52 | 32: 44.73

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better)
32 z: 745.18 | 32 d: 751.21 | 32 c: 753.12 | 32: 747.07

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better)
32 z: 21.27 | 32 d: 21.09 | 32 c: 20.87 | 32: 21.29

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better)
32 z: 41.44 | 32 d: 41.84 | 32 c: 41.59 | 32: 41.63

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better)
32 z: 385.65 | 32 d: 381.78 | 32 c: 384.32 | 32: 383.97

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better)
32 z: 746.13 | 32 d: 750.40 | 32 c: 751.93 | 32: 747.31

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better)
32 z: 21.29 | 32 d: 21.07 | 32 c: 21.04 | 32: 21.23

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better)
32 z: 397.96 | 32 d: 410.33 | 32 c: 411.34 | 32: 396.29

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better)
32 z: 39.95 | 32 d: 38.83 | 32 c: 38.77 | 32: 40.17

DaCapo Benchmark

Java Test: Apache Lucene Search Index

DaCapo Benchmark 23.11 (msec, Fewer Is Better)
32 z: 4589 | 32 d: 4602 | 32 c: 4580 | 32: 4613

Xmrig

Variant: Wownero - Hash Count: 1M

Xmrig 6.21 (H/s, More Is Better)
32 z: 25943.7 | 32 d: 25396.8 | 32 c: 25385.9 | 32: 25814.4
1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc

DaCapo Benchmark

Java Test: H2 Database Engine

DaCapo Benchmark 23.11 (msec, Fewer Is Better)
32 z: 2655 | 32 d: 2634 | 32 c: 2773 | 32: 2675

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better)
32 z: 87.33 | 32 d: 88.23 | 32 c: 88.20 | 32: 87.49

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better)
32 z: 182.76 | 32 d: 181.12 | 32 c: 181.10 | 32: 182.51

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

TensorFlow 2.12 (images/sec, More Is Better)
32 z: 51.57 | 32 d: 51.49 | 32 c: 51.56 | 32: 51.34

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better)
32 z: 129.80 | 32 d: 130.79 | 32 c: 130.48 | 32: 129.87

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better)
32 z: 122.96 | 32 d: 121.80 | 32 c: 122.33 | 32: 123.02

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better)
32 z: 128.85 | 32 d: 129.81 | 32 c: 129.54 | 32: 128.82

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better)
32 z: 123.79 | 32 d: 122.93 | 32 c: 123.15 | 32: 123.88

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better)
32 z: 59.87 | 32 d: 59.97 | 32 c: 59.90 | 32: 59.86

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better)
32 z: 266.98 | 32 d: 266.53 | 32 c: 266.84 | 32: 266.86

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better)
32 z: 7.2610 | 32 d: 7.2896 | 32 c: 7.2738 | 32: 7.2332

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better)
32 z: 2199.49 | 32 d: 2189.07 | 32 c: 2195.92 | 32: 2208.15

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (ms/batch, Fewer Is Better)
32 z: 59.67 | 32 d: 60.03 | 32 c: 60.06 | 32: 59.88

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.6 (items/sec, More Is Better)
32 z: 267.84 | 32 d: 266.28 | 32 c: 266.03 | 32: 266.88

DaCapo Benchmark

Java Test: Tradebeans

DaCapo Benchmark 23.11 (msec, Fewer Is Better)
32 z: 8600 | 32 d: 8380 | 32 c: 8520 | 32: 8561

7-Zip Compression

Test: Decompression Rating

7-Zip Compression 22.01 (MIPS, More Is Better)
32 z: 211584 | 32 d: 211383 | 32 c: 211815 | 32: 212209
1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

7-Zip Compression

Test: Compression Rating

7-Zip Compression 22.01 (MIPS, More Is Better)
32 z: 242399 | 32 d: 241191 | 32 c: 240287 | 32: 241545
1. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 4K

SVT-AV1 1.8 (Frames Per Second, More Is Better)
32 z: 5.899 | 32 d: 5.977 | 32 c: 5.829 | 32: 5.801
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Embree

Binary: Pathtracer - Model: Asian Dragon Obj

Embree 4.3 (Frames Per Second, More Is Better)
32 z: 36.86 (MIN: 36.67 / MAX: 37.11) | 32 d: 37.41 (MIN: 37.22 / MAX: 37.69) | 32 c: 37.44 (MIN: 37.24 / MAX: 37.71) | 32: 37.28 (MIN: 37.09 / MAX: 37.7)

Llama.cpp

Model: llama-2-13b.Q4_0.gguf

Llama.cpp b1808 (Tokens Per Second, More Is Better)
32 z: 17.87 | 32 d: 18.08 | 32 c: 17.87 | 32: 17.94
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -march=native -mtune=native -lopenblas

Y-Cruncher

Pi Digits To Calculate: 1B

Y-Cruncher 0.8.3 (Seconds, Fewer Is Better)
c: 10.48 (SE +/- 0.09, N = 3) | b: 10.42 | Zen 1 - EPYC 7601: 33.92 | 32 z: 11.60 | 32 d: 11.98 | 32 c: 11.90 | 32: 11.68

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon Obj

Embree 4.3 (Frames Per Second, More Is Better)
32 z: 39.11 (MIN: 38.88 / MAX: 39.43) | 32 d: 39.14 (MIN: 38.92 / MAX: 39.84) | 32 c: 39.00 (MIN: 38.78 / MAX: 39.64) | 32: 38.94 (MIN: 38.69 / MAX: 39.29)

DaCapo Benchmark

Java Test: Tradesoap

DaCapo Benchmark 23.11 (msec, Fewer Is Better)
32 z: 5168 | 32 d: 5149 | 32 c: 5366 | 32: 5403

DaCapo Benchmark

Java Test: BioJava Biological Data Framework

DaCapo Benchmark 23.11 (msec, Fewer Is Better)
32 z: 7858 | 32 d: 7907 | 32 c: 7904 | 32: 7874

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-50

PyTorch 2.1 (batches/sec, More Is Better)
32 z: 52.78 (MIN: 17.43 / MAX: 53.32) | 32 d: 53.30 (MIN: 50.97 / MAX: 53.84) | 32 c: 53.00 (MIN: 50.62 / MAX: 53.51) | 32: 52.44 (MIN: 15.02 / MAX: 53.14)

Timed FFmpeg Compilation

Time To Compile

Timed FFmpeg Compilation 6.1 (Seconds, Fewer Is Better)
32 z: 23.76 | 32 d: 24.30 | 32 c: 24.45 | 32: 23.56

DaCapo Benchmark

Java Test: Jython

DaCapo Benchmark 23.11 (msec, Fewer Is Better)
32 z: 6773 | 32 d: 6769 | 32 c: 6865 | 32: 6703

DaCapo Benchmark

Java Test: jMonkeyEngine

DaCapo Benchmark 23.11 (msec, Fewer Is Better)
32 z: 6917 | 32 d: 6916 | 32 c: 6917 | 32: 6914

DaCapo Benchmark

Java Test: GraphChi

DaCapo Benchmark 23.11 (msec, Fewer Is Better)
32 z: 3630 | 32 d: 3656 | 32 c: 3538 | 32: 3536

Embree

Binary: Pathtracer - Model: Crown

Embree 4.3 (Frames Per Second, More Is Better)
32 z: 37.25 (MIN: 36.89 / MAX: 37.75) | 32 d: 36.28 (MIN: 35.88 / MAX: 37.13) | 32 c: 35.91 (MIN: 35.53 / MAX: 37.08) | 32: 36.96 (MIN: 36.61 / MAX: 37.43)

Embree

Binary: Pathtracer ISPC - Model: Crown

Embree 4.3 (Frames Per Second, More Is Better)
32 z: 37.68 (MIN: 37.25 / MAX: 38.37) | 32 d: 36.94 (MIN: 36.46 / MAX: 37.76) | 32 c: 37.00 (MIN: 36.53 / MAX: 38.11) | 32: 37.30 (MIN: 36.86 / MAX: 38.04)

DaCapo Benchmark

Java Test: H2O In-Memory Platform For Machine Learning

DaCapo Benchmark 23.11 (msec, Fewer Is Better)
32 z: 3868 | 32 d: 3755 | 32 c: 3979 | 32: 3974

Llama.cpp

Model: llama-2-7b.Q4_0.gguf

Llama.cpp b1808 (Tokens Per Second, More Is Better)
32 z: 29.90 | 32 d: 29.85 | 32 c: 29.74 | 32: 29.75
1. (CXX) g++ options: -std=c++11 -fPIC -O3 -pthread -march=native -mtune=native -lopenblas

DaCapo Benchmark

Java Test: Apache Kafka

DaCapo Benchmark 23.11 (msec, Fewer Is Better)
32 z: 5121 | 32 d: 5114 | 32 c: 5111 | 32: 5110

Embree

Binary: Pathtracer - Model: Asian Dragon

Embree 4.3 (Frames Per Second, More Is Better)
32 z: 41.82 (MIN: 41.6 / MAX: 42.16) | 32 d: 41.56 (MIN: 41.33 / MAX: 41.84) | 32 c: 41.57 (MIN: 41.37 / MAX: 41.9) | 32: 41.60 (MIN: 41.36 / MAX: 41.86)

TensorFlow

Device: CPU - Batch Size: 1 - Model: ResNet-50

TensorFlow 2.12 (images/sec, More Is Better)
32 z: 8.77 | 32 d: 8.59 | 32 c: 8.61 | 32: 8.74

DaCapo Benchmark

Java Test: Avrora AVR Simulation Framework

DaCapo Benchmark 23.11 (msec, Fewer Is Better)
32 z: 5441 | 32 d: 5572 | 32 c: 5561 | 32: 5613

Embree

Binary: Pathtracer ISPC - Model: Asian Dragon

Embree 4.3 (Frames Per Second, More Is Better)
32 z: 46.31 (MIN: 46.05 / MAX: 46.74) | 32 d: 45.65 (MIN: 45.37 / MAX: 46.89) | 32 c: 45.46 (MIN: 45.22 / MAX: 46.6) | 32: 45.94 (MIN: 45.66 / MAX: 46.38)

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 4K

SVT-AV1 1.8 (Frames Per Second, More Is Better)
32 z: 58.72 | 32 d: 58.64 | 32 c: 47.25 | 32: 48.45
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Y-Cruncher

Pi Digits To Calculate: 500M

Y-Cruncher 0.8.3 (Seconds, Fewer Is Better)
c: 5.213 (SE +/- 0.118, N = 3) | b: 5.202 | Zen 1 - EPYC 7601: 15.693 | 32 z: 5.685 | 32 d: 5.751 | 32 c: 5.783 | 32: 5.656

DaCapo Benchmark

Java Test: Spring Boot

DaCapo Benchmark 23.11 (msec, Fewer Is Better)
32 z: 2460 | 32 d: 2452 | 32 c: 2533 | 32: 2444

TensorFlow

Device: CPU - Batch Size: 16 - Model: GoogLeNet

TensorFlow 2.12 (images/sec, More Is Better)
32 z: 155.77 | 32 d: 158.08 | 32 c: 157.60 | 32: 158.47

TensorFlow

Device: CPU - Batch Size: 1 - Model: VGG-16

TensorFlow 2.12 (images/sec, More Is Better)
32 z: 9.75 | 32 d: 9.75 | 32 c: 9.77 | 32: 9.73

DaCapo Benchmark

Java Test: Apache Tomcat

DaCapo Benchmark 23.11 (msec, Fewer Is Better)
32 z: 2082 | 32 d: 2112 | 32 c: 2094 | 32: 2107

DaCapo Benchmark

Java Test: Apache Lucene Search Engine

DaCapo Benchmark 23.11 (msec, Fewer Is Better)
32 z: 1425 | 32 d: 1433 | 32 c: 1379 | 32: 1402

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 4K

SVT-AV1 1.8 (Frames Per Second, More Is Better)
32 z: 184.98 | 32 d: 184.10 | 32 c: 183.90 | 32: 185.67
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 4K

SVT-AV1 1.8 - Frames Per Second, More Is Better
32 z: 185.56
32 d: 186.37
32 c: 180.96
32: 186.63
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

DaCapo Benchmark

Java Test: PMD Source Code Analyzer

DaCapo Benchmark 23.11 - msec, Fewer Is Better
32 z: 1820
32 d: 1833
32 c: 1966
32: 1784

TensorFlow

Device: CPU - Batch Size: 16 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better
32 z: 274.97
32 d: 276.19
32 c: 274.97
32: 272.93

DaCapo Benchmark

Java Test: Batik SVG Toolkit

DaCapo Benchmark 23.11 - msec, Fewer Is Better
32 z: 1723
32 d: 1738
32 c: 1718
32: 1733

TensorFlow

Device: CPU - Batch Size: 1 - Model: GoogLeNet

TensorFlow 2.12 - images/sec, More Is Better
32 z: 28.71
32 d: 28.79
32 c: 27.73
32: 28.99

TensorFlow

Device: CPU - Batch Size: 1 - Model: AlexNet

TensorFlow 2.12 - images/sec, More Is Better
32 z: 31.92
32 d: 33.02
32 c: 33.14
32: 32.12

DaCapo Benchmark

Java Test: FOP Print Formatter

DaCapo Benchmark 23.11 - msec, Fewer Is Better
32 z: 696
32 d: 758
32 c: 764
32: 751

DaCapo Benchmark

Java Test: Apache Xalan XSLT

DaCapo Benchmark 23.11 - msec, Fewer Is Better
32 z: 859
32 d: 861
32 c: 852
32: 871

DaCapo Benchmark

Java Test: Zxing 1D/2D Barcode Image Processing

DaCapo Benchmark 23.11 - msec, Fewer Is Better
32 z: 599
32 d: 599
32 c: 569
32: 609

CPU Power Consumption Monitor

Phoronix Test Suite System Monitoring

CPU Power Consumption Monitor (Watts)
Zen 1 - EPYC 7601: Min: 242.58 / Avg: 585.92 / Max: 718

Meta Performance Per Watts

Performance Per Watts

Performance Per Watts, More Is Better
Zen 1 - EPYC 7601: 13064001.66

Y-Cruncher

CPU Power Consumption Monitor

Y-Cruncher 0.8.3 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
Zen 1 - EPYC 7601: Min: 263 / Avg: 602 / Max: 718

Y-Cruncher

CPU Power Consumption Monitor

Y-Cruncher 0.8.3 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
Zen 1 - EPYC 7601: Min: 262 / Avg: 543 / Max: 712

Quicksilver

CPU Power Consumption Monitor

Quicksilver 20230818 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
Zen 1 - EPYC 7601: Min: 258.88 / Avg: 624.15 / Max: 662.04

Quicksilver

Input: CTS2

Quicksilver 20230818 - Figure Of Merit Per Watt, More Is Better
Zen 1 - EPYC 7601: 18307.66

Quicksilver

CPU Power Consumption Monitor

Quicksilver 20230818 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
Zen 1 - EPYC 7601: Min: 255.2 / Avg: 553.7 / Max: 594.9

Quicksilver

Input: CORAL2 P2

Quicksilver 20230818 - Figure Of Merit Per Watt, More Is Better
Zen 1 - EPYC 7601: 27116.87

Quicksilver

CPU Power Consumption Monitor

Quicksilver 20230818 - CPU Power Consumption Monitor (Watts, Fewer Is Better)
Zen 1 - EPYC 7601: Min: 243 / Avg: 584 / Max: 648

Quicksilver

Input: CORAL2 P1

Quicksilver 20230818 - Figure Of Merit Per Watt, More Is Better
Zen 1 - EPYC 7601: 22248.55
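The per-watt figures above pair each run's raw result with the average CPU power recorded by the monitor. As a minimal sketch of that derivation (the raw Figure Of Merit values behind these per-watt numbers are not shown in this file, so the inputs below are purely illustrative, not taken from the runs above):

```python
def figure_of_merit_per_watt(raw_fom: float, avg_watts: float) -> float:
    """Divide a raw throughput figure by the average monitored CPU power.

    Illustrative helper only; the name and inputs are assumptions, not
    the Phoronix Test Suite's internal implementation.
    """
    if avg_watts <= 0:
        raise ValueError("average power must be positive")
    return raw_fom / avg_watts

# Hypothetical raw Figure Of Merit, paired with a 584 Watt average.
print(round(figure_of_merit_per_watt(13_000_000, 584.0), 2))
```

A higher result on the same average power, or the same result on less power, both raise the per-watt figure, which is why the efficiency graphs can rank systems differently than the raw throughput graphs.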


Phoronix Test Suite v10.8.4