Sapphire Rapids AVX-512 Benchmarks

Ice Lake, Sapphire Rapids, and Genoa AVX-512 benchmarks by Michael Larabel for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2301176-NE-SAPPHIRER14
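Before running the comparison command above, it can be worth checking whether the local CPU exposes AVX-512 at all. A minimal Linux-only sketch (the `avx512f` flag is the AVX-512 Foundation feature; without it none of the AVX-512 subsets exercised in these runs are available):

```shell
# Linux-only: CPU feature flags are exported via /proc/cpuinfo.
# avx512f is the AVX-512 Foundation flag required by all the
# -mavx512* options used in this result file.
if grep -q -m1 '\bavx512f\b' /proc/cpuinfo; then
    echo "avx512f: supported"
else
    echo "avx512f: not supported"
fi
```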
Tests in this result file span the following OpenBenchmarking.org categories:

CPU Massive: 3 tests
Creator Workloads: 7 tests
Cryptography: 2 tests
Game Development: 2 tests
HPC - High Performance Computing: 7 tests
Machine Learning: 5 tests
Multi-Core: 8 tests
Intel oneAPI: 7 tests
Python Tests: 2 tests
Raytracing: 2 tests
Renderers: 2 tests
Server CPU Tests: 2 tests

Test Runs

Result Identifier        Date             Test Duration
Xeon 8380: AVX-512 Off   January 13 2023  9 Hours, 51 Minutes
Xeon 8380: AVX-512 On    January 13 2023  10 Hours, 13 Minutes
Xeon 8490H: AVX-512 Off  January 12 2023  7 Hours, 13 Minutes
Xeon 8490H: AVX-512 On   January 11 2023  7 Hours, 3 Minutes
EPYC 9654: AVX-512 Off   January 15 2023  7 Hours, 12 Minutes
EPYC 9654: AVX-512 On    January 16 2023  8 Hours, 47 Minutes



System Details

Xeon 8490H system:
  Processor: 2 x Intel Xeon Platinum 8490H @ 3.50GHz (120 Cores / 240 Threads)
  Motherboard: Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS)
  Chipset: Intel Device 1bce
  Memory: 1008GB
  Disk: 2 x 1920GB SAMSUNG MZWLJ1T9HBJR-00007
  Graphics: ASPEED
  Monitor: VGA HDMI
  Network: 4 x Intel E810-C for QSFP + 2 x Intel X710 for 10GBASE-T

Xeon 8380 system (fields not listed match the Xeon 8490H entry above):
  Processor: 2 x Intel Xeon Platinum 8380 @ 3.40GHz (80 Cores / 160 Threads)
  Motherboard: Intel M50CYP2SB2U (SE5C6200.86B.0022.D08.2103221623 BIOS)
  Chipset: Intel Ice Lake IEH
  Memory: 512GB
  Network: 2 x Intel X710 for 10GBASE-T + 2 x Intel E810-C for QSFP

EPYC 9654 system (fields not listed match the Xeon 8490H entry above):
  Processor: 2 x AMD EPYC 9654 96-Core @ 3.71GHz (192 Cores / 384 Threads)
  Motherboard: AMD Titanite_4G (RTI1002E BIOS)
  Chipset: AMD Device 14a4
  Memory: 1520GB
  Network: Broadcom NetXtreme BCM5720 PCIe

Software (all systems):
  OS: Ubuntu 22.10
  Kernel: 5.19.0-29-generic (x86_64)
  Desktop: GNOME Shell 43.1
  Display Server: X Server 1.21.1.4
  Vulkan: 1.3.224
  Compiler: GCC 12.2.0
  File-System: ext4
  Screen Resolution: 1920x1080

Kernel Details: Transparent Huge Pages: madvise

Environment Details:
  AVX-512 On runs (all three systems):
    CXXFLAGS=CFLAGS="-O3 -march=native -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -mprefer-vector-width=512 -mno-amx-tile"
  AVX-512 Off runs (all three systems):
    CXXFLAGS=CFLAGS="-O3 -march=native -mno-avx512f -mno-amx-tile"

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-U8K4Qv/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details:
  Xeon 8490H (both runs): Scaling Governor: intel_pstate performance (EPP: performance); CPU Microcode: 0x2b0000c0
  Xeon 8380 (both runs): Scaling Governor: intel_pstate performance (EPP: performance); CPU Microcode: 0xd000375
  EPYC 9654 (both runs): Scaling Governor: amd-pstate performance (Boost: Enabled); CPU Microcode: 0xa10110d

Python Details: Python 3.10.7

Security Details (identical for the AVX-512 On and Off runs of each system):
  Xeon 8490H: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected
  Xeon 8380: same as Xeon 8490H, except mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable
  EPYC 9654: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Benchmarks Run (per-run figures for these suites were rendered as interactive graphs on OpenBenchmarking.org; the Neural Magic DeepSparse results follow below):

Neural Magic DeepSparse 1.1: NLP Text Classification (BERT base uncased SST2; DistilBERT mnli), CV Classification (ResNet-50 ImageNet), NLP Token Classification (BERT base uncased conll2003), NLP Question Answering (BERT base uncased SQuaD 12layer Pruned90), CV Detection (YOLOv5s COCO), NLP Document Classification (oBERT base uncased on IMDB); Synchronous Single-Stream and Asynchronous Multi-Stream scenarios
LeelaChessZero: BLAS and Eigen backends
Embree: Pathtracer ISPC (Asian Dragon, Crown)
OpenVKL: vklBenchmark ISPC
Intel Open Image Denoise: RT.hdr_alb_nrm.3840x2160, RTLightmap.hdr.4096x4096
OSPRay: gravity_spheres_volume/dim_512 (ao, scivis, pathtracer)
OSPRay Studio: scenes 1 and 3, 4K, 1/16/32 samples, Path Tracer
oneDNN: Deconvolution Batch shapes_3d, f32, CPU
Mobile Neural Network: mobilenetV3, squeezenetv1.1, resnet-v2-50, SqueezeNetV1.0
Cpuminer-Opt: scrypt, Triple SHA-256 (Onecoin), Quad SHA-256 (Pyrite), x25x, Deepcoin, Skeincoin, LBC (LBRY Credits)
OpenVINO: Age Gender Recognition Retail 0013 FP16-INT8, Vehicle Detection FP16-INT8 and FP16, CPU
miniBUDE: OpenMP, BM1 and BM2
OpenFOAM: drivaerFastback, Medium and Large Mesh Size (Mesh Time, Execution Time)
SMHasher: FarmHash32 x86_64 AVX

Neural Magic DeepSparse 1.1

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

Throughput (items/sec, more is better):
  EPYC 9654, AVX-512 On:    613.94  SE +/- 0.55, N = 3  (min 612.98 / max 614.87)
  EPYC 9654, AVX-512 Off:   510.88  SE +/- 0.22, N = 3  (min 510.43 / max 511.15)
  Xeon 8380, AVX-512 On:    227.45  SE +/- 0.49, N = 3  (min 226.64 / max 228.32)
  Xeon 8380, AVX-512 Off:   183.83  SE +/- 0.52, N = 3  (min 183.04 / max 184.81)
  Xeon 8490H, AVX-512 On:   424.27  SE +/- 0.95, N = 3  (min 422.52 / max 425.77)
  Xeon 8490H, AVX-512 Off:  264.77  SE +/- 0.93, N = 3  (min 263.04 / max 266.21)

Latency (ms/batch, fewer is better):
  EPYC 9654, AVX-512 On:    155.91  SE +/- 0.09, N = 3  (min 155.76 / max 156.08)
  EPYC 9654, AVX-512 Off:   187.35  SE +/- 0.10, N = 3  (min 187.23 / max 187.56)
  Xeon 8380, AVX-512 On:    175.69  SE +/- 0.35, N = 3  (min 175.06 / max 176.27)
  Xeon 8380, AVX-512 Off:   217.10  SE +/- 0.69, N = 3  (min 215.84 / max 218.23)
  Xeon 8490H, AVX-512 On:   141.14  SE +/- 0.32, N = 3  (min 140.74 / max 141.77)
  Xeon 8490H, AVX-512 Off:  226.27  SE +/- 0.84, N = 3  (min 225.13 / max 227.90)
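From the result data above, the AVX-512 throughput uplift for this first DeepSparse test works out as follows (a quick sketch using the reported averages):

```python
# AVX-512 On vs. Off throughput (items/sec) for NLP Text Classification,
# BERT base uncased SST2, Asynchronous Multi-Stream, from this result file.
results = {
    "EPYC 9654":  (613.94, 510.88),   # (AVX-512 On, AVX-512 Off)
    "Xeon 8380":  (227.45, 183.83),
    "Xeon 8490H": (424.27, 264.77),
}

for system, (on, off) in results.items():
    print(f"{system}: {on / off:.2f}x with AVX-512")
# EPYC 9654: 1.20x with AVX-512
# Xeon 8380: 1.24x with AVX-512
# Xeon 8490H: 1.60x with AVX-512
```

Sapphire Rapids (Xeon 8490H) shows the largest relative gain here; Genoa still improves despite its double-pumped 256-bit AVX-512 datapath.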

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

Throughput (items/sec, more is better):
  EPYC 9654, AVX-512 On:    1205.18  SE +/- 3.78, N = 3  (min 1200.58 / max 1212.68)
  EPYC 9654, AVX-512 Off:   1006.08  SE +/- 0.56, N = 3  (min 1004.96 / max 1006.68)
  Xeon 8380, AVX-512 On:     455.03  SE +/- 0.94, N = 3  (min 453.31 / max 456.55)
  Xeon 8380, AVX-512 Off:    379.58  SE +/- 0.62, N = 3  (min 378.95 / max 380.81)
  Xeon 8490H, AVX-512 On:    875.64  SE +/- 2.15, N = 3  (min 871.69 / max 879.10)
  Xeon 8490H, AVX-512 Off:   584.15  SE +/- 0.51, N = 3  (min 583.21 / max 584.96)

Latency (ms/batch, fewer is better):
  EPYC 9654, AVX-512 On:     79.50  SE +/- 0.26, N = 3  (min 78.98 / max 79.84)
  EPYC 9654, AVX-512 Off:    95.17  SE +/- 0.04, N = 3  (min 95.09 / max 95.24)
  Xeon 8380, AVX-512 On:     87.87  SE +/- 0.18, N = 3  (min 87.58 / max 88.21)
  Xeon 8380, AVX-512 Off:   105.26  SE +/- 0.17, N = 3  (min 104.94 / max 105.50)
  Xeon 8490H, AVX-512 On:    68.43  SE +/- 0.18, N = 3  (min 68.14 / max 68.75)
  Xeon 8490H, AVX-512 Off:  102.61  SE +/- 0.06, N = 3  (min 102.51 / max 102.71)

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

Throughput (items/sec, more is better):
  EPYC 9654, AVX-512 On:    191.33  SE +/- 0.36, N = 3  (min 190.96 / max 192.06)
  EPYC 9654, AVX-512 Off:   206.32  SE +/- 0.04, N = 3  (min 206.24 / max 206.38)
  Xeon 8380, AVX-512 On:    228.12  SE +/- 0.16, N = 3  (min 227.91 / max 228.42)
  Xeon 8380, AVX-512 Off:   113.33  SE +/- 0.44, N = 3  (min 112.48 / max 113.96)
  Xeon 8490H, AVX-512 On:   281.24  SE +/- 0.71, N = 3  (min 279.83 / max 282.12)
  Xeon 8490H, AVX-512 Off:  187.65  SE +/- 0.21, N = 3  (min 187.33 / max 188.05)

Latency (ms/batch, fewer is better):
  EPYC 9654, AVX-512 On:    5.2240  SE +/- 0.0099, N = 3  (min 5.20 / max 5.23)
  EPYC 9654, AVX-512 Off:   4.8444  SE +/- 0.0009, N = 3  (min 4.84 / max 4.85)
  Xeon 8380, AVX-512 On:    4.3750  SE +/- 0.0029, N = 3  (min 4.37 / max 4.38)
  Xeon 8380, AVX-512 Off:   8.8158  SE +/- 0.0346, N = 3  (min 8.77 / max 8.88)
  Xeon 8490H, AVX-512 On:   3.5509  SE +/- 0.0091, N = 3  (min 3.54 / max 3.57)
  Xeon 8490H, AVX-512 Off:  5.3242  SE +/- 0.0060, N = 3  (min 5.31 / max 5.33)

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Throughput (items/sec, more is better):
  EPYC 9654, AVX-512 On:    1966.41  SE +/- 1.71, N = 3  (min 1963.17 / max 1969.00)
  EPYC 9654, AVX-512 Off:   1410.42  SE +/- 0.94, N = 3  (min 1409.02 / max 1412.22)
  Xeon 8380, AVX-512 On:     835.78  SE +/- 0.98, N = 3  (min 833.96 / max 837.31)
  Xeon 8380, AVX-512 Off:    541.86  SE +/- 1.47, N = 3  (min 539.25 / max 544.36)
  Xeon 8490H, AVX-512 On:   1492.88  SE +/- 1.88, N = 3  (min 1490.44 / max 1496.57)
  Xeon 8490H, AVX-512 Off:   832.85  SE +/- 0.84, N = 3  (min 831.73 / max 834.49)

Latency (ms/batch, fewer is better):
  EPYC 9654, AVX-512 On:    48.70  SE +/- 0.05, N = 3  (min 48.65 / max 48.80)
  EPYC 9654, AVX-512 Off:   67.90  SE +/- 0.02, N = 3  (min 67.86 / max 67.94)
  Xeon 8380, AVX-512 On:    47.82  SE +/- 0.06, N = 3  (min 47.74 / max 47.93)
  Xeon 8380, AVX-512 Off:   73.75  SE +/- 0.18, N = 3  (min 73.43 / max 74.07)
  Xeon 8490H, AVX-512 On:   40.14  SE +/- 0.05, N = 3  (min 40.05 / max 40.19)
  Xeon 8490H, AVX-512 Off:  71.93  SE +/- 0.05, N = 3  (min 71.83 / max 72.00)

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

Throughput (items/sec, more is better):
  EPYC 9654, AVX-512 On:    34.23  SE +/- 0.07, N = 3  (min 34.13 / max 34.38)
  EPYC 9654, AVX-512 Off:   30.86  SE +/- 0.07, N = 3  (min 30.71 / max 30.93)
  Xeon 8380, AVX-512 On:    20.19  SE +/- 0.04, N = 3  (min 20.14 / max 20.26)
  Xeon 8380, AVX-512 Off:   16.83  SE +/- 0.01, N = 3  (min 16.81 / max 16.84)
  Xeon 8490H, AVX-512 On:   27.07  SE +/- 0.01, N = 3  (min 27.05 / max 27.08)
  Xeon 8490H, AVX-512 Off:  21.29  SE +/- 0.09, N = 3  (min 21.17 / max 21.47)

Latency (ms/batch, fewer is better):
  EPYC 9654, AVX-512 On:    29.20  SE +/- 0.06, N = 3  (min 29.08 / max 29.29)
  EPYC 9654, AVX-512 Off:   32.40  SE +/- 0.07, N = 3  (min 32.32 / max 32.55)
  Xeon 8380, AVX-512 On:    49.51  SE +/- 0.09, N = 3  (min 49.34 / max 49.64)
  Xeon 8380, AVX-512 Off:   59.42  SE +/- 0.04, N = 3  (min 59.36 / max 59.49)
  Xeon 8490H, AVX-512 On:   36.94  SE +/- 0.01, N = 3  (min 36.92 / max 36.96)
  Xeon 8490H, AVX-512 Off:  46.97  SE +/- 0.20, N = 3  (min 46.58 / max 47.24)

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

Throughput (items/sec, more is better):
  EPYC 9654, AVX-512 On:     84.87  SE +/- 0.19, N = 3  (min 84.52 / max 85.16)
  EPYC 9654, AVX-512 Off:   120.51  SE +/- 0.22, N = 3  (min 120.09 / max 120.77)
  Xeon 8380, AVX-512 On:     48.41  SE +/- 0.21, N = 3  (min 48.20 / max 48.83)
  Xeon 8380, AVX-512 Off:    43.11  SE +/- 0.10, N = 3  (min 42.95 / max 43.28)
  Xeon 8490H, AVX-512 On:    92.02  SE +/- 0.03, N = 3  (min 91.99 / max 92.08)
  Xeon 8490H, AVX-512 Off:   66.75  SE +/- 0.17, N = 3  (min 66.50 / max 67.08)

Latency (ms/batch, fewer is better):
  EPYC 9654, AVX-512 On:    1128.85  SE +/- 2.80, N = 3  (min 1123.96 / max 1133.68)
  EPYC 9654, AVX-512 Off:    787.33  SE +/- 0.46, N = 3  (min 786.78 / max 788.25)
  Xeon 8380, AVX-512 On:     819.53  SE +/- 3.51, N = 3  (min 812.53 / max 823.43)
  Xeon 8380, AVX-512 Off:    916.94  SE +/- 1.29, N = 3  (min 914.37 / max 918.44)
  Xeon 8490H, AVX-512 On:    648.20  SE +/- 0.76, N = 3  (min 646.74 / max 649.31)
  Xeon 8490H, AVX-512 Off:   887.59  SE +/- 1.38, N = 3  (min 886.01 / max 890.35)

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

Throughput (items/sec, more is better):
  EPYC 9654, AVX-512 On:    100.29  SE +/- 0.67, N = 3  (min 98.97 / max 101.19)
  EPYC 9654, AVX-512 Off:    71.51  SE +/- 0.16, N = 3  (min 71.28 / max 71.83)
  Xeon 8380, AVX-512 On:     79.14  SE +/- 0.13, N = 3  (min 78.93 / max 79.37)
  Xeon 8380, AVX-512 Off:    45.85  SE +/- 0.14, N = 3  (min 45.70 / max 46.13)
  Xeon 8490H, AVX-512 On:    83.70  SE +/- 0.15, N = 3  (min 83.41 / max 83.86)
  Xeon 8490H, AVX-512 Off:   46.86  SE +/- 0.12, N = 3  (min 46.73 / max 47.10)

Latency (ms/batch, fewer is better):
  EPYC 9654, AVX-512 On:     9.9667  SE +/- 0.0671, N = 3  (min 9.88 / max 10.10)
  EPYC 9654, AVX-512 Off:   13.9740  SE +/- 0.0319, N = 3  (min 13.91 / max 14.02)
  Xeon 8380, AVX-512 On:    12.6264  SE +/- 0.0206, N = 3  (min 12.59 / max 12.66)
  Xeon 8380, AVX-512 Off:   21.7973  SE +/- 0.0648, N = 3  (min 21.67 / max 21.87)
  Xeon 8490H, AVX-512 On:   11.9378  SE +/- 0.0214, N = 3  (min 11.91 / max 11.98)
  Xeon 8490H, AVX-512 Off:  21.3212  SE +/- 0.0534, N = 3  (min 21.21 / max 21.38)

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

Throughput (items/sec, more is better):
  EPYC 9654, AVX-512 On:    704.58  SE +/- 5.42, N = 13  (min 681.88 / max 762.39)
  EPYC 9654, AVX-512 Off:   491.34  SE +/- 1.03, N = 3   (min 489.36 / max 492.82)
  Xeon 8380, AVX-512 On:    221.73  SE +/- 0.18, N = 3   (min 221.44 / max 222.07)
  Xeon 8380, AVX-512 Off:   158.53  SE +/- 0.92, N = 3   (min 156.79 / max 159.93)
  Xeon 8490H, AVX-512 On:   355.67  SE +/- 1.22, N = 3   (min 353.37 / max 357.52)
  Xeon 8490H, AVX-512 Off:  232.21  SE +/- 0.51, N = 3   (min 231.21 / max 232.86)

Latency (ms/batch, fewer is better):
  EPYC 9654, AVX-512 On:    135.99  SE +/- 0.99, N = 13  (min 125.57 / max 140.35)
  EPYC 9654, AVX-512 Off:   194.72  SE +/- 0.35, N = 3   (min 194.16 / max 195.37)
  Xeon 8380, AVX-512 On:    180.17  SE +/- 0.11, N = 3   (min 180.06 / max 180.38)
  Xeon 8380, AVX-512 Off:   251.57  SE +/- 1.34, N = 3   (min 249.84 / max 254.22)
  Xeon 8490H, AVX-512 On:   168.23  SE +/- 0.53, N = 3   (min 167.45 / max 169.24)
  Xeon 8490H, AVX-512 Off:  257.56  SE +/- 0.30, N = 3   (min 257.24 / max 258.16)

Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream

Throughput (items/sec, more is better):
  EPYC 9654, AVX-512 On:    126.01  SE +/- 2.25, N = 15  (min 116.36 / max 135.40)
  EPYC 9654, AVX-512 Off:   109.26  SE +/- 1.97, N = 15  (min 99.65 / max 119.75)
  Xeon 8380, AVX-512 On:     83.42  SE +/- 0.06, N = 3   (min 83.36 / max 83.53)
  Xeon 8380, AVX-512 Off:    69.73  SE +/- 0.21, N = 3   (min 69.44 / max 70.13)
  Xeon 8490H, AVX-512 On:   100.54  SE +/- 1.49, N = 15  (min 86.23 / max 106.06)
  Xeon 8490H, AVX-512 Off:   84.84  SE +/- 0.59, N = 3   (min 84.03 / max 85.99)

Latency (ms/batch, fewer is better):
  EPYC 9654, AVX-512 On:     7.9604  SE +/- 0.1413, N = 15  (min 7.37 / max 8.58)
  EPYC 9654, AVX-512 Off:    9.1835  SE +/- 0.1629, N = 15  (min 8.34 / max 10.02)
  Xeon 8380, AVX-512 On:    11.9769  SE +/- 0.0082, N = 3   (min 11.96 / max 11.99)
  Xeon 8380, AVX-512 Off:   14.3286  SE +/- 0.0435, N = 3   (min 14.24 / max 14.39)
  Xeon 8490H, AVX-512 On:    9.9734  SE +/- 0.1607, N = 15  (min 9.42 / max 11.59)
  Xeon 8490H, AVX-512 Off:  11.7823  SE +/- 0.0817, N = 3   (min 11.62 / max 11.90)

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Throughput (items/sec, more is better):
  EPYC 9654, AVX-512 On:    809.08  SE +/- 2.56, N = 3  (min 803.95 / max 811.76)
  EPYC 9654, AVX-512 Off:   660.54  SE +/- 0.26, N = 3  (min 660.03 / max 660.85)
  Xeon 8380, AVX-512 On:    318.79  SE +/- 0.38, N = 3  (min 318.03 / max 319.19)
  Xeon 8380, AVX-512 Off:   247.27  SE +/- 0.62, N = 3  (min 246.52 / max 248.51)
  Xeon 8490H, AVX-512 On:   592.45  SE +/- 0.33, N = 3  (min 591.98 / max 593.10)
  Xeon 8490H, AVX-512 Off:  405.72  SE +/- 0.43, N = 3  (min 405.10 / max 406.53)

Latency (ms/batch, fewer is better):
  EPYC 9654, AVX-512 On:    118.35  SE +/- 0.36, N = 3  (min 117.97 / max 119.07)
  EPYC 9654, AVX-512 Off:   144.89  SE +/- 0.05, N = 3  (min 144.79 / max 144.98)
  Xeon 8380, AVX-512 On:    125.16  SE +/- 0.17, N = 3  (min 124.99 / max 125.50)
  Xeon 8380, AVX-512 Off:   161.33  SE +/- 0.40, N = 3  (min 160.54 / max 161.85)
  Xeon 8490H, AVX-512 On:   101.04  SE +/- 0.05, N = 3  (min 100.95 / max 101.10)
  Xeon 8490H, AVX-512 Off:  147.46  SE +/- 0.14, N = 3  (min 147.22 / max 147.70)

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, more is better):
  Xeon 8380, AVX-512 Off:  16.80 (SE +/- 0.02, N = 3; min 16.76 / max 16.85)
  Xeon 8380, AVX-512 On:   20.23 (SE +/- 0.04, N = 3; min 20.16 / max 20.29)
  Xeon 8490H, AVX-512 Off: 21.18 (SE +/- 0.02, N = 3; min 21.13 / max 21.21)
  Xeon 8490H, AVX-512 On:  26.84 (SE +/- 0.12, N = 3; min 26.68 / max 27.08)
  EPYC 9654, AVX-512 Off:  30.46 (SE +/- 0.08, N = 3; min 30.31 / max 30.56)
  EPYC 9654, AVX-512 On:   34.50 (SE +/- 0.02, N = 3; min 34.45 / max 34.53)

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, fewer is better):
  Xeon 8380, AVX-512 Off:  59.52 (SE +/- 0.09, N = 3; min 59.35 / max 59.66)
  Xeon 8380, AVX-512 On:   49.42 (SE +/- 0.10, N = 3; min 49.27 / max 49.60)
  Xeon 8490H, AVX-512 Off: 47.21 (SE +/- 0.05, N = 3; min 47.14 / max 47.31)
  Xeon 8490H, AVX-512 On:  37.25 (SE +/- 0.17, N = 3; min 36.93 / max 37.48)
  EPYC 9654, AVX-512 Off:  32.83 (SE +/- 0.08, N = 3; min 32.72 / max 32.99)
  EPYC 9654, AVX-512 On:   28.98 (SE +/- 0.02, N = 3; min 28.95 / max 29.02)

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, more is better):
  Xeon 8380, AVX-512 Off:  43.16 (SE +/- 0.16, N = 3; min 42.89 / max 43.43)
  Xeon 8380, AVX-512 On:   48.24 (SE +/- 0.01, N = 3; min 48.23 / max 48.26)
  Xeon 8490H, AVX-512 Off: 66.73 (SE +/- 0.23, N = 3; min 66.29 / max 67.09)
  Xeon 8490H, AVX-512 On:  92.03 (SE +/- 0.19, N = 3; min 91.66 / max 92.23)
  EPYC 9654, AVX-512 Off: 120.52 (SE +/- 0.10, N = 3; min 120.37 / max 120.70)
  EPYC 9654, AVX-512 On:   84.40 (SE +/- 0.10, N = 3; min 84.20 / max 84.51)

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better):
  Xeon 8380, AVX-512 Off:   913.42 (SE +/- 2.25, N = 3; min 909.13 / max 916.77)
  Xeon 8380, AVX-512 On:    821.96 (SE +/- 0.79, N = 3; min 820.58 / max 823.32)
  Xeon 8490H, AVX-512 Off:  889.39 (SE +/- 2.38, N = 3; min 886.10 / max 894.02)
  Xeon 8490H, AVX-512 On:   647.75 (SE +/- 0.72, N = 3; min 646.33 / max 648.62)
  EPYC 9654, AVX-512 Off:   787.72 (SE +/- 0.15, N = 3; min 787.55 / max 788.02)
  EPYC 9654, AVX-512 On:   1132.61 (SE +/- 1.43, N = 3; min 1129.76 / max 1134.13)

LeelaChessZero

LeelaChessZero (lc0 / lczero) is a chess engine automated via neural networks. This test profile can be used for OpenCL, CUDA + cuDNN, and BLAS (CPU-based) benchmarking. Learn more via the OpenBenchmarking.org test page.

LeelaChessZero 0.28 - Backend: BLAS (Nodes Per Second, more is better):
  Xeon 8380, AVX-512 Off:   4780 (SE +/- 47.46, N = 5; min 4669 / max 4922)
  Xeon 8380, AVX-512 On:    5524 (SE +/- 68.15, N = 3; min 5428 / max 5656)
  Xeon 8490H, AVX-512 Off:  7542 (SE +/- 61.68, N = 3; min 7450 / max 7659)
  Xeon 8490H, AVX-512 On:  10671 (SE +/- 133.67, N = 3; min 10441 / max 10904)
  EPYC 9654, AVX-512 Off:   9425 (SE +/- 92.33, N = 9; min 9064 / max 10021)
  EPYC 9654, AVX-512 On:    9171 (SE +/- 36.09, N = 3; min 9108 / max 9233)
  (CXX) g++ options: -flto -O3 -march=native -mno-amx-tile -pthread

LeelaChessZero 0.28 - Backend: Eigen (Nodes Per Second, more is better):
  Xeon 8380, AVX-512 Off:   4709 (SE +/- 58.86, N = 3; min 4602 / max 4805)
  Xeon 8380, AVX-512 On:    5750 (SE +/- 64.32, N = 4; min 5617 / max 5916)
  Xeon 8490H, AVX-512 Off:  8243 (SE +/- 63.14, N = 3; min 8156 / max 8366)
  Xeon 8490H, AVX-512 On:  11134 (SE +/- 151.11, N = 3; min 10891 / max 11411)
  EPYC 9654, AVX-512 Off:   8161 (SE +/- 66.84, N = 3; min 8061 / max 8288)
  EPYC 9654, AVX-512 On:    9085 (SE +/- 35.63, N = 3; min 9033 / max 9153)

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree can also make use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.
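Which of those instruction sets a given Linux host actually exposes can be checked from the "flags" line of /proc/cpuinfo. A minimal sketch — the helper name and the trimmed sample string are illustrative, not from this result file:

```python
def isa_support(cpuinfo_text):
    """Report which ISA extensions appear in the first 'flags' line of
    /proc/cpuinfo-style text (Linux prints one flags line per core)."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
            break
    return {isa: isa in flags for isa in ("sse4_2", "avx", "avx2", "avx512f")}

# Trimmed example; on a real system pass open("/proc/cpuinfo").read() instead.
sample = "flags\t\t: fpu sse sse4_2 avx avx2 avx512f avx512vl"
print(isa_support(sample))  # all four report True for this sample line
```

The avx512f flag is the AVX-512 Foundation bit; the individual AVX-512 sub-extensions (avx512vl, avx512vnni, and so on) show up as separate flags.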

Embree 3.13 - Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, more is better):
  Xeon 8380, AVX-512 Off:   96.34 (SE +/- 0.42, N = 6; min 94.63 / max 97.51)
  Xeon 8380, AVX-512 On:   106.18 (SE +/- 0.56, N = 6; min 103.78 / max 107.77)
  Xeon 8490H, AVX-512 Off:  90.40 (SE +/- 0.50, N = 6; min 88.51 / max 92.20)
  Xeon 8490H, AVX-512 On:  122.53 (SE +/- 0.64, N = 6; min 121.25 / max 125.37)
  EPYC 9654, AVX-512 Off:  177.68 (SE +/- 0.11, N = 8; min 177.07 / max 178.23)
  EPYC 9654, AVX-512 On:   213.95 (SE +/- 0.25, N = 9; min 212.25 / max 214.74)

Embree 3.13 - Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, more is better):
  Xeon 8380, AVX-512 Off:   70.51 (SE +/- 0.27, N = 5; min 70.13 / max 71.57)
  Xeon 8380, AVX-512 On:    74.80 (SE +/- 0.17, N = 5; min 74.52 / max 75.49)
  Xeon 8490H, AVX-512 Off:  62.04 (SE +/- 5.25, N = 15; min 42.58 / max 90.41)
  Xeon 8490H, AVX-512 On:   92.65 (SE +/- 0.53, N = 6; min 90.89 / max 94.17)
  EPYC 9654, AVX-512 Off:  153.18 (SE +/- 0.42, N = 7; min 151.26 / max 154.52)
  EPYC 9654, AVX-512 On:   181.89 (SE +/- 0.47, N = 8; min 179.83 / max 183.60)

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 1.3.1 - Benchmark: vklBenchmark ISPC (Items / Sec, more is better):
  Xeon 8380, AVX-512 Off:   792 (SE +/- 1.86, N = 3; min 790 / max 796)
  Xeon 8380, AVX-512 On:    917 (SE +/- 1.76, N = 3; min 914 / max 920)
  Xeon 8490H, AVX-512 Off: 1034 (SE +/- 4.16, N = 3; min 1028 / max 1042)
  Xeon 8490H, AVX-512 On:  1196 (SE +/- 2.52, N = 3; min 1191 / max 1199)
  EPYC 9654, AVX-512 Off:  1176 (SE +/- 13.87, N = 3; min 1151 / max 1199)
  EPYC 9654, AVX-512 On:   1341 (SE +/- 16.33, N = 3; min 1311 / max 1367)

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray tracing and is part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 1.4.0 - Run: RT.hdr_alb_nrm.3840x2160 (Images / Sec, more is better):
  Xeon 8380, AVX-512 Off:  2.07 (SE +/- 0.01, N = 4; min 2.06 / max 2.08)
  Xeon 8380, AVX-512 On:   2.94 (SE +/- 0.01, N = 5; min 2.93 / max 2.96)
  Xeon 8490H, AVX-512 Off: 2.95 (SE +/- 0.02, N = 5; min 2.89 / max 2.99)
  Xeon 8490H, AVX-512 On:  4.19 (SE +/- 0.02, N = 6; min 4.13 / max 4.24)
  EPYC 9654, AVX-512 Off:  3.46 (SE +/- 0.02, N = 5; min 3.43 / max 3.52)
  EPYC 9654, AVX-512 On:   3.48 (SE +/- 0.01, N = 5; min 3.44 / max 3.51)

Intel Open Image Denoise 1.4.0 - Run: RTLightmap.hdr.4096x4096 (Images / Sec, more is better):
  Xeon 8380, AVX-512 Off:  1.03 (SE +/- 0.00, N = 3; min 1.03 / max 1.03)
  Xeon 8380, AVX-512 On:   1.42 (SE +/- 0.00, N = 3; min 1.42 / max 1.43)
  Xeon 8490H, AVX-512 Off: 1.48 (SE +/- 0.00, N = 3; min 1.48 / max 1.48)
  Xeon 8490H, AVX-512 On:  1.99 (SE +/- 0.01, N = 4; min 1.97 / max 2.00)
  EPYC 9654, AVX-512 Off:  1.72 (SE +/- 0.00, N = 3; min 1.72 / max 1.72)
  EPYC 9654, AVX-512 On:   1.66 (SE +/- 0.00, N = 3; min 1.65 / max 1.67)

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, more is better):
  Xeon 8380, AVX-512 Off:  14.35 (SE +/- 0.05, N = 3; min 14.25 / max 14.40)
  Xeon 8380, AVX-512 On:   21.22 (SE +/- 0.04, N = 3; min 21.15 / max 21.27)
  Xeon 8490H, AVX-512 Off: 23.17 (SE +/- 0.04, N = 3; min 23.10 / max 23.24)
  Xeon 8490H, AVX-512 On:  36.36 (SE +/- 0.12, N = 3; min 36.17 / max 36.57)
  EPYC 9654, AVX-512 Off:  26.40 (SE +/- 0.04, N = 3; min 26.35 / max 26.48)
  EPYC 9654, AVX-512 On:   43.71 (SE +/- 0.06, N = 3; min 43.61 / max 43.81)

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, more is better):
  Xeon 8380, AVX-512 Off:  13.78 (SE +/- 0.03, N = 3; min 13.71 / max 13.82)
  Xeon 8380, AVX-512 On:   21.06 (SE +/- 0.03, N = 3; min 21.02 / max 21.11)
  Xeon 8490H, AVX-512 Off: 22.83 (SE +/- 0.08, N = 3; min 22.69 / max 22.98)
  Xeon 8490H, AVX-512 On:  35.95 (SE +/- 0.03, N = 3; min 35.90 / max 36.00)
  EPYC 9654, AVX-512 Off:  25.31 (SE +/- 0.03, N = 3; min 25.26 / max 25.36)
  EPYC 9654, AVX-512 On:   43.24 (SE +/- 0.10, N = 3; min 43.05 / max 43.35)

OSPRay 2.10 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, more is better):
  Xeon 8380, AVX-512 Off:  19.96 (SE +/- 0.03, N = 3; min 19.91 / max 20.01)
  Xeon 8380, AVX-512 On:   25.52 (SE +/- 0.11, N = 3; min 25.38 / max 25.74)
  Xeon 8490H, AVX-512 Off: 32.48 (SE +/- 0.03, N = 3; min 32.46 / max 32.54)
  Xeon 8490H, AVX-512 On:  42.97 (SE +/- 0.05, N = 3; min 42.91 / max 43.06)
  EPYC 9654, AVX-512 Off:  40.67 (SE +/- 0.05, N = 3; min 40.58 / max 40.75)
  EPYC 9654, AVX-512 On:   54.16 (SE +/- 0.02, N = 3; min 54.12 / max 54.19)

OSPRay Studio

Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
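Unlike the Embree and OSPRay results above, the OSPRay Studio results below are per-frame render times in milliseconds, so lower is better. Converting a frame time to an equivalent frame rate is simply 1000 / ms; a small sketch:

```python
def ms_to_fps(frame_time_ms):
    """Convert a per-frame render time in milliseconds to frames per second."""
    return 1000.0 / frame_time_ms

# e.g. a 581 ms frame (one of the 4K path-traced results below) is ~1.72 FPS
print(round(ms_to_fps(581), 2))  # -> 1.72
```

This makes the "fewer is better" times directly comparable against the FPS-based renderer charts.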

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better):
  Xeon 8380, AVX-512 Off:  1491 (SE +/- 1.00, N = 3; min 1490 / max 1493)
  Xeon 8380, AVX-512 On:   1308 (SE +/- 0.33, N = 3; min 1308 / max 1309)
  Xeon 8490H, AVX-512 Off:  980 (SE +/- 0.67, N = 3; min 979 / max 981)
  Xeon 8490H, AVX-512 On:   859 (SE +/- 0.33, N = 3; min 858 / max 859)
  EPYC 9654, AVX-512 Off:   705 (SE +/- 0.33, N = 3; min 704 / max 705)
  EPYC 9654, AVX-512 On:    581 (SE +/- 1.20, N = 3; min 579 / max 583)
  (CXX) g++ options: -O3 -march=native -mno-amx-tile -ldl

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better):
  Xeon 8380, AVX-512 Off:  23943 (SE +/- 42.81, N = 3; min 23890 / max 24028)
  Xeon 8380, AVX-512 On:   20872 (SE +/- 8.41, N = 3; min 20858 / max 20887)
  Xeon 8490H, AVX-512 Off: 15601 (SE +/- 37.53, N = 3; min 15561 / max 15676)
  Xeon 8490H, AVX-512 On:  13589 (SE +/- 19.41, N = 3; min 13567 / max 13628)
  EPYC 9654, AVX-512 Off:  11216 (SE +/- 8.39, N = 3; min 11202 / max 11231)
  EPYC 9654, AVX-512 On:    9260 (SE +/- 7.00, N = 3; min 9249 / max 9273)

OSPRay Studio 0.11 - Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better):
  Xeon 8380, AVX-512 Off:  48165 (SE +/- 10.69, N = 3; min 48146 / max 48183)
  Xeon 8380, AVX-512 On:   41834 (SE +/- 44.00, N = 3; min 41789 / max 41922)
  Xeon 8490H, AVX-512 Off: 31373 (SE +/- 69.83, N = 3; min 31270 / max 31506)
  Xeon 8490H, AVX-512 On:  27204 (SE +/- 11.72, N = 3; min 27181 / max 27218)
  EPYC 9654, AVX-512 Off:  22475 (SE +/- 50.78, N = 3; min 22378 / max 22550)
  EPYC 9654, AVX-512 On:   18500 (SE +/- 30.61, N = 3; min 18465 / max 18561)

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better):
  Xeon 8380, AVX-512 Off:  1770 (SE +/- 1.00, N = 3; min 1768 / max 1771)
  Xeon 8380, AVX-512 On:   1563 (SE +/- 0.00, N = 3; min 1563 / max 1563)
  Xeon 8490H, AVX-512 Off: 1170 (SE +/- 1.20, N = 3; min 1168 / max 1172)
  Xeon 8490H, AVX-512 On:  1029 (SE +/- 0.33, N = 3; min 1028 / max 1029)
  EPYC 9654, AVX-512 Off:   836 (SE +/- 0.67, N = 3; min 835 / max 837)
  EPYC 9654, AVX-512 On:    695 (SE +/- 0.33, N = 3; min 694 / max 695)

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, fewer is better):
  Xeon 8380, AVX-512 Off:  28490 (SE +/- 46.52, N = 3; min 28401 / max 28558)
  Xeon 8380, AVX-512 On:   24927 (SE +/- 18.50, N = 3; min 24896 / max 24960)
  Xeon 8490H, AVX-512 Off: 18608 (SE +/- 45.36, N = 3; min 18528 / max 18685)
  Xeon 8490H, AVX-512 On:  16332 (SE +/- 25.54, N = 3; min 16299 / max 16382)
  EPYC 9654, AVX-512 Off:  13341 (SE +/- 32.66, N = 3; min 13301 / max 13406)
  EPYC 9654, AVX-512 On:   11058 (SE +/- 3.79, N = 3; min 11052 / max 11065)

OSPRay Studio 0.11 - Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, fewer is better):
  Xeon 8380, AVX-512 Off:  62780 (SE +/- 56.19, N = 3; min 62670 / max 62854)
  Xeon 8380, AVX-512 On:   49847 (SE +/- 4.98, N = 3; min 49838 / max 49855)
  Xeon 8490H, AVX-512 Off: 37300 (SE +/- 115.86, N = 3; min 37088 / max 37487)
  Xeon 8490H, AVX-512 On:  32749 (SE +/- 45.83, N = 3; min 32659 / max 32808)
  EPYC 9654, AVX-512 Off:  26541 (SE +/- 18.25, N = 3; min 26508 / max 26571)
  EPYC 9654, AVX-512 On:   22120 (SE +/- 44.52, N = 3; min 22055 / max 22205)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.0 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better):
  Xeon 8380, AVX-512 Off:  1.992860 (SE +/- 0.002804, N = 9; min 1.98 / max 2.01)
  Xeon 8380, AVX-512 On:   0.875196 (SE +/- 0.001297, N = 9; min 0.87 / max 0.88)
  Xeon 8490H, AVX-512 Off: 2.012410 (SE +/- 0.006513, N = 9; min 1.99 / max 2.05)
  Xeon 8490H, AVX-512 On:  0.736822 (SE +/- 0.001579, N = 9; min 0.73 / max 0.74)
  EPYC 9654, AVX-512 Off:  1.736670 (SE +/- 0.002552, N = 9; min 1.73 / max 1.75)
  EPYC 9654, AVX-512 On:   0.986214 (SE +/- 0.003984, N = 9; min 0.97 / max 1.00)
  (CXX) g++ options: -O3 -march=native -mno-amx-tile -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated path. MNN can make use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 2.1 - Model: mobilenetV3 (ms, fewer is better):
  Xeon 8380, AVX-512 Off:  1.764 (SE +/- 0.022, N = 10; min 1.68 / max 1.89)
  Xeon 8380, AVX-512 On:   1.709 (SE +/- 0.025, N = 15; min 1.59 / max 1.86)
  Xeon 8490H, AVX-512 Off: 2.288 (SE +/- 0.012, N = 3; min 2.27 / max 2.31)
  Xeon 8490H, AVX-512 On:  2.055 (SE +/- 0.005, N = 3; min 2.05 / max 2.06)
  EPYC 9654, AVX-512 Off:  3.889 (SE +/- 0.049, N = 3; min 3.82 / max 3.98)
  EPYC 9654, AVX-512 On:   3.866 (SE +/- 0.053, N = 9; min 3.59 / max 4.07)
  (CXX) g++ options: -O3 -march=native -mno-amx-tile -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl

Mobile Neural Network 2.1 - Model: squeezenetv1.1 (ms, fewer is better):
  Xeon 8380, AVX-512 Off:  2.678 (SE +/- 0.039, N = 10; min 2.55 / max 2.87)
  Xeon 8380, AVX-512 On:   2.297 (SE +/- 0.064, N = 15; min 2.02 / max 2.61)
  Xeon 8490H, AVX-512 Off: 3.551 (SE +/- 0.008, N = 3; min 3.54 / max 3.56)
  Xeon 8490H, AVX-512 On:  2.737 (SE +/- 0.072, N = 3; min 2.60 / max 2.84)
  EPYC 9654, AVX-512 Off:  7.566 (SE +/- 0.137, N = 3; min 7.37 / max 7.83)
  EPYC 9654, AVX-512 On:   7.204 (SE +/- 0.219, N = 9; min 6.21 / max 8.02)

Mobile Neural Network 2.1 - Model: resnet-v2-50 (ms, fewer is better):
  Xeon 8380, AVX-512 Off:  18.133 (SE +/- 0.099, N = 10; min 17.43 / max 18.56)
  Xeon 8380, AVX-512 On:    7.909 (SE +/- 0.074, N = 15; min 7.54 / max 8.34)
  Xeon 8490H, AVX-512 Off: 17.716 (SE +/- 0.060, N = 3; min 17.60 / max 17.78)
  Xeon 8490H, AVX-512 On:   8.496 (SE +/- 0.065, N = 3; min 8.37 / max 8.59)
  EPYC 9654, AVX-512 Off:  24.081 (SE +/- 0.300, N = 3; min 23.52 / max 24.55)
  EPYC 9654, AVX-512 On:   14.727 (SE +/- 0.193, N = 9; min 13.56 / max 15.39)

Mobile Neural Network 2.1 - Model: SqueezeNetV1.0 (ms, fewer is better):
  Xeon 8380, AVX-512 Off:  4.195 (SE +/- 0.056, N = 10; min 3.94 / max 4.38)
  Xeon 8380, AVX-512 On:   3.999 (SE +/- 0.080, N = 15; min 3.55 / max 4.34)
  Xeon 8490H, AVX-512 Off: 5.222 (SE +/- 0.105, N = 3; min 5.03 / max 5.39)
  Xeon 8490H, AVX-512 On:  4.415 (SE +/- 0.082, N = 3; min 4.29 / max 4.57)
  EPYC 9654, AVX-512 Off:  9.278 (SE +/- 0.132, N = 3; min 9.08 / max 9.53)
  EPYC 9654, AVX-512 On:   8.762 (SE +/- 0.234, N = 9; min 7.87 / max 9.61)

Cpuminer-Opt

Cpuminer-Opt is a fork of cpuminer-multi carrying a wide range of CPU performance optimizations for measuring a processor's potential cryptocurrency mining performance across a variety of cryptocurrencies. The benchmark reports the CPU mining hash speed for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.

Cpuminer-Opt 3.20.3 - Algorithm: scrypt (kH/s, more is better):
  Xeon 8380, AVX-512 Off:  1122.74 (SE +/- 0.63, N = 3; min 1121.73 / max 1123.89)
  Xeon 8380, AVX-512 On:   2307.42 (SE +/- 7.11, N = 3; min 2299.47 / max 2321.60)
  Xeon 8490H, AVX-512 Off: 1623.88 (SE +/- 0.54, N = 3; min 1622.99 / max 1624.84)
  Xeon 8490H, AVX-512 On:  3665.55 (SE +/- 1.21, N = 3; min 3663.29 / max 3667.44)
  EPYC 9654, AVX-512 Off:  2947.94 (SE +/- 0.03, N = 3; min 2947.91 / max 2947.99)
  EPYC 9654, AVX-512 On:   4816.31 (SE +/- 13.77, N = 3; min 4796.10 / max 4842.61)
  (CXX) g++ options: -O3 -march=native -mno-amx-tile -lcurl -lz -lpthread -lssl -lcrypto -lgmp
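To condense per-CPU on/off deltas like these into a single figure, ratios are aggregated geometrically (the result viewer's "Show Overall Geometric Mean" option does this kind of aggregation across tests). The same idea can be sketched in a few lines of Python using the scrypt averages from this result file, pairing each CPU's AVX-512 on and off runs:

```python
from math import prod

def geomean(values):
    """Geometric mean: the n-th root of the product of n values."""
    return prod(values) ** (1.0 / len(values))

# AVX-512 on/off speedup ratios from the scrypt kH/s averages in this file,
# for the EPYC 9654, Xeon 8380, and Xeon 8490H respectively:
ratios = [4816.31 / 2947.94, 2307.42 / 1122.74, 3665.55 / 1623.88]
print(round(geomean(ratios), 2))  # roughly a 1.96x mean speedup
```

A geometric mean is used rather than an arithmetic one so that no single ratio dominates and so the result is independent of which configuration is treated as the baseline.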

Cpuminer-Opt 3.20.3 - Algorithm: Triple SHA-256, Onecoin (kH/s, more is better):
  Xeon 8380, AVX-512 Off:   647033 (SE +/- 579.55, N = 3; min 646390 / max 648190)
  Xeon 8380, AVX-512 On:   1328547 (SE +/- 8466.21, N = 3; min 1317220 / max 1345110)
  Xeon 8490H, AVX-512 Off: 1092637 (SE +/- 3398.36, N = 3; min 1085840 / max 1096060)
  Xeon 8490H, AVX-512 On:  2074200 (SE +/- 18201.57, N = 3; min 2052090 / max 2110300)
  EPYC 9654, AVX-512 Off:  3338850 (SE +/- 18173.37, N = 3; min 3318430 / max 3375100)
  EPYC 9654, AVX-512 On:   3326287 (SE +/- 23073.29, N = 15; min 3248420 / max 3509480)

Cpuminer-Opt 3.20.3 - Algorithm: Quad SHA-256, Pyrite (kH/s, more is better):
  Xeon 8380, AVX-512 Off:   355173 (SE +/- 4107.35, N = 3; min 346960 / max 359410)
  Xeon 8380, AVX-512 On:    924040 (SE +/- 1289.92, N = 3; min 922560 / max 926610)
  Xeon 8490H, AVX-512 Off:  687320 (SE +/- 657.95, N = 3; min 686450 / max 688610)
  Xeon 8490H, AVX-512 On:  1472620 (SE +/- 3990.27, N = 3; min 1468550 / max 1480600)
  EPYC 9654, AVX-512 Off:  1334363 (SE +/- 7009.29, N = 3; min 1320850 / max 1344350)
  EPYC 9654, AVX-512 On:   2252500 (SE +/- 10844.37, N = 3; min 2232080 / max 2269040)

Cpuminer-Opt 3.20.3 - Algorithm: x25x (kH/s, more is better):
  Xeon 8380, AVX-512 Off:  1840.41 (SE +/- 4.47, N = 3; min 1831.82 / max 1846.87)
  Xeon 8380, AVX-512 On:   2650.36 (SE +/- 4.49, N = 3; min 2641.98 / max 2657.36)
  Xeon 8490H, AVX-512 Off: 2166.90 (SE +/- 21.26, N = 6; min 2118.64 / max 2265.53)
  Xeon 8490H, AVX-512 On:  4071.59 (SE +/- 26.18, N = 3; min 4032.79 / max 4121.45)
  EPYC 9654, AVX-512 Off:  5712.58 (SE +/- 0.55, N = 3; min 5711.72 / max 5713.61)
  EPYC 9654, AVX-512 On:   7691.78 (SE +/- 54.56, N = 3; min 7590.41 / max 7777.44)

Cpuminer-Opt 3.20.3 - Algorithm: Deepcoin (kH/s, more is better):
  Xeon 8380, AVX-512 Off:   57930 (SE +/- 32.15, N = 3; min 57880 / max 57990)
  Xeon 8380, AVX-512 On:    64690 (SE +/- 52.92, N = 3; min 64610 / max 64790)
  Xeon 8490H, AVX-512 Off:  91107 (SE +/- 928.66, N = 3; min 89820 / max 92910)
  Xeon 8490H, AVX-512 On:  105513 (SE +/- 155.06, N = 3; min 105320 / max 105820)
  EPYC 9654, AVX-512 Off:  165240 (SE +/- 204.21, N = 3; min 164940 / max 165630)
  EPYC 9654, AVX-512 On:   160290 (SE +/- 536.94, N = 3; min 159240 / max 161010)

Cpuminer-Opt 3.20.3 - Algorithm: Skeincoin (kH/s, more is better):
  Xeon 8380, AVX-512 Off:   399350 (SE +/- 716.54, N = 3; min 398140 / max 400620)
  Xeon 8380, AVX-512 On:    613273 (SE +/- 438.80, N = 3; min 612680 / max 614130)
  Xeon 8490H, AVX-512 Off:  677733 (SE +/- 329.36, N = 3; min 677360 / max 678390)
  Xeon 8490H, AVX-512 On:  1072453 (SE +/- 668.14, N = 3; min 1071340 / max 1073650)
  EPYC 9654, AVX-512 Off:  1413063 (SE +/- 7503.92, N = 3; min 1404250 / max 1427990)
  EPYC 9654, AVX-512 On:   1990790 (SE +/- 4597.66, N = 3; min 1984310 / max 1999680)

Cpuminer-Opt 3.20.3 - Algorithm: LBC, LBRY Credits (kH/s, more is better):
  Xeon 8380, AVX-512 Off:   157083 (SE +/- 498.81, N = 3; min 156360 / max 158040)
  Xeon 8380, AVX-512 On:    419133 (SE +/- 390.61, N = 3; min 418620 / max 419900)
  Xeon 8490H, AVX-512 Off:  263917 (SE +/- 820.11, N = 3; min 262970 / max 265550)
  Xeon 8490H, AVX-512 On:   672677 (SE +/- 1047.73, N = 3; min 671440 / max 674760)
  EPYC 9654, AVX-512 Off:   492780 (SE +/- 1744.51, N = 3; min 490730 / max 496250)
  EPYC 9654, AVX-512 On:   1068993 (SE +/- 3825.46, N = 3; min 1061350 / max 1073110)

OpenVINO

This is a test of Intel OpenVINO, a toolkit for optimizing and deploying neural networks, using its built-in benchmarking support to analyze the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, more is better):
  Xeon 8380, AVX-512 Off:   50413.98 (SE +/- 38.18, N = 3; min 50368.89 / max 50489.90)
  Xeon 8380, AVX-512 On:    67701.60 (SE +/- 42.04, N = 3; min 67626.19 / max 67771.49)
  Xeon 8490H, AVX-512 Off:  73906.76 (SE +/- 74.60, N = 3; min 73830.71 / max 74055.95)
  Xeon 8490H, AVX-512 On:  103184.01 (SE +/- 153.42, N = 3; min 102946.01 / max 103470.73)
  EPYC 9654, AVX-512 Off:  114083.24 (SE +/- 673.74, N = 3; min 112795.33 / max 115070.32)
  EPYC 9654, AVX-512 On:   118103.94 (SE +/- 1008.13, N = 3; min 117003.98 / max 120117.32)
  (CXX) g++ options: -O3 -march=native -mno-amx-tile -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv

OpenVINO 2022.3 - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, fewer is better):
  Xeon 8380, AVX-512 Off:  1.58 (SE +/- 0.00, N = 3; min 1.57 / max 1.58)
  Xeon 8380, AVX-512 On:   1.16 (SE +/- 0.00, N = 3; min 1.16 / max 1.16)
  Xeon 8490H, AVX-512 Off: 1.57 (SE +/- 0.00, N = 3; min 1.57 / max 1.57)
  Xeon 8490H, AVX-512 On:  1.04 (SE +/- 0.00, N = 3; min 1.04 / max 1.04)
  EPYC 9654, AVX-512 Off:  1.14 (SE +/- 0.00, N = 3; min 1.13 / max 1.14)
  EPYC 9654, AVX-512 On:   0.96 (SE +/- 0.00, N = 3; min 0.96 / max 0.97)

OpenVINO 2022.3 - Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, more is better):
  Xeon 8380, AVX-512 Off:   1750.30 (SE +/- 2.52, N = 3; min 1746.05 / max 1754.78)
  Xeon 8380, AVX-512 On:    4442.95 (SE +/- 2.81, N = 3; min 4438.93 / max 4448.37)
  Xeon 8490H, AVX-512 Off:  3878.79 (SE +/- 1.04, N = 3; min 3876.78 / max 3880.22)
  Xeon 8490H, AVX-512 On:   6206.72 (SE +/- 1.24, N = 3; min 6204.35 / max 6208.56)
  EPYC 9654, AVX-512 Off:   6218.25 (SE +/- 0.29, N = 3; min 6217.68 / max 6218.57)
  EPYC 9654, AVX-512 On:   11029.45 (SE +/- 1.97, N = 3; min 11026.70 / max 11033.27)

EPYC 9654Xeon 8380Xeon 8490HOpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Vehicle Detection FP16-INT8 - Device: CPUAVX-512 OnAVX-512 Off714212835SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.02, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 34.347.7111.414.4930.9119.311. (CXX) g++ options: -O3 -march=native -mno-amx-tile -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv
EPYC 9654Xeon 8380Xeon 8490HOpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2022.3Model: Vehicle Detection FP16-INT8 - Device: CPUAVX-512 OnAVX-512 Off714212835Min: 4.34 / Avg: 4.34 / Max: 4.34Min: 7.71 / Avg: 7.71 / Max: 7.71Min: 11.38 / Avg: 11.41 / Max: 11.44Min: 4.48 / Avg: 4.49 / Max: 4.49Min: 30.9 / Avg: 30.91 / Max: 30.93Min: 19.3 / Avg: 19.31 / Max: 19.321. (CXX) g++ options: -O3 -march=native -mno-amx-tile -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv

OpenVINO 2022.3 - Model: Vehicle Detection FP16 - Device: CPU
[Graph: FPS, More Is Better; AVX-512 On vs. Off on EPYC 9654, Xeon 8380, Xeon 8490H; averages: 7420.49, 3791.80, 1014.27, 1119.45, 1732.74, 3738.92]
[Graph: ms, Fewer Is Better; averages: 6.46, 12.64, 19.68, 17.83, 17.29, 8.01]
1. (CXX) g++ options: -O3 -march=native -mno-amx-tile -fsigned-char -ffunction-sections -fdata-sections -fno-strict-overflow -fwrapv
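Each OpenVINO chart pairs a throughput (FPS) figure with an average latency (ms) figure for the same run. For asynchronous multi-stream inference the two are linked roughly by latency ≈ streams × 1000 / FPS. A minimal sketch of that relationship; the stream count here is an assumption for illustration, as stream counts are not stated in this result file:

```python
def expected_latency_ms(fps, streams):
    """Approximate average latency for a multi-stream inference run:
    each of `streams` parallel streams completes fps/streams frames
    per second, so one frame takes streams * 1000 / fps milliseconds."""
    return streams * 1000.0 / fps

# Hypothetical example: ~4443 FPS spread across an assumed 20 streams.
print(round(expected_latency_ms(4442.95, 20), 2))  # 4.5
```

This is only a rule of thumb; queueing and batching effects mean the reported ms figures will not match it exactly.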

miniBUDE

MiniBUDE is a mini-application implementing the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.
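miniBUDE reports the same run in two units, GFInst/s and Billion Interactions/s. In this file's figures the ratio between them is a constant 25, i.e. each pose-atom interaction is counted as 25 floating-point instructions. A quick sketch of the conversion; the factor is inferred from this file's own numbers (e.g. 7274.05 GFInst/s vs. 290.96 Billion Interactions/s), not taken from the miniBUDE source:

```python
# Inferred from this result file: GFInst/s / (Billion Interactions/s) ≈ 25.
INST_PER_INTERACTION = 25.0

def gfinst_per_s(billion_interactions_per_s):
    """Convert miniBUDE's Billion Interactions/s metric to GFInst/s."""
    return billion_interactions_per_s * INST_PER_INTERACTION

print(gfinst_per_s(290.96))  # 7274.0
```

So either metric can be derived from the other; the charts simply present both views of one measurement.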

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM1
[Graph: GFInst/s, More Is Better; AVX-512 On vs. Off on EPYC 9654, Xeon 8380, Xeon 8490H; averages: 7274.05, 5192.43, 1796.16, 2339.32, 2480.61, 3586.27]
[Graph: Billion Interactions/s, More Is Better; averages: 290.96, 207.70, 71.85, 93.57, 99.23, 143.45]
1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

miniBUDE 20210901 - Implementation: OpenMP - Input Deck: BM2
[Graph: GFInst/s, More Is Better; AVX-512 On vs. Off on EPYC 9654, Xeon 8380, Xeon 8490H; averages: 8562.69, 6371.32, 1993.07, 2588.76, 2602.65, 3643.84]
[Graph: Billion Interactions/s, More Is Better; averages: 342.51, 254.85, 79.72, 103.55, 104.11, 145.75]
1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.
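OpenFOAM reports mesh time and execution time separately for each input, and the AVX-512 On/Off deltas are small for this largely memory-bound workload. A small helper for expressing an On/Off pair of timings as a percent change; the example figures are two adjacent medium-mesh execution times from this file, without attributing them to a specific system:

```python
def pct_change(new_s, old_s):
    """Percent change of `new_s` relative to `old_s`; negative means faster."""
    return (new_s - old_s) / old_s * 100.0

# Two adjacent medium-mesh execution-time figures from the charts below:
print(round(pct_change(358.63, 361.37), 2))  # -0.76
```

A difference under 1% is well within run-to-run noise for a CFD solve of this length.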

OpenFOAM 10 - Input: drivaerFastback, Medium Mesh Size - Mesh Time
[Graph: Seconds, Fewer Is Better; AVX-512 On vs. Off on EPYC 9654, Xeon 8380, Xeon 8490H; results: 145.90, 146.75, 162.72, 160.66, 169.84, 159.67]

OpenFOAM 10 - Input: drivaerFastback, Medium Mesh Size - Execution Time
[Graph: Seconds, Fewer Is Better; results: 112.38, 110.95, 358.63, 361.37, 194.79, 195.48]

OpenFOAM 10 - Input: drivaerFastback, Large Mesh Size - Mesh Time
[Graph: Seconds, Fewer Is Better; results: 996.78, 867.38, 887.21, 1051.35, 1039.32, 847.91]

OpenFOAM 10 - Input: drivaerFastback, Large Mesh Size - Execution Time
[Graph: Seconds, Fewer Is Better; results: 2849.61, 2803.54, 7106.76, 7117.01, 4319.12, 4305.12]

1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.
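The SMHasher bulk-speed numbers below are hash throughput in MiB/sec over large keys. A minimal sketch of the same style of measurement, using Python's built-in BLAKE2b in place of FarmHash purely for illustration; absolute throughput will be far below the figures in this file:

```python
import hashlib
import time

def bulk_hash_throughput_mib(data: bytes, rounds: int = 32) -> float:
    """Hash `data` repeatedly and return throughput in MiB/sec."""
    start = time.perf_counter()
    for _ in range(rounds):
        hashlib.blake2b(data).digest()
    elapsed = time.perf_counter() - start
    return len(data) * rounds / (1024 * 1024) / elapsed

# One MiB of zeros, as a stand-in for SMHasher's bulk key:
print(bulk_hash_throughput_mib(b"\x00" * (1 << 20)) > 0)
```

SMHasher itself does this in C++ over many key sizes and alignments, which is why compiler flags such as -march=native matter for its results.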

SMHasher 2022-08-22 - Hash: FarmHash32 x86_64 AVX
[Graph: MiB/sec, More Is Better; AVX-512 On vs. Off on EPYC 9654, Xeon 8380, Xeon 8490H; averages: 39795.36, 36458.31, 30330.53, 33979.99, 37809.71, 42508.35]
1. (CXX) g++ options: -O3 -march=native -mno-amx-tile -flto=auto -fno-fat-lto-objects

CPU Peak Freq (Highest CPU Core Frequency) Monitor

[Graph: Megahertz; AVX-512 On vs. Off on EPYC 9654, Xeon 8380, Xeon 8490H; min / avg / max: 2547 / 3615.90 / 3772, 2336 / 3667.60 / 4199, 802 / 3114.77 / 3421, 804 / 2939.27 / 3420, 1899 / 3041.53 / 4080, 690 / 3000.31 / 4081]
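The peak-frequency monitor records the highest per-core clock seen at each sample. A sketch of that reduction, assuming per-core readings in kHz as exposed by Linux's /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq:

```python
def peak_mhz(per_core_khz):
    """Highest core frequency in MHz from one sample of per-core kHz readings,
    mirroring what the 'CPU Peak Freq' monitor records per sample."""
    return max(per_core_khz) / 1000.0

# Hypothetical sample of four cores' readings:
print(peak_mhz([2336000, 3667000, 4199000, 3421000]))  # 4199.0
```

The Min/Avg/Max rows above are then simply statistics over these per-sample peaks across the whole run.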

CPU Power Consumption Monitor

[Graph: Watts; AVX-512 Off vs. On on Xeon 8380 and Xeon 8490H; min / avg / max: 57.80 / 365.97 / 660.32, 60.60 / 451.19 / 668.12, 101.70 / 588.67 / 887.42, 102.55 / 582.55 / 894.12]
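The wattage figures are averages over each run. On Intel platforms such readings typically come from RAPL energy counters, which accumulate microjoules; average power is the energy delta divided by the interval. A sketch, assuming two readings of /sys/class/powercap/intel-rapl:0/energy_uj (the exact path is an assumption, and counter wraparound is ignored here):

```python
def avg_watts(energy_uj_start: int, energy_uj_end: int, seconds: float) -> float:
    """Average power in watts from two RAPL energy_uj readings (microjoules)."""
    return (energy_uj_end - energy_uj_start) / 1e6 / seconds

# Hypothetical 1-second sampling interval:
print(avg_watts(1_000_000, 589_670_000, 1.0))  # 588.67
```

The monitor's Min/Avg/Max values are then statistics over these per-interval readings for the duration of the benchmark.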

69 Results Shown

Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    items/sec
    ms/batch
LeelaChessZero:
  BLAS
  Eigen
Embree:
  Pathtracer ISPC - Asian Dragon
  Pathtracer ISPC - Crown
OpenVKL
Intel Open Image Denoise:
  RT.hdr_alb_nrm.3840x2160
  RTLightmap.hdr.4096x4096
OSPRay:
  gravity_spheres_volume/dim_512/ao/real_time
  gravity_spheres_volume/dim_512/scivis/real_time
  gravity_spheres_volume/dim_512/pathtracer/real_time
OSPRay Studio:
  1 - 4K - 1 - Path Tracer
  1 - 4K - 16 - Path Tracer
  1 - 4K - 32 - Path Tracer
  3 - 4K - 1 - Path Tracer
  3 - 4K - 16 - Path Tracer
  3 - 4K - 32 - Path Tracer
oneDNN
Mobile Neural Network:
  mobilenetV3
  squeezenetv1.1
  resnet-v2-50
  SqueezeNetV1.0
Cpuminer-Opt:
  scrypt
  Triple SHA-256, Onecoin
  Quad SHA-256, Pyrite
  x25x
  Deepcoin
  Skeincoin
  LBC, LBRY Credits
OpenVINO:
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    FPS
    ms
  Vehicle Detection FP16-INT8 - CPU:
    FPS
    ms
  Vehicle Detection FP16 - CPU:
    FPS
    ms
miniBUDE:
  OpenMP - BM1:
    GFInst/s
    Billion Interactions/s
  OpenMP - BM2:
    GFInst/s
    Billion Interactions/s
OpenFOAM:
  drivaerFastback, Medium Mesh Size - Mesh Time
  drivaerFastback, Medium Mesh Size - Execution Time
  drivaerFastback, Large Mesh Size - Mesh Time
  drivaerFastback, Large Mesh Size - Execution Time
SMHasher
CPU Peak Freq (Highest CPU Core Frequency) Monitor:
  Phoronix Test Suite System Monitoring:
    Megahertz
CPU Power Consumption Monitor:
  Phoronix Test Suite System Monitoring:
    Watts