extra tests

2 x AMD EPYC 9124 16-Core testing with a Supermicro H13DSH (1.5 BIOS) and astdrmfb on AlmaLinux 9.2 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2308259-NE-EXTRATEST30
Test categories represented in this result file: CPU Massive (5 tests), Creator Workloads (7), Database Test Suite (2), Encoding (2), Game Development (2), HPC - High Performance Computing (4), Machine Learning (2), Multi-Core (8), NVIDIA GPU Compute (2), Intel oneAPI (3), OpenMPI Tests (5), Renderers (2), Server (2), Server CPU Tests (4), Video Encoding (2), Common Workstation Benchmarks (2).


Run Management

Result Identifier / Test Date / Run Duration:
  EPYC 9124 2P: August 25 2023, 3 Hours, 7 Minutes
  b:            August 25 2023, 10 Hours, 34 Minutes
  c:            August 25 2023, 3 Hours, 12 Minutes
  d:            August 25 2023, 3 Hours, 8 Minutes



extra tests - OpenBenchmarking.org - Phoronix Test Suite

Processor: 2 x AMD EPYC 9124 16-Core @ 3.00GHz (32 Cores / 64 Threads)
Motherboard: Supermicro H13DSH (1.5 BIOS)
Memory: 24 x 32 GB DDR5-4800MT/s Samsung M321R4GA3BB6-CQKET
Disk: 2 x 1920GB SAMSUNG MZQL21T9HCJR-00A07
Graphics: astdrmfb
OS: AlmaLinux 9.2
Kernel: 5.14.0-284.25.1.el9_2.x86_64 (x86_64)
Compiler: GCC 11.3.1 20221121
File-System: ext4
Screen Resolution: 1024x768

System Notes:
- Transparent Huge Pages: always
- GCC configuration: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl
- Scaling Governor: acpi-cpufreq performance (Boost: Enabled)
- CPU Microcode: 0xa10113e
- OpenJDK Runtime Environment (Red_Hat-11.0.20.0.8-1) (build 11.0.20+8-LTS)
- Python 3.9.16
- Security mitigations: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected

Result Overview (runs EPYC 9124 2P / b / c / d, relative performance 100% to 103%): NCNN, Stress-NG, Apache Cassandra, Dragonflydb, SVT-AV1, Remhos, Kripke, Timed Linux Kernel Compilation, SPECFEM3D, nekRS, BRL-CAD, Intel Open Image Denoise, VVenC, Laghos, Neural Magic DeepSparse, Liquid-DSP, Embree, OSPRay, Blender

[Condensed results table omitted: side-by-side values for runs EPYC 9124 2P, b, c, and d across Laghos, Remhos, SPECFEM3D, nekRS, Embree, SVT-AV1, VVenC, Intel Open Image Denoise, OSPRay, Timed Linux Kernel Compilation, Liquid-DSP, Dragonflydb, Neural Magic DeepSparse, Stress-NG, NCNN, Blender, Apache Cassandra, Kripke, and BRL-CAD. The per-test results follow below.]

Laghos

Laghos (LAGrangian High-Order Solver) is a miniapp that solves the time-dependent Euler equations of compressible gas dynamics in a moving Lagrangian frame using unstructured high-order finite element spatial discretization and explicit high-order time-stepping. Learn more via the OpenBenchmarking.org test page.

Laghos 3.1, Test: Triple Point Problem
Major Kernels Total Rate, More Is Better
  EPYC 9124 2P: 196.44
  c:            196.34
  b:            196.03  (SE +/- 0.76, N = 3; Min: 195.22 / Avg: 196.03 / Max: 197.54)
  d:            195.40
(CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi
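The standard-error figures in these results can be reconstructed from the Min/Avg/Max detail reported for run b. With N = 3, the middle sample is recoverable as 3*Avg - Min - Max; a minimal sketch, assuming SE here is the sample standard deviation divided by sqrt(N) (an assumption that matches the reported 0.76):

```python
import math

# Run b, Laghos Triple Point Problem: Min 195.22, Avg 196.03, Max 197.54, N = 3.
# With three samples, the middle one is recoverable as 3*Avg - Min - Max.
n, lo, avg, hi = 3, 195.22, 196.03, 197.54
mid = n * avg - lo - hi
samples = [lo, mid, hi]

# Standard error of the mean, using the sample (ddof = 1) standard deviation.
# That PTS uses this exact definition is an assumption; it reproduces the 0.76.
mean = sum(samples) / n
var = sum((x - mean) ** 2 for x in samples) / (n - 1)
se = math.sqrt(var) / math.sqrt(n)

print(round(mid, 2))  # 195.33
print(round(se, 2))   # 0.76
```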

Laghos 3.1, Test: Sedov Blast Wave, ube_922_hex.mesh
Major Kernels Total Rate, More Is Better
  c:            265.03
  b:            264.90  (SE +/- 0.70, N = 3; Min: 263.8 / Avg: 264.9 / Max: 266.2)
  EPYC 9124 2P: 264.82
  d:            263.85
(CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi

Remhos

Remhos (REMap High-Order Solver) is a miniapp that solves the pure advection equations that are used to perform monotonic and conservative discontinuous field interpolation (remap) as part of the Eulerian phase in Arbitrary Lagrangian Eulerian (ALE) simulations. Learn more via the OpenBenchmarking.org test page.

Remhos 1.0, Test: Sample Remap Example
Seconds, Fewer Is Better
  b:            20.31  (SE +/- 0.20, N = 3; Min: 20.03 / Avg: 20.31 / Max: 20.69)
  EPYC 9124 2P: 20.36
  c:            20.77
  d:            20.90
(CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi

SPECFEM3D

SPECFEM3D simulates acoustic (fluid), elastic (solid), coupled acoustic/elastic, poroelastic, or seismic wave propagation in any type of conforming mesh of hexahedra. This test profile currently relies on CPU-based execution of SPECFEM3D, using a variety of its built-in examples/models for benchmarking. Learn more via the OpenBenchmarking.org test page.

SPECFEM3D 4.0, Model: Mount St. Helens
Seconds, Fewer Is Better
  d:            14.51
  b:            14.68  (SE +/- 0.06, N = 3; Min: 14.56 / Avg: 14.68 / Max: 14.75)
  EPYC 9124 2P: 14.71
  c:            14.80
(F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

SPECFEM3D 4.0, Model: Layered Halfspace
Seconds, Fewer Is Better
  EPYC 9124 2P: 38.10
  c:            38.37
  b:            38.39  (SE +/- 0.26, N = 3; Min: 37.88 / Avg: 38.39 / Max: 38.71)
  d:            38.49
(F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

SPECFEM3D 4.0, Model: Tomographic Model
Seconds, Fewer Is Better
  d:            14.88
  EPYC 9124 2P: 14.95
  b:            15.07  (SE +/- 0.10, N = 3; Min: 14.89 / Avg: 15.07 / Max: 15.23)
  c:            15.27
(F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

SPECFEM3D 4.0, Model: Homogeneous Halfspace
Seconds, Fewer Is Better
  d:            18.85
  EPYC 9124 2P: 18.92
  b:            19.24  (SE +/- 0.18, N = 7; Min: 18.45 / Avg: 19.24 / Max: 19.89)
  c:            19.39
(F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi

SPECFEM3D 4.0, Model: Water-layered Halfspace
Seconds, Fewer Is Better
  EPYC 9124 2P: 34.65
  d:            34.82
  c:            35.30
  b:            36.44  (SE +/- 0.39, N = 5; Min: 35.6 / Avg: 36.44 / Max: 37.9)
(F9X) gfortran options: -O2 -fopenmp -std=f2003 -fimplicit-none -fmax-errors=10 -pedantic -pedantic-errors -O3 -finline-functions -lmpi_usempif08 -lmpi_mpifh -lmpi
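The four runs disagree most on the Water-layered Halfspace model. One illustrative (not PTS-official) way to quantify run-to-run spread is max-minus-min over the mean, using the values reported above:

```python
# Illustrative only: relative spread (max - min over mean) across the four runs
# for SPECFEM3D's Water-layered Halfspace times, reported in seconds above.
runs = {"EPYC 9124 2P": 34.65, "d": 34.82, "c": 35.30, "b": 36.44}
values = list(runs.values())
mean = sum(values) / len(values)
spread_pct = (max(values) - min(values)) / mean * 100

print(round(spread_pct, 1))  # 5.1 (percent)
```

A spread of about 5% is small but noticeably wider than the sub-1% agreement seen on most other models in this file.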

nekRS

nekRS is an open-source Navier-Stokes solver based on the spectral element method. nekRS supports both CPU and GPU/accelerator execution, though this test profile is currently configured for CPU execution. nekRS is part of Nek5000 from the Mathematics and Computer Science (MCS) division at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large core count HPC servers and otherwise may be very time consuming on smaller systems. Learn more via the OpenBenchmarking.org test page.

nekRS 23.0, Input: Kershaw
flops/rank, More Is Better
  d:            11520900000
  b:            11495900000  (SE +/- 49802643.84, N = 3; Min: 11423400000 / Avg: 11495900000 / Max: 11591300000)
  EPYC 9124 2P: 11469600000
  c:            11206400000
(CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi

nekRS 23.0, Input: TurboPipe Periodic
flops/rank, More Is Better
  d:            7470490000
  c:            7460290000
  b:            7417740000  (SE +/- 22116600.85, N = 3; Min: 7384330000 / Avg: 7417740000 / Max: 7459550000)
  EPYC 9124 2P: 7410850000
(CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi

Embree

Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.

Embree 4.1, Binary: Pathtracer, Model: Crown
Frames Per Second, More Is Better
  EPYC 9124 2P: 37.97  (MIN: 37.52 / MAX: 39.47)
  b:            37.71  (MIN: 37.2 / MAX: 38.96)
  d:            37.68  (MIN: 37.26 / MAX: 39.44)
  c:            37.41  (MIN: 36.79 / MAX: 38.87)
Run b detail: SE +/- 0.01, N = 3; Min: 37.68 / Avg: 37.71 / Max: 37.73

Embree 4.1, Binary: Pathtracer ISPC, Model: Crown
Frames Per Second, More Is Better
  c:            39.48  (MIN: 39 / MAX: 40.94)
  d:            39.38  (MIN: 38.86 / MAX: 40.97)
  b:            39.31  (MIN: 38.66 / MAX: 40.8)
  EPYC 9124 2P: 39.12  (MIN: 38.49 / MAX: 40.34)
Run b detail: SE +/- 0.04, N = 3; Min: 39.23 / Avg: 39.31 / Max: 39.38

Embree 4.1, Binary: Pathtracer, Model: Asian Dragon
Frames Per Second, More Is Better
  c:            42.86  (MIN: 42.63 / MAX: 43.37)
  d:            42.76  (MIN: 42.48 / MAX: 43.18)
  b:            42.70  (MIN: 42.4 / MAX: 43.23)
  EPYC 9124 2P: 42.64  (MIN: 42.35 / MAX: 43.05)
Run b detail: SE +/- 0.04, N = 3; Min: 42.63 / Avg: 42.7 / Max: 42.75

Embree 4.1, Binary: Pathtracer, Model: Asian Dragon Obj
Frames Per Second, More Is Better
  EPYC 9124 2P: 38.34  (MIN: 37.99 / MAX: 38.77)
  c:            38.22  (MIN: 37.93 / MAX: 38.58)
  b:            38.21  (MIN: 37.93 / MAX: 38.75)
  d:            37.99  (MIN: 37.71 / MAX: 38.4)
Run b detail: SE +/- 0.02, N = 3; Min: 38.17 / Avg: 38.21 / Max: 38.24

Embree 4.1, Binary: Pathtracer ISPC, Model: Asian Dragon
Frames Per Second, More Is Better
  b:            48.54  (MIN: 48.1 / MAX: 49.38)
  EPYC 9124 2P: 48.51  (MIN: 48.2 / MAX: 49.13)
  d:            48.47  (MIN: 48.1 / MAX: 49.28)
  c:            48.47  (MIN: 48.15 / MAX: 49.27)
Run b detail: SE +/- 0.05, N = 3; Min: 48.47 / Avg: 48.54 / Max: 48.63

Embree 4.1, Binary: Pathtracer ISPC, Model: Asian Dragon Obj
Frames Per Second, More Is Better
  EPYC 9124 2P: 40.82  (MIN: 40.51 / MAX: 41.28)
  b:            40.81  (MIN: 40.43 / MAX: 41.85)
  c:            40.74  (MIN: 40.34 / MAX: 41.4)
  d:            40.57  (MIN: 40.28 / MAX: 41.19)
Run b detail: SE +/- 0.04, N = 3; Min: 40.75 / Avg: 40.81 / Max: 40.9
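Embree reports frames per second; for render-budget reasoning the reciprocal is often more intuitive. A small sketch converting two of the EPYC 9124 2P results above to average frame times (the result labels in the dictionary are just illustrative keys):

```python
# Frame-rate to frame-time conversion: at N frames per second, each frame takes
# 1000/N milliseconds on average. Values are the EPYC 9124 2P results above.
fps = {"Pathtracer/Crown": 37.97, "Pathtracer ISPC/Asian Dragon": 48.51}
frame_ms = {name: 1000.0 / rate for name, rate in fps.items()}

print({name: round(ms, 2) for name, ms in frame_ms.items()})
# {'Pathtracer/Crown': 26.34, 'Pathtracer ISPC/Asian Dragon': 20.61}
```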

SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 1.6, Encoder Mode: Preset 4, Input: Bosphorus 4K
Frames Per Second, More Is Better
  b:            5.092  (SE +/- 0.021, N = 3; Min: 5.05 / Avg: 5.09 / Max: 5.12)
  c:            5.071
  EPYC 9124 2P: 5.068
  d:            5.053
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1 1.6, Encoder Mode: Preset 8, Input: Bosphorus 4K
Frames Per Second, More Is Better
  d:            72.35
  b:            71.85  (SE +/- 0.21, N = 3; Min: 71.47 / Avg: 71.85 / Max: 72.18)
  c:            71.43
  EPYC 9124 2P: 70.77
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1 1.6, Encoder Mode: Preset 12, Input: Bosphorus 4K
Frames Per Second, More Is Better
  d:            199.52
  b:            194.50  (SE +/- 1.55, N = 3; Min: 191.65 / Avg: 194.5 / Max: 196.97)
  EPYC 9124 2P: 188.70
  c:            170.95
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1 1.6, Encoder Mode: Preset 13, Input: Bosphorus 4K
Frames Per Second, More Is Better
  EPYC 9124 2P: 199.36
  b:            199.28  (SE +/- 1.95, N = 3; Min: 195.38 / Avg: 199.28 / Max: 201.46)
  c:            196.68
  d:            195.83
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1 1.6, Encoder Mode: Preset 4, Input: Bosphorus 1080p
Frames Per Second, More Is Better
  c:            15.30
  EPYC 9124 2P: 15.15
  b:            14.99  (SE +/- 0.08, N = 3; Min: 14.85 / Avg: 14.99 / Max: 15.14)
  d:            14.97
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1 1.6, Encoder Mode: Preset 8, Input: Bosphorus 1080p
Frames Per Second, More Is Better
  b:            139.40  (SE +/- 0.50, N = 3; Min: 138.63 / Avg: 139.4 / Max: 140.33)
  EPYC 9124 2P: 139.31
  c:            136.78
  d:            136.07
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1 1.6, Encoder Mode: Preset 12, Input: Bosphorus 1080p
Frames Per Second, More Is Better
  b:            457.79  (SE +/- 5.17, N = 3; Min: 451.22 / Avg: 457.78 / Max: 467.99)
  EPYC 9124 2P: 443.53
  c:            426.75
  d:            419.85
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.6Encoder Mode: Preset 13 - Input: Bosphorus 1080pbdcEPYC 9124 2P120240360480600SE +/- 4.09, N = 15545.79545.20539.11522.521. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.6Encoder Mode: Preset 13 - Input: Bosphorus 1080pbdcEPYC 9124 2P100200300400500Min: 516.66 / Avg: 545.79 / Max: 565.171. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
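The "SE +/- x, N = y" annotations above are the standard error of the mean over N trial runs. A minimal sketch of that calculation, using hypothetical samples chosen to be consistent with the reported b-run Min/Avg/Max for the 4K Preset 13 result (the middle sample is an assumption, not reported data):

```python
import math

def standard_error(samples):
    """Standard error of the mean: sample (Bessel-corrected) stddev / sqrt(N)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return math.sqrt(var) / math.sqrt(n)

# Hypothetical three trial runs of an encoder benchmark (FPS)
runs = [195.38, 200.99, 201.46]
print(min(runs), round(sum(runs) / len(runs), 2), max(runs))
print(round(standard_error(runs), 2))
```

With these samples the mean lands at about 199.28 and the standard error at about 1.95, matching the form of the annotations shown on each result line.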

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9 (Frames Per Second, More Is Better)

Video Input: Bosphorus 4K - Video Preset: Fast: c 6.888, b 6.816, d 6.768, EPYC 9124 2P 6.763 (SE +/- 0.051, N = 3; b Min/Avg/Max: 6.72 / 6.82 / 6.89)

Video Input: Bosphorus 4K - Video Preset: Faster: EPYC 9124 2P 12.76, c 12.71, b 12.71, d 12.68 (SE +/- 0.01, N = 3; b Min/Avg/Max: 12.69 / 12.71 / 12.72)

Video Input: Bosphorus 1080p - Video Preset: Fast: d 19.17, c 19.08, b 19.03, EPYC 9124 2P 18.94 (SE +/- 0.03, N = 3; b Min/Avg/Max: 18.97 / 19.03 / 19.06)

Video Input: Bosphorus 1080p - Video Preset: Faster: b 34.20, c 33.69, EPYC 9124 2P 33.66, d 33.57 (SE +/- 0.11, N = 3; b Min/Avg/Max: 34.06 / 34.19 / 34.41)

1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Intel Open Image Denoise

Open Image Denoise is a denoising library for ray-tracing and part of the Intel oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

Intel Open Image Denoise 2.0 (Images / Sec, More Is Better)

Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only: c 1.37, d 1.36, b 1.36, EPYC 9124 2P 1.36 (SE +/- 0.00, N = 3; b Min/Avg/Max: 1.36 / 1.36 / 1.37)

Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only: d 1.37, c 1.37, EPYC 9124 2P 1.37, b 1.36 (SE +/- 0.00, N = 3; b Min/Avg/Max: 1.36 / 1.36 / 1.37)

Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only: d 0.65, c 0.65, EPYC 9124 2P 0.65, b 0.64 (SE +/- 0.00, N = 3; b Min/Avg/Max: 0.64 / 0.64 / 0.65)

OSPRay

Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPRay 2.12 (Items Per Second, More Is Better)

Benchmark: particle_volume/ao/real_time: c 10.86, d 10.86, b 10.86, EPYC 9124 2P 10.83 (SE +/- 0.00, N = 3; b Min/Avg/Max: 10.86 / 10.86 / 10.87)

Benchmark: particle_volume/scivis/real_time: b 10.85, EPYC 9124 2P 10.85, d 10.85, c 10.82 (SE +/- 0.01, N = 3; b Min/Avg/Max: 10.84 / 10.85 / 10.86)

Benchmark: particle_volume/pathtracer/real_time: EPYC 9124 2P 177.27, c 177.24, b 176.98, d 176.61 (SE +/- 0.36, N = 3; b Min/Avg/Max: 176.31 / 176.98 / 177.54)

Benchmark: gravity_spheres_volume/dim_512/ao/real_time: b 10.15, EPYC 9124 2P 10.10, d 10.08, c 10.06 (SE +/- 0.01, N = 3; b Min/Avg/Max: 10.13 / 10.15 / 10.16)

Benchmark: gravity_spheres_volume/dim_512/scivis/real_time: EPYC 9124 2P 9.87682, c 9.85806, b 9.82878, d 9.81318 (SE +/- 0.01904, N = 3; b Min/Avg/Max: 9.79 / 9.83 / 9.86)

Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time: EPYC 9124 2P 11.93, c 11.90, d 11.90, b 11.89 (SE +/- 0.00, N = 3; b Min/Avg/Max: 11.88 / 11.89 / 11.89)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig that builds all possible kernel modules. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1 (Seconds, Fewer Is Better)

Build: defconfig: b 34.37, EPYC 9124 2P 35.17, c 35.28, d 35.28 (SE +/- 0.49, N = 3; b Min/Avg/Max: 33.86 / 34.37 / 35.34)

Build: allmodconfig

The test quit with a non-zero exit status on all four runs (EPYC 9124 2P, b, c, d).

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 1.6 (samples/s, More Is Better)

Threads: 1 - Buffer Length: 256 - Filter Length: 32: EPYC 9124 2P 35288000, b 35213667, d 35195000, c 35183000 (SE +/- 26660.42, N = 3; b Min/Avg/Max: 35170000 / 35213666.67 / 35262000)

Threads: 1 - Buffer Length: 256 - Filter Length: 57: b 52906667, EPYC 9124 2P 52850000, c 52837000, d 52765000 (SE +/- 24037.01, N = 3; b Min/Avg/Max: 52860000 / 52906666.67 / 52940000)

Threads: 2 - Buffer Length: 256 - Filter Length: 32: c 68863000, d 68834000, EPYC 9124 2P 68824000, b 68813667 (SE +/- 21231.53, N = 3; b Min/Avg/Max: 68782000 / 68813666.67 / 68854000)

Threads: 2 - Buffer Length: 256 - Filter Length: 57: EPYC 9124 2P 105650000, d 105630000, c 105450000, b 102062867 (SE +/- 2143861.14, N = 15; b Min/Avg/Max: 81690000 / 102062866.67 / 105710000)

Threads: 4 - Buffer Length: 256 - Filter Length: 32: c 137280000, b 137223333, EPYC 9124 2P 136890000, d 136500000 (SE +/- 301182.85, N = 3; b Min/Avg/Max: 136630000 / 137223333.33 / 137610000)

Threads: 4 - Buffer Length: 256 - Filter Length: 57: c 178700000, EPYC 9124 2P 178110000, b 174952667, d 174280000 (SE +/- 2041803.93, N = 15; b Min/Avg/Max: 151510000 / 174952666.67 / 185150000)

Threads: 8 - Buffer Length: 256 - Filter Length: 32: EPYC 9124 2P 275810000, c 274970000, b 273636667, d 271730000 (SE +/- 1505958.54, N = 3; b Min/Avg/Max: 270690000 / 273636666.67 / 275650000)

Threads: 8 - Buffer Length: 256 - Filter Length: 57: EPYC 9124 2P 338790000, c 334440000, b 331220000, d 331200000 (SE +/- 2830995.11, N = 3; b Min/Avg/Max: 325560000 / 331220000 / 334180000)

Threads: 1 - Buffer Length: 256 - Filter Length: 512: c 12666000, d 12652000, EPYC 9124 2P 12637000, b 12528333 (SE +/- 143667.83, N = 3; b Min/Avg/Max: 12241000 / 12528333.33 / 12673000)

Threads: 16 - Buffer Length: 256 - Filter Length: 32: EPYC 9124 2P 550610000, d 549390000, b 549303333, c 547680000 (SE +/- 1392172.56, N = 3; b Min/Avg/Max: 546520000 / 549303333.33 / 550760000)

Threads: 16 - Buffer Length: 256 - Filter Length: 57: d 650410000, b 638966667, c 634300000, EPYC 9124 2P 608470000 (SE +/- 2539254.05, N = 3; b Min/Avg/Max: 634310000 / 638966666.67 / 643050000)

Threads: 2 - Buffer Length: 256 - Filter Length: 512: c 25070000, b 25040333, d 24900000, EPYC 9124 2P 24680000 (SE +/- 138974.02, N = 3; b Min/Avg/Max: 24763000 / 25040333.33 / 25195000)

Threads: 32 - Buffer Length: 256 - Filter Length: 32: EPYC 9124 2P 1081000000, c 1080000000, b 1078866667, d 1073800000 (SE +/- 1146492.23, N = 3; b Min/Avg/Max: 1076600000 / 1078866666.67 / 1080300000)

Threads: 32 - Buffer Length: 256 - Filter Length: 57: d 1234500000, EPYC 9124 2P 1205600000, b 1203966667, c 1180800000 (SE +/- 10114401.17, N = 3; b Min/Avg/Max: 1192200000 / 1203966666.67 / 1224100000)

Threads: 4 - Buffer Length: 256 - Filter Length: 512: c 49997000, d 49884000, EPYC 9124 2P 49627000, b 49195333 (SE +/- 220678.75, N = 3; b Min/Avg/Max: 48764000 / 49195333.33 / 49492000)

Threads: 64 - Buffer Length: 256 - Filter Length: 32: EPYC 9124 2P 2056200000, b 2049533333, d 2045600000, c 2045000000 (SE +/- 1319511.69, N = 3; b Min/Avg/Max: 2046900000 / 2049533333.33 / 2051000000)

Threads: 64 - Buffer Length: 256 - Filter Length: 57: c 1832000000, EPYC 9124 2P 1809000000, b 1807366667, d 1778600000 (SE +/- 7859884.08, N = 3; b Min/Avg/Max: 1796700000 / 1807366666.67 / 1822700000)

Threads: 8 - Buffer Length: 256 - Filter Length: 512: c 99528000, b 98769000, EPYC 9124 2P 98607000, d 96449000 (SE +/- 197811.36, N = 3; b Min/Avg/Max: 98375000 / 98769000 / 98997000)

Threads: 16 - Buffer Length: 256 - Filter Length: 512: b 198006667, EPYC 9124 2P 197790000, c 197330000, d 195510000 (SE +/- 401718.53, N = 3; b Min/Avg/Max: 197500000 / 198006666.67 / 198800000)

Threads: 32 - Buffer Length: 256 - Filter Length: 512: b 388873333, EPYC 9124 2P 388330000, d 387460000, c 386130000 (SE +/- 468946.81, N = 3; b Min/Avg/Max: 387940000 / 388873333.33 / 389420000)

Threads: 64 - Buffer Length: 256 - Filter Length: 512: d 504660000, b 503286667, c 501110000, EPYC 9124 2P 501070000 (SE +/- 219266.45, N = 3; b Min/Avg/Max: 502850000 / 503286666.67 / 503540000)

1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
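One way to read the Liquid-DSP numbers is thread-scaling efficiency: throughput at N threads divided by N times the single-thread throughput. A small sketch using the EPYC 9124 2P results reported above for Buffer Length 256, Filter Length 32 (the 64-thread run spans SMT siblings on this 32-core 2P system, so some drop-off past 32 threads is expected):

```python
# Reported EPYC 9124 2P samples/s for Buffer Length 256, Filter Length 32
throughput = {
    1: 35_288_000,
    2: 68_824_000,
    4: 136_890_000,
    8: 275_810_000,
    16: 550_610_000,
    32: 1_081_000_000,
    64: 2_056_200_000,
}

def scaling_efficiency(threads):
    """Throughput at N threads relative to ideal linear scaling from 1 thread."""
    return throughput[threads] / (threads * throughput[1])

for n in sorted(throughput):
    print(f"{n:>2} threads: {scaling_efficiency(n):.2%} of linear")
```

Scaling stays near 96% of linear through 32 threads and dips only once the benchmark moves onto SMT threads, which is consistent with a compute-bound DSP workload.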

Dragonflydb

Dragonfly is an open-source in-memory database server positioned as a "modern Redis replacement," aiming to be the fastest memory store while remaining compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, memtier_benchmark is used: a NoSQL Redis/Memcached traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Dragonflydb 1.6.2 (Ops/sec, More Is Better)

Clients Per Thread: 10 - Set To Get Ratio: 1:10: EPYC 9124 2P 12095393.84, c 12032750.26, d 11686474.26, b 11029528.70 (SE +/- 28474.68, N = 3; b Min/Avg/Max: 10973004.92 / 11029528.70 / 11063808.75)

Clients Per Thread: 20 - Set To Get Ratio: 1:10: d 15179164.86, c 14637785.39, b 14116849.90, EPYC 9124 2P 13016172.38 (SE +/- 184904.56, N = 15; b Min/Avg/Max: 12996964.32 / 14116849.90 / 15191181.01)

Clients Per Thread: 50 - Set To Get Ratio: 1:10: EPYC 9124 2P 16386566.17, b 16274465.59, c 16263996.16, d 15253621.53 (SE +/- 323344.22, N = 15; b Min/Avg/Max: 14407263.10 / 16274465.59 / 18404376.80)

1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
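The wide Min/Max spread on the N = 15 Dragonfly runs is the kind of variance the "Do Not Show Noisy Results" view option filters out. A sketch of flagging a result by its relative spread, using the b-run Min/Avg/Max figures reported above (the 10% cutoff is an arbitrary illustrative choice, not the viewer's actual threshold):

```python
def relative_spread(lo, avg, hi):
    """(Max - Min) / Avg: a crude noisiness measure for a benchmark result."""
    return (hi - lo) / avg

# Reported b-run Min/Avg/Max for two Dragonfly configurations above
quiet = relative_spread(10973004.92, 11029528.70, 11063808.75)  # clients 10
noisy = relative_spread(14407263.10, 16274465.59, 18404376.80)  # clients 50

THRESHOLD = 0.10  # flag anything with more than 10% spread (arbitrary cutoff)
for name, spread in [("clients=10", quiet), ("clients=50", noisy)]:
    flag = "NOISY" if spread > THRESHOLD else "ok"
    print(f"{name}: spread {spread:.1%} -> {flag}")
```

By this measure the 10-client run varies under 1% while the 50-client run swings by roughly a quarter of its average, which is why the latter needed 15 trials.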

Clients Per Thread: 60 - Set To Get Ratio: 1:10

The test run did not produce a result on any of the four runs (EPYC 9124 2P, b, c, d). E: Connection error: Connection reset by peer

Dragonflydb 1.6.2 (Ops/sec, More Is Better)

Clients Per Thread: 10 - Set To Get Ratio: 1:100: c 11871319.97, d 11860591.65, EPYC 9124 2P 11365230.14, b 11311943.10 (SE +/- 109712.47, N = 3; b Min/Avg/Max: 11092643.38 / 11311943.10 / 11428012.07)

Clients Per Thread: 20 - Set To Get Ratio: 1:100: EPYC 9124 2P 15292696.20, b 14344471.84, d 14250093.71, c 13357787.11 (SE +/- 164469.07, N = 15; b Min/Avg/Max: 13404513.91 / 14344471.84 / 15359531.26)

Clients Per Thread: 50 - Set To Get Ratio: 1:100: d 17583489.38, b 15714147.26, EPYC 9124 2P 15294187.64, c 14906385.78 (SE +/- 219976.25, N = 15; b Min/Avg/Max: 13987721.57 / 15714147.26 / 17177573.28)

1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Clients Per Thread: 60 - Set To Get Ratio: 1:100

The test run did not produce a result on any of the four runs (EPYC 9124 2P, b, c, d). E: Connection error: Connection reset by peer

Neural Magic DeepSparse

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-StreamcdbEPYC 9124 2P510152025SE +/- 0.06, N = 319.8319.8119.8019.79
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-StreamcdbEPYC 9124 2P510152025Min: 19.72 / Avg: 19.8 / Max: 19.91

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-StreamcEPYC 9124 2Pbd2004006008001000SE +/- 0.18, N = 3792.23793.78793.82795.85
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-StreamcEPYC 9124 2Pbd140280420560700Min: 793.47 / Avg: 793.82 / Max: 794.1

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-StreamEPYC 9124 2Pcbd48121620SE +/- 0.05, N = 314.0914.0314.0013.92
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-StreamEPYC 9124 2Pcbd48121620Min: 13.9 / Avg: 14 / Max: 14.05

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-StreamEPYC 9124 2Pcbd1632486480SE +/- 0.25, N = 370.9671.2671.4471.84
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-StreamEPYC 9124 2Pcbd1428425670Min: 71.18 / Avg: 71.44 / Max: 71.93

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-StreamdbEPYC 9124 2Pc160320480640800SE +/- 1.99, N = 3757.48757.11754.64753.74
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-StreamdbEPYC 9124 2Pc130260390520650Min: 754.57 / Avg: 757.11 / Max: 761.04

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-StreamdbEPYC 9124 2Pc510152025SE +/- 0.06, N = 321.0921.1021.1721.19
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-StreamdbEPYC 9124 2Pc510152025Min: 20.99 / Avg: 21.1 / Max: 21.17

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-StreamEPYC 9124 2Pcbd4080120160200SE +/- 0.98, N = 3204.81204.53202.89202.50
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-StreamEPYC 9124 2Pcbd4080120160200Min: 201.69 / Avg: 202.89 / Max: 204.84

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-StreamEPYC 9124 2Pcbd1.10962.21923.32884.43845.548SE +/- 0.0237, N = 34.87724.88314.92294.9317
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-StreamEPYC 9124 2Pcbd246810Min: 4.88 / Avg: 4.92 / Max: 4.95

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-StreamcEPYC 9124 2Pdb80160240320400SE +/- 0.15, N = 3356.91356.33355.89355.58
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-StreamcEPYC 9124 2Pdb60120180240300Min: 355.39 / Avg: 355.58 / Max: 355.88

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-StreamcEPYC 9124 2Pdb1020304050SE +/- 0.01, N = 344.7644.8644.9144.93
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-StreamcEPYC 9124 2Pdb918273645Min: 44.9 / Avg: 44.93 / Max: 44.95

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-StreamcdbEPYC 9124 2P306090120150SE +/- 0.26, N = 3140.82139.79139.74138.62
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-StreamcdbEPYC 9124 2P306090120150Min: 139.38 / Avg: 139.74 / Max: 140.23

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-StreamcdbEPYC 9124 2P246810SE +/- 0.0129, N = 37.09297.14567.14847.2056
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-StreamcdbEPYC 9124 2P3691215Min: 7.12 / Avg: 7.15 / Max: 7.17

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-StreamcEPYC 9124 2Pbd20406080100SE +/- 0.09, N = 3106.96106.79106.68106.15
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-StreamcEPYC 9124 2Pbd20406080100Min: 106.5 / Avg: 106.68 / Max: 106.8

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-StreamcEPYC 9124 2Pbd306090120150SE +/- 0.18, N = 3149.27149.55149.71150.28
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-StreamcEPYC 9124 2Pbd306090120150Min: 149.53 / Avg: 149.71 / Max: 150.06

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-StreambcEPYC 9124 2Pd1122334455SE +/- 0.15, N = 348.5748.4448.3248.25
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-StreambcEPYC 9124 2Pd1020304050Min: 48.39 / Avg: 48.57 / Max: 48.87

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-StreambcEPYC 9124 2Pd510152025SE +/- 0.06, N = 320.5820.6420.6920.71
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-StreambcEPYC 9124 2Pd510152025Min: 20.45 / Avg: 20.58 / Max: 20.66

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-StreamEPYC 9124 2Pbcd60120180240300SE +/- 0.30, N = 3258.67257.89257.75257.42
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-StreamEPYC 9124 2Pbcd50100150200250Min: 257.29 / Avg: 257.89 / Max: 258.2

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-StreamEPYC 9124 2Pbcd1428425670SE +/- 0.05, N = 361.7961.9862.0262.10
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-StreamEPYC 9124 2Pbcd1224364860Min: 61.92 / Avg: 61.98 / Max: 62.09

OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: ResNet-50, Baseline - Scenario: Synchronous Single-StreambEPYC 9124 2Pdc306090120150SE +/- 0.14, N = 3141.08140.82140.46140.37
OpenBenchmarking.orgitems/sec, More Is BetterNeural Magic DeepSparse 1.5Model: ResNet-50, Baseline - Scenario: Synchronous Single-StreambEPYC 9124 2Pdc306090120150Min: 140.86 / Avg: 141.08 / Max: 141.34

OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: ResNet-50, Baseline - Scenario: Synchronous Single-StreambEPYC 9124 2Pdc246810SE +/- 0.0070, N = 37.07897.09187.11027.1143
OpenBenchmarking.orgms/batch, Fewer Is BetterNeural Magic DeepSparse 1.5Model: ResNet-50, Baseline - Scenario: Synchronous Single-StreambEPYC 9124 2Pdc3691215Min: 7.07 / Avg: 7.08 / Max: 7.09

Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): c: 2675.01 | EPYC 9124 2P: 2673.22 | b: 2670.21 | d: 2661.23 (SE +/- 2.37, N = 3; Min: 2666.57 / Avg: 2670.21 / Max: 2674.65)

Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): c: 5.9629 | EPYC 9124 2P: 5.9715 | b: 5.9740 | d: 5.9947 (SE +/- 0.0057, N = 3; Min: 5.96 / Avg: 5.97 / Max: 5.98)

Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, more is better): b: 913.33 | EPYC 9124 2P: 904.42 | c: 899.60 | d: 890.94 (SE +/- 1.35, N = 3; Min: 910.96 / Avg: 913.33 / Max: 915.61)

Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): b: 1.0913 | EPYC 9124 2P: 1.1021 | c: 1.1081 | d: 1.1189 (SE +/- 0.0016, N = 3; Min: 1.09 / Avg: 1.09 / Max: 1.09)

Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, more is better): EPYC 9124 2P: 113.92 | d: 113.66 | b: 113.59 | c: 113.37 (SE +/- 0.06, N = 3; Min: 113.51 / Avg: 113.59 / Max: 113.71)

Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): EPYC 9124 2P: 140.14 | b: 140.53 | d: 140.58 | c: 140.88 (SE +/- 0.10, N = 3; Min: 140.35 / Avg: 140.53 / Max: 140.71)

Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec, more is better): d: 67.77 | c: 67.75 | b: 67.73 | EPYC 9124 2P: 67.69 (SE +/- 0.02, N = 3; Min: 67.7 / Avg: 67.73 / Max: 67.77)

Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): d: 14.75 | c: 14.75 | b: 14.75 | EPYC 9124 2P: 14.76 (SE +/- 0.00, N = 3; Min: 14.74 / Avg: 14.75 / Max: 14.76)

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (items/sec, more is better): EPYC 9124 2P: 24.86 | b: 24.79 | c: 24.77 | d: 24.72 (SE +/- 0.04, N = 3; Min: 24.73 / Avg: 24.79 / Max: 24.87)

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): c: 640.09 | EPYC 9124 2P: 640.63 | b: 641.24 | d: 641.61 (SE +/- 0.58, N = 3; Min: 640.11 / Avg: 641.24 / Max: 642.04)

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream (items/sec, more is better): EPYC 9124 2P: 17.07 | c: 17.04 | b: 17.03 | d: 17.00 (SE +/- 0.04, N = 3; Min: 16.95 / Avg: 17.03 / Max: 17.09)

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): EPYC 9124 2P: 58.59 | c: 58.66 | b: 58.72 | d: 58.83 (SE +/- 0.15, N = 3; Min: 58.5 / Avg: 58.72 / Max: 59)

Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, more is better): d: 258.56 | c: 258.21 | EPYC 9124 2P: 257.77 | b: 257.69 (SE +/- 0.66, N = 3; Min: 256.99 / Avg: 257.69 / Max: 259.02)

Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): d: 61.81 | c: 61.90 | EPYC 9124 2P: 61.99 | b: 62.02 (SE +/- 0.16, N = 3; Min: 61.71 / Avg: 62.02 / Max: 62.2)

Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, more is better): b: 140.73 | EPYC 9124 2P: 140.68 | d: 140.20 | c: 140.12 (SE +/- 0.05, N = 3; Min: 140.67 / Avg: 140.73 / Max: 140.84)

Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): b: 7.0963 | EPYC 9124 2P: 7.0989 | d: 7.1228 | c: 7.1274 (SE +/- 0.0029, N = 3; Min: 7.09 / Avg: 7.1 / Max: 7.1)

Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): EPYC 9124 2P: 115.00 | c: 114.53 | b: 114.48 | d: 114.28 (SE +/- 0.13, N = 3; Min: 114.34 / Avg: 114.48 / Max: 114.74)

Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): EPYC 9124 2P: 138.82 | c: 139.34 | b: 139.44 | d: 139.61 (SE +/- 0.13, N = 3; Min: 139.19 / Avg: 139.44 / Max: 139.63)

Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, more is better): b: 68.25 | EPYC 9124 2P: 68.22 | c: 68.19 | d: 68.16 (SE +/- 0.01, N = 3; Min: 68.23 / Avg: 68.25 / Max: 68.27)

Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): b: 14.64 | EPYC 9124 2P: 14.65 | c: 14.66 | d: 14.66 (SE +/- 0.00, N = 3; Min: 14.64 / Avg: 14.64 / Max: 14.65)

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, more is better): c: 168.81 | EPYC 9124 2P: 168.76 | b: 168.75 | d: 168.72 (SE +/- 0.19, N = 3; Min: 168.48 / Avg: 168.75 / Max: 169.1)

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): d: 94.62 | c: 94.63 | b: 94.66 | EPYC 9124 2P: 94.68 (SE +/- 0.10, N = 3; Min: 94.49 / Avg: 94.66 / Max: 94.82)

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec, more is better): c: 99.37 | b: 99.28 | d: 99.12 | EPYC 9124 2P: 98.96 (SE +/- 0.23, N = 3; Min: 98.95 / Avg: 99.28 / Max: 99.73)

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): c: 10.06 | b: 10.07 | d: 10.08 | EPYC 9124 2P: 10.10 (SE +/- 0.02, N = 3; Min: 10.02 / Avg: 10.07 / Max: 10.1)

Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec, more is better): d: 36.38 | c: 36.35 | b: 36.29 | EPYC 9124 2P: 36.27 (SE +/- 0.02, N = 3; Min: 36.25 / Avg: 36.29 / Max: 36.33)

Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): c: 436.77 | d: 436.97 | b: 437.91 | EPYC 9124 2P: 438.19 (SE +/- 0.15, N = 3; Min: 437.67 / Avg: 437.91 / Max: 438.18)

Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (items/sec, more is better): c: 24.60 | EPYC 9124 2P: 24.54 | d: 24.54 | b: 24.53 (SE +/- 0.01, N = 3; Min: 24.51 / Avg: 24.53 / Max: 24.55)

Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): c: 40.63 | EPYC 9124 2P: 40.73 | d: 40.73 | b: 40.75 (SE +/- 0.02, N = 3; Min: 40.71 / Avg: 40.75 / Max: 40.78)

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): b: 374.82 | EPYC 9124 2P: 374.03 | d: 373.03 | c: 372.96 (SE +/- 0.27, N = 3; Min: 374.27 / Avg: 374.82 / Max: 375.11)

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): b: 42.63 | EPYC 9124 2P: 42.72 | d: 42.83 | c: 42.85 (SE +/- 0.03, N = 3; Min: 42.61 / Avg: 42.63 / Max: 42.68)

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream (items/sec, more is better): c: 92.51 | EPYC 9124 2P: 91.53 | b: 91.26 | d: 90.08 (SE +/- 0.31, N = 3; Min: 90.93 / Avg: 91.26 / Max: 91.87)

Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): c: 10.80 | EPYC 9124 2P: 10.91 | b: 10.94 | d: 11.09 (SE +/- 0.04, N = 3; Min: 10.87 / Avg: 10.94 / Max: 10.98)

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): EPYC 9124 2P: 85.61 | d: 85.49 | c: 85.45 | b: 85.30 (SE +/- 0.04, N = 3; Min: 85.23 / Avg: 85.3 / Max: 85.35)

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): EPYC 9124 2P: 186.60 | d: 186.65 | c: 186.82 | b: 186.93 (SE +/- 0.01, N = 3; Min: 186.91 / Avg: 186.93 / Max: 186.94)

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (items/sec, more is better): c: 52.32 | b: 52.27 | EPYC 9124 2P: 51.76 | d: 50.91 (SE +/- 0.07, N = 3; Min: 52.14 / Avg: 52.27 / Max: 52.37)

Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): c: 19.10 | b: 19.12 | EPYC 9124 2P: 19.31 | d: 19.64 (SE +/- 0.03, N = 3; Min: 19.09 / Avg: 19.12 / Max: 19.17)

Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): c: 19.93 | EPYC 9124 2P: 19.93 | d: 19.88 | b: 19.85 (SE +/- 0.02, N = 3; Min: 19.81 / Avg: 19.85 / Max: 19.88)

Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): b: 793.82 | d: 794.04 | c: 794.08 | EPYC 9124 2P: 794.19 (SE +/- 0.67, N = 3; Min: 792.48 / Avg: 793.82 / Max: 794.54)

Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, more is better): EPYC 9124 2P: 14.12 | c: 14.03 | d: 14.01 | b: 14.00 (SE +/- 0.04, N = 3; Min: 13.93 / Avg: 14 / Max: 14.06)

Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): EPYC 9124 2P: 70.83 | c: 71.28 | d: 71.35 | b: 71.40 (SE +/- 0.19, N = 3; Min: 71.14 / Avg: 71.4 / Max: 71.78)
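For the Synchronous Single-Stream scenarios, the items/sec and ms/batch tables are two views of the same measurement: at batch size 1, throughput is roughly 1000 divided by the per-batch latency in milliseconds. A minimal cross-check of that relationship, using the ResNet-50 Baseline single-stream values copied from the tables above:

```python
# Sanity check: single-stream throughput (items/sec) should be
# approximately 1000 / latency (ms/batch) at batch size 1.
# Values copied from the ResNet-50 Baseline single-stream results above.
latency_ms = {"b": 7.0789, "EPYC 9124 2P": 7.0918, "d": 7.1102, "c": 7.1143}
throughput = {"b": 141.08, "EPYC 9124 2P": 140.82, "d": 140.46, "c": 140.37}

for system, ms in latency_ms.items():
    derived = 1000.0 / ms  # items per second implied by the latency
    # Agreement within ~0.5% confirms the two tables describe the same runs
    assert abs(derived - throughput[system]) / throughput[system] < 0.005
```

The small residual gap (about 0.1%) is expected, since the reported throughput and latency are averaged independently over the runs.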

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.
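Each result below carries an "SE +/- s, N = n" annotation plus Min/Avg/Max run details: the standard error of the mean over the N runs. A minimal sketch of how such a value falls out of the run data, under the assumption that the Min/Avg/Max figures describe the same N = 3 sample as the SE (here using the Hash result's run details):

```python
import math

# Min / Avg / Max for the three Hash runs, as reported below
lo, avg, hi = 7215814.73, 7218980.87, 7221801.61
mid = 3 * avg - lo - hi  # middle run implied by the arithmetic mean

runs = [lo, mid, hi]
variance = sum((x - avg) ** 2 for x in runs) / (len(runs) - 1)  # sample variance
se = math.sqrt(variance) / math.sqrt(len(runs))  # stddev of the mean
print(round(se, 2))  # ~1736.87, matching the reported "SE +/- 1736.87"
```

That the reconstructed SE matches the reported one suggests the Min/Avg/Max lines and the SE annotation do describe the same sample.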

All Stress-NG results: (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG 0.16.04 - Test: Hash (Bogo Ops/s, more is better): b: 7218980.87 | d: 7218607.12 | c: 7218080.70 | EPYC 9124 2P: 7217477.20 (SE +/- 1736.87, N = 3; Min: 7215814.73 / Avg: 7218980.87 / Max: 7221801.61)

Stress-NG 0.16.04 - Test: MMAP (Bogo Ops/s, more is better): b: 1143.51 | EPYC 9124 2P: 1142.01 | d: 1131.55 | c: 1128.46 (SE +/- 10.51, N = 3; Min: 1128.73 / Avg: 1143.51 / Max: 1163.84)

Stress-NG 0.16.04 - Test: NUMA (Bogo Ops/s, more is better): c: 18.45 | d: 18.35 | b: 17.65 | EPYC 9124 2P: 8.89 (SE +/- 0.81, N = 12; Min: 8.78 / Avg: 17.65 / Max: 18.66)

Stress-NG 0.16.04 - Test: Pipe (Bogo Ops/s, more is better): b: 21069388.89 | c: 20689455.39 | EPYC 9124 2P: 20487917.54 | d: 20249488.61 (SE +/- 263402.50, N = 15; Min: 19944074.59 / Avg: 21069388.89 / Max: 22688113.29)

Stress-NG 0.16.04 - Test: Poll (Bogo Ops/s, more is better): d: 4337787.72 | c: 4335334.87 | EPYC 9124 2P: 4334100.75 | b: 4331008.99 (SE +/- 1071.05, N = 3; Min: 4328970.97 / Avg: 4331008.99 / Max: 4332599.23)

Stress-NG 0.16.04 - Test: Zlib (Bogo Ops/s, more is better): d: 2944.54 | c: 2940.27 | b: 2940.06 | EPYC 9124 2P: 2934.24 (SE +/- 1.86, N = 3; Min: 2937.85 / Avg: 2940.06 / Max: 2943.75)

Stress-NG 0.16.04 - Test: Futex (Bogo Ops/s, more is better): EPYC 9124 2P: 4322676.82 | b: 4311036.05 | d: 4022496.07 | c: 3946891.51 (SE +/- 13871.52, N = 3; Min: 4285567.36 / Avg: 4311036.05 / Max: 4333297.56)

Stress-NG 0.16.04 - Test: MEMFD (Bogo Ops/s, more is better): EPYC 9124 2P: 920.90 | c: 920.29 | b: 913.40 | d: 912.62 (SE +/- 2.16, N = 3; Min: 910.63 / Avg: 913.4 / Max: 917.66)

Stress-NG 0.16.04 - Test: Mutex (Bogo Ops/s, more is better): d: 438280.16 | EPYC 9124 2P: 438082.30 | b: 437829.05 | c: 436953.86 (SE +/- 421.09, N = 3; Min: 437132.91 / Avg: 437829.05 / Max: 438587.6)

Stress-NG 0.16.04 - Test: Atomic (Bogo Ops/s, more is better): d: 236.99 | c: 235.56 | EPYC 9124 2P: 235.35 | b: 234.81 (SE +/- 0.55, N = 3; Min: 234.03 / Avg: 234.81 / Max: 235.87)

Stress-NG 0.16.04 - Test: Crypto (Bogo Ops/s, more is better): d: 107643.49 | c: 79781.66 | EPYC 9124 2P: 79760.82 | b: 79687.85 (SE +/- 23.57, N = 3; Min: 79640.71 / Avg: 79687.85 / Max: 79712.09)

Stress-NG 0.16.04 - Test: Malloc (Bogo Ops/s, more is better): d: 137476652.93 | c: 137440336.06 | b: 137065291.04 | EPYC 9124 2P: 136350929.27 (SE +/- 86125.58, N = 3; Min: 136895269.17 / Avg: 137065291.04 / Max: 137174224.13)

Stress-NG 0.16.04 - Test: Cloning (Bogo Ops/s, more is better): EPYC 9124 2P: 1173.14 | d: 1170.31 | c: 1168.88 | b: 1143.64 (SE +/- 10.16, N = 3; Min: 1123.32 / Avg: 1143.64 / Max: 1153.82)

Stress-NG 0.16.04 - Test: Forking (Bogo Ops/s, more is better): b: 1011.36 | EPYC 9124 2P: 1008.85 | c: 1008.05 | d: 1007.46 (SE +/- 5.44, N = 3; Min: 1002.93 / Avg: 1011.36 / Max: 1021.53)

Test: Pthread

EPYC 9124 2P, c: The test quit with a non-zero exit status.

Stress-NG 0.16.04 - Test: AVL Tree (Bogo Ops/s, more is better): b: 413.89 | EPYC 9124 2P: 410.95 | d: 410.89 | c: 407.35 (SE +/- 5.37, N = 3; Min: 406.62 / Avg: 413.89 / Max: 424.37)

Test: IO_uring

EPYC 9124 2P, b, c, d: The test run did not produce a result.

Stress-NG 0.16.04 - Test: SENDFILE (Bogo Ops/s, more is better): b: 859483.20 | d: 857324.89 | c: 857316.41 | EPYC 9124 2P: 852899.85 (SE +/- 459.45, N = 3; Min: 858564.59 / Avg: 859483.2 / Max: 859962.27)

Stress-NG 0.16.04 - Test: CPU Cache (Bogo Ops/s, more is better): c: 786282.11 | d: 776708.72 | EPYC 9124 2P: 774009.01 | b: 763301.01 (SE +/- 13475.92, N = 15; Min: 682738.81 / Avg: 763301.01 / Max: 873373.82)

Stress-NG 0.16.04 - Test: CPU Stress (Bogo Ops/s, more is better): EPYC 9124 2P: 89602.51 | b: 89306.28 | d: 88466.54 | c: 88082.89 (SE +/- 346.97, N = 3; Min: 88697.25 / Avg: 89306.28 / Max: 89898.86)

Stress-NG 0.16.04 - Test: Semaphores (Bogo Ops/s, more is better): c: 94057409.46 | d: 92019395.13 | EPYC 9124 2P: 81543324.92 | b: 78539814.90 (SE +/- 972856.35, N = 15; Min: 72689061.96 / Avg: 78539814.9 / Max: 84826582.49)

Stress-NG 0.16.04 - Test: Matrix Math (Bogo Ops/s, more is better): b: 174068.70 | d: 173878.70 | EPYC 9124 2P: 173872.46 | c: 173848.98 (SE +/- 1.07, N = 3; Min: 174066.65 / Avg: 174068.7 / Max: 174070.23)

Stress-NG 0.16.04 - Test: Vector Math (Bogo Ops/s, more is better): b: 234493.07 | EPYC 9124 2P: 234249.81 | d: 234239.60 | c: 234217.54 (SE +/- 5.36, N = 3; Min: 234483 / Avg: 234493.07 / Max: 234501.28)

Stress-NG 0.16.04 - Test: AVX-512 VNNI (Bogo Ops/s, more is better): EPYC 9124 2P: 3675272.51 | c: 3674818.59 | b: 3671132.65 | d: 3667964.15 (SE +/- 4464.99, N = 3; Min: 3662212.15 / Avg: 3671132.65 / Max: 3675949.11)

Stress-NG 0.16.04 - Test: Function Call (Bogo Ops/s, more is better): EPYC 9124 2P: 27757.95 | d: 27731.16 | b: 27681.20 | c: 27539.19 (SE +/- 27.79, N = 3; Min: 27633.19 / Avg: 27681.2 / Max: 27729.44)

Stress-NG 0.16.04 - Test: x86_64 RdRand (Bogo Ops/s, more is better): d: 12013602.06 | b: 12013496.37 | EPYC 9124 2P: 12012881.51 | c: 12011361.11 (SE +/- 82.64, N = 3; Min: 12013338.93 / Avg: 12013496.37 / Max: 12013618.65)

Stress-NG 0.16.04 - Test: Floating Point (Bogo Ops/s, more is better): d: 11939.41 | b: 11937.72 | EPYC 9124 2P: 11911.03 | c: 11904.73 (SE +/- 1.29, N = 3; Min: 11935.65 / Avg: 11937.72 / Max: 11940.09)

Stress-NG 0.16.04 - Test: Matrix 3D Math (Bogo Ops/s, more is better): EPYC 9124 2P: 11756.35 | c: 11673.11 | b: 11593.68 | d: 10869.49 (SE +/- 31.82, N = 3; Min: 11547.21 / Avg: 11593.68 / Max: 11654.58)

Stress-NG 0.16.04 - Test: Memory Copying (Bogo Ops/s, more is better): EPYC 9124 2P: 13430.33 | d: 13428.64 | b: 13421.50 | c: 13405.80 (SE +/- 2.68, N = 3; Min: 13417.94 / Avg: 13421.5 / Max: 13426.75)

Stress-NG 0.16.04 - Test: Vector Shuffle (Bogo Ops/s, more is better): c: 25080.33 | b: 25048.79 | d: 25006.73 | EPYC 9124 2P: 24987.46 (SE +/- 29.34, N = 3; Min: 24990.12 / Avg: 25048.79 / Max: 25078.9)

Stress-NG 0.16.04 - Test: Mixed Scheduler (Bogo Ops/s, more is better): c: 35991.41 | b: 35867.33 | d: 35640.36 | EPYC 9124 2P: 35532.56 (SE +/- 132.19, N = 3; Min: 35659.45 / Avg: 35867.33 / Max: 36112.73)

Stress-NG 0.16.04 - Test: Socket Activity (Bogo Ops/s, more is better): c: 9611.34 | EPYC 9124 2P: 9605.29 | d: 9504.38 | b: 3050.45 (SE +/- 941.54, N = 15; Min: 3.16 / Avg: 3050.45 / Max: 9592.86)

Stress-NG 0.16.04 - Test: Wide Vector Math (Bogo Ops/s, more is better): d: 1543291.91 | b: 1542414.02 | c: 1541832.29 | EPYC 9124 2P: 1539140.95 (SE +/- 1152.35, N = 3; Min: 1540110.96 / Avg: 1542414.02 / Max: 1543640.95)

Stress-NG 0.16.04 - Test: Context Switching (Bogo Ops/s, more is better): c: 18545533.36 | EPYC 9124 2P: 18252120.52 | b: 18094107.24 | d: 17931974.92 (SE +/- 108301.82, N = 3; Min: 17904372.96 / Avg: 18094107.24 / Max: 18279464.89)

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Fused Multiply-AdddcbEPYC 9124 2P7M14M21M28M35MSE +/- 5730.93, N = 332151217.0332144757.8032117609.1832101622.541. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Fused Multiply-AdddcbEPYC 9124 2P6M12M18M24M30MMin: 32107254.66 / Avg: 32117609.18 / Max: 32127043.051. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Vector Floating PointEPYC 9124 2Pbdc20K40K60K80K100KSE +/- 414.14, N = 3107730.40107171.34106715.85106411.741. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Vector Floating PointEPYC 9124 2Pbdc20K40K60K80K100KMin: 106712.61 / Avg: 107171.34 / Max: 107997.961. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Glibc C String FunctionsbEPYC 9124 2Pdc9M18M27M36M45MSE +/- 540521.68, N = 341635360.2741243730.2740991307.9539897113.461. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Glibc C String FunctionsbEPYC 9124 2Pdc7M14M21M28M35MMin: 41090766 / Avg: 41635360.27 / Max: 42716393.381. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Glibc Qsort Data SortingcbEPYC 9124 2Pd2004006008001000SE +/- 0.71, N = 3920.53917.31917.08916.741. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Glibc Qsort Data SortingcbEPYC 9124 2Pd160320480640800Min: 916.11 / Avg: 917.31 / Max: 918.571. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: System V Message PassingbEPYC 9124 2Pcd1.5M3M4.5M6M7.5MSE +/- 5858.93, N = 36872106.486868467.376858428.666851573.201. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: System V Message PassingbEPYC 9124 2Pcd1.2M2.4M3.6M4.8M6MMin: 6861429.23 / Avg: 6872106.48 / Max: 6881625.821. (CXX) g++ options: -O2 -std=gnu99 -lc
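Each result above reports per-run averages plus an "SE +/- ..., N = 3" figure and a Min/Avg/Max triple for one run. Assuming the SE is the sample standard deviation of the N samples divided by sqrt(N) (my reading, which matches the figures here), it can be reproduced from the Min/Avg/Max triple, since with N = 3 the middle sample is recoverable as 3*avg - min - max:

```python
import statistics
from math import sqrt

def standard_error(run_min, run_avg, run_max, n=3):
    # With n == 3 the middle sample is fully determined by min/avg/max.
    middle = n * run_avg - run_min - run_max  # only valid for n == 3
    runs = [run_min, middle, run_max]
    # Assumption: SE = sample standard deviation / sqrt(n).
    return statistics.stdev(runs) / sqrt(n)

# Glibc Qsort Data Sorting triple from above: Min 916.11 / Avg 917.31 / Max 918.57
se = standard_error(916.11, 917.31, 918.57)
print(round(se, 2))  # matches the reported SE +/- 0.71
```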

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 (ms, fewer is better; per-run min/max in parentheses)
Compiler notes: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Target: CPU - Model: mobilenet
  EPYC 9124 2P: 22.98 (22.68/27.26), c: 23.50 (23.22/27.65), d: 23.55 (23.18/27.8), b: 23.67 (22.96/27.9)
  SE +/- 0.17, N = 3; Min: 23.38 / Avg: 23.67 / Max: 23.98

Target: CPU-v2-v2 - Model: mobilenet-v2
  d: 11.29 (10.83/11.91), b: 11.38 (10.59/46.25), EPYC 9124 2P: 11.63 (11.11/16.06), c: 12.04 (10.97/124.42)
  SE +/- 0.10, N = 3; Min: 11.2 / Avg: 11.38 / Max: 11.52

Target: CPU-v3-v3 - Model: mobilenet-v3
  EPYC 9124 2P: 11.27 (10.84/15.54), d: 11.64 (11.48/15.48), b: 11.90 (11.07/16.04), c: 11.98 (11.57/16.2)
  SE +/- 0.03, N = 3; Min: 11.86 / Avg: 11.9 / Max: 11.95

Target: CPU - Model: shufflenet-v2
  d: 13.96 (13.65/18.71), EPYC 9124 2P: 14.59 (14.26/19.1), b: 15.32 (14.99/19.69), c: 15.38 (14.6/86.15)
  SE +/- 0.07, N = 3; Min: 15.23 / Avg: 15.32 / Max: 15.45

Target: CPU - Model: mnasnet
  d: 10.70 (10.22/14.66), c: 10.75 (10.55/20.58), EPYC 9124 2P: 10.89 (10.64/15.33), b: 10.89 (10.59/11.25)
  SE +/- 0.11, N = 3; Min: 10.69 / Avg: 10.89 / Max: 11.05

Target: CPU - Model: efficientnet-b0
  d: 14.76 (14.47/29.12), c: 15.18 (14.78/24.12), EPYC 9124 2P: 15.65 (15.38/19.76), b: 15.65 (15.22/20.88)
  SE +/- 0.09, N = 3; Min: 15.49 / Avg: 15.65 / Max: 15.81

Target: CPU - Model: blazeface
  d: 5.63 (5.53/5.95), EPYC 9124 2P: 5.80 (5.61/6.11), b: 5.82 (5.44/10.1), c: 6.15 (5.85/10.7)
  SE +/- 0.12, N = 3; Min: 5.6 / Avg: 5.82 / Max: 6.03

Target: CPU - Model: googlenet
  d: 29.03 (27.97/36.49), EPYC 9124 2P: 29.49 (28.8/33.78), b: 29.63 (28.91/37.12), c: 29.75 (29.08/39.28)
  SE +/- 0.14, N = 3; Min: 29.43 / Avg: 29.63 / Max: 29.9

Target: CPU - Model: vgg16
  EPYC 9124 2P: 28.31 (27.15/33.46), b: 28.48 (26.51/33.89), c: 28.56 (25.87/151.12), d: 28.57 (27.98/32.46)
  SE +/- 0.61, N = 3; Min: 27.49 / Avg: 28.48 / Max: 29.6

Target: CPU - Model: resnet18
  EPYC 9124 2P: 13.53 (13.23/13.98), c: 13.60 (13.23/17.34), d: 13.65 (13.29/15.11), b: 13.66 (13.02/18.5)
  SE +/- 0.18, N = 3; Min: 13.45 / Avg: 13.66 / Max: 14.02

Target: CPU - Model: alexnet
  EPYC 9124 2P: 8.06 (7.76/12.3), d: 8.13 (7.95/8.44), c: 8.14 (7.75/12.48), b: 8.22 (7.66/12.54)
  SE +/- 0.23, N = 3; Min: 7.94 / Avg: 8.22 / Max: 8.68

Target: CPU - Model: resnet50
  EPYC 9124 2P: 26.29 (25.96/30.61), d: 26.53 (26.13/30.95), b: 26.63 (25.98/30.72), c: 26.84 (26.51/31.18)
  SE +/- 0.32, N = 3; Min: 26.24 / Avg: 26.63 / Max: 27.27

Target: CPU - Model: yolov4-tiny
  c: 35.50 (34.27/39.94), EPYC 9124 2P: 35.63 (34.31/39.09), b: 35.64 (34.1/39.46), d: 35.75 (34.32/45.5)
  SE +/- 0.25, N = 3; Min: 35.15 / Avg: 35.64 / Max: 35.96

Target: CPU - Model: squeezenet_ssd
  d: 24.22 (23.8/28.97), EPYC 9124 2P: 24.90 (24.53/28.94), c: 24.93 (24.54/29.12), b: 25.13 (24.57/30.59)
  SE +/- 0.09, N = 3; Min: 24.97 / Avg: 25.13 / Max: 25.29

Target: CPU - Model: regnety_400m
  d: 33.50 (33.26/37.92), EPYC 9124 2P: 33.75 (33.36/38.06), b: 34.75 (32.99/39.74), c: 36.80 (33.81/451.66)
  SE +/- 0.30, N = 3; Min: 34.2 / Avg: 34.75 / Max: 35.24

Target: CPU - Model: vision_transformer
  d: 55.05 (54.24/72.37), EPYC 9124 2P: 57.23 (56.51/61.47), c: 58.14 (56.8/80.65), b: 58.52 (56.14/303.74)
  SE +/- 1.11, N = 3; Min: 57.41 / Avg: 58.52 / Max: 60.73

Target: CPU - Model: FastestDet
  d: 16.00 (15.86/20.49), EPYC 9124 2P: 16.62 (16.27/18.12), b: 17.64 (15.65/24.31), c: 19.87 (19.45/29.36)
  SE +/- 1.24, N = 3; Min: 15.88 / Avg: 17.64 / Max: 20.03
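The four configurations track each other closely in most NCNN models. As a gauge of how far they actually diverge, a small sketch (using the reported averages above; the spread calculation is mine, not part of the test suite) ranks fastest vs. slowest per model:

```python
# Reported NCNN averages in ms (lower is better), taken from the results above.
results = {
    "mobilenet": {"EPYC 9124 2P": 22.98, "c": 23.50, "d": 23.55, "b": 23.67},
    "FastestDet": {"d": 16.00, "EPYC 9124 2P": 16.62, "b": 17.64, "c": 19.87},
}

for model, times in results.items():
    fastest = min(times, key=times.get)
    slowest = max(times, key=times.get)
    # Percent penalty of the slowest configuration relative to the fastest.
    spread = (times[slowest] - times[fastest]) / times[fastest] * 100
    print(f"{model}: fastest {fastest}, slowest {slowest}, spread {spread:.1f}%")
```

For mobilenet the spread is about 3%, i.e. within run-to-run noise for many of these models, while FastestDet shows a much larger gap of roughly 24%.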

Blender

Blender 3.6 (Seconds, fewer is better)

Blend File: BMW27 - Compute: CPU-Only
  EPYC 9124 2P: 37.66, c: 37.75, b: 37.85, d: 37.93
  SE +/- 0.02, N = 3; Min: 37.81 / Avg: 37.85 / Max: 37.89

Blend File: Classroom - Compute: CPU-Only
  EPYC 9124 2P: 94.82, d: 94.88, c: 95.02, b: 95.10
  SE +/- 0.08, N = 3; Min: 94.95 / Avg: 95.1 / Max: 95.23

Blend File: Fishy Cat - Compute: CPU-Only
  d: 48.95, b: 49.21, EPYC 9124 2P: 49.25, c: 49.26
  SE +/- 0.13, N = 3; Min: 48.96 / Avg: 49.21 / Max: 49.36

Blend File: Barbershop - Compute: CPU-Only
  d: 372.72, b: 373.06, c: 373.55, EPYC 9124 2P: 373.93
  SE +/- 0.81, N = 3; Min: 371.82 / Avg: 373.06 / Max: 374.58

Blend File: Pabellon Barcelona - Compute: CPU-Only
  b: 122.37, EPYC 9124 2P: 122.49, d: 122.70, c: 123.00
  SE +/- 0.41, N = 3; Min: 121.7 / Avg: 122.37 / Max: 123.12
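The result viewer offers an overall geometric mean across tests. As a sketch of what such a summary looks like, assuming a plain unweighted geometric mean over the five Blender scene times above (the exact weighting used by the viewer is not specified here):

```python
import statistics

# Blender 3.6 render times in seconds (lower is better), from the results above:
# BMW27, Classroom, Fishy Cat, Barbershop, Pabellon Barcelona.
times = {
    "EPYC 9124 2P": [37.66, 94.82, 49.25, 373.93, 122.49],
    "d": [37.93, 94.88, 48.95, 372.72, 122.70],
}

# A geometric mean summarizes scenes that differ by an order of magnitude
# more evenly than an arithmetic mean, which Barbershop would dominate.
for name, runs in times.items():
    print(f"{name}: geometric mean {statistics.geometric_mean(runs):.2f} s")
```

With scenes this close, the two summaries come out nearly identical (around 95.8 s each), consistent with the per-scene results differing by well under one percent.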

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system, exercised with the bundled cassandra-stress tool. Learn more via the OpenBenchmarking.org test page.

Apache Cassandra 4.1.3, Test: Writes (Op/s, more is better)
  d: 230402, c: 228846, b: 226790, EPYC 9124 2P: 222075
  SE +/- 1083.03, N = 3; Min: 224726 / Avg: 226790 / Max: 228391
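Expressed relative to the slowest run (the EPYC 9124 2P identifier), the write throughputs above work out to only a few percent of spread; a minimal sketch of that normalization:

```python
# Cassandra write throughput (Op/s, higher is better) from the result above.
writes = {"d": 230402, "c": 228846, "b": 226790, "EPYC 9124 2P": 222075}

baseline = writes["EPYC 9124 2P"]  # slowest run used as the 100% reference
for name, ops in sorted(writes.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {ops} Op/s ({ops / baseline * 100:.1f}% of baseline)")
```

The fastest run (d) lands at roughly 103.7% of the baseline, so the four runs sit within about a 4% band.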

Kripke

Kripke is a simple, scalable, 3D Sn deterministic particle transport code developed by LLNL. Its primary purpose is to research how data layout, programming paradigms, and architectures affect the implementation and performance of Sn transport. Learn more via the OpenBenchmarking.org test page.

Kripke 1.2.6 (Throughput FoM, more is better)
  EPYC 9124 2P: 376258700, b: 374033600, d: 372072000, c: 365660900
  SE +/- 2419934.15, N = 3; Min: 369296800 / Avg: 374033600 / Max: 377262400
  Compiler notes: (CXX) g++ options: -O3 -fopenmp -ldl

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.36, VGR Performance Metric (more is better)
  EPYC 9124 2P: 545952, d: 544648, c: 542621, b: 539832
  Compiler notes: (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.16.04, Test: Pthread (Bogo Ops/s, more is better)
  d: 70043.78, b: 70031.30
  SE +/- 71.08, N = 2; Min: 69960.21 / Avg: 70031.3 / Max: 70102.38
  Compiler notes: (CXX) g++ options: -O2 -std=gnu99 -lc

Test: Pthread

c: The test quit with a non-zero exit status.

188 Results Shown

Laghos:
  Triple Point Problem
  Sedov Blast Wave, ube_922_hex.mesh
Remhos
SPECFEM3D:
  Mount St. Helens
  Layered Halfspace
  Tomographic Model
  Homogeneous Halfspace
  Water-layered Halfspace
nekRS:
  Kershaw
  TurboPipe Periodic
Embree:
  Pathtracer - Crown
  Pathtracer ISPC - Crown
  Pathtracer - Asian Dragon
  Pathtracer - Asian Dragon Obj
  Pathtracer ISPC - Asian Dragon
  Pathtracer ISPC - Asian Dragon Obj
SVT-AV1:
  Preset 4 - Bosphorus 4K
  Preset 8 - Bosphorus 4K
  Preset 12 - Bosphorus 4K
  Preset 13 - Bosphorus 4K
  Preset 4 - Bosphorus 1080p
  Preset 8 - Bosphorus 1080p
  Preset 12 - Bosphorus 1080p
  Preset 13 - Bosphorus 1080p
VVenC:
  Bosphorus 4K - Fast
  Bosphorus 4K - Faster
  Bosphorus 1080p - Fast
  Bosphorus 1080p - Faster
Intel Open Image Denoise:
  RT.hdr_alb_nrm.3840x2160 - CPU-Only
  RT.ldr_alb_nrm.3840x2160 - CPU-Only
  RTLightmap.hdr.4096x4096 - CPU-Only
OSPRay:
  particle_volume/ao/real_time
  particle_volume/scivis/real_time
  particle_volume/pathtracer/real_time
  gravity_spheres_volume/dim_512/ao/real_time
  gravity_spheres_volume/dim_512/scivis/real_time
  gravity_spheres_volume/dim_512/pathtracer/real_time
Timed Linux Kernel Compilation
Liquid-DSP:
  1 - 256 - 32
  1 - 256 - 57
  2 - 256 - 32
  2 - 256 - 57
  4 - 256 - 32
  4 - 256 - 57
  8 - 256 - 32
  8 - 256 - 57
  1 - 256 - 512
  16 - 256 - 32
  16 - 256 - 57
  2 - 256 - 512
  32 - 256 - 32
  32 - 256 - 57
  4 - 256 - 512
  64 - 256 - 32
  64 - 256 - 57
  8 - 256 - 512
  16 - 256 - 512
  32 - 256 - 512
  64 - 256 - 512
Dragonflydb:
  10 - 1:10
  20 - 1:10
  50 - 1:10
  10 - 1:100
  20 - 1:100
  50 - 1:100
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream:
    items/sec
    ms/batch
  ResNet-50, Baseline - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  ResNet-50, Baseline - Synchronous Single-Stream:
    items/sec
    ms/batch
  ResNet-50, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  ResNet-50, Sparse INT8 - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO - Synchronous Single-Stream:
    items/sec
    ms/batch
  BERT-Large, NLP Question Answering - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  BERT-Large, NLP Question Answering - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    items/sec
    ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream:
    items/sec
    ms/batch
  BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    items/sec
    ms/batch
Stress-NG:
  Hash
  MMAP
  NUMA
  Pipe
  Poll
  Zlib
  Futex
  MEMFD
  Mutex
  Atomic
  Crypto
  Malloc
  Cloning
  Forking
  AVL Tree
  SENDFILE
  CPU Cache
  CPU Stress
  Semaphores
  Matrix Math
  Vector Math
  AVX-512 VNNI
  Function Call
  x86_64 RdRand
  Floating Point
  Matrix 3D Math
  Memory Copying
  Vector Shuffle
  Mixed Scheduler
  Socket Activity
  Wide Vector Math
  Context Switching
  Fused Multiply-Add
  Vector Floating Point
  Glibc C String Functions
  Glibc Qsort Data Sorting
  System V Message Passing
NCNN:
  CPU - mobilenet
  CPU-v2-v2 - mobilenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU - shufflenet-v2
  CPU - mnasnet
  CPU - efficientnet-b0
  CPU - blazeface
  CPU - googlenet
  CPU - vgg16
  CPU - resnet18
  CPU - alexnet
  CPU - resnet50
  CPU - yolov4-tiny
  CPU - squeezenet_ssd
  CPU - regnety_400m
  CPU - vision_transformer
  CPU - FastestDet
Blender:
  BMW27 - CPU-Only
  Classroom - CPU-Only
  Fishy Cat - CPU-Only
  Barbershop - CPU-Only
  Pabellon Barcelona - CPU-Only
Apache Cassandra
Kripke
BRL-CAD
Stress-NG