AMD EPYC 7763 1P spec_rstack_overflow
Benchmarks by Michael Larabel for a future article looking at the performance impact of the AMD Inception (SRSO, spec_rstack_overflow) mitigation.
HTML result view exported from: https://openbenchmarking.org/result/2308112-NE-EPYC7763124&rdt&grs.
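For anyone re-running this comparison, the result ID above can be benchmarked against locally with the Phoronix Test Suite (phoronix-test-suite benchmark 2308112-NE-EPYC7763124), and the kernel's Inception/SRSO mitigation is selected at boot via the spec_rstack_overflow= parameter (e.g. off or safe-ret). Below is a minimal sketch, assuming a kernel recent enough to expose SRSO status under sysfs, for recording the mitigation state alongside a run; the exact reported strings vary by kernel version and microcode.

    from pathlib import Path

    # sysfs node populated by kernels with SRSO ("Inception") reporting
    SRSO_STATUS = Path("/sys/devices/system/cpu/vulnerabilities/spec_rstack_overflow")

    def srso_mitigation_status() -> str:
        """Return the kernel-reported SRSO mitigation state, e.g.
        "Mitigation: Safe RET", or a vulnerable state when booted with
        spec_rstack_overflow=off."""
        try:
            return SRSO_STATUS.read_text().strip()
        except FileNotFoundError:
            # Kernels without the SRSO patches do not expose this file.
            return "spec_rstack_overflow status not reported by this kernel"

    if __name__ == "__main__":
        print(srso_mitigation_status())

The result file covers the following tests and configurations; entries that appear twice presumably correspond to the separate metrics (such as throughput and average latency) that those tests report in the original result view.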
MariaDB (Clients: 4096)
PostgreSQL (Scaling Factor: 100 - Clients: 800 - Mode: Read Only)
PostgreSQL (Scaling Factor: 100 - Clients: 800 - Mode: Read Only - Average Latency)
RocksDB (Test: Update Random)
SQLite (Threads / Copies: 16)
RocksDB (Test: Read Random Write Random)
SQLite (Threads / Copies: 8)
DaCapo Benchmark (Java Test: Tradebeans)
OpenRadioss (Model: Bumper Beam)
MariaDB (Clients: 8192)
Timed Linux Kernel Compilation (Build: defconfig)
Apache Spark (Row Count: 1000000 - Partitions: 100 - Inner Join Test Time)
OpenRadioss (Model: Rubber O-Ring Seal Installation)
Apache Spark (Row Count: 1000000 - Partitions: 100 - Broadcast Inner Join Test Time)
nginx (Connections: 500)
nginx (Connections: 1000)
PostgreSQL (Scaling Factor: 100 - Clients: 800 - Mode: Read Write)
PostgreSQL (Scaling Factor: 100 - Clients: 800 - Mode: Read Write - Average Latency)
Timed Linux Kernel Compilation (Build: allmodconfig)
OpenRadioss (Model: Cell Phone Drop Test)
Timed Node.js Compilation (Time To Compile)
Apache IoTDB (Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200)
Numpy Benchmark
Apache Spark (Row Count: 1000000 - Partitions: 100 - Group By Test Time)
Timed LLVM Compilation (Build System: Ninja)
Apache IoTDB (Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500)
7-Zip Compression (Test: Compression Rating)
Apache IoTDB (Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200)
TensorFlow (Device: CPU - Batch Size: 64 - Model: ResNet-50)
CockroachDB (Workload: KV, 95% Reads - Concurrency: 128)
Apache IoTDB (Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500)
Timed Godot Game Engine Compilation (Time To Compile)
OpenRadioss (Model: Bird Strike on Windshield)
ClickHouse (100M Rows Hits Dataset, Third Run)
ClickHouse (100M Rows Hits Dataset, First Run / Cold Cache)
Apache Spark (Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time)
CockroachDB (Workload: KV, 50% Reads - Concurrency: 128)
Apache Cassandra (Test: Writes)
Apache IoTDB (Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200)
Apache IoTDB (Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200)
Remhos (Test: Sample Remap Example)
Apache IoTDB (Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500)
ClickHouse (100M Rows Hits Dataset, Second Run)
Apache IoTDB (Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200)
Apache IoTDB (Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200)
Apache IoTDB (Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200)
DaCapo Benchmark (Java Test: Jython)
Redis 7.0.12 + memtier_benchmark (Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5)
Timed MrBayes Analysis (Primate Phylogeny Analysis)
OpenRadioss (Model: INIVOL and Fluid Structure Interaction Drop Container)
Apache IoTDB (Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500)
OpenFOAM (Input: drivaerFastback, Medium Mesh Size - Mesh Time)
Apache IoTDB (Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500)
Apache IoTDB (Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500)
Apache IoTDB (Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200)
Apache IoTDB (Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500)
ACES DGEMM (Sustained Floating-Point Rate)
Redis 7.0.12 + memtier_benchmark (Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5)
OpenVINO (Model: Person Detection FP16 - Device: CPU)
OSPRay (Benchmark: particle_volume/pathtracer/real_time)
SPECFEM3D (Model: Water-layered Halfspace)
Apache IoTDB (Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500)
OpenVINO (Model: Person Detection FP16 - Device: CPU)
Redis 7.0.12 + memtier_benchmark (Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10)
SPECFEM3D (Model: Tomographic Model)
Embree (Binary: Pathtracer ISPC - Model: Asian Dragon)
Apache Spark (Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark)
OpenFOAM (Input: drivaerFastback, Medium Mesh Size - Execution Time)
Redis 7.0.12 + memtier_benchmark (Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10)
SPECFEM3D (Model: Mount St. Helens)
SPECFEM3D (Model: Homogeneous Halfspace)
Blender (Blend File: BMW27 - Compute: CPU-Only)
Blender (Blend File: Pabellon Barcelona - Compute: CPU-Only)
Algebraic Multi-Grid Benchmark
NAMD (ATPase Simulation - 327,506 Atoms)
SPECFEM3D (Model: Layered Halfspace)
OSPRay (Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time)
GROMACS (Implementation: MPI CPU - Input: water_GMX50_bare)
OSPRay (Benchmark: gravity_spheres_volume/dim_512/scivis/real_time)
OpenVINO (Model: Weld Porosity Detection FP16 - Device: CPU)
OSPRay (Benchmark: gravity_spheres_volume/dim_512/ao/real_time)
OpenVINO (Model: Weld Porosity Detection FP16 - Device: CPU)
OpenVKL (Benchmark: vklBenchmark ISPC)
7-Zip Compression (Test: Decompression Rating)
Neural Magic DeepSparse (Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream)
Neural Magic DeepSparse (Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream)
Neural Magic DeepSparse (Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream)
Neural Magic DeepSparse (Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream)
Neural Magic DeepSparse (Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream)
OSPRay (Benchmark: particle_volume/scivis/real_time)
OpenVINO (Model: Face Detection FP16-INT8 - Device: CPU)
OSPRay (Benchmark: particle_volume/ao/real_time)
Neural Magic DeepSparse (Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream)
Neural Magic DeepSparse (Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream)
OpenVINO (Model: Face Detection FP16-INT8 - Device: CPU)
Neural Magic DeepSparse (Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream)
Neural Magic DeepSparse (Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream)
Neural Magic DeepSparse (Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream)
Neural Magic DeepSparse (Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream)
Embree (Binary: Pathtracer ISPC - Model: Crown)
Neural Magic DeepSparse (Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream)
Neural Magic DeepSparse (Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream)
Neural Magic DeepSparse (Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream)
Apache Spark (Row Count: 1000000 - Partitions: 100 - Calculate Pi Benchmark Using Dataframe)
Apache Spark (Row Count: 1000000 - Partitions: 100 - Repartition Test Time)
Phoronix Test Suite v10.8.5