okt

AMD Ryzen 9 3900XT 12-Core testing with a MSI MEG X570 GODLIKE (MS-7C34) v1.0 (1.B3 BIOS) and AMD Radeon RX 56/64 8GB on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2310319-NE-OKT13575789

Test categories represented in this result file:

AV1: 3 Tests
BLAS (Basic Linear Algebra Sub-Routine) Tests: 3 Tests
C++ Boost Tests: 4 Tests
Timed Code Compilation: 6 Tests
C/C++ Compiler Tests: 7 Tests
CPU Massive: 15 Tests
Creator Workloads: 15 Tests
Cryptography: 2 Tests
Database Test Suite: 6 Tests
Encoding: 5 Tests
Fortran Tests: 3 Tests
Game Development: 4 Tests
HPC - High Performance Computing: 13 Tests
Java Tests: 3 Tests
Common Kernel Benchmarks: 3 Tests
Machine Learning: 6 Tests
Molecular Dynamics: 2 Tests
MPI Benchmarks: 3 Tests
Multi-Core: 22 Tests
NVIDIA GPU Compute: 2 Tests
Intel oneAPI: 7 Tests
OpenMPI Tests: 8 Tests
Programmer / Developer System Benchmarks: 6 Tests
Python Tests: 11 Tests
Raytracing: 2 Tests
Renderers: 3 Tests
Scientific Computing: 4 Tests
Software Defined Radio: 2 Tests
Server: 9 Tests
Server CPU Tests: 9 Tests
Video Encoding: 4 Tests
Common Workstation Benchmarks: 2 Tests

Run Management

Result Identifier    Date Run            Test Duration
a                    October 30 2023     10 Hours, 59 Minutes
b                    October 30 2023     11 Hours, 3 Minutes
Average                                  11 Hours, 1 Minute
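
The 11 Hours, 1 Minute figure is the average of the two run durations: (10 hours 59 minutes + 11 hours 3 minutes) / 2 = 22 hours 2 minutes / 2 = 11 hours, 1 minute.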


okt

"3DMark Wild Life Extreme 1.1.2.1 - Resolution: 1920 x 1080", Higher Results Are Better
"a",251.66830444336
"b",253.0650177002

"Apache Hadoop 3.3.6 - Operation: Create - Threads: 20 - Files: 100000", Higher Results Are Better
"a",18804.061677322
"b",19127.773527161

"Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 100", Higher Results Are Better
"a",870894.82
"b",876234.02

"Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 400", Higher Results Are Better
"a",883205.76
"b",907114.02

"Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 100", Higher Results Are Better
"a",1598972.23
"b",1634117.87

"Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 400", Higher Results Are Better
"a",1680328.45
"b",1678638.62

"Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 100", Higher Results Are Better
"a",1353913.18
"b",1345876.33

"Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 400", Higher Results Are Better
"a",1946289.31
"b",1907446.08

"Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100", Higher Results Are Better
"a",27513198.71
"b",27931669.95

"Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100", Higher Results Are Better
"a",22648486.29
"b",25003579.13

"Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400", Higher Results Are Better
"a",23078340.04
"b",22919673.53

"Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100", Higher Results Are Better
"a",23311140.77
"b",21936964.27

"Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400", Higher Results Are Better
"a",19627767.11
"b",18927006.35

"CloverLeaf 1.3 - Input: clover_bm", Lower Results Are Better
"a",134.1073050499
"b",135.44919419289

"CloverLeaf 1.3 - Input: clover_bm64_short", Lower Results Are Better
"a",391.2198369503
"b",391.59525513649

"Intel Open Image Denoise 2.1 - Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only", Higher Results Are Better
"a",0.45624392625273
"b",0.4549155904122

"Intel Open Image Denoise 2.1 - Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only", Higher Results Are Better
"a",0.43736113783874
"b",0.43569187870338

"Intel Open Image Denoise 2.1 - Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only", Higher Results Are Better
"a",0.22546999219874
"b",0.22496023827788

"PostgreSQL 16 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Only", Higher Results Are Better
"a",492880.700588
"b",513618.259916

"PostgreSQL 16 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Write", Higher Results Are Better
"a",9822.75565
"b",9832.783435

The remaining entries in the export list only the "a" and "b" identifiers with no recorded values: Apache Cassandra, Apache HTTP Server, the Apache IoTDB average-latency counterparts, Blender, BRL-CAD, Build2, CP2K Molecular Dynamics, Cpuminer-Opt, Crypto++, dav1d, DuckDB, easyWave, Embree, GPAW, High Performance Conjugate Gradient, libavif avifenc, libxsmm, Liquid-DSP, Memcached, NCNN, nekRS, Neural Magic DeepSparse, nginx, oneDNN, OpenRadioss, OpenVINO, OpenVKL, Opus Codec Encoding, OSPRay, OSPRay Studio, Palabos, the PostgreSQL average-latency counterparts, QMCPACK, QuantLib, SQLite, srsRAN Project, Stress-NG, SVT-AV1, TensorFlow, the Timed GCC/Gem5/Godot/LLVM/Node.js compilation tests, VVenC, Whisper.cpp, and Z3 Theorem Prover.
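
A minimal parsing sketch in Python, assuming the per-test listing above is saved verbatim to a file named okt_results.txt (hypothetical filename); it pairs each "a"/"b" value with its test and reports the relative difference of run b versus run a, honoring the Higher/Lower Results Are Better direction:

    # Parse the result listing (assumed saved as okt_results.txt) and compare runs a and b.
    results = {}
    current_test = None
    with open("okt_results.txt") as fh:
        for raw in fh:
            line = raw.strip()
            if "Results Are Better" in line:
                # Header line: "<test name>", Higher/Lower Results Are Better
                name, _, tail = line.rpartition(",")
                current_test = name.strip().strip('"')
                results[current_test] = {"higher_is_better": "Higher" in tail}
            elif line.startswith('"a"') or line.startswith('"b"'):
                # Value line: "<identifier>",<value>
                ident, _, value = line.partition(",")
                if value.strip():
                    results[current_test][ident.strip('"')] = float(value)

    for test, data in results.items():
        if "a" in data and "b" in data:
            delta = (data["b"] - data["a"]) / data["a"] * 100.0
            improved = (delta > 0) == data["higher_is_better"]
            print(f"{test}: b vs a {delta:+.2f}% ({'better' if improved else 'worse'})")

This treats each quoted header line as the start of a new test block, matching the layout used above; adapt the filename and parsing if the CSV export is downloaded in a different form.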