okt - Tests for a future article. AMD Ryzen 9 3900XT 12-Core testing with a MSI MEG X570 GODLIKE (MS-7C34) v1.0 (1.B3 BIOS) and AMD Radeon RX 56/64 8GB on Ubuntu 22.04 via the Phoronix Test Suite.

System configuration (identical for runs a and b):
  Processor: AMD Ryzen 9 3900XT 12-Core @ 3.80GHz (12 Cores / 24 Threads)
  Motherboard: MSI MEG X570 GODLIKE (MS-7C34) v1.0 (1.B3 BIOS)
  Chipset: AMD Starship/Matisse
  Memory: 16GB
  Disk: 500GB Seagate FireCuda 520 SSD ZP500GM30002
  Graphics: AMD Radeon RX 56/64 8GB (1630/945MHz)
  Audio: AMD Vega 10 HDMI Audio
  Monitor: ASUS MG28U
  Network: Realtek Device 2600 + Realtek Killer E3000 2.5GbE + Intel Wi-Fi 6 AX200
  OS: Ubuntu 22.04, Kernel: 6.2.0-35-generic (x86_64), Desktop: GNOME Shell 42.2, Display Server: X Server + Wayland
  OpenGL: 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.49), Vulkan: 1.3.204, Compiler: GCC 11.4.0, File-System: ext4, Screen Resolution: 3840x2160

Results are listed one per line as: test - configuration - metric (direction): run a value, run b value. Tests that produced no result are noted as such. Two short analysis sketches (quantifying the a/b deltas and the Liquid-DSP thread scaling) follow the listing.

3DMark Wild Life Extreme 1.1.2.1 - Resolution: 1920 x 1080 - Frames Per Second (higher is better): a = 251.67, b = 253.07
Apache Cassandra 4.1.3 - Test: Writes - Op/s (higher is better): a = 110053, b = 111134
Apache Hadoop 3.3.6 - Operation: Create - Threads: 20 - Files: 100000 - Ops per sec (higher is better): a = 18804, b = 19128
Apache HTTP Server 2.4.56 - Concurrent Requests: 100 - Requests Per Second (higher is better): no result recorded
Apache HTTP Server 2.4.56 - Concurrent Requests: 200 - Requests Per Second (higher is better): no result recorded
Apache HTTP Server 2.4.56 - Concurrent Requests: 500 - Requests Per Second (higher is better): no result recorded
Apache HTTP Server 2.4.56 - Concurrent Requests: 1000 - Requests Per Second (higher is better): no result recorded
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 100 - point/sec (higher is better): a = 870895, b = 876234
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 100 - Average Latency (lower is better): a = 21.46, b = 21.31
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 400 - point/sec (higher is better): a = 883206, b = 907114
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 400 - Average Latency (lower is better): a = 82.72, b = 79.59
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 100 - point/sec (higher is better): a = 1598972, b = 1634118
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 100 - Average Latency (lower is better): a = 29.57, b = 29.06
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 400 - point/sec (higher is better): a = 1680328, b = 1678639
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 400 - Average Latency (lower is better): a = 108.87, b = 108.49
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 100 - point/sec (higher is better): a = 1353913, b = 1345876
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 100 - Average Latency (lower is better): a = 57.10, b = 57.24
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 400 - point/sec (higher is better): a = 1946289, b = 1907446
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 400 - Average Latency (lower is better): a = 151.10, b = 154.86
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 - point/sec (higher is better): a = 27513199, b = 27931670
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 - Average Latency (lower is better): a = 68.27, b = 67.75
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400: no result recorded
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 - point/sec (higher is better): a = 22648486, b = 25003579
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 - Average Latency (lower is better): a = 214.04, b = 193.72
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 - point/sec (higher is better): a = 23078340, b = 22919674
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 - Average Latency (lower is better): a = 783.24, b = 800.04
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 - point/sec (higher is better): a = 23311141, b = 21936964
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 - Average Latency (lower is better): a = 335.42, b = 356.70
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400 - point/sec (higher is better): a = 19627767, b = 18927006
Apache IoTDB 1.2 - Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400 - Average Latency (lower is better): a = 1540.09, b = 1597.52
Blender 3.6 - Blend File: BMW27 - Compute: CPU-Only - Seconds (lower is better): a = 106.56, b = 107.52
Blender 3.6 - Blend File: Classroom - Compute: CPU-Only - Seconds (lower is better): a = 289.11, b = 290.28
Blender 3.6 - Blend File: Fishy Cat - Compute: CPU-Only - Seconds (lower is better): a = 134.28, b = 133.77
Blender 3.6 - Blend File: Barbershop - Compute: CPU-Only - Seconds (lower is better): a = 1113.55, b = 1113.73
Blender 3.6 - Blend File: Pabellon Barcelona - Compute: CPU-Only - Seconds (lower is better): a = 345.49, b = 343.58
BRL-CAD 7.36 - VGR Performance Metric (higher is better): a = 189733, b = 190422
Build2 0.15 - Time To Compile - Seconds (lower is better): a = 123.55, b = 119.19
CloverLeaf 1.3 - Input: clover_bm - Seconds (lower is better): a = 134.11, b = 135.45
CloverLeaf 1.3 - Input: clover_bm64_short - Seconds (lower is better): a = 391.22, b = 391.60
CP2K Molecular Dynamics 2023.1 - Input: H2O-64 - Seconds (lower is better): a = 102.99, b = 102.75
CP2K Molecular Dynamics 2023.1 - Input: H2O-DFT-LS - Seconds (lower is better): no result recorded
CP2K Molecular Dynamics 2023.1 - Input: Fayalite-FIST - Seconds (lower is better): a = 171.29, b = 171.27
Cpuminer-Opt 23.5 - Algorithm: Magi - kH/s (higher is better): a = 370.56, b = 371.36
Cpuminer-Opt 23.5 - Algorithm: scrypt - kH/s (higher is better): a = 128.03, b = 128.06
Cpuminer-Opt 23.5 - Algorithm: Deepcoin - kH/s (higher is better): a = 4267.42, b = 4307.38
Cpuminer-Opt 23.5 - Algorithm: Ringcoin - kH/s (higher is better): a = 1882.92, b = 1936.29
Cpuminer-Opt 23.5 - Algorithm: Blake-2 S - kH/s (higher is better): a = 73570, b = 73540
Cpuminer-Opt 23.5 - Algorithm: Garlicoin - kH/s (higher is better): a = 1281.81, b = 1333.62
Cpuminer-Opt 23.5 - Algorithm: Skeincoin - kH/s (higher is better): a = 18940, b = 18510
Cpuminer-Opt 23.5 - Algorithm: Myriad-Groestl - kH/s (higher is better): a = 6257.66, b = 6205.35
Cpuminer-Opt 23.5 - Algorithm: LBC, LBRY Credits - kH/s (higher is better): a = 8168.31, b = 8157.18
Cpuminer-Opt 23.5 - Algorithm: Quad SHA-256, Pyrite - kH/s (higher is better): a = 28470, b = 28390
Cpuminer-Opt 23.5 - Algorithm: Triple SHA-256, Onecoin - kH/s (higher is better): a = 40300, b = 40150
Crypto++ 8.8 - Test: Unkeyed Algorithms - MiB/second (higher is better): a = 415.49, b = 439.59
dav1d 1.2.1 - Video Input: Chimera 1080p - FPS (higher is better): a = 402.46, b = 400.81
dav1d 1.2.1 - Video Input: Summer Nature 4K - FPS (higher is better): a = 196.13, b = 196.12
dav1d 1.2.1 - Video Input: Summer Nature 1080p - FPS (higher is better): a = 793.04, b = 793.90
dav1d 1.2.1 - Video Input: Chimera 1080p 10-bit - FPS (higher is better): a = 457.63, b = 458.10
DuckDB 0.9.1 - Benchmark: IMDB - Seconds (lower is better): no result recorded
DuckDB 0.9.1 - Benchmark: TPC-H Parquet - Seconds (lower is better): no result recorded
easyWave r34 - Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240 - Seconds (lower is better): a = 12.29, b = 12.27
easyWave r34 - Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200 - Seconds (lower is better): a = 382.08, b = 382.27
Embree 4.3 - Binary: Pathtracer - Model: Crown - Frames Per Second (higher is better): a = 15.34, b = 15.19
Embree 4.3 - Binary: Pathtracer ISPC - Model: Crown - Frames Per Second (higher is better): a = 14.03, b = 14.10
Embree 4.3 - Binary: Pathtracer - Model: Asian Dragon - Frames Per Second (higher is better): a = 16.27, b = 16.33
Embree 4.3 - Binary: Pathtracer - Model: Asian Dragon Obj - Frames Per Second (higher is better): a = 14.69, b = 14.77
Embree 4.3 - Binary: Pathtracer ISPC - Model: Asian Dragon - Frames Per Second (higher is better): a = 15.58, b = 15.65
Embree 4.3 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj - Frames Per Second (higher is better): a = 13.44, b = 13.44
GPAW 23.6 - Input: Carbon Nanotube - Seconds (lower is better): a = 310.50, b = 310.45
High Performance Conjugate Gradient 3.1 - X Y Z: 104 104 104 - RT: 60 - GFLOP/s (higher is better): a = 5.10918, b = 5.09819
Intel Open Image Denoise 2.1 - Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only - Images / Sec (higher is better): a = 0.46, b = 0.45
Intel Open Image Denoise 2.1 - Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only - Images / Sec (higher is better): a = 0.44, b = 0.44
Intel Open Image Denoise 2.1 - Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only - Images / Sec (higher is better): a = 0.23, b = 0.22
libavif avifenc 1.0 - Encoder Speed: 0 - Seconds (lower is better): a = 129.00, b = 128.80
libavif avifenc 1.0 - Encoder Speed: 2 - Seconds (lower is better): a = 63.92, b = 63.18
libavif avifenc 1.0 - Encoder Speed: 6 - Seconds (lower is better): a = 6.329, b = 6.238
libavif avifenc 1.0 - Encoder Speed: 6, Lossless - Seconds (lower is better): a = 10.59, b = 10.51
libavif avifenc 1.0 - Encoder Speed: 10, Lossless - Seconds (lower is better): a = 6.291, b = 6.229
libxsmm 2-1.17-3645 - M N K: 128 - GFLOPS/s (higher is better): a = 231.0, b = 230.3
libxsmm 2-1.17-3645 - M N K: 32 - GFLOPS/s (higher is better): a = 55.0, b = 54.9
libxsmm 2-1.17-3645 - M N K: 64 - GFLOPS/s (higher is better): a = 112.8, b = 112.7
Liquid-DSP 1.6 - Threads: 1 - Buffer Length: 256 - Filter Length: 32 - samples/s (higher is better): a = 47740000, b = 45239000
Liquid-DSP 1.6 - Threads: 1 - Buffer Length: 256 - Filter Length: 57 - samples/s (higher is better): a = 53633000, b = 52191000
Liquid-DSP 1.6 - Threads: 2 - Buffer Length: 256 - Filter Length: 32 - samples/s (higher is better): a = 88956000, b = 90487000
Liquid-DSP 1.6 - Threads: 2 - Buffer Length: 256 - Filter Length: 57 - samples/s (higher is better): a = 102790000, b = 104190000
Liquid-DSP 1.6 - Threads: 4 - Buffer Length: 256 - Filter Length: 32 - samples/s (higher is better): a = 174840000, b = 176360000
Liquid-DSP 1.6 - Threads: 4 - Buffer Length: 256 - Filter Length: 57 - samples/s (higher is better): a = 200240000, b = 201440000
Liquid-DSP 1.6 - Threads: 8 - Buffer Length: 256 - Filter Length: 32 - samples/s (higher is better): a = 344230000, b = 341300000
Liquid-DSP 1.6 - Threads: 8 - Buffer Length: 256 - Filter Length: 57 - samples/s (higher is better): a = 391000000, b = 395730000
Liquid-DSP 1.6 - Threads: 1 - Buffer Length: 256 - Filter Length: 512 - samples/s (higher is better): a = 10723000, b = 10761000
Liquid-DSP 1.6 - Threads: 16 - Buffer Length: 256 - Filter Length: 32 - samples/s (higher is better): a = 640290000, b = 633870000
Liquid-DSP 1.6 - Threads: 16 - Buffer Length: 256 - Filter Length: 57 - samples/s (higher is better): a = 630380000, b = 631120000
Liquid-DSP 1.6 - Threads: 2 - Buffer Length: 256 - Filter Length: 512 - samples/s (higher is better): a = 21283000, b = 21268000
Liquid-DSP 1.6 - Threads: 24 - Buffer Length: 256 - Filter Length: 32 - samples/s (higher is better): a = 890410000, b = 890730000
Liquid-DSP 1.6 - Threads: 24 - Buffer Length: 256 - Filter Length: 57 - samples/s (higher is better): a = 732320000, b = 721140000
Liquid-DSP 1.6 - Threads: 4 - Buffer Length: 256 - Filter Length: 512 - samples/s (higher is better): a = 39977000, b = 39956000
Liquid-DSP 1.6 - Threads: 8 - Buffer Length: 256 - Filter Length: 512 - samples/s (higher is better): a = 78180000, b = 78962000
Liquid-DSP 1.6 - Threads: 16 - Buffer Length: 256 - Filter Length: 512 - samples/s (higher is better): a = 145080000, b = 143910000
Liquid-DSP 1.6 - Threads: 24 - Buffer Length: 256 - Filter Length: 512 - samples/s (higher is better): a = 198510000, b = 198780000
Memcached 1.6.19 - Set To Get Ratio: 1:10 - Ops/sec (higher is better): a = 1646210.97, b = 1656534.13
Memcached 1.6.19 - Set To Get Ratio: 1:100 - Ops/sec (higher is better): a = 1590140.06, b = 1598152.08
NCNN 20230517 - Target: CPU - Model: mobilenet - ms (lower is better): a = 13.21, b = 13.20
NCNN 20230517 - Target: CPU-v2-v2 - Model: mobilenet-v2 - ms (lower is better): a = 4.24, b = 4.23
NCNN 20230517 - Target: CPU-v3-v3 - Model: mobilenet-v3 - ms (lower is better): a = 3.64, b = 3.62
NCNN 20230517 - Target: CPU - Model: shufflenet-v2 - ms (lower is better): a = 4.62, b = 4.60
NCNN 20230517 - Target: CPU - Model: mnasnet - ms (lower is better): a = 3.81, b = 3.81
NCNN 20230517 - Target: CPU - Model: efficientnet-b0 - ms (lower is better): a = 6.21, b = 6.18
NCNN 20230517 - Target: CPU - Model: blazeface - ms (lower is better): a = 1.76, b = 1.72
NCNN 20230517 - Target: CPU - Model: googlenet - ms (lower is better): a = 14.18, b = 14.03
NCNN 20230517 - Target: CPU - Model: vgg16 - ms (lower is better): a = 52.11, b = 52.55
NCNN 20230517 - Target: CPU - Model: resnet18 - ms (lower is better): a = 9.75, b = 9.82
NCNN 20230517 - Target: CPU - Model: alexnet - ms (lower is better): a = 9.14, b = 9.23
NCNN 20230517 - Target: CPU - Model: resnet50 - ms (lower is better): a = 18.14, b = 17.98
NCNN 20230517 - Target: CPU - Model: yolov4-tiny - ms (lower is better): a = 24.32, b = 24.27
NCNN 20230517 - Target: CPU - Model: squeezenet_ssd - ms (lower is better): a = 11.93, b = 11.91
NCNN 20230517 - Target: CPU - Model: regnety_400m - ms (lower is better): a = 10.30, b = 10.14
NCNN 20230517 - Target: CPU - Model: vision_transformer - ms (lower is better): a = 70.77, b = 70.42
NCNN 20230517 - Target: CPU - Model: FastestDet - ms (lower is better): a = 5.11, b = 5.09
nekRS 23.0 - Input: Kershaw - flops/rank (higher is better): a = 1771410000, b = 1777340000
nekRS 23.0 - Input: TurboPipe Periodic - flops/rank (higher is better): a = 2989030000, b = 2989160000
Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream - items/sec (higher is better): a = 9.9426, b = 9.9514
Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream - ms/batch (lower is better): a = 603.41, b = 601.87
Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream - items/sec (higher is better): a = 8.1515, b = 8.1693
Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream - ms/batch (lower is better): a = 122.67, b = 122.40
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream - items/sec (higher is better): a = 237.57, b = 237.66
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream - ms/batch (lower is better): a = 25.23, b = 25.22
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream - items/sec (higher is better): a = 130.74, b = 131.22
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream - ms/batch (lower is better): a = 7.6440, b = 7.6158
Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream - items/sec (higher is better): a = 94.20, b = 93.88
Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream - ms/batch (lower is better): a = 63.68, b = 63.89
Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream - items/sec (higher is better): a = 51.76, b = 52.27
Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream - ms/batch (lower is better): a = 19.31, b = 19.13
Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream - items/sec (higher is better): a = 31.73, b = 31.57
Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream - ms/batch (lower is better): a = 189.08, b = 190.03
Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream - items/sec (higher is better): a = 19.82, b = 19.70
Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream - ms/batch (lower is better): a = 50.45, b = 50.76
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream - items/sec (higher is better): a = 125.01, b = 125.18
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream - ms/batch (lower is better): a = 47.97, b = 47.91
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream - items/sec (higher is better): a = 85.97, b = 85.88
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream - ms/batch (lower is better): a = 11.62, b = 11.64
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream - items/sec (higher is better): a = 737.24, b = 737.75
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream - ms/batch (lower is better): a = 8.1177, b = 8.1122
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream - items/sec (higher is better): a = 466.22, b = 467.13
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream - ms/batch (lower is better): a = 2.1424, b = 2.1383
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream - items/sec (higher is better): a = 55.58, b = 55.88
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream - ms/batch (lower is better): a = 107.87, b = 107.35
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream - items/sec (higher is better): a = 46.36, b = 46.50
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream - ms/batch (lower is better): a = 21.56, b = 21.50
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream - items/sec (higher is better): a = 12.68, b = 12.69
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream - ms/batch (lower is better): a = 473.31, b = 472.86
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream - items/sec (higher is better): a = 9.3034, b = 9.3103
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream - ms/batch (lower is better): a = 107.48, b = 107.40
Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream - items/sec (higher is better): a = 124.47, b = 124.73
Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream - ms/batch (lower is better): a = 48.18, b = 48.07
Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream - items/sec (higher is better): a = 85.52, b = 85.40
Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream - ms/batch (lower is better): a = 11.69, b = 11.70
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream - items/sec (higher is better): a = 56.45, b = 56.34
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream - ms/batch (lower is better): a = 106.27, b = 106.37
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream - items/sec (higher is better): a = 46.71, b = 46.68
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream - ms/batch (lower is better): a = 21.40, b = 21.42
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream - items/sec (higher is better): a = 86.32, b = 86.34
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream - ms/batch (lower is better): a = 69.49, b = 69.47
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream - items/sec (higher is better): a = 63.84, b = 64.03
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream - ms/batch (lower is better): a = 15.66, b = 15.61
Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream - items/sec (higher is better): a = 12.24, b = 12.31
Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream - ms/batch (lower is better): a = 490.27, b = 487.56
Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream - items/sec (higher is better): a = 11.21, b = 11.20
Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream - ms/batch (lower is better): a = 89.21, b = 89.24
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream - items/sec (higher is better): a = 126.61, b = 126.90
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream - ms/batch (lower is better): a = 47.32, b = 47.24
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream - items/sec (higher is better): a = 57.34, b = 57.54
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream - ms/batch (lower is better): a = 17.43, b = 17.37
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream - items/sec (higher is better): a = 43.76, b = 44.04
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream - ms/batch (lower is better): a = 137.09, b = 136.22
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream - items/sec (higher is better): a = 32.15, b = 32.32
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream - ms/batch (lower is better): a = 31.10, b = 30.94
Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream - items/sec (higher is better): a = 9.9597, b = 9.9696
Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream - ms/batch (lower is better): a = 602.37, b = 600.87
Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream - items/sec (higher is better): a = 8.1486, b = 8.1412
Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream - ms/batch (lower is better): a = 122.71, b = 122.83
nginx 1.23.2 - Connections: 100 - Requests Per Second (higher is better): a = 79156.02, b = 78192.19
nginx 1.23.2 - Connections: 200 - Requests Per Second (higher is better): a = 76304.80, b = 76041.37
nginx 1.23.2 - Connections: 500 - Requests Per Second (higher is better): a = 72766.53, b = 72403.86
nginx 1.23.2 - Connections: 1000 - Requests Per Second (higher is better): a = 63534.92, b = 62870.64
oneDNN 3.3 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU - ms (lower is better): a = 4.76396, b = 4.72080
oneDNN 3.3 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU - ms (lower is better): a = 10.64, b = 11.20
oneDNN 3.3 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU - ms (lower is better): a = 1.87701, b = 1.79957
oneDNN 3.3 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU - ms (lower is better): a = 0.893574, b = 0.931971
oneDNN 3.3 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU - ms (lower is better): no result recorded
oneDNN 3.3 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU - ms (lower is better): no result recorded
oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU - ms (lower is better): a = 22.35, b = 22.38
oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU - ms (lower is better): a = 7.40470, b = 7.51062
oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU - ms (lower is better): a = 5.39869, b = 5.37146
oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU - ms (lower is better): a = 23.94, b = 24.18
oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU - ms (lower is better): a = 2.47724, b = 2.46296
oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU - ms (lower is better): a = 3.44831, b = 3.47437
oneDNN 3.3 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU - ms (lower is better): a = 3936.51, b = 3965.21
oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU - ms (lower is better): a = 2418.33, b = 2414.21
oneDNN 3.3 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU - ms (lower is better): a = 3923.99, b = 3959.79
oneDNN 3.3 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU - ms (lower is better): no result recorded
oneDNN 3.3 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU - ms (lower is better): no result recorded
oneDNN 3.3 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU - ms (lower is better): no result recorded
oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU - ms (lower is better): a = 2410.82, b = 2405.08
oneDNN 3.3 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU - ms (lower is better): a = 3937.24, b = 3967.68
oneDNN 3.3 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU - ms (lower is better): a = 2409.49, b = 2412.87
OpenRadioss 2023.09.15 - Model: Bumper Beam - Seconds (lower is better): a = 131.78, b = 131.94
OpenRadioss 2023.09.15 - Model: Chrysler Neon 1M - Seconds (lower is better): a = 1612.17, b = 1610.50
OpenRadioss 2023.09.15 - Model: Cell Phone Drop Test - Seconds (lower is better): a = 98.05, b = 97.70
OpenRadioss 2023.09.15 - Model: Bird Strike on Windshield - Seconds (lower is better): a = 272.77, b = 272.42
OpenRadioss 2023.09.15 - Model: Rubber O-Ring Seal Installation - Seconds (lower is better): a = 134.16, b = 134.76
OpenRadioss 2023.09.15 - Model: INIVOL and Fluid Structure Interaction Drop Container - Seconds (lower is better): a = 600.03, b = 599.25
OpenVINO 2023.2.dev - Model: Face Detection FP16 - Device: CPU - FPS (higher is better): a = 3.14, b = 3.14
OpenVINO 2023.2.dev - Model: Face Detection FP16 - Device: CPU - ms (lower is better): a = 1907.31, b = 1894.17
OpenVINO 2023.2.dev - Model: Person Detection FP16 - Device: CPU - FPS (higher is better): a = 24.67, b = 24.42
OpenVINO 2023.2.dev - Model: Person Detection FP16 - Device: CPU - ms (lower is better): a = 242.85, b = 245.33
OpenVINO 2023.2.dev - Model: Person Detection FP32 - Device: CPU - FPS (higher is better): a = 24.7, b = 24.3
OpenVINO 2023.2.dev - Model: Person Detection FP32 - Device: CPU - ms (lower is better): a = 242.84, b = 246.67
OpenVINO 2023.2.dev - Model: Vehicle Detection FP16 - Device: CPU - FPS (higher is better): a = 150.52, b = 149.62
OpenVINO 2023.2.dev - Model: Vehicle Detection FP16 - Device: CPU - ms (lower is better): a = 39.83, b = 40.08
OpenVINO 2023.2.dev - Model: Face Detection FP16-INT8 - Device: CPU - FPS (higher is better): a = 4.3, b = 4.3
OpenVINO 2023.2.dev - Model: Face Detection FP16-INT8 - Device: CPU - ms (lower is better): a = 1389.42, b = 1388.86
OpenVINO 2023.2.dev - Model: Face Detection Retail FP16 - Device: CPU - FPS (higher is better): a = 674.92, b = 678.10
OpenVINO 2023.2.dev - Model: Face Detection Retail FP16 - Device: CPU - ms (lower is better): a = 8.87, b = 8.83
OpenVINO 2023.2.dev - Model: Road Segmentation ADAS FP16 - Device: CPU - FPS (higher is better): a = 39.08, b = 39.27
OpenVINO 2023.2.dev - Model: Road Segmentation ADAS FP16 - Device: CPU - ms (lower is better): a = 153.43, b = 152.68
OpenVINO 2023.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU - FPS (higher is better): a = 354.56, b = 354.06
OpenVINO 2023.2.dev - Model: Vehicle Detection FP16-INT8 - Device: CPU - ms (lower is better): a = 16.91, b = 16.93
  Model: Weld Porosity Detection FP16 - Device: CPU -- FPS (higher is better): a 302.35, b 303.21
  Model: Weld Porosity Detection FP16 - Device: CPU -- ms (lower is better): a 19.82, b 19.76
  Model: Face Detection Retail FP16-INT8 - Device: CPU -- FPS (higher is better): a 1065.97, b 1067.32
  Model: Face Detection Retail FP16-INT8 - Device: CPU -- ms (lower is better): a 5.62, b 5.61
  Model: Road Segmentation ADAS FP16-INT8 - Device: CPU -- FPS (higher is better): a 158.63, b 158.31
  Model: Road Segmentation ADAS FP16-INT8 - Device: CPU -- ms (lower is better): a 37.79, b 37.87
  Model: Machine Translation EN To DE FP16 - Device: CPU -- FPS (higher is better): a 34.38, b 34.87
  Model: Machine Translation EN To DE FP16 - Device: CPU -- ms (lower is better): a 174.33, b 171.89
  Model: Weld Porosity Detection FP16-INT8 - Device: CPU -- FPS (higher is better): a 424.82, b 425.10
  Model: Weld Porosity Detection FP16-INT8 - Device: CPU -- ms (lower is better): a 28.23, b 28.21
  Model: Person Vehicle Bike Detection FP16 - Device: CPU -- FPS (higher is better): a 328.80, b 325.14
  Model: Person Vehicle Bike Detection FP16 - Device: CPU -- ms (lower is better): a 18.23, b 18.44
  Model: Handwritten English Recognition FP16 - Device: CPU -- FPS (higher is better): a 129.11, b 130.15
  Model: Handwritten English Recognition FP16 - Device: CPU -- ms (lower is better): a 92.88, b 92.11
  Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU -- FPS (higher is better): a 9800.16, b 9776.02
  Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU -- ms (lower is better): a 1.22, b 1.22
  Model: Handwritten English Recognition FP16-INT8 - Device: CPU -- FPS (higher is better): a 134.26, b 134.30
  Model: Handwritten English Recognition FP16-INT8 - Device: CPU -- ms (lower is better): a 89.28, b 89.31
  Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU -- FPS (higher is better): a 14175.70, b 14139.29
  Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU -- ms (lower is better): a 0.84, b 0.84

OpenVKL 2.0.0
  Benchmark: vklBenchmarkCPU ISPC -- Items / Sec (higher is better): a 230, b 230
  Benchmark: vklBenchmarkCPU Scalar -- Items / Sec (higher is better): a 125, b 126

Opus Codec Encoding 1.4
  WAV To Opus Encode -- Seconds (lower is better): a 29.93, b 29.64

OSPRay 2.12
  Benchmark: particle_volume/ao/real_time -- Items Per Second (higher is better): a 3.83843, b 3.84180
  Benchmark: particle_volume/scivis/real_time -- Items Per Second (higher is better): a 3.79088, b 3.79143
  Benchmark: particle_volume/pathtracer/real_time -- Items Per Second (higher is better): a 111.96, b 110.47
  Benchmark: gravity_spheres_volume/dim_512/ao/real_time -- Items Per Second (higher is better): a 1.94041, b 1.95769
  Benchmark: gravity_spheres_volume/dim_512/scivis/real_time -- Items Per Second (higher is better): a 1.82533, b 1.83527
  Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time -- Items Per Second (higher is better): a 3.07954, b 3.07569

OSPRay Studio 0.13
  Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU -- ms (lower is better): a 10796, b 10809
  Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU -- ms (lower is better): a 11044, b 11022
  Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU -- ms (lower is better): a 12790, b 12792
  Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU -- ms (lower is better): a 178923, b 179435
  Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU -- ms (lower is better): a 354104, b 352945
  Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU -- ms (lower is better): a 182398, b 182424
  Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU -- ms (lower is better): a 359981, b 359095
  Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU -- ms (lower is better): a 210828, b 213159
  Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU -- ms (lower is better): a 415946, b 415202
  Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU -- ms (lower is better): a 2712, b 2718
  Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU -- ms (lower is better): a 2773, b 2762
  Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU -- ms (lower is better): a 3222, b 3213
  Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU -- ms (lower is better): a 49808, b 49490
  Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU -- ms (lower is better): a 92700, b 93216
  Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU -- ms (lower is better): a 50479, b 50427
  Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU -- ms (lower is better): a 94954, b 94957
  Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU -- ms (lower is better): a 57406, b 57662
  Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU -- ms (lower is better): a 108963, b 108921

Palabos 2.3
  Grid Size: 100 -- Mega Site Updates Per Second (higher is better): a 40.84, b 40.65

PostgreSQL 16
  Scaling Factor: 100 - Clients: 1000 - Mode: Read Only -- TPS (higher is better): a 492881, b 513618
  Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - Average Latency -- ms (lower is better): a 2.029, b 1.947
  Scaling Factor: 100 - Clients: 1000 - Mode: Read Write -- TPS (higher is better): a 9823, b 9833
  Scaling Factor: 100 - Clients: 1000 - Mode: Read Write - Average Latency -- ms (lower is better): a 101.80, b 101.70

QMCPACK 3.17.1
  Input: H4_ae -- Total Execution Time - Seconds (lower is better): a 24.37, b 23.60
  Input: Li2_STO_ae -- Total Execution Time - Seconds (lower is better): a 229.86, b 231.53
  Input: LiH_ae_MSD -- Total Execution Time - Seconds (lower is better): a 111.68, b 108.69
  Input: simple-H2O -- Total Execution Time - Seconds (lower is better): a 26.11, b 26.33
  Input: O_ae_pyscf_UHF -- Total Execution Time - Seconds (lower is better): a 185.19, b 186.52
  Input: FeCO6_b3lyp_gms -- Total Execution Time - Seconds (lower is better): a 169.28, b 169.43

QuantLib 1.32
  Configuration: Multi-Threaded -- MFLOPS (higher is better): a 42432.3, b 42491.7
  Configuration: Single-Threaded -- MFLOPS (higher is better): a 3064.1, b 3055.5

SQLite 3.41.2
  Threads / Copies: 1 -- Seconds (lower is better): a 15.16, b 15.27
  Threads / Copies: 2 -- Seconds (lower is better): a 26.11, b 43.17
  Threads / Copies: 4 -- Seconds (lower is better): a 31.24, b 66.30
  Threads / Copies: 8 -- Seconds (lower is better): a 46.35, b 153.75

srsRAN Project 23.5
  Test: Downlink Processor Benchmark -- Mbps (higher is better): a 824.7, b 800.9
  Test: PUSCH Processor Benchmark, Throughput Total -- Mbps (higher is better): a 2003.6, b 1999.6
  Test: PUSCH Processor Benchmark, Throughput Thread -- Mbps (higher is better): a 245.5, b 245.1

Stress-NG 0.16.04
  Test: Hash -- Bogo Ops/s (higher is better): a 2849944.32, b 2861814.33
  Test: MMAP -- Bogo Ops/s (higher is better): a 169.42, b 168.18
  Test: NUMA -- Bogo Ops/s (higher is better): a 134.28, b 134.69
  Test: Pipe -- Bogo Ops/s (higher is better): a 5085408.46, b 5137270.21
  Test: Poll -- Bogo Ops/s (higher is better): a 1274815.51, b 1269594.39
  Test: Zlib -- Bogo Ops/s (higher is better): a 1529.17, b 1536.76
  Test: Futex -- Bogo Ops/s (higher is better): a 2719437.33, b 2724951.63
  Test: MEMFD -- Bogo Ops/s (higher is better): a 167.93, b 173.10
  Test: Mutex -- Bogo Ops/s (higher is better): a 3757618.65, b 3763560.06
  Test: Atomic -- Bogo Ops/s (higher is better): a 546.82, b 550.80
  Test: Crypto -- Bogo Ops/s (higher is better): a 30063.86, b 30023.08
  Test: Malloc -- Bogo Ops/s (higher is better): a 6492227.26, b 6507116.41
  Test: Cloning -- Bogo Ops/s (higher is better): a 876.55, b 876.19
  Test: Forking -- Bogo Ops/s (higher is better): a 25858.27, b 26129.74
  Test: Pthread -- Bogo Ops/s (higher is better): a 112428.11, b 112860.19
  Test: AVL Tree -- Bogo Ops/s (higher is better): a 116.18, b 116.68
  Test: IO_uring -- Bogo Ops/s (higher is better): a 151644.87, b 147373.62
  Test: SENDFILE -- Bogo Ops/s (higher is better): a 127691.03, b 127965.64
  Test: CPU Cache -- Bogo Ops/s (higher is better): a 1265046.60, b 1276371.48
  Test: CPU Stress -- Bogo Ops/s (higher is better): a 31748.06, b 30131.49
  Test: Semaphores -- Bogo Ops/s (higher is better): a 14365053.84, b 14280745.55
  Test: Matrix Math -- Bogo Ops/s (higher is better): a 76846.98, b 77971.11
  Test: Vector Math -- Bogo Ops/s (higher is better): a 87181.78, b 87270.26
  Test: AVX-512 VNNI -- Bogo Ops/s (higher is better): a 527354.87, b 528272.84
  Test: Function Call -- Bogo Ops/s (higher is better): a 9373.74, b 9463.10
  Test: x86_64 RdRand -- Bogo Ops/s (higher is better): a 6422.15, b 6421.87
  Test: Floating Point -- Bogo Ops/s (higher is better): a 4282.52, b 4272.51
  Test: Matrix 3D Math -- Bogo Ops/s (higher is better): a 887.15, b 871.12
  Test: Memory Copying -- Bogo Ops/s (higher is better): a 3721.25, b 3725.74
  Test: Vector Shuffle -- Bogo Ops/s (higher is better): a 8782.71, b 8803.33
  Test: Mixed Scheduler -- Bogo Ops/s (higher is better): a 8305.74, b 8320.26
  Test: Socket Activity -- Bogo Ops/s (higher is better): a 7209.11, b 7240.74
  Test: Wide Vector Math -- Bogo Ops/s (higher is better): a 587699.43, b 584000.31
  Test: Context Switching -- Bogo Ops/s (higher is better): a 2670705.05, b 2690991.53
  Test: Fused Multiply-Add -- Bogo Ops/s (higher is better): a 13209967.50, b 13224332.05
  Test: Vector Floating Point -- Bogo Ops/s (higher is better): a 35283.23, b 35339.57
  Test: Glibc C String Functions -- Bogo Ops/s (higher is better): a 13058517.39, b 12751183.72
  Test: Glibc Qsort Data Sorting -- Bogo Ops/s (higher is better): a 370.88, b 370.05
  Test: System V Message Passing -- Bogo Ops/s (higher is better): a 8498512.20, b 8490199.38

SVT-AV1 1.7
  Encoder Mode: Preset 4 - Input: Bosphorus 4K -- Frames Per Second (higher is better): a 3.150, b 3.109
  Encoder Mode: Preset 8 - Input: Bosphorus 4K -- Frames Per Second (higher is better): a 43.19, b 43.19
  Encoder Mode: Preset 12 - Input: Bosphorus 4K -- Frames Per Second (higher is better): a 92.08, b 93.28
  Encoder Mode: Preset 13 - Input: Bosphorus 4K -- Frames Per Second (higher is better): a 92.64, b 91.62
  Encoder Mode: Preset 4 - Input: Bosphorus 1080p -- Frames Per Second (higher is better): a 8.875, b 8.897
  Encoder Mode: Preset 8 - Input: Bosphorus 1080p -- Frames Per Second (higher is better): a 80.61, b 79.87
  Encoder Mode: Preset 12 - Input: Bosphorus 1080p -- Frames Per Second (higher is better): a 352.04, b 350.35
  Encoder Mode: Preset 13 - Input: Bosphorus 1080p -- Frames Per Second (higher is better): a 397.45, b 394.02

TensorFlow 2.12
  Device: CPU - Batch Size: 16 - Model: ResNet-50 -- images/sec (higher is better): a 10.21, b 10.29
  Device: CPU - Batch Size: 32 - Model: ResNet-50 -- images/sec (higher is better): a 10.05, b 10.06

Timed GCC Compilation 13.2
  Time To Compile -- Seconds (lower is better): a 1095.02, b 1094.51

Timed Gem5 Compilation 23.0.1
  Time To Compile -- Seconds (lower is better): a 460.89, b 457.20

Timed Godot Game Engine Compilation 4.0
  Time To Compile -- Seconds (lower is better): a 281.55, b 281.71

Timed LLVM Compilation 16.0
  Build System: Ninja -- Seconds (lower is better): a 573.50, b 575.03
  Build System: Unix Makefiles -- Seconds (lower is better): a 591.05, b 600.28

Timed Node.js Compilation 19.8.1
  Time To Compile -- Seconds (lower is better): b 468.62 (no result recorded for a)

VVenC 1.9
  Video Input: Bosphorus 4K - Video Preset: Fast -- Frames Per Second (higher is better): a 4.422, b 4.415
  Video Input: Bosphorus 4K - Video Preset: Faster -- Frames Per Second (higher is better): a 9.018, b 9.051
  Video Input: Bosphorus 1080p - Video Preset: Fast -- Frames Per Second (higher is better): a 14.14, b 14.09
  Video Input: Bosphorus 1080p - Video Preset: Faster -- Frames Per Second (higher is better): a 28.39, b 28.41

Whisper.cpp 1.4
  Model: ggml-base.en - Input: 2016 State of the Union -- Seconds (lower is better): a 160.66, b 159.87
  Model: ggml-small.en - Input: 2016 State of the Union -- Seconds (lower is better): a 475.27, b 471.32
  Model: ggml-medium.en - Input: 2016 State of the Union -- Seconds (lower is better): a 1447.54, b 1472.87

Z3 Theorem Prover 4.12.1
  SMT File: 1.smt2 -- Seconds (lower is better): a 29.48, b 29.33
  SMT File: 2.smt2 -- Seconds (lower is better): a 75.48, b 76.66
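
For the write-up it may be easier to reduce these a/b pairs to percentage deltas than to eyeball the raw numbers. Below is a minimal Python sketch for doing that, assuming the results stay in the per-line "config -- unit (direction): a X, b Y" format used above; the regular expression and the deltas helper are illustrative and are not part of the Phoronix Test Suite.

import re

# Matches result lines of the form used above, e.g.
#   Test: CPU Stress -- Bogo Ops/s (higher is better): a 31748.06, b 30131.49
RESULT = re.compile(
    r"^\s*(?P<config>.+?) -- (?P<unit>.+?) "
    r"\((?P<direction>higher|lower) is better\): "
    r"a (?P<a>[0-9.]+), b (?P<b>[0-9.]+)\s*$"
)

def deltas(lines):
    """Yield (config, unit, percent change of b relative to a, b_is_better)."""
    for line in lines:
        m = RESULT.match(line)
        if m is None:
            continue  # suite headers, blank separators, entries with no recorded result
        a, b = float(m["a"]), float(m["b"])
        if a == 0:
            continue  # avoid division by zero on a degenerate result
        pct = (b - a) / a * 100.0
        b_is_better = pct > 0 if m["direction"] == "higher" else pct < 0
        yield m["config"], m["unit"], pct, b_is_better

if __name__ == "__main__":
    # Two lines copied from the results above as a usage example.
    sample = [
        "  Test: CPU Stress -- Bogo Ops/s (higher is better): a 31748.06, b 30131.49",
        "  Threads / Copies: 8 -- Seconds (lower is better): a 46.35, b 153.75",
    ]
    for config, unit, pct, b_is_better in deltas(sample):
        verdict = "b ahead" if b_is_better else "b behind"
        print(f"{config}: {pct:+.2f}% ({unit}, {verdict})")

Run against the full listing, this flags the SQLite Threads / Copies runs as by far the largest a-to-b swings in this section; most other results land within a few percent of each other.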