new xeon
Intel Xeon Gold 6421N testing with a Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS) and ASPEED on Ubuntu 22.04 via the Phoronix Test Suite.

a, b (two runs of an identical configuration):
  Processor: Intel Xeon Gold 6421N @ 3.60GHz (32 Cores / 64 Threads), Motherboard: Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS), Chipset: Intel Device 1bce, Memory: 512GB, Disk: 3 x 3841GB Micron_9300_MTFDHAL3T8TDP, Graphics: ASPEED, Monitor: VGA HDMI, Network: 4 x Intel E810-C for QSFP
  OS: Ubuntu 22.04, Kernel: 5.15.0-47-generic (x86_64), Desktop: GNOME Shell 42.4, Display Server: X Server 1.21.1.3, Vulkan: 1.2.204, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 1600x1200

OpenFOAM 10 - Input: drivaerFastback, Medium Mesh Size - Execution Time (Seconds; lower is better): a = 615.99, b = 615.46
OpenFOAM 10 - Input: drivaerFastback, Medium Mesh Size - Mesh Time (Seconds; lower is better): a = 144.70, b = 144.94
BRL-CAD 7.36 - VGR Performance Metric (higher is better): a = 466686
Blender 3.6 - Blend File: Barbershop - Compute: CPU-Only (Seconds; lower is better): a = 493.45, b = 493.61
Timed Linux Kernel Compilation 6.1 - Build: allmodconfig (Seconds; lower is better): a = 445.39, b = 445.38
Timed LLVM Compilation 16.0 - Build System: Unix Makefiles (Seconds; lower is better): a = 323.86, b = 319.85
High Performance Conjugate Gradient 3.1 - X Y Z: 160 160 160 - RT: 60 (GFLOP/s; higher is better): a = 27.51, b = 27.40
libxsmm 2-1.17-3645 - M N K: 128 (GFLOPS/s; higher is better): a = 1211.8, b = 1225.0
Timed LLVM Compilation 16.0 - Build System: Ninja (Seconds; lower is better): a = 263.15, b = 262.88
High Performance Conjugate Gradient 3.1 - X Y Z: 144 144 144 - RT: 60 (GFLOP/s; higher is better): a = 27.42, b = 27.39
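Run a and run b track each other closely across most of the results above. Where a quick figure for the run-to-run spread is wanted, the relative difference between the two runs can be computed directly from the values in this file; a minimal Python sketch (the two sample values are copied from the OpenFOAM and Timed LLVM Compilation results above, and the helper name is illustrative):

  # Relative difference of run b versus run a, as a percentage of run a.
  def rel_diff_pct(a, b):
      return (b - a) / a * 100.0

  results = {
      # test (seconds; lower is better): (run a, run b)
      "OpenFOAM 10, drivaerFastback Medium, Execution Time": (615.99, 615.46),
      "Timed LLVM Compilation 16.0, Unix Makefiles":         (323.86, 319.85),
  }

  for name, (a, b) in results.items():
      print(f"{name}: {rel_diff_pct(a, b):+.2f}%")
  # Prints roughly -0.09% and -1.24%, i.e. run b finished slightly faster.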
Blender 3.6 - Blend File: Pabellon Barcelona - Compute: CPU-Only (Seconds; lower is better): a = 159.94
Laghos 3.1 - Test: Sedov Blast Wave, cube_922_hex.mesh (Major Kernels Total Rate; higher is better): a = 216.86, b = 217.19
High Performance Conjugate Gradient 3.1 - X Y Z: 104 104 104 - RT: 60 (GFLOP/s; higher is better): a = 27.78, b = 27.84
libxsmm 2-1.17-3645 - M N K: 256 (GFLOPS/s; higher is better): a = 879.6, b = 758.9
Blender 3.6 - Blend File: Classroom - Compute: CPU-Only (Seconds; lower is better): a = 127.78, b = 127.76
Apache Cassandra 4.1.3 - Test: Writes (Op/s; higher is better): a = 155626
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (ms/batch; lower is better): a = 453.48, b = 428.67
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (items/sec; higher is better): a = 35.15, b = 37.33
VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Fast (Frames Per Second; higher is better): a = 5.842, b = 5.917
OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Execution Time (Seconds; lower is better): a = 67.71, b = 67.56
OpenFOAM 10 - Input: drivaerFastback, Small Mesh Size - Mesh Time (Seconds; lower is better): a = 27.97, b = 27.95
Palabos 2.3 - Grid Size: 100 (Mega Site Updates Per Second; higher is better): a = 235.19, b = 234.87
Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10 (Ops/sec; higher is better): a = 2447092.01, b = 2304730.19
Palabos 2.3 - Grid Size: 400 (Mega Site Updates Per Second; higher is better): a = 287.27, b = 285.76
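For the Neural Magic DeepSparse asynchronous multi-stream results, the ms/batch and items/sec lines for the same model are two views of one measurement: throughput is the number of concurrently in-flight items divided by the per-batch latency. A small consistency check in Python, assuming roughly 16 in-flight items (that figure is inferred from the numbers above, not stated anywhere in the result file):

  # Throughput implied by a per-batch latency when `in_flight` items
  # (streams x batch size) are processed concurrently.
  def implied_throughput(latency_ms, in_flight=16):
      return in_flight / (latency_ms / 1000.0)

  # BERT-Large, NLP Question Answering, asynchronous multi-stream (runs a and b)
  for run, latency_ms, reported in (("a", 453.48, 35.15), ("b", 428.67, 37.33)):
      print(run, round(implied_throughput(latency_ms), 2), "items/sec vs reported", reported)
  # a 35.28 items/sec vs reported 35.15
  # b 37.32 items/sec vs reported 37.33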
Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10: no result recorded
Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5: no result recorded
Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5 (Ops/sec; higher is better): a = 2285996.17, b = 2227152.02
Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 (Average Latency; higher is better): a = 68.34, b = 68.01
Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 (point/sec; higher is better): a = 67607191.64, b = 65935725.67
Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5 (Ops/sec; higher is better): a = 2211638.65, b = 2217192.12
Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10 (Ops/sec; higher is better): a = 2316281.26, b = 2293467.62
Palabos 2.3 - Grid Size: 500 (Mega Site Updates Per Second; higher is better): a = 300.28, b = 300.86
Blender 3.6 - Blend File: Fishy Cat - Compute: CPU-Only (Seconds; lower is better): a = 64.07, b = 64.01
Laghos 3.1 - Test: Triple Point Problem (Major Kernels Total Rate; higher is better): a = 177.78, b = 176.92
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch; lower is better): a = 31.68, b = 31.65
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec; higher is better): a = 504.61, b = 505.13
VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Faster (Frames Per Second; higher is better): a = 11.02, b = 10.99
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch; lower is better): a = 14.86, b = 14.85
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec; higher is better): a = 1074.82, b = 1075.96
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch; lower is better): a = 116.38, b = 111.50
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec; higher is better): a = 137.38, b = 143.44
Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch; lower is better): a = 460.78, b = 460.76
Neural Magic DeepSparse 1.5 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec; higher is better): a = 34.53, b = 34.55
Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch; lower is better): a = 468.80, b = 460.67
Neural Magic DeepSparse 1.5 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec; higher is better): a = 33.94, b = 34.54
Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (ms/batch; lower is better): a = 345.15, b = 343.52
Neural Magic DeepSparse 1.5 - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream (items/sec; higher is better): a = 46.33, b = 46.55
Blender 3.6 - Blend File: BMW27 - Compute: CPU-Only (Seconds; lower is better): a = 47.15, b = 47.22
HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: c2c - Backend: Stock - Precision: double - X Y Z: 512 (GFLOP/s; higher is better): a = 40.74, b = 40.66
HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: c2c - Backend: FFTW - Precision: double - X Y Z: 512 (GFLOP/s; higher is better): a = 43.97, b = 44.01
Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 (Average Latency; higher is better): a = 101.25, b = 98.87
Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 (point/sec; higher is better): a = 45677447.24, b = 46726912.46
Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch; lower is better): a = 131.45, b = 131.07
Neural Magic DeepSparse 1.5 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec; higher is better): a = 121.69, b = 122.04
Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (ms/batch; lower is better): a = 40.91, b = 40.81
Neural Magic DeepSparse 1.5 - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream (items/sec; higher is better): a = 390.91, b = 391.91
Timed PHP Compilation 8.1.9 - Time To Compile (Seconds; lower is better): a = 42.35, b = 42.38
Timed GDB GNU Debugger Compilation 10.2 - Time To Compile (Seconds; lower is better): a = 41.91, b = 42.01
Timed Linux Kernel Compilation 6.1 - Build: defconfig (Seconds; lower is better): a = 40.44, b = 40.45
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch; lower is better): a = 54.07, b = 53.33
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec; higher is better): a = 295.83, b = 299.93
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch; lower is better): a = 76.58, b = 75.72
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec; higher is better): a = 208.85, b = 211.23
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch; lower is better): a = 76.56, b = 76.47
Neural Magic DeepSparse 1.5 - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec; higher is better): a = 208.90, b = 208.99
VVenC 1.9 - Video Input: Bosphorus 1080p - Video Preset: Fast (Frames Per Second; higher is better): a = 16.10, b = 16.25
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (ms/batch; lower is better): a = 33.33, b = 33.28
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream (items/sec; higher is better): a = 479.79, b = 480.52
Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch; lower is better): a = 33.39, b = 33.37
Neural Magic DeepSparse 1.5 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec; higher is better): a = 478.91, b = 479.22
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch; lower is better): a = 4.9416, b = 4.9312
Neural Magic DeepSparse 1.5 - Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec; higher is better): a = 3227.10, b = 3233.96
Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 (Average Latency; higher is better): a = 31.58, b = 31.69
Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 (point/sec; higher is better): a = 56894390.61, b = 56137174.70
srsRAN Project 23.5 - Test: PUSCH Processor Benchmark, Throughput Total (Mbps; higher is better): a = 5372.9, b = 5543.7
Stress-NG 0.15.10 - Test: IO_uring (Bogo Ops/s; higher is better): a = 1529665.98, b = 1503623.79
Stress-NG 0.15.10 - Test: Atomic (Bogo Ops/s; higher is better): a = 133.83, b = 132.61
Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 (Average Latency; higher is better): a = 22.97, b = 21.63
Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 (point/sec; higher is better): a = 1916642.90, b = 2009050.46
Stress-NG 0.15.10 - Test: CPU Cache (Bogo Ops/s; higher is better): a = 1537111.20, b = 1885833.11
Stress-NG 0.15.10 - Test: MMAP (Bogo Ops/s; higher is better): a = 861.28, b = 856.14
Stress-NG 0.15.10 - Test: Cloning (Bogo Ops/s; higher is better): a = 9740.57, b = 9326.09
Stress-NG 0.15.10 - Test: Malloc (Bogo Ops/s; higher is better): a = 99373474.31, b = 99251227.28
Stress-NG 0.15.10 - Test: MEMFD (Bogo Ops/s; higher is better): a = 549.94, b = 549.55
Stress-NG 0.15.10 - Test: Zlib (Bogo Ops/s; higher is better): a = 2647.81, b = 2648.81
Stress-NG 0.15.10 - Test: Glibc Qsort Data Sorting (Bogo Ops/s; higher is better): a = 696.65, b = 696.92
Stress-NG 0.15.10 - Test: Fused Multiply-Add (Bogo Ops/s; higher is better): a = 34197705.63, b = 34050669.23
Stress-NG 0.15.10 - Test: Pthread (Bogo Ops/s; higher is better): a = 136846.01, b = 136709.81
Stress-NG 0.15.10 - Test: System V Message Passing (Bogo Ops/s; higher is better): a = 5852281.71, b = 5854201.78
Stress-NG 0.15.10 - Test: Hash (Bogo Ops/s; higher is better): a = 5577252.32, b = 5583978.14
Stress-NG 0.15.10 - Test: Vector Math (Bogo Ops/s; higher is better): a = 151386.31, b = 151431.15
Liquid-DSP 1.6 - Threads: 64 - Buffer Length: 256 - Filter Length: 512 (samples/s; higher is better): a = 513135000, b = 513040000
Stress-NG 0.15.10 - Test: Futex (Bogo Ops/s; higher is better): a = 1541676.36, b = 1492979.46
Stress-NG 0.15.10 - Test: Socket Activity (Bogo Ops/s; higher is better): a = 24947.14, b = 25282.31
Stress-NG 0.15.10 - Test: Vector Shuffle (Bogo Ops/s; higher is better): a = 167204.21, b = 167202.07
Stress-NG 0.15.10 - Test: Matrix 3D Math (Bogo Ops/s; higher is better): a = 9599.93, b = 9605.30
Stress-NG 0.15.10 - Test: NUMA (Bogo Ops/s; higher is better): a = 390.87, b = 392.08
Stress-NG 0.15.10 - Test: Vector Floating Point (Bogo Ops/s; higher is better): a = 58243.38, b = 58232.70
Stress-NG 0.15.10 - Test: Pipe (Bogo Ops/s; higher is better): a = 35837711.85, b = 36852791.12
Stress-NG 0.15.10 - Test: Wide Vector Math (Bogo Ops/s; higher is better): a = 1745029.27, b = 1750003.43
Stress-NG 0.15.10 - Test: x86_64 RdRand (Bogo Ops/s; higher is better): a = 331416.52, b = 331423.04
Stress-NG 0.15.10 - Test: AVL Tree (Bogo Ops/s; higher is better): a = 294.26, b = 294.66
Stress-NG 0.15.10 - Test: Forking (Bogo Ops/s; higher is better): a = 89918.21, b = 89966.29
Stress-NG 0.15.10 - Test: CPU Stress (Bogo Ops/s; higher is better): a = 64111.11, b = 64118.87
Stress-NG 0.15.10 - Test: Glibc C String Functions (Bogo Ops/s; higher is better): a = 26067360.60, b = 26125214.84
Stress-NG 0.15.10 - Test: Function Call (Bogo Ops/s; higher is better): a = 22028.03, b = 22106.49
Stress-NG 0.15.10 - Test: Matrix Math (Bogo Ops/s; higher is better): a = 160653.44, b = 156668.43
Stress-NG 0.15.10 - Test: SENDFILE (Bogo Ops/s; higher is better): a = 582724.63, b = 598173.56
Stress-NG 0.15.10 - Test: Crypto (Bogo Ops/s; higher is better): a = 50240.09, b = 50243.48
Stress-NG 0.15.10 - Test: Mutex (Bogo Ops/s; higher is better): a = 15147444.51, b = 15192892.59
Stress-NG 0.15.10 - Test: Context Switching (Bogo Ops/s; higher is better): a = 2572801.75, b = 2571092.69
Liquid-DSP 1.6 - Threads: 32 - Buffer Length: 256 - Filter Length: 512 (samples/s; higher is better): a = 383555000, b = 378650000
Stress-NG 0.15.10 - Test: Floating Point (Bogo Ops/s; higher is better): a = 10587.48, b = 10601.10
Stress-NG 0.15.10 - Test: Memory Copying (Bogo Ops/s; higher is better): a = 7176.19, b = 7180.43
Stress-NG 0.15.10 - Test: Semaphores (Bogo Ops/s; higher is better): a = 62126446.21, b = 61651485.43
Stress-NG 0.15.10 - Test: Poll (Bogo Ops/s; higher is better): a = 3669281.69, b = 3671617.97
Liquid-DSP 1.6 - Threads: 16 - Buffer Length: 256 - Filter Length: 512 (samples/s; higher is better): a = 243940000, b = 248820000
Liquid-DSP 1.6 - Threads: 64 - Buffer Length: 256 - Filter Length: 57 (samples/s; higher is better): a = 1728850000, b = 1733700000
Liquid-DSP 1.6 - Threads: 64 - Buffer Length: 256 - Filter Length: 32 (samples/s; higher is better): a = 1577300000, b = 1576850000
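The Liquid-DSP results at a filter length of 512 cover 16, 32, and 64 threads, so thread-scaling behaviour can be read straight off them; a short Python sketch over the run a values (the scaling-efficiency figure is an illustration, not something the benchmark itself reports):

  # Run a, Liquid-DSP 1.6, buffer length 256, filter length 512.
  samples_per_sec = {16: 243_940_000, 32: 383_555_000, 64: 513_135_000}

  base_threads = 16
  base = samples_per_sec[base_threads]
  for threads, rate in samples_per_sec.items():
      speedup = rate / base
      efficiency = speedup / (threads / base_threads)
      print(f"{threads} threads: {speedup:.2f}x speedup, {efficiency:.0%} scaling efficiency")
  # 16 threads: 1.00x, 100%; 32 threads: 1.57x, ~79%; 64 threads: 2.10x, ~53%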
Liquid-DSP 1.6 - Threads: 32 - Buffer Length: 256 - Filter Length: 57 (samples/s; higher is better): a = 1328100000, b = 1323900000
Liquid-DSP 1.6 - Threads: 32 - Buffer Length: 256 - Filter Length: 32 (samples/s; higher is better): a = 847085000, b = 847675000
Liquid-DSP 1.6 - Threads: 16 - Buffer Length: 256 - Filter Length: 57 (samples/s; higher is better): a = 848435000, b = 862195000
Liquid-DSP 1.6 - Threads: 16 - Buffer Length: 256 - Filter Length: 32 (samples/s; higher is better): a = 557945000, b = 558655000
HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: c2c - Backend: Stock - Precision: float - X Y Z: 512 (GFLOP/s; higher is better): a = 72.56, b = 72.54
HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: r2c - Backend: FFTW - Precision: double - X Y Z: 512 (GFLOP/s; higher is better): a = 74.47, b = 74.71
HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: r2c - Backend: Stock - Precision: double - X Y Z: 512 (GFLOP/s; higher is better): a = 76.61, b = 76.60
Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 (Average Latency; higher is better): a = 69.08, b = 73.56
Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 (point/sec; higher is better): a = 59041436.64, b = 56018457.87
HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: c2c - Backend: FFTW - Precision: float - X Y Z: 512 (GFLOP/s; higher is better): a = 78.83, b = 78.96
Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 (Average Latency; higher is better): a = 29.54, b = 31.63
Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 (point/sec; higher is better): a = 54224351.10, b = 51199962.11
Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500 (Average Latency; higher is better): a = 26.29, b = 26.64
Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500 (point/sec; higher is better): a = 1505080.34, b = 1469808.89
Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200 (Average Latency; higher is better): a = 9.49, b = 9.87
Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200 (point/sec; higher is better): a = 1576432.25, b = 1521587.40
VVenC 1.9 - Video Input: Bosphorus 1080p - Video Preset: Faster (Frames Per Second; higher is better): a = 30.95, b = 30.93
srsRAN Project 23.5 - Test: Downlink Processor Benchmark (Mbps; higher is better): a = 705.8, b = 710.9
Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 (Average Latency; higher is better): a = 31.83, b = 43.86
Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 (point/sec; higher is better): a = 43074031.84, b = 34191814.86
Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500 (Average Latency; higher is better): a = 28.27, b = 28.45
Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500 (point/sec; higher is better): a = 1191500.88, b = 1185338.02
Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200 (Average Latency; higher is better): a = 11.86, b = 12.18
Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200 (point/sec; higher is better): a = 1045806.81, b = 1042859.03
Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200 (Average Latency; higher is better): a = 14.58, b = 14.98
Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200 (point/sec; higher is better): a = 710382.44, b = 697217.55
libxsmm 2-1.17-3645 - M N K: 64 (GFLOPS/s; higher is better): a = 833.8, b = 839.9
HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: r2c - Backend: Stock - Precision: float - X Y Z: 512 (GFLOP/s; higher is better): a = 137.54, b = 137.74
HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: r2c - Backend: FFTW - Precision: float - X Y Z: 512 (GFLOP/s; higher is better): a = 141.41, b = 141.19
libxsmm 2-1.17-3645 - M N K: 32 (GFLOPS/s; higher is better): a = 440.0, b = 444.6
srsRAN Project 23.5 - Test: PUSCH Processor Benchmark, Throughput Thread (Mbps; higher is better): a = 240.4, b = 236.3
HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: c2c - Backend: FFTW - Precision: double - X Y Z: 256 (GFLOP/s; higher is better): a = 38.93, b = 38.52
HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: c2c - Backend: Stock - Precision: double - X Y Z: 256 (GFLOP/s; higher is better): a = 38.96, b = 38.68
HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: r2c - Backend: FFTW - Precision: double - X Y Z: 256 (GFLOP/s; higher is better): a = 72.29, b = 72.20
HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: c2c - Backend: Stock - Precision: float - X Y Z: 256 (GFLOP/s; higher is better): a = 75.09, b = 74.93
HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: c2c - Backend: FFTW - Precision: float - X Y Z: 256 (GFLOP/s; higher is better): a = 76.03, b = 75.30
HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: r2c - Backend: Stock - Precision: double - X Y Z: 256 (GFLOP/s; higher is better): a = 76.90, b = 77.03
HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: r2c - Backend: FFTW - Precision: float - X Y Z: 256 (GFLOP/s; higher is better): a = 149.83, b = 154.05
HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: r2c - Backend: Stock - Precision: float - X Y Z: 256 (GFLOP/s; higher is better): a = 157.87, b = 164.05
HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: c2c - Backend: Stock - Precision: double - X Y Z: 128 (GFLOP/s; higher is better): a = 46.64, b = 49.52
HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: c2c - Backend: FFTW - Precision: double - X Y Z: 128 (GFLOP/s; higher is better): a = 64.43, b = 62.30
HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: c2c - Backend: Stock - Precision: float - X Y Z: 128 (GFLOP/s; higher is better): a = 85.74, b = 85.49
HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: r2c - Backend: Stock - Precision: double - X Y Z: 128 (GFLOP/s; higher is better): a = 92.40, b = 90.99
HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: c2c - Backend: FFTW - Precision: float - X Y Z: 128 (GFLOP/s; higher is better): a = 131.66, b = 130.98
HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: r2c - Backend: FFTW - Precision: double - X Y Z: 128 (GFLOP/s; higher is better): a = 121.79, b = 122.46
HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: r2c - Backend: Stock - Precision: float - X Y Z: 128 (GFLOP/s; higher is better): a = 149.94, b = 151.80
HeFFTe - Highly Efficient FFT for Exascale 2.3 - Test: r2c - Backend: FFTW - Precision: float - X Y Z: 128 (GFLOP/s; higher is better): a = 207.24, b = 206.22
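The result file above stops at the individual tests and gives no overall summary. When a single figure is wanted, per-test ratios between the runs are usually combined with a geometric mean so that results of very different magnitudes carry equal weight; a minimal Python sketch over the three results with the largest spreads (the summary itself is an illustration and is not part of the original data):

  from math import prod

  # Ratio of run b to run a for three higher-is-better results above.
  ratios = [
      758.9 / 879.6,               # libxsmm, M N K: 256 (GFLOPS/s)
      1885833.11 / 1537111.20,     # Stress-NG, CPU Cache (Bogo Ops/s)
      34191814.86 / 43074031.84,   # Apache IoTDB, Device Count 100 - Batch 100 - Sensors 200 (point/sec)
  ]

  geomean = prod(ratios) ** (1.0 / len(ratios))
  print(f"geometric mean of b/a ratios: {geomean:.3f}")  # roughly 0.94 over these three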