3970x sep 2023

AMD Ryzen Threadripper 3970X 32-Core testing with an ASUS ROG ZENITH II EXTREME (1603 BIOS) and AMD Radeon RX 5700 8GB on Ubuntu 22.04 via the Phoronix Test Suite. All results below were recorded on this single system, identified as "a":

a:
  Processor:          AMD Ryzen Threadripper 3970X 32-Core @ 3.70GHz (32 Cores / 64 Threads)
  Motherboard:        ASUS ROG ZENITH II EXTREME (1603 BIOS)
  Chipset:            AMD Starship/Matisse
  Memory:             64GB
  Disk:               Samsung SSD 980 PRO 500GB
  Graphics:           AMD Radeon RX 5700 8GB (1750/875MHz)
  Audio:              AMD Navi 10 HDMI Audio
  Monitor:            ASUS VP28U
  Network:            Aquantia AQC107 NBase-T/IEEE + Intel I211 + Intel Wi-Fi 6 AX200
  OS:                 Ubuntu 22.04
  Kernel:             5.19.0-051900rc7-generic (x86_64)
  Desktop:            GNOME Shell 42.2
  Display Server:     X Server + Wayland
  OpenGL:             4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.47)
  Vulkan:             1.2.204
  Compiler:           GCC 11.4.0
  File-System:        ext4
  Screen Resolution:  3840x2160

AOM AV1 3.7 (Frames Per Second > Higher Is Better)
  The following configurations are listed in the export without recorded results:
  Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K
  Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K
  Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K
  Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K
  Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K
  Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K
  Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K
  Encoder Mode: Speed 11 Realtime - Input: Bosphorus 4K
  Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p
  Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p
  Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p
  Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p
  Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p
  Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p
  Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p
  Encoder Mode: Speed 11 Realtime - Input: Bosphorus 1080p

Apache Cassandra 4.1.3 (Op/s > Higher Is Better)
  Test: Writes: 263638

Apache CouchDB 3.3.2 (Seconds < Lower Is Better)
  Bulk Size: 100 - Inserts: 1000 - Rounds: 30: 147.60
  Bulk Size: 100 - Inserts: 3000 - Rounds: 30: 485.93
  Bulk Size: 300 - Inserts: 1000 - Rounds: 30: 279.30
  Bulk Size: 300 - Inserts: 3000 - Rounds: 30: 872.94
  Bulk Size: 500 - Inserts: 1000 - Rounds: 30: 405.37
  Bulk Size: 500 - Inserts: 3000 - Rounds: 30: 1252.19
Apache Hadoop 3.3.6 (Ops per sec > Higher Is Better)
  Operation    Files       Threads: 20   Threads: 50   Threads: 100
  Open         100000      436681        420168        502513
  Open         1000000     1226994       113714        284333
  Open         10000000    89635         158995        153128
  Create       100000      3567          8593          15373
  Create       1000000     3685          8839          16260
  Create       10000000    3752          8861          16006
  Delete       100000      3735          9132          17599
  Delete       1000000     3712          11088         18208
  Delete       10000000    3926          9635          18086
  Rename       100000      3571          8203          26157
  Rename       1000000     3733          8910          16064
  Rename       10000000    3789          8889          15992
  File Status  100000      149925        724638        396825
  File Status  1000000     1960784       2288330       2074689
  File Status  10000000    489237        307475        1490535
Apache IoTDB 1.2 (point/sec > Higher Is Better; Average Latency < Lower Is Better)
  Device Count  Batch Size Per Write  Sensor Count  Client Number  point/sec   Average Latency
  100           1                     200           100            139562      117.16
  100           1                     200           400            (no result reported)
  100           1                     500           100            344892      115.63
  100           1                     500           400            (no result reported)
  100           1                     800           100            546770      122.28
  100           1                     800           400            (no result reported)
  200           1                     200           100            266795      65.05
  200           1                     200           400            (no result reported)
  200           1                     500           100            675295      65.67
  200           1                     500           400            (no result reported)
  200           1                     800           100            1044528     69.25
  200           1                     800           400            (no result reported)
  500           1                     200           100            650057      28.32
  500           1                     200           400            631664      109.8
  500           1                     500           100            1477700     30.83
  500           1                     500           400            1520487     115.25
  500           1                     800           100            2294842     32.26
  500           1                     800           400            2307967     120.74
  800           1                     200           100            952128      19.61
  800           1                     200           400            953899      73
  800           1                     500           100            2250548     20.78
  800           1                     500           400            2292358     78.33
  800           1                     800           100            3256331     23.05
  800           1                     800           400            3333342     87.3
  100           100                   200           100            12720025    128.15
  100           100                   200           400            (no result reported)
  100           100                   500           100            27615777    147.22
  100           100                   500           400            (no result reported)
  100           100                   800           100            39801968    157.62
  100           100                   800           400            (no result reported)
  200           100                   200           100            22807204    77.25
  200           100                   200           400            (no result reported)
  200           100                   500           100            44916090    98.48
  200           100                   500           400            (no result reported)
  200           100                   800           100            60748076    112.37
  200           100                   800           400            (no result reported)
  500           100                   200           100            43282315    42.19
  500           100                   200           400            41758329    160.29
  500           100                   500           100            69052926    65.93
  500           100                   500           400            66302152    248.51
  500           100                   800           100            79688165    93.94
  500           100                   800           400            77131211    339.42
  800           100                   200           100            53756801    34.76
  800           100                   200           400            53828947    130.54
  800           100                   500           100            77801658    60.12
  800           100                   500           400            78451272    233.73
  800           100                   800           100            79844854    95.15
  800           100                   800           400            81231176    355.25

BRL-CAD 7.36 (VGR Performance Metric > Higher Is Better)
  VGR Performance Metric: 537121

libavif avifenc 1.0 (Seconds < Lower Is Better)
  Encoder Speed: 0: 79.61
  Encoder Speed: 2: 42.35
  Encoder Speed: 6: 3.333
  Encoder Speed: 6, Lossless: 6.746
  Encoder Speed: 10, Lossless: 4.882
NCNN 20230517 (ms < Lower Is Better)
  Target: CPU - Model: mobilenet: 14.02
  Target: CPU-v2-v2 - Model: mobilenet-v2: 5.52
  Target: CPU-v3-v3 - Model: mobilenet-v3: 5.21
  Target: CPU - Model: shufflenet-v2: 7.05
  Target: CPU - Model: mnasnet: 4.97
  Target: CPU - Model: efficientnet-b0: 8.07
  Target: CPU - Model: blazeface: 2.86
  Target: CPU - Model: googlenet: 14.2
  Target: CPU - Model: vgg16: 29.24
  Target: CPU - Model: resnet18: 9.69
  Target: CPU - Model: alexnet: 7.74
  Target: CPU - Model: resnet50: 17.13
  Target: CPU - Model: yolov4-tiny: 23.28
  Target: CPU - Model: squeezenet_ssd: 12.43
  Target: CPU - Model: regnety_400m: 19.01
  Target: CPU - Model: vision_transformer: 52.81
  Target: CPU - Model: FastestDet: 7.61

Neural Magic DeepSparse 1.5 - Scenario: Asynchronous Multi-Stream (items/sec > Higher Is Better; ms/batch < Lower Is Better)
  Model                                                              items/sec   ms/batch
  NLP Document Classification, oBERT base uncased on IMDB            28.94       550.25
  NLP Text Classification, BERT base uncased SST2, Sparse INT8       647.55      24.68
  NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased     255.12      62.68
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90   84.20       189.99
  ResNet-50, Baseline                                                320.85      49.85
  ResNet-50, Sparse INT8                                             1987.82     8.026
  CV Detection, YOLOv5s COCO                                         152.61      104.69
  BERT-Large, NLP Question Answering                                 34.11       469.02
  CV Classification, ResNet-50 ImageNet                              322.54      49.58
  CV Detection, YOLOv5s COCO, Sparse INT8                            156.91      101.88
  NLP Text Classification, DistilBERT mnli                           234.69      68.15
  CV Segmentation, 90% Pruned YOLACT Pruned                          32.35       490.66
  BERT-Large, NLP Question Answering, Sparse INT8                    347.30      45.99
  NLP Text Classification, BERT base uncased SST2                    123.00      129.90
  NLP Token Classification, BERT base uncased conll2003              28.81       553.19
OpenRadioss 2023.09.15 (Seconds < Lower Is Better)
  Model: Bumper Beam: 94.37
  Model: Chrysler Neon 1M: 547.81
  Model: Cell Phone Drop Test: 42.11
  Model: Bird Strike on Windshield: 139.13
  Model: Rubber O-Ring Seal Installation: 66.03
  Model: INIVOL and Fluid Structure Interaction Drop Container: 216.49

PostgreSQL 16 (TPS > Higher Is Better; Average Latency in ms < Lower Is Better)
  Scaling Factor  Clients  Mode        TPS       Average Latency (ms)
  1               100      Read Only   1540137   0.065
  1               250      Read Only   1576869   0.159
  1               500      Read Only   1567522   0.319
  1               800      Read Only   1533021   0.522
  1               1000     Read Only   1524497   0.656
  1               100      Read Write  568       175.94
  1               250      Read Write  396       630.71
  1               500      Read Write  329       1517.75
  1               800      Read Write  317       2523.27
  1               1000     Read Write  266       3762.87
  100             100      Read Only   1466898   0.068
  100             250      Read Only   1481168   0.169
  100             500      Read Only   1493244   0.335
  100             800      Read Only   1443932   0.554
  100             1000     Read Only   1433045   0.698
  100             100      Read Write  8284      12.07
  100             250      Read Write  10386     24.07
  100             500      Read Write  10968     45.59
  100             800      Read Write  11065     72.3
  100             1000     Read Write  11140     89.77
  1000            100      Read Only   1026569   0.097
  1000            250      Read Only   1107736   0.226
  1000            500      Read Only   1056453   0.473
  1000            800      Read Only   1022077   0.783
  1000            1000     Read Only   1014537   0.986
  1000            100      Read Write  9037      11.07
  1000            250      Read Write  11099     22.53
  1000            500      Read Write  10485     47.69
  1000            800      Read Write  11478     69.70
  1000            1000     Read Write  12087     82.73
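A quick consistency check on the pgbench numbers above: in a closed-loop benchmark like this, the reported average latency should be roughly clients / TPS (Little's law). The Python sketch below recomputes that estimate for a few rows copied from the table; it does not talk to PostgreSQL or pgbench at all, and the row selection is purely illustrative.

  # Rough consistency check for the pgbench results above: in a closed-loop
  # run, average latency per transaction is approximately clients / TPS.
  rows = [
      # (scaling factor, clients, mode, TPS, reported average latency in ms)
      (1,    100,  "Read Only",  1540137, 0.065),
      (1,    1000, "Read Write", 266,     3762.87),
      (1000, 800,  "Read Write", 11478,   69.70),
  ]

  for scale, clients, mode, tps, reported_ms in rows:
      estimated_ms = clients / tps * 1000.0  # seconds -> milliseconds
      print(f"scale={scale:<4} clients={clients:<4} {mode:<10} "
            f"reported={reported_ms:>8.3f} ms  estimated={estimated_ms:>8.3f} ms")

The estimates land within about 1% of the reported latencies, which suggests the latency column is essentially the per-client view of the same throughput measurement rather than an independent metric.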
Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis (Ops/sec > Higher Is Better)
  Clients: 50 - Set To Get Ratio: 1:1: 1962532.28
  Clients: 50 - Set To Get Ratio: 1:5: 2085593.65
  Clients: 50 - Set To Get Ratio: 1:10: 2127101.53
  Clients: 100 - Set To Get Ratio: 1:1: 1943330.63
  Clients: 100 - Set To Get Ratio: 1:5: 2147272.63
  Clients: 100 - Set To Get Ratio: 1:10: 2191452.33
  Clients: 500 - Set To Get Ratio: 1:1: (no result reported)
  Clients: 500 - Set To Get Ratio: 1:5: (no result reported)
  Clients: 500 - Set To Get Ratio: 1:10: (no result reported)
Stress-NG 0.16.04 (Bogo Ops/s > Higher Is Better)
  Test: Hash: 7624024.19
  Test: MMAP: 446.68
  Test: NUMA: 754.61
  Test: Pipe: 13851265.33
  Test: Poll: 4086609.46
  Test: Zlib: 3502.5
  Test: Futex: 4441035
  Test: MEMFD: 393.05
  Test: Mutex: 18884754.77
  Test: Atomic: 479.35
  Test: Crypto: 78148.34
  Test: Malloc: 98198159.73
  Test: Cloning: 3222.12
  Test: Forking: 51559.3
  Test: Pthread: 128780.28
  Test: AVL Tree: 373.47
  Test: IO_uring: 442330.43
  Test: SENDFILE: 530872.63
  Test: CPU Cache: 1617463.87
  Test: CPU Stress: 82381.09
  Test: Semaphores: 107860946.05
  Test: Matrix Math: 199130.4
  Test: Vector Math: 224058.27
  Test: AVX-512 VNNI: 1396743.12
  Test: Function Call: 24195.96
  Test: x86_64 RdRand: 4453.62
  Test: Floating Point: 11278.99
  Test: Matrix 3D Math: 2795.68
  Test: Memory Copying: 12458.08
  Test: Vector Shuffle: 22868.27
  Test: Mixed Scheduler: 34353.13
  Test: Socket Activity: 9538.61
  Test: Wide Vector Math: 1498606.48
  Test: Context Switching: 10942678.54
  Test: Fused Multiply-Add: 33465006.28
  Test: Vector Floating Point: 94009.97
  Test: Glibc C String Functions: 31979929.5
  Test: Glibc Qsort Data Sorting: 946.62
  Test: System V Message Passing: 10638598.45
SVT-AV1 1.7 (Frames Per Second > Higher Is Better)
  Encoder Mode: Preset 4 - Input: Bosphorus 4K: 3.81
  Encoder Mode: Preset 8 - Input: Bosphorus 4K: 64.25
  Encoder Mode: Preset 12 - Input: Bosphorus 4K: 126.78
  Encoder Mode: Preset 13 - Input: Bosphorus 4K: 129.99
  Encoder Mode: Preset 4 - Input: Bosphorus 1080p: 8.942
  Encoder Mode: Preset 8 - Input: Bosphorus 1080p: 81.06
  Encoder Mode: Preset 12 - Input: Bosphorus 1080p: 316.79
  Encoder Mode: Preset 13 - Input: Bosphorus 1080p: 368.88

Timed GCC Compilation 13.2 (Seconds < Lower Is Better)
  Time To Compile: 984.35

VVenC 1.9 (Frames Per Second > Higher Is Better)
  Video Input: Bosphorus 4K - Video Preset: Fast: 5.468
  Video Input: Bosphorus 4K - Video Preset: Faster: 10.99
  Video Input: Bosphorus 1080p - Video Preset: Fast: 13.92
  Video Input: Bosphorus 1080p - Video Preset: Faster: 25.04
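Since all of the encoder results above are reported in frames per second, it is sometimes easier to reason about them as average per-frame encode times. The short Python sketch below does that conversion for a handful of the reported figures; the selection of entries is arbitrary and only meant as an illustration.

  # Convert reported frames-per-second figures into average per-frame encode
  # times in milliseconds; the values are copied from the results above.
  results_fps = {
      "SVT-AV1 1.7, Preset 4, Bosphorus 4K":     3.81,
      "SVT-AV1 1.7, Preset 13, Bosphorus 1080p": 368.88,
      "VVenC 1.9, Fast, Bosphorus 4K":           5.468,
      "VVenC 1.9, Faster, Bosphorus 1080p":      25.04,
  }

  for name, fps in results_fps.items():
      ms_per_frame = 1000.0 / fps  # average wall-clock time per encoded frame
      print(f"{name:<42} {fps:>8.3f} fps  ->  {ms_per_frame:>8.2f} ms/frame")

On this system that works out to roughly 262 ms per 4K frame for SVT-AV1 at Preset 4 versus about 2.7 ms per 1080p frame at Preset 13, the expected speed/quality trade-off across presets.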