bergamo extra

AMD EPYC 9754 128-Core testing with an AMD Titanite_4G (RTI1007B BIOS) and ASPEED on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2308295-NE-BERGAMOEX92
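
The same comparison can be scripted; below is a minimal sketch using Python's subprocess module, assuming phoronix-test-suite is installed and available on the PATH:

import subprocess

# Benchmark against the public result file quoted above as the comparison baseline.
RESULT_ID = "2308295-NE-BERGAMOEX92"
subprocess.run(["phoronix-test-suite", "benchmark", RESULT_ID], check=True)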

Run Management

Result Identifier | Date Run       | Test Duration
a                 | August 28 2023 | 1 Hour, 29 Minutes
b                 | August 28 2023 | 6 Hours, 1 Minute
c                 | August 29 2023 | 1 Hour, 29 Minutes
d                 | August 29 2023 | 3 Hours, 1 Minute

Average test duration: 3 Hours

Bergamo Extra Benchmarks (bergamo extra) - OpenBenchmarking.org / Phoronix Test Suite

Processor: AMD EPYC 9754 128-Core @ 3.10GHz (128 Cores / 256 Threads)
Motherboard: AMD Titanite_4G (RTI1007B BIOS)
Chipset: AMD Device 14a4
Memory: 768GB
Disk: 3841GB Micron_9300_MTFDHAL3T8TDP
Graphics: ASPEED
Network: Broadcom NetXtreme BCM5720 PCIe
OS: Ubuntu 22.04
Kernel: 5.15.0-47-generic (x86_64)
Desktop: GNOME Shell 42.4
Display Server: X Server 1.21.1.3
Vulkan: 1.2.204
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 1024x768

System Logs:
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: amd-pstate performance (Boost: Enabled)
- CPU Microcode: 0xaa0010b
- OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
- Security: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling; srbds: Not affected; tsx_async_abort: Not affected
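
The security notes above are the kernel's own reporting; a minimal sketch of how those mitigation states can be read back on such a system, assuming a Linux kernel exposing /sys/devices/system/cpu/vulnerabilities:

from pathlib import Path

# Each file under this directory reports one vulnerability's mitigation state,
# e.g. "Not affected" or "Mitigation: ...", matching the Security line above.
VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

for entry in sorted(VULN_DIR.iterdir()):
    print(f"{entry.name}: {entry.read_text().strip()}")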

Result Overview (OpenBenchmarking.org / Phoronix Test Suite): relative performance of runs a, b, c, and d across Apache IoTDB, SVT-AV1, Stress-NG, and libavif avifenc (chart scale 100% to 110%).
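
The overview percentages are relative, per-run figures; the sketch below shows one way such a summary can be computed, assuming each higher-is-better result is normalized to the slowest run and the per-run ratios are aggregated with a geometric mean (the exact weighting used by the overview chart is not documented here).

from statistics import geometric_mean

# Example scores for one higher-is-better result (SVT-AV1, Preset 8 - Bosphorus 4K).
scores = {"a": 98.46, "b": 103.71, "c": 74.65, "d": 100.99}

baseline = min(scores.values())  # normalize against the slowest run
per_run_ratios = {run: [value / baseline] for run, value in scores.items()}

# With many results, append one ratio per result to each run's list, then aggregate.
summary = {run: geometric_mean(ratios) for run, ratios in per_run_ratios.items()}

for run, rel in sorted(summary.items()):
    print(f"{run}: {rel * 100:.0f}%")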

Condensed result table: all 93 results for runs a, b, c, and d across Apache IoTDB, SVT-AV1, Stress-NG, NCNN, and libavif avifenc are broken out into the individual results listed below.

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200 - Average Latency (fewer is better): a: 23.49, b: 15.80, c: 23.93, d: 16.45
Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200 - point/sec (more is better): a: 537074.63, b: 677623.11, c: 450372.26, d: 662399.98

SVT-AV1

SVT-AV1 1.7 - Encoder Mode: Preset 8 - Input: Bosphorus 4K - Frames Per Second (more is better): a: 98.46, b: 103.71, c: 74.65, d: 100.99

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500 - point/sec (more is better): a: 999701.33, b: 1231654.47, c: 926034.59, d: 1214309.11
Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200 - Average Latency (fewer is better): a: 18.58, b: 14.88, c: 19.29, d: 14.58
Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500 - Average Latency (fewer is better): a: 43.05, b: 33.63, c: 43.60, d: 34.15
Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 - Average Latency (fewer is better): a: 44.74, b: 36.35, c: 41.22, d: 36.37
Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 - Average Latency (fewer is better): a: 35.72, b: 33.96, c: 31.26, d: 29.15
Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200 - point/sec (more is better): a: 778139.92, b: 912813.60, c: 759389.35, d: 928635.35
Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200 - Average Latency (fewer is better): a: 14.12, b: 12.27, c: 14.98, d: 13.60
Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 - point/sec (more is better): a: 33175460.06, b: 38473913.10, c: 31605198.38, d: 38246237.56
Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 - Average Latency (fewer is better): a: 95.17, b: 79.13, c: 87.94, d: 84.27
Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 - point/sec (more is better): a: 1274649.16, b: 1329811.11, c: 1413053.30, d: 1521285.80
Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 - point/sec (more is better): a: 44369568.93, b: 51799432.69, c: 48070121.06, d: 49363226.99
Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 - Average Latency (fewer is better): a: 40.39, b: 35.79, c: 40.59, d: 34.88
Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200 - point/sec (more is better): a: 1151772.07, b: 1303858.99, c: 1121506.33, d: 1202704.64
Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500 - Average Latency (fewer is better): a: 39.40, b: 40.14, c: 44.59, d: 40.44

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.
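
The Bogo Ops/s figures below come from stress-ng's own metrics reporting; a minimal sketch of driving it from Python, assuming stress-ng is installed (the stressor and 60-second runtime are illustrative, not the exact settings of the test profile):

import subprocess

# Run one stressor on all CPUs; --metrics-brief prints bogo ops/s per stressor at the end.
subprocess.run(
    ["stress-ng", "--cpu", "0", "--timeout", "60s", "--metrics-brief"],
    check=True,
)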

Stress-NG 0.16.04 - Test: Forking - Bogo Ops/s (more is better): a: 51409.58, b: 46384.79, c: 50667.58, d: 45562.76

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 - point/sec (more is better): a: 41306750.53, b: 45305910.97, c: 40976754.03, d: 46082624.38

Stress-NG

Stress-NG 0.16.04 - Test: Futex - Bogo Ops/s (more is better): a: 1895146.40, b: 2040244.79, c: 2047529.16, d: 1848992.37

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Average Latency (fewer is better): a: 71.97, b: 71.66, c: 75.49, d: 68.84
Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500 - point/sec (more is better): a: 938096.15, b: 926489.11, c: 856042.79, d: 921947.54

Stress-NG

Stress-NG 0.16.04 - Test: Context Switching - Bogo Ops/s (more is better): a: 7469675.50, b: 7906554.60, c: 8037439.28, d: 7901208.78
Stress-NG 0.16.04 - Test: Pipe - Bogo Ops/s (more is better): a: 72367692.28, b: 76591989.39, c: 76333457.31, d: 77385869.45

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - point/sec (more is better): a: 64537220.83, b: 65454585.35, c: 62451638.40, d: 66646010.33
Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 - Average Latency (fewer is better): a: 107.03, b: 104.62, c: 100.93, d: 105.90
Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 - point/sec (more is better): a: 43327049.22, b: 44033224.05, c: 45493895.77, d: 43683590.07

SVT-AV1

SVT-AV1 1.7 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p - Frames Per Second (more is better): a: 613.43, b: 588.73, c: 584.53, d: 600.49
SVT-AV1 1.7 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p - Frames Per Second (more is better): a: 493.57, b: 510.67, c: 498.62, d: 516.41
SVT-AV1 1.7 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p - Frames Per Second (more is better): a: 142.34, b: 142.14, c: 148.35, d: 147.94

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - point/sec (more is better): a: 53919386.02, b: 54861433.15, c: 53565430.35, d: 55623597.21

Stress-NG

Stress-NG 0.16.04 - Test: MMAP - Bogo Ops/s (more is better): a: 2231.16, b: 2214.38, c: 2149.59, d: 2148.68
Stress-NG 0.16.04 - Test: Socket Activity - Bogo Ops/s (more is better): a: 78214.72, b: 80432.61, c: 81154.12, d: 78780.84

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
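
The NCNN results below are average inference times in milliseconds; a minimal sketch of how such a latency figure is typically collected from repeated timed runs (run_inference is a stand-in placeholder, not the ncnn API):

import time
from statistics import mean

def run_inference():
    # Placeholder for a single forward pass of whichever model is being timed.
    time.sleep(0.009)

samples_ms = []
for _ in range(50):  # repeat the forward pass and time each iteration
    start = time.perf_counter()
    run_inference()
    samples_ms.append((time.perf_counter() - start) * 1000.0)

print(f"avg: {mean(samples_ms):.2f} ms  min: {min(samples_ms):.2f} ms  max: {max(samples_ms):.2f} ms")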

NCNN 20230517 - Target: CPU - Model: blazeface - ms (fewer is better): a: 8.98, b: 8.96, c: 9.29

SVT-AV1

SVT-AV1 1.7 - Encoder Mode: Preset 12 - Input: Bosphorus 4K - Frames Per Second (more is better): a: 178.18, b: 181.63, c: 183.46, d: 184.09

NCNN

NCNN 20230517 - Target: CPU - Model: resnet18 - ms (fewer is better): a: 13.91, b: 13.86, c: 14.32
NCNN 20230517 - Target: CPU - Model: mnasnet - ms (fewer is better): a: 13.57, b: 13.59, c: 14.02

Apache IoTDB

Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Average Latency (fewer is better): a: 33.04, b: 32.61, c: 32.88, d: 31.98

SVT-AV1

SVT-AV1 1.7 - Encoder Mode: Preset 13 - Input: Bosphorus 4K - Frames Per Second (more is better): a: 182.79, b: 179.96, c: 185.29, d: 180.78

Stress-NG

Stress-NG 0.16.04 - Test: Semaphores - Bogo Ops/s (more is better): a: 367850395.98, b: 378329416.41, c: 368063616.01, d: 378385037.34

NCNN

NCNN 20230517 - Target: CPU - Model: FastestDet - ms (fewer is better): a: 20.71, b: 21.03, c: 21.22

Stress-NG

Stress-NG 0.16.04 - Test: Pthread - Bogo Ops/s (more is better): a: 98472.65, b: 97666.68, c: 98016.22, d: 96237.31
Stress-NG 0.16.04 - Test: MEMFD - Bogo Ops/s (more is better): a: 440.42, b: 439.47, c: 431.23, d: 430.54

NCNN

NCNN 20230517 - Target: CPU - Model: vgg16 - ms (fewer is better): a: 29.41, b: 28.79, c: 28.97

Stress-NG

Stress-NG 0.16.04 - Test: Glibc C String Functions - Bogo Ops/s (more is better): a: 102825108.48, b: 104553211.51, c: 103670597.49, d: 104251502.30
Stress-NG 0.16.04 - Test: Mutex - Bogo Ops/s (more is better): a: 93918333.13, b: 92655094.89, c: 92948117.03, d: 92600002.69

NCNN

NCNN 20230517 - Target: CPU - Model: efficientnet-b0 - ms (fewer is better): a: 20.70, b: 20.99, c: 20.78

SVT-AV1

SVT-AV1 1.7 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p - Frames Per Second (more is better): a: 12.04, b: 12.16, c: 12.14, d: 12.20
SVT-AV1 1.7 - Encoder Mode: Preset 4 - Input: Bosphorus 4K - Frames Per Second (more is better): a: 4.999, b: 4.991, c: 4.949, d: 5.009

NCNN

NCNN 20230517 - Target: CPU - Model: mobilenet - ms (fewer is better): a: 24.77, b: 24.88, c: 25.07

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
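
avifenc is a command-line encoder and the results below are wall-clock seconds per encode; a minimal sketch of timing one encode from Python, assuming avifenc is installed and input.jpg exists (the -s value mirrors the "Encoder Speed" settings used in these results):

import subprocess
import time

# Encode a JPEG to AVIF at speed 6 and report the elapsed wall-clock time,
# mirroring the "Seconds, fewer is better" metric below.
start = time.perf_counter()
subprocess.run(["avifenc", "-s", "6", "input.jpg", "output.avif"], check=True)
print(f"elapsed: {time.perf_counter() - start:.3f} s")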

libavif avifenc 1.0 - Encoder Speed: 6 - Seconds (fewer is better): a: 2.677, b: 2.670, c: 2.694, d: 2.700

NCNN

NCNN 20230517 - Target: CPU - Model: regnety_400m - ms (fewer is better): a: 70.97, b: 70.31, c: 71.05

libavif avifenc

libavif avifenc 1.0 - Encoder Speed: 10, Lossless - Seconds (fewer is better): a: 4.810, b: 4.808, c: 4.837, d: 4.857

NCNN

NCNN 20230517 - Target: CPU - Model: yolov4-tiny - ms (fewer is better): a: 32.41, b: 32.09, c: 32.20

Stress-NG

Stress-NG 0.16.04 - Test: Mixed Scheduler - Bogo Ops/s (more is better): a: 90689.72, b: 90027.33, c: 89799.15, d: 90534.17
Stress-NG 0.16.04 - Test: NUMA - Bogo Ops/s (more is better): a: 283.37, b: 281.22, c: 280.62, d: 281.34

NCNN

NCNN 20230517 - Target: CPU-v3-v3 - Model: mobilenet-v3 - ms (fewer is better): a: 16.17, b: 16.13, c: 16.02
NCNN 20230517 - Target: CPU - Model: vision_transformer - ms (fewer is better): a: 49.71, b: 49.61, c: 50.03

Stress-NG

Stress-NG 0.16.04 - Test: Malloc - Bogo Ops/s (more is better): a: 388083810.48, b: 389674083.44, c: 390162833.02, d: 390992187.56

NCNN

NCNN 20230517 - Target: CPU - Model: alexnet - ms (fewer is better): a: 8.31, b: 8.37, c: 8.31
NCNN 20230517 - Target: CPU-v2-v2 - Model: mobilenet-v2 - ms (fewer is better): a: 14.20, b: 14.24, c: 14.29

Stress-NG

Stress-NG 0.16.04 - Test: Atomic - Bogo Ops/s (more is better): a: 211.66, b: 210.75, c: 212.03, d: 211.25
Stress-NG 0.16.04 - Test: Vector Floating Point - Bogo Ops/s (more is better): a: 340089.77, b: 340161.67, c: 338134.06, d: 339290.91

NCNN

NCNN 20230517 - Target: CPU - Model: googlenet - ms (fewer is better): a: 30.14, b: 30.14, c: 30.32

libavif avifenc

libavif avifenc 1.0 - Encoder Speed: 6, Lossless - Seconds (fewer is better): a: 5.945, b: 5.941, c: 5.974, d: 5.971

Stress-NG

Stress-NG 0.16.04 - Test: Glibc Qsort Data Sorting - Bogo Ops/s (more is better): a: 2701.88, b: 2708.26, c: 2715.64, d: 2706.86
Stress-NG 0.16.04 - Test: Zlib - Bogo Ops/s (more is better): a: 13705.12, b: 13658.50, c: 13642.07, d: 13663.25
Stress-NG 0.16.04 - Test: Crypto - Bogo Ops/s (more is better): a: 271895.07, b: 271224.97, c: 272088.28, d: 270963.69

libavif avifenc

libavif avifenc 1.0 - Encoder Speed: 0 - Seconds (fewer is better): a: 73.81, b: 73.76, c: 74.06, d: 73.95

Stress-NG

Stress-NG 0.16.04 - Test: AVX-512 VNNI - Bogo Ops/s (more is better): a: 11480669.80, b: 11473920.08, c: 11514003.98, d: 11513829.31

NCNN

NCNN 20230517 - Target: CPU - Model: resnet50 - ms (fewer is better): a: 26.48, b: 26.56, c: 26.47

Stress-NG

Stress-NG 0.16.04 - Test: CPU Stress - Bogo Ops/s (more is better): a: 279850.90, b: 280293.57, c: 280785.65, d: 280456.86
Stress-NG 0.16.04 - Test: System V Message Passing - Bogo Ops/s (more is better): a: 10103031.39, b: 10090750.29, c: 10078078.62, d: 10084371.25
Stress-NG 0.16.04 - Test: Poll - Bogo Ops/s (more is better): a: 14505273.71, b: 14476344.55, c: 14480252.82, d: 14485138.94
Stress-NG 0.16.04 - Test: Function Call - Bogo Ops/s (more is better): a: 85124.23, b: 85282.24, c: 85248.69, d: 85281.04

libavif avifenc

libavif avifenc 1.0 - Encoder Speed: 2 - Seconds (fewer is better): a: 40.19, b: 40.21, c: 40.23, d: 40.25

Stress-NG

Stress-NG 0.16.04 - Test: Hash - Bogo Ops/s (more is better): a: 24467797.70, b: 24474278.89, c: 24491429.31, d: 24481007.59

NCNN

NCNN 20230517 - Target: CPU - Model: squeezenet_ssd - ms (fewer is better): a: 26.55, b: 26.54, c: 26.53

Stress-NG

Stress-NG 0.16.04 - Test: Floating Point - Bogo Ops/s (more is better): a: 39805.61, b: 39800.43, c: 39785.69, d: 39788.35
Stress-NG 0.16.04 - Test: Fused Multiply-Add - Bogo Ops/s (more is better): a: 102526238.37, b: 102530863.13, c: 102544387.84, d: 102525929.49
Stress-NG 0.16.04 - Test: Wide Vector Math - Bogo Ops/s (more is better): a: 4572615.57, b: 4572134.62, c: 4572889.77, d: 4572386.63
Stress-NG 0.16.04 - Test: Vector Shuffle - Bogo Ops/s (more is better): a: 85166.55, b: 85167.43, c: 85177.93, d: 85165.50
Stress-NG 0.16.04 - Test: Memory Copying - Bogo Ops/s (more is better): a: 43632.09, b: 43636.71, c: 43635.88, d: 43638.20
Stress-NG 0.16.04 - Test: Matrix Math - Bogo Ops/s (more is better): a: 552391.35, b: 552416.95, c: 552401.63, d: 552451.28
Stress-NG 0.16.04 - Test: Vector Math - Bogo Ops/s (more is better): a: 716693.50, b: 716679.52, c: 716735.77, d: 716699.59
Stress-NG 0.16.04 - Test: x86_64 RdRand - Bogo Ops/s (more is better): a: 41046836.36, b: 41047165.13, c: 41047280.33, d: 41048937.83

NCNN

NCNN 20230517 - Target: CPU - Model: shufflenet-v2 - ms (fewer is better): a: 18.15, b: 21.45, c: 18.17

Stress-NG

Stress-NG 0.16.04 - Test: Matrix 3D Math - Bogo Ops/s (more is better): a: 13023.79, b: 8776.17, c: 13029.35, d: 8081.34
Stress-NG 0.16.04 - Test: CPU Cache - Bogo Ops/s (more is better): a: 604083.20, b: 749643.70, c: 688466.74, d: 728809.40
Stress-NG 0.16.04 - Test: SENDFILE - Bogo Ops/s (more is better): a: 1650240.07, b: 1443997.07, c: 1773574.36, d: 2029025.97
Stress-NG 0.16.04 - Test: IO_uring - Bogo Ops/s (more is better): a: 8566298.56, b: 6725069.76, c: 9046768.13, d: 5363538.72
Stress-NG 0.16.04 - Test: AVL Tree - Bogo Ops/s (more is better): a: 992.57, b: 820.43, c: 990.80, d: 789.68
Stress-NG 0.16.04 - Test: Cloning - Bogo Ops/s (more is better): a: 10469.71, b: 9174.95, c: 10130.61, d: 9360.70

93 Results Shown

Apache IoTDB:
  100 - 1 - 200:
    Average Latency
    point/sec
SVT-AV1
Apache IoTDB:
  200 - 1 - 500
  200 - 1 - 200
  200 - 1 - 500
  100 - 100 - 200
  500 - 1 - 500
  200 - 1 - 200
  500 - 1 - 200
  100 - 100 - 200
  100 - 100 - 500
  500 - 1 - 500
  100 - 100 - 500
  200 - 100 - 200
  500 - 1 - 200
  100 - 1 - 500
Stress-NG
Apache IoTDB
Stress-NG
Apache IoTDB:
  500 - 100 - 500
  100 - 1 - 500
Stress-NG:
  Context Switching
  Pipe
Apache IoTDB:
  500 - 100 - 500
  200 - 100 - 500
  200 - 100 - 500
SVT-AV1:
  Preset 13 - Bosphorus 1080p
  Preset 12 - Bosphorus 1080p
  Preset 8 - Bosphorus 1080p
Apache IoTDB
Stress-NG:
  MMAP
  Socket Activity
NCNN
SVT-AV1
NCNN:
  CPU - resnet18
  CPU - mnasnet
Apache IoTDB
SVT-AV1
Stress-NG
NCNN
Stress-NG:
  Pthread
  MEMFD
NCNN
Stress-NG:
  Glibc C String Functions
  Mutex
NCNN
SVT-AV1:
  Preset 4 - Bosphorus 1080p
  Preset 4 - Bosphorus 4K
NCNN
libavif avifenc
NCNN
libavif avifenc
NCNN
Stress-NG:
  Mixed Scheduler
  NUMA
NCNN:
  CPU-v3-v3 - mobilenet-v3
  CPU - vision_transformer
Stress-NG
NCNN:
  CPU - alexnet
  CPU-v2-v2 - mobilenet-v2
Stress-NG:
  Atomic
  Vector Floating Point
NCNN
libavif avifenc
Stress-NG:
  Glibc Qsort Data Sorting
  Zlib
  Crypto
libavif avifenc
Stress-NG
NCNN
Stress-NG:
  CPU Stress
  System V Message Passing
  Poll
  Function Call
libavif avifenc
Stress-NG
NCNN
Stress-NG:
  Floating Point
  Fused Multiply-Add
  Wide Vector Math
  Vector Shuffle
  Memory Copying
  Matrix Math
  Vector Math
  x86_64 RdRand
NCNN
Stress-NG:
  Matrix 3D Math
  CPU Cache
  SENDFILE
  IO_uring
  AVL Tree
  Cloning