Xeon Max Linux Kernels

2 x Intel Xeon Max 9480 testing with a Supermicro X13DEM v1.10 (1.3 BIOS) and ASPEED on Ubuntu 23.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2310054-NE-XEONMAXLI13
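
For reference, reproducing this comparison on an Ubuntu system might look roughly like the following (a sketch assuming the phoronix-test-suite package from the Ubuntu repositories; the result identifier is the one listed above):

  $ sudo apt install phoronix-test-suite
  $ phoronix-test-suite benchmark 2310054-NE-XEONMAXLI13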

Result Identifier: v6.4
Date Run: October 04 2023
Test Duration: 2 Days, 15 Minutes


Xeon Max Linux Kernels Performance - Phoronix Test Suite - OpenBenchmarking.org

Processor: 2 x Intel Xeon Max 9480 @ 3.50GHz (112 Cores / 224 Threads)
Motherboard: Supermicro X13DEM v1.10 (1.3 BIOS)
Chipset: Intel Device 1bce
Memory: 512GB
Disk: 2 x 7682GB INTEL SSDPF2KX076TZ
Graphics: ASPEED
Monitor: VE228
Network: 2 x Broadcom BCM57508 NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb/200Gb
OS: Ubuntu 23.04
Kernel: 6.4.0-060400-generic (x86_64)
Desktop: GNOME Shell 44.0
Display Server: X Server 1.21.1.7
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs:
- Transparent Huge Pages: madvise
- Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-Pa930Z/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-Pa930Z/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_cpufreq performance
- CPU Microcode: 0x2c0001d1
- OpenJDK Runtime Environment (build 17.0.6+10-Ubuntu-1ubuntu2)
- Python 3.11.2
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

Condensed results overview (all test identifiers with their v6.4 values); the individual results are presented in the per-test graphs below.

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.
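
As a rough illustration, NNThroughputBenchmark can also be driven directly against a name-node outside of the Phoronix Test Suite; the operation, thread count, and file count below mirror the first result graph, though this is only a hedged sketch and not necessarily the test profile's exact invocation:

  $ hadoop org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark -op create -threads 50 -files 1000000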

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Create - Threads: 50 - Files: 1000000v6.48K16K24K32K40KSE +/- 371.08, N = 337254

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Rename - Threads: 50 - Files: 100000v6.410K20K30K40K50KSE +/- 708.56, N = 1447745

OpenVINO

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPUv6.40.07650.1530.22950.3060.3825SE +/- 0.00, N = 150.34MIN: 0.25 / MAX: 36.531. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPUv6.420K40K60K80K100KSE +/- 1041.05, N = 15112475.341. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Handwritten English Recognition FP16-INT8 - Device: CPUv6.41020304050SE +/- 0.11, N = 345.63MIN: 38.64 / MAX: 91.41. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Handwritten English Recognition FP16-INT8 - Device: CPUv6.45001000150020002500SE +/- 5.85, N = 32452.141. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Handwritten English Recognition FP16 - Device: CPUv6.4714212835SE +/- 0.09, N = 331.10MIN: 24.91 / MAX: 99.631. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Handwritten English Recognition FP16 - Device: CPUv6.48001600240032004000SE +/- 9.62, N = 33597.651. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Person Vehicle Bike Detection FP16 - Device: CPUv6.448121620SE +/- 0.02, N = 317.70MIN: 12.28 / MAX: 42.461. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Person Vehicle Bike Detection FP16 - Device: CPUv6.414002800420056007000SE +/- 8.07, N = 36316.441. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Weld Porosity Detection FP16-INT8 - Device: CPUv6.40.95181.90362.85543.80724.759SE +/- 0.03, N = 154.23MIN: 2.61 / MAX: 111.711. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Weld Porosity Detection FP16-INT8 - Device: CPUv6.45K10K15K20K25KSE +/- 231.74, N = 1525464.621. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Machine Translation EN To DE FP16 - Device: CPUv6.41224364860SE +/- 0.43, N = 355.31MIN: 38.54 / MAX: 312.661. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Machine Translation EN To DE FP16 - Device: CPUv6.4140280420560700SE +/- 5.12, N = 3667.641. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Road Segmentation ADAS FP16-INT8 - Device: CPUv6.420406080100SE +/- 0.13, N = 374.40MIN: 53.51 / MAX: 281.31. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Road Segmentation ADAS FP16-INT8 - Device: CPUv6.430060090012001500SE +/- 2.60, N = 31503.541. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Face Detection Retail FP16-INT8 - Device: CPUv6.4246810SE +/- 0.02, N = 36.89MIN: 5.52 / MAX: 36.521. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Face Detection Retail FP16-INT8 - Device: CPUv6.43K6K9K12K15KSE +/- 27.81, N = 316182.551. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Weld Porosity Detection FP16 - Device: CPUv6.4246810SE +/- 0.09, N = 36.60MIN: 4.28 / MAX: 87.641. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Weld Porosity Detection FP16 - Device: CPUv6.44K8K12K16K20KSE +/- 240.02, N = 316690.361. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Vehicle Detection FP16-INT8 - Device: CPUv6.4510152025SE +/- 0.09, N = 322.28MIN: 14.79 / MAX: 71.651. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Vehicle Detection FP16-INT8 - Device: CPUv6.411002200330044005500SE +/- 20.34, N = 35018.031. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Road Segmentation ADAS FP16 - Device: CPUv6.4816243240SE +/- 0.39, N = 333.21MIN: 23.34 / MAX: 169.711. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Road Segmentation ADAS FP16 - Device: CPUv6.42004006008001000SE +/- 13.29, N = 31112.461. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Face Detection Retail FP16 - Device: CPUv6.43691215SE +/- 0.01, N = 39.85MIN: 7.51 / MAX: 55.051. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Face Detection Retail FP16 - Device: CPUv6.42K4K6K8K10KSE +/- 16.48, N = 311342.541. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Face Detection FP16-INT8 - Device: CPUv6.470140210280350SE +/- 0.31, N = 3337.22MIN: 252.67 / MAX: 470.111. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Face Detection FP16-INT8 - Device: CPUv6.470140210280350SE +/- 0.28, N = 3331.201. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Vehicle Detection FP16 - Device: CPUv6.448121620SE +/- 0.13, N = 614.35MIN: 9.07 / MAX: 112.911. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Vehicle Detection FP16 - Device: CPUv6.46001200180024003000SE +/- 24.17, N = 62572.631. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Person Detection FP32 - Device: CPUv6.420406080100SE +/- 0.95, N = 377.70MIN: 49.99 / MAX: 537.561. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Person Detection FP32 - Device: CPUv6.4100200300400500SE +/- 5.84, N = 3475.661. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Person Detection FP16 - Device: CPUv6.420406080100SE +/- 0.97, N = 377.97MIN: 47.82 / MAX: 365.811. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Person Detection FP16 - Device: CPUv6.4100200300400500SE +/- 5.93, N = 3474.051. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Face Detection FP16 - Device: CPUv6.460120180240300SE +/- 1.25, N = 3293.6MIN: 175.19 / MAX: 761.951. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Face Detection FP16 - Device: CPUv6.4306090120150SE +/- 0.59, N = 3125.691. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

Blender

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: Pabellon Barcelona - Compute: CPU-Onlyv6.420406080100SE +/- 0.36, N = 374.88

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: Barbershop - Compute: CPU-Onlyv6.470140210280350SE +/- 4.74, N = 9315.02

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: Fishy Cat - Compute: CPU-Onlyv6.4918273645SE +/- 0.15, N = 340.12

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: Classroom - Compute: CPU-Onlyv6.41224364860SE +/- 0.28, N = 353.14

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: BMW27 - Compute: CPU-Onlyv6.4510152025SE +/- 0.17, N = 321.48

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
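
These per-model timings are typically gathered with NCNN's bundled benchncnn utility; a hedged example invocation (positional arguments are loop count, thread count, power-save mode, and GPU device, with -1 forcing CPU-only inference) might be:

  $ ./benchncnn 10 224 0 -1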

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: CPU-v3-v3 - Model: mobilenet-v3v6.448121620SE +/- 0.21, N = 916.26MIN: 13.09 / MAX: 150.41. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20230517Target: CPU-v2-v2 - Model: mobilenet-v2v6.43691215SE +/- 0.22, N = 913.47MIN: 11.45 / MAX: 59.131. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.
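
Individual stressors can be reproduced outside the test profile with commands along these lines (an illustrative sketch; a worker count of 0 spawns one worker per online CPU and --metrics-brief prints the bogo-ops/s figures):

  $ stress-ng --msg 0 --metrics-brief -t 60      # System V message passing
  $ stress-ng --qsort 0 --metrics-brief -t 60    # Glibc qsort data sorting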

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: System V Message Passingv6.44M8M12M16M20MSE +/- 54732.60, N = 318811154.401. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Glibc Qsort Data Sortingv6.4400800120016002000SE +/- 0.72, N = 31875.831. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Glibc C String Functionsv6.420M40M60M80M100MSE +/- 906580.48, N = 1581547074.591. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Vector Floating Pointv6.450K100K150K200K250KSE +/- 620.60, N = 3244931.411. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Fused Multiply-Addv6.450M100M150M200M250MSE +/- 2984105.55, N = 15253366692.271. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Wide Vector Mathv6.41.3M2.6M3.9M5.2M6.5MSE +/- 72348.80, N = 35961543.121. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Vector Shufflev6.4120K240K360K480K600KSE +/- 1079.72, N = 3551599.321. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Matrix 3D Mathv6.46K12K18K24K30KSE +/- 329.39, N = 328901.021. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Floating Pointv6.47K14K21K28K35KSE +/- 348.48, N = 1531253.931. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: x86_64 RdRandv6.4140K280K420K560K700KSE +/- 674.33, N = 3637179.521. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Function Callv6.414K28K42K56K70KSE +/- 811.29, N = 1563249.991. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: AVX-512 VNNIv6.43M6M9M12M15MSE +/- 48287.23, N = 311752668.181. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Vector Mathv6.480K160K240K320K400KSE +/- 3943.55, N = 5381412.821. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Semaphoresv6.440M80M120M160M200MSE +/- 1308655.40, N = 3177471711.511. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: CPU Stressv6.440K80K120K160K200KSE +/- 341.26, N = 3200494.801. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: CPU Cachev6.4200K400K600K800K1000KSE +/- 7749.59, N = 3938764.061. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: IO_uringv6.4600K1200K1800K2400K3000KSE +/- 33896.79, N = 152619140.961. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: AVL Treev6.42004006008001000SE +/- 1.81, N = 3790.231. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Pthreadv6.48K16K24K32K40KSE +/- 43.26, N = 336288.241. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Mallocv6.450M100M150M200M250MSE +/- 129847.43, N = 3217211109.031. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Zlibv6.42K4K6K8K10KSE +/- 22.76, N = 38220.831. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: NUMAv6.4110220330440550SE +/- 7.82, N = 15514.091. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: MMAPv6.42K4K6K8K10KSE +/- 47.78, N = 37996.781. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

Redis 7.0.12 + memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
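
A hedged sketch of driving a local Redis instance at the 100-client, 1:10 set-to-get ratio configuration shown below (host, port, and the thread/client split are illustrative assumptions):

  $ memtier_benchmark -s 127.0.0.1 -p 6379 --protocol=redis --threads=4 --clients=25 --ratio=1:10    # 4 threads x 25 clients = 100 connections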

OpenBenchmarking.orgOps/sec, More Is BetterRedis 7.0.12 + memtier_benchmark 2.0Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10v6.4500K1000K1500K2000K2500KSE +/- 23334.59, N = 32262895.861. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if complementary metrics are desired. Learn more via the OpenBenchmarking.org test page.
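
An approximation of the upstream reference invocation for the largest batch size graphed below (a sketch assuming a tensorflow/benchmarks checkout; the test profile's exact arguments may differ):

  $ python tf_cnn_benchmarks.py --device=cpu --data_format=NHWC --model=resnet50 --batch_size=256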

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 256 - Model: ResNet-50v6.41530456075SE +/- 0.29, N = 365.22

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 64 - Model: ResNet-50v6.41224364860SE +/- 0.72, N = 353.04

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 32 - Model: ResNet-50v6.41020304050SE +/- 0.54, N = 345.20

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 16 - Model: ResNet-50v6.4816243240SE +/- 0.35, N = 336.85

PostgreSQL

This is a benchmark of PostgreSQL using its integrated pgbench tool for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
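
A hedged outline of reproducing one of the scaling factor 100 configurations with pgbench directly (database name, job count, and runtime are illustrative assumptions):

  $ pgbench -i -s 100 benchdb                 # initialize with scaling factor 100
  $ pgbench -c 800 -j 112 -T 60 benchdb       # 800 clients, read/write workload
  $ pgbench -c 800 -j 112 -T 60 -S benchdb    # -S switches to the read-only (SELECT) variant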

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 16Scaling Factor: 1000 - Clients: 1000 - Mode: Read Write - Average Latencyv6.41122334455SE +/- 0.55, N = 1250.361. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 16Scaling Factor: 1000 - Clients: 1000 - Mode: Read Writev6.44K8K12K16K20KSE +/- 218.15, N = 12198841. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 16Scaling Factor: 1000 - Clients: 800 - Mode: Read Write - Average Latencyv6.4918273645SE +/- 0.47, N = 1239.251. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 16Scaling Factor: 1000 - Clients: 800 - Mode: Read Writev6.44K8K12K16K20KSE +/- 240.76, N = 12204161. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 16Scaling Factor: 1000 - Clients: 1000 - Mode: Read Only - Average Latencyv6.40.31750.6350.95251.271.5875SE +/- 0.018, N = 121.4111. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 16Scaling Factor: 1000 - Clients: 1000 - Mode: Read Onlyv6.4150K300K450K600K750KSE +/- 8915.31, N = 127100791. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 16Scaling Factor: 100 - Clients: 1000 - Mode: Read Write - Average Latencyv6.4612182430SE +/- 0.29, N = 423.301. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 16Scaling Factor: 100 - Clients: 1000 - Mode: Read Writev6.49K18K27K36K45KSE +/- 535.00, N = 4429471. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 16Scaling Factor: 1000 - Clients: 800 - Mode: Read Only - Average Latencyv6.40.25670.51340.77011.02681.2835SE +/- 0.016, N = 31.1411. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 16Scaling Factor: 1000 - Clients: 800 - Mode: Read Onlyv6.4150K300K450K600K750KSE +/- 9464.82, N = 37012131. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL 16Scaling Factor: 100 - Clients: 800 - Mode: Read Write - Average Latencyv6.448121620SE +/- 0.22, N = 317.561. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL 16Scaling Factor: 100 - Clients: 800 - Mode: Read Writev6.410K20K30K40K50KSE +/- 557.77, N = 3455811. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.
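
The harness is configured through a properties file; a hypothetical fragment matching the first configuration below might look roughly like this (the property names are assumptions inferred from the parameters shown, not verified against the harness):

  # conf/config.properties (hypothetical fragment)
  DB_SWITCH=IoTDB
  DEVICE_NUMBER=800
  BATCH_SIZE_PER_WRITE=100
  SENSOR_NUMBER=800
  CLIENT_NUMBER=400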

OpenBenchmarking.orgAverage Latency, Fewer Is BetterApache IoTDB 1.2Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400v6.490180270360450SE +/- 3.41, N = 12399.86MAX: 31195.45

OpenBenchmarking.orgpoint/sec, More Is BetterApache IoTDB 1.2Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400v6.416M32M48M64M80MSE +/- 645314.77, N = 1273678024

OpenBenchmarking.orgAverage Latency, Fewer Is BetterApache IoTDB 1.2Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100v6.420406080100SE +/- 0.13, N = 3104.93MAX: 23900.89

OpenBenchmarking.orgpoint/sec, More Is BetterApache IoTDB 1.2Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100v6.416M32M48M64M80MSE +/- 265706.48, N = 372605513

OpenBenchmarking.orgAverage Latency, Fewer Is BetterApache IoTDB 1.2Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400v6.460120180240300SE +/- 2.23, N = 12277.40MAX: 30379.43

OpenBenchmarking.orgpoint/sec, More Is BetterApache IoTDB 1.2Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400v6.414M28M42M56M70MSE +/- 524197.72, N = 1265784478

OpenBenchmarking.orgAverage Latency, Fewer Is BetterApache IoTDB 1.2Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100v6.41632486480SE +/- 0.22, N = 369.88MAX: 23895.2

OpenBenchmarking.orgpoint/sec, More Is BetterApache IoTDB 1.2Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100v6.414M28M42M56M70MSE +/- 690519.22, N = 366287906

OpenBenchmarking.orgAverage Latency, Fewer Is BetterApache IoTDB 1.2Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400v6.4306090120150SE +/- 1.11, N = 15140.40MAX: 27765.12

OpenBenchmarking.orgpoint/sec, More Is BetterApache IoTDB 1.2Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400v6.411M22M33M44M55MSE +/- 396843.42, N = 1550423542

OpenBenchmarking.orgAverage Latency, Fewer Is BetterApache IoTDB 1.2Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100v6.4816243240SE +/- 0.38, N = 1535.31MAX: 23938.81

OpenBenchmarking.orgpoint/sec, More Is BetterApache IoTDB 1.2Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100v6.411M22M33M44M55MSE +/- 482700.73, N = 1551248299

OpenBenchmarking.orgAverage Latency, Fewer Is BetterApache IoTDB 1.2Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400v6.480160240320400SE +/- 4.26, N = 3370.82MAX: 30958.08

OpenBenchmarking.orgpoint/sec, More Is BetterApache IoTDB 1.2Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400v6.414M28M42M56M70MSE +/- 525580.37, N = 366098942

OpenBenchmarking.orgAverage Latency, Fewer Is BetterApache IoTDB 1.2Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100v6.4306090120150SE +/- 0.10, N = 3111.75MAX: 10157.01

OpenBenchmarking.orgpoint/sec, More Is BetterApache IoTDB 1.2Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100v6.414M28M42M56M70MSE +/- 46596.93, N = 367045377

OpenBenchmarking.orgAverage Latency, Fewer Is BetterApache IoTDB 1.2Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400v6.460120180240300SE +/- 3.08, N = 10265.99MAX: 30115.56

OpenBenchmarking.orgpoint/sec, More Is BetterApache IoTDB 1.2Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400v6.412M24M36M48M60MSE +/- 450916.16, N = 1058096600

OpenBenchmarking.orgAverage Latency, Fewer Is BetterApache IoTDB 1.2Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100v6.420406080100SE +/- 0.85, N = 1577.00MAX: 12622.02

OpenBenchmarking.orgpoint/sec, More Is BetterApache IoTDB 1.2Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100v6.413M26M39M52M65MSE +/- 581785.82, N = 1559376956

OpenBenchmarking.orgAverage Latency, Fewer Is BetterApache IoTDB 1.2Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400v6.44080120160200SE +/- 2.09, N = 3163.36MAX: 27171.99

OpenBenchmarking.orgpoint/sec, More Is BetterApache IoTDB 1.2Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400v6.49M18M27M36M45MSE +/- 245644.87, N = 340520901

OpenBenchmarking.orgAverage Latency, Fewer Is BetterApache IoTDB 1.2Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100v6.41020304050SE +/- 0.46, N = 1542.68MAX: 15117.6

OpenBenchmarking.orgpoint/sec, More Is BetterApache IoTDB 1.2Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100v6.49M18M27M36M45MSE +/- 387575.50, N = 1541675265

OpenBenchmarking.orgAverage Latency, Fewer Is BetterApache IoTDB 1.2Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100v6.4306090120150SE +/- 1.53, N = 13125.95MAX: 23943.82

OpenBenchmarking.orgpoint/sec, More Is BetterApache IoTDB 1.2Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100v6.411M22M33M44M55MSE +/- 506247.74, N = 1352514008

OpenBenchmarking.orgAverage Latency, Fewer Is BetterApache IoTDB 1.2Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100v6.420406080100SE +/- 0.96, N = 1597.16MAX: 24088.47

OpenBenchmarking.orgpoint/sec, More Is BetterApache IoTDB 1.2Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100v6.49M18M27M36M45MSE +/- 358083.32, N = 1543036898

OpenBenchmarking.orgAverage Latency, Fewer Is BetterApache IoTDB 1.2Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100v6.41632486480SE +/- 0.55, N = 1270.77MAX: 23941.04

OpenBenchmarking.orgpoint/sec, More Is BetterApache IoTDB 1.2Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100v6.45M10M15M20M25MSE +/- 171233.24, N = 1224582941

OpenBenchmarking.orgAverage Latency, Fewer Is BetterApache IoTDB 1.2Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100v6.4306090120150SE +/- 1.48, N = 15155.22MAX: 27068.23

OpenBenchmarking.orgpoint/sec, More Is BetterApache IoTDB 1.2Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100v6.48M16M24M32M40MSE +/- 432863.04, N = 1539122053

OpenBenchmarking.orgAverage Latency, Fewer Is BetterApache IoTDB 1.2Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100v6.4306090120150SE +/- 1.85, N = 15130.81MAX: 27828.94

OpenBenchmarking.orgpoint/sec, More Is BetterApache IoTDB 1.2Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100v6.46M12M18M24M30MSE +/- 332901.55, N = 1530189985

OpenBenchmarking.orgpoint/sec, More Is BetterApache IoTDB 1.2Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100v6.43M6M9M12M15MSE +/- 186146.39, N = 1512977461

Timed LLVM Compilation

This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.
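
Outside the test profile, a comparable LLVM release build can be driven roughly as follows (a sketch; the two generators correspond to the Unix Makefiles and Ninja variants graphed below):

  $ cmake -S llvm -B build-ninja -G Ninja -DCMAKE_BUILD_TYPE=Release && ninja -C build-ninja
  $ cmake -S llvm -B build-make -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=Release && make -C build-make -j$(nproc)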

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 16.0Build System: Unix Makefilesv6.460120180240300SE +/- 2.50, N = 3275.09

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 16.0Build System: Ninjav6.44080120160200SE +/- 3.13, N = 12187.32

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested, or alternatively an allmodconfig build for compiling all possible kernel modules. Learn more via the OpenBenchmarking.org test page.
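
The two build targets correspond roughly to the following commands from a kernel source tree (illustrative; the test profile manages its own source tree and job count):

  $ make defconfig && make -j$(nproc)
  $ make allmodconfig && make -j$(nproc)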

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 6.1Build: allmodconfigv6.460120180240300SE +/- 1.69, N = 3288.41

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 6.1Build: defconfigv6.4918273645SE +/- 0.29, N = 1037.32

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
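
A hedged example of the encoder settings exercised below (the input file name is a placeholder; -s selects the encoder speed from 0-10 and --lossless enables lossless encoding):

  $ avifenc -s 6 input.jpg output.avif
  $ avifenc -s 6 --lossless input.jpg output-lossless.avif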

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 1.0Encoder Speed: 10, Losslessv6.41.14192.28383.42574.56765.7095SE +/- 0.005, N = 35.0751. (CXX) g++ options: -O3 -fPIC -lm

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 1.0Encoder Speed: 6, Losslessv6.4246810SE +/- 0.054, N = 36.8921. (CXX) g++ options: -O3 -fPIC -lm

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 1.0Encoder Speed: 6v6.40.83881.67762.51643.35524.194SE +/- 0.005, N = 33.7281. (CXX) g++ options: -O3 -fPIC -lm

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 1.0Encoder Speed: 2v6.4918273645SE +/- 0.32, N = 339.371. (CXX) g++ options: -O3 -fPIC -lm

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 1.0Encoder Speed: 0v6.41632486480SE +/- 0.04, N = 370.741. (CXX) g++ options: -O3 -fPIC -lm

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
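
The integrated benchmark can be run directly with the standalone binary, which by default exercises all available hardware threads:

  $ 7z b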

OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Decompression Ratingv6.490K180K270K360K450KSE +/- 822.46, N = 34003701. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Compression Ratingv6.460K120K180K240K300KSE +/- 2800.68, N = 32835561. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

VVenC

VVenC is the Fraunhofer Versatile Video Encoder as a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.
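
A hedged invocation sketch for the 1080p "Faster" preset measured below (raw YUV input; the file name and frame rate are assumptions):

  $ vvencapp --preset faster -i Bosphorus_1080p.yuv -s 1920x1080 -r 60 -o bosphorus.266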

OpenBenchmarking.orgFrames Per Second, More Is BetterVVenC 1.9Video Input: Bosphorus 1080p - Video Preset: Fasterv6.4714212835SE +/- 0.34, N = 428.271. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

OpenBenchmarking.orgFrames Per Second, More Is BetterVVenC 1.9Video Input: Bosphorus 1080p - Video Preset: Fastv6.448121620SE +/- 0.19, N = 316.341. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

OpenBenchmarking.orgFrames Per Second, More Is BetterVVenC 1.9Video Input: Bosphorus 4K - Video Preset: Fasterv6.4246810SE +/- 0.097, N = 158.4501. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

OpenBenchmarking.orgFrames Per Second, More Is BetterVVenC 1.9Video Input: Bosphorus 4K - Video Preset: Fastv6.41.14192.28383.42574.56765.7095SE +/- 0.058, N = 35.0751. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

SVT-AV1

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 4 - Input: Bosphorus 1080pv6.43691215SE +/- 0.04, N = 311.591. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 4 - Input: Bosphorus 4Kv6.40.92971.85942.78913.71884.6485SE +/- 0.031, N = 154.1321. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.
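
For reference, a two-pass libaom encode comparable to the 4K two-pass results might be launched along these lines (a sketch; the input file and frame rate are placeholders):

  $ aomenc --passes=2 --cpu-used=4 --width=3840 --height=2160 --fps=60/1 -o out.ivf Bosphorus_4K.yuv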

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.7Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4Kv6.4246810SE +/- 0.10, N = 157.071. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.7Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4Kv6.40.08780.17560.26340.35120.439SE +/- 0.00, N = 150.391. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

nekRS

nekRS is an open-source Navier-Stokes solver based on the spectral element method. NekRS supports both CPU and GPU/accelerator execution, though this test profile is currently configured for CPU execution. NekRS is part of Nek5000 of the Mathematics and Computer Science (MCS) division at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large core count HPC servers and otherwise may be very time consuming on smaller systems. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgflops/rank, More Is BetternekRS 23.0Input: TurboPipe Periodicv6.4900M1800M2700M3600M4500MSE +/- 40424400.36, N = 340989533331. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi

OpenBenchmarking.orgflops/rank, More Is BetternekRS 23.0Input: Kershawv6.41300M2600M3900M5200M6500MSE +/- 38898803.89, N = 358659833331. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenRadioss 2023.09.15Model: INIVOL and Fluid Structure Interaction Drop Containerv6.4306090120150SE +/- 0.42, N = 3138.97

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenRadioss 2023.09.15Model: Rubber O-Ring Seal Installationv6.4306090120150SE +/- 0.71, N = 3116.35

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenRadioss 2023.09.15Model: Bird Strike on Windshieldv6.44080120160200SE +/- 0.82, N = 3159.21

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenRadioss 2023.09.15Model: Cell Phone Drop Testv6.4816243240SE +/- 0.01, N = 332.37

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenRadioss 2023.09.15Model: Chrysler Neon 1Mv6.4306090120150SE +/- 0.01, N = 3128.50

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenRadioss 2023.09.15Model: Bumper Beamv6.420406080100SE +/- 0.64, N = 395.17

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 10Input: drivaerFastback, Medium Mesh Size - Execution Timev6.44080120160200195.891. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 10Input: drivaerFastback, Medium Mesh Size - Mesh Timev6.44080120160200175.931. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 10Input: drivaerFastback, Small Mesh Size - Execution Timev6.491827364537.841. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 10Input: drivaerFastback, Small Mesh Size - Mesh Timev6.491827364539.361. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP CFD Solverv6.4246810SE +/- 0.058, N = 38.7131. (CXX) g++ options: -O2 -lOpenCL

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP Leukocytev6.4816243240SE +/- 0.52, N = 1534.171. (CXX) g++ options: -O2 -lOpenCL

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP HotSpot3Dv6.420406080100SE +/- 0.21, N = 380.641. (CXX) g++ options: -O2 -lOpenCL

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP LavaMDv6.4816243240SE +/- 0.23, N = 335.241. (CXX) g++ options: -O2 -lOpenCL

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
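
The MPI binaries are built per test and problem class and then launched with mpirun; a hedged example for the BT class C case (note that BT and SP expect a square number of ranks, so 64 = 8 x 8 is used here rather than the full 112 cores):

  $ make bt CLASS=C
  $ mpirun -np 64 bin/bt.C.x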

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: SP.Cv6.440K80K120K160K200KSE +/- 1944.01, N = 5193221.951. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: SP.Bv6.440K80K120K160K200KSE +/- 1564.28, N = 13176009.151. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: MG.Cv6.450K100K150K200K250KSE +/- 2713.74, N = 15239408.111. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: LU.Cv6.460K120K180K240K300KSE +/- 2141.41, N = 3269542.211. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: FT.Cv6.420K40K60K80K100KSE +/- 1023.16, N = 15103248.131. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: CG.Cv6.411K22K33K44K55KSE +/- 525.17, N = 651894.851. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: BT.Cv6.470K140K210K280K350KSE +/- 2411.99, N = 11322119.991. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient and is a new scientific benchmark from Sandia National Labs focused on supercomputer testing with modern real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.
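
HPCG reads its local problem dimensions and target runtime from an hpcg.dat file; a minimal file matching the 104 104 104 / 60-second configuration would look approximately like this (the first two lines are free-form comment text):

  HPCG benchmark input file
  Xeon Max v6.4 run
  104 104 104
  60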

OpenBenchmarking.orgGFLOP/s, More Is BetterHigh Performance Conjugate Gradient 3.1X Y Z: 144 144 144 - RT: 60v6.420406080100SE +/- 0.76, N = 676.581. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

OpenBenchmarking.orgGFLOP/s, More Is BetterHigh Performance Conjugate Gradient 3.1X Y Z: 104 104 104 - RT: 60v6.420406080100SE +/- 0.65, N = 3104.681. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/second, More Is BetterCrypto++ 8.8Test: Unkeyed Algorithmsv6.4100200300400500SE +/- 0.02, N = 3441.201. (CXX) g++ options: -g2 -O3 -fPIC -fno-devirtualize -pthread -pipe

OpenBenchmarking.orgMiB/second, More Is BetterCrypto++ 8.8Test: Keyed Algorithmsv6.4130260390520650SE +/- 0.03, N = 3604.541. (CXX) g++ options: -g2 -O3 -fPIC -fno-devirtualize -pthread -pipe

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Create - Threads: 500 - Files: 100000v6.417003400510068008500SE +/- 225.23, N = 137703

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Create - Threads: 100 - Files: 100000v6.44K8K12K16K20KSE +/- 518.41, N = 1520233

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Open - Threads: 500 - Files: 1000000v6.450K100K150K200K250KSE +/- 38473.27, N = 12226282

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Open - Threads: 1000 - Files: 100000v6.460K120K180K240K300KSE +/- 5523.57, N = 15274610

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Open - Threads: 100 - Files: 1000000v6.450K100K150K200K250KSE +/- 18734.71, N = 15239582

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Delete - Threads: 50 - Files: 100000v6.413K26K39K52K65KSE +/- 1094.57, N = 1560316

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Create - Threads: 50 - Files: 100000v6.46K12K18K24K30KSE +/- 650.81, N = 1526557

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Open - Threads: 500 - Files: 100000v6.460K120K180K240K300KSE +/- 9141.48, N = 12265652

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Open - Threads: 50 - Files: 1000000v6.470K140K210K280K350KSE +/- 44222.69, N = 15315528

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Open - Threads: 100 - Files: 100000v6.460K120K180K240K300KSE +/- 6349.18, N = 12269034

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Open - Threads: 50 - Files: 100000v6.460K120K180K240K300KSE +/- 9603.74, N = 15294150

OpenVINO

OpenVINO 2023.1 - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU:
  ms, Fewer Is Better | v6.4: 0.57 (SE +/- 0.01, N = 15, MIN: 0.29 / MAX: 74.65)
  FPS, More Is Better | v6.4: 74954.03 (SE +/- 1782.53, N = 15)
  1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
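
OpenVINO ships a benchmark_app utility that reports per-model latency and throughput on a chosen device, which is the style of measurement shown above. The sketch below is illustrative; the model filename and the 60-second duration are assumptions, not settings taken from this result file.

  # Measure CPU latency and throughput for an OpenVINO IR model for 60 seconds
  benchmark_app -m age-gender-recognition-retail-0013.xml -d CPU -t 60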

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 - Target: CPU (ms, Fewer Is Better):
  Model: FastestDet | v6.4: 18.43 (SE +/- 0.91, N = 9, MIN: 14.45 / MAX: 73.8)
  Model: vision_transformer | v6.4: 125.89 (SE +/- 3.72, N = 9, MIN: 72.76 / MAX: 4577.82)
  Model: regnety_400m | v6.4: 151.43 (SE +/- 21.29, N = 9, MIN: 61.81 / MAX: 15083.91)
  Model: squeezenet_ssd | v6.4: 26.70 (SE +/- 1.85, N = 9, MIN: 20.19 / MAX: 1823.29)
  Model: yolov4-tiny | v6.4: 44.93 (SE +/- 3.97, N = 9, MIN: 26.15 / MAX: 1320)
  Model: resnet50 | v6.4: 45.37 (SE +/- 11.02, N = 9, MIN: 21.66 / MAX: 2788.11)
  Model: alexnet | v6.4: 22.78 (SE +/- 7.87, N = 9, MIN: 7.72 / MAX: 654.7)
  Model: resnet18 | v6.4: 15.69 (SE +/- 0.78, N = 9, MIN: 11.72 / MAX: 98.03)
  Model: vgg16 | v6.4: 63.23 (SE +/- 5.87, N = 9, MIN: 32.56 / MAX: 1162.81)
  Model: googlenet | v6.4: 34.58 (SE +/- 4.26, N = 9, MIN: 20.75 / MAX: 3103.94)
  Model: blazeface | v6.4: 9.39 (SE +/- 1.12, N = 9, MIN: 7.14 / MAX: 1661.44)
  Model: efficientnet-b0 | v6.4: 21.53 (SE +/- 1.11, N = 9, MIN: 16.55 / MAX: 1422.37)
  Model: mnasnet | v6.4: 13.54 (SE +/- 1.16, N = 9, MIN: 10.12 / MAX: 1632.45)
  Model: shufflenet-v2 | v6.4: 27.45 (SE +/- 7.18, N = 9, MIN: 13.7 / MAX: 2722.65)
  Model: mobilenet | v6.4: 37.13 (SE +/- 8.29, N = 9, MIN: 19.94 / MAX: 2256.09)
  1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
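
Per-model CPU timings of this kind are what NCNN's bundled benchncnn tool produces. A hedged sketch of a comparable manual run is below; the argument order follows the upstream benchncnn usage, and the loop and thread counts are assumptions for this system.

  # benchncnn <loop count> <num threads> <powersave> <gpu device>; -1 selects CPU-only inference
  ./benchncnn 10 112 0 -1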

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.16.04 (Bogo Ops/s, More Is Better):
  Test: Context Switching | v6.4: 13968013.67 (SE +/- 1050064.40, N = 12)
  Test: Socket Activity | v6.4: 3083.04 (SE +/- 1161.86, N = 15)
  Test: Mixed Scheduler | v6.4: 74205.17 (SE +/- 6830.21, N = 15)
  Test: Memory Copying | v6.4: 25816.50 (SE +/- 629.40, N = 15)
  Test: Matrix Math | v6.4: 396715.18 (SE +/- 15424.87, N = 12)
  Test: SENDFILE | v6.4: 1084831.27 (SE +/- 34998.26, N = 15)
  Test: Forking | v6.4: 28486.68 (SE +/- 3958.39, N = 12)
  Test: Cloning | v6.4: 6428.75 (SE +/- 1413.09, N = 12)
  Test: Crypto | v6.4: 152827.15 (SE +/- 4552.49, N = 12)
  Test: Atomic | v6.4: 17.75 (SE +/- 1.12, N = 12)
  Test: Mutex | v6.4: 23140484.14 (SE +/- 381162.66, N = 15)
  Test: MEMFD | v6.4: 1055.35 (SE +/- 26.29, N = 15)
  Test: Futex | v6.4: 72753.47 (SE +/- 3713.25, N = 13)
  Test: Poll | v6.4: 5307662.30 (SE +/- 324560.78, N = 12)
  Test: Pipe | v6.4: 31203885.98 (SE +/- 487787.75, N = 15)
  Test: Hash | v6.4: 17389901.32 (SE +/- 343354.96, N = 15)
  1. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
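
Each line above corresponds to an individual stress-ng stressor reported in bogo-ops per second. A hedged sketch of a comparable standalone run is below; the stressor chosen, the 60-second duration, and the instance count of 0 (meaning one instance per CPU) are illustrative choices, not the exact options recorded for this result file.

  # Run the matrix stressor on every CPU for 60 seconds and print a bogo-ops/s summary
  stress-ng --matrix 0 --timeout 60s --metrics-brief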

Redis 7.0.12 + memtier_benchmark

memtier_benchmark is a NoSQL Redis/Memcached traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis (Ops/sec, More Is Better):
  Clients: 50 - Set To Get Ratio: 1:10 | v6.4: 2249364.77 (SE +/- 45839.72, N = 15)
  Clients: 100 - Set To Get Ratio: 1:5 | v6.4: 2189050.67 (SE +/- 42441.54, N = 15)
  Clients: 50 - Set To Get Ratio: 1:5 | v6.4: 2331936.55 (SE +/- 57616.61, N = 12)
  1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
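
A comparable manual memtier_benchmark run against a local Redis instance might look like the sketch below. The server address, thread count, and test duration are assumptions; the clients and ratio flags simply mirror one of the configurations listed above (note that --clients in memtier_benchmark is per worker thread).

  # Generate a 1:10 set-to-get workload with 50 clients per thread against a local Redis server
  memtier_benchmark --server=127.0.0.1 --port=6379 --protocol=redis \
      --clients=50 --threads=4 --ratio=1:10 --test-time=60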

PostgreSQL

This is a benchmark of PostgreSQL using its integrated pgbench tool for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 16 - Scaling Factor: 100 - Mode: Read Only:
  Clients: 1000 - Average Latency (ms, Fewer Is Better) | v6.4: 1.570 (SE +/- 0.042, N = 9)
  Clients: 1000 - TPS (More Is Better) | v6.4: 640235 (SE +/- 15922.26, N = 9)
  Clients: 800 - Average Latency (ms, Fewer Is Better) | v6.4: 1.237 (SE +/- 0.032, N = 9)
  Clients: 800 - TPS (More Is Better) | v6.4: 650214 (SE +/- 17253.11, N = 9)
  1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
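
The read-only results above correspond to pgbench's select-only mode. A minimal sketch of an equivalent manual run follows; the database name, job count, and 60-second duration are assumptions rather than settings taken from this result file.

  # Initialize a pgbench database at scaling factor 100, then run a select-only (read-only) test
  pgbench -i -s 100 bench
  pgbench -c 1000 -j 112 -T 60 -S bench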

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100:
  Average Latency (Fewer Is Better) | v6.4: 126.00 (SE +/- 2.71, N = 15, MAX: 30405.42)
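
The iot-benchmark harness referenced above is driven by a config.properties file, and the parameters in this result's title would map onto that configuration roughly as sketched below. The property names are recalled from the upstream iot-benchmark project and should be treated as assumptions that may differ between versions; the DB_SWITCH value is likewise illustrative.

  # iot-benchmark config.properties excerpt (property names and DB_SWITCH value are assumed)
  DB_SWITCH=IoTDB-1.2
  CLIENT_NUMBER=100
  DEVICE_NUMBER=100
  SENSOR_NUMBER=200
  BATCH_SIZE_PER_WRITE=100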

SVT-AV1

SVT-AV1 1.7 (Frames Per Second, More Is Better):
  Encoder Mode: Preset 13 - Input: Bosphorus 1080p | v6.4: 518.77 (SE +/- 28.12, N = 15)
  Encoder Mode: Preset 12 - Input: Bosphorus 1080p | v6.4: 426.84 (SE +/- 29.20, N = 15)
  Encoder Mode: Preset 8 - Input: Bosphorus 1080p | v6.4: 110.86 (SE +/- 4.48, N = 15)
  Encoder Mode: Preset 13 - Input: Bosphorus 4K | v6.4: 145.92 (SE +/- 9.48, N = 12)
  Encoder Mode: Preset 12 - Input: Bosphorus 4K | v6.4: 133.93 (SE +/- 8.17, N = 15)
  Encoder Mode: Preset 8 - Input: Bosphorus 4K | v6.4: 55.03 (SE +/- 3.17, N = 12)
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
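
The encoder modes above correspond to SVT-AV1's --preset speed levels. A hedged sketch of one comparable standalone encode follows; the input filename and output path are assumptions, not paths recorded in this result file.

  # Encode the 1080p Bosphorus clip at preset 13 with the SVT-AV1 sample encoder
  SvtAv1EncApp --preset 13 -i Bosphorus_1920x1080.y4m -b output.ivf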

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.7 - Input: Bosphorus 4K (Frames Per Second, More Is Better):
  Encoder Mode: Speed 11 Realtime | v6.4: 49.09 (SE +/- 1.22, N = 15)
  Encoder Mode: Speed 10 Realtime | v6.4: 44.23 (SE +/- 1.50, N = 12)
  Encoder Mode: Speed 9 Realtime | v6.4: 47.54 (SE +/- 1.51, N = 15)
  Encoder Mode: Speed 8 Realtime | v6.4: 41.67 (SE +/- 1.53, N = 15)
  Encoder Mode: Speed 6 Two-Pass | v6.4: 14.30 (SE +/- 0.38, N = 15)
  Encoder Mode: Speed 6 Realtime | v6.4: 41.51 (SE +/- 0.79, N = 15)
  1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
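
In libaom terms, the realtime speed levels above are selected through aomenc's --cpu-used setting under the --rt usage profile. The sketch below is a hedged example of one such encode; the input filename, thread count, and output path are assumptions rather than details captured in this result file.

  # Realtime AV1 encode of the 4K Bosphorus clip at speed level 10
  aomenc --rt --cpu-used=10 --threads=112 -o output.ivf Bosphorus_3840x2160.y4m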

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 (Seconds, Fewer Is Better):
  Test: OpenMP Streamcluster | v6.4: 42.61 (SE +/- 2.00, N = 15)
  1. (CXX) g++ options: -O2 -lOpenCL

NAS Parallel Benchmarks

NPB, NAS Parallel Benchmarks, is a benchmark developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile offers selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

NAS Parallel Benchmarks 3.4 (Total Mop/s, More Is Better):
  Test / Class: IS.D | v6.4: 3073.61 (SE +/- 127.93, N = 14)
  Test / Class: EP.D | v6.4: 10703.40 (SE +/- 632.03, N = 12)
  Test / Class: EP.C | v6.4: 10844.90 (SE +/- 222.04, N = 12)
  1. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz
  2. Open MPI 4.1.4
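
The test/class identifiers follow NPB's usual kernel.CLASS naming. A hedged sketch of building and running one kernel from the MPI source tree is below; the make target form follows the standard NPB build flow (which also requires a config/make.def), and the rank count is an assumption for this system.

  # Build the EP kernel at problem class D, then run it across 112 MPI ranks
  make ep CLASS=D
  mpirun -np 112 bin/ep.D.x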

232 Results Shown

Apache Hadoop:
  Create - 50 - 1000000
  Rename - 50 - 100000
OpenVINO:
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    ms
    FPS
  Handwritten English Recognition FP16-INT8 - CPU:
    ms
    FPS
  Handwritten English Recognition FP16 - CPU:
    ms
    FPS
  Person Vehicle Bike Detection FP16 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16-INT8 - CPU:
    ms
    FPS
  Machine Translation EN To DE FP16 - CPU:
    ms
    FPS
  Road Segmentation ADAS FP16-INT8 - CPU:
    ms
    FPS
  Face Detection Retail FP16-INT8 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16 - CPU:
    ms
    FPS
  Vehicle Detection FP16-INT8 - CPU:
    ms
    FPS
  Road Segmentation ADAS FP16 - CPU:
    ms
    FPS
  Face Detection Retail FP16 - CPU:
    ms
    FPS
  Face Detection FP16-INT8 - CPU:
    ms
    FPS
  Vehicle Detection FP16 - CPU:
    ms
    FPS
  Person Detection FP32 - CPU:
    ms
    FPS
  Person Detection FP16 - CPU:
    ms
    FPS
  Face Detection FP16 - CPU:
    ms
    FPS
Blender:
  Pabellon Barcelona - CPU-Only
  Barbershop - CPU-Only
  Fishy Cat - CPU-Only
  Classroom - CPU-Only
  BMW27 - CPU-Only
NCNN:
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
Stress-NG:
  System V Message Passing
  Glibc Qsort Data Sorting
  Glibc C String Functions
  Vector Floating Point
  Fused Multiply-Add
  Wide Vector Math
  Vector Shuffle
  Matrix 3D Math
  Floating Point
  x86_64 RdRand
  Function Call
  AVX-512 VNNI
  Vector Math
  Semaphores
  CPU Stress
  CPU Cache
  IO_uring
  AVL Tree
  Pthread
  Malloc
  Zlib
  NUMA
  MMAP
Redis 7.0.12 + memtier_benchmark
TensorFlow:
  CPU - 256 - ResNet-50
  CPU - 64 - ResNet-50
  CPU - 32 - ResNet-50
  CPU - 16 - ResNet-50
PostgreSQL:
  1000 - 1000 - Read Write - Average Latency
  1000 - 1000 - Read Write
  1000 - 800 - Read Write - Average Latency
  1000 - 800 - Read Write
  1000 - 1000 - Read Only - Average Latency
  1000 - 1000 - Read Only
  100 - 1000 - Read Write - Average Latency
  100 - 1000 - Read Write
  1000 - 800 - Read Only - Average Latency
  1000 - 800 - Read Only
  100 - 800 - Read Write - Average Latency
  100 - 800 - Read Write
Apache IoTDB:
  800 - 100 - 800 - 400:
    Average Latency
    point/sec
  800 - 100 - 800 - 100:
    Average Latency
    point/sec
  800 - 100 - 500 - 400:
    Average Latency
    point/sec
  800 - 100 - 500 - 100:
    Average Latency
    point/sec
  800 - 100 - 200 - 400:
    Average Latency
    point/sec
  800 - 100 - 200 - 100:
    Average Latency
    point/sec
  500 - 100 - 800 - 400:
    Average Latency
    point/sec
  500 - 100 - 800 - 100:
    Average Latency
    point/sec
  500 - 100 - 500 - 400:
    Average Latency
    point/sec
  500 - 100 - 500 - 100:
    Average Latency
    point/sec
  500 - 100 - 200 - 400:
    Average Latency
    point/sec
  500 - 100 - 200 - 100:
    Average Latency
    point/sec
  200 - 100 - 800 - 100:
    Average Latency
    point/sec
  200 - 100 - 500 - 100:
    Average Latency
    point/sec
  200 - 100 - 200 - 100:
    Average Latency
    point/sec
  100 - 100 - 800 - 100:
    Average Latency
    point/sec
  100 - 100 - 500 - 100:
    Average Latency
    point/sec
  100 - 100 - 200 - 100:
    point/sec
Timed LLVM Compilation:
  Unix Makefiles
  Ninja
Timed Linux Kernel Compilation:
  allmodconfig
  defconfig
libavif avifenc:
  10, Lossless
  6, Lossless
  6
  2
  0
7-Zip Compression:
  Decompression Rating
  Compression Rating
VVenC:
  Bosphorus 1080p - Faster
  Bosphorus 1080p - Fast
  Bosphorus 4K - Faster
  Bosphorus 4K - Fast
SVT-AV1:
  Preset 4 - Bosphorus 1080p
  Preset 4 - Bosphorus 4K
AOM AV1:
  Speed 4 Two-Pass - Bosphorus 4K
  Speed 0 Two-Pass - Bosphorus 4K
nekRS:
  TurboPipe Periodic
  Kershaw
OpenRadioss:
  INIVOL and Fluid Structure Interaction Drop Container
  Rubber O-Ring Seal Installation
  Bird Strike on Windshield
  Cell Phone Drop Test
  Chrysler Neon 1M
  Bumper Beam
OpenFOAM:
  drivaerFastback, Medium Mesh Size - Execution Time
  drivaerFastback, Medium Mesh Size - Mesh Time
  drivaerFastback, Small Mesh Size - Execution Time
  drivaerFastback, Small Mesh Size - Mesh Time
Rodinia:
  OpenMP CFD Solver
  OpenMP Leukocyte
  OpenMP HotSpot3D
  OpenMP LavaMD
NAS Parallel Benchmarks:
  SP.C
  SP.B
  MG.C
  LU.C
  FT.C
  CG.C
  BT.C
High Performance Conjugate Gradient:
  144 144 144 - 60
  104 104 104 - 60
Crypto++:
  Unkeyed Algorithms
  Keyed Algorithms
Apache Hadoop:
  Create - 500 - 100000
  Create - 100 - 100000
  Open - 500 - 1000000
  Open - 1000 - 100000
  Open - 100 - 1000000
  Delete - 50 - 100000
  Create - 50 - 100000
  Open - 500 - 100000
  Open - 50 - 1000000
  Open - 100 - 100000
  Open - 50 - 100000
OpenVINO:
  Age Gender Recognition Retail 0013 FP16 - CPU:
    ms
    FPS
NCNN:
  CPU - FastestDet
  CPU - vision_transformer
  CPU - regnety_400m
  CPU - squeezenet_ssd
  CPU - yolov4-tiny
  CPU - resnet50
  CPU - alexnet
  CPU - resnet18
  CPU - vgg16
  CPU - googlenet
  CPU - blazeface
  CPU - efficientnet-b0
  CPU - mnasnet
  CPU - shufflenet-v2
  CPU - mobilenet
Stress-NG:
  Context Switching
  Socket Activity
  Mixed Scheduler
  Memory Copying
  Matrix Math
  SENDFILE
  Forking
  Cloning
  Crypto
  Atomic
  Mutex
  MEMFD
  Futex
  Poll
  Pipe
  Hash
Redis 7.0.12 + memtier_benchmark:
  Redis - 50 - 1:10
  Redis - 100 - 1:5
  Redis - 50 - 1:5
PostgreSQL:
  100 - 1000 - Read Only - Average Latency
  100 - 1000 - Read Only
  100 - 800 - Read Only - Average Latency
  100 - 800 - Read Only
Apache IoTDB
SVT-AV1:
  Preset 13 - Bosphorus 1080p
  Preset 12 - Bosphorus 1080p
  Preset 8 - Bosphorus 1080p
  Preset 13 - Bosphorus 4K
  Preset 12 - Bosphorus 4K
  Preset 8 - Bosphorus 4K
AOM AV1:
  Speed 11 Realtime - Bosphorus 4K
  Speed 10 Realtime - Bosphorus 4K
  Speed 9 Realtime - Bosphorus 4K
  Speed 8 Realtime - Bosphorus 4K
  Speed 6 Two-Pass - Bosphorus 4K
  Speed 6 Realtime - Bosphorus 4K
Rodinia
NAS Parallel Benchmarks:
  IS.D
  EP.D
  EP.C