Xeon Max Linux Kernels

2 x Intel Xeon Max 9480 testing with a Supermicro X13DEM v1.10 (1.3 BIOS) and ASPEED on Ubuntu 23.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2310054-NE-XEONMAXLI13

Result Identifier: v6.4
Date Run: October 04 2023
Test Duration: 2 Days, 15 Minutes


System Details

Processor: 2 x Intel Xeon Max 9480 @ 3.50GHz (112 Cores / 224 Threads)
Motherboard: Supermicro X13DEM v1.10 (1.3 BIOS)
Chipset: Intel Device 1bce
Memory: 512GB
Disk: 2 x 7682GB INTEL SSDPF2KX076TZ
Graphics: ASPEED
Monitor: VE228
Network: 2 x Broadcom BCM57508 NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb/200Gb
OS: Ubuntu 23.04
Kernel: 6.4.0-060400-generic (x86_64)
Desktop: GNOME Shell 44.0
Display Server: X Server 1.21.1.7
Compiler: GCC 12.2.0
File-System: ext4
Screen Resolution: 1920x1080

System Logs

- Transparent Huge Pages: madvise
- Compiler configure options: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-Pa930Z/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-Pa930Z/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: intel_cpufreq performance
- CPU Microcode: 0x2c0001d1
- OpenJDK Runtime Environment (build 17.0.6+10-Ubuntu-1ubuntu2)
- Python 3.11.2
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

Result Overview - Xeon Max Linux Kernels (individual test results are detailed in the sections below)

Blender

Blender 3.6 | Blend File: Barbershop - Compute: CPU-Only | Seconds, Fewer Is Better | v6.4: 315.02 (SE +/- 4.74, N = 9)

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.
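
For context on the Operation / Threads / Files labels below, a run of this shape can be reproduced roughly as follows (the class path is Hadoop's NNThroughputBenchmark driver, but the exact invocation used by the test profile is an assumption):

    # Hypothetical sketch: exercise the name-node "open" operation with
    # 500 threads against 1,000,000 files and report operations per second.
    hadoop org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark \
        -op open -threads 500 -files 1000000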

Apache Hadoop 3.3.6 | Operation: Open - Threads: 500 - Files: 1000000 | Ops per sec, More Is Better | v6.4: 226282 (SE +/- 38473.27, N = 12)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.
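
The Device Count / Batch Size Per Write / Sensor Count / Client Number labels below map onto iot-benchmark workload settings. A minimal sketch of how one of these workloads might be expressed in the tool's config.properties (the key names are assumptions inferred from the reported parameters, not copied from this run):

    # Hypothetical iot-benchmark configuration fragment for the 800/100/800/400 workload.
    cat >> conf/config.properties <<'EOF'
    DEVICE_NUMBER=800
    SENSOR_NUMBER=800
    CLIENT_NUMBER=400
    BATCH_SIZE_PER_WRITE=100
    EOF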

Apache IoTDB 1.2 | Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400 | Average Latency, Fewer Is Better | v6.4: 399.86 (SE +/- 3.41, N = 12) | MAX: 31195.45

Apache IoTDB 1.2 | Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400 | point/sec, More Is Better | v6.4: 73678024 (SE +/- 645314.77, N = 12)

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.
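
The per-model latencies below come from ncnn's bundled benchncnn tool. A hedged sketch of a comparable CPU-only invocation (the loop count and thread count here are illustrative, not the test profile's exact values):

    # benchncnn arguments: loop_count num_threads powersave gpu_device cooling_down
    # gpu_device = -1 keeps inference on the CPU, matching the "Target: CPU" results.
    ./benchncnn 8 224 0 -1 0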

NCNN 20230517 | Target: CPU - Model: FastestDet | ms, Fewer Is Better | v6.4: 18.43 (SE +/- 0.91, N = 9) | MIN: 14.45 / MAX: 73.8

NCNN 20230517 | Target: CPU - Model: vision_transformer | ms, Fewer Is Better | v6.4: 125.89 (SE +/- 3.72, N = 9) | MIN: 72.76 / MAX: 4577.82

NCNN 20230517 | Target: CPU - Model: regnety_400m | ms, Fewer Is Better | v6.4: 151.43 (SE +/- 21.29, N = 9) | MIN: 61.81 / MAX: 15083.91

NCNN 20230517 | Target: CPU - Model: squeezenet_ssd | ms, Fewer Is Better | v6.4: 26.70 (SE +/- 1.85, N = 9) | MIN: 20.19 / MAX: 1823.29

NCNN 20230517 | Target: CPU - Model: yolov4-tiny | ms, Fewer Is Better | v6.4: 44.93 (SE +/- 3.97, N = 9) | MIN: 26.15 / MAX: 1320

NCNN 20230517 | Target: CPU - Model: resnet50 | ms, Fewer Is Better | v6.4: 45.37 (SE +/- 11.02, N = 9) | MIN: 21.66 / MAX: 2788.11

NCNN 20230517 | Target: CPU - Model: alexnet | ms, Fewer Is Better | v6.4: 22.78 (SE +/- 7.87, N = 9) | MIN: 7.72 / MAX: 654.7

NCNN 20230517 | Target: CPU - Model: resnet18 | ms, Fewer Is Better | v6.4: 15.69 (SE +/- 0.78, N = 9) | MIN: 11.72 / MAX: 98.03

NCNN 20230517 | Target: CPU - Model: vgg16 | ms, Fewer Is Better | v6.4: 63.23 (SE +/- 5.87, N = 9) | MIN: 32.56 / MAX: 1162.81

NCNN 20230517 | Target: CPU - Model: googlenet | ms, Fewer Is Better | v6.4: 34.58 (SE +/- 4.26, N = 9) | MIN: 20.75 / MAX: 3103.94

NCNN 20230517 | Target: CPU - Model: blazeface | ms, Fewer Is Better | v6.4: 9.39 (SE +/- 1.12, N = 9) | MIN: 7.14 / MAX: 1661.44

NCNN 20230517 | Target: CPU - Model: efficientnet-b0 | ms, Fewer Is Better | v6.4: 21.53 (SE +/- 1.11, N = 9) | MIN: 16.55 / MAX: 1422.37

NCNN 20230517 | Target: CPU - Model: mnasnet | ms, Fewer Is Better | v6.4: 13.54 (SE +/- 1.16, N = 9) | MIN: 10.12 / MAX: 1632.45

NCNN 20230517 | Target: CPU - Model: shufflenet-v2 | ms, Fewer Is Better | v6.4: 27.45 (SE +/- 7.18, N = 9) | MIN: 13.7 / MAX: 2722.65

NCNN 20230517 | Target: CPU-v3-v3 - Model: mobilenet-v3 | ms, Fewer Is Better | v6.4: 16.26 (SE +/- 0.21, N = 9) | MIN: 13.09 / MAX: 150.4

NCNN 20230517 | Target: CPU-v2-v2 - Model: mobilenet-v2 | ms, Fewer Is Better | v6.4: 13.47 (SE +/- 0.22, N = 9) | MIN: 11.45 / MAX: 59.13

NCNN 20230517 | Target: CPU - Model: mobilenet | ms, Fewer Is Better | v6.4: 37.13 (SE +/- 8.29, N = 9) | MIN: 19.94 / MAX: 2256.09

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
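
For context on the Scaling Factor / Clients / Mode labels used below, a pgbench run of this shape looks roughly like the following (database name and duration are placeholders; -S selects the read-only workload):

    # Initialize a pgbench database at scaling factor 1000, then drive it
    # with 1000 client connections in read-only (select-only) mode.
    pgbench -i -s 1000 benchdb
    pgbench -c 1000 -j 64 -S -T 60 benchdb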

PostgreSQL 16 | Scaling Factor: 1000 - Clients: 1000 - Mode: Read Write - Average Latency | ms, Fewer Is Better | v6.4: 50.36 (SE +/- 0.55, N = 12)

PostgreSQL 16 | Scaling Factor: 1000 - Clients: 1000 - Mode: Read Write | TPS, More Is Better | v6.4: 19884 (SE +/- 218.15, N = 12)

PostgreSQL 16 | Scaling Factor: 1000 - Clients: 800 - Mode: Read Write - Average Latency | ms, Fewer Is Better | v6.4: 39.25 (SE +/- 0.47, N = 12)

PostgreSQL 16 | Scaling Factor: 1000 - Clients: 800 - Mode: Read Write | TPS, More Is Better | v6.4: 20416 (SE +/- 240.76, N = 12)

PostgreSQL 16 | Scaling Factor: 1000 - Clients: 1000 - Mode: Read Only - Average Latency | ms, Fewer Is Better | v6.4: 1.411 (SE +/- 0.018, N = 12)

PostgreSQL 16 | Scaling Factor: 1000 - Clients: 1000 - Mode: Read Only | TPS, More Is Better | v6.4: 710079 (SE +/- 8915.31, N = 12)

Timed LLVM Compilation

This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.
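
A minimal sketch of the two build systems being timed; the generator is the only intended difference (checkout path and CMake flags are assumptions, not the test profile's exact configuration):

    # Ninja-driven build
    cmake -S llvm -B build-ninja -G Ninja -DCMAKE_BUILD_TYPE=Release
    ninja -C build-ninja

    # Unix Makefiles build
    cmake -S llvm -B build-make -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=Release
    make -C build-make -j"$(nproc)"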

Timed LLVM Compilation 16.0 | Build System: Ninja | Seconds, Fewer Is Better | v6.4: 187.32 (SE +/- 3.13, N = 12)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 | Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 | Average Latency, Fewer Is Better | v6.4: 277.40 (SE +/- 2.23, N = 12) | MAX: 30379.43

Apache IoTDB 1.2 | Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 | point/sec, More Is Better | v6.4: 65784478 (SE +/- 524197.72, N = 12)

Apache IoTDB 1.2 | Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 | Average Latency, Fewer Is Better | v6.4: 77.00 (SE +/- 0.85, N = 15) | MAX: 12622.02

Apache IoTDB 1.2 | Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 | point/sec, More Is Better | v6.4: 59376956 (SE +/- 581785.82, N = 15)

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient, a newer scientific benchmark from Sandia National Labs focused on supercomputer testing with modern real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.
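
The "X Y Z - RT" labels below map onto HPCG's input file: the local subgrid dimensions and the target runtime in seconds. A sketch of the corresponding hpcg.dat for the 144x144x144 / 60-second case (the first two lines of the file are free-form comments; the MPI launch line is an assumption):

    cat > hpcg.dat <<'EOF'
    HPCG benchmark input file
    Problem size and runtime follow
    144 144 144
    60
    EOF
    mpirun -np "$(nproc)" ./xhpcg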

High Performance Conjugate Gradient 3.1 | X Y Z: 144 144 144 - RT: 60 | GFLOP/s, More Is Better | v6.4: 76.58 (SE +/- 0.76, N = 6)

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Crypto++ 8.8 | Test: Keyed Algorithms | MiB/second, More Is Better | v6.4: 604.54 (SE +/- 0.03, N = 3)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.
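
Each encoder mode below pairs a libaom --cpu-used speed level with either two-pass or realtime operation. A hedged example for the Speed 4 Two-Pass case using the reference aomenc tool (the input file name and thread count are illustrative):

    # Two-pass libaom encode of the Bosphorus 4K clip at speed level 4.
    aomenc --passes=2 --cpu-used=4 --threads="$(nproc)" \
        -o Bosphorus_4K.ivf Bosphorus_3840x2160.y4m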

AOM AV1 3.7 | Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K | Frames Per Second, More Is Better | v6.4: 7.07 (SE +/- 0.10, N = 15)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 | Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400 | Average Latency, Fewer Is Better | v6.4: 140.40 (SE +/- 1.11, N = 15) | MAX: 27765.12

Apache IoTDB 1.2 | Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400 | point/sec, More Is Better | v6.4: 50423542 (SE +/- 396843.42, N = 15)

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 | Operation: Open - Threads: 100 - Files: 1000000 | Ops per sec, More Is Better | v6.4: 239582 (SE +/- 18734.71, N = 15)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 | Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 | Average Latency, Fewer Is Better | v6.4: 35.31 (SE +/- 0.38, N = 15) | MAX: 23938.81

Apache IoTDB 1.2 | Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 | point/sec, More Is Better | v6.4: 51248299 (SE +/- 482700.73, N = 15)

Apache IoTDB 1.2 | Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 | Average Latency, Fewer Is Better | v6.4: 265.99 (SE +/- 3.08, N = 10) | MAX: 30115.56

Apache IoTDB 1.2 | Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 | point/sec, More Is Better | v6.4: 58096600 (SE +/- 450916.16, N = 10)

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
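
A rough sketch of the underlying tf_cnn_benchmarks invocation behind the ResNet-50 CPU numbers below (these flags exist in the upstream script, but the exact set used by the test profile is not shown in this file):

    # Synthetic-data ResNet-50 throughput on the CPU with a batch size of 256.
    python tf_cnn_benchmarks.py --device=cpu --data_format=NHWC \
        --model=resnet50 --batch_size=256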

TensorFlow 2.12 | Device: CPU - Batch Size: 256 - Model: ResNet-50 | images/sec, More Is Better | v6.4: 65.22 (SE +/- 0.29, N = 3)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 | Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 | Average Latency, Fewer Is Better | v6.4: 125.95 (SE +/- 1.53, N = 13) | MAX: 23943.82

Apache IoTDB 1.2 | Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 | point/sec, More Is Better | v6.4: 52514008 (SE +/- 506247.74, N = 13)

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 | Operation: Open - Threads: 50 - Files: 1000000 | Ops per sec, More Is Better | v6.4: 315528 (SE +/- 44222.69, N = 15)

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 16 | Scaling Factor: 100 - Clients: 800 - Mode: Read Only - Average Latency | ms, Fewer Is Better | v6.4: 1.237 (SE +/- 0.032, N = 9)

PostgreSQL 16 | Scaling Factor: 100 - Clients: 800 - Mode: Read Only | TPS, More Is Better | v6.4: 650214 (SE +/- 17253.11, N = 9)

PostgreSQL 16 | Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - Average Latency | ms, Fewer Is Better | v6.4: 1.570 (SE +/- 0.042, N = 9)

PostgreSQL 16 | Scaling Factor: 100 - Clients: 1000 - Mode: Read Only | TPS, More Is Better | v6.4: 640235 (SE +/- 15922.26, N = 9)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 | Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 | Average Latency, Fewer Is Better | v6.4: 42.68 (SE +/- 0.46, N = 15) | MAX: 15117.6

Apache IoTDB 1.2 | Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 | point/sec, More Is Better | v6.4: 41675265 (SE +/- 387575.50, N = 15)

Apache IoTDB 1.2 | Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 | Average Latency, Fewer Is Better | v6.4: 97.16 (SE +/- 0.96, N = 15) | MAX: 24088.47

Apache IoTDB 1.2 | Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 | point/sec, More Is Better | v6.4: 43036898 (SE +/- 358083.32, N = 15)

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9 | Video Input: Bosphorus 4K - Video Preset: Faster | Frames Per Second, More Is Better | v6.4: 8.450 (SE +/- 0.097, N = 15)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 | Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 | Average Latency, Fewer Is Better | v6.4: 155.22 (SE +/- 1.48, N = 15) | MAX: 27068.23

Apache IoTDB 1.2 | Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 | point/sec, More Is Better | v6.4: 39122053 (SE +/- 432863.04, N = 15)

Redis 7.0.12 + memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
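
The Clients / Set To Get Ratio labels below correspond to memtier_benchmark options. A hedged example for the 50-client, 1:10 configuration (server address, thread count, and duration are placeholders):

    # Redis protocol with a SET:GET ratio of 1:10; --clients is per benchmark thread.
    memtier_benchmark -s 127.0.0.1 -p 6379 --protocol=redis \
        --threads=1 --clients=50 --ratio=1:10 --test-time=60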

Redis 7.0.12 + memtier_benchmark 2.0 | Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10 | Ops/sec, More Is Better | v6.4: 2249364.77 (SE +/- 45839.72, N = 15)

Redis 7.0.12 + memtier_benchmark 2.0 | Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5 | Ops/sec, More Is Better | v6.4: 2189050.67 (SE +/- 42441.54, N = 15)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 | Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 | Average Latency, Fewer Is Better | v6.4: 130.81 (SE +/- 1.85, N = 15) | MAX: 27828.94

Apache IoTDB 1.2 | Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 | point/sec, More Is Better | v6.4: 30189985 (SE +/- 332901.55, N = 15)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.
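
Each result below is the bogo-ops/s figure of a single stressor. A minimal sketch of running one stressor in the same style (the stressor choice and duration here are illustrative):

    # Run the pipe stressor with one instance per CPU (0 = auto) for 60 seconds
    # and print the bogo-ops/s summary that these graphs report.
    stress-ng --pipe 0 --timeout 60 --metrics-brief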

Stress-NG 0.16.04 | Test: Atomic | Bogo Ops/s, More Is Better | v6.4: 17.75 (SE +/- 1.12, N = 12)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 | Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 | Average Latency, Fewer Is Better | v6.4: 126.00 (SE +/- 2.71, N = 15) | MAX: 30405.42

Apache IoTDB 1.2 | Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 | point/sec, More Is Better | v6.4: 12977461 (SE +/- 186146.39, N = 15)

OpenVINO

OpenVINO 2023.1 | Model: Weld Porosity Detection FP16-INT8 - Device: CPU | ms, Fewer Is Better | v6.4: 4.23 (SE +/- 0.03, N = 15) | MIN: 2.61 / MAX: 111.71

OpenVINO 2023.1 | Model: Weld Porosity Detection FP16-INT8 - Device: CPU | FPS, More Is Better | v6.4: 25464.62 (SE +/- 231.74, N = 15)

OpenVINO 2023.1 | Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU | ms, Fewer Is Better | v6.4: 0.57 (SE +/- 0.01, N = 15) | MIN: 0.29 / MAX: 74.65

OpenVINO 2023.1 | Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU | FPS, More Is Better | v6.4: 74954.03 (SE +/- 1782.53, N = 15)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.7 | Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K | Frames Per Second, More Is Better | v6.4: 14.30 (SE +/- 0.38, N = 15)

OpenVINO

OpenVINO 2023.1 | Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU | ms, Fewer Is Better | v6.4: 0.34 (SE +/- 0.00, N = 15) | MIN: 0.25 / MAX: 36.53

OpenVINO 2023.1 | Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU | FPS, More Is Better | v6.4: 112475.34 (SE +/- 1041.05, N = 15)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.
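
The two configurations being timed differ only in the configuration step; a minimal sketch from a kernel source tree:

    # defconfig: the default configuration for the target architecture
    make defconfig && make -j"$(nproc)"

    # allmodconfig: enable and build every possible kernel module
    make allmodconfig && make -j"$(nproc)"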

Timed Linux Kernel Compilation 6.1 | Build: allmodconfig | Seconds, Fewer Is Better | v6.4: 288.41 (SE +/- 1.69, N = 3)

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 | Operation: Open - Threads: 1000 - Files: 100000 | Ops per sec, More Is Better | v6.4: 274610 (SE +/- 5523.57, N = 15)

Redis 7.0.12 + memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.

Redis 7.0.12 + memtier_benchmark 2.0 | Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5 | Ops/sec, More Is Better | v6.4: 2331936.55 (SE +/- 57616.61, N = 12)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 | Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 | Average Latency, Fewer Is Better | v6.4: 104.93 (SE +/- 0.13, N = 3) | MAX: 23900.89

Apache IoTDB 1.2 | Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 | point/sec, More Is Better | v6.4: 72605513 (SE +/- 265706.48, N = 3)

Timed LLVM Compilation

This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

Timed LLVM Compilation 16.0 | Build System: Unix Makefiles | Seconds, Fewer Is Better | v6.4: 275.09 (SE +/- 2.50, N = 3)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 | Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 | Average Latency, Fewer Is Better | v6.4: 70.77 (SE +/- 0.55, N = 12) | MAX: 23941.04

Apache IoTDB 1.2 | Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 | point/sec, More Is Better | v6.4: 24582941 (SE +/- 171233.24, N = 12)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.7 | Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K | Frames Per Second, More Is Better | v6.4: 0.39 (SE +/- 0.00, N = 15)

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Crypto++ 8.8 | Test: Unkeyed Algorithms | MiB/second, More Is Better | v6.4: 441.20 (SE +/- 0.02, N = 3)

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 | Operation: Open - Threads: 500 - Files: 100000 | Ops per sec, More Is Better | v6.4: 265652 (SE +/- 9141.48, N = 12)

Apache Hadoop 3.3.6 | Operation: Delete - Threads: 50 - Files: 100000 | Ops per sec, More Is Better | v6.4: 60316 (SE +/- 1094.57, N = 15)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 | Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400 | Average Latency, Fewer Is Better | v6.4: 370.82 (SE +/- 4.26, N = 3) | MAX: 30958.08

Apache IoTDB 1.2 | Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400 | point/sec, More Is Better | v6.4: 66098942 (SE +/- 525580.37, N = 3)

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 | Operation: Create - Threads: 500 - Files: 100000 | Ops per sec, More Is Better | v6.4: 7703 (SE +/- 225.23, N = 13)

SVT-AV1

SVT-AV1 1.7 | Encoder Mode: Preset 4 - Input: Bosphorus 4K | Frames Per Second, More Is Better | v6.4: 4.132 (SE +/- 0.031, N = 15)

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 | Test: OpenMP Streamcluster | Seconds, Fewer Is Better | v6.4: 42.61 (SE +/- 2.00, N = 15)

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 | Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 | Average Latency, Fewer Is Better | v6.4: 111.75 (SE +/- 0.10, N = 3) | MAX: 10157.01

Apache IoTDB 1.2 | Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 | point/sec, More Is Better | v6.4: 67045377 (SE +/- 46596.93, N = 3)

Apache IoTDB 1.2 | Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 | Average Latency, Fewer Is Better | v6.4: 69.88 (SE +/- 0.22, N = 3) | MAX: 23895.2

Apache IoTDB 1.2 | Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 | point/sec, More Is Better | v6.4: 66287906 (SE +/- 690519.22, N = 3)

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 | Operation: Open - Threads: 50 - Files: 100000 | Ops per sec, More Is Better | v6.4: 294150 (SE +/- 9603.74, N = 15)

Apache Hadoop 3.3.6 | Operation: Create - Threads: 100 - Files: 100000 | Ops per sec, More Is Better | v6.4: 20233 (SE +/- 518.41, N = 15)

Apache Hadoop 3.3.6 | Operation: Create - Threads: 50 - Files: 100000 | Ops per sec, More Is Better | v6.4: 26557 (SE +/- 650.81, N = 15)

Apache Hadoop 3.3.6 | Operation: Rename - Threads: 50 - Files: 100000 | Ops per sec, More Is Better | v6.4: 47745 (SE +/- 708.56, N = 14)

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 16 | Scaling Factor: 1000 - Clients: 800 - Mode: Read Only - Average Latency | ms, Fewer Is Better | v6.4: 1.141 (SE +/- 0.016, N = 3)

PostgreSQL 16 | Scaling Factor: 1000 - Clients: 800 - Mode: Read Only | TPS, More Is Better | v6.4: 701213 (SE +/- 9464.82, N = 3)

PostgreSQL 16 | Scaling Factor: 100 - Clients: 1000 - Mode: Read Write - Average Latency | ms, Fewer Is Better | v6.4: 23.30 (SE +/- 0.29, N = 4)

PostgreSQL 16 | Scaling Factor: 100 - Clients: 1000 - Mode: Read Write | TPS, More Is Better | v6.4: 42947 (SE +/- 535.00, N = 4)

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in name-node throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.

Apache Hadoop 3.3.6 | Operation: Open - Threads: 100 - Files: 100000 | Ops per sec, More Is Better | v6.4: 269034 (SE +/- 6349.18, N = 12)

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 | Test: OpenMP Leukocyte | Seconds, Fewer Is Better | v6.4: 34.17 (SE +/- 0.52, N = 15)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.16.04 | Test: IO_uring | Bogo Ops/s, More Is Better | v6.4: 2619140.96 (SE +/- 33896.79, N = 15)

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2023.09.15 | Model: Bird Strike on Windshield | Seconds, Fewer Is Better | v6.4: 159.21 (SE +/- 0.82, N = 3)

OpenRadioss 2023.09.15 | Model: Chrysler Neon 1M | Seconds, Fewer Is Better | v6.4: 128.50 (SE +/- 0.01, N = 3)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.16.04 | Test: Pipe | Bogo Ops/s, More Is Better | v6.4: 31203885.98 (SE +/- 487787.75, N = 15)

Stress-NG 0.16.04 | Test: Fused Multiply-Add | Bogo Ops/s, More Is Better | v6.4: 253366692.27 (SE +/- 2984105.55, N = 15)

Stress-NG 0.16.04 | Test: Mutex | Bogo Ops/s, More Is Better | v6.4: 23140484.14 (SE +/- 381162.66, N = 15)

Stress-NG 0.16.04 | Test: Function Call | Bogo Ops/s, More Is Better | v6.4: 63249.99 (SE +/- 811.29, N = 15)

Stress-NG 0.16.04 | Test: Glibc C String Functions | Bogo Ops/s, More Is Better | v6.4: 81547074.59 (SE +/- 906580.48, N = 15)

Stress-NG 0.16.04 | Test: Memory Copying | Bogo Ops/s, More Is Better | v6.4: 25816.50 (SE +/- 629.40, N = 15)

Stress-NG 0.16.04 | Test: Hash | Bogo Ops/s, More Is Better | v6.4: 17389901.32 (SE +/- 343354.96, N = 15)

Stress-NG 0.16.04 | Test: Floating Point | Bogo Ops/s, More Is Better | v6.4: 31253.93 (SE +/- 348.48, N = 15)

Stress-NG 0.16.04 | Test: SENDFILE | Bogo Ops/s, More Is Better | v6.4: 1084831.27 (SE +/- 34998.26, N = 15)

Stress-NG 0.16.04 | Test: Socket Activity | Bogo Ops/s, More Is Better | v6.4: 3083.04 (SE +/- 1161.86, N = 15)

Stress-NG 0.16.04 | Test: NUMA | Bogo Ops/s, More Is Better | v6.4: 514.09 (SE +/- 7.82, N = 15)

Stress-NG 0.16.04 | Test: Mixed Scheduler | Bogo Ops/s, More Is Better | v6.4: 74205.17 (SE +/- 6830.21, N = 15)

Stress-NG 0.16.04 | Test: MEMFD | Bogo Ops/s, More Is Better | v6.4: 1055.35 (SE +/- 26.29, N = 15)

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2023.09.15 | Model: INIVOL and Fluid Structure Interaction Drop Container | Seconds, Fewer Is Better | v6.4: 138.97 (SE +/- 0.42, N = 3)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.16.04 | Test: Futex | Bogo Ops/s, More Is Better | v6.4: 72753.47 (SE +/- 3713.25, N = 13)

Stress-NG 0.16.04 | Test: Forking | Bogo Ops/s, More Is Better | v6.4: 28486.68 (SE +/- 3958.39, N = 12)

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.12 | Device: CPU - Batch Size: 64 - Model: ResNet-50 | images/sec, More Is Better | v6.4: 53.04 (SE +/- 0.72, N = 3)

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 16 | Scaling Factor: 100 - Clients: 800 - Mode: Read Write - Average Latency | ms, Fewer Is Better | v6.4: 17.56 (SE +/- 0.22, N = 3)

PostgreSQL 16 | Scaling Factor: 100 - Clients: 800 - Mode: Read Write | TPS, More Is Better | v6.4: 45581 (SE +/- 557.77, N = 3)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.16.04 | Test: Cloning | Bogo Ops/s, More Is Better | v6.4: 6428.75 (SE +/- 1413.09, N = 12)

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient, a newer scientific benchmark from Sandia National Labs focused on supercomputer testing with modern real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.

High Performance Conjugate Gradient 3.1 | X Y Z: 104 104 104 - RT: 60 | GFLOP/s, More Is Better | v6.4: 104.68 (SE +/- 0.65, N = 3)

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

Stress-NG 0.16.04 | Test: Matrix Math | Bogo Ops/s, More Is Better | v6.4: 396715.18 (SE +/- 15424.87, N = 12)

Stress-NG 0.16.04 | Test: Context Switching | Bogo Ops/s, More Is Better | v6.4: 13968013.67 (SE +/- 1050064.40, N = 12)

Stress-NG 0.16.04 | Test: Crypto | Bogo Ops/s, More Is Better | v6.4: 152827.15 (SE +/- 4552.49, N = 12)

Stress-NG 0.16.04 | Test: Poll | Bogo Ops/s, More Is Better | v6.4: 5307662.30 (SE +/- 324560.78, N = 12)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 6.1 | Build: defconfig | Seconds, Fewer Is Better | v6.4: 37.32 (SE +/- 0.29, N = 10)

OpenVINO

OpenVINO 2023.1 | Model: Vehicle Detection FP16 - Device: CPU | ms, Fewer Is Better | v6.4: 14.35 (SE +/- 0.13, N = 6) | MIN: 9.07 / MAX: 112.91

OpenVINO 2023.1 | Model: Vehicle Detection FP16 - Device: CPU | FPS, More Is Better | v6.4: 2572.63 (SE +/- 24.17, N = 6)

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9 | Video Input: Bosphorus 4K - Video Preset: Fast | Frames Per Second, More Is Better | v6.4: 5.075 (SE +/- 0.058, N = 3)

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2023.09.15 | Model: Rubber O-Ring Seal Installation | Seconds, Fewer Is Better | v6.4: 116.35 (SE +/- 0.71, N = 3)

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.

OpenFOAM 10 | Input: drivaerFastback, Medium Mesh Size - Execution Time | Seconds, Fewer Is Better | v6.4: 195.89

OpenFOAM 10 | Input: drivaerFastback, Medium Mesh Size - Mesh Time | Seconds, Fewer Is Better | v6.4: 175.93

Apache IoTDB

Apache IoTDB is a time series database and this benchmark is facilitated using the IoT Benchmark [https://github.com/thulab/iot-benchmark/]. Learn more via the OpenBenchmarking.org test page.

Apache IoTDB 1.2 | Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400 | Average Latency, Fewer Is Better | v6.4: 163.36 (SE +/- 2.09, N = 3) | MAX: 27171.99

Apache IoTDB 1.2 | Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400 | point/sec, More Is Better | v6.4: 40520901 (SE +/- 245644.87, N = 3)

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2023.09.15 | Model: Bumper Beam | Seconds, Fewer Is Better | v6.4: 95.17 (SE +/- 0.64, N = 3)

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.12 | Device: CPU - Batch Size: 32 - Model: ResNet-50 | images/sec, More Is Better | v6.4: 45.20 (SE +/- 0.54, N = 3)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.7 | Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K | Frames Per Second, More Is Better | v6.4: 41.51 (SE +/- 0.79, N = 15)

AOM AV1 3.7 | Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K | Frames Per Second, More Is Better | v6.4: 41.67 (SE +/- 1.53, N = 15)

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

Rodinia 3.1 | Test: OpenMP HotSpot3D | Seconds, Fewer Is Better | v6.4: 80.64 (SE +/- 0.21, N = 3)

Blender

Blender 3.6 | Blend File: Pabellon Barcelona - Compute: CPU-Only | Seconds, Fewer Is Better | v6.4: 74.88 (SE +/- 0.36, N = 3)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.7Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4Kv6.41122334455SE +/- 1.51, N = 1547.541. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

OpenVINO

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Face Detection FP16-INT8 - Device: CPUv6.470140210280350SE +/- 0.31, N = 3337.22MIN: 252.67 / MAX: 470.111. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Face Detection FP16-INT8 - Device: CPUv6.470140210280350SE +/- 0.28, N = 3331.201. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
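
A hedged example of corresponding avifenc invocations; the filenames are placeholders and the lossless switch is assumed to follow current avifenc usage:

    # Slowest/highest-effort encode (Encoder Speed: 0).
    avifenc --speed 0 input.jpg output.avif
    # Faster lossless encode (Encoder Speed: 6, Lossless).
    avifenc --speed 6 --lossless input.jpg output_lossless.avif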

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 1.0Encoder Speed: 0v6.41632486480SE +/- 0.04, N = 370.741. (CXX) g++ options: -O3 -fPIC -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.7Encoder Mode: Speed 11 Realtime - Input: Bosphorus 4Kv6.41122334455SE +/- 1.22, N = 1549.091. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.
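
For reference, the integrated benchmark can be invoked directly; by default it sizes the dictionary and thread count automatically, so the command below is a minimal sketch:

    # Run 7-Zip's built-in compression/decompression benchmark.
    7z b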

OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Decompression Ratingv6.490K180K270K360K450KSE +/- 822.46, N = 34003701. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Compression Ratingv6.460K120K180K240K300KSE +/- 2800.68, N = 32835561. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Redis 7.0.12 + memtier_benchmark

Memtier_benchmark is a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
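
A hedged sketch of a comparable memtier_benchmark run against a local Redis instance; mapping the "Clients: 100" figure to four threads of 25 connections each is an assumption, as the test profile's exact parameters are not shown here:

    # 4 threads x 25 connections = 100 clients, 1:10 set-to-get ratio.
    memtier_benchmark -s 127.0.0.1 -p 6379 --protocol=redis \
        --threads=4 --clients=25 --ratio=1:10 --test-time=60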

OpenBenchmarking.orgOps/sec, More Is BetterRedis 7.0.12 + memtier_benchmark 2.0Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10v6.4500K1000K1500K2000K2500KSE +/- 23334.59, N = 32262895.861. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

OpenVINO

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Person Vehicle Bike Detection FP16 - Device: CPUv6.448121620SE +/- 0.02, N = 317.70MIN: 12.28 / MAX: 42.461. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Person Vehicle Bike Detection FP16 - Device: CPUv6.414002800420056007000SE +/- 8.07, N = 36316.441. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Face Detection FP16 - Device: CPUv6.460120180240300SE +/- 1.25, N = 3293.6MIN: 175.19 / MAX: 761.951. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Face Detection FP16 - Device: CPUv6.4306090120150SE +/- 0.59, N = 3125.691. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

nekRS

nekRS is an open-source Navier-Stokes solver based on the spectral element method. nekRS supports both CPU and GPU/accelerator execution, though this test profile is currently configured for CPU execution. nekRS is part of Nek5000 from the Mathematics and Computer Science (MCS) Division at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large core count HPC servers and otherwise may be very time consuming on smaller systems. Learn more via the OpenBenchmarking.org test page.
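
As a rough, hedged sketch only (the turbPipe case name and the rank count are assumptions, and nekRS cases are normally prepared with .par/.re2/.udf files beforehand):

    # Launch nekRS across MPI ranks for a prepared case.
    mpirun -np 112 nekrs --setup turbPipe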

OpenBenchmarking.orgflops/rank, More Is BetternekRS 23.0Input: TurboPipe Periodicv6.4900M1800M2700M3600M4500MSE +/- 40424400.36, N = 340989533331. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi

Apache Hadoop

This is a benchmark of Apache Hadoop making use of its built-in NameNode throughput benchmark (NNThroughputBenchmark). Learn more via the OpenBenchmarking.org test page.
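
A hedged example of invoking NNThroughputBenchmark directly; the operation, thread, and file arguments mirror the result titles, though exact option handling can differ between Hadoop releases:

    # Create-operation throughput with 50 threads and 100,000 files.
    hadoop org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark \
        -op create -threads 50 -files 100000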

OpenBenchmarking.orgOps per sec, More Is BetterApache Hadoop 3.3.6Operation: Create - Threads: 50 - Files: 1000000v6.48K16K24K32K40KSE +/- 371.08, N = 337254

OpenVINO

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Road Segmentation ADAS FP16-INT8 - Device: CPUv6.420406080100SE +/- 0.13, N = 374.40MIN: 53.51 / MAX: 281.31. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Road Segmentation ADAS FP16-INT8 - Device: CPUv6.430060090012001500SE +/- 2.60, N = 31503.541. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Person Detection FP32 - Device: CPUv6.420406080100SE +/- 0.95, N = 377.70MIN: 49.99 / MAX: 537.561. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Person Detection FP32 - Device: CPUv6.4100200300400500SE +/- 5.84, N = 3475.661. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Person Detection FP16 - Device: CPUv6.420406080100SE +/- 0.97, N = 377.97MIN: 47.82 / MAX: 365.811. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Person Detection FP16 - Device: CPUv6.4100200300400500SE +/- 5.93, N = 3474.051. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Machine Translation EN To DE FP16 - Device: CPUv6.41224364860SE +/- 0.43, N = 355.31MIN: 38.54 / MAX: 312.661. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Machine Translation EN To DE FP16 - Device: CPUv6.4140280420560700SE +/- 5.12, N = 3667.641. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Vehicle Detection FP16-INT8 - Device: CPUv6.4510152025SE +/- 0.09, N = 322.28MIN: 14.79 / MAX: 71.651. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Vehicle Detection FP16-INT8 - Device: CPUv6.411002200330044005500SE +/- 20.34, N = 35018.031. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.7Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4Kv6.41020304050SE +/- 1.50, N = 1244.231. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

OpenVINO

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Road Segmentation ADAS FP16 - Device: CPUv6.4816243240SE +/- 0.39, N = 333.21MIN: 23.34 / MAX: 169.711. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Road Segmentation ADAS FP16 - Device: CPUv6.42004006008001000SE +/- 13.29, N = 31112.461. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Face Detection Retail FP16 - Device: CPUv6.43691215SE +/- 0.01, N = 39.85MIN: 7.51 / MAX: 55.051. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Face Detection Retail FP16 - Device: CPUv6.42K4K6K8K10KSE +/- 16.48, N = 311342.541. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Face Detection Retail FP16-INT8 - Device: CPUv6.4246810SE +/- 0.02, N = 36.89MIN: 5.52 / MAX: 36.521. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Face Detection Retail FP16-INT8 - Device: CPUv6.43K6K9K12K15KSE +/- 27.81, N = 316182.551. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Weld Porosity Detection FP16 - Device: CPUv6.4246810SE +/- 0.09, N = 36.60MIN: 4.28 / MAX: 87.641. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Weld Porosity Detection FP16 - Device: CPUv6.44K8K12K16K20KSE +/- 240.02, N = 316690.361. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Handwritten English Recognition FP16 - Device: CPUv6.4714212835SE +/- 0.09, N = 331.10MIN: 24.91 / MAX: 99.631. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Handwritten English Recognition FP16 - Device: CPUv6.48001600240032004000SE +/- 9.62, N = 33597.651. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgms, Fewer Is BetterOpenVINO 2023.1Model: Handwritten English Recognition FP16-INT8 - Device: CPUv6.41020304050SE +/- 0.11, N = 345.63MIN: 38.64 / MAX: 91.41. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenBenchmarking.orgFPS, More Is BetterOpenVINO 2023.1Model: Handwritten English Recognition FP16-INT8 - Device: CPUv6.45001000150020002500SE +/- 5.85, N = 32452.141. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.
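
For illustration, a hedged sketch of building and running one NPB kernel with the MPI build; the rank count is an assumption and some kernels restrict the allowed number of processes:

    # Build the EP kernel at class D, then run it across MPI ranks.
    make ep CLASS=D
    mpirun -np 112 ./bin/ep.D.x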

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: EP.Dv6.42K4K6K8K10KSE +/- 632.03, N = 1210703.401. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: IS.Dv6.47001400210028003500SE +/- 127.93, N = 143073.611. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

SVT-AV1

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 8 - Input: Bosphorus 4Kv6.41224364860SE +/- 3.17, N = 1255.031. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Blender

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: Classroom - Compute: CPU-Onlyv6.41224364860SE +/- 0.28, N = 353.14

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 16 - Model: ResNet-50v6.4816243240SE +/- 0.35, N = 336.85

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.
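
As a small, hedged example of running a single stressor in the same spirit as the results below (a worker count of 0 means one instance per online CPU):

    # Vector math stressor on every CPU for 60 seconds, with summary metrics.
    stress-ng --vecmath 0 --timeout 60 --metrics-brief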

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Vector Mathv6.480K160K240K320K400KSE +/- 3943.55, N = 5381412.821. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

nekRS

nekRS is an open-source Navier-Stokes solver based on the spectral element method. nekRS supports both CPU and GPU/accelerator execution, though this test profile is currently configured for CPU execution. nekRS is part of Nek5000 from the Mathematics and Computer Science (MCS) Division at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large core count HPC servers and otherwise may be very time consuming on smaller systems. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgflops/rank, More Is BetternekRS 23.0Input: Kershawv6.41300M2600M3900M5200M6500MSE +/- 38898803.89, N = 358659833331. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenRadioss 2023.09.15Model: Cell Phone Drop Testv6.4816243240SE +/- 0.01, N = 332.37

Blender

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: Fishy Cat - Compute: CPU-Onlyv6.4918273645SE +/- 0.15, N = 340.12

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.
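
A hedged sketch of a comparable vvencapp invocation; the input filename is a placeholder and both the preset naming and Y4M input handling are assumed to follow recent vvenc releases:

    # Encode a 1080p Y4M clip with the 'fast' preset to a raw VVC bitstream.
    vvencapp --preset fast -i Bosphorus_1920x1080.y4m -o bosphorus.266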

OpenBenchmarking.orgFrames Per Second, More Is BetterVVenC 1.9Video Input: Bosphorus 1080p - Video Preset: Fastv6.448121620SE +/- 0.19, N = 316.341. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: CPU Cachev6.4200K400K600K800K1000KSE +/- 7749.59, N = 3938764.061. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 1.0Encoder Speed: 2v6.4918273645SE +/- 0.32, N = 339.371. (CXX) g++ options: -O3 -fPIC -lm

SVT-AV1

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 8 - Input: Bosphorus 1080pv6.420406080100SE +/- 4.48, N = 15110.861. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 12 - Input: Bosphorus 4Kv6.4306090120150SE +/- 8.17, N = 15133.931. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: BT.Cv6.470K140K210K280K350KSE +/- 2411.99, N = 11322119.991. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP LavaMDv6.4816243240SE +/- 0.23, N = 335.241. (CXX) g++ options: -O2 -lOpenCL

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterVVenC 1.9Video Input: Bosphorus 1080p - Video Preset: Fasterv6.4714212835SE +/- 0.34, N = 428.271. (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

Stress-NG

Stress-NG is a Linux stress tool developed by Colin Ian King. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Pthreadv6.48K16K24K32K40KSE +/- 43.26, N = 336288.241. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: AVX-512 VNNIv6.43M6M9M12M15MSE +/- 48287.23, N = 311752668.181. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Matrix 3D Mathv6.46K12K18K24K30KSE +/- 329.39, N = 328901.021. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: AVL Treev6.42004006008001000SE +/- 1.81, N = 3790.231. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Glibc Qsort Data Sortingv6.4400800120016002000SE +/- 0.72, N = 31875.831. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Wide Vector Mathv6.41.3M2.6M3.9M5.2M6.5MSE +/- 72348.80, N = 35961543.121. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: CPU Stressv6.440K80K120K160K200KSE +/- 341.26, N = 3200494.801. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Vector Shufflev6.4120K240K360K480K600KSE +/- 1079.72, N = 3551599.321. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Vector Floating Pointv6.450K100K150K200K250KSE +/- 620.60, N = 3244931.411. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: x86_64 RdRandv6.4140K280K420K560K700KSE +/- 674.33, N = 3637179.521. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Semaphoresv6.440M80M120M160M200MSE +/- 1308655.40, N = 3177471711.511. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Zlibv6.42K4K6K8K10KSE +/- 22.76, N = 38220.831. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: MMAPv6.42K4K6K8K10KSE +/- 47.78, N = 37996.781. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: System V Message Passingv6.44M8M12M16M20MSE +/- 54732.60, N = 318811154.401. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.16.04Test: Mallocv6.450M100M150M200M250MSE +/- 129847.43, N = 3217211109.031. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz

SVT-AV1

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 13 - Input: Bosphorus 4Kv6.4306090120150SE +/- 9.48, N = 12145.921. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.
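
As a hedged outline only (the drivaerFastback case ships its own run scripts; the solver choice and rank count here are assumptions), the usual parallel OpenFOAM flow looks like:

    # Decompose the prepared case across MPI ranks, run the solver, then reassemble.
    decomposePar
    mpirun -np 112 simpleFoam -parallel
    reconstructPar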

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 10Input: drivaerFastback, Small Mesh Size - Execution Timev6.491827364537.841. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

OpenBenchmarking.orgSeconds, Fewer Is BetterOpenFOAM 10Input: drivaerFastback, Small Mesh Size - Mesh Timev6.491827364539.361. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: FT.Cv6.420K40K60K80K100KSE +/- 1023.16, N = 15103248.131. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

Blender

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.6Blend File: BMW27 - Compute: CPU-Onlyv6.4510152025SE +/- 0.17, N = 321.48

SVT-AV1

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 4 - Input: Bosphorus 1080pv6.43691215SE +/- 0.04, N = 311.591. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: SP.Cv6.440K80K120K160K200KSE +/- 1944.01, N = 5193221.951. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: SP.Bv6.440K80K120K160K200KSE +/- 1564.28, N = 13176009.151. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

SVT-AV1

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 12 - Input: Bosphorus 1080pv6.490180270360450SE +/- 29.20, N = 15426.841. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-AV1 1.7Encoder Mode: Preset 13 - Input: Bosphorus 1080pv6.4110220330440550SE +/- 28.12, N = 15518.771. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Rodinia

Rodinia is a suite focused upon accelerating compute-intensive applications with accelerators. CUDA, OpenMP, and OpenCL parallel models are supported by the included applications. This profile utilizes select OpenCL, NVIDIA CUDA and OpenMP test binaries at the moment. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRodinia 3.1Test: OpenMP CFD Solverv6.4246810SE +/- 0.058, N = 38.7131. (CXX) g++ options: -O2 -lOpenCL

NAS Parallel Benchmarks

NPB, the NAS Parallel Benchmarks, is a benchmark suite developed by NASA for high-end computer systems. This test profile currently uses the MPI version of NPB. This test profile allows selecting the different NPB tests/problems and varying problem sizes. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: LU.Cv6.460K120K180K240K300KSE +/- 2141.41, N = 3269542.211. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: MG.Cv6.450K100K150K200K250KSE +/- 2713.74, N = 15239408.111. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: CG.Cv6.411K22K33K44K55KSE +/- 525.17, N = 651894.851. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

OpenBenchmarking.orgTotal Mop/s, More Is BetterNAS Parallel Benchmarks 3.4Test / Class: EP.Cv6.42K4K6K8K10KSE +/- 222.04, N = 1210844.901. (F9X) gfortran options: -O3 -march=native -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz 2. Open MPI 4.1.4

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 1.0Encoder Speed: 6, Losslessv6.4246810SE +/- 0.054, N = 36.8921. (CXX) g++ options: -O3 -fPIC -lm

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 1.0Encoder Speed: 10, Losslessv6.41.14192.28383.42574.56765.7095SE +/- 0.005, N = 35.0751. (CXX) g++ options: -O3 -fPIC -lm

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 1.0Encoder Speed: 6v6.40.83881.67762.51643.35524.194SE +/- 0.005, N = 33.7281. (CXX) g++ options: -O3 -fPIC -lm

232 Results Shown

Blender
Apache Hadoop
Apache IoTDB:
  800 - 100 - 800 - 400:
    Average Latency
    point/sec
NCNN:
  CPU - FastestDet
  CPU - vision_transformer
  CPU - regnety_400m
  CPU - squeezenet_ssd
  CPU - yolov4-tiny
  CPU - resnet50
  CPU - alexnet
  CPU - resnet18
  CPU - vgg16
  CPU - googlenet
  CPU - blazeface
  CPU - efficientnet-b0
  CPU - mnasnet
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
  CPU - mobilenet
PostgreSQL:
  1000 - 1000 - Read Write - Average Latency
  1000 - 1000 - Read Write
  1000 - 800 - Read Write - Average Latency
  1000 - 800 - Read Write
  1000 - 1000 - Read Only - Average Latency
  1000 - 1000 - Read Only
Timed LLVM Compilation
Apache IoTDB:
  800 - 100 - 500 - 400:
    Average Latency
    point/sec
  500 - 100 - 500 - 100:
    Average Latency
    point/sec
High Performance Conjugate Gradient
Crypto++
AOM AV1
Apache IoTDB:
  800 - 100 - 200 - 400:
    Average Latency
    point/sec
Apache Hadoop
Apache IoTDB:
  800 - 100 - 200 - 100:
    Average Latency
    point/sec
  500 - 100 - 500 - 400:
    Average Latency
    point/sec
TensorFlow
Apache IoTDB:
  200 - 100 - 800 - 100:
    Average Latency
    point/sec
Apache Hadoop
PostgreSQL:
  100 - 800 - Read Only - Average Latency
  100 - 800 - Read Only
  100 - 1000 - Read Only - Average Latency
  100 - 1000 - Read Only
Apache IoTDB:
  500 - 100 - 200 - 100:
    Average Latency
    point/sec
  200 - 100 - 500 - 100:
    Average Latency
    point/sec
VVenC
Apache IoTDB:
  100 - 100 - 800 - 100:
    Average Latency
    point/sec
Redis 7.0.12 + memtier_benchmark:
  Redis - 50 - 1:10
  Redis - 100 - 1:5
Apache IoTDB:
  100 - 100 - 500 - 100:
    Average Latency
    point/sec
Stress-NG
Apache IoTDB:
  100 - 100 - 200 - 100:
    Average Latency
    point/sec
OpenVINO:
  Weld Porosity Detection FP16-INT8 - CPU:
    ms
    FPS
  Age Gender Recognition Retail 0013 FP16 - CPU:
    ms
    FPS
AOM AV1
OpenVINO:
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU:
    ms
    FPS
Timed Linux Kernel Compilation
Apache Hadoop
Redis 7.0.12 + memtier_benchmark
Apache IoTDB:
  800 - 100 - 800 - 100:
    Average Latency
    point/sec
Timed LLVM Compilation
Apache IoTDB:
  200 - 100 - 200 - 100:
    Average Latency
    point/sec
AOM AV1
Crypto++
Apache Hadoop:
  Open - 500 - 100000
  Delete - 50 - 100000
Apache IoTDB:
  500 - 100 - 800 - 400:
    Average Latency
    point/sec
Apache Hadoop
SVT-AV1
Rodinia
Apache IoTDB:
  500 - 100 - 800 - 100:
    Average Latency
    point/sec
  800 - 100 - 500 - 100:
    Average Latency
    point/sec
Apache Hadoop:
  Open - 50 - 100000
  Create - 100 - 100000
  Create - 50 - 100000
  Rename - 50 - 100000
PostgreSQL:
  1000 - 800 - Read Only - Average Latency
  1000 - 800 - Read Only
  100 - 1000 - Read Write - Average Latency
  100 - 1000 - Read Write
Apache Hadoop
Rodinia
Stress-NG
OpenRadioss:
  Bird Strike on Windshield
  Chrysler Neon 1M
Stress-NG:
  Pipe
  Fused Multiply-Add
  Mutex
  Function Call
  Glibc C String Functions
  Memory Copying
  Hash
  Floating Point
  SENDFILE
  Socket Activity
  NUMA
  Mixed Scheduler
  MEMFD
OpenRadioss
Stress-NG:
  Futex
  Forking
TensorFlow
PostgreSQL:
  100 - 800 - Read Write - Average Latency
  100 - 800 - Read Write
Stress-NG
High Performance Conjugate Gradient
Stress-NG:
  Matrix Math
  Context Switching
  Crypto
  Poll
Timed Linux Kernel Compilation
OpenVINO:
  Vehicle Detection FP16 - CPU:
    ms
    FPS
VVenC
OpenRadioss
OpenFOAM:
  drivaerFastback, Medium Mesh Size - Execution Time
  drivaerFastback, Medium Mesh Size - Mesh Time
Apache IoTDB:
  500 - 100 - 200 - 400:
    Average Latency
    point/sec
OpenRadioss
TensorFlow
AOM AV1:
  Speed 6 Realtime - Bosphorus 4K
  Speed 8 Realtime - Bosphorus 4K
Rodinia
Blender
AOM AV1
OpenVINO:
  Face Detection FP16-INT8 - CPU:
    ms
    FPS
libavif avifenc
AOM AV1
7-Zip Compression:
  Decompression Rating
  Compression Rating
Redis 7.0.12 + memtier_benchmark
OpenVINO:
  Person Vehicle Bike Detection FP16 - CPU:
    ms
    FPS
  Face Detection FP16 - CPU:
    ms
    FPS
nekRS
Apache Hadoop
OpenVINO:
  Road Segmentation ADAS FP16-INT8 - CPU:
    ms
    FPS
  Person Detection FP32 - CPU:
    ms
    FPS
  Person Detection FP16 - CPU:
    ms
    FPS
  Machine Translation EN To DE FP16 - CPU:
    ms
    FPS
  Vehicle Detection FP16-INT8 - CPU:
    ms
    FPS
AOM AV1
OpenVINO:
  Road Segmentation ADAS FP16 - CPU:
    ms
    FPS
  Face Detection Retail FP16 - CPU:
    ms
    FPS
  Face Detection Retail FP16-INT8 - CPU:
    ms
    FPS
  Weld Porosity Detection FP16 - CPU:
    ms
    FPS
  Handwritten English Recognition FP16 - CPU:
    ms
    FPS
  Handwritten English Recognition FP16-INT8 - CPU:
    ms
    FPS
NAS Parallel Benchmarks:
  EP.D
  IS.D
SVT-AV1
Blender
TensorFlow
Stress-NG
nekRS
OpenRadioss
Blender
VVenC
Stress-NG
libavif avifenc
SVT-AV1:
  Preset 8 - Bosphorus 1080p
  Preset 12 - Bosphorus 4K
NAS Parallel Benchmarks
Rodinia
VVenC
Stress-NG:
  Pthread
  AVX-512 VNNI
  Matrix 3D Math
  AVL Tree
  Glibc Qsort Data Sorting
  Wide Vector Math
  CPU Stress
  Vector Shuffle
  Vector Floating Point
  x86_64 RdRand
  Semaphores
  Zlib
  MMAP
  System V Message Passing
  Malloc
SVT-AV1
OpenFOAM:
  drivaerFastback, Small Mesh Size - Execution Time
  drivaerFastback, Small Mesh Size - Mesh Time
NAS Parallel Benchmarks
Blender
SVT-AV1
NAS Parallel Benchmarks:
  SP.C
  SP.B
SVT-AV1:
  Preset 12 - Bosphorus 1080p
  Preset 13 - Bosphorus 1080p
Rodinia
NAS Parallel Benchmarks:
  LU.C
  MG.C
  CG.C
  EP.C
libavif avifenc:
  6, Lossless
  10, Lossless
  6