new xeon

Tests for a future article. Intel Xeon Gold 6421N testing with a Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS) and ASPEED on Ubuntu 22.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2307315-NE-NEWXEON9432&grs&rdt.

new xeon — system details (identical for runs a and b):

Processor: Intel Xeon Gold 6421N @ 3.60GHz (32 Cores / 64 Threads)
Motherboard: Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS)
Chipset: Intel Device 1bce
Memory: 512GB
Disk: 3 x 3841GB Micron_9300_MTFDHAL3T8TDP
Graphics: ASPEED
Monitor: VGA HDMI
Network: 4 x Intel E810-C for QSFP
OS: Ubuntu 22.04
Kernel: 5.15.0-47-generic (x86_64)
Desktop: GNOME Shell 42.4
Display Server: X Server 1.21.1.3
Vulkan: 1.2.204
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 1600x1200

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate performance (EPP: performance); CPU Microcode: 0x2b0000c0
Java Details: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Details: Python 3.10.6
Security Details: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling; srbds: Not affected; tsx_async_abort: Not affected

new xeon — combined results-summary table (runs a and b), covering Apache IoTDB, Stress-NG, libxsmm, HeFFTe, Neural Magic DeepSparse, Redis + memtier_benchmark, srsRAN Project, Liquid-DSP, VVenC, Timed LLVM/GDB/PHP/Linux Kernel Compilation, Palabos, Laghos, HPCG, OpenFOAM, Blender, BRL-CAD, and Apache Cassandra. The individual results are presented per test below.
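The `&grs` view linked above orders results by their spread between the two runs. As a quick sketch (not part of the original export), the relative spread can be computed from the per-test values copied from this page:

```python
# Percent spread between runs "a" and "b" for a few results taken
# from the tables on this page (values copied verbatim).
def spread(a: float, b: float) -> float:
    """Percent difference of run b relative to run a."""
    return (b - a) / a * 100.0

results = {
    "Apache IoTDB 100/100/200 average latency": (31.83, 43.86),
    "Stress-NG CPU Cache (Bogo Ops/s)": (1537111.20, 1885833.11),
    "libxsmm M N K 256 (GFLOPS/s)": (879.6, 758.9),
}
for name, (a, b) in results.items():
    print(f"{name}: {spread(a, b):+.1f}%")
```

The first two entries show the largest run-to-run swings in this result file, which is why they lead the greatest-spread ordering.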

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200

Average Latency, Fewer Is Better — Apache IoTDB 1.1.2
a: 31.83 (MAX: 790.74)
b: 43.86 (MAX: 2550.76)

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200

point/sec, More Is Better — Apache IoTDB 1.1.2
a: 43074031.84
b: 34191814.86

Stress-NG

Test: CPU Cache

Bogo Ops/s, More Is Better — Stress-NG 0.15.10
a: 1537111.20 (SE +/- 31294.95, N = 2)
b: 1885833.11 (SE +/- 234949.06, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc
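The SE +/- figures throughout are standard errors of the mean over the N recorded runs; with N = 2 this reduces to half the absolute difference between the two runs. A minimal sketch (the sample values are illustrative, not from this result file):

```python
import statistics

def standard_error(samples: list[float]) -> float:
    """Standard error of the mean: sample stddev / sqrt(n)."""
    return statistics.stdev(samples) / len(samples) ** 0.5

# With two runs this equals |x1 - x2| / 2:
print(standard_error([10.0, 14.0]))  # 2.0
```

The large SE on the CPU Cache result above (over 12% of the mean for run b) indicates this particular test was noisy across its two runs.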

libxsmm

M N K: 256

GFLOPS/s, More Is Better — libxsmm 2-1.17-3645
a: 879.6 (SE +/- 0.65, N = 2)
b: 758.9 (SE +/- 5.75, N = 2)
1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200

Average Latency, Fewer Is Better — Apache IoTDB 1.1.2
a: 29.54 (MAX: 746.57)
b: 31.63 (MAX: 718.08)

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500

Average Latency, Fewer Is Better — Apache IoTDB 1.1.2
a: 69.08 (MAX: 1049.85)
b: 73.56 (MAX: 1309.93)

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500

Average Latency, Fewer Is Better — Apache IoTDB 1.1.2
a: 22.97 (MAX: 864.74)
b: 21.63 (MAX: 867.44)

HeFFTe - Highly Efficient FFT for Exascale

Test: c2c - Backend: Stock - Precision: double - X Y Z: 128

GFLOP/s, More Is Better — HeFFTe 2.3
a: 46.64 (SE +/- 0.26, N = 2)
b: 49.52 (SE +/- 3.39, N = 2)
1. (CXX) g++ options: -O3

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better — Neural Magic DeepSparse 1.5
a: 35.15 (SE +/- 0.01, N = 2)
b: 37.33 (SE +/- 0.38, N = 2)

Redis 7.0.12 + memtier_benchmark

Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10

Ops/sec, More Is Better — Redis 7.0.12 + memtier_benchmark 2.0
a: 2447092.01 (SE +/- 114392.77, N = 2)
b: 2304730.19 (SE +/- 12975.09, N = 2)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200

point/sec, More Is Better — Apache IoTDB 1.1.2
a: 54224351.10
b: 51199962.11

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better — Neural Magic DeepSparse 1.5
a: 453.48 (SE +/- 0.26, N = 2)
b: 428.67 (SE +/- 4.41, N = 2)

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500

point/sec, More Is Better — Apache IoTDB 1.1.2
a: 59041436.64
b: 56018457.87

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500

point/sec, More Is Better — Apache IoTDB 1.1.2
a: 1916642.90
b: 2009050.46

Stress-NG

Test: Cloning

Bogo Ops/s, More Is Better — Stress-NG 0.15.10
a: 9740.57 (SE +/- 114.33, N = 2)
b: 9326.09 (SE +/- 100.16, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better — Neural Magic DeepSparse 1.5
a: 137.38 (SE +/- 4.10, N = 2)
b: 143.44 (SE +/- 0.80, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better — Neural Magic DeepSparse 1.5
a: 116.38 (SE +/- 3.45, N = 2)
b: 111.50 (SE +/- 0.59, N = 2)

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200

Average Latency, Fewer Is Better — Apache IoTDB 1.1.2
a: 9.49 (MAX: 845.95)
b: 9.87 (MAX: 820.85)

HeFFTe - Highly Efficient FFT for Exascale

Test: r2c - Backend: Stock - Precision: float - X Y Z: 256

GFLOP/s, More Is Better — HeFFTe 2.3
a: 157.87 (SE +/- 6.51, N = 2)
b: 164.05 (SE +/- 3.21, N = 2)
1. (CXX) g++ options: -O3

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 200

point/sec, More Is Better — Apache IoTDB 1.1.2
a: 1576432.25
b: 1521587.40

HeFFTe - Highly Efficient FFT for Exascale

Test: c2c - Backend: FFTW - Precision: double - X Y Z: 128

GFLOP/s, More Is Better — HeFFTe 2.3
a: 64.43 (SE +/- 2.73, N = 2)
b: 62.30 (SE +/- 2.41, N = 2)
1. (CXX) g++ options: -O3

Stress-NG

Test: Futex

Bogo Ops/s, More Is Better — Stress-NG 0.15.10
a: 1541676.36 (SE +/- 56630.43, N = 2)
b: 1492979.46 (SE +/- 45385.58, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

srsRAN Project

Test: PUSCH Processor Benchmark, Throughput Total

Mbps, More Is Better — srsRAN Project 23.5
a: 5372.9 (SE +/- 143.30, N = 2)
b: 5543.7 (SE +/- 95.40, N = 2)
1. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest

Stress-NG

Test: Pipe

Bogo Ops/s, More Is Better — Stress-NG 0.15.10
a: 35837711.85 (SE +/- 1105250.10, N = 2)
b: 36852791.12 (SE +/- 79631.10, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

HeFFTe - Highly Efficient FFT for Exascale

Test: r2c - Backend: FFTW - Precision: float - X Y Z: 256

GFLOP/s, More Is Better — HeFFTe 2.3
a: 149.83 (SE +/- 3.76, N = 2)
b: 154.05 (SE +/- 1.59, N = 2)
1. (CXX) g++ options: -O3

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200

Average Latency, Fewer Is Better — Apache IoTDB 1.1.2
a: 14.58 (MAX: 679.89)
b: 14.98 (MAX: 612.21)

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200

Average Latency, Fewer Is Better — Apache IoTDB 1.1.2
a: 11.86 (MAX: 573.1)
b: 12.18 (MAX: 586.62)

Stress-NG

Test: SENDFILE

Bogo Ops/s, More Is Better — Stress-NG 0.15.10
a: 582724.63 (SE +/- 6799.74, N = 2)
b: 598173.56 (SE +/- 243.97, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

Redis 7.0.12 + memtier_benchmark

Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5

Ops/sec, More Is Better — Redis 7.0.12 + memtier_benchmark 2.0
a: 2285996.17 (SE +/- 6000.63, N = 2)
b: 2227152.02 (SE +/- 3990.38, N = 2)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Stress-NG

Test: Matrix Math

Bogo Ops/s, More Is Better — Stress-NG 0.15.10
a: 160653.44 (SE +/- 2867.57, N = 2)
b: 156668.43 (SE +/- 332.46, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500

point/sec, More Is Better — Apache IoTDB 1.1.2
a: 67607191.64
b: 65935725.67

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500

Average Latency, Fewer Is Better — Apache IoTDB 1.1.2
a: 101.25 (MAX: 3631.89)
b: 98.87 (MAX: 3564.64)

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500

point/sec, More Is Better — Apache IoTDB 1.1.2
a: 1505080.34
b: 1469808.89

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500

point/sec, More Is Better — Apache IoTDB 1.1.2
a: 45677447.24
b: 46726912.46

Liquid-DSP

Threads: 16 - Buffer Length: 256 - Filter Length: 512

samples/s, More Is Better — Liquid-DSP 1.6
a: 243940000 (SE +/- 1950000.00, N = 2)
b: 248820000 (SE +/- 3170000.00, N = 2)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 200

point/sec, More Is Better — Apache IoTDB 1.1.2
a: 710382.44
b: 697217.55

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better — Neural Magic DeepSparse 1.5
a: 33.94 (SE +/- 0.07, N = 2)
b: 34.54 (SE +/- 0.03, N = 2)

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better — Neural Magic DeepSparse 1.5
a: 468.80 (SE +/- 1.46, N = 2)
b: 460.67 (SE +/- 0.20, N = 2)

srsRAN Project

Test: PUSCH Processor Benchmark, Throughput Thread

Mbps, More Is Better — srsRAN Project 23.5
a: 240.4 (SE +/- 3.55, N = 2)
b: 236.3 (SE +/- 0.10, N = 2)
1. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest

Stress-NG

Test: IO_uring

Bogo Ops/s, More Is Better — Stress-NG 0.15.10
a: 1529665.98 (SE +/- 22482.34, N = 2)
b: 1503623.79 (SE +/- 5229.94, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

Liquid-DSP

Threads: 16 - Buffer Length: 256 - Filter Length: 57

samples/s, More Is Better — Liquid-DSP 1.6
a: 848435000 (SE +/- 14365000.00, N = 2)
b: 862195000 (SE +/- 695000.00, N = 2)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

HeFFTe - Highly Efficient FFT for Exascale

Test: r2c - Backend: Stock - Precision: double - X Y Z: 128

GFLOP/s, More Is Better — HeFFTe 2.3
a: 92.40 (SE +/- 0.90, N = 2)
b: 90.99 (SE +/- 0.09, N = 2)
1. (CXX) g++ options: -O3

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better — Neural Magic DeepSparse 1.5
a: 295.83 (SE +/- 0.05, N = 2)
b: 299.93 (SE +/- 0.51, N = 2)

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better — Neural Magic DeepSparse 1.5
a: 54.07 (SE +/- 0.01, N = 2)
b: 53.33 (SE +/- 0.09, N = 2)

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200

point/sec, More Is Better — Apache IoTDB 1.1.2
a: 56894390.61
b: 56137174.70

Stress-NG

Test: Socket Activity

Bogo Ops/s, More Is Better — Stress-NG 0.15.10
a: 24947.14 (SE +/- 72.57, N = 2)
b: 25282.31 (SE +/- 267.39, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 500

Average Latency, Fewer Is Better — Apache IoTDB 1.1.2
a: 26.29 (MAX: 620.79)
b: 26.64 (MAX: 636.93)

Liquid-DSP

Threads: 32 - Buffer Length: 256 - Filter Length: 512

samples/s, More Is Better — Liquid-DSP 1.6
a: 383555000 (SE +/- 1955000.00, N = 2)
b: 378650000 (SE +/- 4920000.00, N = 2)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

VVenC

Video Input: Bosphorus 4K - Video Preset: Fast

Frames Per Second, More Is Better — VVenC 1.9
a: 5.842 (SE +/- 0.074, N = 2)
b: 5.917 (SE +/- 0.015, N = 2)
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Timed LLVM Compilation

Build System: Unix Makefiles

Seconds, Fewer Is Better — Timed LLVM Compilation 16.0
a: 323.86 (SE +/- 5.08, N = 2)
b: 319.85 (SE +/- 5.88, N = 2)

HeFFTe - Highly Efficient FFT for Exascale

Test: r2c - Backend: Stock - Precision: float - X Y Z: 128

GFLOP/s, More Is Better — HeFFTe 2.3
a: 149.94 (SE +/- 1.93, N = 2)
b: 151.80 (SE +/- 1.24, N = 2)
1. (CXX) g++ options: -O3

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

items/sec, More Is Better — Neural Magic DeepSparse 1.5
a: 208.85 (SE +/- 0.34, N = 2)
b: 211.23 (SE +/- 0.12, N = 2)

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream

ms/batch, Fewer Is Better — Neural Magic DeepSparse 1.5
a: 76.58 (SE +/- 0.13, N = 2)
b: 75.72 (SE +/- 0.04, N = 2)

libxsmm

M N K: 128

GFLOPS/s, More Is Better — libxsmm 2-1.17-3645
a: 1211.8 (SE +/- 4.60, N = 2)
b: 1225.0 (SE +/- 1.10, N = 2)
1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

HeFFTe - Highly Efficient FFT for Exascale

Test: c2c - Backend: FFTW - Precision: double - X Y Z: 256

GFLOP/s, More Is Better — HeFFTe 2.3
a: 38.93 (SE +/- 0.25, N = 2)
b: 38.52 (SE +/- 0.16, N = 2)
1. (CXX) g++ options: -O3

libxsmm

M N K: 32

GFLOPS/s, More Is Better — libxsmm 2-1.17-3645
a: 440.0 (SE +/- 0.25, N = 2)
b: 444.6 (SE +/- 0.15, N = 2)
1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

Redis 7.0.12 + memtier_benchmark

Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10

Ops/sec, More Is Better — Redis 7.0.12 + memtier_benchmark 2.0
a: 2316281.26 (SE +/- 13610.76, N = 2)
b: 2293467.62 (SE +/- 4548.93, N = 2)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

HeFFTe - Highly Efficient FFT for Exascale

Test: c2c - Backend: FFTW - Precision: float - X Y Z: 256

GFLOP/s, More Is Better — HeFFTe 2.3
a: 76.03 (SE +/- 0.70, N = 2)
b: 75.30 (SE +/- 0.08, N = 2)
1. (CXX) g++ options: -O3

VVenC

Video Input: Bosphorus 1080p - Video Preset: Fast

Frames Per Second, More Is Better — VVenC 1.9
a: 16.10 (SE +/- 0.17, N = 2)
b: 16.25 (SE +/- 0.02, N = 2)
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Stress-NG

Test: Atomic

Bogo Ops/s, More Is Better — Stress-NG 0.15.10
a: 133.83 (SE +/- 1.05, N = 2)
b: 132.61 (SE +/- 0.20, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Semaphores

Bogo Ops/s, More Is Better — Stress-NG 0.15.10
a: 62126446.21 (SE +/- 2077286.42, N = 2)
b: 61651485.43 (SE +/- 466593.23, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

HeFFTe - Highly Efficient FFT for Exascale

Test: c2c - Backend: Stock - Precision: double - X Y Z: 256

GFLOP/s, More Is Better — HeFFTe 2.3
a: 38.96 (SE +/- 0.07, N = 2)
b: 38.68 (SE +/- 0.07, N = 2)
1. (CXX) g++ options: -O3

libxsmm

M N K: 64

GFLOPS/s, More Is Better — libxsmm 2-1.17-3645
a: 833.8 (SE +/- 1.05, N = 2)
b: 839.9 (SE +/- 0.20, N = 2)
1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2

srsRAN Project

Test: Downlink Processor Benchmark

Mbps, More Is Better — srsRAN Project 23.5
a: 705.8 (SE +/- 5.15, N = 2)
b: 710.9 (SE +/- 1.60, N = 2)
1. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500

Average Latency, Fewer Is Better — Apache IoTDB 1.1.2
a: 28.27 (MAX: 671.77)
b: 28.45 (MAX: 664.29)

Stress-NG

Test: MMAP

Bogo Ops/s, More Is Better — Stress-NG 0.15.10
a: 861.28 (SE +/- 3.32, N = 2)
b: 856.14 (SE +/- 2.06, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

HeFFTe - Highly Efficient FFT for Exascale

Test: r2c - Backend: FFTW - Precision: double - X Y Z: 128

GFLOP/s, More Is Better — HeFFTe 2.3
a: 121.79 (SE +/- 0.56, N = 2)
b: 122.46 (SE +/- 1.22, N = 2)
1. (CXX) g++ options: -O3

Palabos

Grid Size: 400

Mega Site Updates Per Second, More Is Better — Palabos 2.3
a: 287.27 (SE +/- 0.49, N = 2)
b: 285.76 (SE +/- 1.54, N = 2)
1. (CXX) g++ options: -std=c++17 -pedantic -O3 -rdynamic -lcrypto -lcurl -lsz -lz -ldl -lm

Apache IoTDB

Device Count: 100 - Batch Size Per Write: 1 - Sensor Count: 500

point/sec, More Is Better — Apache IoTDB 1.1.2
a: 1191500.88
b: 1185338.02

HeFFTe - Highly Efficient FFT for Exascale

Test: c2c - Backend: FFTW - Precision: float - X Y Z: 128

GFLOP/s, More Is Better — HeFFTe 2.3
a: 131.66 (SE +/- 0.77, N = 2)
b: 130.98 (SE +/- 0.61, N = 2)
1. (CXX) g++ options: -O3

HeFFTe - Highly Efficient FFT for Exascale

Test: r2c - Backend: FFTW - Precision: float - X Y Z: 128

GFLOP/s, More Is Better — HeFFTe 2.3
a: 207.24 (SE +/- 0.61, N = 2)
b: 206.22 (SE +/- 0.19, N = 2)
1. (CXX) g++ options: -O3

Laghos

Test: Triple Point Problem

Laghos 3.1 - Major Kernels Total Rate, More Is Better: a: 177.78 (SE +/- 0.13, N = 2), b: 176.92 (SE +/- 0.02, N = 2)
1. (CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500

Apache IoTDB 1.1.2 - Average Latency, More Is Better: a: 68.34 (MAX: 2006.68), b: 68.01 (MAX: 1606.75)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better: a: 345.15 (SE +/- 0.15, N = 2), b: 343.52 (SE +/- 1.63, N = 2)

Neural Magic DeepSparse

Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - items/sec, More Is Better: a: 46.33 (SE +/- 0.02, N = 2), b: 46.55 (SE +/- 0.20, N = 2)

Stress-NG

Test: Fused Multiply-Add

Stress-NG 0.15.10 - Bogo Ops/s, More Is Better: a: 34197705.63 (SE +/- 137631.48, N = 2), b: 34050669.23 (SE +/- 285.63, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

High Performance Conjugate Gradient

X Y Z: 160 160 160 - RT: 60

High Performance Conjugate Gradient 3.1 - GFLOP/s, More Is Better: a: 27.51 (SE +/- 0.03, N = 2), b: 27.40 (SE +/- 0.07, N = 2)
1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

Stress-NG

Test: Function Call

Stress-NG 0.15.10 - Bogo Ops/s, More Is Better: a: 22028.03 (SE +/- 80.03, N = 2), b: 22106.49 (SE +/- 74.09, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

Apache IoTDB

Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200

Apache IoTDB 1.1.2 - Average Latency, More Is Better: a: 31.58 (MAX: 1920.32), b: 31.69 (MAX: 1610.79)

HeFFTe - Highly Efficient FFT for Exascale

Test: r2c - Backend: FFTW - Precision: double - X Y Z: 512

HeFFTe 2.3 - GFLOP/s, More Is Better: a: 74.47 (SE +/- 0.48, N = 2), b: 74.71 (SE +/- 0.16, N = 2)
1. (CXX) g++ options: -O3

Liquid-DSP

Threads: 32 - Buffer Length: 256 - Filter Length: 57

Liquid-DSP 1.6 - samples/s, More Is Better: a: 1328100000 (SE +/- 300000.00, N = 2), b: 1323900000 (SE +/- 4400000.00, N = 2)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Stress-NG

Test: NUMA

Stress-NG 0.15.10 - Bogo Ops/s, More Is Better: a: 390.87 (SE +/- 0.88, N = 2), b: 392.08 (SE +/- 0.05, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Mutex

Stress-NG 0.15.10 - Bogo Ops/s, More Is Better: a: 15147444.51 (SE +/- 23940.47, N = 2), b: 15192892.59 (SE +/- 2864.48, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

HeFFTe - Highly Efficient FFT for Exascale

Test: c2c - Backend: Stock - Precision: float - X Y Z: 128

HeFFTe 2.3 - GFLOP/s, More Is Better: a: 85.74 (SE +/- 1.30, N = 2), b: 85.49 (SE +/- 0.88, N = 2)
1. (CXX) g++ options: -O3

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better: a: 131.45 (SE +/- 0.05, N = 2), b: 131.07 (SE +/- 0.22, N = 2)

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - items/sec, More Is Better: a: 121.69 (SE +/- 0.05, N = 2), b: 122.04 (SE +/- 0.22, N = 2)

Stress-NG

Test: Wide Vector Math

Stress-NG 0.15.10 - Bogo Ops/s, More Is Better: a: 1745029.27 (SE +/- 918.08, N = 2), b: 1750003.43 (SE +/- 4139.63, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

Apache IoTDB

Device Count: 200 - Batch Size Per Write: 1 - Sensor Count: 200

Apache IoTDB 1.1.2 - point/sec, More Is Better: a: 1045806.81, b: 1042859.03

Liquid-DSP

Threads: 64 - Buffer Length: 256 - Filter Length: 57

Liquid-DSP 1.6 - samples/s, More Is Better: a: 1728850000 (SE +/- 550000.00, N = 2), b: 1733700000 (SE +/- 900000.00, N = 2)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - items/sec, More Is Better: a: 390.91 (SE +/- 1.01, N = 2), b: 391.91 (SE +/- 0.12, N = 2)

Neural Magic DeepSparse

Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better: a: 40.91 (SE +/- 0.11, N = 2), b: 40.81 (SE +/- 0.01, N = 2)

VVenC

Video Input: Bosphorus 4K - Video Preset: Faster

VVenC 1.9 - Frames Per Second, More Is Better: a: 11.02 (SE +/- 0.00, N = 2), b: 10.99 (SE +/- 0.03, N = 2)
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Redis 7.0.12 + memtier_benchmark

Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5

Redis 7.0.12 + memtier_benchmark 2.0 - Ops/sec, More Is Better: a: 2211638.65 (SE +/- 31848.80, N = 2), b: 2217192.12 (SE +/- 39004.04, N = 2)
1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Timed GDB GNU Debugger Compilation

Time To Compile

Timed GDB GNU Debugger Compilation 10.2 - Seconds, Fewer Is Better: a: 41.91 (SE +/- 0.06, N = 2), b: 42.01 (SE +/- 0.12, N = 2)

Stress-NG

Test: Glibc C String Functions

Stress-NG 0.15.10 - Bogo Ops/s, More Is Better: a: 26067360.60 (SE +/- 150617.25, N = 2), b: 26125214.84 (SE +/- 69329.81, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

High Performance Conjugate Gradient

X Y Z: 104 104 104 - RT: 60

High Performance Conjugate Gradient 3.1 - GFLOP/s, More Is Better: a: 27.78 (SE +/- 0.03, N = 2), b: 27.84 (SE +/- 0.01, N = 2)
1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

HeFFTe - Highly Efficient FFT for Exascale

Test: c2c - Backend: Stock - Precision: float - X Y Z: 256

HeFFTe 2.3 - GFLOP/s, More Is Better: a: 75.09 (SE +/- 0.48, N = 2), b: 74.93 (SE +/- 0.10, N = 2)
1. (CXX) g++ options: -O3

OpenFOAM

Input: drivaerFastback, Small Mesh Size - Execution Time

OpenFOAM 10 - Seconds, Fewer Is Better: a: 67.71 (SE +/- 0.09, N = 2), b: 67.56 (SE +/- 0.11, N = 2)
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - items/sec, More Is Better: a: 3227.10 (SE +/- 8.40, N = 2), b: 3233.96 (SE +/- 3.51, N = 2)

Neural Magic DeepSparse

Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better: a: 4.9416 (SE +/- 0.0128, N = 2), b: 4.9312 (SE +/- 0.0056, N = 2)

HeFFTe - Highly Efficient FFT for Exascale

Test: c2c - Backend: Stock - Precision: double - X Y Z: 512

HeFFTe 2.3 - GFLOP/s, More Is Better: a: 40.74 (SE +/- 0.05, N = 2), b: 40.66 (SE +/- 0.00, N = 2)
1. (CXX) g++ options: -O3

Palabos

Grid Size: 500

Palabos 2.3 - Mega Site Updates Per Second, More Is Better: a: 300.28 (SE +/- 1.63, N = 2), b: 300.86 (SE +/- 1.17, N = 2)
1. (CXX) g++ options: -std=c++17 -pedantic -O3 -rdynamic -lcrypto -lcurl -lsz -lz -ldl -lm

HeFFTe - Highly Efficient FFT for Exascale

Test: r2c - Backend: Stock - Precision: double - X Y Z: 256

HeFFTe 2.3 - GFLOP/s, More Is Better: a: 76.90 (SE +/- 0.40, N = 2), b: 77.03 (SE +/- 0.65, N = 2)
1. (CXX) g++ options: -O3

HeFFTe - Highly Efficient FFT for Exascale

Test: c2c - Backend: FFTW - Precision: float - X Y Z: 512

HeFFTe 2.3 - GFLOP/s, More Is Better: a: 78.83 (SE +/- 0.36, N = 2), b: 78.96 (SE +/- 0.06, N = 2)
1. (CXX) g++ options: -O3

OpenFOAM

Input: drivaerFastback, Medium Mesh Size - Mesh Time

OpenFOAM 10 - Seconds, Fewer Is Better: a: 144.70 (SE +/- 0.01, N = 2), b: 144.94 (SE +/- 0.08, N = 2)
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

HeFFTe - Highly Efficient FFT for Exascale

Test: r2c - Backend: FFTW - Precision: float - X Y Z: 512

HeFFTe 2.3 - GFLOP/s, More Is Better: a: 141.41 (SE +/- 0.63, N = 2), b: 141.19 (SE +/- 0.20, N = 2)
1. (CXX) g++ options: -O3

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - items/sec, More Is Better: a: 479.79 (SE +/- 0.12, N = 2), b: 480.52 (SE +/- 0.54, N = 2)

Laghos

Test: Sedov Blast Wave, ube_922_hex.mesh

Laghos 3.1 - Major Kernels Total Rate, More Is Better: a: 216.86 (SE +/- 0.24, N = 2), b: 217.19 (SE +/- 0.18, N = 2)
1. (CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi

Neural Magic DeepSparse

Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better: a: 33.33 (SE +/- 0.01, N = 2), b: 33.28 (SE +/- 0.04, N = 2)

Blender

Blend File: BMW27 - Compute: CPU-Only

Blender 3.6 - Seconds, Fewer Is Better: a: 47.15 (SE +/- 0.02, N = 2), b: 47.22 (SE +/- 0.08, N = 2)

HeFFTe - Highly Efficient FFT for Exascale

Test: r2c - Backend: Stock - Precision: float - X Y Z: 512

HeFFTe 2.3 - GFLOP/s, More Is Better: a: 137.54 (SE +/- 0.00, N = 2), b: 137.74 (SE +/- 0.33, N = 2)
1. (CXX) g++ options: -O3

Stress-NG

Test: AVL Tree

Stress-NG 0.15.10 - Bogo Ops/s, More Is Better: a: 294.26 (SE +/- 0.32, N = 2), b: 294.66 (SE +/- 0.85, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

Palabos

Grid Size: 100

Palabos 2.3 - Mega Site Updates Per Second, More Is Better: a: 235.19 (SE +/- 0.02, N = 2), b: 234.87 (SE +/- 0.34, N = 2)
1. (CXX) g++ options: -std=c++17 -pedantic -O3 -rdynamic -lcrypto -lcurl -lsz -lz -ldl -lm

Stress-NG

Test: Floating Point

Stress-NG 0.15.10 - Bogo Ops/s, More Is Better: a: 10587.48 (SE +/- 1.07, N = 2), b: 10601.10 (SE +/- 17.77, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

Liquid-DSP

Threads: 16 - Buffer Length: 256 - Filter Length: 32

Liquid-DSP 1.6 - samples/s, More Is Better: a: 557945000 (SE +/- 2065000.00, N = 2), b: 558655000 (SE +/- 605000.00, N = 2)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

HeFFTe - Highly Efficient FFT for Exascale

Test: r2c - Backend: FFTW - Precision: double - X Y Z: 256

HeFFTe 2.3 - GFLOP/s, More Is Better: a: 72.29 (SE +/- 0.44, N = 2), b: 72.20 (SE +/- 0.12, N = 2)
1. (CXX) g++ options: -O3

Stress-NG

Test: Malloc

Stress-NG 0.15.10 - Bogo Ops/s, More Is Better: a: 99373474.31 (SE +/- 129754.02, N = 2), b: 99251227.28 (SE +/- 83929.32, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Hash

Stress-NG 0.15.10 - Bogo Ops/s, More Is Better: a: 5577252.32 (SE +/- 3166.95, N = 2), b: 5583978.14 (SE +/- 2865.25, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better: a: 76.56 (SE +/- 0.03, N = 2), b: 76.47 (SE +/- 0.04, N = 2)

High Performance Conjugate Gradient

X Y Z: 144 144 144 - RT: 60

High Performance Conjugate Gradient 3.1 - GFLOP/s, More Is Better: a: 27.42 (SE +/- 0.01, N = 2), b: 27.39 (SE +/- 0.06, N = 2)
1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -lmpi_cxx -lmpi

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - items/sec, More Is Better: a: 1074.82 (SE +/- 0.57, N = 2), b: 1075.96 (SE +/- 1.01, N = 2)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - items/sec, More Is Better: a: 504.61 (SE +/- 0.18, N = 2), b: 505.13 (SE +/- 0.12, N = 2)

Timed LLVM Compilation

Build System: Ninja

Timed LLVM Compilation 16.0 - Seconds, Fewer Is Better: a: 263.15 (SE +/- 0.15, N = 2), b: 262.88 (SE +/- 0.15, N = 2)

Stress-NG

Test: Pthread

Stress-NG 0.15.10 - Bogo Ops/s, More Is Better: a: 136846.01 (SE +/- 971.78, N = 2), b: 136709.81 (SE +/- 102.07, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

Blender

Blend File: Fishy Cat - Compute: CPU-Only

Blender 3.6 - Seconds, Fewer Is Better: a: 64.07 (SE +/- 0.08, N = 2), b: 64.01 (SE +/- 0.20, N = 2)

HeFFTe - Highly Efficient FFT for Exascale

Test: c2c - Backend: FFTW - Precision: double - X Y Z: 512

HeFFTe 2.3 - GFLOP/s, More Is Better: a: 43.97 (SE +/- 0.04, N = 2), b: 44.01 (SE +/- 0.02, N = 2)
1. (CXX) g++ options: -O3

OpenFOAM

Input: drivaerFastback, Medium Mesh Size - Execution Time

OpenFOAM 10 - Seconds, Fewer Is Better: a: 615.99 (SE +/- 0.42, N = 2), b: 615.46 (SE +/- 0.03, N = 2)
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better: a: 14.86 (SE +/- 0.01, N = 2), b: 14.85 (SE +/- 0.01, N = 2)

Neural Magic DeepSparse

Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better: a: 31.68 (SE +/- 0.01, N = 2), b: 31.65 (SE +/- 0.01, N = 2)

Timed PHP Compilation

Time To Compile

Timed PHP Compilation 8.1.9 - Seconds, Fewer Is Better: a: 42.35 (SE +/- 0.34, N = 2), b: 42.38 (SE +/- 0.48, N = 2)

Stress-NG

Test: MEMFD

Stress-NG 0.15.10 - Bogo Ops/s, More Is Better: a: 549.94 (SE +/- 1.31, N = 2), b: 549.55 (SE +/- 1.20, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

Liquid-DSP

Threads: 32 - Buffer Length: 256 - Filter Length: 32

Liquid-DSP 1.6 - samples/s, More Is Better: a: 847085000 (SE +/- 25000.00, N = 2), b: 847675000 (SE +/- 85000.00, N = 2)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Stress-NG

Test: Context Switching

Stress-NG 0.15.10 - Bogo Ops/s, More Is Better: a: 2572801.75 (SE +/- 678.57, N = 2), b: 2571092.69 (SE +/- 604.17, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - items/sec, More Is Better: a: 34.53 (SE +/- 0.06, N = 2), b: 34.55 (SE +/- 0.12, N = 2)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - items/sec, More Is Better: a: 478.91 (SE +/- 0.05, N = 2), b: 479.22 (SE +/- 0.02, N = 2)

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better: a: 33.39 (SE +/- 0.00, N = 2), b: 33.37 (SE +/- 0.00, N = 2)

Stress-NG

Test: Poll

Stress-NG 0.15.10 - Bogo Ops/s, More Is Better: a: 3669281.69 (SE +/- 2536.76, N = 2), b: 3671617.97 (SE +/- 1953.54, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

VVenC

Video Input: Bosphorus 1080p - Video Preset: Faster

VVenC 1.9 - Frames Per Second, More Is Better: a: 30.95 (SE +/- 0.06, N = 2), b: 30.93 (SE +/- 0.04, N = 2)
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

Stress-NG

Test: Memory Copying

Stress-NG 0.15.10 - Bogo Ops/s, More Is Better: a: 7176.19 (SE +/- 8.71, N = 2), b: 7180.43 (SE +/- 11.04, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

OpenFOAM

Input: drivaerFastback, Small Mesh Size - Mesh Time

OpenFOAM 10 - Seconds, Fewer Is Better: a: 27.97 (SE +/- 0.02, N = 2), b: 27.95 (SE +/- 0.05, N = 2)
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm

Stress-NG

Test: Matrix 3D Math

Stress-NG 0.15.10 - Bogo Ops/s, More Is Better: a: 9599.93 (SE +/- 34.45, N = 2), b: 9605.30 (SE +/- 4.08, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Forking

Stress-NG 0.15.10 - Bogo Ops/s, More Is Better: a: 89918.21 (SE +/- 469.20, N = 2), b: 89966.29 (SE +/- 421.24, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

Neural Magic DeepSparse

Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - items/sec, More Is Better: a: 208.90 (SE +/- 0.10, N = 2), b: 208.99 (SE +/- 0.05, N = 2)

Stress-NG

Test: Glibc Qsort Data Sorting

Stress-NG 0.15.10 - Bogo Ops/s, More Is Better: a: 696.65 (SE +/- 0.40, N = 2), b: 696.92 (SE +/- 0.46, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Zlib

Stress-NG 0.15.10 - Bogo Ops/s, More Is Better: a: 2647.81 (SE +/- 0.06, N = 2), b: 2648.81 (SE +/- 0.65, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: System V Message Passing

Stress-NG 0.15.10 - Bogo Ops/s, More Is Better: a: 5852281.71 (SE +/- 7174.98, N = 2), b: 5854201.78 (SE +/- 9802.94, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

Blender

Blend File: Barbershop - Compute: CPU-Only

Blender 3.6 - Seconds, Fewer Is Better: a: 493.45 (SE +/- 0.22, N = 2), b: 493.61 (SE +/- 0.42, N = 2)

Timed Linux Kernel Compilation

Build: defconfig

Timed Linux Kernel Compilation 6.1 - Seconds, Fewer Is Better: a: 40.44 (SE +/- 0.72, N = 2), b: 40.45 (SE +/- 0.69, N = 2)

HeFFTe - Highly Efficient FFT for Exascale

Test: c2c - Backend: Stock - Precision: float - X Y Z: 512

HeFFTe 2.3 - GFLOP/s, More Is Better: a: 72.56 (SE +/- 0.21, N = 2), b: 72.54 (SE +/- 0.00, N = 2)
1. (CXX) g++ options: -O3

Stress-NG

Test: Vector Math

Stress-NG 0.15.10 - Bogo Ops/s, More Is Better: a: 151386.31 (SE +/- 47.16, N = 2), b: 151431.15 (SE +/- 5.98, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

Liquid-DSP

Threads: 64 - Buffer Length: 256 - Filter Length: 32

Liquid-DSP 1.6 - samples/s, More Is Better: a: 1577300000 (SE +/- 300000.00, N = 2), b: 1576850000 (SE +/- 450000.00, N = 2)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP

Threads: 64 - Buffer Length: 256 - Filter Length: 512

Liquid-DSP 1.6 - samples/s, More Is Better: a: 513135000 (SE +/- 385000.00, N = 2), b: 513040000 (SE +/- 800000.00, N = 2)
1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Stress-NG

Test: Vector Floating Point

Stress-NG 0.15.10 - Bogo Ops/s, More Is Better: a: 58243.38 (SE +/- 30.71, N = 2), b: 58232.70 (SE +/- 4.11, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

Blender

Blend File: Classroom - Compute: CPU-Only

Blender 3.6 - Seconds, Fewer Is Better: a: 127.78 (SE +/- 0.05, N = 2), b: 127.76 (SE +/- 0.13, N = 2)

Stress-NG

Test: CPU Stress

Stress-NG 0.15.10 - Bogo Ops/s, More Is Better: a: 64111.11 (SE +/- 12.73, N = 2), b: 64118.87 (SE +/- 38.95, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

HeFFTe - Highly Efficient FFT for Exascale

Test: r2c - Backend: Stock - Precision: double - X Y Z: 512

HeFFTe 2.3 - GFLOP/s, More Is Better: a: 76.61 (SE +/- 0.01, N = 2), b: 76.60 (SE +/- 0.11, N = 2)
1. (CXX) g++ options: -O3

Stress-NG

Test: Crypto

Stress-NG 0.15.10 - Bogo Ops/s, More Is Better: a: 50240.09 (SE +/- 3.65, N = 2), b: 50243.48 (SE +/- 18.13, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

Neural Magic DeepSparse 1.5 - ms/batch, Fewer Is Better: a: 460.78 (SE +/- 0.42, N = 2), b: 460.76 (SE +/- 2.44, N = 2)

Stress-NG

Test: x86_64 RdRand

Stress-NG 0.15.10 - Bogo Ops/s, More Is Better: a: 331416.52 (SE +/- 2.35, N = 2), b: 331423.04 (SE +/- 1.14, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

Stress-NG

Test: Vector Shuffle

Stress-NG 0.15.10 - Bogo Ops/s, More Is Better: a: 167204.21 (SE +/- 6.63, N = 2), b: 167202.07 (SE +/- 6.04, N = 2)
1. (CXX) g++ options: -O2 -std=gnu99 -lc

Timed Linux Kernel Compilation

Build: allmodconfig

Timed Linux Kernel Compilation 6.1 - Seconds, Fewer Is Better: a: 445.39 (SE +/- 1.46, N = 2), b: 445.38 (SE +/- 1.13, N = 2)

BRL-CAD

VGR Performance Metric

BRL-CAD 7.36 - VGR Performance Metric, More Is Better: a: 466686 (SE +/- 3768.50, N = 2)
1. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6

Apache Cassandra

Test: Writes

Apache Cassandra 4.1.3 - Op/s, More Is Better: a: 155626 (SE +/- 803.50, N = 2)

Blender

Blend File: Pabellon Barcelona - Compute: CPU-Only

Blender 3.6 - Seconds, Fewer Is Better: a: 159.94 (SE +/- 0.04, N = 2)


Phoronix Test Suite v10.8.5