new tests extra cpus

Tests for a future article. Intel Xeon Gold 6346 testing with a Supermicro X12SPO-NTF v2.00 (1.2 BIOS) and astdrmfb on AlmaLinux 9.1 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2304246-NE-NEWTESTSE15
Jump To Table - Results

View

Do Not Show Noisy Results
Do Not Show Results With Incomplete Data
Do Not Show Results With Little Change/Spread
List Notable Results

Limit displaying results to tests within:

Timed Code Compilation 6 Tests
C/C++ Compiler Tests 8 Tests
CPU Massive 13 Tests
Creator Workloads 9 Tests
Cryptography 2 Tests
Database Test Suite 3 Tests
Encoding 4 Tests
Fortran Tests 3 Tests
Game Development 3 Tests
HPC - High Performance Computing 5 Tests
Common Kernel Benchmarks 4 Tests
Machine Learning 3 Tests
Multi-Core 15 Tests
Intel oneAPI 2 Tests
OpenMPI Tests 2 Tests
Programmer / Developer System Benchmarks 6 Tests
Python Tests 6 Tests
Server 6 Tests
Server CPU Tests 10 Tests
Video Encoding 3 Tests

Statistics

Show Overall Harmonic Mean(s)
Show Overall Geometric Mean
Show Geometric Means Per-Suite/Category
Show Wins / Losses Counts (Pie Chart)
Normalize Results
Remove Outliers Before Calculating Averages

Graph Settings

Force Line Graphs Where Applicable
Convert To Scalar Where Applicable
Prefer Vertical Bar Graphs

Additional Graphs

Show Perf Per Core/Thread Calculation Graphs Where Applicable
Show Perf Per Clock Calculation Graphs Where Applicable

Multi-Way Comparison

Condense Multi-Option Tests Into Single Result Graphs
Condense Test Profiles With Multiple Version Results Into Single Result Graphs

Table

Show Detailed System Result Table

Run Management

Highlight
Result
Hide
Result
Result
Identifier
View Logs
Performance Per
Dollar
Date
Run
  Test
  Duration
e
April 04 2023
  5 Hours, 33 Minutes
f
April 04 2023
  5 Hours, 32 Minutes
g
April 04 2023
  2 Hours, 7 Minutes
h
April 22 2023
  1 Day, 8 Hours, 40 Minutes
i
April 23 2023
  11 Hours, 28 Minutes
j
April 23 2023
  11 Hours, 30 Minutes
Invert Hiding All Results Option
  11 Hours, 28 Minutes

Only show results where is faster than
Only show results matching title/arguments (delimit multiple options with a comma):
Do not show results matching title/arguments (delimit multiple options with a comma):


new tests extra cpusOpenBenchmarking.orgPhoronix Test SuiteIntel Xeon Gold 6338 @ 3.20GHz (32 Cores / 64 Threads)Intel Xeon Gold 6346 @ 3.60GHz (16 Cores / 32 Threads)Supermicro X12SPO-NTF v2.00 (1.2 BIOS)8 x 64 GB DDR4-3200MT/s Samsung M393A8G40AB2-CWE2 x 1920GB SAMSUNG MZQL21T9HCJR-00A07astdrmfbDELL E207WFPAlmaLinux 9.15.14.0-162.12.1.el9_1.x86_64 (x86_64)GCC 11.3.1 20220421ext41280x1024ProcessorsMotherboardMemoryDiskGraphicsMonitorOSKernelCompilerFile-SystemScreen Resolution New Tests Extra Cpus PerformanceSystem Logs- Transparent Huge Pages: always- --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-host-bind-now --enable-host-pie --enable-initfini-array --enable-languages=c,c++,fortran,lto --enable-link-serialization=1 --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=x86-64 --with-arch_64=x86-64-v2 --with-build-config=bootstrap-lto --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver --without-isl - Scaling Governor: intel_cpufreq performance - CPU Microcode: 0xd000375- Python 3.9.14- itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Vulnerable: Clear buffers attempted no microcode; SMT vulnerable + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

efghijResult OverviewPhoronix Test Suite100%114%127%141%Embree7-Zip CompressionOpenSSLJohn The RipperDarmstadt Automotive Parallel Heterogeneous SuiteTimed Linux Kernel CompilationStress-NGTimed FFmpeg CompilationRocksDBFFmpegSVT-AV1MariaDBGoogle Draco

new tests extra cpusstress-ng: Mallocstress-ng: Malloconednn: IP Shapes 1D - u8s8f32 - CPUstress-ng: Socket Activitystress-ng: NUMAstress-ng: Atomicstress-ng: Semaphoresstress-ng: Semaphoresstress-ng: NUMAstress-ng: MMAPstress-ng: Pollstress-ng: Pollstress-ng: Atomicstress-ng: MMAPembree: Pathtracer ISPC - Asian Dragonembree: Pathtracer ISPC - Asian Dragon Objembree: Pathtracer ISPC - Crownrocksdb: Read While Writingstress-ng: Zlibembree: Pathtracer - Asian Dragonembree: Pathtracer - Asian Dragon Objstress-ng: Glibc C String Functionsrocksdb: Rand Readnginx: 1000daphne: OpenMP - Points2Imageonednn: Deconvolution Batch shapes_1d - bf16bf16bf16 - CPUembree: Pathtracer - Crownmemcached: 1:100blender: Barbershop - CPU-Onlystress-ng: CPU Cacheblender: Pabellon Barcelona - CPU-Onlyopenssl: ChaCha20-Poly1305stress-ng: Vector Mathstress-ng: CPU Stressopenssl: AES-256-GCMdaphne: OpenMP - Euclidean Clustertensorflow: CPU - 256 - AlexNetblender: Fishy Cat - CPU-Onlyopenssl: RSA4096john-the-ripper: WPA PSKcompress-7zip: Decompression Ratingstress-ng: Glibc Qsort Data Sortingstress-ng: Cryptostress-ng: Vector Mathstress-ng: SENDFILEstress-ng: MEMFDopenssl: AES-128-GCMstress-ng: CPU Stressstress-ng: Cryptoopenssl: SHA512stress-ng: Function Callcompress-7zip: Compression Ratingstress-ng: SENDFILEonednn: Deconvolution Batch shapes_3d - bf16bf16bf16 - CPUjohn-the-ripper: MD5onednn: Convolution Batch Shapes Auto - bf16bf16bf16 - CPUopenssl: ChaCha20blender: BMW27 - CPU-Onlyblender: Classroom - CPU-Onlystress-ng: Hashstress-ng: Hashjohn-the-ripper: Blowfishjohn-the-ripper: bcrypttensorflow: CPU - 512 - AlexNetstress-ng: Matrix Mathopenssl: SHA256onednn: Recurrent Neural Network Training - f32 - CPUstress-ng: Futexonednn: Recurrent Neural Network Training - u8s8f32 - CPUonednn: Recurrent Neural Network Training - bf16bf16bf16 - CPUstress-ng: Zlibrocksdb: Rand Fill Syncnginx: 500openssl: RSA4096stress-ng: Memory Copyingstress-ng: Futexbuild-llvm: Ninjaonednn: Recurrent Neural Network Inference - u8s8f32 - CPUonednn: Recurrent Neural Network Inference - f32 - CPUstress-ng: Matrix Mathbuild-nodejs: Time To Compileonednn: Recurrent Neural Network Inference - bf16bf16bf16 - CPUtensorflow: CPU - 256 - ResNet-50onednn: Deconvolution Batch shapes_3d - f32 - CPUtensorflow: CPU - 256 - GoogLeNetmemcached: 1:10nginx: 200tensorflow: CPU - 64 - GoogLeNettensorflow: CPU - 64 - ResNet-50onednn: Deconvolution Batch shapes_3d - u8s8f32 - CPUonednn: IP Shapes 1D - bf16bf16bf16 - CPUtensorflow: CPU - 512 - ResNet-50tensorflow: CPU - 512 - GoogLeNettensorflow: CPU - 64 - AlexNettensorflow: CPU - 32 - ResNet-50build-godot: Time To Compiletensorflow: CPU - 32 - GoogLeNetbuild-llvm: Unix Makefilesbuild-linux-kernel: defconfigopencv: Object Detectiontensorflow: CPU - 32 - AlexNetmemcached: 1:5onednn: IP Shapes 1D - f32 - CPUnwchem: C240 Buckyballtensorflow: CPU - 16 - GoogLeNetnginx: 100build-ffmpeg: Time To Compileopencv: Videotensorflow: CPU - 16 - ResNet-50opencv: Image Processingonednn: Deconvolution Batch shapes_1d - f32 - CPUtensorflow: CPU - 16 - AlexNetopencv: Stitchingonednn: IP Shapes 3D - u8s8f32 - CPUstress-ng: Mutexstress-ng: Mutexopencv: Features 2Donednn: Deconvolution Batch shapes_1d - u8s8f32 - CPUopencv: DNN - Deep Neural Networkrocksdb: Read Rand Write Randsvt-av1: Preset 8 - Bosphorus 4Kstress-ng: Forkingrocksdb: Seq Filljohn-the-ripper: HMAC-SHA512opencv: Graph APIapache: 100rocksdb: Rand Fillonednn: Convolution Batch Shapes Auto - u8s8f32 - CPUstress-ng: CPU Cachestress-ng: System V Message Passingonednn: IP Shapes 3D - f32 - CPUsvt-av1: Preset 4 - Bosphorus 1080pffmpeg: libx264 - Uploadffmpeg: libx264 - Uploadffmpeg: libx264 - Platformffmpeg: libx264 - Platformffmpeg: libx264 - Video On Demandffmpeg: libx264 - Video On Demandstress-ng: Pthreadffmpeg: libx264 - Liveffmpeg: libx264 - Livemysqlslap: 8192rocksdb: Update Randdaphne: OpenMP - NDT Mappingonednn: Convolution Batch Shapes Auto - f32 - CPUopencv: Coresvt-av1: Preset 13 - Bosphorus 1080psvt-av1: Preset 12 - Bosphorus 4Kbuild2: Time To Compileffmpeg: libx265 - Liveffmpeg: libx265 - Liveapache: 500stress-ng: Context Switchingstress-ng: Forkingapache: 200apache: 1000vvenc: Bosphorus 1080p - Fasterffmpeg: libx265 - Video On Demandffmpeg: libx265 - Video On Demandffmpeg: libx265 - Uploadffmpeg: libx265 - Uploadffmpeg: libx265 - Platformffmpeg: libx265 - Platformsvt-av1: Preset 12 - Bosphorus 1080pmysqlslap: 2048draco: Liononednn: IP Shapes 3D - bf16bf16bf16 - CPUsvt-av1: Preset 13 - Bosphorus 4Kmysqlslap: 256mysqlslap: 1024mysqlslap: 512vvenc: Bosphorus 1080p - Faststress-ng: Pthreaddraco: Church Facadevvenc: Bosphorus 4K - Fastsvt-av1: Preset 8 - Bosphorus 1080pmysqlslap: 4096vvenc: Bosphorus 4K - Fasterdeeprec: DIN - BF16petsc: Streamsdeeprec: DLRM - BF16deeprec: DIN - FP32svt-av1: Preset 4 - Bosphorus 4Kdeeprec: DLRM - FP32deeprec: BST - BF16deeprec: PLE - FP32deeprec: BST - FP32deeprec: MMOE - FP32deeprec: DCNv2 - FP32deeprec: PLE - BF16deeprec: DCNv2 - BF16encode-opus: WAV To Opus Encodedeeprec: MMOE - BF16stress-ng: x86_64 RdRanddeeprec: DCNv2 - FP32deeprec: DCNv2 - BF16deeprec: PLE - FP32deeprec: PLE - BF16deeprec: BST - FP32deeprec: BST - BF16deeprec: MMOE - FP32deeprec: MMOE - BF16deeprec: DLRM - FP32deeprec: DLRM - BF16deeprec: DIN - FP32deeprec: DIN - BF16stress-ng: MEMFDbuild-linux-kernel: allmodconfigefghij78931999.9978610054.561.598949732.02166.99157.056279831.976280163.34179.61774.1540383844035573.84156.11729.2644.404337.924735.003146912181789.1335.470632.23443501538.25106366292164243.213099.4070335087.7205929.40922973860.21606.438.99194.87111222988250119392.9152534.22267443782180690.33566.1976.55419714.9189308136996276.8338963.93119310.97489496.07394.9930364044438054460.7339091.088346380060152594.25196121492399.27.7720441270005.6151815221416810059.84164.854466781.044835927.284039640440578.04121611.7620773085880886.912073056.27903.519891.6991620.6300473160858.8213789.36083.672103532.73335.322508.746511.49113551.29295.654513.73572.721.88199217.054268751.71160594.88213.1971.50.4129853.112772.87218.09468.6669.44187.577204.46403.97853.83330587393.954317194.511.308394764.8191.59151096.1928.014840862.631236784.53853301.942417480.536328969776.95969913.81631560.41477529168278100355.79475346.361274287147448000188005128779.9811111791.633512223417.176895520.281.630067.231247.5810.2039.33192.59191.8939.48144344.1828.37178.003938016971165.731.8492274952480.119191.68396.83744.63113.16186259.015166885.1277939.2189134.12185846.0325.43941.74181.48895509520.64122.3442.01180.30502.71884854721.68355176.74687886786813.339142321.2666544.747127.6057989.0463.049334310.64430.0679082149.2878868686.90.8318239783.02163.39157.766288116.396274228.79169.83742.734035027.514027385.59160.27727.5244.407538.096634.953247219781788.235.381932.06763529363.29106552921164831.2712910.6390120787.7387529.28652808783.97596.9740.44194.12111171056710119349.9552157.84267503957880712.93565.6776.93419839189854136440275.5838948.07119310.27490342.61415.3330328281416054403.0439071.558348167710152593.8197139493574.137.7729341190005.6263915220771308060.17160.314463109.624835657.844046040421576.64121882.8620783323840897.8392082786.91887.466892.2831591.75300713160487.1913782.46066.732125283.93336.192509.809510.863113258.03295.08522.14772.781.86105217.014165587.24160596.18213.2171.490.4087124.1943772.89218.2466.0369.42187.047204.59405.54453.20730347393.424295820.031.300874753.6191.96151139.5727.872816262.841226564.312283022396830.560392972025.68969633.43617920.41475629288277890156.24274817.21267556146567000188565129247.9511099941.545792431294.876905447.591.618197.976247.5310.2039.48191.85192.0339.45144244.528.27178.653938003191166.691.6947876273483.353193.99796.6444.65113.11186542.925115907.3877970.3190175.9185721.1625.49841.88180.8820.57122.7641.85181.017315444514.74285354921.64727174.64987085187313.281142232.0666564.755126.8818039.0363.049334324.2437.8579143065.749861.35184.83160.156274788.61735.314023520.1244.284838.069834.7406479917235.505532.19623645309.6310786789413009.34287640929.358840.1211110376680052459.63266810352160722.67419763.7189308136419276.1339002.26119385.68492918.31464.423030874043708346418580152594.6719811441220001522102026204460696.994040240460208500175202109816.631321.5330136913813.46092.1113380.2353.15728.036969577.51277657055.299128286914719400011043557787657.637.515247.8710.1939.47191.89192.4139.37144040.0228.30178.463917979701142.64486.252193.38944.78112.775204045.0777787.3841.75181.4520.61122.5041.85181.01521.868495525173.4948878648666530128.1428103.054334325.6423388075.5823508474.850.4964003522.2693.25260.373241988.273244081.7395.60433.202257645.942257415.18260.09431.3028.310824.363322.434731220701203.5023.069521.00212408058.8672619868109256.7119476.08498270111.213319.71762136467.21889.4157.06287.697525230965381804.4535599.20183035127033961.40386.26112.07286665.812964993752189.5026680.4781800.43337606.34557.2920799413498337556.0826814.165732289320104780.05136235338894.5411.292628463338.1258510498895162386.69232.063089184.073348679.102802828013400.9884915.66145351412501269.702969099.971264.551268.131135.63211679113684.799773.24302.252977539.72472.555700.363706.98081986.25406.289705.24753.262.53342159.463134084.27118060.34157.2452.770.5453884.1648154.14162.44348.9152.15247.789154.60527.02167.75224753306.793380951.401.674605851.2150.35118589.5834.957679950.521046753.72347248.812023490.6338001165638.381165454.03526890.49455625656235360347.30864172.351498799126293000161540148306.0012843771.794362473550.017943561.311.860158.059216.2511.6844.91168.67168.5444.94163837.0324.95202.384447104321294.741.9173367526541.995173.515107.67240.55124.54168427.665373285.4271159.22173426.91168568.7927.79345.65165.9622.29113.2845.67165.88540.94480951671.75412166.19489082988013.853147732.4863904.566123.7978269.26442109.68137247.862249814.1852319.063.081137681.756323.7336149.2110910.2296261.9417343.9717420.7712340.0734.26131100.06334563.4733.8724.1070.6134.0321.3112.35188.0160.74268.9197.29102.1982.25538.8523517728.1623474659.320.4973966574.2194.5256.923242574.793243813.6494.09430.952258819.862257168.81273.44431.1928.271324.281622.359631113381155.223.052520.98332496277.1972533469109346.818470.68855648511.228219.77392164313.83889.8952.96287.137524941879081676.8335705.981828145176601013.54385.88111.89286768.112973193671190.126761.4681695.62340726.6454.1320813644377037349.0326817.835730328490104763.64136308340013.711.295328520008.1449810500306698086.62231.783087253.763343242.272800328032400.9885096.01145404757101271.332815652.71269.411274.191140.67211476113549.169755.34310.232583399.14472.977702.461709.05782002.88406.189696.91953.222.52744159.343152408.51118216.29157.4152.820.5515044.1333854.12162.47349.152.21247.872154.92533.568.80523632306.643411591.011.670355938.9150.41118462.7935.076669350.641007663.7238248.152030550.6451011166455.131165955.27549130.49346724592233718447.73164087.971485391125816000164124150468.5712713861.735762540532.657960924.711.825378.296216.2411.6844.97168.45168.4444.97163053.3525.06201.534427083411247.751.9040468984537.138172.35105.92840.55124.54173949.235303334.5871274.12182855.42173404.2227.845.44166.7222.28113.3445.62166.05540.89579651761.74051166.27791882286413.886148250.9864114.566123.1128229.24542145.59138390.810750501.652878.343.061138919.176343.236290.1110929.6596362.2417300.8317426.6512314.6334.24431134.98334550.1133.7924.0570.8834.0421.3512.39188.2160.81271.3398.64103.2882.32391.5123552788.523468513.730.492176586.7299.76306.33244195.863244457.4294.02431.592256203.862254827.3261.74431.828.430524.277922.323230973221206.8423.158920.97652386711.9971326607109558.9818237.24836396711.57719.61931993539.11888.5857.86287.417527494342080822.7135839.551816474110601014.53386.41112.28286274.212963893599189.2726729.281778.72342626.17381.7420834499761037389.3826817.865727203610104750.99136005341456.2311.304328430008.1358610499509753086.71232.133090952.63342591.62801228022400.2685213.63145460665501260.992728277.391269.071263.731137.34211657113739.849754.14304.942906460.44473.739713.052699.0881862.66405.753702.07753.252.53518159.483161848.43118198.09157.3852.80.5456264.1609554.14162.25348.5452.16248.554154.99522.27568.86723743304.843343846.781.677396087.3150.68118453.7435.025681350.541040333.72002248.492005360.6248011165016.21165479.96528780.49521526708233760847.80363405.191492605125180000160766149032.7512848901.775442570371.447955059.051.837098.258216.5211.6644.95168.50168.8844.85164013.1725.13200.924437065081246.031.8908167760537.548173.551106.06740.43124.90170495.744864771.1870654.25172379.56168958.5527.9145.49166.5322.46112.4445.53166.39536.00481051731.75393167.43390386584013.823148426.9564074.574123.2718109.33241131.21139367.204150252.0552806.143.05137744.286354.0936137.8210952.9696002.5617325.7917458.8512337.8434.31331093.88334550.333.8424.170.5834.121.3912.41187.560.73269.0398.15103.1480.33390.83OpenBenchmarking.org

Stress-NG

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Mallocefghij20M40M60M80M100MSE +/- 109768.81, N = 378931999.9979082149.2879143065.7423388075.5823517728.1623552788.501. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Mallocefghij14M28M42M56M70MMin: 23168587.41 / Avg: 23388075.58 / Max: 23501854.321. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: Mallocefhij20M40M60M80M100MSE +/- 12210.94, N = 378610054.5678868686.9023508474.8523474659.3223468513.731. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: Mallocefhij14M28M42M56M70MMin: 23486973.12 / Avg: 23508474.85 / Max: 23529254.551. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPUefhij0.35980.71961.07941.43921.799SE +/- 0.002966, N = 31.5989400.8318230.4964000.4973960.492170MIN: 0.67MIN: 0.53MIN: 0.39MIN: 0.4MIN: 0.391. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPUefhij246810Min: 0.49 / Avg: 0.5 / Max: 0.51. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Stress-NG

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Socket Activityefghij2K4K6K8K10KSE +/- 622.10, N = 159732.029783.029861.353522.266574.216586.721. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Socket Activityefghij2K4K6K8K10KMin: 8.8 / Avg: 3522.26 / Max: 6654.091. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: NUMAefghij4080120160200SE +/- 0.91, N = 3166.99163.39184.8393.2594.5099.761. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: NUMAefghij306090120150Min: 91.61 / Avg: 93.25 / Max: 94.751. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Atomicefghij70140210280350SE +/- 1.08, N = 3157.05157.76160.15260.37256.92306.301. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Atomicefghij60120180240300Min: 258.39 / Avg: 260.37 / Max: 262.091. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: Semaphoresefhij1.3M2.6M3.9M5.2M6.5MSE +/- 1059.44, N = 36279831.976288116.393241988.273242574.793244195.861. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: Semaphoresefhij1.1M2.2M3.3M4.4M5.5MMin: 3240056.99 / Avg: 3241988.27 / Max: 3243708.781. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Semaphoresefghij1.3M2.6M3.9M5.2M6.5MSE +/- 626.84, N = 36280163.346274228.796274788.613244081.733243813.643244457.421. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Semaphoresefghij1.1M2.2M3.3M4.4M5.5MMin: 3242828.51 / Avg: 3244081.73 / Max: 3244737.931. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: NUMAefhij4080120160200SE +/- 1.20, N = 15179.61169.8395.6094.0994.021. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: NUMAefhij306090120150Min: 87.55 / Avg: 95.6 / Max: 103.231. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: MMAPefghij170340510680850SE +/- 0.70, N = 3774.15742.73735.31433.20430.95431.591. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: MMAPefghij140280420560700Min: 431.82 / Avg: 433.2 / Max: 434.091. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: Pollefhij900K1800K2700K3600K4500KSE +/- 1054.76, N = 34038384.004035027.512257645.942258819.862256203.861. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: Pollefhij700K1400K2100K2800K3500KMin: 2256051.14 / Avg: 2257645.94 / Max: 2259639.181. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Pollefghij900K1800K2700K3600K4500KSE +/- 230.77, N = 34035573.844027385.594023520.122257415.182257168.812254827.301. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Pollefghij700K1400K2100K2800K3500KMin: 2256957.44 / Avg: 2257415.18 / Max: 2257695.251. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: Atomicefhij60120180240300SE +/- 2.20, N = 15156.11160.27260.09273.44261.741. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: Atomicefhij50100150200250Min: 248.06 / Avg: 260.09 / Max: 274.321. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: MMAPefhij160320480640800SE +/- 0.39, N = 3729.26727.52431.30431.19431.801. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: MMAPefhij130260390520650Min: 430.52 / Avg: 431.3 / Max: 431.781. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread

Embree

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.0.1Binary: Pathtracer ISPC - Model: Asian Dragonefghij1020304050SE +/- 0.07, N = 344.4044.4144.2828.3128.2728.43MIN: 43.8 / MAX: 46.79MIN: 43.88 / MAX: 46.85MIN: 43.73 / MAX: 46.57MIN: 27.61 / MAX: 30.41MIN: 27.6 / MAX: 30.18MIN: 27.79 / MAX: 30.42
OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.0.1Binary: Pathtracer ISPC - Model: Asian Dragonefghij918273645Min: 28.21 / Avg: 28.31 / Max: 28.45

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.0.1Binary: Pathtracer ISPC - Model: Asian Dragon Objefghij918273645SE +/- 0.03, N = 337.9238.1038.0724.3624.2824.28MIN: 37.43 / MAX: 40.04MIN: 37.56 / MAX: 40.12MIN: 37.55 / MAX: 40.11MIN: 23.83 / MAX: 26.05MIN: 23.75 / MAX: 25.83MIN: 23.75 / MAX: 25.58
OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.0.1Binary: Pathtracer ISPC - Model: Asian Dragon Objefghij816243240Min: 24.31 / Avg: 24.36 / Max: 24.39

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.0.1Binary: Pathtracer ISPC - Model: Crownefghij816243240SE +/- 0.04, N = 335.0034.9534.7422.4322.3622.32MIN: 34.13 / MAX: 36.64MIN: 34.07 / MAX: 36.73MIN: 33.94 / MAX: 36.36MIN: 21.85 / MAX: 24.32MIN: 21.84 / MAX: 24MIN: 21.84 / MAX: 24.02
OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.0.1Binary: Pathtracer ISPC - Model: Crownefghij714212835Min: 22.36 / Avg: 22.43 / Max: 22.51

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Read While Writingefghij1000K2000K3000K4000K5000KSE +/- 10221.65, N = 34691218472197847991723122070311133830973221. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti
OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Read While Writingefghij800K1600K2400K3200K4000KMin: 3107898 / Avg: 3122070.33 / Max: 31419161. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

Stress-NG

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: Zlibefhij400800120016002000SE +/- 1.64, N = 31789.131788.201203.501155.201206.841. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: Zlibefhij30060090012001500Min: 1200.28 / Avg: 1203.5 / Max: 1205.631. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread

Embree

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.0.1Binary: Pathtracer - Model: Asian Dragonefghij816243240SE +/- 0.05, N = 335.4735.3835.5123.0723.0523.16MIN: 34.22 / MAX: 37.08MIN: 34.3 / MAX: 36.96MIN: 34.23 / MAX: 37MIN: 22.9 / MAX: 23.77MIN: 22.96 / MAX: 23.68MIN: 23.06 / MAX: 23.75
OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.0.1Binary: Pathtracer - Model: Asian Dragonefghij816243240Min: 22.99 / Avg: 23.07 / Max: 23.16

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.0.1Binary: Pathtracer - Model: Asian Dragon Objefghij714212835SE +/- 0.04, N = 332.2332.0732.2021.0020.9820.98MIN: 31.25 / MAX: 33.72MIN: 31.19 / MAX: 33.77MIN: 31.14 / MAX: 33.69MIN: 20.84 / MAX: 21.33MIN: 20.9 / MAX: 21.18MIN: 20.89 / MAX: 21.22
OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.0.1Binary: Pathtracer - Model: Asian Dragon Objefghij714212835Min: 20.93 / Avg: 21 / Max: 21.05

Stress-NG

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Glibc C String Functionsefghij800K1600K2400K3200K4000KSE +/- 3246.22, N = 33501538.253529363.293645309.632408058.862496277.192386711.991. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Glibc C String Functionsefghij600K1200K1800K2400K3000KMin: 2401702.94 / Avg: 2408058.86 / Max: 2412383.771. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Random Readefghij20M40M60M80M100MSE +/- 707833.96, N = 31063662921065529211078678947261986872533469713266071. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti
OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Random Readefghij20M40M60M80M100MMin: 71210926 / Avg: 72619868.33 / Max: 734437041. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web-server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRequests Per Second, More Is Betternginx 1.23.2Connections: 1000efhij40K80K120K160K200KSE +/- 28.90, N = 3164243.20164831.27109256.71109346.80109558.981. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
OpenBenchmarking.orgRequests Per Second, More Is Betternginx 1.23.2Connections: 1000efhij30K60K90K120K150KMin: 109198.91 / Avg: 109256.71 / Max: 109286.321. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite with OpenCL / CUDA / OpenMP test cases for these automotive benchmarks for evaluating programming models in context to vehicle autonomous driving capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTest Cases Per Minute, More Is BetterDarmstadt Automotive Parallel Heterogeneous Suite 2021.11.02Backend: OpenMP - Kernel: Points2Imageefghij4K8K12K16K20KSE +/- 30.76, N = 313099.4112910.6413009.3419476.0818470.6918237.251. (CXX) g++ options: -O3 -std=c++11 -fopenmp
OpenBenchmarking.orgTest Cases Per Minute, More Is BetterDarmstadt Automotive Parallel Heterogeneous Suite 2021.11.02Backend: OpenMP - Kernel: Points2Imageefghij3K6K9K12K15KMin: 19416.09 / Avg: 19476.08 / Max: 19517.841. (CXX) g++ options: -O3 -std=c++11 -fopenmp

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPUefhij3691215SE +/- 0.00893, N = 37.720597.7387511.2133011.2282011.57700MIN: 7.58MIN: 7.58MIN: 11.11MIN: 11.11MIN: 11.481. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPUefhij3691215Min: 11.2 / Avg: 11.21 / Max: 11.231. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Embree

OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.0.1Binary: Pathtracer - Model: Crownefghij714212835SE +/- 0.05, N = 329.4129.2929.3619.7219.7719.62MIN: 28.72 / MAX: 31.05MIN: 28.61 / MAX: 30.83MIN: 28.73 / MAX: 30.61MIN: 19.49 / MAX: 20.5MIN: 19.64 / MAX: 20.73MIN: 19.47 / MAX: 20.27
OpenBenchmarking.orgFrames Per Second, More Is BetterEmbree 4.0.1Binary: Pathtracer - Model: Crownefghij714212835Min: 19.65 / Avg: 19.72 / Max: 19.82

Memcached

Memcached is a high performance, distributed memory object caching system. This Memcached test profiles makes use of memtier_benchmark for excuting this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOps/sec, More Is BetterMemcached 1.6.19Set To Get Ratio: 1:100efhij600K1200K1800K2400K3000KSE +/- 24953.13, N = 152973860.212808783.972136467.212164313.831993539.111. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.orgOps/sec, More Is BetterMemcached 1.6.19Set To Get Ratio: 1:100efhij500K1000K1500K2000K2500KMin: 1976151.61 / Avg: 2136467.21 / Max: 2278691.031. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.5Blend File: Barbershop - Compute: CPU-Onlyefhij2004006008001000SE +/- 4.90, N = 3606.40596.97889.41889.89888.58
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.5Blend File: Barbershop - Compute: CPU-Onlyefhij160320480640800Min: 879.62 / Avg: 889.41 / Max: 894.51

Stress-NG

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: CPU Cacheefghij1326395265SE +/- 3.29, N = 1238.9940.4440.1257.0652.9657.861. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: CPU Cacheefghij1122334455Min: 48.66 / Avg: 57.06 / Max: 91.981. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.5Blend File: Pabellon Barcelona - Compute: CPU-Onlyefhij60120180240300SE +/- 0.12, N = 3194.87194.12287.69287.13287.41
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.5Blend File: Pabellon Barcelona - Compute: CPU-Onlyefhij50100150200250Min: 287.45 / Avg: 287.69 / Max: 287.84

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: ChaCha20-Poly1305efghij20000M40000M60000M80000M100000MSE +/- 6468540.55, N = 31112229882501111710567101111037668007525230965375249418790752749434201. (CC) gcc options: -pthread -m64 -O3 -ldl
OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: ChaCha20-Poly1305efghij20000M40000M60000M80000M100000MMin: 75239970410 / Avg: 75252309653.33 / Max: 752618457801. (CC) gcc options: -pthread -m64 -O3 -ldl

Stress-NG

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: Vector Mathefhij30K60K90K120K150KSE +/- 56.68, N = 3119392.91119349.9581804.4581676.8380822.711. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: Vector Mathefhij20K40K60K80K100KMin: 81694.78 / Avg: 81804.45 / Max: 81884.111. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: CPU Stressefghij11K22K33K44K55KSE +/- 35.16, N = 352534.2252157.8452459.6335599.2035705.9835839.551. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: CPU Stressefghij9K18K27K36K45KMin: 35533.14 / Avg: 35599.2 / Max: 35653.11. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: AES-256-GCMefghij60000M120000M180000M240000M300000MSE +/- 253599393.56, N = 32674437821802675039578802668103521601830351270331828145176601816474110601. (CC) gcc options: -pthread -m64 -O3 -ldl
OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: AES-256-GCMefghij50000M100000M150000M200000M250000MMin: 182751829740 / Avg: 183035127033.33 / Max: 1835411174701. (CC) gcc options: -pthread -m64 -O3 -ldl

Darmstadt Automotive Parallel Heterogeneous Suite

DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite with OpenCL / CUDA / OpenMP test cases for these automotive benchmarks for evaluating programming models in context to vehicle autonomous driving capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTest Cases Per Minute, More Is BetterDarmstadt Automotive Parallel Heterogeneous Suite 2021.11.02Backend: OpenMP - Kernel: Euclidean Clusterefghij2004006008001000SE +/- 10.31, N = 5690.33712.93722.67961.401013.541014.531. (CXX) g++ options: -O3 -std=c++11 -fopenmp
OpenBenchmarking.orgTest Cases Per Minute, More Is BetterDarmstadt Automotive Parallel Heterogeneous Suite 2021.11.02Backend: OpenMP - Kernel: Euclidean Clusterefghij2004006008001000Min: 949.21 / Avg: 961.4 / Max: 1002.511. (CXX) g++ options: -O3 -std=c++11 -fopenmp

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 256 - Model: AlexNetefhij120240360480600SE +/- 0.07, N = 3566.19565.67386.26385.88386.41
OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 256 - Model: AlexNetefhij100200300400500Min: 386.19 / Avg: 386.26 / Max: 386.39

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.5Blend File: Fishy Cat - Compute: CPU-Onlyefhij306090120150SE +/- 0.09, N = 376.5576.93112.07111.89112.28
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.5Blend File: Fishy Cat - Compute: CPU-Onlyefhij20406080100Min: 111.91 / Avg: 112.07 / Max: 112.21

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgverify/s, More Is BetterOpenSSL 3.1Algorithm: RSA4096efghij90K180K270K360K450KSE +/- 136.10, N = 3419714.9419839.0419763.7286665.8286768.1286274.21. (CC) gcc options: -pthread -m64 -O3 -ldl
OpenBenchmarking.orgverify/s, More Is BetterOpenSSL 3.1Algorithm: RSA4096efghij70K140K210K280K350KMin: 286397.1 / Avg: 286665.8 / Max: 286837.81. (CC) gcc options: -pthread -m64 -O3 -ldl

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 2023.03.14Test: WPA PSKefghij40K80K120K160K200KSE +/- 27.79, N = 31893081898541893081296491297311296381. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lm -lrt -lz -ldl -lcrypt
OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 2023.03.14Test: WPA PSKefghij30K60K90K120K150KMin: 129595 / Avg: 129648.67 / Max: 1296881. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lm -lrt -lz -ldl -lcrypt

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Decompression Ratingefghij30K60K90K120K150KSE +/- 74.28, N = 31369961364401364199375293671935991. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Decompression Ratingefghij20K40K60K80K100KMin: 93611 / Avg: 93752 / Max: 938631. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Stress-NG

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Glibc Qsort Data Sortingefghij60120180240300SE +/- 0.26, N = 3276.83275.58276.13189.50190.10189.271. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Glibc Qsort Data Sortingefghij50100150200250Min: 189.07 / Avg: 189.5 / Max: 189.971. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Cryptoefghij8K16K24K32K40KSE +/- 22.35, N = 338963.9338948.0739002.2626680.4726761.4626729.201. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Cryptoefghij7K14K21K28K35KMin: 26657.6 / Avg: 26680.47 / Max: 26725.171. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Vector Mathefghij30K60K90K120K150KSE +/- 32.35, N = 3119310.97119310.27119385.6881800.4381695.6281778.721. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Vector Mathefghij20K40K60K80K100KMin: 81737.88 / Avg: 81800.43 / Max: 81846.041. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: SENDFILEefghij110K220K330K440K550KSE +/- 71.80, N = 3489496.07490342.61492918.31337606.34340726.60342626.171. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: SENDFILEefghij90K180K270K360K450KMin: 337512.4 / Avg: 337606.34 / Max: 337747.371. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: MEMFDefghij120240360480600SE +/- 13.12, N = 15394.99415.33464.42557.29454.13381.741. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: MEMFDefghij100200300400500Min: 424.62 / Avg: 557.29 / Max: 625.891. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: AES-128-GCMefghij70000M140000M210000M280000M350000MSE +/- 20712512.35, N = 33036404443803032828141603030874043702079941349832081364437702083449976101. (CC) gcc options: -pthread -m64 -O3 -ldl
OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: AES-128-GCMefghij50000M100000M150000M200000M250000MMin: 207953056560 / Avg: 207994134983.33 / Max: 2080193052701. (CC) gcc options: -pthread -m64 -O3 -ldl

Stress-NG

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: CPU Stressefhij12K24K36K48K60KSE +/- 54.92, N = 354460.7354403.0437556.0837349.0337389.381. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: CPU Stressefhij9K18K27K36K45KMin: 37475.17 / Avg: 37556.08 / Max: 37660.871. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: Cryptoefhij8K16K24K32K40KSE +/- 2.51, N = 339091.0839071.5526814.1626817.8326817.861. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: Cryptoefhij7K14K21K28K35KMin: 26809.15 / Avg: 26814.16 / Max: 26817.021. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: SHA512efghij2000M4000M6000M8000M10000MSE +/- 1617191.98, N = 38346380060834816771083464185805732289320573032849057272036101. (CC) gcc options: -pthread -m64 -O3 -ldl
OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: SHA512efghij1400M2800M4200M5600M7000MMin: 5729343940 / Avg: 5732289320 / Max: 57349193701. (CC) gcc options: -pthread -m64 -O3 -ldl

Stress-NG

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Function Callefghij30K60K90K120K150KSE +/- 3.58, N = 3152594.25152593.80152594.67104780.05104763.64104750.991. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Function Callefghij30K60K90K120K150KMin: 104773.4 / Avg: 104780.05 / Max: 104785.681. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread

7-Zip Compression

This is a test of 7-Zip compression/decompression with its integrated benchmark feature. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Compression Ratingefghij40K80K120K160K200KSE +/- 426.89, N = 31961211971391981141362351363081360051. (CXX) g++ options: -lpthread -ldl -O2 -fPIC
OpenBenchmarking.orgMIPS, More Is Better7-Zip Compression 22.01Test: Compression Ratingefghij30K60K90K120K150KMin: 135432 / Avg: 136234.67 / Max: 1368881. (CXX) g++ options: -lpthread -ldl -O2 -fPIC

Stress-NG

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: SENDFILEefhij110K220K330K440K550KSE +/- 82.32, N = 3492399.20493574.13338894.54340013.70341456.231. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: SENDFILEefhij90K180K270K360K450KMin: 338741.45 / Avg: 338894.54 / Max: 339023.551. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPUefhij3691215SE +/- 0.00717, N = 37.772047.7729311.2926011.2953011.30430MIN: 7.7MIN: 7.71MIN: 11.22MIN: 11.22MIN: 11.231. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPUefhij3691215Min: 11.28 / Avg: 11.29 / Max: 11.311. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 2023.03.14Test: MD5efghij900K1800K2700K3600K4500KSE +/- 2962.73, N = 34127000411900041220002846333285200028430001. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lm -lrt -lz -ldl -lcrypt
OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 2023.03.14Test: MD5efghij700K1400K2100K2800K3500KMin: 2842000 / Avg: 2846333.33 / Max: 28520001. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lm -lrt -lz -ldl -lcrypt

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPUefhij246810SE +/- 0.00039, N = 35.615185.626398.125858.144988.13586MIN: 5.59MIN: 5.59MIN: 8.12MIN: 8.12MIN: 8.121. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPUefhij3691215Min: 8.13 / Avg: 8.13 / Max: 8.131. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: ChaCha20efghij30000M60000M90000M120000M150000MSE +/- 8688544.64, N = 31522141681001522077130801522102026201049889516231050030669801049950975301. (CC) gcc options: -pthread -m64 -O3 -ldl
OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: ChaCha20efghij30000M60000M90000M120000M150000MMin: 104972120340 / Avg: 104988951623.33 / Max: 1050011093701. (CC) gcc options: -pthread -m64 -O3 -ldl

Blender

Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.5Blend File: BMW27 - Compute: CPU-Onlyefhij20406080100SE +/- 0.02, N = 359.8460.1786.6986.6286.71
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.5Blend File: BMW27 - Compute: CPU-Onlyefhij1632486480Min: 86.66 / Avg: 86.69 / Max: 86.73

OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.5Blend File: Classroom - Compute: CPU-Onlyefhij50100150200250SE +/- 0.19, N = 3164.85160.31232.06231.78232.13
OpenBenchmarking.orgSeconds, Fewer Is BetterBlender 3.5Blend File: Classroom - Compute: CPU-Onlyefhij4080120160200Min: 231.72 / Avg: 232.06 / Max: 232.37

Stress-NG

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Hashefghij1000K2000K3000K4000K5000KSE +/- 126.06, N = 34466781.044463109.624460696.993089184.073087253.763090952.601. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Hashefghij800K1600K2400K3200K4000KMin: 3089016.94 / Avg: 3089184.07 / Max: 3089431.121. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: Hashefhij1000K2000K3000K4000K5000KSE +/- 1653.89, N = 34835927.284835657.843348679.103343242.273342591.601. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: Hashefhij800K1600K2400K3200K4000KMin: 3345376.15 / Avg: 3348679.1 / Max: 3350485.271. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread

John The Ripper

This is a benchmark of John The Ripper, which is a password cracker. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 2023.03.14Test: Blowfishefghij9K18K27K36K45KSE +/- 11.70, N = 34039640460404022802828003280121. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lm -lrt -lz -ldl -lcrypt
OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 2023.03.14Test: Blowfishefghij7K14K21K28K35KMin: 28012 / Avg: 28028.33 / Max: 280511. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lm -lrt -lz -ldl -lcrypt

OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 2023.03.14Test: bcryptefghij9K18K27K36K45KSE +/- 11.26, N = 34044040421404602801328032280221. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lm -lrt -lz -ldl -lcrypt
OpenBenchmarking.orgReal C/S, More Is BetterJohn The Ripper 2023.03.14Test: bcryptefghij7K14K21K28K35KMin: 27993 / Avg: 28012.67 / Max: 280321. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lm -lrt -lz -ldl -lcrypt

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 512 - Model: AlexNetefhij120240360480600SE +/- 0.19, N = 3578.04576.64400.98400.98400.26
OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 512 - Model: AlexNetefhij100200300400500Min: 400.63 / Avg: 400.98 / Max: 401.26

Stress-NG

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: Matrix Mathefhij30K60K90K120K150KSE +/- 51.94, N = 3121611.76121882.8684915.6685096.0185213.631. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: Matrix Mathefhij20K40K60K80K100KMin: 84812.41 / Avg: 84915.66 / Max: 84977.141. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: SHA256efghij4000M8000M12000M16000M20000MSE +/- 3599683.66, N = 32077308588020783323840208500175201453514125014540475710145460665501. (CC) gcc options: -pthread -m64 -O3 -ldl
OpenBenchmarking.orgbyte/s, More Is BetterOpenSSL 3.1Algorithm: SHA256efghij4000M8000M12000M16000M20000MMin: 14530122210 / Avg: 14535141250 / Max: 145421206801. (CC) gcc options: -pthread -m64 -O3 -ldl

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPUefhij30060090012001500SE +/- 2.73, N = 3886.91897.841269.701271.331260.99MIN: 844.74MIN: 851.56MIN: 1215.95MIN: 1221.68MIN: 1212.221. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPUefhij2004006008001000Min: 1265.28 / Avg: 1269.7 / Max: 1274.71. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Stress-NG

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Futexefghij600K1200K1800K2400K3000KSE +/- 44269.26, N = 152073056.272082786.912109816.632969099.972815652.702728277.391. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Futexefghij500K1000K1500K2000K2500KMin: 2648678.08 / Avg: 2969099.97 / Max: 3315763.091. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPUefhij30060090012001500SE +/- 2.92, N = 3903.52887.471264.551269.411269.07MIN: 849.4MIN: 844.57MIN: 1210.64MIN: 1219.37MIN: 1220.451. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPUefhij2004006008001000Min: 1261.32 / Avg: 1264.55 / Max: 1270.371. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPUefhij30060090012001500SE +/- 3.00, N = 3891.70892.281268.131274.191263.73MIN: 849.48MIN: 849.32MIN: 1214.46MIN: 1225.59MIN: 1212.791. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPUefhij2004006008001000Min: 1264.27 / Avg: 1268.13 / Max: 1274.041. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Stress-NG

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Zlibefghij30060090012001500SE +/- 1.93, N = 31620.601591.751321.531135.631140.671137.341. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Zlibefghij30060090012001500Min: 1133.7 / Avg: 1135.63 / Max: 1139.481. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread

RocksDB

This is a benchmark of Meta/Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Random Fill Syncefghij60K120K180K240K300KSE +/- 58.27, N = 33004733007133013692116792114762116571. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti
OpenBenchmarking.orgOp/s, More Is BetterRocksDB 8.0Test: Random Fill Syncefghij50K100K150K200K250KMin: 211591 / Avg: 211678.67 / Max: 2117891. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web-server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRequests Per Second, More Is Betternginx 1.23.2Connections: 500efhij30K60K90K120K150KSE +/- 17.26, N = 3160858.82160487.19113684.79113549.16113739.841. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
OpenBenchmarking.orgRequests Per Second, More Is Betternginx 1.23.2Connections: 500efhij30K60K90K120K150KMin: 113651.82 / Avg: 113684.79 / Max: 113710.151. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

OpenSSL

OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test profile makes use of the built-in "openssl speed" benchmarking capabilities. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsign/s, More Is BetterOpenSSL 3.1Algorithm: RSA4096efghij3K6K9K12K15KSE +/- 2.48, N = 313789.313782.413813.49773.29755.39754.11. (CC) gcc options: -pthread -m64 -O3 -ldl
OpenBenchmarking.orgsign/s, More Is BetterOpenSSL 3.1Algorithm: RSA4096efghij2K4K6K8K10KMin: 9768.4 / Avg: 9773.23 / Max: 9776.61. (CC) gcc options: -pthread -m64 -O3 -ldl

Stress-NG

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Memory Copyingefghij13002600390052006500SE +/- 0.91, N = 36083.676066.736092.104302.254310.234304.941. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Memory Copyingefghij11002200330044005500Min: 4300.72 / Avg: 4302.25 / Max: 4303.881. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: Futexefhij600K1200K1800K2400K3000KSE +/- 51354.36, N = 122103532.732125283.932977539.722583399.142906460.441. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: Futexefhij500K1000K1500K2000K2500KMin: 2742966.85 / Avg: 2977539.72 / Max: 3316921.511. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread

Timed LLVM Compilation

This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 16.0Build System: Ninjaefhij100200300400500SE +/- 0.23, N = 3335.32336.19472.56472.98473.74
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 16.0Build System: Ninjaefhij80160240320400Min: 472.15 / Avg: 472.55 / Max: 472.95

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPUefhij150300450600750SE +/- 2.34, N = 3508.75509.81700.36702.46713.05MIN: 489.62MIN: 489.85MIN: 667.01MIN: 672.81MIN: 680.591. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPUefhij130260390520650Min: 696.39 / Avg: 700.36 / Max: 704.491. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPUefhij150300450600750SE +/- 0.30, N = 3511.49510.86706.98709.06699.08MIN: 490.8MIN: 492.47MIN: 677.17MIN: 677.09MIN: 668.91. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPUefhij120240360480600Min: 706.41 / Avg: 706.98 / Max: 707.41. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Stress-NG

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Matrix Mathefghij20K40K60K80K100KSE +/- 62.64, N = 3113551.29113258.03113380.2381986.2582002.8881862.661. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Matrix Mathefghij20K40K60K80K100KMin: 81861.1 / Avg: 81986.25 / Max: 82053.891. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread

Timed Node.js Compilation

This test profile times how long it takes to build/compile Node.js itself from source. Node.js is a JavaScript run-time built from the Chrome V8 JavaScript engine while itself is written in C/C++. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Node.js Compilation 19.8.1Time To Compileefhij90180270360450SE +/- 0.10, N = 3295.65295.08406.29406.19405.75
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Node.js Compilation 19.8.1Time To Compileefhij70140210280350Min: 406.1 / Avg: 406.29 / Max: 406.42

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPUefhij150300450600750SE +/- 0.94, N = 3513.74522.15705.25696.92702.08MIN: 495.91MIN: 502.11MIN: 674.1MIN: 668.26MIN: 673.181. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPUefhij120240360480600Min: 703.85 / Avg: 705.25 / Max: 707.041. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 256 - Model: ResNet-50efhij1632486480SE +/- 0.01, N = 372.7272.7853.2653.2253.25
OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 256 - Model: ResNet-50efhij1428425670Min: 53.24 / Avg: 53.26 / Max: 53.28

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPUefhij0.57041.14081.71122.28162.852SE +/- 0.00464, N = 31.881991.861052.533422.527442.53518MIN: 1.65MIN: 1.65MIN: 2.24MIN: 2.31MIN: 2.311. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPUefhij246810Min: 2.53 / Avg: 2.53 / Max: 2.541. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 256 - Model: GoogLeNetefhij50100150200250SE +/- 0.02, N = 3217.05217.01159.46159.34159.48
OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 256 - Model: GoogLeNetefhij4080120160200Min: 159.43 / Avg: 159.46 / Max: 159.5

Memcached

Memcached is a high performance, distributed memory object caching system. This Memcached test profiles makes use of memtier_benchmark for excuting this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOps/sec, More Is BetterMemcached 1.6.19Set To Get Ratio: 1:10efhij900K1800K2700K3600K4500KSE +/- 26057.18, N = 34268751.714165587.243134084.273152408.513161848.431. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.orgOps/sec, More Is BetterMemcached 1.6.19Set To Get Ratio: 1:10efhij700K1400K2100K2800K3500KMin: 3093541.96 / Avg: 3134084.27 / Max: 3182713.341. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web-server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRequests Per Second, More Is Betternginx 1.23.2Connections: 200efhij30K60K90K120K150KSE +/- 10.78, N = 3160594.88160596.18118060.34118216.29118198.091. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
OpenBenchmarking.orgRequests Per Second, More Is Betternginx 1.23.2Connections: 200efhij30K60K90K120K150KMin: 118038.79 / Avg: 118060.34 / Max: 118071.581. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 64 - Model: GoogLeNetefhij50100150200250SE +/- 0.13, N = 3213.19213.21157.24157.41157.38
OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 64 - Model: GoogLeNetefhij4080120160200Min: 156.98 / Avg: 157.24 / Max: 157.39

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 64 - Model: ResNet-50efhij1632486480SE +/- 0.01, N = 371.5071.4952.7752.8252.80
OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 64 - Model: ResNet-50efhij1428425670Min: 52.76 / Avg: 52.77 / Max: 52.79

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPUefhij0.12410.24820.37230.49640.6205SE +/- 0.000311, N = 30.4129850.4087120.5453880.5515040.545626MIN: 0.34MIN: 0.34MIN: 0.45MIN: 0.45MIN: 0.451. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPUefhij246810Min: 0.54 / Avg: 0.55 / Max: 0.551. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPUefhij0.94371.88742.83113.77484.7185SE +/- 0.01380, N = 33.112704.194374.164814.133384.16095MIN: 2.91MIN: 2.98MIN: 4.03MIN: 4.03MIN: 4.031. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPUefhij246810Min: 4.15 / Avg: 4.16 / Max: 4.191. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 512 - Model: ResNet-50efhij1632486480SE +/- 0.00, N = 372.8772.8954.1454.1254.14
OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 512 - Model: ResNet-50efhij1428425670Min: 54.14 / Avg: 54.14 / Max: 54.15

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 512 - Model: GoogLeNetefhij50100150200250SE +/- 0.01, N = 3218.09218.20162.44162.47162.25
OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 512 - Model: GoogLeNetefhij4080120160200Min: 162.42 / Avg: 162.44 / Max: 162.46

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 64 - Model: AlexNetefhij100200300400500SE +/- 0.39, N = 3468.66466.03348.91349.10348.54
OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 64 - Model: AlexNetefhij80160240320400Min: 348.16 / Avg: 348.91 / Max: 349.45

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 32 - Model: ResNet-50efhij1530456075SE +/- 0.01, N = 369.4469.4252.1552.2152.16
OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 32 - Model: ResNet-50efhij1326395265Min: 52.14 / Avg: 52.15 / Max: 52.16

Timed Godot Game Engine Compilation

This test times how long it takes to compile the Godot Game Engine. Godot is a popular, open-source, cross-platform 2D/3D game engine and is built using the SCons build system and targeting the X11 platform. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Godot Game Engine Compilation 4.0Time To Compileefhij50100150200250SE +/- 0.31, N = 3187.58187.05247.79247.87248.55
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Godot Game Engine Compilation 4.0Time To Compileefhij4080120160200Min: 247.41 / Avg: 247.79 / Max: 248.4

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 32 - Model: GoogLeNetefhij4080120160200SE +/- 0.08, N = 3204.46204.59154.60154.92154.99
OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 32 - Model: GoogLeNetefhij4080120160200Min: 154.45 / Avg: 154.6 / Max: 154.68

Timed LLVM Compilation

This test times how long it takes to compile/build the LLVM compiler stack. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 16.0Build System: Unix Makefilesefhij120240360480600SE +/- 3.74, N = 3403.98405.54527.02533.50522.28
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 16.0Build System: Unix Makefilesefhij90180270360450Min: 522.55 / Avg: 527.02 / Max: 534.45

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration (defconfig) for the architecture being tested or alternatively an allmodconfig for building all possible kernel modules for the build. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 6.1Build: defconfigefghij1530456075SE +/- 0.52, N = 353.8353.2153.1667.7568.8168.87
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Linux Kernel Compilation 6.1Build: defconfigefghij1326395265Min: 67.23 / Avg: 67.75 / Max: 68.78

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenCV 4.7Test: Object Detectionefhij7K14K21K28K35KSE +/- 286.60, N = 1530587303472475323632237431. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenCV 4.7Test: Object Detectionefhij5K10K15K20K25KMin: 23597 / Avg: 24753.27 / Max: 274101. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 32 - Model: AlexNetefhij90180270360450SE +/- 0.15, N = 3393.95393.42306.79306.64304.84
OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 32 - Model: AlexNetefhij70140210280350Min: 306.52 / Avg: 306.79 / Max: 307.05

Memcached

Memcached is a high performance, distributed memory object caching system. This Memcached test profiles makes use of memtier_benchmark for excuting this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgOps/sec, More Is BetterMemcached 1.6.19Set To Get Ratio: 1:5efhij900K1800K2700K3600K4500KSE +/- 12477.68, N = 34317194.514295820.033380951.403411591.013343846.781. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.orgOps/sec, More Is BetterMemcached 1.6.19Set To Get Ratio: 1:5efhij700K1400K2100K2800K3500KMin: 3366339.75 / Avg: 3380951.4 / Max: 3405777.271. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: IP Shapes 1D - Data Type: f32 - Engine: CPUefhij0.37740.75481.13221.50961.887SE +/- 0.00645, N = 31.308391.300871.674601.670351.67739MIN: 1.19MIN: 1.17MIN: 1.45MIN: 1.44MIN: 1.451. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: IP Shapes 1D - Data Type: f32 - Engine: CPUefhij246810Min: 1.67 / Avg: 1.67 / Max: 1.691. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

NWChem

NWChem is an open-source high performance computational chemistry package. Per NWChem's documentation, "NWChem aims to provide its users with computational chemistry tools that are scalable both in their ability to treat large scientific computational chemistry problems efficiently, and in their use of available parallel computing resources from high-performance parallel supercomputers to conventional workstation clusters." Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterNWChem 7.0.2Input: C240 Buckyballefhij130026003900520065004764.84753.65851.25938.96087.31. (F9X) gfortran options: -lnwctask -lccsd -lmcscf -lselci -lmp2 -lmoints -lstepper -ldriver -loptim -lnwdft -lgradients -lcphf -lesp -lddscf -ldangchang -lguess -lhessian -lvib -lnwcutil -lrimp2 -lproperty -lsolvation -lnwints -lprepar -lnwmd -lnwpw -lofpw -lpaw -lpspw -lband -lnwpwlib -lcafe -lspace -lanalyze -lqhop -lpfft -ldplot -ldrdy -lvscf -lqmmm -lqmd -letrans -ltce -lbq -lmm -lcons -lperfm -ldntmc -lccca -ldimqm -lga -larmci -lpeigs -l64to32 -lopenblas -lpthread -lrt -llapack -lnwcblas -lmpi_usempif08 -lmpi_mpifh -lmpi -lcomex -lm -m64 -ffast-math -std=legacy -fdefault-integer-8 -finline-functions -O2

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 16 - Model: GoogLeNetefhij4080120160200SE +/- 0.06, N = 3191.59191.96150.35150.41150.68
OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 16 - Model: GoogLeNetefhij4080120160200Min: 150.24 / Avg: 150.35 / Max: 150.44

nginx

This is a benchmark of the lightweight Nginx HTTP(S) web-server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgRequests Per Second, More Is Betternginx 1.23.2Connections: 100efhij30K60K90K120K150KSE +/- 33.69, N = 3151096.19151139.57118589.58118462.79118453.741. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
OpenBenchmarking.orgRequests Per Second, More Is Betternginx 1.23.2Connections: 100efhij30K60K90K120K150KMin: 118551.44 / Avg: 118589.58 / Max: 118656.751. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

Timed FFmpeg Compilation

This test times how long it takes to build the FFmpeg multimedia library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed FFmpeg Compilation 6.0Time To Compileefghij816243240SE +/- 0.03, N = 328.0127.8728.0434.9635.0835.03
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed FFmpeg Compilation 6.0Time To Compileefghij714212835Min: 34.9 / Avg: 34.96 / Max: 35.01

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenCV 4.7Test: Videoefhij2K4K6K8K10KSE +/- 13.02, N = 3840881626799669368131. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenCV 4.7Test: Videoefhij15003000450060007500Min: 6777 / Avg: 6798.67 / Max: 68221. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 16 - Model: ResNet-50efhij1428425670SE +/- 0.02, N = 362.6362.8450.5250.6450.54
OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 16 - Model: ResNet-50efhij1224364860Min: 50.49 / Avg: 50.52 / Max: 50.55

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenCV 4.7Test: Image Processingefhij30K60K90K120K150KSE +/- 916.27, N = 31236781226561046751007661040331. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenCV 4.7Test: Image Processingefhij20K40K60K80K100KMin: 103440 / Avg: 104675 / Max: 1064651. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPUefhij1.02122.04243.06364.08485.106SE +/- 0.00329, N = 34.538534.312283.723473.723803.72002MIN: 3.54MIN: 3.56MIN: 3.03MIN: 3.04MIN: 3.11. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPUefhij246810Min: 3.72 / Avg: 3.72 / Max: 3.731. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 16 - Model: AlexNetefhij70140210280350SE +/- 0.29, N = 3301.94302.00248.81248.15248.49
OpenBenchmarking.orgimages/sec, More Is BetterTensorFlow 2.12Device: CPU - Batch Size: 16 - Model: AlexNetefhij50100150200250Min: 248.43 / Avg: 248.81 / Max: 249.37

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenCV 4.7Test: Stitchingefhij50K100K150K200K250KSE +/- 801.57, N = 32417482396832023492030552005361. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenCV 4.7Test: Stitchingefhij40K80K120K160K200KMin: 201104 / Avg: 202348.67 / Max: 2038461. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPUefhij0.14510.29020.43530.58040.7255SE +/- 0.006251, N = 30.5363280.5603920.6338000.6451010.624801MIN: 0.48MIN: 0.51MIN: 0.57MIN: 0.59MIN: 0.571. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPUefhij246810Min: 0.63 / Avg: 0.63 / Max: 0.651. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Stress-NG

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: Mutexefhij200K400K600K800K1000KSE +/- 265.04, N = 3969776.95972025.681165638.381166455.131165016.201. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.06Test: Mutexefhij200K400K600K800K1000KMin: 1165128.82 / Avg: 1165638.38 / Max: 1166019.681. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lz -pthread

OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Mutexefghij200K400K600K800K1000KSE +/- 196.52, N = 3969913.81969633.43969577.511165454.031165955.271165479.961. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread
OpenBenchmarking.orgBogo Ops/s, More Is BetterStress-NG 0.15.04Test: Mutexefghij200K400K600K800K1000KMin: 1165225.07 / Avg: 1165454.03 / Max: 1165845.171. (CC) gcc options: -std=gnu99 -O2 -lm -lc -lcrypt -ldl -lEGL -lGLESv2 -lrt -lz -pthread

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenCV 4.7Test: Features 2Defhij14K28K42K56K70KSE +/- 672.84, N = 363156617925268954913528781. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared
OpenBenchmarking.orgms, Fewer Is BetterOpenCV 4.7Test: Features 2Defhij11K22K33K44K55KMin: 52010 / Avg: 52689.33 / Max: 540351. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPUefhij0.11140.22280.33420.44560.557SE +/- 0.000235, N = 30.4147750.4147560.4945560.4934670.495215MIN: 0.35MIN: 0.36MIN: 0.42MIN: 0.42MIN: 0.421. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 3.1Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPUefhij246810Min: 0.49 / Avg: 0.49 / Max: 0.51. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenCV

This is a benchmark of the OpenCV (Computer Vision) library's built-in performance tests. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterOpenCV 4.7Test: DNN - Deep Neural Networkefhij6K12K18K24K30KSE +/- 264.62, N = 529168292882565624592267081. (CXX) g++ options: -fPIC -fsigned-char -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse -msse2 -msse3 -fvisibility=hidden -O3 -shared