5800x3d smoke okt

AMD Ryzen 7 5800X3D 8-Core testing with an ASUS ROG CROSSHAIR VIII HERO (4201 BIOS) and Intel DG2 8GB on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2210143-PTS-5800X3DS35

Result Identifier    Date Run           Test Duration
A                    October 13 2022    6 Hours, 8 Minutes
B                    October 13 2022    2 Hours, 49 Minutes
C                    October 13 2022    1 Hour, 27 Minutes
D                    October 13 2022    6 Hours, 2 Minutes
Average                                 4 Hours, 7 Minutes



5800x3d Smoke Okt Benchmarks (OpenBenchmarking.org / Phoronix Test Suite)

Processor: AMD Ryzen 7 5800X3D 8-Core @ 3.40GHz (8 Cores / 16 Threads)
Motherboard: ASUS ROG CROSSHAIR VIII HERO (4201 BIOS)
Chipset: AMD Starship/Matisse
Memory: 32GB
Disk: 1000GB Western Digital WDS100T1X0E-00AFY0 + 2000GB
Graphics: Intel DG2 8GB (2400MHz)
Audio: Intel Device 4f90
Monitor: ASUS VP28U
Network: Realtek RTL8125 2.5GbE + Intel I211
OS: Ubuntu 22.04
Kernel: 5.15.47+prerelease3723 (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server 1.21.1.3 + Wayland
OpenGL: 4.6 Mesa 22.2.0-devel (git-44289c46d9)
Vulkan: 1.3.219
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 3840x2160

System Logs
- Transparent Huge Pages: madvise
- Compiler configuration (GCC 11.2.0): --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq performance (Boost: Enabled)
- CPU Microcode: 0xa20120a
- Python 3.10.4
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite): relative performance of runs A-D across oneDNN, PostgreSQL, OpenRadioss, spaCy, Y-Cruncher, SMHasher, QuadRay, and AOM AV1 (chart scale 100% to 102%).

5800x3d smoke okt: condensed table of the full result set for runs A-D (QuadRay, AOM AV1, TensorFlow, Neural Magic DeepSparse, SMHasher, spaCy, PostgreSQL, oneDNN, Y-Cruncher, OpenRadioss). The individual results are charted per test below and indexed at the end of this file.

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 (FPS, More Is Better)
  Scene: 1 - Resolution: 4K:  B 8.43 / A 8.42 / D 8.40 / C 8.38
  Scene: 2 - Resolution: 4K:  C 2.31 / B 2.31 / D 2.30 / A 2.30
  Scene: 3 - Resolution: 4K:  D 1.98 / B 1.98 / C 1.97 / A 1.96
  Scene: 5 - Resolution: 4K:  D 0.53 / C 0.53 / B 0.53 / A 0.53
  Scene: 1 - Resolution: 1080p:  B 32.77 / C 32.66 / A 32.62 / D 32.60
  Scene: 2 - Resolution: 1080p:  B 9.00 / D 8.97 / C 8.90 / A 8.80
  Scene: 3 - Resolution: 1080p:  B 7.75 / A 7.70 / D 7.67 / C 7.62
  Scene: 5 - Resolution: 1080p:  D 2.14 / C 2.14 / B 2.14 / A 2.14
  (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 (Frames Per Second, More Is Better)
  Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K:  D 0.21 / C 0.21 / B 0.21 / A 0.21
  Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K:  B 7.69 / D 7.67 / C 7.66 / A 7.61
  Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K:  B 36.61 / D 36.52 / C 36.36 / A 36.34
  Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K:  B 14.67 / D 14.53 / C 14.52 / A 14.43
  Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K:  A 58.95 / B 58.84 / C 58.18 / D 58.14
  Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K:  B 81.58 / C 81.46 / D 81.04 / A 80.78
  Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K:  B 85.07 / D 84.35 / A 84.14 / C 83.56
  Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p:  D 0.64 / C 0.64 / B 0.64 / A 0.64
  Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p:  C 17.73 / B 17.71 / A 17.71 / D 17.59
  Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p:  A 74.81 / C 74.15 / B 73.60 / D 73.14
  Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p:  D 45.05 / B 44.99 / C 44.92 / A 44.51
  Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p:  A 156.10 / D 155.73 / B 155.18 / C 154.69
  Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p:  A 189.92 / B 189.67 / D 189.14 / C 188.95
  Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p:  B 200.46 / A 199.11 / D 198.99 / C 194.65
  (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also provides pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.
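
The figures below are reported as images/sec at a given batch size. As a rough illustration of how such a number can be measured, here is a minimal sketch with tf.keras; it is not the tf_cnn_benchmarks harness the test profile actually runs, and the batch size is just one of the values covered below:

    import time
    import numpy as np
    import tensorflow as tf

    batch_size = 64                                        # one of the batch sizes tested below
    model = tf.keras.applications.ResNet50(weights=None)   # random weights are fine for throughput
    images = np.random.rand(batch_size, 224, 224, 3).astype("float32")

    model.predict(images, verbose=0)                       # warm-up: graph tracing and allocations

    runs = 10
    start = time.perf_counter()
    for _ in range(runs):
        model.predict(images, verbose=0)
    elapsed = time.perf_counter() - start
    print(f"{runs * batch_size / elapsed:.2f} images/sec")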

TensorFlow 2.10 (images/sec, More Is Better)
  Device: CPU - Batch Size: 16 - Model: VGG-16:  D 5.55 / B 5.55 / A 5.55
  Device: CPU - Batch Size: 32 - Model: VGG-16:  B 5.82 / D 5.81 / A 5.81
  Device: CPU - Batch Size: 64 - Model: VGG-16:  B 5.95 / A 5.94 / D 5.93
  Device: CPU - Batch Size: 16 - Model: AlexNet:  B 68.45 / A 68.32 / D 68.16
  Device: CPU - Batch Size: 256 - Model: VGG-16:  D 5.88 / A 5.86
  Device: CPU - Batch Size: 32 - Model: AlexNet:  D 91.54 / A 91.50
  Device: CPU - Batch Size: 512 - Model: VGG-16:  A and D quit with a non-zero exit status (E: Fatal Python error: Aborted)
  Device: CPU - Batch Size: 64 - Model: AlexNet:  D 109.92 / A 109.77
  Device: CPU - Batch Size: 256 - Model: AlexNet:  D 123.47 / A 122.89
  Device: CPU - Batch Size: 512 - Model: AlexNet:  D 124.44 / A 123.66
  Device: CPU - Batch Size: 16 - Model: GoogLeNet:  D 41.39 / A 41.31
  Device: CPU - Batch Size: 16 - Model: ResNet-50:  A 14.55 / D 14.49
  Device: CPU - Batch Size: 32 - Model: GoogLeNet:  A 39.37 / D 39.35
  Device: CPU - Batch Size: 32 - Model: ResNet-50:  D 13.58 / A 13.53
  Device: CPU - Batch Size: 64 - Model: GoogLeNet:  D 37.85 / A 37.81
  Device: CPU - Batch Size: 64 - Model: ResNet-50:  D 12.67 / A 12.65
  Device: CPU - Batch Size: 256 - Model: GoogLeNet:  D 36.38 / A 36.26
  Device: CPU - Batch Size: 256 - Model: ResNet-50:  A 11.87 / D 11.86
  Device: CPU - Batch Size: 512 - Model: GoogLeNet:  A 36.02 / D 35.96
  Device: CPU - Batch Size: 512 - Model: ResNet-50:  A and D quit with a non-zero exit status

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 (items/sec, More Is Better)
  Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream:  D 8.3401 / A 8.2900 / B 8.2282
  Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream:  D 8.0834 / A 8.0673 / B 8.0362
  Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream:  D 34.33 / A 34.18 / B 34.05
  Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream:  A 30.05 / B 30.02 / D 30.00
  Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream:  D 46.51 / B 46.08 / A 45.91
  Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream:  B 45.05 / D 44.99 / A 44.98
  Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream:  A 95.07 / B 95.07 / D 95.06
  Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream:  A 84.57 / D 84.53 / B 84.50
  Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream:  D 71.79 / A 71.73 / B 71.64
  Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream:  B 60.85 / D 60.52 / A 60.26
  Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream:  A 35.25 / D 35.23 / B 34.99
  Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream:  D 30.16 / A 30.11 / B 30.08
  Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream:  D 8.3717 / A 8.3235 / B 8.3116
  Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream:  A 8.1277 / D 8.0822 / B 8.0704

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.
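
The MiB/sec figures below are bulk hashing throughput. A rough Python analogue of the metric, shown here only to illustrate what is being measured (SMHasher itself is C++ and also tests hash quality, small-key speed, and more), is to hash a large buffer once and divide its size by the elapsed time:

    import hashlib
    import time

    buf = b"\x00" * (64 * 1024 * 1024)        # 64 MiB of input
    start = time.perf_counter()
    hashlib.sha3_256(buf).digest()            # one bulk pass over the buffer
    elapsed = time.perf_counter() - start
    print(f"SHA3-256: {len(buf) / (1024 * 1024) / elapsed:.1f} MiB/sec")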

SMHasher 2022-08-22 (MiB/sec, More Is Better)
  Hash: wyhash:  A 26419.11 / B 26334.38 / C 26302.30 / D 25891.34
  Hash: SHA3-256:  B 196.15 / D 194.55 / A 191.23 / C 183.14
  Hash: Spooky32:  D 19265.23 / B 19233.26 / A 19181.60 / C 19034.70
  Hash: fasthash32:  B 7602.61 / D 7589.92 / C 7573.13 / A 7536.68
  Hash: FarmHash128:  B 17506.62 / D 17426.17 / A 17295.31 / C 17175.32
  Hash: t1ha2_atonce:  C 20051.08 / A 19795.33 / B 19752.83 / D 19749.24
  Hash: FarmHash32 x86_64 AVX:  D 34051.80 / B 34042.82 / C 33899.05 / A 33756.46
  Hash: t1ha0_aes_avx2 x86_64:  A 77605.07 / D 75226.23 / C 74795.04 / B 73947.90
  Hash: MeowHash x86_64 AES-NI:  A 45994.36 / C 45844.24 / B 45752.01 / D 45447.31
  (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

spaCy

spaCy is an open-source Python library for advanced natural language processing (NLP). This test profile times spaCy CPU performance with various models. Learn more via the OpenBenchmarking.org test page.
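
The tokens/sec metric can be approximated with a few lines of spaCy code. This is only a sketch, assuming the en_core_web_lg model has already been downloaded (python -m spacy download en_core_web_lg); it is not the exact timing harness the test profile uses:

    import time
    import spacy

    nlp = spacy.load("en_core_web_lg")
    text = "The quick brown fox jumps over the lazy dog. " * 2000

    start = time.perf_counter()
    doc = nlp(text)                       # tokenization plus the full pipeline
    elapsed = time.perf_counter() - start
    print(f"{len(doc) / elapsed:.0f} tokens/sec")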

spaCy 3.4.1 (tokens/sec, More Is Better)
  Model: en_core_web_lg:  D 15424 / A 15416 / C 15401 / B 15357
  Model: en_core_web_trf:  C 756 / B 748 / D 747 / A 740

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
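
pgbench drives a number of client connections against a SELECT-only script ("Read Only") or a TPC-B-like transaction mix ("Read Write") and reports transactions per second. As a minimal single-client read-only sketch in Python for illustration only (the numbers below come from pgbench itself; psycopg2, a running local server, and the database name here are all assumptions):

    import time
    import psycopg2

    conn = psycopg2.connect("dbname=pgbench_test")   # hypothetical local database
    conn.autocommit = True                           # one transaction per statement
    cur = conn.cursor()

    txns = 10000
    start = time.perf_counter()
    for _ in range(txns):
        cur.execute("SELECT 1;")
        cur.fetchone()
    elapsed = time.perf_counter() - start
    print(f"{txns / elapsed:.0f} TPS, {1000 * elapsed / txns:.3f} ms average latency")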

PostgreSQL 15 (TPS, More Is Better)
  Scaling Factor: 1 - Clients: 1 - Mode: Read Only:  A 37199 / D 35777 / C 35743 / B 35381
  Scaling Factor: 1 - Clients: 1 - Mode: Read Write:  A 2507 / B 2488 / C 2486 / D 2484
  Scaling Factor: 1 - Clients: 50 - Mode: Read Only:  D 314549 / B 311138 / A 310942 / C 308393
  Scaling Factor: 1 - Clients: 100 - Mode: Read Only:  D 310891 / B 309250 / C 307586 / A 307562
  Scaling Factor: 1 - Clients: 50 - Mode: Read Write:  C 3138 / D 3133 / B 3131 / A 3127
  Scaling Factor: 100 - Clients: 1 - Mode: Read Only:  D 35562 / C 34948 / A 34362 / B 33875
  Scaling Factor: 1 - Clients: 100 - Mode: Read Write:  D 2873 / A 2861 / B 2856
  Scaling Factor: 100 - Clients: 1 - Mode: Read Write:  B 2400 / D 2394 / A 2378
  Scaling Factor: 100 - Clients: 50 - Mode: Read Only:  D 305622 / B 301135 / A 299777
  Scaling Factor: 100 - Clients: 100 - Mode: Read Only:  D 300952 / B 299533 / A 297424
  Scaling Factor: 100 - Clients: 50 - Mode: Read Write:  A 38140 / D 38108 / B 37994
  Scaling Factor: 100 - Clients: 100 - Mode: Read Write:  D 39700 / B 39365 / A 38997
  (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

oneDNN

oneDNN 2.7 (ms, Fewer Is Better)
  Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU:  B 2.91481 (MIN: 2.87) / C 2.91791 (MIN: 2.86) / D 2.93008 (MIN: 2.87) / A 2.93636 (MIN: 2.88)
  Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU:  A 6.73091 (MIN: 6.51) / B 6.80659 (MIN: 6.65) / C 7.43205 (MIN: 7.34) / D 7.50774 (MIN: 7.38)
  Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU:  B 1.24604 (MIN: 1.23) / C 1.24858 (MIN: 1.23) / A 1.24936 (MIN: 1.23) / D 1.24944 (MIN: 1.23)
  Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU:  A 0.608646 (MIN: 0.59) / B 0.610055 (MIN: 0.59) / C 0.610944 (MIN: 0.6) / D 0.613447 (MIN: 0.6)
  All oneDNN results: (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

D: The test run did not produce a result.

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

D: The test run did not produce a result.

oneDNN 2.7 (ms, Fewer Is Better)
  Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU:  A 12.79 (MIN: 12.55) / B 12.87 (MIN: 12.51) / C 13.24 (MIN: 13.1) / D 13.35 (MIN: 13.18)
  Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU:  B 6.83968 (MIN: 5.09) / D 7.10311 (MIN: 5.11) / A 7.23318 (MIN: 5.1) / C 7.37916 (MIN: 5.1)
  Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU:  B 5.59992 (MIN: 5.5) / D 5.61171 (MIN: 5.46) / A 5.61357 (MIN: 5.45) / C 5.61586 (MIN: 5.46)
  Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU:  B 10.67 (MIN: 10.47) / C 11.18 (MIN: 10.98) / D 11.37 (MIN: 11.19) / A 11.89 (MIN: 10.6)
  Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU:  B 1.79390 (MIN: 1.77) / D 1.79717 (MIN: 1.77) / C 1.79785 (MIN: 1.77) / A 1.79935 (MIN: 1.77)
  Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU:  B 2.48677 (MIN: 2.44) / A 2.52002 (MIN: 2.46) / D 2.53624 (MIN: 2.48) / C 2.54366 (MIN: 2.49)
  Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU:  B 2701.46 (MIN: 2690.63) / C 2711.47 (MIN: 2702.18) / A 2717.67 (MIN: 2708.62) / D 2726.78 (MIN: 2706.89)
  Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU:  B 1389.29 (MIN: 1380.09) / C 1391.77 (MIN: 1385.84) / D 1393.37 (MIN: 1387.32) / A 1395.86 (MIN: 1389.6)
  Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU:  B 2715.72 (MIN: 2708.81) / C 2719.71 (MIN: 2712.91) / D 2722.90 (MIN: 2715.56) / A 2724.50 (MIN: 2717.11)

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

D: The test run did not produce a result.

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

D: The test run did not produce a result.

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

D: The test run did not produce a result.

oneDNN 2.7 (ms, Fewer Is Better)
  Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU:  B 1385.11 (MIN: 1379.59) / C 1389.43 (MIN: 1383.24) / D 1390.76 (MIN: 1384.5) / A 1392.31 (MIN: 1386.61)
  Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU:  B 1.07920 (MIN: 1.04) / C 1.08253 (MIN: 1.05) / A 1.08635 (MIN: 1.04) / D 1.08663 (MIN: 1.05)
  Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU:  B 2707.34 (MIN: 2699.45) / C 2717.02 (MIN: 2710.9) / D 2721.31 (MIN: 2713.72) / A 2724.19 (MIN: 2717.07)
  Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU:  B 1388.94 (MIN: 1383.27) / D 1393.64 (MIN: 1387.06) / C 1393.85 (MIN: 1387.96) / A 1400.14 (MIN: 1392.75)
  Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU:  C 0.840243 (MIN: 0.8) / B 0.841664 (MIN: 0.81) / D 0.844385 (MIN: 0.81) / A 0.847434 (MIN: 0.81)

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

D: The test run did not produce a result.

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
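
The average-latency figures below line up with latency being roughly clients divided by TPS. A quick arithmetic check against the Scaling Factor 100, 100-client read-only TPS results charted earlier:

    clients = 100
    tps = {"D": 300952, "B": 299533, "A": 297424}    # from the read-only TPS charts above

    for run, value in tps.items():
        print(run, f"{1000 * clients / value:.3f} ms")   # prints about 0.332, 0.334, 0.336 ms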

PostgreSQL 15 (ms, Fewer Is Better)
  Scaling Factor: 1 - Clients: 1 - Mode: Read Only - Average Latency:  A 0.027 / B 0.028 / C 0.028 / D 0.028
  Scaling Factor: 1 - Clients: 1 - Mode: Read Write - Average Latency:  A 0.399 / B 0.402 / C 0.402 / D 0.403
  Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average Latency:  D 0.159 / A 0.161 / B 0.161 / C 0.162
  Scaling Factor: 1 - Clients: 100 - Mode: Read Only - Average Latency:  D 0.322 / B 0.323 / A 0.325 / C 0.325
  Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average Latency:  C 15.93 / D 15.96 / B 15.97 / A 15.99
  Scaling Factor: 100 - Clients: 1 - Mode: Read Only - Average Latency:  D 0.028 / A 0.029 / C 0.029 / B 0.030
  Scaling Factor: 1 - Clients: 100 - Mode: Read Write - Average Latency:  D 34.81 / A 34.95 / B 35.02
  Scaling Factor: 100 - Clients: 1 - Mode: Read Write - Average Latency:  B 0.417 / D 0.418 / A 0.421
  Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average Latency:  D 0.164 / B 0.166 / A 0.167
  Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency:  D 0.332 / B 0.334 / A 0.336
  Scaling Factor: 100 - Clients: 50 - Mode: Read Write - Average Latency:  A 1.311 / D 1.312 / B 1.316
  Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency:  D 2.519 / B 2.540 / A 2.564
  (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 (ms/batch, Fewer Is Better)
  Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream:  D 478.28 / A 482.49 / B 484.97
  Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream:  D 123.70 / A 123.95 / B 124.43
  Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream:  D 116.50 / A 117.01 / B 117.46
  Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream:  A 33.27 / B 33.30 / D 33.33
  Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream:  D 85.93 / B 86.78 / A 87.11
  Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream:  B 22.19 / D 22.22 / A 22.23
  Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream:  D 42.06 / A 42.06 / B 42.06
  Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream:  A 11.82 / D 11.83 / B 11.83
  Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream:  D 55.70 / A 55.75 / B 55.82
  Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream:  B 16.43 / D 16.52 / A 16.59
  Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream:  A 113.46 / D 113.52 / B 114.29
  Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream:  D 33.15 / A 33.20 / B 33.24
  Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream:  D 476.77 / A 478.97 / B 480.44
  Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream:  A 123.03 / D 123.72 / B 123.90

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.
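
For a sense of the workload, an arbitrary-precision library such as mpmath can compute Pi to a chosen number of digits. This is only a tiny, hedged illustration; Y-Cruncher itself is a heavily optimized native binary, and the runs below compute 500 million and 1 billion digits:

    import time
    from mpmath import mp, nstr

    mp.dps = 100_000                      # 100k decimal digits, far below the 500M/1B runs below
    start = time.perf_counter()
    digits = nstr(mp.pi, mp.dps)          # force evaluation of Pi at the current precision
    print(f"computed {mp.dps} digits in {time.perf_counter() - start:.2f} s")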

Y-Cruncher 0.7.10.9513 (Seconds, Fewer Is Better)
  Pi Digits To Calculate: 1B:  B 39.03 / C 39.05 / A 39.19 / D 39.24
  Pi Digits To Calculate: 500M:  B 17.61 / C 17.70 / A 17.74 / D 17.83

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis, based on Altair Radioss and open-sourced in 2022. It is benchmarked here with various example models available from https://www.openradioss.org/models/. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 (Seconds, Fewer Is Better)
  Model: Bumper Beam:  A 144.69 / D 145.41 / B 145.55 / C 149.29
  Model: Cell Phone Drop Test:  C 100.11 / B 100.26 / A 100.35 / D 103.13
  Model: Bird Strike on Windshield:  A 279.30 / C 279.49 / D 280.05 / B 291.67
  Model: Rubber O-Ring Seal Installation:  D 145.56 / A 146.06 / B 146.45 / C 146.60
  Model: INIVOL and Fluid Structure Interaction Drop Container:  C 568.90 / A 569.41 / D 571.12 / B 571.31

128 Results Shown

QuadRay:
  1 - 4K
  2 - 4K
  3 - 4K
  5 - 4K
  1 - 1080p
  2 - 1080p
  3 - 1080p
  5 - 1080p
AOM AV1:
  Speed 0 Two-Pass - Bosphorus 4K
  Speed 4 Two-Pass - Bosphorus 4K
  Speed 6 Realtime - Bosphorus 4K
  Speed 6 Two-Pass - Bosphorus 4K
  Speed 8 Realtime - Bosphorus 4K
  Speed 9 Realtime - Bosphorus 4K
  Speed 10 Realtime - Bosphorus 4K
  Speed 0 Two-Pass - Bosphorus 1080p
  Speed 4 Two-Pass - Bosphorus 1080p
  Speed 6 Realtime - Bosphorus 1080p
  Speed 6 Two-Pass - Bosphorus 1080p
  Speed 8 Realtime - Bosphorus 1080p
  Speed 9 Realtime - Bosphorus 1080p
  Speed 10 Realtime - Bosphorus 1080p
TensorFlow:
  CPU - 16 - VGG-16
  CPU - 32 - VGG-16
  CPU - 64 - VGG-16
  CPU - 16 - AlexNet
  CPU - 256 - VGG-16
  CPU - 32 - AlexNet
  CPU - 64 - AlexNet
  CPU - 256 - AlexNet
  CPU - 512 - AlexNet
  CPU - 16 - GoogLeNet
  CPU - 16 - ResNet-50
  CPU - 32 - GoogLeNet
  CPU - 32 - ResNet-50
  CPU - 64 - GoogLeNet
  CPU - 64 - ResNet-50
  CPU - 256 - GoogLeNet
  CPU - 256 - ResNet-50
  CPU - 512 - GoogLeNet
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream
  CV Detection,YOLOv5s COCO - Asynchronous Multi-Stream
  CV Detection,YOLOv5s COCO - Synchronous Single-Stream
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream
SMHasher:
  wyhash
  SHA3-256
  Spooky32
  fasthash32
  FarmHash128
  t1ha2_atonce
  FarmHash32 x86_64 AVX
  t1ha0_aes_avx2 x86_64
  MeowHash x86_64 AES-NI
spaCy:
  en_core_web_lg
  en_core_web_trf
PostgreSQL:
  1 - 1 - Read Only
  1 - 1 - Read Write
  1 - 50 - Read Only
  1 - 100 - Read Only
  1 - 50 - Read Write
  100 - 1 - Read Only
  1 - 100 - Read Write
  100 - 1 - Read Write
  100 - 50 - Read Only
  100 - 100 - Read Only
  100 - 50 - Read Write
  100 - 100 - Read Write
oneDNN:
  IP Shapes 1D - f32 - CPU
  IP Shapes 3D - f32 - CPU
  IP Shapes 1D - u8s8f32 - CPU
  IP Shapes 3D - u8s8f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
  Deconvolution Batch shapes_1d - f32 - CPU
  Deconvolution Batch shapes_3d - f32 - CPU
  Convolution Batch Shapes Auto - u8s8f32 - CPU
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
  Recurrent Neural Network Training - u8s8f32 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
PostgreSQL:
  1 - 1 - Read Only - Average Latency
  1 - 1 - Read Write - Average Latency
  1 - 50 - Read Only - Average Latency
  1 - 100 - Read Only - Average Latency
  1 - 50 - Read Write - Average Latency
  100 - 1 - Read Only - Average Latency
  1 - 100 - Read Write - Average Latency
  100 - 1 - Read Write - Average Latency
  100 - 50 - Read Only - Average Latency
  100 - 100 - Read Only - Average Latency
  100 - 50 - Read Write - Average Latency
  100 - 100 - Read Write - Average Latency
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream
  CV Detection,YOLOv5s COCO - Asynchronous Multi-Stream
  CV Detection,YOLOv5s COCO - Synchronous Single-Stream
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream
  CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream
Y-Cruncher:
  1B
  500M
OpenRadioss:
  Bumper Beam
  Cell Phone Drop Test
  Bird Strike on Windshield
  Rubber O-Ring Seal Installation
  INIVOL and Fluid Structure Interaction Drop Container