5800x3d smoke okt

AMD Ryzen 7 5800X3D 8-Core testing with an ASUS ROG CROSSHAIR VIII HERO (4201 BIOS) and Intel DG2 8GB on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2210143-PTS-5800X3DS35
Result Identifier - Date Run - Test Duration:
A - October 13 2022 - 6 Hours, 8 Minutes
B - October 13 2022 - 2 Hours, 49 Minutes
C - October 13 2022 - 1 Hour, 27 Minutes
D - October 13 2022 - 6 Hours, 2 Minutes
Average Test Duration: 4 Hours, 7 Minutes



5800x3d Smoke Okt Benchmarks

Processor: AMD Ryzen 7 5800X3D 8-Core @ 3.40GHz (8 Cores / 16 Threads)
Motherboard: ASUS ROG CROSSHAIR VIII HERO (4201 BIOS)
Chipset: AMD Starship/Matisse
Memory: 32GB
Disk: 1000GB Western Digital WDS100T1X0E-00AFY0 + 2000GB
Graphics: Intel DG2 8GB (2400MHz)
Audio: Intel Device 4f90
Monitor: ASUS VP28U
Network: Realtek RTL8125 2.5GbE + Intel I211
OS: Ubuntu 22.04
Kernel: 5.15.47+prerelease3723 (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server 1.21.1.3 + Wayland
OpenGL: 4.6 Mesa 22.2.0-devel (git-44289c46d9)
Vulkan: 1.3.219
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 3840x2160

System Logs:
- Transparent Huge Pages: madvise
- GCC configure options: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: acpi-cpufreq performance (Boost: Enabled)
- CPU Microcode: 0xa20120a
- Python 3.10.4
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite / OpenBenchmarking.org): geometric means for runs A, B, C and D, normalized per test suite (oneDNN, PostgreSQL, OpenRadioss, spaCy, Y-Cruncher, SMHasher, QuadRay, AOM AV1); the chart axis spans 100% to 102%.

Condensed results table: per-test values for runs A, B, C and D across all benchmarks in this comparison; the same results are presented individually in the graphs below.

oneDNN

oneDNN 2.7 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better): D: 7.50774 (MIN: 7.38), C: 7.43205 (MIN: 7.34), B: 6.80659 (MIN: 6.65), A: 6.73091 (MIN: 6.51). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 2.7 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): D: 11.37 (MIN: 11.19), C: 11.18 (MIN: 10.98), B: 10.67 (MIN: 10.47), A: 11.89 (MIN: 10.6). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 2.7 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better): D: 7.10311 (MIN: 5.11), C: 7.37916 (MIN: 5.1), B: 6.83968 (MIN: 5.09), A: 7.23318 (MIN: 5.1). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
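As a point of reference, the sketch below (in Python) shows roughly how the pgbench parameters used in these results - scaling factor, client count, and read-only versus read-write mode - map onto a pgbench invocation. The database name and run duration are illustrative assumptions, not values taken from this result file.

import subprocess

DB = "pgbench_test"  # hypothetical database name

# Initialize the pgbench tables at scaling factor 100
# (matching the "Scaling Factor: 100" results below).
subprocess.run(["pgbench", "-i", "-s", "100", DB], check=True)

# Read-only run with 50 clients for 60 seconds; -S selects the built-in
# SELECT-only script, which corresponds to "Mode: Read Only".
result = subprocess.run(
    ["pgbench", "-c", "50", "-j", "8", "-S", "-T", "60", DB],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # pgbench prints TPS and average latency here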

PostgreSQL 15 - Scaling Factor: 100 - Clients: 1 - Mode: Read Only - Average Latency (ms, fewer is better): D: 0.028, C: 0.029, B: 0.030, A: 0.029. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.
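The MiB/sec figures below are plain throughput numbers: bytes hashed divided by elapsed time. As a rough illustration only (this is not SMHasher itself, which benchmarks its own C/C++ hash implementations), a small Python sketch using hashlib's SHA3-256:

import hashlib
import time

buf = b"\x00" * (1 << 20)   # 1 MiB buffer
iterations = 256            # hash 256 MiB in total

start = time.perf_counter()
for _ in range(iterations):
    hashlib.sha3_256(buf).digest()
elapsed = time.perf_counter() - start

mib_per_sec = iterations * len(buf) / (1 << 20) / elapsed
print(f"SHA3-256 throughput: {mib_per_sec:.1f} MiB/sec")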

SMHasher 2022-08-22 - Hash: SHA3-256 (MiB/sec, more is better): D: 194.55, C: 183.14, B: 196.15, A: 191.23. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 15 - Scaling Factor: 1 - Clients: 1 - Mode: Read Only (TPS, more is better): D: 35777, C: 35743, B: 35381, A: 37199. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL 15 - Scaling Factor: 100 - Clients: 1 - Mode: Read Only (TPS, more is better): D: 35562, C: 34948, B: 33875, A: 34362. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.

SMHasher 2022-08-22 - Hash: t1ha0_aes_avx2 x86_64 (MiB/sec, more is better): D: 75226.23, C: 74795.04, B: 73947.90, A: 77605.07. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Bird Strike on Windshield (Seconds, fewer is better): D: 280.05, C: 279.49, B: 291.67, A: 279.30

oneDNN

oneDNN 2.7 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better): D: 13.35 (MIN: 13.18), C: 13.24 (MIN: 13.1), B: 12.87 (MIN: 12.51), A: 12.79 (MIN: 12.55). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 15 - Scaling Factor: 1 - Clients: 1 - Mode: Read Only - Average Latency (ms, fewer is better): D: 0.028, C: 0.028, B: 0.028, A: 0.027. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Bumper Beam (Seconds, fewer is better): D: 145.41, C: 149.29, B: 145.55, A: 144.69

OpenRadioss 2022.10.13 - Model: Cell Phone Drop Test (Seconds, fewer is better): D: 103.13, C: 100.11, B: 100.26, A: 100.35

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.
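For orientation, the sketch below shows roughly how an encoder mode such as "Speed 10 Realtime" is exercised with libaom's aomenc tool. The input file name is an assumption (the test profile uses the Bosphorus sample clips), and the exact flag set accepted can vary between libaom releases.

import subprocess

INPUT = "Bosphorus_1920x1080.y4m"  # hypothetical input clip

subprocess.run(
    [
        "aomenc",
        "--passes=1",      # realtime encoding is single-pass
        "--cpu-used=10",   # the "Speed N" value reported in these results
        "--threads=16",
        # Depending on the libaom version, a realtime usage option may also
        # be required for cpu-used values above 6.
        "-o", "output.ivf",
        INPUT,
    ],
    check=True,
)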

AOM AV1 3.5 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better): D: 198.99, C: 194.65, B: 200.46, A: 199.11. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

oneDNN

oneDNN 2.7 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): D: 2.53624 (MIN: 2.48), C: 2.54366 (MIN: 2.49), B: 2.48677 (MIN: 2.44), A: 2.52002 (MIN: 2.46). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better): D: 73.14, C: 74.15, B: 73.60, A: 74.81. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 - Scene: 2 - Resolution: 1080p (FPS, more is better): D: 8.97, C: 8.90, B: 9.00, A: 8.80. 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

spaCy

The spaCy library is an open-source solution for advanced natural language processing (NLP). spaCy leverages Python and is a leading NLP solution. This test profile times the spaCy CPU performance with various models. Learn more via the OpenBenchmarking.org test page.
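A minimal sketch, in the spirit of this test profile, of timing tokens per second with spaCy. It assumes spaCy and the en_core_web_lg model are installed (python -m spacy download en_core_web_lg); the sample text is illustrative.

import time
import spacy

nlp = spacy.load("en_core_web_lg")
text = "The quick brown fox jumps over the lazy dog. " * 1000

start = time.perf_counter()
doc = nlp(text)                 # run the full pipeline over the text
elapsed = time.perf_counter() - start

print(f"{len(doc) / elapsed:.0f} tokens/sec")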

spaCy 3.4.1 - Model: en_core_web_trf (tokens/sec, more is better): D: 747, C: 756, B: 748, A: 740

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.

SMHasher 2022-08-22 - Hash: wyhash (MiB/sec, more is better): D: 25891.34, C: 26302.30, B: 26334.38, A: 26419.11. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 15 - Scaling Factor: 1 - Clients: 50 - Mode: Read Only (TPS, more is better): D: 314549, C: 308393, B: 311138, A: 310942. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL 15 - Scaling Factor: 100 - Clients: 50 - Mode: Read Only (TPS, more is better): D: 305622, B: 301135, A: 299777. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.

SMHasher 2022-08-22 - Hash: FarmHash128 (MiB/sec, more is better): D: 17426.17, C: 17175.32, B: 17506.62, A: 17295.31. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 15 - Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average Latency (ms, fewer is better): D: 0.159, C: 0.162, B: 0.161, A: 0.161. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL 15 - Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average Latency (ms, fewer is better): D: 0.164, B: 0.166, A: 0.167. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): D: 84.35, C: 83.56, B: 85.07, A: 84.14. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 15 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write (TPS, more is better): D: 39700, B: 39365, A: 38997. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL 15 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency (ms, fewer is better): D: 2.519, B: 2.540, A: 2.564. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 - Scene: 3 - Resolution: 1080p (FPS, more is better): D: 7.67, C: 7.62, B: 7.75, A: 7.70. 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better): D: 14.53, C: 14.52, B: 14.67, A: 14.43. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.

SMHasher 2022-08-22 - Hash: t1ha2_atonce (MiB/sec, more is better): D: 19749.24, C: 20051.08, B: 19752.83, A: 19795.33. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): D: 478.28, B: 484.97, A: 482.49

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): D: 58.14, C: 58.18, B: 58.84, A: 58.95. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): D: 85.93, B: 86.78, A: 87.11

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream (items/sec, more is better): D: 8.3401, B: 8.2282, A: 8.2900

Neural Magic DeepSparse 1.1 - Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream (items/sec, more is better): D: 46.51, B: 46.08, A: 45.91

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.

Y-Cruncher 0.7.10.9513 - Pi Digits To Calculate: 500M (Seconds, fewer is better): D: 17.83, C: 17.70, B: 17.61, A: 17.74

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better): D: 45.05, C: 44.92, B: 44.99, A: 44.51. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.

SMHasher 2022-08-22 - Hash: Spooky32 (MiB/sec, more is better): D: 19265.23, C: 19034.70, B: 19233.26, A: 19181.60. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 15 - Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency (ms, fewer is better): D: 0.332, B: 0.334, A: 0.336. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.

SMHasher 2022-08-22 - Hash: MeowHash x86_64 AES-NI (MiB/sec, more is better): D: 45447.31, C: 45844.24, B: 45752.01, A: 45994.36. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 15 - Scaling Factor: 100 - Clients: 100 - Mode: Read Only (TPS, more is better): D: 300952, B: 299533, A: 297424. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL 15 - Scaling Factor: 1 - Clients: 100 - Mode: Read Only (TPS, more is better): D: 310891, C: 307586, B: 309250, A: 307562. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better): D: 7.67, C: 7.66, B: 7.69, A: 7.61. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 - Scene: 3 - Resolution: 4K (FPS, more is better): D: 1.98, C: 1.97, B: 1.98, A: 1.96. 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 15 - Scaling Factor: 1 - Clients: 1 - Mode: Read Write - Average Latency (ms, fewer is better): D: 0.403, C: 0.402, B: 0.402, A: 0.399. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): D: 81.04, C: 81.46, B: 81.58, A: 80.78. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): D: 16.52, B: 16.43, A: 16.59

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream (items/sec, more is better): D: 60.52, B: 60.85, A: 60.26

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 15 - Scaling Factor: 100 - Clients: 1 - Mode: Read Write - Average Latency (ms, fewer is better): D: 0.418, B: 0.417, A: 0.421. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

oneDNN

oneDNN 2.7 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better): D: 2726.78 (MIN: 2706.89), C: 2711.47 (MIN: 2702.18), B: 2701.46 (MIN: 2690.63), A: 2717.67 (MIN: 2708.62). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 15 - Scaling Factor: 1 - Clients: 100 - Mode: Read Only - Average Latency (ms, fewer is better): D: 0.322, C: 0.325, B: 0.323, A: 0.325. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL 15 - Scaling Factor: 1 - Clients: 1 - Mode: Read Write (TPS, more is better): D: 2484, C: 2486, B: 2488, A: 2507. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL 15 - Scaling Factor: 100 - Clients: 1 - Mode: Read Write (TPS, more is better): D: 2394, B: 2400, A: 2378. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better): D: 155.73, C: 154.69, B: 155.18, A: 156.10. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

SMHasher

SMHasher is a hash function tester supporting various algorithms and able to make use of AVX and other modern CPU instruction set extensions. Learn more via the OpenBenchmarking.org test page.

SMHasher 2022-08-22 - Hash: FarmHash32 x86_64 AVX (MiB/sec, more is better): D: 34051.80, C: 33899.05, B: 34042.82, A: 33756.46. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher 2022-08-22 - Hash: fasthash32 (MiB/sec, more is better): D: 7589.92, C: 7573.13, B: 7602.61, A: 7536.68. 1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

oneDNN

oneDNN 2.7 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): D: 0.844385 (MIN: 0.81), C: 0.840243 (MIN: 0.8), B: 0.841664 (MIN: 0.81), A: 0.847434 (MIN: 0.81). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): D: 34.33, B: 34.05, A: 34.18

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): D: 116.50, B: 117.46, A: 117.01

oneDNN

oneDNN 2.7 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): D: 1393.64 (MIN: 1387.06), C: 1393.85 (MIN: 1387.96), B: 1388.94 (MIN: 1383.27), A: 1400.14 (MIN: 1392.75). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better): D: 17.59, C: 17.73, B: 17.71, A: 17.71. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

oneDNN

oneDNN 2.7 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): D: 0.613447 (MIN: 0.6), C: 0.610944 (MIN: 0.6), B: 0.610055 (MIN: 0.59), A: 0.608646 (MIN: 0.59). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): D: 476.77, B: 480.44, A: 478.97

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better): D: 36.52, C: 36.36, B: 36.61, A: 36.34. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): D: 35.23, B: 34.99, A: 35.25

oneDNN

oneDNN 2.7 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better): D: 2.93008 (MIN: 2.87), C: 2.91791 (MIN: 2.86), B: 2.91481 (MIN: 2.87), A: 2.93636 (MIN: 2.88). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): D: 113.52, B: 114.29, A: 113.46

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream (items/sec, more is better): D: 8.3717, B: 8.3116, A: 8.3235

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: Rubber O-Ring Seal Installation (Seconds, fewer is better): D: 145.56, C: 146.60, B: 146.45, A: 146.06

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (items/sec, more is better): D: 8.0822, B: 8.0704, A: 8.1277

Neural Magic DeepSparse 1.1 - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): D: 123.72, B: 123.90, A: 123.03

oneDNN

oneDNN 2.7 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better): D: 1.08663 (MIN: 1.05), C: 1.08253 (MIN: 1.05), B: 1.07920 (MIN: 1.04), A: 1.08635 (MIN: 1.04). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.
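A hedged sketch of how the Device / Batch Size / Model parameters shown in these results map onto the upstream tf_cnn_benchmarks.py script. The script path and batch count are assumptions; the flags actually supported depend on the tensorflow/benchmarks checkout in use.

import subprocess

subprocess.run(
    [
        "python", "tf_cnn_benchmarks.py",  # from the tensorflow/benchmarks repository
        "--device=cpu",
        "--data_format=NHWC",              # channels-last layout for CPU runs
        "--model=resnet50",
        "--batch_size=32",
        "--num_batches=100",
    ],
    check=True,
)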

TensorFlow 2.10 - Device: CPU - Batch Size: 512 - Model: AlexNet (images/sec, more is better): D: 124.44, A: 123.66

oneDNN

oneDNN 2.7 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better): D: 2721.31 (MIN: 2713.72), C: 2717.02 (MIN: 2710.9), B: 2707.34 (MIN: 2699.45), A: 2724.19 (MIN: 2717.07). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 15 - Scaling Factor: 1 - Clients: 100 - Mode: Read Write - Average Latency (ms, fewer is better): D: 34.81, B: 35.02, A: 34.95. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 - Scene: 1 - Resolution: 4K (FPS, more is better): D: 8.40, C: 8.38, B: 8.43, A: 8.42. 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 15 - Scaling Factor: 1 - Clients: 100 - Mode: Read Write (TPS, more is better): D: 2873, B: 2856, A: 2861. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): D: 123.70, B: 124.43, A: 123.95

Neural Magic DeepSparse 1.1 - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream (items/sec, more is better): D: 8.0834, B: 8.0362, A: 8.0673

Y-Cruncher

Y-Cruncher is a multi-threaded Pi benchmark capable of computing Pi to trillions of digits. Learn more via the OpenBenchmarking.org test page.

Y-Cruncher 0.7.10.9513 - Pi Digits To Calculate: 1B (Seconds, fewer is better): D: 39.24, C: 39.05, B: 39.03, A: 39.19

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 - Scene: 1 - Resolution: 1080p (FPS, more is better): D: 32.60, C: 32.66, B: 32.77, A: 32.62. 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

oneDNN

oneDNN 2.7 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): D: 1390.76 (MIN: 1384.5), C: 1389.43 (MIN: 1383.24), B: 1385.11 (MIN: 1379.59), A: 1392.31 (MIN: 1386.61). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better): D: 189.14, C: 188.95, B: 189.67, A: 189.92. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

oneDNN

oneDNN 2.7 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better): D: 1393.37 (MIN: 1387.32), C: 1391.77 (MIN: 1385.84), B: 1389.29 (MIN: 1380.09), A: 1395.86 (MIN: 1389.6). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: AlexNet (images/sec, more is better): D: 123.47, A: 122.89

spaCy

The spaCy library is an open-source solution for advanced natural language processing (NLP). spaCy leverages Python and is a leading NLP solution. This test profile times the spaCy CPU performance with various models. Learn more via the OpenBenchmarking.org test page.

spaCy 3.4.1 - Model: en_core_web_lg (tokens/sec, more is better): D: 15424, C: 15401, B: 15357, A: 15416

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 - Scene: 2 - Resolution: 4K (FPS, more is better): D: 2.30, C: 2.31, B: 2.31, A: 2.30. 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: AlexNet (images/sec, more is better): D: 68.16, B: 68.45, A: 68.32

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2022.10.13 - Model: INIVOL and Fluid Structure Interaction Drop Container (Seconds, fewer is better): D: 571.12, C: 568.90, B: 571.31, A: 569.41

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: ResNet-50 (images/sec, more is better): D: 14.49, A: 14.55

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 15 - Scaling Factor: 100 - Clients: 50 - Mode: Read Write (TPS, more is better): D: 38108, B: 37994, A: 38140. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL 15 - Scaling Factor: 100 - Clients: 50 - Mode: Read Write - Average Latency (ms, fewer is better): D: 1.312, B: 1.316, A: 1.311. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL 15 - Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average Latency (ms, fewer is better): D: 15.96, C: 15.93, B: 15.97, A: 15.99. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: ResNet-50 (images/sec, more is better): D: 13.58, A: 13.53

PostgreSQL

This is a benchmark of PostgreSQL using the integrated pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL 15 - Scaling Factor: 1 - Clients: 50 - Mode: Read Write (TPS, more is better): D: 3133, C: 3138, B: 3131, A: 3127. 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: VGG-16 (images/sec, more is better): D: 5.88, A: 5.86

TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: VGG-16 (images/sec, more is better): D: 5.93, B: 5.95, A: 5.94

TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: GoogLeNet (images/sec, more is better): D: 36.38, A: 36.26

oneDNN

oneDNN 2.7 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): D: 2722.90 (MIN: 2715.56), C: 2719.71 (MIN: 2712.91), B: 2715.72 (MIN: 2708.81), A: 2724.50 (MIN: 2717.11). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 2.7 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): D: 1.79717 (MIN: 1.77), C: 1.79785 (MIN: 1.77), B: 1.79390 (MIN: 1.77), A: 1.79935 (MIN: 1.77). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 2.7 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better): D: 5.61171 (MIN: 5.46), C: 5.61586 (MIN: 5.46), B: 5.59992 (MIN: 5.5), A: 5.61357 (MIN: 5.45). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 2.7 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better): D: 1.24944 (MIN: 1.23), C: 1.24858 (MIN: 1.23), B: 1.24604 (MIN: 1.23), A: 1.24936 (MIN: 1.23). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (items/sec, more is better): D: 30.16, B: 30.08, A: 30.11

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): D: 33.15, B: 33.24, A: 33.20

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (items/sec, more is better): D: 71.79, B: 71.64, A: 71.73

Neural Magic DeepSparse 1.1 - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): D: 55.70, B: 55.82, A: 55.75

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: GoogLeNet (images/sec, more is better): D: 41.39, A: 41.31

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (items/sec, more is better): D: 30.00, B: 30.02, A: 30.05

Neural Magic DeepSparse 1.1 - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): D: 33.33, B: 33.30, A: 33.27

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: VGG-16 (images/sec, more is better): D: 5.81, B: 5.82, A: 5.81

TensorFlow 2.10 - Device: CPU - Batch Size: 512 - Model: GoogLeNet (images/sec, more is better): D: 35.96, A: 36.02

TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: ResNet-50 (images/sec, more is better): D: 12.67, A: 12.65

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream (items/sec, more is better): D: 44.99, B: 45.05, A: 44.98

Neural Magic DeepSparse 1.1 - Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): D: 22.22, B: 22.19, A: 22.23

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: AlexNet (images/sec, more is better): D: 109.92, A: 109.77

TensorFlow 2.10 - Device: CPU - Batch Size: 64 - Model: GoogLeNet (images/sec, more is better): D: 37.85, A: 37.81

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (ms/batch, fewer is better): D: 11.83, B: 11.83, A: 11.82

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 256 - Model: ResNet-50 (images/sec, more is better): D: 11.86, A: 11.87

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream (items/sec, more is better): D: 84.53, B: 84.50, A: 84.57

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: GoogLeNet (images/sec, more is better): D: 39.35, A: 39.37

TensorFlow 2.10 - Device: CPU - Batch Size: 32 - Model: AlexNet (images/sec, more is better): D: 91.54, A: 91.50

Neural Magic DeepSparse

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (items/sec, more is better): D: 95.06, B: 95.07, A: 95.07

Neural Magic DeepSparse 1.1 - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better): D: 42.06, B: 42.06, A: 42.06

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

TensorFlow 2.10 - Device: CPU - Batch Size: 16 - Model: VGG-16 (images/sec, more is better): D: 5.55, B: 5.55, A: 5.55

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google as the AV1 Codec Library. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.5 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better): D: 0.64, C: 0.64, B: 0.64, A: 0.64. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1 3.5 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better): D: 0.21, C: 0.21, B: 0.21, A: 0.21. 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.

QuadRay 2022.05.25 - Scene: 5 - Resolution: 1080p (FPS, more is better): D: 2.14, C: 2.14, B: 2.14, A: 2.14. 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay 2022.05.25 - Scene: 5 - Resolution: 4K (FPS, more is better): D: 0.53, C: 0.53, B: 0.53, A: 0.53. 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.

Device: CPU - Batch Size: 512 - Model: ResNet-50

A: The test quit with a non-zero exit status.

D: The test quit with a non-zero exit status.

Device: CPU - Batch Size: 512 - Model: VGG-16

A: The test quit with a non-zero exit status. E: Fatal Python error: Aborted

D: The test quit with a non-zero exit status. E: Fatal Python error: Aborted

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

D: The test run did not produce a result.

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

D: The test run did not produce a result.

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

D: The test run did not produce a result.

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

D: The test run did not produce a result.

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

D: The test run did not produce a result.

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

D: The test run did not produce a result.

128 Results Shown

oneDNN:
  IP Shapes 3D - f32 - CPU
  Convolution Batch Shapes Auto - u8s8f32 - CPU
  Deconvolution Batch shapes_1d - f32 - CPU
PostgreSQL
SMHasher
PostgreSQL:
  1 - 1 - Read Only
  100 - 1 - Read Only
SMHasher
OpenRadioss
oneDNN
PostgreSQL
OpenRadioss:
  Bumper Beam
  Cell Phone Drop Test
AOM AV1
oneDNN
AOM AV1
QuadRay
spaCy
SMHasher
PostgreSQL:
  1 - 50 - Read Only
  100 - 50 - Read Only
SMHasher
PostgreSQL:
  1 - 50 - Read Only - Average Latency
  100 - 50 - Read Only - Average Latency
AOM AV1
PostgreSQL:
  100 - 100 - Read Write
  100 - 100 - Read Write - Average Latency
QuadRay
AOM AV1
SMHasher
Neural Magic DeepSparse
AOM AV1
Neural Magic DeepSparse:
  CV Detection,YOLOv5s COCO - Asynchronous Multi-Stream
  NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream
  CV Detection,YOLOv5s COCO - Asynchronous Multi-Stream
Y-Cruncher
AOM AV1
SMHasher
PostgreSQL
SMHasher
PostgreSQL:
  100 - 100 - Read Only
  1 - 100 - Read Only
AOM AV1
QuadRay
PostgreSQL
AOM AV1
Neural Magic DeepSparse:
  NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream:
    ms/batch
    items/sec
PostgreSQL
oneDNN
PostgreSQL:
  1 - 100 - Read Only - Average Latency
  1 - 1 - Read Write
  100 - 1 - Read Write
AOM AV1
SMHasher:
  FarmHash32 x86_64 AVX
  fasthash32
oneDNN
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream:
    items/sec
    ms/batch
oneDNN
AOM AV1
oneDNN
Neural Magic DeepSparse
AOM AV1
Neural Magic DeepSparse
oneDNN
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream
  NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream
OpenRadioss
Neural Magic DeepSparse:
  NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream:
    items/sec
    ms/batch
oneDNN
TensorFlow
oneDNN
PostgreSQL
QuadRay
PostgreSQL
Neural Magic DeepSparse:
  NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream:
    ms/batch
    items/sec
Y-Cruncher
QuadRay
oneDNN
AOM AV1
oneDNN
TensorFlow
spaCy
QuadRay
TensorFlow
OpenRadioss
TensorFlow
PostgreSQL:
  100 - 50 - Read Write
  100 - 50 - Read Write - Average Latency
  1 - 50 - Read Write - Average Latency
TensorFlow
PostgreSQL
TensorFlow:
  CPU - 256 - VGG-16
  CPU - 64 - VGG-16
  CPU - 256 - GoogLeNet
oneDNN:
  Recurrent Neural Network Training - u8s8f32 - CPU
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
  Deconvolution Batch shapes_3d - f32 - CPU
  IP Shapes 1D - u8s8f32 - CPU
Neural Magic DeepSparse:
  NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream:
    items/sec
    ms/batch
  NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream:
    items/sec
    ms/batch
TensorFlow
Neural Magic DeepSparse:
  NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream:
    items/sec
    ms/batch
TensorFlow:
  CPU - 32 - VGG-16
  CPU - 512 - GoogLeNet
  CPU - 64 - ResNet-50
Neural Magic DeepSparse:
  CV Detection,YOLOv5s COCO - Synchronous Single-Stream:
    items/sec
    ms/batch
TensorFlow:
  CPU - 64 - AlexNet
  CPU - 64 - GoogLeNet
Neural Magic DeepSparse
TensorFlow
Neural Magic DeepSparse
TensorFlow:
  CPU - 32 - GoogLeNet
  CPU - 32 - AlexNet
Neural Magic DeepSparse:
  CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream:
    items/sec
    ms/batch
TensorFlow
AOM AV1:
  Speed 0 Two-Pass - Bosphorus 1080p
  Speed 0 Two-Pass - Bosphorus 4K
QuadRay:
  5 - 1080p
  5 - 4K