5800x3d smoke okt

AMD Ryzen 7 5800X3D 8-Core testing with an ASUS ROG CROSSHAIR VIII HERO (4201 BIOS) and Intel DG2 8GB on Ubuntu 22.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2210143-PTS-5800X3DS35&grs.
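
For anyone wanting to reproduce these numbers locally, the sketch below shows one way to re-run the same test selection for comparison. It is a minimal sketch of ours, not part of the export: it assumes the phoronix-test-suite CLI is installed and relies on its documented behavior of accepting an OpenBenchmarking.org result ID as a benchmark target.

    # Minimal sketch: re-run the tests behind this result for a local comparison.
    # Assumes the `phoronix-test-suite` CLI is installed and on PATH, and that
    # passing a public OpenBenchmarking.org result ID to `benchmark` triggers a
    # side-by-side comparison run (standard Phoronix Test Suite behavior).
    import subprocess

    RESULT_ID = "2210143-PTS-5800X3DS35"  # ID taken from the URL above

    def rerun_for_comparison(result_id: str) -> None:
        """Invoke the Phoronix Test Suite against an existing public result."""
        subprocess.run(["phoronix-test-suite", "benchmark", result_id], check=True)

    if __name__ == "__main__":
        rerun_for_comparison(RESULT_ID)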

Configurations A, B, C, and D all used the same system:

Processor: AMD Ryzen 7 5800X3D 8-Core @ 3.40GHz (8 Cores / 16 Threads)
Motherboard: ASUS ROG CROSSHAIR VIII HERO (4201 BIOS)
Chipset: AMD Starship/Matisse
Memory: 32GB
Disk: 1000GB Western Digital WDS100T1X0E-00AFY0 + 2000GB
Graphics: Intel DG2 8GB (2400MHz)
Audio: Intel Device 4f90
Monitor: ASUS VP28U
Network: Realtek RTL8125 2.5GbE + Intel I211
OS: Ubuntu 22.04
Kernel: 5.15.47+prerelease3723 (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server 1.21.1.3 + Wayland
OpenGL: 4.6 Mesa 22.2.0-devel (git-44289c46d9)
Vulkan: 1.3.219
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa20120a
Python Details: Python 3.10.4
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected

[Results summary table: the full matrix of benchmark results for runs A, B, C, and D as shown on OpenBenchmarking.org. The individual results are listed per test below.]

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 2.7
A: 6.73091 (MIN: 6.51) | B: 6.80659 (MIN: 6.65) | C: 7.43205 (MIN: 7.34) | D: 7.50774 (MIN: 7.38)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 2.7
A: 11.89 (MIN: 10.6) | B: 10.67 (MIN: 10.47) | C: 11.18 (MIN: 10.98) | D: 11.37 (MIN: 11.19)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 2.7
A: 7.23318 (MIN: 5.1) | B: 6.83968 (MIN: 5.09) | C: 7.37916 (MIN: 5.1) | D: 7.10311 (MIN: 5.11)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

PostgreSQL

Scaling Factor: 100 - Clients: 1 - Mode: Read Only - Average Latency

OpenBenchmarking.org - ms, Fewer Is Better - PostgreSQL 15
A: 0.029 | B: 0.030 | C: 0.029 | D: 0.028
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

SMHasher

Hash: SHA3-256

OpenBenchmarking.org - MiB/sec, More Is Better - SMHasher 2022-08-22
A: 191.23 | B: 196.15 | C: 183.14 | D: 194.55
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

PostgreSQL

Scaling Factor: 1 - Clients: 1 - Mode: Read Only

OpenBenchmarking.org - TPS, More Is Better - PostgreSQL 15
A: 37199 | B: 35381 | C: 35743 | D: 35777
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 100 - Clients: 1 - Mode: Read Only

OpenBenchmarking.org - TPS, More Is Better - PostgreSQL 15
A: 34362 | B: 33875 | C: 34948 | D: 35562
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

SMHasher

Hash: t1ha0_aes_avx2 x86_64

OpenBenchmarking.org - MiB/sec, More Is Better - SMHasher 2022-08-22
A: 77605.07 | B: 73947.90 | C: 74795.04 | D: 75226.23
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

OpenRadioss

Model: Bird Strike on Windshield

OpenBenchmarking.org - Seconds, Fewer Is Better - OpenRadioss 2022.10.13
A: 279.30 | B: 291.67 | C: 279.49 | D: 280.05

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 2.7
A: 12.79 (MIN: 12.55) | B: 12.87 (MIN: 12.51) | C: 13.24 (MIN: 13.1) | D: 13.35 (MIN: 13.18)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

PostgreSQL

Scaling Factor: 1 - Clients: 1 - Mode: Read Only - Average Latency

OpenBenchmarking.org - ms, Fewer Is Better - PostgreSQL 15
A: 0.027 | B: 0.028 | C: 0.028 | D: 0.028
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

OpenRadioss

Model: Bumper Beam

OpenBenchmarking.org - Seconds, Fewer Is Better - OpenRadioss 2022.10.13
A: 144.69 | B: 145.55 | C: 149.29 | D: 145.41

OpenRadioss

Model: Cell Phone Drop Test

OpenBenchmarking.org - Seconds, Fewer Is Better - OpenRadioss 2022.10.13
A: 100.35 | B: 100.26 | C: 100.11 | D: 103.13

AOM AV1

Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p

OpenBenchmarking.org - Frames Per Second, More Is Better - AOM AV1 3.5
A: 199.11 | B: 200.46 | C: 194.65 | D: 198.99
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 2.7
A: 2.52002 (MIN: 2.46) | B: 2.48677 (MIN: 2.44) | C: 2.54366 (MIN: 2.49) | D: 2.53624 (MIN: 2.48)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

AOM AV1

Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p

OpenBenchmarking.org - Frames Per Second, More Is Better - AOM AV1 3.5
A: 74.81 | B: 73.60 | C: 74.15 | D: 73.14
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

QuadRay

Scene: 2 - Resolution: 1080p

OpenBenchmarking.org - FPS, More Is Better - QuadRay 2022.05.25
A: 8.80 | B: 9.00 | C: 8.90 | D: 8.97
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

spaCy

Model: en_core_web_trf

OpenBenchmarking.org - tokens/sec, More Is Better - spaCy 3.4.1
A: 740 | B: 748 | C: 756 | D: 747

SMHasher

Hash: wyhash

OpenBenchmarking.org - MiB/sec, More Is Better - SMHasher 2022-08-22
A: 26419.11 | B: 26334.38 | C: 26302.30 | D: 25891.34
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

PostgreSQL

Scaling Factor: 1 - Clients: 50 - Mode: Read Only

OpenBenchmarking.org - TPS, More Is Better - PostgreSQL 15
A: 310942 | B: 311138 | C: 308393 | D: 314549
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 100 - Clients: 50 - Mode: Read Only

OpenBenchmarking.org - TPS, More Is Better - PostgreSQL 15
A: 299777 | B: 301135 | D: 305622
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

SMHasher

Hash: FarmHash128

OpenBenchmarking.org - MiB/sec, More Is Better - SMHasher 2022-08-22
A: 17295.31 | B: 17506.62 | C: 17175.32 | D: 17426.17
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

PostgreSQL

Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average Latency

OpenBenchmarking.org - ms, Fewer Is Better - PostgreSQL 15
A: 0.161 | B: 0.161 | C: 0.162 | D: 0.159
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average Latency

OpenBenchmarking.org - ms, Fewer Is Better - PostgreSQL 15
A: 0.167 | B: 0.166 | D: 0.164
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

AOM AV1

Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K

OpenBenchmarking.org - Frames Per Second, More Is Better - AOM AV1 3.5
A: 84.14 | B: 85.07 | C: 83.56 | D: 84.35
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

PostgreSQL

Scaling Factor: 100 - Clients: 100 - Mode: Read Write

OpenBenchmarking.org - TPS, More Is Better - PostgreSQL 15
A: 38997 | B: 39365 | D: 39700
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency

OpenBenchmarking.org - ms, Fewer Is Better - PostgreSQL 15
A: 2.564 | B: 2.540 | D: 2.519
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
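
As a sanity check on how the two pgbench metrics relate (an observation of ours, not part of the export), the reported average latency is approximately the client count divided by TPS. The short Python sketch below verifies this against the 100-client read-write numbers above.

    # Consistency check: pgbench average latency ~= clients / TPS.
    # Values are the Scaling Factor 100 / 100-client read-write results (runs A, B, D).
    CLIENTS = 100
    tps = {"A": 38997, "B": 39365, "D": 39700}
    reported_ms = {"A": 2.564, "B": 2.540, "D": 2.519}

    for run, t in tps.items():
        derived_ms = CLIENTS / t * 1000.0  # seconds per transaction per client, in ms
        print(f"{run}: derived {derived_ms:.3f} ms vs reported {reported_ms[run]:.3f} ms")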

QuadRay

Scene: 3 - Resolution: 1080p

OpenBenchmarking.org - FPS, More Is Better - QuadRay 2022.05.25
A: 7.70 | B: 7.75 | C: 7.62 | D: 7.67
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

AOM AV1

Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K

OpenBenchmarking.org - Frames Per Second, More Is Better - AOM AV1 3.5
A: 14.43 | B: 14.67 | C: 14.52 | D: 14.53
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

SMHasher

Hash: t1ha2_atonce

OpenBenchmarking.org - MiB/sec, More Is Better - SMHasher 2022-08-22
A: 19795.33 | B: 19752.83 | C: 20051.08 | D: 19749.24
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.1
A: 482.49 | B: 484.97 | D: 478.28

AOM AV1

Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K

OpenBenchmarking.org - Frames Per Second, More Is Better - AOM AV1 3.5
A: 58.95 | B: 58.84 | C: 58.18 | D: 58.14
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Neural Magic DeepSparse

Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.1
A: 87.11 | B: 86.78 | D: 85.93

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.1
A: 8.2900 | B: 8.2282 | D: 8.3401

Neural Magic DeepSparse

Model: CV Detection,YOLOv5s COCO - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.1
A: 45.91 | B: 46.08 | D: 46.51

Y-Cruncher

Pi Digits To Calculate: 500M

OpenBenchmarking.org - Seconds, Fewer Is Better - Y-Cruncher 0.7.10.9513
A: 17.74 | B: 17.61 | C: 17.70 | D: 17.83

AOM AV1

Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p

OpenBenchmarking.org - Frames Per Second, More Is Better - AOM AV1 3.5
A: 44.51 | B: 44.99 | C: 44.92 | D: 45.05
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

SMHasher

Hash: Spooky32

OpenBenchmarking.org - MiB/sec, More Is Better - SMHasher 2022-08-22
A: 19181.60 | B: 19233.26 | C: 19034.70 | D: 19265.23
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

PostgreSQL

Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency

OpenBenchmarking.org - ms, Fewer Is Better - PostgreSQL 15
A: 0.336 | B: 0.334 | D: 0.332
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

SMHasher

Hash: MeowHash x86_64 AES-NI

OpenBenchmarking.org - MiB/sec, More Is Better - SMHasher 2022-08-22
A: 45994.36 | B: 45752.01 | C: 45844.24 | D: 45447.31
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

PostgreSQL

Scaling Factor: 100 - Clients: 100 - Mode: Read Only

OpenBenchmarking.org - TPS, More Is Better - PostgreSQL 15
A: 297424 | B: 299533 | D: 300952
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 1 - Clients: 100 - Mode: Read Only

OpenBenchmarking.org - TPS, More Is Better - PostgreSQL 15
A: 307562 | B: 309250 | C: 307586 | D: 310891
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

AOM AV1

Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K

OpenBenchmarking.org - Frames Per Second, More Is Better - AOM AV1 3.5
A: 7.61 | B: 7.69 | C: 7.66 | D: 7.67
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

QuadRay

Scene: 3 - Resolution: 4K

OpenBenchmarking.org - FPS, More Is Better - QuadRay 2022.05.25
A: 1.96 | B: 1.98 | C: 1.97 | D: 1.98
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

PostgreSQL

Scaling Factor: 1 - Clients: 1 - Mode: Read Write - Average Latency

OpenBenchmarking.org - ms, Fewer Is Better - PostgreSQL 15
A: 0.399 | B: 0.402 | C: 0.402 | D: 0.403
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

AOM AV1

Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K

OpenBenchmarking.org - Frames Per Second, More Is Better - AOM AV1 3.5
A: 80.78 | B: 81.58 | C: 81.46 | D: 81.04
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.1
A: 16.59 | B: 16.43 | D: 16.52

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.1
A: 60.26 | B: 60.85 | D: 60.52

PostgreSQL

Scaling Factor: 100 - Clients: 1 - Mode: Read Write - Average Latency

OpenBenchmarking.org - ms, Fewer Is Better - PostgreSQL 15
A: 0.421 | B: 0.417 | D: 0.418
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 2.7
A: 2717.67 (MIN: 2708.62) | B: 2701.46 (MIN: 2690.63) | C: 2711.47 (MIN: 2702.18) | D: 2726.78 (MIN: 2706.89)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

PostgreSQL

Scaling Factor: 1 - Clients: 100 - Mode: Read Only - Average Latency

OpenBenchmarking.org - ms, Fewer Is Better - PostgreSQL 15
A: 0.325 | B: 0.323 | C: 0.325 | D: 0.322
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 1 - Clients: 1 - Mode: Read Write

OpenBenchmarking.org - TPS, More Is Better - PostgreSQL 15
A: 2507 | B: 2488 | C: 2486 | D: 2484
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 100 - Clients: 1 - Mode: Read Write

OpenBenchmarking.org - TPS, More Is Better - PostgreSQL 15
A: 2378 | B: 2400 | D: 2394
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

AOM AV1

Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p

OpenBenchmarking.org - Frames Per Second, More Is Better - AOM AV1 3.5
A: 156.10 | B: 155.18 | C: 154.69 | D: 155.73
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

SMHasher

Hash: FarmHash32 x86_64 AVX

OpenBenchmarking.org - MiB/sec, More Is Better - SMHasher 2022-08-22
A: 33756.46 | B: 34042.82 | C: 33899.05 | D: 34051.80
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: fasthash32

OpenBenchmarking.org - MiB/sec, More Is Better - SMHasher 2022-08-22
A: 7536.68 | B: 7602.61 | C: 7573.13 | D: 7589.92
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 2.7
A: 0.847434 (MIN: 0.81) | B: 0.841664 (MIN: 0.81) | C: 0.840243 (MIN: 0.8) | D: 0.844385 (MIN: 0.81)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.1
A: 34.18 | B: 34.05 | D: 34.33

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.1
A: 117.01 | B: 117.46 | D: 116.50

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 2.7
A: 1400.14 (MIN: 1392.75) | B: 1388.94 (MIN: 1383.27) | C: 1393.85 (MIN: 1387.96) | D: 1393.64 (MIN: 1387.06)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

AOM AV1

Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p

OpenBenchmarking.org - Frames Per Second, More Is Better - AOM AV1 3.5
A: 17.71 | B: 17.71 | C: 17.73 | D: 17.59
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

oneDNN

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 2.7
A: 0.608646 (MIN: 0.59) | B: 0.610055 (MIN: 0.59) | C: 0.610944 (MIN: 0.6) | D: 0.613447 (MIN: 0.6)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.1
A: 478.97 | B: 480.44 | D: 476.77

AOM AV1

Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K

OpenBenchmarking.org - Frames Per Second, More Is Better - AOM AV1 3.5
A: 36.34 | B: 36.61 | C: 36.36 | D: 36.52
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.1
A: 35.25 | B: 34.99 | D: 35.23

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 2.7
A: 2.93636 (MIN: 2.88) | B: 2.91481 (MIN: 2.87) | C: 2.91791 (MIN: 2.86) | D: 2.93008 (MIN: 2.87)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.1
A: 113.46 | B: 114.29 | D: 113.52

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.1
A: 8.3235 | B: 8.3116 | D: 8.3717

OpenRadioss

Model: Rubber O-Ring Seal Installation

OpenBenchmarking.org - Seconds, Fewer Is Better - OpenRadioss 2022.10.13
A: 146.06 | B: 146.45 | C: 146.60 | D: 145.56

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.1
A: 8.1277 | B: 8.0704 | D: 8.0822

Neural Magic DeepSparse

Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.1
A: 123.03 | B: 123.90 | D: 123.72

oneDNN

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 2.7
A: 1.08635 (MIN: 1.04) | B: 1.07920 (MIN: 1.04) | C: 1.08253 (MIN: 1.05) | D: 1.08663 (MIN: 1.05)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TensorFlow

Device: CPU - Batch Size: 512 - Model: AlexNet

OpenBenchmarking.org - images/sec, More Is Better - TensorFlow 2.10
A: 123.66 | D: 124.44

oneDNN

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 2.7
A: 2724.19 (MIN: 2717.07) | B: 2707.34 (MIN: 2699.45) | C: 2717.02 (MIN: 2710.9) | D: 2721.31 (MIN: 2713.72)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

PostgreSQL

Scaling Factor: 1 - Clients: 100 - Mode: Read Write - Average Latency

OpenBenchmarking.org - ms, Fewer Is Better - PostgreSQL 15
A: 34.95 | B: 35.02 | D: 34.81
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

QuadRay

Scene: 1 - Resolution: 4K

OpenBenchmarking.org - FPS, More Is Better - QuadRay 2022.05.25
A: 8.42 | B: 8.43 | C: 8.38 | D: 8.40
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

PostgreSQL

Scaling Factor: 1 - Clients: 100 - Mode: Read Write

OpenBenchmarking.org - TPS, More Is Better - PostgreSQL 15
A: 2861 | B: 2856 | D: 2873
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.1
A: 123.95 | B: 124.43 | D: 123.70

Neural Magic DeepSparse

Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.1
A: 8.0673 | B: 8.0362 | D: 8.0834
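
The two DeepSparse single-stream metrics are two views of the same measurement: for the synchronous scenario, ms/batch should be roughly the reciprocal of items/sec. A small check of ours (an assumption, not stated in the export) against the oBERT IMDB single-stream numbers above:

    # Consistency check: synchronous single-stream ms/batch ~= 1000 / items-per-second.
    # Values are the oBERT IMDB single-stream results (runs A, B, D).
    items_per_sec = {"A": 8.0673, "B": 8.0362, "D": 8.0834}
    reported_ms = {"A": 123.95, "B": 124.43, "D": 123.70}

    for run, ips in items_per_sec.items():
        derived_ms = 1000.0 / ips  # milliseconds per item at one item per batch
        print(f"{run}: derived {derived_ms:.2f} ms vs reported {reported_ms[run]:.2f} ms")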

Y-Cruncher

Pi Digits To Calculate: 1B

OpenBenchmarking.org - Seconds, Fewer Is Better - Y-Cruncher 0.7.10.9513
A: 39.19 | B: 39.03 | C: 39.05 | D: 39.24

QuadRay

Scene: 1 - Resolution: 1080p

OpenBenchmarking.org - FPS, More Is Better - QuadRay 2022.05.25
A: 32.62 | B: 32.77 | C: 32.66 | D: 32.60
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 2.7
A: 1392.31 (MIN: 1386.61) | B: 1385.11 (MIN: 1379.59) | C: 1389.43 (MIN: 1383.24) | D: 1390.76 (MIN: 1384.5)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

AOM AV1

Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p

OpenBenchmarking.org - Frames Per Second, More Is Better - AOM AV1 3.5
A: 189.92 | B: 189.67 | C: 188.95 | D: 189.14
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 2.7
A: 1395.86 (MIN: 1389.6) | B: 1389.29 (MIN: 1380.09) | C: 1391.77 (MIN: 1385.84) | D: 1393.37 (MIN: 1387.32)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

TensorFlow

Device: CPU - Batch Size: 256 - Model: AlexNet

OpenBenchmarking.org - images/sec, More Is Better - TensorFlow 2.10
A: 122.89 | D: 123.47

spaCy

Model: en_core_web_lg

OpenBenchmarking.org - tokens/sec, More Is Better - spaCy 3.4.1
A: 15416 | B: 15357 | C: 15401 | D: 15424

QuadRay

Scene: 2 - Resolution: 4K

OpenBenchmarking.org - FPS, More Is Better - QuadRay 2022.05.25
A: 2.30 | B: 2.31 | C: 2.31 | D: 2.30
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

TensorFlow

Device: CPU - Batch Size: 16 - Model: AlexNet

OpenBenchmarking.org - images/sec, More Is Better - TensorFlow 2.10
A: 68.32 | B: 68.45 | D: 68.16

OpenRadioss

Model: INIVOL and Fluid Structure Interaction Drop Container

OpenBenchmarking.org - Seconds, Fewer Is Better - OpenRadioss 2022.10.13
A: 569.41 | B: 571.31 | C: 568.90 | D: 571.12

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

OpenBenchmarking.org - images/sec, More Is Better - TensorFlow 2.10
A: 14.55 | D: 14.49

PostgreSQL

Scaling Factor: 100 - Clients: 50 - Mode: Read Write

OpenBenchmarking.org - TPS, More Is Better - PostgreSQL 15
A: 38140 | B: 37994 | D: 38108
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 100 - Clients: 50 - Mode: Read Write - Average Latency

OpenBenchmarking.org - ms, Fewer Is Better - PostgreSQL 15
A: 1.311 | B: 1.316 | D: 1.312
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

PostgreSQL

Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average Latency

OpenBenchmarking.org - ms, Fewer Is Better - PostgreSQL 15
A: 15.99 | B: 15.97 | C: 15.93 | D: 15.96
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

TensorFlow

Device: CPU - Batch Size: 32 - Model: ResNet-50

OpenBenchmarking.org - images/sec, More Is Better - TensorFlow 2.10
A: 13.53 | D: 13.58

PostgreSQL

Scaling Factor: 1 - Clients: 50 - Mode: Read Write

OpenBenchmarking.org - TPS, More Is Better - PostgreSQL 15
A: 3127 | B: 3131 | C: 3138 | D: 3133
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm

TensorFlow

Device: CPU - Batch Size: 256 - Model: VGG-16

OpenBenchmarking.org - images/sec, More Is Better - TensorFlow 2.10
A: 5.86 | D: 5.88

TensorFlow

Device: CPU - Batch Size: 64 - Model: VGG-16

OpenBenchmarking.org - images/sec, More Is Better - TensorFlow 2.10
A: 5.94 | B: 5.95 | D: 5.93

TensorFlow

Device: CPU - Batch Size: 256 - Model: GoogLeNet

OpenBenchmarking.org - images/sec, More Is Better - TensorFlow 2.10
A: 36.26 | D: 36.38

oneDNN

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 2.7
A: 2724.50 (MIN: 2717.11) | B: 2715.72 (MIN: 2708.81) | C: 2719.71 (MIN: 2712.91) | D: 2722.90 (MIN: 2715.56)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 2.7
A: 1.79935 (MIN: 1.77) | B: 1.79390 (MIN: 1.77) | C: 1.79785 (MIN: 1.77) | D: 1.79717 (MIN: 1.77)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 2.7
A: 5.61357 (MIN: 5.45) | B: 5.59992 (MIN: 5.5) | C: 5.61586 (MIN: 5.46) | D: 5.61171 (MIN: 5.46)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU

OpenBenchmarking.org - ms, Fewer Is Better - oneDNN 2.7
A: 1.24936 (MIN: 1.23) | B: 1.24604 (MIN: 1.23) | C: 1.24858 (MIN: 1.23) | D: 1.24944 (MIN: 1.23)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.1
A: 30.11 | B: 30.08 | D: 30.16

Neural Magic DeepSparse

Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.1
A: 33.20 | B: 33.24 | D: 33.15

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.1
A: 71.73 | B: 71.64 | D: 71.79

Neural Magic DeepSparse

Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.1
A: 55.75 | B: 55.82 | D: 55.70

TensorFlow

Device: CPU - Batch Size: 16 - Model: GoogLeNet

OpenBenchmarking.org - images/sec, More Is Better - TensorFlow 2.10
A: 41.31 | D: 41.39

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.1
A: 30.05 | B: 30.02 | D: 30.00

Neural Magic DeepSparse

Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.1
A: 33.27 | B: 33.30 | D: 33.33

TensorFlow

Device: CPU - Batch Size: 32 - Model: VGG-16

OpenBenchmarking.org - images/sec, More Is Better - TensorFlow 2.10
A: 5.81 | B: 5.82 | D: 5.81

TensorFlow

Device: CPU - Batch Size: 512 - Model: GoogLeNet

OpenBenchmarking.org - images/sec, More Is Better - TensorFlow 2.10
A: 36.02 | D: 35.96

TensorFlow

Device: CPU - Batch Size: 64 - Model: ResNet-50

OpenBenchmarking.org - images/sec, More Is Better - TensorFlow 2.10
A: 12.65 | D: 12.67

Neural Magic DeepSparse

Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.1
A: 44.98 | B: 45.05 | D: 44.99

Neural Magic DeepSparse

Model: CV Detection,YOLOv5s COCO - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.1
A: 22.23 | B: 22.19 | D: 22.22

TensorFlow

Device: CPU - Batch Size: 64 - Model: AlexNet

OpenBenchmarking.org - images/sec, More Is Better - TensorFlow 2.10
A: 109.77 | D: 109.92

TensorFlow

Device: CPU - Batch Size: 64 - Model: GoogLeNet

OpenBenchmarking.org - images/sec, More Is Better - TensorFlow 2.10
A: 37.81 | D: 37.85

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.1
A: 11.82 | B: 11.83 | D: 11.83

TensorFlow

Device: CPU - Batch Size: 256 - Model: ResNet-50

OpenBenchmarking.org - images/sec, More Is Better - TensorFlow 2.10
A: 11.87 | D: 11.86

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.1
A: 84.57 | B: 84.50 | D: 84.53

TensorFlow

Device: CPU - Batch Size: 32 - Model: GoogLeNet

OpenBenchmarking.org - images/sec, More Is Better - TensorFlow 2.10
A: 39.37 | D: 39.35

TensorFlow

Device: CPU - Batch Size: 32 - Model: AlexNet

OpenBenchmarking.org - images/sec, More Is Better - TensorFlow 2.10
A: 91.50 | D: 91.54

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - items/sec, More Is Better - Neural Magic DeepSparse 1.1
A: 95.07 | B: 95.07 | D: 95.06

Neural Magic DeepSparse

Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream

OpenBenchmarking.org - ms/batch, Fewer Is Better - Neural Magic DeepSparse 1.1
A: 42.06 | B: 42.06 | D: 42.06

TensorFlow

Device: CPU - Batch Size: 16 - Model: VGG-16

OpenBenchmarking.org - images/sec, More Is Better - TensorFlow 2.10
A: 5.55 | B: 5.55 | D: 5.55

AOM AV1

Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p

OpenBenchmarking.org - Frames Per Second, More Is Better - AOM AV1 3.5
A: 0.64 | B: 0.64 | C: 0.64 | D: 0.64
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1

Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K

OpenBenchmarking.org - Frames Per Second, More Is Better - AOM AV1 3.5
A: 0.21 | B: 0.21 | C: 0.21 | D: 0.21
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

QuadRay

Scene: 5 - Resolution: 1080p

OpenBenchmarking.org - FPS, More Is Better - QuadRay 2022.05.25
A: 2.14 | B: 2.14 | C: 2.14 | D: 2.14
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

QuadRay

Scene: 5 - Resolution: 4K

OpenBenchmarking.org - FPS, More Is Better - QuadRay 2022.05.25
A: 0.53 | B: 0.53 | C: 0.53 | D: 0.53
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread

SMHasher

Hash: MeowHash x86_64 AES-NI

OpenBenchmarking.org - cycles/hash, Fewer Is Better - SMHasher 2022-08-22
A: 50.11 | B: 50.10 | C: 50.11 | D: 50.25
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: t1ha0_aes_avx2 x86_64

OpenBenchmarking.org - cycles/hash, Fewer Is Better - SMHasher 2022-08-22
A: 23.25 | B: 23.06 | C: 23.09 | D: 23.06
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: FarmHash32 x86_64 AVX

OpenBenchmarking.org - cycles/hash, Fewer Is Better - SMHasher 2022-08-22
A: 30.29 | B: 30.29 | C: 30.32 | D: 32.56
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: t1ha2_atonce

OpenBenchmarking.org - cycles/hash, Fewer Is Better - SMHasher 2022-08-22
A: 24.32 | B: 24.12 | C: 24.09 | D: 24.06
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: FarmHash128

OpenBenchmarking.org - cycles/hash, Fewer Is Better - SMHasher 2022-08-22
A: 55.27 | B: 55.26 | C: 55.25 | D: 55.26
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: fasthash32

OpenBenchmarking.org - cycles/hash, Fewer Is Better - SMHasher 2022-08-22
A: 25.97 | B: 25.73 | C: 25.81 | D: 25.75
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: Spooky32

OpenBenchmarking.org - cycles/hash, Fewer Is Better - SMHasher 2022-08-22
A: 31.72 | B: 31.65 | C: 31.90 | D: 31.55
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: SHA3-256

OpenBenchmarking.org - cycles/hash, Fewer Is Better - SMHasher 2022-08-22
A: 2033.50 | B: 1979.33 | C: 2115.65 | D: 1988.99
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects

SMHasher

Hash: wyhash

OpenBenchmarking.org - cycles/hash, Fewer Is Better - SMHasher 2022-08-22
A: 16.25 | B: 16.30 | C: 16.21 | D: 16.75
1. (CXX) g++ options: -march=native -O3 -flto -fno-fat-lto-objects
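
The `&grs` suffix in the result URL appears to request the greatest-result-spread ordering (our reading, flagged as an assumption). A simple way to compute such a spread is the max/min ratio across the four runs, illustrated below with two of the SMHasher cycles/hash results reported above.

    # Illustration only (assumption: "&grs" sorts by greatest result spread).
    # Spread here = max value / min value across runs A-D for a given test.
    results = {
        "SMHasher SHA3-256 (cycles/hash)": [2033.50, 1979.33, 2115.65, 1988.99],
        "SMHasher FarmHash32 x86_64 AVX (cycles/hash)": [30.29, 30.29, 30.32, 32.56],
    }

    for name, values in results.items():
        spread = max(values) / min(values)
        print(f"{name}: spread = {spread:.3f}x")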


Phoronix Test Suite v10.8.4