EPYC 7F72

2 x AMD EPYC 7F72 24-Core testing with a Supermicro H11DSi-NT v2.00 (2.1 BIOS) and ASPEED on Ubuntu 20.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2012196-HA-EPYC7F72759

Tests in this result file fall under the following categories:

Chess Test Suite: 3 Tests
Timed Code Compilation: 2 Tests
C/C++ Compiler Tests: 12 Tests
CPU Massive: 19 Tests
Creator Workloads: 13 Tests
Database Test Suite: 5 Tests
Encoding: 4 Tests
Fortran Tests: 4 Tests
HPC - High Performance Computing: 15 Tests
Imaging: 3 Tests
Common Kernel Benchmarks: 2 Tests
Machine Learning: 8 Tests
Molecular Dynamics: 3 Tests
MPI Benchmarks: 4 Tests
Multi-Core: 15 Tests
NVIDIA GPU Compute: 3 Tests
OpenMPI Tests: 4 Tests
Programmer / Developer System Benchmarks: 3 Tests
Python: 2 Tests
Scientific Computing: 6 Tests
Server: 6 Tests
Server CPU Tests: 11 Tests
Single-Threaded: 6 Tests
Speech: 2 Tests
Telephony: 2 Tests
Video Encoding: 4 Tests
Common Workstation Benchmarks: 2 Tests

Test Runs

EPYC 7F72 - December 10 2020 - Test Duration: 12 Hours, 13 Minutes
AMD 7F72 - December 11 2020 - Test Duration: 11 Hours, 50 Minutes
AMD EPYC 7F72 - December 11 2020 - Test Duration: 11 Hours, 39 Minutes
AMD EPYC 7F72 2P - December 16 2020 - Test Duration: 1 Day, 26 Minutes
EPYC 7F72 2P - December 17 2020 - Test Duration: 23 Hours, 54 Minutes
7F72 2P - December 18 2020 - Test Duration: 1 Day, 4 Hours, 52 Minutes
Average Test Duration: 18 Hours, 49 Minutes


System Details

EPYC 7F72 / AMD 7F72 / AMD EPYC 7F72 (single-socket runs):
  Processor: AMD EPYC 7F72 24-Core @ 3.20GHz (24 Cores / 48 Threads)
  Memory: 64GB
  Graphics: llvmpipe

AMD EPYC 7F72 2P / EPYC 7F72 2P / 7F72 2P (dual-socket runs):
  Processor: 2 x AMD EPYC 7F72 24-Core @ 3.20GHz (48 Cores / 96 Threads)
  Memory: 126GB
  Graphics: ASPEED

Common to all runs:
  Motherboard: Supermicro H11DSi-NT v2.00 (2.1 BIOS)
  Chipset: AMD Starship/Matisse
  Disk: 1000GB Western Digital WD_BLACK SN850 1TB
  Monitor: VE228
  Network: 2 x Intel 10G X550T
  OS: Ubuntu 20.10
  Kernel: 5.8.0-29-generic (x86_64)
  Desktop: GNOME Shell 3.38.1
  Display Server: X Server 1.20.9
  Display Driver: modesetting 1.20.9
  OpenGL: 4.5 Mesa 20.2.1 (LLVM 11.0.0 256 bits)
  Compiler: GCC 10.2.0
  File-System: ext4
  Screen Resolution: 1920x1080

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Disk Details: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
Processor Details: Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled) - CPU Microcode: 0x8301034
Python Details: Python 3.8.6
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite; relative performance across the six configurations, scale 100% to 251%): NCNN, LevelDB, High Performance Conjugate Gradient, NAMD, BRL-CAD, GROMACS, Stockfish, asmFish, LAMMPS Molecular Dynamics Simulator, FFTE, Timed Linux Kernel Compilation, Kvazaar, KeyDB, Timed LLVM Compilation, oneDNN, Timed HMMer Search, AI Benchmark Alpha, Hugin, Basis Universal, x265, InfluxDB, LibRaw, Mlpack Benchmark, Redis, x264, PostgreSQL pgbench, HPC Challenge, Timed Clash Compilation, TNN, BYTE Unix Benchmark, rav1e, LZ4 Compression, Numpy Benchmark, PHPBench, Crafty, WebP Image Encode, Hierarchical INTegration, RNNoise, eSpeak-NG Speech Engine, TensorFlow Lite.

[Condensed result table: per-test values for all six configurations across the full test set; the individual test results are broken out below.]

oneDNN

oneDNN 2.0 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better)
  EPYC 7F72:        2.778150  (SE +/- 0.011681, N = 3)
  AMD 7F72:         2.750040  (SE +/- 0.017798, N = 3)
  AMD EPYC 7F72:    2.782140  (SE +/- 0.013730, N = 3)
  AMD EPYC 7F72 2P: 0.784652  (SE +/- 0.007882, N = 6)
  EPYC 7F72 2P:     0.816306  (SE +/- 0.009922, N = 3)
  7F72 2P:          0.891776  (SE +/- 0.008788, N = 3)

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files though they can be modified. Learn more via the OpenBenchmarking.org test page.
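As a rough guide to what a few of those components exercise, the DGEMM, STREAM (Triad), and RandomAccess kernels reduce to well-known operations; they are sketched below in standard notation, with sizes and constants left to whatever the HPCC input file selects:

    % DGEMM: dense matrix-matrix multiply-accumulate
    C \leftarrow \alpha A B + \beta C

    % STREAM Triad: memory-bandwidth-bound vector update
    a_i \leftarrow b_i + q \, c_i

    % RandomAccess: XOR updates to pseudo-randomly chosen entries of a large
    % table T of n words, reported as giga-updates per second (GUP/s)
    T[k \bmod n] \leftarrow T[k \bmod n] \oplus k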

HPC Challenge 1.5.0 - Test / Class: Random Ring Bandwidth (GB/s, more is better)
  EPYC 7F72:        2.71076  (SE +/- 0.01911, N = 3)
  AMD 7F72:         2.72515  (SE +/- 0.06549, N = 3)
  AMD EPYC 7F72:    2.73035  (SE +/- 0.03283, N = 3)
  AMD EPYC 7F72 2P: 0.80167  (SE +/- 0.00576, N = 3)
  EPYC 7F72 2P:     0.82649  (SE +/- 0.02292, N = 3)
  7F72 2P:          0.80261  (SE +/- 0.00700, N = 3)

oneDNN

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better)
  EPYC 7F72:        2.849360  (SE +/- 0.008471, N = 3)
  AMD 7F72:         2.903420  (SE +/- 0.008767, N = 3)
  AMD EPYC 7F72:    2.836480  (SE +/- 0.018414, N = 3)
  AMD EPYC 7F72 2P: 0.860679  (SE +/- 0.011932, N = 3)
  EPYC 7F72 2P:     0.868864  (SE +/- 0.002168, N = 3)
  7F72 2P:          0.928138  (SE +/- 0.005045, N = 3)

LevelDB

LevelDB is a key-value storage library developed by Google that supports Snappy data compression among other modern features. Learn more via the OpenBenchmarking.org test page.
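As a minimal sketch of the kind of operations the fill, overwrite, and read benchmarks below time in bulk (the figures come from LevelDB's own db_bench-style workloads; the database path and key/value used here are arbitrary):

    // Open a LevelDB database with Snappy compression, write one key, read it back.
    #include <cassert>
    #include <string>
    #include "leveldb/db.h"

    int main() {
      leveldb::Options options;
      options.create_if_missing = true;
      options.compression = leveldb::kSnappyCompression;  // Snappy, as noted above

      leveldb::DB* db = nullptr;
      leveldb::Status s = leveldb::DB::Open(options, "/tmp/leveldb-example", &db);
      assert(s.ok());

      // One write and one read; the benchmarks repeat operations like these
      // many times and report microseconds per op or MB/s.
      s = db->Put(leveldb::WriteOptions(), "key1", "value1");
      assert(s.ok());

      std::string value;
      s = db->Get(leveldb::ReadOptions(), "key1", &value);
      assert(s.ok() && value == "value1");

      delete db;
      return 0;
    }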

LevelDB 1.22 - Benchmark: Random Delete (Microseconds Per Op, fewer is better)
  EPYC 7F72:        209.85  (SE +/- 0.57, N = 3)
  AMD 7F72:         210.65  (SE +/- 0.41, N = 3)
  AMD EPYC 7F72:    210.33  (SE +/- 0.20, N = 3)
  AMD EPYC 7F72 2P: 690.96  (SE +/- 1.87, N = 3)
  EPYC 7F72 2P:     690.87  (SE +/- 7.14, N = 3)
  7F72 2P:          670.02  (SE +/- 3.35, N = 3)

LevelDB 1.22 - Benchmark: Sequential Fill (Microseconds Per Op, fewer is better)
  EPYC 7F72:        220.33  (SE +/- 0.18, N = 3)
  AMD 7F72:         220.80  (SE +/- 0.58, N = 3)
  AMD EPYC 7F72:    220.87  (SE +/- 0.16, N = 3)
  AMD EPYC 7F72 2P: 707.54  (SE +/- 0.53, N = 3)
  EPYC 7F72 2P:     710.33  (SE +/- 2.35, N = 3)
  7F72 2P:          702.12  (SE +/- 2.29, N = 3)

LevelDB 1.22 - Benchmark: Overwrite (Microseconds Per Op, fewer is better)
  EPYC 7F72:        229.12  (SE +/- 0.36, N = 3)
  AMD 7F72:         228.70  (SE +/- 0.71, N = 3)
  AMD EPYC 7F72:    228.71  (SE +/- 0.57, N = 3)
  AMD EPYC 7F72 2P: 736.38  (SE +/- 1.17, N = 3)
  EPYC 7F72 2P:     736.56  (SE +/- 0.59, N = 3)
  7F72 2P:          737.23  (SE +/- 0.43, N = 3)

LevelDB 1.22 - Benchmark: Random Fill (Microseconds Per Op, fewer is better)
  EPYC 7F72:        228.04  (SE +/- 0.24, N = 3)
  AMD 7F72:         228.61  (SE +/- 0.18, N = 3)
  AMD EPYC 7F72:    228.32  (SE +/- 0.12, N = 3)
  AMD EPYC 7F72 2P: 731.88  (SE +/- 1.35, N = 3)
  EPYC 7F72 2P:     734.82  (SE +/- 2.04, N = 3)
  7F72 2P:          733.82  (SE +/- 3.93, N = 3)

oneDNN

oneDNN 2.0 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  EPYC 7F72:        6.37725  (SE +/- 0.00849, N = 3)
  AMD 7F72:         6.41390  (SE +/- 0.01141, N = 3)
  AMD EPYC 7F72:    6.32884  (SE +/- 0.03710, N = 3)
  AMD EPYC 7F72 2P: 2.02580  (SE +/- 0.01946, N = 3)
  EPYC 7F72 2P:     1.99172  (SE +/- 0.01945, N = 3)
  7F72 2P:          2.10700  (SE +/- 0.01009, N = 3)

LevelDB

LevelDB 1.22 - Benchmark: Seek Random (Microseconds Per Op, fewer is better)
  EPYC 7F72:        64.34   (SE +/- 0.66, N = 3)
  AMD 7F72:         63.75   (SE +/- 0.14, N = 3)
  AMD EPYC 7F72:    64.87   (SE +/- 0.52, N = 3)
  AMD EPYC 7F72 2P: 191.74  (SE +/- 1.65, N = 3)
  EPYC 7F72 2P:     191.13  (SE +/- 0.59, N = 3)
  7F72 2P:          189.30  (SE +/- 0.57, N = 3)

LevelDB 1.22 - Benchmark: Random Read (Microseconds Per Op, fewer is better)
  EPYC 7F72:        39.87   (SE +/- 0.18, N = 3)
  AMD 7F72:         40.38   (SE +/- 0.44, N = 4)
  AMD EPYC 7F72:    40.03   (SE +/- 0.21, N = 3)
  AMD EPYC 7F72 2P: 114.90  (SE +/- 1.04, N = 15)
  EPYC 7F72 2P:     112.08  (SE +/- 0.91, N = 3)
  7F72 2P:          113.38  (SE +/- 1.08, N = 15)

LevelDB 1.22 - Benchmark: Hot Read (Microseconds Per Op, fewer is better)
  EPYC 7F72:        39.77   (SE +/- 0.13, N = 3)
  AMD 7F72:         40.99   (SE +/- 0.50, N = 4)
  AMD EPYC 7F72:    40.62   (SE +/- 0.14, N = 3)
  AMD EPYC 7F72 2P: 112.50  (SE +/- 1.04, N = 15)
  EPYC 7F72 2P:     112.70  (SE +/- 1.54, N = 3)
  7F72 2P:          112.40  (SE +/- 1.40, N = 3)

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench to drive the database workloads. Learn more via the OpenBenchmarking.org test page.
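The read-only results below correspond to pgbench's built-in select-only script, which is essentially a single-row lookup against the pgbench_accounts table. A rough libpq sketch of that style of query is shown here; the connection string and account id are placeholders, not values from this result file:

    // Issue one pgbench-style read-only query against PostgreSQL via libpq.
    #include <cstdio>
    #include <libpq-fe.h>

    int main() {
      PGconn* conn = PQconnectdb("dbname=pgbench_test");  // placeholder DSN
      if (PQstatus(conn) != CONNECTION_OK) {
        std::fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
      }

      // The select-only workload repeats lookups like this across many clients
      // and reports transactions per second plus average latency.
      PGresult* res = PQexec(conn,
          "SELECT abalance FROM pgbench_accounts WHERE aid = 1;");
      if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1)
        std::printf("abalance = %s\n", PQgetvalue(res, 0, 0));

      PQclear(res);
      PQfinish(conn);
      return 0;
    }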

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 100 - Mode: Read Only (TPS, more is better)
  EPYC 7F72:        698452   (SE +/- 1961.28, N = 3)
  AMD 7F72:         698778   (SE +/- 755.05, N = 3)
  AMD EPYC 7F72:    695344   (SE +/- 1176.24, N = 3)
  AMD EPYC 7F72 2P: 1413247  (SE +/- 18458.00, N = 3)
  EPYC 7F72 2P:     1407271  (SE +/- 10614.63, N = 12)
  7F72 2P:          1388465  (SE +/- 12723.13, N = 3)

PostgreSQL pgbench 13.0 - Scaling Factor: 1 - Clients: 100 - Mode: Read Only - Average Latency (ms, fewer is better)
  EPYC 7F72:        0.143  (SE +/- 0.000, N = 3)
  AMD 7F72:         0.143  (SE +/- 0.000, N = 3)
  AMD EPYC 7F72:    0.144  (SE +/- 0.000, N = 3)
  AMD EPYC 7F72 2P: 0.071  (SE +/- 0.001, N = 3)
  EPYC 7F72 2P:     0.071  (SE +/- 0.001, N = 12)
  7F72 2P:          0.072  (SE +/- 0.001, N = 3)

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a newer scientific benchmark from Sandia National Laboratories focused on supercomputer testing with modern, real-world workloads compared to HPCC. Learn more via the OpenBenchmarking.org test page.
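For context, the conjugate gradient recurrence that gives the benchmark its name is the textbook iteration below for solving Ax = b with a symmetric positive-definite sparse matrix A (HPCG itself runs a preconditioned variant of this iteration):

    r_0 = b - A x_0, \qquad p_0 = r_0
    \alpha_k = \frac{r_k^{\top} r_k}{p_k^{\top} A p_k}
    x_{k+1} = x_k + \alpha_k p_k
    r_{k+1} = r_k - \alpha_k A p_k
    \beta_k = \frac{r_{k+1}^{\top} r_{k+1}}{r_k^{\top} r_k}
    p_{k+1} = r_{k+1} + \beta_k p_k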

High Performance Conjugate Gradient 3.1 (GFLOP/s, more is better)
  EPYC 7F72:        15.00  (SE +/- 0.25, N = 12)
  AMD 7F72:         14.97  (SE +/- 0.32, N = 12)
  AMD EPYC 7F72:    15.45  (SE +/- 0.17, N = 3)
  AMD EPYC 7F72 2P: 30.35  (SE +/- 0.30, N = 3)
  EPYC 7F72 2P:     30.19  (SE +/- 0.02, N = 3)
  7F72 2P:          28.12  (SE +/- 0.41, N = 12)

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.

NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, fewer is better)
  EPYC 7F72:        0.88169  (SE +/- 0.00562, N = 3)
  AMD 7F72:         0.87532  (SE +/- 0.00402, N = 3)
  AMD EPYC 7F72:    0.89277  (SE +/- 0.01077, N = 3)
  AMD EPYC 7F72 2P: 0.44651  (SE +/- 0.00014, N = 3)
  EPYC 7F72 2P:     0.44642  (SE +/- 0.00027, N = 3)
  7F72 2P:          0.45475  (SE +/- 0.00250, N = 3)

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: Rhodopsin Protein (ns/day, more is better)
  EPYC 7F72:        12.13  (SE +/- 0.08, N = 3)
  AMD 7F72:         11.70  (SE +/- 0.07, N = 3)
  AMD EPYC 7F72:    11.71  (SE +/- 0.02, N = 3)
  AMD EPYC 7F72 2P: 22.57  (SE +/- 0.43, N = 15)
  EPYC 7F72 2P:     22.39  (SE +/- 0.25, N = 3)
  7F72 2P:          19.96  (SE +/- 0.22, N = 5)

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.30.8 - VGR Performance Metric (more is better)
  EPYC 7F72:        337924
  AMD 7F72:         332394
  AMD EPYC 7F72:    341885
  AMD EPYC 7F72 2P: 632495
  EPYC 7F72 2P:     638663
  7F72 2P:          629706

GROMACS

This is a test of the GROMACS (GROningen MAchine for Chemical Simulations) molecular dynamics package, run on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.3 - Water Benchmark (Ns Per Day, more is better)
  EPYC 7F72:        2.847  (SE +/- 0.004, N = 3)
  AMD 7F72:         2.830  (SE +/- 0.009, N = 3)
  AMD EPYC 7F72:    2.846  (SE +/- 0.003, N = 3)
  AMD EPYC 7F72 2P: 5.287  (SE +/- 0.004, N = 3)
  EPYC 7F72 2P:     5.278  (SE +/- 0.007, N = 3)
  7F72 2P:          5.251  (SE +/- 0.007, N = 3)

Stockfish

This is a test of Stockfish, an advanced C++11 chess benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.

Stockfish 12 - Total Time (Nodes Per Second, more is better)
  EPYC 7F72:        52597544  (SE +/- 458927.44, N = 15)
  AMD 7F72:         53421517  (SE +/- 678703.71, N = 3)
  AMD EPYC 7F72:    52704023  (SE +/- 356097.54, N = 15)
  AMD EPYC 7F72 2P: 98232488  (SE +/- 946054.05, N = 3)
  EPYC 7F72 2P:     97582753  (SE +/- 724949.76, N = 3)
  7F72 2P:          97376595  (SE +/- 639295.30, N = 3)

asmFish

This is a test of asmFish, an advanced chess benchmark written in Assembly. Learn more via the OpenBenchmarking.org test page.

asmFish 2018-07-23 - 1024 Hash Memory, 26 Depth (Nodes/second, more is better)
  EPYC 7F72:        64349837   (SE +/- 282969.84, N = 3)
  AMD 7F72:         64448741   (SE +/- 380311.53, N = 3)
  AMD EPYC 7F72:    63564766   (SE +/- 644607.06, N = 3)
  AMD EPYC 7F72 2P: 115398550  (SE +/- 412071.14, N = 3)
  EPYC 7F72 2P:     116524954  (SE +/- 422109.99, N = 3)
  7F72 2P:          116189232  (SE +/- 1665400.53, N = 3)

HPC Challenge

HPC Challenge 1.5.0 - Test / Class: Random Ring Latency (usecs, fewer is better)
  EPYC 7F72:        1.16869  (SE +/- 0.00865, N = 3)
  AMD 7F72:         1.16276  (SE +/- 0.01886, N = 3)
  AMD EPYC 7F72:    1.15717  (SE +/- 0.01464, N = 3)
  AMD EPYC 7F72 2P: 2.02418  (SE +/- 0.01426, N = 3)
  EPYC 7F72 2P:     2.05512  (SE +/- 0.03528, N = 3)
  7F72 2P:          2.04033  (SE +/- 0.01822, N = 3)

FFTE

FFTE is a package by Daisuke Takahashi to compute Discrete Fourier Transforms of 1-, 2- and 3-dimensional sequences of length (2^p)*(3^q)*(5^r). Learn more via the OpenBenchmarking.org test page.
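For reference, the underlying 1-D transform (applied along each axis for the 2-D and 3-D cases) is the standard discrete Fourier transform, here restricted to lengths N = (2^p)*(3^q)*(5^r):

    X_k = \sum_{n=0}^{N-1} x_n \, e^{-2\pi i k n / N}, \qquad k = 0, 1, \dots, N-1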

FFTE 7.0 - N=256, 3D Complex FFT Routine (MFLOPS, more is better)
  EPYC 7F72:        111181.74  (SE +/- 1183.09, N = 3)
  AMD 7F72:         111004.66  (SE +/- 1197.97, N = 4)
  AMD EPYC 7F72:    113220.20  (SE +/- 1183.24, N = 3)
  AMD EPYC 7F72 2P: 186579.07  (SE +/- 2264.92, N = 4)
  EPYC 7F72 2P:     185994.10  (SE +/- 2641.44, N = 3)
  7F72 2P:          163435.02  (SE +/- 2147.54, N = 15)

yquake2

This is a test of Yamagi Quake II. Yamagi Quake II is an enhanced client for id Software's Quake II with a focus on offline and co-op gameplay. Learn more via the OpenBenchmarking.org test page.

yquake2 7.45 - Renderer: Software CPU - Resolution: 1920 x 1080 (Frames Per Second, more is better)
  EPYC 7F72:     14.5  (SE +/- 0.00, N = 3)
  AMD 7F72:      23.7  (SE +/- 0.09, N = 3)
  AMD EPYC 7F72: 14.5  (SE +/- 0.03, N = 3)

LevelDB

LevelDB 1.22 - Benchmark: Sequential Fill (MB/s, more is better)
  EPYC 7F72:        24.1
  AMD 7F72:         24.0
  AMD EPYC 7F72:    24.0
  AMD EPYC 7F72 2P: 15.0
  EPYC 7F72 2P:     14.9
  7F72 2P:          15.1

LevelDB 1.22 - Benchmark: Overwrite (MB/s, more is better)
  EPYC 7F72:        23.1  (SE +/- 0.03, N = 3)
  AMD 7F72:         23.2  (SE +/- 0.06, N = 3)
  AMD EPYC 7F72:    23.2  (SE +/- 0.06, N = 3)
  AMD EPYC 7F72 2P: 14.4  (SE +/- 0.03, N = 3)
  EPYC 7F72 2P:     14.4  (SE +/- 0.00, N = 3)
  7F72 2P:          14.4  (SE +/- 0.00, N = 3)

oneDNN

oneDNN 2.0 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  EPYC 7F72:        2.30961  (SE +/- 0.00115, N = 3)
  AMD 7F72:         2.30633  (SE +/- 0.00978, N = 3)
  AMD EPYC 7F72:    2.29672  (SE +/- 0.00103, N = 3)
  AMD EPYC 7F72 2P: 1.43671  (SE +/- 0.00707, N = 3)
  EPYC 7F72 2P:     1.46841  (SE +/- 0.01842, N = 15)
  7F72 2P:          1.50120  (SE +/- 0.01121, N = 15)

LevelDB

LevelDB 1.22 - Benchmark: Random Fill (MB/s, more is better)
  EPYC 7F72:        23.3  (SE +/- 0.03, N = 3)
  AMD 7F72:         23.2  (SE +/- 0.00, N = 3)
  AMD EPYC 7F72:    23.2  (SE +/- 0.00, N = 3)
  AMD EPYC 7F72 2P: 14.5  (SE +/- 0.00, N = 3)
  EPYC 7F72 2P:     14.5  (SE +/- 0.03, N = 3)
  7F72 2P:          14.5  (SE +/- 0.07, N = 3)

oneDNN

oneDNN 2.0 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  EPYC 7F72:        1.402350  (SE +/- 0.009113, N = 3)
  AMD 7F72:         1.386510  (SE +/- 0.005378, N = 3)
  AMD EPYC 7F72:    1.403460  (SE +/- 0.005195, N = 3)
  AMD EPYC 7F72 2P: 0.881752  (SE +/- 0.004844, N = 3)
  EPYC 7F72 2P:     0.890333  (SE +/- 0.001013, N = 3)
  7F72 2P:          0.959904  (SE +/- 0.007240, N = 3)

LAMMPS Molecular Dynamics Simulator

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: 20k Atoms (ns/day, more is better)
  EPYC 7F72:        15.79  (SE +/- 0.03, N = 3)
  AMD 7F72:         15.82  (SE +/- 0.07, N = 3)
  AMD EPYC 7F72:    15.78  (SE +/- 0.01, N = 3)
  AMD EPYC 7F72 2P: 24.92  (SE +/- 0.04, N = 3)
  EPYC 7F72 2P:     24.92  (SE +/- 0.06, N = 3)
  7F72 2P:          24.55  (SE +/- 0.09, N = 3)

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and is developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Slow (Frames Per Second, more is better)
  EPYC 7F72:        36.21  (SE +/- 0.07, N = 3)
  AMD 7F72:         36.09  (SE +/- 0.06, N = 3)
  AMD EPYC 7F72:    36.15  (SE +/- 0.02, N = 3)
  AMD EPYC 7F72 2P: 56.46  (SE +/- 0.05, N = 3)
  EPYC 7F72 2P:     56.29  (SE +/- 0.06, N = 3)
  7F72 2P:          56.43  (SE +/- 0.05, N = 3)

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile measures the average inference time. Learn more via the OpenBenchmarking.org test page.
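A minimal sketch of how an average inference time can be measured with the TensorFlow Lite C++ API is shown below; the model path and iteration count are placeholders, and the test profile's own harness may differ in its details:

    // Load a .tflite model, run it repeatedly, and report the mean latency.
    #include <chrono>
    #include <cstdio>
    #include <memory>
    #include "tensorflow/lite/interpreter.h"
    #include "tensorflow/lite/kernels/register.h"
    #include "tensorflow/lite/model.h"

    int main() {
      // Placeholder model file; any of the models benchmarked here would do.
      auto model = tflite::FlatBufferModel::BuildFromFile("mobilenet_float.tflite");
      if (!model) return 1;

      tflite::ops::builtin::BuiltinOpResolver resolver;
      std::unique_ptr<tflite::Interpreter> interpreter;
      tflite::InterpreterBuilder(*model, resolver)(&interpreter);
      if (!interpreter || interpreter->AllocateTensors() != kTfLiteOk) return 1;

      constexpr int kRuns = 50;  // placeholder iteration count
      const auto start = std::chrono::steady_clock::now();
      for (int i = 0; i < kRuns; ++i)
        interpreter->Invoke();  // one forward pass over the default input tensors
      const auto stop = std::chrono::steady_clock::now();

      const auto total_us =
          std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count();
      std::printf("average inference time: %lld us\n",
                  static_cast<long long>(total_us / kRuns));
      return 0;
    }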

TensorFlow Lite 2020-08-23 - Model: Inception ResNet V2 (Microseconds, fewer is better)
  EPYC 7F72:        1175663  (SE +/- 2727.68, N = 3)
  AMD 7F72:         1179913  (SE +/- 2105.81, N = 3)
  AMD EPYC 7F72:    1178590  (SE +/- 3330.08, N = 3)
  AMD EPYC 7F72 2P: 758566   (SE +/- 2776.56, N = 3)
  EPYC 7F72 2P:     754906   (SE +/- 2200.08, N = 3)
  7F72 2P:          823112   (SE +/- 7122.24, N = 3)

Kvazaar

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Medium (Frames Per Second, more is better)
  EPYC 7F72:        37.14  (SE +/- 0.08, N = 3)
  AMD 7F72:         36.97  (SE +/- 0.01, N = 3)
  AMD EPYC 7F72:    37.08  (SE +/- 0.02, N = 3)
  AMD EPYC 7F72 2P: 57.56  (SE +/- 0.06, N = 3)
  EPYC 7F72 2P:     57.37  (SE +/- 0.18, N = 3)
  7F72 2P:          57.31  (SE +/- 0.18, N = 3)

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

Basis Universal 1.12 - Settings: UASTC Level 3 (Seconds, fewer is better)
  EPYC 7F72:        27.18  (SE +/- 0.01, N = 3)
  AMD 7F72:         27.29  (SE +/- 0.02, N = 3)
  AMD EPYC 7F72:    27.21  (SE +/- 0.02, N = 3)
  AMD EPYC 7F72 2P: 17.66  (SE +/- 0.05, N = 3)
  EPYC 7F72 2P:     17.60  (SE +/- 0.06, N = 3)
  7F72 2P:          17.70  (SE +/- 0.06, N = 3)

TensorFlow Lite

TensorFlow Lite 2020-08-23 - Model: Inception V4 (Microseconds, fewer is better)
  EPYC 7F72:        1342877  (SE +/- 3318.77, N = 3)
  AMD 7F72:         1347400  (SE +/- 2838.53, N = 3)
  AMD EPYC 7F72:    1344893  (SE +/- 3249.51, N = 3)
  AMD EPYC 7F72 2P: 871018   (SE +/- 2614.29, N = 3)
  EPYC 7F72 2P:     872809   (SE +/- 3825.03, N = 3)
  7F72 2P:          963296   (SE +/- 8354.40, N = 15)

Kvazaar

Kvazaar 2.0 - Video Input: Bosphorus 1080p - Video Preset: Very Fast (Frames Per Second, more is better)
  EPYC 7F72:        83.06   (SE +/- 0.22, N = 3)
  AMD 7F72:         82.46   (SE +/- 0.08, N = 3)
  AMD EPYC 7F72:    83.29   (SE +/- 0.09, N = 3)
  AMD EPYC 7F72 2P: 126.24  (SE +/- 0.48, N = 3)
  EPYC 7F72 2P:     126.79  (SE +/- 0.83, N = 3)
  7F72 2P:          126.88  (SE +/- 0.61, N = 3)

Timed Linux Kernel Compilation

This test times how long it takes to build the Linux kernel in a default configuration. Learn more via the OpenBenchmarking.org test page.

Timed Linux Kernel Compilation 5.4 - Time To Compile (Seconds, fewer is better)
  EPYC 7F72:        38.86  (SE +/- 0.47, N = 3)
  AMD 7F72:         39.05  (SE +/- 0.41, N = 5)
  AMD EPYC 7F72:    38.90  (SE +/- 0.37, N = 6)
  AMD EPYC 7F72 2P: 25.70  (SE +/- 0.16, N = 13)
  EPYC 7F72 2P:     25.80  (SE +/- 0.16, N = 13)
  7F72 2P:          25.95  (SE +/- 0.16, N = 13)

oneDNN

oneDNN 2.0 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  EPYC 7F72:        5.30320  (SE +/- 0.01527, N = 3)
  AMD 7F72:         5.35950  (SE +/- 0.02370, N = 3)
  AMD EPYC 7F72:    5.36330  (SE +/- 0.03533, N = 3)
  AMD EPYC 7F72 2P: 5.49220  (SE +/- 0.51653, N = 14)
  EPYC 7F72 2P:     4.87523  (SE +/- 0.48144, N = 12)
  7F72 2P:          3.65062  (SE +/- 0.04554, N = 3)

TensorFlow Lite

TensorFlow Lite 2020-08-23 - Model: Mobilenet Float (Microseconds, fewer is better)
  EPYC 7F72:        60165.3  (SE +/- 43.27, N = 3)
  AMD 7F72:         59959.6  (SE +/- 39.74, N = 3)
  AMD EPYC 7F72:    60331.6  (SE +/- 126.43, N = 3)
  AMD EPYC 7F72 2P: 40228.6  (SE +/- 154.16, N = 3)
  EPYC 7F72 2P:     40591.0  (SE +/- 383.17, N = 3)
  7F72 2P:          45501.8  (SE +/- 394.42, N = 3)

Kvazaar

Kvazaar 2.0 - Video Input: Bosphorus 4K - Video Preset: Slow (Frames Per Second, more is better)
  EPYC 7F72:        10.73  (SE +/- 0.01, N = 3)
  AMD 7F72:         10.70  (SE +/- 0.01, N = 3)
  AMD EPYC 7F72:    10.73  (SE +/- 0.01, N = 3)
  AMD EPYC 7F72 2P: 15.90  (SE +/- 0.03, N = 3)
  EPYC 7F72 2P:     15.94  (SE +/- 0.05, N = 3)
  7F72 2P:          15.83  (SE +/- 0.01, N = 3)

Kvazaar 2.0 - Video Input: Bosphorus 4K - Video Preset: Medium (Frames Per Second, more is better)
  EPYC 7F72:        10.95  (SE +/- 0.01, N = 3)
  AMD 7F72:         10.91  (SE +/- 0.00, N = 3)
  AMD EPYC 7F72:    10.92  (SE +/- 0.01, N = 3)
  AMD EPYC 7F72 2P: 16.19  (SE +/- 0.01, N = 3)
  EPYC 7F72 2P:     16.13  (SE +/- 0.04, N = 3)
  7F72 2P:          16.04  (SE +/- 0.02, N = 3)

TensorFlow Lite

TensorFlow Lite 2020-08-23 - Model: Mobilenet Quant (Microseconds, fewer is better)
  EPYC 7F72:        61482.0  (SE +/- 90.15, N = 3)
  AMD 7F72:         61260.7  (SE +/- 111.88, N = 3)
  AMD EPYC 7F72:    61596.2  (SE +/- 119.05, N = 3)
  AMD EPYC 7F72 2P: 41754.2  (SE +/- 394.63, N = 3)
  EPYC 7F72 2P:     41926.2  (SE +/- 228.51, N = 3)
  7F72 2P:          49006.2  (SE +/- 959.33, N = 15)

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.
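As an illustration of the kind of workload the Fill Sync results below represent, here is a hedged Python sketch using the plyvel LevelDB binding with synchronous writes. The test profile itself drives LevelDB's bundled C++ benchmark, so plyvel, the database path, and the record sizes here are assumptions for illustration.

```python
# Hedged sketch of a synchronous-fill workload against LevelDB via plyvel.
# Path, key/value sizes, and op count are placeholders.
import time
import plyvel

db = plyvel.DB("/tmp/leveldb-sketch", create_if_missing=True)
n = 1000
start = time.time()
for i in range(n):
    # sync=True requests an fsync per write, the behavior "Fill Sync" measures
    db.put(f"key{i:08d}".encode(), b"x" * 100, sync=True)
elapsed = time.time() - start
db.close()
print(f"{elapsed / n * 1e6:.1f} microseconds per op")
```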

OpenBenchmarking.orgMB/s, More Is BetterLevelDB 1.22Benchmark: Fill SyncEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P246810SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.00, N = 3SE +/- 0.07, N = 15SE +/- 0.06, N = 8SE +/- 0.09, N = 34.54.54.56.66.56.41. (CXX) g++ options: -O3 -lsnappy -lpthread
OpenBenchmarking.orgMB/s, More Is BetterLevelDB 1.22Benchmark: Fill SyncEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P3691215Min: 4.5 / Avg: 4.5 / Max: 4.5Min: 4.5 / Avg: 4.5 / Max: 4.5Min: 4.5 / Avg: 4.5 / Max: 4.5Min: 6.2 / Avg: 6.61 / Max: 7.1Min: 6.3 / Avg: 6.46 / Max: 6.8Min: 6.3 / Avg: 6.43 / Max: 6.61. (CXX) g++ options: -O3 -lsnappy -lpthread

oneDNN

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPUEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P0.47030.94061.41091.88122.3515SE +/- 0.00957, N = 3SE +/- 0.00516, N = 3SE +/- 0.00056, N = 3SE +/- 0.02566, N = 4SE +/- 0.01974, N = 6SE +/- 0.01869, N = 71.437821.426451.426692.072472.030422.09022MIN: 1.33MIN: 1.33MIN: 1.32MIN: 1.66MIN: 1.66MIN: 1.741. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPUEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P246810Min: 1.42 / Avg: 1.44 / Max: 1.45Min: 1.42 / Avg: 1.43 / Max: 1.44Min: 1.43 / Avg: 1.43 / Max: 1.43Min: 2.02 / Avg: 2.07 / Max: 2.14Min: 1.96 / Avg: 2.03 / Max: 2.08Min: 2.01 / Avg: 2.09 / Max: 2.141. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPUEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P30060090012001500SE +/- 3.74, N = 3SE +/- 2.70, N = 3SE +/- 2.00, N = 3SE +/- 0.72, N = 3SE +/- 43.23, N = 12SE +/- 42.19, N = 151616.571606.751602.591109.291222.201407.98MIN: 1572.33MIN: 1564.52MIN: 1564.17MIN: 1072.44MIN: 1075.03MIN: 1169.391. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPUEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P30060090012001500Min: 1611.71 / Avg: 1616.57 / Max: 1623.92Min: 1602.3 / Avg: 1606.75 / Max: 1611.63Min: 1598.68 / Avg: 1602.59 / Max: 1605.27Min: 1108.32 / Avg: 1109.29 / Max: 1110.71Min: 1107.23 / Avg: 1222.2 / Max: 1547.46Min: 1278.47 / Avg: 1407.98 / Max: 1825.891. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files though they can be modified. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGUP/s, More Is BetterHPC Challenge 1.5.0Test / Class: G-Random AccessEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P0.00990.01980.02970.03960.0495SE +/- 0.00015, N = 3SE +/- 0.00105, N = 3SE +/- 0.00070, N = 3SE +/- 0.00017, N = 3SE +/- 0.00002, N = 3SE +/- 0.00060, N = 30.031440.030510.030370.043830.043960.042791. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3
OpenBenchmarking.orgGUP/s, More Is BetterHPC Challenge 1.5.0Test / Class: G-Random AccessEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P12345Min: 0.03 / Avg: 0.03 / Max: 0.03Min: 0.03 / Avg: 0.03 / Max: 0.03Min: 0.03 / Avg: 0.03 / Max: 0.03Min: 0.04 / Avg: 0.04 / Max: 0.04Min: 0.04 / Avg: 0.04 / Max: 0.04Min: 0.04 / Avg: 0.04 / Max: 0.041. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3

KeyDB

A benchmark of KeyDB as a multi-threaded fork of the Redis server. The KeyDB benchmark is conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.
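Because KeyDB speaks the Redis protocol, a standard Redis client can drive it; the hedged Python sketch below does so with redis-py. Host, port, and key names are placeholders, and the test profile itself generates load with memtier-benchmark rather than a Python client.

```python
# Hedged sketch: drive a KeyDB instance with the redis-py client
# (KeyDB is Redis-protocol compatible). Host/port and keys are placeholders.
import time
import redis

r = redis.Redis(host="localhost", port=6379)
ops = 10_000
start = time.time()
for i in range(ops):
    r.set(f"key:{i}", "value")
    r.get(f"key:{i}")
elapsed = time.time() - start
print(f"~{2 * ops / elapsed:.0f} ops/sec from a single synchronous client")
```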

OpenBenchmarking.orgOps/sec, More Is BetterKeyDB 6.0.16EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P90K180K270K360K450KSE +/- 1068.73, N = 3SE +/- 4696.92, N = 3SE +/- 2330.52, N = 3SE +/- 2893.10, N = 15SE +/- 3213.58, N = 5SE +/- 3846.80, N = 3424090.96426284.57424640.30296614.54301799.88308314.451. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.orgOps/sec, More Is BetterKeyDB 6.0.16EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P70K140K210K280K350KMin: 422532.63 / Avg: 424090.96 / Max: 426137.12Min: 417825.88 / Avg: 426284.57 / Max: 434052.42Min: 421052.5 / Avg: 424640.3 / Max: 429010.97Min: 275740.68 / Avg: 296614.54 / Max: 315706.87Min: 289079.47 / Avg: 301799.88 / Max: 306380.64Min: 300885.05 / Avg: 308314.45 / Max: 313760.221. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Very FastEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P816243240SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.03, N = 3SE +/- 0.08, N = 3SE +/- 0.11, N = 3SE +/- 0.11, N = 324.0023.9823.9334.3834.3133.841. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Very FastEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P714212835Min: 23.97 / Avg: 24 / Max: 24.04Min: 23.93 / Avg: 23.98 / Max: 24.01Min: 23.88 / Avg: 23.93 / Max: 23.98Min: 34.21 / Avg: 34.38 / Max: 34.46Min: 34.11 / Avg: 34.31 / Max: 34.48Min: 33.64 / Avg: 33.84 / Max: 34.011. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.
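The library's documented entry point is roughly as sketched below; the class name and result attributes are assumptions based on the upstream ai-benchmark package and may differ from what this test profile wraps.

```python
# Hedged sketch of invoking the ai_benchmark package directly.
# Class and attribute names are assumptions from the library's documentation.
from ai_benchmark import AIBenchmark

benchmark = AIBenchmark()
results = benchmark.run()          # runs both inference and training workloads
print(results.inference_score)     # attribute names assumed, not verified here
print(results.training_score)
print(results.ai_score)
```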

OpenBenchmarking.orgScore, More Is BetterAI Benchmark Alpha 0.1.2Device Training ScoreEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P30060090012001500151315231520106310781075

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: NASNet MobileEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P30K60K90K120K150KSE +/- 165.43, N = 3SE +/- 239.98, N = 3SE +/- 60.45, N = 3SE +/- 1327.95, N = 15SE +/- 1433.11, N = 15SE +/- 6473.79, N = 15104517103641104362117648115976148263
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: NASNet MobileEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P30K60K90K120K150KMin: 104228 / Avg: 104517 / Max: 104801Min: 103196 / Avg: 103641.33 / Max: 104019Min: 104243 / Avg: 104361.67 / Max: 104441Min: 110090 / Avg: 117648.47 / Max: 125902Min: 107477 / Avg: 115975.6 / Max: 126859Min: 125344 / Avg: 148262.93 / Max: 215131

Timed LLVM Compilation

This test times how long it takes to build the LLVM compiler. Learn more via the OpenBenchmarking.org test page.
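A hedged sketch of timing an LLVM release build with CMake and Ninja follows; the source checkout layout, build directory, and configure flags are placeholders and not necessarily those used by this test profile.

```python
# Hedged sketch: time a from-source LLVM build with CMake + Ninja.
# Assumes an llvm-project checkout with a sibling build/ directory;
# paths and flags are placeholders, not the test profile's exact setup.
import subprocess
import time

subprocess.run(
    ["cmake", "-G", "Ninja", "-DCMAKE_BUILD_TYPE=Release", "../llvm"],
    cwd="build", check=True,
)

start = time.time()
subprocess.run(["ninja"], cwd="build", check=True)
print(f"Time To Compile: {time.time() - start:.2f} seconds")
```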

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 10.0Time To CompileEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P60120180240300SE +/- 1.69, N = 3SE +/- 2.80, N = 3SE +/- 4.06, N = 3SE +/- 1.26, N = 3SE +/- 0.63, N = 3SE +/- 2.37, N = 3298.15289.65294.11209.25212.02211.79
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed LLVM Compilation 10.0Time To CompileEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P50100150200250Min: 295.53 / Avg: 298.14 / Max: 301.31Min: 285.32 / Avg: 289.65 / Max: 294.89Min: 287.13 / Avg: 294.11 / Max: 301.19Min: 206.87 / Avg: 209.25 / Max: 211.17Min: 210.82 / Avg: 212.02 / Max: 212.96Min: 207.1 / Avg: 211.79 / Max: 214.71

TensorFlow Lite

This is a benchmark of the TensorFlow Lite implementation. The current Linux support is limited to running on CPUs. This test profile is measuring the average inference time. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: SqueezeNetEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P20K40K60K80K100KSE +/- 44.82, N = 3SE +/- 45.73, N = 3SE +/- 87.10, N = 3SE +/- 717.51, N = 4SE +/- 386.78, N = 3SE +/- 1496.92, N = 1589588.189393.689812.763691.363239.276438.6
OpenBenchmarking.orgMicroseconds, Fewer Is BetterTensorFlow Lite 2020-08-23Model: SqueezeNetEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P16K32K48K64K80KMin: 89511.6 / Avg: 89588.07 / Max: 89666.8Min: 89313.9 / Avg: 89393.57 / Max: 89472.3Min: 89694.8 / Avg: 89812.7 / Max: 89982.7Min: 62487.4 / Avg: 63691.25 / Max: 65664.9Min: 62654.9 / Avg: 63239.23 / Max: 63970.4Min: 69182.9 / Avg: 76438.62 / Max: 86863.9

oneDNN

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPUEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P0.18880.37760.56640.75520.944SE +/- 0.002974, N = 3SE +/- 0.017578, N = 12SE +/- 0.000920, N = 3SE +/- 0.002965, N = 3SE +/- 0.009511, N = 3SE +/- 0.001315, N = 30.5928040.8390250.5917020.7423230.7522900.792153MIN: 0.51MIN: 0.64MIN: 0.5MIN: 0.67MIN: 0.67MIN: 0.661. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPUEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P246810Min: 0.59 / Avg: 0.59 / Max: 0.6Min: 0.74 / Avg: 0.84 / Max: 0.93Min: 0.59 / Avg: 0.59 / Max: 0.59Min: 0.74 / Avg: 0.74 / Max: 0.75Min: 0.73 / Avg: 0.75 / Max: 0.76Min: 0.79 / Avg: 0.79 / Max: 0.791. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

LevelDB

LevelDB is a key-value storage library developed by Google that supports making use of Snappy for data compression and has other modern features. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMicroseconds Per Op, Fewer Is BetterLevelDB 1.22Benchmark: Fill SyncEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P400800120016002000SE +/- 2.23, N = 3SE +/- 4.35, N = 3SE +/- 8.92, N = 3SE +/- 17.18, N = 15SE +/- 15.97, N = 8SE +/- 19.99, N = 31168.581163.461162.711594.941630.941643.491. (CXX) g++ options: -O3 -lsnappy -lpthread
OpenBenchmarking.orgMicroseconds Per Op, Fewer Is BetterLevelDB 1.22Benchmark: Fill SyncEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P30060090012001500Min: 1164.76 / Avg: 1168.58 / Max: 1172.5Min: 1154.97 / Avg: 1163.46 / Max: 1169.36Min: 1144.88 / Avg: 1162.71 / Max: 1171.66Min: 1479.7 / Avg: 1594.94 / Max: 1697.79Min: 1535.64 / Avg: 1630.94 / Max: 1676.25Min: 1605.44 / Avg: 1643.49 / Max: 1673.121. (CXX) g++ options: -O3 -lsnappy -lpthread

oneDNN

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPUEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P30060090012001500SE +/- 2.75, N = 3SE +/- 10.75, N = 3SE +/- 3.90, N = 3SE +/- 20.62, N = 14SE +/- 20.53, N = 15SE +/- 11.72, N = 31611.211613.701611.741177.121166.561290.07MIN: 1572.74MIN: 1565.85MIN: 1572.94MIN: 1074.6MIN: 1069.01MIN: 1195.721. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPUEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P30060090012001500Min: 1606.72 / Avg: 1611.21 / Max: 1616.2Min: 1593.97 / Avg: 1613.7 / Max: 1630.96Min: 1607.18 / Avg: 1611.74 / Max: 1619.5Min: 1106.82 / Avg: 1177.12 / Max: 1397.87Min: 1105.59 / Avg: 1166.56 / Max: 1377.85Min: 1278.32 / Avg: 1290.07 / Max: 1313.511. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.
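The scikit_qda entry below presumably times scikit-learn's Quadratic Discriminant Analysis; a minimal sketch of that operation on a synthetic dataset follows (the benchmark scripts use their own datasets and driver, so the data shape here is an assumption).

```python
# Minimal sketch of what a scikit_qda benchmark exercises: fitting scikit-learn's
# QuadraticDiscriminantAnalysis. The synthetic dataset is a placeholder.
import time
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

X, y = make_classification(n_samples=50_000, n_features=40, n_informative=20,
                           n_classes=3, random_state=0)
start = time.time()
QuadraticDiscriminantAnalysis().fit(X, y)
print(f"{time.time() - start:.2f} seconds")
```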

OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_qdaEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1224364860SE +/- 0.08, N = 3SE +/- 0.18, N = 3SE +/- 0.31, N = 10SE +/- 0.64, N = 15SE +/- 0.47, N = 15SE +/- 0.53, N = 1540.0539.4939.8654.5654.1851.46
OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_qdaEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1122334455Min: 39.89 / Avg: 40.05 / Max: 40.16Min: 39.3 / Avg: 39.49 / Max: 39.84Min: 39.28 / Avg: 39.86 / Max: 42.6Min: 48.72 / Avg: 54.56 / Max: 58.79Min: 51.19 / Avg: 54.18 / Max: 56.68Min: 48.63 / Avg: 51.46 / Max: 55.73

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
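pgbench's built-in read-only mode issues one primary-key lookup on pgbench_accounts per transaction; the hedged psycopg2 sketch below reproduces that query shape. Connection details are placeholders, and the figures below come from pgbench itself, not this client.

```python
# Hedged sketch of the query behind pgbench's SELECT-only (read-only) mode.
# Connection parameters are placeholders.
import random
import time
import psycopg2

conn = psycopg2.connect(dbname="pgbench", user="postgres", host="localhost")
cur = conn.cursor()
scale = 100                      # scaling factor 100 => 100 * 100,000 accounts
n = 10_000
start = time.time()
for _ in range(n):
    aid = random.randint(1, scale * 100_000)
    cur.execute("SELECT abalance FROM pgbench_accounts WHERE aid = %s", (aid,))
    cur.fetchone()
print(f"~{n / (time.time() - start):.0f} TPS from a single synchronous client")
cur.close()
conn.close()
```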

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average LatencyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P0.04250.0850.12750.170.2125SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.001, N = 3SE +/- 0.002, N = 5SE +/- 0.001, N = 30.1890.1890.1890.1410.1400.1371. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average LatencyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P12345Min: 0.19 / Avg: 0.19 / Max: 0.19Min: 0.19 / Avg: 0.19 / Max: 0.19Min: 0.19 / Avg: 0.19 / Max: 0.19Min: 0.14 / Avg: 0.14 / Max: 0.14Min: 0.14 / Avg: 0.14 / Max: 0.15Min: 0.14 / Avg: 0.14 / Max: 0.141. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 100 - Mode: Read OnlyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P160K320K480K640K800KSE +/- 602.59, N = 3SE +/- 493.86, N = 3SE +/- 613.62, N = 3SE +/- 2733.07, N = 3SE +/- 7684.00, N = 5SE +/- 4479.87, N = 35293185305815290667082347150457290931. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 100 - Mode: Read OnlyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P130K260K390K520K650KMin: 528306.36 / Avg: 529317.62 / Max: 530391Min: 529788.36 / Avg: 530581.01 / Max: 531487.71Min: 527842.41 / Avg: 529066.11 / Max: 529758.71Min: 703525.24 / Avg: 708234.45 / Max: 712992.51Min: 685963.31 / Avg: 715044.7 / Max: 729965.09Min: 720190.76 / Avg: 729093.25 / Max: 734420.241. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: UASTC Level 2EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P48121620SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.05, N = 3SE +/- 0.02, N = 3SE +/- 0.03, N = 316.9016.9816.9312.3912.4112.421. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: UASTC Level 2EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P48121620Min: 16.88 / Avg: 16.9 / Max: 16.93Min: 16.95 / Avg: 16.98 / Max: 17Min: 16.9 / Avg: 16.93 / Max: 16.94Min: 12.3 / Avg: 12.39 / Max: 12.46Min: 12.38 / Avg: 12.41 / Max: 12.45Min: 12.37 / Avg: 12.42 / Max: 12.471. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

oneDNN

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPUEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P0.70891.41782.12672.83563.5445SE +/- 0.01172, N = 3SE +/- 0.01746, N = 3SE +/- 0.01523, N = 3SE +/- 0.05452, N = 12SE +/- 0.02839, N = 3SE +/- 0.03863, N = 153.150473.144993.138782.319472.365722.30768MIN: 2.98MIN: 2.97MIN: 2.98MIN: 1.96MIN: 2.01MIN: 1.951. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPUEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P246810Min: 3.13 / Avg: 3.15 / Max: 3.17Min: 3.11 / Avg: 3.14 / Max: 3.16Min: 3.11 / Avg: 3.14 / Max: 3.17Min: 2.09 / Avg: 2.32 / Max: 2.7Min: 2.32 / Avg: 2.37 / Max: 2.42Min: 2.13 / Avg: 2.31 / Max: 2.571. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.
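A hedged sketch of the underlying operation, timing an hmmsearch run from Python, is shown below; the profile database and sequence file names are placeholders since the test profile ships its own inputs.

```python
# Hedged sketch: time an hmmsearch run of Pfam profile HMMs against a protein
# sequence file. File names are placeholders.
import subprocess
import time

start = time.time()
subprocess.run(
    ["hmmsearch", "--cpu", "1", "Pfam-A.hmm", "sevenless.fasta"],
    check=True, stdout=subprocess.DEVNULL,
)
print(f"Pfam Database Search: {time.time() - start:.2f} seconds")
```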

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed HMMer Search 3.3.1Pfam Database SearchEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P4080120160200SE +/- 0.11, N = 3SE +/- 0.02, N = 3SE +/- 0.14, N = 3SE +/- 0.64, N = 3SE +/- 0.78, N = 3SE +/- 0.09, N = 3142.12142.13142.08187.89186.40188.261. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed HMMer Search 3.3.1Pfam Database SearchEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P306090120150Min: 141.96 / Avg: 142.11 / Max: 142.33Min: 142.09 / Avg: 142.13 / Max: 142.15Min: 141.82 / Avg: 142.08 / Max: 142.3Min: 186.64 / Avg: 187.89 / Max: 188.79Min: 185.31 / Avg: 186.4 / Max: 187.91Min: 188.11 / Avg: 188.26 / Max: 188.41. (CC) gcc options: -O3 -pthread -lhmmer -leasel -lm

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, More Is BetterAI Benchmark Alpha 0.1.2Device AI ScoreEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P8001600240032004000347335323527272527732729

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar won the 2016 ACM Open-Source Software Competition and is developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 1080p - Video Preset: Ultra FastEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P4080120160200SE +/- 0.27, N = 3SE +/- 0.44, N = 3SE +/- 0.12, N = 3SE +/- 1.83, N = 6SE +/- 1.97, N = 3SE +/- 0.82, N = 3142.44141.93142.03181.72180.39181.261. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 1080p - Video Preset: Ultra FastEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P306090120150Min: 141.89 / Avg: 142.44 / Max: 142.76Min: 141.09 / Avg: 141.93 / Max: 142.58Min: 141.86 / Avg: 142.03 / Max: 142.27Min: 175 / Avg: 181.72 / Max: 186.21Min: 177.48 / Avg: 180.39 / Max: 184.15Min: 179.64 / Avg: 181.26 / Max: 182.271. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.
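For context, the hedged Python sketch below writes a small batch of tagged points to an InfluxDB 1.x server using the influxdb client package. The actual load in this test profile is generated by InfluxDB Inch, and the database, measurement, and tag names here are placeholders.

```python
# Hedged sketch: write a batch of tagged points to InfluxDB 1.x with the
# influxdb Python client. Database, measurement, and tag names are placeholders.
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086)
client.create_database("bench")
client.switch_database("bench")

points = [
    {
        "measurement": "m0",
        "tags": {"tag0": str(i % 2), "tag1": str(i % 5000)},
        "fields": {"value": float(i)},
    }
    for i in range(10_000)
]
client.write_points(points, batch_size=10_000)
```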

OpenBenchmarking.orgval/sec, More Is BetterInfluxDB 1.8.2Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P300K600K900K1200K1500KSE +/- 2428.05, N = 3SE +/- 3879.14, N = 3SE +/- 2015.92, N = 3SE +/- 941.41, N = 3SE +/- 288.02, N = 3SE +/- 2635.14, N = 31197999.81199592.41197198.0951085.7949471.8944430.6
OpenBenchmarking.orgval/sec, More Is BetterInfluxDB 1.8.2Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P200K400K600K800K1000KMin: 1194895.1 / Avg: 1197999.83 / Max: 1202785.9Min: 1192621.9 / Avg: 1199592.4 / Max: 1206027.6Min: 1194830.9 / Avg: 1197198 / Max: 1201208.1Min: 949254.9 / Avg: 951085.67 / Max: 952381.8Min: 948957.8 / Avg: 949471.8 / Max: 949954Min: 940475.8 / Avg: 944430.63 / Max: 949424.9

Kvazaar

This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar won the 2016 ACM Open-Source Software Competition and is developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Ultra FastEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1224364860SE +/- 0.07, N = 3SE +/- 0.07, N = 3SE +/- 0.16, N = 3SE +/- 0.22, N = 3SE +/- 0.48, N = 3SE +/- 0.37, N = 341.9942.1342.1753.2653.2252.761. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterKvazaar 2.0Video Input: Bosphorus 4K - Video Preset: Ultra FastEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1122334455Min: 41.85 / Avg: 41.99 / Max: 42.06Min: 41.98 / Avg: 42.13 / Max: 42.22Min: 41.86 / Avg: 42.17 / Max: 42.37Min: 52.83 / Avg: 53.26 / Max: 53.56Min: 52.53 / Avg: 53.22 / Max: 54.15Min: 52.02 / Avg: 52.76 / Max: 53.221. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt

AI Benchmark Alpha

AI Benchmark Alpha is a Python library for evaluating artificial intelligence (AI) performance on diverse hardware platforms and relies upon the TensorFlow machine learning library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, More Is BetterAI Benchmark Alpha 0.1.2Device Inference ScoreEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P400800120016002000196020092007166216951654

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 4KEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P612182430SE +/- 0.03, N = 3SE +/- 0.05, N = 3SE +/- 0.03, N = 3SE +/- 0.04, N = 3SE +/- 0.07, N = 3SE +/- 0.04, N = 323.5323.5923.6020.4320.2019.991. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 4KEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P612182430Min: 23.49 / Avg: 23.53 / Max: 23.59Min: 23.49 / Avg: 23.59 / Max: 23.67Min: 23.56 / Avg: 23.6 / Max: 23.65Min: 20.38 / Avg: 20.43 / Max: 20.5Min: 20.09 / Avg: 20.2 / Max: 20.32Min: 19.9 / Avg: 19.99 / Max: 20.031. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Hugin

Hugin is an open-source, cross-platform panorama photo stitcher software package. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterHuginPanorama Photo Assistant + Stitching TimeEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1326395265SE +/- 0.51, N = 3SE +/- 0.39, N = 3SE +/- 0.24, N = 3SE +/- 0.45, N = 3SE +/- 0.22, N = 3SE +/- 0.73, N = 350.0150.3350.4656.7956.4358.66
OpenBenchmarking.orgSeconds, Fewer Is BetterHuginPanorama Photo Assistant + Stitching TimeEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1224364860Min: 49.23 / Avg: 50.01 / Max: 50.96Min: 49.85 / Avg: 50.32 / Max: 51.09Min: 50.1 / Avg: 50.46 / Max: 50.92Min: 55.89 / Avg: 56.79 / Max: 57.28Min: 56.13 / Avg: 56.43 / Max: 56.85Min: 57.78 / Avg: 58.66 / Max: 60.1

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files though they can be modified. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGFLOPS, More Is BetterHPC Challenge 1.5.0Test / Class: G-HPLEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P20406080100SE +/- 0.39, N = 3SE +/- 0.30, N = 3SE +/- 0.42, N = 3SE +/- 0.04, N = 3SE +/- 0.10, N = 3SE +/- 0.20, N = 387.2587.3186.88100.70100.3799.061. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3
OpenBenchmarking.orgGFLOPS, More Is BetterHPC Challenge 1.5.0Test / Class: G-HPLEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P20406080100Min: 86.63 / Avg: 87.25 / Max: 87.98Min: 86.73 / Avg: 87.31 / Max: 87.73Min: 86.05 / Avg: 86.88 / Max: 87.42Min: 100.62 / Avg: 100.7 / Max: 100.75Min: 100.18 / Avg: 100.37 / Max: 100.52Min: 98.84 / Avg: 99.06 / Max: 99.451. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average LatencyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P0.02050.0410.06150.0820.1025SE +/- 0.001, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.001, N = 4SE +/- 0.001, N = 3SE +/- 0.001, N = 30.0910.0900.0910.0810.0800.0791. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 50 - Mode: Read Only - Average LatencyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P12345Min: 0.09 / Avg: 0.09 / Max: 0.09Min: 0.09 / Avg: 0.09 / Max: 0.09Min: 0.09 / Avg: 0.09 / Max: 0.09Min: 0.08 / Avg: 0.08 / Max: 0.08Min: 0.08 / Avg: 0.08 / Max: 0.08Min: 0.08 / Avg: 0.08 / Max: 0.081. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 50 - Mode: Read OnlyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P140K280K420K560K700KSE +/- 5773.06, N = 3SE +/- 2634.03, N = 3SE +/- 1352.44, N = 3SE +/- 6778.87, N = 4SE +/- 7480.51, N = 3SE +/- 6741.00, N = 35516235553205514846192196255126336361. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 50 - Mode: Read OnlyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P110K220K330K440K550KMin: 540076.77 / Avg: 551622.87 / Max: 557410.44Min: 550086.48 / Avg: 555320.19 / Max: 558457.14Min: 549204.72 / Avg: 551483.89 / Max: 553884.94Min: 603711.78 / Avg: 619218.79 / Max: 636187.59Min: 615004.12 / Avg: 625512.39 / Max: 639989.12Min: 624849.1 / Avg: 633635.83 / Max: 646884.611. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

oneDNN

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 1D - Data Type: f32 - Engine: CPUEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P0.39010.78021.17031.56041.9505SE +/- 0.00365, N = 3SE +/- 0.01016, N = 3SE +/- 0.00210, N = 3SE +/- 0.01626, N = 5SE +/- 0.01312, N = 3SE +/- 0.01869, N = 51.730511.733881.727651.557671.523951.67289MIN: 1.58MIN: 1.57MIN: 1.58MIN: 1.3MIN: 1.3MIN: 1.311. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: IP Shapes 1D - Data Type: f32 - Engine: CPUEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P246810Min: 1.73 / Avg: 1.73 / Max: 1.74Min: 1.72 / Avg: 1.73 / Max: 1.75Min: 1.73 / Avg: 1.73 / Max: 1.73Min: 1.53 / Avg: 1.56 / Max: 1.62Min: 1.5 / Avg: 1.52 / Max: 1.55Min: 1.62 / Avg: 1.67 / Max: 1.711. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPUEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P2004006008001000SE +/- 2.42, N = 3SE +/- 2.82, N = 3SE +/- 3.30, N = 3SE +/- 25.26, N = 12SE +/- 20.75, N = 12SE +/- 12.96, N = 15934.69930.71928.12838.99827.22873.59MIN: 900.24MIN: 900MIN: 898.12MIN: 702.98MIN: 702.57MIN: 754.761. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPUEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P160320480640800Min: 930.92 / Avg: 934.69 / Max: 939.19Min: 927.77 / Avg: 930.71 / Max: 936.35Min: 922.25 / Avg: 928.11 / Max: 933.67Min: 745.39 / Avg: 838.99 / Max: 1011.47Min: 748.34 / Avg: 827.22 / Max: 939.97Min: 815.02 / Avg: 873.59 / Max: 983.031. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.
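As a rough illustration, the sketch below runs LibRaw's post-processing through the rawpy Python binding and reports megapixels per second; rawpy and the RAW file path are assumptions for illustration, not the test profile's own harness.

```python
# Hedged sketch: LibRaw post-processing via the rawpy binding.
# The RAW file path is a placeholder.
import time
import rawpy

with rawpy.imread("sample.nef") as raw:
    start = time.time()
    rgb = raw.postprocess()          # runs LibRaw's demosaic/post-processing
    elapsed = time.time() - start

mpix = rgb.shape[0] * rgb.shape[1] / 1e6
print(f"{mpix / elapsed:.2f} Mpix/sec")
```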

OpenBenchmarking.orgMpix/sec, More Is BetterLibRaw 0.20Post-Processing BenchmarkEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P816243240SE +/- 0.11, N = 3SE +/- 0.10, N = 3SE +/- 0.07, N = 3SE +/- 0.13, N = 3SE +/- 0.32, N = 3SE +/- 0.18, N = 335.1135.0134.9631.4231.1431.931. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm
OpenBenchmarking.orgMpix/sec, More Is BetterLibRaw 0.20Post-Processing BenchmarkEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P816243240Min: 34.99 / Avg: 35.11 / Max: 35.32Min: 34.83 / Avg: 35.01 / Max: 35.18Min: 34.84 / Avg: 34.96 / Max: 35.07Min: 31.23 / Avg: 31.42 / Max: 31.68Min: 30.51 / Avg: 31.14 / Max: 31.54Min: 31.63 / Avg: 31.93 / Max: 32.241. (CXX) g++ options: -O2 -fopenmp -ljpeg -lz -lm

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 1080pEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1428425670SE +/- 0.19, N = 3SE +/- 0.05, N = 3SE +/- 0.08, N = 3SE +/- 0.41, N = 11SE +/- 0.21, N = 3SE +/- 0.45, N = 360.6160.3360.2554.7054.6353.831. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 1080pEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1224364860Min: 60.35 / Avg: 60.61 / Max: 60.98Min: 60.24 / Avg: 60.33 / Max: 60.38Min: 60.09 / Avg: 60.25 / Max: 60.38Min: 53.26 / Avg: 54.7 / Max: 57.23Min: 54.34 / Avg: 54.63 / Max: 55.04Min: 53.12 / Avg: 53.83 / Max: 54.651. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_linearridgeregressionEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P0.40050.8011.20151.6022.0025SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 14SE +/- 0.02, N = 15SE +/- 0.03, N = 151.611.591.621.781.751.70
OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_linearridgeregressionEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P246810Min: 1.59 / Avg: 1.61 / Max: 1.62Min: 1.58 / Avg: 1.59 / Max: 1.6Min: 1.61 / Avg: 1.62 / Max: 1.64Min: 1.72 / Avg: 1.78 / Max: 1.87Min: 1.63 / Avg: 1.75 / Max: 1.87Min: 1.51 / Avg: 1.7 / Max: 1.95

oneDNN

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPUEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P0.13010.26020.39030.52040.6505SE +/- 0.001786, N = 3SE +/- 0.002676, N = 3SE +/- 0.001616, N = 3SE +/- 0.005740, N = 3SE +/- 0.005785, N = 5SE +/- 0.004259, N = 150.5782260.5752540.5766570.5182690.5291860.567497MIN: 0.52MIN: 0.52MIN: 0.52MIN: 0.43MIN: 0.43MIN: 0.431. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPUEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P246810Min: 0.58 / Avg: 0.58 / Max: 0.58Min: 0.57 / Avg: 0.58 / Max: 0.58Min: 0.57 / Avg: 0.58 / Max: 0.58Min: 0.51 / Avg: 0.52 / Max: 0.53Min: 0.52 / Avg: 0.53 / Max: 0.55Min: 0.55 / Avg: 0.57 / Max: 0.61. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.2.3Target: CPU - Model: MobileNet v2EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P70140210280350SE +/- 0.66, N = 3SE +/- 0.39, N = 3SE +/- 0.88, N = 3SE +/- 2.64, N = 3SE +/- 0.89, N = 3SE +/- 2.69, N = 15295.19292.90294.65316.82299.45322.30MIN: 281.63 / MAX: 334.08MIN: 281.47 / MAX: 324.04MIN: 281.79 / MAX: 332.39MIN: 284.59 / MAX: 461.35MIN: 284.3 / MAX: 463.88MIN: 284.29 / MAX: 478.051. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl
OpenBenchmarking.orgms, Fewer Is BetterTNN 0.2.3Target: CPU - Model: MobileNet v2EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P60120180240300Min: 294.14 / Avg: 295.19 / Max: 296.42Min: 292.14 / Avg: 292.9 / Max: 293.41Min: 292.94 / Avg: 294.65 / Max: 295.84Min: 313.92 / Avg: 316.82 / Max: 322.1Min: 297.67 / Avg: 299.45 / Max: 300.44Min: 310.59 / Avg: 322.3 / Max: 339.171. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files though they can be modified. Learn more via the OpenBenchmarking.org test page.
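The EP-STREAM Triad figure that follows reports per-process memory bandwidth for the STREAM triad kernel a[i] = b[i] + s * c[i]; the NumPy sketch below illustrates that kernel on a single process. The array size and scalar are arbitrary, and HPCC runs the real kernel in C under MPI.

```python
# Minimal illustration of the STREAM Triad kernel (a[i] = b[i] + s * c[i]).
# Array size and scalar are arbitrary; this is a single-process NumPy stand-in.
import time
import numpy as np

n = 20_000_000
b = np.random.rand(n)
c = np.random.rand(n)
s = 3.0

start = time.time()
a = b + s * c
elapsed = time.time() - start

# Triad touches three arrays of 8-byte doubles (read b, read c, write a).
gb_moved = 3 * n * 8 / 1e9
print(f"~{gb_moved / elapsed:.2f} GB/s (single process, NumPy)")
```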

OpenBenchmarking.orgGB/s, More Is BetterHPC Challenge 1.5.0Test / Class: EP-STREAM TriadEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P0.77041.54082.31123.08163.852SE +/- 0.13922, N = 3SE +/- 0.00767, N = 3SE +/- 0.06266, N = 3SE +/- 0.01458, N = 3SE +/- 0.00319, N = 3SE +/- 0.13098, N = 33.116273.382203.299233.414323.423793.217821. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3
OpenBenchmarking.orgGB/s, More Is BetterHPC Challenge 1.5.0Test / Class: EP-STREAM TriadEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P246810Min: 2.94 / Avg: 3.12 / Max: 3.39Min: 3.37 / Avg: 3.38 / Max: 3.39Min: 3.17 / Avg: 3.3 / Max: 3.36Min: 3.39 / Avg: 3.41 / Max: 3.43Min: 3.42 / Avg: 3.42 / Max: 3.43Min: 2.96 / Avg: 3.22 / Max: 3.361. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3

x264

This is a simple test of the x264 encoder run on the CPU (OpenCL support disabled) with a sample video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterx264 2019-12-17H.264 Video EncodingEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P4080120160200SE +/- 1.85, N = 3SE +/- 1.47, N = 3SE +/- 1.51, N = 3SE +/- 1.79, N = 12SE +/- 2.07, N = 5SE +/- 1.79, N = 6178.79177.86177.71194.19193.40194.971. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -ffast-math -std=gnu99 -fPIC -fomit-frame-pointer -fno-tree-vectorize
OpenBenchmarking.orgFrames Per Second, More Is Betterx264 2019-12-17H.264 Video EncodingEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P4080120160200Min: 175.25 / Avg: 178.79 / Max: 181.51Min: 174.93 / Avg: 177.86 / Max: 179.5Min: 174.73 / Avg: 177.71 / Max: 179.57Min: 176.94 / Avg: 194.19 / Max: 199.63Min: 190.44 / Avg: 193.4 / Max: 201.22Min: 189.97 / Avg: 194.97 / Max: 201.141. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -ffast-math -std=gnu99 -fPIC -fomit-frame-pointer -fno-tree-vectorize

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files though they can be modified. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgGFLOPS, More Is BetterHPC Challenge 1.5.0Test / Class: EP-DGEMMEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P918273645SE +/- 0.60, N = 3SE +/- 0.85, N = 3SE +/- 0.48, N = 3SE +/- 0.14, N = 3SE +/- 0.36, N = 3SE +/- 0.19, N = 336.1436.7736.4439.2039.0335.911. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3
OpenBenchmarking.orgGFLOPS, More Is BetterHPC Challenge 1.5.0Test / Class: EP-DGEMMEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P816243240Min: 35.08 / Avg: 36.14 / Max: 37.16Min: 35.57 / Avg: 36.77 / Max: 38.41Min: 35.49 / Avg: 36.44 / Max: 37.09Min: 38.93 / Avg: 39.2 / Max: 39.42Min: 38.32 / Avg: 39.03 / Max: 39.48Min: 35.53 / Avg: 35.91 / Max: 36.111. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 50 - Mode: Read Write - Average LatencyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P0.5241.0481.5722.0962.62SE +/- 0.003, N = 3SE +/- 0.001, N = 3SE +/- 0.003, N = 3SE +/- 0.004, N = 3SE +/- 0.007, N = 3SE +/- 0.006, N = 32.3292.3242.3282.1402.1642.1411. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 50 - Mode: Read Write - Average LatencyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P246810Min: 2.33 / Avg: 2.33 / Max: 2.33Min: 2.32 / Avg: 2.32 / Max: 2.33Min: 2.32 / Avg: 2.33 / Max: 2.34Min: 2.13 / Avg: 2.14 / Max: 2.15Min: 2.15 / Avg: 2.16 / Max: 2.17Min: 2.13 / Avg: 2.14 / Max: 2.151. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 50 - Mode: Read WriteEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P5K10K15K20K25KSE +/- 24.14, N = 3SE +/- 9.19, N = 3SE +/- 31.97, N = 3SE +/- 40.02, N = 3SE +/- 68.23, N = 3SE +/- 62.07, N = 32147921521214812337323121233601. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 50 - Mode: Read WriteEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P4K8K12K16K20KMin: 21430.62 / Avg: 21478.69 / Max: 21506.64Min: 21503.14 / Avg: 21521.45 / Max: 21532.05Min: 21418.59 / Avg: 21481.09 / Max: 21523.99Min: 23309.36 / Avg: 23372.55 / Max: 23446.7Min: 23023.94 / Avg: 23120.87 / Max: 23252.51Min: 23255.46 / Avg: 23359.85 / Max: 23470.231. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average LatencyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P510152025SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.04, N = 3SE +/- 0.08, N = 3SE +/- 0.11, N = 3SE +/- 0.07, N = 320.6620.5720.6322.3421.9221.981. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average LatencyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P510152025Min: 20.63 / Avg: 20.66 / Max: 20.71Min: 20.56 / Avg: 20.57 / Max: 20.59Min: 20.56 / Avg: 20.63 / Max: 20.71Min: 22.23 / Avg: 22.34 / Max: 22.49Min: 21.77 / Avg: 21.92 / Max: 22.12Min: 21.91 / Avg: 21.98 / Max: 22.111. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 50 - Mode: Read WriteEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P5001000150020002500SE +/- 2.57, N = 3SE +/- 1.25, N = 3SE +/- 5.02, N = 3SE +/- 7.88, N = 3SE +/- 10.95, N = 3SE +/- 6.74, N = 32421243124242239228222761. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 50 - Mode: Read WriteEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P400800120016002000Min: 2415.51 / Avg: 2420.57 / Max: 2423.94Min: 2429 / Avg: 2431.4 / Max: 2433.2Min: 2415.02 / Avg: 2424.41 / Max: 2432.16Min: 2223.83 / Avg: 2238.87 / Max: 2250.45Min: 2261.13 / Avg: 2282.38 / Max: 2297.63Min: 2262.21 / Avg: 2275.65 / Max: 2283.391. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

oneDNN

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPUEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P0.55021.10041.65062.20082.751SE +/- 0.00293, N = 3SE +/- 0.02163, N = 3SE +/- 0.01986, N = 3SE +/- 0.03458, N = 12SE +/- 0.02846, N = 15SE +/- 0.03614, N = 152.359462.341522.357262.305642.262692.44538MIN: 2.12MIN: 2.1MIN: 2.1MIN: 1.86MIN: 1.86MIN: 1.831. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPUEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P246810Min: 2.35 / Avg: 2.36 / Max: 2.36Min: 2.3 / Avg: 2.34 / Max: 2.38Min: 2.34 / Avg: 2.36 / Max: 2.4Min: 2.1 / Avg: 2.31 / Max: 2.44Min: 2.08 / Avg: 2.26 / Max: 2.44Min: 2.12 / Avg: 2.45 / Max: 2.611. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average LatencyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P0.81881.63762.45643.27524.094SE +/- 0.006, N = 3SE +/- 0.013, N = 3SE +/- 0.003, N = 3SE +/- 0.029, N = 3SE +/- 0.010, N = 3SE +/- 0.011, N = 33.3953.3893.3883.6123.6143.6391. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average LatencyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P246810Min: 3.38 / Avg: 3.4 / Max: 3.4Min: 3.37 / Avg: 3.39 / Max: 3.41Min: 3.38 / Avg: 3.39 / Max: 3.39Min: 3.58 / Avg: 3.61 / Max: 3.67Min: 3.6 / Avg: 3.61 / Max: 3.63Min: 3.63 / Avg: 3.64 / Max: 3.661. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 100 - Mode: Read WriteEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P6K12K18K24K30KSE +/- 54.62, N = 3SE +/- 108.48, N = 3SE +/- 34.89, N = 3SE +/- 222.16, N = 3SE +/- 74.83, N = 3SE +/- 83.97, N = 32948629537295522773027720275201. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 100 - Mode: Read WriteEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P5K10K15K20K25KMin: 29406.99 / Avg: 29486.49 / Max: 29591.13Min: 29338.64 / Avg: 29536.83 / Max: 29712.38Min: 29511.54 / Avg: 29552.27 / Max: 29621.7Min: 27287.19 / Avg: 27730.38 / Max: 27979.38Min: 27585.15 / Avg: 27720.01 / Max: 27843.66Min: 27351.89 / Avg: 27519.52 / Max: 27612.291. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 100 - Mode: Read WriteEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P5001000150020002500SE +/- 4.79, N = 3SE +/- 6.61, N = 3SE +/- 3.93, N = 3SE +/- 4.42, N = 3SE +/- 2.08, N = 3SE +/- 2.35, N = 32145214421422020200720021. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 100 - Mode: Read WriteEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P400800120016002000Min: 2139.05 / Avg: 2144.57 / Max: 2154.12Min: 2137.32 / Avg: 2144.44 / Max: 2157.66Min: 2135.33 / Avg: 2141.66 / Max: 2148.86Min: 2014.09 / Avg: 2020.25 / Max: 2028.83Min: 2004.69 / Avg: 2007.4 / Max: 2011.49Min: 1999.62 / Avg: 2002.24 / Max: 2006.931. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 100 - Mode: Read Write - Average LatencyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1122334455SE +/- 0.11, N = 3SE +/- 0.14, N = 3SE +/- 0.08, N = 3SE +/- 0.11, N = 3SE +/- 0.05, N = 3SE +/- 0.06, N = 346.6846.6946.7449.5749.8850.011. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 100 - Mode: Read Write - Average LatencyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1020304050Min: 46.47 / Avg: 46.68 / Max: 46.81Min: 46.42 / Avg: 46.69 / Max: 46.85Min: 46.59 / Avg: 46.74 / Max: 46.87Min: 49.35 / Avg: 49.57 / Max: 49.72Min: 49.77 / Avg: 49.88 / Max: 49.95Min: 49.89 / Avg: 50.01 / Max: 50.071. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 50 - Mode: Read OnlyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P200K400K600K800K1000KSE +/- 11183.06, N = 3SE +/- 4242.40, N = 3SE +/- 9353.87, N = 3SE +/- 1336.65, N = 3SE +/- 7738.13, N = 3SE +/- 7882.33, N = 68472318526128420228167567975668254481. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 50 - Mode: Read OnlyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P150K300K450K600K750KMin: 824865.85 / Avg: 847230.58 / Max: 858628.67Min: 847759.59 / Avg: 852612.47 / Max: 861066.44Min: 828236.85 / Avg: 842022.29 / Max: 859867.44Min: 814449.13 / Avg: 816755.92 / Max: 819079.33Min: 782896.62 / Avg: 797566.11 / Max: 809171.76Min: 790551.98 / Avg: 825448.08 / Max: 847616.981. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average LatencyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P0.01420.02840.04260.05680.071SE +/- 0.001, N = 3SE +/- 0.000, N = 3SE +/- 0.001, N = 3SE +/- 0.000, N = 3SE +/- 0.001, N = 3SE +/- 0.001, N = 60.0590.0590.0590.0610.0630.0611. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average LatencyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P12345Min: 0.06 / Avg: 0.06 / Max: 0.06Min: 0.06 / Avg: 0.06 / Max: 0.06Min: 0.06 / Avg: 0.06 / Max: 0.06Min: 0.06 / Avg: 0.06 / Max: 0.06Min: 0.06 / Avg: 0.06 / Max: 0.06Min: 0.06 / Avg: 0.06 / Max: 0.061. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 1 - Mode: Read OnlyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P5K10K15K20K25KSE +/- 171.06, N = 3SE +/- 144.26, N = 3SE +/- 325.52, N = 3SE +/- 277.60, N = 3SE +/- 206.81, N = 15SE +/- 275.23, N = 152433624345244462317123295233961. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 1 - Mode: Read OnlyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P4K8K12K16K20KMin: 24083.48 / Avg: 24336.14 / Max: 24662.24Min: 24131.41 / Avg: 24345.41 / Max: 24620.01Min: 23836.86 / Avg: 24446.39 / Max: 24949.24Min: 22616.33 / Avg: 23170.72 / Max: 23473.97Min: 21184.95 / Avg: 23295.43 / Max: 24018.47Min: 21138.29 / Avg: 23395.78 / Max: 24557.771. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: ETC1SEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1224364860SE +/- 0.01, N = 3SE +/- 0.02, N = 3SE +/- 0.06, N = 3SE +/- 0.13, N = 3SE +/- 0.33, N = 3SE +/- 0.21, N = 349.3649.5149.5851.9051.8552.051. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: ETC1SEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1020304050Min: 49.34 / Avg: 49.36 / Max: 49.38Min: 49.47 / Avg: 49.51 / Max: 49.54Min: 49.51 / Avg: 49.58 / Max: 49.7Min: 51.67 / Avg: 51.9 / Max: 52.13Min: 51.45 / Avg: 51.85 / Max: 52.51Min: 51.64 / Avg: 52.05 / Max: 52.271. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 1 - Mode: Read Only - Average LatencyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P0.00970.01940.02910.03880.0485SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.001, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 15SE +/- 0.001, N = 150.0410.0410.0410.0430.0430.0431. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 1 - Mode: Read Only - Average LatencyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P12345Min: 0.04 / Avg: 0.04 / Max: 0.04Min: 0.04 / Avg: 0.04 / Max: 0.04Min: 0.04 / Avg: 0.04 / Max: 0.04Min: 0.04 / Avg: 0.04 / Max: 0.04Min: 0.04 / Avg: 0.04 / Max: 0.05Min: 0.04 / Avg: 0.04 / Max: 0.051. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

Timed Clash Compilation

Build the clash-lang Haskell to VHDL/Verilog/SystemVerilog compiler with GHC 8.10.1. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Clash CompilationTime To CompileEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P100200300400500SE +/- 0.12, N = 3SE +/- 1.06, N = 3SE +/- 0.52, N = 3SE +/- 1.18, N = 3SE +/- 1.97, N = 3SE +/- 4.35, N = 3462.28462.32461.62483.40483.96482.28
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed Clash CompilationTime To CompileEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P90180270360450Min: 462.16 / Avg: 462.28 / Max: 462.51Min: 460.54 / Avg: 462.32 / Max: 464.19Min: 460.95 / Avg: 461.62 / Max: 462.64Min: 481.45 / Avg: 483.4 / Max: 485.53Min: 481.17 / Avg: 483.96 / Max: 487.77Min: 475.8 / Avg: 482.28 / Max: 490.54

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_icaEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1224364860SE +/- 0.11, N = 3SE +/- 0.08, N = 3SE +/- 0.02, N = 3SE +/- 0.73, N = 3SE +/- 0.49, N = 3SE +/- 0.57, N = 351.7451.8251.8451.7253.0253.73
OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_icaEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1122334455Min: 51.53 / Avg: 51.74 / Max: 51.88Min: 51.67 / Avg: 51.82 / Max: 51.96Min: 51.81 / Avg: 51.84 / Max: 51.88Min: 50.27 / Avg: 51.72 / Max: 52.45Min: 52.08 / Avg: 53.02 / Max: 53.75Min: 52.63 / Avg: 53.73 / Max: 54.54

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 10EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P0.68381.36762.05142.73523.419SE +/- 0.003, N = 3SE +/- 0.008, N = 3SE +/- 0.002, N = 3SE +/- 0.010, N = 3SE +/- 0.035, N = 3SE +/- 0.021, N = 33.0393.0323.0262.9722.9672.927
OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 10EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P246810Min: 3.03 / Avg: 3.04 / Max: 3.04Min: 3.02 / Avg: 3.03 / Max: 3.05Min: 3.02 / Avg: 3.03 / Max: 3.03Min: 2.95 / Avg: 2.97 / Max: 2.99Min: 2.9 / Avg: 2.97 / Max: 3.01Min: 2.89 / Avg: 2.93 / Max: 2.96

InfluxDB

This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgval/sec, More Is BetterInfluxDB 1.8.2Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P300K600K900K1200K1500KSE +/- 1881.74, N = 3SE +/- 811.21, N = 3SE +/- 1608.37, N = 3SE +/- 961.62, N = 3SE +/- 11350.62, N = 3SE +/- 1149.51, N = 31339487.51339130.11297582.51328030.91345908.11337280.4
OpenBenchmarking.orgval/sec, More Is BetterInfluxDB 1.8.2Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P200K400K600K800K1000KMin: 1337542.7 / Avg: 1339487.53 / Max: 1343250.3Min: 1337830.3 / Avg: 1339130.13 / Max: 1340620.9Min: 1295132.3 / Avg: 1297582.53 / Max: 1300612.6Min: 1326148.7 / Avg: 1328030.87 / Max: 1329314.3Min: 1323232.9 / Avg: 1345908.13 / Max: 1358186.7Min: 1335075.1 / Avg: 1337280.37 / Max: 1338945.8

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
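A rough single-process analogue of this measurement, using the lz4 Python bindings rather than the test's own harness, is sketched below; the file path is a placeholder for the sample Ubuntu ISO.

# Hedged sketch: compress/decompress a file with the "lz4" PyPI package
# (pip install lz4) and report rough throughput in MB/s.
import time

import lz4.frame

data = open("ubuntu.iso", "rb").read()   # placeholder sample file

start = time.perf_counter()
compressed = lz4.frame.compress(data, compression_level=9)
elapsed = time.perf_counter() - start
print(f"compress:   {len(data) / elapsed / 1e6:.1f} MB/s")

start = time.perf_counter()
lz4.frame.decompress(compressed)
elapsed = time.perf_counter() - start
print(f"decompress: {len(data) / elapsed / 1e6:.1f} MB/s")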

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Compression SpeedEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1122334455SE +/- 0.51, N = 5SE +/- 0.54, N = 5SE +/- 0.42, N = 3SE +/- 0.41, N = 3SE +/- 0.62, N = 3SE +/- 0.31, N = 349.7048.3348.9848.2148.8348.081. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Compression SpeedEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1020304050Min: 48.53 / Avg: 49.7 / Max: 50.9Min: 46.91 / Avg: 48.33 / Max: 49.98Min: 48.14 / Avg: 48.98 / Max: 49.48Min: 47.75 / Avg: 48.21 / Max: 49.03Min: 47.77 / Avg: 48.83 / Max: 49.91Min: 47.46 / Avg: 48.08 / Max: 48.471. (CC) gcc options: -O3

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Compression SpeedEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1122334455SE +/- 0.55, N = 4SE +/- 0.36, N = 3SE +/- 0.05, N = 3SE +/- 0.49, N = 6SE +/- 0.61, N = 4SE +/- 0.56, N = 1550.7850.5449.3149.9449.6749.241. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Compression SpeedEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1020304050Min: 49.99 / Avg: 50.78 / Max: 52.35Min: 49.82 / Avg: 50.54 / Max: 50.94Min: 49.22 / Avg: 49.31 / Max: 49.36Min: 49.12 / Avg: 49.94 / Max: 51.59Min: 49 / Avg: 49.67 / Max: 51.5Min: 44.07 / Avg: 49.24 / Max: 51.231. (CC) gcc options: -O3

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.
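The conversions are performed with the basisu command-line tool; the sketch below shows a hedged single-file example at UASTC level 0, where the flag spellings and the input PNG are assumptions rather than the test profile's exact invocation.

# Hedged sketch: time one basisu UASTC level 0 conversion.
# Flag names and "texture.png" are assumptions for illustration only.
import subprocess
import time

start = time.perf_counter()
subprocess.run(["basisu", "-uastc", "-uastc_level", "0", "texture.png"],
               check=True)
print(f"Conversion time: {time.perf_counter() - start:.2f} s")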

OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: UASTC Level 0EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P246810SE +/- 0.004, N = 3SE +/- 0.015, N = 3SE +/- 0.021, N = 3SE +/- 0.033, N = 3SE +/- 0.043, N = 3SE +/- 0.018, N = 37.5787.7337.6067.7467.8087.7891. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: UASTC Level 0EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P3691215Min: 7.57 / Avg: 7.58 / Max: 7.59Min: 7.71 / Avg: 7.73 / Max: 7.76Min: 7.57 / Avg: 7.61 / Max: 7.64Min: 7.69 / Avg: 7.75 / Max: 7.8Min: 7.74 / Avg: 7.81 / Max: 7.88Min: 7.75 / Avg: 7.79 / Max: 7.821. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.
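As a hedged illustration of one configuration graphed here (scaling factor 1, a single client, read-only mode), pgbench can be initialized and driven as in the following sketch, which assumes a reachable PostgreSQL server and a scratch database named pgtest.

# Hedged sketch: initialize and run a single-client read-only pgbench test.
import subprocess

# Initialize the test tables at scaling factor 1 (assumed database name).
subprocess.run(["pgbench", "-i", "-s", "1", "pgtest"], check=True)

# 60-second read-only (-S) run with one client and one worker thread.
subprocess.run(["pgbench", "-c", "1", "-j", "1", "-S", "-T", "60", "pgtest"],
               check=True)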

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 1 - Mode: Read Only - Average LatencyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P0.00790.01580.02370.03160.0395SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 4SE +/- 0.000, N = 4SE +/- 0.000, N = 3SE +/- 0.000, N = 100.0340.0340.0340.0350.0340.0341. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 1 - Mode: Read Only - Average LatencyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P12345Min: 0.03 / Avg: 0.03 / Max: 0.03Min: 0.03 / Avg: 0.03 / Max: 0.03Min: 0.03 / Avg: 0.03 / Max: 0.04Min: 0.03 / Avg: 0.04 / Max: 0.04Min: 0.03 / Avg: 0.03 / Max: 0.04Min: 0.03 / Avg: 0.03 / Max: 0.041. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Decompression SpeedEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P2K4K6K8K10KSE +/- 37.48, N = 4SE +/- 1.64, N = 3SE +/- 18.01, N = 3SE +/- 35.35, N = 6SE +/- 72.96, N = 4SE +/- 35.55, N = 1510606.310661.510548.810360.510409.910402.71. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Decompression SpeedEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P2K4K6K8K10KMin: 10520 / Avg: 10606.33 / Max: 10683.5Min: 10658.3 / Avg: 10661.47 / Max: 10663.8Min: 10513.1 / Avg: 10548.77 / Max: 10571Min: 10307.4 / Avg: 10360.48 / Max: 10530.7Min: 10265.2 / Avg: 10409.85 / Max: 10537.8Min: 10135.6 / Avg: 10402.73 / Max: 10629.11. (CC) gcc options: -O3

BYTE Unix Benchmark

This is a test of the BYTE Unix Benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgLPS, More Is BetterBYTE Unix Benchmark 3.6Computational Test: Dhrystone 2EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P8M16M24M32M40MSE +/- 295946.74, N = 12SE +/- 231062.98, N = 3SE +/- 93255.90, N = 3SE +/- 218663.85, N = 3SE +/- 455276.37, N = 3SE +/- 523722.43, N = 337455257.237547272.937624546.938124375.537519864.538477706.3
OpenBenchmarking.orgLPS, More Is BetterBYTE Unix Benchmark 3.6Computational Test: Dhrystone 2EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P7M14M21M28M35MMin: 36026099.6 / Avg: 37455257.24 / Max: 39383837.8Min: 37101722.6 / Avg: 37547272.9 / Max: 37876274.2Min: 37523480.1 / Avg: 37624546.93 / Max: 37810834.2Min: 37764703.2 / Avg: 38124375.47 / Max: 38519661.2Min: 36709387.9 / Avg: 37519864.53 / Max: 38284512.7Min: 37786879.9 / Avg: 38477706.33 / Max: 39504973.4

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 1 - Mode: Read OnlyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P6K12K18K24K30KSE +/- 161.30, N = 3SE +/- 325.35, N = 3SE +/- 335.87, N = 4SE +/- 320.79, N = 4SE +/- 230.23, N = 3SE +/- 222.86, N = 102941929391294622868129257294521. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 1 - Mode: Read OnlyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P5K10K15K20K25KMin: 29111.99 / Avg: 29419.1 / Max: 29658.21Min: 29003.71 / Avg: 29391.47 / Max: 30037.89Min: 28740.66 / Avg: 29461.77 / Max: 30350.34Min: 27776.5 / Avg: 28680.92 / Max: 29249.25Min: 28797.22 / Avg: 29256.84 / Max: 29510.81Min: 28428.85 / Avg: 29452.4 / Max: 30717.441. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Decompression SpeedEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P2K4K6K8K10KSE +/- 35.72, N = 3SE +/- 3.46, N = 3SE +/- 28.90, N = 3SE +/- 47.03, N = 3SE +/- 54.20, N = 3SE +/- 80.64, N = 311308.911271.411314.611021.811114.311168.71. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Decompression SpeedEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P2K4K6K8K10KMin: 11273.1 / Avg: 11308.87 / Max: 11380.3Min: 11266.3 / Avg: 11271.4 / Max: 11278Min: 11258.5 / Avg: 11314.6 / Max: 11354.7Min: 10963.6 / Avg: 11021.8 / Max: 11114.9Min: 11007 / Avg: 11114.3 / Max: 11181.3Min: 11014.5 / Avg: 11168.7 / Max: 11286.71. (CC) gcc options: -O3

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 1 - Mode: Read Write - Average LatencyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P0.12740.25480.38220.50960.637SE +/- 0.001, N = 3SE +/- 0.004, N = 3SE +/- 0.002, N = 3SE +/- 0.001, N = 3SE +/- 0.002, N = 3SE +/- 0.001, N = 30.5540.5560.5530.5610.5580.5661. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 1 - Mode: Read Write - Average LatencyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P246810Min: 0.55 / Avg: 0.55 / Max: 0.56Min: 0.55 / Avg: 0.56 / Max: 0.56Min: 0.55 / Avg: 0.55 / Max: 0.56Min: 0.56 / Avg: 0.56 / Max: 0.56Min: 0.55 / Avg: 0.56 / Max: 0.56Min: 0.57 / Avg: 0.57 / Max: 0.571. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

Mlpack Benchmark

Mlpack benchmark scripts for machine learning libraries. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_svmEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P612182430SE +/- 0.04, N = 3SE +/- 0.02, N = 3SE +/- 0.15, N = 3SE +/- 0.10, N = 3SE +/- 0.02, N = 3SE +/- 0.05, N = 324.4524.4925.0224.6124.5124.55
OpenBenchmarking.orgSeconds, Fewer Is BetterMlpack BenchmarkBenchmark: scikit_svmEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P612182430Min: 24.38 / Avg: 24.45 / Max: 24.51Min: 24.45 / Avg: 24.49 / Max: 24.52Min: 24.84 / Avg: 25.02 / Max: 25.31Min: 24.51 / Avg: 24.61 / Max: 24.81Min: 24.47 / Avg: 24.51 / Max: 24.54Min: 24.49 / Avg: 24.55 / Max: 24.65

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 1 - Mode: Read WriteEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P400800120016002000SE +/- 1.53, N = 3SE +/- 10.89, N = 3SE +/- 5.85, N = 3SE +/- 2.88, N = 3SE +/- 7.22, N = 3SE +/- 2.71, N = 31805180118071784179417661. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 1 - Mode: Read WriteEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P30060090012001500Min: 1803.34 / Avg: 1805.1 / Max: 1808.14Min: 1784.12 / Avg: 1800.71 / Max: 1821.23Min: 1800.51 / Avg: 1806.58 / Max: 1818.27Min: 1778.98 / Avg: 1783.96 / Max: 1788.97Min: 1780.42 / Avg: 1794.04 / Max: 1805.02Min: 1760.96 / Avg: 1766.16 / Max: 1770.081. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
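A hedged sketch of the kind of cwebp invocation being timed is shown below; -q 100 with -lossless approximates the Quality 100, Lossless setting (adding -m 6 pushes toward the highest-compression variants), and the JPEG path stands in for the 6000x4000 sample image.

# Hedged sketch: time one lossless cwebp encode of a placeholder JPEG.
import subprocess
import time

start = time.perf_counter()
subprocess.run(["cwebp", "-q", "100", "-lossless", "-m", "6",
                "sample.jpg", "-o", "sample.webp"], check=True)
print(f"Encode time: {time.perf_counter() - start:.2f} s")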

OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100, LosslessEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P510152025SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 3SE +/- 0.05, N = 3SE +/- 0.12, N = 3SE +/- 0.09, N = 319.0419.0219.0219.0118.7719.201. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff
OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100, LosslessEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P510152025Min: 19.02 / Avg: 19.04 / Max: 19.08Min: 19 / Avg: 19.02 / Max: 19.05Min: 19.01 / Avg: 19.02 / Max: 19.03Min: 18.92 / Avg: 19.01 / Max: 19.08Min: 18.62 / Avg: 18.77 / Max: 19.01Min: 19.07 / Avg: 19.2 / Max: 19.371. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Decompression SpeedEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P2K4K6K8K10KSE +/- 28.09, N = 5SE +/- 22.43, N = 5SE +/- 8.87, N = 3SE +/- 50.17, N = 3SE +/- 99.24, N = 3SE +/- 45.57, N = 310630.910616.010685.710448.210473.310602.41. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Decompression SpeedEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P2K4K6K8K10KMin: 10579.5 / Avg: 10630.88 / Max: 10710.9Min: 10577.3 / Avg: 10616.04 / Max: 10671Min: 10668 / Avg: 10685.7 / Max: 10695.7Min: 10355.3 / Avg: 10448.2 / Max: 10527.5Min: 10305.6 / Avg: 10473.27 / Max: 10649.1Min: 10514.3 / Avg: 10602.43 / Max: 10666.61. (CC) gcc options: -O3

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 5EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P0.23330.46660.69990.93321.1665SE +/- 0.003, N = 3SE +/- 0.001, N = 3SE +/- 0.002, N = 3SE +/- 0.003, N = 3SE +/- 0.002, N = 3SE +/- 0.005, N = 31.0371.0361.0371.0361.0301.016
OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 5EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P246810Min: 1.03 / Avg: 1.04 / Max: 1.04Min: 1.04 / Avg: 1.04 / Max: 1.04Min: 1.03 / Avg: 1.04 / Max: 1.04Min: 1.03 / Avg: 1.04 / Max: 1.04Min: 1.03 / Avg: 1.03 / Max: 1.03Min: 1.01 / Avg: 1.02 / Max: 1.02

OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 6EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P0.31190.62380.93571.24761.5595SE +/- 0.004, N = 3SE +/- 0.002, N = 3SE +/- 0.003, N = 3SE +/- 0.002, N = 3SE +/- 0.007, N = 3SE +/- 0.005, N = 31.3851.3861.3851.3781.3691.358
OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 6EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P246810Min: 1.38 / Avg: 1.38 / Max: 1.39Min: 1.38 / Avg: 1.39 / Max: 1.39Min: 1.38 / Avg: 1.39 / Max: 1.39Min: 1.38 / Avg: 1.38 / Max: 1.38Min: 1.36 / Avg: 1.37 / Max: 1.38Min: 1.35 / Avg: 1.36 / Max: 1.37

Numpy Benchmark

This is a test to obtain the general Numpy performance. Learn more via the OpenBenchmarking.org test page.
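The benchmark aggregates many small NumPy kernels into a single score; as a minimal stand-in rather than the actual scoring code, the sketch below times a few representative operations on random arrays.

# Minimal sketch of timing representative NumPy kernels (not the real scorer).
import time

import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((2000, 2000))
b = rng.standard_normal((2000, 2000))

for name, fn in [("matmul", lambda: a @ b),
                 ("svd", lambda: np.linalg.svd(a, compute_uv=False)),
                 ("fft2", lambda: np.fft.fft2(a))]:
    start = time.perf_counter()
    fn()
    print(f"{name}: {time.perf_counter() - start:.3f} s")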

OpenBenchmarking.orgScore, More Is BetterNumpy BenchmarkEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P70140210280350SE +/- 0.44, N = 3SE +/- 2.42, N = 3SE +/- 0.87, N = 3SE +/- 2.70, N = 3SE +/- 0.52, N = 3SE +/- 1.06, N = 3324.13321.81318.47318.08322.49320.65
OpenBenchmarking.orgScore, More Is BetterNumpy BenchmarkEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P60120180240300Min: 323.39 / Avg: 324.13 / Max: 324.92Min: 317.26 / Avg: 321.81 / Max: 325.51Min: 317.57 / Avg: 318.47 / Max: 320.21Min: 314.71 / Avg: 318.08 / Max: 323.41Min: 321.82 / Avg: 322.49 / Max: 323.52Min: 319.27 / Avg: 320.65 / Max: 322.73

rav1e

Xiph rav1e is a Rust-written AV1 video encoder. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 1EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P0.07850.1570.23550.3140.3925SE +/- 0.001, N = 3SE +/- 0.001, N = 3SE +/- 0.001, N = 3SE +/- 0.002, N = 3SE +/- 0.001, N = 3SE +/- 0.001, N = 30.3490.3490.3460.3480.3460.343
OpenBenchmarking.orgFrames Per Second, More Is Betterrav1e 0.4 AlphaSpeed: 1EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P12345Min: 0.35 / Avg: 0.35 / Max: 0.35Min: 0.35 / Avg: 0.35 / Max: 0.35Min: 0.34 / Avg: 0.35 / Max: 0.35Min: 0.35 / Avg: 0.35 / Max: 0.35Min: 0.35 / Avg: 0.35 / Max: 0.35Min: 0.34 / Avg: 0.34 / Max: 0.35

PHPBench

PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgScore, More Is BetterPHPBench 0.8.1PHP Benchmark SuiteEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P120K240K360K480K600KSE +/- 5796.24, N = 3SE +/- 4364.31, N = 3SE +/- 3310.62, N = 3SE +/- 4510.18, N = 3SE +/- 5131.05, N = 3SE +/- 5940.23, N = 3568796567897573567572905577796572833
OpenBenchmarking.orgScore, More Is BetterPHPBench 0.8.1PHP Benchmark SuiteEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P100K200K300K400K500KMin: 559904 / Avg: 568795.67 / Max: 579683Min: 562047 / Avg: 567896.67 / Max: 576432Min: 567604 / Avg: 573567 / Max: 579041Min: 563904 / Avg: 572904.67 / Max: 577921Min: 567707 / Avg: 577795.67 / Max: 584467Min: 561161 / Avg: 572832.67 / Max: 580589

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Compression SpeedEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P2K4K6K8K10KSE +/- 29.36, N = 3SE +/- 22.56, N = 3SE +/- 17.93, N = 3SE +/- 55.92, N = 3SE +/- 79.54, N = 3SE +/- 45.18, N = 39777.439740.169780.389657.049652.009632.391. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 1 - Compression SpeedEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P2K4K6K8K10KMin: 9722.63 / Avg: 9777.43 / Max: 9823.08Min: 9710.44 / Avg: 9740.16 / Max: 9784.42Min: 9751.48 / Avg: 9780.38 / Max: 9813.23Min: 9566.8 / Avg: 9657.04 / Max: 9759.38Min: 9547.98 / Avg: 9652 / Max: 9808.23Min: 9545.13 / Avg: 9632.39 / Max: 9696.331. (CC) gcc options: -O3

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 1 - Mode: Read Write - Average LatencyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P0.11250.2250.33750.450.5625SE +/- 0.000, N = 3SE +/- 0.004, N = 3SE +/- 0.002, N = 3SE +/- 0.001, N = 3SE +/- 0.001, N = 3SE +/- 0.001, N = 30.4950.4930.4980.4960.5000.4981. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 1 - Mode: Read Write - Average LatencyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P246810Min: 0.5 / Avg: 0.5 / Max: 0.5Min: 0.49 / Avg: 0.49 / Max: 0.5Min: 0.49 / Avg: 0.5 / Max: 0.5Min: 0.49 / Avg: 0.5 / Max: 0.5Min: 0.5 / Avg: 0.5 / Max: 0.5Min: 0.5 / Avg: 0.5 / Max: 0.51. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 1 - Mode: Read WriteEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P400800120016002000SE +/- 0.78, N = 3SE +/- 15.86, N = 3SE +/- 7.88, N = 3SE +/- 5.35, N = 3SE +/- 6.32, N = 3SE +/- 6.30, N = 32019202720092016200120071. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 1 - Clients: 1 - Mode: Read WriteEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P400800120016002000Min: 2017.31 / Avg: 2018.73 / Max: 2020Min: 1995.18 / Avg: 2026.9 / Max: 2043.52Min: 1995.66 / Avg: 2008.96 / Max: 2022.92Min: 2009.22 / Avg: 2015.88 / Max: 2026.46Min: 1990.86 / Avg: 2001.42 / Max: 2012.72Min: 1995.05 / Avg: 2006.88 / Max: 2016.581. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: UASTC Level 2 + RDO Post-ProcessingEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P150300450600750SE +/- 2.14, N = 3SE +/- 0.27, N = 3SE +/- 1.41, N = 3SE +/- 0.66, N = 3SE +/- 1.53, N = 3SE +/- 1.44, N = 3694.87693.90695.00687.08689.47689.801. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterBasis Universal 1.12Settings: UASTC Level 2 + RDO Post-ProcessingEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P120240360480600Min: 692.38 / Avg: 694.87 / Max: 699.13Min: 693.4 / Avg: 693.9 / Max: 694.35Min: 692.18 / Avg: 695 / Max: 696.41Min: 686.2 / Avg: 687.08 / Max: 688.37Min: 686.41 / Avg: 689.47 / Max: 691.14Min: 687.28 / Avg: 689.8 / Max: 692.281. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P0.58411.16821.75232.33642.9205SE +/- 0.005, N = 3SE +/- 0.003, N = 3SE +/- 0.003, N = 3SE +/- 0.000, N = 3SE +/- 0.001, N = 3SE +/- 0.003, N = 32.5962.5852.5912.5732.5802.5831. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff
OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P246810Min: 2.59 / Avg: 2.6 / Max: 2.6Min: 2.58 / Avg: 2.58 / Max: 2.59Min: 2.58 / Avg: 2.59 / Max: 2.6Min: 2.57 / Avg: 2.57 / Max: 2.57Min: 2.58 / Avg: 2.58 / Max: 2.58Min: 2.58 / Avg: 2.58 / Max: 2.591. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100, Lossless, Highest CompressionEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P918273645SE +/- 0.09, N = 3SE +/- 0.08, N = 3SE +/- 0.09, N = 3SE +/- 0.05, N = 3SE +/- 0.02, N = 3SE +/- 0.13, N = 339.2839.1739.3439.0139.2739.351. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff
OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100, Lossless, Highest CompressionEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P816243240Min: 39.1 / Avg: 39.28 / Max: 39.38Min: 39.09 / Avg: 39.17 / Max: 39.33Min: 39.17 / Avg: 39.34 / Max: 39.49Min: 38.96 / Avg: 39.01 / Max: 39.11Min: 39.25 / Avg: 39.27 / Max: 39.32Min: 39.1 / Avg: 39.35 / Max: 39.571. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

Crafty

This is a performance test of Crafty, an advanced open-source chess engine. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgNodes Per Second, More Is BetterCrafty 25.2Elapsed TimeEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1.6M3.2M4.8M6.4M8MSE +/- 14432.27, N = 3SE +/- 15235.97, N = 3SE +/- 62574.81, N = 3SE +/- 1368.48, N = 3SE +/- 4184.78, N = 3SE +/- 15876.84, N = 37406309738291673592177417621741900273859931. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm
OpenBenchmarking.orgNodes Per Second, More Is BetterCrafty 25.2Elapsed TimeEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1.3M2.6M3.9M5.2M6.5MMin: 7383037 / Avg: 7406309.33 / Max: 7432733Min: 7361528 / Avg: 7382916.33 / Max: 7412407Min: 7235678 / Avg: 7359216.67 / Max: 7438320Min: 7415815 / Avg: 7417621 / Max: 7420305Min: 7410787 / Avg: 7419001.67 / Max: 7424497Min: 7354988 / Avg: 7385993.33 / Max: 74074311. (CC) gcc options: -pthread -lstdc++ -fprofile-use -lm

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: DefaultEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P0.36270.72541.08811.45081.8135SE +/- 0.003, N = 3SE +/- 0.004, N = 3SE +/- 0.004, N = 3SE +/- 0.002, N = 3SE +/- 0.001, N = 3SE +/- 0.001, N = 31.6111.6021.6121.6111.6041.6111. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff
OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: DefaultEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P246810Min: 1.6 / Avg: 1.61 / Max: 1.62Min: 1.6 / Avg: 1.6 / Max: 1.61Min: 1.61 / Avg: 1.61 / Max: 1.62Min: 1.61 / Avg: 1.61 / Max: 1.61Min: 1.6 / Avg: 1.6 / Max: 1.61Min: 1.61 / Avg: 1.61 / Max: 1.611. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterTNN 0.2.3Target: CPU - Model: SqueezeNet v1.1EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P60120180240300SE +/- 0.13, N = 3SE +/- 0.24, N = 3SE +/- 0.38, N = 3SE +/- 0.02, N = 3SE +/- 0.13, N = 3SE +/- 0.12, N = 3275.52275.70275.38274.33274.13274.45MIN: 274.32 / MAX: 281.55MIN: 274.17 / MAX: 289.54MIN: 273.95 / MAX: 287.65MIN: 273.5 / MAX: 275.08MIN: 273.18 / MAX: 274.92MIN: 273.58 / MAX: 288.271. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl
OpenBenchmarking.orgms, Fewer Is BetterTNN 0.2.3Target: CPU - Model: SqueezeNet v1.1EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P50100150200250Min: 275.39 / Avg: 275.52 / Max: 275.78Min: 275.36 / Avg: 275.7 / Max: 276.17Min: 274.62 / Avg: 275.38 / Max: 275.79Min: 274.29 / Avg: 274.33 / Max: 274.37Min: 273.99 / Avg: 274.13 / Max: 274.38Min: 274.21 / Avg: 274.45 / Max: 274.611. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100, Highest CompressionEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P246810SE +/- 0.004, N = 3SE +/- 0.021, N = 3SE +/- 0.001, N = 3SE +/- 0.006, N = 3SE +/- 0.009, N = 3SE +/- 0.027, N = 38.5388.5498.5488.5148.5238.5561. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff
OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100, Highest CompressionEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P3691215Min: 8.53 / Avg: 8.54 / Max: 8.55Min: 8.53 / Avg: 8.55 / Max: 8.59Min: 8.55 / Avg: 8.55 / Max: 8.55Min: 8.51 / Avg: 8.51 / Max: 8.53Min: 8.51 / Avg: 8.52 / Max: 8.54Min: 8.53 / Avg: 8.56 / Max: 8.611. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff

Hierarchical INTegration

This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgQUIPs, More Is BetterHierarchical INTegration 1.0Test: FLOATEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P70M140M210M280M350MSE +/- 739610.19, N = 3SE +/- 81095.29, N = 3SE +/- 256408.73, N = 3SE +/- 139034.34, N = 3SE +/- 767979.21, N = 3SE +/- 41543.71, N = 3321206770.06321880722.64322432634.13322779429.08322198292.40322561482.011. (CC) gcc options: -O3 -march=native -lm
OpenBenchmarking.orgQUIPs, More Is BetterHierarchical INTegration 1.0Test: FLOATEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P60M120M180M240M300MMin: 319737613.96 / Avg: 321206770.06 / Max: 322090529.13Min: 321790838.8 / Avg: 321880722.64 / Max: 322042583.37Min: 322139793.26 / Avg: 322432634.13 / Max: 322943635.8Min: 322565326.01 / Avg: 322779429.08 / Max: 323040141.12Min: 320662645.64 / Avg: 322198292.4 / Max: 322992910.51Min: 322488789.04 / Avg: 322561482.01 / Max: 322632677.31. (CC) gcc options: -O3 -march=native -lm

RNNoise

RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterRNNoise 2020-06-28EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P510152025SE +/- 0.00, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.03, N = 321.1421.1321.1721.1221.0821.141. (CC) gcc options: -O2 -pedantic -fvisibility=hidden
OpenBenchmarking.orgSeconds, Fewer Is BetterRNNoise 2020-06-28EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P510152025Min: 21.13 / Avg: 21.14 / Max: 21.15Min: 21.12 / Avg: 21.13 / Max: 21.14Min: 21.15 / Avg: 21.17 / Max: 21.2Min: 21.11 / Avg: 21.12 / Max: 21.14Min: 21.05 / Avg: 21.08 / Max: 21.1Min: 21.08 / Avg: 21.14 / Max: 21.191. (CC) gcc options: -O2 -pedantic -fvisibility=hidden

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.
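The synthesis step being timed amounts to feeding a text file to espeak-ng and writing a WAV; the sketch below is a hedged approximation with a placeholder input path rather than the profile's exact Project Gutenberg source file.

# Hedged sketch: time espeak-ng reading a text file to a WAV.
import subprocess
import time

start = time.perf_counter()
subprocess.run(["espeak-ng", "-f", "outline_of_science.txt", "-w", "out.wav"],
               check=True)
print(f"Synthesis time: {time.perf_counter() - start:.2f} s")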

OpenBenchmarking.orgSeconds, Fewer Is BettereSpeak-NG Speech Engine 20200907Text-To-Speech SynthesisEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P816243240SE +/- 0.11, N = 4SE +/- 0.11, N = 4SE +/- 0.09, N = 4SE +/- 0.08, N = 4SE +/- 0.06, N = 4SE +/- 0.15, N = 432.8032.7632.7132.8032.8432.831. (CC) gcc options: -O2 -std=c99
OpenBenchmarking.orgSeconds, Fewer Is BettereSpeak-NG Speech Engine 20200907Text-To-Speech SynthesisEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P714212835Min: 32.5 / Avg: 32.79 / Max: 33.05Min: 32.49 / Avg: 32.76 / Max: 32.97Min: 32.45 / Avg: 32.71 / Max: 32.86Min: 32.56 / Avg: 32.8 / Max: 32.92Min: 32.69 / Avg: 32.84 / Max: 32.93Min: 32.52 / Avg: 32.83 / Max: 33.131. (CC) gcc options: -O2 -std=c99

IndigoBench

This is a test of Indigo Renderer's IndigoBench benchmark. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: SupercarEPYC 7F72AMD 7F72AMD EPYC 7F723691215SE +/- 0.06, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 310.3110.3410.31
OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: SupercarEPYC 7F72AMD 7F72AMD EPYC 7F723691215Min: 10.23 / Avg: 10.31 / Max: 10.42Min: 10.31 / Avg: 10.34 / Max: 10.38Min: 10.28 / Avg: 10.31 / Max: 10.36

OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: BedroomEPYC 7F72AMD 7F72AMD EPYC 7F721.07442.14883.22324.29765.372SE +/- 0.029, N = 3SE +/- 0.013, N = 3SE +/- 0.006, N = 34.7694.7754.770
OpenBenchmarking.orgM samples/s, More Is BetterIndigoBench 4.4Acceleration: CPU - Scene: BedroomEPYC 7F72AMD 7F72AMD EPYC 7F72246810Min: 4.74 / Avg: 4.77 / Max: 4.83Min: 4.75 / Avg: 4.78 / Max: 4.79Min: 4.76 / Avg: 4.77 / Max: 4.78

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: yolov4-tinyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1122334455SE +/- 0.18, N = 3SE +/- 0.43, N = 3SE +/- 0.29, N = 3SE +/- 2.49, N = 9SE +/- 3.26, N = 9SE +/- 1.13, N = 1231.0531.3330.7545.7850.9249.62MIN: 28.99 / MAX: 116.2MIN: 28.52 / MAX: 71.5MIN: 28.27 / MAX: 134MIN: 35.06 / MAX: 1595.62MIN: 35.74 / MAX: 1673.98MIN: 38.65 / MAX: 235.851. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: yolov4-tinyEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1020304050Min: 30.76 / Avg: 31.05 / Max: 31.39Min: 30.48 / Avg: 31.33 / Max: 31.87Min: 30.29 / Avg: 30.75 / Max: 31.28Min: 40.72 / Avg: 45.78 / Max: 64.99Min: 41.01 / Avg: 50.92 / Max: 72.06Min: 43.6 / Avg: 49.62 / Max: 56.251. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: resnet50EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1326395265SE +/- 0.13, N = 3SE +/- 0.54, N = 3SE +/- 0.20, N = 3SE +/- 4.19, N = 9SE +/- 1.38, N = 9SE +/- 5.95, N = 1224.9524.5325.0346.8944.5556.48MIN: 22.93 / MAX: 118.48MIN: 22.55 / MAX: 91.3MIN: 23.18 / MAX: 107.4MIN: 33.82 / MAX: 2829.75MIN: 32.73 / MAX: 132.9MIN: 36.09 / MAX: 4251.11. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: resnet50EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1122334455Min: 24.71 / Avg: 24.95 / Max: 25.15Min: 23.71 / Avg: 24.53 / Max: 25.55Min: 24.83 / Avg: 25.03 / Max: 25.44Min: 37.94 / Avg: 46.89 / Max: 78.89Min: 37.99 / Avg: 44.55 / Max: 48.56Min: 39.62 / Avg: 56.48 / Max: 114.141. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: alexnetEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P3691215SE +/- 0.09, N = 3SE +/- 0.08, N = 3SE +/- 0.18, N = 3SE +/- 0.44, N = 9SE +/- 0.38, N = 9SE +/- 0.41, N = 1210.3810.5110.5211.0710.8912.14MIN: 9.08 / MAX: 21.34MIN: 9.04 / MAX: 16.98MIN: 9.15 / MAX: 90.44MIN: 8.87 / MAX: 549.19MIN: 8.79 / MAX: 91.38MIN: 8.73 / MAX: 262.911. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: alexnetEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P48121620Min: 10.19 / Avg: 10.38 / Max: 10.49Min: 10.35 / Avg: 10.51 / Max: 10.62Min: 10.19 / Avg: 10.52 / Max: 10.8Min: 9.44 / Avg: 11.07 / Max: 13.28Min: 9.42 / Avg: 10.89 / Max: 12.99Min: 10.15 / Avg: 12.14 / Max: 14.71. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: resnet18EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P714212835SE +/- 0.08, N = 3SE +/- 0.27, N = 3SE +/- 0.35, N = 3SE +/- 0.97, N = 9SE +/- 8.38, N = 9SE +/- 1.57, N = 1213.8213.8614.1021.2430.4824.80MIN: 12.13 / MAX: 30.86MIN: 11.95 / MAX: 34.76MIN: 12.23 / MAX: 36.59MIN: 17.2 / MAX: 114.52MIN: 16.86 / MAX: 1521.7MIN: 17.45 / MAX: 211.21. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: resnet18EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P714212835Min: 13.68 / Avg: 13.82 / Max: 13.96Min: 13.46 / Avg: 13.86 / Max: 14.37Min: 13.41 / Avg: 14.1 / Max: 14.55Min: 17.65 / Avg: 21.24 / Max: 25.43Min: 18.19 / Avg: 30.48 / Max: 97.04Min: 19.01 / Avg: 24.8 / Max: 35.511. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: vgg16EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1530456075SE +/- 0.23, N = 3SE +/- 0.88, N = 3SE +/- 0.64, N = 3SE +/- 2.16, N = 9SE +/- 3.59, N = 9SE +/- 2.81, N = 1236.4836.5637.4854.1955.5569.56MIN: 33.2 / MAX: 130.64MIN: 33.32 / MAX: 133.5MIN: 33.78 / MAX: 126.41MIN: 43.69 / MAX: 183.9MIN: 44.48 / MAX: 1203.03MIN: 45.69 / MAX: 825.71. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: vgg16EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1326395265Min: 36.02 / Avg: 36.48 / Max: 36.8Min: 35.38 / Avg: 36.56 / Max: 38.29Min: 36.2 / Avg: 37.48 / Max: 38.14Min: 46.7 / Avg: 54.19 / Max: 64.6Min: 48.17 / Avg: 55.55 / Max: 83.51Min: 56.84 / Avg: 69.56 / Max: 87.21. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: googlenetEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1326395265SE +/- 0.24, N = 3SE +/- 0.20, N = 3SE +/- 0.41, N = 3SE +/- 3.10, N = 9SE +/- 2.94, N = 9SE +/- 4.22, N = 1222.0721.5522.0142.9540.6557.46MIN: 20.38 / MAX: 109.3MIN: 20.16 / MAX: 115.96MIN: 20.09 / MAX: 121.23MIN: 29.59 / MAX: 214.92MIN: 29.01 / MAX: 203.47MIN: 30.69 / MAX: 401.151. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: googlenetEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1122334455Min: 21.71 / Avg: 22.07 / Max: 22.53Min: 21.27 / Avg: 21.55 / Max: 21.93Min: 21.43 / Avg: 22.01 / Max: 22.79Min: 30.84 / Avg: 42.95 / Max: 58.7Min: 29.94 / Avg: 40.65 / Max: 51.94Min: 32.67 / Avg: 57.46 / Max: 80.251. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: blazefaceEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P3691215SE +/- 0.04, N = 3SE +/- 0.02, N = 3SE +/- 0.05, N = 3SE +/- 0.47, N = 9SE +/- 0.39, N = 9SE +/- 0.66, N = 124.094.134.1611.4010.8413.16MIN: 3.49 / MAX: 10.46MIN: 3.53 / MAX: 17.37MIN: 3.55 / MAX: 15.23MIN: 8.9 / MAX: 30.45MIN: 7.35 / MAX: 56.45MIN: 7.98 / MAX: 86.771. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: blazefaceEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P48121620Min: 4.03 / Avg: 4.09 / Max: 4.16Min: 4.1 / Avg: 4.13 / Max: 4.17Min: 4.05 / Avg: 4.16 / Max: 4.22Min: 9.94 / Avg: 11.4 / Max: 14.61Min: 8.71 / Avg: 10.84 / Max: 12.2Min: 10.32 / Avg: 13.16 / Max: 16.811. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: efficientnet-b0EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1122334455SE +/- 0.25, N = 3SE +/- 0.09, N = 3SE +/- 0.12, N = 3SE +/- 2.94, N = 9SE +/- 8.35, N = 9SE +/- 3.27, N = 1212.2912.6112.5440.1847.1543.86MIN: 10.71 / MAX: 71.52MIN: 10.98 / MAX: 64.21MIN: 11.02 / MAX: 93.95MIN: 23.32 / MAX: 3005.7MIN: 24.1 / MAX: 4132.05MIN: 23.72 / MAX: 2862.861. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: efficientnet-b0EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1020304050Min: 11.79 / Avg: 12.29 / Max: 12.58Min: 12.45 / Avg: 12.61 / Max: 12.74Min: 12.35 / Avg: 12.54 / Max: 12.76Min: 28.2 / Avg: 40.18 / Max: 56.64Min: 30.78 / Avg: 47.15 / Max: 112.52Min: 28.68 / Avg: 43.86 / Max: 63.41. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: mnasnetEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P918273645SE +/- 0.33, N = 3SE +/- 0.24, N = 3SE +/- 0.18, N = 3SE +/- 6.48, N = 9SE +/- 2.05, N = 9SE +/- 4.23, N = 129.589.519.5036.4533.4539.12MIN: 8.12 / MAX: 77.12MIN: 8.23 / MAX: 82.21MIN: 8.28 / MAX: 26.52MIN: 17.61 / MAX: 3072.99MIN: 16.65 / MAX: 742.59MIN: 17.58 / MAX: 1785.751. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: mnasnetEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P816243240Min: 9.06 / Avg: 9.58 / Max: 10.2Min: 9.09 / Avg: 9.51 / Max: 9.93Min: 9.31 / Avg: 9.5 / Max: 9.87Min: 24.08 / Avg: 36.45 / Max: 87.55Min: 22.94 / Avg: 33.45 / Max: 46.24Min: 21.67 / Avg: 39.12 / Max: 68.841. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: shufflenet-v2EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P816243240SE +/- 0.49, N = 3SE +/- 0.14, N = 3SE +/- 0.16, N = 3SE +/- 4.58, N = 9SE +/- 1.30, N = 9SE +/- 1.47, N = 1210.4510.3510.4531.5329.7332.47MIN: 9 / MAX: 67.14MIN: 8.97 / MAX: 20.34MIN: 9.13 / MAX: 68.26MIN: 19.78 / MAX: 3234.51MIN: 18.72 / MAX: 99.3MIN: 19.24 / MAX: 158.481. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: shufflenet-v2EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P714212835Min: 9.84 / Avg: 10.45 / Max: 11.41Min: 10.1 / Avg: 10.35 / Max: 10.6Min: 10.14 / Avg: 10.45 / Max: 10.66Min: 24.57 / Avg: 31.53 / Max: 67.85Min: 25.6 / Avg: 29.73 / Max: 36.36Min: 24.41 / Avg: 32.47 / Max: 41.911. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU-v3-v3 - Model: mobilenet-v3EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P816243240SE +/- 0.20, N = 3SE +/- 0.11, N = 3SE +/- 0.09, N = 3SE +/- 2.41, N = 9SE +/- 0.87, N = 9SE +/- 1.57, N = 119.529.619.9431.7332.1833.50MIN: 8.4 / MAX: 19.2MIN: 8.63 / MAX: 62.41MIN: 8.64 / MAX: 101.46MIN: 18.95 / MAX: 2531.7MIN: 19.35 / MAX: 124.33MIN: 18.64 / MAX: 1819.261. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU-v3-v3 - Model: mobilenet-v3EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P714212835Min: 9.2 / Avg: 9.52 / Max: 9.9Min: 9.46 / Avg: 9.61 / Max: 9.83Min: 9.84 / Avg: 9.94 / Max: 10.12Min: 22.96 / Avg: 31.73 / Max: 43.61Min: 26.17 / Avg: 32.18 / Max: 36.16Min: 26.41 / Avg: 33.5 / Max: 42.611. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU-v2-v2 - Model: mobilenet-v2EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P816243240SE +/- 0.17, N = 3SE +/- 0.17, N = 3SE +/- 0.04, N = 3SE +/- 1.77, N = 9SE +/- 2.05, N = 9SE +/- 2.24, N = 1210.0410.0610.0833.1832.6334.70MIN: 8.78 / MAX: 67.63MIN: 8.8 / MAX: 95.72MIN: 8.99 / MAX: 20.35MIN: 16.98 / MAX: 1915.06MIN: 16.77 / MAX: 105.83MIN: 16.28 / MAX: 2990.341. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU-v2-v2 - Model: mobilenet-v2EPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P714212835Min: 9.73 / Avg: 10.04 / Max: 10.32Min: 9.8 / Avg: 10.06 / Max: 10.39Min: 10.02 / Avg: 10.08 / Max: 10.15Min: 27.17 / Avg: 33.18 / Max: 43.6Min: 21.86 / Avg: 32.63 / Max: 38.28Min: 23.33 / Avg: 34.7 / Max: 44.961. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: mobilenetEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1428425670SE +/- 0.66, N = 3SE +/- 0.49, N = 3SE +/- 0.56, N = 3SE +/- 3.70, N = 9SE +/- 1.97, N = 9SE +/- 4.13, N = 1221.4422.2622.2046.4647.4460.78MIN: 18.65 / MAX: 82.99MIN: 18.47 / MAX: 54.35MIN: 18.67 / MAX: 87.73MIN: 32.81 / MAX: 130.57MIN: 31.66 / MAX: 146.33MIN: 33.09 / MAX: 237.321. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: mobilenetEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1224364860Min: 20.75 / Avg: 21.44 / Max: 22.75Min: 21.3 / Avg: 22.26 / Max: 22.87Min: 21.24 / Avg: 22.2 / Max: 23.17Min: 35.1 / Avg: 46.46 / Max: 70.44Min: 36.44 / Avg: 47.44 / Max: 56.8Min: 38.82 / Avg: 60.78 / Max: 79.91. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: squeezenetEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1530456075SE +/- 0.28, N = 3SE +/- 0.17, N = 3SE +/- 0.29, N = 3SE +/- 32.56, N = 9SE +/- 3.29, N = 9SE +/- 1.31, N = 1221.2621.1021.8768.4339.1643.06MIN: 18.08 / MAX: 111.42MIN: 18.23 / MAX: 105.85MIN: 18.11 / MAX: 113.58MIN: 29.51 / MAX: 3576.55MIN: 29.86 / MAX: 3350.4MIN: 31.76 / MAX: 540.591. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.orgms, Fewer Is BetterNCNN 20200916Target: CPU - Model: squeezenetEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P1326395265Min: 20.89 / Avg: 21.26 / Max: 21.8Min: 20.8 / Avg: 21.1 / Max: 21.39Min: 21.31 / Avg: 21.87 / Max: 22.31Min: 32.14 / Avg: 68.43 / Max: 328.8Min: 32.44 / Avg: 39.16 / Max: 64Min: 37.64 / Avg: 43.06 / Max: 51.751. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Redis

Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.
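One way to exercise the same command mix against a local server is Redis' bundled redis-benchmark client; the hedged sketch below uses an arbitrary request count and output option rather than the test profile's exact parameters.

# Hedged sketch: run redis-benchmark for the commands graphed below.
import subprocess

subprocess.run(["redis-benchmark",
                "-t", "set,get,lpush,sadd,lpop",  # command mix
                "-n", "1000000",                  # total requests (arbitrary)
                "-q"],                            # quiet, summary output only
               check=True)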

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SETEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P300K600K900K1200K1500KSE +/- 37419.69, N = 12SE +/- 35684.24, N = 15SE +/- 27970.44, N = 15SE +/- 40018.52, N = 15SE +/- 38603.79, N = 15SE +/- 22454.24, N = 151495615.591500640.311475849.681498592.431580536.811519952.931. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SETEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P300K600K900K1200K1500KMin: 1358739.12 / Avg: 1495615.59 / Max: 1727171Min: 1351394.62 / Avg: 1500640.31 / Max: 1706539.12Min: 1293661 / Avg: 1475849.68 / Max: 1686340.62Min: 1193546.5 / Avg: 1498592.43 / Max: 1721280.62Min: 1348054 / Avg: 1580536.81 / Max: 1733102.12Min: 1364736.75 / Avg: 1519952.93 / Max: 1645105.381. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: GETEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P400K800K1200K1600K2000KSE +/- 51555.46, N = 15SE +/- 29329.60, N = 15SE +/- 51919.80, N = 15SE +/- 53123.87, N = 12SE +/- 39950.00, N = 15SE +/- 37005.83, N = 152038392.691814383.801897703.881888968.341996216.741852789.071. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: GETEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P400K800K1200K1600K2000KMin: 1789023.25 / Avg: 2038392.69 / Max: 2359849Min: 1650270.62 / Avg: 1814383.8 / Max: 2020460.62Min: 1637080.25 / Avg: 1897703.88 / Max: 2252252.25Min: 1675309.88 / Avg: 1888968.34 / Max: 2233000Min: 1706757.62 / Avg: 1996216.74 / Max: 2217649.75Min: 1698118.88 / Avg: 1852789.07 / Max: 22375661. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: LPUSHEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P300K600K900K1200K1500KSE +/- 23366.84, N = 15SE +/- 24808.96, N = 12SE +/- 23963.24, N = 15SE +/- 26859.21, N = 15SE +/- 28243.04, N = 12SE +/- 25705.74, N = 151295998.651332610.561317592.971365777.401368468.711345722.501. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: LPUSHEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P200K400K600K800K1000KMin: 1176470.5 / Avg: 1295998.65 / Max: 1475020.75Min: 1191933.25 / Avg: 1332610.56 / Max: 1449460.88Min: 1228697.75 / Avg: 1317592.97 / Max: 1491028.25Min: 1145475.38 / Avg: 1365777.4 / Max: 1490837.5Min: 1196401.88 / Avg: 1368468.71 / Max: 1497772.5Min: 1166861.12 / Avg: 1345722.5 / Max: 1492632.751. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SADDEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P400K800K1200K1600K2000KSE +/- 37861.09, N = 12SE +/- 39190.67, N = 15SE +/- 39087.64, N = 15SE +/- 40431.61, N = 15SE +/- 39991.80, N = 15SE +/- 46378.46, N = 151658090.041757206.721723521.291818648.351814208.701784823.371. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: SADDEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P300K600K900K1200K1500KMin: 1490456 / Avg: 1658090.04 / Max: 1872899Min: 1508295.62 / Avg: 1757206.72 / Max: 2012201.12Min: 1538609.25 / Avg: 1723521.29 / Max: 1961286.25Min: 1597444.12 / Avg: 1818648.35 / Max: 2004072.12Min: 1565345.88 / Avg: 1814208.7 / Max: 2016258.12Min: 1552894.38 / Avg: 1784823.37 / Max: 1992478.251. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: LPOPEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P500K1000K1500K2000K2500KSE +/- 44675.79, N = 15SE +/- 22561.14, N = 15SE +/- 61951.41, N = 13SE +/- 28427.12, N = 15SE +/- 38584.49, N = 15SE +/- 95529.13, N = 152134890.151376509.231379285.461388985.481380069.401508834.921. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.orgRequests Per Second, More Is BetterRedis 6.0.9Test: LPOPEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P400K800K1200K1600K2000KMin: 1938232.62 / Avg: 2134890.15 / Max: 2475247.5Min: 1261034.12 / Avg: 1376509.23 / Max: 1525024.38Min: 1213864.12 / Avg: 1379285.46 / Max: 2053716.62Min: 1193622.88 / Avg: 1388985.48 / Max: 1517450.75Min: 1076529.62 / Avg: 1380069.4 / Max: 1526717.62Min: 1082355 / Avg: 1508834.92 / Max: 22132391. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

oneDNN

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPUEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P2004006008001000SE +/- 6.19, N = 3SE +/- 3.80, N = 3SE +/- 4.03, N = 3SE +/- 20.53, N = 15SE +/- 18.50, N = 15SE +/- 14.28, N = 15931.28924.02932.49839.85822.77882.06MIN: 898.52MIN: 894.61MIN: 897.7MIN: 694.65MIN: 694.28MIN: 758.651. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPUEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P160320480640800Min: 921.25 / Avg: 931.27 / Max: 942.59Min: 920.19 / Avg: 924.02 / Max: 931.62Min: 925.34 / Avg: 932.49 / Max: 939.27Min: 749.01 / Avg: 839.85 / Max: 1018.38Min: 736.79 / Avg: 822.77 / Max: 975.62Min: 823.24 / Avg: 882.06 / Max: 1002.051. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPUEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P30060090012001500SE +/- 7.42, N = 3SE +/- 6.46, N = 3SE +/- 6.62, N = 3SE +/- 28.74, N = 13SE +/- 29.40, N = 15SE +/- 20.83, N = 151613.911610.951605.901201.411182.171334.58MIN: 1557.21MIN: 1556.69MIN: 1559.69MIN: 1072.26MIN: 1076.03MIN: 1172.821. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPUEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P30060090012001500Min: 1603.73 / Avg: 1613.91 / Max: 1628.34Min: 1598.24 / Avg: 1610.95 / Max: 1619.3Min: 1593.37 / Avg: 1605.9 / Max: 1615.88Min: 1106.55 / Avg: 1201.41 / Max: 1503.23Min: 1107.35 / Avg: 1182.17 / Max: 1523.07Min: 1261.41 / Avg: 1334.58 / Max: 1542.191. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPUEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P2004006008001000SE +/- 1.31, N = 3SE +/- 0.29, N = 3SE +/- 7.20, N = 3SE +/- 14.28, N = 15SE +/- 16.69, N = 15SE +/- 15.65, N = 15925.98930.71929.89823.75848.25907.07MIN: 896.94MIN: 898.24MIN: 896.55MIN: 700.39MIN: 699.46MIN: 758.941. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.0Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPUEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P160320480640800Min: 923.47 / Avg: 925.98 / Max: 927.91Min: 930.22 / Avg: 930.71 / Max: 931.22Min: 922.24 / Avg: 929.89 / Max: 944.27Min: 750.46 / Avg: 823.74 / Max: 945.54Min: 739.45 / Avg: 848.25 / Max: 942.11Min: 818.45 / Avg: 907.07 / Max: 1007.61. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files, though they can be modified. Learn more via the OpenBenchmarking.org test page.
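HPCC is launched under MPI and reads its problem definition from hpccinf.txt in the working directory, writing results to hpccoutf.txt; a minimal hedged launch sketch follows, with the rank count chosen here to match the 2P configuration's 48 cores and to be adjusted for the machine at hand.

# Hedged sketch: launch HPCC under MPI; hpccinf.txt must already be present.
import subprocess

subprocess.run(["mpirun", "-np", "48", "hpcc"], check=True)
# Results are written to hpccoutf.txt in the working directory.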

OpenBenchmarking.orgMB/s, More Is BetterHPC Challenge 1.5.0Test / Class: Max Ping Pong BandwidthEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P2K4K6K8K10KSE +/- 564.95, N = 3SE +/- 760.03, N = 3SE +/- 1135.29, N = 3SE +/- 491.47, N = 3SE +/- 437.93, N = 3SE +/- 296.59, N = 39566.5010110.2610456.2911661.789370.108695.061. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3
OpenBenchmarking.orgMB/s, More Is BetterHPC Challenge 1.5.0Test / Class: Max Ping Pong BandwidthEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P2K4K6K8K10KMin: 8667.35 / Avg: 9566.5 / Max: 10608.64Min: 8702.82 / Avg: 10110.26 / Max: 11311.26Min: 8498.65 / Avg: 10456.29 / Max: 12431.28Min: 10682.36 / Avg: 11661.78 / Max: 12223.45Min: 8581.08 / Avg: 9370.1 / Max: 10093.9Min: 8135.84 / Avg: 8695.06 / Max: 9145.971. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3

OpenBenchmarking.orgGB/s, More Is BetterHPC Challenge 1.5.0Test / Class: G-PtransEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P48121620SE +/- 0.43146, N = 3SE +/- 0.42362, N = 3SE +/- 0.32995, N = 3SE +/- 0.20382, N = 3SE +/- 0.32220, N = 3SE +/- 0.83051, N = 37.757398.175538.2855515.9182316.4567316.123571. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3
OpenBenchmarking.orgGB/s, More Is BetterHPC Challenge 1.5.0Test / Class: G-PtransEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P48121620Min: 7.29 / Avg: 7.76 / Max: 8.62Min: 7.33 / Avg: 8.18 / Max: 8.61Min: 7.63 / Avg: 8.29 / Max: 8.63Min: 15.64 / Avg: 15.92 / Max: 16.31Min: 15.92 / Avg: 16.46 / Max: 17.04Min: 14.46 / Avg: 16.12 / Max: 17.011. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3

OpenBenchmarking.orgGFLOPS, More Is BetterHPC Challenge 1.5.0Test / Class: G-FfteEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P510152025SE +/- 1.15527, N = 3SE +/- 0.56601, N = 3SE +/- 0.73072, N = 3SE +/- 0.18996, N = 3SE +/- 0.25898, N = 3SE +/- 0.86449, N = 39.872388.431219.4189520.8577020.6295720.079431. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3
OpenBenchmarking.orgGFLOPS, More Is BetterHPC Challenge 1.5.0Test / Class: G-FfteEPYC 7F72AMD 7F72AMD EPYC 7F72AMD EPYC 7F72 2PEPYC 7F72 2P7F72 2P510152025Min: 7.56 / Avg: 9.87 / Max: 11.05Min: 7.51 / Avg: 8.43 / Max: 9.46Min: 8.31 / Avg: 9.42 / Max: 10.8Min: 20.51 / Avg: 20.86 / Max: 21.16Min: 20.12 / Avg: 20.63 / Max: 20.96Min: 18.35 / Avg: 20.08 / Max: 20.971. (CC) gcc options: -lblas -lm -pthread -lmpi -fomit-frame-pointer -funroll-loops2. ATLAS + Open MPI 4.0.3

156 Results Shown

oneDNN
HPC Challenge
oneDNN
LevelDB:
  Rand Delete
  Seq Fill
  Overwrite
  Rand Fill
oneDNN
LevelDB:
  Seek Rand
  Rand Read
  Hot Read
PostgreSQL pgbench:
  1 - 100 - Read Only
  1 - 100 - Read Only - Average Latency
High Performance Conjugate Gradient
NAMD
LAMMPS Molecular Dynamics Simulator
BRL-CAD
GROMACS
Stockfish
asmFish
HPC Challenge
FFTE
yquake2
LevelDB:
  Seq Fill
  Overwrite
oneDNN
LevelDB
oneDNN
LAMMPS Molecular Dynamics Simulator
Kvazaar
TensorFlow Lite
Kvazaar
Basis Universal
TensorFlow Lite
Kvazaar
Timed Linux Kernel Compilation
oneDNN
TensorFlow Lite
Kvazaar:
  Bosphorus 4K - Slow
  Bosphorus 4K - Medium
TensorFlow Lite
LevelDB
oneDNN:
  IP Shapes 1D - u8s8f32 - CPU
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
HPC Challenge
KeyDB
Kvazaar
AI Benchmark Alpha
TensorFlow Lite
Timed LLVM Compilation
TensorFlow Lite
oneDNN
LevelDB
oneDNN
Mlpack Benchmark
PostgreSQL pgbench:
  100 - 100 - Read Only - Average Latency
  100 - 100 - Read Only
Basis Universal
oneDNN
Timed HMMer Search
AI Benchmark Alpha
Kvazaar
InfluxDB
Kvazaar
AI Benchmark Alpha
x265
Hugin
HPC Challenge
PostgreSQL pgbench:
  100 - 50 - Read Only - Average Latency
  100 - 50 - Read Only
oneDNN:
  IP Shapes 1D - f32 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
LibRaw
x265
Mlpack Benchmark
oneDNN
TNN
HPC Challenge
x264
HPC Challenge
PostgreSQL pgbench:
  100 - 50 - Read Write - Average Latency
  100 - 50 - Read Write
  1 - 50 - Read Write - Average Latency
  1 - 50 - Read Write
oneDNN
PostgreSQL pgbench:
  100 - 100 - Read Write - Average Latency
  100 - 100 - Read Write
  1 - 100 - Read Write
  1 - 100 - Read Write - Average Latency
  1 - 50 - Read Only
  1 - 50 - Read Only - Average Latency
  100 - 1 - Read Only
Basis Universal
PostgreSQL pgbench
Timed Clash Compilation
Mlpack Benchmark
rav1e
InfluxDB
LZ4 Compression:
  9 - Compression Speed
  3 - Compression Speed
Basis Universal
PostgreSQL pgbench
LZ4 Compression
BYTE Unix Benchmark
PostgreSQL pgbench
LZ4 Compression
PostgreSQL pgbench
Mlpack Benchmark
PostgreSQL pgbench
WebP Image Encode
LZ4 Compression
rav1e:
  5
  6
Numpy Benchmark
rav1e
PHPBench
LZ4 Compression
PostgreSQL pgbench:
  1 - 1 - Read Write - Average Latency
  1 - 1 - Read Write
Basis Universal
WebP Image Encode:
  Quality 100
  Quality 100, Lossless, Highest Compression
Crafty
WebP Image Encode
TNN
WebP Image Encode
Hierarchical INTegration
RNNoise
eSpeak-NG Speech Engine
IndigoBench:
  CPU - Supercar
  CPU - Bedroom
NCNN:
  CPU - yolov4-tiny
  CPU - resnet50
  CPU - alexnet
  CPU - resnet18
  CPU - vgg16
  CPU - googlenet
  CPU - blazeface
  CPU - efficientnet-b0
  CPU - mnasnet
  CPU - shufflenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU-v2-v2 - mobilenet-v2
  CPU - mobilenet
  CPU - squeezenet
Redis:
  SET
  GET
  LPUSH
  SADD
  LPOP
oneDNN:
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  Recurrent Neural Network Training - u8s8f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
HPC Challenge:
  Max Ping Pong Bandwidth
  G-Ptrans
  G-Ffte