EPYC 7601 April

2 x AMD EPYC 7601 32-Core testing with a Dell 02MJ3T (1.2.5 BIOS) and Matrox G200eW3 on Ubuntu 19.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2204154-NE-EPYC7601A82
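
A minimal sketch of that workflow, assuming a Debian/Ubuntu-style host and the distribution's phoronix-test-suite package (the install step is an assumption; the suite can also be run from the upstream tarball or git checkout):

# Install the Phoronix Test Suite (install method is an assumption)
sudo apt-get install phoronix-test-suite
# Re-run this result file's tests locally and compare against the posted numbers
phoronix-test-suite benchmark 2204154-NE-EPYC7601A82
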
This result file includes tests from the following categories: AV1 (3 tests), C/C++ Compiler Tests (3 tests), CPU Massive (2 tests), Creator Workloads (5 tests), Encoding (3 tests), Common Kernel Benchmarks (2 tests), Multi-Core (7 tests), Intel oneAPI (2 tests), and Video Encoding (3 tests).

Run Management

Result Identifier (Date Run): Test Duration
A (April 15 2022): 1 Hour, 20 Minutes
B (April 15 2022): 1 Hour, 21 Minutes
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core (April 15 2022): 1 Hour, 20 Minutes
D (April 15 2022): 9 Minutes



EPYC 7601 April Benchmarks

Processor: 2 x AMD EPYC 7601 32-Core (64 Cores / 128 Threads)
Motherboard: Dell 02MJ3T (1.2.5 BIOS)
Chipset: AMD 17h
Memory: 512GB
Disk: 280GB INTEL SSDPED1D280GA + 120GB INTEL SSDSCKJB120G7R + 12 x 500GB Samsung SSD 860
Graphics: Matrox G200eW3
Monitor: VE228
Network: 2 x Broadcom BCM57416 NetXtreme-E Dual-Media 10G RDMA + 2 x Broadcom NetXtreme BCM5720 2-port PCIe
OS: Ubuntu 19.10
Kernel: 5.9.0-050900rc6daily20200922-generic (x86_64) 20200921
Desktop: GNOME Shell 3.34.1
Display Server: X Server 1.20.5
Compiler: GCC 9.2.1 20191008
File-System: ext4
Screen Resolution: 1600x1200

System Logs
- Transparent Huge Pages: madvise
- GCC configured with: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- CPU Microcode: 0x8001227
- OpenJDK Runtime Environment (build 11.0.7+10-post-Ubuntu-2ubuntu219.10)
- Security mitigations: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: disabled RSB filling + srbds: Not affected + tsx_async_abort: Not affected

Result Overview (relative performance of A, B, Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core, and D): perf-bench Memset 1MB, perf-bench Epoll Wait, perf-bench Memcpy 1MB, perf-bench Futex Lock-Pi, libgav1 Summer Nature 1080p, libgav1 Summer Nature 4K, perf-bench Sched Pipe, libgav1 Chimera 1080p, perf-bench Syscall Basic, perf-bench Futex Hash.

Per-test results for A, B, Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core, and (where run) D follow below.

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.
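
For reference, benchdnn is the benchmarking harness bundled with oneDNN, and a hand-run invocation in the same spirit might look like the sketch below. The driver and flag spellings are assumptions based on benchdnn's --rnn driver and --mode=P performance mode and may differ between oneDNN releases; check benchdnn --help for the exact syntax.

# Build oneDNN with its tests enabled, then run the RNN driver in performance mode (flags are assumptions)
./benchdnn --rnn --mode=P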

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
A: 4007.27 (MIN: 3694.96)
B: 3592.01 (MIN: 3496.51)
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 5788.80 (MIN: 4369.26)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.
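
The individual perf-bench results below correspond to subcommands of the kernel's perf tool, roughly as follows (availability of each subcommand depends on the perf version in use):

# Memory, futex, scheduler, epoll, and syscall micro-benchmarks shipped with perf
perf bench mem memset
perf bench mem memcpy
perf bench epoll wait
perf bench futex hash
perf bench futex lock-pi
perf bench sched pipe
perf bench syscall basic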

perf-bench - Benchmark: Memset 1MB (GB/sec, More Is Better)
A: 42.34
B: 42.21
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 31.04
D: 39.59
1. (CC) gcc options: -pthread -shared -lunwind-x86_64 -lunwind -llzma -Xlinker -export-dynamic -O6 -ggdb3 -funwind-tables -std=gnu99 -fPIC -lnuma

OSPray

OSPray 2.9 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, More Is Better)
A: 4.71175
B: 3.45976
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 3.62541

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
A: 4030.63 (MIN: 3806.1)
B: 4608.74 (MIN: 4510.66)
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 5017.70 (MIN: 4503.21)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

Parallel BZIP2 Compression

This test measures the time needed to compress a file (FreeBSD-13.0-RELEASE-amd64-memstick.img) using Parallel BZIP2 compression. Learn more via the OpenBenchmarking.org test page.
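
A comparable manual run, assuming pbzip2 is installed and the FreeBSD memstick image is in the working directory (the flag choices are illustrative):

# Compress with one worker per CPU thread and keep the original file
pbzip2 -p$(nproc) -k FreeBSD-13.0-RELEASE-amd64-memstick.img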

Parallel BZIP2 Compression 1.1.13 - FreeBSD-13.0-RELEASE-amd64-memstick.img Compression (Seconds, Fewer Is Better)
A: 4.182
B: 3.731
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 3.400
1. (CXX) g++ options: -O2 -pthread -lbz2 -lpthread

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
A: 4299.15 (MIN: 3231.93)
B: 4977.74 (MIN: 4343.18)
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 4047.07 (MIN: 3885.81)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
A: 6.07700 (MIN: 5.83)
B: 7.26721 (MIN: 7.04)
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 5.93187 (MIN: 5.72)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

OSPray

OSPray 2.9 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, More Is Better)
A: 4.01990
B: 4.84942
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 4.45680

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
A: 3764.30 (MIN: 3534.14)
B: 4312.13 (MIN: 3958.06)
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 4508.10 (MIN: 3304.33)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Epoll Wait (ops/sec, More Is Better)
A: 1671
B: 1837
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 1990
D: 1725
1. (CC) gcc options: -pthread -shared -lunwind-x86_64 -lunwind -llzma -Xlinker -export-dynamic -O6 -ggdb3 -funwind-tables -std=gnu99 -fPIC -lnuma

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
A: 73.69
B: 73.71
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 85.88
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
A: 3.00532 (MIN: 2.34)
B: 3.10009 (MIN: 2.89)
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 2.67884 (MIN: 2.3)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

oneDNN 2.6 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
A: 4343.38 (MIN: 3962.55)
B: 3970.69 (MIN: 3819.96)
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 4546.91 (MIN: 4476.54)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.
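
RocksDB ships a db_bench utility that drives workloads like the ones graphed here; a rough sketch of the read-while-writing case follows. The key count and thread count are illustrative assumptions, not the test profile's exact parameters.

# Run db_bench's readwhilewriting workload; tune --num and --threads to the machine
./db_bench --benchmarks=readwhilewriting --num=1000000 --threads=64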

Facebook RocksDB 7.0.1 - Test: Read While Writing (Op/s, More Is Better)
A: 5230689
B: 5605800
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 5862410
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
A: 3.73511 (MIN: 3.1)
B: 3.76693 (MIN: 3.11)
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 3.37140 (MIN: 2.3)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
A: 56.98
B: 54.76
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 60.93
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 3.3 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better)
A: 0.10
B: 0.09
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 0.10
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better)
A: 4989.87 (MIN: 4646.19)
B: 4633.73 (MIN: 4338.61)
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 5131.08 (MIN: 3581.65)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
A: 15.40
B: 14.07
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 14.22
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 3.3 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
A: 29.49
B: 27.23
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 29.71
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 3.3 - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
A: 25.14
B: 23.14
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 23.66
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 3.3 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
A: 76.05
B: 77.87
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 71.79
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
A: 29.25 (MIN: 26.26)
B: 29.09 (MIN: 26.43)
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 27.17 (MIN: 20.81)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
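
A comparable manual avifenc run for the "Speed 6, Lossless" case might look like the following; the input and output file names are placeholders, and the flag spellings should be checked against avifenc --help for the installed libavif version.

# Encode a JPEG source to lossless AVIF at encoder speed 6
avifenc --speed 6 --lossless input.jpg output.avif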

libavif avifenc 0.10 - Encoder Speed: 6, Lossless (Seconds, Fewer Is Better)
A: 11.61
B: 11.66
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 10.93
1. (CXX) g++ options: -O3 -fPIC -lm

libavif avifenc 0.10 - Encoder Speed: 6 (Seconds, Fewer Is Better)
A: 6.838
B: 7.251
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 7.055
1. (CXX) g++ options: -O3 -fPIC -lm

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Memcpy 1MB (GB/sec, More Is Better)
A: 14.52
B: 13.97
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 14.80
D: 14.04
1. (CC) gcc options: -pthread -shared -lunwind-x86_64 -lunwind -llzma -Xlinker -export-dynamic -O6 -ggdb3 -funwind-tables -std=gnu99 -fPIC -lnuma

perf-bench - Benchmark: Futex Lock-Pi (ops/sec, More Is Better)
A: 77
B: 78
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 77
D: 81
1. (CC) gcc options: -pthread -shared -lunwind-x86_64 -lunwind -llzma -Xlinker -export-dynamic -O6 -ggdb3 -funwind-tables -std=gnu99 -fPIC -lnuma

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better)
A: 3.22
B: 3.08
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 3.11
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
A: 3.54771 (MIN: 3.01)
B: 3.69873 (MIN: 3.17)
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 3.61293 (MIN: 3.11)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

libgav1

Libgav1 is an AV1 decoder developed by Google for AV1 profile 0/1 compliance. Learn more via the OpenBenchmarking.org test page.
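
Decoding is exercised through the gav1_decode example utility built alongside libgav1; a rough invocation is sketched below. The --threads flag and the input file name are assumptions for illustration only.

# Decode an AV1 bitstream with libgav1's example decoder using many threads
gav1_decode --threads 128 summer_nature_1080p.ivf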

libgav1 0.17 - Video Input: Summer Nature 1080p (FPS, More Is Better)
A: 73.28
B: 73.75
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 72.64
D: 75.65
1. (CXX) g++ options: -O3 -lpthread -lrt

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
A: 5.11
B: 5.23
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 5.30
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
A: 3.21961 (MIN: 2.42)
B: 3.14912 (MIN: 2.72)
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 3.11808 (MIN: 2.38)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
A: 32.54
B: 33.28
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 32.24
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

libgav1

Libgav1 is an AV1 decoder developed by Google for AV1 profile 0/1 compliance. Learn more via the OpenBenchmarking.org test page.

libgav1 0.17 - Video Input: Summer Nature 4K (FPS, More Is Better)
A: 26.08
B: 25.34
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 26.01
D: 25.77
1. (CXX) g++ options: -O3 -lpthread -lrt

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
A: 20.60 (MIN: 18.58)
B: 20.67 (MIN: 18.39)
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 21.18 (MIN: 18.66)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better)
A: 5.52
B: 5.53
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 5.67
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

OSPray

OSPray 2.9 - Benchmark: particle_volume/pathtracer/real_time (Items Per Second, More Is Better)
A: 129.95
B: 133.05
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 132.01

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.
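
In effect this times a parallel source build along these lines (a sketch only, not the exact commands the test profile issues):

# Configure and build MPlayer using all available CPU threads
./configure && make -j$(nproc)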

Timed MPlayer Compilation 1.5 - Time To Compile (Seconds, Fewer Is Better)
A: 14.37
B: 14.71
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 14.52

libgav1

Libgav1 is an AV1 decoder developed by Google for AV1 profile 0/1 compliance. Learn more via the OpenBenchmarking.org test page.

libgav1 0.17 - Video Input: Chimera 1080p 10-bit (FPS, More Is Better)
A: 31.30
B: 30.63
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 31.04
1. (CXX) g++ options: -O3 -lpthread -lrt

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Sched Pipe (ops/sec, More Is Better)
A: 248679
B: 249673
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 244648
D: 246843
1. (CC) gcc options: -pthread -shared -lunwind-x86_64 -lunwind -llzma -Xlinker -export-dynamic -O6 -ggdb3 -funwind-tables -std=gnu99 -fPIC -lnuma

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
A: 19.93 (MIN: 18.94)
B: 20.00 (MIN: 18.92)
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 20.29 (MIN: 18.99)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 10, Lossless (Seconds, Fewer Is Better)
A: 7.810
B: 7.871
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 7.737
1. (CXX) g++ options: -O3 -fPIC -lm

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 7.0.1 - Test: Update Random (Op/s, More Is Better)
A: 181033
B: 181209
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 184054
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

libgav1

Libgav1 is an AV1 decoder developed by Google for AV1 profile 0/1 compliance. Learn more via the OpenBenchmarking.org test page.

libgav1 0.17 - Video Input: Chimera 1080p (FPS, More Is Better)
A: 62.19
B: 63.02
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 62.15
D: 62.00
1. (CXX) g++ options: -O3 -lpthread -lrt

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 7.0.1 - Test: Read Random Write Random (Op/s, More Is Better)
A: 1282665
B: 1263446
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 1271041
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 0 (Seconds, Fewer Is Better)
A: 144.05
B: 142.83
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 144.77
1. (CXX) g++ options: -O3 -fPIC -lm

OSPray

OSPray 2.9 - Benchmark: particle_volume/ao/real_time (Items Per Second, More Is Better)
A: 44.10
B: 43.67
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 44.09

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
A: 4.32827 (MIN: 4.03)
B: 4.31898 (MIN: 4.01)
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 4.35885 (MIN: 4.05)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10 - Encoder Speed: 2 (Seconds, Fewer Is Better)
A: 83.11
B: 83.48
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 82.90
1. (CXX) g++ options: -O3 -fPIC -lm

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, More Is Better)
A: 9.40
B: 9.42
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 9.46
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

OSPray

OSPray 2.9 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, More Is Better)
A: 8.32738
B: 8.28157
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 8.30352

OSPray 2.9 - Benchmark: particle_volume/scivis/real_time (Items Per Second, More Is Better)
A: 42.92
B: 42.71
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 42.70

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
A: 2.74616 (MIN: 2.36)
B: 2.75346 (MIN: 2.38)
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 2.73956 (MIN: 2.34)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

oneDNN 2.6 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
A: 19.95 (MIN: 19.46)
B: 19.98 (MIN: 19.44)
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 20.05 (MIN: 19.5)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 7.0.1 - Test: Random Read (Op/s, More Is Better)
A: 203509375
B: 203320851
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 202660115
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better)
A: 2.52236 (MIN: 2.44)
B: 2.52938 (MIN: 2.43)
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 2.52239 (MIN: 2.42)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -lpthread -ldl

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
A: 5.03
B: 5.04
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 5.04
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Syscall Basic (ops/sec, More Is Better)
A: 13853460
B: 13830566
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 13855172
D: 13843604
1. (CC) gcc options: -pthread -shared -lunwind-x86_64 -lunwind -llzma -Xlinker -export-dynamic -O6 -ggdb3 -funwind-tables -std=gnu99 -fPIC -lnuma

Java JMH

This very basic test profile runs the stock benchmark of the Java JMH benchmark via Maven. Learn more via the OpenBenchmarking.org test page.
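
A minimal sketch of that stock JMH-via-Maven flow, assuming Maven and a JDK are installed; the groupId/artifactId values below are placeholders:

# Generate the standard JMH sample project from the official archetype
mvn archetype:generate -DinteractiveMode=false \
  -DarchetypeGroupId=org.openjdk.jmh -DarchetypeArtifactId=jmh-java-benchmark-archetype \
  -DgroupId=org.example -DartifactId=jmh-sample -Dversion=1.0
# Build the self-contained benchmarks.jar and run the benchmarks
cd jmh-sample && mvn -q clean package
java -jar target/benchmarks.jar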

Java JMH - Throughput (Ops/s, More Is Better)
A: 85584416443.34
B: 85714469509.44
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 85603467888.27

perf-bench

This test profile is used for running Linux perf-bench, the benchmark support within the Linux kernel's perf tool. Learn more via the OpenBenchmarking.org test page.

perf-bench - Benchmark: Futex Hash (ops/sec, More Is Better)
A: 1930361
B: 1927711
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 1928532
D: 1930450
1. (CC) gcc options: -pthread -shared -lunwind-x86_64 -lunwind -llzma -Xlinker -export-dynamic -O6 -ggdb3 -funwind-tables -std=gnu99 -fPIC -lnuma

AOM AV1

AOM AV1 3.3 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
A: 0.23
B: 0.23
Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core: 0.23
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

oneDNN

This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

The following oneDNN 2.6 harnesses did not produce a result for A, B, or Ubuntu 19.10 - 2 x AMD EPYC 7601 32-Core:

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU
Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU
Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU
Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU
Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU