Sci Clear Linux

Intel Core i9-10900K testing with a Gigabyte Z490 AORUS MASTER (F3 BIOS) and AMD Radeon RX 5600 OEM/5600 XT / 5700/5700 8GB on Clear Linux OS 33250 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2006028-NI-SCICLEARL86
Result Identifier: Intel Core i9-10900K
Date: June 01 2020
Test Duration: 13 Hours, 49 Minutes


System Details

Processor: Intel Core i9-10900K @ 5.30GHz (10 Cores / 20 Threads)
Motherboard: Gigabyte Z490 AORUS MASTER (F3 BIOS)
Chipset: Intel Device 06ef
Memory: 16GB
Disk: Samsung SSD 970 EVO 250GB
Graphics: AMD Radeon RX 5600 OEM/5600 XT / 5700/5700 8GB (2060/875MHz)
Audio: Realtek ALC1220
Monitor: DELL P2415Q
Network: Intel Device 15f3 + Intel Device 06f0
OS: Clear Linux OS 33250
Kernel: 5.6.15-957.native (x86_64)
Desktop: GNOME Shell 3.36.2
Display Server: X Server 1.20.7
Display Driver: radeon 19.0.1
OpenGL: 4.6 Mesa 20.1.0-devel (LLVM 10.0.0)
Vulkan: 1.2.128
Compiler: GCC 10.1.1 20200529 releases/gcc-10.1.0-100-g49824d35e0 + Clang 10.0.0 + LLVM 10.0.0
File-System: ext4
Screen Resolution: 3840x2160

System Logs

Environment Notes:
  FFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=32 -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -malign-data=abi -fno-semantic-interposition -ftree-vectorize -ftree-loop-vectorize -Wl,--enable-new-dtags -Wa,-mbranches-within-32B-boundaries"
  CXXFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=32 -Wformat -Wformat-security -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -fno-semantic-interposition -ffat-lto-objects -fno-trapping-math -Wl,-sort-common -Wl,--enable-new-dtags -mtune=skylake -Wa,-mbranches-within-32B-boundaries -fvisibility-inlines-hidden -Wl,--enable-new-dtags"
  MESA_GLSL_CACHE_DISABLE=0
  FCFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=32 -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -malign-data=abi -fno-semantic-interposition -ftree-vectorize -ftree-loop-vectorize -Wl,-sort-common -Wl,--enable-new-dtags"
  CFLAGS="-g -O3 -feliminate-unused-debug-types -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=32 -Wformat -Wformat-security -m64 -fasynchronous-unwind-tables -Wp,-D_REENTRANT -ftree-loop-distribute-patterns -Wl,-z -Wl,now -Wl,-z -Wl,relro -fno-semantic-interposition -ffat-lto-objects -fno-trapping-math -Wl,-sort-common -Wl,--enable-new-dtags -mtune=skylake -Wa,-mbranches-within-32B-boundaries"
  THEANO_FLAGS="floatX=float32,openmp=true,gcc.cxxflags="-ftree-vectorize -mavx""

Compiler Configure Options: --build=x86_64-generic-linux --disable-libmpx --disable-libunwind-exceptions --disable-multiarch --disable-vtable-verify --disable-werror --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-clocale=gnu --enable-default-pie --enable-gnu-indirect-function --enable-languages=c,c++,fortran,go --enable-ld=default --enable-libstdcxx-pch --enable-lto --enable-multilib --enable-plugin --enable-shared --enable-threads=posix --exec-prefix=/usr --includedir=/usr/include --target=x86_64-generic-linux --with-arch=westmere --with-gcc-major-version-only --with-glibc-version=2.19 --with-gnu-ld --with-isl --with-ppl=yes --with-tune=haswell

Processor Notes: Scaling Governor: intel_pstate performance; CPU Microcode: 0xc8

Security Notes:
  itlb_multihit: KVM: Mitigation of Split huge pages
  l1tf: Not affected
  mds: Not affected
  meltdown: Not affected
  spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp
  spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization
  spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling
  tsx_async_abort: Not affected

Results Overview (Intel Core i9-10900K)

Test                                              Result
miniFE - Small                                    4296.71 CG Mflops (more is better)
HPC Challenge - G-Ptrans                          3.28533 GB/s (more is better)
HPC Challenge - EP-STREAM Triad                   2.26946 GB/s (more is better)
HPC Challenge - Rand Ring Bandwidth               1.55610 GB/s (more is better)
HPCG                                              4.38171 GFLOP/s (more is better)
HPC Challenge - G-HPL                             30.36760 GFLOPS (more is better)
HPC Challenge - G-Ffte                            6.20387 GFLOPS (more is better)
HPC Challenge - EP-DGEMM                          31.38000 GFLOPS (more is better)
HPC Challenge - G-Rand Access                     0.05749 GUP/s (more is better)
HPC Challenge - Max Ping Pong Bandwidth           30774.684 MB/s (more is better)
GROMACS - Water Benchmark                         0.972 Ns Per Day (more is better)
LAMMPS - Rhodopsin Protein                        8.635 ns/day (more is better)
LULESH                                            10.334442 z/s (more is better)
NAMD - ATPase Simulation - 327,506 Atoms          1.19833 days/ns (fewer is better)
Pennant - sedovbig                                2814.533 Seconds (fewer is better)
Pennant - leblancbig                              2697.141 Seconds (fewer is better)
CloverLeaf - Lagrangian-Eulerian Hydrodynamics    5.51 Seconds (fewer is better)
CP2K - Fayalite-FIST Data                         789.823 Seconds (fewer is better)
NWChem - C240 Buckyball                           26631.8 Seconds (fewer is better)
HPC Challenge - Rand Ring Latency                 0.26829 usecs (fewer is better)

miniFE

MiniFE is a finite element mini-application that serves as a proxy for unstructured implicit finite element codes. Learn more via the OpenBenchmarking.org test page.
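
Its figure of merit is CG Mflops, reflecting the conjugate gradient solve at the heart of such implicit codes, where the dominant kernel is a sparse matrix-vector product. As a rough illustration only (hypothetical names and signature, not miniFE's actual code), a CSR-format sparse matvec with OpenMP looks like this:

    /* Illustrative CSR sparse matrix-vector product, y = A*x, the kind of
     * kernel that dominates a conjugate gradient solve. Not miniFE's
     * actual code; spmv_csr and its arguments are hypothetical. */
    #include <stddef.h>

    void spmv_csr(size_t nrows, const double *val, const int *col,
                  const int *row_ptr, const double *x, double *y)
    {
        #pragma omp parallel for  /* compile with -fopenmp, as in the notes below */
        for (size_t i = 0; i < nrows; i++) {
            double sum = 0.0;
            for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++)
                sum += val[k] * x[col[k]];
            y[i] = sum;
        }
    }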

miniFE 2.2 - Problem Size: Small (CG Mflops, more is better)
Intel Core i9-10900K: 4296.71 (SE +/- 1.53, N = 3)
1. (CXX) g++ options: -O3 -fopenmp -pthread -lmpi

HPC Challenge

HPC Challenge (HPCC) is a cluster-focused benchmark consisting of the HPL Linpack TPP benchmark, DGEMM, STREAM, PTRANS, RandomAccess, FFT, and communication bandwidth and latency. This HPC Challenge test profile attempts to ship with standard yet versatile configuration/input files though they can be modified. Learn more via the OpenBenchmarking.org test page.
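
As one concrete example, the EP-STREAM Triad sub-test measures the per-process average memory bandwidth, while all ranks run concurrently, using the STREAM triad kernel a[i] = b[i] + scalar * c[i]. A minimal sketch of that kernel, with an assumed array size rather than HPCC's configured one:

    /* Sketch of the STREAM Triad kernel behind the EP-STREAM Triad result.
     * N is illustrative; the real benchmark sizes the arrays to defeat
     * caching and times many repetitions. */
    #include <stdlib.h>

    #define N (1L << 25)  /* assumed size: 32M doubles per array, 256 MiB */

    int main(void)
    {
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        double *c = malloc(N * sizeof *c);
        const double scalar = 3.0;

        for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

        /* The timed loop; bandwidth = 3 * N * sizeof(double) / elapsed time. */
        for (long i = 0; i < N; i++)
            a[i] = b[i] + scalar * c[i];

        free(a); free(b); free(c);
        return 0;
    }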

HPC Challenge 1.5.0 - Test / Class: G-Ptrans (GB/s, more is better)
Intel Core i9-10900K: 3.28533 (SE +/- 0.00102, N = 3)
1. (CC) gcc options: -O3 -lm -pthread -lmpi
2. OpenBLAS + 3.2

HPC Challenge 1.5.0 - Test / Class: EP-STREAM Triad (GB/s, more is better)
Intel Core i9-10900K: 2.26946 (SE +/- 0.00284, N = 3)
1. (CC) gcc options: -O3 -lm -pthread -lmpi
2. OpenBLAS + 3.2

HPC Challenge 1.5.0 - Test / Class: Random Ring Bandwidth (GB/s, more is better)
Intel Core i9-10900K: 1.55610 (SE +/- 0.01474, N = 3)
1. (CC) gcc options: -O3 -lm -pthread -lmpi
2. OpenBLAS + 3.2

High Performance Conjugate Gradient

HPCG is the High Performance Conjugate Gradient benchmark, a newer scientific benchmark from Sandia National Laboratories focused on supercomputer testing with modern, real-world workloads as compared to HPCC. Learn more via the OpenBenchmarking.org test page.
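
For intuition about what is being timed, the sketch below is a plain unpreconditioned conjugate gradient loop; HPCG itself solves a fixed 3D 27-point-stencil problem and adds a multigrid preconditioner, so treat this as a simplified illustration with hypothetical helper names, not HPCG's API:

    /* Unpreconditioned conjugate gradient for solving A*x = b.
     * matvec_fn, dot, and cg are illustrative names. */
    #include <math.h>

    typedef void (*matvec_fn)(int n, const double *x, double *y); /* y = A*x */

    static double dot(int n, const double *a, const double *b)
    {
        double s = 0.0;
        for (int i = 0; i < n; i++) s += a[i] * b[i];
        return s;
    }

    /* r, p, Ap are caller-provided scratch vectors of length n. */
    void cg(int n, matvec_fn A, const double *b, double *x,
            double *r, double *p, double *Ap, int maxit, double tol)
    {
        A(n, x, Ap);
        for (int i = 0; i < n; i++) { r[i] = b[i] - Ap[i]; p[i] = r[i]; }
        double rr = dot(n, r, r);
        for (int it = 0; it < maxit && sqrt(rr) > tol; it++) {
            A(n, p, Ap);
            double alpha = rr / dot(n, p, Ap);  /* step length along p */
            for (int i = 0; i < n; i++) {
                x[i] += alpha * p[i];
                r[i] -= alpha * Ap[i];
            }
            double rr2 = dot(n, r, r);
            for (int i = 0; i < n; i++)         /* new search direction */
                p[i] = r[i] + (rr2 / rr) * p[i];
            rr = rr2;
        }
    }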

High Performance Conjugate Gradient 3.1 (GFLOP/s, more is better)
Intel Core i9-10900K: 4.38171 (SE +/- 0.00457, N = 3)
1. (CXX) g++ options: -O3 -ffast-math -ftree-vectorize -pthread -lmpi

HPC Challenge

HPC Challenge (HPCC) is described above; the remaining HPCC sub-test results follow.

HPC Challenge 1.5.0 - Test / Class: G-HPL (GFLOPS, more is better)
Intel Core i9-10900K: 30.37 (SE +/- 0.08, N = 3)
1. (CC) gcc options: -O3 -lm -pthread -lmpi
2. OpenBLAS + 3.2

HPC Challenge 1.5.0 - Test / Class: G-Ffte (GFLOPS, more is better)
Intel Core i9-10900K: 6.20387 (SE +/- 0.02471, N = 3)
1. (CC) gcc options: -O3 -lm -pthread -lmpi
2. OpenBLAS + 3.2

HPC Challenge 1.5.0 - Test / Class: EP-DGEMM (GFLOPS, more is better)
Intel Core i9-10900K: 31.38 (SE +/- 0.15, N = 3)
1. (CC) gcc options: -O3 -lm -pthread -lmpi
2. OpenBLAS + 3.2

HPC Challenge 1.5.0 - Test / Class: G-Random Access (GUP/s, more is better)
Intel Core i9-10900K: 0.05749 (SE +/- 0.00034, N = 3)
1. (CC) gcc options: -O3 -lm -pthread -lmpi
2. OpenBLAS + 3.2

HPC Challenge 1.5.0 - Test / Class: Max Ping Pong Bandwidth (MB/s, more is better)
Intel Core i9-10900K: 30774.68 (SE +/- 159.71, N = 3)
1. (CC) gcc options: -O3 -lm -pthread -lmpi
2. OpenBLAS + 3.2

GROMACS

GROMACS (GROningen MAchine for Chemical Simulations) is a molecular dynamics package; this test runs it on the CPU with the water_GMX50 data. Learn more via the OpenBenchmarking.org test page.

GROMACS 2020.1 - Water Benchmark (Ns Per Day, more is better)
Intel Core i9-10900K: 0.972 (SE +/- 0.003, N = 3)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -pthread -lrt -lpthread -lm -ldl

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 9Jan2020 - Model: Rhodopsin Protein (ns/day, more is better)
Intel Core i9-10900K: 8.635 (SE +/- 0.023, N = 3)
1. (CXX) g++ options: -O3 -pipe -fexceptions -fstack-protector -m64 -ffat-lto-objects -fno-trapping-math -mtune=skylake -rdynamic -lmpi -ljpeg -lpng -lz -lfftw3 -lm

LULESH

LULESH is the Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics proxy application; its figure of merit is reported in zones solved per second (z/s). Learn more via the OpenBenchmarking.org test page.

LULESH 2.0.3 (z/s, more is better)
Intel Core i9-10900K: 10.33 (SE +/- 0.07, N = 3)
1. (CXX) g++ options: -O3 -fopenmp -lm -pthread -lmpi

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.
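
Note the unit: days/ns is the wall-clock days required to simulate one nanosecond, so lower is better, and it is simply the reciprocal of the ns/day figure GROMACS and LAMMPS report above. A quick conversion using this run's result:

    /* days/ns (NAMD, lower is better) is the reciprocal of ns/day
     * (GROMACS/LAMMPS, higher is better). Converting the result below: */
    #include <stdio.h>

    int main(void)
    {
        double days_per_ns = 1.19833;           /* NAMD result from this run */
        double ns_per_day  = 1.0 / days_per_ns; /* about 0.834 ns/day */
        printf("%.5f days/ns = %.3f ns/day\n", days_per_ns, ns_per_day);
        return 0;
    }

So 1.19833 days/ns corresponds to roughly 0.834 ns of simulated time per day on the ATPase workload.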

NAMD 2.13 - ATPase Simulation - 327,506 Atoms (days/ns, fewer is better)
Intel Core i9-10900K: 1.19833 (SE +/- 0.00608, N = 3)

Pennant

Pennant is an application focused on hydrodynamics on general unstructured meshes in 2D. Learn more via the OpenBenchmarking.org test page.

Pennant 1.0.1 - Test: sedovbig (Hydro Cycle Time - Seconds, fewer is better)
Intel Core i9-10900K: 2814.53 (SE +/- 0.24, N = 3)
1. (CXX) g++ options: -O3 -fopenmp -pthread -lmpi

Pennant 1.0.1 - Test: leblancbig (Hydro Cycle Time - Seconds, fewer is better)
Intel Core i9-10900K: 2697.14 (SE +/- 0.17, N = 3)
1. (CXX) g++ options: -O3 -fopenmp -pthread -lmpi

CloverLeaf

CloverLeaf is a Lagrangian-Eulerian hydrodynamics benchmark. This test profile currently makes use of CloverLeaf's OpenMP version and is benchmarked with the clover_bm8192.in input file. Learn more via the OpenBenchmarking.org test page.
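
Because this profile uses the OpenMP build, its hot loops are per-cell updates parallelized with OpenMP directives. A schematic of that pattern follows (CloverLeaf itself is Fortran; this C sketch with invented field names only illustrates the parallelization style, not CloverLeaf's actual kernels):

    /* Schematic OpenMP cell-update loop in the style of an explicit
     * hydrodynamics code. advance_energy and its fields are invented. */
    #include <stddef.h>

    void advance_energy(size_t ncells, double dt, const double *pressure,
                        const double *vol_rate, double *energy)
    {
        #pragma omp parallel for  /* each cell is updated independently */
        for (size_t i = 0; i < ncells; i++)
            energy[i] -= dt * pressure[i] * vol_rate[i];
    }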

CloverLeaf - Lagrangian-Eulerian Hydrodynamics (Seconds, fewer is better)
Intel Core i9-10900K: 5.51 (SE +/- 0.01, N = 3)
1. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp

CP2K Molecular Dynamics

CP2K is an open-source molecular dynamics software package focused on quantum chemistry and solid-state physics. This test profile currently makes use of the OpenMP implementation, runs the Fayalite-FIST molecular dynamics input, and measures the total time to complete. Learn more via the OpenBenchmarking.org test page.

CP2K Molecular Dynamics 6.1 - Fayalite-FIST Data (Seconds, fewer is better)
Intel Core i9-10900K: 789.82

NWChem

NWChem is an open-source high performance computational chemistry package. Learn more via the OpenBenchmarking.org test page.

NWChem 7.0 - Input: C240 Buckyball (Seconds, fewer is better)
Intel Core i9-10900K: 26631.8
1. (F9X) gfortran options: -lnwctask -lccsd -lmcscf -lselci -lmp2 -lmoints -lstepper -ldriver -loptim -lnwdft -lgradients -lcphf -lesp -lddscf -ldangchang -lguess -lhessian -lvib -lnwcutil -lrimp2 -lproperty -lsolvation -lnwints -lprepar -lnwmd -lnwpw -lofpw -lpaw -lpspw -lband -lnwpwlib -lcafe -lspace -lanalyze -lqhop -lpfft -ldplot -ldrdy -lvscf -lqmmm -lqmd -letrans -ltce -lbq -lmm -lcons -lperfm -ldntmc -lccca -ldimqm -lga -larmci -lpeigs -l64to32 -lopenblas -lpthread -lrt -llapack -lnwcblas -lmpi_usempif08 -lmpi_mpifh -lmpi -lcomex -lm -m64 -ffast-math -std=legacy -fdefault-integer-8 -finline-functions -O2

HPC Challenge

HPC Challenge (HPCC) is described above; its Random Ring Latency result follows.

HPC Challenge 1.5.0 - Test / Class: Random Ring Latency (usecs, fewer is better)
Intel Core i9-10900K: 0.26829 (SE +/- 0.00162, N = 3)
1. (CC) gcc options: -O3 -lm -pthread -lmpi
2. OpenBLAS + 3.2