GCC 6.1 Compiler Optimization Benchmarks

GCC 6.1.0 compiler benchmarks with different optimization flags on an Intel Xeon E5-2687W v3 running Debian. Tests by Michael Larabel of Phoronix for a future article.
Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 1605151-GA-1605083HA39

Tested configurations: -O0, -Os, -Og, -O1, -O2, -O3, -O3 -march=native, -O3 -march=native -flto, -Ofast -march=native

Processor: Intel Xeon E5-2687W v3 @ 3.50GHz (20 Cores), Motherboard: MSI X99S SLI PLUS (MS-7885) v1.0, Chipset: Intel Xeon E7 v3/Xeon, Memory: 16384MB, Disk: PNY CS1211 120GB + 80GB INTEL SSDSCKGW08, Graphics: AMD FirePro V7900 2048MB, Audio: Realtek ALC892, Monitor: ASUS PB278, Network: Intel Connection
OS: Debian testing, Kernel: 4.5.0-1-amd64 (x86_64), Display Server: X Server 1.18.3, Display Driver: modesetting 1.18.3, OpenGL: 3.3 Mesa 11.1.3 Gallium 0.4, Compiler: GCC 6.1.0, File-System: ext4, Screen Resolution: 2560x1440
Compiler Notes: --disable-multilib --enable-checking=release
Processor Notes: Scaling Governor: intel_pstate powersave
s10: Processor: Intel Xeon E31245 @ 3.70GHz (8 Cores), Motherboard: ASUS P8B WS, Memory: 16384MB, Disk: 3001GB Hitachi HDS72303 + 128GB SAMSUNG MZNTE128, Graphics: Intel Sandybridge Server (1350MHz), Audio: Realtek Generic, Monitor: SyncMaster
OS: Gentoo 2.2, Kernel: 4.5.0-gentoo (x86_64), Desktop: KDE Frameworks 5, Display Server: X Server 1.18.3, Display Driver: intel 2.99.917, OpenGL: 3.3 Mesa 11.2.2, Compiler: GCC 5.3.0 + Clang 3.8.0 + LLVM 3.8.0, File-System: ext4, Screen Resolution: 1920x1080
Compiler Notes: --bindir=/usr/x86_64-pc-linux-gnu/gcc-bin/5.3.0 --build=x86_64-pc-linux-gnu --datadir=/usr/share/gcc-data/x86_64-pc-linux-gnu/5.3.0 --disable-altivec --disable-fixed-point --disable-libcilkrts --disable-libmpx --disable-libmudflap --disable-libssp --disable-werror --enable-__cxa_atexit --enable-checking=release --enable-clocale=gnu --enable-languages=c,c++,java,objc,fortran --enable-libgomp --enable-libsanitizer --enable-libstdcxx-time --enable-libvtv --enable-lto --enable-multilib --enable-nls --enable-obsolete --enable-secureplt --enable-shared --enable-targets=all --enable-threads=posix --enable-vtable-verify --host=x86_64-pc-linux-gnu --includedir=/usr/lib/gcc/x86_64-pc-linux-gnu/5.3.0/include --mandir=/usr/share/gcc-data/x86_64-pc-linux-gnu/5.3.0/man --with-multilib-list=m32,m64 --with-python-dir=/share/gcc-data/x86_64-pc-linux-gnu/5.3.0/python --without-isl
Processor Notes: Scaling Governor: intel_pstate powersave
Timed HMMer Search This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.
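The hot loop in this workload is a dynamic-programming recurrence over profile-HMM states, which is why it responds so strongly to the optimization level. As a rough illustration only (this is not HMMER's source; the simplified log-space Viterbi kernel, uniform start assumption, and parameter names below are all assumptions for the sketch), the shape of such a kernel in plain C is:

    #include <float.h>
    #include <math.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Illustrative log-space Viterbi recurrence: the kind of dynamic-programming
     * kernel an HMM search spends its time in.  trans holds log transition
     * probabilities (n_states x n_states), emit holds log emission scores per
     * observation and state (seq_len x n_states), cur/prev are two work rows. */
    double viterbi_score(size_t n_states, size_t seq_len,
                         const double *trans, const double *emit,
                         double *cur, double *prev)
    {
        for (size_t j = 0; j < n_states; j++)
            prev[j] = emit[j];                  /* t = 0, uniform start assumed */

        for (size_t t = 1; t < seq_len; t++) {
            for (size_t j = 0; j < n_states; j++) {
                double best = -DBL_MAX;
                for (size_t i = 0; i < n_states; i++) {
                    double s = prev[i] + trans[i * n_states + j];
                    if (s > best)
                        best = s;
                }
                cur[j] = best + emit[t * n_states + j];
            }
            double *tmp = prev; prev = cur; cur = tmp;   /* swap work rows */
        }

        double best = -DBL_MAX;
        for (size_t j = 0; j < n_states; j++)
            if (prev[j] > best)
                best = prev[j];
        return best;
    }

    int main(void)
    {
        /* toy 2-state model over 3 observations, values already in log space */
        double trans[4] = { log(0.7), log(0.3), log(0.4), log(0.6) };
        double emit[6]  = { log(0.9), log(0.2), log(0.1), log(0.8), log(0.5), log(0.5) };
        double rowa[2], rowb[2];
        printf("best log path score: %f\n", viterbi_score(2, 3, trans, emit, rowa, rowb));
        return 0;
    }

Compiled with gcc file.c -lm, the branchy inner max/add loop is exactly the kind of code whose speed swings with the flags compared below.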
Result
Seconds, Fewer Is Better - Timed HMMer Search 2.3.2 - Pfam Database Search

    -O0                     13.82  (SE +/- 0.01, N = 3)
    -Os                     10.68  (SE +/- 0.19, N = 3)
    -Og                      8.23  (SE +/- 0.05, N = 3)
    -O1                     10.19  (SE +/- 0.16, N = 6)
    -O2                     11.63  (SE +/- 0.51, N = 6)
    -O3                     13.08  (SE +/- 0.35, N = 6)
    -O3 -march=native       13.04  (SE +/- 0.68, N = 6)
    -Ofast -march=native     8.32  (SE +/- 0.27, N = 6)
    s10                     15.29  (SE +/- 0.02, N = 3)

    1. (CC) gcc options: -pthread -lhmmer -lsquid -lm (per-configuration optimization flags as listed; s10: -O2)

Perf Per Core

Seconds x Core, Fewer Is Better (detected core count of 20; s10: 8)

    -O0: 276.40, -Os: 213.60, -Og: 164.60, -O1: 203.80, -O2: 232.60, -O3: 261.60, -O3 -march=native: 260.80, -Ofast -march=native: 166.40, s10: 122.32

Perf Per Thread

Seconds x Thread, Fewer Is Better - identical to the per-core figures above, since the detected thread count (20; s10: 8) matches the detected core count.

Perf Per Clock

Seconds x GHz, Fewer Is Better (detected base clock of 3.50 GHz; s10: 3.70 GHz - use PTS sensors for real-time frequency/sensor reporting)

    -O0: 48.37, -Os: 37.38, -Og: 28.81, -O1: 35.67, -O2: 40.71, -O3: 45.78, -O3 -march=native: 45.64, -Ofast -march=native: 29.12, s10: 56.57

Result Confidence

Seconds, Fewer Is Better - Min / Avg / Max

    -O0: 13.8 / 13.82 / 13.85
    -Os: 10.38 / 10.68 / 11.03
    -Og: 8.14 / 8.23 / 8.31
    -O1: 9.66 / 10.19 / 10.51
    -O2: 10.1 / 11.63 / 13.41
    -O3: 11.6 / 13.08 / 14.22
    -O3 -march=native: 10.71 / 13.04 / 14.89
    -Ofast -march=native: 7.72 / 8.32 / 9.48
    s10: 15.27 / 15.29 / 15.32
SciMark This test runs the ANSI C version of SciMark 2.0, a benchmark for scientific and numerical computing developed by programmers at the National Institute of Standards and Technology. It is made up of Fast Fourier Transform, Jacobi Successive Over-Relaxation, Monte Carlo, Sparse Matrix Multiply, and dense LU matrix factorization benchmarks. Learn more via the OpenBenchmarking.org test page.
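For a sense of what one of these kernels looks like, the sketch below estimates pi with the same random-sampling idea as the Monte Carlo component. It is plain C written for this context, not the SciMark 2.0 source, and the sample count and seed are arbitrary choices:

    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative Monte Carlo kernel: sample random points in the unit square
     * and count how many land inside the quarter circle; the hit ratio
     * approximates pi/4. */
    int main(void)
    {
        const long samples = 10000000;
        long hits = 0;

        srand(12345);                        /* fixed seed for repeatability */
        for (long i = 0; i < samples; i++) {
            double x = (double)rand() / RAND_MAX;
            double y = (double)rand() / RAND_MAX;
            if (x * x + y * y <= 1.0)
                hits++;
        }
        printf("pi ~= %f\n", 4.0 * (double)hits / (double)samples);
        return 0;
    }

Like SciMark's composite score, a tight loop of this shape is dominated by scalar arithmetic and random-number generation, so it tends to gain little from vectorization-heavy flags.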
Result
Mflops, More Is Better - SciMark 2.0 - Computational Test: Composite

    -O0                        1407.02  (SE +/- 8.59, N = 4)
    -Os                        1426.79  (SE +/- 2.83, N = 4)
    -Og                        1427.70  (SE +/- 3.98, N = 4)
    -O1                        1437.73  (SE +/- 1.73, N = 4)
    -O2                        1426.80  (SE +/- 7.42, N = 4)
    -O3                        1442.30  (SE +/- 2.58, N = 4)
    -O3 -march=native          1388.10  (SE +/- 4.89, N = 4)
    -O3 -march=native -flto    1445.10  (SE +/- 7.57, N = 4)
    -Ofast -march=native       1421.80  (SE +/- 5.38, N = 4)
    s10                        1023.06  (SE +/- 15.87, N = 4)

    1. (CXX) g++ options: (none recorded)

Perf Per Core

Mflops Per Core, More Is Better (detected core count of 20; s10: 8)

    -O0: 70.35, -Os: 71.34, -Og: 71.39, -O1: 71.89, -O2: 71.34, -O3: 72.12, -O3 -march=native: 69.41, -O3 -march=native -flto: 72.26, -Ofast -march=native: 71.09, s10: 127.88

Perf Per Thread

Mflops Per Thread, More Is Better - identical to the per-core figures above, since the detected thread count (20; s10: 8) matches the detected core count.

Perf Per Clock

Mflops Per GHz, More Is Better (detected base clock of 3.50 GHz; s10: 3.70 GHz - use PTS sensors for real-time frequency/sensor reporting)

    -O0: 402.01, -Os: 407.65, -Og: 407.91, -O1: 410.78, -O2: 407.66, -O3: 412.09, -O3 -march=native: 396.60, -O3 -march=native -flto: 412.89, -Ofast -march=native: 406.23, s10: 276.50

Result Confidence

Mflops, More Is Better - Min / Avg / Max

    -O0: 1396.21 / 1407.02 / 1432.53
    -Os: 1420.37 / 1426.79 / 1431.8
    -Og: 1417.24 / 1427.7 / 1435.37
    -O1: 1433.21 / 1437.73 / 1441.42
    -O2: 1413.45 / 1426.8 / 1448.01
    -O3: 1435.86 / 1442.3 / 1446.63
    -O3 -march=native: 1375.2 / 1388.1 / 1398.87
    -O3 -march=native -flto: 1427.9 / 1445.1 / 1461.47
    -Ofast -march=native: 1409.22 / 1421.8 / 1431.02
    s10: 987.73 / 1023.06 / 1064.21
Result
Mflops, More Is Better - SciMark 2.0 - Computational Test: Monte Carlo

    -O0                        545.51  (SE +/- 6.16, N = 4)
    -Os                        552.04  (SE +/- 1.68, N = 4)
    -Og                        546.26  (SE +/- 5.79, N = 4)
    -O1                        551.44  (SE +/- 1.22, N = 4)
    -O2                        537.90  (SE +/- 9.59, N = 4)
    -O3                        547.85  (SE +/- 3.64, N = 4)
    -O3 -march=native          547.58  (SE +/- 5.84, N = 4)
    -O3 -march=native -flto    614.59  (SE +/- 6.00, N = 4)
    -Ofast -march=native       553.28  (SE +/- 1.67, N = 4)
    s10                        464.45  (SE +/- 3.57, N = 4)

    1. (CXX) g++ options: (none recorded)

Perf Per Core

Mflops Per Core, More Is Better (detected core count of 20; s10: 8)

    -O0: 27.28, -Os: 27.60, -Og: 27.31, -O1: 27.57, -O2: 26.90, -O3: 27.39, -O3 -march=native: 27.38, -O3 -march=native -flto: 30.73, -Ofast -march=native: 27.66, s10: 58.06

Perf Per Thread

Mflops Per Thread, More Is Better - identical to the per-core figures above, since the detected thread count (20; s10: 8) matches the detected core count.

Perf Per Clock

Mflops Per GHz, More Is Better (detected base clock of 3.50 GHz; s10: 3.70 GHz - use PTS sensors for real-time frequency/sensor reporting)

    -O0: 155.86, -Os: 157.73, -Og: 156.07, -O1: 157.55, -O2: 153.69, -O3: 156.53, -O3 -march=native: 156.45, -O3 -march=native -flto: 175.60, -Ofast -march=native: 158.08, s10: 125.53

Result Confidence

Mflops, More Is Better - Min / Avg / Max

    -O0: 528.02 / 545.51 / 555.3
    -Os: 547.1 / 552.04 / 554.2
    -Og: 528.97 / 546.26 / 553.28
    -O1: 548.6 / 551.44 / 553.95
    -O2: 516.28 / 537.9 / 554.98
    -O3: 538.09 / 547.85 / 554.35
    -O3 -march=native: 530.78 / 547.58 / 556.85
    -O3 -march=native -flto: 596.6 / 614.59 / 621.31
    -Ofast -march=native: 549.6 / 553.28 / 557.59
    s10: 457.14 / 464.45 / 470.62
Result
Mflops, More Is Better - SciMark 2.0 - Computational Test: Fast Fourier Transform

    -O0                        440.57  (SE +/- 5.60, N = 4)
    -Os                        456.35  (SE +/- 1.29, N = 4)
    -Og                        456.61  (SE +/- 3.34, N = 4)
    -O1                        447.31  (SE +/- 3.48, N = 4)
    -O2                        461.55  (SE +/- 2.20, N = 4)
    -O3                        458.56  (SE +/- 2.05, N = 4)
    -O3 -march=native          443.71  (SE +/- 1.70, N = 4)
    -O3 -march=native -flto    465.50  (SE +/- 3.19, N = 4)
    -Ofast -march=native       468.61  (SE +/- 0.63, N = 4)
    s10                        231.80  (SE +/- 6.76, N = 4)

    1. (CXX) g++ options: (none recorded)

Perf Per Core

Mflops Per Core, More Is Better (detected core count of 20; s10: 8)

    -O0: 22.03, -Os: 22.82, -Og: 22.83, -O1: 22.37, -O2: 23.08, -O3: 22.93, -O3 -march=native: 22.19, -O3 -march=native -flto: 23.28, -Ofast -march=native: 23.43, s10: 28.98

Perf Per Thread

Mflops Per Thread, More Is Better - identical to the per-core figures above, since the detected thread count (20; s10: 8) matches the detected core count.

Perf Per Clock

Mflops Per GHz, More Is Better (detected base clock of 3.50 GHz; s10: 3.70 GHz - use PTS sensors for real-time frequency/sensor reporting)

    -O0: 125.88, -Os: 130.39, -Og: 130.46, -O1: 127.80, -O2: 131.87, -O3: 131.02, -O3 -march=native: 126.77, -O3 -march=native -flto: 133.00, -Ofast -march=native: 133.89, s10: 62.65

Result Confidence

Mflops, More Is Better - Min / Avg / Max

    -O0: 427.34 / 440.57 / 450.09
    -Os: 453.67 / 456.35 / 459.8
    -Og: 450.85 / 456.61 / 464.65
    -O1: 437.12 / 447.31 / 452.81
    -O2: 457.53 / 461.55 / 467.52
    -O3: 455.02 / 458.56 / 464.35
    -O3 -march=native: 441.22 / 443.71 / 448.66
    -O3 -march=native -flto: 459.77 / 465.5 / 474.63
    -Ofast -march=native: 467.66 / 468.61 / 470.48
    s10: 220.08 / 231.8 / 243.82
Result
Mflops, More Is Better - SciMark 2.0 - Computational Test: Sparse Matrix Multiply

    -O0                        2565.94  (SE +/- 9.07, N = 4)
    -Os                        2589.56  (SE +/- 21.77, N = 4)
    -Og                        2580.32  (SE +/- 13.12, N = 4)
    -O1                        2609.45  (SE +/- 3.88, N = 4)
    -O2                        2571.06  (SE +/- 32.30, N = 4)
    -O3                        2622.50  (SE +/- 9.06, N = 4)
    -O3 -march=native          2440.96  (SE +/- 12.32, N = 4)
    -O3 -march=native -flto    2511.39  (SE +/- 24.76, N = 4)
    -Ofast -march=native       2517.37  (SE +/- 19.13, N = 4)
    s10                        1544.66  (SE +/- 16.80, N = 4)

    1. (CXX) g++ options: (none recorded)

Perf Per Core

Mflops Per Core, More Is Better (detected core count of 20; s10: 8)

    -O0: 128.30, -Os: 129.48, -Og: 129.02, -O1: 130.47, -O2: 128.55, -O3: 131.13, -O3 -march=native: 122.05, -O3 -march=native -flto: 125.57, -Ofast -march=native: 125.87, s10: 193.08

Perf Per Thread

Mflops Per Thread, More Is Better - identical to the per-core figures above, since the detected thread count (20; s10: 8) matches the detected core count.

Perf Per Clock

Mflops Per GHz, More Is Better (detected base clock of 3.50 GHz; s10: 3.70 GHz - use PTS sensors for real-time frequency/sensor reporting)

    -O0: 733.13, -Os: 739.87, -Og: 737.23, -O1: 745.56, -O2: 734.59, -O3: 749.29, -O3 -march=native: 697.42, -O3 -march=native -flto: 717.54, -Ofast -march=native: 719.25, s10: 417.48

Result Confidence

Mflops, More Is Better - Min / Avg / Max

    -O0: 2549.53 / 2565.94 / 2582.5
    -Os: 2526.92 / 2589.56 / 2625.49
    -Og: 2542.25 / 2580.32 / 2599.04
    -O1: 2601.63 / 2609.45 / 2620.18
    -O2: 2484.48 / 2571.06 / 2632.26
    -O3: 2601.01 / 2622.5 / 2637.72
    -O3 -march=native: 2411.22 / 2440.96 / 2462.04
    -O3 -march=native -flto: 2474.34 / 2511.39 / 2583.22
    -Ofast -march=native: 2482.26 / 2517.37 / 2551.84
    s10: 1506.04 / 1544.66 / 1586.6
Result
Mflops, More Is Better - SciMark 2.0 - Computational Test: Dense LU Matrix Factorization

    -O0                        2454.12  (SE +/- 26.42, N = 4)
    -Os                        2482.13  (SE +/- 18.22, N = 4)
    -Og                        2521.75  (SE +/- 4.63, N = 4)
    -O1                        2534.08  (SE +/- 7.15, N = 4)
    -O2                        2534.23  (SE +/- 3.32, N = 4)
    -O3                        2531.29  (SE +/- 7.24, N = 4)
    -O3 -march=native          2468.30  (SE +/- 10.80, N = 4)
    -O3 -march=native -flto    2586.62  (SE +/- 28.87, N = 4)
    -Ofast -march=native       2519.72  (SE +/- 11.60, N = 4)
    s10                        1835.64  (SE +/- 52.11, N = 4)

    1. (CXX) g++ options: (none recorded)

Perf Per Core

Mflops Per Core, More Is Better (detected core count of 20; s10: 8)

    -O0: 122.71, -Os: 124.11, -Og: 126.09, -O1: 126.70, -O2: 126.71, -O3: 126.56, -O3 -march=native: 123.42, -O3 -march=native -flto: 129.33, -Ofast -march=native: 125.99, s10: 229.46

Perf Per Thread

Mflops Per Thread, More Is Better - identical to the per-core figures above, since the detected thread count (20; s10: 8) matches the detected core count.

Perf Per Clock

Mflops Per GHz, More Is Better (detected base clock of 3.50 GHz; s10: 3.70 GHz - use PTS sensors for real-time frequency/sensor reporting)

    -O0: 701.18, -Os: 709.18, -Og: 720.50, -O1: 724.02, -O2: 724.07, -O3: 723.23, -O3 -march=native: 705.23, -O3 -march=native -flto: 739.03, -Ofast -march=native: 719.92, s10: 496.12

Result Confidence

Mflops, More Is Better - Min / Avg / Max

    -O0: 2395.81 / 2454.12 / 2521.32
    -Os: 2431.21 / 2482.13 / 2513.22
    -Og: 2515.56 / 2521.75 / 2535.27
    -O1: 2516.31 / 2534.08 / 2548.69
    -O2: 2525.97 / 2534.23 / 2542.23
    -O3: 2514.65 / 2531.29 / 2547.99
    -O3 -march=native: 2441.36 / 2468.3 / 2488.22
    -O3 -march=native -flto: 2512.18 / 2586.62 / 2644.5
    -Ofast -march=native: 2485.35 / 2519.72 / 2534.01
    s10: 1727.52 / 1835.64 / 1974.95
Result
Mflops, More Is Better - SciMark 2.0 - Computational Test: Jacobi Successive Over-Relaxation

    -O0                        1028.95  (SE +/- 9.29, N = 4)
    -Os                        1053.88  (SE +/- 1.16, N = 4)
    -Og                        1033.56  (SE +/- 8.62, N = 4)
    -O1                        1046.36  (SE +/- 3.32, N = 4)
    -O2                        1029.27  (SE +/- 12.72, N = 4)
    -O3                        1051.27  (SE +/- 2.16, N = 4)
    -O3 -march=native          1039.94  (SE +/- 6.48, N = 4)
    -O3 -march=native -flto    1047.43  (SE +/- 6.06, N = 4)
    -Ofast -march=native       1050.05  (SE +/- 0.76, N = 4)
    s10                        1038.75  (SE +/- 11.16, N = 4)

    1. (CXX) g++ options: (none recorded)

Perf Per Core

Mflops Per Core, More Is Better (detected core count of 20; s10: 8)

    -O0: 51.45, -Os: 52.69, -Og: 51.68, -O1: 52.32, -O2: 51.46, -O3: 52.56, -O3 -march=native: 52.00, -O3 -march=native -flto: 52.37, -Ofast -march=native: 52.50, s10: 129.84

Perf Per Thread

Mflops Per Thread, More Is Better - identical to the per-core figures above, since the detected thread count (20; s10: 8) matches the detected core count.

Perf Per Clock

Mflops Per GHz, More Is Better (detected base clock of 3.50 GHz; s10: 3.70 GHz - use PTS sensors for real-time frequency/sensor reporting)

    -O0: 293.99, -Os: 301.11, -Og: 295.30, -O1: 298.96, -O2: 294.08, -O3: 300.36, -O3 -march=native: 297.13, -O3 -march=native -flto: 299.27, -Ofast -march=native: 300.01, s10: 280.74

Result Confidence

Mflops, More Is Better - Min / Avg / Max

    -O0: 1013.78 / 1028.95 / 1053.9
    -Os: 1051.09 / 1053.88 / 1056.77
    -Og: 1013.95 / 1033.56 / 1051.75
    -O1: 1039.94 / 1046.36 / 1054.26
    -O2: 996.06 / 1029.27 / 1053.34
    -O3: 1044.97 / 1051.27 / 1054.27
    -O3 -march=native: 1025.34 / 1039.94 / 1052.98
    -O3 -march=native -flto: 1030.59 / 1047.43 / 1058.27
    -Ofast -march=native: 1048.65 / 1050.05 / 1052.21
    s10: 1015.29 / 1038.75 / 1066.79
GraphicsMagick This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests to stress the system's CPU. Learn more via the OpenBenchmarking.org test page.
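The Blur, Sharpen and similar operations come down to per-pixel convolution loops that GraphicsMagick parallelizes with OpenMP, which is why these results scale with core count. A minimal sketch of that loop shape (not GraphicsMagick's actual code; the image layout, function name, and 3x3 box kernel are assumptions) is:

    #include <stdio.h>

    #define W 16
    #define H 16

    /* Illustrative OpenMP-parallel 3x3 box blur over an 8-bit grayscale image:
     * rows are divided across threads, and each output pixel is the average of
     * its 3x3 neighbourhood in the source image. */
    void box_blur(const unsigned char *src, unsigned char *dst, int w, int h)
    {
        #pragma omp parallel for schedule(static)
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int sum = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        sum += src[(y + dy) * w + (x + dx)];
                dst[y * w + x] = (unsigned char)(sum / 9);
            }
        }
    }

    int main(void)
    {
        unsigned char src[W * H], dst[W * H] = {0};
        for (int i = 0; i < W * H; i++)
            src[i] = (unsigned char)(i % 256);   /* synthetic gradient image */
        box_blur(src, dst, W, H);
        printf("blurred sample pixel: %d\n", dst[5 * W + 5]);
        return 0;
    }

Built with gcc -fopenmp the outer row loop is split across threads; without -fopenmp the pragma is ignored and the same code runs serially, which mirrors how the per-core and per-thread breakdowns below relate to the raw numbers.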
Result
Iterations Per Minute, More Is Better - GraphicsMagick 1.3.19 - Operation: Blur

    -O0                      82  (SE +/- 0.00, N = 3)
    -Os                     110  (SE +/- 0.58, N = 3)
    -Og                     113  (SE +/- 1.53, N = 3)
    -O1                     137  (SE +/- 0.33, N = 3)
    -O2                     131  (SE +/- 0.33, N = 3)
    -O3                     130  (SE +/- 0.58, N = 3)
    -O3 -march=native       138  (SE +/- 1.20, N = 3)
    -Ofast -march=native    144  (SE +/- 0.33, N = 3)
    s10                     125  (SE +/- 0.33, N = 3)

    Per-configuration options: each flag set additionally used -ldl; s10: -O2 -ljbig -lwebp -llcms2 -ltiff -lfreetype -ljasper -ljpeg -lwmflite -llzma -lbz2 -lxml2 -lgomp
    1. (CC) gcc options: -fopenmp -pthread -lXext -lSM -lICE -lX11 -lz -lm -lpthread

Perf Per Core

Iterations Per Minute Per Core, More Is Better (detected core count of 20; s10: 8)

    -O0: 4.10, -Os: 5.50, -Og: 5.65, -O1: 6.85, -O2: 6.55, -O3: 6.50, -O3 -march=native: 6.90, -Ofast -march=native: 7.20, s10: 15.63

Perf Per Thread

Iterations Per Minute Per Thread, More Is Better - identical to the per-core figures above, since the detected thread count (20; s10: 8) matches the detected core count.

Perf Per Clock

Iterations Per Minute Per GHz, More Is Better (detected base clock of 3.50 GHz; s10: 3.70 GHz - use PTS sensors for real-time frequency/sensor reporting)

    -O0: 23.43, -Os: 31.43, -Og: 32.29, -O1: 39.14, -O2: 37.43, -O3: 37.14, -O3 -march=native: 39.43, -Ofast -march=native: 41.14, s10: 33.78

Result Confidence

Iterations Per Minute, More Is Better - Min / Avg / Max

    -O0: 82 / 82 / 82
    -Os: 109 / 110 / 111
    -Og: 110 / 113 / 115
    -O1: 137 / 137.33 / 138
    -O2: 130 / 130.67 / 131
    -O3: 129 / 130 / 131
    -O3 -march=native: 136 / 138.33 / 140
    -Ofast -march=native: 143 / 143.67 / 144
    s10: 124 / 124.67 / 125
Result
Iterations Per Minute, More Is Better - GraphicsMagick 1.3.19 - Operation: Sharpen

    -O0                      71  (SE +/- 0.33, N = 3)
    -Os                     124  (SE +/- 0.33, N = 3)
    -Og                     100  (SE +/- 0.00, N = 3)
    -O1                     135  (SE +/- 0.33, N = 3)
    -O2                     134  (SE +/- 0.00, N = 3)
    -O3                     136  (SE +/- 0.33, N = 3)
    -O3 -march=native       143  (SE +/- 0.33, N = 3)
    -Ofast -march=native    145  (SE +/- 0.33, N = 3)
    s10                     103  (SE +/- 0.33, N = 3)

    Per-configuration options: each flag set additionally used -ldl; s10: -O2 -ljbig -lwebp -llcms2 -ltiff -lfreetype -ljasper -ljpeg -lwmflite -llzma -lbz2 -lxml2 -lgomp
    1. (CC) gcc options: -fopenmp -pthread -lXext -lSM -lICE -lX11 -lz -lm -lpthread

Perf Per Core

Iterations Per Minute Per Core, More Is Better (detected core count of 20; s10: 8)

    -O0: 3.55, -Os: 6.20, -Og: 5.00, -O1: 6.75, -O2: 6.70, -O3: 6.80, -O3 -march=native: 7.15, -Ofast -march=native: 7.25, s10: 12.88

Perf Per Thread

Iterations Per Minute Per Thread, More Is Better - identical to the per-core figures above, since the detected thread count (20; s10: 8) matches the detected core count.

Perf Per Clock

Iterations Per Minute Per GHz, More Is Better (detected base clock of 3.50 GHz; s10: 3.70 GHz - use PTS sensors for real-time frequency/sensor reporting)

    -O0: 20.29, -Os: 35.43, -Og: 28.57, -O1: 38.57, -O2: 38.29, -O3: 38.86, -O3 -march=native: 40.86, -Ofast -march=native: 41.43, s10: 27.84

Result Confidence

Iterations Per Minute, More Is Better - Min / Avg / Max

    -O0: 71 / 71.33 / 72
    -Os: 123 / 123.67 / 124
    -Og: 100 / 100 / 100
    -O1: 135 / 135.33 / 136
    -O2: 134 / 134 / 134
    -O3: 135 / 135.67 / 136
    -O3 -march=native: 142 / 142.67 / 143
    -Ofast -march=native: 145 / 145.33 / 146
    s10: 103 / 103.33 / 104
Result
Iterations Per Minute, More Is Better - GraphicsMagick 1.3.19 - Operation: Resizing

    -O0                      97  (SE +/- 0.00, N = 3)
    -Os                     168  (SE +/- 0.58, N = 3)
    -Og                     149  (SE +/- 0.33, N = 3)
    -O1                     168  (SE +/- 1.00, N = 3)
    -O2                     174  (SE +/- 0.67, N = 3)
    -O3                     171  (SE +/- 0.00, N = 3)
    -O3 -march=native       180  (SE +/- 0.33, N = 3)
    -Ofast -march=native    182  (SE +/- 0.33, N = 3)
    s10                     151  (SE +/- 0.88, N = 3)

    Per-configuration options: each flag set additionally used -ldl; s10: -O2 -ljbig -lwebp -llcms2 -ltiff -lfreetype -ljasper -ljpeg -lwmflite -llzma -lbz2 -lxml2 -lgomp
    1. (CC) gcc options: -fopenmp -pthread -lXext -lSM -lICE -lX11 -lz -lm -lpthread

Perf Per Core

Iterations Per Minute Per Core, More Is Better (detected core count of 20; s10: 8)

    -O0: 4.85, -Os: 8.40, -Og: 7.45, -O1: 8.40, -O2: 8.70, -O3: 8.55, -O3 -march=native: 9.00, -Ofast -march=native: 9.10, s10: 18.88

Perf Per Thread

Iterations Per Minute Per Thread, More Is Better - identical to the per-core figures above, since the detected thread count (20; s10: 8) matches the detected core count.

Perf Per Clock

Iterations Per Minute Per GHz, More Is Better (detected base clock of 3.50 GHz; s10: 3.70 GHz - use PTS sensors for real-time frequency/sensor reporting)

    -O0: 27.71, -Os: 48.00, -Og: 42.57, -O1: 48.00, -O2: 49.71, -O3: 48.86, -O3 -march=native: 51.43, -Ofast -march=native: 52.00, s10: 40.81

Result Confidence

Iterations Per Minute, More Is Better - Min / Avg / Max

    -O0: 97 / 97 / 97
    -Os: 167 / 168 / 169
    -Og: 148 / 148.67 / 149
    -O1: 167 / 168 / 170
    -O2: 173 / 173.67 / 175
    -O3: 171 / 171 / 171
    -O3 -march=native: 179 / 179.67 / 180
    -Ofast -march=native: 181 / 181.67 / 182
    s10: 149 / 150.67 / 152
Result
Iterations Per Minute, More Is Better - GraphicsMagick 1.3.19 - Operation: HWB Color Space

    -O0                     110  (SE +/- 0.33, N = 3)
    -Os                     188  (SE +/- 0.33, N = 3)
    -Og                     168  (SE +/- 0.33, N = 3)
    -O1                     187  (SE +/- 0.33, N = 3)
    -O2                     186  (SE +/- 0.58, N = 3)
    -O3                     185  (SE +/- 0.33, N = 3)
    -O3 -march=native       190  (SE +/- 0.58, N = 3)
    -Ofast -march=native    204  (SE +/- 0.33, N = 3)
    s10                     159  (SE +/- 0.33, N = 3)

    Per-configuration options: each flag set additionally used -ldl; s10: -O2 -ljbig -lwebp -llcms2 -ltiff -lfreetype -ljasper -ljpeg -lwmflite -llzma -lbz2 -lxml2 -lgomp
    1. (CC) gcc options: -fopenmp -pthread -lXext -lSM -lICE -lX11 -lz -lm -lpthread

Perf Per Core

Iterations Per Minute Per Core, More Is Better (detected core count of 20; s10: 8)

    -O0: 5.50, -Os: 9.40, -Og: 8.40, -O1: 9.35, -O2: 9.30, -O3: 9.25, -O3 -march=native: 9.50, -Ofast -march=native: 10.20, s10: 19.88

Perf Per Thread

Iterations Per Minute Per Thread, More Is Better - identical to the per-core figures above, since the detected thread count (20; s10: 8) matches the detected core count.

Perf Per Clock

Iterations Per Minute Per GHz, More Is Better (detected base clock of 3.50 GHz; s10: 3.70 GHz - use PTS sensors for real-time frequency/sensor reporting)

    -O0: 31.43, -Os: 53.71, -Og: 48.00, -O1: 53.43, -O2: 53.14, -O3: 52.86, -O3 -march=native: 54.29, -Ofast -march=native: 58.29, s10: 42.97

Result Confidence

Iterations Per Minute, More Is Better - Min / Avg / Max

    -O0: 109 / 109.67 / 110
    -Os: 188 / 188.33 / 189
    -Og: 167 / 167.67 / 168
    -O1: 186 / 186.67 / 187
    -O2: 185 / 186 / 187
    -O3: 184 / 184.67 / 185
    -O3 -march=native: 189 / 190 / 191
    -Ofast -march=native: 204 / 204.33 / 205
    s10: 158 / 158.67 / 159
Result
Iterations Per Minute, More Is Better - GraphicsMagick 1.3.19 - Operation: Local Adaptive Thresholding

    -O0                      17  (SE +/- 0.00, N = 3)
    -Os                      68  (SE +/- 0.00, N = 3)
    -Og                      54  (SE +/- 0.33, N = 3)
    -O1                      76  (SE +/- 0.00, N = 3)
    -O2                      82  (SE +/- 0.33, N = 3)
    -O3                      83  (SE +/- 0.33, N = 3)
    -O3 -march=native        85  (SE +/- 0.58, N = 3)
    -Ofast -march=native     86  (SE +/- 0.33, N = 3)
    s10                      78  (SE +/- 0.00, N = 3)

    Per-configuration options: each flag set additionally used -ldl; s10: -O2 -ljbig -lwebp -llcms2 -ltiff -lfreetype -ljasper -ljpeg -lwmflite -llzma -lbz2 -lxml2 -lgomp
    1. (CC) gcc options: -fopenmp -pthread -lXext -lSM -lICE -lX11 -lz -lm -lpthread

Perf Per Core

Iterations Per Minute Per Core, More Is Better (detected core count of 20; s10: 8)

    -O0: 0.85, -Os: 3.40, -Og: 2.70, -O1: 3.80, -O2: 4.10, -O3: 4.15, -O3 -march=native: 4.25, -Ofast -march=native: 4.30, s10: 9.75

Perf Per Thread

Iterations Per Minute Per Thread, More Is Better - identical to the per-core figures above, since the detected thread count (20; s10: 8) matches the detected core count.

Perf Per Clock

Iterations Per Minute Per GHz, More Is Better (detected base clock of 3.50 GHz; s10: 3.70 GHz - use PTS sensors for real-time frequency/sensor reporting)

    -O0: 4.86, -Os: 19.43, -Og: 15.43, -O1: 21.71, -O2: 23.43, -O3: 23.71, -O3 -march=native: 24.29, -Ofast -march=native: 24.57, s10: 21.08

Result Confidence

Iterations Per Minute, More Is Better - Min / Avg / Max

    -O0: 17 / 17 / 17
    -Os: 68 / 68 / 68
    -Og: 53 / 53.67 / 54
    -O1: 76 / 76 / 76
    -O2: 82 / 82.33 / 83
    -O3: 82 / 82.67 / 83
    -O3 -march=native: 84 / 85 / 86
    -Ofast -march=native: 85 / 85.67 / 86
    s10: 78 / 78 / 78
Himeno Benchmark The Himeno benchmark is a linear solver for the pressure Poisson equation using a point-Jacobi method. Learn more via the OpenBenchmarking.org test page.
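A point-Jacobi solver repeatedly replaces each interior pressure value with the average of its neighbours minus a source-term contribution. A minimal 2-D sketch of one sweep is shown below; Himeno itself works on a larger 3-D 19-point stencil, and the grid size, names and layout here are assumptions for illustration only:

    #include <stdio.h>

    #define NX 8
    #define NY 8

    /* Illustrative point-Jacobi sweep for a 2-D Poisson problem: p is the
     * current pressure field, b the source term, h the grid spacing, and the
     * updated values are written to pn.  Boundary cells are left untouched. */
    void jacobi_sweep(int nx, int ny, double h,
                      const double *p, const double *b, double *pn)
    {
        for (int j = 1; j < ny - 1; j++) {
            for (int i = 1; i < nx - 1; i++) {
                int k = j * nx + i;
                pn[k] = 0.25 * (p[k - 1] + p[k + 1]
                              + p[k - nx] + p[k + nx]
                              - h * h * b[k]);
            }
        }
    }

    int main(void)
    {
        double p[NY * NX] = {0}, pn[NY * NX] = {0}, b[NY * NX] = {0};
        b[(NY / 2) * NX + NX / 2] = 1.0;          /* point source in the middle */
        jacobi_sweep(NX, NY, 1.0, p, b, pn);
        printf("updated centre value: %f\n", pn[(NY / 2) * NX + NX / 2]);
        return 0;
    }

Stencil sweeps like this are memory-bandwidth and vectorization bound, which is consistent with the large gaps seen below between -O0/-O1 and the -O2-and-above configurations.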
Result
MFLOPS, More Is Better - Himeno Benchmark 3.0 - Poisson Pressure Solver

    -O0                         424.57  (SE +/- 0.64, N = 3)
    -Os                        1181.18  (SE +/- 3.40, N = 3)
    -Og                        1102.64  (SE +/- 1.19, N = 3)
    -O1                        1060.93  (SE +/- 1.09, N = 3)
    -O2                        1916.56  (SE +/- 4.78, N = 3)
    -O3                        1895.45  (SE +/- 6.74, N = 3)
    -O3 -march=native          2113.04  (SE +/- 9.13, N = 3)
    -O3 -march=native -flto    2150.96  (SE +/- 7.85, N = 3)
    -Ofast -march=native       2019.61  (SE +/- 3.96, N = 3)
    s10                        1474.95  (SE +/- 0.96, N = 3)

    Per-configuration options: each flag set was additionally built with -mavx2
    1. (CC) gcc options: -O3

Perf Per Core

MFLOPS Per Core, More Is Better (detected core count of 20; s10: 8)

    -O0: 21.23, -Os: 59.06, -Og: 55.13, -O1: 53.05, -O2: 95.83, -O3: 94.77, -O3 -march=native: 105.65, -O3 -march=native -flto: 107.55, -Ofast -march=native: 100.98, s10: 184.37

Perf Per Thread

MFLOPS Per Thread, More Is Better - identical to the per-core figures above, since the detected thread count (20; s10: 8) matches the detected core count.

Perf Per Clock

MFLOPS Per GHz, More Is Better (detected base clock of 3.50 GHz; s10: 3.70 GHz - use PTS sensors for real-time frequency/sensor reporting)

    -O0: 121.31, -Os: 337.48, -Og: 315.04, -O1: 303.12, -O2: 547.59, -O3: 541.56, -O3 -march=native: 603.73, -O3 -march=native -flto: 614.56, -Ofast -march=native: 577.03, s10: 398.64

Result Confidence

MFLOPS, More Is Better - Min / Avg / Max

    -O0: 423.87 / 424.57 / 425.86
    -Os: 1175.19 / 1181.18 / 1186.94
    -Og: 1101.08 / 1102.64 / 1104.98
    -O1: 1059.2 / 1060.93 / 1062.94
    -O2: 1908.98 / 1916.56 / 1925.38
    -O3: 1884.53 / 1895.45 / 1907.74
    -O3 -march=native: 2095.44 / 2113.04 / 2126.02
    -O3 -march=native -flto: 2135.4 / 2150.96 / 2160.54
    -Ofast -march=native: 2013.52 / 2019.61 / 2027.03
    s10: 1473.87 / 1474.95 / 1476.87
Timed ImageMagick Compilation This test times how long it takes to build ImageMagick. Learn more via the OpenBenchmarking.org test page.
Result
OpenBenchmarking.org Seconds, Fewer Is Better Timed ImageMagick Compilation 6.9.0 Time To Compile -O0 -Os -Og -O1 -O2 -O3 -O3 -march=native -O3 -march=native -flto -Ofast -march=native s10 30 60 90 120 150 SE +/- 0.10, N = 3 SE +/- 0.08, N = 3 SE +/- 0.13, N = 3 SE +/- 0.21, N = 3 SE +/- 0.06, N = 3 SE +/- 0.14, N = 3 SE +/- 0.23, N = 3 SE +/- 0.25, N = 3 SE +/- 0.09, N = 3 SE +/- 0.13, N = 3 9.34 32.43 13.35 27.24 38.55 55.45 55.40 121.45 55.89 74.78
Perf Per Core
OpenBenchmarking.org Seconds x Core, Fewer Is Better Timed ImageMagick Compilation 6.9.0 Performance Per Core - Time To Compile -O0 -Os -Og -O1 -O2 -O3 -O3 -march=native -O3 -march=native -flto -Ofast -march=native s10 500 1000 1500 2000 2500 186.80 648.60 267.00 544.80 771.00 1109.00 1108.00 2429.00 1117.80 598.24 1. -O0: Detected core count of 20 2. -Os: Detected core count of 20 3. -Og: Detected core count of 20 4. -O1: Detected core count of 20 5. -O2: Detected core count of 20 6. -O3: Detected core count of 20 7. -O3 -march=native: Detected core count of 20 8. -O3 -march=native -flto: Detected core count of 20 9. -Ofast -march=native: Detected core count of 20 10. s10: Detected core count of 8
Perf Per Thread
Timed ImageMagick Compilation 6.9.0, Performance Per Thread, Time To Compile (Seconds x Thread, Fewer Is Better):
-O0: 186.80; -Os: 648.60; -Og: 267.00; -O1: 544.80; -O2: 771.00; -O3: 1109.00; -O3 -march=native: 1108.00; -O3 -march=native -flto: 2429.00; -Ofast -march=native: 1117.80; s10: 598.24
Detected thread count of 20 for every configuration except s10 (8 threads).
Perf Per Clock
Timed ImageMagick Compilation 6.9.0, Performance Per Clock, Time To Compile (Seconds x GHz, Fewer Is Better):
-O0: 32.69; -Os: 113.51; -Og: 46.73; -O1: 95.34; -O2: 134.93; -O3: 194.08; -O3 -march=native: 193.90; -O3 -march=native -flto: 425.08; -Ofast -march=native: 195.62; s10: 276.69
Detected base clock of 3.50 GHz for every configuration except s10 (3.70 GHz); use PTS sensors for real-time frequency/sensor reporting.
Result Confidence
Timed ImageMagick Compilation 6.9.0, Time To Compile (Seconds, Fewer Is Better), min / avg / max:
-O0: 9.14 / 9.34 / 9.44; -Os: 32.3 / 32.43 / 32.57; -Og: 13.11 / 13.35 / 13.56; -O1: 26.83 / 27.24 / 27.47; -O2: 38.48 / 38.55 / 38.67; -O3: 55.19 / 55.45 / 55.68; -O3 -march=native: 54.99 / 55.4 / 55.78; -O3 -march=native -flto: 121.04 / 121.45 / 121.9; -Ofast -march=native: 55.77 / 55.89 / 56.06; s10: 74.52 / 74.78 / 74.95
Timed PHP Compilation This test times how long it takes to build PHP. Learn more via the OpenBenchmarking.org test page.
Result
Timed PHP Compilation 5.2.9, Time To Compile (Seconds, Fewer Is Better):
-O0: 5.58 (SE +/- 0.02, N = 3)
-Os: 11.61 (SE +/- 0.06, N = 3)
-Og: 8.18 (SE +/- 0.01, N = 3)
-O1: 9.76 (SE +/- 0.02, N = 3)
-O2: 16.08 (SE +/- 0.04, N = 3)
-O3: 17.59 (SE +/- 0.01, N = 3)
-O3 -march=native: 18.10 (SE +/- 0.19, N = 3)
-O3 -march=native -flto: 82.86 (SE +/- 0.10, N = 3)
-Ofast -march=native: 17.99 (SE +/- 0.04, N = 3)
s10: 34.49 (SE +/- 0.05, N = 3)
Per-result compiler flag annotations (as exported): -Os -O1 -O2 -O3 -O3 -march=native -O3 -march=native -flto -Ofast -march=native -O2
1. (CC) gcc options: -pedantic -ldl -lz -lm
Perf Per Core
Timed PHP Compilation 5.2.9, Performance Per Core, Time To Compile (Seconds x Core, Fewer Is Better):
-O0: 111.60; -Os: 232.20; -Og: 163.60; -O1: 195.20; -O2: 321.60; -O3: 351.80; -O3 -march=native: 362.00; -O3 -march=native -flto: 1657.20; -Ofast -march=native: 359.80; s10: 275.92
Detected core count of 20 for every configuration except s10 (8 cores).
Perf Per Thread
Timed PHP Compilation 5.2.9, Performance Per Thread, Time To Compile (Seconds x Thread, Fewer Is Better):
-O0: 111.60; -Os: 232.20; -Og: 163.60; -O1: 195.20; -O2: 321.60; -O3: 351.80; -O3 -march=native: 362.00; -O3 -march=native -flto: 1657.20; -Ofast -march=native: 359.80; s10: 275.92
Detected thread count of 20 for every configuration except s10 (8 threads).
Perf Per Clock
Timed PHP Compilation 5.2.9, Performance Per Clock, Time To Compile (Seconds x GHz, Fewer Is Better):
-O0: 19.53; -Os: 40.64; -Og: 28.63; -O1: 34.16; -O2: 56.28; -O3: 61.57; -O3 -march=native: 63.35; -O3 -march=native -flto: 290.01; -Ofast -march=native: 62.97; s10: 127.61
Detected base clock of 3.50 GHz for every configuration except s10 (3.70 GHz); use PTS sensors for real-time frequency/sensor reporting.
Result Confidence
Timed PHP Compilation 5.2.9, Time To Compile (Seconds, Fewer Is Better), min / avg / max:
-O0: 5.54 / 5.58 / 5.61; -Os: 11.49 / 11.61 / 11.72; -Og: 8.16 / 8.18 / 8.19; -O1: 9.73 / 9.76 / 9.79; -O2: 16.01 / 16.08 / 16.14; -O3: 17.58 / 17.59 / 17.62; -O3 -march=native: 17.85 / 18.1 / 18.47; -O3 -march=native -flto: 82.7 / 82.86 / 83.04; -Ofast -march=native: 17.9 / 17.99 / 18.05; s10: 34.39 / 34.49 / 34.56
1. (CC) gcc options: -pedantic -ldl -lz -lm
FLAC Audio Encoding This test times how long it takes to encode a sample WAV file to FLAC format three times. Learn more via the OpenBenchmarking.org test page.
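The encoder in FLAC 1.3.1 is single-threaded, so this is effectively a one-core workload regardless of the 20-core host. A hand-run equivalent of a single pass, assuming a local sample.wav (file name is illustrative; the test profile encodes its own sample three times):
  time flac --best -f -o sample.flac sample.wav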
Result
FLAC Audio Encoding 1.3.1, WAV To FLAC (Seconds, Fewer Is Better):
-O0: 46.74 (SE +/- 0.12, N = 5)
-Os: 10.62 (SE +/- 0.05, N = 5)
-Og: 8.11 (SE +/- 0.07, N = 5)
-O1: 7.68 (SE +/- 0.04, N = 5)
-O2: 6.68 (SE +/- 0.07, N = 5)
-O3: 6.83 (SE +/- 0.04, N = 5)
-O3 -march=native: 7.01 (SE +/- 0.10, N = 5)
-Ofast -march=native: 7.03 (SE +/- 0.04, N = 5)
s10: 8.97 (SE +/- 0.02, N = 5)
Per-result compiler flag annotations (as exported): -O0 -Os -Og -O1 -O2 -O3 -O3 -march=native -Ofast -march=native -O2 -logg
1. (CXX) g++ options: -fvisibility=hidden -lm
Result Confidence
FLAC Audio Encoding 1.3.1, WAV To FLAC (Seconds, Fewer Is Better), min / avg / max:
-O0: 46.37 / 46.74 / 47.07; -Os: 10.5 / 10.62 / 10.73; -Og: 7.86 / 8.11 / 8.28; -O1: 7.61 / 7.68 / 7.83; -O2: 6.43 / 6.68 / 6.82; -O3: 6.7 / 6.83 / 6.91; -O3 -march=native: 6.65 / 7.01 / 7.18; -Ofast -march=native: 6.95 / 7.03 / 7.15; s10: 8.92 / 8.97 / 9.06
1. (CXX) g++ options: -fvisibility=hidden -lm
LAME MP3 Encoding LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.
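Like the FLAC test, this is a single-threaded encode. One pass can be timed by hand as a sanity check, assuming a local sample.wav (file name and bitrate are illustrative, not the profile's settings):
  time lame -b 320 sample.wav sample.mp3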
Result
LAME MP3 Encoding 3.99.3, WAV To MP3 (Seconds, Fewer Is Better):
-O0: 36.02 (SE +/- 0.09, N = 5)
-Os: 16.28 (SE +/- 0.06, N = 5)
-Og: 17.15 (SE +/- 0.06, N = 5)
-O1: 15.14 (SE +/- 0.08, N = 5)
-O2: 14.26 (SE +/- 0.10, N = 5)
-O3: 12.52 (SE +/- 0.09, N = 5)
-O3 -march=native: 12.45 (SE +/- 0.10, N = 5)
-Ofast -march=native: 11.34 (SE +/- 0.07, N = 5)
s10: 14.45 (SE +/- 0.03, N = 5)
Per-result compiler flag annotations (as exported): -O0 -Os -Og -O1 -O2 -O3 -O3 -march=native -Ofast -march=native -O3 -ffast-math -funroll-loops -lncurses
1. (CC) gcc options: -pipe -lm
Result Confidence
LAME MP3 Encoding 3.99.3, WAV To MP3 (Seconds, Fewer Is Better), min / avg / max:
-O0: 35.87 / 36.02 / 36.3; -Os: 16.16 / 16.28 / 16.46; -Og: 17.03 / 17.15 / 17.39; -O1: 14.98 / 15.14 / 15.41; -O2: 13.99 / 14.26 / 14.43; -O3: 12.34 / 12.52 / 12.74; -O3 -march=native: 12.27 / 12.45 / 12.82; -Ofast -march=native: 11.21 / 11.34 / 11.6; s10: 14.38 / 14.45 / 14.57
1. (CC) gcc options: -pipe -lm
PostgreSQL pgbench This is a benchmark of PostgreSQL using its bundled pgbench tool. Learn more via the OpenBenchmarking.org test page.
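pgbench drives the server with its built-in TPC-B-like read/write transaction mix; the client counts behind the Normal Load, Single Thread, and Heavy Contention scenarios come from the test profile. A minimal hand-run sketch against a scratch database (database name, scale factor, and client/thread counts are illustrative, not the profile's values):
  createdb pgtest
  pgbench -i -s 100 pgtest
  pgbench -c 32 -j 8 -T 60 pgtest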
Result
PostgreSQL pgbench 9.4.3, Scaling: Buffer Test, Test: Normal Load, Mode: Read Write (TPS, More Is Better):
-O0: 4468.97 (SE +/- 44.94, N = 3)
-Os: 4275.85 (SE +/- 32.55, N = 3)
-Og: 4364.38 (SE +/- 64.39, N = 5)
-O1: 4257.86 (SE +/- 80.41, N = 6)
-O2: 4322.67 (SE +/- 62.15, N = 5)
-O3: 4495.93 (SE +/- 64.78, N = 6)
-O3 -march=native: 4281.30 (SE +/- 18.80, N = 3)
s10: 683.11 (SE +/- 10.47, N = 6)
Per-result compiler flag annotations (as exported): -O0 -Os -Og -O1 -O2 -O3 -O3 -march=native -O2
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -pthread -lpgcommon -lpgport -lpq -lpthread -lrt -lcrypt -ldl -lm
Result Confidence
PostgreSQL pgbench 9.4.3, Scaling: Buffer Test, Test: Normal Load, Mode: Read Write (TPS, More Is Better), min / avg / max:
-O0: 4413.42 / 4468.97 / 4557.94; -Os: 4212.5 / 4275.85 / 4320.54; -Og: 4191.47 / 4364.38 / 4524.95; -O1: 3963.53 / 4257.86 / 4554.8; -O2: 4181.93 / 4322.67 / 4547.21; -O3: 4199.36 / 4495.93 / 4675.61; -O3 -march=native: 4243.76 / 4281.3 / 4301.84; s10: 634.09 / 683.11 / 703.04
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -pthread -lpgcommon -lpgport -lpq -lpthread -lrt -lcrypt -ldl -lm
Result
PostgreSQL pgbench 9.4.3, Scaling: Buffer Test, Test: Single Thread, Mode: Read Write (TPS, More Is Better):
-O0: 303.93 (SE +/- 4.48, N = 6)
-Os: 346.32 (SE +/- 4.13, N = 3)
-Og: 353.15 (SE +/- 5.68, N = 6)
-O1: 351.47 (SE +/- 5.58, N = 3)
-O2: 363.87 (SE +/- 2.06, N = 3)
-O3: 351.89 (SE +/- 2.78, N = 3)
-O3 -march=native: 349.97 (SE +/- 1.70, N = 3)
s10: 91.81 (SE +/- 0.53, N = 3)
Per-result compiler flag annotations (as exported): -O0 -Os -Og -O1 -O2 -O3 -O3 -march=native -O2
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -pthread -lpgcommon -lpgport -lpq -lpthread -lrt -lcrypt -ldl -lm
Result Confidence
PostgreSQL pgbench 9.4.3, Scaling: Buffer Test, Test: Single Thread, Mode: Read Write (TPS, More Is Better), min / avg / max:
-O0: 295.49 / 303.93 / 324.4; -Os: 338.17 / 346.32 / 351.6; -Og: 341.19 / 353.15 / 375.08; -O1: 340.34 / 351.47 / 357.78; -O2: 361.57 / 363.87 / 367.99; -O3: 346.37 / 351.89 / 355.22; -O3 -march=native: 347.7 / 349.97 / 353.29; s10: 90.82 / 91.81 / 92.64
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -pthread -lpgcommon -lpgport -lpq -lpthread -lrt -lcrypt -ldl -lm
Result
PostgreSQL pgbench 9.4.3, Scaling: Buffer Test, Test: Heavy Contention, Mode: Read Write (TPS, More Is Better):
-O0: 4840.76 (SE +/- 21.91, N = 3)
-Os: 4497.15 (SE +/- 27.80, N = 3)
-Og: 4538.18 (SE +/- 54.29, N = 3)
-O1: 4494.38 (SE +/- 67.44, N = 4)
-O2: 4494.08 (SE +/- 19.05, N = 3)
-O3: 4720.08 (SE +/- 40.24, N = 3)
-O3 -march=native: 4539.62 (SE +/- 66.74, N = 3)
s10: 791.05 (SE +/- 15.48, N = 6)
Per-result compiler flag annotations (as exported): -O0 -Os -Og -O1 -O2 -O3 -O3 -march=native -O2
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -pthread -lpgcommon -lpgport -lpq -lpthread -lrt -lcrypt -ldl -lm
Result Confidence
PostgreSQL pgbench 9.4.3, Scaling: Buffer Test, Test: Heavy Contention, Mode: Read Write (TPS, More Is Better), min / avg / max:
-O0: 4803.96 / 4840.76 / 4879.76; -Os: 4444.71 / 4497.15 / 4539.36; -Og: 4470.86 / 4538.18 / 4645.62; -O1: 4343.16 / 4494.38 / 4667.46; -O2: 4457.76 / 4494.08 / 4522.2; -O3: 4668.65 / 4720.08 / 4799.41; -O3 -march=native: 4464.88 / 4539.62 / 4672.77; s10: 731.09 / 791.05 / 824.06
1. (CC) gcc options: -fno-strict-aliasing -fwrapv -pthread -lpgcommon -lpgport -lpq -lpthread -lrt -lcrypt -ldl -lm
Redis Redis is an open-source data structure server. Learn more via the OpenBenchmarking.org test page.
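Each result below is one command type measured with the bundled redis-benchmark client against the locally built redis-server. A minimal hand-run sketch (request count is illustrative, not the profile's setting):
  redis-server --daemonize yes
  redis-benchmark -t lpop,sadd,lpush,get,set -n 1000000 -q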
Result
Redis 3.0.1, Test: LPOP (Requests Per Second, More Is Better):
-O0: 547091.88 (SE +/- 3542.72, N = 3)
-Os: 642758.08 (SE +/- 12533.78, N = 3)
-Og: 637156.51 (SE +/- 8860.98, N = 6)
-O1: 655900.13 (SE +/- 2503.49, N = 3)
-O2: 649030.83 (SE +/- 5733.33, N = 3)
-O3: 646935.48 (SE +/- 11650.40, N = 6)
-O3 -march=native: 655097.69 (SE +/- 10960.53, N = 3)
-Ofast -march=native: 656696.79 (SE +/- 5651.00, N = 3)
s10: 1559720.21 (SE +/- 19069.23, N = 3)
Per-result compiler flag annotations (as exported): -std=gnu99 -pipe -g3 -O3 -funroll-loops
1. (CC) gcc options: -ggdb -rdynamic -lm -pthread -ldl
Result Confidence
Redis 3.0.1, Test: LPOP (Requests Per Second, More Is Better), min / avg / max:
-O0: 540832.88 / 547091.88 / 553097.38; -Os: 618046.94 / 642758.08 / 658761.5; -Og: 610873.56 / 637156.51 / 665336; -O1: 651890.44 / 655900.13 / 660501.94; -O2: 638569.62 / 649030.83 / 658327.81; -O3: 604960.69 / 646935.48 / 671140.94; -O3 -march=native: 640615 / 655097.69 / 676589.94; -Ofast -march=native: 645577.81 / 656696.79 / 664010.62; s10: 1524390.25 / 1559720.21 / 1589825.12
1. (CC) gcc options: -ggdb -rdynamic -lm -pthread -ldl
Result
Redis 3.0.1, Test: SADD (Requests Per Second, More Is Better):
-O0: 491891.31 (SE +/- 1350.43, N = 3)
-Os: 607478.25 (SE +/- 4522.27, N = 3)
-Og: 582272.77 (SE +/- 11078.60, N = 6)
-O1: 605722.93 (SE +/- 2982.88, N = 3)
-O2: 598759.46 (SE +/- 9681.71, N = 3)
-O3: 605861.48 (SE +/- 3715.65, N = 3)
-O3 -march=native: 615258.45 (SE +/- 126.17, N = 3)
-Ofast -march=native: 616016.48 (SE +/- 126.46, N = 3)
s10: 1216564.21 (SE +/- 3417.96, N = 3)
Per-result compiler flag annotations (as exported): -std=gnu99 -pipe -g3 -O3 -funroll-loops
1. (CC) gcc options: -ggdb -rdynamic -lm -pthread -ldl
Result Confidence
Redis 3.0.1, Test: SADD (Requests Per Second, More Is Better), min / avg / max:
-O0: 490196.09 / 491891.31 / 494559.81; -Os: 598444.06 / 607478.25 / 612369.88; -Og: 557724.5 / 582272.77 / 613873.56; -O1: 600240.06 / 605722.93 / 610500.62; -O2: 580720.06 / 598759.46 / 613873.56; -O3: 599161.19 / 605861.48 / 611995.12; -O3 -march=native: 615006.12 / 615258.45 / 615384.62; -Ofast -march=native: 615763.56 / 616016.48 / 616142.94; s10: 1210653.75 / 1216564.21 / 1222493.88
1. (CC) gcc options: -ggdb -rdynamic -lm -pthread -ldl
Result
Redis 3.0.1, Test: LPUSH (Requests Per Second, More Is Better):
-O0: 476295.97 (SE +/- 2663.98, N = 3)
-Os: 598808.52 (SE +/- 1356.34, N = 3)
-Og: 602047.96 (SE +/- 553.25, N = 3)
-O1: 589230.57 (SE +/- 8590.61, N = 6)
-O2: 599526.56 (SE +/- 1359.59, N = 3)
-O3: 593409.02 (SE +/- 4012.35, N = 3)
-O3 -march=native: 584299.37 (SE +/- 11218.12, N = 6)
-Ofast -march=native: 598935.85 (SE +/- 2036.63, N = 3)
s10: 1169238.33 (SE +/- 17952.95, N = 3)
Per-result compiler flag annotations (as exported): -std=gnu99 -pipe -g3 -O3 -funroll-loops
1. (CC) gcc options: -ggdb -rdynamic -lm -pthread -ldl
Result Confidence
Redis 3.0.1, Test: LPUSH (Requests Per Second, More Is Better), min / avg / max:
-O0: 471253.53 / 476295.97 / 480307.41; -Os: 596302.88 / 598808.52 / 600961.5; -Og: 600961.5 / 602047.96 / 602772.75; -O1: 550357.75 / 589230.57 / 612369.88; -O2: 597014.94 / 599526.56 / 601684.75; -O3: 585480.12 / 593409.02 / 598444.06; -O3 -march=native: 550357.75 / 584299.37 / 614628.19; -Ofast -march=native: 594884 / 598935.85 / 601322.94; s10: 1133786.75 / 1169238.33 / 1191895.12
1. (CC) gcc options: -ggdb -rdynamic -lm -pthread -ldl
Result
Redis 3.0.1, Test: GET (Requests Per Second, More Is Better):
-O0: 548655.63 (SE +/- 1566.53, N = 3)
-Os: 645755.96 (SE +/- 3565.12, N = 3)
-Og: 652904.39 (SE +/- 2604.35, N = 3)
-O1: 655681.23 (SE +/- 5335.15, N = 3)
-O2: 628643.96 (SE +/- 5831.74, N = 3)
-O3: 669846.73 (SE +/- 4253.76, N = 3)
-O3 -march=native: 631189.52 (SE +/- 10870.06, N = 4)
-Ofast -march=native: 631191.87 (SE +/- 6785.13, N = 3)
s10: 1570800.87 (SE +/- 27357.36, N = 3)
Per-result compiler flag annotations (as exported): -std=gnu99 -pipe -g3 -O3 -funroll-loops
1. (CC) gcc options: -ggdb -rdynamic -lm -pthread -ldl
Result Confidence
Redis 3.0.1, Test: GET (Requests Per Second, More Is Better), min / avg / max:
-O0: 545851.5 / 548655.63 / 551267.94; -Os: 641025.69 / 645755.96 / 652741.56; -Og: 648088.12 / 652904.39 / 657030.25; -O1: 646412.38 / 655681.23 / 664893.62; -O2: 618046.94 / 628643.96 / 638162.06; -O3: 661375.69 / 669846.73 / 674763.81; -O3 -march=native: 600961.5 / 631189.52 / 652741.56; -Ofast -march=native: 623830.31 / 631191.87 / 644745.31; s10: 1531393.62 / 1570800.87 / 1623376.62
1. (CC) gcc options: -ggdb -rdynamic -lm -pthread -ldl
Result
Redis 3.0.1, Test: SET (Requests Per Second, More Is Better):
-O0: 479934.92 (SE +/- 7680.16, N = 3)
-Os: 596757.10 (SE +/- 5397.69, N = 3)
-Og: 592312.44 (SE +/- 1909.49, N = 3)
-O1: 597265.48 (SE +/- 1956.89, N = 3)
-O2: 586099.29 (SE +/- 6845.46, N = 3)
-O3: 587251.75 (SE +/- 8111.26, N = 5)
-O3 -march=native: 584905.04 (SE +/- 8041.99, N = 3)
-Ofast -march=native: 588019.67 (SE +/- 2097.48, N = 3)
s10: 1149452.71 (SE +/- 3977.33, N = 3)
Per-result compiler flag annotations (as exported): -std=gnu99 -pipe -g3 -O3 -funroll-loops
1. (CC) gcc options: -ggdb -rdynamic -lm -pthread -ldl
Result Confidence
Redis 3.0.1, Test: SET (Requests Per Second, More Is Better), min / avg / max:
-O0: 472143.53 / 479934.92 / 495294.69; -Os: 586166.5 / 596757.1 / 603864.75; -Og: 588581.5 / 592312.44 / 594884; -O1: 593824.25 / 597265.48 / 600600.62; -O2: 572409.88 / 586099.29 / 593119.81; -O3: 555247.06 / 587251.75 / 600240.06; -O3 -march=native: 569152 / 584905.04 / 595592.62; -Ofast -march=native: 584453.56 / 588019.67 / 591715.94; s10: 1145475.38 / 1149452.71 / 1157407.38
1. (CC) gcc options: -ggdb -rdynamic -lm -pthread -ldl
Hierarchical INTegration This test runs the U.S. Department of Energy's Ames Laboratory Hierarchical INTegration (HINT) benchmark. Learn more via the OpenBenchmarking.org test page.
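QUIPs are HINT's quality improvements per second, a throughput-style figure the benchmark computes as it scales the problem across the memory hierarchy. As a quick read on the data below, even -Og lands at roughly three times the -O0 figure: 326497871.93 / 103731655.85 is about 3.15.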
Result
Hierarchical INTegration 1.0, Test: FLOAT (QUIPs, More Is Better):
-O0: 103731655.85 (SE +/- 246206.62, N = 3)
-Os: 303914359.33 (SE +/- 119724.21, N = 3)
-Og: 326497871.93 (SE +/- 179064.26, N = 3)
-O1: 242450705.97 (SE +/- 222306.33, N = 3)
-O2: 317711776.83 (SE +/- 1047951.03, N = 3)
-O3: 312279718.27 (SE +/- 978215.41, N = 3)
-O3 -march=native: 310268777.87 (SE +/- 619811.97, N = 3)
-O3 -march=native -flto: 312975471.93 (SE +/- 100984.48, N = 3)
-Ofast -march=native: 309403432.89 (SE +/- 547786.00, N = 3)
s10: 287050317.81 (SE +/- 89816.12, N = 3)
Per-result compiler flag annotations (as exported): -O0 -Os -Og -O1 -O2 -O3 -O3 -march=native -O3 -march=native -flto -Ofast -march=native -O3 -march=native
1. (CC) gcc options: -lm
Result Confidence
Hierarchical INTegration 1.0, Test: FLOAT (QUIPs, More Is Better), min / avg / max:
-O0: 103281638.3 / 103731655.85 / 104129772.24; -Os: 303707426.5 / 303914359.33 / 304122160.59; -Og: 326185447.17 / 326497871.93 / 326805692.69; -O1: 242007569.42 / 242450705.97 / 242703624.08; -O2: 315646886.09 / 317711776.83 / 319055306.98; -O3: 310578566.27 / 312279718.27 / 313967120.16; -O3 -march=native: 309298222.31 / 310268777.87 / 311421900.74; -O3 -march=native -flto: 312805345.7 / 312975471.93 / 313154802.73; -Ofast -march=native: 308439297.66 / 309403432.89 / 310336101.17; s10: 286887283.99 / 287050317.81 / 287197147.72
1. (CC) gcc options: -lm