9684x-march

2 x AMD EPYC 9684X 96-Core testing with an AMD Titanite_4G (RTI1007B BIOS) and ASPEED on Ubuntu 23.10 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2403279-NE-9684XMARC57
Test categories represented in this result file: CPU Massive (3 tests), Creator Workloads (2 tests), HPC - High Performance Computing (2 tests), Machine Learning (2 tests), Multi-Core (2 tests), Python Tests (3 tests), Common Workstation Benchmarks (2 tests).

Result Identifier | Date | Run Test Duration
PRE | March 27 | 2 Hours, 34 Minutes
a | March 27 | 8 Hours, 3 Minutes
Average Run Test Duration: 5 Hours, 18 Minutes


9684x-march Benchmark System Details

  Processor: 2 x AMD EPYC 9684X 96-Core @ 2.55GHz (192 Cores / 384 Threads)
  Motherboard: AMD Titanite_4G (RTI1007B BIOS)
  Chipset: AMD Device 14a4
  Memory: 1520GB
  Disk: 3201GB Micron_7450_MTFDKCB3T2TFS + 257GB Flash Drive
  Graphics: ASPEED
  Network: Broadcom NetXtreme BCM5720 PCIe
  OS: Ubuntu 23.10
  Kernel: 6.5.0-25-generic (x86_64)
  Compiler: GCC 13.2.0
  File-System: ext4
  Screen Resolution: 640x480

System Logs / Notes:
  - Transparent Huge Pages: madvise
  - Compiler configure options: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  - Scaling Governor: acpi-cpufreq performance (Boost: Enabled)
  - CPU Microcode: 0xa10113e
  - Python 3.11.6
  - Security mitigations: gather_data_sampling: Not affected; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Not affected; spec_rstack_overflow: Mitigation of Safe RET; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected

PRE vs. a Comparison: largest per-test differences (percentage by which the leading result is ahead):

tensorflow: CPU - 32 - ResNet-50: 9.3% (PRE leads)
pytorch: CPU - 32 - ResNet-152: 7.1% (a leads)
pytorch: CPU - 1 - ResNet-152: 6.1% (a leads)
tensorflow: CPU - 32 - GoogLeNet: 5% (PRE leads)
tensorflow: CPU - 1 - GoogLeNet: 4.9% (a leads)
tensorflow: CPU - 16 - ResNet-50: 4% (a leads)
tensorflow: CPU - 1 - ResNet-50: 3.8% (PRE leads)
pytorch: CPU - 64 - ResNet-152: 3.4% (PRE leads)
pytorch: CPU - 32 - ResNet-50: 3.2% (a leads)
tensorflow: CPU - 256 - AlexNet: 3% (PRE leads)
tensorflow: CPU - 32 - AlexNet: 2.9% (a leads)
pytorch: CPU - 16 - ResNet-50: 2.9% (a leads)
pytorch: CPU - 512 - ResNet-50: 2.8% (a leads)
rocksdb: Read While Writing: 2.7% (PRE leads)
pytorch: CPU - 1 - Efficientnet_v2_l: 2.5% (a leads)
pytorch: CPU - 64 - ResNet-50: 2.4% (PRE leads)
tensorflow: CPU - 16 - AlexNet: 2.2% (a leads)
tensorflow: CPU - 64 - AlexNet: 2.1% (PRE leads)
pytorch: CPU - 256 - ResNet-50: 2.1% (PRE leads)
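These percentages can be reproduced from the raw values in the summary table below. A minimal Python sketch follows; the ratio convention (leading result divided by trailing result, minus one) is an assumption inferred from the chart, and the three tests shown are just examples taken from this result file:

    # Recompute the relative deltas shown in the PRE vs. a comparison above.
    # Assumption: each percentage is (better / worse - 1) * 100, with all of
    # these metrics being "more is better".
    results = {
        "tensorflow: CPU - 32 - ResNet-50 (images/sec)": {"PRE": 65.88, "a": 60.25},
        "pytorch: CPU - 32 - ResNet-152 (batches/sec)": {"PRE": 8.72, "a": 9.34},
        "rocksdb: Read While Writing (Op/s)": {"PRE": 27130363, "a": 26406662},
    }

    for test, vals in results.items():
        leader = max(vals, key=vals.get)    # identifier with the higher value
        trailer = min(vals, key=vals.get)   # identifier with the lower value
        delta = (vals[leader] / vals[trailer] - 1) * 100
        print(f"{test}: {leader} leads {trailer} by {delta:.1f}%")

    # Expected output: 9.3%, 7.1% and 2.7%, matching the chart above.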

9684x-march Result Summary (build-mesa and blender in Seconds, fewer is better; pytorch in batches/sec, tensorflow in images/sec, rocksdb in Op/s, brl-cad in VGR Performance Metric, more is better):

build-mesa: Time To Compile | PRE: 14.66 | a: 14.76
pytorch: CPU - 1 - ResNet-50 | PRE: 23.06 | a: 23.20
pytorch: CPU - 1 - ResNet-152 | PRE: 9.97 | a: 10.58
pytorch: CPU - 16 - ResNet-50 | PRE: 20.93 | a: 21.53
pytorch: CPU - 32 - ResNet-50 | PRE: 20.19 | a: 20.84
pytorch: CPU - 64 - ResNet-50 | PRE: 21.59 | a: 21.08
pytorch: CPU - 16 - ResNet-152 | PRE: 8.93 | a: 9.01
pytorch: CPU - 256 - ResNet-50 | PRE: 21.20 | a: 20.77
pytorch: CPU - 32 - ResNet-152 | PRE: 8.72 | a: 9.34
pytorch: CPU - 512 - ResNet-50 | PRE: 20.43 | a: 21.01
pytorch: CPU - 64 - ResNet-152 | PRE: 9.21 | a: 8.91
pytorch: CPU - 256 - ResNet-152 | PRE: 8.92 | a: 9.09
pytorch: CPU - 512 - ResNet-152 | PRE: 9.47 | a: 9.33
pytorch: CPU - 1 - Efficientnet_v2_l | PRE: 6.29 | a: 6.45
pytorch: CPU - 16 - Efficientnet_v2_l | PRE: 2.33 | a: 2.33
pytorch: CPU - 32 - Efficientnet_v2_l | PRE: 2.33 | a: 2.31
pytorch: CPU - 64 - Efficientnet_v2_l | PRE: 2.32 | a: 2.31
pytorch: CPU - 256 - Efficientnet_v2_l | PRE: 2.29 | a: 2.33
pytorch: CPU - 512 - Efficientnet_v2_l | PRE: 2.31 | a: 2.33
tensorflow: CPU - 1 - AlexNet | PRE: 21.16 | a: 20.78
tensorflow: CPU - 16 - AlexNet | PRE: 242.29 | a: 247.55
tensorflow: CPU - 32 - AlexNet | PRE: 424.06 | a: 436.25
tensorflow: CPU - 64 - AlexNet | PRE: 765.55 | a: 749.46
tensorflow: CPU - 1 - GoogLeNet | PRE: 12.58 | a: 13.20
tensorflow: CPU - 1 - ResNet-50 | PRE: 4.05 | a: 3.90
tensorflow: CPU - 256 - AlexNet | PRE: 1652.23 | a: 1604.52
tensorflow: CPU - 512 - AlexNet | PRE: 1980.51 | a: 2010.56
tensorflow: CPU - 16 - GoogLeNet | PRE: 112.64 | a: 114.26
tensorflow: CPU - 16 - ResNet-50 | PRE: 39.68 | a: 41.26
tensorflow: CPU - 32 - GoogLeNet | PRE: 185.16 | a: 176.36
tensorflow: CPU - 32 - ResNet-50 | PRE: 65.88 | a: 60.25
tensorflow: CPU - 64 - GoogLeNet | PRE: 275.34 | a: 273.68
tensorflow: CPU - 64 - ResNet-50 | PRE: 87.72 | a: 88.93
tensorflow: CPU - 256 - GoogLeNet | PRE: 400.03 | a: 399.46
tensorflow: CPU - 256 - ResNet-50 | PRE: 119.83 | a: 118.88
tensorflow: CPU - 512 - GoogLeNet | PRE: 493.31 | a: 484.02
tensorflow: CPU - 512 - ResNet-50 | PRE: 140.59 | a: 140.49
blender: BMW27 - CPU-Only | PRE: 7.55 | a: 7.55
blender: Junkshop - CPU-Only | PRE: 11.40 | a: 11.44
blender: Classroom - CPU-Only | PRE: 18.03 | a: 18.08
blender: Fishy Cat - CPU-Only | PRE: 9.96 | a: 9.85
blender: Barbershop - CPU-Only | PRE: 67.38 | a: 67.66
blender: Pabellon Barcelona - CPU-Only | PRE: 22.99 | a: 23.10
rocksdb: Overwrite | PRE: 421049 | a: 421616
rocksdb: Rand Read | PRE: 1105306233 | a: 1108892776
rocksdb: Update Rand | PRE: 421266 | a: 425687
rocksdb: Read While Writing | PRE: 27130363 | a: 26406662
rocksdb: Read Rand Write Rand | PRE: 3619142 | a: 3643263
brl-cad: VGR Performance Metric | PRE: 5956612 | a: 5927564
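The Phoronix Test Suite can also roll per-test results up into overall geometric means. Below is a minimal Python sketch of that kind of roll-up over a/PRE ratios for a few of the More Is Better rows above; the chosen subset and the exact averaging convention are illustrative assumptions, not necessarily what OpenBenchmarking.org reports:

    import math

    # Ratios of a to PRE for a few of the "More Is Better" rows above
    # (values > 1.0 favour result "a"); an illustrative subset, not the full file.
    ratios = [
        23.20 / 23.06,     # pytorch: CPU - 1 - ResNet-50
        247.55 / 242.29,   # tensorflow: CPU - 16 - AlexNet
        421616 / 421049,   # rocksdb: Overwrite
    ]

    # Geometric mean = exp(mean of log-ratios).
    geo_mean = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
    print(f"Geometric mean of a/PRE ratios over this subset: {geo_mean:.4f}")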

Timed Mesa Compilation

Timed Mesa Compilation 24.0: Time To Compile (Seconds, Fewer Is Better)
  PRE: 14.66
  a: 14.76 (SE +/- 0.04, N = 3; Min: 14.68 / Avg: 14.76 / Max: 14.83)

PyTorch

PyTorch 2.2.1 (batches/sec, More Is Better)

Device: CPU - Batch Size: 1 - Model: ResNet-50
  PRE: 23.06 (MIN: 12.95 / MAX: 24.52)
  a: 23.20 (SE +/- 0.20, N = 15; MIN: 12.21 / MAX: 25.13; Min: 21.58 / Avg: 23.2 / Max: 24.06)

Device: CPU - Batch Size: 1 - Model: ResNet-152
  PRE: 9.97 (MIN: 4.85 / MAX: 10.69)
  a: 10.58 (SE +/- 0.10, N = 15; MIN: 4.55 / MAX: 11.67; Min: 9.77 / Avg: 10.58 / Max: 11.11)

Device: CPU - Batch Size: 16 - Model: ResNet-50
  PRE: 20.93 (MIN: 12.91 / MAX: 21.51)
  a: 21.53 (SE +/- 0.16, N = 3; MIN: 12.64 / MAX: 22.28; Min: 21.22 / Avg: 21.53 / Max: 21.71)

Device: CPU - Batch Size: 32 - Model: ResNet-50
  PRE: 20.19 (MIN: 11.95 / MAX: 21.04)
  a: 20.84 (SE +/- 0.16, N = 15; MIN: 11.24 / MAX: 22.33; Min: 19.16 / Avg: 20.84 / Max: 21.56)

Device: CPU - Batch Size: 64 - Model: ResNet-50
  PRE: 21.59 (MIN: 14.02 / MAX: 22.21)
  a: 21.08 (SE +/- 0.23, N = 3; MIN: 13.2 / MAX: 22.07; Min: 20.75 / Avg: 21.08 / Max: 21.52)

Device: CPU - Batch Size: 16 - Model: ResNet-152
  PRE: 8.93 (MIN: 8.8 / MAX: 9.04)
  a: 9.01 (SE +/- 0.09, N = 3; MIN: 4.81 / MAX: 9.31; Min: 8.82 / Avg: 9.01 / Max: 9.12)

Device: CPU - Batch Size: 256 - Model: ResNet-50
  PRE: 21.20 (MIN: 12.68 / MAX: 21.88)
  a: 20.77 (SE +/- 0.10, N = 3; MIN: 12.97 / MAX: 21.67; Min: 20.64 / Avg: 20.77 / Max: 20.97)

Device: CPU - Batch Size: 32 - Model: ResNet-152
  PRE: 8.72 (MIN: 5.23 / MAX: 9.06)
  a: 9.34 (SE +/- 0.08, N = 3; MIN: 4.74 / MAX: 9.74; Min: 9.2 / Avg: 9.34 / Max: 9.49)

Device: CPU - Batch Size: 512 - Model: ResNet-50
  PRE: 20.43 (MIN: 13.46 / MAX: 21.1)
  a: 21.01 (SE +/- 0.14, N = 15; MIN: 11.92 / MAX: 22.65; Min: 19.87 / Avg: 21.01 / Max: 21.97)

Device: CPU - Batch Size: 64 - Model: ResNet-152
  PRE: 9.21 (MIN: 4.8 / MAX: 9.43)
  a: 8.91 (SE +/- 0.09, N = 12; MIN: 4.5 / MAX: 9.7; Min: 8.55 / Avg: 8.91 / Max: 9.5)

Device: CPU - Batch Size: 256 - Model: ResNet-152
  PRE: 8.92 (MIN: 5.04 / MAX: 9.16)
  a: 9.09 (SE +/- 0.10, N = 12; MIN: 4.84 / MAX: 10.03; Min: 8.58 / Avg: 9.09 / Max: 9.65)

Device: CPU - Batch Size: 512 - Model: ResNet-152
  PRE: 9.47 (MIN: 5.17 / MAX: 9.87)
  a: 9.33 (SE +/- 0.10, N = 3; MIN: 4.69 / MAX: 9.66; Min: 9.19 / Avg: 9.33 / Max: 9.52)

Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l
  PRE: 6.29 (MIN: 3.09 / MAX: 6.44)
  a: 6.45 (SE +/- 0.09, N = 3; MIN: 3.05 / MAX: 6.85; Min: 6.28 / Avg: 6.45 / Max: 6.56)

Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l
  PRE: 2.33 (MIN: 1.76 / MAX: 2.72)
  a: 2.33 (SE +/- 0.01, N = 3; MIN: 1.77 / MAX: 2.9; Min: 2.31 / Avg: 2.33 / Max: 2.35)

Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l
  PRE: 2.33 (MIN: 1.78 / MAX: 2.8)
  a: 2.31 (SE +/- 0.01, N = 3; MIN: 1.88 / MAX: 2.74; Min: 2.3 / Avg: 2.31 / Max: 2.32)

Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l
  PRE: 2.32 (MIN: 1.9 / MAX: 2.75)
  a: 2.31 (SE +/- 0.01, N = 3; MIN: 1.53 / MAX: 2.83; Min: 2.3 / Avg: 2.31 / Max: 2.32)

Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l
  PRE: 2.29 (MIN: 1.79 / MAX: 2.72)
  a: 2.33 (SE +/- 0.01, N = 3; MIN: 1.59 / MAX: 2.78; Min: 2.31 / Avg: 2.33 / Max: 2.34)

Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_l
  PRE: 2.31 (MIN: 1.7 / MAX: 2.84)
  a: 2.33 (SE +/- 0.01, N = 3; MIN: 1.58 / MAX: 2.83; Min: 2.3 / Avg: 2.33 / Max: 2.35)

TensorFlow

TensorFlow 2.16.1 (images/sec, More Is Better)

Device: CPU - Batch Size: 1 - Model: AlexNet
  PRE: 21.16
  a: 20.78 (SE +/- 0.16, N = 15; Min: 19.53 / Avg: 20.78 / Max: 21.6)

Device: CPU - Batch Size: 16 - Model: AlexNet
  PRE: 242.29
  a: 247.55 (SE +/- 2.30, N = 15; Min: 229.93 / Avg: 247.55 / Max: 258.22)

Device: CPU - Batch Size: 32 - Model: AlexNet
  PRE: 424.06
  a: 436.25 (SE +/- 6.62, N = 15; Min: 402.31 / Avg: 436.25 / Max: 465.37)

Device: CPU - Batch Size: 64 - Model: AlexNet
  PRE: 765.55
  a: 749.46 (SE +/- 5.39, N = 15; Min: 713.42 / Avg: 749.46 / Max: 780.6)

Device: CPU - Batch Size: 1 - Model: GoogLeNet
  PRE: 12.58
  a: 13.20 (SE +/- 0.14, N = 15; Min: 12.45 / Avg: 13.2 / Max: 13.99)

Device: CPU - Batch Size: 1 - Model: ResNet-50
  PRE: 4.05
  a: 3.90

Device: CPU - Batch Size: 256 - Model: AlexNet
  PRE: 1652.23
  a: 1604.52

Device: CPU - Batch Size: 512 - Model: AlexNet
  PRE: 1980.51
  a: 2010.56

Device: CPU - Batch Size: 16 - Model: GoogLeNet
  PRE: 112.64
  a: 114.26

Device: CPU - Batch Size: 16 - Model: ResNet-50
  PRE: 39.68
  a: 41.26

Device: CPU - Batch Size: 32 - Model: GoogLeNet
  PRE: 185.16
  a: 176.36

Device: CPU - Batch Size: 32 - Model: ResNet-50
  PRE: 65.88
  a: 60.25

Device: CPU - Batch Size: 64 - Model: GoogLeNet
  PRE: 275.34
  a: 273.68

Device: CPU - Batch Size: 64 - Model: ResNet-50
  PRE: 87.72
  a: 88.93

Device: CPU - Batch Size: 256 - Model: GoogLeNet
  PRE: 400.03
  a: 399.46

Device: CPU - Batch Size: 256 - Model: ResNet-50
  PRE: 119.83
  a: 118.88

Device: CPU - Batch Size: 512 - Model: GoogLeNet
  PRE: 493.31
  a: 484.02

Device: CPU - Batch Size: 512 - Model: ResNet-50
  PRE: 140.59
  a: 140.49

Blender

Blender 4.1 (Seconds, Fewer Is Better)

Blend File: BMW27 - Compute: CPU-Only
  PRE: 7.55
  a: 7.55

Blend File: Junkshop - Compute: CPU-Only
  PRE: 11.40
  a: 11.44

Blend File: Classroom - Compute: CPU-Only
  PRE: 18.03
  a: 18.08

Blend File: Fishy Cat - Compute: CPU-Only
  PRE: 9.96
  a: 9.85

Blend File: Barbershop - Compute: CPU-Only
  PRE: 67.38
  a: 67.66

Blend File: Pabellon Barcelona - Compute: CPU-Only
  PRE: 22.99
  a: 23.10

RocksDB

RocksDB 9.0 (Op/s, More Is Better)

Test: Overwrite
  PRE: 421049
  a: 421616

Test: Random Read
  PRE: 1105306233
  a: 1108892776

Test: Update Random
  PRE: 421266
  a: 425687

Test: Read While Writing
  PRE: 27130363
  a: 26406662

Test: Read Random Write Random
  PRE: 3619142
  a: 3643263

1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

BRL-CAD

BRL-CAD 7.38.2 (VGR Performance Metric, More Is Better)

VGR Performance Metric
  PRE: 5956612
  a: 5927564

1. (CXX) g++ options: -std=c++17 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lnetpbm -lregex_brl -lz_brl -lassimp -ldl -lm -ltk8.6