Exaba Demonstrates Object Store
- Stuart Inglis
- Jul 13, 2024
- 1 min read
July 14th, 2024, 5:00 PM EST
Exaba today demonstrated a 12-node cluster achieving approximately 30,000–40,000 API calls per second per node, using common object store benchmarking tools.
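To make the per-node figure concrete, here is a minimal sketch of how calls-per-second throughput is typically measured by such benchmarking tools: several worker threads issue requests for a fixed duration and the completed calls are divided by the elapsed time. The request itself is stubbed out here (`do_request` is a hypothetical placeholder, not Exaba's tooling); a real run would issue S3-style PUT/GET calls against the cluster endpoint.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def do_request() -> None:
    """Stand-in for one object store API call (e.g. an S3 PUT)."""
    pass  # a real benchmark would perform an HTTP request here

def measure_calls_per_second(workers: int = 8, duration_s: float = 0.5) -> float:
    """Drive do_request from several threads and report total calls/sec."""
    deadline = time.monotonic() + duration_s

    def worker(_: int) -> int:
        calls = 0
        while time.monotonic() < deadline:
            do_request()
            calls += 1
        return calls

    with ThreadPoolExecutor(max_workers=workers) as pool:
        totals = list(pool.map(worker, range(workers)))
    return sum(totals) / duration_s

if __name__ == "__main__":
    print(f"~{measure_calls_per_second():,.0f} calls/sec (stubbed)")
```

In a real benchmark the stub would be replaced by actual object store requests, and the measurement would be repeated per node and aggregated across the cluster.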
Each server node was a combined Data Store ("D") and Control Path ("C") node. The nodes' internal drives were a mixture of SATA, SAS, RAM and NVMe drives, all of which were exported over NVMe-oF, either directly or via Exaba's Data Bridge, the process that allows non-NVMe drives to be used in modern clusters.
The nodes carried a heterogeneous mix of NICs, including NVIDIA® ConnectX-6, Broadcom® BCM57508 and Intel® E810 cards. Exaba's maximum performance is achieved with RoCE v2 (RDMA) compatible NICs such as these and an appropriate RoCE v2 Ethernet fabric. For this benchmark we used 100Gb NVIDIA® Mellanox® 2100 switches configured as an MLAG.
Each of the combined "DC" nodes can see all of the NVMe devices (including the legacy/bridged non-NVMe ones). This type of architecture (simply a bunch of servers with internal or directly attached storage) is commonly referred to as a Converged, Shared Everything (CSE) topology.
The topology can change depending on the use case, which is primarily driven by peak data write throughput, petabytes written per year, and the number of clients accessing the object store.
Exaba's object store is topology-agnostic and supports single servers as well as arbitrarily sized clusters. Topologies include Converged Shared Everything (CSE) along with optimized combinations of the "D" and "C" nodes in a Disaggregated (non-converged) Shared Everything (DSE) topology.
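The difference between the two topologies can be sketched with a small illustrative model (not Exaba's actual API): in CSE every node carries both the "D" and "C" roles, while in DSE the roles are split across dedicated nodes.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    """One server in the cluster, tagged with the roles it runs."""
    name: str
    roles: frozenset  # subset of {"D", "C"}

def cse_cluster(n: int) -> list:
    """Converged Shared Everything: every node is a combined "DC" node."""
    return [Node(f"dc{i}", frozenset({"D", "C"})) for i in range(n)]

def dse_cluster(data_nodes: int, control_nodes: int) -> list:
    """Disaggregated Shared Everything: "D" and "C" on separate nodes."""
    return ([Node(f"d{i}", frozenset({"D"})) for i in range(data_nodes)]
            + [Node(f"c{i}", frozenset({"C"})) for i in range(control_nodes)])
```

For example, the 12-node benchmark cluster above corresponds to `cse_cluster(12)`, whereas a write-heavy deployment might instead balance dedicated data and control nodes with `dse_cluster(8, 4)`. The node names and counts are illustrative assumptions, not figures from the benchmark.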
All trademarks mentioned herein are the property of their respective owners. Exaba is not affiliated with, endorsed by, or sponsored by these trademark owners.