High Availability
SoliDB clusters are designed to keep data available and durable even if a node fails.
Cluster Setup
Deploy SoliDB in high-availability cluster mode. Learn how to configure nodes, manage peer discovery, and scale your deployment.
Architecture
SoliDB uses a leaderless, shared-nothing architecture. Data is automatically sharded across all available nodes using consistent hashing.
Sharding
Collections are partitioned into shards. Each shard is owned by a specific node, determined by the hash of the document key.
Replication
Each shard can be replicated to multiple nodes for redundancy. If a node fails, its replicas are automatically promoted.
Cluster Setup
The easiest way to start a cluster is to spin up multiple instances on different ports and tell them about each other using the --peer flag.
3-Node Local Cluster
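A minimal sketch of three nodes on one machine, using only the flags documented under Command Line Options below. The ports, node IDs, and data directories are example values, and it is assumed that --peer can be repeated and takes another node's replication address (host:replication-port), as in the help text's example.

# Node 1 - run each command in its own terminal, or add -d to daemonize
solidb --port 6745 --replication-port 6746 --node-id node1 --data-dir ./data1

# Node 2 - joins by pointing --peer at node 1's replication port
solidb --port 6755 --replication-port 6756 --node-id node2 --data-dir ./data2 --peer 127.0.0.1:6746

# Node 3 - peers with both existing nodes
solidb --port 6765 --replication-port 6766 --node-id node3 --data-dir ./data3 --peer 127.0.0.1:6746 --peer 127.0.0.1:6756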
Security
Secure inter-node communication using a shared secret keyfile. This prevents unauthorized nodes from joining the cluster.
1. Generate Keyfile
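One way to create the shared secret, assuming SoliDB treats the keyfile as an opaque secret whose exact format is not specified here:

# Generate 32 random bytes as hex and restrict read access
openssl rand -hex 32 > cluster.key
chmod 600 cluster.key
# Distribute the same cluster.key to every node in the cluster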
2. Start with Keyfile
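Each node then passes the same secret via --keyfile; the other values below are illustrative and reuse the local cluster from above.

# First node
solidb --port 6745 --replication-port 6746 --node-id node1 --data-dir ./data1 --keyfile cluster.key

# A joining node must present the same keyfile, otherwise it cannot join the cluster
solidb --port 6755 --replication-port 6756 --node-id node2 --data-dir ./data2 --peer 127.0.0.1:6746 --keyfile cluster.key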
HMAC-SHA256 Authentication
Cluster nodes authenticate using HMAC-SHA256 challenge-response.
When a node connects, the server sends a random challenge. The connecting node must compute HMAC-SHA256(keyfile, challenge) and return the 64-character hex response.
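As a rough illustration of the computation only (not the wire protocol), the expected response for a given challenge can be reproduced with openssl, assuming the keyfile contents are used directly as the HMAC key and the challenge bytes are hashed as sent:

challenge="<challenge-from-server>"   # placeholder for the random challenge
# Keyed hash of the challenge using the keyfile contents; prints a 64-character hex digest
printf '%s' "$challenge" | openssl dgst -sha256 -hmac "$(cat cluster.key)" | awk '{print $NF}'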
Command Line Options
SoliDB - A high-performance document database
Usage: solidb [OPTIONS]
Options:
-p, --port
Port to listen on [default: 6745]
--node-id
Unique node identifier (auto-generated if not provided)
--peer
Peer nodes to replicate with (e.g., --peer 192.168.1.2:6746)
--replication-port
Port for inter-node replication traffic [default: 6746]
--data-dir
Data directory path [default: ./data]
-d, --daemon
Run as a daemon (background process)
--pid-file
PID file path (used in daemon mode) [default: ./solidb.pid]
--log-file
Log file path (used in daemon mode) [default: ./solidb.log]
--keyfile
Optional keyfile for cluster node authentication
-h, --help
Print help
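For example, a single node can be run in the background with the daemon-related flags; the paths below are illustrative, and stopping it via the PID file assumes the process shuts down cleanly on SIGTERM.

# Start as a daemon, recording the PID and writing logs to explicit paths
solidb --daemon --port 6745 --data-dir /var/lib/solidb --pid-file /var/run/solidb.pid --log-file /var/log/solidb.log

# Stop it later using the recorded PID
kill "$(cat /var/run/solidb.pid)"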