HOW IT WORKS

A walkthrough of the active topology and the interfaces that drive it.

How Sandstore Works

Sandstore is built around two top-level orchestration interfaces: a ControlPlaneOrchestrator that owns metadata, placement, and consensus coordination, and a DataPlaneOrchestrator that owns chunk movement, replica fanout, and read failover. Everything beneath those interfaces is swappable.

The current topology is hyperconverged: every node runs both planes. This page walks through how that topology works. The same framework supports other topologies built from different implementations of the same interfaces.
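In Go (the codebase's language), the two-plane split can be sketched roughly as follows. The method names and stub types here are illustrative assumptions, not the real interfaces in internal/orchestrators/interfaces.go:

```go
package main

import "fmt"

// Method sets are assumptions; the real definitions live in
// internal/orchestrators/interfaces.go.
type ControlPlaneOrchestrator interface {
	PlaceChunks(fileID string) ([]string, error) // placement targets
	ApplyMetadata(op string) error               // consensus-backed commit
}

type DataPlaneOrchestrator interface {
	WriteChunks(fileID string, targets []string) error // replica fanout
	ReadChunks(fileID string) ([]byte, error)          // read with failover
}

// Hyperconverged topology: every node wires both planes together.
type Node struct {
	Control ControlPlaneOrchestrator
	Data    DataPlaneOrchestrator
}

// Trivial stubs, just to show that any implementation satisfying the
// interfaces can be plugged in.
type stubControl struct{}

func (stubControl) PlaceChunks(string) ([]string, error) {
	return []string{"n1", "n2", "n3"}, nil
}
func (stubControl) ApplyMetadata(string) error { return nil }

type stubData struct{}

func (stubData) WriteChunks(string, []string) error { return nil }
func (stubData) ReadChunks(string) ([]byte, error)  { return []byte("chunk"), nil }

func main() {
	n := Node{Control: stubControl{}, Data: stubData{}}
	targets, _ := n.Control.PlaceChunks("file-a")
	fmt.Println(targets) // the same node serves both planes
}
```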

How a Write Works

Client Request

A client sends a store-file request to the cluster leader, discovered automatically via the topology router.

Placement Decision

The ControlPlaneOrchestrator selects chunk placement targets using the PlacementStrategy and creates a write intent in metadata.

Chunk Write

The DataPlaneOrchestrator fans out prepare RPCs to the selected nodes, then coordinates commit across replicas.

Metadata Commit

Once chunks are committed, metadata is applied through the MetadataReplicator and persisted across the cluster.
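The four steps above can be condensed into a hedged sketch. storeFile, the prepare callback, and the node names are hypothetical; the commit and metadata-apply phases are elided:

```go
package main

import "fmt"

// Sketch of the write path; not the real Sandstore API.
// prepare stands in for the per-replica prepare RPC.
func storeFile(fileID string, targets []string, prepare func(node string) bool) error {
	// Phase one: fan out prepare RPCs; every selected replica must
	// stage the chunk before anything commits.
	for _, n := range targets {
		if !prepare(n) {
			return fmt.Errorf("prepare failed on %s for %s", n, fileID)
		}
	}
	// Phase two: commit across replicas, then apply the metadata
	// through the MetadataReplicator (both elided in this sketch).
	return nil
}

func main() {
	targets := []string{"n1", "n2", "n3"} // chosen by the PlacementStrategy
	err := storeFile("file-a", targets, func(string) bool { return true })
	fmt.Println(err) // <nil>
}
```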

How a Read Works

The ControlPlaneOrchestrator resolves the file metadata, the DataPlaneOrchestrator retrieves chunks from available nodes with automatic failover, and the client receives the reassembled file.

Consensus: The Current Implementation

The current topology uses Raft consensus to ensure metadata consistency across all nodes. This is one implementation of the MetadataReplicator interface. The interface accepts any consensus mechanism. What follows describes how the active Raft implementation works.

Leader Election

Nodes elect a leader using randomized timeouts and majority voting. Only the leader accepts write operations.

Log Replication

Metadata operations are logged and replicated to followers. Commits happen only after majority acknowledgment.
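The majority rule itself is a one-liner; committed is a hypothetical helper, but the quorum arithmetic is standard Raft:

```go
package main

import "fmt"

// An entry commits once acknowledgments (leader included) exceed half
// the cluster: strictly more than clusterSize/2.
func committed(acks, clusterSize int) bool {
	return acks > clusterSize/2
}

func main() {
	fmt.Println(committed(3, 5), committed(2, 5)) // true false
}
```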

Heartbeats

The leader sends regular heartbeats to maintain authority and prevent unnecessary elections.

System Architecture

ControlPlaneOrchestrator

Owns metadata lifecycle, namespace operations, placement decisions, write intent creation, and consensus RPC forwarding. The active implementation is backed by Raft and BoltDB.


DataPlaneOrchestrator

Owns chunk byte movement, replica prepare fanout, read failover across replicas, and inbound chunk RPC handlers. The active implementation coordinates 2PC chunk writes.


MetadataReplicator

Consensus-backed metadata application. The active implementation is a durable Raft replicator with WAL, CRC envelope protection, and corruption recovery.


ClusterService

Node discovery, membership management, and liveness tracking. The active implementation uses etcd. The interface accepts any membership backend.


AI Integration via MCP

Sandstore includes a Model Context Protocol server that exposes the storage layer to AI systems through standardized interfaces. The MCP server is currently being realigned with the active node architecture. The vision is a tool that lets AI systems explore, query, and interact with a running Sandstore cluster directly.

Planned Operations

Store, read, and delete files. Query cluster topology and node status. Inspect metadata and chunk placement. Trigger and observe leader election.

Usage Example

# Build the MCP server
make mcp

# Run it
./sandstore-mcp

Try It Yourself

Quick Start Commands

git clone https://github.com/AnishMulay/sandstore
cd sandstore
make build
make test-server # Starts server and runs client test

The codebase is designed to be read. Start with servers/node/wire_grpc_etcd.go to see the full topology assembled in one place. Then read internal/orchestrators/interfaces.go to see where new implementations plug in.

Explore the Code

Consensus in Action: A Write Request

This shows one write request moving through the active Raft topology. The interfaces that drive this behavior are swappable.

Step 1: Client Request

A client sends a store-file request to the cluster leader. The leader receives the request and prepares to replicate the metadata operation across the cluster.

Step 2: Log Replication

The leader appends the metadata entry to its Raft log and replicates it to all follower nodes in parallel.

Step 3: Follower Acknowledgment

Each follower appends the entry to its own log and sends an acknowledgment back to the leader.

Step 4: Commit and Response

Once the leader receives acknowledgments from a majority of nodes, it commits the entry and responds to the client with success.
