# MicroCluster

A distributed cluster management system with state machine-based consensus, built on clean architecture principles.

## Features

- **Raft-based Consensus**: Reliable distributed state management using the Raft consensus algorithm
- **Clean Architecture**: Pure domain model isolated from infrastructure concerns
- **JPMS Modules**: 17 modules organized in 6 architectural layers
- **Java 25**: Modern language features (sealed interfaces, records, pattern matching), with preview features enabled
- **High Performance**: Apache Fory serialization (20-170x faster than Java serialization)
- **Async Operations**: CompletableFuture-based non-blocking API
- **Pluggable Transport**: Netty-based transport with an abstraction for alternative implementations
- **Prometheus Metrics**: Built-in observability with Prometheus-compatible metrics export

## Getting Started

### Prerequisites

- Java 25 or later (required for preview features)
- Linux, macOS, or Windows with bash support

### Installation

1. Download and extract the distribution:

   ```bash
   tar -xzf microcluster-1.0-SNAPSHOT.tar.gz
   cd microcluster-1.0-SNAPSHOT
   ```

2. Verify the Java version:

   ```bash
   java -version
   # Should show Java 25 or later
   ```

### Starting the Server

#### Single Node (Development)

```bash
./bin/start-server.sh \
  --node-id=node1 \
  --host=localhost \
  --port=8080 \
  --data-dir=./data
```

#### Multi-Node Cluster (Production)

Start the first node (seed):

```bash
# Node 1 (seed)
./bin/start-server.sh \
  --node-id=node1 \
  --host=192.168.1.10 \
  --port=8080 \
  --data-dir=./data/node1
```

Start additional nodes with a seed reference:

```bash
# Node 2
./bin/start-server.sh \
  --node-id=node2 \
  --host=192.168.1.11 \
  --port=8080 \
  --seeds=192.168.1.10:8080 \
  --data-dir=./data/node2

# Node 3
./bin/start-server.sh \
  --node-id=node3 \
  --host=192.168.1.12 \
  --port=8080 \
  --seeds=192.168.1.10:8080 \
  --data-dir=./data/node3
```

### Configuration Options

| Option | Environment Variable | Default | Description |
|-----------------|-----------------------|----------------|---------------------------------------------|
| `--node-id` | `CLUSTER_NODE_ID` | auto-generated | Unique identifier for this node |
| `--host` | `CLUSTER_HOST` | `0.0.0.0` | Host address to bind to |
| `--port` | `CLUSTER_PORT` | `8080` | Port to listen on |
| `--data-dir` | `CLUSTER_DATA_DIR` | `./data` | Directory for data storage |
| `--seeds` | `CLUSTER_SEEDS` | (none) | Comma-separated seed servers (`host:port`) |
| `--min-servers` | `CLUSTER_MIN_SERVERS` | `3` | Minimum number of servers for operations |
| `--max-servers` | `CLUSTER_MAX_SERVERS` | `100` | Maximum number of servers |

### JVM Options

Customize JVM settings via the `JVM_OPTS` environment variable:

```bash
export JVM_OPTS="-Xms1g -Xmx4g -XX:+UseZGC"
./bin/start-server.sh --node-id=node1
```

Default JVM options: `-Xms512m -Xmx2g -XX:+UseG1GC`

### Verification

Check that the server started successfully:

```bash
# View logs (stdout/stderr)
tail -f logs/cluster.log

# Check the metrics endpoint (if exposed)
curl http://localhost:8080/metrics
```

## Client Implementation

### Adding the Dependency

Maven:

```xml
<dependency>
    <groupId>nu.zoom.cluster</groupId>
    <artifactId>cluster-client</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>
```

Gradle:

```groovy
implementation 'nu.zoom.cluster:cluster-client:1.0-SNAPSHOT'
```

### Basic Usage

```java
import nu.zoom.cluster.client.ClusterClient;
import nu.zoom.cluster.config.NodeAddress;
import nu.zoom.cluster.core.event.ServerJoinRequested;
import nu.zoom.cluster.core.event.StateEvent; // StateEvent package assumed; adjust to your build
import nu.zoom.cluster.core.model.ClusterState;

import java.time.Duration;
import java.util.concurrent.CompletableFuture;

// Create a client with the builder
ClusterClient client = ClusterClient.builder()
    .addServer(new NodeAddress("localhost", 7000))
    .addServer(new NodeAddress("localhost", 7100))
    .addServer(new NodeAddress("localhost", 7200))
    .connectionTimeout(Duration.ofSeconds(5))
    .requestTimeout(Duration.ofSeconds(10))
    .maxRetries(3)
    .retryDelay(Duration.ofMillis(500))
    .build();

// Query cluster state (async)
CompletableFuture<ClusterState> stateFuture = client.getClusterState();
ClusterState state = stateFuture.get();
System.out.println("Cluster state: " + state.name());

// Submit an event (routed through Raft consensus)
StateEvent event = new ServerJoinRequested("server1", System.currentTimeMillis());
CompletableFuture<Boolean> result = client.submitEvent(event);
boolean success = result.get();
System.out.println("Event submitted: " + success);

// Check dispatch capability
boolean canDispatch = client.canDispatch().get();
System.out.println("Can dispatch: " + canDispatch);

// Get metrics
CompletableFuture<String> metricsFuture = client.getMetrics();
String metrics = metricsFuture.get();
System.out.println(metrics);

// Clean up
client.close();
```

**Key Features:**

- **Automatic Leader Redirection**: The client automatically finds and follows the Raft leader
- **Retry with Exponential Backoff**: Failed requests are retried with configurable backoff (see the sketch after this list)
- **Request Timeout**: Per-request timeout protection
- **Thread-Safe**: Safe for concurrent use from multiple threads
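
The README does not spell out the retry schedule, so the following is a hedged sketch of how `maxRetries` and `retryDelay` are commonly combined into exponential backoff with jitter; the actual `ClusterClient` internals may differ.

```java
import java.time.Duration;
import java.util.concurrent.ThreadLocalRandom;

// Illustrative only: one plausible backoff schedule for maxRetries = 3
// and retryDelay = 500 ms, matching the builder settings shown above.
final class BackoffSketch {
    static void runWithRetries(int maxRetries, Duration baseDelay, Runnable request)
            throws InterruptedException {
        for (int attempt = 0; ; attempt++) {
            try {
                request.run();
                return; // success
            } catch (RuntimeException e) {
                if (attempt >= maxRetries) {
                    throw e; // retries exhausted
                }
                // Double the delay on each attempt and add jitter so that
                // concurrent clients do not retry in lockstep.
                long delayMs = baseDelay.toMillis() << attempt;
                long jitter = ThreadLocalRandom.current().nextLong(delayMs / 4 + 1);
                Thread.sleep(delayMs + jitter);
            }
        }
    }
}
```

With these settings a failing request would wait roughly 500 ms, 1 s, and 2 s (plus jitter) before the attempts run out.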

### Async Pattern with Callbacks

```java
client.getClusterState()
    .thenAccept(state -> {
        System.out.println("Current state: " + state.name());
        System.out.println("Can dispatch: " + state.canDispatch());
    })
    .exceptionally(ex -> {
        System.err.println("Failed to get state: " + ex.getMessage());
        return null;
    });
```
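
Because every client call returns a `CompletableFuture`, calls compose. As a sketch (reusing the client and event types from the examples above, with a hypothetical `server2` id), an event submission can be gated on the dispatch check:

```java
// Only submit the event if the cluster currently reports it can dispatch.
client.canDispatch()
    .thenCompose(canDispatch -> canDispatch
        ? client.submitEvent(new ServerJoinRequested("server2", System.currentTimeMillis()))
        : CompletableFuture.completedFuture(false))
    .thenAccept(accepted -> System.out.println("Event accepted: " + accepted))
    .exceptionally(ex -> {
        System.err.println("Dispatch check or submit failed: " + ex.getMessage());
        return null;
    });
```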

### Error Handling

```java
try {
    // get(timeout, unit) throws TimeoutException directly on expiry,
    // unlike orTimeout(...).get(), which wraps it in ExecutionException.
    ClusterState state = client.getClusterState()
        .get(5, TimeUnit.SECONDS);
} catch (TimeoutException e) {
    System.err.println("Request timed out");
} catch (ExecutionException e) {
    // Client-side failures complete the future exceptionally and
    // arrive here wrapped as the cause.
    if (e.getCause() instanceof ClusterClientException cce) {
        System.err.println("Cluster error: " + cce.getMessage());
    } else {
        System.err.println("Client error: " + e.getMessage());
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}
```

For more details, see docs/CLIENT_API.md.

## Architecture Overview

MicroCluster is organized into 17 modules across 6 architectural layers:

```text
┌─────────────────────────────────────────────────────────────────┐
│                    Assembly Layer (1 module)                     │
├─────────────────────────────────────────────────────────────────┤
│  cluster-assembly - Distribution packaging                       │
└─────────────────────────────────────────────────────────────────┘
                                ▲
┌─────────────────────────────────────────────────────────────────┐
│                  Application Layer (3 modules)                   │
├─────────────────────────────────────────────────────────────────┤
│  cluster-launcher  →  cluster-cli  →  cluster-server            │
│  Entry point         Command UI       Bootstrap & wiring         │
└─────────────────────────────────────────────────────────────────┘
                                ▲
┌─────────────────────────────────────────────────────────────────┐
│              Integration Layer (3 modules)                       │
├─────────────────────────────────────────────────────────────────┤
│  cluster-raft-adapter  |  cluster-config  |  cluster-metrics    │
│  Raft isolation        |  Multi-source    |  Prometheus export  │
└─────────────────────────────────────────────────────────────────┘
                    ▲               ▲               ▲
┌───────────────────┴───────┬───────┴───────┬───────┴─────────────┐
│   Core Domain (2)         │  Transport (3)│   Client (2)        │
├───────────────────────────┼───────────────┼─────────────────────┤
│ cluster-core-api          │ cluster-      │ cluster-client-api  │
│ cluster-core-impl         │  transport-api│ cluster-client      │
│ Pure state machine        │ cluster-      │ Client impl         │
│                           │  transport-   │                     │
│                           │  netty        │                     │
└───────────────────────────┴───────────────┴─────────────────────┘
                    ▲               ▲
┌───────────────────┴───────────────┴─────────────────────────────┐
│                Foundation Layer (3 modules)                      │
├─────────────────────────────────────────────────────────────────┤
│  cluster-protocol  |  cluster-serialization-api  | cluster-     │
│  Protocol types    |  Serialization SPI          | serialization│
│                    |                             | -fory        │
└─────────────────────────────────────────────────────────────────┘
```

### Layer Responsibilities

#### Foundation Layer

- Protocol definitions (`RaftMessage`, `MessageType`)
- Serialization abstraction (`MessageCodec`, `SerializationContext`); a sketch follows this list
- Apache Fory serialization implementation
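
The codec interfaces themselves are not shown in this README; purely as an assumption about their shape, a serialization SPI in this style might look like:

```java
// Hypothetical sketch only: the real MessageCodec/SerializationContext in
// cluster-serialization-api may have different names and signatures.
public interface MessageCodec {
    // Serialize a protocol message to bytes for the transport layer.
    byte[] encode(Object message);

    // Deserialize bytes received from the wire back into a typed message.
    <T> T decode(byte[] bytes, Class<T> type);
}
```

Alternative serializers can then be swapped in behind the same interface without touching the protocol or transport modules.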

#### Core Domain Layer

- Pure domain model (`ClusterState`, `StateEvent`, `TransitionResult`)
- State machine implementation (`ClusterStateMachine`); see the sketch after this list
- Zero knowledge of Raft or transport concerns
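
As a hedged illustration of that style (the real types and transitions will differ), a pure state machine built from sealed interfaces, records, and pattern matching could look like:

```java
// Hypothetical sketch: sealed events plus a side-effect-free transition
// function, with no Raft or transport dependencies.
sealed interface StateEvent permits ServerJoinRequested, ServerLeft {}
record ServerJoinRequested(String serverId, long timestamp) implements StateEvent {}
record ServerLeft(String serverId, long timestamp) implements StateEvent {}

enum ClusterState { FORMING, RUNNING, DEGRADED }
record TransitionResult(ClusterState newState, boolean accepted) {}

final class ClusterStateMachine {
    // Pure function: the same inputs always yield the same result.
    TransitionResult apply(ClusterState current, StateEvent event) {
        return switch (event) {
            case ServerJoinRequested e -> new TransitionResult(ClusterState.RUNNING, true);
            case ServerLeft e -> new TransitionResult(
                current == ClusterState.RUNNING ? ClusterState.DEGRADED : current, true);
        };
    }
}
```

Because the switch is over a sealed hierarchy, the compiler enforces that every event type is handled; adding a new event fails compilation until each transition is defined.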

#### Transport Layer

- Transport abstraction (`MessageTransport`, `TransportServer`); see the sketch after this list
- Netty-based implementation with async I/O
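
Assuming the abstraction is a thin, async, byte-oriented boundary (the actual interfaces may carry richer types), it might be shaped roughly like this:

```java
import java.net.InetSocketAddress;
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

// Hypothetical sketch of the transport SPI named above; the real
// cluster-transport-api signatures may differ.
interface MessageTransport extends AutoCloseable {
    // Send a request and receive the raw response asynchronously.
    CompletableFuture<byte[]> send(InetSocketAddress target, byte[] message);
}

interface TransportServer extends AutoCloseable {
    // Bind to an address; each inbound message is passed to the handler
    // and the returned bytes are written back as the response.
    void start(InetSocketAddress bindAddress, Function<byte[], byte[]> handler);
}
```

Keeping the boundary this small is what makes the Netty implementation replaceable, as the "Pluggable Transport" feature above promises.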

#### Client Layer

- Client API (`ClusterClient`, `ClusterClientBuilder`)
- Default implementation with failover and retry

#### Integration Layer

- `cluster-raft-adapter`: Isolates Raft concepts from the domain using the adapter pattern
- `cluster-config`: Multi-source configuration (CLI > env > files > defaults); see the sketch after this list
- `cluster-metrics`: Metrics collection with Prometheus format
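
As a hedged sketch of that precedence chain (the real `cluster-config` resolution logic may differ), each setting can be looked up source by source, first match wins:

```java
import java.util.Map;
import java.util.Optional;

// Hypothetical illustration of CLI > environment > file > default resolution.
final class ConfigResolver {
    private final Map<String, String> cliArgs;    // parsed from flags such as --port=8080
    private final Map<String, String> fileValues; // loaded from a configuration file

    ConfigResolver(Map<String, String> cliArgs, Map<String, String> fileValues) {
        this.cliArgs = cliArgs;
        this.fileValues = fileValues;
    }

    String resolve(String cliKey, String envKey, String defaultValue) {
        return Optional.ofNullable(cliArgs.get(cliKey))            // 1. CLI flag
            .or(() -> Optional.ofNullable(System.getenv(envKey)))  // 2. environment
            .or(() -> Optional.ofNullable(fileValues.get(envKey))) // 3. config file
            .orElse(defaultValue);                                 // 4. built-in default
    }
}
```

For example, `resolve("port", "CLUSTER_PORT", "8080")` returns the `--port` flag if given, else the `CLUSTER_PORT` environment variable, else the file value, else `8080`, matching the Configuration Options table above.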

#### Application Layer

- `cluster-server`: Bootstrap and component wiring
- `cluster-cli`: PicoCLI command interface
- `cluster-launcher`: Main entry point

#### Assembly Layer

- Distribution packaging (tar.gz, zip)
- Startup scripts and documentation

For detailed module documentation, see docs/MODULES.md.

For architectural design decisions, see docs/ARCHITECTURE.md.

## Third-Party Dependencies

MicroCluster builds on proven open-source technologies:

| Dependency | Version | Purpose | License |
|--------------|----------------|------------------------------------------|-------------|
| Netty | 4.1.115.Final | Async network I/O transport layer | Apache 2.0 |
| PicoCLI | 4.7.6 | Command-line interface framework | Apache 2.0 |
| MicroModRaft | 1.0 | Raft consensus implementation (planned) | TBD |
| Apache Fory | (version TBD) | High-performance binary serialization | Apache 2.0 |
| SLF4J | 2.0.x | Logging facade | MIT |
| Logback | 1.5.16 | Logging implementation | EPL / LGPL |

### Why These Dependencies?

- **Netty**: An industry standard for high-performance async networking (used by gRPC, Cassandra, and Elasticsearch)
- **PicoCLI**: Best-in-class CLI framework with annotation-based configuration
- **MicroModRaft**: Lightweight Raft implementation designed for embedded use
- **Apache Fory**: 20-170x faster serialization than Java's built-in serialization
For detailed dependency information, see docs/DEPENDENCIES.md.

## Building from Source

### Requirements

- Java 25 or later
- Maven 3.9+

### Build Commands

```bash
# Full build with tests
mvn clean install

# Build without tests
mvn clean install -DskipTests

# Build a specific module
mvn clean install -pl cluster-core-api

# Create distribution packages
mvn clean package -pl cluster-assembly -am
```

### Build Output

Distribution packages are created in `cluster-assembly/target/`:

- `microcluster-1.0-SNAPSHOT.tar.gz`
- `microcluster-1.0-SNAPSHOT.zip`

### Running Tests

```bash
# All tests
mvn test

# Specific module
mvn test -pl cluster-core-impl
```

## Documentation

- `docs/CLIENT_API.md`: Client API details
- `docs/MODULES.md`: Detailed module documentation
- `docs/ARCHITECTURE.md`: Architectural design decisions
- `docs/DEPENDENCIES.md`: Detailed dependency information

## License

Copyright © 2025 Johan Maasing (johan@zoom.nu)

Licensed under the Apache License, Version 2.0. See `LICENSE` for details.

## Status

- **Version**: 1.0-SNAPSHOT
- **Status**: Active Development
- **Java Version**: 25 (preview features enabled)

## Contributing

Contributions are welcome! Please ensure that:

1. All code compiles with Java 25
2. License headers are present on all source files (`mvn license:format`)
3. Tests pass (`mvn test`)
4. Code follows existing architectural patterns

## Repository

```bash
git clone ssh://git@vcs.zoom.nu:1122/zoom/MicroCluster.git
```