This documentation is part of the "Projects with Books" initiative at zenOSmosis.
The source code for this project is available on GitHub.
Overview
Purpose and Scope
This document provides a high-level introduction to the rust-muxio system, a toolkit for building efficient, transport-agnostic multiplexed communication systems with type-safe RPC capabilities. This page explains what Muxio is, its architectural layers, and core design principles.
For detailed information about specific subsystems:
- Workspace organization and crate listings: see Workspace Structure
- Core multiplexing concepts: see Core Library (muxio)
- RPC framework details: see RPC Framework
- Transport implementations: see Transport Implementations
Sources: README.md:1-166 DRAFT.md:9-53
What is Muxio?
Muxio is a high-performance Rust framework that provides two primary capabilities:
- Binary Stream Multiplexing: A low-level framing protocol that manages multiple concurrent data streams over a single connection, handling frame interleaving, reassembly, and ordering.
- Lightweight RPC Framework: A minimalist RPC abstraction built on the multiplexing layer, providing request correlation, method dispatch, and bidirectional communication without imposing opinions about serialization or transport.
The system is designed around a "core + extensions" architecture. The muxio core library (Cargo.toml:10) provides runtime-agnostic, transport-agnostic primitives. Extension crates build concrete implementations for specific environments (Tokio async runtime, WebAssembly/browser, etc.).
Sources: README.md:18-23 Cargo.toml:10-17
System Architecture
The following diagram illustrates the layered architecture and primary components:
Layered Architecture Overview
graph TB
subgraph "Application Layer"
APP["Application Code\nTyped RPC Calls"]
end
subgraph "Service Definition Layer"
SERVICE_DEF["Service Definition Crate\nRpcMethodPrebuffered implementations\nMETHOD_ID generation"]
end
subgraph "RPC Abstraction Layer"
CALLER["muxio-rpc-service-caller\nRpcServiceCallerInterface"]
ENDPOINT["muxio-rpc-service-endpoint\nRpcServiceEndpointInterface"]
SERVICE["muxio-rpc-service\nRpcMethodPrebuffered trait"]
end
subgraph "Core Multiplexing Layer"
DISPATCHER["RpcDispatcher\nRequest correlation\nStream management"]
FRAMING["Binary Framing Protocol\nFrame chunking and reassembly"]
end
subgraph "Transport Implementations"
TOKIO_SERVER["muxio-tokio-rpc-server\nRpcServer\nAxum + WebSocket"]
TOKIO_CLIENT["muxio-tokio-rpc-client\nRpcClient\nTokio + tungstenite"]
WASM_CLIENT["muxio-wasm-rpc-client\nRpcWasmClient\nwasm-bindgen"]
end
APP --> SERVICE_DEF
SERVICE_DEF --> CALLER
SERVICE_DEF --> ENDPOINT
SERVICE_DEF --> SERVICE
CALLER --> DISPATCHER
ENDPOINT --> DISPATCHER
SERVICE --> CALLER
SERVICE --> ENDPOINT
DISPATCHER --> FRAMING
FRAMING --> TOKIO_SERVER
FRAMING --> TOKIO_CLIENT
FRAMING --> WASM_CLIENT
| Layer | Crates | Responsibilities |
|---|---|---|
| Application | User code | Invokes typed RPC methods, receives typed responses |
| Service Definition | example-muxio-rpc-service-definition | Defines shared service contracts with compile-time METHOD_ID generation |
| RPC Abstraction | muxio-rpc-service, muxio-rpc-service-caller, muxio-rpc-service-endpoint | Provides traits for method definition, client invocation, and server dispatch |
| Core Multiplexing | muxio | Manages request correlation, stream multiplexing, and binary framing |
| Transport | muxio-tokio-rpc-server, muxio-tokio-rpc-client, muxio-wasm-rpc-client | Concrete implementations for specific runtimes and platforms |
Each layer depends only on the layers below it, enabling modular composition. The core muxio library has zero knowledge of RPC concepts, and the RPC layer has zero knowledge of specific transports.
Sources: README.md:14-40 Cargo.toml:19-31 DRAFT.md:9-26
Key Design Principles
Binary Protocol
All data transmission uses a compact binary format. The framing protocol uses minimal headers to reduce overhead. RPC payloads are serialized as raw bytes, with no assumptions about the serialization format (though extensions commonly use bitcode for efficiency).
Key characteristics:
- Frame headers contain only essential metadata
- No text-based parsing overhead
- Supports arbitrary binary payloads
- Low CPU and bandwidth requirements
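As an illustration of the payload-as-raw-bytes model (not part of the muxio API itself), the sketch below round-trips a typed request through bitcode, producing the opaque byte buffer that the framing layer would chunk into frames:

```rust
// Illustration only: muxio never sees these types; it transports plain bytes.
// Assumes the bitcode crate, which the extension crates commonly use.
fn encode_params(params: &Vec<f64>) -> Vec<u8> {
    bitcode::encode(params) // compact binary encoding, no field names or text
}

fn decode_params(bytes: &[u8]) -> Result<Vec<f64>, bitcode::Error> {
    bitcode::decode(bytes)
}

fn main() {
    let bytes = encode_params(&vec![1.0, 2.0, 3.0]);
    // The framing layer would split `bytes` into frames; here we just round-trip.
    let roundtrip = decode_params(&bytes).expect("payload should decode");
    assert_eq!(roundtrip, vec![1.0, 2.0, 3.0]);
}
```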
Transport Agnostic
The muxio core library implements all multiplexing logic through callback interfaces. This design allows integration with any transport mechanism:
- WebSocket : Used by Tokio server/client and WASM client
- TCP : Can be implemented with custom transports
- In-memory channels : Used for testing
- Any byte-oriented transport : Custom implementations possible
The core library never directly performs I/O. Instead, it accepts bytes via callbacks and emits bytes through return values or callbacks.
Runtime Agnostic
The core muxio library uses synchronous control flow with callbacks, avoiding dependencies on specific async runtimes:
- No async/await in the core library
- Compatible with Tokio, async-std, or no runtime at all
- WASM-compatible (runs in a single-threaded browser environment)
- Extension crates adapt the core to specific runtimes (e.g., muxio-tokio-rpc-server uses Tokio)
This design enables the same core logic to work across radically different execution environments.
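The snippet below makes the callback model concrete. All names (Dispatcher, read_bytes, on_outgoing) are hypothetical placeholders rather than the real muxio API; the point is that the core exchanges byte slices through plain synchronous calls and callbacks, so it can sit behind a WebSocket, a TCP socket, an in-memory channel, or a WASM bridge without caring which runtime (if any) drives it:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Hypothetical, simplified shape of a callback-driven core (illustration only;
// these names do not match the real muxio types).
struct Dispatcher {
    // Called whenever the core has encoded bytes ready for the host's transport.
    on_outgoing: Box<dyn FnMut(&[u8])>,
}

impl Dispatcher {
    fn new(on_outgoing: impl FnMut(&[u8]) + 'static) -> Self {
        Self { on_outgoing: Box::new(on_outgoing) }
    }

    // The host pushes raw bytes received from its transport; the core parses
    // them synchronously and may emit outgoing bytes through the callback.
    fn read_bytes(&mut self, incoming: &[u8]) {
        // Real frame parsing omitted; echo the bytes back for illustration.
        (self.on_outgoing)(incoming);
    }
}

fn main() {
    // The "transport" is just a buffer here; it could be a WebSocket, a TCP
    // stream, or an in-memory channel in a test. No async runtime is involved.
    let wire = Rc::new(RefCell::new(Vec::<u8>::new()));
    let sink = Rc::clone(&wire);

    let mut dispatcher = Dispatcher::new(move |bytes| sink.borrow_mut().extend_from_slice(bytes));
    dispatcher.read_bytes(&[0x01, 0x02, 0x03]);

    assert_eq!(wire.borrow().as_slice(), &[0x01, 0x02, 0x03]);
}
```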
Cross-Platform Deployment
The architecture supports "write once, deploy everywhere" through shared service definitions:
graph LR
subgraph "Shared Definition"
SERVICE["example-muxio-rpc-service-definition\nAdd, Mult, Echo methods\nRpcMethodPrebuffered implementations"]
end
subgraph "Native Server"
SERVER["RpcServer\nTokio runtime\nLinux/macOS/Windows"]
end
subgraph "Native Client"
NATIVE["RpcClient\nTokio runtime\nCommand-line tools"]
end
subgraph "Web Client"
WASM["RpcWasmClient\nWebAssembly\nBrowser JavaScript"]
end
SERVICE -.shared contract.-> SERVER
SERVICE -.shared contract.-> NATIVE
SERVICE -.shared contract.-> WASM
NATIVE <-.WebSocket.-> SERVER
WASM <-.WebSocket.-> SERVER
All implementations depend on the same service definition crate, ensuring API compatibility at compile time. A single server can handle requests from both native and WASM clients simultaneously.
Sources: README.md:41-52 DRAFT.md:48-52 README.md:63-160
Repository Structure
The repository uses a Cargo workspace with the following organization:
Core Library
muxio (Cargo.toml:10): The foundational crate providing stream multiplexing and binary framing. This crate has minimal dependencies and makes no assumptions about RPC, serialization, or transport.
RPC Extensions
Located in extensions/ (Cargo.toml:20-28):
| Crate | Purpose |
|---|---|
| muxio-rpc-service | Defines the RpcMethodPrebuffered trait for service contracts |
| muxio-rpc-service-caller | Provides RpcServiceCallerInterface for client-side RPC invocation |
| muxio-rpc-service-endpoint | Provides RpcServiceEndpointInterface for server-side RPC dispatch |
| muxio-tokio-rpc-server | Tokio-based server with Axum and WebSocket support |
| muxio-tokio-rpc-client | Tokio-based client with connection management |
| muxio-wasm-rpc-client | WebAssembly client for browser environments |
| muxio-ext-test | Testing utilities for integration tests |
Examples
Located in examples/ (Cargo.toml:29-30):
- example-muxio-rpc-service-definition: Demonstrates shared service definitions with Add, Mult, and Echo methods
- example-muxio-ws-rpc-app: Complete WebSocket RPC application showing server and client usage
Dependency Flow
Extensions depend on the core library and build progressively more opinionated abstractions. Applications depend on extensions and service definitions, never directly on the core library.
Sources: Cargo.toml:19-31 Cargo.toml:39-47 README.md:61-62
Type Safety Through Shared Definitions
The system achieves compile-time type safety by requiring both clients and servers to depend on the same service definition crate. The RpcMethodPrebuffered trait defines the contract:
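The real trait lives in muxio-rpc-service; the sketch below uses a locally defined stand-in built only from the associated items named on this page (METHOD_ID, encode_request/decode_request, encode_response/decode_response), so the exact signatures may differ from the published API. It assumes bitcode for serialization and xxhash-rust (feature const_xxh3) for the compile-time method ID:

```rust
// Schematic sketch only: the real trait is defined in muxio-rpc-service and
// its signatures may differ. Serialization uses bitcode; the method ID uses
// xxhash-rust's const XXH3 hash (feature "const_xxh3").
use xxhash_rust::const_xxh3::xxh3_64;

// Stand-in for the shared trait named on this page.
trait RpcMethodPrebuffered {
    const METHOD_ID: u64;
    type Request;
    type Response;

    fn encode_request(req: &Self::Request) -> Vec<u8>;
    fn decode_request(bytes: &[u8]) -> Result<Self::Request, bitcode::Error>;
    fn encode_response(resp: &Self::Response) -> Vec<u8>;
    fn decode_response(bytes: &[u8]) -> Result<Self::Response, bitcode::Error>;
}

// Shared definition that both client and server compile against.
struct Add;

impl RpcMethodPrebuffered for Add {
    // Hashing the method name at compile time gives a stable ID.
    const METHOD_ID: u64 = xxh3_64(b"Add");
    type Request = Vec<f64>;
    type Response = f64;

    fn encode_request(req: &Self::Request) -> Vec<u8> { bitcode::encode(req) }
    fn decode_request(bytes: &[u8]) -> Result<Self::Request, bitcode::Error> { bitcode::decode(bytes) }
    fn encode_response(resp: &Self::Response) -> Vec<u8> { bitcode::encode(resp) }
    fn decode_response(bytes: &[u8]) -> Result<Self::Response, bitcode::Error> { bitcode::decode(bytes) }
}
```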
Compile-Time Guarantees:
graph TB
subgraph "Service Definition"
TRAIT["RpcMethodPrebuffered trait"]
ADD["Add struct\nMETHOD_ID = xxhash('Add')\nencode_request(Vec<f64>)\ndecode_response() -> f64"]
MULT["Mult struct\nMETHOD_ID = xxhash('Mult')\nencode_request(Vec<f64>)\ndecode_response() -> f64"]
end
subgraph "Client Usage"
CLIENT_CALL["Add::call(&client, vec![1.0, 2.0, 3.0])\nResult<f64, RpcServiceError>"]
end
subgraph "Server Handler"
SERVER_HANDLER["endpoint.register_prebuffered\n(Add::METHOD_ID, handler_fn)"]
HANDLER_FN["handler_fn(request_bytes) ->\ndecode -> compute -> encode"]
end
TRAIT --> ADD
TRAIT --> MULT
ADD -.compile-time guarantee.-> CLIENT_CALL
ADD -.compile-time guarantee.-> SERVER_HANDLER
SERVER_HANDLER --> HANDLER_FN
- Method ID Consistency: Each method's METHOD_ID is generated at compile time by hashing the method name with xxhash-rust. The same name always produces the same ID.
- Type Consistency: Both encode_request/decode_request and encode_response/decode_response use shared type definitions. Changing a parameter type breaks compilation for both client and server.
- Collision Detection: Duplicate method names produce duplicate METHOD_ID values, causing runtime panics during handler registration (which surface during integration tests).
This design eliminates a common class of distributed system bugs where client and server APIs drift out of sync.
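As a sketch of why duplicate METHOD_ID values surface at registration time, the simplified, synchronous endpoint below mirrors the register_prebuffered / handler flow from the diagram; the real RpcServiceEndpointInterface is async and its signatures differ, and the constant here merely imitates a compile-time method ID:

```rust
use std::collections::HashMap;

// Simplified stand-in for the server-side endpoint; illustration only.
type Handler = Box<dyn Fn(&[u8]) -> Vec<u8>>;

#[derive(Default)]
struct Endpoint {
    handlers: HashMap<u64, Handler>,
}

impl Endpoint {
    // A duplicate METHOD_ID (two methods sharing a name) is caught here,
    // which is why collisions surface as registration-time panics.
    fn register_prebuffered(&mut self, method_id: u64, handler: Handler) {
        assert!(
            self.handlers.insert(method_id, handler).is_none(),
            "duplicate METHOD_ID registered"
        );
    }

    fn dispatch(&self, method_id: u64, request_bytes: &[u8]) -> Vec<u8> {
        (self.handlers[&method_id])(request_bytes)
    }
}

fn main() {
    // Stand-in for Add::METHOD_ID (compile-time hash of the method name).
    const ADD_METHOD_ID: u64 = xxhash_rust::const_xxh3::xxh3_64(b"Add");

    let mut endpoint = Endpoint::default();
    endpoint.register_prebuffered(
        ADD_METHOD_ID,
        Box::new(|bytes| {
            // decode -> compute -> encode, mirroring the handler in the diagram
            let params: Vec<f64> = bitcode::decode(bytes).expect("decode request");
            bitcode::encode(&params.iter().sum::<f64>())
        }),
    );

    let response = endpoint.dispatch(ADD_METHOD_ID, &bitcode::encode(&vec![1.0, 2.0, 3.0]));
    assert_eq!(bitcode::decode::<f64>(&response).unwrap(), 6.0);
}
```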
Sources: README.md:49 README.md:69-118 Cargo.toml:52 Cargo.toml:64
Communication Flow
The following diagram traces a complete RPC call from application code through all system layers:
sequenceDiagram
participant App as "Application"
participant Method as "Add::call"
participant Client as "RpcClient\n(or RpcWasmClient)"
participant Dispatcher as "RpcDispatcher"
participant Transport as "WebSocket"
participant Server as "RpcServer"
participant Handler as "Add handler"
App->>Method: call(&client, vec![1.0, 2.0, 3.0])
Method->>Method: encode_request() -> bytes
Method->>Client: invoke(METHOD_ID, bytes)
Client->>Dispatcher: send_request(METHOD_ID, bytes)
Dispatcher->>Dispatcher: assign request_id
Dispatcher->>Dispatcher: serialize to frames
Dispatcher->>Transport: write binary frames
Transport->>Server: receive frames
Server->>Dispatcher: process_incoming_bytes
Dispatcher->>Dispatcher: reassemble frames
Dispatcher->>Dispatcher: route by METHOD_ID
Dispatcher->>Handler: invoke(request_bytes)
Handler->>Handler: decode -> compute -> encode
Handler->>Dispatcher: return response_bytes
Dispatcher->>Dispatcher: serialize response
Dispatcher->>Transport: write binary frames
Transport->>Client: receive frames
Client->>Dispatcher: process_incoming_bytes
Dispatcher->>Dispatcher: match request_id
Dispatcher->>Client: resolve with bytes
Client->>Method: return bytes
Method->>Method: decode_response() -> f64
Method->>App: return Result<f64>
Key Observations:
- Application code works with typed values (Vec<f64> in, f64 out)
- Service definitions handle encoding/decoding
- RpcDispatcher manages request correlation and multiplexing
- Multiple requests can be in-flight simultaneously over a single connection (see the sketch below)
- The binary framing protocol handles interleaved frames from concurrent requests
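For reference, the following sketch shows what concurrent in-flight calls look like from application code. It assumes the Tokio client and the example service definition crate; the constructor, import paths, and call signatures are illustrative and may not match the current APIs:

```rust
// Illustrative only: import paths and constructor signature are assumptions.
use example_muxio_rpc_service_definition::{Add, Mult};
use muxio_tokio_rpc_client::RpcClient;

#[tokio::main]
async fn main() {
    // One WebSocket connection; the dispatcher multiplexes every call over it.
    let client = RpcClient::new("127.0.0.1", 8080).await.expect("connect");

    // Both requests are in flight at once; their frames are interleaved on the
    // wire and the dispatcher correlates each response by request ID.
    let (sum, product) = tokio::join!(
        Add::call(&client, vec![1.0, 2.0, 3.0]),
        Mult::call(&client, vec![2.0, 4.0]),
    );

    assert_eq!(sum.unwrap(), 6.0);
    assert_eq!(product.unwrap(), 8.0);
}
```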
Sources: README.md:69-160
Development Status
The project is currently in alpha status (Cargo.toml:3) and under active development (README.md:14). The core architecture is stable, but APIs may change before the 1.0 release.
Current Version: 0.10.0-alpha
Sources: README.md:14 Cargo.toml:3