This documentation is part of the "Projects with Books" initiative at zenOSmosis.
The source code for this project is available on GitHub.
Core Concepts
Purpose and Scope
This document explains the fundamental design principles and architectural patterns that define the rust-muxio system. It covers the layered separation of concerns, the binary protocol foundation, the non-async callback-driven model, and the mechanisms that enable cross-platform deployment and type safety.
For detailed information about specific layers, see Layered Architecture and Design Philosophy. For implementation details of the multiplexing core, see Core Library (muxio). For RPC-specific concepts, see RPC Framework.
Architectural Layers
The rust-muxio system is organized into three distinct layers, each with clear responsibilities and minimal coupling to the layers above or below it.
Sources: README.md:16-22 Cargo.toml:19-31
graph TB
subgraph "Application Code"
APP["Application Logic\nType-safe method calls"]
end
subgraph "Transport Layer"
TOKIO_SRV["muxio-tokio-rpc-server\nAxum + WebSocket"]
TOKIO_CLI["muxio-tokio-rpc-client\ntokio-tungstenite"]
WASM_CLI["muxio-wasm-rpc-client\nwasm-bindgen bridge"]
end
subgraph "RPC Abstraction Layer"
RPC_SVC["muxio-rpc-service\nRpcMethodPrebuffered trait"]
RPC_CALLER["muxio-rpc-service-caller\nRpcServiceCallerInterface"]
RPC_EP["muxio-rpc-service-endpoint\nRpcServiceEndpointInterface"]
end
subgraph "Core Multiplexing Layer"
DISPATCHER["RpcDispatcher\nRequest correlation"]
FRAMING["Binary Framing Protocol\nFrame reassembly"]
REQ_RESP["RpcRequest / RpcResponse\nRpcHeader types"]
end
APP --> TOKIO_CLI
APP --> WASM_CLI
APP --> TOKIO_SRV
TOKIO_CLI --> RPC_CALLER
WASM_CLI --> RPC_CALLER
TOKIO_SRV --> RPC_EP
RPC_CALLER --> RPC_SVC
RPC_EP --> RPC_SVC
RPC_SVC --> DISPATCHER
RPC_CALLER --> DISPATCHER
RPC_EP --> DISPATCHER
DISPATCHER --> FRAMING
DISPATCHER --> REQ_RESP
FRAMING --> REQ_RESP
Layer Responsibilities
| Layer | Components | Responsibilities | Dependencies |
|---|---|---|---|
| Core Multiplexing | muxio crate | Binary framing, frame reassembly, stream management | Zero external dependencies for core logic |
| RPC Abstraction | muxio-rpc-service, muxio-rpc-service-caller, muxio-rpc-service-endpoint | Method ID generation, request/response encoding, type-safe traits | Depends on muxio core |
| Transport | muxio-tokio-rpc-server, muxio-tokio-rpc-client, muxio-wasm-rpc-client | WebSocket connections, async runtime integration, platform bridging | Depends on RPC abstraction |
Sources: README.md:36-40 Cargo.toml:40-47
Binary Protocol Foundation
The system uses a compact binary protocol at all levels to minimize overhead and maximize performance. There are no text-based formats or human-readable intermediates in the critical path.
Protocol Stack
graph LR
subgraph "Application Data"
TYPED["Rust Types\nVec<f64>, String, etc"]
end
subgraph "Serialization Layer"
BITCODE["bitcode::encode\nbitcode::decode"]
end
subgraph "RPC Protocol Layer"
METHOD_ID["METHOD_ID: u64\nxxhash-rust const hash"]
RPC_REQ["RpcRequest\nmethod_id + params + payload"]
RPC_RESP["RpcResponse\nresult bytes or error"]
RPC_HEADER["RpcHeader\ndiscriminator byte"]
end
subgraph "Framing Layer"
FRAME["Binary Frames\nMinimal headers"]
CHUNK["Chunking\nLarge payload splitting"]
end
subgraph "Transport Layer"
WS["WebSocket Binary Frames\nNetwork transmission"]
end
TYPED --> BITCODE
BITCODE --> RPC_REQ
BITCODE --> RPC_RESP
METHOD_ID --> RPC_REQ
RPC_REQ --> RPC_HEADER
RPC_RESP --> RPC_HEADER
RPC_HEADER --> FRAME
FRAME --> CHUNK
CHUNK --> WS
Sources: README.md:32-33 README.md:45-46 Cargo.toml:52 Cargo.toml:64
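The METHOD_ID values named in the diagram can be illustrated directly. The sketch below is illustrative rather than muxio's own code; it assumes the xxhash-rust crate with its const_xxh3 feature enabled, and the constant names are hypothetical:

```rust
use xxhash_rust::const_xxh3::xxh3_64;

// Hashing happens in a const context, so each method ID is computed at
// compile time and costs nothing at runtime.
const ADD_METHOD_ID: u64 = xxh3_64(b"Add");
const MULT_METHOD_ID: u64 = xxh3_64(b"Mult");

fn main() {
    println!("Add  -> {ADD_METHOD_ID:#018x}");
    println!("Mult -> {MULT_METHOD_ID:#018x}");
}
```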
Binary Data Flow
All data in the system flows as raw bytes (Vec<u8> or &[u8]). This design choice has several implications:
- Serialization Agnostic: The core layer never assumes a serialization format. Applications can use bitcode, bincode, postcard, or any other binary serializer.
- FFI-Friendly: Byte slices are a universal interface that can cross language boundaries without special marshalling.
- Zero-Copy Opportunities: Raw bytes enable zero-copy optimizations in performance-critical paths.
- Minimal Overhead: Binary headers consume single-digit bytes rather than hundreds of bytes for JSON or XML.
Sources: README.md:51-52 DRAFT.md:11
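Because the core only ever sees opaque byte buffers, the choice of serializer lives entirely at the application edge. A minimal sketch of that boundary, assuming the bitcode crate (used by the reference transports) and a hypothetical Params type:

```rust
use bitcode::{Decode, Encode};

// Hypothetical application-level type; any Encode/Decode type works.
#[derive(Encode, Decode, PartialEq, Debug)]
struct Params {
    values: Vec<f64>,
}

fn main() {
    let params = Params { values: vec![1.0, 2.0] };

    // Typed data becomes raw bytes before it enters the core...
    let bytes: Vec<u8> = bitcode::encode(&params);

    // ...and is decoded back only at the far edge of the system.
    let roundtrip: Params = bitcode::decode(&bytes).expect("decode failed");
    assert_eq!(params, roundtrip);
}
```

Swapping bitcode for bincode or postcard changes only this edge; the framing and dispatch layers are untouched.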
Non-Async, Callback-Driven Model
The core muxio library uses a non-async design with synchronous control flow and callbacks. This is a deliberate architectural choice that enables broad compatibility.
Callback Architecture
graph TB
subgraph "External Runtime"
TOKIO["Tokio async runtime"]
WASM_EVENT["WASM event loop"]
STD_THREAD["std::thread"]
end
subgraph "muxio Core"
DISPATCHER["RpcDispatcher"]
WRITE_CB["write_bytes_callback\nBox<dyn Fn(&[u8])>"]
READ_CB["handle_read_bytes\n(&[u8]) -> Result"]
end
subgraph "Application Handlers"
RPC_HANDLER["RPC method handlers\nasync closures"]
end
TOKIO -->|bytes in| READ_CB
WASM_EVENT -->|bytes in| READ_CB
STD_THREAD -->|bytes in| READ_CB
READ_CB --> DISPATCHER
DISPATCHER --> RPC_HANDLER
DISPATCHER --> WRITE_CB
WRITE_CB -->|bytes out| TOKIO
WRITE_CB -->|bytes out| WASM_EVENT
WRITE_CB -->|bytes out| STD_THREAD
Sources: DRAFT.md:50-52 README.md:34-35
Key Characteristics
| Aspect | Implementation | Benefit |
|---|---|---|
| Control Flow | Synchronous function calls | Deterministic execution order |
| I/O Model | Callback-driven | No async runtime dependency |
| Event Handling | Explicit invocations | Predictable performance characteristics |
| State Management | Direct mutation | No .await points, no hidden yields |
The RpcDispatcher receives bytes via handle_read_bytes() and emits bytes via a provided callback. It never blocks, never spawns tasks, and never assumes an async runtime exists. This enables:
- Tokio Integration: Wrap calls in tokio::spawn as needed
- WASM Integration: Bridge to JavaScript's Promise-based model
- Embedded Systems: Run in single-threaded, no-std environments
- Testing: Use in-memory channels without complex async mocking
Sources: DRAFT.md:50-52 README.md:34-35
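The shape of this contract can be sketched in a few lines. Everything below is a simplified stand-in (the real RpcDispatcher has a richer API); it exists only to show the inversion of control: the host pushes bytes in, the core pushes bytes out through a callback, and nothing awaits:

```rust
// Hypothetical, minimal dispatcher illustrating the callback-driven model.
struct Dispatcher {
    // Outgoing bytes go to whatever transport the host wired in.
    write_bytes_callback: Box<dyn Fn(&[u8])>,
}

impl Dispatcher {
    fn new(write_bytes_callback: Box<dyn Fn(&[u8])>) -> Self {
        Self { write_bytes_callback }
    }

    // Synchronous: a Tokio task, WASM event handler, or plain thread calls
    // this when bytes arrive. No .await, no task spawning, no blocking.
    fn handle_read_bytes(&mut self, bytes: &[u8]) {
        // Frame decoding would happen here; the sketch just echoes back out.
        (self.write_bytes_callback)(bytes);
    }
}

fn main() {
    let mut dispatcher =
        Dispatcher::new(Box::new(|out| println!("would write {} bytes", out.len())));
    dispatcher.handle_read_bytes(&[0x01, 0x02, 0x03]);
}
```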
Transport and Runtime Agnosticism
The core library's design enables the same multiplexing logic to run in radically different environments without modification.
Platform Abstraction
graph TB
subgraph "Shared Core"
CORE_DISPATCH["RpcDispatcher\nsrc/rpc_dispatcher.rs"]
CORE_FRAME["Framing Protocol\nsrc/rpc_request_response.rs"]
end
subgraph "Native Server - Tokio"
AXUM["axum::Router"]
WS_UPGRADE["WebSocketUpgrade"]
TOKIO_TASK["tokio::spawn"]
TUNGSTENITE["tokio_tungstenite"]
end
subgraph "Native Client - Tokio"
WS_STREAM["WebSocketStream"]
TOKIO_CHANNEL["mpsc::channel"]
TOKIO_SELECT["tokio::select!"]
end
subgraph "WASM Client"
WASM_BINDGEN["#[wasm_bindgen]"]
JS_WEBSOCKET["JavaScript WebSocket"]
JS_PROMISE["JavaScript Promise"]
end
CORE_DISPATCH --> AXUM
CORE_DISPATCH --> WS_STREAM
CORE_DISPATCH --> WASM_BINDGEN
AXUM --> WS_UPGRADE
WS_UPGRADE --> TUNGSTENITE
TUNGSTENITE --> TOKIO_TASK
WS_STREAM --> TOKIO_CHANNEL
TOKIO_CHANNEL --> TOKIO_SELECT
WASM_BINDGEN --> JS_WEBSOCKET
JS_WEBSOCKET --> JS_PROMISE
CORE_FRAME --> AXUM
CORE_FRAME --> WS_STREAM
CORE_FRAME --> WASM_BINDGEN
Sources: README.md:38-40 Cargo.toml:23-28
Abstraction Boundaries
The RpcDispatcher provides a minimal interface that any transport can implement:
- Byte Input: handle_read_bytes(&mut self, bytes: &[u8]) - Process incoming bytes
- Byte Output: write_bytes_callback: Box<dyn Fn(&[u8])> - Emit outgoing bytes
- No I/O: The dispatcher never performs I/O directly
Transport implementations wrap this interface with platform-specific I/O:
- Tokio Server: axum::extract::ws::WebSocket handles async I/O
- Tokio Client: tokio_tungstenite::WebSocketStream with message splitting
- WASM Client: wasm_bindgen bridges to WebSocket.send(ArrayBuffer)
Sources: README.md:34-35 README.md:47-48
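A dependency-free sketch of this wrapping pattern, with an mpsc channel standing in for a WebSocket and a closure standing in for the dispatcher (both hypothetical):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // Stand-in for the core: a synchronous "consume bytes" entry point.
    let handle_read_bytes = |bytes: &[u8]| {
        println!("core consumed {} bytes", bytes.len());
    };

    // Stand-in for a WebSocket reader: any source of binary messages works.
    let (tx, rx) = mpsc::channel::<Vec<u8>>();
    thread::spawn(move || {
        tx.send(vec![0xAA, 0xBB]).ok();
    });

    // The transport loop blocks on I/O outside the core, then hands each
    // complete binary message to the dispatcher synchronously.
    while let Ok(frame) = rx.recv() {
        handle_read_bytes(&frame);
    }
}
```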
Type Safety Through Shared Definitions
Type safety across distributed system boundaries is enforced at compile time through shared service definitions.
Shared Service Definition Pattern
graph TB
subgraph "Service Definition Crate"
TRAIT["RpcMethodPrebuffered"]
ADD_STRUCT["Add\nUnit struct"]
ADD_IMPL["impl RpcMethodPrebuffered for Add"]
ADD_METHOD_ID["Add::METHOD_ID\nconst u64 = xxh3_64("Add")"]
ADD_ENCODE_REQ["Add::encode_request"]
ADD_DECODE_REQ["Add::decode_request"]
ADD_ENCODE_RESP["Add::encode_response"]
ADD_DECODE_RESP["Add::decode_response"]
ADD_CALL["Add::call"]
end
subgraph "Client Code"
CLIENT_CALL["Add::call(&client, vec![1.0, 2.0])"]
CLIENT_ENCODE["Uses Add::encode_request"]
CLIENT_DECODE["Uses Add::decode_response"]
end
subgraph "Server Code"
SERVER_REGISTER["endpoint.register_prebuffered"]
SERVER_DECODE["Uses Add::decode_request"]
SERVER_ENCODE["Uses Add::encode_response"]
end
ADD_STRUCT --> ADD_IMPL
ADD_IMPL --> ADD_METHOD_ID
ADD_IMPL --> ADD_ENCODE_REQ
ADD_IMPL --> ADD_DECODE_REQ
ADD_IMPL --> ADD_ENCODE_RESP
ADD_IMPL --> ADD_DECODE_RESP
ADD_IMPL --> ADD_CALL
ADD_CALL --> CLIENT_CALL
ADD_ENCODE_REQ --> CLIENT_ENCODE
ADD_DECODE_RESP --> CLIENT_DECODE
ADD_METHOD_ID --> SERVER_REGISTER
ADD_DECODE_REQ --> SERVER_DECODE
ADD_ENCODE_RESP --> SERVER_ENCODE
CLIENT_CALL --> CLIENT_ENCODE
CLIENT_CALL --> CLIENT_DECODE
SERVER_REGISTER --> SERVER_DECODE
SERVER_REGISTER --> SERVER_ENCODE
Sources: README.md:49-50 README.md:70-73
Compile-Time Guarantees
| Guarantee | Mechanism | Failure Mode |
|---|---|---|
| Method ID Uniqueness | Const evaluation with xxhash_rust::const_xxh3::xxh3_64() | Duplicate method names detected at compile time |
| Parameter Type Match | Shared encode_request / decode_request | Type mismatch = compilation error |
| Response Type Match | Shared encode_response / decode_response | Type mismatch = compilation error |
| API Version Compatibility | Semantic versioning of service definition crate | Incompatible versions fail at build time |
The RpcMethodPrebuffered trait requires implementers to define:
- METHOD_ID: u64 - Generated from method name hash
- encode_request(params: Self::RequestParams) -> Result<Vec<u8>>
- decode_request(bytes: &[u8]) -> Result<Self::RequestParams>
- encode_response(result: Self::ResponseResult) -> Result<Vec<u8>>
- decode_response(bytes: &[u8]) -> Result<Self::ResponseResult>
Both client and server code depend on the same implementation, making API drift impossible.
Sources: README.md:49-50 Cargo.toml:42
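To make the pattern concrete, here is a self-contained sketch of a shared definition. The trait mirrors the method list above, but the associated types, error type, and serialization choice are assumptions, not the exact muxio-rpc-service surface:

```rust
use std::io;

// Hedged approximation of the RpcMethodPrebuffered trait described above.
trait RpcMethodPrebuffered {
    const METHOD_ID: u64;
    type RequestParams;
    type ResponseResult;

    fn encode_request(params: Self::RequestParams) -> io::Result<Vec<u8>>;
    fn decode_request(bytes: &[u8]) -> io::Result<Self::RequestParams>;
    fn encode_response(result: Self::ResponseResult) -> io::Result<Vec<u8>>;
    fn decode_response(bytes: &[u8]) -> io::Result<Self::ResponseResult>;
}

// The shared definition both client and server crates compile against.
struct Add;

impl RpcMethodPrebuffered for Add {
    // Compile-time hash of the method name, as the table above describes.
    const METHOD_ID: u64 = xxhash_rust::const_xxh3::xxh3_64(b"Add");
    type RequestParams = Vec<f64>;
    type ResponseResult = f64;

    fn encode_request(params: Vec<f64>) -> io::Result<Vec<u8>> {
        Ok(bitcode::encode(&params))
    }
    fn decode_request(bytes: &[u8]) -> io::Result<Vec<f64>> {
        bitcode::decode(bytes)
            .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e.to_string()))
    }
    fn encode_response(result: f64) -> io::Result<Vec<u8>> {
        Ok(bitcode::encode(&result))
    }
    fn decode_response(bytes: &[u8]) -> io::Result<f64> {
        bitcode::decode(bytes)
            .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e.to_string()))
    }
}

fn main() -> io::Result<()> {
    // The client encodes with the same code the server decodes with, so a
    // parameter type mismatch cannot compile in the first place.
    let wire = Add::encode_request(vec![1.0, 2.0])?;
    assert_eq!(Add::decode_request(&wire)?, vec![1.0, 2.0]);
    Ok(())
}
```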
Request Correlation and Multiplexing
The system supports concurrent requests over a single connection through request ID correlation and frame interleaving.
Request Lifecycle
sequenceDiagram
participant App
participant Caller as "RpcServiceCallerInterface"
participant Dispatcher as "RpcDispatcher"
participant Transport as "WebSocket"
participant Server as "Server Dispatcher"
participant Handler
Note over Dispatcher: Assign request_id = 1
App->>Caller: Add::call(vec![1.0, 2.0])
Caller->>Dispatcher: encode_request(method_id, params)
Dispatcher->>Transport: Binary frames [request_id=1]
Note over Dispatcher: Assign request_id = 2
App->>Caller: Mult::call(vec![3.0, 4.0])
Caller->>Dispatcher: encode_request(method_id, params)
Dispatcher->>Transport: Binary frames [request_id=2]
Transport->>Server: Interleaved frames arrive
Server->>Handler: Route by method_id (request_id=1)
Server->>Handler: Route by method_id (request_id=2)
Handler->>Server: Response [request_id=2]
Server->>Transport: Binary frames [request_id=2]
Transport->>Dispatcher: Binary frames [request_id=2]
Dispatcher->>Caller: Match request_id=2
Caller->>App: Return 12.0
Handler->>Server: Response [request_id=1]
Server->>Transport: Binary frames [request_id=1]
Transport->>Dispatcher: Binary frames [request_id=1]
Dispatcher->>Caller: Match request_id=1
Caller->>App: Return 3.0
Sources: README.md:28-29
Concurrent Request Management
The RpcDispatcher maintains internal state for all in-flight requests:
- Pending Requests Map: HashMap<request_id, ResponseHandler> tracks active requests
- Request ID Generation: Monotonically increasing counter ensures uniqueness
- Frame Reassembly: Collects interleaved frames until the complete message is received
- Response Routing: Matches incoming responses to pending requests by ID
This design enables:
- Pipelining: Multiple requests sent without waiting for responses
- Out-of-Order Completion: Responses processed as they arrive, not in request order
- Cancellation: Remove request from pending map to ignore future responses
- Connection Reuse: Single WebSocket connection handles unlimited concurrent requests
Sources: README.md:28-29
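The bookkeeping behind this can be sketched with plain standard-library types. Correlator and its method names are hypothetical, standing in for state the RpcDispatcher manages internally:

```rust
use std::collections::HashMap;

// A handler to invoke once the matching response arrives.
type ResponseHandler = Box<dyn FnOnce(Vec<u8>)>;

struct Correlator {
    next_request_id: u64,
    pending: HashMap<u64, ResponseHandler>,
}

impl Correlator {
    fn new() -> Self {
        Self { next_request_id: 0, pending: HashMap::new() }
    }

    // On send: reserve a unique ID and park the response handler.
    fn register(&mut self, on_response: ResponseHandler) -> u64 {
        self.next_request_id += 1;
        self.pending.insert(self.next_request_id, on_response);
        self.next_request_id
    }

    // On receive: route the payload by ID, in whatever order it arrives.
    // Removing an entry early is how cancellation works: the late response
    // finds no handler and is dropped.
    fn complete(&mut self, request_id: u64, payload: Vec<u8>) {
        if let Some(handler) = self.pending.remove(&request_id) {
            handler(payload);
        }
    }
}

fn main() {
    let mut correlator = Correlator::new();
    let id_a = correlator.register(Box::new(|p| println!("A got {p:?}")));
    let id_b = correlator.register(Box::new(|p| println!("B got {p:?}")));

    // Out-of-order completion: B's response arrives first.
    correlator.complete(id_b, vec![2]);
    correlator.complete(id_a, vec![1]);
}
```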
Cross-Platform Code Reuse
Application logic written against the RpcServiceCallerInterface trait runs identically on all platforms without modification.
Platform-Independent Service Layer
graph TB
subgraph "Shared Application Code"
APP_LOGIC["business_logic.rs\nUses RpcServiceCallerInterface"]
APP_CALL["Add::call(&caller, params)"]
end
subgraph "Platform-Specific Entry Points"
NATIVE_MAIN["main.rs (Native)\nCreates RpcClient"]
WASM_MAIN["lib.rs (WASM)\nCreates RpcWasmClient"]
end
subgraph "Client Implementations"
RPC_CLIENT["RpcClient\nTokio WebSocket"]
WASM_CLIENT["RpcWasmClient\nJS WebSocket bridge"]
TRAIT_IMPL["Both impl RpcServiceCallerInterface"]
end
APP_LOGIC --> APP_CALL
NATIVE_MAIN --> RPC_CLIENT
WASM_MAIN --> WASM_CLIENT
RPC_CLIENT --> TRAIT_IMPL
WASM_CLIENT --> TRAIT_IMPL
TRAIT_IMPL --> APP_CALL
APP_CALL --> RPC_CLIENT
APP_CALL --> WASM_CLIENT
Sources: README.md:47-48 Cargo.toml:27-28
Write Once, Deploy Everywhere
The RpcServiceCallerInterface trait provides:
- call_prebuffered(method_id, request_bytes) -> Result<Vec<u8>>
- get_dispatcher() -> Arc<Mutex<RpcDispatcher>>
- State change callbacks and connection management
Any type implementing this trait can execute application code. The service definitions (implementing RpcMethodPrebuffered) provide convenience methods that automatically delegate to the caller:
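A sketch of that delegation, with simplified, hypothetical stand-ins throughout (the caller trait is reduced to one method, the serialization is hand-rolled, and futures::executor substitutes for whatever runtime the transport provides):

```rust
use std::io;

// Reduced stand-in for the caller trait named above.
#[allow(async_fn_in_trait)]
trait RpcServiceCallerInterface {
    async fn call_prebuffered(&self, method_id: u64, request: Vec<u8>) -> io::Result<Vec<u8>>;
}

struct Add;

impl Add {
    const METHOD_ID: u64 = 0x1234; // stands in for the const xxh3 name hash

    // The convenience method: encode, delegate to any caller, decode.
    async fn call(caller: &impl RpcServiceCallerInterface, params: Vec<f64>) -> io::Result<f64> {
        let request: Vec<u8> = params.iter().flat_map(|v| v.to_le_bytes()).collect();
        let response = caller.call_prebuffered(Self::METHOD_ID, request).await?;
        let bytes: [u8; 8] = response.as_slice().try_into()
            .map_err(|_| io::Error::new(io::ErrorKind::InvalidData, "bad response length"))?;
        Ok(f64::from_le_bytes(bytes))
    }
}

// Toy in-process "transport" that sums the request, showing that the
// application-facing call site is identical for every caller implementation.
struct LoopbackCaller;

impl RpcServiceCallerInterface for LoopbackCaller {
    async fn call_prebuffered(&self, _method_id: u64, request: Vec<u8>) -> io::Result<Vec<u8>> {
        let sum: f64 = request
            .chunks_exact(8)
            .map(|c| f64::from_le_bytes(c.try_into().unwrap()))
            .sum();
        Ok(sum.to_le_bytes().to_vec())
    }
}

fn main() -> io::Result<()> {
    let result = futures::executor::block_on(Add::call(&LoopbackCaller, vec![1.0, 2.0]))?;
    assert_eq!(result, 3.0);
    Ok(())
}
```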
Platform-specific differences (async runtime, WebSocket implementation, JavaScript bridge) are isolated in the transport implementations, never exposed to application code.
Sources: README.md:47-48
Summary of Core Principles
| Principle | Implementation | Benefit |
|---|---|---|
| Layered Separation | Core → RPC → Transport | Each layer independently testable and replaceable |
| Binary Protocol | Raw bytes everywhere | Zero parsing overhead, FFI-friendly |
| Non-Async Core | Callback-driven dispatcher | Runtime-agnostic, deterministic execution |
| Type Safety | Shared service definitions | Compile-time API contract enforcement |
| Request Correlation | ID-based multiplexing | Concurrent requests over single connection |
| Platform Abstraction | Trait-based callers | Write once, deploy to native and WASM |
These principles work together to create a system that is simultaneously high-performance, type-safe, and broadly compatible across deployment targets.
Sources: README.md:16-52 DRAFT.md:9-26 Cargo.toml:10-11