This documentation is part of the "Projects with Books" initiative at zenOSmosis.
The source code for this project is available on GitHub.
Layered Architecture
Purpose and Scope
This document explains the layered design of the muxio transport kit, describing how each layer builds on the one below it to provide progressively higher-level abstractions. The architecture separates concerns into six distinct layers: binary framing, stream multiplexing, RPC protocol, RPC abstractions, service definitions, and platform extensions.
For information about the design principles that motivated this architecture, see Design Philosophy. For implementation details of individual layers, see Core Library (muxio) and RPC Framework.
Architectural Overview
The muxio system implements a layered transport kit where each layer has a well-defined responsibility and interacts only with adjacent layers. This separation enables runtime-agnostic operation and cross-platform deployment.
```mermaid
graph TB
subgraph "Layer 6: Application Code"
APP["User Application\nBusiness Logic"]
end
subgraph "Layer 5: Service Definition Layer"
SD["RpcMethodPrebuffered Traits\nCompile-Time Method IDs\nShared Type Contracts"]
end
subgraph "Layer 4: RPC Abstraction Layer"
CALLER["RpcServiceCallerInterface\nPlatform-Agnostic Client API"]
ENDPOINT["RpcServiceEndpointInterface\nPlatform-Agnostic Server API"]
end
subgraph "Layer 3: RPC Protocol Layer"
DISPATCHER["RpcDispatcher\nRequest Correlation\nResponse Routing"]
end
subgraph "Layer 2: Stream Multiplexing Layer"
SESSION["RpcSession\nStream ID Allocation\nPer-Stream Decoders"]
end
subgraph "Layer 1: Binary Framing Layer"
ENCODER["RpcStreamEncoder\nFrame Construction"]
DECODER["RpcStreamDecoder\nFrame Reconstruction"]
end
subgraph "Layer 0: Platform Extensions"
TOKIO_CLIENT["RpcClient\ntokio + tokio-tungstenite"]
TOKIO_SERVER["RpcServer\naxum + tokio-tungstenite"]
WASM_CLIENT["RpcWasmClient\nwasm-bindgen + js-sys"]
end
APP --> SD
SD --> CALLER
SD --> ENDPOINT
CALLER --> DISPATCHER
ENDPOINT --> DISPATCHER
DISPATCHER --> SESSION
SESSION --> ENCODER
SESSION --> DECODER
ENCODER --> TOKIO_CLIENT
ENCODER --> TOKIO_SERVER
ENCODER --> WASM_CLIENT
DECODER --> TOKIO_CLIENT
DECODER --> TOKIO_SERVER
DECODER --> WASM_CLIENT
```

Sources:

- README.md:25-41
- DRAFT.md:49-52
- Diagram 2 from high-level architecture
Layer 1: Binary Framing Protocol
The binary framing layer defines the wire format for all data transmission. It provides discrete message boundaries over byte streams using a compact header structure.
Frame Structure
Each frame consists of a fixed-size header followed by a variable-length payload chunk:
| Field | Type | Size | Description |
|---|---|---|---|
| `stream_id` | `u32` | 4 bytes | Identifies which logical stream this frame belongs to |
| `flags` | `u8` | 1 byte | Control flags (Start, End, Error, Cancelation) |
| `payload` | `[u8]` | Variable | Binary data chunk |
The frame header is defined in RpcHeader and serialized using bytemuck for zero-copy conversion.
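As a rough illustration of this layout (field order and byte order here are assumptions for the sketch; the actual encoding is handled by `RpcStreamEncoder` and the `bytemuck`-backed `RpcHeader`, which may carry additional fields):

```rust
/// Sketch only: build one frame as header bytes followed by the payload chunk.
fn encode_frame(stream_id: u32, flags: u8, payload: &[u8]) -> Vec<u8> {
    let mut frame = Vec::with_capacity(4 + 1 + payload.len());
    frame.extend_from_slice(&stream_id.to_le_bytes()); // 4-byte stream identifier
    frame.push(flags);                                  // 1-byte control flags
    frame.extend_from_slice(payload);                   // variable-length data chunk
    frame
}
```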
Frame Types
Frames are categorized by their flags field, encoded using num_enum:
- Start Frame : First frame of a stream, initializes decoder state
- Data Frame : Intermediate payload chunk
- End Frame : Final frame, triggers stream completion
- Error Frame : Signals stream-level error
- Cancelation Frame : Requests stream termination
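A hedged sketch of how such frame kinds can be modeled with `num_enum` (the variant names and discriminant values below are illustrative, not muxio's actual definitions):

```rust
use num_enum::{IntoPrimitive, TryFromPrimitive};

/// Illustrative frame kinds; the real flag values live in the muxio source.
#[repr(u8)]
#[derive(Debug, Clone, Copy, PartialEq, Eq, IntoPrimitive, TryFromPrimitive)]
enum FrameKind {
    Start = 0,
    Data = 1,
    End = 2,
    Error = 3,
    Cancel = 4,
}

fn classify(flags: u8) -> Option<FrameKind> {
    // TryFromPrimitive provides a checked conversion from the raw header byte.
    FrameKind::try_from(flags).ok()
}
```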
Encoding and Decoding
The RpcStreamEncoder serializes data into frames with automatic chunking based on DEFAULT_MAX_CHUNK_SIZE. The RpcStreamDecoder reconstructs the original message from potentially out-of-order frames.
```mermaid
graph LR
INPUT["Input Bytes"]
CHUNK["Chunk into\nDEFAULT_MAX_CHUNK_SIZE"]
HEADER["Add RpcHeader\nstream_id + flags"]
FRAME["Binary Frame"]
INPUT --> CHUNK
CHUNK --> HEADER
HEADER --> FRAME
RECV["Received Frames"]
DEMUX["Demultiplex by\nstream_id"]
BUFFER["Reassemble Chunks"]
OUTPUT["Output Bytes"]
RECV --> DEMUX
DEMUX --> BUFFER
BUFFER --> OUTPUT
```

Sources:

- src/rpc/rpc_internals/rpc_types.rs (RpcHeader definition)
- src/rpc/rpc_internals/rpc_stream_encoder.rs (frame encoding)
- src/rpc/rpc_internals/rpc_stream_decoder.rs (frame decoding)
- README.md:33-34
Layer 2: Stream Multiplexing Layer
The stream multiplexing layer, implemented by RpcSession, manages multiple concurrent logical streams over a single connection. Each stream has independent state and lifecycle.
RpcSession Responsibilities
The RpcSession struct provides:
- Stream ID Allocation : Monotonically increasing `u32` identifiers
- Per-Stream Decoders : `HashMap<u32, RpcStreamDecoder>` for concurrent reassembly
- Frame Muxing : Interleaving frames from multiple streams
- Frame Demuxing : Routing incoming frames to the correct decoder
- Stream Lifecycle : Automatic decoder cleanup on End/Error events
Stream Lifecycle Management
The session maintains a decoder for each active stream in the decoders field. When a stream completes (End/Error/Cancelation), its decoder is removed from the map, freeing resources.
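A minimal sketch of that bookkeeping, assuming a simplified decoder type (the real `RpcSession` in src/rpc/rpc_internals/rpc_session.rs is more involved):

```rust
use std::collections::HashMap;

/// Stand-in for muxio's RpcStreamDecoder.
struct StreamDecoder { /* reassembly buffer, expected-state tracking, ... */ }

/// Stand-in for RpcSession's per-stream decoder map.
struct Session {
    decoders: HashMap<u32, StreamDecoder>,
}

impl Session {
    fn on_frame(&mut self, stream_id: u32, is_start: bool, is_terminal: bool, _payload: &[u8]) {
        if is_start {
            // Start frame: allocate decoder state for the new stream.
            self.decoders.insert(stream_id, StreamDecoder {});
        }
        // ... feed _payload into the decoder registered for `stream_id` ...
        if is_terminal {
            // End / Error / Cancelation: drop the decoder and free its resources.
            self.decoders.remove(&stream_id);
        }
    }
}
```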
Concurrent Stream Operations
Multiple streams can be active simultaneously: the session interleaves their outgoing frames on the wire and routes each incoming frame to the decoder registered for its `stream_id`.
Sources:
- src/rpc/rpc_internals/rpc_session.rs:16-21
- src/rpc/rpc_internals/rpc_session.rs:52-120
- README.md:29-30
Layer 3: RPC Protocol Layer
The RPC protocol layer, implemented by RpcDispatcher, adds request/response semantics on top of the stream multiplexer. It correlates requests with responses using unique request IDs.
RpcDispatcher Structure
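The exact struct lives in src/rpc/rpc_dispatcher.rs; a simplified view of the state implied by the correlation flow below might look like this (field and callback types are assumptions):

```rust
use std::collections::HashMap;

/// Hypothetical callback type: invoked with the response bytes (or an error).
type ResponseCallback = Box<dyn FnOnce(Result<Vec<u8>, String>) + Send>;

/// Simplified sketch of the dispatcher's correlation state.
struct Dispatcher {
    next_request_id: u32,                             // monotonic request-ID counter
    pending_requests: HashMap<u32, ResponseCallback>, // callbacks awaiting a response
}
```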
Request Correlation
The dispatcher assigns each RPC call a unique `request_id`:
1. Client calls `RpcDispatcher::call(RpcRequest)`
2. Dispatcher assigns a monotonic `request_id` from `next_request_id`
3. The request is serialized with the embedded `request_id`
4. Dispatcher stores a callback in the `pending_requests` map
5. Server processes the request and returns an `RpcResponse` with the same `request_id`
6. Dispatcher looks up the callback in `pending_requests` and invokes it
7. The entry is removed from `pending_requests`
RPC Message Types
The protocol uses num_enum to encode message types in frame payloads:
| Message Type | Direction | Contains |
|---|---|---|
| `RpcRequest` | Client → Server | `request_id`, `method_id`, params |
| `RpcResponse` | Server → Client | `request_id`, result or error |
| `RpcStreamChunk` | Bidirectional | `request_id`, chunk data |
| `RpcStreamEnd` | Bidirectional | `request_id` |
Request/Response Flow with Code Entities
The flow above maps directly onto code entities: `RpcDispatcher::call` produces an `RpcRequest` carrying a `method_id` and `request_id`, the peer answers with an `RpcResponse`, and the callback stored in `pending_requests` completes the call on the originating side.
Layer 4: RPC Abstraction Layer
The RPC abstraction layer defines platform-agnostic traits that enable the same application code to work across different runtime environments.
RpcServiceCallerInterface
The RpcServiceCallerInterface trait abstracts client-side RPC invocation:
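An illustrative shape for such a trait (the method name, signature, and error type here are assumptions for the sketch; the real definition is in extensions/muxio-rpc-service-caller/src/caller_interface.rs):

```rust
/// Sketch of a platform-agnostic caller: ship encoded request bytes for a
/// method ID and await the encoded response bytes.
#[allow(async_fn_in_trait)]
trait ServiceCaller {
    async fn call_prebuffered(
        &self,
        method_id: u64,
        request_bytes: Vec<u8>,
    ) -> Result<Vec<u8>, Box<dyn std::error::Error + Send + Sync>>;
}
```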
This trait is implemented by:
- `RpcClient` (Tokio-based native client)
- `RpcWasmClient` (WASM browser client)
RpcServiceEndpointInterface
The RpcServiceEndpointInterface trait abstracts server-side handler registration:
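An illustrative counterpart for handler registration (again only a sketch under assumed signatures; the actual trait in extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs differs in detail):

```rust
/// Sketch of a platform-agnostic endpoint: handlers are keyed by METHOD_ID and
/// work purely on encoded bytes, so the same registration code runs anywhere.
trait ServiceEndpoint {
    fn register_prebuffered<F>(&self, method_id: u64, handler: F)
    where
        F: Fn(Vec<u8>) -> Result<Vec<u8>, Box<dyn std::error::Error + Send + Sync>>
            + Send
            + Sync
            + 'static;
}
```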
Platform Abstraction Benefits
| Aspect | Implementation Detail | Abstracted By |
|---|---|---|
| Transport | WebSocket, TCP, Browser APIs | Caller/Endpoint traits |
| Runtime | Tokio, WASM event loop, std::thread | Async trait methods |
| Serialization | bitcode encoding/decoding | Vec<u8> byte interface |
| Error Handling | Platform-specific errors | RpcServiceError enum |
Sources:
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:65-137
- extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:65-137
- README.md:48-49
Layer 5: Service Definition Layer
The service definition layer provides compile-time type safety through shared trait definitions between client and server.
RpcMethodPrebuffered Trait
Service methods are defined using the RpcMethodPrebuffered trait:
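The trait's exact definition lives in extensions/muxio-rpc-service/src/prebuffered.rs; based on the method names referenced throughout this page, its shape is roughly the following (the associated types and error type are assumptions):

```rust
/// Rough sketch of a prebuffered method contract shared by client and server.
trait MethodPrebuffered {
    /// Compile-time identifier derived from the method name (see next section).
    const METHOD_ID: u64;

    type Input;
    type Output;

    fn encode_request(input: Self::Input) -> Result<Vec<u8>, Box<dyn std::error::Error>>;
    fn decode_request(bytes: &[u8]) -> Result<Self::Input, Box<dyn std::error::Error>>;
    fn encode_response(output: Self::Output) -> Result<Vec<u8>, Box<dyn std::error::Error>>;
    fn decode_response(bytes: &[u8]) -> Result<Self::Output, Box<dyn std::error::Error>>;
}
```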
Compile-Time Method ID Generation
The METHOD_ID is computed at compile time using xxhash3_64 from the xxhash-rust crate:
```mermaid
graph TB
DEF["Service Definition Crate\nRpcMethodPrebuffered impls"]
SERVER["Server Crate"]
CLIENT["Client Crate"]
SERVER -->|depends on| DEF
CLIENT -->|depends on| DEF
SERVER -->|register_prebuffered Add::METHOD_ID, handler| ENDPOINT["RpcServiceEndpoint"]
CLIENT -->|Add::call client, params| CALLER["RpcServiceCaller"]
ENDPOINT -->|decode_request| DEF
ENDPOINT -->|encode_response| DEF
CALLER -->|encode_request| DEF
CALLER -->|decode_response| DEF
Note1["Compile Error if:\n- Method name mismatch\n- Type signature mismatch\n- Serialization incompatibility"]
```

This ensures that method names are never transmitted on the wire—only their compact 8-byte hash values.
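To make that concrete, a method ID can be computed as a `const` at build time; the hypothetical `Add` method below is only an example (the hashed input string is an assumption):

```rust
// Requires the `const_xxh3` feature of the xxhash-rust crate.
use xxhash_rust::const_xxh3::xxh3_64;

// Hypothetical "Add" method: only this 8-byte value, never the method name,
// is ever written to the wire.
const ADD_METHOD_ID: u64 = xxh3_64(b"Add");
```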
Type Safety Enforcement
Because client and server compile against the same `RpcMethodPrebuffered` implementations, a mismatch in method name, type signature, or serialization format surfaces as a compile error rather than a runtime failure.
Sources:
- extensions/muxio-rpc-service/src/prebuffered.rs
- examples/example-muxio-rpc-service-definition/src/prebuffered.rs
- README.md:50-51
- Cargo.toml:64 (xxhash-rust dependency)
Layer 6: Platform Extensions
Platform extensions implement the abstraction layer traits for specific runtime environments, providing concrete transport mechanisms.
Platform Extension Architecture

```mermaid
graph TB
subgraph "Tokio Native Platform"
TOKIO_CLIENT["RpcClient\nArc<RpcClientInner>"]
TOKIO_INNER["RpcClientInner\ndispatcher: TokioMutex<RpcDispatcher>\nendpoint: Arc<RpcServiceEndpoint>"]
TOKIO_TRANSPORT["tokio-tungstenite\nWebSocketStream"]
TOKIO_TASKS["Background Tasks\nread_task\nwrite_task"]
TOKIO_CLIENT -->|owns Arc| TOKIO_INNER
TOKIO_INNER -->|uses| TOKIO_TRANSPORT
TOKIO_CLIENT -->|spawns| TOKIO_TASKS
end
subgraph "WASM Browser Platform"
WASM_CLIENT["RpcWasmClient\nRpcClientInner"]
WASM_BRIDGE["static_muxio_write_bytes\nJavaScript Bridge"]
WASM_STATIC["MUXIO_STATIC_RPC_CLIENT_REF\nthread_local! RefCell"]
WASM_WSAPI["Browser WebSocket API\njs-sys bindings"]
WASM_CLIENT -->|calls| WASM_BRIDGE
WASM_BRIDGE -->|write to| WASM_WSAPI
WASM_STATIC -->|holds| WASM_CLIENT
end
subgraph "Shared Abstractions"
CALLER_TRAIT["RpcServiceCallerInterface"]
ENDPOINT_TRAIT["RpcServiceEndpointInterface"]
end
TOKIO_CLIENT -.implements.-> CALLER_TRAIT
WASM_CLIENT -.implements.-> CALLER_TRAIT
TOKIO_INNER -->|owns| ENDPOINT_TRAIT
WASM_CLIENT -->|owns| ENDPOINT_TRAIT
```
Extension Crate Mapping
| Extension Crate | Implements | Runtime | Transport |
|---|---|---|---|
| `muxio-tokio-rpc-client` | `RpcServiceCallerInterface`, `RpcServiceEndpointInterface` | Tokio async | `tokio-tungstenite` WebSocket |
| `muxio-tokio-rpc-server` | `RpcServiceEndpointInterface` | Tokio + Axum | `tokio-tungstenite` WebSocket |
| `muxio-wasm-rpc-client` | `RpcServiceCallerInterface`, `RpcServiceEndpointInterface` | Browser event loop | `wasm-bindgen` + `js-sys` |
Tokio Client Lifecycle
The RpcClient manages lifecycle through Arc reference counting:
Background tasks (read_task, write_task) hold Arc clones and automatically clean up when the connection drops.
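A rough sketch of that ownership pattern, assuming a simplified inner type (everything here other than the `Arc`-clone-per-task idea is illustrative):

```rust
use std::sync::Arc;

/// Stand-in for RpcClientInner (dispatcher, endpoint, socket halves, ...).
struct ClientInner;

struct Client {
    inner: Arc<ClientInner>,
}

impl Client {
    fn spawn_background_tasks(&self) {
        // Each task keeps the shared state alive via its own Arc clone; when the
        // connection drops and both loops return, the last clone is released and
        // the inner state is freed automatically.
        let read_inner = Arc::clone(&self.inner);
        tokio::spawn(async move {
            let _inner = read_inner; // read loop: socket -> dispatcher
        });
        let write_inner = Arc::clone(&self.inner);
        tokio::spawn(async move {
            let _inner = write_inner; // write loop: dispatcher -> socket
        });
    }
}
```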
WASM Client Singleton Pattern
The WASM client uses a thread-local singleton for JavaScript interop:
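A minimal sketch of that pattern (the type and static names below are simplified stand-ins for `RpcWasmClient` and `MUXIO_STATIC_RPC_CLIENT_REF`):

```rust
use std::cell::RefCell;
use std::rc::Rc;

/// Stand-in for RpcWasmClient.
struct WasmClient;

thread_local! {
    // WASM runs single-threaded, so a thread-local RefCell is enough for
    // JavaScript callbacks to reach the client without locks.
    static RPC_CLIENT: RefCell<Option<Rc<WasmClient>>> = RefCell::new(None);
}

/// Called from the JS bridge with bytes received on the browser WebSocket.
fn on_bytes_from_js(bytes: &[u8]) {
    RPC_CLIENT.with(|slot| {
        if let Some(client) = slot.borrow().as_ref() {
            let _ = (client, bytes); // forward bytes into the client's dispatcher
        }
    });
}
```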
This enables JavaScript to write bytes into the Rust dispatcher without async overhead.
Sources:
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:30-38
- extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:15-32
- extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:10-12
- Cargo.toml:20-30
Cross-Cutting Concerns
Several subsystems span multiple layers:
Serialization (bitcode)
The bitcode crate provides compact binary serialization at Layer 5 (Service Definitions):
- `encode()` in `RpcMethodPrebuffered::encode_request` / `encode_response`
- `decode()` in `RpcMethodPrebuffered::decode_request` / `decode_response`
- Configured in service definition crates, used by both client and server
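For example, a hypothetical `AddRequest` type (not part of muxio itself) round-trips through `bitcode` like this:

```rust
/// Hypothetical request parameters for an "Add" method.
#[derive(bitcode::Encode, bitcode::Decode, Debug, PartialEq)]
struct AddRequest {
    numbers: Vec<f64>,
}

fn roundtrip() -> Result<(), bitcode::Error> {
    let req = AddRequest { numbers: vec![1.0, 2.0, 3.0] };
    let bytes: Vec<u8> = bitcode::encode(&req);      // payload placed in the frame
    let back: AddRequest = bitcode::decode(&bytes)?; // what the handler decodes
    assert_eq!(req, back);
    Ok(())
}
```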
Observability (tracing)
The tracing crate provides structured logging at Layers 2-4:
- Frame-level events in `RpcSession`
- Request/response correlation in `RpcDispatcher`
- Connection state changes in platform extensions
Error Propagation
Errors flow upward through layers:
Each layer defines its own error type and converts lower-layer errors appropriately.
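A sketch of that convention (the error type names here are illustrative, not muxio's actual enums):

```rust
/// Illustrative framing-layer error.
#[derive(Debug)]
enum FramingError {
    TruncatedHeader,
    UnknownFlags(u8),
}

/// Illustrative session-layer error wrapping the layer below.
#[derive(Debug)]
enum SessionError {
    Framing(FramingError),
    UnknownStream(u32),
}

impl From<FramingError> for SessionError {
    // Lower-layer errors are converted as they propagate upward.
    fn from(err: FramingError) -> Self {
        SessionError::Framing(err)
    }
}
```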
Sources:
- Cargo.toml:52 (bitcode dependency)
- Cargo.toml:62 (tracing dependency)
- src/rpc/rpc_dispatcher.rs:130-190
Layer Interaction Patterns
Write Path (Client → Server)
On the write path, application code encodes a request through the service definition layer, the caller hands the bytes to `RpcDispatcher`, `RpcSession` assigns a stream ID and chunks the data, and `RpcStreamEncoder` emits frames that the platform extension writes to its transport.
Read Path (Server → Client)
On the read path, the platform extension feeds received bytes into `RpcSession`, which demultiplexes frames by `stream_id` and reassembles each stream; `RpcDispatcher` then correlates the completed response with its pending request and invokes the stored callback.
Sources:
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:100-226
- src/rpc/rpc_dispatcher.rs:130-264
- src/rpc/rpc_internals/rpc_session.rs:52-120
Summary
The layered architecture enables:
- Separation of Concerns : Each layer has a single, well-defined responsibility
- Runtime Agnosticism : Core layers (1-3) use non-async, callback-driven design
- Platform Extensibility : Layer 6 implements platform-specific transports
- Type Safety : Layer 5 enforces compile-time contracts
- Code Reuse : Same service definitions work across all platforms
This design allows the same business logic to execute in Tokio native environments, WASM browsers, and potentially other runtimes without modification to the core layers.