Core Library (muxio)
Purpose and Scope
This document describes the foundational muxio crate, which provides the low-level binary framing protocol and stream multiplexing capabilities that form the base layer of the Muxio framework. The core library is transport-agnostic, non-async, and callback-driven, enabling integration with any runtime environment including Tokio, standard library, and WebAssembly.
For higher-level RPC service abstractions built on top of this core, see RPC Framework. For concrete client and server implementations that use this core library, see Transport Implementations.
Sources: Cargo.toml:1-71 README.md:17-24
Architecture Overview
The muxio core library implements a layered architecture where each layer has a specific, well-defined responsibility. The design separates concerns into three distinct layers: binary framing, stream multiplexing, and RPC protocol.
Diagram: Core Library Component Layering
graph TB
subgraph "Application Code"
App["Application Logic"]
end
subgraph "RPC Protocol Layer"
Dispatcher["RpcDispatcher"]
Request["RpcRequest"]
Response["RpcResponse"]
Header["RpcHeader"]
end
subgraph "Stream Multiplexing Layer"
Session["RpcSession"]
StreamDecoder["RpcStreamDecoder"]
StreamEncoder["RpcStreamEncoder"]
StreamEvent["RpcStreamEvent"]
end
subgraph "Binary Framing Layer"
MuxDecoder["FrameMuxStreamDecoder"]
Frame["DecodedFrame"]
FrameKind["FrameKind"]
end
subgraph "Transport Layer"
Transport["Raw Bytes\nWebSocket/TCP/Custom"]
end
App --> Dispatcher
Dispatcher --> Request
Dispatcher --> Response
Request --> Header
Response --> Header
Dispatcher --> Session
Session --> StreamDecoder
Session --> StreamEncoder
StreamDecoder --> StreamEvent
StreamEncoder --> Header
Session --> MuxDecoder
MuxDecoder --> Frame
Frame --> FrameKind
MuxDecoder --> Transport
Session --> Transport
Sources: src/rpc/rpc_internals/rpc_session.rs:1-118
Key Components
The core library consists of several primary components organized into distinct functional layers:
| Component | File Location | Layer | Primary Responsibility | Details |
|---|---|---|---|---|
| FrameMuxStreamDecoder | src/frame/ | Binary Framing | Decodes raw bytes into DecodedFrame structures | See Binary Framing Protocol |
| DecodedFrame | src/frame/ | Binary Framing | Container for decoded frame data with stream ID and payload | See Binary Framing Protocol |
| FrameKind | src/frame/ | Binary Framing | Frame type enumeration (Data, End, Cancel) | See Binary Framing Protocol |
| RpcSession | src/rpc/rpc_internals/rpc_session.rs:20-117 | Stream Multiplexing | Manages stream ID allocation and per-stream decoders | See Stream Multiplexing |
| RpcStreamDecoder | src/rpc/rpc_internals/rpc_stream_decoder.rs:11-186 | Stream Multiplexing | Maintains state machine for individual stream decoding | See Stream Multiplexing |
| RpcStreamEncoder | src/rpc/rpc_internals/ | Stream Multiplexing | Encodes RPC headers and payloads into frames | See Stream Multiplexing |
| RpcDispatcher | src/rpc/rpc_dispatcher.rs | RPC Protocol | Correlates requests with responses via request_id | See RPC Dispatcher |
| RpcRequest | src/rpc/ | RPC Protocol | Request data structure with method ID and parameters | See Request and Response Types |
| RpcResponse | src/rpc/ | RPC Protocol | Response data structure with result or error | See Request and Response Types |
| RpcHeader | src/rpc/rpc_internals/ | RPC Protocol | Contains RPC metadata (message type, IDs, metadata bytes) | See Request and Response Types |
Sources: src/rpc/rpc_internals/rpc_session.rs:15-24 src/rpc/rpc_internals/rpc_stream_decoder.rs:11-18
Component Interaction Flow
The following diagram illustrates how the core components interact during a typical request/response cycle, showing the actual method calls and data structures involved:
Diagram: Request/Response Data Flow Through Core Components
sequenceDiagram
participant App as "Application"
participant Disp as "RpcDispatcher"
participant Sess as "RpcSession"
participant Enc as "RpcStreamEncoder"
participant Dec as "RpcStreamDecoder"
participant Mux as "FrameMuxStreamDecoder"
rect rgb(245, 245, 245)
Note over App,Mux: Outbound Request Flow
App->>Disp: call(RpcRequest)
Disp->>Disp: assign request_id
Disp->>Sess: init_request(RpcHeader, max_chunk_size, on_emit)
Sess->>Sess: allocate stream_id via increment_u32_id()
Sess->>Enc: RpcStreamEncoder::new(stream_id, header, on_emit)
Enc->>Enc: encode header + payload into frames
Enc->>App: emit bytes via on_emit callback
end
rect rgb(245, 245, 245)
Note over App,Mux: Inbound Response Flow
App->>Sess: read_bytes(input, on_rpc_stream_event)
Sess->>Mux: FrameMuxStreamDecoder::read_bytes(input)
Mux->>Sess: Iterator<Result<DecodedFrame>>
Sess->>Dec: RpcStreamDecoder::decode_rpc_frame(frame)
Dec->>Dec: state machine: AwaitHeader → AwaitPayload → Done
Dec->>Sess: Vec<RpcStreamEvent>
Sess->>App: on_rpc_stream_event(RpcStreamEvent::Header)
Sess->>App: on_rpc_stream_event(RpcStreamEvent::PayloadChunk)
Sess->>App: on_rpc_stream_event(RpcStreamEvent::End)
end
Sources: src/rpc/rpc_internals/rpc_session.rs:35-50 src/rpc/rpc_internals/rpc_session.rs:53-117 src/rpc/rpc_internals/rpc_stream_decoder.rs:53-186
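The same cycle can be expressed in code. The following is a minimal loopback sketch based only on the diagram above; the RpcHeader construction, the exact init_request() and read_bytes() signatures, and the event variant fields are assumptions rather than verified API:

```rust
// Minimal loopback sketch of the flow above. Constructor names, closure
// shapes, and event fields are assumptions drawn from the diagram, not
// verified against the crate.
let mut session = RpcSession::default();
let mut wire: Vec<u8> = Vec::new();

// Outbound: init_request allocates a stream_id and pushes framed bytes
// through the on_emit callback; here they are simply buffered.
session.init_request(
    header,                 // an RpcHeader prepared by the dispatcher
    DEFAULT_MAX_CHUNK_SIZE, // split large payloads into multiple frames
    |bytes: &[u8]| wire.extend_from_slice(bytes),
)?;

// Inbound: read_bytes demultiplexes frames by stream_id and surfaces
// decoded events to the handler.
session.read_bytes(&wire, |event| match event {
    RpcStreamEvent::Header { .. } => { /* metadata for the stream arrived */ }
    RpcStreamEvent::PayloadChunk { .. } => { /* one chunk of the payload */ }
    RpcStreamEvent::End { .. } => { /* the stream completed */ }
    _ => { /* Error and any other variants */ }
});
```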
Non-Async, Callback-Driven Design
A fundamental design characteristic of the core library is its non-async, callback-driven architecture. This design choice enables the library to be used across different runtime environments without requiring a specific async runtime.
Callback Traits
The core library defines several callback traits that enable integration with different transport layers:
Diagram: Callback Trait Architecture
graph LR
subgraph "Callback Trait System"
RpcEmit["RpcEmit trait"]
RpcStreamEventHandler["RpcStreamEventDecoderHandler trait"]
RpcEmit --> TokioImpl["Tokio Implementation:\nasync fn + channels"]
RpcEmit --> WasmImpl["WASM Implementation:\nwasm_bindgen + JS bridge"]
RpcEmit --> CustomImpl["Custom Implementation:\nuser-defined"]
RpcStreamEventHandler --> DispatcherHandler["RpcDispatcher handler"]
RpcStreamEventHandler --> CustomHandler["Custom event handler"]
end
The callback-driven design means:
- No built-in async/await: Core methods like RpcSession::read_bytes() and RpcSession::init_request() are synchronous
- Emit callbacks: Output is sent via callback functions implementing the RpcEmit trait
- Event callbacks: Decoded events are delivered via RpcStreamEventDecoderHandler implementations
- Transport agnostic: Any byte transport can be used as long as it can call the core methods and handle callbacks
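To make the pattern concrete, the following self-contained sketch mimics the emit side with a plain Rust trait. The Emit trait below is illustrative only; RpcEmit's actual definition may differ:

```rust
// Illustrative stand-in for the RpcEmit pattern: the core hands encoded
// bytes to whatever sink the host runtime provides. No async runtime is
// required by the core itself.
trait Emit {
    fn emit(&mut self, bytes: &[u8]);
}

// Any closure over byte slices can serve as an emitter.
impl<F: FnMut(&[u8])> Emit for F {
    fn emit(&mut self, bytes: &[u8]) {
        self(bytes)
    }
}

fn encode_and_emit(payload: &[u8], out: &mut impl Emit) {
    // A real encoder would frame `payload`; here it is forwarded unchanged.
    out.emit(payload);
}

fn main() {
    // A Tokio host might forward into an mpsc channel; a WASM host might
    // call into JavaScript. Here the bytes are simply collected.
    let mut wire = Vec::new();
    encode_and_emit(b"hello", &mut |bytes: &[u8]| wire.extend_from_slice(bytes));
    assert_eq!(wire, b"hello");
}
```

Because the sink is just a FnMut over byte slices, each runtime supplies its own delivery mechanism without the core knowing the difference.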
Sources: src/rpc/rpc_internals/rpc_session.rs:35-50 src/rpc/rpc_internals/rpc_session.rs:53-117 README.md:35-36
Core Abstractions
The core library provides three fundamental abstractions that enable runtime-agnostic operation:
Stream Lifecycle Management
The RpcSession component provides stream multiplexing by maintaining per-stream state. Each outbound call allocates a unique stream_id via increment_u32_id(), and incoming frames are demultiplexed to their corresponding RpcStreamDecoder instances. For detailed information on stream multiplexing mechanics, see Stream Multiplexing.
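A plausible self-contained sketch of the wrapping u32 allocation this paragraph describes follows; the crate's actual increment_u32_id() may differ in detail:

```rust
// Sketch of monotonic, wrapping stream-ID allocation. The real
// increment_u32_id() in muxio may differ in detail.
fn increment_u32_id(next: &mut u32) -> u32 {
    let id = *next;
    *next = next.wrapping_add(1); // wrap rather than panic on overflow
    id
}

fn main() {
    let mut next_stream_id: u32 = 0;
    let a = increment_u32_id(&mut next_stream_id);
    let b = increment_u32_id(&mut next_stream_id);
    assert_eq!((a, b), (0, 1)); // each outbound call gets a fresh stream_id
}
```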
Binary Protocol Format
All communication uses a binary framing protocol with fixed-size headers and variable-length payloads. The protocol supports chunking large messages using DEFAULT_MAX_CHUNK_SIZE and includes frame types (FrameKind::Data, FrameKind::End, FrameKind::Cancel) for stream control. For complete protocol specification, see Binary Framing Protocol.
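As an illustration of the chunking rule only (not the crate's wire format; whether End is a flag on the final data frame or a separate empty frame is left to the real encoder), the sketch below splits a payload at max_chunk_size:

```rust
// Conceptual chunking: every frame carries at most `max_chunk_size` payload
// bytes; the last frame for a stream is marked End. Frame layout here is
// illustrative, not the wire format defined by muxio.
#[derive(Debug, PartialEq)]
enum FrameKind {
    Data,
    End,
}

fn chunk_payload(payload: &[u8], max_chunk_size: usize) -> Vec<(FrameKind, &[u8])> {
    let mut frames: Vec<(FrameKind, &[u8])> = payload
        .chunks(max_chunk_size)
        .map(|c| (FrameKind::Data, c))
        .collect();
    if frames.is_empty() {
        // An empty payload still needs a terminating frame.
        frames.push((FrameKind::End, &[]));
    } else {
        frames.last_mut().unwrap().0 = FrameKind::End;
    }
    frames
}

fn main() {
    let frames = chunk_payload(&[0u8; 5], 2);
    assert_eq!(frames.len(), 3); // 2 + 2 + 1 bytes
    assert_eq!(frames.last().unwrap().0, FrameKind::End);
}
```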
Request Correlation
The RpcDispatcher component manages request/response correlation using unique request_id values. It maintains a HashMap of pending requests and routes incoming responses to the appropriate callback handlers. For dispatcher implementation details, see RPC Dispatcher.
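The pattern is easy to sketch in isolation. The following self-contained example shows correlation via a HashMap keyed by request_id, with types simplified relative to the real RpcDispatcher:

```rust
use std::collections::HashMap;

// Sketch of request/response correlation of the kind RpcDispatcher performs.
// The real dispatcher's types differ; this shows only the pattern.
type ResponseHandler = Box<dyn FnOnce(Vec<u8>)>;

#[derive(Default)]
struct Dispatcher {
    next_request_id: u32,
    pending: HashMap<u32, ResponseHandler>,
}

impl Dispatcher {
    // Register a handler and return the id to stamp on the outbound header.
    fn call(&mut self, on_response: ResponseHandler) -> u32 {
        let id = self.next_request_id;
        self.next_request_id = self.next_request_id.wrapping_add(1);
        self.pending.insert(id, on_response);
        id
    }

    // Route an inbound response to the matching handler, consuming the entry.
    fn on_response(&mut self, request_id: u32, payload: Vec<u8>) {
        if let Some(handler) = self.pending.remove(&request_id) {
            handler(payload);
        }
    }
}

fn main() {
    let mut d = Dispatcher::default();
    let id = d.call(Box::new(|bytes| assert_eq!(bytes, b"pong")));
    d.on_response(id, b"pong".to_vec());
}
```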
Sources: src/rpc/rpc_internals/rpc_session.rs:20-33 src/rpc/rpc_internals/rpc_session.rs:53-117
Dependencies and Build Configuration
The core muxio crate has minimal dependencies to maintain its lightweight, transport-agnostic design:
| Dependency | Purpose | Version |
|---|---|---|
| chrono | Timestamp utilities | 0.4.41 |
| once_cell | Lazy static initialization | 1.21.3 |
| tracing | Logging and diagnostics | 0.1.41 |
Development dependencies include:
- bitcode: Used in tests for serialization examples
- rand: Random data generation for tests
- tokio: Async runtime for integration tests
The crate is configured for publication to crates.io under the Apache-2.0 license Cargo.toml:1-18
Sources: Cargo.toml:34-71
Integration with Extensions
The core library is designed to be extended through separate crates in the workspace. The callback-driven, non-async design enables these extensions without modification to the core:
Diagram: Core Library Extension Architecture
graph TB
subgraph "Core Library: muxio"
Session["RpcSession\n(callback-driven)"]
Dispatcher["RpcDispatcher\n(callback-driven)"]
end
subgraph "RPC Service Layer"
ServiceTrait["muxio-rpc-service\nRpcMethodPrebuffered trait"]
Caller["muxio-rpc-service-caller\nRpcServiceCallerInterface"]
Endpoint["muxio-rpc-service-endpoint\nRpcServiceEndpointInterface"]
end
subgraph "Runtime Implementations"
TokioServer["muxio-tokio-rpc-server\nasync Tokio + Axum"]
TokioClient["muxio-tokio-rpc-client\nasync Tokio + WebSocket"]
WasmClient["muxio-wasm-rpc-client\nwasm-bindgen"]
end
Session --> ServiceTrait
Dispatcher --> Caller
Dispatcher --> Endpoint
Caller --> TokioClient
Caller --> WasmClient
Endpoint --> TokioServer
Extensions built on the core library:
- RPC Service Layer (RPC Framework): Provides method definition traits and abstractions
- Tokio Server (Tokio RPC Server): Async server implementation using tokio and axum
- Tokio Client (Tokio RPC Client): Async client using tokio-tungstenite WebSockets
- WASM Client (WASM RPC Client): Browser-based client using wasm-bindgen
Sources: Cargo.toml:19-31 README.md:37-41
Memory and Performance Characteristics
The core library is designed for efficiency with several key optimizations:
| Optimization | Implementation | Benefit |
|---|---|---|
| Zero-copy frame parsing | FrameMuxStreamDecoder processes bytes in-place | Eliminates unnecessary allocations during frame decoding |
| Shared headers | RpcHeader wrapped in Arc src/rpc/rpc_internals/rpc_stream_decoder.rs:111 | Multiple events reference same header without cloning |
| Minimal buffering | Stream decoders emit chunks immediately after header parse | Low memory footprint for large payloads |
| Automatic cleanup | Streams removed from HashMap on End/Cancel src/rpc/rpc_internals/rpc_session.rs:74 | Prevents memory leaks from completed streams |
| Configurable chunks | max_chunk_size parameter in init_request() src/rpc/rpc_internals/rpc_session.rs:35-50 | Tune for different payload sizes and network conditions |
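As one concrete example of the last row, a host might choose its chunk size per transport; the values below are illustrative placeholders, not the crate's DEFAULT_MAX_CHUNK_SIZE:

```rust
// Illustrative chunk-size selection for init_request(). The numbers are
// placeholders, not values defined by muxio.
fn pick_chunk_size(constrained_link: bool) -> usize {
    if constrained_link {
        4 * 1024 // smaller frames interleave better on slow transports
    } else {
        64 * 1024 // larger frames reduce per-frame overhead for bulk payloads
    }
}
```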
Typical Memory Usage
- RpcSession: Approximately 48 bytes base + HashMap overhead
- RpcStreamDecoder: Approximately 80 bytes + buffered payload size
- RpcHeader (shared): 24 bytes + metadata length
- Active stream overhead: ~160 bytes per concurrent stream
For performance tuning strategies and benchmarks, see Performance Considerations.
Sources: src/rpc/rpc_internals/rpc_session.rs:20-33 src/rpc/rpc_internals/rpc_session.rs:35-50 src/rpc/rpc_internals/rpc_stream_decoder.rs:11-18 src/rpc/rpc_internals/rpc_stream_decoder.rs:111-116
Error Handling
The core library uses Result types with specific error enums:
- FrameDecodeError: Returned by FrameMuxStreamDecoder::read_bytes() and RpcStreamDecoder::decode_rpc_frame()
  - CorruptFrame: Invalid frame structure or header data
  - ReadAfterCancel: Attempt to read after stream cancellation
- FrameEncodeError: Returned by encoding operations, propagated from RpcStreamEncoder::new()
Error events are emitted as RpcStreamEvent::Error src/rpc/rpc_internals/rpc_session.rs:84-91 src/rpc/rpc_internals/rpc_session.rs:103-110 containing:
- rpc_header: The header if available
- rpc_request_id: The request ID if known
- rpc_method_id: The method ID if parsed
- frame_decode_error: The underlying error
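A handler might branch on the error event as sketched below; the variant's field names follow the list above, while everything else (field types, the surrounding enum shape) is assumed:

```rust
// Hypothetical error branch inside an on_rpc_stream_event callback. Field
// names follow the list above; types and Debug impls are assumptions.
fn on_event(event: RpcStreamEvent) {
    if let RpcStreamEvent::Error {
        rpc_request_id,
        rpc_method_id,
        frame_decode_error,
        ..
    } = event
    {
        // The crate depends on tracing, so diagnostics can go there.
        tracing::warn!(
            ?rpc_request_id,
            ?rpc_method_id,
            error = ?frame_decode_error,
            "stream decode failed"
        );
        // e.g. fail the pending request matching rpc_request_id
    }
}
```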
For comprehensive error handling patterns, see Error Handling.
Sources: src/rpc/rpc_internals/rpc_session.rs:80-111