This documentation is part of the "Projects with Books" initiative at zenOSmosis.
The source code for this project is available on GitHub.
Overview
Purpose and Scope
This document provides a high-level introduction to the rust-muxio repository, explaining its purpose as a toolkit for building high-performance, multiplexed RPC systems. It covers the foundational architecture, the layered design philosophy, and how the different components of the system work together to enable efficient, cross-platform communication.
For detailed information about specific subsystems, see:
- Workspace organization: Workspace Structure
- Core design principles: Design Philosophy
- Low-level implementation: Core Library (muxio)
- RPC framework usage: RPC Framework
- Platform-specific implementations: Platform Implementations
Sources: README.md:1-18 Cargo.toml:9-17
What is Muxio?
Muxio is a layered transport toolkit for building multiplexed, binary RPC systems in Rust. It separates concerns across three distinct architectural layers:
| Layer | Primary Crates | Responsibility |
|---|---|---|
| Core Multiplexing | muxio | Binary framing protocol, stream multiplexing, non-async callback-driven primitives |
| RPC Framework | muxio-rpc-service, muxio-rpc-service-caller, muxio-rpc-service-endpoint | Service definitions, method ID generation, client/server abstractions, request correlation |
| Platform Extensions | muxio-tokio-rpc-server, muxio-tokio-rpc-client, muxio-wasm-rpc-client | Concrete implementations for native (Tokio) and web (WASM) environments |
The system is designed around two key principles:
- Runtime Agnosticism: The core library (muxio) uses a callback-driven, non-async model that works in any Rust environment—Tokio, WASM, single-threaded, or multi-threaded contexts.
- Layered Abstraction: Each layer provides a clean interface to the layer above, enabling developers to build custom transports or replace individual components without affecting the entire stack.
Sources: README.md:19-41 Cargo.toml:19-31
System Architecture
The following diagram illustrates the complete system architecture, showing how components in the workspace relate to each other:
Sources: Cargo.toml:19-31 README.md:23-41 Cargo.lock:830-954
graph TB
subgraph Core["Core Foundation (muxio crate)"]
RpcDispatcher["RpcDispatcher\nRequest Correlation\nResponse Routing"]
RpcSession["RpcSession\nStream Multiplexing\nFrame Mux/Demux"]
FrameProtocol["Binary Framing\nRpcHeader + Payload Chunks"]
RpcDispatcher --> RpcSession
RpcSession --> FrameProtocol
end
subgraph RPCFramework["RPC Framework Layer"]
RpcService["muxio-rpc-service\nRpcMethodPrebuffered Trait\nxxhash Method IDs"]
RpcCaller["muxio-rpc-service-caller\nRpcServiceCallerInterface\nGeneric Client Logic"]
RpcEndpoint["muxio-rpc-service-endpoint\nRpcServiceEndpointInterface\nHandler Registration"]
RpcCaller --> RpcService
RpcEndpoint --> RpcService
end
subgraph NativeExt["Native Platform Extensions"]
TokioServer["muxio-tokio-rpc-server\nRpcServer + Axum\nWebSocket Transport"]
TokioClient["muxio-tokio-rpc-client\nRpcClient\ntokio-tungstenite"]
TokioServer --> RpcEndpoint
TokioServer --> RpcCaller
TokioClient --> RpcCaller
TokioClient --> RpcEndpoint
end
subgraph WASMExt["WASM Platform Extensions"]
WasmClient["muxio-wasm-rpc-client\nRpcWasmClient\nwasm-bindgen Bridge"]
WasmClient --> RpcCaller
WasmClient --> RpcEndpoint
end
subgraph AppLayer["Application Layer"]
ServiceDef["example-muxio-rpc-service-definition\nShared Service Contracts"]
ExampleApp["example-muxio-ws-rpc-app\nDemo Application"]
ServiceDef --> RpcService
ExampleApp --> ServiceDef
ExampleApp --> TokioServer
ExampleApp --> TokioClient
end
RpcCaller --> RpcDispatcher
RpcEndpoint --> RpcDispatcher
TokioClient <-->|Binary Frames| TokioServer
WasmClient <-->|Binary Frames| TokioServer
Core Components and Their Roles
muxio Core (muxio)
The foundational crate provides three critical components:
- RpcSession: Manages stream multiplexing over a single connection. Allocates stream IDs, maintains per-stream decoders, and handles frame interleaving. Located at src/rpc/rpc_internals/rpc_session.rs:16-21
- RpcDispatcher: Correlates RPC requests with responses using unique request IDs. Tracks pending requests in a HashMap and routes responses to the appropriate callback. Located at src/rpc/rpc_dispatcher.rs:19-30
- Binary Framing Protocol: A schemaless, low-overhead protocol that chunks payloads into frames with minimal headers. Each frame contains a stream ID, flags, and a payload chunk. Defined at src/rpc/rpc_internals/rpc_session.rs:16-21
RPC Framework Layer
- muxio-rpc-service: Defines the RpcMethodPrebuffered trait for creating shared service contracts. Uses xxhash-rust to generate compile-time method IDs from method names. Located at extensions/muxio-rpc-service/
- muxio-rpc-service-caller: Provides the RpcServiceCallerInterface trait, which abstracts client-side RPC invocation. Any client implementation (Tokio, WASM, custom) must implement this interface. Located at extensions/muxio-rpc-service-caller/
- muxio-rpc-service-endpoint: Provides the RpcServiceEndpointInterface trait for server-side handler registration and request dispatch. Located at extensions/muxio-rpc-service-endpoint/
Platform Extensions
- muxio-tokio-rpc-server: Implements RpcServer using Axum for HTTP/WebSocket serving and tokio-tungstenite for WebSocket framing. Located at extensions/muxio-tokio-rpc-server/
- muxio-tokio-rpc-client: Implements RpcClient with Arc-based lifecycle management and background tasks for connection handling. Located at extensions/muxio-tokio-rpc-client/
- muxio-wasm-rpc-client: Implements RpcWasmClient using wasm-bindgen to bridge Rust to JavaScript. Communicates with browser WebSocket APIs via a static byte-passing interface. Located at extensions/muxio-wasm-rpc-client/
Sources: Cargo.toml:40-47 Cargo.lock:830-954 README.md:36-41
Key Design Characteristics
graph LR
subgraph CorePrimitives["Core Primitives (Non-Async)"]
RpcDispatcher["RpcDispatcher\nBox<dyn Fn> Callbacks"]
RpcSession["RpcSession\nCallback-Driven Events"]
end
subgraph TokioRuntime["Tokio Runtime"]
TokioClient["RpcClient\nArc + TokioMutex"]
TokioServer["RpcServer\ntokio::spawn Tasks"]
end
subgraph WASMRuntime["WASM Runtime"]
WasmClient["RpcWasmClient\nthread_local RefCell"]
JSBridge["JavaScript Bridge\nstatic_muxio_write_bytes"]
end
subgraph CustomRuntime["Custom Runtime"]
CustomImpl["Custom Implementation\nUser-Defined Threading"]
end
CorePrimitives --> TokioRuntime
CorePrimitives --> WASMRuntime
CorePrimitives --> CustomRuntime
TokioClient --> RpcDispatcher
TokioServer --> RpcDispatcher
WasmClient --> RpcDispatcher
CustomImpl --> RpcDispatcher
Runtime-Agnostic Core
The following diagram shows how the non-async, callback-driven core enables cross-runtime compatibility:
Key Implementation Details:
- Callback Closures: Both RpcDispatcher and RpcSession accept Box<dyn Fn(...)> callbacks rather than returning Futures, enabling use in any execution context.
- No Async Core: The muxio crate itself has no async functions in its public API. All asynchrony is introduced by the platform extensions (muxio-tokio-rpc-client, etc.).
- Flexible Synchronization: Platform extensions choose their own synchronization primitives—TokioMutex for Tokio, RefCell for WASM, or custom locking for other runtimes.
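To make the pattern concrete, here is a minimal, self-contained sketch of the callback-driven style described above; the FrameSink type and its field names are illustrative stand-ins, not muxio's actual API.

```rust
// Hypothetical sketch of the callback-driven style; `FrameSink` is a stand-in type.
struct FrameSink {
    // Invoked synchronously whenever encoded bytes are ready for the transport.
    on_write: Box<dyn Fn(&[u8])>,
}

impl FrameSink {
    fn new(on_write: Box<dyn Fn(&[u8])>) -> Self {
        Self { on_write }
    }

    // No async, no task spawning: the caller decides what to do with the bytes.
    fn write_frame(&self, payload: &[u8]) {
        (self.on_write)(payload);
    }
}

fn main() {
    // A Tokio transport might forward these bytes to a WebSocket sink, while a
    // WASM transport might hand them to a JavaScript bridge instead.
    let sink = FrameSink::new(Box::new(|bytes: &[u8]| {
        println!("{} bytes ready for the transport", bytes.len());
    }));
    sink.write_frame(b"hello");
}
```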
Sources: README.md:35-36 Cargo.lock:830-839
Type Safety Through Shared Definitions
Muxio enforces compile-time type safety by requiring shared service definitions between client and server:
| Component | Location | Purpose |
|---|---|---|
| Service Definition Crate | examples/example-muxio-rpc-service-definition | Defines RpcMethodPrebuffered implementations for each RPC method |
| Method ID Constants | Generated via xxhash-rust | Compile-time hashes of method names (e.g., Add::METHOD_ID) |
| Request/Response Types | Defined in service crate | Shared structs serialized with bitcode |
Both client and server depend on the same service definition crate. If a client attempts to call a method with the wrong parameter types, or if the server returns a response with the wrong type, the code will not compile.
Example from example-muxio-rpc-service-definition:
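The original listing is not reproduced here. As a minimal sketch of the idea, the shared crate defines plain request/response types that both sides serialize with bitcode; the field names below are assumptions for illustration, not the repository's actual definitions.

```rust
// Illustrative sketch only; the real crate also implements RpcMethodPrebuffered
// for each method. Field names here are assumptions.
use bitcode::{Decode, Encode};

#[derive(Encode, Decode, Debug, PartialEq)]
pub struct AddRequest {
    pub a: f64,
    pub b: f64,
}

#[derive(Encode, Decode, Debug, PartialEq)]
pub struct AddResponse {
    pub sum: f64,
}

fn main() {
    // Client and server share these types, so an incompatible change to
    // AddRequest fails to compile on whichever side is out of date.
    let bytes = bitcode::encode(&AddRequest { a: 1.0, b: 2.0 });
    let decoded: AddRequest = bitcode::decode(&bytes).unwrap();
    assert_eq!(decoded, AddRequest { a: 1.0, b: 2.0 });
}
```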
Sources: README.md:50-51 Cargo.lock:426-431 examples/example-muxio-rpc-service-definition/
graph TB
subgraph ServerSide["Server-Side Deployment"]
RpcServer["RpcServer\nAxum + WebSocket"]
TokioRuntime["Tokio Runtime\nMulti-threaded Executor"]
RpcServer --> TokioRuntime
end
subgraph NativeClient["Native Client Deployment"]
RpcClient["RpcClient\ntokio-tungstenite"]
TokioClientRuntime["Tokio Runtime\nAsync Tasks"]
RpcClient --> TokioClientRuntime
end
subgraph WASMClient["WASM Browser Client"]
RpcWasmClient["RpcWasmClient\nwasm-bindgen"]
JSHost["JavaScript Host\nWebSocket APIs"]
RpcWasmClient --> JSHost
end
subgraph SharedContract["Shared Service Definition"]
ServiceDef["example-muxio-rpc-service-definition\nAdd, Mult, Echo Methods"]
end
RpcClient <-->|Binary Protocol| RpcServer
RpcWasmClient <-->|Binary Protocol| RpcServer
RpcClient -.depends on.-> ServiceDef
RpcWasmClient -.depends on.-> ServiceDef
RpcServer -.depends on.-> ServiceDef
Deployment Configurations
Muxio supports multiple deployment configurations, all using the same binary protocol:
Key Characteristics:
- Same Service Definitions: All clients and servers depend on the same service definition crate, ensuring API consistency across platforms.
- Binary Protocol Compatibility: Native clients, WASM clients, and servers all communicate using identical binary framing and serialization (via bitcode).
- Platform-Specific Transports: Each platform extension provides its own WebSocket transport implementation—tokio-tungstenite for native, browser APIs for WASM.
Sources: README.md:66-161 Cargo.lock:898-954
Summary
Muxio provides a three-layer architecture for building efficient, cross-platform RPC systems:
- Core Layer (muxio): Binary framing, stream multiplexing, request/response correlation—all using callback-driven, non-async primitives.
- RPC Framework Layer: Service definitions with compile-time method IDs, generic caller/endpoint interfaces, and type-safe abstractions.
- Platform Extensions: Concrete implementations for Tokio (native) and WASM (browser) environments, both implementing the same abstract interfaces.
The system prioritizes low-latency communication (via compact binary protocol), type safety (via shared service definitions), and runtime flexibility (via callback-driven core).
For implementation details, see:
- Core Library (muxio) for framing and multiplexing internals
- RPC Framework for service definition patterns
- Platform Implementations for Tokio and WASM client/server usage
Sources: README.md:1-163 Cargo.toml:1-71 Cargo.lock:830-954
Core Concepts
This document explains the fundamental architectural concepts and design principles that underpin Muxio. It provides a conceptual overview of the binary protocol, multiplexing mechanisms, RPC abstraction layer, and cross-platform capabilities. For detailed implementation specifics of the core library, see Core Library (muxio). For RPC framework details, see RPC Framework. For transport implementation patterns, see Transport Implementations.
Scope and Purpose
Muxio is built on a layered architecture where each layer serves a distinct purpose:
- Binary Framing Layer : Low-level frame encoding/decoding with stream identification
- Stream Multiplexing Layer : Managing multiple concurrent streams over a single connection
- RPC Protocol Layer : Request/response semantics and correlation
- Transport Abstraction Layer : Runtime-agnostic interfaces for different environments
The design prioritizes transport agnosticism, cross-platform compatibility (native + WASM), and type safety through shared service definitions.
Sources: README.md:17-54
Binary Protocol Foundation
Frame-Based Communication
Muxio uses a compact binary framing protocol where all data is transmitted as discrete frames. The RpcFrame struct defines the wire format, and the RpcFrameType enum distinguishes frame purposes.
RpcFrame Structure:
| Field | Type | Purpose |
|---|---|---|
stream_id | u32 | Identifies which logical stream the frame belongs to |
frame_type | RpcFrameType | Enum indicating frame purpose |
payload_bytes | Vec<u8> | Raw bytes specific to the frame type |
RpcFrameType Variants:
| Variant | Binary Value | Purpose |
|---|---|---|
Header | 0x01 | Contains serialized RpcHeader structure |
Data | 0x02 | Contains payload chunk (up to DEFAULT_MAX_CHUNK_SIZE) |
End | 0x03 | Signals stream completion |
Cancel | 0x04 | Aborts stream mid-transmission |
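For orientation, the two tables above map onto roughly the following Rust shapes. This is a reconstruction from the tables for illustration, not the crate's verbatim definitions.

```rust
// Reconstructed from the tables above; illustrative, not the actual source.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[repr(u8)]
enum RpcFrameType {
    Header = 0x01, // carries a serialized RpcHeader
    Data = 0x02,   // carries a payload chunk (up to DEFAULT_MAX_CHUNK_SIZE)
    End = 0x03,    // signals stream completion
    Cancel = 0x04, // aborts the stream mid-transmission
}

#[derive(Debug)]
struct RpcFrame {
    stream_id: u32,           // which logical stream this frame belongs to
    frame_type: RpcFrameType, // what the payload bytes mean
    payload_bytes: Vec<u8>,   // raw bytes specific to the frame type
}

fn main() {
    let frame = RpcFrame {
        stream_id: 1,
        frame_type: RpcFrameType::Data,
        payload_bytes: vec![0xDE, 0xAD],
    };
    println!("{frame:?}");
}
```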
stateDiagram-v2
[*] --> AwaitingHeader
AwaitingHeader --> ReceivingPayload: RpcFrameType::Header
AwaitingHeader --> Complete: RpcFrameType::End
AwaitingHeader --> Cancelled: RpcFrameType::Cancel
ReceivingPayload --> ReceivingPayload: RpcFrameType::Data
ReceivingPayload --> Complete: RpcFrameType::End
ReceivingPayload --> Cancelled: RpcFrameType::Cancel
Complete --> [*]
Cancelled --> [*]
This framing approach enables multiple independent data streams to be interleaved over a single physical connection without interference. The RpcStreamEncoder::write_frame() method emits frames, while RpcStreamDecoder::decode_frame() processes incoming frames.
Diagram: Frame Type State Machine
Sources: README.md:33-34 DRAFT.md:9-24 src/rpc/rpc_internals/rpc_stream_decoder.rs:1-100 src/rpc/rpc_internals/rpc_stream_encoder.rs:1-80
Non-Async Callback Design
The core multiplexing logic in muxio is implemented using synchronous control flow with callbacks rather than async/await. This design decision enables:
- WASM Compatibility : Works in single-threaded JavaScript environments
- Runtime Agnosticism : No dependency on Tokio, async-std, or any specific runtime
- Flexible Integration : Can be wrapped with async interfaces when needed
Sources: README.md:35-36 DRAFT.md:48-52
Stream Multiplexing Architecture
RpcSession: Multi-Stream Management
The RpcSession component manages multiple concurrent logical streams over a single connection. It maintains per-stream state and ensures frames are correctly routed to their destination stream.
Key Methods:
| Method | Signature | Purpose |
|---|---|---|
| allocate_stream_id() | fn(&mut self) -> u32 | Assigns unique identifiers to new streams |
| init_request() | fn(&mut self, header: RpcHeader) -> u32 | Creates new stream, returns stream_id |
| read_bytes() | fn(&mut self, bytes: &[u8], callback: F) | Processes incoming frames, routes to decoders |
| remove_decoder() | fn(&mut self, stream_id: u32) | Cleans up completed/cancelled streams |
Internal State:
- decoders: HashMap<u32, RpcStreamDecoder> - Per-stream decoder instances
- next_stream_id: u32 - Monotonically increasing stream identifier counter
- event_callback: F - Closure invoked for each RpcStreamEvent
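Putting the method table and state list together, a rough structural sketch of the session might look like the following. The generic bound, exact field types, and the emit helper are assumptions for illustration, not the crate's real definition.

```rust
// Rough structural sketch assembled from the tables above; not the actual code.
use std::collections::HashMap;

#[derive(Debug)]
enum RpcStreamEvent {
    Header { stream_id: u32 },
    Data { stream_id: u32, bytes: Vec<u8> },
    End { stream_id: u32 },
    Cancel { stream_id: u32 },
}

struct RpcStreamDecoder; // placeholder for per-stream decoding state

struct RpcSession<F: FnMut(RpcStreamEvent)> {
    decoders: HashMap<u32, RpcStreamDecoder>, // one decoder per active stream
    next_stream_id: u32,                      // monotonically increasing counter
    event_callback: F,                        // invoked for every decoded event
}

impl<F: FnMut(RpcStreamEvent)> RpcSession<F> {
    fn allocate_stream_id(&mut self) -> u32 {
        let id = self.next_stream_id;
        self.next_stream_id += 1;
        id
    }

    // Stand-in for the decoding path: surface one event to the callback.
    fn emit(&mut self, event: RpcStreamEvent) {
        (self.event_callback)(event);
    }

    fn remove_decoder(&mut self, stream_id: u32) {
        // Frees per-stream state once a stream ends or is cancelled.
        self.decoders.remove(&stream_id);
    }
}

fn main() {
    let mut session = RpcSession {
        decoders: HashMap::new(),
        next_stream_id: 1,
        event_callback: |event: RpcStreamEvent| println!("{event:?}"),
    };
    let id = session.allocate_stream_id();
    session.decoders.insert(id, RpcStreamDecoder);
    session.emit(RpcStreamEvent::Header { stream_id: id });
    session.emit(RpcStreamEvent::Data { stream_id: id, bytes: vec![1, 2, 3] });
    session.emit(RpcStreamEvent::End { stream_id: id });
    session.remove_decoder(id);
}
```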
Diagram: RpcSession Stream Routing
graph TB
RawBytes["Raw Bytes from Transport"]
subgraph RpcSession["RpcSession"]
ReadBytes["read_bytes(bytes, callback)"]
DecoderMap["decoders: HashMap<u32, RpcStreamDecoder>"]
Decoder1["RpcStreamDecoder { stream_id: 1 }"]
Decoder2["RpcStreamDecoder { stream_id: 2 }"]
DecoderN["RpcStreamDecoder { stream_id: N }"]
RemoveDecoder["remove_decoder(stream_id)"]
end
subgraph Events["RpcStreamEvent Variants"]
Header["Header { stream_id, header: RpcHeader }"]
Data["Data { stream_id, bytes: Vec<u8> }"]
End["End { stream_id }"]
Cancel["Cancel { stream_id }"]
end
RawBytes --> ReadBytes
ReadBytes --> DecoderMap
DecoderMap --> Decoder1
DecoderMap --> Decoder2
DecoderMap --> DecoderN
Decoder1 --> Header
Decoder2 --> Data
DecoderN --> End
End --> RemoveDecoder
Cancel --> RemoveDecoder
Each RpcStreamDecoder maintains its own state machine via the RpcStreamDecoderState enum:
- AwaitingHeader - Waiting for initial RpcFrameType::Header frame
- ReceivingPayload - Accumulating RpcFrameType::Data frames
- Complete - Stream finalized after RpcFrameType::End
Sources: README.md:29-30 src/rpc/rpc_internals/rpc_session.rs:1-150 src/rpc/rpc_internals/rpc_stream_decoder.rs:1-120
RpcDispatcher: Request/Response Correlation
The RpcDispatcher sits above RpcSession and provides RPC-specific semantics. It maintains a HashMap<u32, PendingRequest> to track in-flight requests.
Key Methods:
| Method | Signature | Purpose |
|---|---|---|
| call() | fn(&mut self, request: RpcRequest, on_response: F) | Initiates RPC call, registers response callback |
| respond() | fn(&mut self, response: RpcResponse) | Sends response back to caller |
| read_bytes() | fn(&mut self, bytes: &[u8]) | Processes incoming frames via RpcSession |
| cancel() | fn(&mut self, request_id: u32) | Aborts pending request |
Internal State:
- session: RpcSession - Underlying multiplexing layer
- pending_requests: HashMap<u32, PendingRequest> - Tracks active calls
- next_request_id: u32 - Monotonically increasing request identifier
- write_bytes_fn: F - Callback to emit frames to transport
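As a hedged illustration of the correlation logic (the insert-then-remove pattern around pending_requests), the following self-contained sketch uses stand-in names rather than the dispatcher's actual fields and signatures.

```rust
// Hedged sketch of request/response correlation; not the actual RpcDispatcher.
use std::collections::HashMap;

type ResponseCallback = Box<dyn FnOnce(Vec<u8>)>;

struct Correlator {
    pending: HashMap<u32, ResponseCallback>, // analogous to pending_requests
    next_request_id: u32,
}

impl Correlator {
    // Register a callback and return the request_id embedded in the outgoing header.
    fn register(&mut self, on_response: ResponseCallback) -> u32 {
        let request_id = self.next_request_id;
        self.next_request_id += 1;
        self.pending.insert(request_id, on_response);
        request_id
    }

    // Called once the response stream for `request_id` has fully arrived.
    fn complete(&mut self, request_id: u32, payload: Vec<u8>) {
        if let Some(callback) = self.pending.remove(&request_id) {
            callback(payload);
        }
    }
}

fn main() {
    let mut correlator = Correlator { pending: HashMap::new(), next_request_id: 0 };
    let id = correlator.register(Box::new(|bytes: Vec<u8>| {
        println!("received {} response bytes", bytes.len());
    }));
    correlator.complete(id, vec![1, 2, 3]);
}
```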
Diagram: RpcDispatcher Call Flow with Code Entities
sequenceDiagram
participant App as "Application Code"
participant Dispatcher as "RpcDispatcher"
participant Session as "RpcSession"
participant Pending as "pending_requests: HashMap"
participant Encoder as "RpcStreamEncoder"
App->>Dispatcher: call(RpcRequest, on_response)
Dispatcher->>Dispatcher: next_request_id++
Dispatcher->>Pending: insert(request_id, PendingRequest)
Dispatcher->>Session: init_request(RpcHeader)
Session->>Session: allocate_stream_id()
Session->>Encoder: write_frame(RpcFrameType::Header)
Encoder->>Encoder: write_frame(RpcFrameType::Data)
Encoder->>Encoder: write_frame(RpcFrameType::End)
Note over App,Encoder: Response Path
Session->>Dispatcher: RpcStreamEvent::Header
Dispatcher->>Dispatcher: Buffer payload in PendingRequest
Session->>Dispatcher: RpcStreamEvent::Data
Session->>Dispatcher: RpcStreamEvent::End
Dispatcher->>Pending: remove(request_id)
Dispatcher->>App: on_response(RpcResponse)
The PendingRequest struct accumulates stream data:
```rust
struct PendingRequest {
    header: RpcHeader,
    accumulated_bytes: Vec<u8>,
    on_response: Box<dyn FnOnce(Result<RpcResponse>)>,
}
```
Sources: README.md:29-30 src/rpc/rpc_dispatcher.rs:1-300 src/rpc/rpc_internals/rpc_stream_encoder.rs:1-100
RPC Protocol Layer
Request and Response Types
Muxio defines structured types for RPC communication. These types are serialized using bitcode for transmission.
RpcHeader Structure:
```rust
pub struct RpcHeader {
    pub msg_type: RpcMsgType,             // Call(0x01) or Response(0x02)
    pub request_id: u32,                  // Correlation identifier
    pub method_id: u32,                   // xxhash of method name
    pub rpc_param_bytes: Option<Vec<u8>>, // Inline params (if small)
    pub metadata_bytes: Vec<u8>,          // Optional auxiliary data
}
```
RpcMsgType Enum:
| Variant | num_enum Value | Purpose |
|---|---|---|
Call | 0x01 | Client-initiated request |
Response | 0x02 | Server-generated response |
RpcRequest Structure:
```rust
pub struct RpcRequest {
    pub header: RpcHeader,            // Contains method_id, request_id
    pub param_bytes: Vec<u8>,         // Full serialized parameters
    pub param_stream_rx: Option<...>, // Optional streaming channel
}
```
RpcResponse Structure:
```rust
pub struct RpcResponse {
    pub request_id: u32,            // Matches original request
    pub result_type: RpcResultType, // Ok(0x01) or Err(0x02)
    pub result_bytes: Vec<u8>,      // Serialized return value or error
}
```
Diagram: Type Relationships
Sources: README.md:33-34 src/rpc/types/rpc_header.rs:1-50 src/rpc/types/rpc_request.rs:1-40 src/rpc/types/rpc_response.rs:1-40
Method Routing
RPC methods are identified by numeric method_id values generated at compile-time using xxhash::xxh32() of the method name. This enables:
- Constant-time lookups: Direct HashMap<u32, Handler> access rather than string comparison
- Type safety: Method IDs are generated from trait definitions shared between client and server
- Compact representation: 4-byte method identifiers instead of variable-length strings
RpcMethodPrebuffered Trait Definition:
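The trait listing itself is not reproduced on this page. The sketch below shows only the general shape implied by this document (a compile-time METHOD_ID plus encode/decode hooks); the associated types, method names, return types, and ID width are assumptions and may differ from the real trait.

```rust
// Hedged sketch of the trait shape described in this document; not the real definition.
// Note: this page mentions both 32-bit and 64-bit method IDs; u64 is assumed here.
pub trait RpcMethodPrebuffered {
    const METHOD_ID: u64;

    type Request;
    type Response;

    fn encode_request(req: &Self::Request) -> Vec<u8>;
    fn decode_request(bytes: &[u8]) -> Self::Request;
    fn encode_response(resp: &Self::Response) -> Vec<u8>;
    fn decode_response(bytes: &[u8]) -> Self::Response;
}
```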
Example Service Definition:
Method Dispatch Flow:
Sources: README.md:50-51 README.md:102-118 extensions/muxio-rpc-service/src/lib.rs:1-100
Transport Agnosticism
Generic Caller Interface
The RpcServiceCallerInterface trait abstracts the client-side transport layer, enabling the same application code to work with multiple implementations.
Trait Definition:
Concrete Implementations:
| Type | Module | Transport | Platform |
|---|---|---|---|
RpcClient | muxio-tokio-rpc-client | tokio-tungstenite WebSocket | Native (Tokio) |
RpcWasmClient | muxio-wasm-rpc-client | wasm-bindgen → JavaScript WebSocket | Browser (WASM) |
Diagram: Trait Implementation and Usage
This abstraction allows writing code once that compiles for multiple targets:
- Native applications: Use RpcClient with tokio::spawn() background tasks
- Browser/WASM: Use RpcWasmClient with the static_muxio_write_bytes() JavaScript bridge
- Custom transports: Implement the trait for specialized needs (e.g., IPC, embedded systems)
Sources: README.md:48-49 extensions/muxio-rpc-service-caller/src/caller_interface.rs:1-100 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:1-50 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:1-50
Generic Endpoint Interface
The RpcServiceEndpointInterface trait abstracts the server-side handler registration. Handlers are stored in a HashMap<u32, Handler> indexed by method_id.
Trait Definition:
RpcContext Structure:
Diagram: Handler Registration and Dispatch
Sources: README.md:98-119 extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:1-150
Cross-Platform Support
Single Codebase, Multiple Targets
Muxio enables true cross-platform RPC by separating concerns:
- Service Definition Layer : Platform-agnostic method definitions shared between client and server
- Transport Layer : Platform-specific implementations (Tokio, WASM) hidden behind traits
- Application Logic : Written once against the trait interface
The same Add::call(), Mult::call(), and Echo::call() method invocations work identically whether called from native code or WASM, as demonstrated in the example application.
Sources: README.md:48-49 README.md:64-161 Diagram 2 from high-level architecture
Type Safety Model
Compile-Time Contract Enforcement
Muxio enforces API contracts at compile-time through shared service definitions. Both client and server depend on the same crate containing method definitions, ensuring:
- Parameter type mismatches are caught at compile-time
- Return type mismatches are caught at compile-time
- Method ID collisions are prevented by the build system
- Serialization format consistency is guaranteed
The flow for type-safe RPC:
Any mismatch in the shared trait definition causes a compilation error, eliminating an entire class of runtime errors common in dynamically-typed RPC systems.
Sources: README.md:50-51 README.md:102-118
Serialization Layer
Muxio uses bitcode for efficient binary serialization, but the design is format-agnostic. The RpcMethodPrebuffered trait defines encode/decode methods, allowing alternative serialization libraries to be substituted if needed:
- Bitcode : Default choice for compact binary format
- Bincode : Alternative binary format
- MessagePack : Cross-language compatibility
- Custom formats : Full control over wire protocol
The key requirement is that both client and server use the same serialization implementation for a given method.
Sources: README.md:33-34
Summary
Muxio’s core concepts revolve around:
- Binary Framing: Efficient, low-overhead frame-based protocol
- Stream Multiplexing: Multiple concurrent streams via RpcSession and per-stream decoders
- Request Correlation: Matching responses to requests via RpcDispatcher
- Transport Abstraction: Generic traits (RpcServiceCallerInterface, RpcServiceEndpointInterface) enable multiple implementations
- Non-Async Core: Callback-driven design supports WASM and multiple runtimes
- Type Safety: Shared service definitions provide compile-time contract enforcement
- Cross-Platform: Single codebase runs on native (Tokio) and browser (WASM) clients
These concepts are elaborated in subsequent sections: Design Philosophy covers the reasoning behind these choices, and Layered Architecture provides detailed implementation patterns.
Sources: README.md:17-54 DRAFT.md:9-52 All high-level architecture diagrams
Design Philosophy
Purpose and Scope
This document describes the fundamental design principles that guide the architecture and implementation of the rust-muxio framework. It covers four core tenets: runtime-agnostic architecture through a non-async callback-driven model, binary protocol with schemaless RPC design, cross-platform compatibility spanning native and WASM environments, and bidirectional symmetric communication patterns.
For details on how these principles manifest in the layered architecture, see Layered Architecture. For concrete platform implementations, see Platform Implementations.
Runtime Agnosticism: The Non-Async Core
The Callback-Driven Model
The muxio core library is deliberately implemented without async/await primitives. Instead, it uses a synchronous control flow with callback functions to handle events. This architectural choice enables the same core logic to function across fundamentally different runtime environments without modification.
graph TB
subgraph CoreLibrary["muxio Core Library"]
RpcDispatcher["RpcDispatcher"]
RpcSession["RpcSession"]
FrameDecoder["FrameDecoder"]
end
subgraph Callbacks["Callback Interfaces"]
OnRead["on_read_bytes()\nInvoked when data arrives"]
OnFrame["Frame callbacks\nInvoked per decoded frame"]
OnResponse["Response handlers\nInvoked on RPC completion"]
end
subgraph Runtimes["Compatible Runtimes"]
Tokio["Tokio Multi-threaded\nasync runtime"]
StdThread["std::thread\nSingle-threaded or custom"]
WASM["WASM Browser\nJavaScript event loop"]
end
RpcDispatcher -->|registers| OnResponse
RpcSession -->|registers| OnFrame
FrameDecoder -->|invokes| OnFrame
Tokio -->|drives| CoreLibrary
StdThread -->|drives| CoreLibrary
WASM -->|drives| CoreLibrary
CoreLibrary -->|invokes| Callbacks
The key insight is that the core library never blocks, never spawns tasks, and never assumes an async executor. Instead:
- Data ingestion occurs through explicit read_bytes() calls src/rpc/rpc_dispatcher.rs:265-389
- Event processing happens synchronously within the call stack
- Downstream actions are delegated via callbacks, allowing the caller to decide whether to spawn async tasks, queue work, or handle synchronously
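As a hedged sketch of how a transport might drive this model, the snippet below uses a placeholder Dispatcher type and a hypothetical read_bytes signature; it demonstrates the pattern, not muxio's real API.

```rust
// Hypothetical integration sketch; Dispatcher and read_bytes are placeholders
// for the pattern described above, not the crate's actual types.
struct Dispatcher;

impl Dispatcher {
    // Processes a chunk of transport bytes synchronously and invokes the supplied
    // callback for every event "decoded" from those bytes.
    fn read_bytes(&mut self, bytes: &[u8], mut on_event: impl FnMut(&str)) {
        on_event(&format!("decoded {} bytes", bytes.len()));
    }
}

fn main() {
    let mut dispatcher = Dispatcher;
    // The caller (a Tokio read task, a WASM bridge, or a plain std::thread loop)
    // decides when to feed bytes in and what to do with the resulting events.
    for incoming in [&b"frame-1"[..], &b"frame-2"[..]] {
        dispatcher.read_bytes(incoming, |event| println!("{event}"));
    }
}
```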
Sources: DRAFT.md:48-52 README.md:35-36
Benefits of Runtime Agnosticism
| Benefit | Description | Code Reference |
|---|---|---|
| WASM Compatibility | No reliance on thread-based async executors that don’t exist in browser environments | extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:15-140 |
| Flexible Integration | Core can be wrapped in TokioMutex, StdMutex, or RefCell depending on context | extensions/muxio-tokio-rpc-client/src/rpc_client.rs:30-38 |
| Deterministic Execution | No hidden task spawning or scheduling; control flow is explicit | src/rpc/rpc_dispatcher.rs:1-47 |
| Testing Simplicity | Can test without async harness; unit tests run synchronously | src/rpc/rpc_dispatcher.rs:391-490 |
Sources: DRAFT.md:48-52 README.md:35-36 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:1-32
Binary Protocol and Schemaless Design
Low-Overhead Binary Framing
The muxio protocol operates entirely on raw byte sequences. Unlike text-based protocols (JSON, XML), every layer—from frame headers to RPC payloads—is transmitted as compact binary data. This design decision prioritizes performance and minimizes CPU overhead.
Sources: README.md:33-34 README.md:46-47
graph LR
subgraph Application["Application Layer"]
RustStruct["Rust Struct\nAdd{a: f64, b: f64}"]
end
subgraph Serialization["Serialization Layer"]
Bitcode["bitcode::encode()\nCompact binary"]
end
subgraph RPC["RPC Protocol Layer"]
RpcRequest["RpcRequest\nmethod_id: u64\nparams: Vec<u8>"]
MethodID["xxhash(method_name)\nCompile-time constant"]
end
subgraph Framing["Framing Layer"]
FrameHeader["Frame Header\nstream_id: u64\nflags: u8\npayload_len: u32"]
FramePayload["Binary Payload\nRaw bytes"]
end
RustStruct -->|serialize| Bitcode
Bitcode -->|Vec<u8>| RpcRequest
RpcRequest -->|hash name| MethodID
RpcRequest -->|encode| FramePayload
FramePayload -->|wrap| FrameHeader
Schemaless RPC with Type Safety
The RPC layer is “schemaless” in that the protocol itself makes no assumptions about payload structure. Method IDs are 64-bit hashes computed at compile time via xxhash, and payloads are opaque byte vectors. However, type safety is enforced through shared service definitions:
This design achieves:
- Compile-time verification : Mismatched types between client and server result in compilation errors, not runtime failures
- Zero schema overhead : No runtime schema validation or parsing
- Flexibility : Different services can use different serialization formats (bitcode, bincode, protobuf, etc.) as long as both sides agree
Sources: README.md:50-51 extensions/muxio-rpc-service/src/prebuffered/mod.rs:1-50
Performance Characteristics
| Aspect | Text-Based (JSON) | Binary (muxio) |
|---|---|---|
| Parsing Overhead | Parse UTF-8, validate syntax, construct AST | Direct byte copying, minimal validation |
| Payload Size | Verbose keys, quoted strings, escape sequences | Compact type encodings, no metadata |
| CPU Usage | High for serialization/deserialization | Low, mainly memcpy operations |
| Latency | Higher due to parsing | Lower due to binary processing |
Sources: README.md:46-47 DRAFT.md:11
Cross-Platform Compatibility
Platform-Specific Extensions on a Shared Core
The muxio architecture separates platform-agnostic logic from platform-specific implementations. The core library (muxio) contains all RPC logic, multiplexing, and framing. Platform extensions provide transport bindings:
Sources: README.md:37-41 README.md:48-49
graph TB
subgraph Shared["Platform-Agnostic Core"]
MuxioCore["muxio crate\nRpcDispatcher, RpcSession, FrameDecoder"]
RpcService["muxio-rpc-service\nRpcMethodPrebuffered trait"]
RpcCaller["muxio-rpc-service-caller\nRpcServiceCallerInterface"]
RpcEndpoint["muxio-rpc-service-endpoint\nRpcServiceEndpointInterface"]
end
subgraph Native["Native Platform (Tokio)"]
TokioClient["muxio-tokio-rpc-client\nRpcClient\ntokio-tungstenite"]
TokioServer["muxio-tokio-rpc-server\nRpcServer\naxum + tokio-tungstenite"]
end
subgraph Browser["Browser Platform (WASM)"]
WasmClient["muxio-wasm-rpc-client\nRpcWasmClient\nwasm-bindgen + js-sys"]
JSBridge["JavaScript Bridge\nstatic_muxio_write_bytes()"]
end
subgraph Application["Application Code"]
ServiceDef["example-muxio-rpc-service-definition\nShared service contracts"]
AppLogic["Application Logic\nSame code for both platforms"]
end
MuxioCore --> RpcCaller
MuxioCore --> RpcEndpoint
RpcService --> RpcCaller
RpcService --> RpcEndpoint
RpcCaller --> TokioClient
RpcCaller --> WasmClient
RpcEndpoint --> TokioServer
TokioClient --> AppLogic
WasmClient --> AppLogic
ServiceDef --> AppLogic
ServiceDef --> RpcService
WasmClient --> JSBridge
Write Once, Deploy Everywhere
Because the RpcServiceCallerInterface extensions/muxio-rpc-service-caller/src/caller_interface.rs:1-97 abstracts the underlying transport, application code that calls RPC methods is platform-independent:
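The original code listing is omitted here. The following self-contained sketch conveys the same idea under stated assumptions: every item (the Caller trait, LoopbackCaller, and the toy encoding) is a stand-in for the real RpcServiceCallerInterface and client types, not their actual API.

```rust
// Self-contained sketch of the "write once" idea; every name here is a stand-in.
struct AddRequest {
    a: f64,
    b: f64,
}

// Stand-in for RpcServiceCallerInterface: the only thing application code sees.
trait Caller {
    fn call_prebuffered(&self, method_id: u64, params: Vec<u8>) -> Vec<u8>;
}

// Platform-independent application logic: generic over the caller trait, so the
// same function compiles against a Tokio client, a WASM client, or a test double.
fn add_numbers<C: Caller>(client: &C, req: AddRequest) -> f64 {
    let params = format!("{},{}", req.a, req.b).into_bytes(); // toy encoding
    let response = client.call_prebuffered(0xADD, params);
    String::from_utf8(response).unwrap().parse().unwrap()
}

// A toy loopback "transport" standing in for a real client implementation.
struct LoopbackCaller;

impl Caller for LoopbackCaller {
    fn call_prebuffered(&self, _method_id: u64, params: Vec<u8>) -> Vec<u8> {
        let text = String::from_utf8(params).unwrap();
        let (a, b) = text.split_once(',').unwrap();
        let sum = a.parse::<f64>().unwrap() + b.parse::<f64>().unwrap();
        sum.to_string().into_bytes()
    }
}

fn main() {
    let sum = add_numbers(&LoopbackCaller, AddRequest { a: 2.0, b: 3.0 });
    assert_eq!(sum, 5.0);
}
```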
The same pattern applies to server-side handlers via RpcServiceEndpointInterface extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:1-137. Handlers registered on the endpoint work identically whether the server is Tokio-based or hypothetically implemented for another runtime.
Sources: README.md:48-49 Cargo.toml:20-41
WASM-Specific Considerations
The WASM client (muxio-wasm-rpc-client) demonstrates how the callback-driven core enables browser integration:
- No native async runtime: WASM doesn’t have threads or a native async executor. The callback model works directly with JavaScript’s event loop.
- Static singleton pattern: Uses thread_local! with RefCell to maintain a global client reference extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:10-12
- JavaScript bridge: Exposes a single function static_muxio_write_bytes() that JavaScript calls when WebSocket data arrives extensions/muxio-wasm-rpc-client/src/static_lib/mod.rs:63-75
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:1-140 extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:1-49
Bidirectional and Symmetric Communication
Client-Server Symmetry
Unlike traditional RPC systems where clients can only call servers, muxio treats both sides symmetrically. Every connection has:
- An RpcDispatcher src/rpc/rpc_dispatcher.rs:19-47 for initiating calls and correlating responses
- An RpcServiceEndpoint extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:1-25 for registering handlers and processing incoming requests
This enables:
- Server-initiated calls : Servers can invoke methods on connected clients
- Bidirectional streaming : Either side can send data streams
- Event-driven architectures : Push notifications, real-time updates, etc.
Implementation Details:
- Both RpcClient extensions/muxio-tokio-rpc-client/src/rpc_client.rs:30-38 and RpcServer extensions/muxio-tokio-rpc-server/src/rpc_server.rs:41-59 hold both a dispatcher and an endpoint
- The RpcServiceCallerInterface extensions/muxio-rpc-service-caller/src/caller_interface.rs:1-35 and RpcServiceEndpointInterface extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:1-25 are orthogonal traits that can coexist on the same connection
Sources: DRAFT.md:25 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:30-38 extensions/muxio-tokio-rpc-server/src/rpc_server.rs:41-59
Streaming, Interleaving, and Cancellation
Support for Large Payloads
The multiplexing layer src/rpc/rpc_internals/rpc_session.rs:16-21 enables streaming operations:
- Chunked transmission: Large payloads are automatically split into frames based on DEFAULT_MAX_CHUNK_SIZE src/rpc/rpc_internals/constants.rs:1-5
- Interleaved streams: Multiple concurrent RPCs can transmit simultaneously without head-of-line blocking
- Progressive decoding: Frames are decoded incrementally using per-stream FrameDecoder instances src/rpc/rpc_internals/rpc_session.rs:52-120
Sources: src/rpc/rpc_internals/rpc_session.rs:16-21 src/rpc/rpc_internals/constants.rs:1-5 DRAFT.md:19-21
Cancellation Support
The system supports mid-stream cancellation:
- Client-side: Cancel by dropping the response future or explicitly signaling cancellation (implementation-dependent)
- Protocol-level: Stream decoders are removed when End or Error events are received src/rpc/rpc_internals/rpc_session.rs:52-120
- Resource cleanup: Pending requests are cleared from the dispatcher’s hashmap when completed or errored src/rpc/rpc_dispatcher.rs:265-389
Sources: DRAFT.md:21 src/rpc/rpc_dispatcher.rs:265-389 src/rpc/rpc_internals/rpc_session.rs:52-120
Design Trade-offs and Prioritization
What muxio Prioritizes
| Priority | Rationale | Implementation |
|---|---|---|
| Performance | Low latency and minimal overhead for high-throughput applications | Binary protocol, zero-copy where possible, compact serialization |
| Cross-platform | Same code works on native and WASM | Non-async core, callback-driven model |
| Type safety | Catch errors at compile time, not runtime | Shared service definitions, trait-based contracts |
| Simplicity | Easy to reason about, minimal magic | Explicit control flow, no hidden task spawning |
What muxio Does Not Prioritize
- Human readability of wire protocol: The binary format is not human-inspectable (unlike JSON). Debugging requires tooling.
- Built-in authentication/encryption: Transport security (TLS, authentication) is delegated to the transport layer (e.g., wss:// WebSockets).
- Schema evolution: No built-in versioning. Breaking changes require careful service definition updates.
- Automatic code generation: Service definitions are written manually using traits. No macros or code generation (by design, for transparency).
Sources: README.md:42-53 DRAFT.md:9-23
Summary
The muxio design philosophy centers on four pillars:
- Runtime agnosticism via non-async, callback-driven primitives
- Binary protocol with schemaless flexibility and compile-time type safety
- Cross-platform compatibility spanning native (Tokio) and WASM environments
- Bidirectional symmetry enabling client-server parity and streaming operations
These choices enable muxio to serve as a high-performance, flexible foundation for distributed systems that require low latency, cross-platform deployment, and type-safe communication.
Sources: README.md:42-53 DRAFT.md:9-52 Cargo.toml:20-41
Workspace Structure
Purpose and Scope
This document details the Cargo workspace organization of the rust-muxio repository. It catalogs all workspace member crates, their locations in the directory tree, their roles within the overall system architecture, and their dependency relationships. For information about the design philosophy and layered architecture principles, see Design Philosophy and Layered Architecture.
Workspace Overview
The rust-muxio repository is organized as a Cargo workspace containing 11 member crates. The workspace is configured with resolver version 2 and defines shared metadata (version, authors, license, repository) inherited by all member crates.
Sources:
graph TB
subgraph "Root Directory"
ROOT["muxio (Root Crate)"]
end
subgraph "extensions/"
EXT_TEST["muxio-ext-test"]
RPC_SERVICE["muxio-rpc-service"]
RPC_CALLER["muxio-rpc-service-caller"]
RPC_ENDPOINT["muxio-rpc-service-endpoint"]
TOKIO_SERVER["muxio-tokio-rpc-server"]
TOKIO_CLIENT["muxio-tokio-rpc-client"]
WASM_CLIENT["muxio-wasm-rpc-client"]
end
subgraph "examples/"
EXAMPLE_APP["example-muxio-ws-rpc-app"]
EXAMPLE_DEF["example-muxio-rpc-service-definition"]
end
ROOT -->|extends| RPC_SERVICE
RPC_SERVICE -->|extends| RPC_CALLER
RPC_SERVICE -->|extends| RPC_ENDPOINT
RPC_CALLER -->|implements| TOKIO_CLIENT
RPC_CALLER -->|implements| WASM_CLIENT
RPC_ENDPOINT -->|implements| TOKIO_SERVER
EXAMPLE_DEF -->|uses| RPC_SERVICE
EXAMPLE_APP -->|demonstrates| EXAMPLE_DEF
Workspace Member Listing
The workspace members are declared in the root Cargo.toml and organized into three categories:
| Crate Name | Directory Path | Category | Primary Purpose |
|---|---|---|---|
muxio | . | Core | Binary framing protocol and stream multiplexing |
muxio-rpc-service | extensions/muxio-rpc-service | RPC Framework | Service trait definitions and method ID generation |
muxio-rpc-service-caller | extensions/muxio-rpc-service-caller | RPC Framework | Client-side RPC invocation interface |
muxio-rpc-service-endpoint | extensions/muxio-rpc-service-endpoint | RPC Framework | Server-side handler registration and dispatch |
muxio-tokio-rpc-server | extensions/muxio-tokio-rpc-server | Platform Extension | Tokio-based WebSocket RPC server |
muxio-tokio-rpc-client | extensions/muxio-tokio-rpc-client | Platform Extension | Tokio-based WebSocket RPC client |
muxio-wasm-rpc-client | extensions/muxio-wasm-rpc-client | Platform Extension | WASM browser-based RPC client |
muxio-ext-test | extensions/muxio-ext-test | Testing | Integration test suite |
example-muxio-rpc-service-definition | examples/example-muxio-rpc-service-definition | Example | Shared service definition for examples |
example-muxio-ws-rpc-app | examples/example-muxio-ws-rpc-app | Example | Demonstration application |
Sources:
Core Library: muxio
The root crate muxio (located at repository root .) provides the foundational binary framing protocol and stream multiplexing primitives. This crate is runtime-agnostic and has minimal dependencies.
Key Components:
- RpcDispatcher - Request correlation and response routing
- RpcSession - Stream multiplexing and per-stream decoders
- RpcStreamEncoder / RpcStreamDecoder - Frame encoding/decoding
- RpcRequest / RpcResponse / RpcHeader - Core message types
Direct Dependencies:
- bitcode - Binary serialization
- chrono - Timestamp generation
- once_cell - Lazy static initialization
- tracing - Structured logging
Dev Dependencies:
- rand - Random data generation for tests
- tokio - Async runtime for tests
Sources:
RPC Framework Extensions
muxio-rpc-service
Located at extensions/muxio-rpc-service, this crate provides trait definitions for RPC services and compile-time method ID generation.
Key Exports:
- RpcMethodPrebuffered trait - Defines prebuffered RPC method signatures
- DynamicChannelReceiver / DynamicChannelSender - Channel abstractions for streaming
- Method ID constants via xxhash-rust
Dependencies:
- muxio - Core framing and session management
- bitcode - Parameter/response serialization
- xxhash-rust - Compile-time method ID generation
- num_enum - Message type discrimination
- async-trait - Async trait support
- futures - Channel and stream utilities
Sources:
muxio-rpc-service-caller
Located at extensions/muxio-rpc-service-caller, this crate defines the RpcServiceCallerInterface trait for platform-agnostic client-side RPC invocation.
Key Exports:
- RpcServiceCallerInterface trait - Abstract client interface
- Helper functions for invoking prebuffered and streaming methods
Dependencies:
- muxio - Core session and dispatcher
- muxio-rpc-service - Service definitions
- async-trait - Trait async support
- futures - Future combinators
- tracing - Instrumentation
Sources:
muxio-rpc-service-endpoint
Located at extensions/muxio-rpc-service-endpoint, this crate defines the RpcServiceEndpointInterface trait for platform-agnostic server-side handler registration and request processing.
Key Exports:
- RpcServiceEndpointInterface trait - Abstract server interface
- Handler registration and dispatch utilities
Dependencies:
- muxio - Core dispatcher and session
- muxio-rpc-service - Service trait definitions
- muxio-rpc-service-caller - For bidirectional RPC (server-to-client calls)
- bitcode - Request/response deserialization
- async-trait - Trait async support
- futures - Channel management
Sources:
Platform-Specific Extensions
muxio-tokio-rpc-server
Located at extensions/muxio-tokio-rpc-server, this crate provides a Tokio-based WebSocket RPC server implementation using Axum and tokio-tungstenite.
Key Exports:
- RpcServer struct - Main server implementation
- Axum WebSocket handler integration
- Connection lifecycle management
Dependencies:
- muxio - Core session primitives
- muxio-rpc-service - Service definitions
- muxio-rpc-service-caller - For bidirectional communication
- muxio-rpc-service-endpoint - Server endpoint interface
- axum - HTTP/WebSocket framework
- tokio - Async runtime
- tokio-tungstenite - WebSocket transport
- bytes - Byte buffer utilities
- futures-util - Stream combinators
- async-trait - Async trait implementations
Sources:
muxio-tokio-rpc-client
Located at extensions/muxio-tokio-rpc-client, this crate provides a Tokio-based WebSocket RPC client with Arc-based lifecycle management and background task coordination.
Key Exports:
- RpcClient struct - Main client implementation
- Connection state tracking with RpcClientConnectionState
- Arc-based shared ownership model
Dependencies:
- muxio - Core dispatcher and session
- muxio-rpc-service - Service definitions
- muxio-rpc-service-caller - Client interface implementation
- muxio-rpc-service-endpoint - For bidirectional communication
- axum - (Used for shared types)
- tokio - Async runtime
- tokio-tungstenite - WebSocket transport
- bytes - Byte buffer utilities
- futures / futures-util - Async combinators
- async-trait - Trait async support
Sources:
muxio-wasm-rpc-client
Located at extensions/muxio-wasm-rpc-client, this crate provides a WASM-compatible RPC client for browser environments using wasm-bindgen and JavaScript interop.
Key Exports:
- RpcWasmClient struct - WASM client implementation
- MUXIO_STATIC_RPC_CLIENT_REF - Thread-local static client reference
- JavaScript bridge functions (static_muxio_write_bytes, etc.)
Dependencies:
- muxio - Core framing and session
- muxio-rpc-service - Service definitions
- muxio-rpc-service-caller - Client interface
- muxio-rpc-service-endpoint - For bidirectional communication
- wasm-bindgen - JavaScript FFI
- js-sys - JavaScript API bindings
- wasm-bindgen-futures - Async/await in WASM
- futures / futures-util - Future utilities
- async-trait - Trait support
Sources:
Testing Infrastructure
muxio-ext-test
Located at extensions/muxio-ext-test, this integration test crate validates end-to-end functionality across client and server implementations.
Test Coverage:
- Native client to native server communication
- Service definition validation
- Error handling and edge cases
Dependencies:
- muxio-rpc-service - Service trait testing
- muxio-rpc-service-caller - Client interface testing
- muxio-rpc-service-endpoint - Server endpoint testing
- muxio-tokio-rpc-client - Client implementation testing
- muxio-tokio-rpc-server - Server implementation testing
- example-muxio-rpc-service-definition - Test service definitions
- tokio - Async test runtime
- tracing / tracing-subscriber - Test instrumentation
- bytemuck - Binary data utilities
Sources:
Example Applications
example-muxio-rpc-service-definition
Located at examples/example-muxio-rpc-service-definition, this crate defines shared RPC service contracts used across example applications.
Service Definitions:
- Basic arithmetic operations (Add, Multiply)
- Echo service
- Demonstrates RpcMethodPrebuffered trait implementation
Dependencies:
- muxio-rpc-service - Service trait definitions
- bitcode - Serialization for parameters and responses
Sources:
example-muxio-ws-rpc-app
Located at examples/example-muxio-ws-rpc-app, this demonstration application shows complete client-server setup with WebSocket transport.
Demonstrates:
- Server instantiation with RpcServer
- Client connection with RpcClient
- Service handler registration
- RPC method invocation
- Benchmarking with criterion
Dependencies:
- example-muxio-rpc-service-definition - Shared service contracts
- muxio - Core primitives
- muxio-rpc-service-caller - Client calling
- muxio-tokio-rpc-client - Client implementation
- muxio-tokio-rpc-server - Server implementation
- tokio - Async runtime
- async-trait - Handler trait implementation
- criterion - Performance benchmarking
- futures - Async utilities
- tracing / tracing-subscriber - Application logging
- doc-comment - Documentation tests
Sources:
Workspace Dependency Graph
Sources:
Directory Structure to Code Entity Mapping
Sources:
Shared Workspace Configuration
All workspace member crates inherit common metadata from the workspace-level configuration:
| Property | Value | Purpose |
|---|---|---|
version | 0.10.0-alpha | Synchronized versioning across all crates |
edition | 2024 | Rust edition (preview edition) |
authors | Jeremy Harris | Package authorship |
repository | https://github.com/jzombie/rust-muxio | Source location |
license | Apache-2.0 | Licensing terms |
publish | true | crates.io publication enabled |
resolver | 2 | Cargo feature resolver version |
Workspace-wide Third-Party Dependencies:
The workspace defines shared third-party dependency versions to ensure consistency:
| Dependency | Version | Used By |
|---|---|---|
async-trait | 0.1.88 | RPC service traits |
axum | 0.8.4 | Server framework |
bitcode | 0.6.6 | Serialization |
tokio | 1.45.1 | Async runtime |
tokio-tungstenite | 0.26.2 | WebSocket transport |
tracing | 0.1.41 | Logging infrastructure |
tracing-subscriber | 0.3.20 | Log output formatting |
xxhash-rust | 0.8.15 | Method ID hashing |
num_enum | 0.7.3 | Enum discriminants |
futures | 0.3.31 | Async utilities |
Sources:
Crate Size and Complexity Metrics
Based on Cargo.lock dependency counts:
| Crate | Direct Dependencies | Purpose Complexity |
|---|---|---|
muxio | 5 | Low - Core primitives only |
muxio-rpc-service | 6 | Medium - Trait definitions and hashing |
muxio-rpc-service-caller | 5 | Low - Interface abstraction |
muxio-rpc-service-endpoint | 6 | Low - Interface abstraction |
muxio-tokio-rpc-server | 10 | High - Full server stack |
muxio-tokio-rpc-client | 11 | High - Full client stack |
muxio-wasm-rpc-client | 11 | High - WASM bridge complexity |
muxio-ext-test | 8 | Medium - Integration testing |
example-muxio-rpc-service-definition | 2 | Low - Simple definitions |
example-muxio-ws-rpc-app | 9 | Medium - Demonstration code |
Sources:
Layered Architecture
Purpose and Scope
This document explains the layered transport kit design of the muxio system, describing how each layer builds upon the previous one to provide progressively higher-level abstractions. The architecture separates concerns into six distinct layers: binary framing, stream multiplexing, RPC protocol, RPC abstractions, service definitions, and platform extensions.
For information about the design principles that motivated this architecture, see Design Philosophy. For implementation details of individual layers, see Core Library (muxio) and RPC Framework.
Architectural Overview
The muxio system implements a layered transport kit where each layer has a well-defined responsibility and interacts only with adjacent layers. This separation enables runtime-agnostic operation and cross-platform deployment.
Sources:
graph TB
subgraph "Layer 6: Application Code"
APP["User Application\nBusiness Logic"]
end
subgraph "Layer 5: Service Definition Layer"
SD["RpcMethodPrebuffered Traits\nCompile-Time Method IDs\nShared Type Contracts"]
end
subgraph "Layer 4: RPC Abstraction Layer"
CALLER["RpcServiceCallerInterface\nPlatform-Agnostic Client API"]
ENDPOINT["RpcServiceEndpointInterface\nPlatform-Agnostic Server API"]
end
subgraph "Layer 3: RPC Protocol Layer"
DISPATCHER["RpcDispatcher\nRequest Correlation\nResponse Routing"]
end
subgraph "Layer 2: Stream Multiplexing Layer"
SESSION["RpcSession\nStream ID Allocation\nPer-Stream Decoders"]
end
subgraph "Layer 1: Binary Framing Layer"
ENCODER["RpcStreamEncoder\nFrame Construction"]
DECODER["RpcStreamDecoder\nFrame Reconstruction"]
end
subgraph "Layer 0: Platform Extensions"
TOKIO_CLIENT["RpcClient\ntokio + tokio-tungstenite"]
TOKIO_SERVER["RpcServer\naxum + tokio-tungstenite"]
WASM_CLIENT["RpcWasmClient\nwasm-bindgen + js-sys"]
end
APP --> SD
SD --> CALLER
SD --> ENDPOINT
CALLER --> DISPATCHER
ENDPOINT --> DISPATCHER
DISPATCHER --> SESSION
SESSION --> ENCODER
SESSION --> DECODER
ENCODER --> TOKIO_CLIENT
ENCODER --> TOKIO_SERVER
ENCODER --> WASM_CLIENT
DECODER --> TOKIO_CLIENT
DECODER --> TOKIO_SERVER
DECODER --> WASM_CLIENT
- README.md:25-41
- DRAFT.md:49-52
- Diagram 2 from high-level architecture
Layer 1: Binary Framing Protocol
The binary framing layer defines the wire format for all data transmission. It provides discrete message boundaries over byte streams using a compact header structure.
Frame Structure
Each frame consists of a fixed-size header followed by a variable-length payload chunk:
| Field | Type | Size | Description |
|---|---|---|---|
stream_id | u32 | 4 bytes | Identifies which logical stream this frame belongs to |
flags | u8 | 1 byte | Control flags (Start, End, Error, Cancelation) |
payload | [u8] | Variable | Binary data chunk |
The frame header is defined in RpcHeader and serialized using bytemuck for zero-copy conversion.
Frame Types
Frames are categorized by their flags field, encoded using num_enum:
- Start Frame : First frame of a stream, initializes decoder state
- Data Frame : Intermediate payload chunk
- End Frame : Final frame, triggers stream completion
- Error Frame : Signals stream-level error
- Cancelation Frame : Requests stream termination
Encoding and Decoding
The RpcStreamEncoder serializes data into frames with automatic chunking based on DEFAULT_MAX_CHUNK_SIZE. The RpcStreamDecoder reconstructs the original message from potentially out-of-order frames.
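A simplified sketch of the chunking step follows; the constant's value and the tuple-based "frame" are assumptions for illustration, since the real encoder also writes headers and flags.

```rust
// Simplified chunking sketch; the real encoder emits full frames, not tuples.
const DEFAULT_MAX_CHUNK_SIZE: usize = 64 * 1024; // assumed value for illustration

fn chunk_payload(stream_id: u32, payload: &[u8]) -> Vec<(u32, Vec<u8>)> {
    payload
        .chunks(DEFAULT_MAX_CHUNK_SIZE)
        .map(|chunk| (stream_id, chunk.to_vec()))
        .collect()
}

fn main() {
    let payload = vec![0u8; 150 * 1024];
    let frames = chunk_payload(7, &payload);
    // 150 KiB splits into three chunks: 64 KiB, 64 KiB, and 22 KiB.
    assert_eq!(frames.len(), 3);
}
```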
Sources:
graph LR
INPUT["Input Bytes"]
CHUNK["Chunk into\nDEFAULT_MAX_CHUNK_SIZE"]
HEADER["Add RpcHeader\nstream_id + flags"]
FRAME["Binary Frame"]
INPUT --> CHUNK
CHUNK --> HEADER
HEADER --> FRAME
RECV["Received Frames"]
DEMUX["Demultiplex by\nstream_id"]
BUFFER["Reassemble Chunks"]
OUTPUT["Output Bytes"]
RECV --> DEMUX
DEMUX --> BUFFER
BUFFER --> OUTPUT
- src/rpc/rpc_internals/rpc_types.rs (RpcHeader definition)
- src/rpc/rpc_internals/rpc_stream_encoder.rs (frame encoding)
- src/rpc/rpc_internals/rpc_stream_decoder.rs (frame decoding)
- README.md:33-34
Layer 2: Stream Multiplexing Layer
The stream multiplexing layer, implemented by RpcSession, manages multiple concurrent logical streams over a single connection. Each stream has independent state and lifecycle.
RpcSession Responsibilities
The RpcSession struct provides:
- Stream ID Allocation: Monotonically increasing u32 identifiers
- Per-Stream Decoders: HashMap<u32, RpcStreamDecoder> for concurrent reassembly
- Frame Muxing: Interleaving frames from multiple streams
- Frame Demuxing: Routing incoming frames to the correct decoder
- Stream Lifecycle: Automatic decoder cleanup on End/Error events
Stream Lifecycle Management
The session maintains a decoder for each active stream in the decoders field. When a stream completes (End/Error/Cancelation), its decoder is removed from the map, freeing resources.
Concurrent Stream Operations
Multiple streams can be active simultaneously:
Sources:
- src/rpc/rpc_internals/rpc_session.rs:16-21
- src/rpc/rpc_internals/rpc_session.rs:52-120
- README.md:29-30
Layer 3: RPC Protocol Layer
The RPC protocol layer, implemented by RpcDispatcher, adds request/response semantics on top of the stream multiplexer. It correlates requests with responses using unique request IDs.
RpcDispatcher Structure
Request Correlation
The dispatcher assigns each RPC call a unique request_id:
- Client calls RpcDispatcher::call(RpcRequest)
- Dispatcher assigns monotonic request_id from next_request_id
- Request is serialized with embedded request_id
- Dispatcher stores callback in pending_requests map
- Server processes request and returns RpcResponse with same request_id
- Dispatcher looks up callback in pending_requests and invokes it
- Entry is removed from pending_requests
RPC Message Types
The protocol uses num_enum to encode message types in frame payloads:
| Message Type | Direction | Contains |
|---|---|---|
RpcRequest | Client → Server | request_id, method_id, params |
RpcResponse | Server → Client | request_id, result or error |
RpcStreamChunk | Bidirectional | request_id, chunk_data |
RpcStreamEnd | Bidirectional | request_id |
Request/Response Flow with Code Entities
Sources:
Layer 4: RPC Abstraction Layer
The RPC abstraction layer defines platform-agnostic traits that enable the same application code to work across different runtime environments.
RpcServiceCallerInterface
The RpcServiceCallerInterface trait abstracts client-side RPC invocation:
This trait is implemented by:
- RpcClient (Tokio-based native client)
- RpcWasmClient (WASM browser client)
RpcServiceEndpointInterface
The RpcServiceEndpointInterface trait abstracts server-side handler registration:
Platform Abstraction Benefits
| Aspect | Implementation Detail | Abstracted By |
|---|---|---|
| Transport | WebSocket, TCP, Browser APIs | Caller/Endpoint traits |
| Runtime | Tokio, WASM event loop, std::thread | Async trait methods |
| Serialization | bitcode encoding/decoding | Vec<u8> byte interface |
| Error Handling | Platform-specific errors | RpcServiceError enum |
Sources:
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:65-137
- extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:65-137
- README.md:48-49
Layer 5: Service Definition Layer
The service definition layer provides compile-time type safety through shared trait definitions between client and server.
RpcMethodPrebuffered Trait
Service methods are defined using the RpcMethodPrebuffered trait:
Compile-Time Method ID Generation
The METHOD_ID is computed at compile time using xxhash3_64 from the xxhash-rust crate:
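For illustration, a constant of this kind can be produced with xxhash-rust's const-capable hasher (requires the crate's const_xxh3 feature); the input string, seed, and constant name below are assumptions, not the values muxio actually uses.

```rust
// Illustrative only; assumes xxhash-rust with the "const_xxh3" feature enabled.
use xxhash_rust::const_xxh3::xxh3_64;

// Evaluated at compile time, so only the 8-byte hash ever appears in the binary.
pub const ADD_METHOD_ID: u64 = xxh3_64(b"Add");

fn main() {
    println!("Add method ID: {ADD_METHOD_ID:#018x}");
}
```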
graph TB
DEF["Service Definition Crate\nRpcMethodPrebuffered impls"]
SERVER["Server Crate"]
CLIENT["Client Crate"]
DEF -->|depends on| SERVER
DEF -->|depends on| CLIENT
SERVER -->|register_prebuffered Add::METHOD_ID, handler| ENDPOINT["RpcServiceEndpoint"]
CLIENT -->|Add::call client, params| CALLER["RpcServiceCaller"]
ENDPOINT -->|decode_request| DEF
ENDPOINT -->|encode_response| DEF
CALLER -->|encode_request| DEF
CALLER -->|decode_response| DEF
Note1["Compile Error if:\n- Method name mismatch\n- Type signature mismatch\n- Serialization incompatibility"]
This ensures that method names are never transmitted on the wire—only their compact 8-byte hash values.
Type Safety Enforcement
Sources:
- extensions/muxio-rpc-service/src/prebuffered.rs
- examples/example-muxio-rpc-service-definition/src/prebuffered.rs
- README.md:50-51
- Cargo.toml:64 (xxhash-rust dependency)
graph TB
subgraph "Tokio Native Platform"
TOKIO_CLIENT["RpcClient\nArc<RpcClientInner>"]
TOKIO_INNER["RpcClientInner\ndispatcher: TokioMutex<RpcDispatcher>\nendpoint: Arc<RpcServiceEndpoint>"]
TOKIO_TRANSPORT["tokio-tungstenite\nWebSocketStream"]
TOKIO_TASKS["Background Tasks\nread_task\nwrite_task"]
TOKIO_CLIENT -->|owns Arc| TOKIO_INNER
TOKIO_INNER -->|uses| TOKIO_TRANSPORT
TOKIO_CLIENT -->|spawns| TOKIO_TASKS
end
subgraph "WASM Browser Platform"
WASM_CLIENT["RpcWasmClient\nRpcClientInner"]
WASM_BRIDGE["static_muxio_write_bytes\nJavaScript Bridge"]
WASM_STATIC["MUXIO_STATIC_RPC_CLIENT_REF\nthread_local! RefCell"]
WASM_WSAPI["Browser WebSocket API\njs-sys bindings"]
WASM_CLIENT -->|calls| WASM_BRIDGE
WASM_BRIDGE -->|write to| WASM_WSAPI
WASM_STATIC -->|holds| WASM_CLIENT
end
subgraph "Shared Abstractions"
CALLER_TRAIT["RpcServiceCallerInterface"]
ENDPOINT_TRAIT["RpcServiceEndpointInterface"]
end
TOKIO_CLIENT -.implements.-> CALLER_TRAIT
WASM_CLIENT -.implements.-> CALLER_TRAIT
TOKIO_INNER -->|owns| ENDPOINT_TRAIT
WASM_CLIENT -->|owns| ENDPOINT_TRAIT
Layer 6: Platform Extensions
Platform extensions implement the abstraction layer traits for specific runtime environments, providing concrete transport mechanisms.
Platform Extension Architecture
Extension Crate Mapping
| Extension Crate | Implements | Runtime | Transport |
|---|---|---|---|
muxio-tokio-rpc-client | RpcServiceCallerInterface, RpcServiceEndpointInterface | Tokio async | tokio-tungstenite WebSocket |
muxio-tokio-rpc-server | RpcServiceEndpointInterface | Tokio + Axum | tokio-tungstenite WebSocket |
muxio-wasm-rpc-client | RpcServiceCallerInterface, RpcServiceEndpointInterface | Browser event loop | wasm-bindgen + js-sys |
Tokio Client Lifecycle
The RpcClient manages lifecycle through Arc reference counting:
Background tasks (read_task, write_task) hold Arc clones and automatically clean up when the connection drops.
WASM Client Singleton Pattern
The WASM client uses a thread-local singleton for JavaScript interop:
This enables JavaScript to write bytes into the Rust dispatcher without async overhead.
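A simplified model of that singleton pattern is shown below. The static name follows the diagram above; the stored type and the bridge function are placeholders, since the real definitions live in the muxio-wasm-rpc-client crate.

```rust
// Simplified model of the thread-local singleton (stand-in types; the real
// static and client live in muxio-wasm-rpc-client).
use std::cell::RefCell;

struct RpcWasmClientSketch; // placeholder for the real RpcWasmClient

thread_local! {
    static MUXIO_STATIC_RPC_CLIENT_REF: RefCell<Option<RpcWasmClientSketch>> =
        RefCell::new(None);
}

// Hypothetical entry point the JavaScript bridge could call with bytes received
// from the browser WebSocket; the real crate exposes its own bridge functions.
fn on_bytes_from_js(bytes: &[u8]) {
    MUXIO_STATIC_RPC_CLIENT_REF.with(|slot| {
        if let Some(_client) = slot.borrow().as_ref() {
            // In the real client, these bytes would be fed into the dispatcher here.
            let _ = bytes;
        }
    });
}
```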
Sources:
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:30-38
- extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:15-32
- extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:10-12
- Cargo.toml:20-30
Cross-Cutting Concerns
Several subsystems span multiple layers:
Serialization (bitcode)
The bitcode crate provides compact binary serialization at Layer 5 (Service Definitions):
- encode() in RpcMethodPrebuffered::encode_request / encode_response
- decode() in RpcMethodPrebuffered::decode_request / decode_response
- Configured in service definition crates, used by both client and server
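As a rough illustration of that division of labor, the snippet below round-trips an illustrative request type through bitcode's derive-based encode/decode. The type and fields are not taken from the example service definition crate.

```rust
// Round-trip an illustrative request type through bitcode (compact binary codec).
use bitcode::{Decode, Encode};

#[derive(Encode, Decode, Debug, PartialEq)]
struct AddRequest {
    numbers: Vec<f64>,
}

fn main() {
    let request = AddRequest { numbers: vec![1.0, 2.0, 3.0] };

    // encode_request would produce these bytes; decode_request reverses them.
    let bytes: Vec<u8> = bitcode::encode(&request);
    let decoded: AddRequest = bitcode::decode(&bytes).expect("round-trip decode");

    assert_eq!(request, decoded);
}
```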
Observability (tracing)
The tracing crate provides structured logging at Layers 2-4:
- Frame-level events in RpcSession
- Request/response correlation in RpcDispatcher
- Connection state changes in platform extensions
Error Propagation
Errors flow upward through layers:
Each layer defines its own error type and converts lower-layer errors appropriately.
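The mechanism is the usual From-based conversion. In the sketch below, FrameDecodeError mirrors the frame-layer error named elsewhere in these docs, while TransportLayerError is a hypothetical wrapper one layer up.

```rust
// Illustrative only: error types redeclared locally to show the conversion pattern.
#[derive(Debug)]
enum FrameDecodeError {
    CorruptFrame,
    ReadAfterCancel,
}

#[derive(Debug)]
enum TransportLayerError {
    // Hypothetical higher-layer error wrapping the frame-layer error.
    Frame(FrameDecodeError),
}

impl From<FrameDecodeError> for TransportLayerError {
    fn from(e: FrameDecodeError) -> Self {
        TransportLayerError::Frame(e)
    }
}

fn read_frame() -> Result<(), FrameDecodeError> {
    Err(FrameDecodeError::CorruptFrame)
}

fn read_message() -> Result<(), TransportLayerError> {
    read_frame()?; // `?` converts the lower-layer error via the From impl
    Ok(())
}
```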
Sources:
- Cargo.toml:52 (bitcode dependency)
- Cargo.toml:62 (tracing dependency)
- src/rpc/rpc_dispatcher.rs:130-190
Layer Interaction Patterns
Write Path (Client → Server)
Read Path (Server → Client)
Sources:
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:100-226
- src/rpc/rpc_dispatcher.rs:130-264
- src/rpc/rpc_internals/rpc_session.rs:52-120
Summary
The layered architecture enables:
- Separation of Concerns : Each layer has a single, well-defined responsibility
- Runtime Agnosticism : Core layers (1-3) use non-async, callback-driven design
- Platform Extensibility : Layer 6 implements platform-specific transports
- Type Safety : Layer 5 enforces compile-time contracts
- Code Reuse : Same service definitions work across all platforms
This design allows the same business logic to execute in Tokio native environments, WASM browsers, and potentially other runtimes without modification to the core layers.
Core Library (muxio)
Relevant source files
Purpose and Scope
This document describes the foundational muxio crate, which provides the low-level binary framing protocol and stream multiplexing capabilities that form the base layer of the Muxio framework. The core library is transport-agnostic, non-async, and callback-driven, enabling integration with any runtime environment including Tokio, standard library, and WebAssembly.
For higher-level RPC service abstractions built on top of this core, see RPC Framework. For concrete client and server implementations that use this core library, see Transport Implementations.
Sources: Cargo.toml:1-71 README.md:17-24
Architecture Overview
The muxio core library implements a layered architecture where each layer has a specific, well-defined responsibility. The design separates concerns into three distinct layers: binary framing, stream multiplexing, and RPC protocol.
Diagram: Core Library Component Layering
graph TB
subgraph "Application Code"
App["Application Logic"]
end
subgraph "RPC Protocol Layer"
Dispatcher["RpcDispatcher"]
Request["RpcRequest"]
Response["RpcResponse"]
Header["RpcHeader"]
end
subgraph "Stream Multiplexing Layer"
Session["RpcSession"]
StreamDecoder["RpcStreamDecoder"]
StreamEncoder["RpcStreamEncoder"]
StreamEvent["RpcStreamEvent"]
end
subgraph "Binary Framing Layer"
MuxDecoder["FrameMuxStreamDecoder"]
Frame["DecodedFrame"]
FrameKind["FrameKind"]
end
subgraph "Transport Layer"
Transport["Raw Bytes\nWebSocket/TCP/Custom"]
end
App --> Dispatcher
Dispatcher --> Request
Dispatcher --> Response
Request --> Header
Response --> Header
Dispatcher --> Session
Session --> StreamDecoder
Session --> StreamEncoder
StreamDecoder --> StreamEvent
StreamEncoder --> Header
Session --> MuxDecoder
MuxDecoder --> Frame
Frame --> FrameKind
MuxDecoder --> Transport
Session --> Transport
Sources: src/rpc/rpc_internals/rpc_session.rs:1-118
Key Components
The core library consists of several primary components organized into distinct functional layers:
| Component | File Location | Layer | Primary Responsibility | Details |
|---|---|---|---|---|
FrameMuxStreamDecoder | src/frame/ | Binary Framing | Decodes raw bytes into DecodedFrame structures | See Binary Framing Protocol |
DecodedFrame | src/frame/ | Binary Framing | Container for decoded frame data with stream ID and payload | See Binary Framing Protocol |
FrameKind | src/frame/ | Binary Framing | Frame type enumeration (Data, End, Cancel) | See Binary Framing Protocol |
RpcSession | src/rpc/rpc_internals/rpc_session.rs:20-117 | Stream Multiplexing | Manages stream ID allocation and per-stream decoders | See Stream Multiplexing |
RpcStreamDecoder | src/rpc/rpc_internals/rpc_stream_decoder.rs:11-186 | Stream Multiplexing | Maintains state machine for individual stream decoding | See Stream Multiplexing |
RpcStreamEncoder | src/rpc/rpc_internals/ | Stream Multiplexing | Encodes RPC headers and payloads into frames | See Stream Multiplexing |
RpcDispatcher | src/rpc/rpc_dispatcher.rs | RPC Protocol | Correlates requests with responses via request_id | See RPC Dispatcher |
RpcRequest | src/rpc/ | RPC Protocol | Request data structure with method ID and parameters | See Request and Response Types |
RpcResponse | src/rpc/ | RPC Protocol | Response data structure with result or error | See Request and Response Types |
RpcHeader | src/rpc/rpc_internals/ | RPC Protocol | Contains RPC metadata (message type, IDs, metadata bytes) | See Request and Response Types |
Sources: src/rpc/rpc_internals/rpc_session.rs:15-24 src/rpc/rpc_internals/rpc_stream_decoder.rs:11-18
Component Interaction Flow
The following diagram illustrates how the core components interact during a typical request/response cycle, showing the actual method calls and data structures involved:
Diagram: Request/Response Data Flow Through Core Components
sequenceDiagram
participant App as "Application"
participant Disp as "RpcDispatcher"
participant Sess as "RpcSession"
participant Enc as "RpcStreamEncoder"
participant Dec as "RpcStreamDecoder"
participant Mux as "FrameMuxStreamDecoder"
rect rgb(245, 245, 245)
Note over App,Mux: Outbound Request Flow
App->>Disp: call(RpcRequest)
Disp->>Disp: assign request_id
Disp->>Sess: init_request(RpcHeader, max_chunk_size, on_emit)
Sess->>Sess: allocate stream_id via increment_u32_id()
Sess->>Enc: RpcStreamEncoder::new(stream_id, header, on_emit)
Enc->>Enc: encode header + payload into frames
Enc->>App: emit bytes via on_emit callback
end
rect rgb(245, 245, 245)
Note over App,Mux: Inbound Response Flow
App->>Sess: read_bytes(input, on_rpc_stream_event)
Sess->>Mux: FrameMuxStreamDecoder::read_bytes(input)
Mux->>Sess: Iterator<Result<DecodedFrame>>
Sess->>Dec: RpcStreamDecoder::decode_rpc_frame(frame)
Dec->>Dec: state machine: AwaitHeader → AwaitPayload → Done
Dec->>Sess: Vec<RpcStreamEvent>
Sess->>App: on_rpc_stream_event(RpcStreamEvent::Header)
Sess->>App: on_rpc_stream_event(RpcStreamEvent::PayloadChunk)
Sess->>App: on_rpc_stream_event(RpcStreamEvent::End)
end
Sources: src/rpc/rpc_internals/rpc_session.rs:35-50 src/rpc/rpc_internals/rpc_session.rs:53-117 src/rpc/rpc_internals/rpc_stream_decoder.rs:53-186
Non-Async, Callback-Driven Design
A fundamental design characteristic of the core library is its non-async, callback-driven architecture. This design choice enables the library to be used across different runtime environments without requiring a specific async runtime.
graph LR
subgraph "Callback Trait System"
RpcEmit["RpcEmit trait"]
RpcStreamEventHandler["RpcStreamEventDecoderHandler trait"]
RpcEmit --> TokioImpl["Tokio Implementation:\nasync fn + channels"]
RpcEmit --> WasmImpl["WASM Implementation:\nwasm_bindgen + JS bridge"]
RpcEmit --> CustomImpl["Custom Implementation:\nuser-defined"]
RpcStreamEventHandler --> DispatcherHandler["RpcDispatcher handler"]
RpcStreamEventHandler --> CustomHandler["Custom event handler"]
end
Callback Traits
The core library defines several callback traits that enable integration with different transport layers:
Diagram: Callback Trait Architecture
The callback-driven design means:
- No built-in async/await: Core methods like RpcSession::read_bytes() and RpcSession::init_request() are synchronous
- Emit callbacks: Output is sent via callback functions implementing the RpcEmit trait
- Event callbacks: Decoded events are delivered via RpcStreamEventDecoderHandler implementations
- Transport agnostic: Any byte transport can be used as long as it can call the core methods and handle callbacks
Sources: src/rpc/rpc_internals/rpc_session.rs:35-50 src/rpc/rpc_internals/rpc_session.rs:53-117 README.md:35-36
Core Abstractions
The core library provides three fundamental abstractions that enable runtime-agnostic operation:
Stream Lifecycle Management
The RpcSession component provides stream multiplexing by maintaining per-stream state. Each outbound call allocates a unique stream_id via increment_u32_id(), and incoming frames are demultiplexed to their corresponding RpcStreamDecoder instances. For detailed information on stream multiplexing mechanics, see Stream Multiplexing.
Binary Protocol Format
All communication uses a binary framing protocol with fixed-size headers and variable-length payloads. The protocol supports chunking large messages using DEFAULT_MAX_CHUNK_SIZE and includes frame types (FrameKind::Data, FrameKind::End, FrameKind::Cancel) for stream control. For complete protocol specification, see Binary Framing Protocol.
Request Correlation
The RpcDispatcher component manages request/response correlation using unique request_id values. It maintains a HashMap of pending requests and routes incoming responses to the appropriate callback handlers. For dispatcher implementation details, see RPC Dispatcher.
Sources: src/rpc/rpc_internals/rpc_session.rs:20-33 src/rpc/rpc_internals/rpc_session.rs:53-117
Dependencies and Build Configuration
The core muxio crate has minimal dependencies to maintain its lightweight, transport-agnostic design:
| Dependency | Purpose | Version |
|---|---|---|
chrono | Timestamp utilities | 0.4.41 |
once_cell | Lazy static initialization | 1.21.3 |
tracing | Logging and diagnostics | 0.1.41 |
Development dependencies include:
- bitcode: Used in tests for serialization examples
- rand: Random data generation for tests
- tokio: Async runtime for integration tests
The crate is configured for publication to crates.io under the Apache-2.0 license (Cargo.toml:1-18).
Sources: Cargo.toml:34-71
Integration with Extensions
The core library is designed to be extended through separate crates in the workspace. The callback-driven, non-async design enables these extensions without modification to the core:
Diagram: Core Library Extension Architecture
graph TB
subgraph "Core Library: muxio"
Session["RpcSession\n(callback-driven)"]
Dispatcher["RpcDispatcher\n(callback-driven)"]
end
subgraph "RPC Service Layer"
ServiceTrait["muxio-rpc-service\nRpcMethodPrebuffered trait"]
Caller["muxio-rpc-service-caller\nRpcServiceCallerInterface"]
Endpoint["muxio-rpc-service-endpoint\nRpcServiceEndpointInterface"]
end
subgraph "Runtime Implementations"
TokioServer["muxio-tokio-rpc-server\nasync Tokio + Axum"]
TokioClient["muxio-tokio-rpc-client\nasync Tokio + WebSocket"]
WasmClient["muxio-wasm-rpc-client\nwasm-bindgen"]
end
Session --> ServiceTrait
Dispatcher --> Caller
Dispatcher --> Endpoint
Caller --> TokioClient
Caller --> WasmClient
Endpoint --> TokioServer
Extensions built on the core library:
- RPC Service Layer (RPC Framework): Provides method definition traits and abstractions
- Tokio Server (Tokio RPC Server): Async server implementation using tokio and axum
- Tokio Client (Tokio RPC Client): Async client using tokio-tungstenite WebSockets
- WASM Client (WASM RPC Client): Browser-based client using wasm-bindgen
Sources: Cargo.toml:19-31 README.md:37-41
Memory and Performance Characteristics
The core library is designed for efficiency with several key optimizations:
| Optimization | Implementation | Benefit |
|---|---|---|
| Zero-copy frame parsing | FrameMuxStreamDecoder processes bytes in-place | Eliminates unnecessary allocations during frame decoding |
| Shared headers | RpcHeader wrapped in Arc src/rpc/rpc_internals/rpc_stream_decoder.rs:111 | Multiple events reference same header without cloning |
| Minimal buffering | Stream decoders emit chunks immediately after header parse | Low memory footprint for large payloads |
| Automatic cleanup | Streams removed from HashMap on End/Cancel src/rpc/rpc_internals/rpc_session.rs:74 | Prevents memory leaks from completed streams |
| Configurable chunks | max_chunk_size parameter in init_request() src/rpc/rpc_internals/rpc_session.rs:35-50 | Tune for different payload sizes and network conditions |
Typical Memory Usage
- RpcSession: Approximately 48 bytes base + HashMap overhead
- RpcStreamDecoder: Approximately 80 bytes + buffered payload size
- RpcHeader (shared): 24 bytes + metadata length
- Active stream overhead: ~160 bytes per concurrent stream
For performance tuning strategies and benchmarks, see Performance Considerations.
Sources: src/rpc/rpc_internals/rpc_session.rs:20-33 src/rpc/rpc_internals/rpc_session.rs:35-50 src/rpc/rpc_internals/rpc_stream_decoder.rs:11-18 src/rpc/rpc_internals/rpc_stream_decoder.rs:111-116
Error Handling
The core library uses Result types with specific error enums:
- FrameDecodeError: Returned by FrameMuxStreamDecoder::read_bytes() and RpcStreamDecoder::decode_rpc_frame()
  - CorruptFrame: Invalid frame structure or header data
  - ReadAfterCancel: Attempt to read after stream cancellation
- FrameEncodeError: Returned by encoding operations
  - Propagated from RpcStreamEncoder::new()
Error events are emitted as RpcStreamEvent::Error src/rpc/rpc_internals/rpc_session.rs:84-91 src/rpc/rpc_internals/rpc_session.rs:103-110 containing:
- rpc_header: The header if available
- rpc_request_id: The request ID if known
- rpc_method_id: The method ID if parsed
- frame_decode_error: The underlying error
For comprehensive error handling patterns, see Error Handling.
Sources: src/rpc/rpc_internals/rpc_session.rs:80-111
Binary Framing Protocol
Relevant source files
Purpose and Scope
The binary framing protocol defines the lowest-level data structure for all communication in muxio. This protocol operates below the RPC layer, providing a schemaless, ordered, chunked byte transport mechanism.
Core Responsibilities:
- Define binary frame structure (7-byte header + variable payload)
- Encode structured data into frames via FrameMuxStreamEncoder
- Decode byte streams into DecodedFrame structures via FrameMuxStreamDecoder
- Support frame types (Data, End, Cancel) for stream lifecycle management
- Enable payload chunking with configurable max_chunk_size
The framing protocol is transport-agnostic and makes no assumptions about serialization formats. It operates purely on raw bytes. Higher-level concerns like RPC headers, method IDs, and serialization are handled by layers above this protocol.
Related Pages:
- Stream multiplexing and per-stream decoders: #3.2
- RPC protocol structures (RpcHeader, RpcRequest, RpcResponse): #3.4
- RPC session management and stream ID allocation: #3.2
Sources: src/rpc/rpc_internals/rpc_session.rs:15-24 DRAFT.md:11-21
Architecture Overview
The framing protocol sits between raw transport (WebSocket, TCP) and the RPC session layer. It provides discrete message boundaries over continuous byte streams.
Component Diagram: Frame Processing Pipeline
graph TB
RawBytes["Raw Byte Stream\nWebSocket/TCP"]
FrameEncoder["FrameMuxStreamEncoder\nencode frames"]
FrameDecoder["FrameMuxStreamDecoder\nparse frames"]
DecodedFrame["DecodedFrame\nstruct"]
RpcSession["RpcSession\nmultiplexer"]
RpcStreamEncoder["RpcStreamEncoder\nper-stream encoder"]
RpcSession -->|allocate stream_id| RpcStreamEncoder
RpcStreamEncoder -->|emit frames| FrameEncoder
FrameEncoder -->|write_bytes| RawBytes
RawBytes -->|read_bytes| FrameDecoder
FrameDecoder -->|yield| DecodedFrame
DecodedFrame -->|route by stream_id| RpcSession
Key Classes:
- FrameMuxStreamEncoder: Encodes frames into bytes (referenced indirectly via RpcStreamEncoder)
- FrameMuxStreamDecoder: Parses incoming bytes into DecodedFrame structures
- DecodedFrame: Represents a parsed frame with stream_id, kind, and payload
- RpcSession: Manages frame multiplexing and per-stream decoder lifecycle
Sources: src/rpc/rpc_internals/rpc_session.rs:20-33 src/rpc/rpc_internals/rpc_session.rs:52-60
Frame Structure
Each binary frame consists of a fixed-size header followed by a variable-length payload. The frame format is designed for efficient parsing with minimal copying.
Binary Layout
| Field | Offset | Size | Type | Description |
|---|---|---|---|---|
stream_id | 0 | 4 bytes | u32 (LE) | Logical stream identifier for multiplexing |
kind | 4 | 1 byte | u8 enum | FrameKind: Data=0, End=1, Cancel=2 |
payload_length | 5 | 2 bytes | u16 (LE) | Payload byte count (0-65535) |
payload | 7 | payload_length | [u8] | Raw payload bytes |
Total Frame Size: 7 bytes (header) + payload_length
graph LR
subgraph "Frame Header - 7 Bytes"
StreamID["stream_id\n4 bytes\nu32 LE"]
Kind["kind\n1 byte\nu8"]
PayloadLen["payload_length\n2 bytes\nu16 LE"]
end
subgraph "Frame Payload"
Payload["payload\n0-65535 bytes\n[u8]"]
end
StreamID --> Kind
Kind --> PayloadLen
PayloadLen --> Payload
All multi-byte integers use little-endian encoding. The u16 payload length field limits individual frames to 65,535 bytes, enforcing bounded memory consumption per frame.
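The layout above can be encoded and decoded with a few lines of std-only Rust. This is a standalone illustration of the byte layout, not the muxio codec itself.

```rust
// Encode/decode the 7-byte frame header exactly as laid out in the table above
// (little-endian stream_id and payload_length).
fn encode_frame_header(stream_id: u32, kind: u8, payload_len: u16) -> [u8; 7] {
    let mut header = [0u8; 7];
    header[0..4].copy_from_slice(&stream_id.to_le_bytes());
    header[4] = kind;
    header[5..7].copy_from_slice(&payload_len.to_le_bytes());
    header
}

fn decode_frame_header(header: &[u8; 7]) -> (u32, u8, u16) {
    let stream_id = u32::from_le_bytes([header[0], header[1], header[2], header[3]]);
    let kind = header[4];
    let payload_len = u16::from_le_bytes([header[5], header[6]]);
    (stream_id, kind, payload_len)
}
```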
Frame Header Diagram
Sources: src/rpc/rpc_internals/rpc_stream_decoder.rs:3-6 (frame structure constants)
stateDiagram-v2
[*] --> AwaitingFirstFrame
AwaitingFirstFrame --> AcceptingData: FrameKind::Data
AcceptingData --> AcceptingData: FrameKind::Data
AcceptingData --> StreamClosed: FrameKind::End
AcceptingData --> StreamAborted: FrameKind::Cancel
AwaitingFirstFrame --> StreamAborted: FrameKind::Cancel
StreamClosed --> [*]
StreamAborted --> [*]
Frame Types
The FrameKind enum (from crate::frame) defines frame semantics. Each frame’s kind field determines how the decoder processes it.
Frame Lifecycle State Machine
FrameKind Values
| Enum Variant | Wire Value | Purpose | Effect |
|---|---|---|---|
FrameKind::Data | Implementation-defined | Payload chunk | Accumulate or emit payload bytes |
FrameKind::End | Implementation-defined | Normal termination | Finalize stream, emit RpcStreamEvent::End |
FrameKind::Cancel | Implementation-defined | Abnormal abort | Discard state, emit error or remove decoder |
Frame Type Semantics
- Data frames: Carry payload bytes. A stream may consist of 1 to N data frames depending on chunking.
- End frames: Signal successful completion. Payload length is typically 0. Decoder emits final event and removes stream state.
- Cancel frames: Signal early termination. Decoder removes the stream from the rpc_stream_decoders map and may emit an error event.
Sources: src/rpc/rpc_internals/rpc_session.rs:98-100 (decoder cleanup), src/rpc/rpc_internals/rpc_stream_decoder.rs:156-166 (End/Cancel handling)
Frame Encoding
The encoding process transforms logical data into binary frames suitable for transmission. The core encoder is FrameMuxStreamEncoder, though specific encoding details are handled by stream-level encoders like RpcStreamEncoder.
Frame Encoding Sequence
sequenceDiagram
participant RpcSession
participant RpcStreamEncoder
participant on_emit
participant Transport
RpcSession->>RpcSession: allocate stream_id
RpcSession->>RpcStreamEncoder: new(stream_id, max_chunk_size, header, on_emit)
RpcStreamEncoder->>RpcStreamEncoder: encode RpcHeader into first frame
RpcStreamEncoder->>on_emit: emit(Frame{stream_id, Data, header_bytes})
on_emit->>Transport: write_bytes()
loop "For each payload chunk"
RpcStreamEncoder->>RpcStreamEncoder: chunk payload by max_chunk_size
RpcStreamEncoder->>on_emit: emit(Frame{stream_id, Data, chunk})
on_emit->>Transport: write_bytes()
end
RpcStreamEncoder->>on_emit: emit(Frame{stream_id, End, []})
on_emit->>Transport: write_bytes()
Encoding Process
- Stream ID Allocation: RpcSession::init_request() allocates a unique stream_id via increment_u32_id()
- Encoder Creation: Creates RpcStreamEncoder with stream_id, max_chunk_size, and the on_emit callback
- Frame Emission: Encoder calls on_emit for each frame. The callback receives raw frame bytes for transport writing.
- Chunking: If the payload exceeds max_chunk_size, the encoder emits multiple Data frames with the same stream_id
- Finalization: Final End frame signals completion
The callback-based on_emit pattern enables non-async, runtime-agnostic operation. Callers provide their own I/O strategy.
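A minimal, self-contained illustration of that pattern: the "transport" is whatever the supplied closure does with the bytes, and nothing here is async.

```rust
// The emitter only knows about a closure that accepts encoded frame bytes.
fn emit_three_frames<F: FnMut(&[u8])>(mut on_emit: F) {
    on_emit(&[1, 2, 3]);
    on_emit(&[4, 5, 6]);
    on_emit(&[7, 8, 9]);
}

fn main() {
    let mut wire = Vec::new();
    // Here the "transport" is just an in-memory Vec; it could be a WebSocket write.
    emit_three_frames(|bytes| wire.extend_from_slice(bytes));
    assert_eq!(wire.len(), 9);
}
```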
Sources: src/rpc/rpc_internals/rpc_session.rs:35-50 (init_request method)
graph TB
read_bytes["read_bytes(&[u8])"]
FrameMuxStreamDecoder["FrameMuxStreamDecoder\nstateful parser"]
DecodedFrame["DecodedFrame\nResult iterator"]
RpcSession["RpcSession"]
rpc_stream_decoders["rpc_stream_decoders\nHashMap<u32, RpcStreamDecoder>"]
RpcStreamDecoder["RpcStreamDecoder\nper-stream state"]
read_bytes -->|input bytes| FrameMuxStreamDecoder
FrameMuxStreamDecoder -->|yield| DecodedFrame
RpcSession -->|iterate frames| DecodedFrame
RpcSession -->|route by stream_id| rpc_stream_decoders
rpc_stream_decoders -->|or_default| RpcStreamDecoder
RpcStreamDecoder -->|decode_rpc_frame| RpcStreamEvent
Frame Decoding
The decoding process parses incoming byte streams into structured DecodedFrame objects. The FrameMuxStreamDecoder maintains parsing state across multiple read_bytes calls to handle partial frame reception.
Decoding Architecture
Decoding Sequence
Decoder State Management
Per-Connection State (RpcSession):
- frame_mux_stream_decoder: FrameMuxStreamDecoder - parses frame boundaries from the byte stream
- rpc_stream_decoders: HashMap<u32, RpcStreamDecoder> - maps stream_id to per-stream state
Per-Stream State (RpcStreamDecoder):
- state: RpcDecoderState - AwaitHeader, AwaitPayload, or Done
- header: Option<Arc<RpcHeader>> - parsed RPC header from the first frame
- buffer: Vec<u8> - accumulates bytes across frames
- rpc_request_id: Option<u32> - extracted from the header
- rpc_method_id: Option<u64> - extracted from the header
Lifecycle:
- Decoders created on-demand via entry(stream_id).or_default()
- Removed on End frame: src/rpc/rpc_internals/rpc_session.rs:73-75
- Removed on Cancel frame: src/rpc/rpc_internals/rpc_session.rs:98-100
- Removed on decode error: src/rpc/rpc_internals/rpc_session.rs:82
Sources: src/rpc/rpc_internals/rpc_session.rs:20-33 src/rpc/rpc_internals/rpc_stream_decoder.rs:11-42
graph LR
Payload["Payload: 100 KB"]
Encoder["RpcStreamEncoder\nmax_chunk_size=16384"]
F1["Frame\nstream_id=42\nkind=Data\npayload=16KB"]
F2["Frame\nstream_id=42\nkind=Data\npayload=16KB"]
F3["Frame\nstream_id=42\nkind=Data\npayload=16KB"]
FN["Frame\nstream_id=42\nkind=Data\npayload=..."]
FEnd["Frame\nstream_id=42\nkind=End\npayload=0"]
Payload --> Encoder
Encoder --> F1
Encoder --> F2
Encoder --> F3
Encoder --> FN
Encoder --> FEnd
Chunk Management
Large messages are automatically split into multiple frames to avoid memory exhaustion and enable incremental processing. The chunking mechanism operates transparently at the frame level.
Chunking Strategy
Chunk Size Configuration
The max_chunk_size parameter controls how many payload bytes are placed in each frame; the system-defined constant DEFAULT_MAX_CHUNK_SIZE provides the default value.
| Size Range | Latency | Overhead | Use Case |
|---|---|---|---|
| 4-8 KB | Lower | Higher | Interactive/real-time |
| 16-32 KB | Balanced | Moderate | General purpose |
| 64 KB | Higher | Lower | Bulk transfer |
Maximum frame payload is 65,535 bytes (u16::MAX) per frame structure. Practical values are typically 16-32 KB to balance latency and efficiency.
Reassembly Process
Frames with matching stream_id are processed by the same RpcStreamDecoder:
| Frame Type | Decoder Action | State Transition |
|---|---|---|
| First Data (with RPC header) | Parse header, emit RpcStreamEvent::Header | AwaitHeader → AwaitPayload |
| Subsequent Data | Emit RpcStreamEvent::PayloadChunk | Remain in AwaitPayload |
| End | Emit RpcStreamEvent::End, remove decoder | AwaitPayload → removed from map |
| Cancel | Remove decoder, optionally emit error | Any state → removed from map |
Sources: src/rpc/rpc_internals/rpc_stream_decoder.rs:59-146 (decoder state machine), src/rpc/rpc_internals/rpc_session.rs:68-100 (decoder lifecycle)
RPC Header Layer
The framing protocol is payload-agnostic, but RPC usage places an RpcHeader structure in the first frame’s payload. This is an RPC-level concern, not a frame-level concern.
RPC Header Binary Structure
The first Data frame for an RPC stream contains a serialized RpcHeader:
| Field | Offset | Size | Type | Description |
|---|---|---|---|---|
rpc_msg_type | 0 | 1 byte | u8 | RpcMessageType: Call=0, Response=1 |
rpc_request_id | 1 | 4 bytes | u32 LE | Request correlation ID |
rpc_method_id | 5 | 8 bytes | u64 LE | xxhash of method name |
rpc_metadata_length | 13 | 2 bytes | u16 LE | Byte count of metadata |
rpc_metadata_bytes | 15 | variable | [u8] | Serialized parameters or result |
Total: 15 bytes + rpc_metadata_length
RPC Header Constants
Constants from src/constants.rs define the layout:
| Constant | Value | Purpose |
|---|---|---|
RPC_FRAME_FRAME_HEADER_SIZE | 15 | Minimum RPC header size |
RPC_FRAME_MSG_TYPE_OFFSET | 0 | Offset to rpc_msg_type |
RPC_FRAME_ID_OFFSET | 1 | Offset to rpc_request_id |
RPC_FRAME_METHOD_ID_OFFSET | 5 | Offset to rpc_method_id |
RPC_FRAME_METADATA_LENGTH_OFFSET | 13 | Offset to metadata length |
RPC_FRAME_METADATA_LENGTH_SIZE | 2 | Size of metadata length field |
RpcStreamDecoder uses these constants to parse the header from the first frame’s payload.
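The parse step can be illustrated with the offsets from the tables above. This is a standalone sketch; the real logic lives in RpcStreamDecoder, uses the constants from src/constants.rs, and buffers incrementally across frames.

```rust
// Parse the RPC header from the first frame's payload per the documented layout.
// Returns None while more bytes are still needed.
fn parse_rpc_header(payload: &[u8]) -> Option<(u8, u32, u64, &[u8])> {
    const HEADER_SIZE: usize = 15;
    if payload.len() < HEADER_SIZE {
        return None; // keep buffering; fixed header not complete yet
    }
    let msg_type = payload[0];
    let request_id = u32::from_le_bytes(payload[1..5].try_into().ok()?);
    let method_id = u64::from_le_bytes(payload[5..13].try_into().ok()?);
    let metadata_len = u16::from_le_bytes(payload[13..15].try_into().ok()?) as usize;
    if payload.len() < HEADER_SIZE + metadata_len {
        return None; // metadata not fully buffered yet
    }
    let metadata = &payload[HEADER_SIZE..HEADER_SIZE + metadata_len];
    Some((msg_type, request_id, method_id, metadata))
}
```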
Sources: src/rpc/rpc_internals/rpc_stream_decoder.rs:2-8 (constant imports), src/rpc/rpc_internals/rpc_stream_decoder.rs:64-125 (parsing logic)
Error Handling
Frame-level errors are represented by FrameDecodeError and FrameEncodeError enumerations. The decoder converts corrupted or invalid frames into error events rather than panicking.
Frame Decode Errors
| Error Variant | Cause | Decoder Behavior |
|---|---|---|
FrameDecodeError::CorruptFrame | Invalid header, failed parsing | Remove stream_id from map, emit error event |
FrameDecodeError::ReadAfterCancel | Data frame after Cancel frame | Return error, stop processing stream |
| Incomplete frame (not an error) | Insufficient bytes for full frame | FrameMuxStreamDecoder buffers, awaits more data |
Error Event Propagation
Error isolation ensures corruption in one stream (stream_id=42) does not affect other streams (stream_id=43, etc.) on the same connection.
Sources: src/rpc/rpc_internals/rpc_session.rs:80-94 (error handling and cleanup), src/rpc/rpc_internals/rpc_stream_decoder.rs:70-72 (error detection)
sequenceDiagram
participant Client
participant Connection
participant Server
Note over Client,Server: Three concurrent calls, interleaved frames
Client->>Connection: Frame{stream_id=1, Data, RpcHeader{Add}}
Client->>Connection: Frame{stream_id=2, Data, RpcHeader{Multiply}}
Client->>Connection: Frame{stream_id=3, Data, RpcHeader{Echo}}
Connection->>Server: Frames arrive in send order
Note over Server: Server processes concurrently
Server->>Connection: Frame{stream_id=2, Data, result}
Server->>Connection: Frame{stream_id=2, End}
Server->>Connection: Frame{stream_id=1, Data, result}
Server->>Connection: Frame{stream_id=1, End}
Server->>Connection: Frame{stream_id=3, Data, result}
Server->>Connection: Frame{stream_id=3, End}
Connection->>Client: Responses complete out-of-order
Multiplexing Example
Multiple concurrent RPC calls use distinct stream_id values to interleave over a single connection:
Concurrent Streams Over Single Connection
Frame multiplexing eliminates head-of-line blocking. A slow operation on stream_id=1 does not delay responses for stream_id=2 or stream_id=3.
Sources: src/rpc/rpc_internals/rpc_session.rs:20-33 (stream ID allocation), src/rpc/rpc_internals/rpc_session.rs:61-77 (decoder routing)
Summary
The binary framing protocol provides a lightweight, efficient mechanism for transmitting structured data over byte streams. Key characteristics:
- Fixed-format frames: 7-byte header + variable payload (max 64 KB per frame)
- Stream identification:
stream_idfield enables multiplexing - Lifecycle management: Data, End, and Cancel frame types control stream state
- Chunking support: Large messages split automatically into multiple frames
- Stateful decoding: Handles partial frame reception across multiple reads
- Error isolation: Frame errors affect only the associated stream
This framing protocol forms the foundation for higher-level stream multiplexing (#3.2) and RPC protocol implementation (#3.4).
Stream Multiplexing
Relevant source files
Purpose and Scope
This document describes the stream multiplexing mechanism in the Muxio core library, specifically focusing on the RpcSession component. Stream multiplexing enables multiple independent RPC request/response streams to be transmitted concurrently over a single underlying connection without interference.
This page covers the low-level mechanics of stream ID allocation, per-stream state management, frame routing, and cleanup. For information about the binary framing protocol that underlies stream multiplexing, see Binary Framing Protocol. For information about higher-level request/response correlation and RPC lifecycle management, see RPC Dispatcher.
Sources: src/rpc/rpc_internals/rpc_session.rs:1-118 README.md:15-36
Overview
The RpcSession struct is the central component for stream multiplexing. It operates at a layer above binary frame encoding/decoding but below RPC protocol semantics. Its primary responsibilities include:
- Allocating unique stream IDs for outbound requests
- Routing incoming frames to the appropriate per-stream decoder
- Managing the lifecycle of individual stream decoders
- Emitting stream events (header, payload chunks, completion) for higher layers to process
Each logical RPC call (request or response) is assigned a unique stream_id. Multiple streams can be active simultaneously, with their frames interleaved at the transport level. The RpcSession ensures that frames are correctly demultiplexed and reassembled into coherent stream events.
Sources: src/rpc/rpc_internals/rpc_session.rs:15-24
Architecture
Component Structure
Diagram 1: RpcSession Architecture
This diagram illustrates how RpcSession sits between the binary framing layer and the RPC protocol layer. The frame_mux_stream_decoder field processes raw bytes into DecodedFrame instances, which are then routed to individual RpcStreamDecoder instances stored in the rpc_stream_decoders HashMap based on their stream_id.
Sources: src/rpc/rpc_internals/rpc_session.rs:20-33 src/rpc/rpc_internals/rpc_stream_decoder.rs:11-18 src/rpc/rpc_internals/rpc_session.rs:52-60
Stream ID Allocation
When an outbound RPC request or response is initiated, RpcSession must allocate a unique stream ID. This is managed by the next_stream_id counter, which is incremented for each new stream.
Allocation Process
| Step | Action | Implementation |
|---|---|---|
| 1 | Capture current next_stream_id value | src/rpc/rpc_internals/rpc_session.rs:44 |
| 2 | Increment counter via increment_u32_id() | src/rpc/rpc_internals/rpc_session.rs:45 |
| 3 | Create RpcStreamEncoder with allocated ID | src/rpc/rpc_internals/rpc_session.rs:47-48 |
| 4 | Return encoder to caller | src/rpc/rpc_internals/rpc_session.rs:49 |
The increment_u32_id() utility function src/utils.rs wraps the counter on overflow, ensuring continuous operation.
Diagram 2: Stream ID Allocation Sequence
Sources: src/rpc/rpc_internals/rpc_session.rs:35-50 src/utils.rs:5
Inbound Frame Processing
The read_bytes() method is the entry point for all incoming data. It processes raw bytes through the following pipeline:
graph LR
RawBytes["input: &[u8]"] --> ReadBytes["RpcSession::read_bytes()"]
ReadBytes --> FrameMuxDecoder["frame_mux_stream_decoder\n.read_bytes(input)"]
FrameMuxDecoder --> |frames: Iterator<Result<DecodedFrame>>| FrameLoop["for frame_result in frames"]
FrameLoop --> ExtractStreamID["frame.inner.stream_id"]
ExtractStreamID --> LookupDecoder["rpc_stream_decoders\n.entry(stream_id)\n.or_default()"]
LookupDecoder --> DecodeFrame["rpc_stream_decoder\n.decode_rpc_frame(&frame)"]
DecodeFrame --> |Ok events: Vec<RpcStreamEvent>| EmitEvents["for event in events"]
EmitEvents --> CheckEnd["matches!(event,\nRpcStreamEvent::End)"]
CheckEnd --> |true| CleanupDecoder["rpc_stream_decoders\n.remove(&stream_id)"]
CheckEnd --> |false| CallbackInvoke["on_rpc_stream_event(event)"]
DecodeFrame --> |Err e| ErrorCleanup["rpc_stream_decoders\n.remove(&stream_id)\nemit Error event"]
Processing Pipeline
Diagram 3: Inbound Frame Processing Pipeline
Sources: src/rpc/rpc_internals/rpc_session.rs:52-117
Per-Stream Decoder Management
Each unique stream_id encountered gets its own RpcStreamDecoder instance, which maintains the decoding state for that stream. These decoders are stored in the rpc_stream_decoders: HashMap<u32, RpcStreamDecoder> field and are lazily created on first access using the entry().or_default() pattern.
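The routing pattern itself is ordinary HashMap usage, sketched below with a stand-in decoder type (see the cleanup triggers listed next):

```rust
// Lazy per-stream decoder creation via entry().or_default(), with a simplified
// decoder type standing in for the real RpcStreamDecoder.
use std::collections::HashMap;

#[derive(Default)]
struct StreamDecoderSketch {
    buffered: Vec<u8>,
}

fn route_frame(
    decoders: &mut HashMap<u32, StreamDecoderSketch>,
    stream_id: u32,
    payload: &[u8],
    is_end: bool,
) {
    // Creates the decoder on the first frame for this stream_id, reuses it afterwards.
    let decoder = decoders.entry(stream_id).or_default();
    decoder.buffered.extend_from_slice(payload);

    if is_end {
        // Mirrors the cleanup triggers below: completed streams are removed.
        decoders.remove(&stream_id);
    }
}
```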
Decoders are automatically cleaned up when:
- An RpcStreamEvent::End is emitted after processing (lines 73-75)
- A FrameKind::Cancel or FrameKind::End frame is received (lines 98-100)
- A decoding error occurs in decode_rpc_frame() (line 82)
Sources: src/rpc/rpc_internals/rpc_session.rs:68 src/rpc/rpc_internals/rpc_session.rs:73-75 src/rpc/rpc_internals/rpc_session.rs:80-82 src/rpc/rpc_internals/rpc_session.rs:98-100
RpcStreamDecoder State Machine
Each RpcStreamDecoder operates as a state machine that transitions through three states as it processes frames for a single stream.
stateDiagram-v2
[*] --> AwaitHeader: RpcStreamDecoder::new()
AwaitHeader --> AwaitHeader : buffer.len() < RPC_FRAME_FRAME_HEADER_SIZE or insufficient metadata
AwaitHeader --> AwaitPayload : Header complete - extract RpcHeader, state = AwaitPayload, emit Header event
AwaitPayload --> AwaitPayload: FrameKind::Data\nemit PayloadChunk event
AwaitPayload --> Done: frame.inner.kind == FrameKind::End\nstate = Done,\nemit End event
AwaitPayload --> [*]: frame.inner.kind == FrameKind::Cancel\nreturn ReadAfterCancel error
Done --> [*] : Decoder removed from HashMap
State Transitions
Diagram 4: RpcStreamDecoder State Machine
Sources: src/rpc/rpc_internals/rpc_stream_decoder.rs:20-24 src/rpc/rpc_internals/rpc_stream_decoder.rs:53-186 src/rpc/rpc_internals/rpc_stream_decoder.rs:64-65 src/rpc/rpc_internals/rpc_stream_decoder.rs120 src/rpc/rpc_internals/rpc_stream_decoder.rs:156-157 src/rpc/rpc_internals/rpc_stream_decoder.rs:165-166
State Descriptions
| State | Purpose | Buffer Usage | Events Emitted |
|---|---|---|---|
| AwaitHeader | Accumulate bytes until complete RPC header is available | Accumulates all incoming bytes | RpcStreamEvent::Header when header complete |
| AwaitPayload | Process payload chunks after header extracted | Not used (data forwarded directly) | RpcStreamEvent::PayloadChunk for each frame |
| Done | Stream has completed | Not used | RpcStreamEvent::End on transition |
Sources: src/rpc/rpc_internals/rpc_stream_decoder.rs:11-42
Header Decoding
The RPC header is embedded at the beginning of each stream and contains metadata necessary for routing and processing. The header structure is fixed-size with a variable-length metadata field:
Header Structure
| Field | Offset Constant | Size | Description |
|---|---|---|---|
rpc_msg_type | RPC_FRAME_MSG_TYPE_OFFSET (0) | 1 byte | RpcMessageType enum (Call or Response) |
rpc_request_id | RPC_FRAME_ID_OFFSET (1) | 4 bytes | Request correlation ID |
rpc_method_id | RPC_FRAME_METHOD_ID_OFFSET (5) | 8 bytes | Method identifier (xxhash) |
metadata_length | RPC_FRAME_METADATA_LENGTH_OFFSET (13) | 2 bytes | Length of metadata bytes |
rpc_metadata_bytes | 15 | variable | Serialized metadata |
sequenceDiagram
participant Frame as "DecodedFrame"
participant Decoder as "RpcStreamDecoder"
participant Buffer as "self.buffer: Vec<u8>"
participant Events as "events: Vec<RpcStreamEvent>"
Note over Decoder: self.state = AwaitHeader
Frame->>Decoder: decode_rpc_frame(&frame)
Decoder->>Buffer: buffer.extend(&frame.inner.payload)
Decoder->>Buffer: Check buffer.len() >= RPC_FRAME_FRAME_HEADER_SIZE
alt "buffer.len() < RPC_FRAME_FRAME_HEADER_SIZE"
Decoder-->>Events: return Ok(events) // empty
else "Sufficient for header"
Decoder->>Decoder: rpc_msg_type = RpcMessageType::try_from(buffer[RPC_FRAME_MSG_TYPE_OFFSET])
Decoder->>Decoder: rpc_request_id = u32::from_le_bytes(buffer[RPC_FRAME_ID_OFFSET..RPC_FRAME_METHOD_ID_OFFSET])
Decoder->>Decoder: rpc_method_id = u64::from_le_bytes(buffer[RPC_FRAME_METHOD_ID_OFFSET..RPC_FRAME_METADATA_LENGTH_OFFSET])
Decoder->>Decoder: meta_len = u16::from_le_bytes(buffer[RPC_FRAME_METADATA_LENGTH_OFFSET..])
Decoder->>Buffer: Check buffer.len() >= header_size + meta_len
alt "Complete header + metadata available"
Decoder->>Decoder: rpc_metadata_bytes = buffer[15..15+meta_len].to_vec()
Decoder->>Decoder: header_arc = Arc::new(RpcHeader { ... })
Decoder->>Decoder: self.header = Some(header_arc.clone())
Decoder->>Decoder: self.state = AwaitPayload
Decoder->>Buffer: buffer.drain(..15+meta_len)
Decoder->>Events: events.push(RpcStreamEvent::Header { ... })
opt "!buffer.is_empty()"
Decoder->>Events: events.push(RpcStreamEvent::PayloadChunk { ... })
end
end
end
Decoder-->>Frame: return Ok(events)
The decoder buffers incoming data in buffer: Vec<u8> until at least RPC_FRAME_FRAME_HEADER_SIZE + metadata_length bytes are available, then extracts the header fields and transitions to AwaitPayload state.
Diagram 5: Header Decoding Process
Sources: src/rpc/rpc_internals/rpc_stream_decoder.rs:60-145 src/constants.rs:13-17
graph TB
subgraph "RpcSession State at Time T"
Session["RpcSession\nnext_stream_id = 104"]
subgraph "rpc_stream_decoders HashMap"
Stream101["stream_id: 101\nState: AwaitPayload\nheader: Some(Add request)\nBuffered: 256 bytes"]
Stream102["stream_id: 102\nState: AwaitHeader\nheader: None\nBuffered: 8 bytes"]
Stream103["stream_id: 103\nState: Done\nheader: Some(Echo response)\nReady for cleanup"]
end
end
IncomingFrame["Incoming Frame\nstream_id: 102\npayload: 24 bytes"] --> Session
Session --> |Route to| Stream102
Stream102 --> |Now 32 bytes total Header complete!| HeaderEvent["RpcStreamEvent::Header\nfor stream 102"]
Stream103 --> |Remove from HashMap| Cleanup["Cleanup"]
Concurrent Stream Example
Multiple streams can be active simultaneously, each at different stages of processing. The following illustrates how three concurrent streams might be managed:
Diagram 6: Concurrent Stream Management Example
This illustrates a snapshot where stream 101 is actively receiving payload, stream 102 is still accumulating its header, and stream 103 has completed and is ready for cleanup.
Sources: src/rpc/rpc_internals/rpc_session.rs:22-32 src/rpc/rpc_internals/rpc_stream_decoder.rs:11-18
Stream Cleanup
Proper cleanup of stream decoders is essential to prevent memory leaks in long-lived connections with many sequential RPC calls. Cleanup occurs at several points:
Cleanup Triggers
| Trigger | Location | Cleanup Action |
|---|---|---|
RpcStreamEvent::End emitted | src/rpc/rpc_internals/rpc_session.rs:73-75 | self.rpc_stream_decoders.remove(&stream_id) |
FrameKind::End received | src/rpc/rpc_internals/rpc_session.rs:98-100 | self.rpc_stream_decoders.remove(&stream_id) |
FrameKind::Cancel received | src/rpc/rpc_internals/rpc_session.rs:98-100 | self.rpc_stream_decoders.remove(&stream_id) |
Decoding error in decode_rpc_frame() | src/rpc/rpc_internals/rpc_session.rs:82 | self.rpc_stream_decoders.remove(&stream_id) |
graph TD
Event["Stream Event"] --> CheckType{"Event Type?"}
CheckType --> |RpcStreamEvent::End| Cleanup1["Remove decoder\nfrom HashMap"]
CheckType --> |FrameKind::End| Cleanup2["Remove decoder\nfrom HashMap"]
CheckType --> |FrameKind::Cancel| Cleanup3["Remove decoder\nfrom HashMap"]
CheckType --> |Decode Error| Cleanup4["Remove decoder\nfrom HashMap"]
CheckType --> |Other| Continue["Continue processing"]
Cleanup1 --> Done["Stream resources freed"]
Cleanup2 --> Done
Cleanup3 --> Done
Cleanup4 --> Done
All cleanup operations use the HashMap::remove(&stream_id) method to immediately deallocate the RpcStreamDecoder instance and its buffered data.
Diagram 7: Stream Cleanup Triggers
Sources: src/rpc/rpc_internals/rpc_session.rs:73-100
Error Handling
When errors occur during frame decoding or stream processing, the RpcSession generates RpcStreamEvent::Error events and performs cleanup:
Error Scenarios
| Error Type | Source | Response |
|---|---|---|
| Invalid frame structure | FrameMuxStreamDecoder | Emit error event, continue with other streams |
| Corrupt RPC header | RpcStreamDecoder | Emit error event, remove decoder, return error |
| Read after cancel | FrameKind::Cancel received | Return FrameDecodeError::ReadAfterCancel |
All error events include:
- rpc_header: Header data if available
- rpc_request_id: Request ID if the header was decoded
- rpc_method_id: Method ID if the header was decoded
- frame_decode_error: The underlying error type
This information allows higher layers to perform appropriate error handling and reporting.
Sources: src/rpc/rpc_internals/rpc_session.rs:80-111 src/rpc/rpc_internals/rpc_stream_decoder.rs:165-166
Key Implementation Details
Thread Safety
RpcSession itself is not thread-safe and does not implement Send or Sync. This design choice aligns with the core philosophy of using callbacks rather than async/await. Higher-level components (like RpcDispatcher) are responsible for managing thread safety if needed.
Memory Efficiency
- Each stream decoder maintains its own buffer, but only while in the AwaitHeader state
- Once the header is extracted, payload chunks are forwarded directly without additional buffering
- Completed streams are immediately cleaned up, preventing unbounded memory growth
- The HashMap of decoders shrinks automatically as streams complete
Arc-Wrapped Headers
Headers are wrapped in Arc<RpcHeader> immediately upon decoding:
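For illustration, the sharing works as follows (struct fields simplified; the real RpcHeader is defined in rpc_header.rs):

```rust
// Cloning the Arc is a cheap pointer copy, so every event for a stream can carry
// the header without copying rpc_metadata_bytes.
use std::sync::Arc;

struct RpcHeaderSketch {
    rpc_request_id: u32,
    rpc_method_id: u64,
    rpc_metadata_bytes: Vec<u8>,
}

fn share_header() {
    let header = Arc::new(RpcHeaderSketch {
        rpc_request_id: 7,
        rpc_method_id: 0xDEAD_BEEF,
        rpc_metadata_bytes: vec![0; 4096],
    });

    let for_header_event = Arc::clone(&header); // shared, not deep-copied
    let for_payload_event = Arc::clone(&header);
    assert_eq!(Arc::strong_count(&header), 3);
    let _ = (for_header_event, for_payload_event);
}
```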
This allows multiple RpcStreamEvent instances for the same stream to share the same header data via Arc::clone() without deep cloning the rpc_metadata_bytes, which is particularly important for streams with large metadata.
Sources: src/rpc/rpc_internals/rpc_session.rs:15-33 src/rpc/rpc_internals/rpc_stream_decoder.rs:111-117 src/rpc/rpc_internals/rpc_stream_decoder.rs:13
Integration with Other Components
Relationship to Binary Framing
RpcSession depends on FrameMuxStreamDecoder from the binary framing layer but does not implement frame encoding/decoding itself. It operates at a higher level of abstraction, concerned with RPC-specific concepts like headers and stream events rather than raw frame structures.
See Binary Framing Protocol for details on frame encoding/decoding.
Relationship to RPC Dispatcher
RpcDispatcher (see RPC Dispatcher) sits above RpcSession and consumes the RpcStreamEvent callbacks. The dispatcher correlates requests with responses and manages the RPC protocol semantics, while RpcSession handles only the mechanical aspects of stream multiplexing.
Sources: README.md:29-35
RPC Dispatcher
Relevant source files
- src/rpc/rpc_dispatcher.rs
- src/rpc/rpc_internals/rpc_header.rs
- src/rpc/rpc_internals/rpc_respondable_session.rs
- src/rpc/rpc_request_response.rs
- tests/rpc_dispatcher_tests.rs
Purpose and Scope
The RpcDispatcher is the central request coordination component in muxio’s Core Transport Layer. It sits above RpcSession and below the RPC Service Layer, managing the complete lifecycle of RPC requests and responses. The dispatcher handles request correlation via unique IDs, multiplexed stream management, and response routing over the binary framed transport.
This document covers the internal architecture, request/response flow, queue management, and usage patterns. For the underlying stream multiplexing, see Stream Multiplexing. For the RpcRequest and RpcResponse data structures, see Request and Response Types.
Sources: src/rpc/rpc_dispatcher.rs:1-458
Architectural Context
Diagram: RpcDispatcher in Layered Architecture
The RpcDispatcher operates in a non-async, callback-based model compatible with WASM and multithreaded runtimes. Key responsibilities:
| Responsibility | Implementation |
|---|---|
| Request Correlation | next_rpc_request_id: u32 with monotonic increment |
| Response Routing | Per-request handlers in response_handlers: HashMap<u32, Handler> |
| Stream Management | Wraps RpcRespondableSession for lifecycle control |
| Payload Accumulation | rpc_request_queue: Arc<Mutex<VecDeque<(u32, RpcRequest)>>> |
| Error Propagation | fail_all_pending_requests() on connection drop |
Important: Each RpcDispatcher instance is bound to a single connection. Do not share across connections.
Sources: src/rpc/rpc_dispatcher.rs:20-51 src/rpc/rpc_dispatcher.rs:36-51
Core Components and Data Structures
Diagram: RpcDispatcher Internal Structure
Primary Fields
| Field | Type | Line | Purpose |
|---|---|---|---|
rpc_respondable_session | RpcRespondableSession<'a> | 37 | Delegates to RpcSession for frame encoding/decoding |
next_rpc_request_id | u32 | 42 | Monotonic counter for outbound request ID generation |
rpc_request_queue | Arc<Mutex<VecDeque<(u32, RpcRequest)>>> | 50 | Thread-safe queue tracking all active inbound requests |
RpcRespondableSession Internal State
| Field | Type | Purpose |
|---|---|---|
rpc_session | RpcSession | Manages stream IDs and frame encoding/decoding |
response_handlers | HashMap<u32, Box<dyn FnMut(RpcStreamEvent) + Send>> | Per-request response callbacks indexed by request_id |
catch_all_response_handler | Option<Box<dyn FnMut(RpcStreamEvent) + Send>> | Global fallback handler for unmatched events |
prebuffered_responses | HashMap<u32, Vec<u8>> | Accumulates payload bytes when prebuffering is enabled |
prebuffering_flags | HashMap<u32, bool> | Tracks which requests should prebuffer responses |
Sources: src/rpc/rpc_dispatcher.rs:36-51 src/rpc/rpc_internals/rpc_respondable_session.rs:21-28
Request Lifecycle
Outbound Request Flow
Diagram: Outbound Request Encoding via RpcDispatcher::call()
The call() method at src/rpc/rpc_dispatcher.rs:227-286 executes the following steps:
- ID Assignment (line 241): Captures self.next_rpc_request_id as the unique rpc_request_id
- ID Increment (line 242): Advances next_rpc_request_id using increment_u32_id()
- Header Construction (lines 252-257): Creates an RpcHeader struct with:
  - rpc_msg_type: RpcMessageType::Call
  - rpc_request_id (from the ID assignment step)
  - rpc_method_id (from RpcRequest.rpc_method_id)
  - rpc_metadata_bytes (converted from RpcRequest.rpc_param_bytes)
- Handler Registration (lines 260-266): Calls init_respondable_request(), which:
  - Stores the on_response handler in the response_handlers HashMap
  - Sets prebuffering_flags[request_id] to control response buffering
- Payload Transmission (lines 270-276): If rpc_prebuffered_payload_bytes exists, writes it to the encoder
- Stream Finalization (lines 279-283): If is_finalized, calls flush() and end_stream()
- Encoder Return (line 285): Returns RpcStreamEncoder<E> for additional streaming
Sources: src/rpc/rpc_dispatcher.rs:227-286 src/rpc/rpc_internals/rpc_respondable_session.rs:42-68
Inbound Response Flow
Diagram: Inbound Response Processing via RpcDispatcher::read_bytes()
The read_bytes() method at src/rpc/rpc_dispatcher.rs:362-374 processes incoming transport data:
- Frame Decoding (line 364): Delegates to self.rpc_respondable_session.read_bytes(bytes)
- Event Dispatch (src/rpc/rpc_internals/rpc_respondable_session.rs:93-173): RpcSession decodes frames into the RpcStreamEvent enum
- Handler Invocation: For each event:
  - Specific Handler (line 152): Calls response_handlers[rpc_request_id] if registered
  - Catch-All Handler (lines 102-208): Always invoked to populate rpc_request_queue
- Queue Mutations (via catch-all handler):
  - Header event: Creates a new RpcRequest and pushes it to the queue (line 140)
  - PayloadChunk event: Extends rpc_prebuffered_payload_bytes (lines 154-157)
  - End event: Sets is_finalized = true (line 177)
- Active IDs Return (lines 367-371): Locks the queue and returns a Vec<u32> of all request_ids
Response Handling Mechanisms
Dual Handler System
Diagram: Dual Handler Dispatch System
The dispatcher uses two parallel handler mechanisms at src/rpc/rpc_internals/rpc_respondable_session.rs:93-173:
Specific Response Handlers
Storage: response_handlers: HashMap<u32, Box<dyn FnMut(RpcStreamEvent) + Send + 'a>> (line 24)
| Aspect | Implementation |
|---|---|
| Registration | In init_respondable_request() at lines 61-62 when on_response is Some |
| Invocation | At lines 152-154 for each RpcStreamEvent matching rpc_request_id |
| Removal | At lines 161-162 when End or Error event received |
| Use Case | Application-level response processing with custom callbacks |
Catch-All Response Handler
Storage: catch_all_response_handler: Option<Box<dyn FnMut(RpcStreamEvent) + Send + 'a>> (line 25)
| Aspect | Implementation |
|---|---|
| Registration | In set_catch_all_response_handler() at lines 86-91 during RpcDispatcher::new() |
| Invocation | At lines 165-167 for all events not handled by specific handlers |
| Primary Role | Populates rpc_request_queue for queue-based processing pattern |
Catch-All Handler Responsibilities (lines 102-208):
- Header Event (lines 122-142): Creates a new RpcRequest and pushes it to rpc_request_queue
- PayloadChunk Event (lines 144-169): Extends rpc_prebuffered_payload_bytes in the existing request
- End Event (lines 171-185): Sets is_finalized = true
- Error Event (lines 187-206): Logs the error (currently no queue removal)
Sources: src/rpc/rpc_internals/rpc_respondable_session.rs:24-25 src/rpc/rpc_dispatcher.rs:99-209
Prebuffering vs. Streaming Modes
The prebuffer_response: bool parameter in call() (line 233) controls the response delivery mode:
| Mode | prebuffer_response | Implementation | Use Case |
|---|---|---|---|
| Prebuffering | true | Accumulates all PayloadChunk events into single buffer, delivers as one chunk when stream ends | Complete request/response RPCs where full payload needed before processing |
| Streaming | false | Delivers each PayloadChunk event immediately as received | Progressive rendering, large file transfers, real-time data |
Prebuffering Implementation (src/rpc/rpc_internals/rpc_respondable_session.rs:112-147):
Diagram: Prebuffering Control Flow
Key Data Structures:
- prebuffering_flags: HashMap<u32, bool> (line 27): Tracks mode per request, set at lines 57-58
- prebuffered_responses: HashMap<u32, Vec<u8>> (line 26): Accumulates bytes for prebuffered requests
Prebuffering Sequence (lines 112-147):
- Set Flag (lines 57-58): prebuffering_flags.insert(rpc_request_id, true) in init_respondable_request()
- Accumulate Chunks (line 127): buffer.extend_from_slice(bytes) for each PayloadChunk event
- Flush on End (lines 135-142): Emit a synthetic PayloadChunk with the full buffer, then emit End
- Cleanup (line 145): prebuffered_responses.remove(&rpc_id) after delivery
Sources: src/rpc/rpc_internals/rpc_respondable_session.rs:112-147 src/rpc/rpc_internals/rpc_respondable_session.rs:26-27
Request Queue Management
Queue Structure and Threading
Type: Arc<Mutex<VecDeque<(u32, RpcRequest)>>> at src/rpc/rpc_dispatcher.rs:50
Diagram: Request Queue Threading Model
The queue stores (request_id, RpcRequest) tuples for all active inbound requests. Each entry represents a request that has received at least a Header event.
RpcRequest Structure
Source: src/rpc/rpc_request_response.rs:10-33
| Field | Type | Mutability | Description |
|---|---|---|---|
rpc_method_id | u64 | Set on Header | Method identifier from header |
rpc_param_bytes | Option<Vec<u8>> | Set on Header | Metadata from RpcHeader.rpc_metadata_bytes |
rpc_prebuffered_payload_bytes | Option<Vec<u8>> | Grows on Chunks | Accumulated via extend_from_slice() at line 157 |
is_finalized | bool | Set on End | true when End event received at line 177 |
Sources: src/rpc/rpc_dispatcher.rs:50 src/rpc/rpc_request_response.rs:10-33
Queue Operation Methods
Diagram: Queue Mutation and Access Operations
Public API Methods
get_rpc_request(header_id: u32)
Signature: -> Option<MutexGuard<'_, VecDeque<(u32, RpcRequest)>>>
Location: src/rpc/rpc_dispatcher.rs:381-394
Behavior:
- Acquires the lock: `self.rpc_request_queue.lock()` (line 385)
- Searches for `header_id`: `queue.iter().any(|(id, _)| *id == header_id)` (line 389)
- Returns the entire `MutexGuard` if found, `None` otherwise
Rationale: A reference to a single queue element cannot be returned, since it could not outlive the `MutexGuard`; the caller must re-search under the guard.
Example Usage:
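A minimal usage sketch, assuming a `dispatcher: RpcDispatcher` in scope and public access to the `RpcRequest` fields listed above:

```rust
// Hedged sketch: re-search under the returned guard, as the rationale above requires.
if let Some(guard) = dispatcher.get_rpc_request(header_id) {
    if let Some((_, request)) = guard.iter().find(|(id, _)| *id == header_id) {
        println!(
            "method {} finalized: {}",
            request.rpc_method_id, request.is_finalized
        );
    }
}
```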
is_rpc_request_finalized(header_id: u32)
Signature: -> Option<bool>
Location: src/rpc/rpc_dispatcher.rs:399-405
Returns:
- `Some(true)`: Request exists and `is_finalized == true`
- `Some(false)`: Request exists but is not finalized
- `None`: Request not found in the queue
Implementation: Lines 401-404 search the queue and return `req.is_finalized`.
delete_rpc_request(header_id: u32)
Signature: -> Option<RpcRequest>
Location: src/rpc/rpc_dispatcher.rs:411-420
Behavior:
- Locks the queue: `self.rpc_request_queue.lock()` (line 412)
- Finds the index: `queue.iter().position(|(id, _)| *id == header_id)` (line 414)
- Removes the entry: `queue.remove(index)?` (line 416)
- Returns the owned `RpcRequest`, discarding the request ID
Typical Usage: Call after is_rpc_request_finalized() returns true to consume the completed request.
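A minimal sketch of that finalize-then-consume pattern, assuming a `dispatcher: RpcDispatcher` and a `header_id` obtained from `read_bytes()`:

```rust
// Hedged sketch: consume a request only once it is fully received.
if dispatcher.is_rpc_request_finalized(header_id) == Some(true) {
    if let Some(request) = dispatcher.delete_rpc_request(header_id) {
        // `request` is now owned: decode its params/payload and execute the method.
        println!("handling method {}", request.rpc_method_id);
    }
}
```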
Sources: src/rpc/rpc_dispatcher.rs:381-420
Server-Side Response Sending
respond() Method Flow
Diagram: RpcDispatcher::respond() Execution Flow
Method Signature: src/rpc/rpc_dispatcher.rs:298-337
Key Differences from call()
| Aspect | call() (Client) | respond() (Server) |
|---|---|---|
| Request ID | Generates a new ID via `next_rpc_request_id` (line 241) | Uses `rpc_response.rpc_request_id` from the original request (line 308) |
| Message Type | `RpcMessageType::Call` (line 253) | `RpcMessageType::Response` (line 309) |
| Metadata | `rpc_request.rpc_param_bytes` converted to `rpc_metadata_bytes` (line 250) | Only the `rpc_result_status` byte, if present (lines 313-317) |
| Handler Registration | Optionally registers an `on_response` handler (line 264) | No handler registration (responses don't receive responses) |
| Prebuffering | Supports the `prebuffer_response` parameter | Not applicable (prebuffering applies to receiving, not sending) |
Metadata Encoding
The metadata field in response headers carries only the result status (lines 313-317):
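A minimal sketch of that encoding, assuming the field names from the structures described later in Request and Response Types:

```rust
// Hedged sketch: the response header metadata is just the optional status byte.
let rpc_metadata_bytes: Vec<u8> = match rpc_response.rpc_result_status {
    Some(status) => vec![status],
    None => Vec::new(),
};
```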
Convention: While muxio core doesn’t enforce semantics, 0 typically indicates success. See RPC Service Errors for error code conventions.
Sources: src/rpc/rpc_dispatcher.rs:298-337 src/rpc/rpc_internals/rpc_respondable_session.rs:70-82
Error Handling and Cleanup
Mutex Poisoning Strategy
The rpc_request_queue uses a Mutex for synchronization. If a thread panics while holding the lock, the mutex becomes poisoned.
Poisoning Detection at src/rpc/rpc_dispatcher.rs:104-118:
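A minimal, self-contained sketch of the lock-or-panic policy described below; the real code lives inside `RpcDispatcher`, and `RpcRequest` is shown here only as a placeholder:

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex, MutexGuard};

// Placeholder for the real muxio type.
struct RpcRequest;

// Hedged sketch: a poisoned queue lock is treated as unrecoverable.
fn lock_queue(
    queue: &Arc<Mutex<VecDeque<(u32, RpcRequest)>>>,
) -> MutexGuard<'_, VecDeque<(u32, RpcRequest)>> {
    queue
        .lock()
        .expect("rpc_request_queue mutex poisoned: another thread panicked while mutating it")
}
```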
Design Rationale:
| Aspect | Justification |
|---|---|
| Panic on Poison | Poisoned mutex indicates another thread panicked during queue mutation |
| No Recovery | Inconsistent state could cause incorrect routing, data loss, silent failures |
| Fast Failure | Explicit crash provides clear debugging signal vs. undefined behavior |
| Production Safety | Better to fail loudly than corrupt application state |
Alternative Implementations (future consideration):
- Configurable panic policy via builder pattern
- Error reporting mechanism instead of panic
- Queue reconstruction from handler state
Other Lock Sites:
- `read_bytes()` at lines 367-370: Maps poison to `FrameDecodeError::CorruptFrame`
- `is_rpc_request_finalized()` at line 400: Returns `None` on lock failure
- `delete_rpc_request()` at line 412: Returns `None` on lock failure
Sources: src/rpc/rpc_dispatcher.rs:85-118 src/rpc/rpc_dispatcher.rs:362-374
Connection Failure Cleanup
When the transport connection drops, fail_all_pending_requests() at src/rpc/rpc_dispatcher.rs:427-456 prevents indefinite hangs:
Diagram: fail_all_pending_requests() Execution Flow
Implementation Steps:
- Ownership Transfer (line 436): `std::mem::take(&mut self.rpc_respondable_session.response_handlers)`
  - Moves all handlers out of the HashMap
  - Leaves `response_handlers` empty (prevents further invocations)
- Synthetic Error Creation (lines 444-450): Builds a synthetic error event describing the connection failure
- Handler Invocation (line 452): Calls `handler(error_event)` for each pending request
  - Wakes async Futures waiting for responses
  - Triggers error handling in callback-based code
- Automatic Cleanup: Handlers are dropped after the loop completes (see the sketch below)
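A minimal sketch of this take-and-notify pattern, using hypothetical handler and error types rather than the crate's real `RpcStreamEvent`:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the boxed response handlers described above.
type ResponseHandler = Box<dyn FnMut(Result<Vec<u8>, String>) + Send>;

fn fail_all_pending_requests(
    response_handlers: &mut HashMap<u32, ResponseHandler>,
    reason: &str,
) {
    // 1. Take ownership of every handler, leaving the map empty so no
    //    further invocations can occur for these requests.
    let handlers = std::mem::take(response_handlers);

    // 2-3. Deliver a synthetic error to each pending request, waking any
    //      caller still waiting on a response.
    for (_request_id, mut handler) in handlers {
        handler(Err(format!("connection dropped: {reason}")));
    }
    // 4. Handlers are dropped automatically when the loop ends.
}
```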
Usage Context: Transport implementations (e.g., muxio-tokio-rpc-client, muxio-wasm-rpc-client) call this in WebSocket close handlers.
Sources: src/rpc/rpc_dispatcher.rs:427-456
Thread Safety
The dispatcher achieves thread safety through:
Shared Request Queue
- `Arc`: Enables shared ownership across threads and callbacks
- `Mutex`: Ensures exclusive access during mutations
- `VecDeque`: Efficient push/pop operations for queue semantics
Handler Storage
Response handlers are stored as boxed trait objects:
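The exact alias is not reproduced here; a hedged sketch of the shape, with `RpcStreamEvent` standing in for the crate's real event enum:

```rust
// Placeholder for the real event enum (Header / PayloadChunk / End / Error).
enum RpcStreamEvent {
    Header,
    PayloadChunk(Vec<u8>),
    End,
    Error(String),
}

// Hedged sketch: handlers are boxed callbacks that must be Send so they can be
// invoked from whichever thread drives the dispatcher.
type RpcResponseHandler = Box<dyn FnMut(RpcStreamEvent) + Send>;
```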
The Send bound allows handlers to be invoked from different threads if the dispatcher is shared across threads.
Sources: src/rpc/rpc_dispatcher.rs:50 src/rpc/rpc_internals/rpc_respondable_session.rs:24
Usage Patterns
Client-Side Pattern
1. Create RpcDispatcher
2. For each RPC call:
a. Build RpcRequest with method_id, params, payload
b. Call dispatcher.call() with on_response handler
c. Write bytes to transport via on_emit callback
3. When receiving data from transport:
a. Call dispatcher.read_bytes()
b. Response handlers are invoked automatically
Sources: tests/rpc_dispatcher_tests.rs:32-124
Server-Side Pattern
1. Create RpcDispatcher
2. When receiving data from transport:
a. Call dispatcher.read_bytes()
b. Returns list of active request IDs
3. For each active request ID:
a. Check is_rpc_request_finalized()
b. If finalized, delete_rpc_request() to retrieve full request
c. Process request (decode params, execute method)
d. Build RpcResponse with result
e. Call dispatcher.respond() to send response
4. Write response bytes to transport via on_emit callback
Sources: tests/rpc_dispatcher_tests.rs:126-202
graph TB
Dispatcher["RpcDispatcher"]
subgraph "Client Role"
OutboundCall["call()\nInitiate request"]
InboundResponse["read_bytes()\nReceive response"]
end
subgraph "Server Role"
InboundRequest["read_bytes()\nReceive request"]
OutboundResponse["respond()\nSend response"]
end
Dispatcher --> OutboundCall
Dispatcher --> InboundResponse
Dispatcher --> InboundRequest
Dispatcher --> OutboundResponse
OutboundCall -.emits.-> Transport["Transport Layer"]
Transport -.delivers.-> InboundRequest
OutboundResponse -.emits.-> Transport
Transport -.delivers.-> InboundResponse
Bidirectional Pattern
The same dispatcher instance can handle both client and server roles simultaneously:
Diagram: Bidirectional Request/Response Flow
This pattern enables peer-to-peer architectures where both endpoints can initiate requests and respond to requests.
Sources: src/rpc/rpc_dispatcher.rs:20-51
Implementation Notes
Request ID Generation
Location: src/rpc/rpc_dispatcher.rs:241-242
Mechanism:
- Captures the current `self.next_rpc_request_id` for the outgoing request
- Calls `increment_u32_id()` from src/utils/increment_u32_id.rs (implementation in `crate::utils`)
- Updates `self.next_rpc_request_id` with the next value
Wraparound Behavior:
- After reaching `u32::MAX` (4,294,967,295), wraps to `0` and continues
- Provides a monotonic sequence within the 32-bit range (sketched below)
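A minimal sketch of the wraparound behavior; the real helper lives in `crate::utils` and may differ in signature or detail:

```rust
// Hedged sketch: monotonically increasing IDs that wrap from u32::MAX back to 0.
fn increment_u32_id(current: u32) -> u32 {
    current.wrapping_add(1)
}

fn main() {
    assert_eq!(increment_u32_id(41), 42);
    assert_eq!(increment_u32_id(u32::MAX), 0); // wraps to 0 and continues
}
```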
Collision Analysis:
| Connection Duration | Requests/Second | Time to Wraparound | Collision Risk |
|---|---|---|---|
| Short-lived (hours) | 1,000 | 49.7 days | Negligible |
| Long-lived (days) | 10,000 | 4.97 days | Very low |
| High-throughput | 100,000 | 11.9 hours | Consider ID reuse detection |
Mitigation for Long-Running Connections:
- Track active request IDs in a `HashSet` before assignment
- Reject or delay requests if an ID would collide with a pending request
- Not currently implemented (acceptable for typical usage)
Initialization: Starts at the first ID from `increment_u32_id()` at line 64, during `RpcDispatcher::new()`
Sources: src/rpc/rpc_dispatcher.rs:241-242 src/rpc/rpc_dispatcher.rs:64
Non-Async Design
The dispatcher uses callbacks instead of async/await for several reasons:
- WASM Compatibility : Avoids dependency on async runtimes that may not work in WASM
- Runtime Agnostic : Works with Tokio, async-std, or no runtime at all
- Deterministic : No hidden scheduling or context switching
- Zero-Cost : No Future state machines or executor overhead
Higher-level abstractions (like those in muxio-rpc-service-caller) can wrap the dispatcher with async interfaces when desired.
Sources: src/rpc/rpc_dispatcher.rs:26-27
Summary
The RpcDispatcher provides the core request/response coordination layer for muxio’s RPC framework:
| Responsibility | Mechanism |
|---|---|
| Request Correlation | Unique request IDs with monotonic generation |
| Response Routing | Per-request handlers + catch-all fallback |
| Stream Management | Wraps RpcRespondableSession for encoder lifecycle |
| Payload Accumulation | Optional prebuffering or streaming delivery |
| Queue Management | Thread-safe VecDeque for tracking active requests |
| Error Propagation | Synthetic error events on connection failure |
| Thread Safety | Arc<Mutex<>> for shared state |
The dispatcher’s non-async, callback-based design enables deployment across native and WASM environments while maintaining type safety and performance.
Sources: src/rpc/rpc_dispatcher.rs:1-458
Request and Response Types
Relevant source files
- src/rpc/rpc_dispatcher.rs
- src/rpc/rpc_internals/rpc_header.rs
- src/rpc/rpc_internals/rpc_respondable_session.rs
- src/rpc/rpc_request_response.rs
- tests/rpc_dispatcher_tests.rs
Purpose and Scope
This document defines the core data structures used to represent RPC requests and responses in the muxio framework: RpcRequest, RpcResponse, and RpcHeader. These types provide a runtime-agnostic, schemaless representation of method invocations and their corresponding replies. They serve as the fundamental building blocks for all RPC communication in the system.
For information about how these types are processed and routed, see RPC Dispatcher. For information about how they are serialized and transmitted at the binary level, see Binary Framing Protocol.
Type Hierarchy and Relationships
The request-response system is built on three primary types that work together to enable method invocation and correlation:
Sources:
graph TB
subgraph "Application Layer"
USER["User Code\nService Definitions"]
end
subgraph "RPC Protocol Layer Types"
REQ["RpcRequest\nsrc/rpc/rpc_request_response.rs"]
RESP["RpcResponse\nsrc/rpc/rpc_request_response.rs"]
HDR["RpcHeader\nsrc/rpc/rpc_internals/rpc_header.rs"]
end
subgraph "Processing Components"
DISP["RpcDispatcher::call()\nRpcDispatcher::respond()"]
SESSION["RpcRespondableSession"]
end
USER -->|constructs| REQ
USER -->|constructs| RESP
REQ -->|converted to| HDR
RESP -->|converted to| HDR
HDR -->|can be converted back to| RESP
DISP -->|processes| REQ
DISP -->|processes| RESP
DISP -->|delegates to| SESSION
style REQ fill:#f9f9f9
style RESP fill:#f9f9f9
style HDR fill:#f9f9f9
- src/rpc/rpc_request_response.rs:1-105
- src/rpc/rpc_internals/rpc_header.rs:1-25
- src/rpc/rpc_dispatcher.rs:1-457
RpcHeader Structure
RpcHeader is the internal protocol-level representation of RPC metadata transmitted in frame headers. It contains all information necessary to route and identify an RPC message.
Field Description
| Field | Type | Purpose |
|---|---|---|
| `rpc_msg_type` | `RpcMessageType` | Discriminates between Call, Response, Event, etc. |
| `rpc_request_id` | `u32` | Unique identifier for request-response correlation |
| `rpc_method_id` | `u64` | Identifier (typically a hash) of the method being invoked |
| `rpc_metadata_bytes` | `Vec<u8>` | Schemaless encoded parameters or status information |
Type Definition
The complete structure is defined at src/rpc/rpc_internals/rpc_header.rs:3-24:
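The source is not reproduced here; instead, a hedged sketch based on the field table above (visibility, derives, and the placeholder enum are assumptions):

```rust
// Placeholder discriminant; the real enum lives in the muxio crate.
pub enum RpcMessageType {
    Call,
    Response,
    Event,
}

// Hedged sketch of the protocol-level header; see the field table above.
pub struct RpcHeader {
    pub rpc_msg_type: RpcMessageType,
    pub rpc_request_id: u32,
    pub rpc_method_id: u64,
    pub rpc_metadata_bytes: Vec<u8>,
}
```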
Metadata Semantics
The rpc_metadata_bytes field has different interpretations depending on the message type:
- For Calls : Contains serialized method parameters (e.g., bitcode-encoded arguments)
- For Responses : Contains a single-byte result status code, or is empty if no status is provided
- Empty Vector : Valid and indicates no metadata is present
Sources:
- src/rpc/rpc_internals/rpc_header.rs:1-25
- src/rpc/rpc_dispatcher.rs:250-257 (Call metadata handling)
- src/rpc/rpc_dispatcher.rs:311-319 (Response metadata handling)
RpcRequest Structure
RpcRequest represents an outbound RPC call initiated by a client. It is constructed by user code and passed to RpcDispatcher::call() for transmission.
Field Description
| Field | Type | Purpose |
|---|---|---|
| `rpc_method_id` | `u64` | Unique method identifier (typically an xxhash of the method name) |
| `rpc_param_bytes` | `Option<Vec<u8>>` | Serialized method parameters (`None` if no parameters) |
| `rpc_prebuffered_payload_bytes` | `Option<Vec<u8>>` | Optional payload to send after the header |
| `is_finalized` | `bool` | If `true`, the stream ends immediately after the header and payload |
Type Definition
Defined at src/rpc/rpc_request_response.rs:9-33:
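Rather than reproducing the source, here is a hedged sketch matching the field table above (field visibility is an assumption):

```rust
// Hedged sketch of an outbound RPC call; see the field table above.
pub struct RpcRequest {
    pub rpc_method_id: u64,
    pub rpc_param_bytes: Option<Vec<u8>>,
    pub rpc_prebuffered_payload_bytes: Option<Vec<u8>>,
    pub is_finalized: bool,
}
```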
Usage Patterns
Finalized Request (Single-Frame RPC)
When the entire request is known upfront:
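A minimal sketch mirroring the test referenced below; `ADD_METHOD_ID` and `encoded_params` are placeholders for a real method ID constant and bitcode-encoded parameters:

```rust
// Hedged sketch: a fully prebuffered, finalized single-frame request.
let rpc_request = RpcRequest {
    rpc_method_id: ADD_METHOD_ID,
    rpc_param_bytes: Some(encoded_params),
    rpc_prebuffered_payload_bytes: None,
    is_finalized: true,
};
```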
This pattern is demonstrated at tests/rpc_dispatcher_tests.rs:42-49
Streaming Request (Multi-Frame RPC)
When payload will be written incrementally:
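A hedged sketch of the streaming variant; with `is_finalized: false`, the caller keeps the returned `RpcStreamEncoder` and later ends the stream explicitly (see the finalization notes further below). `UPLOAD_METHOD_ID` is a hypothetical constant:

```rust
// Hedged sketch: header-only request whose payload is written incrementally.
let rpc_request = RpcRequest {
    rpc_method_id: UPLOAD_METHOD_ID, // hypothetical method ID constant
    rpc_param_bytes: Some(encoded_params),
    rpc_prebuffered_payload_bytes: None,
    is_finalized: false, // keep the stream open for subsequent writes
};
```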
Conversion to RpcHeader
When RpcDispatcher::call() is invoked, the RpcRequest is converted to an RpcHeader at src/rpc/rpc_dispatcher.rs:249-257:
Sources:
- src/rpc/rpc_request_response.rs:3-33
- src/rpc/rpc_dispatcher.rs:227-286
- tests/rpc_dispatcher_tests.rs:30-203
RpcResponse Structure
RpcResponse represents the reply to a prior RPC request. It is constructed on the server side and passed to RpcDispatcher::respond() for transmission back to the client.
Field Description
| Field | Type | Purpose |
|---|---|---|
| `rpc_request_id` | `u32` | Must match the original request's `rpc_request_id` for correlation |
| `rpc_method_id` | `u64` | Should match the original request's `rpc_method_id` |
| `rpc_result_status` | `Option<u8>` | Optional status byte (by convention, `0` = success) |
| `rpc_prebuffered_payload_bytes` | `Option<Vec<u8>>` | Serialized response payload |
| `is_finalized` | `bool` | If `true`, the stream ends immediately after the header and payload |
Type Definition
Defined at src/rpc/rpc_request_response.rs:40-76:
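A hedged sketch based on the field table above (field visibility is an assumption):

```rust
// Hedged sketch of a reply to a prior request; see the field table above.
pub struct RpcResponse {
    pub rpc_request_id: u32,
    pub rpc_method_id: u64,
    pub rpc_result_status: Option<u8>,
    pub rpc_prebuffered_payload_bytes: Option<Vec<u8>>,
    pub is_finalized: bool,
}
```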
Construction from RpcHeader
The from_rpc_header() method provides a convenience constructor for server-side response creation at src/rpc/rpc_request_response.rs:90-103:
Usage Example
Server-side response construction from tests/rpc_dispatcher_tests.rs:161-167:
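A minimal sketch of that construction's shape (step 9 of the lifecycle table later in this page); `request_id`, `ADD_METHOD_ID`, and `encoded_result` are placeholders:

```rust
// Hedged sketch: echo the original request ID and method ID, attach the
// bitcode-encoded result, and finalize in a single frame.
let rpc_response = RpcResponse {
    rpc_request_id: request_id,
    rpc_method_id: ADD_METHOD_ID,
    rpc_result_status: Some(0), // 0 = success by convention
    rpc_prebuffered_payload_bytes: Some(encoded_result),
    is_finalized: true,
};
```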
Conversion to RpcHeader
When RpcDispatcher::respond() is invoked, the RpcResponse is converted to an RpcHeader at src/rpc/rpc_dispatcher.rs:307-319:
Sources:
- src/rpc/rpc_request_response.rs:35-105
- src/rpc/rpc_dispatcher.rs:298-337
- tests/rpc_dispatcher_tests.rs:150-190
sequenceDiagram
participant Client as "Client Code"
participant Disp as "RpcDispatcher"
participant IDGen as "next_rpc_request_id\n(u32 counter)"
participant Queue as "rpc_request_queue\nArc<Mutex<VecDeque>>"
Note over Client,Queue: Request Path
Client->>Disp: call(RpcRequest)
Disp->>IDGen: Allocate ID
IDGen-->>Disp: rpc_request_id = N
Disp->>Disp: Build RpcHeader with\nrpc_request_id=N
Note over Disp: Transmit request frames...
Note over Client,Queue: Response Path
Disp->>Disp: Receive response frames
Disp->>Disp: Parse RpcHeader from frames
Disp->>Queue: Find request by rpc_request_id=N
Queue-->>Disp: Matched RpcRequest entry
Disp->>Disp: Invoke response handler
Disp->>Queue: Remove entry on End/Error
Request-Response Correlation Mechanism
The system correlates requests and responses using the rpc_request_id field, which is managed by the dispatcher’s monotonic ID generator.
ID Generation
The dispatcher maintains a monotonic counter at src/rpc/rpc_dispatcher.rs:42:
Each call increments this counter using increment_u32_id() at src/rpc/rpc_dispatcher.rs:241-242:
Request Queue Management
Inbound requests (responses from the remote peer) are tracked in a shared queue at src/rpc/rpc_dispatcher.rs:50:
The queue is populated by the catch-all response handler at src/rpc/rpc_dispatcher.rs:122-141:
- Header Event: Creates a new `RpcRequest` entry with its `rpc_request_id`
- PayloadChunk Event: Appends bytes to the matching request's payload
- End Event: Marks the request as finalized
Requests can be retrieved and removed using:
- `get_rpc_request(header_id)` at src/rpc/rpc_dispatcher.rs:381-394
- `is_rpc_request_finalized(header_id)` at src/rpc/rpc_dispatcher.rs:399-405
- `delete_rpc_request(header_id)` at src/rpc/rpc_dispatcher.rs:411-420
Sources:
- src/rpc/rpc_dispatcher.rs:31-51
- src/rpc/rpc_dispatcher.rs:227-286
- src/rpc/rpc_dispatcher.rs:99-209
- src/utils.rs (increment_u32_id implementation)
graph LR
subgraph "Client Side - Outbound Call"
C1["User constructs\nRpcRequest"]
C2["RpcDispatcher::call()"]
C3["Convert to RpcHeader\nrpc_msg_type=Call"]
C4["RpcStreamEncoder\nSerialize + Frame"]
C5["Bytes on wire"]
end
subgraph "Server Side - Inbound Call"
S1["Bytes received"]
S2["RpcSession::read_bytes()"]
S3["Decode to RpcHeader"]
S4["RpcStreamEvent::Header\n+PayloadChunk\n+End"]
S5["Reconstruct RpcRequest\nin rpc_request_queue"]
S6["User retrieves\ndelete_rpc_request()"]
end
subgraph "Server Side - Outbound Response"
R1["User constructs\nRpcResponse"]
R2["RpcDispatcher::respond()"]
R3["Convert to RpcHeader\nrpc_msg_type=Response"]
R4["RpcStreamEncoder\nSerialize + Frame"]
R5["Bytes on wire"]
end
subgraph "Client Side - Inbound Response"
R6["Bytes received"]
R7["RpcSession::read_bytes()"]
R8["Decode to RpcHeader"]
R9["RpcStreamEvent fired\nto response_handler"]
R10["User processes\nresponse payload"]
end
C1 --> C2
C2 --> C3
C3 --> C4
C4 --> C5
C5 -.network.-> S1
S1 --> S2
S2 --> S3
S3 --> S4
S4 --> S5
S5 --> S6
R1 --> R2
R2 --> R3
R3 --> R4
R4 --> R5
R5 -.network.-> R6
R6 --> R7
R7 --> R8
R8 --> R9
R9 --> R10
Data Flow Through Type Transformations
The following diagram illustrates how request and response data flows through type transformations:
Sources:
- src/rpc/rpc_dispatcher.rs:227-286 (Call path)
- src/rpc/rpc_dispatcher.rs:298-337 (Response path)
- src/rpc/rpc_dispatcher.rs:99-209 (Inbound request reconstruction)
- src/rpc/rpc_internals/rpc_respondable_session.rs:93-173 (Stream event handling)
Field Semantics and Special Cases
Prebuffered Payloads
Both RpcRequest and RpcResponse support rpc_prebuffered_payload_bytes:
- Purpose : Allows sending the entire payload in a single transmission without manual streaming
- Transmission: Sent immediately after the header via `encoder.write_bytes()` at src/rpc/rpc_dispatcher.rs:270-276 and src/rpc/rpc_dispatcher.rs:327-329
- Use Case: Suitable for small to medium-sized payloads where chunking overhead is undesirable
Finalization Flag
The is_finalized field controls stream lifecycle:
- `true`: The stream is ended immediately after the header and prebuffered payload are sent
- `false`: The caller retains the `RpcStreamEncoder` and must manually call `end_stream()`
- Implementation: See src/rpc/rpc_dispatcher.rs:279-283 and src/rpc/rpc_dispatcher.rs:331-334
Result Status Conventions
While the core library does not enforce semantics for rpc_result_status, the following conventions are commonly used:
| Value | Meaning |
|---|---|
| `Some(0)` | Success |
| `Some(1)` | Generic error |
| `Some(2+)` | Custom error codes |
| `None` | No status information |
This convention is referenced in the documentation at src/rpc/rpc_request_response.rs:61-62
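A small sketch of how an application might interpret the convention; muxio core itself does not enforce these meanings:

```rust
// Hedged sketch: map the conventional status byte to a description.
fn describe_status(status: Option<u8>) -> &'static str {
    match status {
        Some(0) => "success",
        Some(1) => "generic error",
        Some(_) => "custom error code",
        None => "no status information",
    }
}
```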
Sources:
- src/rpc/rpc_request_response.rs:1-105
- src/rpc/rpc_dispatcher.rs:227-337
- tests/rpc_dispatcher_tests.rs:30-203
Complete Request-Response Lifecycle Example
The following table illustrates a complete request-response cycle from the test suite at tests/rpc_dispatcher_tests.rs:42-198:
| Step | Location | Action | Data Structure |
|---|---|---|---|
| 1 | Client | Construct request | RpcRequest { method_id: ADD_METHOD_ID, param_bytes: Some(encoded), prebuffered_payload: None, is_finalized: true } |
| 2 | Client | Call dispatcher | client_dispatcher.call(rpc_request, 4, on_emit, on_response, true) |
| 3 | Client | Convert to header | RpcHeader { msg_type: Call, request_id: 1, method_id: ADD_METHOD_ID, metadata: encoded_params } |
| 4 | Transport | Transmit bytes | Binary frames written to outgoing_buf |
| 5 | Server | Receive bytes | server_dispatcher.read_bytes(chunk) |
| 6 | Server | Decode to events | RpcStreamEvent::Header, RpcStreamEvent::End |
| 7 | Server | Reconstruct request | Entry added to rpc_request_queue with (request_id, RpcRequest) |
| 8 | Server | Retrieve request | server_dispatcher.delete_rpc_request(request_id) |
| 9 | Server | Process and respond | RpcResponse { request_id: 1, method_id: ADD_METHOD_ID, result_status: Some(0), payload: encoded_result, is_finalized: true } |
| 10 | Server | Convert to header | RpcHeader { msg_type: Response, request_id: 1, method_id: ADD_METHOD_ID, metadata: [0] } |
| 11 | Transport | Transmit response | Binary frames via server_dispatcher.respond() |
| 12 | Client | Receive response | client_dispatcher.read_bytes() routes to on_response handler |
| 13 | Client | Process result | User code decodes payload from RpcStreamEvent::PayloadChunk |
Sources:
RPC Framework
Relevant source files
Purpose and Scope
This document provides a comprehensive overview of the RPC (Remote Procedure Call) abstraction layer in the rust-muxio system. The RPC framework is built on top of the core muxio multiplexing library and provides a structured, type-safe mechanism for defining and invoking remote methods across client-server boundaries.
The RPC framework consists of three primary components distributed across separate crates:
- Service definition traits and method identification (muxio-rpc-service)
- Client-side call invocation (muxio-rpc-service-caller)
- Server-side request handling (muxio-rpc-service-endpoint)
For details on specific transport implementations that use this RPC framework, see Transport Implementations. For information on the underlying multiplexing and framing protocol, see Core Library (muxio)).
Architecture Overview
The RPC framework operates as a middleware layer between application code and the underlying muxio multiplexing protocol. It provides compile-time type safety while maintaining flexibility in serialization and transport choices.
graph TB
subgraph "Application Layer"
APP["Application Code\nType-safe method calls"]
end
subgraph "RPC Service Definition Layer"
SERVICE["muxio-rpc-service"]
TRAIT["RpcMethodPrebuffered\nRpcMethodStreaming traits"]
METHOD_ID["METHOD_ID generation\nxxhash at compile-time"]
ENCODE["encode_request/response\ndecode_request/response"]
SERVICE --> TRAIT
SERVICE --> METHOD_ID
SERVICE --> ENCODE
end
subgraph "Client Side"
CALLER["muxio-rpc-service-caller"]
CALLER_IFACE["RpcServiceCallerInterface"]
PREBUF_CALL["call_prebuffered"]
STREAM_CALL["call_streaming"]
CALLER --> CALLER_IFACE
CALLER_IFACE --> PREBUF_CALL
CALLER_IFACE --> STREAM_CALL
end
subgraph "Server Side"
ENDPOINT["muxio-rpc-service-endpoint"]
ENDPOINT_IFACE["RpcServiceEndpointInterface"]
REGISTER_PREBUF["register_prebuffered"]
REGISTER_STREAM["register_streaming"]
ENDPOINT --> ENDPOINT_IFACE
ENDPOINT_IFACE --> REGISTER_PREBUF
ENDPOINT_IFACE --> REGISTER_STREAM
end
subgraph "Core Multiplexing Layer"
DISPATCHER["RpcDispatcher"]
MUXIO_CORE["muxio core\nBinary framing protocol"]
DISPATCHER --> MUXIO_CORE
end
APP --> TRAIT
APP --> CALLER_IFACE
TRAIT -.shared definitions.-> CALLER
TRAIT -.shared definitions.-> ENDPOINT
CALLER --> DISPATCHER
ENDPOINT --> DISPATCHER
PREBUF_CALL -.invokes.-> DISPATCHER
STREAM_CALL -.invokes.-> DISPATCHER
REGISTER_PREBUF -.handles via.-> DISPATCHER
REGISTER_STREAM -.handles via.-> DISPATCHER
RPC Framework Component Structure
Sources:
- Cargo.toml:19-31
- extensions/muxio-rpc-service/Cargo.toml
- extensions/muxio-rpc-service-caller/Cargo.toml
- extensions/muxio-rpc-service-endpoint/Cargo.toml
- High-level architecture diagrams
Core RPC Components
The RPC framework is divided into three specialized crates, each with a distinct responsibility in the RPC lifecycle.
Component Responsibilities
| Crate | Primary Responsibility | Key Traits/Types | Dependencies |
|---|---|---|---|
| `muxio-rpc-service` | Service definition contracts | `RpcMethodPrebuffered`, `RpcMethodStreaming`, `METHOD_ID` | muxio, bitcode, xxhash-rust, num_enum |
| `muxio-rpc-service-caller` | Client-side invocation | `RpcServiceCallerInterface`, `call_prebuffered`, `call_streaming` | muxio, muxio-rpc-service, futures |
| `muxio-rpc-service-endpoint` | Server-side dispatch | `RpcServiceEndpointInterface`, `register_prebuffered`, `register_streaming` | muxio, muxio-rpc-service, muxio-rpc-service-caller |
Sources:
RPC Method Definition and Identification
The foundation of the RPC framework is the method definition system, which establishes compile-time contracts between clients and servers.
graph LR
subgraph "Compile Time"
METHOD_NAME["Method Name String\ne.g., 'Add'"]
XXHASH["xxhash-rust\nconst_xxh3"]
METHOD_ID["METHOD_ID: u64\nCompile-time constant"]
METHOD_NAME --> XXHASH
XXHASH --> METHOD_ID
end
subgraph "Service Definition Trait"
TRAIT_IMPL["RpcMethodPrebuffered impl"]
CONST_ID["const METHOD_ID"]
ENCODE_REQ["encode_request"]
DECODE_REQ["decode_request"]
ENCODE_RESP["encode_response"]
DECODE_RESP["decode_response"]
TRAIT_IMPL --> CONST_ID
TRAIT_IMPL --> ENCODE_REQ
TRAIT_IMPL --> DECODE_REQ
TRAIT_IMPL --> ENCODE_RESP
TRAIT_IMPL --> DECODE_RESP
end
subgraph "Bitcode Serialization"
BITCODE["bitcode crate"]
PARAMS["Request/Response types\nSerialize + Deserialize"]
ENCODE_REQ --> BITCODE
DECODE_REQ --> BITCODE
ENCODE_RESP --> BITCODE
DECODE_RESP --> BITCODE
BITCODE --> PARAMS
end
METHOD_ID --> CONST_ID
Method ID Generation Process
The METHOD_ID is a u64 value generated at compile time by hashing the method name using xxhash-rust. This approach ensures:
- Collision prevention : Hash-based IDs virtually eliminate accidental collisions
- Zero runtime overhead : IDs are compile-time constants
- Version independence : Method IDs remain stable across compilations
Sources:
Type Safety Through Shared Definitions
The RPC framework enforces type safety by requiring both client and server to depend on the same service definition crate. This creates a compile-time contract that prevents API mismatches.
sequenceDiagram
participant DEV as "Developer"
participant DEF as "Service Definition Crate"
participant CLIENT as "Client Crate"
participant SERVER as "Server Crate"
participant COMPILER as "Rust Compiler"
DEV->>DEF: Define RpcMethodPrebuffered
DEF->>DEF: Generate METHOD_ID
DEF->>DEF: Define Request/Response types
DEV->>CLIENT: Add dependency on DEF
DEV->>SERVER: Add dependency on DEF
CLIENT->>DEF: Import method traits
SERVER->>DEF: Import method traits
CLIENT->>COMPILER: Compile with encode_request
SERVER->>COMPILER: Compile with decode_request
alt Type Mismatch
COMPILER->>DEV: Compilation Error
else Types Match
COMPILER->>CLIENT: Successful build
COMPILER->>SERVER: Successful build
end
Note over CLIENT,SERVER: Both use identical\nMETHOD_ID and data structures
Shared Definition Workflow
This workflow demonstrates how compile-time validation eliminates an entire class of runtime errors. If the client attempts to send a request with a different structure than what the server expects, the code will not compile.
Sources:
sequenceDiagram
participant APP as "Application Code"
participant METHOD as "Method::call()\nRpcMethodPrebuffered"
participant CALLER as "RpcServiceCallerInterface"
participant DISP as "RpcDispatcher"
participant FRAME as "Binary Framing Layer"
participant TRANSPORT as "Transport\n(WebSocket, etc.)"
participant ENDPOINT as "RpcServiceEndpointInterface"
participant HANDLER as "Registered Handler"
APP->>METHOD: call(params)
METHOD->>METHOD: encode_request(params) → bytes
METHOD->>CALLER: call_prebuffered(METHOD_ID, bytes)
CALLER->>DISP: send_request(method_id, request_bytes)
DISP->>DISP: Assign unique request_id
DISP->>FRAME: Serialize to binary frames
FRAME->>TRANSPORT: Transmit frames
TRANSPORT->>FRAME: Receive frames
FRAME->>DISP: Reassemble frames
DISP->>DISP: Lookup handler by METHOD_ID
DISP->>ENDPOINT: dispatch_to_handler(METHOD_ID, bytes)
ENDPOINT->>HANDLER: invoke(request_bytes, context)
HANDLER->>METHOD: decode_request(bytes) → params
HANDLER->>HANDLER: Process business logic
HANDLER->>METHOD: encode_response(result) → bytes
HANDLER->>ENDPOINT: Return response_bytes
ENDPOINT->>DISP: send_response(request_id, bytes)
DISP->>FRAME: Serialize to binary frames
FRAME->>TRANSPORT: Transmit frames
TRANSPORT->>FRAME: Receive frames
FRAME->>DISP: Reassemble frames
DISP->>DISP: Match request_id to pending call
DISP->>CALLER: resolve_future(request_id, bytes)
CALLER->>METHOD: decode_response(bytes) → result
METHOD->>APP: Return typed result
RPC Call Flow
Understanding how an RPC call travels through the system is essential for debugging and optimization.
Complete RPC Invocation Sequence
Key observations:
- The `METHOD_ID` is used for routing on the server side
- The `request_id` (assigned by the dispatcher) is used for correlation
- All serialization/deserialization happens at the method trait level
- The dispatcher only handles raw bytes
Sources:
- README.md:70-160
- High-level architecture diagram 3 (RPC Communication Flow)
Prebuffered vs. Streaming RPC
The RPC framework supports two distinct calling patterns, each optimized for different use cases.
RPC Pattern Comparison
| Aspect | Prebuffered RPC | Streaming RPC |
|---|---|---|
| Request Size | Complete request buffered in memory | Request can be sent in chunks |
| Response Size | Complete response buffered in memory | Response can be received in chunks |
| Memory Usage | Higher for large payloads | Lower, constant memory footprint |
| Latency | Lower for small payloads | Higher initial latency, better throughput |
| Trait | RpcMethodPrebuffered | RpcMethodStreaming |
| Use Cases | Small to medium payloads (< 10MB) | Large payloads, file transfers, real-time data |
| Multiplexing | Multiple calls can be concurrent | Streams can be interleaved |
Sources:
- Section titles from table of contents
- README.md:28
classDiagram
class RpcServiceCallerInterface {<<trait>>\n+call_prebuffered(method_id: u64, params: Option~Vec~u8~~, payload: Option~Vec~u8~~) Future~Result~Vec~u8~~~\n+call_streaming(method_id: u64, params: Option~Vec~u8~~) Future~Result~StreamResponse~~\n+get_transport_state() RpcTransportState\n+set_state_change_handler(handler: Fn) Future}
class RpcTransportState {<<enum>>\nConnecting\nConnected\nDisconnected\nFailed}
class RpcClient {+new(host, port) RpcClient\nimplements RpcServiceCallerInterface}
class RpcWasmClient {+new(url) RpcWasmClient\nimplements RpcServiceCallerInterface}
class CustomClient {+new(...) CustomClient\nimplements RpcServiceCallerInterface}
RpcServiceCallerInterface <|.. RpcClient : implements
RpcServiceCallerInterface <|.. RpcWasmClient : implements
RpcServiceCallerInterface <|.. CustomClient : implements
RpcServiceCallerInterface --> RpcTransportState : returns
Client-Side: RpcServiceCallerInterface
The client-side RPC invocation is abstracted through the RpcServiceCallerInterface trait, which allows different transport implementations to provide identical calling semantics.
RpcServiceCallerInterface Contract
This design allows application code to be written once against RpcServiceCallerInterface and work with any compliant transport implementation (Tokio, WASM, custom transports, etc.).
Sources:
classDiagram
class RpcServiceEndpointInterface {<<trait>>\n+register_prebuffered(method_id: u64, handler: Fn) Future~Result~~~\n+register_streaming(method_id: u64, handler: Fn) Future~Result~~~\n+unregister(method_id: u64) Future~Result~~~\n+is_registered(method_id: u64) Future~bool~}
class HandlerContext {+client_id: Option~String~\n+metadata: HashMap~String, String~}
class PrebufferedHandler {<<function>>\n+Fn(Vec~u8~, HandlerContext) Future~Result~Vec~u8~~~}
class StreamingHandler {<<function>>\n+Fn(Option~Vec~u8~~, DynamicChannel, HandlerContext) Future~Result~~~}
class RpcServer {
+new(config) RpcServer
+endpoint() Arc~RpcServiceEndpointInterface~
+serve_with_listener(listener) Future
}
RpcServiceEndpointInterface --> PrebufferedHandler : accepts
RpcServiceEndpointInterface --> StreamingHandler : accepts
RpcServiceEndpointInterface --> HandlerContext : provides
RpcServer --> RpcServiceEndpointInterface : provides
Server-Side: RpcServiceEndpointInterface
The server-side request handling is abstracted through the RpcServiceEndpointInterface trait, which manages method registration and dispatch.
RpcServiceEndpointInterface Contract
Handlers are registered by METHOD_ID and receive:
- Request bytes : The serialized request parameters (for prebuffered) or initial params (for streaming)
- Context : Metadata about the client and connection
- Dynamic channel (streaming only): For incremental data transmission
Sources:
Data Serialization with Bitcode
The RPC framework uses the bitcode crate for binary serialization. This provides compact, efficient encoding of Rust types.
Serialization Requirements
For a type to be used in RPC method definitions, it must implement:
- `serde::Serialize` - For encoding
- `serde::Deserialize` - For decoding
The bitcode crate provides these implementations for most standard Rust types, including:
- Primitive types (`u64`, `f64`, `bool`, etc.)
- Standard collections (`Vec<T>`, `HashMap<K, V>`, etc.)
- Custom structs with `#[derive(Serialize, Deserialize)]`
Serialization Flow
The compact binary format of bitcode significantly reduces payload sizes compared to JSON or other text-based formats, contributing to the framework’s low-latency characteristics.
Sources:
- Cargo.lock:158-168
- Cargo.toml:52
- README.md:32
- High-level architecture diagram 6 (Data Flow and Serialization Strategy)
Method Registration and Dispatch
On the server side, methods must be registered with the endpoint before they can be invoked. The registration process associates a METHOD_ID with a handler function.
Handler Registration Pattern
From the example application, the registration pattern is:
Registration Lifecycle
Once registered, handlers remain active until explicitly unregistered or the server shuts down. Multiple concurrent invocations of the same handler are supported through the underlying multiplexing layer.
Sources:
graph TB
subgraph "Shared Application Logic"
APP_CODE["Application Code\nPlatform-agnostic"]
METHOD_CALL["Method::call(&client, params)"]
APP_CODE --> METHOD_CALL
end
subgraph "Native Platform"
TOKIO_CLIENT["RpcClient\n(Tokio-based)"]
TOKIO_RUNTIME["Tokio async runtime"]
TOKIO_WS["tokio-tungstenite\nWebSocket"]
METHOD_CALL -.uses.-> TOKIO_CLIENT
TOKIO_CLIENT --> TOKIO_RUNTIME
TOKIO_CLIENT --> TOKIO_WS
end
subgraph "Web Platform"
WASM_CLIENT["RpcWasmClient\n(WASM-based)"]
WASM_RUNTIME["Browser event loop"]
WASM_WS["JavaScript WebSocket API\nvia wasm-bindgen"]
METHOD_CALL -.uses.-> WASM_CLIENT
WASM_CLIENT --> WASM_RUNTIME
WASM_CLIENT --> WASM_WS
end
subgraph "Custom Platform"
CUSTOM_CLIENT["Custom RpcClient\nimplements RpcServiceCallerInterface"]
CUSTOM_TRANSPORT["Custom Transport"]
METHOD_CALL -.uses.-> CUSTOM_CLIENT
CUSTOM_CLIENT --> CUSTOM_TRANSPORT
end
Cross-Platform RPC Invocation
A key design goal of the RPC framework is enabling the same application code to work across different platforms and transports. This is achieved through the abstraction provided by RpcServiceCallerInterface.
Platform-Agnostic Application Code
The application layer depends only on:
- The service definition crate (for method traits)
- The `RpcServiceCallerInterface` trait (for invocation)
This allows the same business logic to run in servers, native desktop applications, mobile apps, and web browsers with minimal platform-specific code.
Sources:
- README.md:47
- Cargo.lock:898-916 (tokio client)
- Cargo.lock:935-954 (wasm client)
- High-level architecture diagram 2 (Cross-Platform Deployment Model)
graph TD
subgraph "Application-Level Errors"
BIZ_ERR["Business Logic Errors\nDomain-specific"]
end
subgraph "RPC Framework Errors"
RPC_ERR["RpcServiceError"]
METHOD_NOT_FOUND["MethodNotFound\nInvalid METHOD_ID"]
ENCODING_ERR["EncodingError\nSerialization failure"]
SYSTEM_ERR["SystemError\nInternal dispatcher error"]
TRANSPORT_ERR["TransportError\nNetwork failure"]
RPC_ERR --> METHOD_NOT_FOUND
RPC_ERR --> ENCODING_ERR
RPC_ERR --> SYSTEM_ERR
RPC_ERR --> TRANSPORT_ERR
end
subgraph "Core Layer Errors"
CORE_ERR["Muxio Core Errors\nFraming protocol errors"]
end
BIZ_ERR -.propagates through.-> RPC_ERR
TRANSPORT_ERR -.wraps.-> CORE_ERR
Error Handling in RPC
The RPC framework uses Rust’s Result type throughout, with error types defined at the appropriate abstraction levels.
RPC Error Hierarchy
Error handling patterns:
- Client-side: Errors are returned as `Result<T, E>` from RPC calls
- Server-side: Handler errors are serialized and transmitted back to the client
- Transport errors: Automatically trigger state changes (see `RpcTransportState`)
For detailed error type definitions, see Error Handling.
Sources:
- Section reference to page 7
Performance Characteristics
The RPC framework is designed for low latency and high throughput. Key performance features include:
Performance Optimizations
| Feature | Benefit | Implementation |
|---|---|---|
| Compile-time method IDs | Zero runtime hash overhead | xxhash-rust with const_xxh3 |
| Binary serialization | Smaller payload sizes | bitcode crate |
| Minimal frame headers | Reduced per-message overhead | Custom binary protocol |
| Request multiplexing | Concurrent calls over single connection | RpcDispatcher correlation |
| Zero-copy streaming | Reduced memory allocations | DynamicChannel for chunked data |
| Callback-driven dispatch | No polling overhead | Async handlers with futures |
The combination of these optimizations makes the RPC framework suitable for:
- Low-latency trading systems
- Real-time gaming
- Interactive remote tooling
- High-throughput data processing
Sources:
graph TB
subgraph "RPC Abstraction Layer"
CALLER_IF["RpcServiceCallerInterface"]
ENDPOINT_IF["RpcServiceEndpointInterface"]
end
subgraph "Core Dispatcher"
DISPATCHER["RpcDispatcher\nRequest correlation"]
SEND_CB["send_callback\nVec<u8> → ()"]
RECV_CB["receive_callback\n() → Vec<u8>"]
end
subgraph "Tokio WebSocket Transport"
TOKIO_SERVER["TokioRpcServer"]
TOKIO_CLIENT["TokioRpcClient"]
TUNGSTENITE["tokio-tungstenite"]
TOKIO_SERVER --> TUNGSTENITE
TOKIO_CLIENT --> TUNGSTENITE
end
subgraph "WASM WebSocket Transport"
WASM_CLIENT["WasmRpcClient"]
JS_BRIDGE["wasm-bindgen bridge"]
BROWSER_WS["Browser WebSocket API"]
WASM_CLIENT --> JS_BRIDGE
JS_BRIDGE --> BROWSER_WS
end
CALLER_IF -.implemented by.-> TOKIO_CLIENT
CALLER_IF -.implemented by.-> WASM_CLIENT
ENDPOINT_IF -.implemented by.-> TOKIO_SERVER
TOKIO_CLIENT --> DISPATCHER
WASM_CLIENT --> DISPATCHER
TOKIO_SERVER --> DISPATCHER
DISPATCHER --> SEND_CB
DISPATCHER --> RECV_CB
SEND_CB -.invokes.-> TUNGSTENITE
RECV_CB -.invokes.-> TUNGSTENITE
SEND_CB -.invokes.-> JS_BRIDGE
RECV_CB -.invokes.-> JS_BRIDGE
Integration with Transport Layer
The RPC framework is designed to be transport-agnostic, with concrete implementations provided for common scenarios.
Transport Integration Points
The RpcDispatcher accepts callbacks for sending and receiving bytes, allowing it to work with any transport mechanism. This design enables:
- WebSocket transports (Tokio and WASM implementations provided)
- TCP socket transports
- In-memory transports (for testing)
- Custom transports (by providing appropriate callbacks)
For implementation details of specific transports, see Transport Implementations.
Sources:
- README.md:34
- Cargo.lock:898-933 (transport implementations)
- High-level architecture diagram 1 (Overall System Architecture)
Service Definitions
Relevant source files
Purpose and Scope
Service definitions provide compile-time type-safe RPC method contracts that are shared between client and server implementations. The muxio-rpc-service crate defines the core traits and utilities for declaring RPC methods with automatic method ID generation and efficient binary serialization. Service definitions serve as the single source of truth for RPC interfaces, ensuring that both sides of the communication agree on method signatures, parameter types, and return types at compile time.
For information about implementing client-side invocation logic, see Service Caller Interface. For server-side handler registration, see Service Endpoint Interface. For a step-by-step guide on creating your own service definitions, see Creating Service Definitions.
Core Architecture
The service definition layer sits at the top of the RPC abstraction layer, providing the foundation for type-safe communication. It bridges application-level Rust types with the underlying binary protocol.
Sources:
graph TB
subgraph "Application Layer"
APP["Application Code\nBusiness Logic"]
end
subgraph "Service Definition Layer"
TRAIT["RpcMethodPrebuffered Trait\nMethod Signature Declaration"]
METHODID["METHOD_ID Constant\nxxhash::xxh3_64(method_name)"]
PARAMS["Parameter Types\nSerialize + Deserialize"]
RESULT["Result Types\nSerialize + Deserialize"]
end
subgraph "RPC Framework Layer"
CALLER["RpcServiceCallerInterface\nClient Invocation"]
ENDPOINT["RpcServiceEndpointInterface\nServer Dispatch"]
SERIALIZER["bitcode::encode/decode\nBinary Serialization"]
end
APP --> TRAIT
TRAIT --> METHODID
TRAIT --> PARAMS
TRAIT --> RESULT
CALLER --> METHODID
ENDPOINT --> METHODID
PARAMS --> SERIALIZER
RESULT --> SERIALIZER
The muxio-rpc-service Crate
The muxio-rpc-service crate provides the foundational types and traits for defining RPC services. It has minimal dependencies to remain runtime-agnostic and platform-independent.
Dependencies
| Dependency | Purpose |
|---|---|
| `async-trait` | Enables async trait methods for service definitions |
| `futures` | Provides stream abstractions for streaming RPC methods |
| `muxio` | Core framing and multiplexing primitives |
| `num_enum` | Discriminated union encoding for message types |
| `xxhash-rust` | Fast compile-time hash generation for method IDs |
| `bitcode` | Compact binary serialization for parameters and results |
Sources:
The RpcMethodPrebuffered Trait
The RpcMethodPrebuffered trait is the primary mechanism for defining RPC methods. It specifies the method signature, parameter types, result types, and automatically generates a unique method identifier.
graph LR
subgraph "RpcMethodPrebuffered Trait Definition"
NAME["const NAME: &'static str\nHuman-readable method name"]
METHODID["const METHOD_ID: u64\nxxh3_64(NAME)
at compile time"]
PARAMS["type Params\nSerialize + Deserialize + Send"]
RESULT["type Result\nSerialize + Deserialize + Send"]
end
subgraph "Example: AddMethod"
NAME_EX["NAME = 'Add'"]
METHODID_EX["METHOD_ID = 0x5f8b3c4a2e1d6f90"]
PARAMS_EX["Params = (i32, i32)"]
RESULT_EX["Result = i32"]
end
NAME --> NAME_EX
METHODID --> METHODID_EX
PARAMS --> PARAMS_EX
RESULT --> RESULT_EX
Key Components
| Component | Type | Description |
|---|---|---|
| `NAME` | `const &'static str` | Human-readable method name (e.g., "Add", "Multiply") |
| `METHOD_ID` | `const u64` | Compile-time hash of `NAME` using xxhash |
| `Params` | Associated Type | Input parameter type; must implement `Serialize + Deserialize + Send` |
| `Result` | Associated Type | Return value type; must implement `Serialize + Deserialize + Send` |
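For illustration only, a hedged sketch of an implementation shaped like the table above; the real `RpcMethodPrebuffered` trait in `muxio-rpc-service` also covers request/response encoding and may differ in detail:

```rust
use serde::{de::DeserializeOwned, Serialize};
use xxhash_rust::const_xxh3::xxh3_64; // requires the `const_xxh3` feature

// Illustrative trait shape; not the crate's actual definition.
pub trait RpcMethodPrebuffered {
    const NAME: &'static str;
    const METHOD_ID: u64;
    type Params: Serialize + DeserializeOwned + Send;
    type Result: Serialize + DeserializeOwned + Send;
}

pub struct AddMethod;

impl RpcMethodPrebuffered for AddMethod {
    const NAME: &'static str = "Add";
    // Compile-time hash of the method name, as described above.
    const METHOD_ID: u64 = xxh3_64("Add".as_bytes());
    type Params = (i32, i32);
    type Result = i32;
}
```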
Type Safety Guarantees
Service definitions enforce several type safety invariants:
- Compile-time method identification : Method IDs are computed at compile time from method names
- Consistent serialization : Both client and server use the same bitcode schema for parameters
- Type mismatch detection : Mismatched parameter or result types cause compilation errors
- Zero-cost abstraction : Method dispatch has no runtime overhead beyond the hash lookup
Sources:
Method ID Generation with xxhash
Method IDs are 64-bit unsigned integers generated at compile time by hashing the method name. This approach provides efficient dispatch while maintaining human-readable method names in code.
graph LR
subgraph "Compile Time"
METHNAME["Method Name (String)\ne.g., 'Add', 'Multiply', 'Echo'"]
HASH["xxhash::xxh3_64(bytes)"]
METHODID["METHOD_ID: u64\ne.g., 0x5f8b3c4a2e1d6f90"]
METHNAME --> HASH
HASH --> METHODID
end
subgraph "Runtime - Client"
CLIENT_CALL["Client calls method"]
CLIENT_ENCODE["RpcRequest::method_id = METHOD_ID"]
CLIENT_SEND["Send binary request"]
CLIENT_CALL --> CLIENT_ENCODE
CLIENT_ENCODE --> CLIENT_SEND
end
subgraph "Runtime - Server"
SERVER_RECV["Receive binary request"]
SERVER_DECODE["Extract method_id from RpcRequest"]
SERVER_MATCH["Match method_id to handler"]
SERVER_EXEC["Execute handler"]
SERVER_RECV --> SERVER_DECODE
SERVER_DECODE --> SERVER_MATCH
SERVER_MATCH --> SERVER_EXEC
end
METHODID --> CLIENT_ENCODE
METHODID --> SERVER_MATCH
Method ID Properties
| Property | Description |
|---|---|
| Size | 64-bit unsigned integer |
| Generation | Compile-time hash using xxh3_64 algorithm |
| Collision Resistance | Extremely low probability of collision for reasonable method counts |
| Performance | Single integer comparison for method dispatch |
| Stability | Same method name always produces same ID across compilations |
Benefits of Compile-Time Method IDs
- No string comparison overhead : Method dispatch uses integer comparison instead of string matching
- Compact wire format : Only 8 bytes sent over the network instead of method name strings
- Automatic generation : No manual assignment of method IDs required
- Type-safe verification : Compile-time guarantee that client and server agree on method IDs
Sources:
graph TB
subgraph "Parameter Encoding"
RUSTPARAM["Rust Type\ne.g., (i32, String, Vec<u8>)"]
BITCODEENC["bitcode::encode(params)"]
BINARY["Compact Binary Payload\nVariable-length encoding"]
RUSTPARAM --> BITCODEENC
BITCODEENC --> BINARY
end
subgraph "RPC Request Structure"
RPCREQ["RpcRequest"]
REQMETHOD["method_id: u64"]
REQPARAMS["params: Vec<u8>"]
RPCREQ --> REQMETHOD
RPCREQ --> REQPARAMS
end
subgraph "Result Decoding"
RESPBINARY["Binary Payload"]
BITCODEDEC["bitcode::decode::<T>(bytes)"]
RUSTRESULT["Rust Type\ne.g., Result<String, Error>"]
RESPBINARY --> BITCODEDEC
BITCODEDEC --> RUSTRESULT
end
BINARY --> REQPARAMS
REQPARAMS -.Wire Protocol.-> RESPBINARY
Serialization with Bitcode
All RPC parameters and results are serialized using the bitcode crate, which provides compact binary encoding with built-in support for Rust types.
Bitcode Characteristics
| Characteristic | Description |
|---|---|
| Encoding | Compact binary format with variable-length integers |
| Schema | Schemaless - structure implied by Rust types |
| Performance | Zero-copy deserialization where possible |
| Type Support | Built-in support for standard Rust types (primitives, tuples, Vec, HashMap, etc.) |
| Versioning | Field order and type changes require coordinated updates |
Serialization Requirements
For a type to be used as Params or Result in an RPC method definition, it must implement:
Serialize + Deserialize + Send
These bounds ensure:
- The type can be encoded to binary (`Serialize`)
- The type can be decoded from binary (`Deserialize`)
- The type can be safely sent across thread boundaries (`Send`), as in the round-trip sketch below
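A minimal round-trip sketch under these bounds, assuming bitcode's serde integration (`bitcode::serialize` / `bitcode::deserialize`); the calls used inside the framework are referenced elsewhere in this documentation as `bitcode::encode` / `bitcode::decode`:

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug, PartialEq)]
struct AddParams {
    lhs: f64,
    rhs: f64,
}

fn main() {
    let params = AddParams { lhs: 1.5, rhs: 2.5 };

    // Encode to a compact binary payload suitable for an RPC parameter field...
    let bytes = bitcode::serialize(&params).expect("encode failed");

    // ...and decode it back into the same Rust type on the other side.
    let decoded: AddParams = bitcode::deserialize(&bytes).expect("decode failed");
    assert_eq!(params, decoded);
}
```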
Sources:
graph TB
subgraph "Service Definition Crate"
CRATE["example-muxio-rpc-service-definition"]
subgraph "Method Definitions"
ADD["AddMethod\nNAME: 'Add'\nParams: (i32, i32)\nResult: i32"]
MULT["MultiplyMethod\nNAME: 'Multiply'\nParams: (i32, i32)\nResult: i32"]
ECHO["EchoMethod\nNAME: 'Echo'\nParams: String\nResult: String"]
end
CRATE --> ADD
CRATE --> MULT
CRATE --> ECHO
end
subgraph "Consumer Crates"
CLIENT["Client Application\nUses methods via RpcServiceCallerInterface"]
SERVER["Server Application\nImplements handlers via RpcServiceEndpointInterface"]
end
ADD --> CLIENT
MULT --> CLIENT
ECHO --> CLIENT
ADD --> SERVER
MULT --> SERVER
ECHO --> SERVER
Service Definition Structure
A complete service definition typically consists of multiple method trait implementations grouped together. Here’s the conceptual structure:
Typical Crate Layout
example-muxio-rpc-service-definition/
├── Cargo.toml
│ ├── [dependency] muxio-rpc-service
│ └── [dependency] bitcode
└── src/
└── lib.rs
├── struct AddMethod;
├── impl RpcMethodPrebuffered for AddMethod { ... }
├── struct MultiplyMethod;
├── impl RpcMethodPrebuffered for MultiplyMethod { ... }
└── ...
Sources:
graph TB
subgraph "Service Definition"
SERVICEDEF["RpcMethodPrebuffered Implementation\n- NAME\n- METHOD_ID\n- Params\n- Result"]
end
subgraph "Client Side"
CALLERIFACE["RpcServiceCallerInterface"]
CALLER_INVOKE["call_method<<M: RpcMethodPrebuffered>>()"]
DISPATCHER["RpcDispatcher"]
CALLERIFACE --> CALLER_INVOKE
CALLER_INVOKE --> DISPATCHER
end
subgraph "Server Side"
ENDPOINTIFACE["RpcServiceEndpointInterface"]
ENDPOINT_REGISTER["register<<M: RpcMethodPrebuffered>>()"]
HANDLER_MAP["HashMap<u64, Handler>"]
ENDPOINTIFACE --> ENDPOINT_REGISTER
ENDPOINT_REGISTER --> HANDLER_MAP
end
subgraph "Wire Protocol"
RPCREQUEST["RpcRequest\nmethod_id: u64\nparams: Vec<u8>"]
RPCRESPONSE["RpcResponse\nrequest_id: u64\nresult: Vec<u8>"]
end
SERVICEDEF --> CALLER_INVOKE
SERVICEDEF --> ENDPOINT_REGISTER
DISPATCHER --> RPCREQUEST
RPCREQUEST --> HANDLER_MAP
HANDLER_MAP --> RPCRESPONSE
RPCRESPONSE --> DISPATCHER
Integration with the RPC Framework
Service definitions integrate with the broader RPC framework through well-defined interfaces:
Compile-Time Guarantees
The service definition system provides several compile-time guarantees:
| Guarantee | Mechanism |
|---|---|
| Type Safety | Generic trait bounds enforce matching types across client/server |
| Method ID Uniqueness | Hashing function produces consistent IDs for method names |
| Serialization Compatibility | Shared trait implementations ensure same encoding/decoding |
| Parameter Validation | Rust type system validates parameter structure at compile time |
Runtime Flow
- Client: Invokes the method through `RpcServiceCallerInterface::call::<MethodType>(params)`
- Serialization: Parameters are encoded using `bitcode::encode(params)`
- Request Construction: An `RpcRequest` is created with the `METHOD_ID` and serialized params
- Server Dispatch: The request is routed to a handler based on a `method_id` lookup
- Handler Execution: The handler deserializes the params, executes its logic, and serializes the result
- Response Delivery: An `RpcResponse` is sent back with the serialized result
- Client Deserialization: The result is decoded using `bitcode::decode::<ResultType>(bytes)`
Sources:
Cross-Platform Compatibility
Service definitions are completely platform-agnostic. The same service definition crate can be used by:
- Native Tokio-based clients and servers
- WASM browser-based clients
- Custom transport implementations
- Different runtime environments (async/sync)
This cross-platform capability is achieved because:
- Service definitions contain no platform-specific code
- Serialization is handled by the platform-agnostic `bitcode` crate
- Method IDs are computed at compile time without runtime dependencies
- The trait system provides compile-time polymorphism
Sources:
Summary
Service definitions in muxio provide:
- Type-Safe Contracts : Compile-time verified method signatures shared between client and server
- Efficient Dispatch : 64-bit integer method IDs computed at compile time using xxhash
- Compact Serialization : Binary encoding using bitcode for minimal network overhead
- Platform Independence : Service definitions work across native, WASM, and custom platforms
- Zero Runtime Cost : All method resolution and type checking happens at compile time
The next sections cover how to use service definitions from the client side (Service Caller Interface) and server side (Service Endpoint Interface), as well as the specific patterns for prebuffered (Prebuffered RPC Calls) and streaming (Streaming RPC Calls) RPC methods.
Sources:
Service Caller Interface
Relevant source files
- extensions/muxio-rpc-service-caller/Cargo.toml
- extensions/muxio-rpc-service-caller/src/caller_interface.rs
Purpose and Scope
The Service Caller Interface defines the core abstraction for client-side RPC invocation in Muxio. The RpcServiceCallerInterface trait provides a runtime-agnostic interface that allows application code to make RPC calls without depending on specific transport implementations. This abstraction enables the same client code to work across different runtimes (native Tokio, WASM) by implementing the trait for each platform.
For information about defining RPC services and methods, see Service Definitions. For server-side RPC handling, see Service Endpoint Interface. For concrete implementations of this interface, see Tokio RPC Client and WASM RPC Client.
Interface Overview
The RpcServiceCallerInterface is defined as an async trait that provides two primary RPC invocation patterns: streaming and buffered. It abstracts the underlying transport mechanism while exposing connection state and allowing customization through state change handlers.
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:25-405
graph TB
subgraph "Application Layer"
AppCode["Application Code"]
ServiceDef["RPC Method Definitions\n(RpcMethodPrebuffered)"]
end
subgraph "Service Caller Interface"
CallerTrait["RpcServiceCallerInterface\nasync trait"]
CallBuffered["call_rpc_buffered()\nReturns complete response"]
CallStreaming["call_rpc_streaming()\nReturns stream of chunks"]
GetDispatcher["get_dispatcher()\nAccess to RpcDispatcher"]
IsConnected["is_connected()\nConnection status"]
StateHandler["set_state_change_handler()\nState notifications"]
end
subgraph "Concrete Implementations"
TokioImpl["muxio-tokio-rpc-client\nRpcClient (Tokio)"]
WasmImpl["muxio-wasm-rpc-client\nRpcWasmClient (WASM)"]
end
subgraph "Core Components"
Dispatcher["RpcDispatcher\nRequest correlation"]
EmitFn["Emit Function\nSend bytes to transport"]
end
AppCode --> ServiceDef
ServiceDef --> CallerTrait
CallerTrait --> CallBuffered
CallerTrait --> CallStreaming
CallerTrait --> GetDispatcher
CallerTrait --> IsConnected
CallerTrait --> StateHandler
TokioImpl -.implements.-> CallerTrait
WasmImpl -.implements.-> CallerTrait
GetDispatcher --> Dispatcher
CallBuffered --> Dispatcher
CallStreaming --> Dispatcher
CallBuffered --> EmitFn
CallStreaming --> EmitFn
Trait Definition
The RpcServiceCallerInterface trait requires implementors to provide access to core components and implement multiple invocation patterns:
| Method | Return Type | Purpose |
|---|---|---|
get_dispatcher() | Arc<TokioMutex<RpcDispatcher<'static>>> | Provides access to the RPC dispatcher for request/response correlation |
get_emit_fn() | Arc<dyn Fn(Vec<u8>) + Send + Sync> | Returns the function that sends binary frames to the transport |
is_connected() | bool | Checks current connection state |
call_rpc_streaming() | Result<(RpcStreamEncoder, DynamicReceiver), RpcServiceError> | Initiates streaming RPC call with incremental data reception |
call_rpc_buffered() | Result<(RpcStreamEncoder, Result<T, RpcServiceError>), RpcServiceError> | Initiates buffered RPC call that returns complete response |
set_state_change_handler() | () (async) | Registers a callback for transport state changes |
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:26-31 extensions/muxio-rpc-service-caller/src/caller_interface.rs:32-405
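As a rough illustration of the buffered pattern, the fragment below drives call_rpc_buffered directly. It is a sketch only: fetch_raw is a hypothetical helper, the identity-style decode closure stands in for a real decode_response, and import paths are omitted.
// Hypothetical helper showing the buffered call shape from the table above.
// `C` can be any RpcServiceCallerInterface implementation (e.g. RpcClient or
// RpcWasmClient); `request` is assumed to be a prepared RpcRequest.
async fn fetch_raw<C: RpcServiceCallerInterface>(
    client: &C,
    request: RpcRequest,
) -> Result<Vec<u8>, RpcServiceError> {
    // The decode closure is intentionally trivial here; generated service
    // definitions pass their own decode_response instead.
    let (_encoder, response) = client
        .call_rpc_buffered(request, |bytes: &[u8]| bytes.to_vec())
        .await?;
    response
}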
Core Components Access
Dispatcher Access
The get_dispatcher() method returns an Arc<TokioMutex<RpcDispatcher>> that allows the implementation to register RPC calls and manage request/response correlation. The TokioMutex is used because these methods are async and may need to await the lock.
Emit Function
The get_emit_fn() method returns a closure that the caller interface uses to send binary frames to the underlying transport. This function is called by the RpcStreamEncoder when writing request payloads.
Connection State
The is_connected() method allows implementations to check the connection state before attempting RPC calls. When false, the call_rpc_streaming() method immediately returns a ConnectionAborted error.
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:28-30 extensions/muxio-rpc-service-caller/src/caller_interface.rs:44-53
Streaming RPC Pattern
The call_rpc_streaming() method provides the foundation for all RPC calls, supporting incremental data reception through dynamic channels. This pattern is used directly for streaming RPC calls and internally by the buffered pattern.
sequenceDiagram
participant App as "Application"
participant Interface as "RpcServiceCallerInterface"
participant Dispatcher as "RpcDispatcher"
participant Channel as "DynamicChannel\n(mpsc)"
participant SendFn as "Emit Function"
participant RecvFn as "Response Handler\n(Closure)"
App->>Interface: call_rpc_streaming(request)
Interface->>Interface: Check is_connected()
Interface->>Channel: Create mpsc channel\n(Bounded/Unbounded)
Interface->>Interface: Create send_fn closure
Interface->>Interface: Create recv_fn closure
Interface->>Dispatcher: call(request, send_fn, recv_fn)
Dispatcher-->>Interface: Return RpcStreamEncoder
Note over Interface,Channel: Wait for readiness signal
RecvFn->>RecvFn: Receive RpcStreamEvent::Header
RecvFn->>Channel: Send readiness via oneshot
Interface-->>App: Return (encoder, receiver)
loop "For each response chunk"
RecvFn->>RecvFn: Receive RpcStreamEvent::PayloadChunk
RecvFn->>Channel: Send Ok(bytes) to DynamicReceiver
App->>Channel: next().await
Channel-->>App: Some(Ok(bytes))
end
RecvFn->>RecvFn: Receive RpcStreamEvent::End
RecvFn->>Channel: Close sender (drop)
App->>Channel: next().await
Channel-->>App: None
Streaming Call Flow
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:32-349
Dynamic Channel Types
The streaming method accepts a DynamicChannelType parameter that determines the channel buffering strategy:
| Channel Type | Buffer Size | Use Case |
|---|---|---|
DynamicChannelType::Bounded | DEFAULT_RPC_STREAM_CHANNEL_BUFFER_SIZE | Controlled memory usage, backpressure |
DynamicChannelType::Unbounded | Unlimited | Maximum throughput, simple buffering |
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:56-73
Response Handler Implementation
The recv_fn closure handles all incoming RpcStreamEvent variants synchronously using StdMutex for WASM compatibility. The handler maintains internal state to track response status and buffer error payloads:
stateDiagram-v2
[*] --> WaitingHeader : recv_fn created
WaitingHeader --> ProcessingPayload: RpcStreamEvent::Header\nExtract RpcResultStatus
WaitingHeader --> SendReadiness : Send readiness signal
SendReadiness --> ProcessingPayload
ProcessingPayload --> BufferSuccess: RpcResultStatus::Success
ProcessingPayload --> BufferError: RpcResultStatus::*Error
BufferSuccess --> ProcessingPayload : More chunks
BufferError --> ProcessingPayload : More chunks
ProcessingPayload --> CompleteSuccess: RpcStreamEvent::End\nSuccess status
ProcessingPayload --> CompleteError: RpcStreamEvent::End\nError status
ProcessingPayload --> HandleError: RpcStreamEvent::Error
CompleteSuccess --> [*] : Close channel
CompleteError --> [*] : Send RpcServiceError, close channel
HandleError --> [*] : Send transport error, close channel
The response handler uses StdMutex instead of TokioMutex because it operates in a synchronous context and must be compatible with WASM environments where Tokio mutexes are not available.
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:91-287 extensions/muxio-rpc-service-caller/src/caller_interface.rs:75-79 extensions/muxio-rpc-service-caller/src/caller_interface.rs:94-96
Response Event Handling
The recv_fn closure processes four event types:
| Event Type | Actions | Channel Operation |
|---|---|---|
RpcStreamEvent::Header | Extract RpcResultStatus from metadata, send readiness signal | Signal header received via oneshot |
RpcStreamEvent::PayloadChunk | If success: forward to receiver; otherwise buffer error payload | sender.send_and_ignore(Ok(bytes)) |
RpcStreamEvent::End | Process final status, convert errors to RpcServiceError | Send final error or close channel |
RpcStreamEvent::Error | Send transport error to both readiness and data channels | sender.send_and_ignore(Err(error)) |
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:118-286
Buffered RPC Pattern
The call_rpc_buffered() method provides a higher-level interface for RPC calls where the entire response is buffered before being returned. This method is built on top of call_rpc_streaming() and is used by RpcMethodPrebuffered implementations.
sequenceDiagram
participant App as "Application"
participant Buffered as "call_rpc_buffered()"
participant Streaming as "call_rpc_streaming()"
participant Stream as "DynamicReceiver"
participant Decode as "decode: F"
App->>Buffered: call_rpc_buffered(request, decode)
Buffered->>Streaming: call_rpc_streaming(request, Unbounded)
Streaming-->>Buffered: Return (encoder, stream)
Buffered->>Buffered: Create empty success_buf
loop "Stream consumption"
Buffered->>Stream: stream.next().await
Stream-->>Buffered: Some(Ok(chunk))
Buffered->>Buffered: success_buf.extend(chunk)
end
alt "Stream completed successfully"
Stream-->>Buffered: None
Buffered->>Decode: decode(&success_buf)
Decode-->>Buffered: Return T
Buffered-->>App: Ok((encoder, Ok(decoded)))
else "Stream yielded error"
Stream-->>Buffered: Some(Err(e))
Buffered-->>App: Ok((encoder, Err(e)))
end
Buffered Call Implementation
The buffered pattern always uses DynamicChannelType::Unbounded to avoid backpressure complications when consuming the entire stream.
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:351-399 extensions/muxio-rpc-service-caller/src/caller_interface.rs:368-370
Decode Function
The decode parameter is a closure that converts the buffered byte slice into the desired return type T. This function is provided by the RPC method implementation and typically uses bitcode::decode() for deserialization:
F: Fn(&[u8]) -> T + Send + Sync + 'static
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:363-365
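As an illustration, a decode closure for a bitcode-deserializable response might look like the sketch below; MyResponse is a placeholder type, and real method definitions supply their generated decode_response instead.
// Placeholder response type (assumes the bitcode derive feature is enabled).
#[derive(bitcode::Decode)]
struct MyResponse {
    value: u64,
}

// A closure matching F: Fn(&[u8]) -> T + Send + Sync + 'static.
// Error handling is reduced to a panic purely for brevity.
let decode = |bytes: &[u8]| -> MyResponse {
    bitcode::decode(bytes).expect("failed to decode response")
};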
Return Value Structure
Both streaming and buffered methods return a tuple containing an RpcStreamEncoder and the response data. The encoder allows the caller to write request payloads after initiating the call:
| Method | Return Type | Encoder Purpose | Response Type |
|---|---|---|---|
call_rpc_streaming() | (RpcStreamEncoder, DynamicReceiver) | Write request payload | Stream of Result<Vec<u8>, RpcServiceError> |
call_rpc_buffered() | (RpcStreamEncoder, Result<T, RpcServiceError>) | Write request payload | Complete decoded response |
The RpcStreamEncoder is returned even if the response contains an error, allowing the caller to properly finalize the request payload.
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:37-42 extensions/muxio-rpc-service-caller/src/caller_interface.rs:357-362
Connection State Management
State Change Handler
The set_state_change_handler() method allows applications to register callbacks that are invoked when the transport state changes. Implementations store these handlers and invoke them during connection lifecycle events:
The RpcTransportState enum indicates whether the transport is connected or disconnected. For more details on transport state management, see Transport State Management.
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:401-405
graph LR
CallStart["call_rpc_streaming()
invoked"]
CheckConn{"is_connected()?"}
RejectCall["Return ConnectionAborted error"]
ProceedCall["Create channels and proceed"]
CallStart --> CheckConn
CheckConn -->|false| RejectCall
CheckConn -->|true| ProceedCall
Connection Checks
The is_connected() method is checked at the beginning of call_rpc_streaming() to prevent RPC calls on disconnected transports:
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:44-53
Error Handling
The caller interface propagates errors from multiple sources through the RpcServiceError enum:
| Error Source | Error Type | Trigger |
|---|---|---|
| Disconnected client | RpcServiceError::Transport(ConnectionAborted) | is_connected() returns false |
| Dispatcher call failure | RpcServiceError::Transport(io::Error) | dispatcher.call() returns error |
| Readiness channel closed | RpcServiceError::Transport(io::Error) | Oneshot channel drops before header received |
| Method not found | RpcServiceError::Rpc(NotFound) | RpcResultStatus::MethodNotFound in response |
| Application error | RpcServiceError::Rpc(Fail) | RpcResultStatus::Fail in response |
| System error | RpcServiceError::Rpc(System) | RpcResultStatus::SystemError in response |
| Frame decode error | RpcServiceError::Transport(io::Error) | RpcStreamEvent::Error received |
For comprehensive error handling documentation, see RPC Service Errors.
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:49-52 extensions/muxio-rpc-service-caller/src/caller_interface.rs:186-232 extensions/muxio-rpc-service-caller/src/caller_interface.rs:246-284
Implementation Requirements
Types implementing RpcServiceCallerInterface must:
- Provide thread-safe dispatcher access: return Arc<TokioMutex<RpcDispatcher>> from get_dispatcher()
- Implement an emit function: return a closure that sends bytes to the underlying transport
- Track connection state: maintain the boolean state returned by is_connected()
- Manage state handlers: store and invoke state change handlers at the appropriate lifecycle points
- Satisfy trait bounds: implement Send + Sync for cross-thread usage
The trait uses #[async_trait::async_trait] to support async methods in trait definitions.
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:25-26
graph TB
subgraph "Trait Definition"
Trait["RpcServiceCallerInterface\nextensions/muxio-rpc-service-caller"]
end
subgraph "Native Implementation"
TokioClient["RpcClient\nmuxio-tokio-rpc-client"]
TokioRuntime["Tokio async runtime\ntokio-tungstenite WebSocket"]
end
subgraph "Browser Implementation"
WasmClient["RpcWasmClient\nmuxio-wasm-rpc-client"]
WasmBridge["wasm-bindgen bridge\nBrowser WebSocket API"]
end
Trait -.implemented by.-> TokioClient
Trait -.implemented by.-> WasmClient
TokioClient --> TokioRuntime
WasmClient --> WasmBridge
style Trait fill:#f9f9f9,stroke:#333,stroke-width:2px
Platform-Specific Implementations
The RpcServiceCallerInterface is implemented by two platform-specific clients, shown in the diagram above: RpcClient in muxio-tokio-rpc-client (native, Tokio) and RpcWasmClient in muxio-wasm-rpc-client (browser, WASM).
Both implementations provide the same interface to application code while adapting to their respective runtime environments. For implementation details, see Tokio RPC Client and WASM RPC Client.
graph LR
subgraph "Application Code"
Call["Add::call(&client, params)"]
end
subgraph "Method Definition"
Method["RpcMethodPrebuffered::call()\nType-safe wrapper"]
Encode["encode_request(params)\nSerialize arguments"]
Decode["decode_response(bytes)\nDeserialize result"]
end
subgraph "Caller Interface"
CallBuffered["call_rpc_buffered(request, decode)"]
BuildRequest["Build RpcRequest\nwith method_id and params"]
end
Call --> Method
Method --> Encode
Encode --> BuildRequest
BuildRequest --> CallBuffered
CallBuffered -.async.-> Decode
Decode --> Method
Method --> Call
Integration with RPC Methods
RPC method definitions use the caller interface through the RpcMethodPrebuffered trait, which provides a type-safe wrapper around call_rpc_buffered():
For details on method definitions and the prebuffered pattern, see Service Definitions and Prebuffered RPC Calls.
Package Information
The RpcServiceCallerInterface is defined in the muxio-rpc-service-caller package, which provides generic, runtime-agnostic client logic:
| Package | Description | Key Dependencies |
|---|---|---|
muxio-rpc-service-caller | Generic RPC client interface and logic | muxio, muxio-rpc-service, async-trait, futures, tokio (sync only) |
The package uses minimal Tokio features (only sync for TokioMutex) to remain as platform-agnostic as possible while supporting async methods.
Sources: extensions/muxio-rpc-service-caller/Cargo.toml:1-22
Service Endpoint Interface
Relevant source files
- extensions/muxio-rpc-service-endpoint/Cargo.toml
- extensions/muxio-rpc-service-endpoint/src/endpoint.rs
- extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs
Purpose and Scope
This document describes the RpcServiceEndpointInterface trait, which provides the server-side abstraction for registering RPC method handlers and processing incoming requests in the muxio framework. This trait is runtime-agnostic and enables any platform-specific server implementation to handle RPC calls using a consistent interface.
For client-side RPC invocation, see Service Caller Interface. For details on defining RPC services, see Service Definitions.
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:1-138
Trait Overview
The RpcServiceEndpointInterface<C> trait defines the server-side contract for handling incoming RPC requests. It is generic over a connection context type C, which allows handlers to access per-connection state or metadata.
Core Methods
| Method | Purpose |
|---|---|
register_prebuffered | Registers an async handler for a specific method ID |
read_bytes | Processes incoming transport bytes, routes to handlers, and sends responses |
get_prebuffered_handlers | Provides access to the handler registry (implementation detail) |
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:8-64
Trait Definition Structure
Diagram: Trait structure showing methods, associated types, and key dependencies
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:8-14 extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:35-64
Handler Registration
The register_prebuffered method registers an asynchronous handler function for a specific RPC method. Handlers are identified by a 64-bit method ID, typically generated using the rpc_method_id! macro from the service definition layer.
Handler Function Signature
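The concrete alias appears later in the structure diagram as RpcPrebufferedHandler<C>; in spirit it is an Arc'd async closure over the raw request bytes and the connection context, roughly as sketched below (the error type in the returned Result is an assumption).
// Approximate shape of RpcPrebufferedHandler<C>; see endpoint.rs for the real alias.
type RpcPrebufferedHandler<C> = std::sync::Arc<
    dyn Fn(Vec<u8>, C) -> std::pin::Pin<
            Box<
                dyn std::future::Future<
                        Output = Result<Vec<u8>, Box<dyn std::error::Error + Send + Sync>>,
                    > + Send,
            >,
        > + Send
        + Sync,
>;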
Registration Flow
Diagram: Handler registration sequence showing duplicate detection
The handler is wrapped in an Arc and stored in a HashMap<u64, RpcPrebufferedHandler<C>>. If a handler for the given method_id already exists, registration fails with an RpcServiceEndpointError::Handler.
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:35-64 extensions/muxio-rpc-service-endpoint/src/endpoint.rs:12-23
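A registration call might therefore look like the hedged sketch below; ADD_METHOD_ID and AppCtx are placeholders for a real method ID constant and context type, and whether the call itself must be awaited depends on the mutex feature in use.
// Hypothetical registration sketch, not the crate's actual API surface.
let result = endpoint.register_prebuffered(
    ADD_METHOD_ID,
    |request_bytes: Vec<u8>, _ctx: AppCtx| async move {
        // Decode the request, run business logic, encode the response.
        Ok(request_bytes) // echoes the bytes back, purely for illustration
    },
);
// A duplicate method_id would surface here as RpcServiceEndpointError::Handler.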
Three-Stage Request Processing Pipeline
The read_bytes method implements a three-stage pipeline for processing incoming RPC requests:
Stage 1: Decode and Identify
Incoming transport bytes are passed to the RpcDispatcher for frame decoding and stream demultiplexing. The dispatcher identifies which requests are now fully received (finalized) and ready for processing.
Diagram: Stage 1 - Synchronous decoding and request identification
graph LR
BYTES["Incoming bytes[]"]
DISPATCHER["RpcDispatcher::read_bytes()"]
REQUEST_IDS["Vec<request_id>"]
FINALIZED["Finalized requests\nVec<(id, RpcRequest)>"]
BYTES --> DISPATCHER
DISPATCHER --> REQUEST_IDS
REQUEST_IDS --> FINALIZED
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:78-97
Stage 2: Execute Handlers Asynchronously
Each finalized request is dispatched to its corresponding handler. Handlers execute concurrently using join_all, allowing the event loop to process multiple requests in parallel without blocking.
Diagram: Stage 2 - Concurrent handler execution
graph TB
subgraph "Handler Execution"
REQ1["Request 1"]
REQ2["Request 2"]
REQ3["Request 3"]
HANDLER1["Handler Future 1"]
HANDLER2["Handler Future 2"]
HANDLER3["Handler Future 3"]
JOIN["futures::join_all"]
RESULTS["Vec<RpcResponse>"]
end
REQ1 --> HANDLER1
REQ2 --> HANDLER2
REQ3 --> HANDLER3
HANDLER1 --> JOIN
HANDLER2 --> JOIN
HANDLER3 --> JOIN
JOIN --> RESULTS
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:99-126
graph LR
RESPONSES["Handler Results\nVec<RpcResponse>"]
ENCODE["dispatcher.respond()"]
CHUNK["Chunk by max_chunk_size"]
EMIT["on_emit callback"]
TRANSPORT["Transport layer"]
RESPONSES --> ENCODE
ENCODE --> CHUNK
CHUNK --> EMIT
EMIT --> TRANSPORT
Stage 3: Encode and Emit Responses
Handler results are synchronously encoded into the RPC protocol format and emitted back to the transport layer via the RpcDispatcher::respond method.
Diagram: Stage 3 - Synchronous response encoding and emission
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:127-137
Complete Processing Flow
Diagram: Complete three-stage request processing sequence
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:66-138
graph TB
subgraph "RpcServiceEndpoint<C>"
ENDPOINT["RpcServiceEndpoint<C>"]
HANDLERS["prebuffered_handlers:\nArc<Mutex<HashMap<u64, Handler>>>"]
PHANTOM["_context: PhantomData<C>"]
ENDPOINT --> HANDLERS
ENDPOINT --> PHANTOM
end
subgraph "Handler Type Definition"
HANDLER_TYPE["RpcPrebufferedHandler<C>"]
FN_TYPE["Arc<Fn(Vec<u8>, C) -> Pin<Box<Future>>>"]
HANDLER_TYPE -.alias.-> FN_TYPE
end
subgraph "Mutex Abstraction"
FEATURE{{"tokio_support feature"}}
TOKIO_MUTEX["tokio::sync::Mutex"]
STD_MUTEX["std::sync::Mutex"]
FEATURE -->|enabled| TOKIO_MUTEX
FEATURE -->|disabled| STD_MUTEX
end
HANDLERS -.type.-> HANDLER_TYPE
HANDLERS -.implementation.-> FEATURE
Concrete Implementation
The RpcServiceEndpoint<C> struct provides a concrete implementation of the RpcServiceEndpointInterface trait.
Structure
Diagram: Structure of the concrete RpcServiceEndpoint implementation
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint.rs:25-54
Key Implementation Details
| Component | Type | Purpose |
|---|---|---|
prebuffered_handlers | Arc<Mutex<HashMap<u64, RpcPrebufferedHandler<C>>>> | Thread-safe handler registry |
_context | PhantomData<C> | Zero-cost marker for generic type C |
Associated HandlersLock | Mutex<HashMap<u64, RpcPrebufferedHandler<C>>> | Provides access pattern for handler registry |
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint.rs:25-32 extensions/muxio-rpc-service-endpoint/src/endpoint.rs:56-67
Connection Context Type Pattern
The endpoint is generic over a context type C; at minimum, C must implement Clone (as shown in the structure diagram above) so the endpoint can hand a copy to each handler.
This context is passed to every handler invocation and enables:
- Per-connection state : Track connection-specific data like authentication tokens, session IDs, or user information
- Shared resources : Provide access to database pools, configuration, or other application state
- Request metadata : Include connection metadata like remote address, protocol version, or timing information
graph LR
TRANSPORT["Transport Layer\n(per connection)"]
CONTEXT["Context Instance\nC: Clone"]
ENDPOINT["RpcServiceEndpoint"]
READ_BYTES["read_bytes(context)"]
HANDLER["Handler(request, context)"]
TRANSPORT --> CONTEXT
CONTEXT --> ENDPOINT
ENDPOINT --> READ_BYTES
READ_BYTES --> HANDLER
Context Flow
Diagram: Context propagation from transport to handler
When calling read_bytes, the transport layer provides a context instance which is cloned for each handler invocation. This allows handlers to access per-connection state without shared mutable references.
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:9-12 extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:68-76
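For illustration, a context type might be a small cloneable struct along these lines; the fields are hypothetical.
// Hypothetical per-connection context; any C that is Clone (plus the usual
// Send + Sync bounds on threaded runtimes) can be used.
#[derive(Clone)]
struct DbPool; // placeholder for a real shared resource

#[derive(Clone)]
struct ConnectionCtx {
    peer_addr: std::net::SocketAddr,
    session_id: u64,
    db_pool: std::sync::Arc<DbPool>,
}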
Runtime-Agnostic Mutex Abstraction
The endpoint uses conditional compilation to select the appropriate mutex implementation based on the runtime environment:
Feature-Based Selection
| Feature | Mutex Type | Use Case |
|---|---|---|
| Default (no features) | std::sync::Mutex | Blocking, non-async environments |
tokio_support | tokio::sync::Mutex | Async/await with Tokio runtime |
This abstraction allows the same endpoint code to work in both synchronous and asynchronous contexts without runtime overhead or complex trait abstractions.
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint.rs:5-9 extensions/muxio-rpc-service-endpoint/Cargo.toml:23-27
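Conceptually, the selection reduces to the conditional-compilation sketch below; the actual module layout in endpoint.rs may differ.
// Sketch of feature-gated mutex selection, mirroring the table above.
#[cfg(feature = "tokio_support")]
use tokio::sync::Mutex;

#[cfg(not(feature = "tokio_support"))]
use std::sync::Mutex;

// Either way, the handler registry has the shape:
// Arc<Mutex<HashMap<u64, RpcPrebufferedHandler<C>>>>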
graph TB
TRAIT["WithHandlers<C> trait"]
METHOD["with_handlers<F, R>(f: F) -> R"]
STD_IMPL["impl for std::sync::Mutex"]
TOKIO_IMPL["impl for tokio::sync::Mutex"]
TRAIT --> METHOD
TRAIT -.implemented by.-> STD_IMPL
TRAIT -.implemented by.-> TOKIO_IMPL
STD_IMPL --> LOCK_STD["lock().unwrap()"]
TOKIO_IMPL --> LOCK_TOKIO["lock().await"]
WithHandlers Trait
The WithHandlers<C> trait provides a uniform interface for accessing the handler registry regardless of the underlying mutex implementation:
Diagram: WithHandlers abstraction over different mutex types
This trait enables the register_prebuffered method to work identically regardless of the feature flag, maintaining the runtime-agnostic design principle.
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:13 extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:46-62
Error Handling
The endpoint returns RpcServiceEndpointError for various failure conditions:
| Error Type | Cause | When |
|---|---|---|
Handler(Box<dyn Error>) | Handler already registered | During register_prebuffered with duplicate method_id |
Dispatcher(RpcDispatcherError) | Frame decode failure | During read_bytes if frames are malformed |
Dispatcher(RpcDispatcherError) | Request tracking failure | During read_bytes if request state is inconsistent |
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:33-34 extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:48-53 extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:74
Integration with Platform Implementations
The RpcServiceEndpointInterface is implemented by platform-specific server types:
- Tokio Server: muxio-tokio-rpc-server wraps an RpcServiceEndpoint and handles WebSocket connections
- WASM Client: muxio-wasm-rpc-client can act as both a client and a server endpoint in bidirectional scenarios
Both implementations use the same handler registration API and benefit from compile-time type safety through shared service definitions (see Service Definitions).
Sources: extensions/muxio-rpc-service-endpoint/Cargo.toml:1-33
Prebuffered RPC Calls
Relevant source files
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs
- extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs
Purpose and Scope
This document describes the prebuffered RPC call pattern in muxio, where the entire request payload is encoded and buffered before transmission, and the entire response payload is accumulated before being decoded and returned to the caller. This is in contrast to streaming RPC calls, which process data incrementally using channels (see Streaming RPC Calls).
Prebuffered calls are the simplest and most common RPC pattern, suitable for request/response operations where the entire input and output fit comfortably in memory. For service definition details, see Service Definitions. For caller interface abstractions, see Service Caller Interface.
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:10-21
Overview
A prebuffered RPC call is a synchronous-style remote procedure invocation where:
- The caller encodes the entire request payload using bitcode serialization
- The request is transmitted as a complete unit to the server
- The server processes the request and generates a complete response
- The response is transmitted back as a complete unit
- The caller decodes the response and returns the result
The key characteristic is that both request and response are treated as atomic, indivisible units from the application’s perspective, even though the underlying transport may chunk them into multiple frames for transmission.
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:30-48
The RpcCallPrebuffered Trait
The RpcCallPrebuffered trait defines the interface for making prebuffered RPC calls. It is automatically implemented for any type that implements RpcMethodPrebuffered from the muxio-rpc-service crate.
Trait Definition
The trait is generic over any client type C that implements RpcServiceCallerInterface, making it runtime-agnostic and usable with both native Tokio clients and WASM clients.
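In outline, the trait looks roughly like the sketch below; the real definition at extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:10-28 carries additional bounds and plumbing.
// Illustrative outline only, not the verbatim trait.
#[async_trait::async_trait]
pub trait RpcCallPrebuffered: RpcMethodPrebuffered {
    async fn call<C>(client: &C, input: Self::Input) -> Result<Self::Output, RpcServiceError>
    where
        C: RpcServiceCallerInterface + Send + Sync;
}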
Automatic Implementation
The blanket implementation at extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:24-98 automatically provides the call method for any service that defines request/response types via the RpcMethodPrebuffered trait.
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:10-28
Request/Response Flow
The following diagram illustrates the complete flow of a prebuffered RPC call from the caller’s perspective:
sequenceDiagram
participant App as "Application Code"
participant Trait as "RpcCallPrebuffered::call"
participant Encode as "encode_request"
participant Strategy as "Transport Strategy"
participant Client as "RpcServiceCallerInterface"
participant Buffered as "call_rpc_buffered"
participant Decode as "decode_response"
App->>Trait: Echo::call(client, input)
Trait->>Encode: Self::encode_request(input)
Encode-->>Trait: Vec<u8>
Trait->>Strategy: Check encoded_args.len()
alt "len < DEFAULT_SERVICE_MAX_CHUNK_SIZE"
Strategy->>Strategy: Use rpc_param_bytes
note over Strategy: Small payload: inline in header
else "len >= DEFAULT_SERVICE_MAX_CHUNK_SIZE"
Strategy->>Strategy: Use rpc_prebuffered_payload_bytes
note over Strategy: Large payload: stream as chunks
end
Strategy->>Trait: RpcRequest struct
Trait->>Client: call_rpc_buffered(request, decode_fn)
Client->>Client: Transmit request frames
Client->>Client: Await response frames
Client->>Client: Accumulate response bytes
Client-->>Trait: (encoder, Result<Vec<u8>, Error>)
alt "Response is Ok(bytes)"
Trait->>Decode: decode_response(bytes)
Decode-->>Trait: Self::Output
Trait-->>App: Ok(output)
else "Response is Err"
Trait-->>App: Err(RpcServiceError)
end
Prebuffered RPC Call Sequence
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:49-97
Smart Transport Strategy for Large Payloads
The prebuffered implementation uses a “smart transport strategy” to handle arguments of any size, automatically selecting the most efficient encoding method based on payload size.
Strategy Logic
| Condition | Field Used | Transmission Method |
|---|---|---|
encoded_args.len() < DEFAULT_SERVICE_MAX_CHUNK_SIZE | rpc_param_bytes | Inline in initial header frame |
encoded_args.len() >= DEFAULT_SERVICE_MAX_CHUNK_SIZE | rpc_prebuffered_payload_bytes | Chunked and streamed after header |
RpcRequest Structure
RpcRequest {
rpc_method_id: u64,
rpc_param_bytes: Option<Vec<u8>>, // Used for small payloads
rpc_prebuffered_payload_bytes: Option<Vec<u8>>, // Used for large payloads
is_finalized: bool, // Always true for prebuffered
}
Implementation Details
The decision logic is implemented at extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:58-65.
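A simplified sketch of that size-based selection, using the field names from the RpcRequest structure above (not the verbatim source), looks like this:
// Sketch only: route small payloads inline, large payloads through the
// chunked prebuffered path.
let encoded_args = Self::encode_request(input)?;
let (rpc_param_bytes, rpc_prebuffered_payload_bytes) =
    if encoded_args.len() < DEFAULT_SERVICE_MAX_CHUNK_SIZE {
        (Some(encoded_args), None)
    } else {
        (None, Some(encoded_args))
    };

let request = RpcRequest {
    rpc_method_id: Self::METHOD_ID,
    rpc_param_bytes,
    rpc_prebuffered_payload_bytes,
    is_finalized: true,
};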
This strategy ensures that:
- Small requests avoid unnecessary chunking overhead
- Large requests don’t fail due to header size limits (typically ~64KB)
- The underlying RpcDispatcher automatically handles chunking for large payloads
- Server-side endpoint logic transparently locates arguments in either field
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:30-72
graph TB
Input["Application Input\n(Self::Input)"]
Encode["encode_request()\nbitcode serialization"]
EncodedBytes["Vec<u8>\nencoded_args"]
SizeCheck{"encoded_args.len() >=\nDEFAULT_SERVICE_MAX_CHUNK_SIZE?"}
SmallPath["rpc_param_bytes:\nSome(encoded_args)"]
SmallNote["Inline in header frame\nSingle transmission"]
LargePath["rpc_prebuffered_payload_bytes:\nSome(encoded_args)"]
LargeNote["Chunked by RpcDispatcher\nMultiple frames"]
Request["RpcRequest struct\nis_finalized: true"]
Dispatcher["RpcDispatcher\nEncodes and transmits"]
Network["WebSocket Transport\nBinary frames"]
Input-->Encode
Encode-->EncodedBytes
EncodedBytes-->SizeCheck
SizeCheck-->|No| SmallPath
SmallPath-->SmallNote
SmallNote-->Request
SizeCheck-->|Yes| LargePath
LargePath-->LargeNote
LargeNote-->Request
Request-->Dispatcher
Dispatcher-->Network
Transport Strategy Data Flow
The following diagram shows how data flows through the different encoding paths:
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:55-72 README.md:32-34
Error Propagation
Prebuffered RPC calls propagate errors through a nested Result structure to distinguish between transport errors and application-level errors.
graph TB
Call["RpcCallPrebuffered::call"]
BufferedCall["call_rpc_buffered"]
OuterResult["Result<(encoder, InnerResult), RpcServiceError>"]
InnerResult["Result<Self::Output, RpcServiceError>"]
TransportErr["Transport Error\n(Connection failed, timeout, etc.)"]
RemoteErr["Remote Service Error\n(Handler returned Err)"]
DecodeErr["Decode Error\n(Malformed response)"]
Success["Successful Response\nSelf::Output"]
Call-->BufferedCall
BufferedCall-->OuterResult
OuterResult-->|Outer Err| TransportErr
OuterResult-->|Outer Ok| InnerResult
InnerResult-->|Inner Err RpcServiceError::Rpc| RemoteErr
InnerResult-->|Inner Ok, decode fails| DecodeErr
InnerResult-->|Inner Ok, decode succeeds| Success
Error Flow Diagram
Error Handling Code
The error unwrapping logic lives at extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:87-96.
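Its effect can be approximated by the sketch below, where decode stands in for the method's decode adapter: the outer Result carries transport failures, the inner one carries remote or decode failures.
// Sketch: flatten the nested Result returned by call_rpc_buffered.
let (_encoder, inner) = client.call_rpc_buffered(request, decode).await?; // outer: transport errors
let output = inner?; // inner: RPC-level or decode errors
Ok(output)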
Error Types
| Error Type | Cause | Example |
|---|---|---|
RpcServiceError::Transport | Network failure, framing error, decode failure | Connection closed, malformed frame |
RpcServiceError::Rpc | Remote handler returned error | “item does not exist”, “Addition failed” |
RpcServiceError::NotConnected | Client not connected when call initiated | WebSocket not established |
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:79-96 extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:135-177
Usage Example
The tests referenced below demonstrate typical usage of prebuffered RPC calls.
Basic Success Case
See extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:98-133.
Error Handling Example
See extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:179-212.
Large Payload Test
See extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:296-312.
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:98-212 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:296-312
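For orientation, a call site typically has the shape sketched below; the Add method, its numeric parameters, and the printed result type are assumptions drawn from the component diagrams rather than the actual test code.
// Hedged usage sketch (not the actual test). Add is assumed to implement
// RpcMethodPrebuffered, which makes RpcCallPrebuffered::call available.
let result = Add::call(&client, vec![1.0, 2.0, 3.0]).await?;
println!("result = {result:?}");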
Integration with Service Definitions
Prebuffered RPC calls rely on service definitions that implement the RpcMethodPrebuffered trait. Each service method must provide:
Required Trait Methods
| Method | Purpose | Return Type |
|---|---|---|
METHOD_ID | Compile-time constant identifying the method | u64 |
encode_request() | Serialize input parameters | Result<Vec<u8>, io::Error> |
decode_request() | Deserialize input parameters | Result<Self::Input, io::Error> |
encode_response() | Serialize output result | Result<Vec<u8>, io::Error> |
decode_response() | Deserialize output result | Result<Self::Output, io::Error> |
Example Service Usage
The RpcCallPrebuffered trait automatically implements the call method for any type implementing RpcMethodPrebuffered, providing compile-time type safety and zero-cost abstractions.
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:1-11 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:21-27
Comparison with Streaming Calls
| Aspect | Prebuffered Calls | Streaming Calls |
|---|---|---|
| Memory Usage | Entire payload in memory | Incremental processing |
| Latency | Higher (wait for complete payload) | Lower (process as data arrives) |
| Complexity | Simple request/response | Requires channel management |
| Use Cases | Small to medium payloads, simple operations | Large datasets, incremental results, progress tracking |
| Request Field | is_finalized: true | is_finalized: false initially |
| Response Handling | Single accumulated buffer | Stream of chunks via channels |
When to Use Prebuffered
- Request and response fit comfortably in memory
- Simple request/response semantics
- No need for progress tracking or cancellation
- Examples: database queries, RPC calculations, file uploads < 10MB
When to Use Streaming
- Large payloads that don’t fit in memory
- Need to process results incrementally
- Progress tracking or cancellation required
- Examples: video streaming, large file transfers, real-time data feeds
For detailed information on streaming RPC patterns, see Streaming RPC Calls.
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:71
graph TB
subgraph "Application Layer"
AppCode["Application Code\nBusiness Logic"]
end
subgraph "RPC Service Layer"
ServiceDef["Service Definition\nRpcMethodPrebuffered trait\nEcho, Add, Mult"]
MethodID["METHOD_ID constant\nxxhash generated"]
EncDec["encode_request()\ndecode_response()\nbitcode serialization"]
end
subgraph "Caller Layer"
CallTrait["RpcCallPrebuffered trait\ncall()
method"]
CallerIface["RpcServiceCallerInterface\ncall_rpc_buffered()"]
end
subgraph "Dispatcher Layer"
Dispatcher["RpcDispatcher\nRequest correlation\nResponse routing"]
Request["RpcRequest struct\nmethod_id\nparam_bytes or payload_bytes\nis_finalized: true"]
end
subgraph "Transport Layer"
Session["RpcSession\nStream multiplexing\nFrame encoding"]
Framing["Binary frames\nChunking strategy"]
end
AppCode-->|Echo::call client, input| CallTrait
CallTrait-->|implements for| ServiceDef
CallTrait-->|uses| MethodID
CallTrait-->|calls| EncDec
CallTrait-->|invokes| CallerIface
CallerIface-->|creates| Request
CallerIface-->|sends to| Dispatcher
Dispatcher-->|initializes stream in| Session
Session-->|chunks and encodes| Framing
ServiceDef-.defines.->MethodID
ServiceDef-.implements.->EncDec
Component Relationships
The following diagram shows how prebuffered RPC components relate to each other:
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:1-98 extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:1-18
Testing Strategy
The prebuffered RPC implementation includes comprehensive unit and integration tests:
Unit Tests
Located in extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:20-213:
- MockRpcClient : Test implementation of RpcServiceCallerInterface
- test_buffered_call_success : Verifies successful request/response roundtrip
- test_buffered_call_remote_error : Tests error propagation from server
- test_prebuffered_trait_converts_error : Validates error type conversion
Integration Tests
Located in extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-313:
- Real server instance : Uses an actual RpcServer from muxio-tokio-rpc-server
- WebSocket bridge : Connects the WASM client to a real server over the network
- Large payload test : Validates chunking for payloads 200x chunk size
- Cross-platform validation : Same test logic for both Tokio and WASM clients
Test Architecture
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:1-213 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-313
Streaming RPC Calls
Relevant source files
- extensions/muxio-rpc-service-caller/src/caller_interface.rs
- extensions/muxio-rpc-service-caller/src/lib.rs
- extensions/muxio-rpc-service-caller/tests/dynamic_channel_tests.rs
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs
- extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs
Purpose and Scope
This document describes the streaming RPC mechanism in rust-muxio, which allows bidirectional data transfer over RPC calls with chunked payloads and asynchronous processing. Streaming RPC is used when responses are large, dynamic in size, or need to be processed incrementally.
For information about one-shot RPC calls with complete request/response buffers, see Prebuffered RPC Calls. For the underlying service definition traits, see Service Definitions. For client-side invocation patterns, see Service Caller Interface.
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:1-406
Overview of Streaming RPC
Streaming RPC calls provide a mechanism for sending requests and receiving responses that may be too large to buffer entirely in memory, or where the response size is unknown at call time. Unlike prebuffered calls which return complete Result<T, RpcServiceError> values, streaming calls return:
- RpcStreamEncoder: for sending additional payload chunks to the server after the initial request
- DynamicReceiver: a stream that yields Result<Vec<u8>, RpcServiceError> chunks asynchronously
The streaming mechanism handles:
- Chunked payload transmission and reassembly
- Backpressure through bounded or unbounded channels
- Error propagation and early termination
- Request/response correlation across multiplexed streams
Key Distinction:
- Prebuffered RPC : Entire response buffered in memory before returning to caller
- Streaming RPC : Response chunks streamed incrementally as they arrive
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:33-73
Initiating a Streaming Call
The call_rpc_streaming method on RpcServiceCallerInterface initiates a streaming RPC call:
Method Parameters
| Parameter | Type | Description |
|---|---|---|
request | RpcRequest | Contains rpc_method_id, optional rpc_param_bytes, and optional rpc_prebuffered_payload_bytes |
dynamic_channel_type | DynamicChannelType | Specifies Bounded or Unbounded channel for response streaming |
Return Value
On success, returns a tuple containing:
- RpcStreamEncoder: used to send additional payload chunks after the initial request
- DynamicReceiver: a stream that yields response chunks as Result<Vec<u8>, RpcServiceError>
On failure, returns RpcServiceError::Transport if the client is disconnected or if dispatcher registration fails.
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:32-54
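A consumption sketch, assuming the StreamExt import from futures, a prepared RpcRequest, and a placeholder process_chunk function:
use futures::StreamExt;

// Hedged sketch of driving a streaming call and draining the response.
let (_encoder, mut receiver) = client
    .call_rpc_streaming(request, DynamicChannelType::Bounded)
    .await?;

while let Some(chunk) = receiver.next().await {
    match chunk {
        Ok(bytes) => process_chunk(&bytes), // process_chunk is a placeholder
        Err(err) => {
            eprintln!("stream error: {err:?}");
            break;
        }
    }
}
// receiver yields None once RpcStreamEvent::End closes the channel.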
Dynamic Channel Types
The DynamicChannelType enum determines the backpressure characteristics of the response stream:
graph LR
DCT["DynamicChannelType"]
UNBOUNDED["Unbounded\nmpsc::unbounded()"]
BOUNDED["Bounded\nmpsc::channel(buffer_size)"]
DCT -->|No backpressure| UNBOUNDED
DCT -->|Backpressure at buffer_size| BOUNDED
UNBOUNDED -->|Creates| DS_UNBOUNDED["DynamicSender::Unbounded"]
UNBOUNDED -->|Creates| DR_UNBOUNDED["DynamicReceiver::Unbounded"]
BOUNDED -->|Creates| DS_BOUNDED["DynamicSender::Bounded"]
BOUNDED -->|Creates| DR_BOUNDED["DynamicReceiver::Bounded"]
Unbounded Channels
Created with DynamicChannelType::Unbounded. Uses mpsc::unbounded() internally, allowing unlimited buffering of response chunks. Suitable for:
- Fast consumers that can process chunks quickly
- Scenarios where response size is bounded and known to fit in memory
- Testing and development
Risk: Unbounded channels can lead to unbounded memory growth if the receiver is slower than the sender.
Bounded Channels
Created with DynamicChannelType::Bounded. Uses mpsc::channel(DEFAULT_RPC_STREAM_CHANNEL_BUFFER_SIZE) where DEFAULT_RPC_STREAM_CHANNEL_BUFFER_SIZE is typically 8. Provides backpressure when the buffer is full. Suitable for:
- Production systems with predictable memory usage
- Long-running streams with unknown total size
- Rate-limiting response processing
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:56-73 extensions/muxio-rpc-service-caller/tests/dynamic_channel_tests.rs:50-65
RpcStreamEncoder and DynamicReceiver
RpcStreamEncoder
The RpcStreamEncoder is created by RpcDispatcher::call() and provides methods to send additional payload chunks after the initial request. It wraps an RpcEmit trait implementation that sends binary frames over the transport.
Key characteristics:
- Created with max_chunk_size from DEFAULT_SERVICE_MAX_CHUNK_SIZE
- Automatically chunks large payloads into frames
- Shares the same rpc_request_id as the original request
- Can send multiple chunks before finalizing the stream
DynamicReceiver
The DynamicReceiver is a unified abstraction over mpsc::UnboundedReceiver and mpsc::Receiver that implements Stream<Item = Result<Vec<u8>, RpcServiceError>>.
| Variant | Underlying Type | Backpressure |
|---|---|---|
Unbounded | mpsc::UnboundedReceiver | None |
Bounded | mpsc::Receiver | Yes |
graph TB
subgraph "Call Flow"
CALL["call_rpc_streaming()"]
DISPATCHER["RpcDispatcher::call()"]
ENCODER["RpcStreamEncoder"]
RECEIVER["DynamicReceiver"]
end
subgraph "Response Flow"
RECV_FN["recv_fn closure\n(RpcResponseHandler)"]
TX["DynamicSender"]
RX["DynamicReceiver"]
APP["Application code\n.next().await"]
end
CALL -->|Creates channel| TX
CALL -->|Creates channel| RX
CALL -->|Registers| DISPATCHER
DISPATCHER -->|Returns| ENCODER
CALL -->|Returns| RECEIVER
RECV_FN -->|send_and_ignore| TX
TX -.->|mpsc| RX
RX -->|yields chunks| APP
Both variants provide the same next() interface through the StreamExt trait, abstracting the channel type from the caller.
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:56-73 extensions/muxio-rpc-service-caller/src/caller_interface.rs:289-323
stateDiagram-v2
[*] --> Waiting : recv_fn registered
Waiting --> HeaderReceived: RpcStreamEvent::Header
HeaderReceived --> Streaming: RpcResultStatus::Success
HeaderReceived --> ErrorBuffering: RpcResultStatus::MethodNotFound\nRpcResultStatus::Fail\nRpcResultStatus::SystemError
Streaming --> Streaming: RpcStreamEvent::PayloadChunk
ErrorBuffering --> ErrorBuffering: RpcStreamEvent::PayloadChunk
Streaming --> Complete: RpcStreamEvent::End
ErrorBuffering --> Complete: RpcStreamEvent::End
Waiting --> Error: RpcStreamEvent::Error
HeaderReceived --> Error: RpcStreamEvent::Error
Streaming --> Error: RpcStreamEvent::Error
ErrorBuffering --> Error: RpcStreamEvent::Error
Complete --> [*]
Error --> [*]
Stream Event Processing
The recv_fn closure registered with the dispatcher handles four types of RpcStreamEvent:
Event Types and State Machine
RpcStreamEvent::Header
Received first for every RPC response. Contains RpcHeader with:
- rpc_msg_type: should be RpcMessageType::Response
- rpc_request_id: correlation ID matching the request
- rpc_method_id: method identifier
- rpc_metadata_bytes: first byte contains RpcResultStatus
The recv_fn extracts RpcResultStatus from rpc_metadata_bytes[0] and stores it for subsequent processing. A readiness signal is sent via the oneshot channel to unblock the call_rpc_streaming future.
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:118-135
RpcStreamEvent::PayloadChunk
Contains a chunk of the response payload. Processing depends on the previously received RpcResultStatus:
| Status | Behavior |
|---|---|
Success | Chunk sent to DynamicSender with send_and_ignore(Ok(bytes)) |
MethodNotFound, Fail, SystemError | Chunk buffered in error_buffer for error message construction |
None (not yet received) | Chunk buffered defensively |
The synchronous recv_fn uses StdMutex to protect shared state (tx_arc, status, error_buffer).
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:136-174
RpcStreamEvent::End
Signals stream completion. Final actions depend on RpcResultStatus:
- RpcResultStatus::MethodNotFound: constructs RpcServiceError::Rpc with RpcServiceErrorCode::NotFound and the buffered error payload
- RpcResultStatus::Fail: sends RpcServiceError::Rpc with RpcServiceErrorCode::Fail
- RpcResultStatus::SystemError: sends RpcServiceError::Rpc with RpcServiceErrorCode::System and the buffered error payload
- RpcResultStatus::Success: closes the channel normally (no error sent)
The DynamicSender is taken from the Option wrapper and dropped, closing the channel.
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:175-245
RpcStreamEvent::Error
Indicates a framing protocol error (e.g., malformed frames, decode errors). Sends RpcServiceError::Transport to the DynamicReceiver and also signals the readiness channel if still waiting for the header. The DynamicSender is dropped immediately.
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:246-285
Error Handling in Streams
Error Propagation Path
Pre-Dispatch Errors
Before the dispatcher registers the request, errors are returned immediately from call_rpc_streaming():
- Disconnected client: RpcServiceError::Transport(io::ErrorKind::ConnectionAborted)
- Dispatcher registration failure: RpcServiceError::Transport with error details
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:44-53 extensions/muxio-rpc-service-caller/src/caller_interface.rs:315-328
Post-Dispatch Errors
After the dispatcher registers the request, errors are sent through the DynamicReceiver stream:
- Framing errors: RpcServiceError::Transport from RpcStreamEvent::Error
- RPC-level errors: RpcServiceError::Rpc with the appropriate RpcServiceErrorCode based on RpcResultStatus
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:185-238 extensions/muxio-rpc-service-caller/src/caller_interface.rs:246-285
sequenceDiagram
participant Caller as "call_rpc_streaming()"
participant Dispatcher as "RpcDispatcher"
participant RecvFn as "recv_fn closure"
participant ReadyChan as "oneshot channel"
Caller->>ReadyChan: Create (ready_tx, ready_rx)
Caller->>Dispatcher: call(request, recv_fn)
Dispatcher-->>Caller: Returns encoder
Caller->>ReadyChan: .await on ready_rx
Note over RecvFn: Transport receives response
RecvFn->>RecvFn: RpcStreamEvent::Header
RecvFn->>RecvFn: Extract RpcResultStatus
RecvFn->>ReadyChan: ready_tx.send(Ok(()))
ReadyChan-->>Caller: Ok(())
Caller-->>Caller: Return (encoder, receiver)
Readiness Signaling
The call_rpc_streaming method uses a oneshot channel to signal when the RPC stream is ready to be consumed. This ensures the caller doesn’t begin processing until the header has been received and the RpcResultStatus is known.
Signaling Mechanism
Signaling on Error
If an error occurs before receiving the header (e.g., RpcStreamEvent::Error), the readiness channel is signaled with Err(io::Error) instead of Ok(()).
Implementation Details
The readiness sender is stored in Arc<StdMutex<Option<oneshot::Sender>>> and taken using mem::take() when signaling to ensure it’s only used once. The recv_fn closure acquires this mutex synchronously with .lock().unwrap().
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:78-80 extensions/muxio-rpc-service-caller/src/caller_interface.rs:127-134 extensions/muxio-rpc-service-caller/src/caller_interface.rs:332-348
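The take-once behavior can be pictured with this small sketch; the variable name ready_tx and the exact payload type are assumptions.
// Sketch of signaling readiness exactly once. `ready_tx` is assumed to be an
// Arc<StdMutex<Option<oneshot::Sender<std::io::Result<()>>>>> shared with recv_fn.
if let Some(tx) = std::mem::take(&mut *ready_tx.lock().unwrap()) {
    // The first Header (or Error) event wins; later events find None and skip.
    let _ = tx.send(Ok(()));
}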
Complete Streaming RPC Flow
End-to-End Sequence
Synchronization Points
- Channel Creation: DynamicSender and DynamicReceiver are created synchronously in call_rpc_streaming
- Dispatcher Registration: RpcDispatcher::call() registers the request and creates the RpcStreamEncoder
- Readiness Await: call_rpc_streaming blocks on ready_rx.await until the header is received
- Header Processing: the first RpcStreamEvent::Header unblocks the caller
- Chunk Processing: each RpcStreamEvent::PayloadChunk flows through the channel
- Stream Termination: RpcStreamEvent::End closes the channel
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:33-349
Integration with Transport Implementations
Tokio RPC Client Usage
The RpcClient struct in muxio-tokio-rpc-client implements RpcServiceCallerInterface, providing the transport-specific get_emit_fn() that sends binary data over the WebSocket connection.
When streaming RPC is used:
- call_rpc_streaming() creates the channels and registers with the dispatcher
- get_emit_fn() sends the initial request frames via tx.send(WsMessage::Binary(chunk))
- The receive loop processes incoming WebSocket binary messages
- endpoint.read_bytes() is called on received bytes, which dispatches to recv_fn
- recv_fn forwards chunks to the DynamicSender, which the application receives via the DynamicReceiver
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:158-178 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:289-313
Connection State Impact
If the client disconnects during streaming:
- is_connected() returns false
- Subsequent call_rpc_streaming() attempts fail immediately with ConnectionAborted
- Pending streams receive RpcStreamEvent::Error from the dispatcher's fail_all_pending_requests()
- Transport errors propagate through the DynamicReceiver as RpcServiceError::Transport
Testing Patterns
Mock Client Testing
Test the dynamic channel mechanism by creating mock implementations of RpcServiceCallerInterface:
Sources: extensions/muxio-rpc-service-caller/tests/dynamic_channel_tests.rs:101-167
Integration Testing
Full integration tests with real client/server validate streaming across the WebSocket transport, testing scenarios like:
- Large payloads chunked correctly
- Bounded channel backpressure
- Early disconnect cancels pending streams
- Error status codes propagate correctly
Sources: extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:168-292
Platform Implementations
Relevant source files
Purpose and Scope
This page documents the concrete platform-specific implementations that enable muxio to run in different runtime environments. These implementations provide the bridge between the runtime-agnostic RPC framework core and actual platform capabilities, including async runtimes, network protocols, and JavaScript interop.
The three production-ready implementations are:
- muxio-tokio-rpc-server : Native server using Tokio runtime and Axum framework
- muxio-tokio-rpc-client : Native client using Tokio runtime and tokio-tungstenite
- muxio-wasm-rpc-client : Browser client using wasm-bindgen and JavaScript WebSocket API
For details on the RPC abstraction layer these platforms implement, see RPC Framework. For guidance on creating custom platform implementations, see Extending the Framework.
Overview
The muxio framework provides three platform implementations, each targeting different deployment environments while sharing the same core RPC abstractions and service definitions. All implementations use WebSocket as the transport protocol and communicate using the same binary framing format.
| Implementation | Runtime Environment | Primary Dependencies | Typical Use Cases |
|---|---|---|---|
muxio-tokio-rpc-server | Native (Tokio async) | axum, tokio-tungstenite | HTTP/WebSocket servers, microservices |
muxio-tokio-rpc-client | Native (Tokio async) | tokio, tokio-tungstenite | CLI tools, native apps, integration tests |
muxio-wasm-rpc-client | WebAssembly (browser) | wasm-bindgen, js-sys | Web applications, browser extensions |
All implementations are located in extensions/ and follow the workspace structure defined in Cargo.toml:19-31
Sources: Cargo.toml:19-31 README.md:38-40 Cargo.lock:897-954
Platform Integration Architecture
The following diagram shows how platform implementations integrate with the RPC framework and muxio core components:
Sources: Cargo.toml:39-47 Cargo.lock:897-954 README.md:38-51
graph TB
subgraph "Application Layer"
APP["Application Code\nService Methods"]
end
subgraph "RPC Abstraction Layer"
CALLER["RpcServiceCallerInterface\nClient-side trait"]
ENDPOINT["RpcServiceEndpointInterface\nServer-side trait"]
SERVICE["RpcMethodPrebuffered\nService definitions"]
end
subgraph "Transport Implementations"
TOKIO_SERVER["muxio-tokio-rpc-server\nRpcServer struct"]
TOKIO_CLIENT["muxio-tokio-rpc-client\nRpcClient struct"]
WASM_CLIENT["muxio-wasm-rpc-client\nRpcWasmClient struct"]
end
subgraph "Core Layer"
DISPATCHER["RpcDispatcher\nRequest correlation"]
FRAMING["Binary Framing Protocol\nStream multiplexing"]
end
subgraph "Network Layer"
WS_SERVER["tokio_tungstenite\nWebSocket server"]
WS_CLIENT_NATIVE["tokio_tungstenite\nWebSocket client"]
WS_CLIENT_WASM["Browser WebSocket API\nvia wasm_bindgen"]
end
APP --> SERVICE
SERVICE --> CALLER
SERVICE --> ENDPOINT
CALLER --> TOKIO_CLIENT
CALLER --> WASM_CLIENT
ENDPOINT --> TOKIO_SERVER
TOKIO_SERVER --> DISPATCHER
TOKIO_CLIENT --> DISPATCHER
WASM_CLIENT --> DISPATCHER
DISPATCHER --> FRAMING
TOKIO_SERVER --> WS_SERVER
TOKIO_CLIENT --> WS_CLIENT_NATIVE
WASM_CLIENT --> WS_CLIENT_WASM
FRAMING --> WS_SERVER
FRAMING --> WS_CLIENT_NATIVE
FRAMING --> WS_CLIENT_WASM
Tokio RPC Server
The extensions/muxio-tokio-rpc-server/ crate provides a production-ready WebSocket server implementation using the Tokio async runtime. The central type is RpcServer, which combines Axum’s HTTP/WebSocket capabilities with the RpcServiceEndpointInterface trait for handler registration.
graph TB
subgraph "RpcServer Structure"
SERVER["RpcServer\nArc-wrapped"]
ENDPOINT_FIELD["endpoint: Arc<RpcServiceEndpoint>"]
CALLER_FIELD["caller: Arc<RpcServiceCaller>"]
end
subgraph "Axum Integration"
ROUTER["axum::Router"]
WS_UPGRADE["WebSocketUpgrade handler"]
WS_ROUTE["/ws route"]
end
subgraph "Connection Handler"
ACCEPT_CONN["handle_websocket_connection()"]
TOKIO_SPAWN["tokio::spawn per connection"]
MSG_LOOP["Message read/write loop"]
end
subgraph "Dependencies"
AXUM_CRATE["axum v0.8.4"]
TOKIO_TUNG["tokio-tungstenite v0.26.2"]
TOKIO_RT["tokio v1.45.1"]
end
SERVER --> ENDPOINT_FIELD
SERVER --> CALLER_FIELD
SERVER --> ROUTER
ROUTER --> WS_ROUTE
WS_ROUTE --> WS_UPGRADE
WS_UPGRADE --> ACCEPT_CONN
ACCEPT_CONN --> TOKIO_SPAWN
TOKIO_SPAWN --> MSG_LOOP
ROUTER --> AXUM_CRATE
WS_UPGRADE --> TOKIO_TUNG
TOKIO_SPAWN --> TOKIO_RT
Core Components
Sources: Cargo.lock:917-933 README.md:94-128
Server Lifecycle
The server follows this initialization and operation sequence:
| Phase | Method | Description |
|---|---|---|
| Construction | RpcServer::new(config) | Creates server with optional configuration, initializes endpoint and caller |
| Handler Registration | endpoint().register_prebuffered() | Registers RPC method handlers before starting server |
| Binding | serve_with_listener(listener) | Accepts a TcpListener and starts serving on it |
| Connection Acceptance | Internal | Axum router upgrades HTTP connections to WebSocket |
| Per-Connection Spawn | Internal | Each WebSocket connection gets its own tokio::spawn task |
| Message Processing | Internal | Reads WebSocket binary messages, feeds to RpcDispatcher |
A complete server example appears in README.md:94-128.
Sources: README.md:94-128
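Pieced together from the lifecycle table, a server setup might look roughly like the sketch below; the constructor arguments, handler signature, and await points are assumptions, so refer to README.md:94-128 for the real code.
// Hedged lifecycle sketch based on the table above (not the README example).
let server = RpcServer::new(Default::default()); // config shape is an assumption

// Register handlers before serving; ADD_METHOD_ID and the handler body are placeholders.
server
    .endpoint()
    .register_prebuffered(ADD_METHOD_ID, |bytes, _ctx| async move {
        Ok(bytes) // echo, for illustration only
    });

// Bind a TcpListener and start accepting WebSocket connections on /ws.
let listener = tokio::net::TcpListener::bind("127.0.0.1:8080").await?;
server.serve_with_listener(listener).await?;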
Integration with Axum
The server uses Axum’s router to expose a WebSocket endpoint at /ws. The implementation leverages:
- axum::extract::ws::WebSocketUpgrade for protocol upgrade
- axum::Router::new().route("/ws", get(handler)) for routing
- Per-connection state isolation using Arc cloning
Sources: Cargo.lock:80-114 Cargo.lock:917-933
Tokio RPC Client
The extensions/muxio-tokio-rpc-client/ crate provides a native client implementation that establishes WebSocket connections and makes RPC calls using the Tokio runtime. The primary type is RpcClient, which implements RpcServiceCallerInterface.
graph TB
subgraph "RpcClient Structure"
CLIENT["RpcClient"]
INNER["ClientInner\nArc<TokioMutex<...>>"]
DISPATCHER_REF["dispatcher: Arc<TokioMutex<RpcDispatcher>>"]
ENDPOINT_REF["endpoint: Arc<RpcServiceEndpoint>"]
STATE_HANDLER["state_handler: Option<Callback>"]
end
subgraph "Background Tasks"
READ_TASK["tokio::spawn read_task"]
WRITE_TASK["tokio::spawn write_task"]
STATE_TASK["State change publisher"]
end
subgraph "WebSocket Communication"
WS_STREAM["WebSocketStream"]
SPLIT_WRITE["SplitSink<write>"]
SPLIT_READ["SplitStream<read>"]
end
subgraph "Dependencies"
TOKIO_TUNG_CLI["tokio-tungstenite v0.26.2"]
TOKIO_RT_CLI["tokio v1.45.1"]
FUTURES["futures-util"]
end
CLIENT --> INNER
INNER --> DISPATCHER_REF
INNER --> ENDPOINT_REF
INNER --> STATE_HANDLER
CLIENT --> READ_TASK
CLIENT --> WRITE_TASK
READ_TASK --> SPLIT_READ
WRITE_TASK --> SPLIT_WRITE
SPLIT_READ --> WS_STREAM
SPLIT_WRITE --> WS_STREAM
WS_STREAM --> TOKIO_TUNG_CLI
READ_TASK --> TOKIO_RT_CLI
WRITE_TASK --> TOKIO_RT_CLI
SPLIT_READ --> FUTURES
SPLIT_WRITE --> FUTURES
Client Architecture
Sources: Cargo.lock:898-916 README.md:136-142
Connection Establishment
The client connection follows this sequence:
- DNS Resolution & TCP Connection : `tokio_tungstenite::connect_async()` establishes the TCP connection
- WebSocket Handshake : HTTP upgrade to the WebSocket protocol
- Stream Splitting : the WebSocket stream is split into separate read/write halves using `futures::StreamExt::split()`
- Background Task Spawn : two tasks are spawned for bidirectional communication
- State Notification : connection state changes from `Connecting` to `Connected`
Example from README.md:136-142:
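A minimal sketch of the connection flow, assuming `RpcClient::new(host, port)` and the `set_state_change_handler` callback described under Connection Lifecycle and State Management below; paths and signatures are assumptions, not a verbatim copy of the README example:

```rust
// Hedged sketch only: import paths and signatures are assumed.
use muxio_tokio_rpc_client::{RpcClient, RpcTransportState}; // paths assumed

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Opens the WebSocket connection and spawns the background read/write tasks.
    let client = RpcClient::new("127.0.0.1", 3000).await?;

    // Observe Connecting / Connected / Disconnected transitions.
    client
        .set_state_change_handler(|state: RpcTransportState| {
            let _ = state; // update UI, schedule reconnects, queue requests, etc.
        })
        .await;

    // Prebuffered RPC calls are issued through RpcServiceCallerInterface here,
    // using the shared service definitions shown later in this document.
    Ok(())
}
```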
Sources: README.md:136-142
Arc-Based Lifecycle Management
The client uses Arc reference counting for shared ownership:
- `RpcDispatcher` wrapped in `Arc<TokioMutex<...>>` for concurrent access
- `RpcServiceEndpoint` wrapped in `Arc<...>` for a shared handler registry
- Background tasks hold `Arc` clones to prevent premature cleanup
- When all `Arc` references drop, the connection automatically closes
This design enables:
- Multiple concurrent RPC calls from different tasks
- Bidirectional RPC (client can handle incoming calls from server)
- Automatic cleanup on disconnect without manual resource management
Sources: Cargo.lock:898-916
Background Task Architecture
The client spawns two persistent Tokio tasks:
| Task | Purpose | Error Handling |
|---|---|---|
| Read Task | Reads WebSocket binary messages, feeds bytes to RpcDispatcher | Exits on error, triggers state change to Disconnected |
| Write Task | Receives bytes from RpcDispatcher write callback, sends as WebSocket messages | Exits on error, triggers state change to Disconnected |
Both tasks communicate through channels and callbacks, maintaining the non-async core design of muxio.
Sources: Cargo.lock:898-916
WASM RPC Client
The extensions/muxio-wasm-rpc-client/ crate enables RPC communication from browser environments by bridging Rust WebAssembly code with JavaScript’s native WebSocket API. This implementation demonstrates muxio’s cross-platform capability without requiring Tokio.
graph TB
subgraph "Rust WASM Layer"
WASM_CLIENT["RpcWasmClient"]
STATIC_REF["MUXIO_STATIC_RPC_CLIENT_REF\nthread_local! RefCell"]
DISPATCHER_WASM["RpcDispatcher"]
ENDPOINT_WASM["RpcServiceEndpoint"]
end
subgraph "wasm-bindgen Bridge"
EXTERN_FN["#[wasm_bindgen]\nstatic_muxio_write_bytes()"]
CLOSURE["Closure::wrap callbacks"]
JS_VALUE["JsValue conversions"]
end
subgraph "JavaScript Environment"
WS_API["new WebSocket(url)"]
ONMESSAGE["ws.onmessage event"]
ONERROR["ws.onerror event"]
ONOPEN["ws.onopen event"]
SEND["ws.send(bytes)"]
end
subgraph "Dependencies"
WASM_BINDGEN_CRATE["wasm-bindgen v0.2.100"]
JS_SYS_CRATE["js-sys v0.3.77"]
WASM_FUTURES_CRATE["wasm-bindgen-futures v0.4.50"]
end
WASM_CLIENT --> STATIC_REF
WASM_CLIENT --> DISPATCHER_WASM
WASM_CLIENT --> ENDPOINT_WASM
DISPATCHER_WASM --> EXTERN_FN
EXTERN_FN --> SEND
ONMESSAGE --> CLOSURE
CLOSURE --> DISPATCHER_WASM
EXTERN_FN --> WASM_BINDGEN_CRATE
CLOSURE --> WASM_BINDGEN_CRATE
JS_VALUE --> JS_SYS_CRATE
WS_API --> JS_SYS_CRATE
WASM Bridge Architecture
Sources: Cargo.lock:934-954 README.md:51
JavaScript Interop Pattern
The WASM client uses a bidirectional byte-passing bridge between Rust and JavaScript:
Rust → JavaScript (outgoing data):
- `RpcDispatcher` invokes the write callback with `Vec<u8>`
- The callback invokes the `#[wasm_bindgen]` extern function `static_muxio_write_bytes()`
- JavaScript receives a `Uint8Array` and calls `WebSocket.send()`
JavaScript → Rust (incoming data):
- JavaScript `ws.onmessage` receives an `ArrayBuffer`
- Converts it to a `Uint8Array` and passes it to the Rust entry point
- Rust code accesses `MUXIO_STATIC_RPC_CLIENT_REF` and feeds the bytes to `RpcDispatcher`
This design eliminates async runtime dependencies while maintaining compatibility with the same service definitions used by native clients.
Sources: Cargo.lock:934-954 README.md:51-52
Static Client Pattern
The WASM client uses a thread-local static reference for JavaScript access:
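A sketch of that thread-local slot, following the shape described on the WASM RPC Client page (the real definition lives in `static_client.rs` and may differ in detail):

```rust
// Hedged sketch of MUXIO_STATIC_RPC_CLIENT_REF; not the crate's verbatim code.
use std::cell::RefCell;
use std::sync::Arc;
use muxio_wasm_rpc_client::RpcWasmClient; // path assumed

thread_local! {
    // A single, per-module client instance that #[wasm_bindgen] exports and the
    // JavaScript glue code can reach without passing Rust objects across FFI.
    static MUXIO_STATIC_RPC_CLIENT_REF: RefCell<Option<Arc<RpcWasmClient>>> =
        RefCell::new(None);
}
```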
This pattern enables:
- Simple JavaScript API that doesn’t require passing Rust objects
- Single global client instance per WebAssembly module
- Automatic memory management through `Arc` reference counting
Sources: Cargo.lock:934-954
Browser Compatibility
The WASM client compiles to the `wasm32-unknown-unknown` target and relies on standard browser APIs:
- `WebSocket` constructor for connection establishment
- `WebSocket.send()` for binary message transmission
- `WebSocket.onmessage` for binary message reception
- `WebSocket.onerror` and `WebSocket.onclose` for error handling
No polyfills or special browser features are required beyond standard WebSocket support, which is available in all modern browsers.
Sources: Cargo.lock:934-954 Cargo.lock:1637-1646 Cargo.lock:1663-1674
WebSocket Protocol Selection
All transport implementations use WebSocket as the underlying protocol for several reasons:
| Criterion | Rationale |
|---|---|
| Binary support | Native support for binary frames aligns with muxio’s binary framing protocol |
| Bidirectional | Full-duplex communication enables server-initiated messages and streaming |
| Browser compatibility | Widely supported in all modern browsers via standard JavaScript API |
| Connection persistence | Single long-lived connection reduces overhead of multiple HTTP requests |
| Framing built-in | WebSocket’s message framing complements muxio’s multiplexing layer |
WebSocket messages carry the binary-serialized RPC frames defined by the muxio core protocol. The transport layer is responsible for:
- Establishing and maintaining WebSocket connections
- Converting between WebSocket binary messages and byte slices
- Handling connection lifecycle events (connect, disconnect, errors)
- Providing state change notifications to application code
Sources: Cargo.lock:1446-1455 Cargo.lock:1565-1580 README.md:32
stateDiagram-v2
[*] --> Disconnected : Initial state
Disconnected --> Connecting: RpcClient::new() called
Connecting --> Connected : WebSocket handshake success
Connecting --> Disconnected : Connection failure DNS error Network timeout
Connected --> Disconnected : Network error Server closes connection Client drop
Disconnected --> [*] : All Arc references dropped
note right of Disconnected
RpcTransportState::Disconnected
end note
note right of Connecting
RpcTransportState::Connecting
end note
note right of Connected
RpcTransportState::Connected
end note
Connection Lifecycle and State Management
All client implementations (RpcClient and RpcWasmClient) implement connection state tracking through the RpcTransportState enum. State changes are exposed to application code via callback handlers, enabling reactive connection management.
Connection State Machine
Sources: README.md:138-141
State Change Handler Registration
Applications register state change callbacks using set_state_change_handler():
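A hedged sketch of such a registration, written against the `RpcServiceCallerInterface` trait (the handler signature and import paths are assumptions):

```rust
// Hedged sketch: the trait and enum names follow this document; the handler
// signature and import paths are assumptions.
use muxio_rpc_service_caller::{RpcServiceCallerInterface, RpcTransportState}; // paths assumed

async fn watch_transport<C: RpcServiceCallerInterface>(client: &C) {
    client
        .set_state_change_handler(|state| match state {
            RpcTransportState::Connected => println!("ready for RPC calls"),
            RpcTransportState::Connecting => println!("connection in progress"),
            RpcTransportState::Disconnected => println!("connection lost"),
        })
        .await;
}
```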
The handler receives RpcTransportState enum variants:
- `RpcTransportState::Disconnected` - not connected; safe to drop the client
- `RpcTransportState::Connecting` - connection in progress; should not send requests
- `RpcTransportState::Connected` - fully connected; ready for RPC calls
Sources: README.md:138-141 README.md:75-76
Automatic Cleanup on Disconnect
Both client implementations use Arc reference counting for automatic resource cleanup:
| Resource | Cleanup Mechanism | Trigger |
|---|---|---|
| WebSocket connection | Dropped when read/write tasks exit | Network error, server close, last Arc dropped |
| Background tasks | tokio::spawn tasks exit naturally | Connection close detected |
| Pending requests | RpcDispatcher returns errors for in-flight requests | State change to Disconnected |
| Stream decoders | Removed from RpcSession decoder map | Stream end or connection close |
When the last Arc<RpcClient> reference is dropped:
- Destructor signals background tasks to exit
- Read/write tasks complete their current iteration and exit
- WebSocket connection closes gracefully (if still open)
- All pending request futures resolve with connection errors
- State transitions to `Disconnected`
This design eliminates manual cleanup and prevents resource leaks.
Sources: Cargo.lock:898-916 Cargo.lock:934-954
State-Based Application Logic
Common patterns using state callbacks:
UI Connection Indicator : surface the current `RpcTransportState` to the user interface.
Automatic Reconnection : attempt a new connection after observing `Disconnected`.
Request Queueing : hold outbound requests until the state returns to `Connected`.
Sources: README.md:138-141
graph TD
subgraph "Tokio Server Stack"
TOKIO_SRV["muxio-tokio-rpc-server"]
AXUM["axum\nv0.8.4"]
TOKIO_1["tokio\nv1.45.1"]
TUNGSTENITE_1["tokio-tungstenite\nv0.26.2"]
end
subgraph "Tokio Client Stack"
TOKIO_CLI["muxio-tokio-rpc-client"]
TOKIO_2["tokio\nv1.45.1"]
TUNGSTENITE_2["tokio-tungstenite\nv0.26.2"]
end
subgraph "WASM Client Stack"
WASM_CLI["muxio-wasm-rpc-client"]
WASM_BINDGEN["wasm-bindgen\nv0.2.100"]
JS_SYS_DEP["js-sys\nv0.3.77"]
WASM_FUTURES["wasm-bindgen-futures\nv0.4.50"]
end
subgraph "Shared RPC Layer"
RPC_SERVICE["muxio-rpc-service"]
RPC_CALLER["muxio-rpc-service-caller"]
RPC_ENDPOINT["muxio-rpc-service-endpoint"]
end
subgraph "Core"
MUXIO_CORE["muxio"]
end
TOKIO_SRV --> AXUM
TOKIO_SRV --> TOKIO_1
TOKIO_SRV --> TUNGSTENITE_1
TOKIO_SRV --> RPC_ENDPOINT
TOKIO_CLI --> TOKIO_2
TOKIO_CLI --> TUNGSTENITE_2
TOKIO_CLI --> RPC_CALLER
WASM_CLI --> WASM_BINDGEN
WASM_CLI --> JS_SYS_DEP
WASM_CLI --> WASM_FUTURES
WASM_CLI --> RPC_CALLER
RPC_ENDPOINT --> RPC_SERVICE
RPC_CALLER --> RPC_SERVICE
RPC_SERVICE --> MUXIO_CORE
Dependency Graph
The following diagram shows the concrete dependency relationships between transport implementations and their supporting crates:
Sources: Cargo.lock:917-933 Cargo.lock:898-916 Cargo.lock:934-954 Cargo.toml:39-64
Cross-Platform Service Definition Sharing
A key design principle is that all transport implementations can consume the same service definitions. This is achieved through the RpcMethodPrebuffered trait, which defines methods with compile-time generated method IDs and encoding/decoding logic.
| Component | Role | Shared Across Transports |
|---|---|---|
| `RpcMethodPrebuffered` trait | Defines RPC method signature | ✓ Yes |
| `encode_request()` / `decode_request()` | Parameter serialization | ✓ Yes |
| `encode_response()` / `decode_response()` | Result serialization | ✓ Yes |
| `METHOD_ID` constant | Compile-time hash of method name | ✓ Yes |
| Transport connection logic | WebSocket handling | ✗ No (platform-specific) |
Example service definition usage from README.md:144-151:
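The README snippet builds on a shared definition crate; the following hypothetical `Add` definition illustrates the pieces named in the table above (`METHOD_ID`, `encode_request`/`decode_request`, `encode_response`/`decode_response`) without reproducing the real `RpcMethodPrebuffered` trait, whose exact signatures may differ:

```rust
// Hypothetical illustration of a shared service definition; not the crate's
// actual trait or example code.
use std::io;

pub struct Add;

impl Add {
    // Compile-time method identifier (the real crate derives this by hashing
    // the method name with xxhash); the value here is a placeholder.
    pub const METHOD_ID: u64 = 0x1;

    pub fn encode_request(params: &[f64]) -> Vec<u8> {
        params.iter().flat_map(|v| v.to_le_bytes()).collect()
    }

    pub fn decode_request(bytes: &[u8]) -> io::Result<Vec<f64>> {
        Ok(bytes
            .chunks_exact(8)
            .map(|c| f64::from_le_bytes(c.try_into().unwrap()))
            .collect())
    }

    pub fn encode_response(sum: f64) -> Vec<u8> {
        sum.to_le_bytes().to_vec()
    }

    pub fn decode_response(bytes: &[u8]) -> io::Result<f64> {
        let arr: [u8; 8] = bytes
            .try_into()
            .map_err(|_| io::Error::new(io::ErrorKind::InvalidData, "bad length"))?;
        Ok(f64::from_le_bytes(arr))
    }
}
```

Because both client and server depend only on the shared definition, the request/response encoding is guaranteed to match on both ends of the connection.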
The same service definitions work identically with RpcClient (Tokio), RpcWasmClient (WASM), and any future transport implementations that implement RpcServiceCallerInterface.
Sources: README.md:47-49 README.md:69-73 README.md:144-151 Cargo.toml:42
Implementation Selection Guidelines
Choose the appropriate transport implementation based on your deployment target:
Use `muxio-tokio-rpc-server` when:
- Building server-side applications
- Need to handle multiple concurrent client connections
- Require integration with existing Tokio/Axum infrastructure
- Operating in native Rust environments
Use `muxio-tokio-rpc-client` when:
- Building native client applications (CLI tools, desktop apps)
- Writing integration tests for server implementations
- Need Tokio’s async runtime features
- Operating in native Rust environments
Use `muxio-wasm-rpc-client` when:
- Building web applications that run in browsers
- Creating browser extensions
- Need to communicate with servers from JavaScript contexts
- Targeting the `wasm32-unknown-unknown` platform
For detailed usage examples of each transport, refer to the subsections Tokio RPC Server, Tokio RPC Client, and WASM RPC Client.
Sources: README.md:38-51 Cargo.toml:19-31
Tokio RPC Server
Relevant source files
Purpose and Scope
The muxio-tokio-rpc-server crate provides a production-ready WebSocket RPC server implementation for native Rust environments using the Tokio async runtime. This server integrates the muxio RPC framework with Axum’s HTTP/WebSocket capabilities and tokio-tungstenite’s WebSocket protocol handling.
This document covers the server-side implementation for native applications. For client-side Tokio implementations, see Tokio RPC Client. For browser-based clients, see WASM RPC Client. For general information about service definitions and endpoint interfaces, see Service Endpoint Interface.
Sources:
- extensions/muxio-tokio-rpc-server/Cargo.toml:1-23
- High-level architecture diagrams (Diagram 1)
Architecture Overview
The Tokio RPC Server sits at the intersection of three major subsystems: Axum’s HTTP framework, the muxio core multiplexing layer, and the RPC service framework. It provides a complete server implementation that accepts WebSocket connections, manages multiple concurrent clients, and dispatches RPC requests to registered handlers.
graph TB
subgraph "Application Layer"
APP["Application Code\nService Handler Implementations"]
end
subgraph "Server Interface Layer - muxio-tokio-rpc-server"
RPC_SERVER["RpcServer\nMain server struct\nHandler registration\nConnection management"]
WS_HANDLER["WebSocket Handler\nhandle_websocket_connection\nPer-connection async task"]
end
subgraph "RPC Framework Integration"
ENDPOINT["RpcServiceEndpointInterface\nDispatcher to handlers\nMethod routing"]
CALLER["RpcServiceCallerInterface\nServer-to-client calls\nBidirectional RPC"]
DISPATCHER["RpcDispatcher\nRequest correlation\nResponse routing"]
end
subgraph "Transport Layer"
AXUM["Axum Framework\nHTTP routing\nWebSocket upgrade"]
TUNGSTENITE["tokio-tungstenite\nWebSocket protocol\nFrame handling"]
end
subgraph "Core Multiplexing"
SESSION["RpcSession\nStream multiplexing\nFrame mux/demux"]
end
APP --> RPC_SERVER
RPC_SERVER --> WS_HANDLER
RPC_SERVER --> ENDPOINT
WS_HANDLER --> CALLER
WS_HANDLER --> ENDPOINT
WS_HANDLER --> DISPATCHER
WS_HANDLER --> AXUM
AXUM --> TUNGSTENITE
DISPATCHER --> SESSION
ENDPOINT --> DISPATCHER
CALLER --> DISPATCHER
TUNGSTENITE --> SESSION
style RPC_SERVER fill:#e1f5ff
style WS_HANDLER fill:#e1f5ff
Server Component Stack
Description: The server is built in layers. The RpcServer struct provides the high-level interface for configuring the server and registering handlers. When a WebSocket connection arrives via Axum, a dedicated handle_websocket_connection task is spawned. This task instantiates both an RpcServiceEndpointInterface (for handling incoming client requests) and an RpcServiceCallerInterface (for making server-initiated calls to the client), both sharing a single RpcDispatcher and RpcSession for the connection.
Sources:
- High-level architecture diagrams (Diagrams 1, 2, 3)
- extensions/muxio-tokio-rpc-server/Cargo.toml:11-22
Key Components
RpcServer Structure
The main server structure manages the overall server lifecycle, route configuration, and handler registry. It integrates with Axum’s routing system to expose WebSocket endpoints.
| Component | Type | Purpose |
|---|---|---|
RpcServer | Main struct | Server configuration, handler registration, Axum router integration |
| Handler registry | Internal storage | Maintains registered RPC service handlers |
| Axum router | axum::Router | HTTP routing and WebSocket upgrade handling |
| Connection tracking | State management | Tracks active connections and connection state |
Key Responsibilities:
- Registering service handlers via `RpcServiceEndpointInterface`
- Creating Axum routes for WebSocket endpoints
- Spawning per-connection tasks
- Managing server lifecycle (start, stop, graceful shutdown)
Sources:
- extensions/muxio-tokio-rpc-server/Cargo.toml:12-16
- High-level architecture diagrams (Diagram 1)
WebSocket Connection Handler
Each WebSocket connection is handled by a dedicated async task that manages the complete lifecycle of that connection.
Description: The connection handler orchestrates all communication for a single client. It creates a shared RpcDispatcher, instantiates both endpoint and caller interfaces, and manages the read/write loops for WebSocket frames. This enables full bidirectional RPC - clients can call server methods, and the server can call client methods over the same connection.
sequenceDiagram
participant Axum as "Axum Router"
participant Handler as "handle_websocket_connection"
participant Endpoint as "RpcServiceEndpointInterface"
participant Caller as "RpcServiceCallerInterface"
participant Dispatcher as "RpcDispatcher"
participant WS as "WebSocket\n(tokio-tungstenite)"
Axum->>Handler: WebSocket upgrade
Handler->>Handler: Create RpcDispatcher
Handler->>Endpoint: Instantiate with dispatcher
Handler->>Caller: Instantiate with dispatcher
Handler->>Handler: Spawn read loop
Handler->>Handler: Spawn write loop
Note over Handler,WS: Connection Active
WS->>Handler: Binary frame received
Handler->>Dispatcher: Feed bytes
Dispatcher->>Endpoint: Route request
Endpoint->>Handler: Handler result
Handler->>Dispatcher: Send response
Dispatcher->>WS: Binary frames
Note over Handler,WS: Bidirectional RPC Support
Handler->>Caller: call(request)
Caller->>Dispatcher: Outgoing request
Dispatcher->>WS: Binary frames
WS->>Handler: Response frames
Handler->>Dispatcher: Feed bytes
Dispatcher->>Caller: Deliver response
Note over Handler,WS: Connection Closing
WS->>Handler: Close frame
Handler->>Endpoint: Cleanup
Handler->>Caller: Cleanup
Handler->>Handler: Task exit
Sources:
- High-level architecture diagrams (Diagram 3)
- extensions/muxio-tokio-rpc-server/Cargo.toml:14-22
WebSocket Integration with Axum
The server leverages Axum’s built-in WebSocket support to handle the HTTP upgrade handshake and ongoing WebSocket communication.
graph LR
subgraph "Axum Router Configuration"
ROUTER["axum::Router"]
ROUTE["WebSocket Route\n/ws or custom path"]
UPGRADE["WebSocket Upgrade Handler\naxum::extract::ws"]
end
subgraph "Handler Function"
WS_FN["handle_websocket_connection\nAsync task per connection"]
EXTRACT["Extract WebSocket\nfrom HTTP upgrade"]
end
subgraph "Connection State"
DISPATCHER_STATE["Arc<Mutex<RpcDispatcher>>"]
ENDPOINT_STATE["Arc<RpcServiceEndpointInterface>"]
HANDLER_REGISTRY["Handler Registry\nRegistered service handlers"]
end
ROUTER --> ROUTE
ROUTE --> UPGRADE
UPGRADE --> WS_FN
WS_FN --> EXTRACT
WS_FN --> DISPATCHER_STATE
WS_FN --> ENDPOINT_STATE
WS_FN --> HANDLER_REGISTRY
style ROUTER fill:#fff4e1
style WS_FN fill:#e1f5ff
Route Configuration
Description: The server configures an Axum route (typically at /ws) that handles WebSocket upgrades. When a client connects, Axum invokes the handler function with the upgraded WebSocket. The handler then creates the necessary RPC infrastructure (dispatcher, endpoint, caller) and enters the connection loop.
Integration Points:
- `axum::extract::ws::WebSocketUpgrade` - HTTP to WebSocket upgrade
- `axum::extract::ws::WebSocket` - bidirectional WebSocket stream
- `axum::routing::get()` - route registration
- `tokio::spawn()` - connection task spawning
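A minimal sketch of that wiring using only public Axum APIs; this is illustrative glue, not the crate's actual handler:

```rust
// Hedged sketch: in the real server, the per-connection task feeds binary
// frames into the shared RpcDispatcher instead of discarding them.
use axum::{
    extract::ws::{WebSocket, WebSocketUpgrade},
    response::Response,
    routing::get,
    Router,
};

async fn ws_handler(ws: WebSocketUpgrade) -> Response {
    // Upgrade the HTTP request and hand the socket to a per-connection task.
    ws.on_upgrade(handle_socket)
}

async fn handle_socket(mut socket: WebSocket) {
    // Read loop placeholder: binary messages would be routed to the dispatcher.
    while let Some(Ok(msg)) = socket.recv().await {
        let _ = msg;
    }
}

fn router() -> Router {
    Router::new().route("/ws", get(ws_handler))
}
```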
Sources:
- Cargo.lock:80-114 (Axum dependencies)
- Cargo.lock:1446-1455 (tokio-tungstenite dependencies)
Handler Registration and Dispatch
Service handlers are registered with the server before it starts, allowing the endpoint to route incoming requests to the appropriate handler implementations.
Handler Registration Flow
| Step | Action | Code Entity |
|---|---|---|
| 1 | Create RpcServer instance | RpcServer::new() or ::builder() |
| 2 | Register service handlers | register_handler<T>() method |
| 3 | Build Axum router | Internal router configuration |
| 4 | Start server | serve() or run() method |
Request Dispatch Mechanism
Description: When a complete RPC request is received and decoded, the RpcServiceEndpointInterface looks up the registered handler by method ID. The handler is invoked asynchronously, and its result is serialized and sent back through the same connection. Error handling is built into each layer, with errors propagated back to the client as RpcServiceError responses.
Sources:
- extensions/muxio-tokio-rpc-server/Cargo.toml:18-20 (RPC service dependencies)
- High-level architecture diagrams (Diagram 3)
graph TB
subgraph "Per-Connection State"
SHARED_DISPATCHER["Arc<TokioMutex<RpcDispatcher>>\nShared between endpoint & caller"]
end
subgraph "Server-to-Client Direction"
CALLER["RpcServiceCallerInterface\nServer-initiated calls"]
CALLER_LOGIC["Call Logic\nawait response from client"]
end
subgraph "Client-to-Server Direction"
ENDPOINT["RpcServiceEndpointInterface\nClient-initiated calls"]
HANDLER["Registered Handlers\nServer-side implementations"]
end
subgraph "Transport Sharing"
READ_LOOP["Read Loop\nReceive WebSocket frames\nFeed to dispatcher"]
WRITE_LOOP["Write Loop\nSend WebSocket frames\nFrom dispatcher"]
WEBSOCKET["WebSocket Connection\nSingle TCP connection"]
end
CALLER --> SHARED_DISPATCHER
ENDPOINT --> SHARED_DISPATCHER
SHARED_DISPATCHER --> READ_LOOP
SHARED_DISPATCHER --> WRITE_LOOP
READ_LOOP --> WEBSOCKET
WRITE_LOOP --> WEBSOCKET
CALLER_LOGIC --> CALLER
HANDLER --> ENDPOINT
style SHARED_DISPATCHER fill:#e1f5ff
style WEBSOCKET fill:#fff4e1
Bidirectional RPC Support
A key feature of the Tokio RPC Server is support for server-initiated RPC calls to connected clients. This enables push notifications, server-side events, and bidirectional request/response patterns.
Bidirectional Communication Architecture
Description: Both the RpcServiceEndpointInterface and RpcServiceCallerInterface share the same RpcDispatcher and underlying RpcSession. This allows both directions to multiplex their requests over the same WebSocket connection. The dispatcher ensures that request IDs are unique and that responses are routed to the correct awaiting caller, whether that’s a client waiting for a server response or a server waiting for a client response.
Use Cases:
- Server pushing real-time updates to clients
- Server requesting information from clients
- Peer-to-peer style communication patterns
- Event-driven architectures where both sides can initiate actions
Sources:
- extensions/muxio-tokio-rpc-server/Cargo.toml:18-20
- High-level architecture diagrams (Diagram 3)
Connection Lifecycle Management
The server manages the complete lifecycle of each WebSocket connection, from initial upgrade through active communication to graceful or abrupt termination.
Connection States
| State | Description | Transitions |
|---|---|---|
Connecting | WebSocket upgrade in progress | → Connected |
Connected | Active RPC communication | → Disconnecting, Error |
Disconnecting | Graceful shutdown initiated | → Disconnected |
Disconnected | Connection closed cleanly | Terminal state |
Error | Connection error occurred | → Disconnected |
Lifecycle State Machine
Description: The connection lifecycle is managed by the per-connection task. State transitions are tracked internally, and state change callbacks (if registered) are invoked to notify application code of connection events. Resources are properly cleaned up in all terminal states.
Resource Cleanup:
- WebSocket stream closed
- Per-stream decoders removed from
RpcSession - Pending requests in
RpcDispatchercompleted with error - Connection removed from server’s tracking state
- Task exit and memory deallocation
Sources:
- High-level architecture diagrams (Diagram 6)
- extensions/muxio-tokio-rpc-server/Cargo.toml:15 (Tokio dependency)
graph LR
subgraph "Inbound Path"
WS_MSG["WebSocket Binary Message\ntokio_tungstenite::Message"]
BYTES["Bytes Extract\nVec<u8> or Bytes"]
FEED["dispatcher.feed_bytes()\nProcess frame data"]
DECODE["RpcSession Decode\nReconstruct streams"]
EVENT["RpcDispatcher Events\nComplete requests/responses"]
end
subgraph "Outbound Path"
RESPONSE["Response Data\nFrom handlers or caller"]
ENCODE["RpcSession Encode\nFrame into chunks"]
WRITE_QUEUE["Write Queue\nPending frames"]
WS_SEND["WebSocket Send\nBinary message"]
end
WS_MSG --> BYTES
BYTES --> FEED
FEED --> DECODE
DECODE --> EVENT
RESPONSE --> ENCODE
ENCODE --> WRITE_QUEUE
WRITE_QUEUE --> WS_SEND
style FEED fill:#e1f5ff
style ENCODE fill:#e1f5ff
Binary Frame Processing
The server handles the low-level details of converting between WebSocket binary messages and muxio’s framing protocol.
Frame Processing Pipeline
Description: WebSocket messages arrive as binary frames. The server extracts the byte payload and feeds it to the RpcDispatcher, which uses the RpcSession to demultiplex and decode the frames. In the outbound direction, responses are encoded by RpcSession into chunked frames and queued for WebSocket transmission.
Frame Format Details:
- Binary WebSocket frames only (text frames rejected)
- Frames may be chunked at the WebSocket layer (transparent to muxio)
- muxio's internal chunking is based on `DEFAULT_MAX_CHUNK_SIZE`
- Stream multiplexing allows multiple concurrent operations
Sources:
- Cargo.lock:1446-1455 (tokio-tungstenite)
- High-level architecture diagrams (Diagram 5)
Server Configuration and Builder Pattern
The server typically provides a builder pattern for configuration before starting.
Configuration Options
| Option | Purpose | Default |
|---|---|---|
| Bind address | TCP address and port | 127.0.0.1:3000 (typical) |
| WebSocket path | Route path for WebSocket upgrade | /ws (typical) |
| Connection limits | Max concurrent connections | Unlimited (configurable) |
| Timeout settings | Connection and request timeouts | Platform defaults |
| Handler registry | Registered RPC service handlers | Empty (must register) |
| State change handlers | Lifecycle event callbacks | None (optional) |
Typical Server Setup Pattern
Description: The server is configured using a builder pattern. Handlers are registered before the server starts. Once build() is called, the server configures its Axum router with the WebSocket route. The run() or serve() method starts the server’s event loop.
Sources:
- extensions/muxio-tokio-rpc-server/Cargo.toml:12 (Axum dependency)
- extensions/muxio-tokio-rpc-server/Cargo.toml:15 (Tokio dependency)
Integration with Tokio Async Runtime
The server is fully integrated with the Tokio async runtime, using async/await throughout.
Async Task Structure
| Task Type | Purpose | Lifetime |
|---|---|---|
| Server listener | Accept incoming connections | Until server shutdown |
| Per-connection task | Handle one WebSocket connection | Until connection closes |
| Read loop | Process incoming WebSocket frames | Per-connection lifetime |
| Write loop | Send outgoing WebSocket frames | Per-connection lifetime |
| Handler execution | Execute registered RPC handlers | Per-request duration |
Async Patterns:
- `async fn` for all I/O operations
- `tokio::spawn()` for concurrent task creation
- `tokio::select!` for graceful shutdown coordination
- `TokioMutex` for shared state protection
- `futures::stream::StreamExt` for frame processing
Concurrency Model:
- One Tokio task per connection
- Handlers executed concurrently on Tokio thread pool
- Multiplexed streams allow concurrent RPC calls per connection
- Backpressure handled at WebSocket layer
Sources:
- Cargo.lock:1417-1432 (Tokio dependency)
- extensions/muxio-tokio-rpc-server/Cargo.toml:15
- extensions/muxio-tokio-rpc-server/Cargo.toml:21 (async-trait)
graph TB
subgraph "Error Sources"
WS_ERR["WebSocket Errors\nConnection failures\nProtocol violations"]
DECODE_ERR["Decode Errors\nInvalid frames\nMalformed data"]
HANDLER_ERR["Handler Errors\nBusiness logic failures\nRpcServiceError"]
DISPATCHER_ERR["Dispatcher Errors\nUnknown request ID\nMutex poisoning"]
end
subgraph "Error Handling"
LOG["tracing::error!\nStructured logging"]
CLIENT_RESP["Error Response\nRpcServiceError to client"]
DISCONNECT["Connection Close\nUnrecoverable errors"]
STATE_CALLBACK["State Change Handler\nNotify application"]
end
WS_ERR --> LOG
WS_ERR --> DISCONNECT
DECODE_ERR --> LOG
DECODE_ERR --> CLIENT_RESP
HANDLER_ERR --> LOG
HANDLER_ERR --> CLIENT_RESP
DISPATCHER_ERR --> LOG
DISPATCHER_ERR --> DISCONNECT
DISCONNECT --> STATE_CALLBACK
style LOG fill:#fff4e1
style DISCONNECT fill:#ffe1e1
Error Handling and Observability
The server provides comprehensive error handling and integrates with the tracing ecosystem for observability.
Error Propagation
Description: Errors are categorized by severity and handled appropriately. Handler errors are serialized and sent to the client as RpcServiceError responses. Transport and protocol errors are logged and result in connection termination. All errors are emitted as structured tracing events for monitoring and debugging.
Tracing Integration:
- `#[tracing::instrument]` on key functions
- Span context for each connection
- Structured fields for method IDs, request IDs, error codes
- Error, warn, info, debug, and trace level events
Sources:
- extensions/muxio-tokio-rpc-server/Cargo.toml:22 (tracing dependency)
- Cargo.lock:1503-1562 (tracing ecosystem)
Performance Considerations
The Tokio RPC Server is designed for high-performance scenarios with multiple concurrent connections and high request throughput.
Performance Characteristics
| Metric | Behavior | Tuning |
|---|---|---|
| Connections | O(n) tasks, minimal per-connection overhead | Tokio thread pool size |
| Throughput | Stream multiplexing enables pipelining | DEFAULT_MAX_CHUNK_SIZE |
| Latency | Low - single async task hop per request | Handler execution time |
| Memory | Incremental - per-stream decoders only | Connection limits |
Optimization Strategies:
- Stream multiplexing reduces head-of-line blocking
- Binary protocol minimizes serialization overhead
- Zero-copy where possible (using `bytes::Bytes`)
- Efficient buffer management in `RpcSession`
- Concurrent handler execution on the Tokio thread pool
Sources:
- extensions/muxio-tokio-rpc-server/Cargo.toml:13 (bytes dependency)
- High-level architecture diagrams (Diagram 5)
- Cargo.lock:209-212 (bytes crate)
Tokio RPC Client
Relevant source files
- extensions/muxio-rpc-service-caller/src/lib.rs
- extensions/muxio-rpc-service-caller/tests/dynamic_channel_tests.rs
- extensions/muxio-tokio-rpc-client/Cargo.toml
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs
- extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs
Purpose and Scope
This document describes the muxio-tokio-rpc-client crate, which provides a Tokio-based native RPC client implementation for connecting to muxio WebSocket servers. The client uses tokio-tungstenite for WebSocket transport and implements the platform-agnostic RpcServiceCallerInterface trait to enable type-safe RPC calls.
For server-side implementation details, see Tokio RPC Server. For the browser-based WASM client alternative, see WASM RPC Client. For general connection lifecycle concepts applicable to both platforms, see Connection Lifecycle and State Management.
Sources : extensions/muxio-tokio-rpc-client/Cargo.toml:1-31 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:1-336
Architecture Overview
The RpcClient is built on Tokio's asynchronous runtime and manages a persistent WebSocket connection to a muxio server. It encapsulates bidirectional RPC functionality through two primary abstractions:
- Client-side calling : via the `RpcServiceCallerInterface` trait, enabling outbound RPC invocations
- Server-to-client calling : via an embedded `RpcServiceEndpoint`, enabling the server to invoke methods on the client
graph TB
subgraph "RpcClient Structure"
CLIENT["RpcClient"]
DISPATCHER["Arc<TokioMutex<RpcDispatcher>>\nRequest correlation & routing"]
ENDPOINT["Arc<RpcServiceEndpoint<()>>\nServer-to-client RPC handler"]
TX["mpsc::UnboundedSender<WsMessage>\nInternal message queue"]
STATE_HANDLER["Arc<StdMutex<Option<Box<dyn Fn>>>>\nState change callback"]
IS_CONNECTED["Arc<AtomicBool>\nConnection state flag"]
TASK_HANDLES["Vec<JoinHandle<()>>\nBackground task handles"]
end
subgraph "Background Tasks"
HEARTBEAT["Heartbeat Task\nSends Ping every 1s"]
RECV["Receive Loop Task\nProcesses WS frames"]
SEND["Send Loop Task\nTransmits WS messages"]
end
subgraph "WebSocket Layer"
WS_STREAM["tokio_tungstenite::WebSocketStream"]
WS_SENDER["SplitSink\nWrite half"]
WS_RECEIVER["SplitStream\nRead half"]
end
CLIENT -->|owns| DISPATCHER
CLIENT -->|owns| ENDPOINT
CLIENT -->|owns| TX
CLIENT -->|owns| STATE_HANDLER
CLIENT -->|owns| IS_CONNECTED
CLIENT -->|owns| TASK_HANDLES
TASK_HANDLES -->|contains| HEARTBEAT
TASK_HANDLES -->|contains| RECV
TASK_HANDLES -->|contains| SEND
HEARTBEAT -->|sends Ping via| TX
SEND -->|reads from| TX
SEND -->|writes to| WS_SENDER
RECV -->|reads from| WS_RECEIVER
RECV -->|processes via| DISPATCHER
RECV -->|processes via| ENDPOINT
WS_SENDER -->|part of| WS_STREAM
WS_RECEIVER -->|part of| WS_STREAM
The client spawns three background Tokio tasks to manage the WebSocket lifecycle: a heartbeat task for periodic pings, a receive loop for processing incoming WebSocket frames, and a send loop for transmitting outbound messages.
High-Level Component Structure
Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:25-32 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:131-267
Core Components
RpcClient Structure
The RpcClient struct holds all state and resources necessary for maintaining an active WebSocket connection and processing RPC operations.
| Field | Type | Purpose |
|---|---|---|
| `dispatcher` | `Arc<TokioMutex<RpcDispatcher<'static>>>` | Manages request ID allocation, response correlation, and stream multiplexing |
| `endpoint` | `Arc<RpcServiceEndpoint<()>>` | Handles server-to-client RPC method dispatch |
| `tx` | `mpsc::UnboundedSender<WsMessage>` | Internal channel for queuing outbound WebSocket messages |
| `state_change_handler` | `RpcTransportStateChangeHandler` | User-registered callback invoked on connection state changes |
| `is_connected` | `Arc<AtomicBool>` | Atomic flag tracking current connection status |
| `task_handles` | `Vec<JoinHandle<()>>` | Handles to the three background Tokio tasks |
Type Alias : RpcTransportStateChangeHandler is defined as Arc<StdMutex<Option<Box<dyn Fn(RpcTransportState) + Send + Sync>>>>, allowing thread-safe storage and invocation of the state change callback.
Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:22-32
Debug and Drop Implementations
The RpcClient implements Debug to display connection status and Drop to ensure proper cleanup:
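A self-contained sketch of the Drop pattern described here, using a stand-in struct; the real `RpcClient` holds more state, and its `shutdown_sync()` also swaps the `is_connected` flag before notifying the handler:

```rust
// Hedged sketch of the Drop behavior; not the code in rpc_client.rs.
use tokio::task::JoinHandle;

struct RpcClientSketch {
    task_handles: Vec<JoinHandle<()>>,
}

impl RpcClientSketch {
    fn shutdown_sync(&self) {
        // In the real client this flips `is_connected` and invokes the
        // registered state change handler with `Disconnected`.
    }
}

impl Drop for RpcClientSketch {
    fn drop(&mut self) {
        // Stop the heartbeat, read, and write loops.
        for handle in &self.task_handles {
            handle.abort();
        }
        self.shutdown_sync();
    }
}
```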
The Drop implementation ensures that when the last Arc<RpcClient> reference is dropped, all background tasks are aborted and the state change handler is notified of disconnection.
Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:34-52
Lifecycle Management
Client Creation
The RpcClient::new method establishes a WebSocket connection and initializes all client components. It uses Arc::new_cyclic to enable background tasks to hold weak references to the client, preventing circular reference cycles.
sequenceDiagram
participant App as "Application Code"
participant New as "RpcClient::new"
participant WS as "tokio_tungstenite"
participant Cyclic as "Arc::new_cyclic"
participant Tasks as "Background Tasks"
App->>New: RpcClient::new(host, port)
New->>New: Build websocket_url
New->>WS: connect_async(url)
WS-->>New: WebSocketStream + Response
New->>New: ws_stream.split()
New->>New: mpsc::unbounded_channel()
New->>Cyclic: "Arc::new_cyclic(|weak_client|)"
Cyclic->>Tasks: Spawn heartbeat task
Cyclic->>Tasks: Spawn receive loop task
Cyclic->>Tasks: Spawn send loop task
Cyclic-->>New: Arc<RpcClient>
New-->>App: Ok(Arc<RpcClient>)
Connection Establishment Flow
Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:110-271
Key Initialization Steps
- WebSocket URL Construction : parses the host to determine whether it is an IP address or a hostname, constructing the appropriate `ws://` URL (rpc_client.rs:112-115)
- Connection Establishment : calls `connect_async` from `tokio-tungstenite` (rpc_client.rs:118-121)
- Stream Splitting : splits the WebSocket stream into separate read and write halves (rpc_client.rs:127)
- Channel Creation : creates an unbounded MPSC channel for internal message passing (rpc_client.rs:128)
- Cyclic Arc Initialization : uses `Arc::new_cyclic` to allow tasks to hold weak references (rpc_client.rs:131)
- Component Initialization : creates `RpcDispatcher`, `RpcServiceEndpoint`, and state tracking (rpc_client.rs:132-136)
- Task Spawning : spawns the three background tasks (rpc_client.rs:139-257)
Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:110-271
Shutdown Mechanisms
The client provides both synchronous and asynchronous shutdown paths to handle different termination scenarios.
Shutdown Flow Comparison
| Aspect | shutdown_sync() | shutdown_async() |
|---|---|---|
| Context | Called from Drop implementation | Called from background tasks on error |
| Locking | Uses StdMutex::lock() (blocking) | Uses TokioMutex::lock().await (async) |
| Dispatcher | Does not acquire dispatcher lock | Acquires dispatcher lock to fail pending requests |
| Pending Requests | Left in dispatcher (client is dropping) | Explicitly failed with ReadAfterCancel error |
| State Handler | Invokes with Disconnected if connected | Invokes with Disconnected if connected |
Synchronous Shutdown (rpc_client.rs:56-77):
- Used when the client is being dropped
- Checks `is_connected` and swaps it to `false` atomically
- Invokes the state change handler if connected
- Does not fail pending requests (the client is being destroyed)
Asynchronous Shutdown (rpc_client.rs:79-108):
- Used when background tasks detect connection errors
- Swaps `is_connected` to `false` atomically
- Acquires the dispatcher lock asynchronously
- Calls `fail_all_pending_requests` with `FrameDecodeError::ReadAfterCancel`
- Invokes the state change handler if connected
Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:56-108
graph LR
subgraph "Task Lifecycle"
SPAWN["Arc::new_cyclic\ncreates Weak references"]
HEARTBEAT_TASK["Heartbeat Task"]
RECV_TASK["Receive Loop Task"]
SEND_TASK["Send Loop Task"]
SPAWN --> HEARTBEAT_TASK
SPAWN --> RECV_TASK
SPAWN --> SEND_TASK
end
subgraph "Shared State"
APP_TX["mpsc::UnboundedSender\nMessage queue"]
WS_SENDER["WebSocket write half"]
WS_RECEIVER["WebSocket read half"]
WEAK_CLIENT["Weak<RpcClient>\nUpgradable reference"]
end
HEARTBEAT_TASK -->|send Ping| APP_TX
SEND_TASK -->|recv| APP_TX
SEND_TASK -->|send msg| WS_SENDER
RECV_TASK -->|next| WS_RECEIVER
RECV_TASK -->|upgrade| WEAK_CLIENT
SEND_TASK -->|upgrade| WEAK_CLIENT
Background Tasks
The client spawns three independent Tokio tasks during initialization, each serving a specific role in the connection lifecycle.
Task Architecture
Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:131-267
Heartbeat Task
The heartbeat task generates periodic ping messages to maintain connection liveness and detect silent disconnections.
Implementation (rpc_client.rs:139-154):
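A hedged sketch of that loop; the real task also upgrades a `Weak<RpcClient>` reference and uses the crate's `WsMessage` alias:

```rust
// Hedged sketch only; not the code at rpc_client.rs:139-154.
use std::time::Duration;
use tokio::sync::mpsc::UnboundedSender;
use tokio_tungstenite::tungstenite::Message;

async fn heartbeat_loop(tx: UnboundedSender<Message>) {
    let mut interval = tokio::time::interval(Duration::from_secs(1));
    loop {
        interval.tick().await;
        // A closed channel means the client has been dropped; exit quietly.
        if tx.send(Message::Ping(Default::default())).is_err() {
            break;
        }
    }
}
```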
Behavior :
- Ticks every 1 second using
tokio::time::interval - Sends
WsMessage::Pingto the internal message queue - Exits when the channel is closed (client dropped)
Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:139-154
Receive Loop Task
The receive loop processes incoming WebSocket frames and routes them through the RPC dispatcher and endpoint.
Implementation (rpc_client.rs:157-222):
Frame Processing :
| Message Type | Action |
|---|---|
| `Binary` | Lock dispatcher, call `endpoint.read_bytes()` to process RPC frames |
| `Ping` | Send `Pong` response via internal channel |
| `Pong` | Log and ignore (response to our heartbeat pings) |
| `Text` | Log and ignore (protocol uses binary frames only) |
| `Close` | Break loop (connection closed) |
| Error | Spawn `shutdown_async()` task and break loop |
Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:157-222
Send Loop Task
The send loop drains the internal message queue and transmits messages to the WebSocket.
Implementation (rpc_client.rs:224-257):
Behavior :
- Receives messages from the MPSC channel
- Checks
is_connectedbefore sending (prevents sending after disconnect signal) - Uses
Ordering::Acquirefor memory synchronization with the shutdown path - Spawns
shutdown_async()on send errors - Exits when the channel is closed or disconnected
Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:224-257
Connection State Management
is_connected Flag
The is_connected field is an Arc<AtomicBool> that tracks the current connection status. It uses atomic operations to ensure thread-safe updates across multiple tasks.
Memory Ordering :
- `Ordering::SeqCst` for swapping in shutdown paths (strongest guarantee)
- `Ordering::Acquire` in the send loop for reading (synchronizes with shutdown writes)
- `Ordering::Relaxed` for general reads (no synchronization needed)
State Transitions :
Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:30 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:61 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:85
State Change Handlers
Applications can register a state change handler to be notified of connection and disconnection events.
Handler Registration (rpc_client.rs:315-334):
Invocation Points :
- Connected : Immediately after setting the handler if already connected
- Disconnected : in `shutdown_sync()` or `shutdown_async()` when the connection ends
Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:315-334
graph TB
subgraph "RpcServiceCallerInterface Trait"
GET_DISPATCHER["get_dispatcher()\nReturns Arc<TokioMutex<RpcDispatcher>>"]
IS_CONNECTED["is_connected()\nReturns bool"]
GET_EMIT_FN["get_emit_fn()\nReturns Arc<dyn Fn(Vec<u8>)>"]
SET_STATE_HANDLER["set_state_change_handler()\nRegisters callback"]
end
subgraph "RpcClient Implementation"
IMPL_GET_DISP["Clone dispatcher Arc"]
IMPL_IS_CONN["Load is_connected atomic"]
IMPL_EMIT["Closure sending to tx channel"]
IMPL_SET_STATE["Store handler, invoke if connected"]
end
GET_DISPATCHER --> IMPL_GET_DISP
IS_CONNECTED --> IMPL_IS_CONN
GET_EMIT_FN --> IMPL_EMIT
SET_STATE_HANDLER --> IMPL_SET_STATE
RpcServiceCallerInterface Implementation
The RpcClient implements the platform-agnostic RpcServiceCallerInterface trait, enabling it to be used with shared RPC service definitions.
Trait Methods Implementation
Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:278-335
get_dispatcher
Returns a clone of the Arc<TokioMutex<RpcDispatcher>>, allowing RPC call implementations to access the dispatcher for request correlation and stream management.
Implementation (rpc_client.rs:280-282):
Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:280-282
is_connected
Returns the current connection status by loading the atomic boolean with relaxed ordering.
Implementation (rpc_client.rs:284-286):
Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:284-286
get_emit_fn
Returns a closure that captures the internal message channel and is_connected flag. This closure is used by the RPC dispatcher to emit binary frames for transmission.
Implementation (rpc_client.rs:289-313):
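A hedged sketch of the closure's shape (not the code at rpc_client.rs:289-313):

```rust
// Hedged sketch: captures the internal channel and the is_connected flag, and
// forwards encoded frames as binary WebSocket messages.
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use tokio::sync::mpsc::UnboundedSender;
use tokio_tungstenite::tungstenite::Message;

fn make_emit_fn(
    tx: UnboundedSender<Message>,
    is_connected: Arc<AtomicBool>,
) -> Arc<dyn Fn(Vec<u8>) + Send + Sync> {
    Arc::new(move |bytes: Vec<u8>| {
        if is_connected.load(Ordering::Relaxed) {
            // A closed channel just means the client is shutting down; ignore.
            let _ = tx.send(Message::Binary(bytes.into()));
        }
    })
}
```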
Behavior :
- Checks
is_connectedbefore sending (prevents writes after disconnect) - Converts
Vec<u8>toWsMessage::Binary - Sends to the internal MPSC channel (non-blocking unbounded send)
- Ignores send errors (channel closed means client is dropping)
Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:289-313
WebSocket Transport Integration
tokio-tungstenite
The client uses tokio-tungstenite for WebSocket protocol implementation over Tokio’s async I/O.
Dependencies (Cargo.toml:16):
Connection Flow :
- URL Construction : builds the `ws://host:port/ws` URL string
- Async Connect : calls `connect_async(&websocket_url)`, which returns `(WebSocketStream, Response)`
- Stream Split : calls `ws_stream.split()` to obtain `(SplitSink, SplitStream)`
- Task Distribution : distributes the sink to the send loop and the stream to the receive loop
Sources : extensions/muxio-tokio-rpc-client/Cargo.toml:16 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:19 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:118-127
Binary Frame Processing
All RPC communication uses WebSocket binary frames. The receive loop processes binary frames by passing them to the endpoint for decoding.
Binary Frame Flow :
Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:163-178
Ping/Pong Handling
The client automatically responds to server pings and sends periodic pings via the heartbeat task.
Ping/Pong Matrix :
| Direction | Message Type | Sender | Handler |
|---|---|---|---|
| Client → Server | Ping | Heartbeat task (1s interval) | Server responds with Pong |
| Server → Client | Ping | Server | Receive loop sends Pong |
| Server → Client | Pong | Server (response to our Ping) | Receive loop logs and ignores |
Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:139-154 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:179-182
Error Handling and Cleanup
Error Sources
The client handles errors from multiple sources:
| Error Source | Handling Strategy | Cleanup Action |
|---|---|---|
| `connect_async` failure | Return `io::Error` from `new()` | No cleanup needed (not created) |
| WebSocket receive error | Spawn shutdown_async(), break receive loop | Fail pending requests, notify handler |
| WebSocket send error | Spawn shutdown_async(), break send loop | Fail pending requests, notify handler |
| MPSC channel closed | Break task loop | Task exits naturally |
| Client drop | Abort tasks, call shutdown_sync() | Notify handler, abort background tasks |
Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:118-121 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:186-198 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:239-253
Pending Request Cleanup
When the connection is lost, all pending RPC requests in the dispatcher must be failed to prevent callers from waiting indefinitely.
Cleanup Flow (rpc_client.rs:100-103):
Error Propagation :
- Dispatcher lock is acquired (prevents new requests)
fail_all_pending_requestsis called withReadAfterCancelerror- All pending request callbacks receive
RpcServiceError::Transport - Waiting futures resolve with error
Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:100-103
Task Abort on Drop
The Drop implementation ensures clean shutdown even if the client is dropped while background tasks are running.
Drop Sequence (rpc_client.rs:42-52):
- Iterate over
task_handlesand callabort()on each - Call
shutdown_sync()to notify state handlers - Background tasks receive abort signal and terminate
Abort Safety : Aborting tasks is safe because:
- The receive loop holds a weak reference (won’t prevent drop)
- The send loop checks
is_connectedbefore sending - The heartbeat task only sends pings (no critical state)
- All critical state is owned by the
RpcClientitself
Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:42-52
Usage Examples
Basic Connection and Call
This example demonstrates:
- Creating a client with
RpcClient::new(host, port) - Making a prebuffered RPC call via the trait method
- The client is automatically cleaned up when dropped
Sources : Example pattern from integration tests
State Change Handler Registration
The handler is invoked:
- Immediately with
Connectedif already connected - With
Disconnectedwhen the connection is lost or client is dropped
Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:315-334
Server-to-Client RPC
The client can also handle incoming RPC calls from the server by registering handlers on its embedded endpoint.
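A hedged sketch of registering such a handler; the method ID, handler signature, and `get_endpoint()` usage are assumptions consistent with this document, not the crate's verbatim API:

```rust
// Hedged sketch only: the exact register_prebuffered signature may differ.
use muxio_tokio_rpc_client::RpcClient; // path assumed

const NOTIFY_METHOD_ID: u64 = 0xBEEF; // hypothetical method ID

async fn register_client_handlers(client: &RpcClient) {
    client
        .get_endpoint()
        .register_prebuffered(NOTIFY_METHOD_ID, |_ctx, request: Vec<u8>| async move {
            // Handle a server-initiated call and return encoded response bytes.
            Ok::<Vec<u8>, Box<dyn std::error::Error + Send + Sync>>(request)
        })
        .await;
}
```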
This demonstrates bidirectional RPC:
- The client can call server methods via
RpcServiceCallerInterface - The server can call client methods via the registered endpoint handlers
Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:273-275
Testing
The crate includes comprehensive integration tests covering connection lifecycle, error handling, and state management.
Test Coverage
| Test | File | Purpose |
|---|---|---|
test_client_errors_on_connection_failure | tests/transport_state_tests.rs:16-31 | Verifies connection errors are returned properly |
test_transport_state_change_handler | tests/transport_state_tests.rs:34-165 | Validates state handler invocations |
test_pending_requests_fail_on_disconnect | tests/transport_state_tests.rs:167-292 | Ensures pending requests fail on disconnect |
Sources : extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:1-293
Mock Implementations
The muxio-rpc-service-caller crate tests include mock client implementations demonstrating the trait contract:
MockRpcClient Structure (dynamic_channel_tests.rs:19-88):
- Implements
RpcServiceCallerInterfacefor testing - Uses
Arc<Mutex<Option<DynamicSender>>>to provide response senders - Demonstrates dynamic channel handling (bounded/unbounded)
- Uses
Arc<AtomicBool>for connection state simulation
Sources : extensions/muxio-rpc-service-caller/tests/dynamic_channel_tests.rs:15-88
WASM RPC Client
Relevant source files
- extensions/muxio-rpc-service/Cargo.toml
- extensions/muxio-tokio-rpc-client/src/lib.rs
- extensions/muxio-wasm-rpc-client/Cargo.toml
- extensions/muxio-wasm-rpc-client/src/lib.rs
- extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs
- extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs
The WASM RPC Client provides a WebAssembly-compatible implementation of the RPC transport layer for browser environments. It bridges Rust code compiled to WASM with JavaScript’s WebSocket API, enabling bidirectional RPC communication between WASM clients and native servers.
This page focuses on the client-side WASM implementation. For native Tokio-based clients, see Tokio RPC Client. For server-side implementations, see Tokio RPC Server. For the RPC abstraction layer, see RPC Framework.
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:1-182 extensions/muxio-wasm-rpc-client/Cargo.toml:1-30
Architecture Overview
The WASM RPC client bridges Rust WASM code with JavaScript’s WebSocket API. Unlike the Tokio client which manages its own WebSocket connection, RpcWasmClient relies on JavaScript glue code to handle WebSocket events and delegates to Rust for RPC protocol processing.
Diagram: WASM Client Architecture and Data Flow
graph TB
subgraph "Browser JavaScript"
WS[WebSocket]
WRITE_BYTES["static_muxio_write_bytes()"]
APP["Web Application"]
end
subgraph "WASM Module"
TLS["MUXIO_STATIC_RPC_CLIENT_REF\nRefCell<Option<Arc<RpcWasmClient>>>"]
CLIENT["RpcWasmClient"]
DISP["Arc<Mutex<RpcDispatcher>>"]
EP["Arc<RpcServiceEndpoint<()>>"]
EMIT["emit_callback: Arc<dyn Fn(Vec<u8>)>"]
CONN["is_connected: Arc<AtomicBool>"]
end
subgraph "Core Layer"
MUXIO["muxio::rpc::RpcDispatcher\nmuxio::frame"]
end
APP -->|new WebSocket| WS
WS -->|onopen| TLS
WS -->|onmessage bytes| TLS
WS -->|onerror/onclose| TLS
TLS -->|handle_connect| CLIENT
TLS -->|read_bytes bytes| CLIENT
TLS -->|handle_disconnect| CLIENT
CLIENT --> DISP
CLIENT --> EP
CLIENT --> EMIT
CLIENT --> CONN
EMIT -->|invoke| WRITE_BYTES
WRITE_BYTES -->|websocket.send| WS
DISP --> MUXIO
EP --> MUXIO
The architecture consists of three layers:
- JavaScript Layer : manages the WebSocket lifecycle (`onopen`, `onmessage`, `onclose`) and forwards events to WASM
- WASM Bridge Layer : `RpcWasmClient` with `emit_callback` for outbound data and lifecycle methods for inbound events
- Core RPC Layer : `RpcDispatcher` for request/response correlation and `RpcServiceEndpoint<()>` for handling incoming calls
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:16-35 extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:1-11 extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:25-36
RpcWasmClient Structure
The RpcWasmClient struct manages bidirectional RPC communication in WASM environments. It implements RpcServiceCallerInterface for outbound calls and uses RpcServiceEndpoint<()> for inbound request handling.
Diagram: RpcWasmClient Class Structure
classDiagram
class RpcWasmClient {
-Arc~Mutex~RpcDispatcher~~ dispatcher
-Arc~RpcServiceEndpoint~()~~ endpoint
-Arc~dyn Fn(Vec~u8~)~ emit_callback
-RpcTransportStateChangeHandler state_change_handler
-Arc~AtomicBool~ is_connected
+new(emit_callback) RpcWasmClient
+handle_connect() async
+read_bytes(bytes: &[u8]) async
+handle_disconnect() async
+is_connected() bool
+get_endpoint() Arc~RpcServiceEndpoint~()~~
-dispatcher() Arc~Mutex~RpcDispatcher~~
-emit() Arc~dyn Fn(Vec~u8~)~
}
class RpcServiceCallerInterface {<<trait>>\n+get_dispatcher() Arc~Mutex~RpcDispatcher~~\n+get_emit_fn() Arc~dyn Fn(Vec~u8~)~\n+is_connected() bool\n+set_state_change_handler(handler) async}
class RpcDispatcher {
+read_bytes(bytes: &[u8]) Result~Vec~u32~, FrameDecodeError~
+respond(response, chunk_size, callback) Result
+is_rpc_request_finalized(id: u32) Option~bool~
+delete_rpc_request(id: u32) Option~RpcRequest~
+fail_all_pending_requests(error)
}
class RpcServiceEndpoint {+get_prebuffered_handlers() Arc\n+register_prebuffered_handler()}
RpcWasmClient ..|> RpcServiceCallerInterface
RpcWasmClient --> RpcDispatcher
RpcWasmClient --> RpcServiceEndpoint
| Field | Type | Purpose |
|---|---|---|
| `dispatcher` | `Arc<Mutex<RpcDispatcher<'static>>>` | Manages request/response correlation via `request_id` and stream multiplexing |
| `endpoint` | `Arc<RpcServiceEndpoint<()>>` | Dispatches incoming RPC requests to registered handlers by `METHOD_ID` |
| `emit_callback` | `Arc<dyn Fn(Vec<u8>) + Send + Sync>` | Callback invoked to send bytes to JavaScript's `static_muxio_write_bytes()` |
| `state_change_handler` | `Arc<Mutex<Option<Box<dyn Fn(RpcTransportState) + Send + Sync>>>>` | Optional callback for `Connected`/`Disconnected` state transitions |
| `is_connected` | `Arc<AtomicBool>` | Lock-free connection status tracking |
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:16-35 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:154-181
Connection Lifecycle
The WASM client relies on JavaScript to manage the WebSocket connection. Three lifecycle methods must be called from JavaScript glue code in response to WebSocket events:
stateDiagram-v2
[*] --> Disconnected : new()
Disconnected --> Connected : handle_connect()
Connected --> Processing : read_bytes(data)
Processing --> Connected
Connected --> Disconnected : handle_disconnect()
Disconnected --> [*]
note right of Connected
is_connected = true
state_change_handler(Connected)
end note
note right of Processing
1. read_bytes() into dispatcher
2. process_single_prebuffered_request()
3. respond() with results
end note
note right of Disconnected
is_connected = false
state_change_handler(Disconnected)
fail_all_pending_requests()
end note
handle_connect
Called when JavaScript’s WebSocket onopen event fires. Sets is_connected to true via AtomicBool::store() and invokes the registered state_change_handler with RpcTransportState::Connected.
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:38-44
read_bytes
The core message processing method, called when JavaScript’s WebSocket onmessage event fires. Implements a three-stage pipeline to avoid holding the dispatcher lock during expensive async operations.
Diagram: read_bytes Three-Stage Pipeline
sequenceDiagram
participant JS as "JavaScript\nonmessage"
participant RB as "read_bytes()"
participant Disp as "Mutex<RpcDispatcher>"
participant Proc as "process_single_prebuffered_request()"
participant Handler as "User Handler"
JS->>RB: read_bytes(bytes: &[u8])
rect rgb(240, 240, 240)
Note over RB,Disp: Stage 1: Synchronous Reading (lines 56-81)
RB->>Disp: lock().await
RB->>Disp: dispatcher.read_bytes(bytes)
Disp-->>RB: Ok(request_ids: Vec<u32>)
loop "for id in request_ids"
RB->>Disp: is_rpc_request_finalized(id)
Disp-->>RB: Some(true)
RB->>Disp: delete_rpc_request(id)
Disp-->>RB: Some(RpcRequest)
end
Note over RB: Lock dropped here
end
rect rgb(245, 245, 245)
Note over RB,Handler: Stage 2: Async Processing (lines 83-103)
loop "for (request_id, request)"
RB->>Proc: process_single_prebuffered_request()
Proc->>Handler: handler(context, request)
Handler-->>Proc: Result<Vec<u8>, RpcServiceError>
Proc-->>RB: RpcResponse
end
RB->>RB: join_all(response_futures).await
end
rect rgb(240, 240, 240)
Note over RB,JS: Stage 3: Synchronous Sending (lines 105-120)
RB->>Disp: lock().await
loop "for response"
RB->>Disp: dispatcher.respond(response, chunk_size, callback)
Disp->>RB: emit_callback(chunk)
RB->>JS: static_muxio_write_bytes(chunk)
end
Note over RB: Lock dropped here
end
Stage 1 (lines 56-81) : Acquires dispatcher lock via Mutex::lock().await, calls dispatcher.read_bytes(bytes) to decode frames, identifies finalized requests using is_rpc_request_finalized(), extracts them with delete_rpc_request(), then releases lock.
Stage 2 (lines 83-103) : Without holding any locks, calls process_single_prebuffered_request() for each request. This invokes user handlers asynchronously and collects RpcResponse results using join_all().
Stage 3 (lines 105-120) : Re-acquires dispatcher lock, calls dispatcher.respond() for each response, which invokes emit_callback synchronously to send chunks via static_muxio_write_bytes().
This three-stage design prevents deadlocks by releasing the lock during handler execution and enables concurrent request processing.
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:48-121
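The lock discipline behind this pipeline can be illustrated with a short, generic sketch. The snippet below is not the actual read_bytes() implementation; it only demonstrates the pattern of dropping a tokio::sync::Mutex guard before awaiting handler futures and re-acquiring it afterwards, using simplified placeholder types.

```rust
use std::sync::Arc;

use futures::future::join_all;
use tokio::sync::Mutex;

// Placeholder types standing in for the dispatcher's extracted requests and responses.
type Request = Vec<u8>;
type Response = Vec<u8>;

async fn handle(request: Request) -> Response {
    request // echo; the real pipeline invokes the registered prebuffered handler
}

async fn read_bytes_pattern(dispatcher: Arc<Mutex<Vec<Request>>>, bytes: &[u8]) {
    // Stage 1: hold the lock only long enough to decode frames and pull out finalized requests.
    let requests: Vec<Request> = {
        let mut guard = dispatcher.lock().await;
        guard.push(bytes.to_vec());
        guard.drain(..).collect()
    }; // guard dropped here, before any handler runs

    // Stage 2: run handlers concurrently with no lock held.
    let responses: Vec<Response> = join_all(requests.into_iter().map(handle)).await;

    // Stage 3: re-acquire the lock briefly to emit the responses.
    let mut guard = dispatcher.lock().await;
    guard.extend(responses); // the real code calls dispatcher.respond(...) per response
}
```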
handle_disconnect
Called when JavaScript’s WebSocket onclose or onerror events fire. Uses AtomicBool::swap() to atomically set is_connected to false, invokes the state_change_handler with RpcTransportState::Disconnected, and calls dispatcher.fail_all_pending_requests() with FrameDecodeError::ReadAfterCancel to terminate all in-flight RPC calls.
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:124-134
Static Client Pattern
For simplified JavaScript integration, the WASM client provides a static client pattern using thread-local storage via MUXIO_STATIC_RPC_CLIENT_REF. This eliminates the need to pass client instances through JavaScript’s FFI boundary.
Diagram: Static Client Initialization and Access Flow
graph TB
subgraph "JavaScript"
INIT["init()"]
CALL["callRpcMethod()"]
end
subgraph "WASM Exports"
INIT_EXPORT["#[wasm_bindgen]\ninit_static_client()"]
RPC_EXPORT["#[wasm_bindgen]\nexported_rpc_fn()"]
end
subgraph "Static Client Layer"
TLS["MUXIO_STATIC_RPC_CLIENT_REF\nthread_local!\nRefCell<Option<Arc<RpcWasmClient>>>"]
WITH["with_static_client_async()"]
GET["get_static_client()"]
end
subgraph "Client"
CLIENT["Arc<RpcWasmClient>"]
end
INIT --> INIT_EXPORT
INIT_EXPORT -->|cell.borrow_mut| TLS
TLS -.->|stores| CLIENT
CALL --> RPC_EXPORT
RPC_EXPORT --> WITH
WITH -->|cell.borrow .clone| TLS
TLS -.->|retrieves| CLIENT
WITH -->|FnOnce Arc<RpcWasmClient>| CLIENT
init_static_client
Initializes MUXIO_STATIC_RPC_CLIENT_REF thread-local storage with Arc<RpcWasmClient>. The function is idempotent—subsequent calls have no effect. The client is constructed with RpcWasmClient::new(|bytes| static_muxio_write_bytes(&bytes)) to bridge outbound data to JavaScript.
Sources: extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:25-36 extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:9-11
with_static_client_async
Primary method for interacting with the static client from #[wasm_bindgen] exported functions. Retrieves Arc<RpcWasmClient> from MUXIO_STATIC_RPC_CLIENT_REF.with(), invokes the provided closure, and converts the result to a JavaScript Promise via future_to_promise().
| Parameter | Type | Description |
|---|---|---|
f | FnOnce(Arc<RpcWasmClient>) -> Fut + 'static | Closure receiving client reference |
Fut | Future<Output = Result<T, String>> + 'static | Future returned by closure |
T | Into<JsValue> | Result type convertible to JavaScript value |
| Returns | Promise | JavaScript promise resolving to T or rejecting with error string |
If the static client has not been initialized, the promise rejects with "RPC client not initialized".
Sources: extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:54-72
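As a conceptual stand-in, the sketch below shows the future_to_promise() bridging that with_static_client_async() performs. It uses only the real wasm-bindgen, wasm-bindgen-futures, and js-sys APIs; the function name and the placeholder result are illustrative, and a real export would first clone the Arc<RpcWasmClient> out of MUXIO_STATIC_RPC_CLIENT_REF and reject with "RPC client not initialized" when it is absent.

```rust
use js_sys::Promise;
use wasm_bindgen::prelude::*;
use wasm_bindgen_futures::future_to_promise;

// Hypothetical export: run an async block and expose its Result to JavaScript as a Promise.
#[wasm_bindgen]
pub fn example_export() -> Promise {
    future_to_promise(async move {
        let value: f64 = 42.0; // placeholder for an awaited RPC result
        Ok(JsValue::from_f64(value))
    })
}
```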
get_static_client
Returns the current static client if initialized, otherwise returns None. Useful for conditional logic or direct access without promise conversion.
Sources: extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:79-81
JavaScript Integration
The WASM client requires JavaScript glue code to bridge WebSocket events to WASM function calls. The integration relies on the emit_callback mechanism for outbound data and lifecycle methods for inbound events.
Diagram: JavaScript-WASM Bridge
graph TB
subgraph "JavaScript WebSocket Events"
OPEN["ws.onopen"]
MESSAGE["ws.onmessage"]
ERROR["ws.onerror"]
CLOSE["ws.onclose"]
end
subgraph "WASM Exported Functions"
WASM_CONNECT["handle_connect()"]
WASM_READ["read_bytes(event.data)"]
WASM_DISCONNECT["handle_disconnect()"]
end
subgraph "WASM Emit Path"
EMIT["emit_callback(bytes: Vec<u8>)"]
STATIC_WRITE["static_muxio_write_bytes(&bytes)"]
end
subgraph "JavaScript Bridge"
WRITE_FN["muxioWriteBytes(bytes)"]
end
OPEN -->|await| WASM_CONNECT
MESSAGE -->|await read_bytes new Uint8Array| WASM_READ
ERROR -->|await| WASM_DISCONNECT
CLOSE -->|await| WASM_DISCONNECT
EMIT -->|invoke| STATIC_WRITE
STATIC_WRITE -->|#[wasm_bindgen]| WRITE_FN
WRITE_FN -->|websocket.send bytes| MESSAGE
The JavaScript layer must:
- Create and manage a WebSocket instance
- Forward onopen events to await handle_connect()
- Forward onmessage data to await read_bytes(new Uint8Array(event.data))
- Forward onerror/onclose events to await handle_disconnect()
- Implement a muxioWriteBytes() function to receive data from static_muxio_write_bytes() and call websocket.send(bytes)
The emit_callback is constructed with |bytes| static_muxio_write_bytes(&bytes) when creating the client, which bridges to the JavaScript muxioWriteBytes() function.
Sources: extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:1-8 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:27-34
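The Rust side of this bridge can be declared with a #[wasm_bindgen] extern block. The following is a simplified sketch of what such a binding could look like; the crate's actual static_muxio_write_bytes wrapper and the JavaScript function name may be wired differently.

```rust
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
extern "C" {
    // Imported from the JavaScript glue; expected to call websocket.send(bytes).
    #[wasm_bindgen(js_name = muxioWriteBytes)]
    fn muxio_write_bytes(bytes: &[u8]);
}

// Outbound frames produced by the dispatcher are forwarded to JavaScript here,
// mirroring what static_muxio_write_bytes() does in the crate.
fn emit_to_javascript(bytes: Vec<u8>) {
    muxio_write_bytes(&bytes);
}
```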
Making RPC Calls
The WASM client implements RpcServiceCallerInterface, enabling the same call patterns as the Tokio client. All methods defined with the RpcMethodPrebuffered trait in service definitions are available via call_rpc_buffered().
Diagram: Outbound RPC Call Flow
sequenceDiagram
participant WASM as "WASM Code"
participant Method as "Add::call()"
participant Caller as "RpcServiceCallerInterface"
participant Disp as "RpcDispatcher"
participant Emit as "emit_callback"
participant JS as "static_muxio_write_bytes()"
participant WS as "WebSocket"
WASM->>Method: Add::call(&client, request).await
Method->>Method: encode_request(params)\nwith METHOD_ID
Method->>Caller: call_rpc_buffered(RpcRequest)
Caller->>Disp: get_dispatcher().lock()
Disp->>Disp: assign request_id
Disp->>Disp: encode frames
Disp->>Emit: get_emit_fn()(bytes)
Emit->>JS: static_muxio_write_bytes(&bytes)
JS->>WS: websocket.send(bytes)
WS->>JS: onmessage(response_bytes)
JS->>Caller: read_bytes(response_bytes)
Caller->>Disp: decode frames
Disp->>Disp: match request_id
Disp->>Method: decode_response(bytes)
Method-->>WASM: Result<Response, RpcServiceError>
Call Mechanics
From WASM code, RPC calls follow this pattern:
- Obtain Arc<RpcWasmClient> via with_static_client_async() or direct reference
- Call service methods: SomeMethod::call(&client, request).await
- The trait implementation calls client.call_rpc_buffered() which:
  - Serializes the request with bitcode::encode()
  - Attaches the METHOD_ID constant in the RpcHeader
  - Invokes dispatcher.call() with a unique request_id
  - Emits encoded frames via emit_callback
- Awaits response correlation by request_id in the dispatcher
- Returns Result<DecodedResponse, RpcServiceError>
Sources: extensions/muxio-wasm-rpc-client/src/lib.rs:6-9 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:154-167
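For illustration, a call from WASM application code might look like the following sketch. The Add method, its Vec<f64> request, and its f64 response come from the example service definition; the import paths and error conversion are assumptions, not the crate's exact API.

```rust
use std::sync::Arc;

// Assumed paths; the real crate layout may differ.
use example_muxio_rpc_service_definition::prebuffered::Add;
use muxio_rpc_service_caller::prebuffered::RpcCallPrebuffered; // brings `call` into scope
use muxio_wasm_rpc_client::RpcWasmClient;

async fn sum_on_server(client: &Arc<RpcWasmClient>) -> Result<f64, String> {
    // Encodes the request with bitcode, attaches Add::METHOD_ID, and awaits the
    // response correlated by request_id in the dispatcher.
    Add::call(client, vec![1.0, 2.0, 3.0])
        .await
        .map_err(|e| e.to_string())
}
```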
graph TB
subgraph "JavaScript"
WS["WebSocket.onmessage"]
end
subgraph "read_bytes()
Pipeline"
STAGE1["Stage 1:\ndelete_rpc_request(id)"]
STAGE2["Stage 2:\nprocess_single_prebuffered_request()"]
STAGE3["Stage 3:\ndispatcher.respond()"]
end
subgraph "RpcServiceEndpoint<()>"
HANDLERS["get_prebuffered_handlers()\nHashMap<u32, Box<Handler>>"]
LOOKUP["lookup by METHOD_ID"]
end
subgraph "User Handler"
HANDLER["async fn(context: (), request: Vec<u8>)\n-> Result<Vec<u8>, RpcServiceError>"]
end
WS -->|bytes| STAGE1
STAGE1 -->|Vec<(u32, RpcRequest)>| STAGE2
STAGE2 --> HANDLERS
HANDLERS --> LOOKUP
LOOKUP --> HANDLER
HANDLER -->|Result| STAGE2
STAGE2 -->|Vec<RpcResponse>| STAGE3
STAGE3 -->|emit_callback| WS
Handling Incoming RPC Calls
The WASM client supports bidirectional RPC by handling incoming calls from the server. The RpcServiceEndpoint<()> dispatches requests to registered handlers by METHOD_ID.
Diagram: Inbound RPC Request Processing
sequenceDiagram
participant Code as "User Code"
participant Client as "RpcWasmClient"
participant EP as "RpcServiceEndpoint<()>"
participant Map as "HashMap<u32, Handler>"
Code->>Client: get_endpoint()
Client-->>Code: Arc<RpcServiceEndpoint<()>>
Code->>EP: register_prebuffered_handler::<Method>(handler)
EP->>Map: insert(Method::METHOD_ID, Box<handler>)
Note over Map: Handler stored for Method::METHOD_ID
Registering Handlers
Handlers are registered with the RpcServiceEndpoint<()> obtained via get_endpoint():
Diagram: Handler Registration Flow
When an incoming request arrives in read_bytes():
- Stage 1: dispatcher.delete_rpc_request(id) extracts the RpcRequest containing METHOD_ID in its header
- Stage 2: process_single_prebuffered_request() looks up the handler via get_prebuffered_handlers() using METHOD_ID
- Handler executes: handler(context: (), request.rpc_prebuffered_payload_bytes)
- Stage 3: dispatcher.respond() serializes the response and invokes emit_callback
The context type for WASM client handlers is () since there is no per-connection state in WASM environments.
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:86-120 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:141-143
stateDiagram-v2
[*] --> Disconnected
Disconnected --> Connected : handle_connect()
Connected --> Disconnected : handle_disconnect()
state Connected {
[*] --> Ready
Ready --> Processing : read_bytes()
Processing --> Ready
}
note right of Connected
is_connected = true
emit state_change_handler(Connected)
end note
note right of Disconnected
is_connected = false
emit state_change_handler(Disconnected)
fail_all_pending_requests()
end note
State Management
The WASM client tracks connection state using an AtomicBool and provides optional state change notifications.
State Change Handler
Applications can register a callback to receive notifications when the connection state changes:
| State | Trigger | Actions |
|---|---|---|
Connected | handle_connect() called | Handler invoked with RpcTransportState::Connected |
Disconnected | handle_disconnect() called | Handler invoked with RpcTransportState::Disconnected, all pending requests failed |
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:168-180 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:22-23
Dependencies
The WASM client has minimal dependencies focused on WASM/JavaScript interop:
| Dependency | Version | Purpose |
|---|---|---|
wasm-bindgen | 0.2.100 | JavaScript/Rust FFI bindings via #[wasm_bindgen] |
wasm-bindgen-futures | 0.4.50 | Convert Rust Future to JavaScript Promise via future_to_promise() |
js-sys | 0.3.77 | JavaScript standard library types (Promise, Uint8Array) |
tokio | workspace | Async runtime (only tokio::sync::Mutex used, not the executor) |
futures | workspace | Future composition (join_all() for concurrent request processing) |
async-trait | workspace | Async trait implementations (#[async_trait] for RpcServiceCallerInterface) |
muxio | workspace | Core multiplexing (RpcDispatcher, RpcSession, frame encoding) |
muxio-rpc-service | workspace | RPC trait definitions (RpcMethodPrebuffered, METHOD_ID) |
muxio-rpc-service-caller | workspace | RpcServiceCallerInterface trait |
muxio-rpc-service-endpoint | workspace | RpcServiceEndpoint, process_single_prebuffered_request() |
tracing | workspace | Logging macros (tracing::error!) |
Note: While tokio is included, the WASM client does not use Tokio’s executor. Only synchronization primitives like tokio::sync::Mutex are used, which work in single-threaded WASM environments.
Sources: extensions/muxio-wasm-rpc-client/Cargo.toml:11-22
Thread Safety and Concurrency
The WASM client is designed for single-threaded WASM environments but uses thread-safe primitives for API consistency with native code:
| Primitive | Purpose | WASM Behavior |
|---|---|---|
Arc<T> | Reference counting for shared ownership | Works in single-threaded context, no actual atomics needed |
tokio::sync::Mutex<RpcDispatcher> | Guards dispatcher state during frame encoding/decoding | Never contends (single-threaded), provides interior mutability |
Arc<AtomicBool> | Lock-free is_connected tracking | load()/store()/swap() operations work without OS threads |
Send + Sync bounds | Trait bounds on callbacks and handlers | Satisfied for API consistency, no actual thread migration |
The three-stage read_bytes() pipeline ensures the dispatcher lock is held only during:
- Stage 1: read_bytes(), is_rpc_request_finalized(), delete_rpc_request() (lines 58-81)
- Stage 3: respond() calls (lines 108-119)
The lock is not held during Stage 2's async handler execution (lines 85-103), enabling concurrent request processing via join_all().
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:48-121 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:7-14
Comparison with Tokio Client
| Feature | WASM Client | Tokio Client |
|---|---|---|
| WebSocket Management | Delegated to JavaScript | Built-in with tokio-tungstenite |
| Event Model | Callback-based (onopen, onmessage, etc.) | Async stream-based |
| Connection Initialization | handle_connect() | connect() |
| Data Reading | read_bytes() called from JS | read_loop() task |
| Async Runtime | None (WASM environment) | Tokio |
| State Tracking | AtomicBool + manual calls | Automatic with connection task |
| Bidirectional RPC | Yes, via RpcServiceEndpoint | Yes, via RpcServiceEndpoint |
| Static Client Pattern | Yes, via thread_local | Not applicable |
Both clients implement RpcServiceCallerInterface, ensuring identical call patterns and service definitions work across both environments.
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:16-35
Connection Lifecycle and State Management
Relevant source files
- extensions/muxio-rpc-service-caller/src/lib.rs
- extensions/muxio-rpc-service-caller/tests/dynamic_channel_tests.rs
- extensions/muxio-tokio-rpc-client/src/lib.rs
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs
- extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs
- extensions/muxio-wasm-rpc-client/src/lib.rs
- extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs
- extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs
Purpose and Scope
This document describes how muxio tracks connection state, manages lifecycle events, and performs automatic cleanup during disconnection across both Tokio-based native clients (5.2) and WASM browser clients (5.3). It covers the RpcTransportState representation, state change handlers, lifecycle transitions, and cleanup mechanisms that ensure pending requests are properly failed and resources are released when connections terminate.
Connection State Representation
RpcTransportState Enum
The RpcTransportState enum defines the two possible connection states:
| State | Description |
|---|---|
Connected | WebSocket connection is established and operational |
Disconnected | Connection has been closed or failed |
This enum is shared across all client implementations and is used as the parameter type for state change handlers.
Sources: extensions/muxio-rpc-service-caller/src/transport_state.rs
Atomic Connection Tracking
Both RpcClient and RpcWasmClient maintain an is_connected field of type Arc<AtomicBool> to track the current connection state:
The use of AtomicBool enables lock-free reads from multiple concurrent tasks (Tokio) or callbacks (WASM). The Arc wrapper allows the flag to be shared with background tasks and closures without ownership transfer. State changes use SeqCst (sequentially consistent) ordering to ensure all threads observe changes in a consistent order.
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:30 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:23
State Change Handlers
Applications can register a callback to be notified of connection state changes using the set_state_change_handler() method from the RpcServiceCallerInterface trait. The handler is stored in an RpcTransportStateChangeHandler, a shared, optionally-set Fn(RpcTransportState) callback held behind an Arc<Mutex<...>>. When set, the handler is immediately invoked with the current state if the client is connected, which ensures the application receives the initial Connected event without race conditions.
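The storage pattern can be modeled with standard-library types alone. The runnable sketch below mirrors that shape; it is an illustration, not the crate's implementation.

```rust
use std::sync::{Arc, Mutex};

#[derive(Debug, Clone, Copy, PartialEq)]
enum RpcTransportState {
    Connected,
    Disconnected,
}

// Mirrors the shape of the stored handler: an optional, shared callback.
type StateChangeHandler = Arc<Mutex<Option<Box<dyn Fn(RpcTransportState) + Send + Sync>>>>;

fn main() {
    let handler: StateChangeHandler = Arc::new(Mutex::new(None));

    // What set_state_change_handler() does conceptually: store the callback.
    *handler.lock().unwrap() = Some(Box::new(|state| {
        println!("transport state changed: {state:?}");
    }));

    // What handle_connect()/handle_disconnect() do conceptually: invoke it on transitions.
    if let Some(cb) = handler.lock().unwrap().as_ref() {
        cb(RpcTransportState::Connected);
        cb(RpcTransportState::Disconnected);
    }
}
```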
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:22-334 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:13-180
Connection Lifecycle Diagram
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:79-221 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:38-134
Connection Establishment
Tokio Client Initialization
The Tokio RpcClient::new() method establishes the connection and sets initial state:
- Connects to WebSocket endpoint via connect_async()
- Splits the stream into sender and receiver
- Creates MPSC channel for outbound messages
- Sets is_connected to true atomically
- Spawns three background tasks:
  - Heartbeat task (sends pings every 1 second)
  - Receive loop (processes incoming WebSocket messages)
  - Send loop (drains outbound MPSC channel)
The client uses Arc::new_cyclic() to allow background tasks to hold weak references, preventing reference cycles that would leak memory.
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:111-271
WASM Client Initialization
The WASM RpcWasmClient::new() method creates the client structure but does not establish the connection immediately:
- Creates a new RpcWasmClient with is_connected set to false
- Stores the emit callback for sending data to JavaScript
- Returns the client instance
Connection establishment is triggered later by JavaScript calling the exported handle_connect() method when the browser's WebSocket onopen event fires:
- Sets is_connected to true atomically
- Invokes the state change handler with RpcTransportState::Connected
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:27-44
Active Connection Management
Heartbeat Mechanism (Tokio Only)
The Tokio client spawns a dedicated heartbeat task that sends WebSocket ping frames every 1 second to keep the connection alive and detect failures:
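A generic Tokio sketch of such a heartbeat loop is shown below; it is not the crate's actual task. It pushes a ping message into an outbound channel once per second and exits when the channel closes, using a placeholder message type.

```rust
use std::time::Duration;

use tokio::sync::mpsc;

// Placeholder for the outbound WebSocket message type consumed by the send loop.
enum Outbound {
    Ping,
}

async fn heartbeat_task(tx: mpsc::Sender<Outbound>) {
    let mut ticker = tokio::time::interval(Duration::from_secs(1));
    loop {
        ticker.tick().await;
        // A closed channel means the client is shutting down; exit cleanly.
        if tx.send(Outbound::Ping).await.is_err() {
            break;
        }
    }
}
```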
If the channel is closed (indicating shutdown), the task exits cleanly. Pong responses from the server are handled by the receive loop.
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:139-154
Connection Status Checks
Both implementations provide an is_connected() method via the RpcServiceCallerInterface trait:
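Conceptually, the check is a single atomic load; Relaxed ordering is sufficient for reads, as noted under Ordering Guarantees below. A minimal sketch with placeholder types:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

struct ClientState {
    is_connected: Arc<AtomicBool>,
}

impl ClientState {
    // The flag only ever transitions from true to false, so a relaxed load suffices.
    fn is_connected(&self) -> bool {
        self.is_connected.load(Ordering::Relaxed)
    }
}
```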
This method is checked before initiating RPC calls or emitting data. If the client is disconnected, operations are aborted early to prevent sending data over a closed connection.
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:284-296 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:137-166
Disconnection and Cleanup
Shutdown Paths
The Tokio RpcClient implements two shutdown methods:
shutdown_sync()
Used during Drop to ensure cleanup happens synchronously. This method:
- Swaps is_connected to false using SeqCst ordering
- Acquires the state change handler lock (best-effort)
- Invokes the handler with RpcTransportState::Disconnected
- Does NOT fail pending requests (dispatcher lock not acquired)
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:55-77
shutdown_async()
Used when errors are detected during background task execution. This method:
- Swaps is_connected to false using SeqCst ordering
- Acquires the state change handler lock (best-effort)
- Invokes the handler with RpcTransportState::Disconnected
- Acquires the dispatcher lock (async)
- Calls fail_all_pending_requests() to cancel all in-flight RPCs
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:79-108
WASM Disconnect Handling
The WASM client’s handle_disconnect() method performs similar cleanup:
- Swaps is_connected to false using SeqCst ordering
- Invokes the state change handler with RpcTransportState::Disconnected
- Acquires the dispatcher lock
- Calls fail_all_pending_requests() with FrameDecodeError::ReadAfterCancel
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:124-134
Disconnect Sequence Diagram
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:42-108 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:124-134
Pending Request Cleanup
When a connection is lost, all pending RPC requests must be failed to prevent the application from waiting indefinitely. The RpcDispatcher::fail_all_pending_requests() method accomplishes this:
- Iterates over all pending requests in the dispatcher’s request map
- For each pending request, extracts the response callback
- Invokes the callback with a FrameDecodeError (typically ReadAfterCancel)
- Clears the pending request map
This ensures that all outstanding RPC calls return an error immediately, allowing application code to handle the failure and retry or report the error to the user.
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:102 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:130-132
Drop Behavior and Resource Cleanup
Tokio Client Drop Implementation
The Tokio RpcClient implements the Drop trait to ensure proper cleanup when the client is no longer in use:
This implementation:
- Aborts all background tasks (heartbeat, send loop, receive loop)
- Calls shutdown_sync() to trigger state change handler
- Does NOT fail pending requests (to avoid blocking in destructor)
The Drop implementation ensures that even if the application drops the client without explicit cleanup, resources are released and handlers are notified.
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:42-52
WASM Client Lifecycle
The WASM RpcWasmClient does not implement Drop because:
- WASM uses cooperative JavaScript event handling rather than background tasks
- Connection state is explicitly managed by JavaScript callbacks (handle_connect(), handle_disconnect())
- No OS-level resources need cleanup (WebSocket is owned by JavaScript)
The JavaScript glue code is responsible for calling handle_disconnect() when the WebSocket closes.
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:16-35
Component Relationship Diagram
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:25-108 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:17-134
Platform-Specific Differences
| Aspect | Tokio Client | WASM Client |
|---|---|---|
| Connection Initialization | Automatic in ::new() via connect_async() | Manual via JavaScript handle_connect() call |
| Heartbeat | Automatic (1 second ping interval) | None (handled by browser) |
| Background Tasks | 3 spawned tasks (heartbeat, send, receive) | None (JavaScript event-driven) |
| Disconnect Detection | Automatic (WebSocket error, stream end) | Manual via JavaScript handle_disconnect() call |
| Drop Behavior | Aborts tasks, calls shutdown_sync() | No special Drop behavior |
| Cleanup Timing | Async (shutdown_async()) or sync (shutdown_sync()) | Async only (handle_disconnect()) |
| Pending Request Failure | Via shutdown_async() or explicitly triggered | Via handle_disconnect() |
| Threading Model | Multi-threaded with Tokio executor | Single-threaded WASM environment |
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs
Ordering Guarantees
Both implementations use SeqCst (sequentially consistent) ordering when swapping the is_connected flag:
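A minimal sketch of the swap-based guard (placeholder shutdown logic, not the crate's code):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

fn shutdown_once(is_connected: &AtomicBool) {
    // swap() returns the previous value, so only the first caller observes
    // `true` and runs the cleanup path exactly once.
    if is_connected.swap(false, Ordering::SeqCst) {
        // notify the state change handler, fail pending requests, etc.
    }
}
```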
This ensures:
- Only one thread/task performs cleanup (the swap returns true only once)
- All subsequent reads of is_connected see false across all threads
- No operations are reordered across the swap boundary
The use of Relaxed ordering for reads (is_connected() method) is acceptable because the flag only transitions from true to false, never the reverse, making eventual consistency sufficient for read operations.
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:61-284 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:39-138
Testing Connection Lifecycle
The codebase includes comprehensive integration tests verifying lifecycle behavior:
Test: Connection Failure
Verifies that attempting to connect to a non-listening port returns a ConnectionRefused error.
Sources: extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:15-31
Test: State Change Handler
Verifies that:
- Handler is called with Connected when connection is established
- Handler is called with Disconnected when server closes the connection
- Events occur in the correct order
Sources: extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:33-165
Test: Pending Requests Fail on Disconnect
Verifies that:
- RPC calls can be initiated while connected
- If the connection is lost before a response arrives, pending requests fail
- The error message indicates cancellation or transport failure
This test uses a oneshot::channel to capture the result of a spawned RPC task and validates that it receives an error after the server closes the connection.
Sources: extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:167-292
Best Practices
- Always Set State Change Handler Early : Register the handler immediately after creating the client to avoid missing the initial Connected event.
- Handle Disconnection Gracefully : Applications should assume connections can fail at any time and implement retry logic or user notifications.
- Check Connection Before Critical Operations : Use is_connected() to avoid attempting operations that will fail immediately.
- Avoid Long-Running Handlers : State change handlers should complete quickly to avoid blocking disconnect processing. Spawn separate tasks for expensive operations.
- WASM: Call handle_disconnect() Reliably : Ensure JavaScript glue code calls handle_disconnect() in both onclose and onerror WebSocket event handlers.
- Testing: Use Adequate Delays : Integration tests should allow sufficient time for background tasks to register pending requests before triggering disconnects.
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:316-334 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:168-180 extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:220-246
Type Safety and Shared Definitions
Relevant source files
Purpose and Scope
This document explains how muxio achieves compile-time type safety through shared service definitions. It covers the architecture that enables both client and server to depend on a single source of truth for RPC method signatures, parameter types, and return types. This design eliminates a common class of runtime errors by ensuring that any mismatch between client and server results in a compile-time error rather than a runtime failure.
For implementation details about creating service definitions, see Creating Service Definitions. For details about method ID generation, see Method ID Generation. For serialization specifics, see Serialization with Bitcode.
The Type Safety Challenge in RPC Systems
Traditional RPC systems often face a fundamental challenge: ensuring that client and server agree on method signatures, parameter types, and return types. Common approaches include:
| Approach | Compile-Time Safety | Cross-Platform | Example |
|---|---|---|---|
| Schema files | Partial (code generation) | Yes | gRPC with .proto files |
| Runtime validation | No | Yes | JSON-RPC with runtime checks |
| Separate definitions | No | Yes | OpenAPI with client/server divergence |
| Shared type definitions | Yes | Platform-dependent | Shared Rust code |
Muxio uses the shared type definitions approach, but extends it to work across platforms (native and WASM) through its runtime-agnostic design. The core innovation is the RpcMethodPrebuffered trait, which defines a contract that both client and server implementations must satisfy.
Sources:
- README.md:50-51
- Cargo.lock:426-431 (example-muxio-rpc-service-definition)
Shared Service Definitions Architecture
Analysis: The shared definition crate (example-muxio-rpc-service-definition) defines RPC methods by implementing the RpcMethodPrebuffered trait. Both server and client applications depend on this shared crate. The server uses decode_request() and encode_response() methods, while the client uses encode_request() and decode_response() (via the call() function). Because all implementations derive from the same trait implementation, the Rust compiler enforces type consistency at compile time.
Sources:
- README.md:71-74 (import structure)
- README.md:102-118 (server handler registration)
- README.md:145-152 (client method calls)
- Cargo.lock:426-431 (example-muxio-rpc-service-definition dependencies)
- Cargo.lock:858-867 (muxio-rpc-service dependencies)
Compile-Time Type Safety Guarantees
Type Flow Diagram
Analysis: The type flow ensures that parameters pass through multiple type-checked boundaries. On the client side, the call() function requires parameters matching the Request associated type. The encode_request() method accepts this exact type. On the server side, decode_request() produces the same type, which the handler must accept. The response follows the reverse path with identical type checking. Any mismatch at any stage results in a compilation error, not a runtime error.
Sources:
- README.md:102-107 (server handler with typed parameters)
- README.md:145-147 (client calls with typed parameters)
- Cargo.lock:158-168 (bitcode serialization dependency)
The RpcMethodPrebuffered Trait Contract
Trait Structure
Analysis: Each RPC method implements the RpcMethodPrebuffered trait, defining its unique METHOD_ID, associated Request and Response types, and the required encoding/decoding methods. The trait enforces that all four encoding/decoding methods use consistent types. The call() method provides a high-level client interface that internally uses encode_request() and decode_response(), ensuring type safety throughout the call chain.
Sources:
- README.md:72-74 (RpcMethodPrebuffered and method imports)
- README.md102 (Add::METHOD_ID usage)
- README.md:103-106 (Add::decode_request and encode_response usage)
- Cargo.lock:858-867 (muxio-rpc-service crate structure)
Compile-Time Error Prevention
Type Mismatch Detection
The Rust compiler enforces type safety through multiple mechanisms:
| Mismatch Type | Detection Point | Compiler Error |
|---|---|---|
| Parameter type mismatch | call() invocation | Expected Vec<f64>, found Vec<i32> |
| Handler input type mismatch | decode_request() call | Type mismatch in closure parameter |
| Handler output type mismatch | encode_response() call | Expected f64, found String |
| Response type mismatch | call() result binding | Expected f64, found i32 |
| Method ID collision | Trait implementation | Duplicate associated constant |
Example: Type Mismatch Prevention
Analysis: The Rust type system prevents type mismatches before the code ever runs. When a developer attempts to call an RPC method with incorrect parameter types, the compiler immediately flags the error by comparing the provided type against the Request associated type defined in the shared trait implementation. This catch-early approach eliminates an entire class of integration bugs that would otherwise only surface at runtime, potentially in production.
Sources:
- README.md:145-147 (typed client calls)
- README.md:154-159 (type-safe assertions)
Cross-Platform Type Safety
Shared Definitions Across Client Types
Analysis: The shared definition crate enables identical type safety guarantees across all client platforms. Both RpcClient (native Tokio) and RpcWasmClient (WASM browser) depend on the same example-muxio-rpc-service-definition crate. Application code written for one client type can be ported to another client type with minimal changes, because both implement RpcServiceCallerInterface and both use the same call() methods with identical type signatures. The Rust compiler enforces that all platforms use matching types.
Sources:
- README.md:48-49 (cross-platform code description)
- README.md:75-77 (RpcClient and RpcServiceCallerInterface imports)
- Cargo.lock:898-915 (muxio-tokio-rpc-client dependencies)
- Cargo.lock:935-953 (muxio-wasm-rpc-client dependencies)
Integration with Method IDs and Serialization
Type Safety Dependencies
Analysis: Type safety in muxio is achieved through the integration of three components. The RpcMethodPrebuffered trait defines the type contract. xxhash-rust generates unique METHOD_ID constants at compile time, enabling the compiler to detect method ID collisions. bitcode provides type-preserving binary serialization, ensuring that the types decoded on the server match the types encoded on the client. The Rust compiler verifies that all encoding, network transmission, and decoding operations preserve type integrity.
Sources:
- Cargo.lock:158-168 (bitcode dependency)
- Cargo.lock:1886-1889 (xxhash-rust dependency)
- Cargo.lock:858-867 (muxio-rpc-service with xxhash and bitcode)
- README.md:103-106 (encode/decode method usage)
Benefits Summary
The shared service definitions architecture provides several concrete benefits:
| Benefit | Mechanism | Impact |
|---|---|---|
| Compile-time error detection | Rust type system enforces trait contracts | Bugs caught before runtime |
| API consistency | Single source of truth for method signatures | No client-server divergence |
| Refactoring safety | Type changes propagate to all dependents | Compiler guides migration |
| Cross-platform uniformity | Same types work on native and WASM | Code reuse across platforms |
| Zero runtime overhead | All checks happen at compile time | No validation cost in production |
| Documentation through types | Type signatures are self-documenting | Reduced documentation burden |
Sources:
- README.md:50-51 (shared definitions philosophy)
- README.md:48-49 (cross-platform code benefits)
- Cargo.lock:426-431 (example-muxio-rpc-service-definition structure)
Creating Service Definitions
Relevant source files
- README.md
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs
Purpose and Scope
This page provides a step-by-step guide for creating RPC service definitions using the RpcMethodPrebuffered trait. Service definitions are shared Rust crates that define the contract between client and server, enabling compile-time type safety across platform boundaries.
For conceptual background on service definitions and their role in the architecture, see Service Definitions. For details on how method IDs are generated, see Method ID Generation. For serialization internals, see Serialization with Bitcode.
What is a Service Definition?
A service definition is a Rust struct that implements the RpcMethodPrebuffered trait. It serves as a shared contract between client and server, defining:
- Input type : The parameters passed to the RPC method
- Output type : The value returned from the RPC method
- Method ID : A unique identifier for the method (compile-time generated hash)
- Serialization logic : How to encode/decode request and response data
Service definitions are typically packaged in a separate crate that both client and server applications depend on. This ensures that any mismatch in data structures results in a compile-time error rather than a runtime failure.
Sources: README.md:50 README.md:71-74
The RpcMethodPrebuffered Trait Structure
The RpcMethodPrebuffered trait defines the contract that all prebuffered service definitions must implement. This trait is automatically extended with the RpcCallPrebuffered trait, which provides the high-level call() method for client invocation.
graph TB
RpcMethodPrebuffered["RpcMethodPrebuffered\n(Core trait)"]
RpcCallPrebuffered["RpcCallPrebuffered\n(Auto-implemented)"]
UserStruct["User Service Struct\n(e.g., Add, Echo, Mult)"]
RpcMethodPrebuffered --> RpcCallPrebuffered
UserStruct -.implements.-> RpcMethodPrebuffered
UserStruct -.gets.-> RpcCallPrebuffered
RpcMethodPrebuffered --> Input["Associated Type: Input"]
RpcMethodPrebuffered --> Output["Associated Type: Output"]
RpcMethodPrebuffered --> MethodID["Constant: METHOD_ID"]
RpcMethodPrebuffered --> EncodeReq["fn encode_request()"]
RpcMethodPrebuffered --> DecodeReq["fn decode_request()"]
RpcMethodPrebuffered --> EncodeRes["fn encode_response()"]
RpcMethodPrebuffered --> DecodeRes["fn decode_response()"]
Trait Hierarchy
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:10-21
Required Components
| Component | Type | Purpose |
|---|---|---|
Input | Associated Type | Type of the request parameters |
Output | Associated Type | Type of the response value |
METHOD_ID | u64 constant | Unique identifier for the method |
encode_request() | Function | Serializes Input to Vec<u8> |
decode_request() | Function | Deserializes Vec<u8> to Input |
encode_response() | Function | Serializes Output to Vec<u8> |
decode_response() | Function | Deserializes Vec<u8> to Output |
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:1-6
Step-by-Step: Creating a Service Definition
Step 1: Create a Shared Crate
Create a new library crate that will be shared between client and server:
Add dependencies to Cargo.toml:
Sources: README.md:71
Step 2: Define Request and Response Types
Define Rust structs for your input and output data. These must implement serde::Serialize and serde::Deserialize:
Note: The types can be as simple or as complex as needed: primitives, raw byte buffers such as Vec<u8>, tuples, or nested structures.
Sources: README.md:146-151
Step 3: Implement the RpcMethodPrebuffered Trait
Create a struct for your service and implement the trait:
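The exact trait signatures live in muxio-rpc-service; the following is a hedged sketch of what an Add implementation could look like, assuming the Input/Output associated types, io::Error for failures, bitcode for serialization, and a compile-time xxHash64 constant listed in the table above. Import paths, method signatures, and the hash seed are illustrative, not authoritative.

```rust
use std::io;

// Assumed import path; the real crate layout may differ.
use muxio_rpc_service::prebuffered::RpcMethodPrebuffered;

pub struct Add;

impl RpcMethodPrebuffered for Add {
    // Assumed hashing scheme: the method name hashed with xxHash64 at compile time.
    const METHOD_ID: u64 = xxhash_rust::const_xxh64::xxh64(b"Add", 0);

    type Input = Vec<f64>;
    type Output = f64;

    fn encode_request(input: Self::Input) -> Result<Vec<u8>, io::Error> {
        Ok(bitcode::encode(&input))
    }

    fn decode_request(bytes: &[u8]) -> Result<Self::Input, io::Error> {
        bitcode::decode(bytes).map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))
    }

    fn encode_response(output: Self::Output) -> Result<Vec<u8>, io::Error> {
        Ok(bitcode::encode(&output))
    }

    fn decode_response(bytes: &[u8]) -> Result<Self::Output, io::Error> {
        bitcode::decode(bytes).map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))
    }
}
```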
Sources: README.md:102-106 README.md:146
Step 4: Use the Service Definition
Client-Side Usage
Import the service definition and call it using the RpcCallPrebuffered trait:
Sources: README.md:146 extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:50-53
Server-Side Usage
Register a handler using the service definition’s METHOD_ID and decode/encode methods:
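Independent of the exact registration API on the endpoint, the handler body is simply decode, compute, encode. A hedged sketch using the Add definition from the previous step (the surrounding registration call is omitted because its precise signature is not shown on this page):

```rust
use std::io;

// Hypothetical server-side handler body for the Add method.
async fn add_handler(request_bytes: Vec<u8>) -> Result<Vec<u8>, io::Error> {
    let numbers = Add::decode_request(&request_bytes)?; // Vec<f64>
    let sum: f64 = numbers.iter().sum();
    Add::encode_response(sum) // bytes returned to the endpoint for dispatch
}
```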
Sources: README.md:102-107 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:52-56
sequenceDiagram
participant Client as "Client Application"
participant CallTrait as "RpcCallPrebuffered::call()"
participant ServiceDef as "Service Definition\n(Add struct)"
participant Network as "Network Transport"
participant ServerEndpoint as "Server Endpoint Handler"
Note over Client,ServerEndpoint: Request Path
Client->>CallTrait: Add::call(client, vec![1.0, 2.0, 3.0])
CallTrait->>ServiceDef: encode_request(vec![1.0, 2.0, 3.0])
ServiceDef->>ServiceDef: bitcode::encode()
ServiceDef-->>CallTrait: Vec<u8> (binary data)
CallTrait->>Network: Send with METHOD_ID
Network->>ServerEndpoint: Binary frame received
ServerEndpoint->>ServiceDef: decode_request(&bytes)
ServiceDef->>ServiceDef: bitcode::decode()
ServiceDef-->>ServerEndpoint: Vec<f64>
ServerEndpoint->>ServerEndpoint: Execute: sum = 6.0
Note over Client,ServerEndpoint: Response Path
ServerEndpoint->>ServiceDef: encode_response(6.0)
ServiceDef->>ServiceDef: bitcode::encode()
ServiceDef-->>ServerEndpoint: Vec<u8> (binary data)
ServerEndpoint->>Network: Send binary response
Network->>CallTrait: Binary frame received
CallTrait->>ServiceDef: decode_response(&bytes)
ServiceDef->>ServiceDef: bitcode::decode()
ServiceDef-->>CallTrait: f64: 6.0
CallTrait-->>Client: Result: 6.0
Service Definition Data Flow
The following diagram shows how data flows through a service definition during an RPC call:
Sources: README.md:92-161 extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:50-97
Organizing Multiple Services
A service definition crate typically contains multiple service definitions. The recommended organization pattern is:
File Structure
my-service-definition/
├── Cargo.toml
└── src/
├── lib.rs
└── prebuffered/
├── mod.rs
├── add.rs
├── multiply.rs
└── echo.rs
Module Organization
src/lib.rs:
src/prebuffered/mod.rs:
This structure allows clients to import services with a clean syntax:
Sources: README.md:71-74 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:21
Service Definition Component Mapping
This diagram maps the conceptual components to their code entities:
Sources: README.md:71-119 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:21-68
Large Payload Handling
Service definitions automatically handle large payloads through a smart transport strategy. When encoded request parameters exceed DEFAULT_SERVICE_MAX_CHUNK_SIZE, they are automatically sent as a chunked payload rather than inline in the request header.
Transport Strategy Selection
| Encoded Size | Transport Method | Field Used |
|---|---|---|
< DEFAULT_SERVICE_MAX_CHUNK_SIZE | Inline in header | rpc_param_bytes |
≥ DEFAULT_SERVICE_MAX_CHUNK_SIZE | Chunked payload | rpc_prebuffered_payload_bytes |
This strategy is implemented automatically by the RpcCallPrebuffered trait and requires no special handling in service definitions:
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:30-48 extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:58-65
Large Payload Test Example
Sources: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:295-311
Best Practices
Type Design
| Practice | Rationale |
|---|---|
Use #[derive(Serialize, Deserialize)] | Required for bitcode serialization |
Add #[derive(Debug, Clone)] | Helpful for testing and debugging |
| Keep types simple | Simpler types serialize more efficiently |
Use Vec<u8> for binary data | Avoids double-encoding overhead |
Method Naming
| Practice | Example | Rationale |
|---|---|---|
| Use PascalCase for struct names | Add, GetUserProfile | Rust convention for type names |
| Use descriptive names | CalculateSum vs Calc | Improves code readability |
| Match domain concepts | AuthenticateUser | Makes intent clear |
Error Handling
Service definitions should use io::Error for encoding/decoding failures:
This ensures consistent error propagation through the RPC framework.
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:6-7
Versioning
When evolving service definitions:
- Additive changes are safe : Adding new optional fields to request/response types
- Breaking changes require new methods : Changing input/output types requires a new METHOD_ID
- Maintain backwards compatibility : Keep old service definitions until all clients migrate
Sources: README.md:50
Complete Example: Echo Service
Here is a complete example of a simple Echo service definition:
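A hedged sketch of such a definition, following the same assumed trait shape as the Add example earlier on this page; an echo simply returns its input bytes, and using Vec<u8> directly avoids double-encoding overhead.

```rust
use std::io;

use muxio_rpc_service::prebuffered::RpcMethodPrebuffered; // assumed path

pub struct Echo;

impl RpcMethodPrebuffered for Echo {
    const METHOD_ID: u64 = xxhash_rust::const_xxh64::xxh64(b"Echo", 0); // assumed hashing scheme

    type Input = Vec<u8>;
    type Output = Vec<u8>;

    fn encode_request(input: Self::Input) -> Result<Vec<u8>, io::Error> {
        Ok(input) // raw bytes pass through unchanged
    }

    fn decode_request(bytes: &[u8]) -> Result<Self::Input, io::Error> {
        Ok(bytes.to_vec())
    }

    fn encode_response(output: Self::Output) -> Result<Vec<u8>, io::Error> {
        Ok(output)
    }

    fn decode_response(bytes: &[u8]) -> Result<Self::Output, io::Error> {
        Ok(bytes.to_vec())
    }
}
```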
Usage:
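A hedged client-side usage fragment; the call pattern mirrors the Add example, error handling is elided, and `client` may be either an RpcClient or an RpcWasmClient since both implement RpcServiceCallerInterface.

```rust
// Inside an async context with a connected `client` in scope.
let echoed = Echo::call(&client, b"hello muxio".to_vec()).await?;
assert_eq!(echoed, b"hello muxio".to_vec());
```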
Sources: README.md:114-118 README.md:150-151 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:64-67 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:131-132
Method ID Generation
Relevant source files
This page explains how the muxio RPC framework generates unique identifiers for RPC methods at compile time using the xxhash algorithm. Method IDs enable efficient method dispatch without runtime string comparisons or schema files.
For information about defining RPC services using these method IDs, see Creating Service Definitions. For details about the serialization format that carries method IDs over the wire, see Serialization with Bitcode.
Overview
The muxio RPC framework generates a unique 64-bit method identifier for each RPC method by hashing the method’s fully-qualified name at compile time. This approach provides several key benefits:
| Aspect | Implementation |
|---|---|
| Hash Algorithm | xxhash (xxHash64 variant) |
| Input | Fully-qualified method name as UTF-8 string |
| Output | 64-bit unsigned integer (u64) |
| Generation Time | Compile time (zero runtime overhead) |
| Collision Handling | Deterministic; same name always produces same ID |
The generated method IDs are embedded directly into service definition constants, eliminating the need for runtime string hashing or lookup tables. Both client and server compile against the same service definitions, ensuring that method IDs match across platforms.
Sources:
- Cargo.lock:1886-1889 (xxhash-rust dependency)
- extensions/muxio-rpc-service/Cargo.toml:16 (xxhash-rust in dependencies)
The xxhash Algorithm
The muxio framework uses the xxhash-rust crate to generate method IDs. xxhash was selected for its specific characteristics:
graph LR
Input["Method Name String\n(UTF-8 bytes)"]
XXH64["xxHash64 Algorithm"]
Output["64-bit Method ID\n(u64)"]
Input --> XXH64
XXH64 --> Output
Props["Properties:\n- Deterministic\n- Fast (compile-time only)\n- Non-cryptographic\n- Low collision rate\n- Platform-independent"]
XXH64 -.-> Props
Algorithm Characteristics
Why xxhash?
| Property | Benefit |
|---|---|
| Deterministic | Same method name always produces the same ID across all platforms and compilations |
| Fast | Minimal compile-time overhead; speed matters less since generation is compile-time only |
| Non-cryptographic | No security requirements for method IDs; simpler algorithm reduces dependencies |
| Low collision rate | Statistical likelihood of two different method names producing the same ID is negligible |
| 64-bit output | Large enough keyspace (2^64 possible IDs) to avoid collisions in practice |
The framework does not perform collision detection because:
- The 64-bit keyspace makes collisions statistically improbable for any reasonable number of methods
- Methods are identified by fully-qualified names (including trait and module paths), further reducing collision likelihood
- If a collision does occur, it will manifest as a method routing error detectable during testing
Sources:
- Cargo.lock:1886-1889 (xxhash-rust package metadata)
- extensions/muxio-rpc-service/Cargo.toml:3 (core traits and method ID generation description)
Compile-Time Generation Mechanism
Method IDs are computed during compilation, not at runtime. The generation process integrates with Rust’s trait system and constant evaluation capabilities.
sequenceDiagram
participant Source as "Service Definition Source Code"
participant Compiler as "Rust Compiler"
participant XXHash as "xxhash-rust Crate"
participant Binary as "Compiled Binary"
Note over Source: RpcMethodPrebuffered trait\nwith method name
Source->>Compiler: Compile service definition
Compiler->>XXHash: Hash method name constant
XXHash-->>Compiler: Return u64 method ID
Compiler->>Compiler: Embed ID as constant (METHOD_ID)
Compiler->>Binary: Include ID in binary
Note over Binary: Method ID available at runtime\nas a simple constant
Generation Flow
Constant Evaluation
The method ID is generated using Rust’s const fn capabilities, making the hash computation part of compile-time constant evaluation:
- Service definition declares a method name as a string constant
- The xxhash function (or wrapper) is invoked at compile time
- The resulting u64 is stored as a const associated with the method
- This constant is directly embedded in the compiled binary
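For illustration, a compile-time hash with the xxhash-rust crate's const API could look like the snippet below. The input string and seed used by muxio-rpc-service are assumptions; only the general mechanism (a const fn evaluated at compile time) is the point.

```rust
// Requires the `const_xxh64` feature of the xxhash-rust crate.
use xxhash_rust::const_xxh64::xxh64;

// Evaluated entirely at compile time; the resulting u64 is embedded in the binary.
pub const METHOD_ID: u64 = xxh64(b"MyService::add", 0);

fn main() {
    println!("METHOD_ID = {:#x}", METHOD_ID);
}
```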
This approach means:
- Zero runtime cost : No hashing occurs during program execution
- Type safety : Method IDs are compile-time constants that cannot be accidentally modified
- Cross-platform consistency : The same source code produces identical method IDs on all platforms
Sources:
- extensions/muxio-rpc-service/Cargo.toml:3 (compile-time method ID generation description)
- Cargo.lock:858-867 (muxio-rpc-service dependencies including xxhash-rust)
Integration with Service Definitions
Method IDs are tightly integrated with the RpcMethodPrebuffered trait, which defines the contract for RPC methods.
graph TB
Trait["RpcMethodPrebuffered Trait"]
MethodName["METHOD_NAME: &'static str\ne.g., 'MyService::calculate'"]
MethodID["METHOD_ID: u64\nxxhash64(METHOD_NAME)"]
Params["Params Type\n(Serializable)"]
Response["Response Type\n(Serializable)"]
Trait --> MethodName
Trait --> MethodID
Trait --> Params
Trait --> Response
MethodName -.generates.-> MethodID
RpcRequest["RpcRequest Structure"]
RpcRequest --> MethodIDField["method_id: u64"]
RpcRequest --> ParamsField["params: Vec<u8>\n(serialized)"]
MethodID -.copied into.-> MethodIDField
Params -.serialized into.-> ParamsField
Method ID in Service Definition Structure
Method ID Storage and Usage
Each service definition provides:
| Component | Type | Purpose |
|---|---|---|
METHOD_NAME | &'static str | Human-readable method identifier (used for debugging/logging) |
METHOD_ID | u64 | Hashed identifier used in binary protocol |
Params | Associated type | Request parameter structure |
Response | Associated type | Response result structure |
The METHOD_ID constant is used when:
- Encoding requests : Client includes method ID in the RpcRequest header
- Dispatching requests : Server uses method ID to route to the appropriate handler
- Validating responses : Client verifies the response corresponds to the correct method
Sources:
- extensions/muxio-rpc-service/Cargo.toml:3 (core traits and types description)
- Cargo.lock:858-867 (muxio-rpc-service package with xxhash dependency)
sequenceDiagram
participant Client as "RPC Client"
participant Network as "Binary Transport"
participant Dispatcher as "RpcDispatcher"
participant Endpoint as "RpcServiceEndpoint"
participant Handler as "Method Handler"
Note over Client: METHOD_ID = xxhash64("Add")
Client->>Client: Create RpcRequest with METHOD_ID
Client->>Network: Serialize and send request
Network->>Dispatcher: Receive binary frames
Dispatcher->>Dispatcher: Extract method_id from RpcRequest
Dispatcher->>Endpoint: Route by method_id (u64 comparison)
alt Method ID Registered
Endpoint->>Handler: Invoke handler for METHOD_ID
Handler-->>Endpoint: Return response
Endpoint-->>Dispatcher: Send RpcResponse
else Method ID Unknown
Endpoint-->>Dispatcher: Return MethodNotFound error
end
Dispatcher-->>Network: Serialize and send response
Network-->>Client: Deliver response
Method Dispatch Using IDs
At runtime, method IDs enable efficient dispatch without string comparisons or lookup tables.
Request Processing Flow
Dispatch Performance Characteristics
The use of 64-bit integer method IDs provides:
| Characteristic | Benefit |
|---|---|
| O(1) lookup | Hash map dispatch using HashMap<u64, Handler> |
| No string allocation | Method names never allocated at runtime |
| Cache-friendly | Integer comparison much faster than string comparison |
| Minimal memory | 8 bytes per method ID vs. variable-length strings |
The endpoint maintains a dispatch table:
HashMap<u64, Arc<dyn MethodHandler>>
key: METHOD_ID (e.g., 0x12ab34cd56ef7890)
value: Handler function for that method
When a request arrives with method_id = 0x12ab34cd56ef7890, the dispatcher performs a simple hash map lookup to find the handler.
Sources:
- extensions/muxio-rpc-service/Cargo.toml:3 (service traits and method dispatch)
- Cargo.lock:883-895 (muxio-rpc-service-endpoint package)
graph TB
subgraph "Development Time"
Define["Define Service:\ntrait MyMethod"]
Name["METHOD_NAME =\n'MyService::add'"]
Hash["Compile-time xxhash64"]
Constant["const METHOD_ID: u64 =\n0x3f8a4b2c1d9e7654"]
Define --> Name
Name --> Hash
Hash --> Constant
end
subgraph "Client Runtime"
CallSite["Call my_method(params)"]
CreateReq["Create RpcRequest:\nmethod_id: 0x3f8a4b2c1d9e7654\nparams: serialized"]
Encode["Encode to binary frames"]
CallSite --> CreateReq
Constant -.embedded in.-> CreateReq
CreateReq --> Encode
end
subgraph "Network"
Transport["WebSocket/TCP Transport\nBinary protocol"]
end
subgraph "Server Runtime"
Decode["Decode binary frames"]
ExtractID["Extract method_id:\n0x3f8a4b2c1d9e7654"]
Lookup["HashMap lookup:\nhandlers[0x3f8a4b2c1d9e7654]"]
Execute["Execute handler function"]
Decode --> ExtractID
ExtractID --> Lookup
Lookup --> Execute
Constant -.registered in.-> Lookup
end
Encode --> Transport
Transport --> Decode
style Constant fill:#f9f9f9
style CreateReq fill:#f9f9f9
style Lookup fill:#f9f9f9
Complete Method ID Lifecycle
This diagram traces a method ID from definition through compilation, serialization, and dispatch:
Sources:
- extensions/muxio-rpc-service/Cargo.toml:1-9 (service definition crate overview)
- Cargo.lock:858-867 (muxio-rpc-service dependencies)
- Cargo.lock:883-895 (muxio-rpc-service-endpoint for handler registration)
Benefits and Design Trade-offs
Advantages
| Benefit | Explanation |
|---|---|
| Zero Runtime Cost | Hash computation happens once at compile time; runtime operations use simple integer comparisons |
| Type Safety | Method IDs are compile-time constants; cannot be accidentally modified or corrupted |
| Platform Independence | Same method name produces identical ID on all platforms (Windows, Linux, macOS, WASM) |
| No Schema Files | Service definitions are Rust code; no external IDL or schema generation tools required |
| Fast Dispatch | Integer hash map lookup is faster than string comparison or reflection-based dispatch |
| Compact Wire Format | 8-byte method ID vs. variable-length method name string |
Design Considerations
| Consideration | Mitigation |
|---|---|
| Hash Collisions | 64-bit keyspace makes collisions statistically improbable; detected during testing if they occur |
| Method Versioning | Changing a method name produces a new ID; requires coordinated client/server updates |
| Debugging | Method names are logged alongside IDs for human readability during development |
| Binary Compatibility | Changing method names breaks wire compatibility; version management required at application level |
The framework prioritizes simplicity and performance over elaborate versioning mechanisms. Applications requiring complex API evolution strategies should implement versioning at a higher level (e.g., versioned service definitions or API paths).
Sources:
- extensions/muxio-rpc-service/Cargo.toml:3 (compile-time method ID generation)
- Cargo.lock:1886-1889 (xxhash-rust algorithm choice)
Example: Method ID Generation in Practice
Consider a simple service definition:
Method Definition Components
Step-by-Step Process
- Definition : Developer writes struct AddMethod implementing RpcMethodPrebuffered
- Naming : Trait defines const METHOD_NAME: &'static str = "calculator::Add"
- Hashing : Compiler invokes xxhash64("calculator::Add") at compile time
- Constant : Result (e.g., 0x9c5f3a2b8d7e4f61) stored as const METHOD_ID: u64
- Client Usage : When calling Add, client creates RpcRequest { method_id: 0x9c5f3a2b8d7e4f61, ... }
- Server Registration : Server registers handler: handlers.insert(0x9c5f3a2b8d7e4f61, add_handler)
- Dispatch : Server receives request, extracts method_id, performs HashMap lookup, invokes handler
This entire flow occurs with zero string operations at runtime, providing efficient method dispatch across client and server implementations.
Sources:
- extensions/muxio-rpc-service/Cargo.toml:1-18 (service definition crate structure)
- Cargo.lock:858-867 (muxio-rpc-service dependencies including xxhash-rust)
Serialization with Bitcode
Relevant source files
- Cargo.lock
- src/rpc/rpc_internals/rpc_header.rs
- src/rpc/rpc_request_response.rs
- tests/rpc_dispatcher_tests.rs
This page documents how the bitcode library provides compact binary serialization for RPC parameters, responses, and metadata throughout the muxio framework. For information about defining service methods that use these serialized types, see Creating Service Definitions. For details on how method identifiers are generated, see Method ID Generation.
Purpose and Role
The bitcode crate serves as the serialization layer that transforms strongly-typed Rust structs into compact binary representations suitable for network transmission. This enables type-safe RPC communication while maintaining minimal payload sizes and efficient encoding/decoding performance.
Sources:
Bitcode in the Data Pipeline
Diagram: Bitcode serialization flow between application types and wire format
Sources:
Core Traits and Functions
| Trait/Function | Purpose | Usage Context |
|---|---|---|
bitcode::Encode | Derive macro for serialization | Applied to request/response parameter structs |
bitcode::Decode | Derive macro for deserialization | Applied to request/response parameter structs |
bitcode::encode(&T) | Encodes a value to Vec<u8> | Used before sending RPC requests/responses |
bitcode::decode::<T>(&[u8]) | Decodes bytes to type T | Used after receiving RPC requests/responses |
Sources:
Type Definitions with Bitcode
Types used in RPC communication must derive both Encode and Decode traits. These derivations are typically combined with Debug and PartialEq for testing and debugging purposes.
Diagram: Type definition pattern for serializable RPC parameters
graph TB
subgraph "Example Type Definition"
Struct["#[derive(Encode, Decode, PartialEq, Debug)]\nstruct AddRequestParams"]
Field1["numbers: Vec<f64>"]
Struct --> Field1
end
subgraph "Bitcode Derive Macros"
EncodeMacro["bitcode_derive::Encode"]
DecodeMacro["bitcode_derive::Decode"]
end
subgraph "Generated Implementations"
EncodeImpl["impl Encode for AddRequestParams"]
DecodeImpl["impl Decode for AddRequestParams"]
end
Struct -.->|expands to| EncodeMacro
Struct -.->|expands to| DecodeMacro
EncodeMacro --> EncodeImpl
DecodeMacro --> DecodeImpl
Example from test suite:
tests/rpc_dispatcher_tests.rs:10-28 demonstrates the standard pattern:
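The test types are not reproduced verbatim here; a close approximation of the pattern, with field names taken from the diagram above, looks like:

use bitcode::{Decode, Encode};

// Request/response parameter types derive Encode and Decode for the wire
// format, plus PartialEq and Debug for test assertions.
#[derive(Encode, Decode, PartialEq, Debug)]
struct AddRequestParams {
    numbers: Vec<f64>,
}

#[derive(Encode, Decode, PartialEq, Debug)]
struct AddResponseParams {
    sum: f64,
}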
Sources:
Integration with RPC Request/Response Types
The serialized bytes produced by bitcode::encode() are stored in specific fields of the RPC protocol structures:
| RPC Type | Field | Purpose |
|---|---|---|
RpcRequest | rpc_param_bytes: Option<Vec<u8>> | Encoded method parameters sent in header metadata |
RpcRequest | rpc_prebuffered_payload_bytes: Option<Vec<u8>> | Encoded payload data for prebuffered requests |
RpcResponse | rpc_prebuffered_payload_bytes: Option<Vec<u8>> | Encoded response payload |
RpcHeader | rpc_metadata_bytes: Vec<u8> | Encoded metadata (parameters or status) |
Sources:
- src/rpc/rpc_request_response.rs:9-33
- src/rpc/rpc_request_response.rs:40-76
- src/rpc/rpc_internals/rpc_header.rs:3-24
Request Encoding Pattern
Diagram: Encoding flow for RPC request parameters
Example from test suite:
tests/rpc_dispatcher_tests.rs:42-49 demonstrates encoding request parameters:
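In outline, reusing the illustrative AddRequestParams type from above rather than the verbatim test code:

// Encode the typed parameters into compact bytes; the result is what gets
// attached to the outgoing request (e.g. RpcRequest.rpc_param_bytes).
fn encode_add_request(numbers: Vec<f64>) -> Vec<u8> {
    let params = AddRequestParams { numbers };
    bitcode::encode(&params)
}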
Sources:
Response Decoding Pattern
Diagram: Decoding flow for RPC response payload
Example from test suite:
tests/rpc_dispatcher_tests.rs:100-116 demonstrates decoding response payloads:
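A sketch of the same idea, again using the illustrative AddResponseParams type rather than the verbatim test code:

// Decode the prebuffered response payload back into a typed value; malformed
// bytes surface as Err rather than panicking.
fn decode_add_response(payload: &[u8]) -> Result<AddResponseParams, bitcode::Error> {
    bitcode::decode::<AddResponseParams>(payload)
}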
Sources:
graph LR
subgraph "Receive Phase"
ReqBytes["rpc_param_bytes"]
DecodeReq["bitcode::decode\n<AddRequestParams>"]
ReqParams["AddRequestParams"]
ReqBytes --> DecodeReq
DecodeReq --> ReqParams
end
subgraph "Processing"
Logic["Business Logic\n(sum numbers)"]
RespParams["AddResponseParams"]
ReqParams --> Logic
Logic --> RespParams
end
subgraph "Send Phase"
EncodeResp["bitcode::encode"]
RespBytes["rpc_prebuffered_payload_bytes"]
RespParams --> EncodeResp
EncodeResp --> RespBytes
end
Server-Side Processing Pattern
The server decodes incoming request parameters, processes them, and encodes response payloads:
Diagram: Complete encode-process-decode cycle on server
Example from test suite:
tests/rpc_dispatcher_tests.rs:151-167 demonstrates the complete server-side pattern:
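In schematic form (a sketch of the decode → process → encode cycle, not the exact handler from the test):

// Server-side cycle: decode the request parameters, run the handler logic,
// and encode the response payload for transmission.
fn handle_add(rpc_param_bytes: &[u8]) -> Result<Vec<u8>, bitcode::Error> {
    let request: AddRequestParams = bitcode::decode(rpc_param_bytes)?;
    let response = AddResponseParams {
        sum: request.numbers.iter().sum(),
    };
    Ok(bitcode::encode(&response))
}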
Sources:
Supported Types
Bitcode supports a wide range of Rust types through its derive macros:
| Type Category | Examples | Notes |
|---|---|---|
| Primitives | i32, u64, f64, bool | Direct binary encoding |
| Standard collections | Vec<T>, HashMap<K,V>, Option<T> | Length-prefixed encoding |
| Tuples | (T1, T2, T3) | Sequential encoding |
| Structs | Custom types with #[derive(Encode, Decode)] | Field-by-field encoding |
| Enums | Tagged unions with variants | Discriminant + variant data |
Sources:
- Cargo.lock:158-168 (bitcode dependencies including arrayvec, bytemuck, glam, serde)
- tests/rpc_dispatcher_tests.rs:10-28
graph TB
Bytes["Incoming Bytes"]
Decode["bitcode::decode<T>()"]
Success["Ok(T)"]
Error["Err(bitcode::Error)"]
Bytes --> Decode
Decode -->|Valid binary| Success
Decode -->|Invalid/incompatible| Error
Error -->|Handle| ErrorHandler["Error Handler\n(log, return error response)"]
Error Handling
Decoding operations return Result types to handle malformed or incompatible binary data:
Diagram: Error handling in bitcode deserialization
The test code demonstrates using .unwrap() for simplicity, but production code should handle decode errors gracefully:
tests/rpc_dispatcher_tests.rs:152-153 shows unwrapping (test code):
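The unwrapping snippet itself is not reproduced here; a hedged example of the graceful alternative (how the failure is reported back to the caller depends on the surrounding handler):

fn try_decode_params(payload: &[u8]) -> Option<AddRequestParams> {
    match bitcode::decode::<AddRequestParams>(payload) {
        Ok(params) => Some(params),
        Err(decode_err) => {
            // Log and let the caller translate this into an error response
            // (for example a system-error status) instead of panicking.
            eprintln!("failed to decode request params: {decode_err}");
            None
        }
    }
}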
Sources:
Compact Binary Format
Bitcode produces compact binary representations compared to text-based formats like JSON. The format characteristics include:
| Feature | Benefit |
|---|---|
| No field names in output | Reduces payload size by relying on struct definition order |
| Variable-length integer encoding | Smaller values use fewer bytes |
| No schema overhead | Binary is decoded based on compile-time type information |
| Aligned data structures | Optimized for fast deserialization via bytemuck |
Sources:
- Cargo.lock:158-168 (bitcode with bytemuck dependency for zero-copy operations)
Dependencies and Ecosystem Integration
The bitcode crate integrates with the broader Rust ecosystem:
Diagram: Bitcode dependency graph
Sources:
Usage in Muxio Ecosystem
Bitcode is used throughout the muxio workspace:
| Crate | Usage |
|---|---|
muxio | Core RPC protocol structures |
muxio-rpc-service | Service definition trait bounds |
example-muxio-rpc-service-definition | Shared RPC parameter types |
muxio-rpc-service-endpoint | Server-side deserialization |
muxio-rpc-service-caller | Client-side serialization |
Sources:
- Cargo.lock:426-431 (example-muxio-rpc-service-definition dependencies)
- Cargo.lock:858-867 (muxio-rpc-service dependencies)
- Cargo.lock:830-839 (muxio dependencies)
Error Handling
Relevant source files
- extensions/muxio-rpc-service-caller/src/caller_interface.rs
- src/rpc/rpc_dispatcher.rs
- src/rpc/rpc_internals/rpc_respondable_session.rs
Purpose and Scope
This document describes error handling strategies, error types, and failure modes throughout the rust-muxio RPC system. It covers how errors are detected, propagated across layers, and delivered to calling code. This includes transport failures, framing errors, RPC-level failures, and critical system failures like mutex poisoning.
For information about defining service errors in your own RPC methods, see Creating Service Definitions. For connection lifecycle management and state changes, see Connection Lifecycle and State Management.
Error Type Hierarchy
The muxio system uses a layered error model that mirrors its architectural layers. Each layer defines specific error types appropriate to its abstraction level.
graph TB
RpcServiceError["RpcServiceError"]
RpcError["RpcServiceError::Rpc"]
TransportError["RpcServiceError::Transport"]
RpcServiceErrorPayload["RpcServiceErrorPayload"]
RpcServiceErrorCode["RpcServiceErrorCode"]
IoError["std::io::Error"]
FrameDecodeError["FrameDecodeError"]
FrameEncodeError["FrameEncodeError"]
RpcResultStatus["RpcResultStatus"]
RpcServiceError --> RpcError
RpcServiceError --> TransportError
RpcError --> RpcServiceErrorPayload
RpcServiceErrorPayload --> RpcServiceErrorCode
TransportError --> IoError
IoError -.wraps.-> FrameDecodeError
RpcServiceErrorCode --> NotFound["NotFound"]
RpcServiceErrorCode --> Fail["Fail"]
RpcServiceErrorCode --> System["System"]
RpcResultStatus --> Success["Success"]
RpcResultStatus --> MethodNotFound["MethodNotFound"]
RpcResultStatus --> FailStatus["Fail"]
RpcResultStatus --> SystemError["SystemError"]
RpcResultStatus -.maps_to.-> RpcServiceErrorCode
Error Type Relationships
Sources:
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:16-17
- extensions/muxio-rpc-service/src/error.rs (referenced)
RpcServiceError
RpcServiceError is the primary error type exposed to application code when making RPC calls. It has two variants:
| Variant | Description | Contains |
|---|---|---|
Rpc | Remote service returned an error | RpcServiceErrorPayload with code and message |
Transport | Connection or framing failure | std::io::Error |
Sources:
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:42
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:254-257
RpcServiceErrorCode
Application-level error codes that indicate why an RPC call failed:
| Code | Meaning | Typical Cause |
|---|---|---|
NotFound | Method does not exist | Client calls unregistered method or method ID mismatch |
Fail | Method executed but failed | Handler returned an error |
System | Internal system error | Serialization failure, internal panic, resource exhaustion |
Sources:
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:193-198
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:204-210
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:216-228
RpcResultStatus
Wire-format status codes transmitted in RPC response headers. These are converted to RpcServiceErrorCode on the client side:
| Status | Wire Byte | Maps To |
|---|---|---|
Success | 0x00 | (no error) |
MethodNotFound | N/A | RpcServiceErrorCode::NotFound |
Fail | N/A | RpcServiceErrorCode::Fail |
SystemError | N/A | RpcServiceErrorCode::System |
Sources:
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:120-126
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:186-232
FrameDecodeError and FrameEncodeError
Low-level errors in the binary framing protocol:
- FrameDecodeError: Occurs when incoming bytes cannot be parsed as valid frames (corrupt header, invalid stream ID, etc.)
- FrameEncodeError: Occurs when outgoing data cannot be serialized into frames (buffer issues, invalid state, etc.)
These errors are typically wrapped in io::Error and surfaced as RpcServiceError::Transport.
Sources:
Error Propagation Through Layers
Errors flow through multiple layers before reaching application code. The propagation path depends on whether the error originates from transport, framing, RPC protocol, or service logic.
sequenceDiagram
participant App as "Application Code"
participant Caller as "RpcServiceCallerInterface"
participant Dispatcher as "RpcDispatcher"
participant Session as "RpcSession"
participant Transport as "WebSocket Transport"
Note over Transport: Transport Error
Transport->>Session: read_bytes() returns Err
Session->>Dispatcher: FrameDecodeError
Dispatcher->>Dispatcher: fail_all_pending_requests()
Dispatcher->>Caller: RpcStreamEvent::Error
Caller->>Caller: Convert to RpcServiceError::Transport
Caller->>App: Err(RpcServiceError::Transport)
Note over Transport: RPC Method Error
Transport->>Session: Valid frames, status=Fail
Session->>Dispatcher: RpcStreamEvent::Header (status byte)
Dispatcher->>Caller: status=RpcResultStatus::Fail
Caller->>Caller: Convert to RpcServiceError::Rpc
Caller->>App: Err(RpcServiceError::Rpc)
Error Flow Diagram
Sources:
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:246-284
- src/rpc/rpc_dispatcher.rs:187-206
- src/rpc/rpc_dispatcher.rs:422-456
Streaming vs Buffered Error Delivery
The system supports two error delivery modes depending on the RPC call type:
graph LR
Error["Error Occurs"] --> RecvFn["recv_fn closure"]
RecvFn --> Status["Parse RpcResultStatus"]
Status --> ErrorBuffer["Buffer error payload"]
ErrorBuffer --> End["RpcStreamEvent::End"]
End --> Send["sender.send(Err(...))"]
Send --> AppCode["Application receives Err from stream"]
Streaming Error Delivery
For streaming RPC calls using call_rpc_streaming(), errors are sent through the DynamicReceiver channel as they occur:
Sources:
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:136-174
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:175-244
graph LR
Stream["call_rpc_streaming"] --> Loop["while let Some(result)"]
Loop --> CheckResult{"result?"}
CheckResult -->|Ok| Accumulate["success_buf.extend"]
CheckResult -->|Err| StoreError["err = Some(e); break"]
Accumulate --> Loop
StoreError --> Return["Err(rpc_service_error)"]
Loop -->|None| Decode["decode(success_buf)"]
Decode --> ReturnOk["Ok(T)"]
Buffered Error Delivery
For prebuffered RPC calls using call_rpc_buffered(), errors are accumulated until the stream ends, then returned as a Result<T, RpcServiceError>:
Sources:
graph TB
RecvFn["recv_fn(RpcStreamEvent)"]
Header["Header Event"]
Payload["PayloadChunk Event"]
End["End Event"]
Error["Error Event"]
RecvFn --> Header
RecvFn --> Payload
RecvFn --> End
RecvFn --> Error
Header --> ParseStatus["Parse RpcResultStatus from metadata"]
ParseStatus --> StoreStatus["Store in status Mutex"]
StoreStatus --> SendReady["Send readiness signal"]
Payload --> CheckStatus{"status?"}
CheckStatus -->|Success| SendChunk["sender.send(Ok(bytes))"]
CheckStatus -->|Error status| BufferError["error_buffer.extend(bytes)"]
End --> FinalStatus{"final status?"}
FinalStatus -->|MethodNotFound| SendNotFound["sender.send(Err(NotFound))"]
FinalStatus -->|Fail| SendFail["sender.send(Err(Fail))"]
FinalStatus -->|SystemError| SendSystem["sender.send(Err(SystemError))"]
FinalStatus -->|Success| Close["Close channel normally"]
Error --> CreateError["Create Transport error"]
CreateError --> SendError["sender.send(Err(Transport))"]
SendError --> DropSender["Drop sender"]
Error Handling in recv_fn Closure
The recv_fn closure in RpcServiceCallerInterface is the primary mechanism for receiving and transforming RPC stream events into application-level errors. It handles four event types:
RpcStreamEvent Processing
Sources:
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:91-287
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:119-135
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:136-174
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:175-244
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:246-284
Error Buffering Logic
When a non-Success status is received, payload chunks are buffered into error_buffer instead of being sent to the application. This allows the complete error message to be assembled:
| Event Sequence | Status | Action |
|---|---|---|
| Header arrives | MethodNotFound | Store status, buffer subsequent payloads |
| PayloadChunk arrives | MethodNotFound | Append to error_buffer |
| PayloadChunk arrives | MethodNotFound | Append to error_buffer |
| End arrives | MethodNotFound | Decode error_buffer as error message, send Err(...) |
Sources:
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:152-159
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:177-181
Disconnection and Transport Errors
Transport-level failures require special handling to prevent hanging requests and ensure prompt error delivery.
Connection State Checks
Before initiating any RPC call, the caller checks connection state:
Sources:
graph TB
DispatcherCall["dispatcher.call()"] --> WaitReady["ready_rx.await"]
WaitReady --> CheckResult{"Result?"}
CheckResult -->|Ok| ReturnEncoder["Return (encoder, rx)"]
CheckResult -->|Err| ReturnError["Return Err(Transport)"]
TransportFail["Transport fails"] --> SendError["ready_tx.send(Err(io::Error))"]
SendError --> WaitReady
ChannelDrop["Handler drops ready_tx"] --> ChannelClosed["ready_rx returns Err"]
ChannelClosed --> CheckResult
Readiness Channel Errors
The call_rpc_streaming() method uses a oneshot channel to signal when the RPC call is ready (header received). If this channel closes prematurely, it indicates a transport failure:
Sources:
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:78-80
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:333-347
graph TB
FrameError["FrameDecodeError occurs"] --> CreateEvent["RpcStreamEvent::Error"]
CreateEvent --> RecvFn["recv_fn(Error event)"]
RecvFn --> WrapError["Wrap as io::Error::ConnectionAborted"]
WrapError --> NotifyReady["Send to ready_tx if pending"]
NotifyReady --> NotifyStream["Send to DynamicSender"]
NotifyStream --> Drop["Drop sender, close channel"]
RpcStreamEvent::Error Handling
When a FrameDecodeError occurs during stream processing, the session generates an RpcStreamEvent::Error:
Sources:
Critical Failure Modes
Certain failures are considered unrecoverable and result in immediate panic or cleanup.
graph TB
LockAttempt["queue.lock()"] --> CheckResult{"Result?"}
CheckResult -->|Ok| ProcessEvent["Process RpcStreamEvent"]
CheckResult -->|Err poisoned| Panic["panic!()"]
Panic --> CrashMsg["'Request queue mutex poisoned'"]
CrashMsg --> Note["Note: Prevents data corruption\nand undefined behavior"]
Mutex Poisoning
The rpc_request_queue in RpcDispatcher is protected by a Mutex. If a thread panics while holding this lock, the mutex becomes “poisoned.” This is treated as a critical failure:
Mutex Poisoning Handling
The rationale for panicking on mutex poisoning is documented in src/rpc/rpc_dispatcher.rs:85-97:
If the lock is poisoned, it likely means another thread panicked while holding the mutex. The internal state of the request queue may now be inconsistent or partially mutated. Continuing execution could result in incorrect dispatch behavior, undefined state transitions, or silent data loss. This should be treated as a critical failure and escalated appropriately.
Sources:
graph LR
ReadBytes["read_bytes()"] --> SessionRead["rpc_respondable_session.read_bytes()"]
SessionRead --> LockQueue["rpc_request_queue.lock()"]
LockQueue --> CheckLock{"lock()?"}
CheckLock -->|Ok| ReturnIds["Ok(active_request_ids)"]
CheckLock -->|Err poisoned| CorruptFrame["Err(FrameDecodeError::CorruptFrame)"]
FrameDecodeError as Critical Failure
When read_bytes() returns a FrameDecodeError, the dispatcher may also fail to lock the queue and return FrameDecodeError::CorruptFrame:
Sources:
graph TB
ConnDrop["Connection Dropped"] --> FailAll["fail_all_pending_requests(error)"]
FailAll --> TakeHandlers["mem::take(response_handlers)"]
TakeHandlers --> Iterate["For each (request_id, handler)"]
Iterate --> CreateSynthetic["Create synthetic Error event"]
CreateSynthetic --> CallHandler["handler(error_event)"]
CallHandler --> WakesFuture["Wakes waiting Future/stream"]
WakesFuture --> Iterate
Iterate --> Done["All handlers notified"]
fail_all_pending_requests Cleanup
When a connection drops, all pending RPC requests must be notified to prevent hanging futures. The fail_all_pending_requests() method performs this cleanup:
Cleanup Flow
The synthetic error event structure:
RpcStreamEvent::Error {
rpc_header: None,
rpc_request_id: Some(request_id),
rpc_method_id: None,
frame_decode_error: error.clone(),
}
Sources:
Handler Cleanup Guarantee
Taking ownership of the handlers (mem::take) ensures:
- The response_handlers map is immediately cleared
- No new events can be routed to removed handlers
- Each handler is called exactly once with the error
- Waiting futures/streams are unblocked promptly
Sources:
graph TB
Prebuffering{"prebuffer_response?"}
Prebuffering -->|true| AccumulateMode["Accumulate mode"]
Prebuffering -->|false| StreamMode["Stream mode"]
AccumulateMode --> HeaderEvt["Header Event"]
HeaderEvt --> CallHandler["Call handler with Header"]
HeaderEvt --> PayloadEvt["PayloadChunk Events"]
PayloadEvt --> BufferBytes["buffer.extend_from_slice(bytes)"]
BufferBytes --> PayloadEvt
PayloadEvt --> EndEvt["End Event"]
EndEvt --> SendAll["Send entire buffer at once"]
SendAll --> CallEndHandler["Call handler with End"]
StreamMode --> StreamHeader["Header Event"]
StreamHeader --> StreamPayload["PayloadChunk Events"]
StreamPayload --> CallHandlerImmediate["Call handler for each chunk"]
CallHandlerImmediate --> StreamPayload
StreamPayload --> StreamEnd["End Event"]
Error Handling in Prebuffering
The RpcRespondableSession supports prebuffering mode where response payloads are accumulated before delivery. Error handling in this mode differs from streaming:
Prebuffering Error Accumulation
In prebuffering mode, if an error status is detected, the entire error payload is still buffered until the End event, then delivered as a single chunk.
Sources:
Standard Error Handling Patterns
Pattern 1: Immediate Rejection on Disconnect
Always check connection state before starting expensive operations:
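A minimal sketch of the guard, assuming a boolean connection flag like the is_connected state described elsewhere; in the real caller the resulting io::Error is wrapped as RpcServiceError::Transport:

use std::io;

// Reject the call up front when the transport is known to be down, instead of
// encoding a request that can never be delivered.
fn ensure_connected(is_connected: bool) -> Result<(), io::Error> {
    if !is_connected {
        return Err(io::Error::new(
            io::ErrorKind::ConnectionAborted,
            "client is disconnected; rejecting RPC call",
        ));
    }
    Ok(())
}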
Sources:
Pattern 2: Error Conversion at Boundaries
Convert lower-level errors to RpcServiceError at API boundaries:
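A sketch of the conversion direction only; the framing error is represented generically here, and the real conversion lives inside the caller implementation:

use std::io;

// Framing failures are treated as a broken connection once they reach the RPC
// layer; the io::Error is then carried as RpcServiceError::Transport(...) at
// the caller-facing boundary.
fn wrap_frame_error(frame_err: &dyn std::error::Error) -> io::Error {
    io::Error::new(io::ErrorKind::ConnectionAborted, frame_err.to_string())
}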
Sources:
Pattern 3: Synchronous Error Handling in Callbacks
The recv_fn closure is synchronous and uses StdMutex to avoid async context issues:
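A sketch of the idea rather than the actual closure: shared state is guarded with std::sync::Mutex so the callback can run synchronously, without any async-aware lock:

use std::sync::{Arc, Mutex as StdMutex};

// Status slot shared between the synchronous callback and the code awaiting
// the result; std::sync::Mutex is safe to lock without an async context.
fn make_recv_fn(status: Arc<StdMutex<Option<u8>>>) -> impl FnMut(u8) {
    move |status_byte: u8| {
        *status.lock().expect("status mutex poisoned") = Some(status_byte);
    }
}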
Sources:
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:75-77
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:112
Pattern 4: Tracing for Error Diagnosis
All error paths include structured logging using tracing:
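For illustration (field names and level are examples, not copied from the crate):

// Structured logging on an error path; the rpc_request_id field makes it easy
// to correlate a failure with a specific in-flight call.
fn log_stream_error(rpc_request_id: u32, message: &str) {
    tracing::warn!(rpc_request_id, error = %message, "RPC stream failed");
}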
Sources:
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:249-252
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:268-273
Summary Table of Error Types and Handling
| Error Type | Layer | Handling Strategy | Recoverable? |
|---|---|---|---|
RpcServiceError::Rpc | Application | Return to caller | Yes |
RpcServiceError::Transport | Transport | Return to caller, cleanup handlers | No (requires reconnect) |
FrameDecodeError | Framing | Wrapped in io::Error, propagated up | No |
FrameEncodeError | Framing | Wrapped in io::Error, propagated up | No |
| Mutex poisoning | Internal | panic!() | No |
| Connection closed | Transport | fail_all_pending_requests() | No (requires reconnect) |
| Method not found | RPC Protocol | RpcServiceErrorCode::NotFound | Yes |
| Handler failure | Application | RpcServiceErrorCode::Fail or System | Yes |
Sources:
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:16-17
- src/rpc/rpc_dispatcher.rs:1
- src/rpc/rpc_dispatcher.rs:85-118
- src/rpc/rpc_dispatcher.rs:422-456
RPC Service Errors
Relevant source files
- extensions/muxio-rpc-service-caller/src/caller_interface.rs
- extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs
Purpose and Scope
This page documents the RPC-level error types, error codes, and error propagation mechanisms in the muxio RPC framework. It covers how service method failures are represented, encoded in the binary protocol, and delivered to callers. For information about lower-level transport and framing errors (such as connection failures and frame decode errors), see Transport and Framing Errors.
Error Type Hierarchy
The muxio RPC framework defines a structured error system that distinguishes between RPC-level errors (method failures, not found errors) and transport-level errors (connection issues, protocol violations).
RpcServiceError Enum
The RpcServiceError is the primary error type returned by RPC calls. It has two variants:
| Variant | Description | Example Use Cases |
|---|---|---|
Rpc(RpcServiceErrorPayload) | An error originating from the remote service method execution | Method not found, business logic failure, panics in handler |
Transport(io::Error) | An error in the underlying transport or protocol layer | Connection dropped, timeout, frame decode failure |
Sources:
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:16
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:49-52
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:193-198
RpcServiceErrorPayload Structure
When an RPC method fails on the server side, the error details are transmitted using RpcServiceErrorPayload:
| Field | Type | Description |
|---|---|---|
code | RpcServiceErrorCode | Categorizes the error (NotFound, Fail, System) |
message | String | Human-readable error message from the service handler |
Sources:
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:194-197
- extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:155-158
RpcServiceErrorCode Enum
The RpcServiceErrorCode categorizes RPC errors into three types:
| Code | Meaning | When Used |
|---|---|---|
NotFound | The requested method ID does not exist on the server | Method dispatch fails due to unregistered handler |
Fail | The method executed but returned a business logic error | Handler returns Err in its result type |
System | A system-level error occurred during method execution | Panics, internal errors, resource exhaustion |
Sources:
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:196
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:208
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:226
- extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:156
Error Propagation Flow
The following diagram illustrates how errors flow from the server-side handler through the protocol layers to the client:
Sources:
sequenceDiagram
participant Handler as "Service Handler"
participant Endpoint as "RpcServiceEndpoint"
participant ServerDisp as "Server RpcDispatcher"
participant Protocol as "Binary Protocol"
participant ClientDisp as "Client RpcDispatcher"
participant RecvFn as "recv_fn Callback"
participant Channel as "DynamicSender"
participant Caller as "RPC Caller"
Note over Handler,Caller: Error Case: Handler Fails
Handler->>Endpoint: Return Err(error)
Endpoint->>ServerDisp: respond() with RpcResultStatus
ServerDisp->>Protocol: Encode status in metadata_bytes
Protocol->>Protocol: Transmit error message as payload
Protocol->>ClientDisp: RpcStreamEvent::Header
ClientDisp->>RecvFn: Event with RpcResultStatus
RecvFn->>RecvFn: Extract status from metadata_bytes[0]
RecvFn->>RecvFn: Buffer error payload
Protocol->>ClientDisp: RpcStreamEvent::PayloadChunk
ClientDisp->>RecvFn: Error message bytes
RecvFn->>RecvFn: Accumulate in error_buffer
Protocol->>ClientDisp: RpcStreamEvent::End
ClientDisp->>RecvFn: Stream complete
RecvFn->>RecvFn: Convert RpcResultStatus to RpcServiceError
RecvFn->>Channel: Send Err(RpcServiceError::Rpc)
Channel->>Caller: Receive error from stream
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:119-134
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:136-173
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:175-244
Error Encoding in RPC Protocol
RpcResultStatus Enum
The RpcResultStatus is encoded in the first byte of the rpc_metadata_bytes field in the RpcHeader. This status indicates whether the RPC call succeeded or failed, and if it failed, what category of failure occurred.
| Status Value | Description | Maps to RpcServiceErrorCode |
|---|---|---|
Success | Method executed successfully | N/A (no error) |
MethodNotFound | Handler not registered for method ID | NotFound |
Fail | Handler returned error | Fail |
SystemError | Handler panicked or system error | System |
Sources:
Status Extraction and Error Construction
The recv_fn callback extracts the status from the response header and buffers any error payload:
Sources:
graph TB
Header["RpcStreamEvent::Header\nrpc_metadata_bytes[0]"]
Extract["Extract RpcResultStatus\nvia try_from(byte)"]
Store["Store status in\nStdMutex<Option<RpcResultStatus>>"]
PayloadChunk["RpcStreamEvent::PayloadChunk\nbytes"]
CheckStatus{"Status is\nSuccess?"}
SendSuccess["Send Ok(bytes)\nto DynamicSender"]
BufferError["Accumulate in\nerror_buffer"]
End["RpcStreamEvent::End"]
MatchStatus{"Match\nfinal status"}
CreateNotFound["RpcServiceError::Rpc\ncode: NotFound"]
CreateFail["RpcServiceError::Rpc\ncode: Fail"]
CreateSystem["RpcServiceError::Rpc\ncode: System\nmessage: from buffer"]
SendError["Send Err(error)\nto DynamicSender"]
Header --> Extract
Extract --> Store
PayloadChunk --> CheckStatus
CheckStatus -->|Yes| SendSuccess
CheckStatus -->|No| BufferError
End --> MatchStatus
MatchStatus -->|MethodNotFound| CreateNotFound
MatchStatus -->|Fail| CreateFail
MatchStatus -->|SystemError| CreateSystem
CreateNotFound --> SendError
CreateFail --> SendError
CreateSystem --> SendError
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:119-134
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:136-173
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:186-238
Error Handling in Caller Interface
The RpcServiceCallerInterface trait defines error handling at two levels: during call setup and during response processing.
Connection State Validation
Before initiating an RPC call, the caller checks the connection state:
Sources:
Response Stream Error Handling
The recv_fn callback processes three types of events that can result in errors:
graph TD
ErrorEvent["RpcStreamEvent::Error\nframe_decode_error"]
CreateTransportErr["Create RpcServiceError::Transport\nio::Error::ConnectionAborted\nmessage: frame_decode_error.to_string()"]
NotifyReady["Send Err to ready_rx\nif not yet signaled"]
SendToChannel["Send Err to DynamicSender\nif still available"]
DropSender["Drop DynamicSender\nclose channel"]
ErrorEvent --> CreateTransportErr
CreateTransportErr --> NotifyReady
NotifyReady --> SendToChannel
SendToChannel --> DropSender
Error Event from Transport
When a RpcStreamEvent::Error is received (indicating a frame decode error or transport failure):
Sources:
RPC-Level Error from End Event
When a RpcStreamEvent::End is received with a non-success status:
Sources:
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:175-244
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:186-238
graph TD
StartBuffered["call_rpc_buffered()"]
CallStreaming["call_rpc_streaming()\nDynamicChannelType::Unbounded"]
InitBuffers["success_buf = Vec::new()\nerr = None"]
LoopStart{"stream.next().await"}
MatchResult{"Match result"}
OkChunk["Ok(chunk)"]
ErrValue["Err(e)"]
ExtendBuf["success_buf.extend(chunk)"]
StoreErr["err = Some(e)\nbreak loop"]
CheckErr{"err.is_some()?"}
ReturnErr["Return Ok(encoder, Err(err))"]
Decode["decode(&success_buf)"]
ReturnOk["Return Ok(encoder, Ok(decoded))"]
StartBuffered --> CallStreaming
CallStreaming --> InitBuffers
InitBuffers --> LoopStart
LoopStart -->|Some result| MatchResult
LoopStart -->|None| CheckErr
MatchResult --> OkChunk
MatchResult --> ErrValue
OkChunk --> ExtendBuf
ExtendBuf --> LoopStart
ErrValue --> StoreErr
StoreErr --> CheckErr
CheckErr -->|Yes| ReturnErr
CheckErr -->|No| Decode
Decode --> ReturnOk
Buffered Call Error Aggregation
The call_rpc_buffered method consumes a streaming response and returns either the complete success payload or the first error encountered:
Sources:
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:351-399
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:373-398
Test Examples
Handling MethodNotFound Error
Test demonstrating a method not found scenario:
Pattern:
- Mock sender emits Err(RpcServiceError::Rpc) with code: NotFound
- Caller receives the error through the buffered call
- Error code and message are preserved
Sources:
Handling System Error
Test demonstrating a system-level error (e.g., panic in handler):
Pattern:
- Mock sender emits Err(RpcServiceError::Rpc) with code: System
- Error message includes panic details
- Prebuffered trait call propagates the error to caller
Sources:
Error Variant Matching
When handling RpcServiceError in application code, match on the variants:
match result {
Ok(value) => { /* handle success */ },
Err(RpcServiceError::Rpc(payload)) => {
match payload.code {
RpcServiceErrorCode::NotFound => { /* method not registered */ },
RpcServiceErrorCode::Fail => { /* business logic error */ },
RpcServiceErrorCode::System => { /* handler panic or internal error */ },
}
},
Err(RpcServiceError::Transport(io_err)) => { /* connection or protocol error */ },
}
Sources:
- extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:170-176
- extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:205-211
Summary Table
| Error Origin | Status/Variant | RpcServiceErrorCode | Description |
|---|---|---|---|
| Handler not registered | MethodNotFound | NotFound | The requested method ID has no registered handler on the server |
Handler returns Err | Fail | Fail | The handler executed but returned a business logic error |
| Handler panics | SystemError | System | The handler panicked or encountered an internal system error |
| Connection dropped | N/A | N/A | Wrapped in RpcServiceError::Transport(io::Error) |
| Frame decode failure | N/A | N/A | Wrapped in RpcServiceError::Transport(io::Error) |
| Disconnected call attempt | N/A | N/A | io::ErrorKind::ConnectionAborted in Transport variant |
Sources:
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:16
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:186-238
- extensions/muxio-rpc-service-caller/src/caller_interface.rs:246-284
Transport and Framing Errors
Relevant source files
- extensions/muxio-rpc-service-caller/src/lib.rs
- extensions/muxio-rpc-service-caller/tests/dynamic_channel_tests.rs
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs
- extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs
- src/rpc/rpc_dispatcher.rs
- src/rpc/rpc_internals/rpc_respondable_session.rs
This page documents low-level transport and framing errors in the muxio system. These errors occur at the binary protocol layer, connection management layer, and dispatcher coordination layer. For application-level RPC service errors (method not found, invalid parameters, handler exceptions), see RPC Service Errors.
Scope : This page covers FrameDecodeError, FrameEncodeError, connection failures, transport state transitions, dispatcher mutex poisoning, and cleanup mechanisms when connections drop unexpectedly.
Overview of Transport and Framing Error Types
The muxio system defines errors at multiple layers of the transport stack. Each layer reports failures using specific error types that propagate upward through the system.
graph TB
subgraph "Low-Level Frame Errors"
FDE["FrameDecodeError"]
FEE["FrameEncodeError"]
end
subgraph "Connection Errors"
CE["io::Error\nConnectionRefused\nConnectionReset"]
TE["Transport Errors\nWebSocket failures\nNetwork timeouts"]
end
subgraph "Dispatcher Coordination Errors"
MP["Mutex Poisoning\nPoisonError"]
PC["Pending Call Failures\nReadAfterCancel"]
end
subgraph "RPC Layer Errors"
RSE["RpcServiceError\nTransport variant"]
end
FDE -->|wrapped in| RSE
CE -->|converted to| RSE
TE -->|triggers| PC
MP -->|panics| PANIC["System Panic"]
PC -->|error events to| RSE
style PANIC fill:#ffcccc
Error Type Hierarchy
Sources :
Framing Protocol Errors
FrameDecodeError
FrameDecodeError represents failures when parsing incoming binary frames. These errors occur in the RpcSession decoder when frame headers are malformed, stream state is inconsistent, or data is corrupted.
| Variant | Description | When It Occurs |
|---|---|---|
CorruptFrame | Frame header is invalid or stream state is inconsistent | Malformed binary data, protocol violation |
ReadAfterCancel | Attempt to read from a cancelled stream | Connection dropped mid-stream, explicit cancellation |
UnexpectedEnd | Stream ended prematurely without End frame | Transport closed unexpectedly |
| Other variants | (Implementation-specific) | Various protocol violations |
Sources :
FrameEncodeError
FrameEncodeError represents failures when encoding outbound frames. These are less common than decode errors since encoding is deterministic, but can occur when stream state is invalid or resources are exhausted.
| Variant | Description | When It Occurs |
|---|---|---|
CorruptFrame | Internal state inconsistency | Invalid encoder state, logic error |
| Other variants | (Implementation-specific) | Resource exhaustion, invalid input |
Sources :
Connection and Transport Errors
Connection Establishment Failures
When a client attempts to connect to a non-existent or unreachable server, the connection fails immediately with an io::Error of kind ConnectionRefused.
Connection Flow with Error :
Sources :
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:110-121
- extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:15-31
Connection Drop During Operation
When a connection drops while RPC calls are in flight, the client must:
- Detect the disconnection
- Fail all pending requests
- Notify state change handlers
- Prevent new requests from being sent
Disconnect Detection and Handling :
Sources :
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:79-108
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:158-221
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:224-257
Dispatcher Error Handling
Mutex Poisoning
The RpcDispatcher uses a Mutex<VecDeque<(u32, RpcRequest)>> to track pending inbound responses. If a thread panics while holding this mutex, the mutex becomes “poisoned” and all subsequent lock attempts return Err(PoisonError).
Design Decision : The dispatcher treats mutex poisoning as a critical, unrecoverable error and panics immediately rather than attempting recovery.
Rationale (from src/rpc/rpc_dispatcher.rs:85-97):
If the lock is poisoned, it likely means another thread panicked while holding the mutex. The internal state of the request queue may now be inconsistent or partially mutated. Continuing execution could result in incorrect dispatch behavior, undefined state transitions, or silent data loss. This should be treated as a critical failure and escalated appropriately.
Mutex Poisoning Detection :
Poisoning Sites :
| Location | Purpose | Panic Behavior |
|---|---|---|
| src/rpc/rpc_dispatcher.rs:104-118 | init_catch_all_response_handler | Panics if queue lock is poisoned |
| src/rpc/rpc_dispatcher.rs:367-370 | read_bytes queue access | Returns FrameDecodeError::CorruptFrame |
Sources :
Failing Pending Requests on Disconnect
When a transport connection drops, the RpcDispatcher::fail_all_pending_requests() method ensures that all in-flight RPC calls are notified of the failure. This prevents deadlocks where application code waits indefinitely for responses that will never arrive.
Cleanup Sequence :
Implementation Details (src/rpc/rpc_dispatcher.rs:422-456):
- Take Ownership: std::mem::take(&mut self.rpc_respondable_session.response_handlers) moves all handlers out of the map, leaving it empty
- Synthetic Error: Creates RpcStreamEvent::Error { frame_decode_error: error, ... } for each handler
- Invoke Handlers: Calls each handler with the error event, waking any awaiting futures
- Memory Safety: Handlers are dropped after invocation, preventing leaks
Sources :
- src/rpc/rpc_dispatcher.rs:422-456
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:100-103
- extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:167-292
graph TB
subgraph "Transport Layer"
T1["TCP/IP Error\nio::Error"]
T2["WebSocket Error\ntungstenite::Error"]
end
subgraph "Framing Layer"
F1["FrameDecodeError\nRpcSession::read_bytes()"]
F2["FrameEncodeError\nRpcStreamEncoder"]
end
subgraph "RPC Protocol Layer"
R1["RpcDispatcher::read_bytes()\nReturns Vec<u32> or Error"]
R2["RpcDispatcher::call()\nReturns Encoder or Error"]
end
subgraph "RPC Service Layer"
S1["RpcServiceError::Transport\nWrapped frame errors"]
S2["RpcServiceCallerInterface::call_rpc_prebuffered()\nReturns Result"]
end
subgraph "Application Layer"
A1["Application Code\nReceives Result<T, RpcServiceError>"]
end
T1 -->|Connection drops| T2
T2 -->|Stream error in ws_receiver.next| F1
F1 -->|propagated via read_bytes| R1
R1 -->|FrameDecodeError| S1
F2 -->|Encode failure| R2
R2 -->|FrameEncodeError| S1
S1 --> S2
S2 --> A1
style T1 fill:#ffe6e6
style F1 fill:#fff0e6
style S1 fill:#e6f7ff
Error Propagation Through Layers
Errors flow upward through the muxio layer stack, with each layer translating or wrapping errors as appropriate for its abstraction level.
Error Flow Diagram
Sources :
Error Handling in Stream Processing
Per-Stream Error Events
When a stream encounters a decode error, the RpcSession emits an RpcStreamEvent::Error event to the registered handler. This allows stream-specific error handling without affecting other concurrent streams.
Error Event Structure :
Handler Processing (src/rpc/rpc_dispatcher.rs:187-206):
Sources :
Catch-All Response Handler
The dispatcher installs a catch-all handler to process incoming response events that don’t have a specific registered handler. This handler is responsible for error logging and queue management.
Handler Registration (src/rpc/rpc_dispatcher.rs:98-209):
Sources :
stateDiagram-v2
[*] --> Connecting: RpcClient::new() called
Connecting --> Connected : WebSocket handshake complete
Connecting --> [*]: Connection error\n(io::Error returned)
Connected --> Disconnecting : WebSocket error detected
Connected --> Disconnecting : shutdown_async() called
Disconnecting --> Disconnected : is_connected.swap(false)
Disconnected --> HandlerNotified : Call state_change_handler
HandlerNotified --> RequestsFailed : fail_all_pending_requests()
RequestsFailed --> [*] : Cleanup complete
note right of Connected
Heartbeat pings sent
RPC calls processed
end note
note right of RequestsFailed
All pending RPC calls
resolved with errors
end note
Connection State Management
The RpcClient tracks connection state using an AtomicBool (is_connected) and notifies application code via state change handlers.
State Transition Diagram
State Change Handler Contract :
| State | When Called | Guarantees |
|---|---|---|
RpcTransportState::Connected | Immediately after set_state_change_handler() if connected | Client is ready for RPC calls |
RpcTransportState::Disconnected | On connection drop, explicit shutdown, or client Drop | All pending requests have been failed |
Sources :
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:54-108
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:279-335
- extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:33-165
Recovery and Cleanup Strategies
Automatic Cleanup on Drop
The RpcClient implements Drop to ensure graceful shutdown when the client is destroyed:
Drop Implementation (extensions/muxio-tokio-rpc-client/src/rpc_client.rs:42-52):
Limitations : The synchronous Drop trait cannot await async operations, so fail_all_pending_requests() is only called when shutdown_async() is explicitly invoked by background tasks detecting errors.
Sources :
Manual Disconnect Handling
Applications can register state change handlers to implement custom cleanup logic:
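A hedged sketch of such a handler; the RpcTransportState variants follow the contract table above, while the import path and the registration call are paraphrased from this page rather than copied from the crate:

// Assumed import path; the enum may live in a different module.
use muxio_tokio_rpc_client::RpcTransportState;

fn on_transport_state(state: RpcTransportState) {
    match state {
        RpcTransportState::Connected => {
            // Client is ready; RPC calls may be issued.
        }
        RpcTransportState::Disconnected => {
            // All in-flight calls have already been failed; hand reconnection
            // work to a separate task instead of blocking here.
        }
    }
}

// Registered roughly as: client.set_state_change_handler(on_transport_state)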
Best Practices :
- Always handle Disconnected: Assume all in-flight RPC calls have failed
- Avoid blocking operations: The handler is called synchronously from the disconnect detection path
- Use channels for async work: Spawn tasks rather than awaiting in the handler
Sources :
Error Handling Patterns in Tests
Testing Connection Failures
The test suite validates error handling through various scenarios:
Connection Refusal Test (extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:15-31):
- Attempts connection to unused port
- Verifies io::Error with ErrorKind::ConnectionRefused
- Confirms no panic or hang
Disconnect During Operation Test (extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:167-292):
- Establishes connection
- Spawns RPC call
- Closes server connection
- Verifies pending call fails with error containing “cancelled stream” or “Transport error”
sequenceDiagram
participant Test as Test Code
participant Server as Mock Server
participant Client as RpcClient
participant RPC as Pending RPC Call
Test->>Server: Start listener
Test->>Client: RpcClient::new()
Test->>RPC: Spawn Echo::call() in background
Test->>Test: Sleep to let RPC become pending
Note over RPC: RPC call waiting in\ndispatcher.response_handlers
Test->>Server: Signal to close connection
Server->>Client: Close WebSocket
Client->>Client: Detect error in ws_receiver
Client->>Client: shutdown_async()
Client->>Client: fail_all_pending_requests()
Client->>RPC: Emit Error event
RPC-->>Test: Return Err(RpcServiceError)
Test->>Test: Assert error contains\n"cancelled stream"
Test Pattern :
Sources :
- extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:15-31
- extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:167-292
Summary Table: Error Types and Handling
| Error Type | Layer | Cause | Handling Strategy |
|---|---|---|---|
FrameDecodeError::CorruptFrame | Framing | Malformed binary data | Log error, drop stream |
FrameDecodeError::ReadAfterCancel | Framing | Stream cancelled | Propagate to RPC layer |
FrameEncodeError | Framing | Encoder state error | Return error to caller |
io::Error::ConnectionRefused | Transport | Server not reachable | Return from new() |
tungstenite::Error | Transport | WebSocket failure | Trigger shutdown_async() |
PoisonError<T> | Dispatcher | Thread panic with lock | PANIC (critical failure) |
RpcServiceError::Transport | RPC Service | Wrapped lower-level error | Return to application |
Sources :
- src/rpc/rpc_dispatcher.rs:1-8
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:1-19
- extensions/muxio-rpc-service-caller/src/lib.rs:1-10
Testing
Relevant source files
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs
- src/rpc/rpc_internals/rpc_header.rs
- src/rpc/rpc_request_response.rs
- tests/rpc_dispatcher_tests.rs
Purpose and Scope
This document provides an overview of testing strategies and patterns used in the rust-muxio codebase. It covers the testing philosophy, test organization, common testing patterns, and available testing utilities. For detailed information about specific testing approaches, see Unit Testing and Integration Testing.
The rust-muxio system emphasizes compile-time correctness through shared type definitions and trait-based abstractions. This design philosophy directly influences the testing strategy: many potential bugs are prevented by the type system, allowing tests to focus on runtime behavior, protocol correctness, and cross-platform compatibility.
Testing Philosophy
The rust-muxio testing approach is built on three core principles:
Compile-Time Guarantees Reduce Runtime Test Burden : By using shared service definitions from example-muxio-rpc-service-definition, both clients and servers depend on the same RpcMethodPrebuffered trait implementations. The METHOD_ID constants are generated at compile time via xxhash, ensuring parameter encoding/decoding, method identification, and data structures remain consistent. Tests focus on runtime behavior rather than type mismatches—the compiler prevents protocol incompatibilities.
Layered Testing Mirrors Layered Architecture : The system’s modular design (core muxio → muxio-rpc-service → transport extensions) enables focused testing at each layer. Unit tests in tests/rpc_dispatcher_tests.rs verify RpcDispatcher::call() and RpcDispatcher::respond() behavior without async dependencies, while integration tests in extension crates validate the complete stack including tokio-tungstenite WebSocket transports.
Cross-Platform Validation Is Essential : Because the same RpcMethodPrebuffered traits work across muxio-tokio-rpc-client, muxio-wasm-rpc-client, and muxio-tokio-rpc-server, tests verify that all client types communicate correctly. This is achieved through parallel integration test suites that use identical Add, Mult, and Echo service methods against different client implementations.
Sources : extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:1-97 tests/rpc_dispatcher_tests.rs:1-30
Test Organization in the Workspace
Test Location Strategy
Tests are organized by scope and purpose:
| Test Type | Location | Purpose | Key Functions Tested |
|---|---|---|---|
| Core Unit Tests | tests/rpc_dispatcher_tests.rs | Validate RpcDispatcher::call(), RpcDispatcher::respond(), RpcDispatcher::read_bytes() without async runtime | rpc_dispatcher_call_and_echo_response() |
| Tokio Integration Tests | extensions/muxio-tokio-rpc-client/tests/ | Validate RpcClient → RpcServer communication over tokio-tungstenite | test_success_client_server_roundtrip(), test_error_client_server_roundtrip() |
| WASM Integration Tests | extensions/muxio-wasm-rpc-client/tests/ | Validate RpcWasmClient → RpcServer with WebSocket bridge | test_success_client_server_roundtrip(), test_large_prebuffered_payload_roundtrip_wasm() |
| Test Service Definitions | example-muxio-rpc-service-definition/src/prebuffered.rs | Shared RpcMethodPrebuffered implementations | Add::METHOD_ID, Mult::METHOD_ID, Echo::METHOD_ID |
This organization ensures that:
- Core library tests have no async runtime dependencies
- Extension tests can use their specific runtime environments
- Test service definitions are reusable across all client types
- Integration tests exercise the complete, realistic code paths
Sources : extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:1-18 tests/rpc_dispatcher_tests.rs:1-7
Integration Test Architecture
Integration tests create realistic client-server scenarios to validate end-to-end behavior. The following diagram illustrates the typical test setup:
Key Components
sequenceDiagram
participant Test as "test_success_client_server_roundtrip()"
participant Listener as "TcpListener"
participant Server as "Arc<RpcServer>"
participant Endpoint as "RpcServiceEndpointInterface"
participant Client as "RpcClient"
participant Add as "Add::call()"
Test->>Listener: TcpListener::bind("127.0.0.1:0")
Test->>Server: RpcServer::new(None)
Test->>Server: server.endpoint()
Server-->>Endpoint: endpoint reference
Test->>Endpoint: register_prebuffered(Add::METHOD_ID, handler)
Note over Endpoint: Handler: |request_bytes, _ctx| async move
Test->>Endpoint: register_prebuffered(Mult::METHOD_ID, handler)
Test->>Endpoint: register_prebuffered(Echo::METHOD_ID, handler)
Test->>Test: tokio::spawn(server.serve_with_listener(listener))
Test->>Client: RpcClient::new(host, port).await
Test->>Add: Add::call(client.as_ref(), vec![1.0, 2.0, 3.0])
Add->>Add: Add::encode_request(input)
Add->>Client: call_rpc_buffered(RpcRequest)
Client->>Server: WebSocket binary frames
Server->>Endpoint: Dispatch by Add::METHOD_ID
Endpoint->>Endpoint: Execute registered handler
Endpoint->>Client: RpcResponse frames
Client->>Add: Buffered response bytes
Add->>Add: Add::decode_response(&bytes)
Add-->>Test: Ok(6.0)
Test->>Test: assert_eq!(res1.unwrap(), 6.0)
Random Port Binding : Tests use TcpListener::bind("127.0.0.1:0").await to obtain a random available port, preventing conflicts when running multiple tests in parallel extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:21-23
Arc-Wrapped Server : The RpcServer instance is wrapped in Arc::new(RpcServer::new(None)) to enable cloning into spawned tasks while maintaining shared state extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:28
Separate Endpoint Registration : Handlers are registered using endpoint.register_prebuffered(Add::METHOD_ID, handler).await, not directly on the server. The endpoint is obtained via server.endpoint(). This separation allows handler registration to complete before server.serve_with_listener() begins accepting connections extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:31-61
Background Server Task : The server runs via tokio::spawn(server.serve_with_listener(listener)), allowing the test to proceed with client operations on the main test task extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:64-70
Shared Service Definitions : Both client and server invoke the same Add::call(), Mult::call(), and Echo::call() methods, which internally use Add::encode_request(), Add::decode_response(), etc., ensuring type-safe, consistent serialization via bitcode extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:1
Sources : extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:16-97 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-20
Common Test Patterns
Success Case Testing
The most fundamental test pattern validates that RPC calls complete successfully with correct results. The pattern uses tokio::join! to execute multiple concurrent calls, verifying both concurrency handling and result correctness:
Each call internally invokes RpcServiceCallerInterface::call_rpc_buffered() with an RpcRequest containing the appropriate METHOD_ID and bitcode-encoded parameters extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:81-96
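A hedged reconstruction of that pattern, inside the test's async body; argument values follow the sequence diagram above and are illustrative:

// Issue three prebuffered calls concurrently over the same client connection.
let (res1, res2, res3) = tokio::join!(
    Add::call(client.as_ref(), vec![1.0, 2.0, 3.0]),
    Mult::call(client.as_ref(), vec![2.0, 3.0, 4.0]),
    Echo::call(client.as_ref(), b"hello".to_vec()),
);

assert_eq!(res1.unwrap(), 6.0);
assert_eq!(res2.unwrap(), 24.0);
assert_eq!(res3.unwrap(), b"hello".to_vec());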
Error Propagation Testing
Tests verify that server-side errors are correctly propagated to clients with appropriate RpcServiceErrorCode values:
The server encodes the error into the RpcResponse.rpc_result_status field, which the client’s RpcDispatcher::read_bytes() method decodes back into a structured RpcServiceError extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:99-152
Large Payload Testing
Tests ensure that payloads exceeding DEFAULT_SERVICE_MAX_CHUNK_SIZE are correctly chunked by RpcDispatcher and reassembled:
The request is automatically chunked in RpcRequest.rpc_prebuffered_payload_bytes via RpcDispatcher::call(), transmitted as multiple frames, buffered by RpcStreamDecoder, and reassembled before the handler executes. The response follows the same chunking path via RpcDispatcher::respond() extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:154-203 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:230-312
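Schematically (sizes and names are illustrative, not taken from the test):

// A payload well above a single chunk must survive the Echo round trip intact,
// exercising chunked encoding on the request and reassembly on the response.
let big_payload: Vec<u8> = vec![0xAB; 1024 * 1024];
let echoed = Echo::call(client.as_ref(), big_payload.clone()).await.unwrap();
assert_eq!(echoed, big_payload);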
Method Not Found Testing
Tests verify that calling unregistered methods returns the correct error code:
This ensures the server correctly identifies missing handlers extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:205-240
Sources : extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:81-240 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:126-142
WASM Client Testing with WebSocket Bridge
Testing the WASM client requires special handling because it is runtime-agnostic and designed for browser environments. Integration tests use a WebSocket bridge to connect the WASM client to a real Tokio server:
Bridge Implementation Details
graph TB
subgraph "Test Environment"
TEST["test_success_client_server_roundtrip()"]
end
subgraph "Server Side"
SERVER["Arc<RpcServer>"]
LISTENER["TcpListener::bind()"]
HANDLERS["endpoint.register_prebuffered()"]
end
subgraph "Bridge Infrastructure"
WS_CONN["connect_async(server_url)"]
TO_BRIDGE["tokio_mpsc::unbounded_channel()"]
WS_SENDER["ws_sender.send(WsMessage::Binary)"]
WS_RECEIVER["ws_receiver.next()"]
BRIDGE_TX["tokio::spawn(bridge_tx_task)"]
BRIDGE_RX["tokio::spawn(bridge_rx_task)"]
end
subgraph "WASM Client Side"
WASM_CLIENT["Arc<RpcWasmClient>::new()"]
DISPATCHER["client.get_dispatcher()"]
OUTPUT_CB["Output Callback:\nto_bridge_tx.send(bytes)"]
READ_BYTES["dispatcher.blocking_lock().read_bytes()"]
end
TEST --> SERVER
TEST --> LISTENER
SERVER --> HANDLERS
TEST --> WASM_CLIENT
TEST --> TO_BRIDGE
WASM_CLIENT --> OUTPUT_CB
OUTPUT_CB --> TO_BRIDGE
TO_BRIDGE --> BRIDGE_TX
BRIDGE_TX --> WS_SENDER
WS_SENDER --> WS_CONN
WS_CONN --> SERVER
SERVER --> WS_CONN
WS_CONN --> WS_RECEIVER
WS_RECEIVER --> BRIDGE_RX
BRIDGE_RX --> READ_BYTES
READ_BYTES --> DISPATCHER
DISPATCHER --> WASM_CLIENT
The WebSocket bridge consists of two spawned tasks that connect RpcWasmClient to the real RpcServer:
Client to Server Bridge : Receives bytes from RpcWasmClient’s output callback (invoked during RpcDispatcher::call()) and forwards them as WsMessage::Binary extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:98-108:
Server to Client Bridge : Receives WsMessage::Binary from the server and feeds them to RpcDispatcher::read_bytes() via task::spawn_blocking() extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:110-123:
Why spawn_blocking : RpcWasmClient::get_dispatcher() returns a type that uses blocking_lock() (a synchronous mutex) and RpcDispatcher::read_bytes() is synchronous. These are required for WASM compatibility where async is unavailable. In tests running on Tokio, synchronous blocking operations must run on the blocking thread pool via task::spawn_blocking() to prevent starving the async runtime.
Sources : extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-142 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:83-123
graph LR
CLIENT_DISP["client_dispatcher:\nRpcDispatcher::new()"]
OUT_BUF["outgoing_buf:\nRc<RefCell<Vec<u8>>>"]
SERVER_DISP["server_dispatcher:\nRpcDispatcher::new()"]
CLIENT_DISP -->|call rpc_request, 4, write_cb| OUT_BUF
OUT_BUF -->|chunks 4| SERVER_DISP
SERVER_DISP -->|read_bytes chunk| SERVER_DISP
SERVER_DISP -->|is_rpc_request_finalized| SERVER_DISP
SERVER_DISP -->|delete_rpc_request| SERVER_DISP
SERVER_DISP -->|respond rpc_response, 4, write_cb| OUT_BUF
OUT_BUF -->|client.read_bytes| CLIENT_DISP
Unit Testing the RpcDispatcher
The core RpcDispatcher can be tested in isolation without async runtimes or network transports. These tests use in-memory buffers to simulate data exchange:
Test Structure
The rpc_dispatcher_call_and_echo_response() test creates two RpcDispatcher instances representing client and server, connected via a shared Rc<RefCell<Vec<u8>>> buffer tests/rpc_dispatcher_tests.rs:30-38:
Request Flow : Client creates RpcRequest with rpc_method_id set to ADD_METHOD_ID or MULT_METHOD_ID, then invokes RpcDispatcher::call() with a write callback that appends to the buffer tests/rpc_dispatcher_tests.rs:42-124:
Server Processing : Server reads from the buffer in 4-byte chunks via RpcDispatcher::read_bytes(), checks is_rpc_request_finalized(), retrieves the request with delete_rpc_request(), processes it, and sends the response via RpcDispatcher::respond() tests/rpc_dispatcher_tests.rs:126-203:
This validates the complete request/response cycle including framing, chunking, request correlation via rpc_request_id, and method dispatch via rpc_method_id.
Sources : tests/rpc_dispatcher_tests.rs:30-203 tests/rpc_dispatcher_tests.rs:1-29
Test Coverage Matrix
The following table summarizes test coverage across different layers and client types:
| Test Scenario | Core Unit Tests | Tokio Integration | WASM Integration | Key Functions Validated |
|---|---|---|---|---|
| Basic RPC Call | ✓ | ✓ | ✓ | RpcDispatcher::call(), RpcDispatcher::respond() |
| Concurrent Calls | ✗ | ✓ | ✓ | tokio::join! with multiple RpcCallPrebuffered::call() |
| Large Payloads (> DEFAULT_SERVICE_MAX_CHUNK_SIZE) | ✓ | ✓ | ✓ | RpcStreamEncoder::write(), RpcStreamDecoder::process_chunk() |
| Error Propagation | ✓ | ✓ | ✓ | RpcServiceError::Rpc, RpcServiceErrorCode |
| Method Not Found | ✗ | ✓ | ✓ | RpcServiceEndpointInterface::dispatch() |
| Framing Protocol | ✓ | Implicit | Implicit | RpcDispatcher::read_bytes() chunking |
| Request Correlation | ✓ | Implicit | Implicit | rpc_request_id in RpcHeader |
| WebSocket Transport | ✗ | ✓ | ✓ (bridged) | tokio-tungstenite, WsMessage::Binary |
| Connection State | ✗ | ✓ | ✓ | client.handle_connect(), client.handle_disconnect() |
Coverage Rationale
- Core unit tests validate the RpcDispatcher without runtime dependencies
- Tokio integration tests validate native client-server communication over real WebSocket connections
- WASM integration tests validate cross-platform compatibility by testing the WASM client against the same server
- Each layer is tested at the appropriate level of abstraction
Sources : tests/rpc_dispatcher_tests.rs:1-203 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:1-241 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-313
Shared Test Service Definitions
All integration tests use service definitions from example-muxio-rpc-service-definition/src/prebuffered.rs:
| Service Method | Input Type | Output Type | Implementation | Purpose |
|---|---|---|---|---|
| Add::METHOD_ID | Vec<f64> | f64 | request_params.iter().sum() | Sum of numbers |
| Mult::METHOD_ID | Vec<f64> | f64 | request_params.iter().product() | Product of numbers |
| Echo::METHOD_ID | Vec<u8> | Vec<u8> | Identity function | Round-trip validation |
These methods are intentionally simple to focus tests on protocol correctness rather than business logic. The Echo::METHOD_ID method is particularly useful for testing large payloads because it returns the exact input, enabling straightforward assert_eq!() assertions.
Method ID Generation : Each method has a unique METHOD_ID constant generated at compile time by xxhash::xxh3_64 hashing of the method name. This is defined in the RpcMethodPrebuffered trait implementation:
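For illustration only, a compile-time hash of this shape can be produced with the xxhash-rust crate's const_xxh3 module (the exact crate, feature, and constant layout in the service definition crate are assumptions):

```rust
use xxhash_rust::const_xxh3::xxh3_64;

pub struct Add;

impl Add {
    // Hashing the method name in a const context yields a stable u64 METHOD_ID.
    pub const METHOD_ID: u64 = xxh3_64(b"Add");
}
```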
The Add::METHOD_ID, Mult::METHOD_ID, and Echo::METHOD_ID constants are used both in test code (Add::call()) and in server handler registration (endpoint.register_prebuffered(Add::METHOD_ID, handler)), ensuring consistent method identification across all implementations extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs18
Sources : extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs1 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs21
Running Tests
Tests are executed using standard Cargo commands:
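For example, from the repository root:

```sh
# Run every test in the workspace, including the extension crates
cargo test --workspace
```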
Test Execution Environment : Most integration tests require a Tokio runtime even when testing the WASM client, because the test infrastructure (server, WebSocket bridge) runs on Tokio. The WASM client itself remains runtime-agnostic.
For detailed information on specific testing approaches, see:
- Unit Testing - Patterns for testing individual components
- Integration Testing - End-to-end testing with real transports
Sources : extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs18 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs39
Unit Testing
Relevant source files
- extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs
- src/rpc/rpc_internals/rpc_header.rs
- src/rpc/rpc_request_response.rs
- tests/rpc_dispatcher_tests.rs
Purpose and Scope
This document covers unit testing practices and patterns within the muxio codebase. Unit tests verify individual components in isolation using mock implementations and controlled test scenarios. These tests focus on validating the behavior of core RPC components including RpcDispatcher, RpcServiceCallerInterface, and protocol-level request/response handling.
For information about end-to-end testing with real server instances and WebSocket connections, see Integration Testing. For general testing infrastructure overview, see Testing.
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:1-213 tests/rpc_dispatcher_tests.rs:1-204
Unit Testing Architecture
The unit testing strategy in muxio uses isolated component testing with mock implementations to verify behavior without requiring full runtime environments or network transports.
Architecture Overview: Unit tests instantiate mock implementations that satisfy core trait interfaces (RpcServiceCallerInterface) without requiring actual network connections or async runtimes. Mock implementations use synchronization primitives (Arc<Mutex>, Arc<AtomicBool>) to coordinate test behavior and response injection. Tests verify both success paths and error handling using pattern matching against RpcServiceError types.
graph TB
subgraph "Test Layer"
TEST["Test Functions\n(#[tokio::test])"]
MOCK["MockRpcClient"]
ASSERTIONS["Test Assertions\nassert_eq!, match patterns"]
end
subgraph "Component Under Test"
INTERFACE["RpcServiceCallerInterface\ntrait implementation"]
DISPATCHER["RpcDispatcher"]
CALL["call_rpc_buffered\ncall_rpc_streaming"]
end
subgraph "Supporting Test Infrastructure"
CHANNELS["DynamicSender/Receiver\nmpsc::unbounded, mpsc::channel"]
ENCODER["RpcStreamEncoder\ndummy instances"]
ATOMIC["Arc<AtomicBool>\nconnection state"]
MUTEX["Arc<Mutex<Option<DynamicSender>>>\nresponse coordination"]
end
TEST --> MOCK
MOCK -.implements.-> INTERFACE
MOCK --> CHANNELS
MOCK --> ENCODER
MOCK --> ATOMIC
MOCK --> MUTEX
TEST --> CALL
CALL --> DISPATCHER
TEST --> ASSERTIONS
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:20-93 tests/rpc_dispatcher_tests.rs:30-39
Mock Implementation Pattern
Unit tests use mock implementations of core interfaces to isolate component behavior. The primary mock pattern implements RpcServiceCallerInterface with controllable behavior.
MockRpcClient Structure
| Component | Type | Purpose |
|---|---|---|
| response_sender_provider | Arc<Mutex<Option<DynamicSender>>> | Shared reference to response channel sender for injecting test responses |
| is_connected_atomic | Arc<AtomicBool> | Atomic flag controlling connection state returned by is_connected() |
| get_dispatcher() | Returns Arc<TokioMutex<RpcDispatcher>> | Provides fresh dispatcher instance for each test |
| get_emit_fn() | Returns Arc<dyn Fn(Vec<u8>)> | No-op emit function (network writes not needed) |
| call_rpc_streaming() | Returns (RpcStreamEncoder, DynamicReceiver) | Creates test channels and stores sender for response injection |
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:22-93
Mock Implementation Details
The MockRpcClient struct implements the RpcServiceCallerInterface trait with minimal functionality required for unit testing:
Key Implementation Characteristics:
graph LR
subgraph "MockRpcClient Implementation"
MOCK["MockRpcClient"]
PROVIDER["response_sender_provider\nArc<Mutex<Option<DynamicSender>>>"]
CONNECTED["is_connected_atomic\nArc<AtomicBool>"]
end
subgraph "Trait Methods"
GET_DISP["get_dispatcher()\nreturns new RpcDispatcher"]
GET_EMIT["get_emit_fn()\nreturns no-op closure"]
IS_CONN["is_connected()\nreads atomic bool"]
CALL_STREAM["call_rpc_streaming()\ncreates channels, stores sender"]
end
MOCK --> PROVIDER
MOCK --> CONNECTED
MOCK -.implements.-> GET_DISP
MOCK -.implements.-> GET_EMIT
MOCK -.implements.-> IS_CONN
MOCK -.implements.-> CALL_STREAM
CALL_STREAM --> PROVIDER
IS_CONN --> CONNECTED
- Fresh Dispatcher: Each call to get_dispatcher() returns a new RpcDispatcher instance wrapped in Arc<TokioMutex> extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:32-34
- No-op Emit: get_emit_fn() returns a closure that discards bytes, as network transmission is not needed in unit tests extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:36-38
- Channel Creation: call_rpc_streaming() creates either bounded or unbounded channels based on DynamicChannelType, stores the sender in shared state for later response injection, and returns a dummy RpcStreamEncoder extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:44-85
- Connection State Control: Tests control the is_connected() return value by modifying the AtomicBool extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:40-42
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:24-93
Unit Test Structure and Patterns
Prebuffered RPC Call Tests
Tests for prebuffered RPC calls follow a standard pattern: instantiate mock client, spawn response injection task, invoke RPC method, verify result.
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:97-133
sequenceDiagram
participant Test as "Test Function"
participant Mock as "MockRpcClient"
participant Task as "tokio::spawn\nresponse task"
participant Sender as "DynamicSender"
participant Call as "call_rpc_buffered"
participant Result as "Test Assertion"
Test->>Mock: Instantiate with Arc<Mutex<Option<DynamicSender>>>
Test->>Task: spawn background task
Test->>Call: invoke with RpcRequest
Call->>Mock: call_rpc_streaming()
Mock->>Mock: create channels
Mock->>Sender: store sender in Arc<Mutex>
Mock-->>Call: return (encoder, receiver)
Task->>Sender: poll for sender availability
Task->>Sender: send_and_ignore(Ok(response_bytes))
Call->>Call: await response from receiver
Call-->>Test: return (encoder, Result<T>)
Test->>Result: assert_eq! or match pattern
Test Case: Successful Buffered Call
The test_buffered_call_success function demonstrates the success path for prebuffered RPC calls:
Test Setup:
- Creates MockRpcClient with is_connected set to true extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:99-105
- Defines echo payload b"hello world".to_vec() and a decode function extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:107-108
Response Injection:
- Spawns a background task that polls sender_provider until the sender becomes available extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:110-121
- Sends Ok(echo_payload) through the channel using send_and_ignore() extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs119
Invocation:
- Constructs an RpcRequest with Echo::METHOD_ID, param bytes, and is_finalized: true extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:123-128
- Calls client.call_rpc_buffered(request, decode_fn).await extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs130
Assertion:
- Verifies returned result equals original echo payload extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs132
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:97-133
Test Case: Remote Error Handling
The test_buffered_call_remote_error function verifies error propagation from service handlers:
Error Injection:
- The background task sends Err(RpcServiceError::Rpc(RpcServiceErrorPayload)) with code RpcServiceErrorCode::Fail and message "item does not exist" extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:155-158
Error Verification:
- Uses pattern matching against Err(RpcServiceError::Rpc(err)) extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:170-176
- Asserts the error code matches RpcServiceErrorCode::Fail extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs172
- Asserts the error message equals the expected string extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs173
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:135-177
Test Case: Trait-Level Error Conversion
The test_prebuffered_trait_converts_error function verifies that the RpcMethodPrebuffered trait correctly propagates service errors:
Trait Invocation:
- Calls Echo::call(&client, b"some input".to_vec()).await directly on the trait method extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs203
- This exercises the higher-level trait abstraction rather than the lower-level call_rpc_buffered
Error Verification:
- Verifies a system error with code RpcServiceErrorCode::System and message "Method has panicked" extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:205-211
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:179-212
graph TB
subgraph "Test Setup"
TEST["rpc_dispatcher_call_and_echo_response"]
OUTBUF["outgoing_buf\nRc<RefCell<Vec<u8>>>"]
end
subgraph "Client Dispatcher"
CLIENT_DISP["client_dispatcher\nRpcDispatcher::new()"]
CLIENT_CALL["dispatcher.call()\nwith emit closure"]
CLIENT_STREAM["RpcStreamEvent handler\nreceives responses"]
end
subgraph "Server Dispatcher"
SERVER_DISP["server_dispatcher\nRpcDispatcher::new()"]
SERVER_READ["dispatcher.read_bytes()\nreturns request IDs"]
SERVER_DELETE["dispatcher.delete_rpc_request()\nretrieves buffered request"]
SERVER_RESPOND["dispatcher.respond()\nemits response frames"]
end
subgraph "Method Handlers"
ADD_HANDLER["ADD_METHOD_ID\nsum numbers"]
MULT_HANDLER["MULT_METHOD_ID\nmultiply numbers"]
end
TEST --> CLIENT_DISP
TEST --> SERVER_DISP
TEST --> OUTBUF
CLIENT_CALL --> OUTBUF
OUTBUF --> SERVER_READ
SERVER_READ --> SERVER_DELETE
SERVER_DELETE --> ADD_HANDLER
SERVER_DELETE --> MULT_HANDLER
ADD_HANDLER --> SERVER_RESPOND
MULT_HANDLER --> SERVER_RESPOND
SERVER_RESPOND --> CLIENT_STREAM
RPC Dispatcher Unit Tests
The RpcDispatcher component is tested using a client-server pair of dispatchers that communicate through shared buffers, simulating network transmission without actual I/O.
Test Architecture: Dispatcher Echo Test
Sources: tests/rpc_dispatcher_tests.rs:30-203
Dispatcher Test Flow
The rpc_dispatcher_call_and_echo_response test demonstrates the complete request-response cycle at the dispatcher level:
Phase 1: Request Encoding and Transmission
| Step | Component | Action |
|---|---|---|
| 1 | Test | Creates AddRequestParams and MultRequestParams structs tests/rpc_dispatcher_tests.rs:10-28 |
| 2 | Test | Encodes parameters using bitcode::encode() tests/rpc_dispatcher_tests.rs:44-46 |
| 3 | Test | Constructs RpcRequest with method ID, param bytes, is_finalized: true tests/rpc_dispatcher_tests.rs:42-49 |
| 4 | Client Dispatcher | Calls dispatcher.call() with chunk size 4, emit closure that writes to outgoing_buf, and stream event handler tests/rpc_dispatcher_tests.rs:74-122 |
| 5 | Emit Closure | Extends outgoing_buf with emitted bytes tests/rpc_dispatcher_tests.rs:80-82 |
Phase 2: Request Reception and Processing
| Step | Component | Action |
|---|---|---|
| 6 | Test | Chunks outgoing_buf into 4-byte segments tests/rpc_dispatcher_tests.rs:127-129 |
| 7 | Server Dispatcher | Calls dispatcher.read_bytes(chunk) returning RPC request IDs tests/rpc_dispatcher_tests.rs:130-132 |
| 8 | Server Dispatcher | Checks is_rpc_request_finalized() for each request ID tests/rpc_dispatcher_tests.rs:135-142 |
| 9 | Server Dispatcher | Calls delete_rpc_request() to retrieve and remove finalized request tests/rpc_dispatcher_tests.rs144 |
| 10 | Test | Decodes param bytes using bitcode::decode() tests/rpc_dispatcher_tests.rs:152-153 |
Phase 3: Response Generation and Transmission
| Step | Component | Action |
|---|---|---|
| 11 | Method Handler | Processes request (sum for ADD, product for MULT) tests/rpc_dispatcher_tests.rs:151-187 |
| 12 | Test | Encodes response using bitcode::encode() tests/rpc_dispatcher_tests.rs:157-159 |
| 13 | Test | Constructs RpcResponse with rpc_request_id, method ID, status, and payload tests/rpc_dispatcher_tests.rs:161-167 |
| 14 | Server Dispatcher | Calls dispatcher.respond() with chunk size 4 and emit closure tests/rpc_dispatcher_tests.rs:193-197 |
| 15 | Emit Closure | Calls client_dispatcher.read_bytes() directly tests/rpc_dispatcher_tests.rs195 |
| 16 | Client Stream Handler | Receives RpcStreamEvent::Header and RpcStreamEvent::PayloadChunk tests/rpc_dispatcher_tests.rs:86-118 |
| 17 | Client Stream Handler | Decodes response bytes and logs results tests/rpc_dispatcher_tests.rs:102-112 |
Sources: tests/rpc_dispatcher_tests.rs:30-203
classDiagram
class AddRequestParams {+Vec~f64~ numbers}
class MultRequestParams {+Vec~f64~ numbers}
class AddResponseParams {+f64 result}
class MultResponseParams {+f64 result}
AddRequestParams ..|> Encode
AddRequestParams ..|> Decode
MultRequestParams ..|> Encode
MultRequestParams ..|> Decode
AddResponseParams ..|> Encode
AddResponseParams ..|> Decode
MultResponseParams ..|> Encode
MultResponseParams ..|> Decode
Request and Response Types
The dispatcher test defines custom request and response types that demonstrate the serialization pattern:
Request Types:
All types derive Encode and Decode from the bitcode crate, enabling efficient binary serialization tests/rpc_dispatcher_tests.rs:10-28
Method ID Constants:
- ADD_METHOD_ID: u64 = 0x01 tests/rpc_dispatcher_tests.rs7
- MULT_METHOD_ID: u64 = 0x02 tests/rpc_dispatcher_tests.rs8
Sources: tests/rpc_dispatcher_tests.rs:7-28
Test Coverage Areas
Components Under Test
| Component | Test File | Key Aspects Verified |
|---|---|---|
| RpcDispatcher | tests/rpc_dispatcher_tests.rs | Request correlation, chunked I/O, response routing, finalization detection |
| RpcServiceCallerInterface | extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs | Prebuffered call success, error propagation, trait-level invocation |
| call_rpc_buffered | extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs | Response deserialization, error handling, channel coordination |
| RpcMethodPrebuffered trait | extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs | Trait method invocation, error conversion |
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:1-213 tests/rpc_dispatcher_tests.rs:1-204
Assertion Patterns
Equality Assertions:
assert_eq!(result.unwrap(), echo_payload)- Verifies successful response matches expected data extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs132assert_eq!(rpc_header.rpc_method_id, rpc_method_id)- Validates header correlation tests/rpc_dispatcher_tests.rs91
Pattern Matching:
match result {
Err(RpcServiceError::Rpc(err)) => {
assert_eq!(err.code, RpcServiceErrorCode::Fail);
assert_eq!(err.message, "item does not exist");
}
_ => panic!("Expected a RemoteError"),
}
extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:170-176
Boolean Assertions:
- assert!(result.is_err()) - Verifies an error return extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs205
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:132-211 tests/rpc_dispatcher_tests.rs91
Running Unit Tests
Unit tests use the standard Rust test infrastructure with async support via tokio::test:
Test Attributes:
- #[tokio::test] - Enables async test execution with a Tokio runtime extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs97
- #[test] - Standard synchronous test (used for non-async dispatcher tests) tests/rpc_dispatcher_tests.rs30
- #[instrument] - Optional tracing instrumentation for debugging tests/rpc_dispatcher_tests.rs31
Running Tests:
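For example, using standard Cargo test filters (the package and test-target names are taken from the file locations listed below):

```sh
# Core dispatcher tests at the workspace root
cargo test --test rpc_dispatcher_tests

# Caller-interface unit tests in the extension crate
cargo test -p muxio-rpc-service-caller
```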
Test Organization:
Unit tests are located in two primary locations:
- Workspace-level tests: the tests/ directory at the repository root contains core component tests tests/rpc_dispatcher_tests.rs1
- Package-level tests: extensions/<package>/tests/ directories contain extension-specific tests extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs1
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs97 tests/rpc_dispatcher_tests.rs:30-31
Integration Testing
Relevant source files
- extensions/muxio-rpc-service-caller/src/lib.rs
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs
- extensions/muxio-rpc-service-caller/tests/dynamic_channel_tests.rs
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs
- extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs
This document describes integration testing patterns in Muxio that validate full client-server communication flows over real network connections. Integration tests verify the complete RPC stack from method invocation through binary framing to response decoding, ensuring all layers work correctly together.
For unit testing of individual components like RpcDispatcher and RpcStreamDecoder, see Unit Testing. For end-to-end application examples, see WebSocket RPC Application.
Purpose and Scope
Integration tests in Muxio validate:
- Complete RPC request/response cycles across real network connections
- Interoperability between different client implementations (native Tokio and WASM) with the server
- Proper handling of large payloads that require chunking and streaming
- Error propagation from server to client through all layers
- WebSocket transport behavior under realistic conditions
These tests use actual RpcServer instances and establish real WebSocket connections, providing high-fidelity validation of the entire system.
Sources: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-20
Integration Test Architecture
Three-Component Test Setup
Muxio integration tests for WASM clients use a three-component architecture to simulate realistic browser-server communication within a Tokio test environment:
Component Roles:
graph TB
subgraph "Test Environment (Tokio Runtime)"
Server["RpcServer\n(muxio-tokio-rpc-server)\nListening on TCP"]
Bridge["WebSocket Bridge\ntokio-tungstenite\nConnect & Forward"]
Client["RpcWasmClient\nRuntime-agnostic\nCallback-based"]
end
Client -->|Callback: send bytes| Bridge
Bridge -->|WebSocket Binary| Server
Server -->|WebSocket Binary| Bridge
Bridge -->|dispatcher.read_bytes| Client
TestCode["Test Code"] -->|Echo::call| Client
Server -->|Register handlers| Handlers["RpcServiceEndpointInterface\nAdd, Mult, Echo"]
style Server fill:#f9f9f9
style Bridge fill:#f9f9f9
style Client fill:#f9f9f9
| Component | Type | Purpose |
|---|---|---|
| RpcServer | Real server instance | Actual production server from muxio-tokio-rpc-server |
| RpcWasmClient | Client under test | WASM-compatible client that sends bytes via callback |
| WebSocket Bridge | Test infrastructure | Connects client callback to server socket via real network |
This architecture ensures the WASM client is tested against the actual server implementation rather than mocks, providing realistic validation.
Sources: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-19 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:39-78
WebSocket Bridge Pattern
Bridge Implementation
The WebSocket bridge connects the RpcWasmClient’s callback-based output to a real WebSocket connection. This pattern is essential because WASM clients are designed to be runtime-agnostic and cannot directly create network connections in a Tokio test environment.
sequenceDiagram
participant Test as "Test Code"
participant Client as "RpcWasmClient"
participant ToBridge as "to_bridge_rx\ntokio_mpsc channel"
participant WsSender as "WebSocket\nSender"
participant WsReceiver as "WebSocket\nReceiver"
participant Server as "RpcServer"
Test->>Client: Echo::call(data)
Client->>Client: encode_request()
Client->>ToBridge: Callback: send(bytes)
Note over ToBridge,WsSender: Bridge Task 1: Client → Server
ToBridge->>WsSender: recv() bytes
WsSender->>Server: WsMessage::Binary
Server->>Server: Process RPC
Server->>WsReceiver: WsMessage::Binary
Note over WsReceiver,Client: Bridge Task 2: Server → Client
WsReceiver->>Client: dispatcher.read_bytes()
Client->>Test: Return decoded result
Bridge Setup Code
The bridge consists of two async tasks and a channel:
- Client Output Channel - Created when constructing RpcWasmClient:
extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:83-86
- Bridge Task 1: Client to Server - Forwards bytes from callback to WebSocket:
extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:98-108
- Bridge Task 2: Server to Client - Forwards bytes from WebSocket to dispatcher:
extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:110-123
Blocking Dispatcher Access
A critical detail is the use of task::spawn_blocking for dispatcher operations. The RpcDispatcher uses a blocking mutex (parking_lot::Mutex) because the core Muxio library is non-async to support WASM. In a Tokio runtime, blocking operations must be moved to dedicated threads:
This prevents freezing the async test runtime while maintaining compatibility with the non-async core design.
Sources: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:83-123 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:116-120
Server Setup for Integration Tests
Creating the Test Server
Integration tests create a real RpcServer listening on a random port:
The server is wrapped in Arc for safe sharing between the test code and the spawned server task:
extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:42-48
Handler Registration
Handlers are registered using the RpcServiceEndpointInterface:
extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:51-69
The server is then spawned in the background:
extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:72-78
Sources: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:42-78
graph TB Test["Test Code"] -->|join!| Calls["Multiple RPC Calls"] Calls --> Add1["Add::call\nvec![1.0, 2.0, 3.0]"] Calls --> Add2["Add::call\nvec![8.0, 3.0, 7.0]"] Calls --> Mult1["Mult::call\nvec![8.0, 3.0, 7.0]"] Calls --> Mult2["Mult::call\nvec![1.5, 2.5, 8.5]"] Calls --> Echo1["Echo::call\nb'testing 1 2 3'"] Calls --> Echo2["Echo::call\nb'testing 4 5 6'"] Add1 --> Assert1["assert_eq! 6.0"] Add2 --> Assert2["assert_eq! 18.0"] Mult1 --> Assert3["assert_eq! 168.0"] Mult2 --> Assert4["assert_eq! 31.875"] Echo1 --> Assert5["assert_eq! bytes"] Echo2 --> Assert6["assert_eq! bytes"]
Test Scenarios
Success Path Testing
Basic integration tests validate correct request/response handling for multiple concurrent RPC calls:
The test makes six concurrent RPC calls using tokio::join!:
extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:126-142
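In outline, the call site looks like the following fragment (an excerpt-style sketch assuming an already-connected client and the shared Add, Mult, and Echo definitions in scope; the values mirror the diagram above):

```rust
let (add1, add2, mult1, mult2, echo1, echo2) = tokio::join!(
    Add::call(&client, vec![1.0, 2.0, 3.0]),
    Add::call(&client, vec![8.0, 3.0, 7.0]),
    Mult::call(&client, vec![8.0, 3.0, 7.0]),
    Mult::call(&client, vec![1.5, 2.5, 8.5]),
    Echo::call(&client, b"testing 1 2 3".to_vec()),
    Echo::call(&client, b"testing 4 5 6".to_vec()),
);

assert_eq!(add1.unwrap(), 6.0);
assert_eq!(add2.unwrap(), 18.0);
assert_eq!(mult1.unwrap(), 168.0);
assert_eq!(mult2.unwrap(), 31.875);
assert_eq!(echo1.unwrap(), b"testing 1 2 3".to_vec());
assert_eq!(echo2.unwrap(), b"testing 4 5 6".to_vec());
```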
This validates that:
- Multiple concurrent requests are correctly multiplexed
- Request/response correlation works via
request_idmatching - Type-safe encoding/decoding produces correct results
- The
RpcDispatcherhandles concurrent streams properly
Sources: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:39-142
sequenceDiagram
participant Test as "Test Code"
participant Client as "RpcWasmClient"
participant Bridge as "WebSocket Bridge"
participant Server as "RpcServer"
participant Handler as "Failing Handler"
Test->>Client: Add::call(vec![1.0, 2.0, 3.0])
Client->>Bridge: RpcRequest binary
Bridge->>Server: WebSocket message
Server->>Handler: Invoke handler
Handler->>Handler: Return Err("Addition failed")
Handler->>Server: Error response
Server->>Bridge: RpcResponse with error code
Bridge->>Client: Binary frames
Client->>Client: Decode RpcServiceError
Client->>Test: Err(RpcServiceError::Rpc)
Test->>Test: match error.code\nassert_eq! System
Error Propagation Testing
Integration tests verify that server-side errors propagate correctly through all layers to the client:
Error Handler Registration
The test registers a handler that always fails:
extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:155-160
Error Assertion
The test verifies the error type, code, and message:
extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:210-226
This validates:
RpcServiceErrorserialization and deserialization- Error code preservation (
RpcServiceErrorCode::System) - Error message transmission
- Proper error variant matching on client side
Sources: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:144-227
graph TB
Test["Test: Large Payload"] --> Create["Create payload\n200 × DEFAULT_SERVICE_MAX_CHUNK_SIZE"]
Create --> Call["Echo::call(client, large_payload)"]
Call --> Encode["RpcCallPrebuffered\nencode_request()"]
Encode --> Check{"Size >= chunk_size?"}
Check -->|Yes| Payload["rpc_prebuffered_payload_bytes"]
Check -->|No| Param["rpc_param_bytes"]
Payload --> Stream["RpcDispatcher\nStream as chunks"]
Stream --> Frames["Hundreds of binary frames"]
Frames --> Server["Server receives & reassembles"]
Server --> Echo["Echo handler returns"]
Echo --> Response["Stream response chunks"]
Response --> Client["Client reassembles"]
Client --> Assert["assert_eq! result == input"]
Large Payload Testing
Chunked Streaming Validation
A critical integration test validates that large payloads exceeding the chunk size are properly streamed as multiple frames:
Test Implementation
The test creates a payload 200 times the chunk size to force extensive streaming:
extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:295-311
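In outline, the round trip looks like this hedged fragment (it assumes the Echo definition and a usize DEFAULT_SERVICE_MAX_CHUNK_SIZE constant; the fill byte is arbitrary):

```rust
// Build a payload far larger than a single chunk so the request and the
// echoed response must both be streamed as many frames.
let large_payload = vec![0xABu8; 200 * DEFAULT_SERVICE_MAX_CHUNK_SIZE];

let echoed = Echo::call(&client, large_payload.clone()).await.unwrap();

// A byte-for-byte comparison proves reassembly on both sides was lossless.
assert_eq!(echoed, large_payload);
```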
What This Tests
| Layer | Validation |
|---|---|
RpcCallPrebuffered | Large argument handling logic (extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:58-65) |
RpcDispatcher | Request chunking and response reassembly |
RpcSession | Stream multiplexing with many frames |
| Binary Framing | Frame encoding/decoding at scale |
| WebSocket Transport | Large binary message handling |
| Server Endpoint | Payload buffering and processing |
This end-to-end test validates the complete stack can handle payloads requiring hundreds of frames without data corruption or performance issues.
Sources: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:229-312 extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:30-73
Connection Lifecycle Testing
Handle Connect Pattern
Integration tests must explicitly call handle_connect() to simulate the browser’s WebSocket onopen event:
extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs95
Without this call, the client remains in Disconnected state and RPC calls will fail. This pattern reflects the WASM client’s design where connection state is managed by the JavaScript runtime in production.
Test Timing Considerations
Tests include explicit delays to ensure server initialization:
extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs80
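The delay itself is a single call, shown here with its import for completeness:

```rust
use std::time::Duration;

// Give the spawned server a moment to start listening before connecting.
tokio::time::sleep(Duration::from_millis(200)).await;
```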
This prevents race conditions where the client attempts to connect before the server is listening.
Sources: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs80 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs95
Common Integration Test Patterns
Test Structure Template
All integration tests follow a consistent structure:
Code Structure
-
Server Initialization:
- Create
TcpListenerwith port 0 for random port - Wrap
RpcServerinArcfor shared ownership - Register handlers via
endpoint.register_prebuffered() - Spawn server with
tokio::spawn
- Create
-
Client and Bridge Creation:
- Create
tokio_mpscchannel for callback output - Instantiate
RpcWasmClientwith callback closure - Connect to server using
tokio_tungstenite::connect_async - Call
client.handle_connect()to mark connection ready
- Create
-
Bridge Tasks:
- Spawn task forwarding from channel to WebSocket
- Spawn task forwarding from WebSocket to dispatcher (using
spawn_blocking)
-
Test Execution:
- Use
tokio::join!for concurrent RPC calls - Use
async moveblocks where necessary for ownership
- Use
-
Assertions:
- Validate success results with
assert_eq! - Match error variants explicitly
- Check error codes and messages
- Validate success results with
Sources: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:39-312
Comparison with Unit Tests
Integration tests differ fundamentally from unit tests in Muxio:
| Aspect | Unit Tests | Integration Tests |
|---|---|---|
| Components | Mock RpcServiceCallerInterface | Real RpcServer and RpcWasmClient |
| Transport | In-memory channels | Real WebSocket connections |
| Response Simulation | Manual sender injection | Server processes actual requests |
| Scope | Single component (e.g., dispatcher) | Complete RPC stack |
| Example File | extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs | extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs |
Unit Test Pattern
Unit tests use a MockRpcClient that provides controlled response injection:
extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:24-28
The mock allows direct control of responses without network communication:
extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:110-121
This approach is suitable for testing client-side logic in isolation but does not validate network behavior or server processing.
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:20-93 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-20
Best Practices
Resource Management
- Use
Arcfor Server Sharing:
This prevents server from being moved into the spawn closure while still allowing endpoint access.
-
Proper Channel Cleanup: WebSocket bridge tasks naturally terminate when channels close, preventing resource leaks.
-
Use
spawn_blocking for Synchronous Operations: Always wrap blocking dispatcher operations in task::spawn_blocking to prevent runtime stalls.
Test Reliability
-
Random Port Allocation: Use port 0 to avoid conflicts:
TcpListener::bind("127.0.0.1:0") -
Server Initialization Delay: Include brief sleep after spawning server:
tokio::time::sleep(Duration::from_millis(200)) -
Explicit Connection Lifecycle: Always call
client.handle_connect()before making RPC calls -
Concurrent Call Testing: Use
tokio::join!to validate request multiplexing and correlation
Error Testing
-
Test Multiple Error Codes: Validate
NotFound,Fail, andSystemerror codes separately -
Check Error Message Preservation: Assert exact error messages to ensure proper serialization
-
Test Both Success and Failure: Each integration test suite should include both happy path and error scenarios
Sources: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:39-312
Integration Test Limitations
What Integration Tests Do Not Cover
Integration tests in Muxio focus on RPC protocol correctness over real network connections. They do not address:
-
Browser JavaScript Integration:
- Tests run in Tokio, not actual browser environment
- wasm-bindgen JavaScript glue code not exercised
- Browser WebSocket API behavior not validated
-
Network Failure Scenarios:
- Connection drops and reconnection logic
- Network partition handling
- Timeout behavior under slow networks
-
Concurrent Client Stress:
- High connection count scenarios
- Server resource exhaustion
- Backpressure handling at scale
-
Security:
- TLS/SSL connection validation
- Authentication and authorization flows
- Message tampering detection
For end-to-end browser testing, see JavaScript/WASM Integration. For performance testing, see Performance Optimization.
Sources: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-20
Examples and Tutorials
Relevant source files
This section provides practical, hands-on examples demonstrating how to build applications using the muxio framework. The tutorials progress from understanding the complete example application to creating your own service definitions and implementations.
For information about the underlying architecture and design principles, see Core Concepts. For detailed documentation on specific platform implementations, see Platform Implementations.
Available Examples
The muxio repository includes a complete working example demonstrating the end-to-end process of building an RPC application:
| Example Crate | Purpose | Location |
|---|---|---|
| example-muxio-rpc-service-definition | Shared service contract definitions | examples/example-muxio-rpc-service-definition/ |
| example-muxio-ws-rpc-app | Complete WebSocket RPC application | examples/example-muxio-ws-rpc-app/ |
The example demonstrates:
- Defining shared service contracts using RpcMethodPrebuffered
- Creating an RPC client and making concurrent calls
- Proper error handling and state management
- Connection lifecycle management
Sources : Cargo.lock:426-449 README.md:64-162
Example Application Architecture
The following diagram shows how the example application components relate to each other and to the muxio framework:
Sources : README.md:70-82 Cargo.lock:426-449
graph TB
subgraph "Shared Service Definition"
DEF["example-muxio-rpc-service-definition"]
ADD["Add::METHOD_ID\nRpcMethodPrebuffered"]
MULT["Mult::METHOD_ID\nRpcMethodPrebuffered"]
ECHO["Echo::METHOD_ID\nRpcMethodPrebuffered"]
DEF --> ADD
DEF --> MULT
DEF --> ECHO
end
subgraph "Server Side (example-muxio-ws-rpc-app)"
SERVER["RpcServer::new()"]
ENDPOINT["server.endpoint()"]
HANDLERS["endpoint.register_prebuffered()"]
LISTENER["TcpListener::bind()"]
SERVER --> ENDPOINT
ENDPOINT --> HANDLERS
SERVER --> LISTENER
end
subgraph "Client Side (example-muxio-ws-rpc-app)"
CLIENT["RpcClient::new()"]
CALLS["Add::call()\nMult::call()\nEcho::call()"]
STATE["set_state_change_handler()"]
CLIENT --> CALLS
CLIENT --> STATE
end
subgraph "Muxio Framework"
CALLER_IF["RpcServiceCallerInterface"]
ENDPOINT_IF["RpcServiceEndpointInterface"]
DISPATCHER["RpcDispatcher"]
CALLER_IF --> DISPATCHER
ENDPOINT_IF --> DISPATCHER
end
DEF -.->|imported by| HANDLERS
DEF -.->|imported by| CALLS
HANDLERS -->|implements| ENDPOINT_IF
CALLS -->|uses| CALLER_IF
CLIENT <-.->|WebSocket frames| SERVER
Project Structure for an Example Application
When building a muxio application, the typical project structure separates concerns into distinct crates:
graph LR
subgraph "Workspace Root"
CARGO["Cargo.toml\n[workspace]"]
end
subgraph "Service Definition Crate"
DEF_CARGO["service-definition/Cargo.toml"]
DEF_LIB["service-definition/src/lib.rs"]
DEF_PREBUF["RpcMethodPrebuffered impls"]
DEF_CARGO --> DEF_LIB
DEF_LIB --> DEF_PREBUF
end
subgraph "Application Crate"
APP_CARGO["app/Cargo.toml"]
APP_MAIN["app/src/main.rs"]
SERVER_CODE["Server setup + handlers"]
CLIENT_CODE["Client setup + calls"]
APP_CARGO --> APP_MAIN
APP_MAIN --> SERVER_CODE
APP_MAIN --> CLIENT_CODE
end
CARGO -->|members| DEF_CARGO
CARGO -->|members| APP_CARGO
APP_CARGO -.->|depends on| DEF_CARGO
DEF_CARGO -.->|muxio-rpc-service| MUXIO_RPC["muxio-rpc-service"]
APP_CARGO -.->|muxio-tokio-rpc-server| TOKIO_SERVER["muxio-tokio-rpc-server"]
APP_CARGO -.->|muxio-tokio-rpc-client| TOKIO_CLIENT["muxio-tokio-rpc-client"]
This structure ensures:
- Service definitions are shared between client and server (compile-time type safety)
- Application code depends on the definition crate
- Changes to the service contract require recompilation of both sides
Sources : Cargo.lock:426-449 README.md:70-74
Understanding the Request-Response Flow
The following sequence diagram traces a complete RPC call through the example application:
Sources : README.md:92-162
sequenceDiagram
participant Main as "main() function"
participant Client as "RpcClient"
participant AddTrait as "Add::call()"
participant Caller as "RpcServiceCallerInterface"
participant WS as "WebSocket Transport"
participant Server as "RpcServer"
participant Endpoint as "RpcServiceEndpointInterface"
participant Handler as "register_prebuffered handler"
Note over Main,Handler: Server Setup Phase
Main->>Server: RpcServer::new()
Main->>Server: server.endpoint()
Server->>Endpoint: returns endpoint handle
Main->>Endpoint: endpoint.register_prebuffered(Add::METHOD_ID, handler)
Endpoint->>Handler: stores async handler
Main->>Server: server.serve_with_listener(listener)
Note over Main,Handler: Client Connection Phase
Main->>Client: RpcClient::new(host, port)
Client->>WS: WebSocket connection established
Note over Main,Handler: RPC Call Phase
Main->>AddTrait: Add::call(&client, vec![1.0, 2.0, 3.0])
AddTrait->>AddTrait: Add::encode_request(params)
AddTrait->>Caller: client.call_prebuffered(METHOD_ID, encoded_bytes)
Caller->>WS: Binary frames with stream_id, method_id
WS->>Server: Receive frames
Server->>Endpoint: Dispatch to registered handler
Endpoint->>Handler: invoke with request_bytes, ctx
Handler->>Handler: Add::decode_request(&request_bytes)
Handler->>Handler: let sum = request_params.iter().sum()
Handler->>Handler: Add::encode_response(sum)
Handler->>Endpoint: Ok(response_bytes)
Endpoint->>WS: Binary frames with result
WS->>Caller: Receive frames
Caller->>AddTrait: response_bytes
AddTrait->>AddTrait: Add::decode_response(&response_bytes)
AddTrait->>Main: Ok(6.0)
Detailed Code Flow Through the Example
The following table maps natural language steps to specific code locations:
| Step | Description | Code Location |
|---|---|---|
| 1. Bind server socket | Create TCP listener on random port | README.md88 |
| 2. Create server instance | Instantiate RpcServer wrapped in Arc | README.md95 |
| 3. Get endpoint handle | Retrieve endpoint for handler registration | README.md98 |
| 4. Register Add handler | Register async handler for Add::METHOD_ID | README.md:102-107 |
| 5. Register Mult handler | Register async handler for Mult::METHOD_ID | README.md:108-113 |
| 6. Register Echo handler | Register async handler for Echo::METHOD_ID | README.md:114-118 |
| 7. Spawn server task | Start server with serve_with_listener() | README.md126 |
| 8. Create client | Connect to server via RpcClient::new() | README.md137 |
| 9. Set state handler | Register callback for connection state changes | README.md:139-142 |
| 10. Make concurrent calls | Use join! macro to await multiple RPC calls | README.md:145-152 |
| 11. Verify results | Assert response values match expected results | README.md:154-159 |
Sources : README.md:83-161
Service Definition Pattern
The service definition crate defines the contract between client and server. Each RPC method is implemented as a unit struct that implements the RpcMethodPrebuffered trait:
graph TB
subgraph "Service Definition Structure"
TRAIT["RpcMethodPrebuffered trait"]
ADD_STRUCT["pub struct Add"]
ADD_IMPL["impl RpcMethodPrebuffered for Add"]
ADD_METHOD_ID["const METHOD_ID: u64"]
ADD_REQUEST["type Request = Vec<f64>"]
ADD_RESPONSE["type Response = f64"]
MULT_STRUCT["pub struct Mult"]
MULT_IMPL["impl RpcMethodPrebuffered for Mult"]
ECHO_STRUCT["pub struct Echo"]
ECHO_IMPL["impl RpcMethodPrebuffered for Echo"]
TRAIT -.->|implemented by| ADD_IMPL
TRAIT -.->|implemented by| MULT_IMPL
TRAIT -.->|implemented by| ECHO_IMPL
ADD_STRUCT --> ADD_IMPL
ADD_IMPL --> ADD_METHOD_ID
ADD_IMPL --> ADD_REQUEST
ADD_IMPL --> ADD_RESPONSE
MULT_STRUCT --> MULT_IMPL
ECHO_STRUCT --> ECHO_IMPL
end
subgraph "Generated by Trait"
ENCODE_REQ["encode_request()"]
DECODE_REQ["decode_request()"]
ENCODE_RESP["encode_response()"]
DECODE_RESP["decode_response()"]
CALL_METHOD["call()
async fn"]
TRAIT -.->|provides| ENCODE_REQ
TRAIT -.->|provides| DECODE_REQ
TRAIT -.->|provides| ENCODE_RESP
TRAIT -.->|provides| DECODE_RESP
TRAIT -.->|provides| CALL_METHOD
end
subgraph "Serialization"
BITCODE["bitcode::encode/decode"]
ENCODE_REQ --> BITCODE
DECODE_REQ --> BITCODE
ENCODE_RESP --> BITCODE
DECODE_RESP --> BITCODE
end
subgraph "Method ID Generation"
XXHASH["xxhash-rust"]
METHOD_NAME["Method name string"]
ADD_METHOD_ID -.->|hash of| METHOD_NAME
METHOD_NAME --> XXHASH
end
The RpcMethodPrebuffered trait provides default implementations for encoding/decoding using bitcode serialization and a call() method that works with any RpcServiceCallerInterface implementation.
Sources : README.md:71-74 Cargo.lock:426-431
Handler Registration Pattern
Server-side handlers are registered on the endpoint using the register_prebuffered() method. Each handler is an async closure that:
- Receives raw request bytes and a context object
- Decodes the request using the service definition’s
decode_request()method - Performs the business logic
- Encodes the response using
encode_response() - Returns
Result<Vec<u8>, RpcServiceError>
Sources : README.md:101-119
graph LR
subgraph "Handler Registration Flow"
ENDPOINT["endpoint.register_prebuffered()"]
METHOD_ID["Add::METHOD_ID"]
CLOSURE["async closure"]
ENDPOINT -->|key| METHOD_ID
ENDPOINT -->|value| CLOSURE
end
subgraph "Handler Execution Flow"
REQ_BYTES["request_bytes: Vec<u8>"]
CTX["_ctx: Arc<RpcContext>"]
DECODE["Add::decode_request(&request_bytes)"]
LOGIC["Business logic: iter().sum()"]
ENCODE["Add::encode_response(sum)"]
RESULT["Ok(response_bytes)"]
REQ_BYTES --> DECODE
CTX -.->|available but unused| LOGIC
DECODE --> LOGIC
LOGIC --> ENCODE
ENCODE --> RESULT
end
CLOSURE -.->|contains| DECODE
graph TB
subgraph "Client Call Flow"
APP_CODE["Application code"]
CALL_METHOD["Add::call(&client, vec![1.0, 2.0, 3.0])"]
APP_CODE --> CALL_METHOD
end
subgraph "Inside call()
implementation"
ENCODE["encode_request(params)"]
CALL_PREBUF["client.call_prebuffered(METHOD_ID, bytes)"]
AWAIT["await response"]
DECODE["decode_response(&response_bytes)"]
RETURN["Ok(Response)"]
CALL_METHOD --> ENCODE
ENCODE --> CALL_PREBUF
CALL_PREBUF --> AWAIT
AWAIT --> DECODE
DECODE --> RETURN
end
subgraph "Client Implementation"
CLIENT["RpcClient\n(implements RpcServiceCallerInterface)"]
DISPATCHER["RpcDispatcher"]
SESSION["RpcSession"]
CALL_PREBUF -.->|delegates to| CLIENT
CLIENT --> DISPATCHER
DISPATCHER --> SESSION
end
RETURN --> APP_CODE
Client Call Pattern
Client-side calls use the service definition’s call() method, which is automatically provided by the RpcMethodPrebuffered trait:
The call() method handles all serialization, transport, and deserialization automatically. Application code works with typed Rust structs, never touching raw bytes.
Sources : README.md:145-152
graph TB
subgraph "Concurrent Calls with join!"
JOIN["join!()
macro"]
CALL1["Add::call(&client, vec![1.0, 2.0, 3.0])"]
CALL2["Add::call(&client, vec![8.0, 3.0, 7.0])"]
CALL3["Mult::call(&client, vec![8.0, 3.0, 7.0])"]
CALL4["Mult::call(&client, vec![1.5, 2.5, 8.5])"]
CALL5["Echo::call(&client, b\"testing 1 2 3\")"]
CALL6["Echo::call(&client, b\"testing 4 5 6\")"]
JOIN --> CALL1
JOIN --> CALL2
JOIN --> CALL3
JOIN --> CALL4
JOIN --> CALL5
JOIN --> CALL6
end
subgraph "Multiplexing Layer"
SESSION["RpcSession"]
STREAM1["Stream ID: 1"]
STREAM2["Stream ID: 2"]
STREAM3["Stream ID: 3"]
STREAM4["Stream ID: 4"]
STREAM5["Stream ID: 5"]
STREAM6["Stream ID: 6"]
SESSION --> STREAM1
SESSION --> STREAM2
SESSION --> STREAM3
SESSION --> STREAM4
SESSION --> STREAM5
SESSION --> STREAM6
end
subgraph "Single WebSocket Connection"
WS["Binary frames interleaved\nover single connection"]
end
CALL1 -.->|assigned| STREAM1
CALL2 -.->|assigned| STREAM2
CALL3 -.->|assigned| STREAM3
CALL4 -.->|assigned| STREAM4
CALL5 -.->|assigned| STREAM5
CALL6 -.->|assigned| STREAM6
SESSION --> WS
subgraph "Result Tuple"
RESULTS["(res1, res2, res3, res4, res5, res6)"]
end
JOIN --> RESULTS
Concurrent Request Handling
The example demonstrates concurrent request handling using Tokio’s join! macro:
Each concurrent call is assigned a unique stream ID, allowing frames to be interleaved over the single WebSocket connection. The join! macro waits for all responses before proceeding.
Sources : README.md:145-152
State Change Handling
The client supports registering a state change handler to track connection lifecycle:
| State | Description | Typical Response |
|---|---|---|
| Connecting | Initial connection attempt | Log connection start |
| Connected | WebSocket established | Enable UI, start heartbeat |
| Disconnected | Connection lost | Disable UI, attempt reconnect |
| Error | Connection error occurred | Log error, notify user |
The handler is registered using set_state_change_handler():
This callback-driven pattern enables reactive behavior without blocking the main application logic.
Sources : README.md:139-142
graph LR
subgraph "Handler Error Flow"
HANDLER["Handler function"]
DECODE_ERR["decode_request()
fails"]
LOGIC_ERR["Business logic fails"]
ENCODE_ERR["encode_response()
fails"]
HANDLER --> DECODE_ERR
HANDLER --> LOGIC_ERR
HANDLER --> ENCODE_ERR
end
subgraph "Error Propagation"
RPC_ERR["Err(RpcServiceError)"]
ENDPOINT["RpcServiceEndpointInterface"]
DISPATCHER["RpcDispatcher"]
TRANSPORT["Binary error frame"]
DECODE_ERR --> RPC_ERR
LOGIC_ERR --> RPC_ERR
ENCODE_ERR --> RPC_ERR
RPC_ERR --> ENDPOINT
ENDPOINT --> DISPATCHER
DISPATCHER --> TRANSPORT
end
subgraph "Client Side"
CLIENT_CALL["call()
method"]
RESULT["Err(RpcServiceError)"]
APP["Application code"]
TRANSPORT --> CLIENT_CALL
CLIENT_CALL --> RESULT
RESULT --> APP
end
Error Handling in Handlers
Handlers return Result<Vec<u8>, RpcServiceError>. The framework automatically propagates errors to the client:
Common error scenarios:
- Malformed request bytes (deserialization failure)
- Business logic errors (e.g., division by zero, validation failure)
- Resource errors (e.g., database unavailable)
All errors are serialized and transmitted back to the client as part of the RPC protocol.
Sources : README.md:102-118
Running the Example
To run the complete example application:
- Clone the repository
- Navigate to the workspace root
- Execute the example:
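A typical invocation, assuming the package name matches the crate listed above and the command is run from the workspace root:

```sh
cargo run -p example-muxio-ws-rpc-app
```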
Expected output includes:
- Server binding to random port
- Handler registration confirmations
- Client connection establishment
- State change callback invocations
- Successful assertion of all RPC results
The example demonstrates a complete lifecycle:
- Server starts and binds to a port
- Handlers are registered for three methods
- Client connects via WebSocket
- Six concurrent RPC calls are made
- All responses are verified
- The application exits cleanly
Sources : README.md:64-162
Best Practices from the Example
The example application demonstrates several important patterns:
| Pattern | Implementation | Benefit |
|---|---|---|
| Arc-wrapped server | Arc::new(RpcServer::new(None)) | Safe sharing across async tasks |
| Random port binding | TcpListener::bind("127.0.0.1:0") | Avoids port conflicts in testing |
| Concurrent registration | join!() for handler registration | Parallel setup reduces startup time |
| Async handler closures | async move { ... } | Enables async business logic |
| Destructured joins | let (res1, res2, ...) = join!(...) | Clear result assignment |
| State change callbacks | set_state_change_handler() | Reactive connection management |
These patterns can be adapted for production applications with appropriate error handling, logging, and configuration management.
Sources : README.md:83-161
WebSocket RPC Application Example
Relevant source files
Purpose and Scope
This document provides a complete walkthrough of the example-muxio-ws-rpc-app demonstration application, showing how to build a functional WebSocket-based RPC system using muxio. The example covers server initialization, handler registration, client connection, and executing RPC calls with shared type-safe service definitions.
For information about defining custom RPC services, see Defining a Simple RPC Service. For details about cross-platform deployment strategies, see Cross-Platform Deployment. For the underlying transport mechanisms, see Tokio RPC Server and Tokio RPC Client.
Application Architecture
The example application demonstrates a complete client-server interaction cycle using three main components:
| Component | Crate | Role |
|---|---|---|
| Service Definitions | example-muxio-rpc-service-definition | Shared RPC method contracts (Add, Mult, Echo) |
| Server | example-muxio-ws-rpc-app (server code) | Tokio-based WebSocket server with handler registration |
| Client | example-muxio-ws-rpc-app (client code) | Tokio-based client making concurrent RPC calls |
Diagram: Example Application Component Structure
Sources :
Shared Service Definitions
The example uses three RPC methods defined in example-muxio-rpc-service-definition:
| Method | Purpose | Request Type | Response Type |
|---|---|---|---|
| Add | Sum a vector of floats | Vec<f64> | f64 |
| Mult | Multiply a vector of floats | Vec<f64> | f64 |
| Echo | Echo back binary data | Vec<u8> | Vec<u8> |
Each method implements the RpcMethodPrebuffered trait, providing:
- METHOD_ID: Compile-time generated method identifier
- call(): Client-side invocation function
- encode_request() / decode_request(): Request serialization
- encode_response() / decode_response(): Response serialization
Diagram: Service Definition Structure
graph LR
subgraph "RpcMethodPrebuffered Trait"
TRAIT["const METHOD_ID: u64\nencode_request\ndecode_request\nencode_response\ndecode_response\ncall"]
end
subgraph "Implementations"
ADD_IMPL["Add struct\nMETHOD_ID = xxhash('Add')\nRequest: Vec<f64>\nResponse: f64"]
MULT_IMPL["Mult struct\nMETHOD_ID = xxhash('Mult')\nRequest: Vec<f64>\nResponse: f64"]
ECHO_IMPL["Echo struct\nMETHOD_ID = xxhash('Echo')\nRequest: Vec<u8>\nResponse: Vec<u8>"]
end
TRAIT -.implemented by.-> ADD_IMPL
TRAIT -.implemented by.-> MULT_IMPL
TRAIT -.implemented by.-> ECHO_IMPL
Sources :
Server Setup and Initialization
The server initialization occurs in a dedicated block within main():
Diagram: Server Initialization Flow
sequenceDiagram
participant MAIN as main function
participant LISTENER as TcpListener
participant SERVER as RpcServer
participant ENDPOINT as endpoint
participant TASK as tokio::spawn
MAIN->>LISTENER: TcpListener::bind("127.0.0.1:0")
Note over MAIN: Binds to random available port
MAIN->>MAIN: tcp_listener_to_host_port
Note over MAIN: Extracts host and port
MAIN->>SERVER: RpcServer::new(None)
MAIN->>SERVER: Arc::new(server)
Note over SERVER: Wrapped in Arc for sharing
MAIN->>ENDPOINT: server.endpoint()
Note over ENDPOINT: Get handler registration interface
MAIN->>ENDPOINT: register_prebuffered(Add::METHOD_ID, handler)
MAIN->>ENDPOINT: register_prebuffered(Mult::METHOD_ID, handler)
MAIN->>ENDPOINT: register_prebuffered(Echo::METHOD_ID, handler)
Note over ENDPOINT: Handlers registered concurrently with join!
MAIN->>TASK: tokio::spawn(server.serve_with_listener)
Note over TASK: Server runs in background task
Sources :
TcpListener Binding
The server binds to a random available port using port 0:
Key Code Entities :
- TcpListener::bind(): Tokio's async TCP listener
- tcp_listener_to_host_port(): Utility to extract host and port from the listener
Sources :
RpcServer Creation
The RpcServer is created and wrapped in Arc for shared ownership:
Key Code Entities :
- RpcServer::new(None): Creates the server with default configuration
- server.endpoint(): Returns the RpcServiceEndpointInterface for handler registration
- Arc: Enables shared ownership across async tasks
Sources :
graph TB
ENDPOINT["endpoint\n(RpcServiceEndpointInterface)"]
subgraph "Handler Closure Signature"
CLOSURE["async move closure\n/request_bytes: Vec<u8>, _ctx/ -> Result"]
end
subgraph "Handler Implementation Steps"
DECODE["1. Decode request bytes\nMethod::decode_request(&request_bytes)"]
PROCESS["2. Process business logic\n(sum, product, echo)"]
ENCODE["3. Encode response\nMethod::encode_response(result)"]
end
ENDPOINT -->|register_prebuffered METHOD_ID, closure| CLOSURE
CLOSURE --> DECODE
DECODE --> PROCESS
PROCESS --> ENCODE
ENCODE -->|Ok response_bytes| ENDPOINT
Handler Registration
Handlers are registered using the register_prebuffered method, which accepts a method ID and an async closure:
Diagram: Handler Registration Pattern
Sources :
Add Handler Example
The Add handler sums a vector of floats:
Key Operations :
- Add::decode_request(): Deserializes request bytes into Vec<f64>
- iter().sum(): Computes the sum using standard iterator methods
- Add::encode_response(): Serializes the f64 result back to bytes
Sources :
Concurrent Handler Registration
All handlers are registered concurrently using tokio::join!:
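In outline, the registration looks like this hedged fragment (add_handler, mult_handler, and echo_handler are placeholder closures, and register_prebuffered is assumed to return a future as described above):

```rust
// Register all three prebuffered handlers in parallel before serving.
tokio::join!(
    endpoint.register_prebuffered(Add::METHOD_ID, add_handler),
    endpoint.register_prebuffered(Mult::METHOD_ID, mult_handler),
    endpoint.register_prebuffered(Echo::METHOD_ID, echo_handler),
);
```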
This ensures all handlers are registered before the server begins accepting connections.
Sources :
Server Task Spawning
The server is spawned into a background task:
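A sketch of the spawn step using the serve_with_listener call named below; the task handle is only needed if the application wants to await or abort the server later.
```rust
let server_task = tokio::spawn({
    // Clone the Arc so the background task owns its own reference.
    let server = Arc::clone(&server);
    async move { server.serve_with_listener(listener).await }
});
```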
Key Code Entities :
- tokio::spawn(): Spawns async task on Tokio runtime
- Arc::clone(&server): Clones Arc reference for task ownership
- serve_with_listener(): Accepts connections and dispatches to handlers
Sources :
sequenceDiagram
participant MAIN as main function
participant CLIENT as RpcClient
participant HANDLER as state_change_handler
participant SERVER as RpcServer
MAIN->>MAIN: tokio::time::sleep(200ms)
Note over MAIN: Wait for server startup
MAIN->>CLIENT: RpcClient::new(host, port)
CLIENT->>SERVER: WebSocket connection
SERVER-->>CLIENT: Connection established
MAIN->>CLIENT: set_state_change_handler(callback)
Note over HANDLER: Callback invoked on state changes
MAIN->>CLIENT: Method::call(&client, params)
Note over MAIN: Ready to make RPC calls
Client Connection and Configuration
The client establishes a WebSocket connection to the server:
Diagram: Client Initialization Flow
Sources :
RpcClient Creation
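A sketch of the connection step, based on the constructor documented below; whether the call is awaited and the exact error type are assumptions.
```rust
// Assumed import path; the crate is muxio-tokio-rpc-client.
use muxio_tokio_rpc_client::RpcClient;

// Give the server a moment to start before connecting, as the example does.
tokio::time::sleep(std::time::Duration::from_millis(200)).await;

let rpc_client = RpcClient::new(host, port).await?;
```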
Key Code Entities :
- RpcClient::new(): Creates client and initiates WebSocket connection
- Parameters: Server host (String) and port (u16)
- Returns: Result<RpcClient> on successful connection
Sources :
State Change Handling
The client supports optional state change callbacks:
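A sketch of registering the callback; the RpcTransportState name is taken from this page, while the exact registration signature (including whether it is async) and Debug formatting are assumptions.
```rust
rpc_client.set_state_change_handler(|state: RpcTransportState| {
    // Invoked on transitions such as Connected / Disconnected.
    println!("Transport state changed to: {state:?}");
});
```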
Key Code Entities :
- set_state_change_handler(): Registers callback for transport state changes
- RpcTransportState: Enum representing connection states (Connected, Disconnected, etc.)
- Callback signature: Fn(RpcTransportState)
Sources :
graph LR
CLIENT["client code"]
CALL_METHOD["Method::call\n(client, params)"]
CALLER_IF["RpcServiceCallerInterface\ncall_prebuffered"]
DISPATCHER["RpcDispatcher\nassign request_id\ntrack pending"]
SESSION["RpcSession\nallocate stream_id\nencode frames"]
WS["WebSocket transport"]
CLIENT --> CALL_METHOD
CALL_METHOD --> CALLER_IF
CALLER_IF --> DISPATCHER
DISPATCHER --> SESSION
SESSION --> WS
WS -.response frames.-> SESSION
SESSION -.decoded response.-> DISPATCHER
DISPATCHER -.correlated result.-> CALLER_IF
CALLER_IF -.deserialized.-> CALL_METHOD
CALL_METHOD -.return value.-> CLIENT
Making RPC Calls
RPC calls are made using the static call() method on each service definition:
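A sketch of individual calls using the Method::call pattern documented in this section; values match the table below.
```rust
let sum = Add::call(&rpc_client, vec![1.0, 2.0, 3.0]).await?;      // 6.0
let product = Mult::call(&rpc_client, vec![8.0, 3.0, 7.0]).await?; // 168.0
let echoed = Echo::call(&rpc_client, b"test".to_vec()).await?;     // b"test"
```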
Diagram: RPC Call Execution Pattern
Sources :
Concurrent Call Execution
Multiple calls can be executed concurrently using tokio::join!:
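A sketch of issuing several calls at once: tokio::join! drives all the futures concurrently over the single multiplexed connection, and each result is then checked.
```rust
let (sum, product, echoed) = tokio::join!(
    Add::call(&rpc_client, vec![1.0, 2.0, 3.0]),
    Mult::call(&rpc_client, vec![8.0, 3.0, 7.0]),
    Echo::call(&rpc_client, b"test".to_vec()),
);

assert_eq!(sum?, 6.0);
assert_eq!(product?, 168.0);
assert_eq!(echoed?, b"test".to_vec());
```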
Key Features :
- All six calls execute concurrently over the same WebSocket connection
- Each call gets a unique request ID for response correlation
- Stream multiplexing allows interleaved responses
- join! waits for all responses before proceeding
Sources :
Call Syntax and Type Safety
The call() method is generic and type-safe:
| Call | Input Type | Return Type | Implementation |
|---|---|---|---|
| Add::call(&client, vec![1.0, 2.0, 3.0]) | Vec<f64> | Result<f64> | Sums inputs |
| Mult::call(&client, vec![8.0, 3.0, 7.0]) | Vec<f64> | Result<f64> | Multiplies inputs |
| Echo::call(&client, b"test".into()) | Vec<u8> | Result<Vec<u8>> | Echoes input |
Type Safety Guarantees :
- Compile-time verification of parameter types
- Compile-time verification of return types
- Mismatched types result in compilation errors, not runtime errors
Sources :
Result Validation
The example validates all results with assertions:
Sources :
Complete Execution Flow
Diagram: End-to-End Request/Response Flow
Sources :
Running the Example
The example is located in the example-muxio-ws-rpc-app crate and can be executed with:
Expected Output :
[INFO] Transport state changed to: Connected
[INFO] All assertions passed
The example demonstrates:
- Server starts on random port
- Handlers registered for Add, Mult, Echo
- Client connects via WebSocket
- Six concurrent RPC calls execute successfully
- All responses validated with assertions
- Automatic cleanup on completion
Sources :
Key Takeaways
| Concept | Implementation |
|---|---|
| Shared Definitions | Service methods defined once in example-muxio-rpc-service-definition, used by both client and server |
| Type Safety | Compile-time verification of request/response types via RpcMethodPrebuffered trait |
| Concurrency | Multiple RPC calls multiplex over single WebSocket connection |
| Async Handlers | Server handlers are async closures, enabling non-blocking execution |
| State Management | Optional state change callbacks for connection monitoring |
| Zero Boilerplate | Method calls use simple Method::call(client, params) syntax |
This example provides a foundation for building production WebSocket RPC applications. For streaming RPC patterns, see Streaming RPC Calls. For WASM client integration, see WASM RPC Client.
Sources :
This documentation is part of the "Projects with Books" initiative at zenOSmosis.
The source code for this project is available on GitHub.
Defining a Simple RPC Service
Loading…
Defining a Simple RPC Service
Relevant source files
- README.md
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs
Purpose and Scope
This page provides a step-by-step tutorial for defining RPC services in the muxio framework. It demonstrates how to create a shared service definition crate containing type-safe RPC method contracts that are used by both clients and servers. The tutorial uses the example services Add, Mult, and Echo as practical demonstrations.
For information about implementing server-side handlers that process these requests, see Service Endpoint Interface. For client-side invocation patterns, see Service Caller Interface. For the complete working example application, see WebSocket RPC Application Example.
Overview: Service Definitions as Shared Contracts
Service definitions in muxio are compile-time type-safe contracts shared between clients and servers. By implementing the RpcMethodPrebuffered trait in a common crate, both sides of the communication agree on:
- The method’s unique identifier (via compile-time xxhash)
- The input parameter types
- The output return types
- The serialization/deserialization logic
This approach eliminates an entire class of runtime errors by catching type mismatches at compile time.
Sources: README.md:48-50 README.md:70-74
Architecture: Service Definition Flow
The following diagram illustrates how a service definition crate sits between client and server implementations:
Diagram: Service Definition Shared Between Client and Server
graph TB
subgraph "Shared Service Definition Crate"
SD["example-muxio-rpc-service-definition"]
AddDef["Add Service\nRpcMethodPrebuffered"]
MultDef["Mult Service\nRpcMethodPrebuffered"]
EchoDef["Echo Service\nRpcMethodPrebuffered"]
SD --> AddDef
SD --> MultDef
SD --> EchoDef
end
subgraph "Server Implementation"
Server["RpcServer"]
Endpoint["RpcServiceEndpoint"]
AddHandler["Add::decode_request\nAdd::encode_response"]
MultHandler["Mult::decode_request\nMult::encode_response"]
EchoHandler["Echo::decode_request\nEcho::encode_response"]
Server --> Endpoint
Endpoint --> AddHandler
Endpoint --> MultHandler
Endpoint --> EchoHandler
end
subgraph "Client Implementation"
Client["RpcClient / RpcWasmClient"]
Caller["RpcServiceCallerInterface"]
AddCall["Add::call"]
MultCall["Mult::call"]
EchoCall["Echo::call"]
Client --> Caller
Caller --> AddCall
Caller --> MultCall
Caller --> EchoCall
end
AddDef -.provides.-> AddHandler
AddDef -.provides.-> AddCall
MultDef -.provides.-> MultHandler
MultDef -.provides.-> MultCall
EchoDef -.provides.-> EchoHandler
EchoDef -.provides.-> EchoCall
AddHandler -.uses METHOD_ID.-> AddDef
MultHandler -.uses METHOD_ID.-> MultDef
EchoHandler -.uses METHOD_ID.-> EchoDef
Sources: README.md:66-162
Step 1: Create the Shared Service Definition Crate
The service definitions live in a dedicated crate that is referenced by both client and server applications. In the muxio examples, this is example-muxio-rpc-service-definition.
Crate Dependencies
The service definition crate requires these dependencies:
| Dependency | Purpose |
|---|---|
| muxio-rpc-service | Provides RpcMethodPrebuffered trait |
| bitcode | Binary serialization (with derive feature) |
| serde | Required by bitcode for derive macros |
Sources: Inferred from README.md:71-74 extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:1-6
Step 2: Define Input and Output Types
Each RPC method requires input and output types that implement Serialize and Deserialize from bitcode. For prebuffered methods, these types represent the complete request/response payloads.
Example Type Definitions
For the Add service that sums a vector of floating-point numbers:
- Input: Vec<f64> (the numbers to sum)
- Output: f64 (the result)
For the Echo service that returns data unchanged:
- Input: Vec<u8> (arbitrary binary data)
- Output: Vec<u8> (same binary data)
For the Mult service that multiplies a vector of floating-point numbers:
- Input: Vec<f64> (the numbers to multiply)
- Output: f64 (the product)
Sources: README.md:102-118 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:21-27
Step 3: Implement the RpcMethodPrebuffered Trait
The RpcMethodPrebuffered trait is the core abstraction for defining prebuffered RPC methods. Each service implements this trait to specify its contract.
classDiagram
class RpcMethodPrebuffered {<<trait>>\n+Input: type\n+Output: type\n+METHOD_NAME: &'static str\n+METHOD_ID: u64\n+encode_request(input) Result~Vec~u8~~\n+decode_request(bytes) Result~Input~\n+encode_response(output) Result~Vec~u8~~\n+decode_response(bytes) Result~Output~}
class Add {+Input = Vec~f64~\n+Output = f64\n+METHOD_NAME = "Add"\n+METHOD_ID = xxhash64("Add")}
class Mult {+Input = Vec~f64~\n+Output = f64\n+METHOD_NAME = "Mult"\n+METHOD_ID = xxhash64("Mult")}
class Echo {+Input = Vec~u8~\n+Output = Vec~u8~\n+METHOD_NAME = "Echo"\n+METHOD_ID = xxhash64("Echo")}
RpcMethodPrebuffered <|.. Add
RpcMethodPrebuffered <|.. Mult
RpcMethodPrebuffered <|.. Echo
Trait Structure
Diagram: RpcMethodPrebuffered Trait Implementation Pattern
Required Associated Types and Constants
| Member | Description |
|---|---|
| Input | The type of the method's parameters |
| Output | The type of the method's return value |
| METHOD_NAME | A unique string identifier (used for ID generation) |
| METHOD_ID | A compile-time generated u64 via xxhash of METHOD_NAME |
Required Methods
| Method | Purpose |
|---|---|
| encode_request(input: Self::Input) | Serialize input parameters to bytes using bitcode |
| decode_request(bytes: &[u8]) | Deserialize bytes to input parameters using bitcode |
| encode_response(output: Self::Output) | Serialize output value to bytes using bitcode |
| decode_response(bytes: &[u8]) | Deserialize bytes to output value using bitcode |
Sources: Inferred from extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:10-21 README.md:102-118
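The sketch below illustrates the implementation pattern for the Add method. The trait shape is reproduced from the tables above; the import path, the RpcError placeholder, and the xxhash-rust stand-in for the crate's compile-time hashing are all assumptions and may differ from the real muxio-rpc-service API.
```rust
use muxio_rpc_service::prebuffered::RpcMethodPrebuffered; // assumed path

pub struct Add;

impl RpcMethodPrebuffered for Add {
    type Input = Vec<f64>;
    type Output = f64;

    const METHOD_NAME: &'static str = "Add";
    // Stand-in for the crate's own compile-time xxhash of METHOD_NAME.
    const METHOD_ID: u64 = xxhash_rust::const_xxh64::xxh64(b"Add", 0);

    // `RpcError` stands in for the crate's actual error type.
    fn encode_request(input: Self::Input) -> Result<Vec<u8>, RpcError> {
        Ok(bitcode::encode(&input)) // compact binary, no field names on the wire
    }

    fn decode_request(bytes: &[u8]) -> Result<Self::Input, RpcError> {
        bitcode::decode(bytes).map_err(RpcError::from)
    }

    fn encode_response(output: Self::Output) -> Result<Vec<u8>, RpcError> {
        Ok(bitcode::encode(&output))
    }

    fn decode_response(bytes: &[u8]) -> Result<Self::Output, RpcError> {
        bitcode::decode(bytes).map_err(RpcError::from)
    }
}
```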
sequenceDiagram
participant Dev as "Developer"
participant Trait as "RpcMethodPrebuffered Trait"
participant Compile as "Compile Time"
participant Runtime as "Runtime"
Note over Dev,Runtime: Service Definition Phase
Dev->>Trait: Define Add struct
Dev->>Trait: Set Input = Vec<f64>
Dev->>Trait: Set Output = f64
Dev->>Trait: Set METHOD_NAME = "Add"
Compile->>Compile: Generate METHOD_ID via xxhash("Add")
Dev->>Trait: Implement encode_request
Note right of Trait: Uses bitcode::encode
Dev->>Trait: Implement decode_request
Note right of Trait: Uses bitcode::decode
Dev->>Trait: Implement encode_response
Note right of Trait: Uses bitcode::encode
Dev->>Trait: Implement decode_response
Note right of Trait: Uses bitcode::decode
Note over Dev,Runtime: Usage Phase
Runtime->>Trait: Call Add::encode_request(vec![1.0, 2.0])
Trait->>Runtime: Returns Vec<u8> (serialized)
Runtime->>Trait: Call Add::decode_response(bytes)
Trait->>Runtime: Returns f64 (deserialized sum)
Step 4: Implementation Example
The following diagram shows the complete implementation flow for a single service:
Diagram: Service Implementation and Usage Flow
Sources: README.md:102-106 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:52-56
Step 5: Using Service Definitions on the Server
Once defined, services are registered on the server using their METHOD_ID constant. The server-side handler receives serialized bytes and uses the service’s decode_request and encode_response methods.
Server Registration Pattern
The server registration follows this pattern:
- Access the RpcServiceEndpoint from the server instance
- Call register_prebuffered with the service's METHOD_ID
- Provide an async handler that:
  - Decodes the request using ServiceType::decode_request
  - Performs the business logic
  - Encodes the response using ServiceType::encode_response
Diagram: Server-Side Service Registration Flow
Example: Add Service Handler
From the example application README.md:102-106:
- Handler receives request_bytes: Vec<u8>
- Calls Add::decode_request(&request_bytes)? to get Vec<f64>
- Computes sum: request_params.iter().sum()
- Calls Add::encode_response(sum)? to get response bytes
- Returns Ok(response_bytes)
Sources: README.md:100-119 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:51-69
Step 6: Using Service Definitions on the Client
Clients invoke RPC methods using the RpcCallPrebuffered trait, which is automatically implemented for all types that implement RpcMethodPrebuffered.
Client Invocation Pattern
The RpcCallPrebuffered::call method provides a high-level interface:
Diagram: Client-Side RPC Call Flow
Example: Client Call
From the example application README.md:145-152:
- Call Add::call(&*rpc_client, vec![1.0, 2.0, 3.0])
- Returns Result<f64, RpcServiceError>
- The call method handles all encoding, transport, and decoding
Sources: README.md:144-159 extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:49-97
graph TD
EncodeArgs["Encode Input Arguments"]
CheckSize{"Size >= DEFAULT_SERVICE_MAX_CHUNK_SIZE?"}
SmallPath["Place in rpc_param_bytes\nSingle frame in header"]
LargePath["Place in rpc_prebuffered_payload_bytes\nStreamed after header"]
Transport["Send to RpcDispatcher"]
EncodeArgs --> CheckSize
CheckSize -->|No| SmallPath
CheckSize -->|Yes| LargePath
SmallPath --> Transport
LargePath --> Transport
Step 7: Smart Transport for Large Payloads
The RpcCallPrebuffered implementation includes automatic handling of large argument sets that exceed the default chunk size.
Transport Strategy Decision
Diagram: Automatic Large Payload Handling
| Condition | Strategy | Location |
|---|---|---|
| encoded_args.len() < DEFAULT_SERVICE_MAX_CHUNK_SIZE | Send in header frame | rpc_param_bytes field |
| encoded_args.len() >= DEFAULT_SERVICE_MAX_CHUNK_SIZE | Stream as payload | rpc_prebuffered_payload_bytes field |
This ensures that RPC calls work regardless of argument size without requiring application-level chunking logic.
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:30-65 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:229-312
Complete Service Definition Structure
The following table summarizes the complete structure of a service definition:
| Component | Implementation | Example for Add |
|---|---|---|
| Service struct | Zero-sized type | pub struct Add; |
| Input type | Associated type | Vec<f64> |
| Output type | Associated type | f64 |
| Method name | Static string | "Add" |
| Method ID | Compile-time hash | xxhash64("Add") |
| Request encoder | Bitcode serialize | bitcode::encode(input) |
| Request decoder | Bitcode deserialize | bitcode::decode(bytes) |
| Response encoder | Bitcode serialize | bitcode::encode(output) |
| Response decoder | Bitcode deserialize | bitcode::decode(bytes) |
Sources: README.md:71-74 extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:1-6
Testing Service Definitions
Service definitions can be tested end-to-end using the integration test pattern shown in extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:39-142:
- Start a real RpcServer with handlers registered
- Connect a client (native or WASM)
- Invoke services using the call method
- Assert on the results
graph TB
Server["Start RpcServer\non random port"]
Register["Register handlers\nAdd, Mult, Echo"]
Client["Create RpcClient\nor RpcWasmClient"]
CallService["Invoke Add::call"]
Assert["Assert results"]
Server --> Register
Register --> Client
Client --> CallService
CallService --> Assert
Test Pattern
Diagram: End-to-End Service Test Pattern
This pattern validates the complete round-trip: serialization, transport, dispatch, execution, and deserialization.
Sources: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:39-142 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:229-312
Summary
Defining a simple RPC service in muxio requires:
- Creating a shared crate for service definitions
- Defining input/output types with bitcode serialization
- Implementing the RpcMethodPrebuffered trait with encode/decode methods
- Using compile-time generated METHOD_ID for registration and dispatch
- Registering handlers on the server using the service's static methods
- Invoking methods on the client using the RpcCallPrebuffered::call trait
This pattern ensures compile-time type safety, eliminates a large class of runtime errors, and enables the same service definitions to work across native and WASM clients without modification.
Sources: README.md:66-162 extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:1-99 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-312
This documentation is part of the "Projects with Books" initiative at zenOSmosis.
The source code for this project is available on GitHub.
Advanced Topics
Loading…
Advanced Topics
Relevant source files
Purpose and Scope
This section covers advanced usage patterns, optimization techniques, and extension points for the muxio framework. Topics include cross-platform deployment strategies, performance tuning, custom transport implementations, and deep integration with JavaScript environments via WASM.
For basic usage patterns, see Overview. For specific platform implementations, see Platform Implementations. For service definition basics, see Service Definitions.
Cross-Platform Deployment
Deployment Architecture
The muxio framework enables a single service definition to be deployed across multiple runtime environments without code duplication. The key enabler is the runtime-agnostic core design, which separates platform-specific concerns (transport, async runtime) from business logic.
Sources : README.md:48-52 README.md:66-84 extensions structure from Cargo.toml
graph TB
subgraph "Shared_Service_Contract"
ServiceDef["example-muxio-rpc-service-definition\nRpcMethodPrebuffered traits\nMETHOD_ID constants\nencode_request/decode_response"]
end
subgraph "Native_Tokio_Deployment"
NativeApp["Native Application\nRust Binary"]
RpcClient["RpcClient\nArc<TokioMutex<RpcDispatcher>>\ntokio-tungstenite WebSocket"]
RpcServer["RpcServer\nAxum HTTP Server\ntokio::spawn task pool"]
NativeTransport["TCP/IP Network\nNative OS Stack"]
NativeApp -->|implements| ServiceDef
NativeApp -->|uses| RpcClient
NativeApp -->|serves| RpcServer
RpcClient <-->|WebSocket frames| NativeTransport
RpcServer <-->|WebSocket frames| NativeTransport
end
subgraph "WASM_Browser_Deployment"
WasmApp["Web Application\nwasm-bindgen Module"]
RpcWasmClient["RpcWasmClient\nMUXIO_STATIC_RPC_CLIENT_REF\nthread_local RefCell"]
JsBridge["static_muxio_write_bytes\nJavaScript Callback\nWebSocket.send"]
BrowserTransport["Browser WebSocket API\nJavaScript Runtime"]
WasmApp -->|implements| ServiceDef
WasmApp -->|uses| RpcWasmClient
RpcWasmClient -->|calls| JsBridge
JsBridge -->|invokes| BrowserTransport
end
NativeTransport <-->|Binary Protocol| BrowserTransport
BuildConfig["Build Configuration\ncargo build --target x86_64\ncargo build --target wasm32"]
BuildConfig -.->|native| NativeApp
BuildConfig -.->|wasm32-unknown-unknown| WasmApp
Shared Service Definition Pattern
Both native and WASM clients use the same service definition crate, ensuring compile-time type safety. The service definition declares methods via the RpcMethodPrebuffered trait, which generates:
- METHOD_ID - A compile-time u64 constant derived from the method name via xxhash
- encode_request / decode_request - Serialization using bitcode
- encode_response / decode_response - Deserialization using bitcode
| Component | Native Client | WASM Client | Server |
|---|---|---|---|
| Service Definition | Shared crate dependency | Shared crate dependency | Shared crate dependency |
| Caller Interface | RpcClient implements RpcServiceCallerInterface | RpcWasmClient implements RpcServiceCallerInterface | N/A |
| Endpoint Interface | Optional (bidirectional) | Optional (bidirectional) | RpcServer exposes RpcServiceEndpointInterface |
| Transport | tokio-tungstenite | JavaScript WebSocket | tokio-tungstenite via Axum |
| Async Runtime | Tokio | Browser event loop | Tokio |
Sources : README.md:50 README.md:72-74
Build Configuration Strategy
The workspace uses Cargo features and target-specific dependencies to enable cross-platform builds:
The muxio-wasm-rpc-client crate uses conditional compilation to ensure WASM-specific dependencies like wasm-bindgen, js-sys, and web-sys are only included for WASM targets.
Sources : README.md:39-40 workspace structure from Cargo.toml
Performance Considerations
Data Flow and Chunking Strategy
The muxio framework employs a multi-layered data transformation pipeline optimized for low latency and minimal memory overhead. Understanding this pipeline is critical for performance tuning.
Sources : README.md architecture overview, bitcode usage from Cargo.lock, chunking from src/rpc/rpc_internals/rpc_session.rs
graph LR
subgraph "Application_Layer"
AppData["Rust Struct\nVec<f64> or Custom Type"]
end
subgraph "Serialization_Layer"
BitcodeEncode["bitcode::encode\nCompact Binary\n~70% smaller than JSON"]
SerializedBytes["Vec<u8>\nSerialized Payload"]
end
subgraph "RPC_Protocol_Layer"
RpcRequest["RpcRequest struct\nmethod_id: u64\nparams: Vec<u8>"]
RpcHeader["RpcHeader\nmessage_type: MessageType\nflags: HeaderFlags"]
end
subgraph "Chunking_Layer"
ChunkLogic["DEFAULT_MAX_CHUNK_SIZE\n8KB chunks\nRpcStreamEncoder"]
Chunk1["Chunk 1\n8KB"]
Chunk2["Chunk 2\n8KB"]
ChunkN["Chunk N\nRemaining"]
end
subgraph "Multiplexing_Layer"
StreamId["stream_id assignment\nRpcSession allocator"]
FrameHeader["Frame Header\n4 bytes: stream_id + flags"]
Frame1["Frame 1"]
Frame2["Frame 2"]
end
subgraph "Transport_Layer"
WsFrame["WebSocket Binary Frame"]
Network["TCP/IP Network"]
end
AppData -->|serialize| BitcodeEncode
BitcodeEncode --> SerializedBytes
SerializedBytes --> RpcRequest
RpcRequest --> RpcHeader
RpcHeader --> ChunkLogic
ChunkLogic --> Chunk1
ChunkLogic --> Chunk2
ChunkLogic --> ChunkN
Chunk1 --> StreamId
Chunk2 --> StreamId
ChunkN --> StreamId
StreamId --> FrameHeader
FrameHeader --> Frame1
FrameHeader --> Frame2
Frame1 --> WsFrame
Frame2 --> WsFrame
WsFrame --> Network
Chunking Configuration
The default chunk size is defined in the core library and affects memory usage, latency, and throughput trade-offs:
| Chunk Size | Memory Impact | Latency | Throughput | Use Case |
|---|---|---|---|---|
| 4KB | Lower peak memory | Higher (more frames) | Lower | Memory-constrained environments |
| 8KB (default) | Balanced | Balanced | Balanced | General purpose |
| 16KB | Higher peak memory | Lower (fewer frames) | Higher | High-bandwidth scenarios |
| 32KB+ | High peak memory | Lowest | Highest | Large file transfers |
The chunk size is currently a compile-time constant (DEFAULT_MAX_CHUNK_SIZE). Custom transports can override this by implementing custom encoding logic in their RpcStreamEncoder implementations.
Sources : src/rpc/rpc_internals/rpc_session.rs frame chunking logic
Prebuffering vs Streaming Trade-offs
The framework supports two RPC invocation patterns, each with distinct performance characteristics:
Prebuffered Pattern (RpcMethodPrebuffered):
- Entire request payload buffered in memory before processing
- Entire response payload buffered before returning
- Lower latency for small payloads (< 8KB)
- Simpler error handling (atomic success/failure)
- Example: README.md:102-118 handler registrations
Streaming Pattern (dynamic channels):
- Incremental processing via bounded_channel or unbounded_channel
- Memory usage proportional to channel buffer size
- Enables processing before full payload arrives
- Supports backpressure via bounded channels
- Required for payloads > available memory
| Scenario | Recommended Pattern | Rationale |
|---|---|---|
| Small JSON-like data (< 8KB) | Prebuffered | Single allocation, minimal overhead |
| Medium data (8KB - 1MB) | Prebuffered or Streaming | Depends on memory constraints |
| Large data (> 1MB) | Streaming | Prevents OOM, enables backpressure |
| Real-time data feeds | Streaming | Continuous processing required |
| File uploads/downloads | Streaming | Predictable memory usage |
Sources : README.md:102-118 prebuffered examples, streaming concepts from architecture overview
Smart Transport Strategy for Large Payloads
For payloads exceeding several megabytes, consider implementing a hybrid approach:
- Send small metadata message via muxio RPC
- Transfer large payload via alternative channel (HTTP multipart, object storage presigned URL)
- Send completion notification via muxio RPC
This pattern avoids WebSocket frame size limitations and allows specialized optimization for bulk data transfer while maintaining RPC semantics for control flow.
Example Flow:
Client -> Server: UploadRequest { file_id: "abc", size: 500MB }
Server -> Client: UploadResponse { presigned_url: "https://..." }
Client -> Storage: PUT to presigned_url (outside muxio)
Client -> Server: UploadComplete { file_id: "abc" }
Server -> Client: ProcessingResult { ... }
Sources : Design patterns implied by README.md:46-47 low-latency focus
Extending the Framework
graph TB
subgraph "Core_Traits"
CallerInterface["RpcServiceCallerInterface\nasync fn call_prebuffered\nasync fn call_streaming"]
EndpointInterface["RpcServiceEndpointInterface\nasync fn register_prebuffered\nasync fn register_streaming"]
end
subgraph "Transport_Abstraction"
ReadBytes["read_bytes callback\nfn('static [u8])"]
WriteBytes["write_bytes implementation\nfn send(&self Vec<u8>)"]
end
subgraph "Provided_Implementations"
RpcClient["RpcClient\nTokio + WebSocket"]
RpcWasmClient["RpcWasmClient\nwasm-bindgen bridge"]
RpcServer["RpcServer\nAxum + WebSocket"]
end
subgraph "Custom_Implementation_Example"
CustomTransport["CustomRpcClient\nYour transport layer"]
CustomDispatcher["Arc<Mutex<RpcDispatcher>>\nRequest correlation"]
CustomSession["RpcSession\nStream multiplexing"]
CustomSend["Custom send_bytes\ne.g. UDP, IPC, gRPC"]
CustomTransport -->|owns| CustomDispatcher
CustomDispatcher -->|owns| CustomSession
CustomTransport -->|implements| CustomSend
end
RpcClient -.implements.-> CallerInterface
RpcWasmClient -.implements.-> CallerInterface
RpcServer -.implements.-> EndpointInterface
CustomTransport -.implements.-> CallerInterface
CustomTransport -->|uses| ReadBytes
CustomTransport -->|uses| WriteBytes
CallerInterface -.requires.-> ReadBytes
CallerInterface -.requires.-> WriteBytes
Extension Points and Custom Transports
The muxio framework exposes several well-defined extension points for custom implementations:
Sources : README.md:48 RpcServiceCallerInterface description, extensions/muxio-rpc-service-caller/src/caller_interface.rs extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs
Implementing a Custom Transport
To create a custom transport (e.g., for UDP, Unix domain sockets, or custom protocols), implement the following pattern:
Required Components:
- Transport Handler - Manages the underlying I/O
- RpcDispatcher - Handles request correlation (reuse from core)
- RpcSession - Handles stream multiplexing (reuse from core)
- Trait Implementation - Implements
RpcServiceCallerInterface
Key Integration Points:
| Component | Responsibility | Implementation Required |
|---|---|---|
| read_bytes callback | Feed received bytes to dispatcher | Yes - transport-specific |
| write_bytes function | Send frames to network | Yes - transport-specific |
| RpcDispatcher::call() | Initiate RPC requests | No - use core implementation |
| RpcDispatcher::read() | Process incoming frames | No - use core implementation |
| State management | Track connection lifecycle | Yes - transport-specific |
Example Structure:
custom_transport/
├── src/
│ ├── lib.rs
│ ├── custom_client.rs # Implements RpcServiceCallerInterface
│ ├── custom_transport.rs # Transport-specific I/O
│ └── custom_framing.rs # Adapts RpcSession to transport
Reference implementations: extensions/muxio-tokio-rpc-client/src/rpc_client.rs for async pattern, extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs for callback-driven pattern.
Sources : README.md:35-36 runtime agnostic design, extensions/muxio-tokio-rpc-client/src/rpc_client.rs client structure
Runtime-Specific Adapters
The core library’s non-async, callback-driven design enables integration with diverse runtime environments:
Tokio Integration Pattern:
- Wrap RpcDispatcher in Arc<TokioMutex<_>>
- Spawn background task for read_bytes processing
- Use tokio::spawn for concurrent request handling
- Example: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:30-38
WASM Integration Pattern:
- Store client in thread_local! RefCell
- Use JavaScript callbacks for async operations
- Avoid blocking operations (no std::sync::Mutex)
- Example: extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs
Custom Single-Threaded Runtime:
- Wrap RpcDispatcher in Rc<RefCell<_>>
- Process read_bytes on main thread
- Use callback-based async pattern
- No thread spawning required
Custom Multi-Threaded Runtime:
- Wrap RpcDispatcher in Arc<StdMutex<_>>
- Create thread pool for request handlers
- Use channels for cross-thread communication
- Example pattern from Tokio implementation
Sources : DRAFT.md:48-52 runtime model description, extensions/muxio-tokio-rpc-client/src/rpc_client.rs extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs
graph TB
subgraph "Rust_WASM_Module"
WasmApp["Application Code\ncalls RPC methods"]
RpcWasmClient["RpcWasmClient\nimplements RpcServiceCallerInterface"]
StaticClient["MUXIO_STATIC_RPC_CLIENT_REF\nthread_local RefCell"]
Dispatcher["RpcDispatcher\nrequest correlation"]
Session["RpcSession\nstream multiplexing"]
WasmApp -->|uses| RpcWasmClient
RpcWasmClient -->|stores in| StaticClient
RpcWasmClient -->|owns| Dispatcher
Dispatcher -->|owns| Session
end
subgraph "FFI_Boundary"
WriteBytes["#[wasm_bindgen]\nstatic_muxio_write_bytes\nfn(Vec<u8>)"]
JsCallback["JavaScript Callback\nwindow.muxioWriteBytes\n= function(bytes)"]
WriteBytes -->|invokes| JsCallback
end
subgraph "JavaScript_Runtime"
WebSocket["WebSocket Instance\nws.send(bytes)"]
EventLoop["Browser Event Loop"]
OnMessage["ws.onmessage handler"]
ReadBytes["Read Path\ncalls Rust static_muxio_read_bytes"]
JsCallback -->|calls| WebSocket
WebSocket -->|send| EventLoop
EventLoop -->|receive| OnMessage
OnMessage -->|invokes| ReadBytes
end
subgraph "Rust_Read_Path"
StaticReadFn["#[wasm_bindgen]\nstatic_muxio_read_bytes\nfn(&[u8])"]
DispatcherRead["dispatcher.read()\nframe decoding"]
ReadBytes -->|calls| StaticReadFn
StaticReadFn -->|delegates to| DispatcherRead
end
Session -->|generates frames| WriteBytes
DispatcherRead -->|delivers responses| WasmApp
JavaScript and WASM Integration
WASM Bridge Architecture
The WASM client integrates with JavaScript through a minimal FFI bridge that passes byte arrays between Rust and JavaScript:
Sources : extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs README.md:52 FFI description
Static Client Pattern
The WASM client uses a static singleton pattern due to WASM’s limitations with owned callbacks:
Problem : JavaScript callbacks cannot capture owned Rust data (no 'static lifetime guarantees)
Solution : Store client in thread-local static storage
Key Characteristics:
- thread_local! ensures single-threaded access (WASM is single-threaded)
- RefCell provides interior mutability
- Option allows initialization after module load
- All public API functions access the static client
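A minimal sketch of the pattern with a placeholder client type; the real crate's static is named MUXIO_STATIC_RPC_CLIENT_REF and may wrap a different smart pointer.
```rust
use std::cell::RefCell;
use std::rc::Rc;

// Placeholder for the real RpcWasmClient type.
struct RpcWasmClient;

thread_local! {
    // Starts empty (`None`) and is filled once the WASM module is initialized;
    // `RefCell` provides interior mutability on WASM's single thread.
    static RPC_CLIENT: RefCell<Option<Rc<RpcWasmClient>>> = RefCell::new(None);
}

fn init_client() {
    RPC_CLIENT.with(|slot| *slot.borrow_mut() = Some(Rc::new(RpcWasmClient)));
}

fn with_client<R>(f: impl FnOnce(&RpcWasmClient) -> R) -> Option<R> {
    RPC_CLIENT.with(|slot| slot.borrow().as_ref().map(|client| f(client.as_ref())))
}
```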
Sources : extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:10-12 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs
JavaScript Interop Patterns
The JavaScript side must implement a minimal bridge to complete the integration:
Required JavaScript Setup:
| Function | Purpose | Implementation |
|---|---|---|
| window.muxioWriteBytes | Rust -> JS data transfer | Callback that sends bytes via WebSocket |
| static_muxio_read_bytes() | JS -> Rust data transfer | WASM export invoked from onmessage |
| static_muxio_create_client() | Initialize WASM client | WASM export called on WebSocket open |
| static_muxio_handle_state_change() | Connection lifecycle | WASM export called on WebSocket state changes |
Example JavaScript Integration:
Sources : extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs architecture, README.md:52 wasm-bindgen bridge description
Memory Management Across FFI Boundary
The WASM FFI boundary requires careful memory management to prevent leaks and use-after-free issues:
Rust -> JavaScript (Write Path):
Vec<u8>ownership transferred to JavaScript viawasm-bindgen- JavaScript holds
Uint8Arrayview into WASM linear memory - Critical : JavaScript must not retain references after async operations
- Data copied by
WebSocket.send(), safe to release
JavaScript -> Rust (Read Path):
- JavaScript creates
Uint8Arrayfrom WebSocket data - Passes slice reference to Rust via
wasm-bindgen - Rust copies data into owned structures (
Vec<u8>) - JavaScript can release buffer after function returns
Best Practices:
- Always copy data across FFI boundary, never share references
- Use
wasm-bindgentype conversions (Vec<u8>,&[u8]) - Avoid storing JavaScript arrays in Rust (lifetime issues)
- Avoid storing Rust pointers in JavaScript (invalidation risk)
Sources : extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs implementation, wasm-bindgen safety patterns
WASM Build and Deployment
Building and deploying the WASM client requires specific toolchain configuration:
Build Steps:
Integration into Web Application:
Sources : extensions/muxio-wasm-rpc-client/ structure, WASM build patterns from Cargo.toml target configuration
This documentation is part of the "Projects with Books" initiative at zenOSmosis.
The source code for this project is available on GitHub.
Cross-Platform Deployment
Loading…
Cross-Platform Deployment
Relevant source files
Purpose and Scope
This page explains strategies for deploying muxio-based RPC services across multiple runtime environments using a single shared service definition. It covers how to structure projects, configure builds, and deploy the same business logic to native applications (using Tokio) and web browsers (using WebAssembly).
For details on creating service definitions themselves, see Creating Service Definitions. For information on the WASM JavaScript bridge architecture, see JavaScript and WASM Integration. For runtime-specific implementation details, see Tokio RPC Client and WASM RPC Client.
Cross-Platform Architecture Overview
Muxio achieves cross-platform compatibility through a layered architecture where the core multiplexing and RPC logic remain platform-agnostic, while platform-specific extensions provide concrete implementations for different environments.
graph TB
subgraph "Shared Business Logic"
APP["Application Code\nType-safe RPC calls"]
SERVICE_DEF["Service Definition Crate\nRpcMethodPrebuffered traits\nMETHOD_ID constants"]
end
subgraph "Abstraction Layer"
CALLER_INTERFACE["RpcServiceCallerInterface\nPlatform-agnostic trait"]
ENDPOINT_INTERFACE["RpcServiceEndpointInterface\nHandler registration"]
end
subgraph "Native Platform"
TOKIO_CLIENT["RpcClient\nArc-based lifecycle\ntokio-tungstenite"]
TOKIO_SERVER["RpcServer\nAxum + WebSocket"]
end
subgraph "Web Platform"
WASM_CLIENT["RpcWasmClient\nwasm-bindgen\nJS WebSocket bridge"]
end
subgraph "Core Layer"
MUXIO_CORE["muxio core\nCallback-driven\nNo async runtime"]
RPC_FRAMEWORK["muxio-rpc-service\nMethod ID generation\nbitcode serialization"]
end
APP --> SERVICE_DEF
APP --> CALLER_INTERFACE
SERVICE_DEF --> CALLER_INTERFACE
CALLER_INTERFACE -.implemented by.-> TOKIO_CLIENT
CALLER_INTERFACE -.implemented by.-> WASM_CLIENT
TOKIO_CLIENT --> RPC_FRAMEWORK
WASM_CLIENT --> RPC_FRAMEWORK
TOKIO_SERVER --> ENDPOINT_INTERFACE
RPC_FRAMEWORK --> MUXIO_CORE
ENDPOINT_INTERFACE --> RPC_FRAMEWORK
Platform Independence Strategy
Sources : README.md:35-41 README.md:48-49 Cargo.lock:426-431 Cargo.lock:898-915 Cargo.lock:935-953
Shared Service Definition Pattern
The foundation of cross-platform deployment is a shared service definition crate that contains no platform-specific code. This crate defines the RPC methods, request/response types, and method identifiers that both client and server implementations depend on.
Service Definition Crate Structure
| Component | Purpose | Example |
|---|---|---|
| RPC Method Traits | Define method signatures and serialization | impl RpcMethodPrebuffered for Add |
| Method ID Constants | Compile-time generated identifiers | Add::METHOD_ID |
| Request/Response Types | Shared data structures | Vec<f64> parameters |
| Serialization Logic | Encoding/decoding with bitcode | encode_request(), decode_response() |
Sources : README.md:50-51 README.md:71-74 Cargo.lock:426-431
Platform-Specific Client Implementations
While service definitions remain identical, client implementations differ based on the target platform. Both implementations satisfy the same RpcServiceCallerInterface trait, enabling application code to remain platform-agnostic.
Native (Tokio) Client
The muxio-tokio-rpc-client crate provides a native Rust client implementation using:
- Async runtime : Tokio for concurrent I/O
- Transport :
tokio-tungstenitefor WebSocket connections - Lifecycle :
Arc<RpcClient>for shared ownership across async tasks - State management : Background task for connection monitoring
Sources : Cargo.lock:898-915 README.md:75-77 README.md:137-142
WebAssembly Client
The muxio-wasm-rpc-client crate provides a browser-compatible client using:
- Platform : WebAssembly compiled from Rust
- Transport : JavaScript WebSocket APIs via
wasm-bindgen - Lifecycle : Thread-local static reference (
MUXIO_STATIC_RPC_CLIENT_REF) - Bridge :
static_muxio_write_bytes()function for JS-to-Rust communication
Sources : Cargo.lock:935-953 README.md:40-41 README.md:52-53
Unified Interface Usage
Application code can use the same RPC calling pattern regardless of platform:
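A sketch of what that looks like in practice; the generic bound mirrors the documented pattern, and the trait and error names (RpcServiceCallerInterface, RpcServiceError) are taken from this page rather than from verified signatures.
```rust
use example_muxio_rpc_service_definition::Add; // shared definition crate
use muxio_rpc_service_caller::RpcServiceCallerInterface; // assumed path

// Works with RpcClient (native) or RpcWasmClient (WASM) alike.
// `RpcServiceError` is the error type named in this documentation.
async fn compute_sum<C: RpcServiceCallerInterface>(client: &C) -> Result<f64, RpcServiceError> {
    Add::call(client, vec![1.0, 2.0, 3.0]).await
}
```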
Sources : README.md:145-152
graph TB
subgraph "Core Layer"
MUXIO["muxio\nno_std compatible"]
RPC_SERVICE["muxio-rpc-service\nplatform agnostic"]
end
subgraph "Framework Layer"
CALLER["muxio-rpc-service-caller"]
ENDPOINT["muxio-rpc-service-endpoint"]
end
subgraph "Native Extensions"
TOKIO_CLIENT["muxio-tokio-rpc-client\ntarget: native"]
TOKIO_SERVER["muxio-tokio-rpc-server\ntarget: native"]
end
subgraph "WASM Extensions"
WASM_CLIENT["muxio-wasm-rpc-client\ntarget: wasm32-unknown-unknown"]
end
subgraph "Application Layer"
SERVICE_DEF["example-muxio-rpc-service-definition\nplatform agnostic"]
EXAMPLE_APP["example-muxio-ws-rpc-app\nnative only"]
end
RPC_SERVICE --> MUXIO
CALLER --> RPC_SERVICE
ENDPOINT --> RPC_SERVICE
TOKIO_CLIENT --> CALLER
TOKIO_SERVER --> ENDPOINT
WASM_CLIENT --> CALLER
SERVICE_DEF --> RPC_SERVICE
EXAMPLE_APP --> SERVICE_DEF
EXAMPLE_APP --> TOKIO_CLIENT
EXAMPLE_APP --> TOKIO_SERVER
Cargo Workspace Configuration
Muxio uses a Cargo workspace to organize crates by layer and platform target. This structure enables selective compilation based on target architecture.
Workspace Structure
Dependency Requirements by Platform :
| Crate | Native | WASM | Notes |
|---|---|---|---|
| muxio | ✓ | ✓ | Core, no async required |
| muxio-rpc-service | ✓ | ✓ | Platform-agnostic serialization |
| muxio-tokio-rpc-client | ✓ | ✗ | Requires Tokio runtime |
| muxio-tokio-rpc-server | ✓ | ✗ | Requires Tokio + Axum |
| muxio-wasm-rpc-client | ✗ | ✓ | Requires wasm-bindgen |
Sources : Cargo.lock:830-839 Cargo.lock:858-867 Cargo.lock:898-915 Cargo.lock:918-932 Cargo.lock:935-953
Build Targets and Compilation
Cross-platform deployment requires configuring separate build targets for native and WASM outputs.
Native Build Configuration
Native applications compile with the standard Rust toolchain:
Key dependencies for native builds :
tokiowithfullfeaturestokio-tungstenitefor WebSocket transportaxumfor HTTP server (server-side only)
Sources : Cargo.lock:1417-1432 Cargo.lock:1446-1455 Cargo.lock:80-114
WebAssembly Build Configuration
WASM applications require the wasm32-unknown-unknown target:
Key dependencies for WASM builds :
wasm-bindgenfor Rust-JavaScript interopwasm-bindgen-futuresfor async/await in WASMjs-sysfor JavaScript standard library access- No Tokio or native I/O dependencies
Sources : Cargo.lock:1637-1646 Cargo.lock:1663-1673 Cargo.lock:745-752
Deployment Patterns
Pattern 1: Unified Service Definition
Create a dedicated crate for service definitions that contains no platform-specific code:
my-rpc-services/
├── Cargo.toml # Only depend on muxio-rpc-service + bitcode
└── src/
├── lib.rs
└── methods/
├── user_auth.rs
├── data_sync.rs
└── notifications.rs
Dependencies :
This crate can be used by all platforms.
Sources : Cargo.lock:426-431
Pattern 2: Platform-Specific Applications
Structure application code to depend on platform-appropriate client implementations:
my-project/
├── shared-services/ # Platform-agnostic service definitions
├── native-app/ # Tokio-based desktop/server app
│ ├── Cargo.toml # Depends on muxio-tokio-rpc-client
│ └── src/
├── wasm-app/ # Browser-based web app
│ ├── Cargo.toml # Depends on muxio-wasm-rpc-client
│ └── src/
└── server/ # Backend server
├── Cargo.toml # Depends on muxio-tokio-rpc-server
└── src/
Pattern 3: Conditional Compilation
Use Cargo features to conditionally compile platform-specific code:
Application code uses conditional compilation:
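A sketch of target-gated selection; the module paths mirror the crate names on this page, and the cfg predicate keys on the wasm32-unknown-unknown target described above.
```rust
// Native builds use the Tokio client …
#[cfg(not(target_arch = "wasm32"))]
use muxio_tokio_rpc_client::RpcClient as PlatformClient;

// … while wasm32 builds use the WASM client.
#[cfg(target_arch = "wasm32")]
use muxio_wasm_rpc_client::RpcWasmClient as PlatformClient;

fn platform_name() -> &'static str {
    if cfg!(target_arch = "wasm32") { "web (WASM)" } else { "native (Tokio)" }
}
```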
Sources : Cargo.lock:434-449
sequenceDiagram
participant JS as "JavaScript Host"
participant WS as "WebSocket"
participant WASM as "WASM Module"
participant CLIENT as "RpcWasmClient"
Note over JS,CLIENT: Initialization
JS->>WASM: Import WASM module
WASM->>CLIENT: Initialize static client
JS->>WS: Create WebSocket connection
Note over JS,CLIENT: Outbound RPC Call
CLIENT->>WASM: Emit bytes via callback
WASM->>JS: Call write_bytes_to_js()
JS->>WS: WebSocket.send(bytes)
Note over JS,CLIENT: Inbound Response
WS->>JS: onmessage event
JS->>WASM: static_muxio_write_bytes(bytes)
WASM->>CLIENT: Process response
CLIENT->>CLIENT: Resolve awaited call
WebAssembly-Specific Considerations
JavaScript Bridge Requirements
WASM clients require a thin JavaScript layer to handle WebSocket communication:
Key Functions :
- static_muxio_write_bytes(bytes: &[u8]): Entry point for JavaScript to pass received bytes to WASM
- write_bytes_to_js(): Callback function exposed to WASM for outbound data
Sources : README.md:52-53
Memory Management
WASM clients use different ownership patterns than native clients:
| Aspect | Native (Tokio) | WASM |
|---|---|---|
| Client storage | Arc<RpcClient> | thread_local! RefCell<RpcWasmClient> |
| Dispatcher mutex | TokioMutex | StdMutex |
| Task spawning | tokio::spawn() | Direct execution, no spawning |
| Async runtime | Tokio multi-threaded | Single-threaded, promise-based |
Sources : Cargo.lock:898-915 Cargo.lock:935-953
Build Optimization for WASM
Optimize WASM builds for size:
graph TB
subgraph "Server Process"
LISTENER["TcpListener\nBind to port"]
SERVER["Arc<RpcServer>"]
ENDPOINT["RpcServiceEndpoint\nHandler registry"]
HANDLERS["Registered handlers"]
end
subgraph "Per-Connection"
WS_UPGRADE["WebSocket upgrade"]
CONNECTION["Connection handler"]
DISPATCHER["RpcDispatcher"]
SESSION["RpcSession"]
end
LISTENER --> WS_UPGRADE
WS_UPGRADE --> CONNECTION
SERVER --> ENDPOINT
ENDPOINT --> HANDLERS
CONNECTION --> DISPATCHER
DISPATCHER --> SESSION
HANDLERS --> DISPATCHER
Additional size reduction via wasm-opt:
Server Deployment Considerations
While clients differ by platform, servers typically run in native environments using Tokio.
Server Architecture
Deployment Checklist :
- Single server binary serves all client types (native and WASM)
- WebSocket endpoint accessible to both platforms
- Same binary protocol used for all connections
- Service handlers registered once, used by all clients
Sources : README.md:83-128
Integration Example
The following example shows how shared service definitions enable identical calling patterns across platforms:
Shared Service Definition
Native Client Usage
WASM Client Usage
Server Implementation
Sources : README.md:70-161
Benefits of Cross-Platform Deployment
| Benefit | Description | Implementation |
|---|---|---|
| Code Reuse | Write business logic once | Shared service definition crate |
| Type Safety | Compile-time API contract | RpcMethodPrebuffered trait |
| Binary Efficiency | Same protocol across platforms | bitcode serialization |
| Single Server | One backend serves all clients | Platform-agnostic RpcServer |
| Flexibility | Easy to add new platforms | Implement RpcServiceCallerInterface |
Sources : README.md:48-49 README.md:50-51
This documentation is part of the "Projects with Books" initiative at zenOSmosis.
The source code for this project is available on GitHub.
Performance Considerations
Loading…
Performance Considerations
Relevant source files
- DRAFT.md
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs
Purpose and Scope
This document covers performance characteristics, optimization strategies, and trade-offs in the muxio RPC framework. Topics include binary protocol efficiency, chunking strategies, payload size management, prebuffering versus streaming patterns, and memory management considerations.
For general architecture and design principles, see Design Philosophy. For detailed information about streaming RPC patterns, see Streaming RPC Calls. For cross-platform deployment strategies, see Cross-Platform Deployment.
Binary Protocol Efficiency
The muxio framework is designed for low-overhead communication through several architectural decisions:
Compact Binary Serialization
The framework uses bitcode for serialization instead of text-based formats like JSON. This provides:
- Smaller payload sizes : Binary encoding reduces network transfer costs
- Faster encoding/decoding : No string parsing or formatting overhead
- Type safety : Compile-time verification of serialized structures
- Zero schema overhead : No field names transmitted in messages
The serialization occurs at the RPC service definition layer, where RpcMethodPrebuffered::encode_request and RpcMethodPrebuffered::decode_response handle type conversion.
Schemaless Framing Protocol
The underlying framing protocol is schema-agnostic, meaning:
- No metadata about message structure is transmitted
- Frame headers contain only essential routing information (stream ID, flags)
- Method identification uses 64-bit xxhash values computed at compile time
- Response correlation uses numeric request IDs
This minimalist approach reduces per-message overhead while maintaining full type safety through shared service definitions.
Sources:
Chunking and Payload Size Management
DEFAULT_SERVICE_MAX_CHUNK_SIZE
The framework defines a constant chunk size used for splitting large payloads:
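Illustrative only: the constant is shown with the 64 KB value referenced on this page; its exact location and definition in the codebase may differ.
```rust
// 64 KB upper bound for a single frame's payload.
pub const DEFAULT_SERVICE_MAX_CHUNK_SIZE: usize = 64 * 1024;
```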
This value represents the maximum size of a single frame’s payload. Any data exceeding this size is automatically chunked by the RpcDispatcher and RpcSession layers.
Rationale for 64 KB chunks:
| Factor | Consideration |
|---|---|
| WebSocket compatibility | Many WebSocket implementations handle 64 KB frames efficiently |
| Memory footprint | Limits per-stream buffer requirements |
| Latency vs throughput | Balances sending small chunks quickly vs fewer total frames |
| TCP segment alignment | Aligns reasonably with typical TCP maximum segment sizes |
Smart Transport Strategy for Large Payloads
The framework implements an adaptive strategy for transmitting RPC arguments based on their encoded size:
Small Payload Path ( < 64 KB):
flowchart TB
EncodeArgs["RpcCallPrebuffered::call\nEncode input arguments"]
CheckSize{"encoded_args.len() >=\nDEFAULT_SERVICE_MAX_CHUNK_SIZE?"}
SmallPath["Small payload path:\nSet rpc_param_bytes\nHeader contains full args"]
LargePath["Large payload path:\nSet rpc_prebuffered_payload_bytes\nStreamed after header"]
Dispatcher["RpcDispatcher::call\nCreate RpcRequest"]
Session["RpcSession::write_bytes\nChunk if needed"]
Transport["WebSocket transport"]
EncodeArgs --> CheckSize
CheckSize -->|< 64 KB| SmallPath
CheckSize -->|>= 64 KB| LargePath
SmallPath --> Dispatcher
LargePath --> Dispatcher
Dispatcher --> Session
Session --> Transport
style CheckSize fill:#f9f9f9
style SmallPath fill:#f0f0f0
style LargePath fill:#f0f0f0
The encoded arguments fit in the rpc_param_bytes field of the RpcRequest structure. This field is transmitted as part of the initial request header frame, minimizing round-trips.
Large Payload Path ( >= 64 KB):
The encoded arguments are placed in rpc_prebuffered_payload_bytes. The RpcDispatcher automatically chunks this data into multiple frames, each with its own stream ID and sequence flags.
This prevents request header frames from exceeding transport limitations while ensuring arguments of any size can be transmitted.
Sources:
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:30-48
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:58-72
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:297-300
Prebuffering vs Streaming Trade-offs
The framework provides two distinct patterns for RPC calls, each with different performance characteristics:
Prebuffered RPC Pattern
Characteristics:
- Entire request payload buffered in memory before transmission begins
- Entire response payload buffered before processing begins
- Uses RpcCallPrebuffered trait and call_rpc_buffered method
- Sets is_finalized: true on RpcRequest
Performance implications:
| Aspect | Impact |
|---|---|
| Memory usage | Higher - full payload in memory simultaneously |
| Latency | Higher initial latency - must encode entire payload first |
| Throughput | Optimal for small-to-medium payloads |
| Simplicity | Simpler error handling - all-or-nothing semantics |
| Backpressure | None - sender controls pacing |
Optimal use cases:
- Small payloads (< 10 MB)
- Computations requiring full dataset before processing
- Simple request/response patterns
- Operations where atomicity is important
Streaming RPC Pattern
Characteristics:
- Incremental transmission using dynamic channels
- Processing begins before entire payload arrives
- Uses RpcMethodStreaming trait (bounded or unbounded channels)
- Supports bidirectional streaming
Performance implications:
| Aspect | Impact |
|---|---|
| Memory usage | Lower - processes data incrementally |
| Latency | Lower initial latency - processing begins immediately |
| Throughput | Better for large payloads |
| Complexity | Requires async channel management |
| Backpressure | Supported via bounded channels |
Optimal use cases:
- Large payloads (> 10 MB)
- Real-time streaming data
- Long-running operations
- File uploads/downloads
- Bidirectional communication
Sources:
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:11-21
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:229-312
Memory Management and Buffering
Per-Stream Decoder Allocation
The RpcSession maintains a separate decoder instance for each active stream:
Memory characteristics:
- Per-stream overhead : Each active stream allocates a decoder with internal buffer
- Buffer growth : Buffers grow dynamically as chunks arrive
- Cleanup timing : Decoders removed on End or Error events
- Peak memory : (concurrent_streams × average_payload_size) + overhead
Example calculation for prebuffered calls:
Scenario: 10 concurrent RPC calls, each with 5 MB response
Peak memory ≈ 10 × 5 MB = 50 MB (excluding overhead)
Encoder Lifecycle
The RpcStreamEncoder is created per-request and manages outbound chunking:
- Created when RpcDispatcher::call initiates a request
- Holds reference to payload bytes during transmission
- Automatically chunks data based on DEFAULT_SERVICE_MAX_CHUNK_SIZE
- Dropped after final chunk transmitted
For prebuffered calls, the encoder is returned to the caller, allowing explicit lifecycle management:
Pending Request Tracking
The RpcDispatcher maintains a HashMap of pending requests:
Entry lifecycle:
- Inserted when call or call_rpc_buffered is invoked
- Maintained until response received or timeout
- Removed on successful response, error, or explicit cleanup
- Each entry holds a oneshot::Sender or callback for result delivery
Memory impact : Proportional to number of in-flight requests. Each entry contains minimal overhead (sender channel + metadata).
Sources:
graph LR
subgraph "Async/Await Model"
A1["Task spawn overhead"]
A2["Future state machine"]
A3["Runtime scheduler"]
A4["Context switching"]
end
subgraph "muxio Callback Model"
M1["Direct function calls"]
M2["No state machines"]
M3["No runtime dependency"]
M4["Deterministic execution"]
end
A1 -.higher overhead.-> M1
A2 -.higher overhead.-> M2
A3 -.higher overhead.-> M3
A4 -.higher overhead.-> M4
Non-Async Callback Model Performance
The framework’s non-async, callback-driven architecture provides specific performance characteristics:
Runtime Overhead Comparison
Performance advantages:
| Factor | Benefit |
|---|---|
| No async runtime | Eliminates scheduler overhead |
| Direct callbacks | No future polling or waker mechanisms |
| Deterministic flow | Predictable execution timing |
| WASM compatible | Works in single-threaded browser contexts |
| Memory efficiency | No per-task stack allocation |
Performance limitations:
| Factor | Impact |
|---|---|
| Synchronous processing | Long-running callbacks block progress |
| No implicit parallelism | Concurrency must be managed explicitly |
| Callback complexity | Deep callback chains increase stack usage |
Read/Write Operation Flow
This synchronous model means:
- Low latency : No context switching between read and callback invocation
- Predictable timing : Callback invoked immediately when data complete
- Stack-based execution : Entire chain executes on single thread/stack
- No allocations : No heap allocation for task state
Sources:
Connection and Stream Multiplexing Efficiency
Stream ID Allocation Strategy
The RpcSession allocates stream IDs sequentially:
Efficiency characteristics:
- O(1) allocation : No data structure lookup required
- Collision-free : Client/server use separate number spaces
- Reuse strategy : IDs wrap after exhaustion (u32 range)
- No cleanup needed : Decoders removed, IDs naturally recycled
graph TB
SingleConnection["Single WebSocket Connection"]
Multiplexer["RpcSession Multiplexer"]
subgraph "Interleaved Streams"
S1["Stream 1\nLarge file upload\n1000 chunks"]
S2["Stream 3\nQuick query\n1 chunk"]
S3["Stream 5\nMedium response\n50 chunks"]
end
SingleConnection --> Multiplexer
Multiplexer --> S1
Multiplexer --> S2
Multiplexer --> S3
Timeline["Frame sequence: [1,3,1,1,5,3,1,5,1,...]"]
Multiplexer -.-> Timeline
Note1["Stream 3 completes quickly\ndespite Stream 1 still transmitting"]
S2 -.-> Note1
Concurrent Request Handling
The framework supports concurrent requests over a single connection through stream multiplexing:
Performance benefits:
- Head-of-line avoidance : Small requests don’t wait for large transfers
- Resource efficiency : Single connection handles all operations
- Lower latency : No connection establishment overhead per request
- Fairness : Chunks from different streams interleave naturally
Example throughput:
Scenario: 1 large transfer (100 MB) + 10 small queries (10 KB each)
Without multiplexing: Small queries wait ~seconds for large transfer
With multiplexing: Small queries complete in ~milliseconds
Sources:
Best Practices and Recommendations
Payload Size Guidelines
| Payload Size | Recommended Pattern | Rationale |
|---|---|---|
| < 64 KB | Prebuffered, inline params | Single frame, no chunking overhead |
| 64 KB - 10 MB | Prebuffered, payload_bytes | Automatic chunking, simple semantics |
| 10 MB - 100 MB | Streaming (bounded channels) | Backpressure control, lower memory |
| > 100 MB | Streaming (bounded channels) | Essential for memory constraints |
Concurrent Request Optimization
For high-throughput scenarios:
Maximum concurrent requests = min(
server_handler_capacity,
client_memory_budget / average_payload_size
)
Example calculation:
Server: 100 concurrent handlers
Client memory budget: 500 MB
Average response size: 2 MB
Optimal concurrency = min(100, 500/2) = min(100, 250) = 100 requests
Chunking Strategy Selection
When DEFAULT_SERVICE_MAX_CHUNK_SIZE (64 KB) is optimal:
- General-purpose RPC with mixed payload sizes
- WebSocket transport (browser or native)
- Balanced latency/throughput requirements
When to consider smaller chunks (e.g., 16 KB):
- Real-time streaming with low-latency requirements
- Bandwidth-constrained networks
- Interactive applications requiring immediate feedback
When to consider larger chunks (e.g., 256 KB):
- High-bandwidth, low-latency networks
- Bulk data transfer scenarios
- When minimizing frame overhead is critical
Note: Chunk size is currently a compile-time constant. Custom chunk sizes require modifying DEFAULT_SERVICE_MAX_CHUNK_SIZE and recompiling.
Memory Optimization Patterns
Pattern 1: Limit concurrent streams
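One way to apply this pattern, assuming a Tokio-based client, is to gate calls behind a semaphore. `make_rpc_call` below is a stand-in for any prebuffered request, not a muxio API.

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

// Cap the number of in-flight requests so pending responses cannot exhaust memory.
async fn run_bounded(payloads: Vec<Vec<u8>>) {
    let permits = Arc::new(Semaphore::new(32)); // at most 32 concurrent streams
    let mut tasks = Vec::new();

    for payload in payloads {
        let permit = permits.clone().acquire_owned().await.expect("semaphore closed");
        tasks.push(tokio::spawn(async move {
            let _permit = permit; // held for the lifetime of the request
            make_rpc_call(payload).await;
        }));
    }

    for task in tasks {
        let _ = task.await;
    }
}

async fn make_rpc_call(_payload: Vec<u8>) {
    // placeholder for a prebuffered RPC call
}
```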
Pattern 2: Streaming for large data
Use streaming RPC methods instead of prebuffered when dealing with large datasets to process data incrementally.
Pattern 3: Connection pooling
For client-heavy scenarios, consider connection pooling to distribute load across multiple connections, avoiding single-connection bottlenecks.
Monitoring and Profiling
The framework uses tracing for observability. Key metrics to monitor:
- RpcDispatcher::call : Request initiation timing
- RpcSession::write_bytes : Frame transmission timing
- RpcStreamDecoder : Chunk reassembly timing
- Pending request count : Memory pressure indicator
- Active stream count : Multiplexing efficiency indicator
Sources:
Performance Testing Results
The integration test suite includes performance validation scenarios:
Large Payload Test (200x Chunk Size)
Test configuration:
- Payload size: 200 × 64 KB = 12.8 MB
- Pattern: Prebuffered echo (round-trip)
- Transport: WebSocket over TCP
- Client: WASM client with bridge
Results demonstrate:
- Successful transmission of 12.8 MB payload
- Automatic chunking into 200 frames
- Correct reassembly and verification
- No memory leaks or decoder issues
This validates the framework’s ability to handle multi-megabyte payloads using the prebuffered pattern with automatic chunking.
Sources:
Extending the Framework
Relevant source files
Purpose and Scope
This document provides guidance for extending the muxio framework with custom transport implementations, runtime-specific client and server implementations, and platform-specific adaptations. It covers the architectural extension points, required trait implementations, and patterns for integrating new runtime environments.
For information about deploying existing implementations across platforms, see Cross-Platform Deployment. For details about the existing Tokio and WASM implementations, see Platform Implementations.
Sources: Cargo.toml:19-31 extensions/muxio-rpc-service-endpoint/Cargo.toml:1-33
Extension Points in the Architecture
The muxio framework provides several well-defined extension points that enable custom implementations without modifying core logic. The layered architecture isolates platform-specific concerns from runtime-agnostic abstractions.
Diagram 1: Framework Extension Points
graph TB
subgraph "Core Layer - No Extensions Needed"
CORE["muxio Core"]
DISPATCHER["RpcDispatcher"]
SESSION["RpcSession"]
FRAME["Binary Framing Protocol"]
end
subgraph "Framework Layer - Extension Points"
CALLER["RpcServiceCallerInterface\nTrait for Client Logic"]
ENDPOINT["RpcServiceEndpointInterface\nTrait for Server Logic"]
SERVICE["RpcMethodPrebuffered\nService Definitions"]
end
subgraph "Platform Layer - Your Extensions"
CUSTOM_CLIENT["Custom RPC Client\nYour Transport"]
CUSTOM_SERVER["Custom RPC Server\nYour Runtime"]
CUSTOM_TRANSPORT["Custom Transport Layer\nYour Protocol"]
end
CORE --> DISPATCHER
CORE --> SESSION
CORE --> FRAME
DISPATCHER --> CALLER
DISPATCHER --> ENDPOINT
SESSION --> CALLER
SESSION --> ENDPOINT
CALLER --> SERVICE
ENDPOINT --> SERVICE
CUSTOM_CLIENT -.implements.-> CALLER
CUSTOM_SERVER -.implements.-> ENDPOINT
CUSTOM_TRANSPORT -.uses.-> CORE
CUSTOM_CLIENT --> CUSTOM_TRANSPORT
CUSTOM_SERVER --> CUSTOM_TRANSPORT
Key Extension Points
| Extension Point | Location | Purpose | Required Traits |
|---|---|---|---|
| Client Implementation | Custom crate | Platform-specific RPC client | RpcServiceCallerInterface |
| Server Implementation | Custom crate | Platform-specific RPC server | RpcServiceEndpointInterface |
| Transport Layer | Any | Custom wire protocol or runtime | None (callback-driven) |
| Service Definitions | Shared crate | Business logic contracts | RpcMethodPrebuffered |
| Feature Flags | Cargo.toml | Conditional compilation | N/A |
Sources: Cargo.toml:20-31 extensions/muxio-rpc-service-endpoint/Cargo.toml:23-27
Creating Custom Transports
Custom transports wrap the RpcDispatcher and provide platform-specific I/O mechanisms. The core dispatcher is runtime-agnostic and callback-driven, enabling integration with any execution model.
Diagram 2: Custom Transport Integration Pattern
graph LR
subgraph "Your Custom Transport"
INIT["Initialize Transport\nCustom I/O Setup"]
WRITE["Write Callback\nfn(Vec<u8>)"]
READ["Read Loop\nPlatform-Specific"]
LIFECYCLE["Connection Lifecycle\nState Management"]
end
subgraph "Core muxio Components"
DISPATCHER["RpcDispatcher"]
SESSION["RpcSession"]
end
INIT --> DISPATCHER
DISPATCHER --> WRITE
READ --> DISPATCHER
LIFECYCLE --> DISPATCHER
DISPATCHER --> SESSION
Transport Implementation Requirements
- Initialize RpcDispatcher with a write callback that sends binary frames via your transport
- Implement a read loop that feeds received bytes to RpcDispatcher::read()
- Handle lifecycle events such as connection, disconnection, and errors
- Manage the concurrency model appropriate for your runtime (async, sync, thread-per-connection, etc.)
Example Transport Structure
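A minimal sketch of the structure, assuming a blocking TCP socket. The dispatcher constructor-with-write-callback and the `read_bytes` method are taken from descriptions on this page; the import path and exact signatures are assumptions, so consult the crate source linked below.

```rust
use std::io::{Read, Write};
use std::net::TcpStream;
use std::sync::{Arc, Mutex};

use muxio::rpc::RpcDispatcher; // import path assumed

fn run_transport(mut socket: TcpStream) -> std::io::Result<()> {
    // 1. Write callback: outgoing frames produced by the dispatcher go to the socket.
    let writer = Arc::new(Mutex::new(socket.try_clone()?));
    let write_cb = move |bytes: Vec<u8>| {
        if let Ok(mut w) = writer.lock() {
            let _ = w.write_all(&bytes);
        }
    };

    // 2. Initialize the dispatcher with that callback (constructor shape assumed).
    let mut dispatcher = RpcDispatcher::new(write_cb);

    // 3. Read loop: feed every received chunk into the dispatcher.
    let mut buf = [0u8; 4096];
    loop {
        let n = socket.read(&mut buf)?;
        if n == 0 {
            break; // 4. Lifecycle: peer closed the connection; clean up here.
        }
        let _ = dispatcher.read_bytes(&buf[..n]);
    }
    Ok(())
}
```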
Key patterns:
- Write Callback : Closure or function that writes bytes to the transport [extensions/muxio-tokio-rpc-client/src/rpc_client.rs:100-226](https://github.com/jzombie/rust-muxio/blob/30450c98/extensions/muxio-tokio-rpc-client/src/rpc_client.rs#L100-L226)
- Read Integration : Feed incoming bytes to RpcDispatcher::read() [src/rpc/rpc_dispatcher.rs:130-264](https://github.com/jzombie/rust-muxio/blob/30450c98/src/rpc/rpc_dispatcher.rs#L130-L264)
- State Management : Track connection lifecycle with callbacks [extensions/muxio-tokio-rpc-client/src/rpc_client.rs:30-38](https://github.com/jzombie/rust-muxio/blob/30450c98/extensions/muxio-tokio-rpc-client/src/rpc_client.rs#L30-L38)
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:30-226 src/rpc/rpc_dispatcher.rs:130-264
Implementing Platform-Specific Clients
Platform-specific clients implement the RpcServiceCallerInterface trait to provide type-safe RPC invocation. The trait is defined in muxio-rpc-service-caller.
Diagram 3: Client Implementation Architecture
graph TB
subgraph "Your Client Implementation"
CLIENT["CustomRpcClient\nPlatform-Specific Lifecycle"]
CALLER_IMPL["impl RpcServiceCallerInterface"]
end
subgraph "Required Integration"
DISPATCHER["RpcDispatcher\nOwned or Arc Reference"]
MUTEX["Sync Primitive\nMutex, TokioMutex, etc."]
WRITE["write_callback\nPlatform I/O"]
READ["read_loop\nBackground Task/Thread"]
end
subgraph "Trait Methods to Implement"
CALL["async fn call()\nRpcRequest -> RpcResponse"]
GET_DISP["fn get_dispatcher()\nAccess to Dispatcher"]
end
CLIENT --> CALLER_IMPL
CLIENT --> DISPATCHER
CLIENT --> MUTEX
CLIENT --> WRITE
CLIENT --> READ
CALLER_IMPL -.implements.-> CALL
CALLER_IMPL -.implements.-> GET_DISP
CALL --> GET_DISP
GET_DISP --> DISPATCHER
Required Trait Implementation
The RpcServiceCallerInterface trait requires implementing two core methods:
| Method | Signature | Purpose |
|---|---|---|
| call | async fn call(&self, request: RpcRequest) -> Result<RpcResponse> | Send RPC request and await response |
| get_dispatcher | fn get_dispatcher(&self) -> Arc<...> | Provide access to the underlying dispatcher |
Client Lifecycle Considerations
- Construction : Initialize RpcDispatcher with a write callback
- Connection : Establish the transport and start the read loop
- Operation : Handle call() invocations by delegating to the dispatcher
- Cleanup : Close the connection and stop background tasks on drop (a minimal layout sketch follows below)
Reference Implementation Pattern
For Tokio-based clients, see extensions/muxio-tokio-rpc-client/src/rpc_client.rs:30-226 which demonstrates:
- Arc-based shared ownership of RpcDispatcher
- TokioMutex for async-safe access
- Background task for the read loop using tokio::spawn
- Connection state tracking with callbacks
For WASM-based clients, see extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:15-32 which demonstrates:
- Single-threaded execution model
- JavaScript bridge integration
- Static singleton pattern for WASM constraints
Sources: extensions/muxio-rpc-service-caller/ extensions/muxio-tokio-rpc-client/src/rpc_client.rs:30-226 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:15-32
Implementing Platform-Specific Servers
Platform-specific servers implement the RpcServiceEndpointInterface trait to handle incoming RPC requests and dispatch them to registered handlers. The trait is defined in muxio-rpc-service-endpoint.
Diagram 4: Server Implementation Flow
graph TB
subgraph "Your Server Implementation"
SERVER["CustomRpcServer\nPlatform-Specific Server"]
ENDPOINT_IMPL["impl RpcServiceEndpointInterface"]
HANDLER_REG["Handler Registration\nregister_handler()"]
end
subgraph "Request Processing Pipeline"
ACCEPT["Accept Connection\nPlatform-Specific"]
DISPATCHER["RpcDispatcher\nPer-Connection"]
READ["Read Loop\nFeed to Dispatcher"]
ENDPOINT["RpcServiceEndpoint\nHandler Registry"]
end
subgraph "Handler Execution"
DECODE["Decode RpcRequest\nDeserialize Parameters"]
DISPATCH["Dispatch to Handler\nMatch method_id"]
EXECUTE["Execute Handler\nUser Business Logic"]
RESPOND["Encode RpcResponse\nSend via Dispatcher"]
end
SERVER --> ENDPOINT_IMPL
SERVER --> HANDLER_REG
ACCEPT --> DISPATCHER
DISPATCHER --> READ
READ --> ENDPOINT
ENDPOINT --> DECODE
DECODE --> DISPATCH
DISPATCH --> EXECUTE
EXECUTE --> RESPOND
RESPOND --> DISPATCHER
Required Trait Implementation
The RpcServiceEndpointInterface trait provides default implementations but allows customization:
| Method | Default Implementation | Override When |
|---|---|---|
| register_handler | Registers handler in internal map | Custom dispatch logic needed |
| handle_finalized_request | Deserializes and invokes handler | Custom request processing required |
| handle_stream_open | Routes streamed requests | Custom stream handling needed |
Server Architecture Patterns
Connection Management:
- Create a new RpcDispatcher instance per connection
- Create a new RpcServiceEndpoint instance per connection
- Share handler registrations across connections (using Arc)
Handler Registration:
- Register handlers at server startup or connection time
- Use RpcServiceEndpoint::register_handler() with a closure
- Handlers receive deserialized parameters and return typed results (see the handler sketch below)
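A sketch of what a handler might look like. `Echo` and its `decode_request` / `encode_response` helpers are placeholders following the RpcMethodPrebuffered pattern described elsewhere in this documentation, and the exact `register_handler` signature is not reproduced here.

```rust
// Hypothetical handler for a shared `Echo` method definition.
async fn echo_handler(request_bytes: Vec<u8>) -> Result<Vec<u8>, Box<dyn std::error::Error>> {
    // Deserialize the parameters using the shared service definition...
    let params = Echo::decode_request(&request_bytes)?;
    // ...run the business logic (here: echo the value back)...
    let result = params;
    // ...and re-encode the typed result for the response frame.
    Ok(Echo::encode_response(result)?)
}

// Registered once per connection (or via shared registrations) on the endpoint,
// e.g. endpoint.register_handler(Echo::METHOD_ID, echo_handler) — see the
// endpoint interface source for the exact signature.
```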
Reference Implementation Pattern
For Tokio-based servers, see extensions/muxio-tokio-rpc-server/ which demonstrates:
- Axum framework integration for HTTP/WebSocket serving
- Per-connection RpcDispatcher and RpcServiceEndpoint
- Handler registration with async closures
- Graceful shutdown handling
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:65-137 extensions/muxio-tokio-rpc-server/
Feature Flags and Conditional Compilation
The muxio ecosystem uses Cargo feature flags to enable optional dependencies and platform-specific code. This pattern allows extensions to remain lightweight while supporting multiple runtimes.
Diagram 5: Feature Flag Pattern
graph LR
subgraph "Extension Crate Cargo.toml"
DEFAULT["default = []\nMinimal Dependencies"]
FEATURE1["tokio_support\ndep:tokio"]
FEATURE2["async_std_support\ndep:async-std"]
FEATURE3["custom_feature\nYour Feature"]
end
subgraph "Conditional Code"
CFG1["#[cfg(feature = tokio_support)]\nTokio-Specific Impl"]
CFG2["#[cfg(feature = async_std_support)]\nasync-std Impl"]
CFG3["#[cfg(not(any(...)))]\nFallback Impl"]
end
DEFAULT --> CFG3
FEATURE1 --> CFG1
FEATURE2 --> CFG2
FEATURE3 --> CFG1
Feature Flag Best Practices
In Cargo.toml, declare optional dependencies and the features that enable them. In source code, gate runtime-specific modules behind #[cfg(feature = "...")] attributes.
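A generic sketch of the pattern; the feature and dependency names here are illustrative, not taken from an existing extension crate.

```toml
# Cargo.toml
[features]
default = []                      # minimal build pulls in no optional runtime
tokio_support = ["dep:tokio"]     # opt into the Tokio-specific implementation

[dependencies]
tokio = { version = "1", features = ["sync", "rt"], optional = true }
```

```rust
// lib.rs: gate runtime-specific modules behind the feature flag.
#[cfg(feature = "tokio_support")]
mod tokio_impl {
    // Tokio mutexes, spawned read loops, etc.
}

#[cfg(not(feature = "tokio_support"))]
mod sync_impl {
    // Fallback for synchronous or single-threaded runtimes.
}
```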
Example: muxio-rpc-service-endpoint
The muxio-rpc-service-endpoint crate demonstrates this pattern at extensions/muxio-rpc-service-endpoint/Cargo.toml:23-27:
This enables Tokio-specific mutex types while maintaining compatibility with synchronous code.
Sources: extensions/muxio-rpc-service-endpoint/Cargo.toml:23-27 Cargo.toml:39-65
Integration Patterns
When creating a new platform extension, follow these patterns to ensure compatibility with the muxio ecosystem.
Workspace Integration
Directory Structure:
rust-muxio/
├── extensions/
│ ├── muxio-custom-client/
│ │ ├── Cargo.toml
│ │ └── src/
│ │ └── lib.rs
│ └── muxio-custom-server/
│ ├── Cargo.toml
│ └── src/
│ └── lib.rs
Add to workspace: Cargo.toml:19-31
Dependency Configuration
Workspace Dependencies: Cargo.toml:39-48
Extension Crate Dependencies:
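For example, a new client extension's Cargo.toml might declare the following; the exact crate selection and version handling are assumptions.

```toml
[dependencies]
muxio = { workspace = true }
muxio-rpc-service = { workspace = true }
muxio-rpc-service-caller = { workspace = true }    # for a client extension
# muxio-rpc-service-endpoint = { workspace = true } # for a server extension
```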
Metadata Inheritance
Use workspace metadata inheritance to maintain consistency Cargo.toml:1-17:
Testing Integration
Create dev dependencies for integration testing:
Reference the example service definitions for end-to-end tests, as demonstrated in extensions/muxio-rpc-service-endpoint/Cargo.toml:29-33
Sources: Cargo.toml:1-71 extensions/muxio-rpc-service-endpoint/Cargo.toml:1-33
Extension Checklist
When implementing a new platform extension, ensure the following:
For Client Extensions
- Implement the RpcServiceCallerInterface trait
- Create an RpcDispatcher with a platform-specific write callback
- Start a read loop that feeds bytes to RpcDispatcher::read()
- Manage connection lifecycle with state tracking
- Provide async or sync API appropriate for runtime
- Handle connection errors and cleanup
- Add feature flags for optional dependencies
- Document platform-specific requirements
For Server Extensions
- Implement the RpcServiceEndpointInterface trait
- Accept connections and create a per-connection RpcDispatcher
- Create a per-connection RpcServiceEndpoint
- Register handlers from shared definitions
- Implement request routing and execution
- Send responses via RpcDispatcher
- Handle graceful shutdown
- Document server setup and configuration
For Transport Extensions
- Define platform-specific connection type
- Implement write callback for outgoing frames
- Implement read mechanism for incoming frames
- Handle connection establishment and teardown
- Provide error handling and recovery
- Document transport-specific limitations
Sources: extensions/muxio-tokio-rpc-client/ extensions/muxio-tokio-rpc-server/ extensions/muxio-wasm-rpc-client/
JavaScript and WASM Integration
Relevant source files
- README.md
- extensions/muxio-tokio-rpc-client/src/lib.rs
- extensions/muxio-wasm-rpc-client/src/lib.rs
- extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs
- extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs
Purpose and Scope
This page documents the WebAssembly (WASM) integration layer that enables muxio RPC clients to run in browser environments. It covers the bridge architecture between Rust WASM code and JavaScript, the static client pattern used for managing singleton instances, and the specific integration points required for WebSocket communication.
For general information about the WASM RPC Client platform implementation, see WASM RPC Client. For cross-platform deployment strategies, see Cross-Platform Deployment.
Sources : extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:1-182 extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:1-82
WASM Bridge Architecture
The muxio WASM integration uses a byte-passing bridge pattern to connect Rust code compiled to WebAssembly with JavaScript host environments. This design is lightweight and avoids complex FFI patterns by restricting communication to byte arrays (Vec<u8> in Rust, Uint8Array in JavaScript).
Core Bridge Components
The bridge consists of three primary components:
| Component | Type | Role |
|---|---|---|
| RpcWasmClient | Rust struct | Manages RPC dispatcher, endpoint, and connection state |
| static_muxio_write_bytes | JavaScript-callable function | Emits bytes from Rust to JavaScript |
| MUXIO_STATIC_RPC_CLIENT_REF | Thread-local singleton | Stores the static client instance |
The RpcWasmClient structure extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:16-24 contains:
- dispatcher : An Arc<tokio::sync::Mutex<RpcDispatcher>> for request/response correlation
- endpoint : An Arc<RpcServiceEndpoint<()>> for handling incoming RPC calls
- emit_callback : An Arc<dyn Fn(Vec<u8>)> that bridges to JavaScript
- state_change_handler : Callback for connection state changes
- is_connected : Atomic boolean tracking connection status
Diagram: WASM-JavaScript Bridge Architecture
graph TB
subgraph "JavaScript/Browser Environment"
JS_WS["WebSocket API\n(Browser Native)"]
JS_BRIDGE["JavaScript Bridge Layer\nstatic_muxio_write_bytes"]
JS_APP["Web Application Code"]
end
subgraph "WASM Module (Rust Compiled)"
WASM_CLIENT["RpcWasmClient"]
WASM_DISP["RpcDispatcher"]
WASM_EP["RpcServiceEndpoint"]
STATIC_REF["MUXIO_STATIC_RPC_CLIENT_REF\nthread_local RefCell"]
WASM_CLIENT -->|owns Arc| WASM_DISP
WASM_CLIENT -->|owns Arc| WASM_EP
STATIC_REF -->|stores Arc| WASM_CLIENT
end
JS_WS -->|onmessage bytes| JS_BRIDGE
JS_BRIDGE -->|read_bytes &[u8]| WASM_CLIENT
WASM_CLIENT -->|emit_callback Vec<u8>| JS_BRIDGE
JS_BRIDGE -->|send Uint8Array| JS_WS
JS_APP -->|RPC method calls| WASM_CLIENT
WASM_CLIENT -->|responses| JS_APP
JS_WS -->|onopen| WASM_CLIENT
JS_WS -->|onclose/onerror| WASM_CLIENT
Sources : extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:16-35 extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:9-11
Static Client Pattern
The WASM client uses a singleton pattern implemented with thread_local! storage and RefCell for interior mutability. This pattern is necessary because:
- WASM runs single-threaded, making thread-local storage equivalent to global storage
- Multiple JavaScript functions may need access to the same client instance
- The client must survive across multiple JavaScript->Rust function call boundaries
Thread-Local Storage Implementation
The static client is stored in MUXIO_STATIC_RPC_CLIENT_REF extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:9-11:
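Its shape, reconstructed from the description below, is roughly as follows; see the source file for the authoritative definition.

```rust
use std::cell::RefCell;
use std::sync::Arc;

thread_local! {
    // One slot per thread; in single-threaded WASM this is effectively global.
    static MUXIO_STATIC_RPC_CLIENT_REF: RefCell<Option<Arc<RpcWasmClient>>> =
        RefCell::new(None);
}
```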
Key characteristics:
- thread_local! : Creates a separate instance per thread (a single thread in WASM)
- RefCell<Option<Arc<T>>> : Enables runtime-checked mutable borrowing with an optional value
- Arc<RpcWasmClient> : Allows cloning references without copying the entire client
Initialization and Access Functions
Three functions manage the static client lifecycle:
| Function | Purpose | Idempotency |
|---|---|---|
| init_static_client() | Creates the client if not present | Yes - multiple calls are safe |
| get_static_client() | Retrieves the current client | N/A - read-only |
| with_static_client_async() | Executes async operations with the client | N/A - wrapper |
The init_static_client() function extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:25-36 checks for existing initialization before creating a new instance:
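A sketch of that idempotent check, reusing the thread-local shown above. The real function also wires the client's emit callback to the JavaScript bridge; the constructor wrapper here is hypothetical.

```rust
pub fn init_static_client() {
    MUXIO_STATIC_RPC_CLIENT_REF.with(|cell| {
        let mut slot = cell.borrow_mut();
        if slot.is_none() {
            // Only the first call creates and stores a client; later calls are no-ops.
            *slot = Some(Arc::new(new_wasm_client()));
        }
    });
}

// Hypothetical constructor wrapper; the real code builds RpcWasmClient with an
// emit callback that forwards bytes to static_muxio_write_bytes.
fn new_wasm_client() -> RpcWasmClient {
    unimplemented!("see static_client.rs for the actual construction")
}
```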
Diagram: Static Client Initialization Flow
Sources : extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:25-36
JavaScript Integration Points
The WASM client exposes specific methods designed to be called from JavaScript in response to WebSocket events. These methods bridge the gap between JavaScript’s event-driven API and Rust’s async/await model.
Connection Lifecycle Methods
Three methods handle WebSocket lifecycle events:
| Method | JavaScript Event | Purpose |
|---|---|---|
| handle_connect() | WebSocket.onopen | Sets connected state, invokes state change handler |
| read_bytes(&[u8]) | WebSocket.onmessage | Processes incoming binary data |
| handle_disconnect() | WebSocket.onclose / WebSocket.onerror | Clears connected state, fails pending requests |
handle_connect Implementation
The handle_connect() method extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:37-44 updates connection state and notifies handlers:
Key operations:
- Atomic flag update : Uses AtomicBool::store with SeqCst ordering for a thread-safe state change
- Handler invocation : Calls the registered state change callback if present
- Async execution : Returns a Future that JavaScript must await
read_bytes Implementation
The read_bytes() method extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:48-121 implements a three-stage processing pipeline:
Stage 1: Synchronous Reading (dispatcher lock held briefly)
- Acquires dispatcher mutex
- Calls dispatcher.read_bytes(bytes) to parse frames
- Extracts finalized requests from the dispatcher
- Releases dispatcher lock
Stage 2: Asynchronous Handler Execution (no locks held)
- Calls process_single_prebuffered_request() for each request
- User handlers execute concurrently via join_all()
- Dispatcher remains unlocked during handler execution
Stage 3: Synchronous Response Writing (dispatcher lock re-acquired)
- Re-acquires dispatcher mutex
- Calls dispatcher.respond() for each result
- Emits response bytes via emit_callback
- Releases dispatcher lock
This staged approach prevents deadlocks by ensuring user handlers never execute while the dispatcher is locked.
Diagram: read_bytes Processing Pipeline
graph TB
JS_MSG["JavaScript: WebSocket.onmessage"]
subgraph "Stage 1: Synchronous Reading"
LOCK1["Acquire dispatcher lock"]
READ["dispatcher.read_bytes(bytes)"]
EXTRACT["Extract finalized requests"]
UNLOCK1["Release dispatcher lock"]
end
subgraph "Stage 2: Async Handler Execution"
PROCESS["process_single_prebuffered_request()"]
HANDLERS["User handlers execute"]
JOIN["join_all()\nresponses"]
end
subgraph "Stage 3: Synchronous Response"
LOCK2["Re-acquire dispatcher lock"]
RESPOND["dispatcher.respond()"]
EMIT["emit_callback(bytes)"]
UNLOCK2["Release dispatcher lock"]
end
JS_MSG --> LOCK1
LOCK1 --> READ
READ --> EXTRACT
EXTRACT --> UNLOCK1
UNLOCK1 --> PROCESS
PROCESS --> HANDLERS
HANDLERS --> JOIN
JOIN --> LOCK2
LOCK2 --> RESPOND
RESPOND --> EMIT
EMIT --> UNLOCK2
UNLOCK2 --> JS_EMIT["JavaScript: WebSocket.send()"]
Sources : extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:48-121
handle_disconnect Implementation
The handle_disconnect() method extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:123-134 performs cleanup:
Key operations:
- Atomic swap : Uses swap() to atomically test-and-set the connection flag
- Conditional execution : Only processes if a connection was previously established
- Error propagation : Calls fail_all_pending_requests() to reject outstanding futures
- State notification : Invokes the registered disconnect handler
Sources : extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:37-44 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:48-121 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:123-134
wasm-bindgen Integration
The WASM client uses wasm-bindgen to generate JavaScript bindings for Rust code. Key integration patterns include:
Type Conversions
| Rust Type | JavaScript Type | Conversion Method |
|---|---|---|
| Vec<u8> | Uint8Array | Automatic via wasm-bindgen |
| Result<T, String> | Promise<T> | future_to_promise() |
| T: Into<JsValue> | Any JS value | .into() conversion |
Promise-Based Async Functions
The with_static_client_async() helper extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:54-72 wraps Rust async functions for JavaScript consumption:
This pattern:
- Retrieves static client : Clones the Arc from thread-local storage
- Executes user closure : Awaits the provided async function
- Converts result : Transforms Result<T, String> into a JavaScript Promise
- Error handling : Converts Rust errors to rejected promises (see the sketch after this list)
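A sketch of that wrapper, assuming get_static_client() returns Option<Arc<RpcWasmClient>> and using wasm_bindgen_futures::future_to_promise for the conversion; the real helper's bounds and error mapping may differ.

```rust
use std::sync::Arc;

use js_sys::Promise;
use wasm_bindgen::JsValue;
use wasm_bindgen_futures::future_to_promise;

pub fn with_static_client_async<F, Fut, T>(f: F) -> Promise
where
    F: FnOnce(Arc<RpcWasmClient>) -> Fut + 'static,
    Fut: std::future::Future<Output = Result<T, String>> + 'static,
    T: Into<JsValue>,
{
    future_to_promise(async move {
        // Clone the Arc out of thread-local storage.
        let client = get_static_client()
            .ok_or_else(|| JsValue::from_str("static client not initialized"))?;
        // Run the user closure and map its Result into a resolved or rejected Promise.
        match f(client).await {
            Ok(value) => Ok(value.into()),
            Err(e) => Err(JsValue::from_str(&e)),
        }
    })
}
```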
Diagram: JavaScript-Rust Promise Interop
sequenceDiagram
participant JS as "JavaScript Code"
participant BINDGEN as "wasm-bindgen Layer"
participant WRAPPER as "with_static_client_async()"
participant TLS as "Thread-Local Storage"
participant CLOSURE as "User Closure<F>"
participant CLIENT as "RpcWasmClient"
JS->>BINDGEN: Call exported WASM function
BINDGEN->>WRAPPER: Invoke wrapper function
WRAPPER->>TLS: Get static client
TLS->>WRAPPER: Arc<RpcWasmClient>
WRAPPER->>CLOSURE: Execute f(client).await
CLOSURE->>CLIENT: Perform RPC operations
CLIENT->>CLOSURE: Result<T, String>
CLOSURE->>WRAPPER: Return result
alt "Success"
WRAPPER->>BINDGEN: Ok(value.into())
BINDGEN->>JS: Promise resolves with value
else "Error"
WRAPPER->>BINDGEN: Err(JsValue)
BINDGEN->>JS: Promise rejects with error
end
Sources : extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:54-72
Bidirectional Data Flow
The WASM integration enables bidirectional RPC communication, where both JavaScript and WASM can initiate calls and handle responses.
Outbound Flow (WASM → JavaScript → Server)
- Application code calls an RPC method on RpcWasmClient
- The RpcServiceCallerInterface implementation encodes the request
- RpcDispatcher serializes and chunks the data
- emit_callback invokes static_muxio_write_bytes
- The JavaScript bridge sends the bytes via WebSocket.send()
Inbound Flow (Server → JavaScript → WASM)
- JavaScript receives a WebSocket.onmessage event
- The bridge calls RpcWasmClient.read_bytes(&[u8])
- The dispatcher parses frames and routes them to the endpoint
- The endpoint dispatches to registered handlers
- Handlers execute and return responses
- Responses are emitted back through emit_callback
Diagram: Complete Bidirectional Message Flow
graph TB
subgraph "JavaScript Layer"
JS_APP["Web Application"]
WS_API["WebSocket API"]
BRIDGE_OUT["static_muxio_write_bytes"]
BRIDGE_IN["read_bytes handler"]
end
subgraph "WASM Client (RpcWasmClient)"
CALLER["RpcServiceCallerInterface"]
ENDPOINT["RpcServiceEndpoint"]
DISPATCHER["RpcDispatcher"]
EMIT["emit_callback"]
end
subgraph "Server"
SERVER["Tokio RPC Server"]
end
JS_APP -->|Call RPC method| CALLER
CALLER -->|Encode request| DISPATCHER
DISPATCHER -->|Serialize frames| EMIT
EMIT -->|Vec<u8>| BRIDGE_OUT
BRIDGE_OUT -->|Uint8Array| WS_API
WS_API -->|Binary message| SERVER
SERVER -->|Binary response| WS_API
WS_API -->|onmessage event| BRIDGE_IN
BRIDGE_IN -->|&[u8]| DISPATCHER
DISPATCHER -->|Route request| ENDPOINT
ENDPOINT -->|Invoke handler| ENDPOINT
ENDPOINT -->|Response| DISPATCHER
DISPATCHER -->|Serialize frames| EMIT
DISPATCHER -->|Deliver result| CALLER
CALLER -->|Return typed result| JS_APP
Sources : extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:16-35 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:48-121
RpcServiceCallerInterface Implementation
The RpcWasmClient implements the RpcServiceCallerInterface trait extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:154-181, enabling it to be used interchangeably with the Tokio-based RpcClient. This trait provides:
| Method | Purpose | Return Type |
|---|---|---|
| get_dispatcher() | Returns dispatcher for request/response management | Arc<Mutex<RpcDispatcher>> |
| get_emit_fn() | Returns callback for byte emission | Arc<dyn Fn(Vec<u8>)> |
| is_connected() | Checks current connection status | bool |
| set_state_change_handler() | Registers connection state callback | async fn |
The implementation delegates to internal methods:
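A fragment-level sketch of that delegation, using the accessor names and return types from the table above; the crate's actual trait signatures may differ.

```rust
impl RpcServiceCallerInterface for RpcWasmClient {
    fn get_dispatcher(&self) -> Arc<Mutex<RpcDispatcher>> {
        self.dispatcher.clone()
    }

    fn get_emit_fn(&self) -> Arc<dyn Fn(Vec<u8>)> {
        self.emit_callback.clone()
    }

    fn is_connected(&self) -> bool {
        self.is_connected.load(std::sync::atomic::Ordering::SeqCst)
    }

    // set_state_change_handler(...) is async and omitted from this sketch.
}
```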
This implementation ensures that WASM and Tokio clients share the same interface, enabling code reuse at the application layer. The trait abstraction is documented in detail in Service Caller Interface.
Sources : extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:154-181 extensions/muxio-wasm-rpc-client/src/lib.rs:1-10
Module Structure and Re-exports
The muxio-wasm-rpc-client crate is organized with clear module boundaries:
| Module | Contents | Purpose |
|---|---|---|
| rpc_wasm_client | RpcWasmClient struct | Core client implementation |
| static_lib | Static client utilities | Singleton pattern helpers |
| Root (lib.rs) | Re-exports | Simplified imports |
The root module extensions/muxio-wasm-rpc-client/src/lib.rs:1-10 re-exports key types for convenience:
This allows consumers to import all necessary types from a single crate:
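Assuming the root re-exports described above, consumer code can import what it needs in a single line:

```rust
use muxio_wasm_rpc_client::{RpcWasmClient, init_static_client, with_static_client_async};
```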
Sources : extensions/muxio-wasm-rpc-client/src/lib.rs:1-10
Comparison with Tokio Client
While both RpcWasmClient and RpcClient implement RpcServiceCallerInterface, their internal architectures differ due to platform constraints:
| Aspect | RpcWasmClient | RpcClient |
|---|---|---|
| Runtime | Single-threaded WASM | Multi-threaded Tokio |
| Storage | thread_local! + RefCell | Arc + background tasks |
| Transport | JavaScript WebSocket API | tokio-tungstenite |
| Emit | Callback to static_muxio_write_bytes | Channel to writer task |
| Lifecycle | Manual JS event handlers | Automatic via async tasks |
| Initialization | init_static_client() | RpcClient::new() |
Despite these differences, both clients expose identical RPC method calling interfaces, enabling cross-platform application code. The platform-specific details are encapsulated behind the RpcServiceCallerInterface abstraction.
Sources : extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:16-35 extensions/muxio-tokio-rpc-client/src/lib.rs:1-8 README.md:48-49