This documentation is part of the "Projects with Books" initiative at zenOSmosis.
The source code for this project is available on GitHub.
Overview
Relevant source files
Purpose and Scope
This document provides a high-level introduction to the rust-muxio system, a toolkit for building efficient, transport-agnostic multiplexed communication systems with type-safe RPC capabilities. This page explains what Muxio is, its architectural layers, and core design principles.
For detailed information about specific subsystems:
- Workspace organization and crate listings: see Workspace Structure
- Core multiplexing concepts: see Core Library (muxio)
- RPC framework details: see RPC Framework
- Transport implementations: see Transport Implementations
Sources: README.md:1-166 DRAFT.md:9-53
What is Muxio?
Muxio is a high-performance Rust framework that provides two primary capabilities:
- Binary Stream Multiplexing: A low-level framing protocol that manages multiple concurrent data streams over a single connection, handling frame interleaving, reassembly, and ordering.
- Lightweight RPC Framework: A minimalist RPC abstraction built on the multiplexing layer, providing request correlation, method dispatch, and bidirectional communication without imposing opinions about serialization or transport.
The system is designed around a "core + extensions" architecture. The muxio core library (Cargo.toml:10) provides runtime-agnostic, transport-agnostic primitives. Extension crates build concrete implementations for specific environments (Tokio async runtime, WebAssembly/browser, etc.).
Sources: README.md:18-23 Cargo.toml:10-17
System Architecture
The following diagram illustrates the layered architecture and primary components:
Layered Architecture Overview
graph TB
subgraph "Application Layer"
APP["Application Code\nTyped RPC Calls"]
end
subgraph "Service Definition Layer"
SERVICE_DEF["Service Definition Crate\nRpcMethodPrebuffered implementations\nMETHOD_ID generation"]
end
subgraph "RPC Abstraction Layer"
CALLER["muxio-rpc-service-caller\nRpcServiceCallerInterface"]
ENDPOINT["muxio-rpc-service-endpoint\nRpcServiceEndpointInterface"]
SERVICE["muxio-rpc-service\nRpcMethodPrebuffered trait"]
end
subgraph "Core Multiplexing Layer"
DISPATCHER["RpcDispatcher\nRequest correlation\nStream management"]
FRAMING["Binary Framing Protocol\nFrame chunking and reassembly"]
end
subgraph "Transport Implementations"
TOKIO_SERVER["muxio-tokio-rpc-server\nRpcServer\nAxum + WebSocket"]
TOKIO_CLIENT["muxio-tokio-rpc-client\nRpcClient\nTokio + tungstenite"]
WASM_CLIENT["muxio-wasm-rpc-client\nRpcWasmClient\nwasm-bindgen"]
end
APP --> SERVICE_DEF
SERVICE_DEF --> CALLER
SERVICE_DEF --> ENDPOINT
SERVICE_DEF --> SERVICE
CALLER --> DISPATCHER
ENDPOINT --> DISPATCHER
SERVICE --> CALLER
SERVICE --> ENDPOINT
DISPATCHER --> FRAMING
FRAMING --> TOKIO_SERVER
FRAMING --> TOKIO_CLIENT
FRAMING --> WASM_CLIENT
| Layer | Crates | Responsibilities |
|---|---|---|
| Application | User code | Invokes typed RPC methods, receives typed responses |
| Service Definition | example-muxio-rpc-service-definition | Defines shared service contracts with compile-time METHOD_ID generation |
| RPC Abstraction | muxio-rpc-service, muxio-rpc-service-caller, muxio-rpc-service-endpoint | Provides traits for method definition, client invocation, and server dispatch |
| Core Multiplexing | muxio | Manages request correlation, stream multiplexing, and binary framing |
| Transport | muxio-tokio-rpc-server, muxio-tokio-rpc-client, muxio-wasm-rpc-client | Concrete implementations for specific runtimes and platforms |
Each layer depends only on the layers below it, enabling modular composition. The core muxio library has zero knowledge of RPC concepts, and the RPC layer has zero knowledge of specific transports.
Sources: README.md:14-40 Cargo.toml:19-31 DRAFT.md:9-26
Key Design Principles
Binary Protocol
All data transmission uses a compact binary format. The framing protocol uses minimal headers to reduce overhead. RPC payloads are serialized as raw bytes, with no assumptions about the serialization format (though extensions commonly use bitcode for efficiency).
Key characteristics:
- Frame headers contain only essential metadata
- No text-based parsing overhead
- Supports arbitrary binary payloads
- Low CPU and bandwidth requirements
Transport Agnostic
The muxio core library implements all multiplexing logic through callback interfaces. This design allows integration with any transport mechanism:
- WebSocket : Used by Tokio server/client and WASM client
- TCP : Can be implemented with custom transports
- In-memory channels : Used for testing
- Any byte-oriented transport : Custom implementations possible
The core library never directly performs I/O. Instead, it accepts bytes via callbacks and emits bytes through return values or callbacks.
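The sketch below illustrates this callback contract in miniature. The struct and method names are simplified stand-ins, not muxio's actual types; it only shows the shape of the interface: bytes pushed in through a method call, bytes emitted through a caller-supplied closure.

```rust
// Illustrative only: a simplified stand-in for the callback-driven core.
// The transport supplies the write callback; the core never opens a socket.
struct FrameSink {
    on_write: Box<dyn Fn(&[u8])>,
}

impl FrameSink {
    fn new(on_write: impl Fn(&[u8]) + 'static) -> Self {
        Self { on_write: Box::new(on_write) }
    }

    // Bytes are pushed in by the transport; anything to send goes back out
    // through the callback. A real dispatcher would decode frames here.
    fn handle_read_bytes(&mut self, bytes: &[u8]) {
        (self.on_write)(bytes);
    }
}

fn main() {
    // Any byte-oriented transport (WebSocket, TCP, an in-memory channel)
    // can provide the callback.
    let mut sink = FrameSink::new(|out| println!("would send {} bytes", out.len()));
    sink.handle_read_bytes(&[0x01, 0x02, 0x03]);
}
```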
Runtime Agnostic
The core muxio library uses synchronous control flow with callbacks, avoiding dependencies on specific async runtimes:
- No async/await in the core library
- Compatible with Tokio, async-std, or no runtime at all
- WASM-compatible (runs in single-threaded browser environments)
- Extension crates adapt the core to specific runtimes (e.g., muxio-tokio-rpc-server uses Tokio)
This design enables the same core logic to work across radically different execution environments.
graph LR
subgraph "Shared Definition"
SERVICE["example-muxio-rpc-service-definition\nAdd, Mult, Echo methods\nRpcMethodPrebuffered implementations"]
end
subgraph "Native Server"
SERVER["RpcServer\nTokio runtime\nLinux/macOS/Windows"]
end
subgraph "Native Client"
NATIVE["RpcClient\nTokio runtime\nCommand-line tools"]
end
subgraph "Web Client"
WASM["RpcWasmClient\nWebAssembly\nBrowser JavaScript"]
end
SERVICE -.shared contract.-> SERVER
SERVICE -.shared contract.-> NATIVE
SERVICE -.shared contract.-> WASM
NATIVE <-.WebSocket.-> SERVER
WASM <-.WebSocket.-> SERVER
Cross-Platform Deployment
The architecture supports "write once, deploy everywhere" through shared service definitions:
All implementations depend on the same service definition crate, ensuring API compatibility at compile time. A single server can handle requests from both native and WASM clients simultaneously.
Sources: README.md:41-52 DRAFT.md:48-52 README.md:63-160
Repository Structure
The repository uses a Cargo workspace with the following organization:
Core Library
muxio (Cargo.toml:10): The foundational crate providing stream multiplexing and binary framing. This crate has minimal dependencies and makes no assumptions about RPC, serialization, or transport.
RPC Extensions
Located in extensions/ (Cargo.toml:20-28):
| Crate | Purpose |
|---|---|
| muxio-rpc-service | Defines RpcMethodPrebuffered trait for service contracts |
| muxio-rpc-service-caller | Provides RpcServiceCallerInterface for client-side RPC invocation |
| muxio-rpc-service-endpoint | Provides RpcServiceEndpointInterface for server-side RPC dispatch |
| muxio-tokio-rpc-server | Tokio-based server with Axum and WebSocket support |
| muxio-tokio-rpc-client | Tokio-based client with connection management |
| muxio-wasm-rpc-client | WebAssembly client for browser environments |
| muxio-ext-test | Testing utilities for integration tests |
Examples
Located in examples/ (Cargo.toml:29-30):
- example-muxio-rpc-service-definition: Demonstrates shared service definitions with Add, Mult, and Echo methods
- example-muxio-ws-rpc-app: Complete WebSocket RPC application showing server and client usage
Dependency Flow
Extensions depend on the core library and build progressively more opinionated abstractions. Applications depend on extensions and service definitions, never directly on the core library.
Sources: Cargo.toml:19-31 Cargo.toml:39-47 README.md:61-62
Type Safety Through Shared Definitions
The system achieves compile-time type safety by requiring both clients and servers to depend on the same service definition crate. The RpcMethodPrebuffered trait defines the contract:
Compile-Time Guarantees:
graph TB
subgraph "Service Definition"
TRAIT["RpcMethodPrebuffered trait"]
ADD["Add struct\nMETHOD_ID = xxhash('Add')\nencode_request(Vec<f64>)\ndecode_response() -> f64"]
MULT["Mult struct\nMETHOD_ID = xxhash('Mult')\nencode_request(Vec<f64>)\ndecode_response() -> f64"]
end
subgraph "Client Usage"
CLIENT_CALL["Add::call(&client, vec![1.0, 2.0, 3.0])\nResult<f64, RpcServiceError>"]
end
subgraph "Server Handler"
SERVER_HANDLER["endpoint.register_prebuffered\n(Add::METHOD_ID, handler_fn)"]
HANDLER_FN["handler_fn(request_bytes) ->\ndecode -> compute -> encode"]
end
TRAIT --> ADD
TRAIT --> MULT
ADD -.compile-time guarantee.-> CLIENT_CALL
ADD -.compile-time guarantee.-> SERVER_HANDLER
SERVER_HANDLER --> HANDLER_FN
- Method ID Consistency: Each method's METHOD_ID is generated at compile time by hashing the method name with xxhash-rust. The same name always produces the same ID.
- Type Consistency: Both encode_request/decode_request and encode_response/decode_response use shared type definitions. Changing a parameter type breaks compilation for both client and server.
- Collision Detection: Duplicate method names produce duplicate METHOD_ID values, causing runtime panics during handler registration (which surface during integration tests).
This design eliminates a common class of distributed system bugs where client and server APIs drift out of sync.
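The following sketch shows the shape of such a shared definition. The trait and byte layout here are simplified stand-ins for the real RpcMethodPrebuffered contract (the README's examples use bitcode); only the pattern matters: a compile-time METHOD_ID plus shared encode/decode functions used by both sides.

```rust
// A condensed, illustrative version of the shared-definition pattern; the
// trait shape and encoding are simplified, not muxio's actual API.
use xxhash_rust::const_xxh3::xxh3_64;

pub trait PrebufferedMethod {
    const METHOD_ID: u64;
    type Request;
    type Response;

    fn encode_request(req: &Self::Request) -> Vec<u8>;
    fn decode_request(bytes: &[u8]) -> Self::Request;
    fn encode_response(resp: &Self::Response) -> Vec<u8>;
    fn decode_response(bytes: &[u8]) -> Self::Response;
}

pub struct Add;

impl PrebufferedMethod for Add {
    // Hashed at compile time: both sides derive the same ID from "Add".
    const METHOD_ID: u64 = xxh3_64(b"Add");
    type Request = Vec<f64>;
    type Response = f64;

    fn encode_request(req: &Self::Request) -> Vec<u8> {
        req.iter().flat_map(|v| v.to_le_bytes()).collect()
    }
    fn decode_request(bytes: &[u8]) -> Self::Request {
        bytes
            .chunks_exact(8)
            .map(|c| f64::from_le_bytes(c.try_into().unwrap()))
            .collect()
    }
    fn encode_response(resp: &Self::Response) -> Vec<u8> {
        resp.to_le_bytes().to_vec()
    }
    fn decode_response(bytes: &[u8]) -> Self::Response {
        f64::from_le_bytes(bytes.try_into().unwrap())
    }
}

fn main() {
    let bytes = Add::encode_request(&vec![1.0, 2.0, 3.0]);
    assert_eq!(Add::decode_request(&bytes), vec![1.0, 2.0, 3.0]);
}
```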
Sources: README.md:49 README.md:69-118 Cargo.toml:52 Cargo.toml:64
sequenceDiagram
participant App as "Application"
participant Method as "Add::call"
participant Client as "RpcClient\n(or RpcWasmClient)"
participant Dispatcher as "RpcDispatcher"
participant Transport as "WebSocket"
participant Server as "RpcServer"
participant Handler as "Add handler"
App->>Method: call(&client, vec![1.0, 2.0, 3.0])
Method->>Method: encode_request() -> bytes
Method->>Client: invoke(METHOD_ID, bytes)
Client->>Dispatcher: send_request(METHOD_ID, bytes)
Dispatcher->>Dispatcher: assign request_id
Dispatcher->>Dispatcher: serialize to frames
Dispatcher->>Transport: write binary frames
Transport->>Server: receive frames
Server->>Dispatcher: process_incoming_bytes
Dispatcher->>Dispatcher: reassemble frames
Dispatcher->>Dispatcher: route by METHOD_ID
Dispatcher->>Handler: invoke(request_bytes)
Handler->>Handler: decode -> compute -> encode
Handler->>Dispatcher: return response_bytes
Dispatcher->>Dispatcher: serialize response
Dispatcher->>Transport: write binary frames
Transport->>Client: receive frames
Client->>Dispatcher: process_incoming_bytes
Dispatcher->>Dispatcher: match request_id
Dispatcher->>Client: resolve with bytes
Client->>Method: return bytes
Method->>Method: decode_response() -> f64
Method->>App: return Result<f64>
Communication Flow
The following diagram traces a complete RPC call from application code through all system layers:
Key Observations:
- Application code works with typed values (Vec<f64> in, f64 out)
- Service definitions handle encoding/decoding
- RpcDispatcher manages request correlation and multiplexing
- Multiple requests can be in-flight simultaneously over a single connection
- The binary framing protocol handles interleaved frames from concurrent requests
Sources: README.md:69-160
Development Status
The project is currently in alpha status (Cargo.toml:3) and under active development (README.md:14). The core architecture is stable, but APIs may change before the 1.0 release.
Current Version: 0.10.0-alpha
Sources: README.md:14 Cargo.toml:3
Core Concepts
Relevant source files
Purpose and Scope
This document explains the fundamental design principles and architectural patterns that define the rust-muxio system. It covers the layered separation of concerns, the binary protocol foundation, the non-async callback-driven model, and the mechanisms that enable cross-platform deployment and type safety.
For detailed information about specific layers, see Layered Architecture and Design Philosophy. For implementation details of the multiplexing core, see Core Library (muxio). For RPC-specific concepts, see RPC Framework.
Architectural Layers
The rust-muxio system is organized into three distinct layers, each with clear responsibilities and minimal coupling to the layers above or below it.
Sources: README.md:16-22 Cargo.toml:19-31
graph TB
subgraph "Application Code"
APP["Application Logic\nType-safe method calls"]
end
subgraph "Transport Layer"
TOKIO_SRV["muxio-tokio-rpc-server\nAxum + WebSocket"]
TOKIO_CLI["muxio-tokio-rpc-client\ntokio-tungstenite"]
WASM_CLI["muxio-wasm-rpc-client\nwasm-bindgen bridge"]
end
subgraph "RPC Abstraction Layer"
RPC_SVC["muxio-rpc-service\nRpcMethodPrebuffered trait"]
RPC_CALLER["muxio-rpc-service-caller\nRpcServiceCallerInterface"]
RPC_EP["muxio-rpc-service-endpoint\nRpcServiceEndpointInterface"]
end
subgraph "Core Multiplexing Layer"
DISPATCHER["RpcDispatcher\nRequest correlation"]
FRAMING["Binary Framing Protocol\nFrame reassembly"]
REQ_RESP["RpcRequest / RpcResponse\nRpcHeader types"]
end
APP --> TOKIO_CLI
APP --> WASM_CLI
APP --> TOKIO_SRV
TOKIO_CLI --> RPC_CALLER
WASM_CLI --> RPC_CALLER
TOKIO_SRV --> RPC_EP
RPC_CALLER --> RPC_SVC
RPC_EP --> RPC_SVC
RPC_SVC --> DISPATCHER
RPC_CALLER --> DISPATCHER
RPC_EP --> DISPATCHER
DISPATCHER --> FRAMING
DISPATCHER --> REQ_RESP
FRAMING --> REQ_RESP
Layer Responsibilities
| Layer | Components | Responsibilities | Dependencies |
|---|---|---|---|
| Core Multiplexing | muxio crate | Binary framing, frame reassembly, stream management | Zero external dependencies for core logic |
| RPC Abstraction | muxio-rpc-service, muxio-rpc-service-caller, muxio-rpc-service-endpoint | Method ID generation, request/response encoding, type-safe traits | Depends on muxio core |
| Transport | muxio-tokio-rpc-server, muxio-tokio-rpc-client, muxio-wasm-rpc-client | WebSocket connections, async runtime integration, platform bridging | Depends on RPC abstraction |
Sources: README.md:36-40 Cargo.toml:40-47
Binary Protocol Foundation
The system uses a compact binary protocol at all levels to minimize overhead and maximize performance. There are no text-based formats or human-readable intermediates in the critical path.
graph LR
subgraph "Application Data"
TYPED["Rust Types\nVec<f64>, String, etc"]
end
subgraph "Serialization Layer"
BITCODE["bitcode::encode\nbitcode::decode"]
end
subgraph "RPC Protocol Layer"
METHOD_ID["METHOD_ID: u64\nxxhash-rust const hash"]
RPC_REQ["RpcRequest\nmethod_id + params + payload"]
RPC_RESP["RpcResponse\nresult bytes or error"]
RPC_HEADER["RpcHeader\ndiscriminator byte"]
end
subgraph "Framing Layer"
FRAME["Binary Frames\nMinimal headers"]
CHUNK["Chunking\nLarge payload splitting"]
end
subgraph "Transport Layer"
WS["WebSocket Binary Frames\nNetwork transmission"]
end
TYPED --> BITCODE
BITCODE --> RPC_REQ
BITCODE --> RPC_RESP
METHOD_ID --> RPC_REQ
RPC_REQ --> RPC_HEADER
RPC_RESP --> RPC_HEADER
RPC_HEADER --> FRAME
FRAME --> CHUNK
CHUNK --> WS
Protocol Stack
Sources: README.md:32-33 README.md:45-46 Cargo.toml:52 Cargo.toml:64
Binary Data Flow
All data in the system flows as raw bytes (Vec<u8> or &[u8]). This design choice has several implications:
- Serialization Agnostic: The core layer never assumes a serialization format. Applications can use bitcode, bincode, postcard, or any other binary serializer.
- FFI-Friendly: Byte slices are a universal interface that can cross language boundaries without special marshalling.
- Zero-Copy Opportunities: Raw bytes enable zero-copy optimizations in performance-critical paths.
- Minimal Overhead: Binary headers consume single-digit bytes rather than hundreds of bytes for JSON or XML.
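As a small, hedged example of the serialization step, the snippet below turns a typed request into the opaque bytes the core layer carries. It assumes the bitcode crate (which the extension crates commonly use); any serializer that yields Vec<u8> could be substituted without touching the core.

```rust
// Assumes `bitcode` with its derive macros; the struct name is illustrative.
use bitcode::{Decode, Encode};

#[derive(Encode, Decode, Debug, PartialEq)]
struct AddRequest {
    operands: Vec<f64>,
}

fn main() {
    let request = AddRequest { operands: vec![1.0, 2.0, 3.0] };

    // The multiplexing core only ever sees these bytes; it never inspects them.
    let bytes: Vec<u8> = bitcode::encode(&request);
    let decoded: AddRequest = bitcode::decode(&bytes).expect("round-trip decode");

    assert_eq!(request, decoded);
}
```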
Sources: README.md:51-52 DRAFT.md:11
Non-Async, Callback-Driven Model
The core muxio library uses a non-async design with synchronous control flow and callbacks. This is a deliberate architectural choice that enables broad compatibility.
graph TB
subgraph "External Runtime"
TOKIO["Tokio async runtime"]
WASM_EVENT["WASM event loop"]
STD_THREAD["std::thread"]
end
subgraph "muxio Core"
DISPATCHER["RpcDispatcher"]
WRITE_CB["write_bytes_callback\nBox<dyn Fn(&[u8])>"]
READ_CB["handle_read_bytes\n(&[u8]) -> Result"]
end
subgraph "Application Handlers"
RPC_HANDLER["RPC method handlers\nasync closures"]
end
TOKIO -->|bytes in| READ_CB
WASM_EVENT -->|bytes in| READ_CB
STD_THREAD -->|bytes in| READ_CB
READ_CB --> DISPATCHER
DISPATCHER --> RPC_HANDLER
DISPATCHER --> WRITE_CB
WRITE_CB -->|bytes out| TOKIO
WRITE_CB -->|bytes out| WASM_EVENT
WRITE_CB -->|bytes out| STD_THREAD
Callback Architecture
Sources: DRAFT.md:50-52 README.md:34-35
Key Characteristics
| Aspect | Implementation | Benefit |
|---|---|---|
| Control Flow | Synchronous function calls | Deterministic execution order |
| I/O Model | Callback-driven | No async runtime dependency |
| Event Handling | Explicit invocations | Predictable performance characteristics |
| State Management | Direct mutation | No .await points, no hidden yields |
The RpcDispatcher receives bytes via handle_read_bytes() and emits bytes via a provided callback. It never blocks, never spawns tasks, and never assumes an async runtime exists. This enables:
- Tokio Integration: Wrap calls in tokio::spawn as needed
- WASM Integration: Bridge to JavaScript's Promise-based model
- Embedded Systems: Run in single-threaded, no-std environments
- Testing: Use in-memory channels without complex async mocking
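A minimal sketch of the Tokio integration pattern follows. The Dispatcher type is an illustrative stand-in rather than muxio's real dispatcher, and it assumes tokio with its default runtime and sync features: the async task owns the channels and I/O, while the core object inside it never awaits.

```rust
use tokio::sync::mpsc;

// Illustrative stand-in for the synchronous core; not muxio's actual type.
struct Dispatcher {
    write: Box<dyn Fn(Vec<u8>) + Send>,
}

impl Dispatcher {
    fn handle_read_bytes(&mut self, bytes: &[u8]) {
        // A real dispatcher would reassemble frames and correlate requests;
        // this sketch just echoes the bytes back out through the callback.
        (self.write)(bytes.to_vec());
    }
}

#[tokio::main]
async fn main() {
    let (out_tx, mut out_rx) = mpsc::unbounded_channel::<Vec<u8>>();
    let (in_tx, mut in_rx) = mpsc::unbounded_channel::<Vec<u8>>();

    // The async transport owns the I/O loop; the core inside it never awaits.
    tokio::spawn(async move {
        let mut dispatcher = Dispatcher {
            write: Box::new(move |bytes| {
                let _ = out_tx.send(bytes);
            }),
        };
        while let Some(bytes) = in_rx.recv().await {
            dispatcher.handle_read_bytes(&bytes);
        }
    });

    in_tx.send(vec![1, 2, 3]).unwrap();
    assert_eq!(out_rx.recv().await, Some(vec![1, 2, 3]));
}
```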
Sources: DRAFT.md:50-52 README.md:34-35
Transport and Runtime Agnosticism
The core library's design enables the same multiplexing logic to run in radically different environments without modification.
graph TB
subgraph "Shared Core"
CORE_DISPATCH["RpcDispatcher\nsrc/rpc_dispatcher.rs"]
CORE_FRAME["Framing Protocol\nsrc/rpc_request_response.rs"]
end
subgraph "Native Server - Tokio"
AXUM["axum::Router"]
WS_UPGRADE["WebSocketUpgrade"]
TOKIO_TASK["tokio::spawn"]
TUNGSTENITE["tokio_tungstenite"]
end
subgraph "Native Client - Tokio"
WS_STREAM["WebSocketStream"]
TOKIO_CHANNEL["mpsc::channel"]
TOKIO_SELECT["tokio::select!"]
end
subgraph "WASM Client"
WASM_BINDGEN["#[wasm_bindgen]"]
JS_WEBSOCKET["JavaScript WebSocket"]
JS_PROMISE["JavaScript Promise"]
end
CORE_DISPATCH --> AXUM
CORE_DISPATCH --> WS_STREAM
CORE_DISPATCH --> WASM_BINDGEN
AXUM --> WS_UPGRADE
WS_UPGRADE --> TUNGSTENITE
TUNGSTENITE --> TOKIO_TASK
WS_STREAM --> TOKIO_CHANNEL
TOKIO_CHANNEL --> TOKIO_SELECT
WASM_BINDGEN --> JS_WEBSOCKET
JS_WEBSOCKET --> JS_PROMISE
CORE_FRAME --> AXUM
CORE_FRAME --> WS_STREAM
CORE_FRAME --> WASM_BINDGEN
Platform Abstraction
Sources: README.md:38-40 Cargo.toml:23-28
Abstraction Boundaries
The RpcDispatcher provides a minimal interface that any transport can implement:
- Byte Input: handle_read_bytes(&mut self, bytes: &[u8]) - Process incoming bytes
- Byte Output: write_bytes_callback: Box<dyn Fn(&[u8])> - Emit outgoing bytes
- No I/O: The dispatcher never performs I/O directly
Transport implementations wrap this interface with platform-specific I/O:
- Tokio Server: axum::extract::ws::WebSocket handles async I/O
- Tokio Client: tokio_tungstenite::WebSocketStream with message splitting
- WASM Client: wasm_bindgen bridges to WebSocket.send(ArrayBuffer)
Sources: README.md:34-35 README.md:47-48
graph TB
subgraph "Service Definition Crate"
TRAIT["RpcMethodPrebuffered"]
ADD_STRUCT["Add\nUnit struct"]
ADD_IMPL["impl RpcMethodPrebuffered for Add"]
ADD_METHOD_ID["Add::METHOD_ID\nconst u64 = xxh3_64('Add')"]
ADD_ENCODE_REQ["Add::encode_request"]
ADD_DECODE_REQ["Add::decode_request"]
ADD_ENCODE_RESP["Add::encode_response"]
ADD_DECODE_RESP["Add::decode_response"]
ADD_CALL["Add::call"]
end
subgraph "Client Code"
CLIENT_CALL["Add::call(&client, vec![1.0, 2.0])"]
CLIENT_ENCODE["Uses Add::encode_request"]
CLIENT_DECODE["Uses Add::decode_response"]
end
subgraph "Server Code"
SERVER_REGISTER["endpoint.register_prebuffered"]
SERVER_DECODE["Uses Add::decode_request"]
SERVER_ENCODE["Uses Add::encode_response"]
end
ADD_STRUCT --> ADD_IMPL
ADD_IMPL --> ADD_METHOD_ID
ADD_IMPL --> ADD_ENCODE_REQ
ADD_IMPL --> ADD_DECODE_REQ
ADD_IMPL --> ADD_ENCODE_RESP
ADD_IMPL --> ADD_DECODE_RESP
ADD_IMPL --> ADD_CALL
ADD_CALL --> CLIENT_CALL
ADD_ENCODE_REQ --> CLIENT_ENCODE
ADD_DECODE_RESP --> CLIENT_DECODE
ADD_METHOD_ID --> SERVER_REGISTER
ADD_DECODE_REQ --> SERVER_DECODE
ADD_ENCODE_RESP --> SERVER_ENCODE
CLIENT_CALL --> CLIENT_ENCODE
CLIENT_CALL --> CLIENT_DECODE
SERVER_REGISTER --> SERVER_DECODE
SERVER_REGISTER --> SERVER_ENCODE
Type Safety Through Shared Definitions
Type safety across distributed system boundaries is enforced at compile time through shared service definitions.
Shared Service Definition Pattern
Sources: README.md:49-50 README.md:70-73
Compile-Time Guarantees
| Guarantee | Mechanism | Failure Mode |
|---|---|---|
| Method ID Uniqueness | Const evaluation with xxhash_rust::const_xxh3::xxh3_64() | Duplicate method names detected at compile time |
| Parameter Type Match | Shared encode_request / decode_request | Type mismatch = compilation error |
| Response Type Match | Shared encode_response / decode_response | Type mismatch = compilation error |
| API Version Compatibility | Semantic versioning of service definition crate | Incompatible versions = linker error |
The RpcMethodPrebuffered trait requires implementers to define:
- METHOD_ID: u64 - Generated from the method name hash
- encode_request(params: Self::RequestParams) -> Result<Vec<u8>>
- decode_request(bytes: &[u8]) -> Result<Self::RequestParams>
- encode_response(result: Self::ResponseResult) -> Result<Vec<u8>>
- decode_response(bytes: &[u8]) -> Result<Self::ResponseResult>
Both client and server code depend on the same implementation, making API drift impossible.
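For illustration, here is how a compile-time method ID can be produced with xxhash-rust's const_xxh3 feature, which the table above names. The constant names and surrounding code are hypothetical, not taken from the muxio source.

```rust
// Sketch of compile-time method ID generation; constant names are illustrative.
use xxhash_rust::const_xxh3::xxh3_64;

// Evaluated at compile time: the same method name always yields the same ID,
// so client and server agree as long as they share the definition crate.
pub const ADD_METHOD_ID: u64 = xxh3_64(b"Add");
pub const MULT_METHOD_ID: u64 = xxh3_64(b"Mult");

fn main() {
    println!("Add  -> {:#018x}", ADD_METHOD_ID);
    println!("Mult -> {:#018x}", MULT_METHOD_ID);
}
```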
Sources: README.md:49-50 Cargo.toml:42
sequenceDiagram
participant App
participant Caller as "RpcServiceCallerInterface"
participant Dispatcher as "RpcDispatcher"
participant Transport as "WebSocket"
participant Server as "Server Dispatcher"
participant Handler
Note over Dispatcher: Assign request_id = 1
App->>Caller: Add::call(vec![1.0, 2.0])
Caller->>Dispatcher: encode_request(method_id, params)
Dispatcher->>Transport: Binary frames [request_id=1]
Note over Dispatcher: Assign request_id = 2
App->>Caller: Mult::call(vec![3.0, 4.0])
Caller->>Dispatcher: encode_request(method_id, params)
Dispatcher->>Transport: Binary frames [request_id=2]
Transport->>Server: Interleaved frames arrive
Server->>Handler: Route by method_id (request_id=1)
Server->>Handler: Route by method_id (request_id=2)
Handler->>Server: Response [request_id=2]
Server->>Transport: Binary frames [request_id=2]
Transport->>Dispatcher: Binary frames [request_id=2]
Dispatcher->>Caller: Match request_id=2
Caller->>App: Return 12.0
Handler->>Server: Response [request_id=1]
Server->>Transport: Binary frames [request_id=1]
Transport->>Dispatcher: Binary frames [request_id=1]
Dispatcher->>Caller: Match request_id=1
Caller->>App: Return 3.0
Request Correlation and Multiplexing
The system supports concurrent requests over a single connection through request ID correlation and frame interleaving.
Request Lifecycle
Sources: README.md:28-29
Concurrent Request Management
The RpcDispatcher maintains internal state for all in-flight requests:
- Pending Requests Map :
HashMap<request_id, ResponseHandler>tracks active requests - Request ID Generation : Monotonically increasing counter ensures uniqueness
- Frame Reassembly : Collects interleaved frames until complete message received
- Response Routing : Matches incoming responses to pending requests by ID
This design enables:
- Pipelining : Multiple requests sent without waiting for responses
- Out-of-Order Completion : Responses processed as they arrive, not in request order
- Cancellation : Remove request from pending map to ignore future responses
- Connection Reuse : Single WebSocket connection handles unlimited concurrent requests
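The stand-in below mirrors the pending-map mechanism described above with deliberately simplified types (not muxio's): each request receives a unique ID, and responses resolve their callers in whatever order they arrive.

```rust
// Illustrative request correlation: unique IDs, out-of-order completion.
use std::collections::HashMap;

struct Correlator {
    next_id: u32,
    pending: HashMap<u32, Box<dyn FnOnce(Vec<u8>)>>,
}

impl Correlator {
    fn new() -> Self {
        Self { next_id: 0, pending: HashMap::new() }
    }

    // Each outgoing request gets a unique, monotonically increasing ID.
    fn send(&mut self, on_response: impl FnOnce(Vec<u8>) + 'static) -> u32 {
        let id = self.next_id;
        self.next_id += 1;
        self.pending.insert(id, Box::new(on_response));
        id
    }

    // Responses may arrive in any order; the ID routes each one correctly.
    fn on_response(&mut self, id: u32, payload: Vec<u8>) {
        if let Some(handler) = self.pending.remove(&id) {
            handler(payload);
        }
    }
}

fn main() {
    let mut c = Correlator::new();
    let first = c.send(|bytes| println!("first resolved with {} bytes", bytes.len()));
    let second = c.send(|bytes| println!("second resolved with {} bytes", bytes.len()));

    // Out-of-order completion: the second response arrives before the first.
    c.on_response(second, vec![0; 8]);
    c.on_response(first, vec![0; 4]);
}
```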
Sources: README.md:28-29
graph TB
subgraph "Shared Application Code"
APP_LOGIC["business_logic.rs\nUses RpcServiceCallerInterface"]
APP_CALL["Add::call(&caller, params)"]
end
subgraph "Platform-Specific Entry Points"
NATIVE_MAIN["main.rs (Native)\nCreates RpcClient"]
WASM_MAIN["lib.rs (WASM)\nCreates RpcWasmClient"]
end
subgraph "Client Implementations"
RPC_CLIENT["RpcClient\nTokio WebSocket"]
WASM_CLIENT["RpcWasmClient\nJS WebSocket bridge"]
TRAIT_IMPL["Both impl RpcServiceCallerInterface"]
end
APP_LOGIC --> APP_CALL
NATIVE_MAIN --> RPC_CLIENT
WASM_MAIN --> WASM_CLIENT
RPC_CLIENT --> TRAIT_IMPL
WASM_CLIENT --> TRAIT_IMPL
TRAIT_IMPL --> APP_CALL
APP_CALL --> RPC_CLIENT
APP_CALL --> WASM_CLIENT
Cross-Platform Code Reuse
Application logic written against the RpcServiceCallerInterface trait runs identically on all platforms without modification.
Platform-Independent Service Layer
Sources: README.md:47-48 Cargo.toml:27-28
Write Once, Deploy Everywhere
The RpcServiceCallerInterface trait provides:
- call_prebuffered(method_id, request_bytes) -> Result<Vec<u8>>
- get_dispatcher() -> Arc<Mutex<RpcDispatcher>>
- State change callbacks and connection management
Any type implementing this trait can execute application code. The service definitions (implementing RpcMethodPrebuffered) provide convenience methods that automatically delegate to the caller:
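A hedged sketch of that delegation, with synchronous, simplified signatures standing in for the real (async) RpcServiceCallerInterface: the shared Add type encodes its parameters and forwards them to whichever caller is supplied. The trait, method ID value, and loopback caller are all illustrative.

```rust
// Simplified stand-in for RpcServiceCallerInterface; the real trait is async.
trait PrebufferedCaller {
    fn call_prebuffered(&self, method_id: u64, request_bytes: Vec<u8>) -> Result<Vec<u8>, String>;
}

struct Add;

impl Add {
    const METHOD_ID: u64 = 0x1122_3344_5566_7788; // placeholder; really an xxh3 hash

    // Works with any caller: a native client, a WASM client, or a test double.
    fn call<C: PrebufferedCaller>(caller: &C, operands: &[f64]) -> Result<f64, String> {
        let request: Vec<u8> = operands.iter().flat_map(|v| v.to_le_bytes()).collect();
        let response = caller.call_prebuffered(Self::METHOD_ID, request)?;
        let arr: [u8; 8] = response.try_into().map_err(|_| "bad response".to_string())?;
        Ok(f64::from_le_bytes(arr))
    }
}

// A loopback caller standing in for a real transport; it sums "server-side".
struct LoopbackCaller;

impl PrebufferedCaller for LoopbackCaller {
    fn call_prebuffered(&self, _method_id: u64, request_bytes: Vec<u8>) -> Result<Vec<u8>, String> {
        let sum: f64 = request_bytes
            .chunks_exact(8)
            .map(|c| f64::from_le_bytes(c.try_into().unwrap()))
            .sum();
        Ok(sum.to_le_bytes().to_vec())
    }
}

fn main() {
    let result = Add::call(&LoopbackCaller, &[1.0, 2.0, 3.0]).unwrap();
    assert_eq!(result, 6.0);
}
```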
Platform-specific differences (async runtime, WebSocket implementation, JavaScript bridge) are isolated in the transport implementations, never exposed to application code.
Sources: README.md:47-48
Summary of Core Principles
| Principle | Implementation | Benefit |
|---|---|---|
| Layered Separation | Core → RPC → Transport | Each layer independently testable and replaceable |
| Binary Protocol | Raw bytes everywhere | Zero parsing overhead, FFI-friendly |
| Non-Async Core | Callback-driven dispatcher | Runtime-agnostic, deterministic execution |
| Type Safety | Shared service definitions | Compile-time API contract enforcement |
| Request Correlation | ID-based multiplexing | Concurrent requests over single connection |
| Platform Abstraction | Trait-based callers | Write once, deploy to native and WASM |
These principles work together to create a system that is simultaneously high-performance, type-safe, and broadly compatible across deployment targets.
Sources: README.md:16-52 DRAFT.md:9-26 Cargo.toml:10-11
Design Philosophy
Relevant source files
Purpose and Scope
This document details the fundamental design principles that guide the rust-muxio architecture. It explains the non-async runtime model, binary protocol design choices, transport abstraction strategy, and core goals that shape the system. For information about how these principles manifest in the layered architecture, see Layered Architecture. For practical implementation details of the binary protocol, see Binary Framing Protocol.
Non-Async Runtime Model
The core muxio library is implemented using a callback-driven, synchronous control flow rather than async/await. This design choice enables maximum portability and minimal runtime dependencies while still supporting concurrent operations, streaming, and cancellation.
Callback-Driven Architecture
The fundamental mechanism is the on_message_bytes callback pattern. The core dispatcher accepts a closure that will be invoked whenever bytes need to be sent:
Sources: DRAFT.md:48-52 README.md:34
graph LR
subgraph "Application Code"
AppLogic["Application Logic"]
end
subgraph "Core Dispatcher"
RpcDispatcher["RpcDispatcher"]
CallbackRegistry["on_message_bytes callback"]
end
subgraph "Transport Layer (Async or Sync)"
AsyncTransport["Tokio WebSocket"]
SyncTransport["Standard Library TCP"]
WasmTransport["WASM JS Bridge"]
end
AppLogic --> RpcDispatcher
RpcDispatcher --> CallbackRegistry
CallbackRegistry --> AsyncTransport
CallbackRegistry --> SyncTransport
CallbackRegistry --> WasmTransport
AsyncTransport --> RpcDispatcher
SyncTransport --> RpcDispatcher
WasmTransport --> RpcDispatcher
Runtime Independence Benefits
This model provides several critical advantages:
| Benefit | Description |
|---|---|
| WASM Compatibility | Works in single-threaded JavaScript environments where async tasks are limited |
| Runtime Flexibility | Same core code runs on Tokio, async-std, or synchronous runtimes |
| Deterministic Execution | No hidden async state machines or yield points |
| FFI-Friendly | Callbacks can cross language boundaries more easily than async functions |
| Zero Runtime Overhead | No async runtime machinery in the core library |
The following diagram maps this philosophy to actual code entities:
Sources: muxio/src/rpc_dispatcher.rs DRAFT.md:48-52 README.md:34
Binary Protocol Foundation
Muxio uses a low-overhead binary framing protocol for all communication. This is a deliberate architectural choice prioritizing performance over human readability.
graph LR
subgraph "Text-Based Approach (JSON/XML)"
TextData["Human-readable strings"]
TextParsing["Complex parsing\nTokenization\nString allocation"]
TextSize["Larger payload size\nQuotes, brackets, keys"]
TextCPU["High CPU cost\nUTF-8 validation\nEscape sequences"]
end
subgraph "Binary Approach (Muxio)"
BinaryData["Raw byte arrays"]
BinaryParsing["Simple framing\nFixed header offsets\nZero-copy reads"]
BinarySize["Minimal payload\nCompact encoding\nNo metadata"]
BinaryCPU["Low CPU cost\nDirect memory access\nNo parsing"]
end
TextData --> TextParsing
TextParsing --> TextSize
TextSize --> TextCPU
BinaryData --> BinaryParsing
BinaryParsing --> BinarySize
BinarySize --> BinaryCPU
Why Binary Over Text
Performance Impact:
| Metric | Text-Based | Binary (Muxio) | Improvement |
|---|---|---|---|
| Serialization overhead | High (string formatting) | Minimal (bitcode) | ~10-100x faster |
| Payload size | Verbose | Compact | ~2-5x smaller |
| Parse complexity | O(n) with allocations | O(1) header reads | Constant time |
| CPU cache efficiency | Poor (scattered strings) | Good (contiguous bytes) | Better locality |
Sources: README.md:32 README.md:45 DRAFT.md:11
Binary Protocol Stack
The following diagram shows how binary data flows through the protocol layers:
Sources: README.md:32-33 muxio/src/rpc_request_response.rs Cargo.toml (bitcode dependency)
Transport and Runtime Agnosticism
A core principle is that the muxio core makes zero assumptions about the transport or runtime. This is enforced through careful API design.
graph TB
subgraph "Muxio Core (Transport-Agnostic)"
CoreDispatcher["RpcDispatcher\nGeneric over callback\nNo transport dependencies"]
CoreTypes["RpcRequest\nRpcResponse\nRpcHeader"]
end
subgraph "Transport Abstraction Layer"
CallerInterface["RpcServiceCallerInterface\n(muxio-rpc-service-caller)\nAbstract trait"]
EndpointInterface["RpcServiceEndpointInterface\n(muxio-rpc-service-endpoint)\nAbstract trait"]
end
subgraph "Concrete Implementations"
TokioImpl["MuxioRpcServer\nmuxio-tokio-rpc-server\nUses: tokio-tungstenite"]
TokioClientImpl["RpcClient\nmuxio-tokio-rpc-client\nUses: tokio-tungstenite"]
WasmImpl["RpcWasmClient\nmuxio-wasm-rpc-client\nUses: wasm-bindgen"]
CustomImpl["Custom implementations\n(IPC, gRPC, etc.)"]
end
CoreDispatcher --> CoreTypes
CoreTypes --> CallerInterface
CoreTypes --> EndpointInterface
CallerInterface --> TokioClientImpl
CallerInterface --> WasmImpl
CallerInterface --> CustomImpl
EndpointInterface --> TokioImpl
Transport Abstraction Strategy
Key Abstraction Points:
The RpcServiceCallerInterface trait provides transport abstraction for clients:
- Method: call_prebuffered() - Send request, receive response
- Implementation: Each transport provides its own RpcServiceCallerInterface impl
- Portability: Application code depends only on the trait, not concrete implementations
Sources: README.md:47 extensions/muxio-rpc-service-caller/src/caller_interface.rs README.md:34
Runtime Environment Support Matrix
| Runtime Environment | Server Support | Client Support | Implementation Crate |
|---|---|---|---|
| Tokio (async) | ✓ | ✓ | muxio-tokio-rpc-server, muxio-tokio-rpc-client |
| async-std | ✗ (possible) | ✗ (possible) | Not implemented |
| Standard Library (sync) | ✗ (possible) | ✗ (possible) | Not implemented |
| WASM/Browser | N/A | ✓ | muxio-wasm-rpc-client |
| Node.js/Deno | ✗ (possible) | ✗ (possible) | Not implemented |
The core's agnosticism means new runtime support requires only implementing the appropriate wrapper crates, not modifying core logic.
Sources: README.md:34-40 extensions/README.md
graph TB
subgraph "Layer 4: Application"
AppCode["Application Logic\nBusiness rules"]
end
subgraph "Layer 3: RPC Abstraction"
ServiceDef["RpcMethodPrebuffered\n(muxio-rpc-service)\nDefines API contract"]
Caller["RpcServiceCallerInterface\n(muxio-rpc-service-caller)\nClient-side calls"]
Endpoint["RpcServiceEndpointInterface\n(muxio-rpc-service-endpoint)\nServer-side dispatch"]
end
subgraph "Layer 2: Multiplexing"
Dispatcher["RpcDispatcher\n(muxio/rpc_dispatcher.rs)\nRequest correlation\nFrame multiplexing"]
end
subgraph "Layer 1: Binary Framing"
Framing["Binary Protocol\n(muxio/framing.rs)\nChunk/reassemble frames"]
end
subgraph "Layer 0: Transport"
Transport["WebSocket/TCP/IPC\nExternal implementations"]
end
AppCode --> ServiceDef
ServiceDef --> Caller
ServiceDef --> Endpoint
Caller --> Dispatcher
Endpoint --> Dispatcher
Dispatcher --> Framing
Framing --> Transport
Transport -.bytes up.-> Framing
Framing -.frames up.-> Dispatcher
Dispatcher -.responses up.-> Caller
Dispatcher -.requests up.-> Endpoint
Layered Separation of Concerns
Muxio enforces strict separation between system layers, with each layer unaware of layers above it:
Layer Independence Guarantees:
| Layer | Knowledge | Ignorance |
|---|---|---|
| Binary Framing | Bytes, frame headers | No knowledge of RPC, methods, or requests |
| Multiplexing | Request IDs, correlation | No knowledge of method semantics or serialization |
| RPC Abstraction | Method IDs, request/response pattern | No knowledge of specific transports |
| Application | Business logic | No knowledge of framing or multiplexing |
This enables:
- Testing: Each layer can be unit tested independently
- Extensibility: New transports don't affect RPC logic
- Reusability: Same multiplexing layer works for non-RPC protocols
- Maintainability: Changes isolated to single layers
Sources: README.md:16-17 README.md:22 DRAFT.md:9-26
graph TB
subgraph "Shared Service Definition Crate"
ServiceTrait["RpcMethodPrebuffered\n(muxio-rpc-service)"]
AddMethod["impl RpcMethodPrebuffered for Add\nMETHOD_ID = xxhash('Add')\nRequest = Vec<f64>\nResponse = f64"]
MultMethod["impl RpcMethodPrebuffered for Mult\nMETHOD_ID = xxhash('Mult')\nRequest = Vec<f64>\nResponse = f64"]
end
subgraph "Server Code"
ServerHandler["endpoint.register_prebuffered(\n Add::METHOD_ID,\n |bytes, ctx| async move {\n let params = Add::decode_request(&bytes)?;\n let sum = params.iter().sum();\n Add::encode_response(sum)\n }\n)"]
end
subgraph "Client Code"
ClientCall["Add::call(\n &rpc_client,\n vec![1.0, 2.0, 3.0]\n).await"]
end
subgraph "Compile-Time Guarantees"
TypeCheck["Type mismatch → Compiler error"]
MethodIDCheck["Duplicate METHOD_ID → Compiler error"]
SerdeCheck["Serde incompatibility → Compiler error"]
end
ServiceTrait --> AddMethod
ServiceTrait --> MultMethod
AddMethod -.defines.-> ServerHandler
AddMethod -.defines.-> ClientCall
AddMethod --> TypeCheck
AddMethod --> MethodIDCheck
AddMethod --> SerdeCheck
Type Safety Through Shared Definitions
The system enforces compile-time API contracts between client and server via shared service definitions.
Compile-Time Contract Enforcement
What Gets Checked at Compile Time:
- Type Consistency: Client and server must agree on Request and Response types
- Method ID Uniqueness: Hash collisions in method names are detected
- Serialization Compatibility: Both sides use identical encode/decode implementations
- API Changes: Modifying the service definition breaks both client and server simultaneously
Example Service Definition Structure:
The trait RpcMethodPrebuffered requires:
- METHOD_ID: u64 - Compile-time constant generated from the method name hash
- encode_request() / decode_request() - Serialization for parameters
- encode_response() / decode_response() - Serialization for results
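The handler body in the diagram above follows a decode, compute, encode shape. A self-contained approximation is shown below; it uses a hand-rolled little-endian layout instead of the real bitcode-based definitions, so treat it as a sketch of the shape rather than the actual example code.

```rust
// Illustrative handler body: decode request bytes, compute, re-encode.
fn add_handler(request_bytes: &[u8]) -> Result<Vec<u8>, String> {
    // Decode: the shared definition crate would normally do this.
    let params: Vec<f64> = request_bytes
        .chunks_exact(8)
        .map(|c| f64::from_le_bytes(c.try_into().unwrap()))
        .collect();

    // Compute.
    let sum: f64 = params.iter().sum();

    // Encode the response for transmission back through the dispatcher.
    Ok(sum.to_le_bytes().to_vec())
}

fn main() {
    let request: Vec<u8> = [1.0f64, 2.0, 3.0].iter().flat_map(|v| v.to_le_bytes()).collect();
    let response = add_handler(&request).unwrap();
    assert_eq!(f64::from_le_bytes(response.try_into().unwrap()), 6.0);
}
```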
Sources: README.md:49 extensions/muxio-rpc-service/src/prebuffered.rs README.md:69-117 (example code)
Core Design Goals
The following table summarizes the fundamental design goals that drive all architectural decisions:
| Goal | Rationale | Implementation Approach |
|---|---|---|
| Binary Protocol | Minimize overhead, maximize performance | Raw byte arrays, bitcode serialization |
| Framed Transport | Discrete, ordered chunks enable multiplexing | Fixed-size frame headers, variable payloads |
| Bidirectional | Client/server symmetry | Same RpcDispatcher logic for both directions |
| WASM-Compatible | Deploy to browsers | Non-async core, callback-driven model |
| Streamable | Support large payloads | Chunked transmission via framing protocol |
| Cancelable | Abort in-flight requests | Request ID tracking in RpcDispatcher |
| Metrics-Capable | Observability for production | Hooks for latency, throughput measurement |
| Transport-Agnostic | Flexible deployment | Callback-based abstraction, no hard transport deps |
| Runtime-Agnostic | Work with any async runtime | Non-async core, async wrappers |
| Type-Safe | Eliminate runtime API mismatches | Shared service definitions, compile-time checks |
Sources: DRAFT.md:9-26 README.md:41-52
sequenceDiagram
participant App as "Application\n(Type-safe)"
participant Service as "Add::call()\n(Shared definition)"
participant Client as "RpcClient\n(Transport wrapper)"
participant Dispatcher as "RpcDispatcher\n(Core, non-async)"
participant Callback as "on_message_bytes\n(Callback)"
participant Transport as "WebSocket\n(Async transport)"
App->>Service: Add::call(&client, vec![1.0, 2.0])
Note over Service: Compile-time type check
Service->>Service: encode_request(vec![1.0, 2.0])\n→ bytes
Service->>Client: call_prebuffered(METHOD_ID, bytes)
Client->>Dispatcher: send_request(METHOD_ID, bytes)
Note over Dispatcher: Assign request ID\nStore pending request
Dispatcher->>Dispatcher: Serialize to binary frames
Dispatcher->>Callback: callback(frame_bytes)
Note over Callback: Synchronous invocation
Callback->>Transport: send_websocket_frame(frame_bytes)
Note over Transport: Async I/O (external)
Design Philosophy in Practice
The following sequence shows how these principles work together in a single RPC call:
This demonstrates:
- Type safety: Add::call() enforces parameter types
- Shared definitions: Same encode_request() on both sides
- Transport agnostic: RpcDispatcher knows nothing about WebSocket
- Non-async core: RpcDispatcher is synchronous and invokes the callback
- Binary protocol: Everything becomes bytes before transmission
Sources: README.md:69-161 muxio/src/rpc_dispatcher.rs extensions/muxio-rpc-service-caller/src/caller_interface.rs
Workspace Structure
Relevant source files
This document describes the organization of the rust-muxio workspace, including all crates, their locations, purposes, and how they relate to each other. For information about the conceptual architecture and design patterns, see Layered Architecture.
Purpose and Organization Pattern
The rust-muxio project is organized as a Cargo workspace following a "core + extensions" pattern. The workspace contains a single core library (muxio) that provides transport-agnostic stream multiplexing, plus seven extension crates that build RPC functionality on top of the core, plus two example crates demonstrating usage.
This modular structure allows consumers to depend on only the functionality they need. For instance, a WASM client application can include only muxio, muxio-rpc-service, muxio-rpc-service-caller, and muxio-wasm-rpc-client without pulling in Tokio server dependencies.
Sources: Cargo.toml:19-31 extensions/README.md:1-4
Workspace Directory Structure
Sources: Cargo.toml:19-31 extensions/README.md:1-4
Workspace Member Crates
Core Library
| Crate Name | Path | Version | Description |
|---|---|---|---|
| muxio | . (root) | 0.10.0-alpha | Core library providing layered stream multiplexing and binary framing protocol |
The muxio crate is the foundation of the entire system. It provides runtime-agnostic, transport-agnostic multiplexing capabilities with no assumptions about RPC, serialization, or network protocols. All other crates in the workspace depend on this core.
Sources: Cargo.toml:9-17 Cargo.toml:41
RPC Extension Crates
| Crate Name | Path | Version | Purpose |
|---|---|---|---|
| muxio-rpc-service | extensions/muxio-rpc-service | 0.10.0-alpha | Defines RpcMethodPrebuffered trait and compile-time method ID generation |
| muxio-rpc-service-caller | extensions/muxio-rpc-service-caller | 0.10.0-alpha | Provides RpcServiceCallerInterface for client-side RPC invocation |
| muxio-rpc-service-endpoint | extensions/muxio-rpc-service-endpoint | 0.10.0-alpha | Provides RpcServiceEndpointInterface for server-side RPC dispatch |
| muxio-tokio-rpc-server | extensions/muxio-tokio-rpc-server | 0.10.0-alpha | Tokio-based WebSocket RPC server with Axum integration |
| muxio-tokio-rpc-client | extensions/muxio-tokio-rpc-client | 0.10.0-alpha | Tokio-based WebSocket RPC client implementation |
| muxio-wasm-rpc-client | extensions/muxio-wasm-rpc-client | 0.10.0-alpha | WebAssembly RPC client for browser environments |
| muxio-ext-test | extensions/muxio-ext-test | 0.10.0-alpha | Testing utilities and integration test helpers |
These seven extension crates build the RPC framework on top of the core multiplexing library. The first three (muxio-rpc-service, muxio-rpc-service-caller, muxio-rpc-service-endpoint) define the RPC abstraction layer, while the next three provide concrete transport implementations for different platforms. The muxio-ext-test crate contains shared testing infrastructure.
Sources: Cargo.toml:22-28 Cargo.toml:43-47
Example Crates
| Crate Name | Path | Version | Purpose |
|---|---|---|---|
| example-muxio-rpc-service-definition | examples/example-muxio-rpc-service-definition | 0.10.0-alpha | Shared service definitions demonstrating RpcMethodPrebuffered implementation |
| example-muxio-ws-rpc-app | examples/example-muxio-ws-rpc-app | 0.10.0-alpha | Complete WebSocket RPC application demonstrating server and client usage |
The example crates demonstrate real-world usage patterns. example-muxio-rpc-service-definition shows how to define shared service contracts that work across all platforms, while example-muxio-ws-rpc-app provides a working end-to-end application.
Sources: Cargo.toml:29-30 Cargo.toml:42
Crate Dependency Relationships
Sources: Cargo.lock:426-954
Workspace Configuration
The workspace is configured through the root Cargo.toml file, which defines shared package metadata and dependency versions.
Shared Package Metadata
All workspace members inherit these common properties:
| Property | Value |
|---|---|
| authors | ["Jeremy Harris <jeremy.harris@zenosmosis.com>"] |
| version | "0.10.0-alpha" |
| edition | "2024" |
| repository | "https://github.com/jzombie/rust-muxio" |
| license | "Apache-2.0" |
| publish | true |
Sources: Cargo.toml:1-7
Workspace Dependencies
The workspace defines shared dependency versions in the [workspace.dependencies] section to ensure consistency across all crates:
Intra-workspace Dependencies:
- Path-based references to all workspace members
- Version locked to "0.10.0-alpha"
Key External Dependencies:
- async-trait = "0.1.88" - Async trait support
- axum = { version = "0.8.4", features = ["ws"] } - Web framework with WebSocket support
- bitcode = "0.6.6" - Binary serialization
- tokio = { version = "1.45.1" } - Async runtime
- tokio-tungstenite = "0.26.2" - WebSocket implementation
- tracing = "0.1.41" - Structured logging
- xxhash-rust = { version = "0.8.15", features = ["xxh3", "const_xxh3"] } - Fast hashing for method IDs
Sources: Cargo.toml:39-64
Platform-Specific Compilation Targets
The workspace supports both native and WASM compilation targets. The core library (muxio), RPC abstraction layer (muxio-rpc-service, muxio-rpc-service-caller, muxio-rpc-service-endpoint), and service definitions are fully platform-agnostic. Transport implementations are platform-specific: Tokio-based crates compile for native targets, while muxio-wasm-rpc-client compiles to WebAssembly.
Sources: Cargo.lock:830-954
Resolver Configuration
The workspace uses Cargo's v2 resolver, declared in the root Cargo.toml.
This ensures consistent dependency resolution across all workspace members and enables Cargo's newer resolver features including better handling of target-specific dependencies.
Sources: Cargo.toml:32
Development Dependencies
The core muxio crate defines development dependencies that are only used for testing:
- bitcode (workspace version) - Used for serialization in tests
- rand = "0.9.1" - Random number generation for test data
- tokio (workspace version, features = ["full"]) - Full Tokio runtime for async tests
These dependencies are separate from the minimal runtime dependencies of the core library, maintaining its lightweight footprint in production builds.
Sources: Cargo.toml:67-70
Layered Architecture
Relevant source files
Purpose and Scope
This page documents the separation of concerns in rust-muxio's three-layer architecture: the core multiplexing layer, the RPC abstraction layer, and the transport implementation layer. Each layer has distinct responsibilities and well-defined interfaces, enabling modularity, testability, and platform independence.
For information about the specific binary protocol used in the core layer, see Binary Framing Protocol. For details on how to create service definitions in the RPC layer, see Creating Service Definitions. For information about specific transport implementations, see Transport Implementations.
Architectural Overview
The rust-muxio system is structured as three independent layers, each building upon the previous one without creating tight coupling. This design allows developers to use only the layers they need and to implement custom components at any level.
Sources: Cargo.toml:19-31 README.md:16-23 extensions/README.md
graph TB
subgraph TransportLayer["Transport Layer (Platform-Specific)"]
TokioServer["muxio-tokio-rpc-server\nRpcServer struct"]
TokioClient["muxio-tokio-rpc-client\nRpcClient struct"]
WasmClient["muxio-wasm-rpc-client\nRpcWasmClient struct"]
end
subgraph RpcLayer["RPC Abstraction Layer (Transport-Agnostic)"]
RpcService["muxio-rpc-service\nRpcMethodPrebuffered trait"]
CallerInterface["muxio-rpc-service-caller\nRpcServiceCallerInterface trait"]
EndpointInterface["muxio-rpc-service-endpoint\nRpcServiceEndpointInterface trait"]
end
subgraph CoreLayer["Core Multiplexing Layer (Runtime-Agnostic)"]
RpcDispatcher["muxio/src/rpc_dispatcher.rs\nRpcDispatcher struct"]
RpcRequest["muxio/src/rpc_request_response.rs\nRpcRequest/RpcResponse"]
BinaryFraming["muxio/src/rpc_dispatcher.rs\nBinary framing protocol"]
end
TokioServer --> CallerInterface
TokioServer --> EndpointInterface
TokioClient --> CallerInterface
WasmClient --> CallerInterface
CallerInterface --> RpcService
EndpointInterface --> RpcService
RpcService --> RpcDispatcher
CallerInterface --> RpcDispatcher
EndpointInterface --> RpcDispatcher
RpcDispatcher --> RpcRequest
RpcDispatcher --> BinaryFraming
Layer 1: Core Multiplexing Layer
The core layer, contained entirely within the muxio crate, provides transport-agnostic and runtime-agnostic stream multiplexing. This layer has zero knowledge of RPC semantics, serialization formats, or network transports.
Core Components
| Component | File Path | Responsibility |
|---|---|---|
| RpcDispatcher | muxio/src/rpc_dispatcher.rs | Manages concurrent request/response correlation and frame routing |
| RpcRequest | muxio/src/rpc_request_response.rs | Defines request structure with method ID, params, and payload |
| RpcResponse | muxio/src/rpc_request_response.rs | Defines response structure with result or error |
| RpcHeader | muxio/src/rpc_request_response.rs | Wraps requests/responses with metadata |
| Binary framing | muxio/src/rpc_dispatcher.rs | Low-level protocol for chunking and reassembling byte streams |
Key Characteristics
The core layer operates exclusively on raw bytes (Vec<u8>). It provides callbacks for receiving data and functions for sending data, but never interprets the semantic meaning of the data. The RpcDispatcher handles:
- Assigning unique request IDs for correlation
- Multiplexing multiple concurrent requests over a single connection
- Routing incoming responses to the correct waiting caller
- Fragmenting large payloads into transmission frames
- Reassembling frames into complete messages
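A toy illustration of the fragment/reassemble responsibility appears below, with an invented frame size and no headers; the real framing protocol carries per-frame metadata (see Binary Framing Protocol), so this only shows the shape of the operation.

```rust
// Illustrative only: split a payload into fixed-size chunks and rebuild it.
const MAX_FRAME_PAYLOAD: usize = 4;

fn fragment(payload: &[u8]) -> Vec<Vec<u8>> {
    payload.chunks(MAX_FRAME_PAYLOAD).map(|c| c.to_vec()).collect()
}

fn reassemble(frames: &[Vec<u8>]) -> Vec<u8> {
    frames.iter().flatten().copied().collect()
}

fn main() {
    let payload = b"hello muxio".to_vec();
    let frames = fragment(&payload);
    assert!(frames.len() > 1); // large payloads span multiple frames
    assert_eq!(reassemble(&frames), payload);
}
```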
Sources: muxio/src/rpc_dispatcher.rs muxio/src/rpc_request_response.rs README.md:28-29
Layer 2: RPC Abstraction Layer
The RPC layer provides type-safe abstractions for defining and invoking remote procedures without dictating transport implementation. This layer consists of three cooperating crates.
graph TB
subgraph ServiceDefinition["muxio-rpc-service"]
RpcMethodPrebuffered["RpcMethodPrebuffered trait\nMETHOD_ID: u64\nencode_request()\ndecode_request()\nencode_response()\ndecode_response()"]
end
subgraph CallerSide["muxio-rpc-service-caller"]
CallerInterface["RpcServiceCallerInterface trait\ncall_prebuffered()\nget_dispatcher()"]
CallImpl["RpcCallPrebuffered trait\ncall() - default implementation"]
end
subgraph EndpointSide["muxio-rpc-service-endpoint"]
EndpointInterface["RpcServiceEndpointInterface trait\nregister_prebuffered()\ndispatch_request()"]
HandlerRegistry["Method ID → Handler mapping"]
end
RpcMethodPrebuffered --> CallImpl
RpcMethodPrebuffered --> HandlerRegistry
CallerInterface --> CallImpl
EndpointInterface --> HandlerRegistry
RPC Layer Components
Trait Responsibilities
RpcMethodPrebuffered (defined in extensions/muxio-rpc-service)
This trait defines the contract for a single RPC method. Each implementation provides:
- A compile-time constant METHOD_ID generated from the method name
- Encoding/decoding functions for request parameters
- Encoding/decoding functions for response data
RpcServiceCallerInterface (defined in extensions/muxio-rpc-service-caller)
This trait abstracts the client-side capability to invoke RPC methods. Any type implementing this trait can:
- Send prebuffered requests via call_prebuffered()
- Access the underlying RpcDispatcher for low-level operations
- Be used interchangeably across native and WASM clients
RpcServiceEndpointInterface (defined in extensions/muxio-rpc-service-endpoint)
This trait abstracts the server-side capability to handle RPC requests. Any type implementing this trait can:
- Register handler functions via register_prebuffered()
- Route incoming requests to appropriate handlers based on METHOD_ID
- Execute handlers and return responses through the dispatcher
Separation of Concerns
The RPC layer enforces a clean separation:
| Concern | Responsible Component |
|---|---|
| Method signature and data format | RpcMethodPrebuffered implementation |
| Client invocation mechanics | RpcServiceCallerInterface implementation |
| Server dispatch mechanics | RpcServiceEndpointInterface implementation |
| Request correlation and multiplexing | Core layer RpcDispatcher |
| Network transmission | Transport layer implementations |
Sources: extensions/muxio-rpc-service extensions/muxio-rpc-service-caller extensions/muxio-rpc-service-endpoint README.md:46-49
Layer 3: Transport Implementation Layer
The transport layer provides concrete implementations of the RPC abstraction layer interfaces for specific runtime environments and network transports. Each implementation handles platform-specific concerns like connection management, state tracking, and async runtime integration.
graph TB
subgraph TokioServerImpl["muxio-tokio-rpc-server"]
RpcServer["RpcServer struct\nserve_with_listener()\nendpoint() → RpcEndpoint"]
RpcEndpoint["RpcEndpoint struct\nimplements RpcServiceEndpointInterface\nregister_prebuffered()"]
AxumWs["Axum WebSocket handler\ntokio-tungstenite integration"]
end
subgraph TokioClientImpl["muxio-tokio-rpc-client"]
RpcClient["RpcClient struct\nimplements RpcServiceCallerInterface\nnew(host, port)\nset_state_change_handler()"]
ClientWs["tokio-tungstenite WebSocket\nConnection management"]
end
subgraph WasmClientImpl["muxio-wasm-rpc-client"]
RpcWasmClient["RpcWasmClient struct\nimplements RpcServiceCallerInterface\nnew(url)\nwasm-bindgen bridge"]
BrowserWs["JavaScript WebSocket API\nvia wasm-bindgen"]
end
RpcServer --> RpcEndpoint
RpcServer --> AxumWs
RpcClient --> ClientWs
RpcWasmClient --> BrowserWs
Transport Implementations
Implementation Comparison
| Feature | muxio-tokio-rpc-server | muxio-tokio-rpc-client | muxio-wasm-rpc-client |
|---|---|---|---|
| Runtime | Tokio async | Tokio async | Browser event loop |
| Transport | Axum + tokio-tungstenite | tokio-tungstenite | JavaScript WebSocket |
| Interface | RpcServiceEndpointInterface | RpcServiceCallerInterface | RpcServiceCallerInterface |
| State tracking | Built-in | RpcTransportState enum | RpcTransportState enum |
| Platform | Native (server-side) | Native (client-side) | WebAssembly (browser) |
Transport Layer Responsibilities
Each transport implementation handles:
- Connection Lifecycle: Establishing, maintaining, and closing connections
- State Management: Tracking connection state and notifying callbacks via set_state_change_handler()
- Byte Transport: Reading from and writing to the underlying socket
- Dispatcher Integration: Creating an RpcDispatcher and wiring its callbacks to network I/O
- Error Propagation: Translating transport errors to RPC errors
Sources: extensions/muxio-tokio-rpc-server extensions/muxio-tokio-rpc-client extensions/muxio-wasm-rpc-client README.md:36-40
Layer Interaction and Data Flow
The following diagram traces how a single RPC call flows through all three layers, from application code down to the network and back:
sequenceDiagram
participant App as "Application Code"
participant Method as "RpcMethodPrebuffered\n(Layer 2: RPC)"
participant Caller as "RpcServiceCallerInterface\n(Layer 2: RPC)"
participant Dispatcher as "RpcDispatcher\n(Layer 1: Core)"
participant Transport as "Transport Implementation\n(Layer 3)"
participant Network as "WebSocket\nConnection"
App->>Method: Add::call(rpc_client, [1.0, 2.0, 3.0])
Method->>Method: encode_request() → Vec<u8>
Method->>Caller: call_prebuffered(METHOD_ID, request_bytes)
Caller->>Dispatcher: dispatch_request(REQUEST_ID, METHOD_ID, bytes)
Dispatcher->>Dispatcher: Store pending request with REQUEST_ID
Dispatcher->>Dispatcher: Serialize to binary frames
Dispatcher->>Transport: send_bytes_callback(frame_bytes)
Transport->>Network: Write binary data
Network->>Transport: Receive binary data
Transport->>Dispatcher: receive_bytes(frame_bytes)
Dispatcher->>Dispatcher: Reassemble frames
Dispatcher->>Dispatcher: Match REQUEST_ID to pending request
Dispatcher->>Caller: Response ready
Caller->>Method: decode_response(response_bytes)
Method->>App: Return typed result: 6.0
Layer Boundaries
The boundaries between layers are enforced through well-defined interfaces:
| Boundary | Interface | Direction |
|---|---|---|
| Application → RPC | RpcMethodPrebuffered::call() | Typed parameters → Vec<u8> |
| RPC → Core | call_prebuffered() on RpcServiceCallerInterface | Vec<u8> + METHOD_ID → Request correlation |
| Core → Transport | Callbacks (send_bytes_callback, receive_bytes) | Binary frames ↔ Network I/O |
| Transport → Network | Platform-specific APIs | Raw socket operations |
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs muxio/src/rpc_dispatcher.rs
Benefits of Layered Separation
Modularity
Each layer can be developed, tested, and evolved independently. Changes to the binary framing protocol in Layer 1 do not require modifications to Layer 2 or Layer 3 code, as long as the callback interface remains stable.
Testability
Layers can be tested in isolation:
- Core layer : Unit tests with mock callbacks can verify frame reassembly without network I/O
- RPC layer : Integration tests can use in-memory transports to verify method dispatch
- Transport layer : Integration tests can verify connection management against real servers
graph TB
SharedDef["example-muxio-rpc-service-definition\nAdd, Mult, Echo methods\nRpcMethodPrebuffered implementations"]
SharedDef --> TokioClient["muxio-tokio-rpc-client\nNative Tokio runtime"]
SharedDef --> WasmClient["muxio-wasm-rpc-client\nBrowser WebAssembly"]
SharedDef --> TokioServer["muxio-tokio-rpc-server\nServer endpoint handlers"]
TokioClient --> NativeApp["Native Application"]
WasmClient --> BrowserApp["Browser Application"]
TokioServer --> ServerApp["Server Application"]
Platform Independence
The same service definition can be used across all platforms because Layers 1 and 2 have no platform-specific dependencies:
Extensibility
New transport implementations can be added without modifying existing code:
- Implement RpcServiceCallerInterface for client-side transports
- Implement RpcServiceEndpointInterface for server-side transports
- Use the same service definitions and core dispatcher logic
Examples of potential future transports:
- HTTP/2 with binary frames
- Unix domain sockets
- Named pipes
- In-process channels for testing
Sources: README.md:34-35 README.md:42-52 extensions/README.md
Code Organization by Layer
The workspace structure directly reflects the layered architecture:
| Layer | Crates | Location |
|---|---|---|
| Core (Layer 1) | muxio | Root directory |
| RPC Abstraction (Layer 2) | muxio-rpc-service, muxio-rpc-service-caller, muxio-rpc-service-endpoint | extensions/ |
| Transport (Layer 3) | muxio-tokio-rpc-server, muxio-tokio-rpc-client, muxio-wasm-rpc-client | extensions/ |
| Service Definitions | example-muxio-rpc-service-definition | examples/ |
| Testing Utilities | muxio-ext-test | extensions/ |
Dependency Graph
This dependency structure ensures that:
- The core has no dependencies on higher layers
- The RPC abstraction layer has no knowledge of transport implementations
- Transport implementations depend on both core and RPC layers
- Service definitions depend only on muxio-rpc-service
Sources: Cargo.toml:19-31 Cargo.toml:40-47
Core Library (muxio)
Relevant source files
- README.md
- src/rpc/rpc_dispatcher.rs
- src/rpc/rpc_internals/rpc_respondable_session.rs
- src/rpc/rpc_internals/rpc_session.rs
- src/rpc/rpc_internals/rpc_stream_decoder.rs
Purpose and Scope
The muxio core library provides the foundational stream multiplexing engine that enables multiple independent data streams to coexist over a single connection. This document covers the core library's architecture, components, and design principles. The core library itself is transport-agnostic and runtime-agnostic, operating through a callback-driven model without requiring any specific async runtime.
For detailed information about specific subsystems:
- Binary framing protocol implementation: see Binary Framing Protocol
- Request correlation and stream management: see RPC Dispatcher
- Data structures for requests and responses: see Request and Response Types
For information about RPC abstractions built on top of the core: see RPC Framework
For concrete transport implementations using the core: see Transport Implementations
Sources: README.md:1-61 src/rpc/rpc_dispatcher.rs:1-457
Layered Architecture
The core library follows a three-layer design where each layer has distinct responsibilities and interfaces with adjacent layers through well-defined boundaries.
graph TB
subgraph "Layer 3: Application Interface"
RpcDispatcher["RpcDispatcher\nRequest correlation\nResponse tracking"]
end
subgraph "Layer 2: Session Management"
RpcRespondableSession["RpcRespondableSession\nResponse handler registration\nPer-request callbacks"]
RpcSession["RpcSession\nStream lifecycle management\nStream ID allocation"]
end
subgraph "Layer 1: Binary Protocol"
RpcStreamEncoder["RpcStreamEncoder\nEncodes outbound streams"]
RpcStreamDecoder["RpcStreamDecoder\nDecodes inbound streams"]
FrameMuxStreamDecoder["FrameMuxStreamDecoder\nFrame reassembly"]
end
subgraph "Transport Layer (Not in Core)"
Transport["WebSocket / TCP / Custom\nByte transmission"]
end
RpcDispatcher -->|uses| RpcRespondableSession
RpcRespondableSession -->|wraps| RpcSession
RpcSession -->|creates| RpcStreamEncoder
RpcSession -->|manages| RpcStreamDecoder
RpcSession -->|feeds bytes to| FrameMuxStreamDecoder
RpcStreamEncoder -->|emits bytes via callback| Transport
Transport -->|passes received bytes| FrameMuxStreamDecoder
Architecture Layers
Sources: src/rpc/rpc_dispatcher.rs:20-51 src/rpc/rpc_internals/rpc_session.rs:15-24 src/rpc/rpc_internals/rpc_respondable_session.rs:14-28
| Layer | Component | Responsibility | File Location |
|---|---|---|---|
| 3 | RpcDispatcher | Request/response correlation, queue management, ID generation | src/rpc/rpc_dispatcher.rs |
| 2 | RpcRespondableSession | Response handler registration, prebuffering | src/rpc/rpc_internals/rpc_respondable_session.rs |
| 2 | RpcSession | Stream ID allocation, frame routing | src/rpc/rpc_internals/rpc_session.rs |
| 1 | RpcStreamEncoder | Outbound stream encoding | src/rpc/rpc_internals/rpc_stream_encoder.rs |
| 1 | RpcStreamDecoder | Inbound stream decoding | src/rpc/rpc_internals/rpc_stream_decoder.rs |
| 1 | FrameMuxStreamDecoder | Frame multiplexing/demultiplexing | src/frame/ |
Sources: src/rpc/rpc_dispatcher.rs:36-50 src/rpc/rpc_internals/rpc_session.rs:20-24 README.md:28-34
Core Components
RpcSession
RpcSession is the low-level stream multiplexing engine. It manages stream ID allocation, maintains per-stream decoder state, and routes incoming frames to the appropriate decoder.
Key responsibilities:
- Allocates monotonically increasing stream IDs using next_stream_id
- Maintains a HashMap<u32, RpcStreamDecoder> mapping stream IDs to decoders
- Processes incoming bytes through FrameMuxStreamDecoder
- Cleans up completed or cancelled streams
Public API:
- init_request() - Creates a new RpcStreamEncoder for an outbound stream
- read_bytes() - Processes incoming bytes and invokes event callbacks
Sources: src/rpc/rpc_internals/rpc_session.rs:15-117
RpcRespondableSession
RpcRespondableSession wraps RpcSession and adds response handler tracking. It allows callers to register per-request callbacks that are invoked when response events arrive.
Key responsibilities:
- Maintains response_handlers: HashMap<u32, Box<dyn FnMut(RpcStreamEvent)>> for per-request callbacks
- Provides an optional catch_all_response_handler for unmatched events
- Implements prebuffering logic to accumulate payload chunks into single events
- Manages prebuffering_flags: HashMap<u32, bool> to control buffering per request
Public API:
- init_respondable_request() - Starts a request with an optional response callback and prebuffering
- start_reply_stream() - Initiates a response stream
- set_catch_all_response_handler() - Registers a global fallback handler
- read_bytes() - Processes bytes and routes events to registered handlers
Sources: src/rpc/rpc_internals/rpc_respondable_session.rs:14-178
RpcDispatcher
RpcDispatcher is the highest-level component, providing request correlation and queue management. It generates unique request IDs, tracks active requests in a shared queue, and handles request/response lifecycle.
Key responsibilities:
- Generates unique rpc_request_id values via next_rpc_request_id: u32
- Maintains rpc_request_queue: Arc<Mutex<VecDeque<(u32, RpcRequest)>>>
- Installs a catch-all response handler to populate the request queue
- Provides methods to query, finalize, and delete requests from the queue
Public API:
- call() - Initiates an RPC call with an RpcRequest and returns an RpcStreamEncoder
- respond() - Sends an RPC response with an RpcResponse
- read_bytes() - Processes incoming bytes and returns a list of active request IDs
- get_rpc_request() - Retrieves a request from the queue by ID
- is_rpc_request_finalized() - Checks if a request has received all payload chunks
- delete_rpc_request() - Removes and returns a request from the queue
- fail_all_pending_requests() - Cancels all pending requests with an error
Sources: src/rpc/rpc_dispatcher.rs:36-457
Component Relationships
Initialization and Layering
Sources: src/rpc/rpc_dispatcher.rs:59-71 src/rpc/rpc_internals/rpc_respondable_session.rs:30-39 src/rpc/rpc_internals/rpc_session.rs:26-33
Stream Creation Flow
Sources: src/rpc/rpc_dispatcher.rs:226-286 src/rpc/rpc_internals/rpc_respondable_session.rs:42-68 src/rpc/rpc_internals/rpc_session.rs:35-50
Key Design Principles
Non-Async Callback-Driven Model
The core library does not use async/await or any specific async runtime. Instead, it operates through callbacks:
- Outbound data: Components accept on_emit callbacks implementing the RpcEmit trait
- Inbound events: Components invoke event handlers implementing RpcResponseHandler
- Thread safety: Shared state uses Arc<Mutex<>> for safe concurrent access
This design enables the core library to work in WASM environments, multithreaded native applications, and any async runtime without modification.
Sources: src/rpc/rpc_dispatcher.rs:26-30 README.md:34-35
Transport and Runtime Agnostic
The core library has zero dependencies on specific transports or async runtimes:
- Bytes are passed in/out via callbacks, not I/O operations
- No assumptions about WebSocket, TCP, or any protocol
- No tokio, async-std, or runtime dependencies in the core
Extension crates (e.g., muxio-tokio-rpc-client, muxio-wasm-rpc-client) provide transport bindings.
Sources: README.md:34-40 src/rpc/rpc_dispatcher.rs:20-35
Request Correlation via IDs
The core library uses two ID systems for multiplexing:
| ID Type | Scope | Generated By | Purpose |
|---|---|---|---|
| stream_id | Transport frame | RpcSession.next_stream_id | Distinguishes interleaved frame streams |
| rpc_request_id | RPC protocol | RpcDispatcher.next_rpc_request_id | Correlates requests with responses |
Both use monotonically increasing u32 values via increment_u32_id(). The stream_id operates at the framing layer, while rpc_request_id operates at the RPC layer and is encoded in the RpcHeader.
Sources: src/rpc/rpc_dispatcher.rs:41-42 src/rpc/rpc_internals/rpc_session.rs:21 src/utils/increment_u32_id.rs
Mutex Poisoning Policy
The RpcDispatcher maintains a shared rpc_request_queue protected by Mutex. If the mutex becomes poisoned (a thread panicked while holding the lock), the dispatcher panics immediately rather than attempting recovery:
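A minimal sketch of this policy, assuming the Arc<Mutex<VecDeque<...>>> queue type shown elsewhere on this page; the function name and panic message are illustrative, not copied from the source:

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex, MutexGuard};

// Illustrative only: acquire the shared queue lock, and abort loudly if a
// previous panic poisoned it instead of continuing with suspect state.
fn lock_queue<T>(queue: &Arc<Mutex<VecDeque<T>>>) -> MutexGuard<'_, VecDeque<T>> {
    queue
        .lock()
        .unwrap_or_else(|_| panic!("rpc_request_queue mutex poisoned; refusing to continue"))
}
```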
This is a deliberate design choice prioritizing correctness over availability. A poisoned queue likely indicates corrupted state, and continuing execution could lead to silent data loss or incorrect routing.
Sources: src/rpc/rpc_dispatcher.rs:86-118
Stream Lifecycle Management
Outbound Request Lifecycle
Sources: src/rpc/rpc_dispatcher.rs:226-286
Inbound Response Lifecycle
Sources: src/rpc/rpc_dispatcher.rs:98-209 src/rpc/rpc_internals/rpc_stream_decoder.rs:53-185
Decoder State Machine
The RpcStreamDecoder maintains internal state as frames arrive:
Sources: src/rpc/rpc_internals/rpc_stream_decoder.rs:11-186
Request Queue Management
The RpcDispatcher provides a shared request queue accessible to application code for inspecting and managing active requests.
Queue Operations
| Method | Return Type | Purpose |
|---|---|---|
| read_bytes(&mut self, bytes: &[u8]) | Result<Vec<u32>, FrameDecodeError> | Processes incoming bytes, returns active request IDs |
| get_rpc_request(&self, header_id: u32) | Option<MutexGuard<VecDeque<(u32, RpcRequest)>>> | Locks queue if header_id exists |
| is_rpc_request_finalized(&self, header_id: u32) | Option<bool> | Checks if request received End frame |
| delete_rpc_request(&self, header_id: u32) | Option<RpcRequest> | Removes and returns request from queue |
| fail_all_pending_requests(&mut self, error: FrameDecodeError) | () | Cancels all pending requests |
Sources: src/rpc/rpc_dispatcher.rs:362-456
Queue Entry Structure
Each entry in the rpc_request_queue is a tuple:
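In sketch form (the alias names and the generic Req parameter below are ours, standing in for muxio's RpcRequest):

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};

// Hypothetical aliases for illustration; `Req` stands in for muxio's RpcRequest type.
type QueueEntry<Req> = (u32, Req);                        // (rpc_request_id, request data)
type RpcRequestQueue<Req> = Arc<Mutex<VecDeque<QueueEntry<Req>>>>;
```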
Where:
- The u32 is the rpc_request_id (also called header_id in the API)
- The RpcRequest contains: rpc_method_id: u64, rpc_param_bytes: Option<Vec<u8>>, rpc_prebuffered_payload_bytes: Option<Vec<u8>>, and is_finalized: bool
The queue is populated automatically by the catch-all response handler installed during RpcDispatcher::new().
Sources: src/rpc/rpc_dispatcher.rs:45-50 src/rpc/rpc_dispatcher.rs:98-209
graph LR
Dispatcher["RpcDispatcher"]
Encoder["RpcStreamEncoder"]
OnEmit["on_emit callback\n(implements RpcEmit)"]
Transport["Transport Implementation\n(e.g., WebSocket)"]
Dispatcher -->|call creates| Encoder
Encoder -->|write_bytes emits via| OnEmit
OnEmit -->|sends bytes to| Transport
note1["Encoder chunks large payloads\nAdds frame headers\nNo I/O performed"]
Integration with Transport Layer
The core library interfaces with transport implementations through callback boundaries, never performing I/O directly.
Outbound Data Flow
Sources: src/rpc/rpc_dispatcher.rs:226-286
Inbound Data Flow
Sources: src/rpc/rpc_dispatcher.rs:362-374
Transport Requirements
Any transport implementation must:
- Provide byte transmission: Implement or accept callbacks matching the RpcEmit trait signature
- Feed received bytes: Call RpcDispatcher.read_bytes() with incoming data
- Handle connection lifecycle: Call fail_all_pending_requests() on disconnection
- Respect chunk boundaries: The core chunks large payloads; the transport must transmit frames as-is (see the sketch below)
The core library makes no assumptions about:
- Connection establishment
- Error handling semantics
- Threading model
- Async vs sync execution
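A rough sketch of the transport-side glue implied by the requirements above; the import paths are assumptions, and only methods listed in the Queue Operations table are used:

```rust
// Paths are assumed; adjust to wherever the crate actually exposes these types.
use muxio::frame::FrameDecodeError;
use muxio::rpc::RpcDispatcher;

/// Feed bytes received from the socket into the dispatcher.
fn on_transport_bytes(dispatcher: &mut RpcDispatcher, incoming: &[u8]) {
    match dispatcher.read_bytes(incoming) {
        Ok(_active_request_ids) => { /* inspect or finalize requests as needed */ }
        Err(_frame_error) => { /* surface the decode error to the application */ }
    }
}

/// Notify all pending response handlers when the connection drops.
fn on_transport_closed(dispatcher: &mut RpcDispatcher, reason: FrameDecodeError) {
    dispatcher.fail_all_pending_requests(reason);
}
```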
Sources: README.md:34-40 src/rpc/rpc_dispatcher.rs:362-374
Thread Safety and Concurrency
The core library is designed for safe concurrent access across multiple threads:
| Component | Thread Safety Mechanism | Purpose |
|---|---|---|
| rpc_request_queue | Arc<Mutex<VecDeque<>>> | Allows multiple threads to inspect/modify queue |
| Response handlers | Box<dyn FnMut + Send> | Handlers can be called from any thread |
| RpcDispatcher | Not Sync, use per-connection | Each connection should have its own dispatcher |
Per-Connection Isolation
The RpcDispatcher documentation explicitly states:
IMPORTANT: A unique dispatcher should be used per-client.
This design ensures that request IDs and stream IDs do not collide across different connections. Transport implementations should create one RpcDispatcher instance per active connection.
Sources: src/rpc/rpc_dispatcher.rs:35 src/rpc/rpc_dispatcher.rs:45-50
Error Handling
The core library propagates errors through Result types and error events:
| Error Type | Used By | Represents |
|---|---|---|
| FrameEncodeError | Encoding operations | Failed to create valid frames |
| FrameDecodeError | Decoding operations | Invalid or corrupt incoming frames |
| RpcStreamEvent::Error | Event callbacks | Runtime stream processing errors |
When a decoding error occurs, the core library:
- Removes the affected stream decoder from rpc_stream_decoders
- Emits an RpcStreamEvent::Error event
- Returns Err(FrameDecodeError) to the caller
For connection-level failures, fail_all_pending_requests() should be called to notify all pending response handlers.
Sources: src/rpc/rpc_dispatcher.rs:427-456 src/rpc/rpc_internals/rpc_session.rs:53-117
Summary
The muxio core library provides a three-layer architecture for stream multiplexing:
- Binary Protocol Layer: Frame encoding/decoding with minimal overhead
- Session Management Layer: Stream lifecycle, ID allocation, handler registration
- Application Interface Layer: Request correlation, queue management, high-level operations
The non-async, callback-driven design enables deployment across native, WASM, and any async runtime without modification. Transport implementations integrate by passing bytes through the read_bytes() and on_emit callback boundaries.
For implementation details of specific layers, see:
- Binary framing protocol: Binary Framing Protocol
- Request correlation and stream management: RPC Dispatcher
- Request and response data structures: Request and Response Types
Sources: README.md:12-61 src/rpc/rpc_dispatcher.rs:20-51 src/rpc/rpc_internals/rpc_session.rs:15-24
Binary Framing Protocol
Relevant source files
Purpose and Scope
The Binary Framing Protocol is the foundational layer of the muxio system that enables efficient stream multiplexing over a single connection. This protocol defines the low-level binary format for chunking data into discrete frames, assigning stream identifiers, and managing frame lifecycle (start, payload, end, cancel). It operates below the RPC layer and is completely agnostic to message semantics.
For information about how the RPC layer uses these frames for request/response correlation, see RPC Dispatcher. For details on RPC-level message structures, see Request and Response Types.
Sources: README.md:28-29 DRAFT.md:11-21
Overview
The binary framing protocol provides a minimal, low-overhead mechanism for transmitting multiple independent data streams over a single bidirectional connection. Each frame contains:
- A stream identifier to distinguish concurrent streams
- A frame type indicating the frame's purpose in the stream lifecycle
- A payload containing application data
This design enables interleaved transmission of frames from different streams without requiring separate network connections, while maintaining correct reassembly on the receiving end.
Sources: README.md:20-32 src/rpc/rpc_internals/rpc_session.rs:15-24
Frame Structure
Basic Frame Layout
Frame Structure Diagram
Each frame consists of a header section and a payload section. The header contains metadata for routing and lifecycle management, while the payload carries the actual application data.
Sources: src/rpc/rpc_internals/rpc_session.rs:61-116
Frame Header Fields
| Field | Type | Size | Description |
|---|---|---|---|
| stream_id | u32 | 4 bytes | Unique identifier for the stream this frame belongs to |
| kind | FrameKind | 1 byte | Frame type: Start, Payload, End, or Cancel |
The header provides the minimal information needed to route frames to the correct stream decoder and manage stream lifecycle. Additional protocol-specific headers (such as RPC headers) are encoded within the frame payload itself.
Sources: src/rpc/rpc_internals/rpc_session.rs:66-68 src/rpc/rpc_internals/rpc_stream_decoder.rs:53-56
Frame Types (FrameKind)
The protocol defines four frame types that manage stream lifecycle:
Frame Type Enumeration
| Frame Kind | Purpose | Payload Behavior |
|---|---|---|
| Start | First frame of a new stream | Contains initial data, often including higher-layer headers |
| Payload | Continuation frame carrying data | Contains chunked application data |
| End | Final frame of a stream | May contain remaining data; signals normal completion |
| Cancel | Aborts stream transmission | Signals abnormal termination; no further frames expected |
Frame Lifecycle State Machine
The Start frame initiates a stream and typically carries protocol headers (such as RPC headers) in its payload. Subsequent Payload frames carry chunked data. The stream terminates with either an End frame (normal completion) or a Cancel frame (abnormal termination).
Sources: src/rpc/rpc_internals/rpc_session.rs:98-100 src/rpc/rpc_internals/rpc_stream_decoder.rs:156-167
Stream Multiplexing
sequenceDiagram
participant App as "Application"
participant Session as "RpcSession"
participant Encoder as "RpcStreamEncoder"
App->>Session: init_request()
Session->>Session: stream_id = next_stream_id
Session->>Session: next_stream_id++
Session->>Encoder: new(stream_id, ...)
Encoder-->>Session: RpcStreamEncoder
Session-->>App: encoder with stream_id
Note over Session: Each request gets unique stream_id
Stream ID Allocation
Each stream is assigned a unique u32 identifier by the RpcSession. Stream IDs are allocated sequentially using an incrementing counter, ensuring that each initiated stream receives a distinct identifier.
Stream ID Allocation Flow
Sources: src/rpc/rpc_internals/rpc_session.rs:35-50
Concurrent Stream Handling
The FrameMuxStreamDecoder receives interleaved frames from multiple streams and demultiplexes them based on stream_id. Each stream maintains its own RpcStreamDecoder instance in a HashMap, enabling independent decoding state management.
Stream Demultiplexing Architecture
graph LR
Input["Incoming Bytes"]
Mux["FrameMuxStreamDecoder"]
subgraph "Per-Stream Decoders"
D1["RpcStreamDecoder\nstream_id: 1"]
D2["RpcStreamDecoder\nstream_id: 2"]
D3["RpcStreamDecoder\nstream_id: 3"]
end
Events["RpcStreamEvent[]"]
Input --> Mux
Mux -->|stream_id: 1| D1
Mux -->|stream_id: 2| D2
Mux -->|stream_id: 3| D3
D1 --> Events
D2 --> Events
D3 --> Events
The session maintains a HashMap<u32, RpcStreamDecoder> where the key is the stream_id. When a frame arrives, the session looks up the appropriate decoder, creates one if it doesn't exist, and delegates frame processing to that decoder.
Sources: src/rpc/rpc_internals/rpc_session.rs:20-24 src/rpc/rpc_internals/rpc_session.rs:68-74
Frame Encoding and Decoding
Encoding Process
Frame encoding is performed by the RpcStreamEncoder, which chunks large payloads into multiple frames based on a configurable max_chunk_size:
Frame Encoding Process
graph TB
Input["Application Data"]
Header["RPC Header\n(First Frame)"]
Chunker["Payload Chunker\n(max_chunk_size)"]
subgraph "Emitted Frames"
F1["Frame 1: Start\n+ RPC Header\n+ Data Chunk 1"]
F2["Frame 2: Payload\n+ Data Chunk 2"]
F3["Frame 3: Payload\n+ Data Chunk 3"]
F4["Frame 4: End\n+ Data Chunk 4"]
end
Transport["on_emit Callback"]
Input --> Header
Input --> Chunker
Header --> F1
Chunker --> F1
Chunker --> F2
Chunker --> F3
Chunker --> F4
F1 --> Transport
F2 --> Transport
F3 --> Transport
F4 --> Transport
The encoder emits frames via a callback function (on_emit), allowing the transport layer to send data immediately without buffering entire streams in memory.
Sources: src/rpc/rpc_internals/rpc_session.rs:35-50
Decoding Process
Frame decoding occurs in two stages:
- Frame-level decoding by FrameMuxStreamDecoder: Parses incoming bytes into individual DecodedFrame structures
- Stream-level decoding by RpcStreamDecoder: Reassembles frames into complete messages and emits RpcStreamEvents
sequenceDiagram
participant Input as "Network Bytes"
participant FrameDecoder as "FrameMuxStreamDecoder"
participant Session as "RpcSession"
participant StreamDecoder as "RpcStreamDecoder"
participant Handler as "Event Handler"
Input->>FrameDecoder: read_bytes(input)
FrameDecoder->>FrameDecoder: Parse frame headers
FrameDecoder-->>Session: DecodedFrame[]
loop "For each frame"
Session->>Session: Get/Create decoder by stream_id
Session->>StreamDecoder: decode_rpc_frame(frame)
StreamDecoder->>StreamDecoder: Parse RPC header\n(if Start frame)
StreamDecoder->>StreamDecoder: Accumulate payload
StreamDecoder-->>Session: RpcStreamEvent[]
loop "For each event"
Session->>Handler: on_rpc_stream_event(event)
end
alt "Frame is End or Cancel"
Session->>Session: Remove stream decoder
end
end
Frame Decoding Sequence
The RpcSession.read_bytes() method orchestrates this process, managing decoder lifecycle and error propagation.
Sources: src/rpc/rpc_internals/rpc_session.rs:53-117 src/rpc/rpc_internals/rpc_stream_decoder.rs:53-186
RPC Frame Header Structure
While the core framing protocol is agnostic to payload contents, the RPC layer adds its own header structure within the frame payload. The first frame of each RPC stream (the Start frame) contains an RPC header followed by optional data:
RPC Header Layout
| Offset | Field | Type | Size | Description |
|---|---|---|---|---|
| 0 | rpc_msg_type | RpcMessageType | 1 byte | Call or Response indicator |
| 1-4 | rpc_request_id | u32 | 4 bytes | Request correlation ID |
| 5-12 | rpc_method_id | u64 | 8 bytes | Method identifier hash |
| 13-14 | metadata_length | u16 | 2 bytes | Length of metadata section |
| 15+ | metadata_bytes | Vec<u8> | Variable | Serialized metadata |
The total RPC frame header size is 15 + metadata_length bytes. This header is parsed by the RpcStreamDecoder when processing the first frame of a stream.
Sources: src/rpc/rpc_internals/rpc_stream_decoder.rs:60-125 src/rpc/rpc_internals/rpc_stream_decoder.rs:1-8
Frame Flow Through System Layers
Complete Frame Flow Diagram
This diagram illustrates how frames flow from application code through the framing protocol to the network, and how they are decoded and reassembled on the receiving end. The framing layer is responsible for the chunking and frame type management, while higher layers handle RPC semantics.
Sources: src/rpc/rpc_internals/rpc_session.rs:1-118 src/rpc/rpc_internals/rpc_stream_decoder.rs:1-187
stateDiagram-v2
[*] --> AwaitHeader : New stream
AwaitHeader --> AwaitHeader : Partial header (buffer incomplete)
AwaitHeader --> AwaitPayload : Header complete (emit Header event)
AwaitPayload --> AwaitPayload : Payload frame (emit PayloadChunk)
AwaitPayload --> Done : End frame (emit End event)
AwaitPayload --> [*] : Cancel frame (error + cleanup)
Done --> [*] : Stream complete
note right of AwaitHeader
Buffer accumulates bytes
until full RPC header
can be parsed
end note
note right of AwaitPayload
Each payload frame
emits immediately,
no buffering
end note
Frame Reassembly State Machine
The RpcStreamDecoder maintains internal state to reassemble frames into complete messages:
RpcStreamDecoder State Machine
The decoder starts in AwaitHeader state, buffering bytes until the complete RPC header (including variable-length metadata) is received. Once the header is parsed, it transitions to AwaitPayload state, where subsequent frames are emitted as PayloadChunk events without additional buffering. The stream completes when an End frame is received or terminates abnormally on a Cancel frame.
Sources: src/rpc/rpc_internals/rpc_stream_decoder.rs:11-24 src/rpc/rpc_internals/rpc_stream_decoder.rs:59-183
Error Handling in Frame Processing
The framing protocol defines several error conditions that can occur during decoding:
| Error Type | Condition | Recovery Strategy |
|---|---|---|
| CorruptFrame | Invalid frame structure or missing required fields | Stream decoder is removed; error event emitted |
| ReadAfterCancel | Frame received after Cancel frame | Stop processing; stream is invalid |
| Decode error | Frame parsing fails in FrameMuxStreamDecoder | Error event emitted; continue processing other streams |
When an error occurs during frame decoding, the RpcSession removes the stream decoder from its HashMap, emits an RpcStreamEvent::Error, and propagates the error to the caller. This ensures that corrupted streams don't affect other concurrent streams.
Sources: src/rpc/rpc_internals/rpc_session.rs:80-94 src/rpc/rpc_internals/rpc_stream_decoder.rs:165-166 src/rpc/rpc_internals/rpc_session.rs:98-100
Cleanup and Resource Management
Stream decoders are automatically cleaned up in the following scenarios:
- Normal completion: When an End frame is received
- Abnormal termination: When a Cancel frame is received
- Decode errors: When frame decoding fails
The cleanup process removes the RpcStreamDecoder from the session's HashMap, freeing associated resources:
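In sketch form, assuming the HashMap<u32, RpcStreamDecoder> described earlier (the helper function is illustrative, not the actual method):

```rust
use std::collections::HashMap;

// Illustrative: dropping the map entry releases the decoder's buffered state.
fn cleanup_stream<Decoder>(decoders: &mut HashMap<u32, Decoder>, stream_id: u32) {
    decoders.remove(&stream_id);
}
```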
This design ensures that long-lived sessions don't accumulate memory for completed streams.
Sources: src/rpc/rpc_internals/rpc_session.rs:74-100
Key Design Characteristics
The binary framing protocol exhibits several important design properties:
Minimal Overhead: Frame headers contain only essential fields (stream_id and kind), minimizing bytes-on-wire for high-throughput scenarios.
Stream Independence: Each stream maintains separate decoding state, enabling true concurrent multiplexing without head-of-line blocking between streams.
Callback-Driven Architecture: Frame encoding emits bytes immediately via callbacks, avoiding the need to buffer entire messages in memory. Frame decoding similarly emits events immediately as frames complete.
Transport Agnostic: The protocol operates on &[u8] byte slices and emits bytes via callbacks, making no assumptions about the underlying transport (WebSocket, TCP, in-memory channels, etc.).
Runtime Agnostic: The core framing logic uses synchronous control flow with callbacks, requiring no specific async runtime and enabling integration with both Tokio and WASM environments.
Sources: README.md:30-35 DRAFT.md:48-52 src/rpc/rpc_internals/rpc_session.rs:35-50
RPC Dispatcher
Relevant source files
- src/rpc/rpc_dispatcher.rs
- src/rpc/rpc_internals/rpc_header.rs
- src/rpc/rpc_internals/rpc_respondable_session.rs
- src/rpc/rpc_request_response.rs
- tests/rpc_dispatcher_tests.rs
Purpose and Scope
The RpcDispatcher is the central request coordination component in the muxio core library. It manages the lifecycle of RPC requests and responses, handling request correlation, stream multiplexing, and response routing over a binary framed transport.
This document covers the internal architecture, request/response flow, queue management, and usage patterns of the RpcDispatcher. For information about the underlying binary framing protocol, see Binary Framing Protocol. For details on the RpcRequest and RpcResponse data structures, see Request and Response Types.
Sources: src/rpc/rpc_dispatcher.rs:1-458
Overview
The RpcDispatcher operates in a non-async , callback-based model that is compatible with both WASM environments and multithreaded runtimes. It provides:
- Request Correlation : Assigns unique IDs to outbound requests and matches inbound responses
- Stream Multiplexing : Allows multiple concurrent requests over a single connection
- Chunked Streaming : Supports payloads split across multiple frames
- Response Buffering : Optional prebuffering of complete responses before delivery
- Mid-stream Cancellation : Ability to abort in-flight requests
- Thread-safe Queue : Synchronized tracking of inbound response metadata
The dispatcher wraps a RpcRespondableSession and maintains a thread-safe request queue for tracking active requests.
Important : A unique dispatcher instance should be used per client connection.
Sources: src/rpc/rpc_dispatcher.rs:20-51
Core Components
Diagram: RpcDispatcher Internal Structure
| Component | Type | Purpose |
|---|---|---|
| rpc_respondable_session | RpcRespondableSession<'a> | Manages stream lifecycles and response handlers |
| next_rpc_request_id | u32 | Monotonic ID generator for outbound requests |
| rpc_request_queue | Arc<Mutex<VecDeque<(u32, RpcRequest)>>> | Thread-safe queue of active inbound responses |
Sources: src/rpc/rpc_dispatcher.rs:36-51 src/rpc/rpc_internals/rpc_respondable_session.rs:21-28
Request Lifecycle
Outbound Request Flow
Diagram: Outbound Request Encoding and Transmission
The call() method follows these steps:
1. ID Assignment: Increments next_rpc_request_id to generate a unique request identifier
2. Header Construction: Creates an RpcHeader with RpcMessageType::Call, the request ID, method ID, and metadata bytes
3. Handler Registration: Installs the optional on_response callback in the session's response handler map
4. Stream Initialization: Calls init_respondable_request() to obtain an RpcStreamEncoder
5. Payload Transmission: Writes prebuffered payload bytes if provided
6. Finalization: Optionally ends the stream if is_finalized is true
7. Encoder Return: Returns the encoder to the caller for additional streaming if needed
Sources: src/rpc/rpc_dispatcher.rs:226-286
Inbound Response Flow
Diagram: Inbound Response Processing
The read_bytes() method processes incoming data in the following stages:
1. Frame Decoding: Passes bytes to RpcRespondableSession.read_bytes() for frame reassembly
2. Event Handling: Decoded frames trigger RpcStreamEvent callbacks
3. Queue Population: The catch-all handler maintains the request queue
4. Active IDs Return: Returns a list of all request IDs currently in the queue
Sources: src/rpc/rpc_dispatcher.rs:362-374 src/rpc/rpc_internals/rpc_respondable_session.rs:93-173
Response Handling
Handler Registration and Dispatch
The dispatcher supports two types of response handlers:
Specific Response Handlers
Per-request handlers registered via the on_response parameter in call(). These are stored in the RpcRespondableSession.response_handlers map, keyed by request ID.
response_handlers: HashMap<u32, Box<dyn FnMut(RpcStreamEvent) + Send + 'a>>
Handler Lifecycle:
- Registered when call() is invoked with a non-None on_response
- Invoked for each RpcStreamEvent (Header, PayloadChunk, End)
- Automatically removed when the stream ends or errors
Sources: src/rpc/rpc_internals/rpc_respondable_session.rs:24
Catch-All Response Handler
A global fallback handler that receives all response events, regardless of whether a specific handler is registered. Installed via init_catch_all_response_handler() during dispatcher construction.
Primary Responsibilities:
- Populate the rpc_request_queue with incoming response metadata
- Accumulate payload bytes across multiple chunks
- Mark requests as finalized when stream ends
Sources: src/rpc/rpc_dispatcher.rs:99-209
Prebuffering vs. Streaming
The dispatcher supports two response delivery modes, controlled by the prebuffer_response parameter in call():
| Mode | Behavior | Use Case |
|---|---|---|
| Prebuffering (true) | Accumulates all payload chunks into a single buffer, then delivers once via PayloadChunk event when stream ends | Complete request/response RPCs where the full payload is needed before processing |
| Streaming (false) | Delivers each payload chunk immediately as it arrives | Progressive rendering, large file transfers, streaming data |
Prebuffering Implementation:
- Tracks prebuffering flags per request: prebuffering_flags: HashMap<u32, bool>
- Accumulates bytes: prebuffered_responses: HashMap<u32, Vec<u8>>
- On RpcStreamEvent::End, emits the accumulated buffer as a single PayloadChunk, then emits End
Sources: src/rpc/rpc_internals/rpc_respondable_session.rs:112-147
Request Queue Management
Queue Structure
Arc<Mutex<VecDeque<(u32, RpcRequest)>>>
The queue stores tuples of (request_id, RpcRequest) for all active inbound responses. Each entry represents a response that has received at least a Header event but may not yet be finalized.
Key Fields in QueuedRpcRequest:
| Field | Type | Description |
|---|---|---|
| rpc_method_id | u64 | Method identifier from the response header |
| rpc_param_bytes | Option<Vec<u8>> | Metadata bytes (converted from header) |
| rpc_prebuffered_payload_bytes | Option<Vec<u8>> | Accumulated payload chunks |
| is_finalized | bool | Whether End event has been received |
Sources: src/rpc/rpc_dispatcher.rs:50 src/rpc/rpc_request_response.rs:9-33
Queue Operations
Diagram: Request Queue Mutation Operations
Core Methods
get_rpc_request(header_id: u32) -> Option<MutexGuard<VecDeque<(u32, RpcRequest)>>>
Returns a lock on the entire queue if the specified request exists. The caller must search the queue again within the guard.
Rationale: Cannot return a reference to a queue element directly due to Rust's borrow checker; the reference would outlive the MutexGuard.
Sources: src/rpc/rpc_dispatcher.rs:381-394
is_rpc_request_finalized(header_id: u32) -> Option<bool>
Checks if a request has received its End event. Returns None if the request is not in the queue.
Sources: src/rpc/rpc_dispatcher.rs:399-405
delete_rpc_request(header_id: u32) -> Option<RpcRequest>
Removes a request from the queue and transfers ownership to the caller. Typically used after confirming is_finalized is true.
Sources: src/rpc/rpc_dispatcher.rs:411-420
sequenceDiagram
participant App as "Server Application"
participant Dispatcher as "RpcDispatcher"
participant Session as "RpcRespondableSession"
participant Encoder as "RpcStreamEncoder"
participant Emit as "on_emit Callback"
App->>Dispatcher: respond(rpc_response, max_chunk_size, on_emit)
Dispatcher->>Dispatcher: Build RpcHeader
Note over Dispatcher: RpcMessageType::Response\nrequest_id, method_id\nmetadata = [result_status]
Dispatcher->>Session: start_reply_stream(header, ...)
Session-->>Dispatcher: RpcStreamEncoder
Dispatcher->>Encoder: write_bytes(payload)
Encoder->>Emit: Emit binary frames
alt "is_finalized == true"
Dispatcher->>Encoder: flush()
Dispatcher->>Encoder: end_stream()
end
Dispatcher-->>App: Return encoder
Outbound Response Flow
When acting as a server, the dispatcher sends responses using the respond() method:
Diagram: Server-Side Response Encoding
Key Differences from call():
- No request ID generation; uses rpc_response.rpc_request_id from the original request
- RpcMessageType::Response is set instead of Call
- Metadata contains only the rpc_result_status byte (if present)
- No response handler registration (responses don't receive responses)
Sources: src/rpc/rpc_dispatcher.rs:298-337
Error Handling and Cleanup
Mutex Poisoning
The rpc_request_queue is protected by a Mutex. If a thread panics while holding the lock, the mutex becomes poisoned.
Dispatcher Behavior on Poisoned Lock:
The catch-all handler deliberately panics when encountering a poisoned mutex.
Rationale:
- A poisoned queue indicates inconsistent internal state
- Continuing could cause incorrect routing, data loss, or undefined behavior
- Fast failure provides better safety and debugging signals
Alternative Approaches:
- Graceful recovery could be implemented with a configurable panic policy
- Error reporting mechanism could replace panics
Sources: src/rpc/rpc_dispatcher.rs:85-118
graph TB
ConnectionDrop["Connection Drop Detected"]
FailAll["fail_all_pending_requests(error)"]
TakeHandlers["Take ownership of\nresponse_handlers map"]
IterateHandlers["For each (request_id, handler)"]
CreateError["Create RpcStreamEvent::Error"]
InvokeHandler["Invoke handler(error_event)"]
ClearMap["response_handlers now empty"]
ConnectionDrop --> FailAll
FailAll --> TakeHandlers
TakeHandlers --> IterateHandlers
IterateHandlers --> CreateError
CreateError --> InvokeHandler
InvokeHandler --> ClearMap
Connection Failure Cleanup
When a transport connection drops, pending requests must be notified to prevent indefinite hangs. The fail_all_pending_requests() method handles this:
Diagram: Pending Request Cleanup Flow
Implementation Details:
- Ownership Transfer: Uses std::mem::take() to move handlers out of the map, leaving it empty
- Synthetic Error Event: Creates an RpcStreamEvent::Error with the provided FrameDecodeError
- Handler Invocation: Calls each handler with the error event
- Automatic Cleanup: Handlers are dropped after invocation
Usage Context: Transport implementations call this method in their disconnection handlers (e.g., WebSocket close events).
Sources: src/rpc/rpc_dispatcher.rs:427-456
Thread Safety
The dispatcher achieves thread safety through:
Shared Request Queue
- Arc: Enables shared ownership across threads and callbacks
- Mutex: Ensures exclusive access during mutations
- VecDeque: Efficient push/pop operations for queue semantics
Handler Storage
Response handlers are stored as boxed trait objects:
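In sketch form, matching the field shown earlier on this page (the alias name is ours, and the RpcStreamEvent import path is assumed):

```rust
use std::collections::HashMap;
use muxio::rpc::RpcStreamEvent; // path assumed

// Per-request handlers keyed by rpc_request_id; the `Send` bound is what
// permits invocation from another thread.
type ResponseHandlers<'a> = HashMap<u32, Box<dyn FnMut(RpcStreamEvent) + Send + 'a>>;
```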
The Send bound allows handlers to be invoked from different threads if the dispatcher is shared across threads.
Sources: src/rpc/rpc_dispatcher.rs:50 src/rpc/rpc_internals/rpc_respondable_session.rs:24
Usage Patterns
Client-Side Pattern
1. Create RpcDispatcher
2. For each RPC call:
a. Build RpcRequest with method_id, params, payload
b. Call dispatcher.call() with on_response handler
c. Write bytes to transport via on_emit callback
3. When receiving data from transport:
a. Call dispatcher.read_bytes()
b. Response handlers are invoked automatically
Sources: tests/rpc_dispatcher_tests.rs:32-124
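A hedged Rust sketch of steps 2 and 3 above; the import paths are assumptions, the RpcRequest fields come from the field tables in this documentation, and the full call() signature is not reproduced on this page, so that invocation is left as a comment:

```rust
use muxio::rpc::{RpcDispatcher, RpcRequest}; // paths assumed

fn client_side_example(dispatcher: &mut RpcDispatcher, bytes_from_transport: &[u8]) {
    // Step 2a: build the request (field values are arbitrary placeholders).
    let _request = RpcRequest {
        rpc_method_id: 0x1234,                    // hash identifying the remote method
        rpc_param_bytes: Some(vec![0x01, 0x02]),  // pre-encoded parameters
        rpc_prebuffered_payload_bytes: None,
        is_finalized: true,
    };
    // Step 2b: dispatcher.call(_request, ..., on_response, ...) would register a
    // response handler and emit frames through the transport's on_emit callback.

    // Step 3: feeding received bytes invokes registered response handlers.
    let _ = dispatcher.read_bytes(bytes_from_transport);
}
```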
Server-Side Pattern
1. Create RpcDispatcher
2. When receiving data from transport:
a. Call dispatcher.read_bytes()
b. Returns list of active request IDs
3. For each active request ID:
a. Check is_rpc_request_finalized()
b. If finalized, delete_rpc_request() to retrieve full request
c. Process request (decode params, execute method)
d. Build RpcResponse with result
e. Call dispatcher.respond() to send response
4. Write response bytes to transport via on_emit callback
Sources: tests/rpc_dispatcher_tests.rs:126-202
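A matching sketch of the server-side loop; the queue methods come from the Queue Operations table, while the respond() call is shown only as a comment because its exact argument types are assumptions based on the sequence diagram above:

```rust
use muxio::rpc::{RpcDispatcher, RpcResponse}; // paths assumed

fn server_side_example(dispatcher: &mut RpcDispatcher, bytes_from_transport: &[u8]) {
    // Step 2: decode incoming frames and collect active request IDs.
    let active_ids = match dispatcher.read_bytes(bytes_from_transport) {
        Ok(ids) => ids,
        Err(_frame_error) => return, // handle the FrameDecodeError in real code
    };

    // Step 3: process every request that has received its End frame.
    for request_id in active_ids {
        if dispatcher.is_rpc_request_finalized(request_id) == Some(true) {
            if let Some(request) = dispatcher.delete_rpc_request(request_id) {
                // Decode params / execute the method here, then build the reply.
                let _response = RpcResponse {
                    rpc_request_id: request_id,
                    rpc_method_id: request.rpc_method_id,
                    rpc_result_status: Some(0),                  // e.g. 0 = success
                    rpc_prebuffered_payload_bytes: Some(vec![0x2A]),
                    is_finalized: true,
                };
                // dispatcher.respond(_response, max_chunk_size, on_emit) would
                // then emit the response frames back through the transport.
            }
        }
    }
}
```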
graph TB
Dispatcher["RpcDispatcher"]
subgraph "Client Role"
OutboundCall["call()\nInitiate request"]
InboundResponse["read_bytes()\nReceive response"]
end
subgraph "Server Role"
InboundRequest["read_bytes()\nReceive request"]
OutboundResponse["respond()\nSend response"]
end
Dispatcher --> OutboundCall
Dispatcher --> InboundResponse
Dispatcher --> InboundRequest
Dispatcher --> OutboundResponse
OutboundCall -.emits.-> Transport["Transport Layer"]
Transport -.delivers.-> InboundRequest
OutboundResponse -.emits.-> Transport
Transport -.delivers.-> InboundResponse
Bidirectional Pattern
The same dispatcher instance can handle both client and server roles simultaneously:
Diagram: Bidirectional Request/Response Flow
This pattern enables peer-to-peer architectures where both endpoints can initiate requests and respond to requests.
Sources: src/rpc/rpc_dispatcher.rs:20-51
Implementation Notes
ID Generation
Request IDs are generated using increment_u32_id(), which provides monotonic incrementing IDs with wraparound:
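A behaviorally equivalent sketch of wrap-around ID generation (the real helper lives in src/utils/increment_u32_id.rs; the function name here is ours, and whether the old or new value is returned is an implementation detail not shown on this page):

```rust
// Illustrative: hand out the current ID and advance the counter, wrapping
// from u32::MAX back to 0 instead of overflowing.
fn next_id(counter: &mut u32) -> u32 {
    let id = *counter;
    *counter = counter.wrapping_add(1);
    id
}
```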
Wraparound Behavior: After reaching u32::MAX, the counter wraps to 0 and continues incrementing.
Collision Risk: With 4.29 billion possible IDs, collisions are extremely unlikely in typical usage. For long-running connections with billions of requests, consider implementing ID reuse detection.
Sources: src/rpc/rpc_dispatcher.rs:242
Non-Async Design
The dispatcher uses callbacks instead of async/await for several reasons:
- WASM Compatibility: Avoids dependency on async runtimes that may not work in WASM
- Runtime Agnostic: Works with Tokio, async-std, or no runtime at all
- Deterministic: No hidden scheduling or context switching
- Zero-Cost: No Future state machines or executor overhead
Higher-level abstractions (like those in muxio-rpc-service-caller) can wrap the dispatcher with async interfaces when desired.
Sources: src/rpc/rpc_dispatcher.rs:26-27
Summary
The RpcDispatcher provides the core request/response coordination layer for muxio's RPC framework:
| Responsibility | Mechanism |
|---|---|
| Request Correlation | Unique request IDs with monotonic generation |
| Response Routing | Per-request handlers + catch-all fallback |
| Stream Management | Wraps RpcRespondableSession for encoder lifecycle |
| Payload Accumulation | Optional prebuffering or streaming delivery |
| Queue Management | Thread-safe VecDeque for tracking active requests |
| Error Propagation | Synthetic error events on connection failure |
| Thread Safety | Arc<Mutex<>> for shared state |
The dispatcher's non-async, callback-based design enables deployment across native and WASM environments while maintaining type safety and performance.
Sources: src/rpc/rpc_dispatcher.rs:1-458
Request and Response Types
Relevant source files
- src/rpc/rpc_dispatcher.rs
- src/rpc/rpc_internals/rpc_header.rs
- src/rpc/rpc_internals/rpc_respondable_session.rs
- src/rpc/rpc_request_response.rs
- tests/rpc_dispatcher_tests.rs
Purpose and Scope
This page documents the three core data structures used to represent RPC messages in the muxio core library: RpcRequest, RpcResponse, and RpcHeader. These types define the wire-level representation of remote procedure calls and their responses, serving as the primary interface between application code and the RPC dispatcher.
For information about how these types are processed by the dispatcher, see RPC Dispatcher. For details about the underlying binary framing protocol, see Binary Framing Protocol. For higher-level service definitions that encode/decode these structures, see Service Definitions.
Sources: src/rpc/rpc_request_response.rs:1-105 src/rpc/rpc_internals/rpc_header.rs:1-25
Type Hierarchy and Relationships
The three types form a layered abstraction: RpcRequest and RpcResponse are high-level types used by application code, while RpcHeader is an internal protocol-level type used for frame transmission.
Diagram: Type Conversion Flow
graph TB
subgraph "Application Layer"
RpcRequest["RpcRequest\n(client-side)"]
RpcResponse["RpcResponse\n(server-side)"]
end
subgraph "Protocol Layer"
RpcHeader["RpcHeader\n(wire format)"]
end
subgraph "Transport Layer"
BinaryFrames["Binary Frames\n(network transmission)"]
end
RpcRequest -->|RpcDispatcher::call| RpcHeader
RpcResponse -->|RpcDispatcher::respond| RpcHeader
RpcHeader -->|serialize| BinaryFrames
BinaryFrames -->|deserialize| RpcHeader
RpcHeader -->|queue processing| RpcRequest
RpcHeader -->|from_rpc_header| RpcResponse
The diagram shows how application-level types are converted to protocol-level types for transmission, and reconstructed on the receiving side. RpcDispatcher::call() converts RpcRequest to RpcHeader, while RpcDispatcher::respond() converts RpcResponse to RpcHeader. On the receive side, incoming headers are queued as RpcRequest structures or converted to RpcResponse via from_rpc_header().
Sources: src/rpc/rpc_dispatcher.rs:227-286 src/rpc/rpc_dispatcher.rs:298-337 src/rpc/rpc_request_response.rs:90-103
RpcRequest Structure
RpcRequest represents an outbound RPC call initiated by a client. It encapsulates the method identifier, encoded parameters, optional payload data, and a finalization flag.
Field Definitions
| Field | Type | Description |
|---|---|---|
| rpc_method_id | u64 | Unique identifier for the remote method to invoke. Typically a hash of the method name. |
| rpc_param_bytes | Option<Vec<u8>> | Optional encoded metadata (function parameters). Transmitted in RpcHeader.rpc_metadata_bytes. |
| rpc_prebuffered_payload_bytes | Option<Vec<u8>> | Optional payload sent immediately after the header. Used for single-frame RPCs. |
| is_finalized | bool | If true, the stream is closed immediately after sending header and payload. |
Sources: src/rpc/rpc_request_response.rs:10-33
Usage in RpcDispatcher::call()
When a client invokes RpcDispatcher::call(), the RpcRequest is converted to an internal RpcHeader:
- rpc_method_id is copied directly to RpcHeader.rpc_method_id
- rpc_param_bytes is unwrapped (or defaults to empty) and stored in RpcHeader.rpc_metadata_bytes
- A unique rpc_request_id is generated and assigned to RpcHeader.rpc_request_id
- RpcMessageType::Call is set in RpcHeader.rpc_msg_type
If rpc_prebuffered_payload_bytes is present, it is immediately written to the stream encoder. If is_finalized is true, the stream is flushed and ended.
Sources: src/rpc/rpc_dispatcher.rs:239-283
Example Construction
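The actual example lives in the referenced test file; a comparable construction sketch, using the fields from the table above with placeholder values:

```rust
// Placeholder values; a real method ID would come from the service definition.
let request = RpcRequest {
    rpc_method_id: 0x0000_0000_DEAD_BEEF,           // method identifier (e.g. a name hash)
    rpc_param_bytes: Some(vec![0x01, 0x02, 0x03]),  // pre-encoded parameters
    rpc_prebuffered_payload_bytes: Some(b"ping".to_vec()),
    is_finalized: true,                             // close the stream after sending
};
```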
Sources: tests/rpc_dispatcher_tests.rs:42-49
RpcResponse Structure
RpcResponse represents a reply to a prior RPC request. It contains the original request ID for correlation, a result status, and optional payload data.
Field Definitions
| Field | Type | Description |
|---|---|---|
| rpc_request_id | u32 | The request header ID this response corresponds to. Must match the initiating request. |
| rpc_method_id | u64 | The method ID associated with this response. Should match the original request. |
| rpc_result_status | Option<u8> | Optional result status byte (e.g., 0 for success). Embedded in response metadata. |
| rpc_prebuffered_payload_bytes | Option<Vec<u8>> | Optional payload to return with the response. |
| is_finalized | bool | If true, the response stream is closed immediately after sending. |
Sources: src/rpc/rpc_request_response.rs:41-76
Usage in RpcDispatcher::respond()
When a server invokes RpcDispatcher::respond(), the RpcResponse is converted to an internal RpcHeader:
- rpc_request_id is copied to RpcHeader.rpc_request_id for correlation
- rpc_method_id is copied to RpcHeader.rpc_method_id
- rpc_result_status is converted to a single-byte vector (or empty) and stored in RpcHeader.rpc_metadata_bytes
- RpcMessageType::Response is set in RpcHeader.rpc_msg_type
If rpc_prebuffered_payload_bytes is present, it is immediately written to the stream encoder. If is_finalized is true, the stream is flushed and ended.
Sources: src/rpc/rpc_dispatcher.rs:307-335
Example Construction
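The actual example lives in the referenced test file; a comparable construction sketch with placeholder values:

```rust
// Placeholder values; `request_id` and `method_id` would be taken from the
// inbound request being answered.
let (request_id, method_id) = (42u32, 0x0000_0000_DEAD_BEEFu64);
let response = RpcResponse {
    rpc_request_id: request_id,          // must match the originating request
    rpc_method_id: method_id,
    rpc_result_status: Some(0),          // e.g. 0 for success
    rpc_prebuffered_payload_bytes: Some(b"pong".to_vec()),
    is_finalized: true,
};
```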
Sources: tests/rpc_dispatcher_tests.rs:161-167
from_rpc_header() Factory Method
The RpcResponse::from_rpc_header() static method constructs a response from a received RpcHeader. This is typically called on the server side when processing a new request. The method interprets the first byte of rpc_metadata_bytes as the result status, if present.
Sources: src/rpc/rpc_request_response.rs:90-103
RpcHeader Structure
RpcHeader is an internal protocol-level type that represents the framed representation of an RPC message. It is not directly exposed to application code, but is created by RpcDispatcher when encoding RpcRequest or RpcResponse for transmission.
Field Definitions
| Field | Type | Description |
|---|---|---|
| rpc_msg_type | RpcMessageType | The type of RPC message (Call or Response). |
| rpc_request_id | u32 | Unique identifier for correlation. For calls, this is generated; for responses, this matches the request. |
| rpc_method_id | u64 | Identifier (or hash) of the method being invoked. |
| rpc_metadata_bytes | Vec<u8> | Schemaless metadata. For calls, contains encoded parameters; for responses, contains result status. |
Sources: src/rpc/rpc_internals/rpc_header.rs:5-24
Metadata Interpretation
The rpc_metadata_bytes field has different semantics depending on the message type:
- For Call messages: Contains the encoded function parameters (from RpcRequest.rpc_param_bytes)
- For Response messages: Contains a single-byte result status (from RpcResponse.rpc_result_status), or is empty
This dual-purpose design allows the framing protocol to remain agnostic to the payload structure while providing a small metadata slot for control information.
Sources: src/rpc/rpc_dispatcher.rs:250-257 src/rpc/rpc_dispatcher.rs:313-319
sequenceDiagram
participant App as "Application Code"
participant Dispatcher as "RpcDispatcher"
participant Queue as "rpc_request_queue"
participant Wire as "Wire Protocol"
Note over App,Wire: Client-side Call
App->>Dispatcher: call(RpcRequest)
Dispatcher->>Dispatcher: Convert to RpcHeader\n(assign rpc_request_id)
Dispatcher->>Wire: Serialize RpcHeader\n+ payload bytes
Note over App,Wire: Server-side Receive
Wire->>Dispatcher: read_bytes()
Dispatcher->>Queue: Push (rpc_request_id, RpcRequest)
Queue->>App: Poll for finalized requests
Note over App,Wire: Server-side Respond
App->>Dispatcher: respond(RpcResponse)
Dispatcher->>Dispatcher: Convert to RpcHeader\n(copy rpc_request_id)
Dispatcher->>Wire: Serialize RpcHeader\n+ payload bytes
Note over App,Wire: Client-side Receive
Wire->>Dispatcher: read_bytes()
Dispatcher->>Dispatcher: Invoke response handler\n(match by rpc_request_id)
Dispatcher->>App: RpcStreamEvent callbacks
Data Flow and Transformations
The following diagram illustrates how data flows through the three types during a complete request-response cycle.
Diagram: Request-Response Data Flow
This sequence diagram shows the complete lifecycle of an RPC call:
1. Client creates an RpcRequest and passes it to call()
2. Dispatcher converts it to an RpcHeader, assigns a unique rpc_request_id, and serializes
3. Server receives bytes via read_bytes(), reconstructs an RpcRequest, and queues it by rpc_request_id
4. Server application polls the queue, processes the request, and creates an RpcResponse
5. Dispatcher converts it to an RpcHeader (preserving rpc_request_id) and serializes
6. Client receives bytes, matches by rpc_request_id, and invokes the registered response handler
Sources: src/rpc/rpc_dispatcher.rs:227-286 src/rpc/rpc_dispatcher.rs:298-337 src/rpc/rpc_dispatcher.rs:362-374
graph LR
subgraph "RpcDispatcher"
ReadBytes["read_bytes()"]
Handler["catch_all_response_handler"]
Queue["rpc_request_queue\nVecDeque<(u32, RpcRequest)>"]
end
subgraph "Stream Events"
Header["RpcStreamEvent::Header"]
Chunk["RpcStreamEvent::PayloadChunk"]
End["RpcStreamEvent::End"]
end
ReadBytes -->|decode| Header
ReadBytes -->|decode| Chunk
ReadBytes -->|decode| End
Header -->|push_back| Queue
Chunk -->|append bytes| Queue
End -->|set is_finalized| Queue
Handler -->|processes| Header
Handler -->|processes| Chunk
Handler -->|processes| End
Request Queue Processing
On the receiving side, incoming RpcHeader frames are converted to RpcRequest structures and stored in a queue for processing. The dispatcher maintains this queue internally using a catch-all response handler.
Diagram: Request Queue Processing
The diagram shows how incoming stream events populate the request queue:
- read_bytes() decodes incoming frames into RpcStreamEvent variants
- The catch-all handler processes each event type:
  - Header: Creates a new RpcRequest and pushes it to the queue
  - PayloadChunk: Appends bytes to the existing request's rpc_prebuffered_payload_bytes
  - End: Sets is_finalized to true
- Application code polls the queue using get_rpc_request(), is_rpc_request_finalized(), and delete_rpc_request()
Sources: src/rpc/rpc_dispatcher.rs:99-209
Queue Event Handling
The catch-all response handler installs a closure that processes incoming stream events and updates the queue:
- On Header event: Extracts rpc_method_id and rpc_metadata_bytes (converted to rpc_param_bytes), creates an RpcRequest with is_finalized: false, and pushes it to the queue
- On PayloadChunk event: Finds the matching request by rpc_request_id and creates or extends rpc_prebuffered_payload_bytes
- On End event: Finds the matching request by rpc_request_id and sets is_finalized: true
- On Error event: Logs the error (future: may remove from queue or mark as errored)
Sources: src/rpc/rpc_dispatcher.rs:122-207
Field Mapping Reference
The following table shows how fields are mapped between types during conversion:
RpcRequest → RpcHeader (Client Call)
| RpcRequest Field | RpcHeader Field | Transformation |
|---|---|---|
| rpc_method_id | rpc_method_id | Direct copy |
| rpc_param_bytes | rpc_metadata_bytes | Unwrap or empty vector |
| N/A | rpc_request_id | Generated by dispatcher |
| N/A | rpc_msg_type | Set to RpcMessageType::Call |
Sources: src/rpc/rpc_dispatcher.rs:239-257
RpcHeader → RpcRequest (Server Receive)
| RpcHeader Field | RpcRequest Field | Transformation |
|---|---|---|
| rpc_method_id | rpc_method_id | Direct copy |
| rpc_metadata_bytes | rpc_param_bytes | Wrap in Some() if non-empty |
| N/A | rpc_prebuffered_payload_bytes | Initially None, populated by chunks |
| N/A | is_finalized | Initially false, set by End event |
Sources: src/rpc/rpc_dispatcher.rs:133-138
RpcResponse → RpcHeader (Server Respond)
| RpcResponse Field | RpcHeader Field | Transformation |
|---|---|---|
| rpc_request_id | rpc_request_id | Direct copy |
| rpc_method_id | rpc_method_id | Direct copy |
| rpc_result_status | rpc_metadata_bytes | Convert to single-byte vector or empty |
| N/A | rpc_msg_type | Set to RpcMessageType::Response |
Sources: src/rpc/rpc_dispatcher.rs:307-319
RpcHeader → RpcResponse (Client Receive)
| RpcHeader Field | RpcResponse Field | Transformation |
|---|---|---|
| rpc_request_id | rpc_request_id | Direct copy |
| rpc_method_id | rpc_method_id | Direct copy |
| rpc_metadata_bytes | rpc_result_status | First byte if non-empty, else None |
| N/A | rpc_prebuffered_payload_bytes | Initially None, populated by chunks |
| N/A | is_finalized | Hardcoded to false (non-determinable from header alone) |
Sources: src/rpc/rpc_request_response.rs:90-103
Prebuffered vs. Streaming Payloads
Both RpcRequest and RpcResponse support two modes of payload transmission:
- Prebuffered mode: The entire payload is provided in rpc_prebuffered_payload_bytes and is_finalized is set to true. The dispatcher sends the complete message in one operation.
- Streaming mode: rpc_prebuffered_payload_bytes is None or partial, and is_finalized is false. The dispatcher returns an RpcStreamEncoder that allows incremental writes via write_bytes(), followed by flush() and end_stream().
Prebuffered Example
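The real example is in the referenced test; in sketch form, a prebuffered request carries its full payload up front and is finalized immediately:

```rust
// Prebuffered: the whole payload is attached and the stream closes right away.
let prebuffered = RpcRequest {
    rpc_method_id: 0xABCD,
    rpc_param_bytes: Some(vec![0x01]),
    rpc_prebuffered_payload_bytes: Some(vec![0xAA, 0xBB, 0xCC]),
    is_finalized: true,
};
```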
Sources: tests/rpc_dispatcher_tests.rs:42-49
Streaming Example
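A sketch of streaming mode under the same caveats; the encoder methods named here (write_bytes(), flush(), end_stream()) are those listed above, but their exact signatures are not reproduced on this page, so the calls are shown as comments:

```rust
// Streaming: no prebuffered payload, and the stream stays open for writes.
let streaming = RpcRequest {
    rpc_method_id: 0xABCD,
    rpc_param_bytes: Some(vec![0x01]),
    rpc_prebuffered_payload_bytes: None,
    is_finalized: false,
};
// let mut encoder = dispatcher.call(streaming, /* ... */); // returns RpcStreamEncoder
// encoder.write_bytes(&chunk);                             // incremental writes
// encoder.flush();
// encoder.end_stream();                                    // closes the stream
```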
Sources: src/rpc/rpc_dispatcher.rs:298-337
Thread Safety and Poisoning
The rpc_request_queue is protected by a Mutex and shared via Arc. If the mutex becomes poisoned (due to a panic while holding the lock), the dispatcher will panic immediately rather than attempt recovery. This design choice ensures:
- Poisoned state is treated as a critical failure
- Inconsistent queue state does not lead to incorrect request routing
- Fast failure provides better debugging signals
The poisoning check occurs in:
- The catch-all response handler when updating the queue
- read_bytes() when listing active request IDs
- Queue accessor methods (get_rpc_request(), is_rpc_request_finalized(), and delete_rpc_request())
Sources: src/rpc/rpc_dispatcher.rs:104-118 src/rpc/rpc_dispatcher.rs:367-371
Related Types
The following types work in conjunction with RpcRequest, RpcResponse, and RpcHeader:
| Type | Purpose | Page Reference |
|---|---|---|
| RpcMessageType | Enum distinguishing Call vs. Response | N/A |
| RpcStreamEncoder | Incremental payload writer for streaming mode | Binary Framing Protocol |
| RpcStreamEvent | Event-based callbacks for incoming streams | RPC Dispatcher |
| RpcDispatcher | Main coordinator for request/response lifecycle | RPC Dispatcher |
| RpcMethodPrebuffered | Trait for compile-time method definitions | Service Definitions |
Sources: src/rpc/rpc_request_response.rs:1-105 src/rpc/rpc_internals/rpc_header.rs:1-25
RPC Framework
Relevant source files
Purpose and Scope
This document provides a comprehensive overview of the RPC (Remote Procedure Call) abstraction layer in the rust-muxio system. The RPC framework is built on top of the core muxio multiplexing library and provides a structured, type-safe mechanism for defining and invoking remote methods across client-server boundaries.
The RPC framework consists of three primary components distributed across separate crates:
- Service definition traits and method identification (muxio-rpc-service)
- Client-side call invocation (muxio-rpc-service-caller)
- Server-side request handling (muxio-rpc-service-endpoint)
For details on specific transport implementations that use this RPC framework, see Transport Implementations. For information on the underlying multiplexing and framing protocol, see Core Library (muxio)).
Architecture Overview
The RPC framework operates as a middleware layer between application code and the underlying muxio multiplexing protocol. It provides compile-time type safety while maintaining flexibility in serialization and transport choices.
graph TB
subgraph "Application Layer"
APP["Application Code\nType-safe method calls"]
end
subgraph "RPC Service Definition Layer"
SERVICE["muxio-rpc-service"]
TRAIT["RpcMethodPrebuffered\nRpcMethodStreaming traits"]
METHOD_ID["METHOD_ID generation\nxxhash at compile-time"]
ENCODE["encode_request/response\ndecode_request/response"]
SERVICE --> TRAIT
SERVICE --> METHOD_ID
SERVICE --> ENCODE
end
subgraph "Client Side"
CALLER["muxio-rpc-service-caller"]
CALLER_IFACE["RpcServiceCallerInterface"]
PREBUF_CALL["call_prebuffered"]
STREAM_CALL["call_streaming"]
CALLER --> CALLER_IFACE
CALLER_IFACE --> PREBUF_CALL
CALLER_IFACE --> STREAM_CALL
end
subgraph "Server Side"
ENDPOINT["muxio-rpc-service-endpoint"]
ENDPOINT_IFACE["RpcServiceEndpointInterface"]
REGISTER_PREBUF["register_prebuffered"]
REGISTER_STREAM["register_streaming"]
ENDPOINT --> ENDPOINT_IFACE
ENDPOINT_IFACE --> REGISTER_PREBUF
ENDPOINT_IFACE --> REGISTER_STREAM
end
subgraph "Core Multiplexing Layer"
DISPATCHER["RpcDispatcher"]
MUXIO_CORE["muxio core\nBinary framing protocol"]
DISPATCHER --> MUXIO_CORE
end
APP --> TRAIT
APP --> CALLER_IFACE
TRAIT -.shared definitions.-> CALLER
TRAIT -.shared definitions.-> ENDPOINT
CALLER --> DISPATCHER
ENDPOINT --> DISPATCHER
PREBUF_CALL -.invokes.-> DISPATCHER
STREAM_CALL -.invokes.-> DISPATCHER
REGISTER_PREBUF -.handles via.-> DISPATCHER
REGISTER_STREAM -.handles via.-> DISPATCHER
RPC Framework Component Structure
Sources:
- Cargo.toml:19-31
- extensions/muxio-rpc-service/Cargo.toml
- extensions/muxio-rpc-service-caller/Cargo.toml
- extensions/muxio-rpc-service-endpoint/Cargo.toml
- High-level architecture diagrams
Core RPC Components
The RPC framework is divided into three specialized crates, each with a distinct responsibility in the RPC lifecycle.
Component Responsibilities
| Crate | Primary Responsibility | Key Traits/Types | Dependencies |
|---|---|---|---|
| muxio-rpc-service | Service definition contracts | RpcMethodPrebuffered, RpcMethodStreaming, METHOD_ID | muxio, bitcode, xxhash-rust, num_enum |
| muxio-rpc-service-caller | Client-side invocation | RpcServiceCallerInterface, call_prebuffered, call_streaming | muxio, muxio-rpc-service, futures |
| muxio-rpc-service-endpoint | Server-side dispatch | RpcServiceEndpointInterface, register_prebuffered, register_streaming | muxio, muxio-rpc-service, muxio-rpc-service-caller |
Sources:
RPC Method Definition and Identification
The foundation of the RPC framework is the method definition system, which establishes compile-time contracts between clients and servers.
graph LR
subgraph "Compile Time"
METHOD_NAME["Method Name String\ne.g., 'Add'"]
XXHASH["xxhash-rust\nconst_xxh3"]
METHOD_ID["METHOD_ID: u64\nCompile-time constant"]
METHOD_NAME --> XXHASH
XXHASH --> METHOD_ID
end
subgraph "Service Definition Trait"
TRAIT_IMPL["RpcMethodPrebuffered impl"]
CONST_ID["const METHOD_ID"]
ENCODE_REQ["encode_request"]
DECODE_REQ["decode_request"]
ENCODE_RESP["encode_response"]
DECODE_RESP["decode_response"]
TRAIT_IMPL --> CONST_ID
TRAIT_IMPL --> ENCODE_REQ
TRAIT_IMPL --> DECODE_REQ
TRAIT_IMPL --> ENCODE_RESP
TRAIT_IMPL --> DECODE_RESP
end
subgraph "Bitcode Serialization"
BITCODE["bitcode crate"]
PARAMS["Request/Response types\nSerialize + Deserialize"]
ENCODE_REQ --> BITCODE
DECODE_REQ --> BITCODE
ENCODE_RESP --> BITCODE
DECODE_RESP --> BITCODE
BITCODE --> PARAMS
end
METHOD_ID --> CONST_ID
Method ID Generation Process
The METHOD_ID is a u64 value generated at compile time by hashing the method name using xxhash-rust. This approach ensures:
- Collision prevention : Hash-based IDs virtually eliminate accidental collisions
- Zero runtime overhead : IDs are compile-time constants
- Version independence : Method IDs remain stable across compilations
Sources:
Type Safety Through Shared Definitions
The RPC framework enforces type safety by requiring both client and server to depend on the same service definition crate. This creates a compile-time contract that prevents API mismatches.
sequenceDiagram
participant DEV as "Developer"
participant DEF as "Service Definition Crate"
participant CLIENT as "Client Crate"
participant SERVER as "Server Crate"
participant COMPILER as "Rust Compiler"
DEV->>DEF: Define RpcMethodPrebuffered
DEF->>DEF: Generate METHOD_ID
DEF->>DEF: Define Request/Response types
DEV->>CLIENT: Add dependency on DEF
DEV->>SERVER: Add dependency on DEF
CLIENT->>DEF: Import method traits
SERVER->>DEF: Import method traits
CLIENT->>COMPILER: Compile with encode_request
SERVER->>COMPILER: Compile with decode_request
alt Type Mismatch
COMPILER->>DEV: Compilation Error
else Types Match
COMPILER->>CLIENT: Successful build
COMPILER->>SERVER: Successful build
end
Note over CLIENT,SERVER: Both use identical\nMETHOD_ID and data structures
Shared Definition Workflow
This workflow demonstrates how compile-time validation eliminates an entire class of runtime errors. If the client attempts to send a request with a different structure than what the server expects, the code will not compile.
Sources:
sequenceDiagram
participant APP as "Application Code"
participant METHOD as "Method::call()\nRpcMethodPrebuffered"
participant CALLER as "RpcServiceCallerInterface"
participant DISP as "RpcDispatcher"
participant FRAME as "Binary Framing Layer"
participant TRANSPORT as "Transport\n(WebSocket, etc.)"
participant ENDPOINT as "RpcServiceEndpointInterface"
participant HANDLER as "Registered Handler"
APP->>METHOD: call(params)
METHOD->>METHOD: encode_request(params) → bytes
METHOD->>CALLER: call_prebuffered(METHOD_ID, bytes)
CALLER->>DISP: send_request(method_id, request_bytes)
DISP->>DISP: Assign unique request_id
DISP->>FRAME: Serialize to binary frames
FRAME->>TRANSPORT: Transmit frames
TRANSPORT->>FRAME: Receive frames
FRAME->>DISP: Reassemble frames
DISP->>DISP: Lookup handler by METHOD_ID
DISP->>ENDPOINT: dispatch_to_handler(METHOD_ID, bytes)
ENDPOINT->>HANDLER: invoke(request_bytes, context)
HANDLER->>METHOD: decode_request(bytes) → params
HANDLER->>HANDLER: Process business logic
HANDLER->>METHOD: encode_response(result) → bytes
HANDLER->>ENDPOINT: Return response_bytes
ENDPOINT->>DISP: send_response(request_id, bytes)
DISP->>FRAME: Serialize to binary frames
FRAME->>TRANSPORT: Transmit frames
TRANSPORT->>FRAME: Receive frames
FRAME->>DISP: Reassemble frames
DISP->>DISP: Match request_id to pending call
DISP->>CALLER: resolve_future(request_id, bytes)
CALLER->>METHOD: decode_response(bytes) → result
METHOD->>APP: Return typed result
RPC Call Flow
Understanding how an RPC call travels through the system is essential for debugging and optimization.
Complete RPC Invocation Sequence
Key observations:
- The `METHOD_ID` is used for routing on the server side
- The `request_id` (assigned by the dispatcher) is used for correlation
- All serialization/deserialization happens at the method trait level
- The dispatcher only handles raw bytes
Sources:
- README.md:70-160
- High-level architecture diagram 3 (RPC Communication Flow)
Prebuffered vs. Streaming RPC
The RPC framework supports two distinct calling patterns, each optimized for different use cases.
RPC Pattern Comparison
| Aspect | Prebuffered RPC | Streaming RPC |
|---|---|---|
| Request Size | Complete request buffered in memory | Request can be sent in chunks |
| Response Size | Complete response buffered in memory | Response can be received in chunks |
| Memory Usage | Higher for large payloads | Lower, constant memory footprint |
| Latency | Lower for small payloads | Higher initial latency, better throughput |
| Trait | RpcMethodPrebuffered | RpcMethodStreaming |
| Use Cases | Small to medium payloads (< 10MB) | Large payloads, file transfers, real-time data |
| Multiplexing | Multiple calls can be concurrent | Streams can be interleaved |
Sources:
- Section titles from table of contents
- README.md28
classDiagram
class RpcServiceCallerInterface {<<trait>>\n+call_prebuffered(method_id: u64, params: Option~Vec~u8~~, payload: Option~Vec~u8~~) Future~Result~Vec~u8~~~\n+call_streaming(method_id: u64, params: Option~Vec~u8~~) Future~Result~StreamResponse~~\n+get_transport_state() RpcTransportState\n+set_state_change_handler(handler: Fn) Future}
class RpcTransportState {<<enum>>\nConnecting\nConnected\nDisconnected\nFailed}
class RpcClient {+new(host, port) RpcClient\nimplements RpcServiceCallerInterface}
class RpcWasmClient {+new(url) RpcWasmClient\nimplements RpcServiceCallerInterface}
class CustomClient {+new(...) CustomClient\nimplements RpcServiceCallerInterface}
RpcServiceCallerInterface <|.. RpcClient : implements
RpcServiceCallerInterface <|.. RpcWasmClient : implements
RpcServiceCallerInterface <|.. CustomClient : implements
RpcServiceCallerInterface --> RpcTransportState : returns
Client-Side: RpcServiceCallerInterface
The client-side RPC invocation is abstracted through the RpcServiceCallerInterface trait, which allows different transport implementations to provide identical calling semantics.
RpcServiceCallerInterface Contract
This design allows application code to be written once against RpcServiceCallerInterface and work with any compliant transport implementation (Tokio, WASM, custom transports, etc.).
Sources:
classDiagram
class RpcServiceEndpointInterface {<<trait>>\n+register_prebuffered(method_id: u64, handler: Fn) Future~Result~~~\n+register_streaming(method_id: u64, handler: Fn) Future~Result~~~\n+unregister(method_id: u64) Future~Result~~~\n+is_registered(method_id: u64) Future~bool~}
class HandlerContext {+client_id: Option~String~\n+metadata: HashMap~String, String~}
class PrebufferedHandler {<<function>>\n+Fn(Vec~u8~, HandlerContext) Future~Result~Vec~u8~~~}
class StreamingHandler {<<function>>\n+Fn(Option~Vec~u8~~, DynamicChannel, HandlerContext) Future~Result~~~}
class RpcServer {
+new(config) RpcServer
+endpoint() Arc~RpcServiceEndpointInterface~
+serve_with_listener(listener) Future
}
RpcServiceEndpointInterface --> PrebufferedHandler : accepts
RpcServiceEndpointInterface --> StreamingHandler : accepts
RpcServiceEndpointInterface --> HandlerContext : provides
RpcServer --> RpcServiceEndpointInterface : provides
Server-Side: RpcServiceEndpointInterface
The server-side request handling is abstracted through the RpcServiceEndpointInterface trait, which manages method registration and dispatch.
RpcServiceEndpointInterface Contract
Handlers are registered by METHOD_ID and receive:
- Request bytes : The serialized request parameters (for prebuffered) or initial params (for streaming)
- Context : Metadata about the client and connection
- Dynamic channel (streaming only): For incremental data transmission
Sources:
Data Serialization with Bitcode
The RPC framework uses the bitcode crate for binary serialization. This provides compact, efficient encoding of Rust types.
Serialization Requirements
For a type to be used in RPC method definitions, it must implement:
- `serde::Serialize` - For encoding
- `serde::Deserialize` - For decoding

These traits are implemented (via the serde ecosystem) for most standard Rust types, including:
- Primitive types (`u64`, `f64`, `bool`, etc.)
- Standard collections (`Vec<T>`, `HashMap<K, V>`, etc.)
- Custom structs with `#[derive(Serialize, Deserialize)]`
Serialization Flow
The compact binary format of bitcode significantly reduces payload sizes compared to JSON or other text-based formats, contributing to the framework's low-latency characteristics.
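As an illustration of the requirement above, the following sketch round-trips a custom struct through bitcode's serde integration. The struct and function names are illustrative, not part of any muxio crate, and the example assumes the `serde` feature of the `bitcode` crate.

```rust
use serde::{Deserialize, Serialize};

// Illustrative request type; any Serialize + Deserialize type qualifies.
#[derive(Serialize, Deserialize, Debug, PartialEq)]
struct AddRequest {
    values: Vec<f64>,
}

fn roundtrip() -> Result<(), bitcode::Error> {
    let request = AddRequest { values: vec![1.0, 2.0, 3.0] };
    let bytes = bitcode::serialize(&request)?;        // compact binary encoding
    let decoded: AddRequest = bitcode::deserialize(&bytes)?;
    assert_eq!(decoded, request);
    Ok(())
}
```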
Sources:
- Cargo.lock:158-168
- Cargo.toml52
- README.md32
- High-level architecture diagram 6 (Data Flow and Serialization Strategy)
Method Registration and Dispatch
On the server side, methods must be registered with the endpoint before they can be invoked. The registration process associates a METHOD_ID with a handler function.
Handler Registration Pattern
From the example application, the registration pattern is:
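The actual code lives in the example application; the following is a hedged sketch of the pattern, assuming a `server.endpoint()` handle and the `Mult` definition from the shared service-definition crate.

```rust
// Register a handler for Mult::METHOD_ID. The closure receives the raw request
// bytes plus a connection context and returns encoded response bytes.
let endpoint = server.endpoint();
endpoint
    .register_prebuffered(Mult::METHOD_ID, |request_bytes, _ctx| async move {
        let factors = Mult::decode_request(&request_bytes)?; // Vec<f64>
        let product: f64 = factors.iter().product();
        Ok(Mult::encode_response(product)?)
    })
    .await?;
```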
Registration Lifecycle
Once registered, handlers remain active until explicitly unregistered or the server shuts down. Multiple concurrent invocations of the same handler are supported through the underlying multiplexing layer.
Sources:
graph TB
subgraph "Shared Application Logic"
APP_CODE["Application Code\nPlatform-agnostic"]
METHOD_CALL["Method::call(&client, params)"]
APP_CODE --> METHOD_CALL
end
subgraph "Native Platform"
TOKIO_CLIENT["RpcClient\n(Tokio-based)"]
TOKIO_RUNTIME["Tokio async runtime"]
TOKIO_WS["tokio-tungstenite\nWebSocket"]
METHOD_CALL -.uses.-> TOKIO_CLIENT
TOKIO_CLIENT --> TOKIO_RUNTIME
TOKIO_CLIENT --> TOKIO_WS
end
subgraph "Web Platform"
WASM_CLIENT["RpcWasmClient\n(WASM-based)"]
WASM_RUNTIME["Browser event loop"]
WASM_WS["JavaScript WebSocket API\nvia wasm-bindgen"]
METHOD_CALL -.uses.-> WASM_CLIENT
WASM_CLIENT --> WASM_RUNTIME
WASM_CLIENT --> WASM_WS
end
subgraph "Custom Platform"
CUSTOM_CLIENT["Custom RpcClient\nimplements RpcServiceCallerInterface"]
CUSTOM_TRANSPORT["Custom Transport"]
METHOD_CALL -.uses.-> CUSTOM_CLIENT
CUSTOM_CLIENT --> CUSTOM_TRANSPORT
end
Cross-Platform RPC Invocation
A key design goal of the RPC framework is enabling the same application code to work across different platforms and transports. This is achieved through the abstraction provided by RpcServiceCallerInterface.
Platform-Agnostic Application Code
The application layer depends only on:
- The service definition crate (for method traits)
- The `RpcServiceCallerInterface` trait (for invocation)
This allows the same business logic to run in servers, native desktop applications, mobile apps, and web browsers with minimal platform-specific code.
Sources:
- README.md47
- Cargo.lock:898-916 (tokio client)
- Cargo.lock:935-954 (wasm client)
- High-level architecture diagram 2 (Cross-Platform Deployment Model)
graph TD
subgraph "Application-Level Errors"
BIZ_ERR["Business Logic Errors\nDomain-specific"]
end
subgraph "RPC Framework Errors"
RPC_ERR["RpcServiceError"]
METHOD_NOT_FOUND["MethodNotFound\nInvalid METHOD_ID"]
ENCODING_ERR["EncodingError\nSerialization failure"]
SYSTEM_ERR["SystemError\nInternal dispatcher error"]
TRANSPORT_ERR["TransportError\nNetwork failure"]
RPC_ERR --> METHOD_NOT_FOUND
RPC_ERR --> ENCODING_ERR
RPC_ERR --> SYSTEM_ERR
RPC_ERR --> TRANSPORT_ERR
end
subgraph "Core Layer Errors"
CORE_ERR["Muxio Core Errors\nFraming protocol errors"]
end
BIZ_ERR -.propagates through.-> RPC_ERR
TRANSPORT_ERR -.wraps.-> CORE_ERR
Error Handling in RPC
The RPC framework uses Rust's Result type throughout, with error types defined at the appropriate abstraction levels.
RPC Error Hierarchy
Error handling patterns:
- Client-side: Errors are returned as `Result<T, E>` from RPC calls
- Server-side: Handler errors are serialized and transmitted back to the client
- Transport errors: Automatically trigger state changes (see `RpcTransportState`)
For detailed error type definitions, see Error Handling.
Sources:
- Section reference to page 7
Performance Characteristics
The RPC framework is designed for low latency and high throughput. Key performance features include:
Performance Optimizations
| Feature | Benefit | Implementation |
|---|---|---|
| Compile-time method IDs | Zero runtime hash overhead | xxhash-rust with const_xxh3 |
| Binary serialization | Smaller payload sizes | bitcode crate |
| Minimal frame headers | Reduced per-message overhead | Custom binary protocol |
| Request multiplexing | Concurrent calls over single connection | RpcDispatcher correlation |
| Zero-copy streaming | Reduced memory allocations | DynamicChannel for chunked data |
| Callback-driven dispatch | No polling overhead | Async handlers with futures |
The combination of these optimizations makes the RPC framework suitable for:
- Low-latency trading systems
- Real-time gaming
- Interactive remote tooling
- High-throughput data processing
Sources:
graph TB
subgraph "RPC Abstraction Layer"
CALLER_IF["RpcServiceCallerInterface"]
ENDPOINT_IF["RpcServiceEndpointInterface"]
end
subgraph "Core Dispatcher"
DISPATCHER["RpcDispatcher\nRequest correlation"]
SEND_CB["send_callback\nVec<u8> → ()"]
RECV_CB["receive_callback\n() → Vec<u8>"]
end
subgraph "Tokio WebSocket Transport"
TOKIO_SERVER["TokioRpcServer"]
TOKIO_CLIENT["TokioRpcClient"]
TUNGSTENITE["tokio-tungstenite"]
TOKIO_SERVER --> TUNGSTENITE
TOKIO_CLIENT --> TUNGSTENITE
end
subgraph "WASM WebSocket Transport"
WASM_CLIENT["WasmRpcClient"]
JS_BRIDGE["wasm-bindgen bridge"]
BROWSER_WS["Browser WebSocket API"]
WASM_CLIENT --> JS_BRIDGE
JS_BRIDGE --> BROWSER_WS
end
CALLER_IF -.implemented by.-> TOKIO_CLIENT
CALLER_IF -.implemented by.-> WASM_CLIENT
ENDPOINT_IF -.implemented by.-> TOKIO_SERVER
TOKIO_CLIENT --> DISPATCHER
WASM_CLIENT --> DISPATCHER
TOKIO_SERVER --> DISPATCHER
DISPATCHER --> SEND_CB
DISPATCHER --> RECV_CB
SEND_CB -.invokes.-> TUNGSTENITE
RECV_CB -.invokes.-> TUNGSTENITE
SEND_CB -.invokes.-> JS_BRIDGE
RECV_CB -.invokes.-> JS_BRIDGE
Integration with Transport Layer
The RPC framework is designed to be transport-agnostic, with concrete implementations provided for common scenarios.
Transport Integration Points
The RpcDispatcher accepts callbacks for sending and receiving bytes, allowing it to work with any transport mechanism. This design enables:
- WebSocket transports (Tokio and WASM implementations provided)
- TCP socket transports
- In-memory transports (for testing)
- Custom transports (by providing appropriate callbacks)
For implementation details of specific transports, see Transport Implementations.
Sources:
- README.md34
- Cargo.lock:898-933 (transport implementations)
- High-level architecture diagram 1 (Overall System Architecture)
Service Definitions
Relevant source files
- README.md
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs
- extensions/muxio-rpc-service/Cargo.toml
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs
- extensions/muxio-wasm-rpc-client/Cargo.toml
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs
Purpose and Scope
This document explains how RPC service definitions are created and shared in the rust-muxio system. Service definitions are the contracts that define RPC methods, their inputs, outputs, and unique identifiers. By implementing the RpcMethodPrebuffered trait, both clients and servers depend on the same type-safe API contract, eliminating an entire class of distributed system bugs through compile-time verification.
For information about how clients invoke these definitions, see Service Caller Interface. For information about how servers handle these definitions, see Service Endpoint Interface.
Sources: README.md:16-50
The RpcMethodPrebuffered Trait
The RpcMethodPrebuffered trait is the foundational abstraction for defining RPC methods in muxio. Each RPC method is a type that implements this trait, specifying the method's unique identifier, input type, output type, and serialization logic.
classDiagram
class RpcMethodPrebuffered {
<<trait>>
+METHOD_ID: u64\n+Input: type\n+Output: type
+encode_request(input: Input) Result~Vec~u8~, io::Error~
+decode_request(bytes: &[u8]) Result~Input, io::Error~
+encode_response(output: Output) Result~Vec~u8~, io::Error~
+decode_response(bytes: &[u8]) Result~Output, io::Error~
}
class Add {+METHOD_ID: u64\n+Input: Vec~f64~\n+Output: f64}
class Mult {+METHOD_ID: u64\n+Input: Vec~f64~\n+Output: f64}
class Echo {+METHOD_ID: u64\n+Input: Vec~u8~\n+Output: Vec~u8~}
RpcMethodPrebuffered <|.. Add
RpcMethodPrebuffered <|.. Mult
RpcMethodPrebuffered <|.. Echo
Trait Structure
Each concrete implementation (like Add, Mult, Echo) must provide:
| Component | Type | Purpose |
|---|---|---|
| METHOD_ID | u64 | Compile-time generated unique identifier for the method |
| Input | Associated type | The Rust type representing the method's parameters |
| Output | Associated type | The Rust type representing the method's return value |
| encode_request | Function | Serializes Input to Vec<u8> |
| decode_request | Function | Deserializes Vec<u8> to Input |
| encode_response | Function | Serializes Output to Vec<u8> |
| decode_response | Function | Deserializes Vec<u8> to Output |
Sources: README.md49 extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:1-28
Method ID Generation
Each RPC method is assigned a unique METHOD_ID at compile time. This identifier is generated by hashing the method's name using the xxhash-rust library. The hash-based approach ensures that method IDs are deterministic, collision-resistant, and do not require manual coordination.
graph LR
MethodName["Method Name\n(e.g., 'Add')"]
Hash["xxhash-rust\nHash Function"]
MethodID["METHOD_ID\n(u64 constant)"]
CompileTime["Compile Time\nConstant"]
MethodName --> Hash
Hash --> MethodID
MethodID --> CompileTime
CompileTime -.enforces.-> UniqueID["Unique Identifier\nper Method"]
CompileTime -.enables.-> TypeSafety["Compile-time\nCollision Detection"]
Hash-Based Identification
The METHOD_ID is computed once at compile time and stored as a constant. This approach provides several benefits:
- Deterministic : The same method name always produces the same ID across all builds
- Collision-Resistant : The 64-bit hash space makes accidental collisions extremely unlikely
- Zero Runtime Overhead : No string comparisons or lookups needed during RPC dispatch
- Type-Safe : Method IDs are baked into the type system, preventing runtime mismatches
When a client makes an RPC call using Add::call(), it automatically includes Add::METHOD_ID in the request. When the server receives this request, it uses the method ID to route to the correct handler.
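A hedged sketch of how such a constant can be produced with `xxhash-rust`'s `const_xxh3` module (requires the crate's `const_xxh3` feature); the exact helper and hash input used by `muxio-rpc-service` may differ.

```rust
use xxhash_rust::const_xxh3::xxh3_64;

pub struct Add;

impl Add {
    // Evaluated entirely at compile time; the same name always yields the same u64.
    pub const METHOD_ID: u64 = xxh3_64(b"Add");
}
```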
Sources: README.md49 extensions/muxio-rpc-service/Cargo.toml16 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:35-36
Encoding and Decoding
Service definitions are responsible for converting between typed Rust structures and binary representations. The trait defines four serialization methods that operate on raw bytes:
graph LR
subgraph "Client Side"
Input1["Input\n(Typed Struct)"]
EncReq["encode_request()"]
ReqBytes["Request Bytes\n(Vec<u8>)"]
end
subgraph "Transport"
Network["Binary\nNetwork Frames"]
end
subgraph "Server Side"
ReqBytes2["Request Bytes\n(Vec<u8>)"]
DecReq["decode_request()"]
Input2["Input\n(Typed Struct)"]
end
Input1 --> EncReq
EncReq --> ReqBytes
ReqBytes --> Network
Network --> ReqBytes2
ReqBytes2 --> DecReq
DecReq --> Input2
Request Serialization Flow
Response Serialization Flow
Serialization Implementation
While service definitions can use any serialization format, the system commonly uses the bitcode library for efficient binary serialization. This provides:
- Compact binary representation (smaller than JSON or MessagePack)
- Fast encoding/decoding performance
- Native Rust type support without manual schema definitions
The separation of serialization logic into the service definition ensures that both client and server use identical encoding/decoding logic, preventing deserialization mismatches.
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:55-76 extensions/muxio-rpc-service/Cargo.toml17
graph TB
subgraph "Shared Service Definition Crate"
AddDef["Add Method\nimpl RpcMethodPrebuffered"]
MultDef["Mult Method\nimpl RpcMethodPrebuffered"]
EchoDef["Echo Method\nimpl RpcMethodPrebuffered"]
end
subgraph "Client Implementation"
TokioClient["muxio-tokio-rpc-client"]
WasmClient["muxio-wasm-rpc-client"]
AddCall["Add::call()"]
MultCall["Mult::call()"]
end
subgraph "Server Implementation"
TokioServer["muxio-tokio-rpc-server"]
AddHandler["Add Handler\nregister_prebuffered()"]
MultHandler["Mult Handler\nregister_prebuffered()"]
end
AddDef -.depends on.-> TokioClient
AddDef -.depends on.-> WasmClient
AddDef -.depends on.-> TokioServer
MultDef -.depends on.-> TokioClient
MultDef -.depends on.-> WasmClient
MultDef -.depends on.-> TokioServer
AddDef --> AddCall
AddDef --> AddHandler
MultDef --> MultCall
MultDef --> MultHandler
Shared Definitions Pattern
The key architectural principle is that service definitions are placed in a shared crate that both client and server implementations depend on. This shared dependency enforces API contracts at compile time.
Example Usage Pattern
The integration tests demonstrate this pattern in practice:
Client-side invocation:
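A hedged sketch of the client-side call (client construction elided, error handling simplified):

```rust
// Add::call encodes the input, sends it with Add::METHOD_ID, and decodes the reply.
let sum: f64 = Add::call(&client, vec![1.0, 2.0, 3.0]).await?;
assert_eq!(sum, 6.0);
```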
Server-side handler registration:
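And a hedged sketch of the matching server-side registration, assuming an `endpoint` handle obtained from the server:

```rust
// The handler decodes and encodes with the same shared definition the client uses.
endpoint
    .register_prebuffered(Add::METHOD_ID, |bytes, _ctx| async move {
        let numbers = Add::decode_request(&bytes)?;
        let sum: f64 = numbers.iter().sum();
        Ok(Add::encode_response(sum)?)
    })
    .await?;
```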
Both client and server use:
- `Add::METHOD_ID` for routing
- `Add::decode_request()` for parsing inputs
- `Add::encode_response()` for formatting outputs
Sources: README.md49 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:1-96 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:21-142
Type Safety Guarantees
Service definitions provide compile-time guarantees that prevent an entire class of distributed system bugs. Any mismatch between client and server results in a compilation error rather than a runtime failure.
Compile-Time Error Scenarios
| Scenario | Error Type | Detection Point |
|---|---|---|
| Client and server use different Input types | Type mismatch | Compile time |
| Client and server use different Output types | Type mismatch | Compile time |
| Adding/removing fields from request/response | Deserialization error | Compile time (via shared definition) |
| Using wrong method ID for handler | Method not found | Compile time (via shared constant) |
| Duplicate method names | Hash collision | Compile time (deterministic hashing) |
Example: Preventing Type Mismatches
If a developer attempts to change the Add method's input type on the server without updating the shared definition:
- The server code references `Add::decode_request()` from the shared definition
- The server handler expects the original `Vec<f64>` type
- Any type mismatch produces a compile error
- The server cannot be built until the shared definition is updated
- Once updated, all clients must also be rebuilt, automatically receiving the new type
This eliminates scenarios where:
- A client sends the wrong data format
- A server expects a different response structure
- Method IDs collide between different methods
- Serialization logic differs between client and server
Sources: README.md49 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:82-95
graph TB
App["Application Code"]
ServiceDef["Service Definition\n(RpcMethodPrebuffered)"]
CallTrait["RpcCallPrebuffered Trait"]
CallerInterface["RpcServiceCallerInterface"]
Transport["Transport\n(Tokio/WASM)"]
App -->|Add::call client, input| CallTrait
CallTrait -->|uses| ServiceDef
CallTrait -->|encode_request| ServiceDef
CallTrait -->|decode_response| ServiceDef
CallTrait -->|METHOD_ID| ServiceDef
CallTrait -->|call_rpc_buffered| CallerInterface
CallerInterface --> Transport
Integration with RPC Framework
Service definitions integrate with the broader RPC framework through the RpcCallPrebuffered trait, which provides the high-level call() method that applications use.
graph TB
subgraph "Trait Definition"
RpcCallPrebuffered["RpcCallPrebuffered\n(trait)"]
call_method["call()\n(async method)"]
end
subgraph "Trait Bounds"
RpcMethodPrebuffered["RpcMethodPrebuffered\n(provides encode/decode)"]
Send["Send + Sync\n(thread-safe)"]
sized["Sized\n(known size)"]
end
subgraph "Blanket Implementation"
blanket["impl<T> RpcCallPrebuffered for T\nwhere T: RpcMethodPrebuffered"]
end
subgraph "Example Types"
Add["Add\n(example service)"]
Mult["Mult\n(example service)"]
Echo["Echo\n(example service)"]
end
RpcCallPrebuffered --> call_method
blanket --> RpcCallPrebuffered
RpcMethodPrebuffered --> blanket
Send --> blanket
sized --> blanket
Add -.implements.-> RpcMethodPrebuffered
Mult -.implements.-> RpcMethodPrebuffered
Echo -.implements.-> RpcMethodPrebuffered
Add -.gets.-> RpcCallPrebuffered
Mult -.gets.-> RpcCallPrebuffered
Echo -.gets.-> RpcCallPrebuffered
The RpcCallPrebuffered trait is automatically implemented for any type that implements RpcMethodPrebuffered. This blanket implementation:
- Encodes the input using `encode_request()`
- Creates an `RpcRequest` with the encoded bytes and `METHOD_ID`
- Handles large argument payloads by routing them to the prebuffered payload field when needed
- Invokes the transport-specific caller interface
- Decodes the response using `decode_response()`
- Returns the typed result to the application
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:10-98 README.md:100-118
graph TB
subgraph "example-muxio-rpc-service-definition"
Add["Add\nInput: Vec<f64>\nOutput: f64\nMETHOD_ID"]
Mult["Mult\nInput: Vec<f64>\nOutput: f64\nMETHOD_ID"]
Echo["Echo\nInput: Vec<u8>\nOutput: Vec<u8>\nMETHOD_ID"]
end
subgraph "Client Crates"
TokioClient["muxio-tokio-rpc-client"]
WasmClient["muxio-wasm-rpc-client"]
end
subgraph "Server Crate"
TokioServer["muxio-tokio-rpc-server"]
end
subgraph "Example Application"
WsApp["example-muxio-ws-rpc-app"]
end
Add --> TokioClient
Add --> WasmClient
Add --> TokioServer
Add --> WsApp
Mult --> TokioClient
Mult --> WasmClient
Mult --> TokioServer
Mult --> WsApp
Echo --> TokioClient
Echo --> WasmClient
Echo --> TokioServer
Echo --> WsApp
Example Service Definition Crate
The example-muxio-rpc-service-definition crate demonstrates the shared definition pattern:
This crate exports the method definitions that are used across:
- Native Tokio client tests
- WASM client tests
- Server implementations
- Example applications
All consumers share the exact same type definitions, method IDs, and serialization logic.
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:1-6 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:21-27 README.md:70-73
Service Caller Interface
Relevant source files
- extensions/muxio-rpc-service-caller/Cargo.toml
- extensions/muxio-rpc-service-caller/src/caller_interface.rs
- extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs
Purpose and Scope
The Service Caller Interface defines the client-side abstraction for making RPC calls in the rust-muxio system. This interface, defined by the RpcServiceCallerInterface trait in the muxio-rpc-service-caller crate, provides the core logic for encoding requests, managing response streams, and handling errors that is shared by all client implementations.
This page covers the client-side RPC invocation mechanism. For server-side request handling, see Service Endpoint Interface. For information on defining RPC methods, see Service Definitions. For details on using prebuffered methods with this interface, see Prebuffered RPC Calls. For concrete implementations of this interface, see Tokio RPC Client and WASM RPC Client.
Sources: extensions/muxio-rpc-service-caller/Cargo.toml:1-22
Trait Definition
The RpcServiceCallerInterface trait defines the contract that all RPC clients must implement. It is runtime-agnostic and transport-agnostic, focusing solely on the mechanics of RPC invocation.
Core Methods
| Method | Return Type | Purpose |
|---|---|---|
| get_dispatcher() | Arc<TokioMutex<RpcDispatcher<'static>>> | Provides access to the RPC dispatcher for request management |
| get_emit_fn() | Arc<dyn Fn(Vec<u8>) + Send + Sync> | Returns function to transmit encoded bytes to transport layer |
| is_connected() | bool | Checks current transport connection state |
| call_rpc_streaming() | Result<(RpcStreamEncoder, DynamicReceiver), RpcServiceError> | Initiates streaming RPC call with chunked response handling |
| call_rpc_buffered() | Result<(RpcStreamEncoder, Result<T, RpcServiceError>), RpcServiceError> | Initiates buffered RPC call that collects complete response |
| set_state_change_handler() | async | Registers callback for transport state changes |
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:25-405
RPC Call Flow
The following diagram illustrates how an RPC call flows through the service caller interface:
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:32-349
sequenceDiagram
participant App as "Application Code"
participant Trait as "RpcServiceCallerInterface"
participant Dispatcher as "RpcDispatcher"
participant SendFn as "send_fn Closure"
participant RecvFn as "recv_fn Closure"
participant Transport as "Transport Layer"
participant DynChan as "DynamicChannel"
App->>Trait: call_rpc_streaming(RpcRequest)
Trait->>Trait: Check is_connected()
alt Not Connected
Trait-->>App: Err(ConnectionAborted)
end
Trait->>DynChan: Create channel (Bounded/Unbounded)
Trait->>SendFn: Create emission closure
Trait->>RecvFn: Create response handler
Trait->>Dispatcher: dispatcher.call(request, send_fn, recv_fn)
Dispatcher-->>Trait: RpcStreamEncoder
Trait->>App: Wait on readiness channel
Transport->>RecvFn: RpcStreamEvent::Header
RecvFn->>RecvFn: Extract RpcResultStatus
RecvFn->>Trait: Signal ready via oneshot
Trait-->>App: Return (encoder, receiver)
loop For each chunk
Transport->>RecvFn: RpcStreamEvent::PayloadChunk
RecvFn->>DynChan: Send Ok(bytes) or buffer error
end
Transport->>RecvFn: RpcStreamEvent::End
RecvFn->>RecvFn: Check final status
alt Success
RecvFn->>DynChan: Close channel
else Error
RecvFn->>DynChan: Send Err(RpcServiceError)
end
Streaming RPC Calls
Method Signature
The call_rpc_streaming method is the foundation for all RPC invocations:
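The concrete definition lives in caller_interface.rs; the following is only an approximate reconstruction of its shape from the method table above, with generics, lifetimes, and parameter names treated as assumptions.

```rust
// Approximate shape only; see caller_interface.rs for the authoritative definition.
async fn call_rpc_streaming(
    &self,
    request: RpcRequest,
    dynamic_channel_type: DynamicChannelType,
) -> Result<(RpcStreamEncoder, DynamicReceiver), RpcServiceError>;
```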
Channel Types
The method accepts a DynamicChannelType parameter that determines the channel behavior:
| Channel Type | Buffer Size | Use Case |
|---|---|---|
| Unbounded | Unlimited | Large or unpredictable response sizes |
| Bounded | DEFAULT_RPC_STREAM_CHANNEL_BUFFER_SIZE (8) | Controlled memory usage with backpressure |
Components Created
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:33-73 extensions/muxio-rpc-service-caller/src/caller_interface.rs:77-96
Connection State Validation
Before initiating any RPC call, the interface checks connection state:
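A hedged sketch of that guard, assuming the `RpcServiceError::Transport` variant wraps an `io::Error`-like value as the error table below suggests:

```rust
// Sketch only: reject the call up front when the transport is not connected.
if !self.is_connected() {
    return Err(RpcServiceError::Transport(
        std::io::Error::from(std::io::ErrorKind::ConnectionAborted).into(),
    ));
}
```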
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:44-53
Buffered RPC Calls
The call_rpc_buffered method builds on call_rpc_streaming to provide a simpler interface for methods that return complete responses:
Method Signature
Buffering Strategy
The method accumulates all response chunks into a buffer, then applies the decode function once:
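A minimal sketch of that strategy, assuming the `DynamicReceiver` returned by `call_rpc_streaming` implements `futures::Stream` over `Result<Vec<u8>, RpcServiceError>` items; `receiver` and `decode` are hypothetical names standing in for the real values.

```rust
use futures::StreamExt;

// Drain every chunk into one contiguous buffer, then decode a single time.
let mut buffer: Vec<u8> = Vec::new();
while let Some(chunk) = receiver.next().await {
    buffer.extend_from_slice(&chunk?); // propagate any mid-stream error
}
let output = decode(&buffer); // e.g. a method's decode_response
```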
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:351-399
Response Handler Implementation
The recv_fn closure implements the response handling logic by processing RpcStreamEvent types:
stateDiagram-v2
[*] --> WaitingForHeader
WaitingForHeader --> ReadySignaled: RpcStreamEvent::Header
ReadySignaled --> ProcessingChunks : Extract RpcResultStatus
ProcessingChunks --> ProcessingChunks: RpcStreamEvent::PayloadChunk\n(Success → DynamicSender\nError → error_buffer)
ProcessingChunks --> Completed: RpcStreamEvent::End
ProcessingChunks --> ErrorState: RpcStreamEvent::Error
Completed --> [*] : Close DynamicSender
ErrorState --> [*] : Send error and close
Event Processing Flow
Status Handling
| RpcResultStatus | Action |
|---|---|
| Success | Forward payload chunks to DynamicSender |
| MethodNotFound | Buffer payload, send RpcServiceError::Rpc with NotFound code |
| Fail | Send RpcServiceError::Rpc with Fail code |
| SystemError | Buffer error message, send RpcServiceError::Rpc with System code |
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:102-287
Mutex Usage Pattern
The response handler uses std::sync::Mutex (not tokio::sync::Mutex) because it executes in a synchronous context:
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:75-115
Error Handling Strategy
Error Types
Error Propagation Points
| Error Source | When | Error Type |
|---|---|---|
| Disconnected client | Before call_rpc_streaming | RpcServiceError::Transport(ConnectionAborted) |
| Dispatcher failure | During dispatcher.call() | RpcServiceError::Transport(io::Error::other) |
| Readiness timeout | No header received | RpcServiceError::Transport("channel closed prematurely") |
| Frame decode error | RpcStreamEvent::Error | RpcServiceError::Transport(frame_decode_error) |
| Remote RPC error | RpcStreamEvent::End with error status | RpcServiceError::Rpc(code + message) |
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:44-52 extensions/muxio-rpc-service-caller/src/caller_interface.rs:186-238 extensions/muxio-rpc-service-caller/src/caller_interface.rs:246-284
Implementation Requirements
Required Dependencies
Implementations of RpcServiceCallerInterface must maintain:
- `RpcDispatcher` instance - For managing request correlation and stream multiplexing
- Emit function - For transmitting encoded bytes to the transport layer
- Connection state - Boolean flag tracked by the transport implementation
Trait Bounds
The trait requires Send + Sync for async context compatibility:
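A simplified sketch of the declaration implied by that requirement (the real trait carries additional items and attributes):

```rust
// Simplified: the Send + Sync supertrait bounds are what allow a caller to be
// shared across async tasks and threads.
pub trait RpcServiceCallerInterface: Send + Sync {
    // ... methods from the "Core Methods" table above ...
}
```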
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:25-26
Integration with Service Definitions
The interface integrates with RpcMethodPrebuffered through the RpcCallPrebuffered extension trait, providing type-safe method invocation:
graph LR
subgraph "Service Definition"
Method["RpcMethodPrebuffered\ne.g., Echo"]
MethodID["METHOD_ID: u64"]
Encode["encode_request()"]
Decode["decode_response()"]
end
subgraph "Caller Extension"
CallTrait["RpcCallPrebuffered"]
CallFn["call(client, params)"]
end
subgraph "Caller Interface"
Interface["RpcServiceCallerInterface"]
Buffered["call_rpc_buffered()"]
end
Method --> MethodID
Method --> Encode
Method --> Decode
CallTrait --> Method
CallFn --> Encode
CallFn --> Decode
CallFn --> Interface
Interface --> Buffered
Example usage from tests:
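A hedged reconstruction of the kind of call the test makes, using the `Echo` definition against a mock client that implements `RpcServiceCallerInterface` (mock setup elided):

```rust
// Echo round-trips raw bytes; the mock injects the response via its shared DynamicSender.
let input = b"hello muxio".to_vec();
let echoed = Echo::call(&mock_client, input.clone()).await?;
assert_eq!(echoed, input);
```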
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs203
Mock Implementation for Testing
The test suite demonstrates a minimal implementation:
The mock stores a shared DynamicSender that tests can use to inject response data:
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:20-93
State Change Handling
The interface provides a method for registering transport state change callbacks:
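A hedged sketch of registering such a callback; the closure signature is inferred from the trait overview, and the example assumes `RpcTransportState` implements `Debug`.

```rust
client
    .set_state_change_handler(move |state: RpcTransportState| {
        // React to Connecting / Connected / Disconnected / Failed transitions.
        eprintln!("transport state changed: {state:?}");
    })
    .await;
```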
This allows applications to react to connection state transitions. See Transport State Management for details on state transitions.
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:401-405
Relationship to Concrete Implementations
All three implement the same interface, differing only in their transport mechanisms:
- TokioRpcClient (#5.2) - Uses the Tokio async runtime and `tokio-tungstenite` for native WebSocket connections
- WasmRpcClient (#5.3) - Uses `wasm-bindgen` and browser WebSocket APIs for WASM environments
- MockRpcClient - Test implementation with injectable response channels
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:24-93
Service Endpoint Interface
Relevant source files
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs
- extensions/muxio-rpc-service-endpoint/Cargo.toml
- extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs
Purpose and Scope
This document describes the RpcServiceEndpointInterface trait, which provides the server-side interface for handling incoming RPC requests in the rust-muxio system. This interface is responsible for registering method handlers, decoding incoming byte streams into RPC requests, dispatching those requests to the appropriate handlers, and encoding responses back to clients.
For information about the client-side interface for making RPC calls, see Service Caller Interface. For details about service method definitions, see Service Definitions.
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:1-138
Overview
The RpcServiceEndpointInterface<C> trait defines the contract that server-side endpoint implementations must fulfill. It is generic over a connection context type C, allowing each RPC handler to access connection-specific data such as authentication state, session information, or connection metadata.
Core Responsibilities
| Responsibility | Description |
|---|---|
| Handler Registration | Provides register_prebuffered method to associate method IDs with handler closures |
| Request Decoding | Processes incoming byte streams and identifies complete RPC requests |
| Request Dispatch | Routes requests to the appropriate handler based on METHOD_ID |
| Response Encoding | Encodes handler results back into binary frames for transmission |
| Concurrent Execution | Executes multiple handlers concurrently when multiple requests arrive |
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:8-14
Trait Definition
Diagram: RpcServiceEndpointInterface Trait Structure
The trait is parameterized by a connection context type C that must be Send + Sync + Clone + 'static. This context is passed to every handler invocation, enabling stateful request processing.
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:8-14
Handler Registration
The register_prebuffered Method
Handlers are registered by calling register_prebuffered with a unique METHOD_ID and an asynchronous closure. The method ensures that duplicate registrations are prevented at runtime.
Diagram: Handler Registration Flow
sequenceDiagram
participant App as "Application Code"
participant Endpoint as "RpcServiceEndpointInterface"
participant Handlers as "HandlersLock (WithHandlers)"
participant HashMap as "HashMap<u64, Handler>"
App->>Endpoint: register_prebuffered(METHOD_ID, handler)
Endpoint->>Handlers: with_handlers(|handlers| {...})
Handlers->>HashMap: entry(METHOD_ID)
alt METHOD_ID not registered
HashMap-->>Handlers: Entry::Vacant
Handlers->>HashMap: insert(Arc::new(wrapped_handler))
HashMap-->>Handlers: Ok(())
Handlers-->>Endpoint: Ok(())
Endpoint-->>App: Ok(())
else METHOD_ID already exists
HashMap-->>Handlers: Entry::Occupied
Handlers-->>Endpoint: Err(RpcServiceEndpointError::Handler)
Endpoint-->>App: Err("Handler already registered")
end
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:35-64
Handler Signature
Handlers must conform to this signature:
| Component | Type |
|---|---|
| Input | Vec<u8> (raw request bytes) |
| Context | C (connection context) |
| Output | Future<Output = Result<Vec<u8>, Box<dyn Error + Send + Sync>>> |
The handler closure is wrapped in an Arc to allow shared ownership across multiple concurrent invocations.
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:41-44
Example Usage
Integration tests demonstrate typical handler registration patterns:
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:35-43
Request Processing Flow
The read_bytes Method
The read_bytes method is the core entry point for processing incoming data. It implements a three-stage pipeline that separates synchronous framing operations from asynchronous handler execution.
Diagram: Three-Stage Request Processing Pipeline
graph TB
subgraph "Stage 1: Decode Incoming Frames"
A["read_bytes(bytes)"] --> B["RpcDispatcher::read_bytes(bytes)"]
B --> C["Returns Vec<request_id>"]
C --> D["Check is_rpc_request_finalized(id)"]
D --> E["delete_rpc_request(id)"]
E --> F["Collect finalized_requests"]
end
subgraph "Stage 2: Execute RPC Handlers"
F --> G["For each (request_id, request)"]
G --> H["process_single_prebuffered_request"]
H --> I["Lookup handler by METHOD_ID"]
I --> J["Invoke handler(request_bytes, ctx)"]
J --> K["Handler returns response_bytes"]
K --> L["join_all(response_futures)"]
end
subgraph "Stage 3: Encode & Emit Responses"
L --> M["For each response"]
M --> N["dispatcher.respond()"]
N --> O["Chunk and serialize"]
O --> P["on_emit(bytes)"]
end
style A fill:#f9f9f9
style L fill:#f9f9f9
style P fill:#f9f9f9
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:78-137
Stage 1: Frame Decoding
The first stage is synchronous and processes the raw byte stream:
1. Decode Frames: `dispatcher.read_bytes(bytes)` parses the binary framing protocol and returns a list of request IDs that were affected by the incoming data.
2. Identify Complete Requests: For each request ID, check `is_rpc_request_finalized(id)` to determine whether the request is fully received.
3. Extract Request Data: Call `delete_rpc_request(id)` to remove the complete request from the dispatcher's internal state and obtain the full `RpcRequest` structure.
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:78-97
Stage 2: Asynchronous Handler Execution
The second stage executes all handlers concurrently:
Diagram: Concurrent Handler Execution
1. Lookup Handler: Use the `METHOD_ID` from the request to find the registered handler in the handlers map.
2. Invoke Handler: If found, execute the handler closure with the request bytes and connection context. If not found, generate a `NotFound` error response.
3. Await All: Use `join_all` to wait for all handler futures to complete before proceeding to Stage 3.
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:99-126
Stage 3: Response Encoding
The third stage is synchronous and encodes responses back to the transport:
1. Encode Response: Each `RpcResponse` is passed to `dispatcher.respond()` along with the chunk size and emit callback.
2. Chunk and Serialize: The dispatcher chunks large responses and serializes them according to the binary framing protocol.
3. Emit Bytes: The `on_emit` callback is invoked with each chunk of bytes, which the transport implementation sends over the network.
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:127-137
Connection Context
Generic Context Type C
The endpoint interface is generic over a connection context type C that represents per-connection state. This context is cloned and passed to every handler invocation, allowing handlers to access:
- Authentication credentials
- Session data
- Connection metadata (IP address, connection time, etc.)
- Per-connection resources (database connections, etc.)
Context Requirements
| Trait Bound | Reason |
|---|---|
Send | Handlers run in async tasks that may move between threads |
Sync | Multiple handlers may reference the same context concurrently |
Clone | Each handler receives its own clone of the context |
'static | Contexts must outlive handler invocations |
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:9-11
Stateless Servers
For stateless servers, the context type can be ():
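A brief hedged sketch with a unit context, using the `Echo` definition from the shared service-definition crate; the handler simply ignores the context argument.

```rust
// With C = (), handlers carry no per-connection state.
endpoint
    .register_prebuffered(Echo::METHOD_ID, |bytes, _ctx: ()| async move {
        let payload = Echo::decode_request(&bytes)?;
        Ok(Echo::encode_response(payload)?)
    })
    .await?;
```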
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:37-43
classDiagram
class WithHandlers~C~ {<<trait>>\n+with_handlers(F) Result~R~}
class PrebufferedHandlers {<<HashMap>>\nu64 → Arc~HandlerFn~}
class RwLock~PrebufferedHandlers~ {+read() RwLockReadGuard\n+write() RwLockWriteGuard}
class TokioRwLock~PrebufferedHandlers~ {+read() RwLockReadGuard\n+write() RwLockWriteGuard}
WithHandlers <|.. RwLock~PrebufferedHandlers~ : implements (std)
WithHandlers <|.. TokioRwLock~PrebufferedHandlers~ : implements (tokio)
RwLock --> PrebufferedHandlers : protects
TokioRwLock --> PrebufferedHandlers : protects
Handler Storage with WithHandlers
The HandlersLock associated type must implement the WithHandlers<C> trait, which provides thread-safe access to the handler storage.
Diagram: Handler Storage Implementations
The with_handlers method provides a closure-based API for accessing the handler map, abstracting over different locking mechanisms (std RwLock vs tokio RwLock).
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs13 extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:46-62
Error Handling
Registration Errors
The register_prebuffered method returns RpcServiceEndpointError if:
| Error | Condition |
|---|---|
| RpcServiceEndpointError::Handler | A handler for the given METHOD_ID is already registered |
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:49-52
Request Processing Errors
During read_bytes, errors can occur at multiple stages:
| Stage | Error Type | Handling |
|---|---|---|
| Frame Decoding | RpcServiceEndpointError::Dispatch | Returned immediately, processing stops |
| Handler Execution | Box<dyn Error> | Converted to RpcResponse::error, sent to client |
| Response Encoding | RpcServiceEndpointError::Dispatch | Ignored (best-effort response delivery) |
Handler errors are caught and converted into error responses that are sent back to the client using the RPC error protocol. See RPC Service Errors for details on error serialization.
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:74-137
graph LR
A["RpcServer"] --> B["endpoint()"]
B --> C["Arc<RpcEndpoint>"]
C --> D["RpcServiceEndpointInterface impl"]
D --> E["register_prebuffered()"]
D --> F["read_bytes()"]
G["WebSocket Handler"] --> H["receive bytes"]
H --> F
F --> I["on_emit callback"]
I --> J["send bytes"]
Integration with Transport Implementations
Server Implementation Pattern
Transport implementations like RpcServer implement RpcServiceEndpointInterface and provide an endpoint() method to obtain a reference for handler registration:
Diagram: Server-Endpoint Relationship
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:26-61
Typical Usage Pattern
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:26-70
Cross-Platform Compatibility
The RpcServiceEndpointInterface is runtime-agnostic and can be implemented for different async runtimes:
| Implementation | Runtime | Locking Mechanism |
|---|---|---|
| muxio-tokio-rpc-server | Tokio | tokio::sync::RwLock |
| Custom implementations | Any | std::sync::RwLock or custom |
The trait's design with async_trait allows both Tokio-based and non-Tokio implementations to coexist.
Sources: extensions/muxio-rpc-service-endpoint/Cargo.toml:21-27
Performance Considerations
Concurrent Handler Execution
The read_bytes method uses join_all to execute all handlers that can be dispatched from a single batch of incoming bytes. This maximizes throughput when multiple requests arrive simultaneously.
Zero-Copy Processing
Handlers receive Vec<u8> directly from the dispatcher without intermediate allocations. The binary framing protocol minimizes overhead during frame reassembly.
Handler Caching
Handlers are stored as Arc<Handler> in the handlers map, allowing them to be cloned efficiently when dispatching to multiple concurrent requests.
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:105-126
graph TB
A["Integration Test"] --> B["Start RpcServer"]
B --> C["endpoint.register_prebuffered()"]
C --> D["server.serve_with_listener()"]
A --> E["Start RpcClient"]
E --> F["Method::call()"]
F --> G["WebSocket Transport"]
G --> H["Server read_bytes()"]
H --> I["Handler Execution"]
I --> J["Response"]
J --> K["Client receives result"]
K --> L["assert_eq!"]
Testing
Integration tests validate the endpoint interface by creating real server-client connections:
Diagram: Integration Test Flow
Tests cover:
- Successful request-response roundtrips
- Error propagation from handlers
- Large payload handling (chunked transmission)
- Method not found errors
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:19-97 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:40-142
Prebuffered RPC Calls
Relevant source files
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs
- extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs
Purpose and Scope
This page documents the prebuffered RPC mechanism in the muxio system, which provides a complete request/response pattern for RPC calls. Prebuffered calls send the entire request payload upfront, wait for the complete response, and return a typed result. This is the simplest and most common RPC pattern in the system.
For information about defining RPC methods, see Service Definitions. For streaming RPC calls that handle chunked data incrementally, see Streaming RPC Calls. For the underlying client and server interfaces, see Service Caller Interface and Service Endpoint Interface.
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:1-99
Overview of Prebuffered RPC
Prebuffered RPC calls represent a synchronous, request-response communication pattern where:
- The client encodes the entire request before sending
- The request is transmitted to the server (potentially in chunks if large)
- The server processes the complete request and produces a response
- The response is transmitted back to the client (potentially in chunks if large)
- The client decodes and returns the typed response
sequenceDiagram
participant App as "Application Code"
participant Trait as "RpcCallPrebuffered::call()"
participant Encode as "encode_request()"
participant Client as "RpcServiceCallerInterface"
participant Network as "Network Transport"
participant Server as "RPC Server"
participant Decode as "decode_response()"
App->>Trait: Add::call(&client, vec![1.0, 2.0, 3.0])
Trait->>Encode: encode_request(vec![1.0, 2.0, 3.0])
Encode-->>Trait: encoded_bytes
alt "Small payload (<64KB)"
Trait->>Trait: rpc_param_bytes = Some(encoded_bytes)
Trait->>Trait: rpc_prebuffered_payload_bytes = None
else "Large payload (>=64KB)"
Trait->>Trait: rpc_param_bytes = None
Trait->>Trait: rpc_prebuffered_payload_bytes = Some(encoded_bytes)
end
Trait->>Client: call_rpc_buffered(RpcRequest)
Client->>Network: transmit (chunked if needed)
Network->>Server: receive and reassemble
Server->>Server: process request
Server->>Network: transmit response (chunked if needed)
Network->>Client: receive and reassemble
Client-->>Trait: Result<Vec<u8>, RpcServiceError>
Trait->>Decode: decode_response(response_bytes)
Decode-->>Trait: Result<f64, io::Error>
Trait-->>App: Result<f64, RpcServiceError>
The term "prebuffered" refers to the fact that both request and response payloads are fully buffered before being processed by application code, as opposed to streaming approaches where data is processed incrementally.
Diagram: Complete Prebuffered RPC Call Flow
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:30-98 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:18-97
The RpcCallPrebuffered Trait
The RpcCallPrebuffered trait provides the high-level interface for making prebuffered RPC calls. It is automatically implemented for any type that implements RpcMethodPrebuffered.
Diagram: RpcCallPrebuffered Trait Hierarchy
graph TB
subgraph "Trait Definition"
RpcCallPrebuffered["RpcCallPrebuffered\n(trait)"]
call_method["call()\n(async method)"]
end
subgraph "Trait Bounds"
RpcMethodPrebuffered["RpcMethodPrebuffered\n(provides encode/decode)"]
Send["Send + Sync\n(thread-safe)"]
sized["Sized\n(known size)"]
end
subgraph "Blanket Implementation"
blanket["impl<T> RpcCallPrebuffered for T\nwhere T: RpcMethodPrebuffered"]
end
subgraph "Example Types"
Add["Add\n(example service)"]
Mult["Mult\n(example service)"]
Echo["Echo\n(example service)"]
end
RpcCallPrebuffered --> call_method
blanket --> RpcCallPrebuffered
RpcMethodPrebuffered --> blanket
Send --> blanket
sized --> blanket
Add -.implements.-> RpcMethodPrebuffered
Mult -.implements.-> RpcMethodPrebuffered
Echo -.implements.-> RpcMethodPrebuffered
Add -.gets.-> RpcCallPrebuffered
Mult -.gets.-> RpcCallPrebuffered
Echo -.gets.-> RpcCallPrebuffered
The trait signature is:
| Trait | Method | Parameters | Returns |
|---|---|---|---|
| RpcCallPrebuffered | call() | rpc_client: &C, input: Self::Input | Result<Self::Output, RpcServiceError> |
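The table above implies roughly the following trait shape. This is a sketch for orientation only; the actual definition in muxio-rpc-service-caller may differ in attributes, bounds, and generics.

```rust
// Sketch only: approximates the trait shape implied by the table above.
// Self::Input and Self::Output come from the RpcMethodPrebuffered supertrait.
pub trait RpcCallPrebuffered: RpcMethodPrebuffered + Send + Sync + Sized {
    async fn call<C>(
        rpc_client: &C,
        input: Self::Input,
    ) -> Result<Self::Output, RpcServiceError>
    where
        C: RpcServiceCallerInterface + Send + Sync;
}
```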
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:10-21 extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:23-28
Request Encoding and Transport Strategy
The RpcCallPrebuffered::call() implementation uses a size-based transport strategy so that arguments of any size can be handled. This strategy is necessary because the RPC header frame has a maximum size limit (typically 64KB).
Diagram: Argument Size-Based Transport Selection
graph TB
start["encode_request(input)"]
check{"encoded_args.len()\n>= DEFAULT_SERVICE_MAX_CHUNK_SIZE?"}
small["Small Arguments Path"]
large["Large Arguments Path"]
param_bytes["rpc_param_bytes = Some(encoded_args)\nrpc_prebuffered_payload_bytes = None"]
payload_bytes["rpc_param_bytes = None\nrpc_prebuffered_payload_bytes = Some(encoded_args)"]
create_request["Create RpcRequest\nwith is_finalized = true"]
send["call_rpc_buffered(request)"]
start --> check
check -->|No < 64KB| small
check -->|Yes >= 64KB| large
small --> param_bytes
large --> payload_bytes
param_bytes --> create_request
payload_bytes --> create_request
create_request --> send
Small Arguments Path (< 64KB)
When encoded arguments are smaller than DEFAULT_SERVICE_MAX_CHUNK_SIZE:
- Arguments are placed in RpcRequest.rpc_param_bytes
- The entire request (header + arguments) is transmitted in a single frame
- Most efficient for typical RPC calls
Large Arguments Path (>= 64KB)
When encoded arguments exceed the chunk size:
- Arguments are placed in RpcRequest.rpc_prebuffered_payload_bytes
- The RpcDispatcher automatically chunks the payload
- The header is sent first, followed by payload chunks
- The server reassembles chunks before invoking the handler
RpcRequest Structure
| Field | Type | Purpose |
|---|---|---|
| rpc_method_id | u64 | Compile-time generated method identifier |
| rpc_param_bytes | Option<Vec<u8>> | Small arguments sent in header |
| rpc_prebuffered_payload_bytes | Option<Vec<u8>> | Large arguments sent as chunked payload |
| is_finalized | bool | Always true for prebuffered calls |
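As a rough illustration of the strategy above (a sketch, not the exact code in traits.rs; any additional RpcRequest fields are omitted):

```rust
// Sketch of the size-based branch described on this page; field and constant
// names are taken from the tables above, but the real implementation may differ.
let encoded_args = Self::encode_request(input)?;

let (rpc_param_bytes, rpc_prebuffered_payload_bytes) =
    if encoded_args.len() >= DEFAULT_SERVICE_MAX_CHUNK_SIZE {
        // Large arguments: send as a chunked prebuffered payload.
        (None, Some(encoded_args))
    } else {
        // Small arguments: send inline with the request header.
        (Some(encoded_args), None)
    };

let request = RpcRequest {
    rpc_method_id: Self::METHOD_ID,
    rpc_param_bytes,
    rpc_prebuffered_payload_bytes,
    is_finalized: true,
};
```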
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:49-73 extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:30-48
graph LR
subgraph "Server Processing"
handler["Method Handler\nprocess request"]
encode_response["encode_response(result)"]
end
subgraph "Network Layer"
chunk["Automatic Chunking\n(if response > 64KB)"]
reassemble["Automatic Reassembly\n(client-side)"]
end
subgraph "Client Processing"
call_buffered["call_rpc_buffered()"]
nested_result["Result<Result<Output, io::Error>, RpcServiceError>"]
decode["decode_response(bytes)"]
flatten["Flatten nested Result"]
final["Result<Output, RpcServiceError>"]
end
handler --> encode_response
encode_response --> chunk
chunk --> reassemble
reassemble --> call_buffered
call_buffered --> nested_result
nested_result --> decode
decode --> flatten
flatten --> final
Response Handling and Decoding
Once the request is sent via call_rpc_buffered(), the client waits for the complete response. The response may be chunked during transmission, but call_rpc_buffered() handles reassembly transparently.
Diagram: Response Processing Pipeline
The response handling involves nested Result types:
- Outer Result: Result<_, RpcServiceError> - indicates whether the RPC infrastructure succeeded
  - Ok: the request was sent and a response was received
  - Err: network error, serialization error, or remote RPC error
- Inner Result: Result<Output, io::Error> - indicates whether decoding succeeded
  - Ok: the response was successfully decoded into the typed output
  - Err: deserialization error in decode_response()
The call() method flattens these nested results and converts the inner io::Error to RpcServiceError::Transport.
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:75-96
Error Propagation
Prebuffered RPC calls can fail at multiple stages, all represented by RpcServiceError:
| Error Type | Cause | Example |
|---|---|---|
| RpcServiceError::Rpc(code: NotFound) | Method not registered on server | Calling unregistered method |
| RpcServiceError::Rpc(code: System) | Server handler returned error | Handler logic failure |
| RpcServiceError::Rpc(code: Fail) | Application-level error | Business logic error |
| RpcServiceError::Transport | Network or serialization error | Connection closed, decode failure |
| RpcServiceError::Cancelled | Request cancelled | Client-side cancellation |
Diagram: Error Propagation Through Prebuffered Call
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:135-177 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:99-152 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:205-240
Usage Examples
Basic Prebuffered Call
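A sketch of a basic call, assuming the Add method from the example service definitions and a connected client handle (as in the diagrams earlier on this page):

```rust
// Sketch: `Add` is the example service definition; `client` is any connected
// RpcServiceCallerInterface implementation (e.g. RpcClient or RpcWasmClient).
let sum: f64 = Add::call(&client, vec![1.0, 2.0, 3.0]).await?;
```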
This single line:
- Encodes the Vec<f64> using Add::encode_request()
- Creates an RpcRequest with METHOD_ID and the encoded parameters
- Transmits the request to the server
- Waits for and receives the complete response
- Decodes the response using Add::decode_response()
- Returns the typed f64 result
Concurrent Prebuffered Calls
Multiple prebuffered calls can be made concurrently over a single connection:
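For example (a sketch assuming the Add and Mult definitions shown earlier; tokio::try_join! is used here purely for illustration):

```rust
// Sketch: issue two prebuffered calls concurrently over one connection.
// Responses are correlated by request ID, so completion order does not matter.
let (sum, product) = tokio::try_join!(
    Add::call(&client, vec![1.0, 2.0, 3.0]),
    Mult::call(&client, vec![2.0, 4.0]),
)?;
```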
Each call is assigned a unique request ID by the RpcDispatcher, allowing responses to be correlated correctly even when they arrive out of order.
Large Payload Handling
The prebuffered mechanism transparently handles large payloads:
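A sketch of a call whose encoded arguments exceed DEFAULT_SERVICE_MAX_CHUNK_SIZE, assuming an Echo method whose input and output are raw bytes:

```rust
// Sketch: a payload well above the ~64KB chunk threshold. The call site is
// identical to the small-payload case; chunking happens below this API.
let large_input: Vec<u8> = vec![0xAB; 1024 * 1024]; // 1 MiB
let echoed = Echo::call(&client, large_input.clone()).await?;
assert_eq!(echoed, large_input);
```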
The chunking and reassembly happen automatically in the RpcDispatcher and RpcStreamEncoder/RpcStreamDecoder layers.
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:81-96 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:154-203
Testing Prebuffered Calls
graph LR
subgraph "Test Setup"
MockClient["MockRpcClient\n(implements RpcServiceCallerInterface)"]
SharedSender["Arc<Mutex<Option<DynamicSender>>>\n(response injection)"]
AtomicBool["Arc<AtomicBool>\n(connection state)"]
end
subgraph "Test Execution"
TestCode["Test code calls\nEcho::call(&mock_client, ...)"]
Background["Background task\ninjects response"]
end
subgraph "Verification"
Assert["Assert result matches\nexpected value or error"]
end
MockClient --> SharedSender
MockClient --> AtomicBool
TestCode --> MockClient
Background --> SharedSender
TestCode --> Assert
Unit Testing with Mock Clients
The prebuffered_caller_tests.rs file demonstrates testing RpcCallPrebuffered with a mock client implementation:
Diagram: Mock Client Testing Architecture
The mock client implementation:
- Returns a dummy RpcDispatcher from get_dispatcher()
- Returns a no-op emit function from get_emit_fn()
- Provides a DynamicSender via shared state for response injection
- Allows control of is_connected() state via an AtomicBool
Integration Testing with Real Server
Integration tests use a real RpcServer and real clients (both Tokio and WASM):
| Test | Purpose | Files |
|---|---|---|
| test_success_client_server_roundtrip | Validates successful RPC calls | tokio:19-97 wasm:39-142 |
| test_error_client_server_roundtrip | Validates error propagation | tokio:99-152 wasm:144-227 |
| test_large_prebuffered_payload_roundtrip | Validates chunked transmission | tokio:154-203 wasm:229-312 |
| test_method_not_found_error | Validates NotFound error code | tokio:205-240 |
graph LR
WasmClient["RpcWasmClient\n(test subject)"]
Bridge["WebSocket Bridge\n(test harness)"]
TokioServer["RpcServer\n(real server)"]
WasmClient -->|emit callback| Bridge
Bridge -->|WebSocket frames| TokioServer
TokioServer -->|WebSocket frames| Bridge
Bridge -->|handle_message| WasmClient
The WASM integration tests use a WebSocket bridge to connect the RpcWasmClient to a real RpcServer:
Diagram: WASM Integration Test Architecture
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:20-93 extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:97-133 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:18-97 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-20
Key Implementation Details
Finalization Requirement
All prebuffered calls set is_finalized: true in the RpcRequest. This signals to the RpcDispatcher that no additional data will be sent after the initial request (and optional prebuffered payload), allowing it to optimize resource management.
Decode Closure Pattern
The call() method creates a decode closure and passes it to call_rpc_buffered():
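Conceptually, the pattern looks like this (a simplified sketch; the exact call_rpc_buffered() signature in the source may differ):

```rust
// Sketch: the typed decode step is erased into a closure over raw response
// bytes, so call_rpc_buffered() never needs to know Self::Output.
let decode = |bytes: &[u8]| Self::decode_response(bytes);
let nested = rpc_client.call_rpc_buffered(request, decode).await;
```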
This pattern allows the generic call_rpc_buffered() method to decode the response without knowing the specific output type.
Instrumentation
The call() method uses the #[instrument(skip(rpc_client, input))] attribute from the tracing crate, providing detailed logging at various trace levels:
- debug: method ID, entry/exit points
- trace: request structure, result details
- warn: large payload detection
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:49-98 extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:71 extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:75-76
Streaming RPC Calls
Relevant source files
- extensions/muxio-rpc-service-caller/src/caller_interface.rs
- extensions/muxio-rpc-service-caller/src/lib.rs
- extensions/muxio-rpc-service-caller/tests/dynamic_channel_tests.rs
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs
- extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs
Purpose and Scope
This document describes the streaming RPC mechanism in rust-muxio, which allows bidirectional data transfer over RPC calls with chunked payloads and asynchronous processing. Streaming RPC is used when responses are large, dynamic in size, or need to be processed incrementally.
For information about one-shot RPC calls with complete request/response buffers, see Prebuffered RPC Calls. For the underlying service definition traits, see Service Definitions. For client-side invocation patterns, see Service Caller Interface.
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:1-406
Overview of Streaming RPC
Streaming RPC calls provide a mechanism for sending requests and receiving responses that may be too large to buffer entirely in memory, or where the response size is unknown at call time. Unlike prebuffered calls which return complete Result<T, RpcServiceError> values, streaming calls return:
- RpcStreamEncoder - for sending additional payload chunks to the server after the initial request
- DynamicReceiver - a stream that yields Result<Vec<u8>, RpcServiceError> chunks asynchronously
The streaming mechanism handles:
- Chunked payload transmission and reassembly
- Backpressure through bounded or unbounded channels
- Error propagation and early termination
- Request/response correlation across multiplexed streams
Key Distinction:
- Prebuffered RPC : Entire response buffered in memory before returning to caller
- Streaming RPC : Response chunks streamed incrementally as they arrive
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:33-73
Initiating a Streaming Call
The call_rpc_streaming method on RpcServiceCallerInterface initiates a streaming RPC call:
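Its shape is approximately as follows, based on the parameter and return descriptions below (a sketch; the actual signature in caller_interface.rs may differ):

```rust
// Sketch of the method shape (as part of RpcServiceCallerInterface);
// exact generics and bounds are not shown here.
async fn call_rpc_streaming(
    &self,
    request: RpcRequest,
    dynamic_channel_type: DynamicChannelType,
) -> Result<(RpcStreamEncoder, DynamicReceiver), RpcServiceError>;
```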
Method Parameters
| Parameter | Type | Description |
|---|---|---|
| request | RpcRequest | Contains rpc_method_id, optional rpc_param_bytes, and optional rpc_prebuffered_payload_bytes |
| dynamic_channel_type | DynamicChannelType | Specifies Bounded or Unbounded channel for response streaming |
Return Value
On success, returns a tuple containing:
- RpcStreamEncoder - used to send additional payload chunks after the initial request
- DynamicReceiver - a stream that yields response chunks as Result<Vec<u8>, RpcServiceError>
On failure, returns RpcServiceError::Transport if the client is disconnected or if dispatcher registration fails.
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:32-54
Dynamic Channel Types
The DynamicChannelType enum determines the backpressure characteristics of the response stream:
graph LR
DCT["DynamicChannelType"]
UNBOUNDED["Unbounded\nmpsc::unbounded()"]
BOUNDED["Bounded\nmpsc::channel(buffer_size)"]
DCT -->|No backpressure| UNBOUNDED
DCT -->|Backpressure at buffer_size| BOUNDED
UNBOUNDED -->|Creates| DS_UNBOUNDED["DynamicSender::Unbounded"]
UNBOUNDED -->|Creates| DR_UNBOUNDED["DynamicReceiver::Unbounded"]
BOUNDED -->|Creates| DS_BOUNDED["DynamicSender::Bounded"]
BOUNDED -->|Creates| DR_BOUNDED["DynamicReceiver::Bounded"]
Unbounded Channels
Created with DynamicChannelType::Unbounded. Uses mpsc::unbounded() internally, allowing unlimited buffering of response chunks. Suitable for:
- Fast consumers that can process chunks quickly
- Scenarios where response size is bounded and known to fit in memory
- Testing and development
Risk: Unbounded channels can lead to unbounded memory growth if the receiver is slower than the sender.
Bounded Channels
Created with DynamicChannelType::Bounded. Uses mpsc::channel(DEFAULT_RPC_STREAM_CHANNEL_BUFFER_SIZE) where DEFAULT_RPC_STREAM_CHANNEL_BUFFER_SIZE is typically 8. Provides backpressure when the buffer is full. Suitable for:
- Production systems with predictable memory usage
- Long-running streams with unknown total size
- Rate-limiting response processing
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:56-73 extensions/muxio-rpc-service-caller/tests/dynamic_channel_tests.rs:50-65
RpcStreamEncoder and DynamicReceiver
RpcStreamEncoder
The RpcStreamEncoder is created by RpcDispatcher::call() and provides methods to send additional payload chunks after the initial request. It wraps an RpcEmit trait implementation that sends binary frames over the transport.
Key characteristics:
- Created with max_chunk_size from DEFAULT_SERVICE_MAX_CHUNK_SIZE
- Automatically chunks large payloads into frames
- Shares the same rpc_request_id as the original request
- Can send multiple chunks before finalizing the stream
DynamicReceiver
The DynamicReceiver is a unified abstraction over mpsc::UnboundedReceiver and mpsc::Receiver that implements Stream<Item = Result<Vec<u8>, RpcServiceError>>.
| Variant | Underlying Type | Backpressure |
|---|---|---|
| Unbounded | mpsc::UnboundedReceiver | None |
| Bounded | mpsc::Receiver | Yes |
graph TB
subgraph "Call Flow"
CALL["call_rpc_streaming()"]
DISPATCHER["RpcDispatcher::call()"]
ENCODER["RpcStreamEncoder"]
RECEIVER["DynamicReceiver"]
end
subgraph "Response Flow"
RECV_FN["recv_fn closure\n(RpcResponseHandler)"]
TX["DynamicSender"]
RX["DynamicReceiver"]
APP["Application code\n.next().await"]
end
CALL -->|Creates channel| TX
CALL -->|Creates channel| RX
CALL -->|Registers| DISPATCHER
DISPATCHER -->|Returns| ENCODER
CALL -->|Returns| RECEIVER
RECV_FN -->|send_and_ignore| TX
TX -.->|mpsc| RX
RX -->|yields chunks| APP
Both variants provide the same next() interface through the StreamExt trait, abstracting the channel type from the caller.
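A usage sketch, assuming a connected client that implements RpcServiceCallerInterface and an already-constructed RpcRequest (type and method names as described above; exact signatures may differ):

```rust
use futures::StreamExt; // DynamicReceiver implements Stream

// Sketch: start a streaming call with a bounded channel and drain the
// response chunks as they arrive.
let (_encoder, mut receiver) = client
    .call_rpc_streaming(request, DynamicChannelType::Bounded)
    .await?;

while let Some(chunk) = receiver.next().await {
    match chunk {
        Ok(bytes) => println!("received {} bytes", bytes.len()),
        Err(err) => {
            eprintln!("stream error: {err:?}");
            break;
        }
    }
}
```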
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:56-73 extensions/muxio-rpc-service-caller/src/caller_interface.rs:289-323
stateDiagram-v2
[*] --> Waiting : recv_fn registered
Waiting --> HeaderReceived: RpcStreamEvent::Header
HeaderReceived --> Streaming: RpcResultStatus::Success
HeaderReceived --> ErrorBuffering: RpcResultStatus::MethodNotFound\nRpcResultStatus::Fail\nRpcResultStatus::SystemError
Streaming --> Streaming: RpcStreamEvent::PayloadChunk
ErrorBuffering --> ErrorBuffering: RpcStreamEvent::PayloadChunk
Streaming --> Complete: RpcStreamEvent::End
ErrorBuffering --> Complete: RpcStreamEvent::End
Waiting --> Error: RpcStreamEvent::Error
HeaderReceived --> Error: RpcStreamEvent::Error
Streaming --> Error: RpcStreamEvent::Error
ErrorBuffering --> Error: RpcStreamEvent::Error
Complete --> [*]
Error --> [*]
Stream Event Processing
The recv_fn closure registered with the dispatcher handles four types of RpcStreamEvent:
Event Types and State Machine
RpcStreamEvent::Header
Received first for every RPC response. Contains RpcHeader with:
- rpc_msg_type - should be RpcMessageType::Response
- rpc_request_id - correlation ID matching the request
- rpc_method_id - method identifier
- rpc_metadata_bytes - first byte contains RpcResultStatus
The recv_fn extracts RpcResultStatus from rpc_metadata_bytes[0] and stores it for subsequent processing. A readiness signal is sent via the oneshot channel to unblock the call_rpc_streaming future.
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:118-135
RpcStreamEvent::PayloadChunk
Contains a chunk of the response payload. Processing depends on the previously received RpcResultStatus:
| Status | Behavior |
|---|---|
| Success | Chunk sent to DynamicSender with send_and_ignore(Ok(bytes)) |
| MethodNotFound, Fail, SystemError | Chunk buffered in error_buffer for error message construction |
| None (not yet received) | Chunk buffered defensively |
The synchronous recv_fn uses StdMutex to protect shared state (tx_arc, status, error_buffer).
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:136-174
RpcStreamEvent::End
Signals stream completion. Final actions depend on RpcResultStatus:
- RpcResultStatus::MethodNotFound: constructs RpcServiceError::Rpc with RpcServiceErrorCode::NotFound and the buffered error payload
- RpcResultStatus::Fail: sends RpcServiceError::Rpc with RpcServiceErrorCode::Fail
- RpcResultStatus::SystemError: sends RpcServiceError::Rpc with RpcServiceErrorCode::System and the buffered error payload
- RpcResultStatus::Success: closes the channel normally (no error sent)
The DynamicSender is taken from the Option wrapper and dropped, closing the channel.
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:175-245
RpcStreamEvent::Error
Indicates a framing protocol error (e.g., malformed frames, decode errors). Sends RpcServiceError::Transport to the DynamicReceiver and also signals the readiness channel if still waiting for the header. The DynamicSender is dropped immediately.
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:246-285
Error Handling in Streams
Error Propagation Path
Pre-Dispatch Errors
Before the dispatcher registers the request, errors are returned immediately from call_rpc_streaming():
- Disconnected client: RpcServiceError::Transport (io::ErrorKind::ConnectionAborted)
- Dispatcher registration failure: RpcServiceError::Transport with error details
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:44-53 extensions/muxio-rpc-service-caller/src/caller_interface.rs:315-328
Post-Dispatch Errors
After the dispatcher registers the request, errors are sent through the DynamicReceiver stream:
- Framing errors: RpcServiceError::Transport from RpcStreamEvent::Error
- RPC-level errors: RpcServiceError::Rpc with the appropriate RpcServiceErrorCode based on RpcResultStatus
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:185-238 extensions/muxio-rpc-service-caller/src/caller_interface.rs:246-285
sequenceDiagram
participant Caller as "call_rpc_streaming()"
participant Dispatcher as "RpcDispatcher"
participant RecvFn as "recv_fn closure"
participant ReadyChan as "oneshot channel"
Caller->>ReadyChan: Create (ready_tx, ready_rx)
Caller->>Dispatcher: call(request, recv_fn)
Dispatcher-->>Caller: Returns encoder
Caller->>ReadyChan: .await on ready_rx
Note over RecvFn: Transport receives response
RecvFn->>RecvFn: RpcStreamEvent::Header
RecvFn->>RecvFn: Extract RpcResultStatus
RecvFn->>ReadyChan: ready_tx.send(Ok(()))
ReadyChan-->>Caller: Ok(())
Caller-->>Caller: Return (encoder, receiver)
Readiness Signaling
The call_rpc_streaming method uses a oneshot channel to signal when the RPC stream is ready to be consumed. This ensures the caller doesn't begin processing until the header has been received and the RpcResultStatus is known.
Signaling Mechanism
Signaling on Error
If an error occurs before receiving the header (e.g., RpcStreamEvent::Error), the readiness channel is signaled with Err(io::Error) instead of Ok(()).
Implementation Details
The readiness sender is stored in Arc<StdMutex<Option<oneshot::Sender>>> and taken using mem::take() when signaling to ensure it's only used once. The recv_fn closure acquires this mutex synchronously with .lock().unwrap().
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:78-80 extensions/muxio-rpc-service-caller/src/caller_interface.rs:127-134 extensions/muxio-rpc-service-caller/src/caller_interface.rs:332-348
Complete Streaming RPC Flow
End-to-End Sequence
Synchronization Points
- Channel Creation: DynamicSender and DynamicReceiver are created synchronously in call_rpc_streaming
- Dispatcher Registration: RpcDispatcher::call() registers the request and creates the RpcStreamEncoder
- Readiness Await: call_rpc_streaming blocks on ready_rx.await until the header is received
- Header Processing: the first RpcStreamEvent::Header unblocks the caller
- Chunk Processing: each RpcStreamEvent::PayloadChunk flows through the channel
- Stream Termination: RpcStreamEvent::End closes the channel
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:33-349
Integration with Transport Implementations
Tokio RPC Client Usage
The RpcClient struct in muxio-tokio-rpc-client implements RpcServiceCallerInterface, providing the transport-specific get_emit_fn() that sends binary data over the WebSocket connection.
When streaming RPC is used:
- call_rpc_streaming() creates the channels and registers with the dispatcher
- get_emit_fn() sends initial request frames via tx.send(WsMessage::Binary(chunk))
- The receive loop processes incoming WebSocket binary messages
- endpoint.read_bytes() is called on received bytes, which dispatches to recv_fn
- recv_fn forwards chunks to the DynamicSender, which the application receives via the DynamicReceiver
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:158-178 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:289-313
Connection State Impact
If the client disconnects during streaming:
- is_connected() returns false
- Subsequent call_rpc_streaming() attempts fail immediately with ConnectionAborted
- Pending streams receive RpcStreamEvent::Error from the dispatcher's fail_all_pending_requests()
- Transport errors propagate through the DynamicReceiver as RpcServiceError::Transport
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:99-108 extensions/muxio-rpc-service-caller/src/caller_interface.rs:44-53
Testing Patterns
Mock Client Testing
The dynamic channel mechanism can be tested by creating mock implementations of RpcServiceCallerInterface, as demonstrated in dynamic_channel_tests.rs (referenced below).
Sources: extensions/muxio-rpc-service-caller/tests/dynamic_channel_tests.rs:101-167
Integration Testing
Full integration tests with real client/server validate streaming across the WebSocket transport, testing scenarios like:
- Large payloads chunked correctly
- Bounded channel backpressure
- Early disconnect cancels pending streams
- Error status codes propagate correctly
Sources: extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:168-292
Transport Implementations
Relevant source files
Purpose and Scope
This page provides an overview of the concrete transport implementations that connect the abstract RPC framework to actual network protocols. These implementations serve as the bridge between application code and the underlying communication layer, enabling RPC calls to be transmitted over real network connections.
For details on the RPC abstraction layer these transports implement, see RPC Framework. For guidance on creating custom transports, see Custom Transport Implementation.
Overview
The rust-muxio system provides three production-ready transport implementations, each targeting different deployment environments while sharing the same core RPC abstractions. All three implementations use WebSocket as the underlying protocol and implement the RpcServiceCallerInterface trait for client-side operations.
| Transport | Environment | Key Dependencies | Use Cases |
|---|---|---|---|
| muxio-tokio-rpc-server | Native Rust (Tokio runtime) | axum, tokio-tungstenite | Production servers, CLI applications |
| muxio-tokio-rpc-client | Native Rust (Tokio runtime) | tokio, tokio-tungstenite | Native clients, integration tests |
| muxio-wasm-rpc-client | WebAssembly (browser) | wasm-bindgen, js-sys | Web applications, browser extensions |
All transport implementations are located in the extensions/ directory, following the workspace structure defined in Cargo.toml:19-31
Sources: Cargo.toml:19-31 README.md:38-39 Cargo.lock:897-954
Transport Layer Architecture
The following diagram illustrates how transport implementations integrate with the RPC abstraction layer and the muxio core:
Sources: Cargo.toml:39-47 Cargo.lock:897-954 README.md:38-51
graph TB
subgraph "Application Layer"
APP["Application Code\nService Methods"]
end
subgraph "RPC Abstraction Layer"
CALLER["RpcServiceCallerInterface\nClient-side trait"]
ENDPOINT["RpcServiceEndpointInterface\nServer-side trait"]
SERVICE["RpcMethodPrebuffered\nService definitions"]
end
subgraph "Transport Implementations"
TOKIO_SERVER["muxio-tokio-rpc-server\nRpcServer struct"]
TOKIO_CLIENT["muxio-tokio-rpc-client\nRpcClient struct"]
WASM_CLIENT["muxio-wasm-rpc-client\nRpcWasmClient struct"]
end
subgraph "Core Layer"
DISPATCHER["RpcDispatcher\nRequest correlation"]
FRAMING["Binary Framing Protocol\nStream multiplexing"]
end
subgraph "Network Layer"
WS_SERVER["tokio_tungstenite\nWebSocket server"]
WS_CLIENT_NATIVE["tokio_tungstenite\nWebSocket client"]
WS_CLIENT_WASM["Browser WebSocket API\nvia wasm_bindgen"]
end
APP --> SERVICE
SERVICE --> CALLER
SERVICE --> ENDPOINT
CALLER --> TOKIO_CLIENT
CALLER --> WASM_CLIENT
ENDPOINT --> TOKIO_SERVER
TOKIO_SERVER --> DISPATCHER
TOKIO_CLIENT --> DISPATCHER
WASM_CLIENT --> DISPATCHER
DISPATCHER --> FRAMING
TOKIO_SERVER --> WS_SERVER
TOKIO_CLIENT --> WS_CLIENT_NATIVE
WASM_CLIENT --> WS_CLIENT_WASM
FRAMING --> WS_SERVER
FRAMING --> WS_CLIENT_NATIVE
FRAMING --> WS_CLIENT_WASM
Tokio-Based Transports
RpcServer
The RpcServer struct provides the server-side transport implementation for Tokio environments. It combines Axum's WebSocket handling with the RPC endpoint interface to accept incoming connections and dispatch RPC calls to registered handlers.
Key characteristics:
- Built on the axum framework with WebSocket support
- Uses tokio-tungstenite for the WebSocket protocol implementation
- Provides a serve_with_listener() method for integration with existing TCP listeners
- Implements RpcServiceEndpointInterface for handler registration
Sources: Cargo.lock:917-933 Cargo.toml:46 README.md:38
RpcClient
The RpcClient struct provides the client-side transport implementation for Tokio environments. It establishes WebSocket connections to servers and implements the caller interface for making RPC requests.
Key characteristics:
- Direct WebSocket connection using tokio-tungstenite
- Implements RpcServiceCallerInterface for type-safe RPC calls
- Provides state change callbacks via set_state_change_handler()
- Supports both prebuffered and streaming RPC methods
Sources: Cargo.lock:898-916 Cargo.toml:47 README.md:136-151
WASM Transport
RpcWasmClient
The RpcWasmClient struct enables RPC communication from WebAssembly environments by bridging Rust code with JavaScript's WebSocket API through wasm-bindgen.
Key characteristics:
- Compiles to the WebAssembly target wasm32-unknown-unknown
- Uses wasm-bindgen to interface with the browser WebSocket API
- Implements the same RpcServiceCallerInterface as native clients
- No direct dependency on the Tokio runtime
Sources: Cargo.lock:934-954 Cargo.toml:28 README.md:39 README.md:51
WebSocket Protocol Selection
All transport implementations use WebSocket as the underlying protocol for several reasons:
| Criterion | Rationale |
|---|---|
| Binary support | Native support for binary frames aligns with muxio's binary framing protocol |
| Bidirectional | Full-duplex communication enables server-initiated messages and streaming |
| Browser compatibility | Widely supported in all modern browsers via standard JavaScript API |
| Connection persistence | Single long-lived connection reduces overhead of multiple HTTP requests |
| Framing built-in | WebSocket's message framing complements muxio's multiplexing layer |
WebSocket messages carry the binary-serialized RPC frames defined by the muxio core protocol. The transport layer is responsible for:
- Establishing and maintaining WebSocket connections
- Converting between WebSocket binary messages and byte slices
- Handling connection lifecycle events (connect, disconnect, errors)
- Providing state change notifications to application code
Sources: Cargo.lock:1446-1455 Cargo.lock:1565-1580 README.md:32
stateDiagram-v2
[*] --> Disconnected
Disconnected --> Connecting : new() / connect()
Connecting --> Connected : WebSocket handshake success
Connecting --> Disconnected : Connection failure
Connected --> Disconnected : Network error
Connected --> Disconnected : Server closes connection
Connected --> Disconnected : Client disconnect()
Disconnected --> [*]
note right of Connected
RpcTransportState enum
- Disconnected
- Connecting
- Connected
end note
Transport State Management
All client transports implement a state machine to track connection status. The state transitions are exposed to application code through callback handlers.
The RpcTransportState enum defines the possible connection states. Applications can register state change handlers using the set_state_change_handler() method available on client implementations:
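For example, on a client implementation such as RpcClient (a sketch; the exact closure and setter signatures may differ):

```rust
// Sketch: react to transport state transitions reported by the client.
client.set_state_change_handler(|state: RpcTransportState| match state {
    RpcTransportState::Connected => println!("transport connected"),
    RpcTransportState::Connecting => println!("transport connecting"),
    RpcTransportState::Disconnected => println!("transport disconnected"),
});
```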
This callback mechanism enables applications to:
- Display connection status in UI
- Implement automatic reconnection logic
- Queue requests while connecting
- Handle connection failures gracefully
Sources: README.md:138-141 README.md:75
graph TD
subgraph "Tokio Server Stack"
TOKIO_SRV["muxio-tokio-rpc-server"]
AXUM["axum\nv0.8.4"]
TOKIO_1["tokio\nv1.45.1"]
TUNGSTENITE_1["tokio-tungstenite\nv0.26.2"]
end
subgraph "Tokio Client Stack"
TOKIO_CLI["muxio-tokio-rpc-client"]
TOKIO_2["tokio\nv1.45.1"]
TUNGSTENITE_2["tokio-tungstenite\nv0.26.2"]
end
subgraph "WASM Client Stack"
WASM_CLI["muxio-wasm-rpc-client"]
WASM_BINDGEN["wasm-bindgen\nv0.2.100"]
JS_SYS_DEP["js-sys\nv0.3.77"]
WASM_FUTURES["wasm-bindgen-futures\nv0.4.50"]
end
subgraph "Shared RPC Layer"
RPC_SERVICE["muxio-rpc-service"]
RPC_CALLER["muxio-rpc-service-caller"]
RPC_ENDPOINT["muxio-rpc-service-endpoint"]
end
subgraph "Core"
MUXIO_CORE["muxio"]
end
TOKIO_SRV --> AXUM
TOKIO_SRV --> TOKIO_1
TOKIO_SRV --> TUNGSTENITE_1
TOKIO_SRV --> RPC_ENDPOINT
TOKIO_CLI --> TOKIO_2
TOKIO_CLI --> TUNGSTENITE_2
TOKIO_CLI --> RPC_CALLER
WASM_CLI --> WASM_BINDGEN
WASM_CLI --> JS_SYS_DEP
WASM_CLI --> WASM_FUTURES
WASM_CLI --> RPC_CALLER
RPC_ENDPOINT --> RPC_SERVICE
RPC_CALLER --> RPC_SERVICE
RPC_SERVICE --> MUXIO_CORE
Dependency Graph
The following diagram shows the concrete dependency relationships between transport implementations and their supporting crates:
Sources: Cargo.lock:917-933 Cargo.lock:898-916 Cargo.lock:934-954 Cargo.toml:39-64
Cross-Platform Service Definition Sharing
A key design principle is that all transport implementations can consume the same service definitions. This is achieved through the RpcMethodPrebuffered trait, which defines methods with compile-time generated method IDs and encoding/decoding logic.
| Component | Role | Shared Across Transports |
|---|---|---|
| RpcMethodPrebuffered trait | Defines RPC method signature | ✓ Yes |
| encode_request() / decode_request() | Parameter serialization | ✓ Yes |
| encode_response() / decode_response() | Result serialization | ✓ Yes |
| METHOD_ID constant | Compile-time hash of method name | ✓ Yes |
| Transport connection logic | WebSocket handling | ✗ No (platform-specific) |
Example service definition usage from README.md:144-151:
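The README snippet is not reproduced here; the pattern looks roughly like the following sketch, which reuses the Add example from earlier pages (the helper function name is hypothetical):

```rust
// Sketch: the same typed call works against any transport whose client
// implements RpcServiceCallerInterface.
async fn add_numbers<C>(client: &C) -> Result<f64, RpcServiceError>
where
    C: RpcServiceCallerInterface + Send + Sync,
{
    // `Add` comes from the shared service definition crate.
    Add::call(client, vec![1.0, 2.0, 3.0]).await
}
```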
The same service definitions work identically with RpcClient (Tokio), RpcWasmClient (WASM), and any future transport implementations that implement RpcServiceCallerInterface.
Sources: README.md:47-49 README.md:69-73 README.md:144-151 Cargo.toml:42
Implementation Selection Guidelines
Choose the appropriate transport implementation based on your deployment target:
Use muxio-tokio-rpc-server when:
- Building server-side applications
- Need to handle multiple concurrent client connections
- Require integration with existing Tokio/Axum infrastructure
- Operating in native Rust environments
Use muxio-tokio-rpc-client when:
- Building native client applications (CLI tools, desktop apps)
- Writing integration tests for server implementations
- Need Tokio's async runtime features
- Operating in native Rust environments
Use muxio-wasm-rpc-client when:
- Building web applications that run in browsers
- Creating browser extensions
- Need to communicate with servers from JavaScript contexts
- Targeting the wasm32-unknown-unknown platform
For detailed usage examples of each transport, refer to the subsections Tokio RPC Server, Tokio RPC Client, and WASM RPC Client.
Sources: README.md:38-51 Cargo.toml:19-31
Tokio RPC Server
Relevant source files
- Cargo.lock
- extensions/muxio-rpc-service-endpoint/Cargo.toml
- extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs
- extensions/muxio-tokio-rpc-server/Cargo.toml
Purpose and Scope
The muxio-tokio-rpc-server crate provides a concrete, production-ready WebSocket RPC server implementation built on the Tokio async runtime and the Axum web framework. This crate bridges the transport-agnostic RpcServiceEndpointInterface (documented in 4.3) with real-world network transport over WebSocket connections.
For client-side connection logic, see 5.2 Tokio RPC Client. For cross-platform WASM client support, see 5.3 WASM RPC Client. For details on connection state tracking and disconnection handling, see 5.4 Transport State Management.
Sources: extensions/muxio-tokio-rpc-server/Cargo.toml:1-23
System Architecture
The Tokio RPC Server sits at the intersection of three major subsystems: the Axum HTTP/WebSocket framework, the muxio RPC endpoint abstraction layer, and the Tokio async runtime. It translates between the generic RpcServiceEndpointInterface and the specific requirements of WebSocket binary message transport.
Diagram: Tokio RPC Server Component Architecture
graph TB
subgraph "Application Layer"
APP["Application Code"]
HANDLERS["RPC Method Handlers\n(user-defined async functions)"]
end
subgraph "muxio-tokio-rpc-server"
SERVER["TokioRpcServer"]
WS_HANDLER["WebSocket Handler\n(Axum route)"]
ENDPOINT_IMPL["RpcServiceEndpointInterface\nImplementation"]
CONN_MGR["Connection Manager\n(per-connection state)"]
end
subgraph "RPC Abstraction Layer"
ENDPOINT_TRAIT["RpcServiceEndpointInterface\n(trait)"]
DISPATCHER["RpcDispatcher"]
HANDLERS_LOCK["Handler Registry"]
end
subgraph "Network Transport"
AXUM["Axum Web Framework"]
WS["tokio-tungstenite\nWebSocket Protocol"]
TOKIO["Tokio Async Runtime"]
end
APP --> SERVER
APP --> HANDLERS
HANDLERS --> SERVER
SERVER --> WS_HANDLER
SERVER --> ENDPOINT_IMPL
WS_HANDLER --> AXUM
AXUM --> WS
ENDPOINT_IMPL -.implements.-> ENDPOINT_TRAIT
ENDPOINT_IMPL --> DISPATCHER
ENDPOINT_IMPL --> HANDLERS_LOCK
WS_HANDLER --> CONN_MGR
CONN_MGR --> ENDPOINT_IMPL
WS --> TOKIO
ENDPOINT_IMPL --> TOKIO
The server operates in three layers:
- Application Layer : User code registers RPC handlers and starts the server
- Server Implementation : Manages WebSocket connections and implements the endpoint interface
- Transport Layer : Axum provides HTTP routing, tokio-tungstenite handles WebSocket framing, Tokio executes async tasks
Sources: extensions/muxio-tokio-rpc-server/Cargo.toml:12-22 Cargo.lock:918-932
Core Components
Server Structure
The TokioRpcServer serves as the main entry point and coordinates all subsystems:
| Component | Purpose | Key Responsibilities |
|---|---|---|
| TokioRpcServer | Main server instance | Server lifecycle, handler registration, Axum router configuration |
| WebSocket Handler | Axum route handler | Connection upgrade, per-connection spawning, binary message loop |
| Endpoint Implementation | RpcServiceEndpointInterface impl | Handler lookup, request dispatching, response encoding |
| Handler Registry | Shared state | Thread-safe storage of registered RPC method handlers |
Sources: extensions/muxio-tokio-rpc-server/Cargo.toml:12-17
Dependency Integration
Diagram: Key Dependencies and Their Relationships
Sources: extensions/muxio-tokio-rpc-server/Cargo.toml:12-22 Cargo.lock:918-932
sequenceDiagram
participant Client
participant AxumRouter as "Axum Router"
participant WSHandler as "WebSocket Handler"
participant Upgrade as "WebSocket Upgrade"
participant ConnTask as "Connection Task\n(spawned)"
participant Dispatcher as "RpcDispatcher"
participant Endpoint as "Endpoint Interface"
Client->>AxumRouter: HTTP GET /ws
AxumRouter->>WSHandler: Route match
WSHandler->>Upgrade: Upgrade to WebSocket
Upgrade->>Client: 101 Switching Protocols
Upgrade->>WSHandler: WebSocket stream
WSHandler->>ConnTask: tokio::spawn
ConnTask->>Dispatcher: Create new instance
loop Message Loop
Client->>ConnTask: Binary WebSocket Frame
ConnTask->>Dispatcher: read_bytes(frame)
Dispatcher->>Endpoint: Decode + identify requests
Endpoint->>Endpoint: Execute handlers
Endpoint->>Dispatcher: Encode responses
Dispatcher->>ConnTask: Emit response bytes
ConnTask->>Client: Binary WebSocket Frame
end
alt Client Disconnect
Client->>ConnTask: Close frame
ConnTask->>Dispatcher: Cleanup
ConnTask->>ConnTask: Task exits
end
alt Server Shutdown
WSHandler->>ConnTask: Shutdown signal
ConnTask->>Client: Close frame
ConnTask->>Dispatcher: Cleanup
ConnTask->>ConnTask: Task exits
end
Connection Lifecycle
Each WebSocket connection follows a well-defined lifecycle managed by the server:
Diagram: WebSocket Connection Lifecycle
Connection States
| State | Description | Transitions |
|---|---|---|
| Upgrade | HTTP connection being upgraded to WebSocket | → Connected |
| Connected | Active WebSocket connection processing messages | → Disconnecting, → Error |
| Disconnecting | Graceful shutdown in progress, flushing pending responses | → Disconnected |
| Disconnected | Connection closed, resources cleaned up | Terminal state |
| Error | Abnormal termination due to protocol or transport error | → Disconnected |
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:66-138
RPC Endpoint Implementation
The server implements RpcServiceEndpointInterface<C> where C is the per-connection context type. This implementation provides the bridge between WebSocket binary frames and the RPC protocol layer.
Handler Registration
The register_prebuffered method allows applications to register RPC method handlers at runtime:
Diagram: Handler Registration Flow
graph LR
APP["Application Code"]
SERVER["TokioRpcServer"]
ENDPOINT["Endpoint Implementation"]
LOCK["Handler Registry\n(Arc<Mutex<HashMap>>)"]
APP -->|register_prebuffered method_id, handler| SERVER
SERVER -->|delegate| ENDPOINT
ENDPOINT -->|lock handlers| LOCK
LOCK -->|insert| LOCK
LOCK -->|check duplicates| LOCK
Key characteristics:
- Thread-safe: uses Arc and async-aware locking (tokio::sync::RwLock or similar)
- Type-safe: handlers accept Vec<u8> input and return Result<Vec<u8>, Box<dyn Error>>
- Duplicate detection: returns RpcServiceEndpointError::Handler if the method ID is already registered
- Runtime registration: handlers can be added after server start (though typically done during initialization)
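A registration sketch based on the characteristics above, assuming an `endpoint` handle and the Add service definition (the handler closure shape and whether the registration call is async may differ from the actual API):

```rust
// Sketch: register a prebuffered handler for Add on the server endpoint.
// The handler receives the connection context and the raw request bytes,
// and returns the encoded response bytes.
endpoint
    .register_prebuffered(Add::METHOD_ID, |_ctx, request_bytes: Vec<u8>| async move {
        let inputs = Add::decode_request(&request_bytes)?;
        let sum: f64 = inputs.iter().sum();
        Ok(Add::encode_response(sum)?)
    })
    .await?;
```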
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:35-64 extensions/muxio-rpc-service-endpoint/Cargo.toml:21-22
flowchart TD
START["read_bytes(dispatcher, context, bytes, on_emit)"]
subgraph Stage1["Stage 1: Decode & Identify (Sync)"]
DECODE["dispatcher.read_bytes(bytes)"]
CHECK["Check which requests finalized"]
EXTRACT["Extract finalized requests\nfrom dispatcher"]
end
subgraph Stage2["Stage 2: Execute Handlers (Async)"]
LOOKUP["Look up handler by method_id"]
SPAWN["Spawn handler futures"]
AWAIT["join_all(futures)"]
end
subgraph Stage3["Stage 3: Encode & Emit (Sync)"]
ENCODE["dispatcher.respond(response)"]
CHUNK["Chunk large payloads"]
EMIT["on_emit(bytes)"]
end
START --> DECODE
DECODE --> CHECK
CHECK --> EXTRACT
EXTRACT -->|for each request| LOOKUP
LOOKUP --> SPAWN
SPAWN --> AWAIT
AWAIT -->|for each response| ENCODE
ENCODE --> CHUNK
CHUNK --> EMIT
EMIT --> END["Return Ok()"]
Request Processing Pipeline
The read_bytes method implements the three-stage request processing pipeline:
Diagram: Request Processing Pipeline in read_bytes
Stage details:
- Decode & Identify (extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:78-97)
  - Synchronously processes raw bytes from the WebSocket
  - Updates RpcDispatcher internal state
  - Collects fully-received requests ready for processing
  - No blocking I/O or async operations
- Execute Handlers (extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:99-125)
  - Spawns async handler futures for each request
  - Handlers execute concurrently via join_all
  - Each handler receives cloned context and request bytes
  - Returns a vector of responses in the same order as the requests
- Encode & Emit (extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:127-136)
  - Synchronously encodes responses into the RPC protocol format
  - Chunks large payloads (respects DEFAULT_SERVICE_MAX_CHUNK_SIZE)
  - Emits bytes via the provided callback (typically writes to the WebSocket)
  - Updates dispatcher state for correlation tracking
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:66-138
sequenceDiagram
participant WS as "WebSocket Stream"
participant Task as "Connection Task"
participant Dispatcher as "RpcDispatcher"
participant Endpoint as "Endpoint::read_bytes"
participant Emit as "on_emit Closure"
loop Until Disconnection
WS->>Task: next() -> Some(Message::Binary)
Task->>Task: Extract bytes from message
Task->>Endpoint: read_bytes(dispatcher, ctx, bytes, emit)
Note over Endpoint: Stage 1: Decode
Endpoint->>Dispatcher: dispatcher.read_bytes(bytes)
Dispatcher-->>Endpoint: Vec<request_id>
Note over Endpoint: Stage 2: Execute
Endpoint->>Endpoint: Spawn handler futures
Endpoint->>Endpoint: join_all(futures).await
Note over Endpoint: Stage 3: Encode
Endpoint->>Dispatcher: dispatcher.respond(response)
Dispatcher->>Emit: on_emit(response_bytes)
Emit->>Task: Collect bytes to send
Endpoint-->>Task: Ok(())
Task->>WS: send(Message::Binary(response_bytes))
end
alt WebSocket Close
WS->>Task: next() -> Some(Message::Close)
Task->>Task: Break loop
end
alt WebSocket Error
WS->>Task: next() -> Some(Message::Error)
Task->>Task: Log error, break loop
end
Task->>Dispatcher: Drop (cleanup)
Task->>Task: Task exits
WebSocket Message Loop
Each spawned connection task runs a continuous message loop that bridges WebSocket frames to RPC processing:
Diagram: WebSocket Message Loop Detail
Key implementation aspects:
| Aspect | Implementation Detail |
|---|---|
| Message Type Handling | Only Message::Binary frames processed; text/ping/pong handled separately or ignored |
| Backpressure | Async send() naturally applies backpressure when client slow to receive |
| Error Handling | WebSocket errors logged via tracing, connection terminated gracefully |
| Emit Callback | Closure captures WebSocket sink, queues bytes for batched sending |
| Dispatcher Lifetime | One RpcDispatcher per connection, dropped on task exit |
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:66-138
Integration with Axum
The server integrates with Axum's routing and middleware system:
Diagram: Axum Integration Pattern
Integration points:
- Router Construction: the server provides a method to convert into an axum::Router
- WebSocket Route: typically mounted at /ws or a configurable path
- Handler Function: accepts the WebSocketUpgrade extractor from Axum
- Upgrade Callback: receives the upgraded socket and spawns a per-connection task
- Middleware Compatibility: works with standard Tower middleware (CORS, auth, logging)
Sources: extensions/muxio-tokio-rpc-server/Cargo.toml:12 Cargo.lock:80-114
classDiagram
class Context {
<<trait bounds>>
+Send
+Sync
+Clone
+'static
}
class ConnectionId {+u64 id\n+SocketAddr peer_addr\n+Instant connected_at}
class AppState {+Arc~SharedData~ shared\n+Metrics metrics\n+AuthManager auth}
Context <|-- ConnectionId : implements
Context <|-- AppState : implements
note for Context "Any type implementing these bounds\ncan be used as connection context"
Context and State Management
The server supports per-connection context via the generic C type parameter in RpcServiceEndpointInterface<C>:
Context Type Requirements
Diagram: Context Type Requirements
Common context patterns:
| Pattern | Use Case | Example Fields |
|---|---|---|
| Connection Metadata | Track connection identity and timing | connection_id, peer_addr, connected_at |
| Authentication State | Store authenticated user information | user_id, session_token, permissions |
| Application State | Share server-wide resources | Arc<Database>, Arc<ConfigManager> |
| Request Context | Per-request metadata | trace_id, request_start, client_version |
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:9-13
Handler Execution Model
Handlers execute asynchronously with Tokio runtime integration:
Diagram: Handler Execution Flow
Execution characteristics:
| Characteristic | Behavior |
|---|---|
| Concurrency | Multiple handlers execute concurrently via join_all for requests in same batch |
| Isolation | Each handler receives cloned context, preventing shared mutable state issues |
| Cancellation | If connection drops, futures are dropped (handlers should handle cancellation gracefully) |
| Error Handling | Handler errors converted to RPC error responses, connection remains active |
| Backpressure | WebSocket send backpressure naturally limits handler spawn rate |
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:35-64 extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:99-125
Error Handling
The server handles errors at multiple layers:
Error Types and Propagation
| Error Source | Error Type | Handling Strategy |
|---|---|---|
| WebSocket Protocol | tungstenite::Error | Log via tracing, close connection gracefully |
| RPC Dispatcher | muxio::RpcError | Convert to RPC error response, send to client |
| Handler Execution | Box<dyn Error + Send + Sync> | Encode as RPC error response, log details |
| Endpoint Registration | RpcServiceEndpointError | Return to caller during setup, prevent server start |
| Serialization | bitcode::Error | Treat as invalid request, send error response |
Error Response Flow
Diagram: Error Response Flow
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:32-34 extensions/muxio-tokio-rpc-server/Cargo.toml:22
Performance Considerations
The Tokio RPC Server implementation includes several optimizations:
Message Batching
Diagram: Request Batching for Efficiency
Optimization Strategies
| Strategy | Implementation | Benefit |
|---|---|---|
| Zero-copy when possible | Uses bytes::Bytes for reference-counted buffers | Reduces memory allocations for large payloads |
| Concurrent handler execution | join_all spawns all handlers in batch | Maximizes CPU utilization for independent requests |
| Chunked responses | Large responses split per DEFAULT_SERVICE_MAX_CHUNK_SIZE | Prevents memory spikes, enables streaming-like behavior |
| Connection pooling | Each connection has dedicated task, no contention | Scales to thousands of concurrent connections |
| Async I/O | All I/O operations are async via Tokio | Efficient use of OS threads, high concurrency |
Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:99-125 extensions/muxio-tokio-rpc-server/Cargo.toml:13-14
Logging and Observability
The server integrates with the tracing ecosystem for structured logging:
Instrumentation Points
Diagram: Tracing Instrumentation Hierarchy
Key logging capabilities:
- Connection lifecycle events with peer address
- Request/response message IDs for correlation
- Handler execution timing and errors
- WebSocket protocol errors
- Dispatcher state changes
Sources: extensions/muxio-tokio-rpc-server/Cargo.toml:22 extensions/muxio-rpc-service-endpoint/Cargo.toml:18
Example: Building a Server
Typical server construction and lifecycle:
Diagram: Server Initialization Sequence
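In the absence of the full diagram, the lifecycle can be summarized with a sketch. The serve_with_listener() and register_prebuffered() names follow this page; the constructor and any accessor names are assumptions, not the literal API.

```rust
// Sketch of a typical server lifecycle; constructor and accessor names are
// assumptions, not the literal API.
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Bind a TCP listener and construct the server.
    let listener = tokio::net::TcpListener::bind("127.0.0.1:8080").await?;
    let server = TokioRpcServer::new();

    // Register RPC handlers here, as shown under "Handler Registration".

    // Accept and serve WebSocket connections until shutdown.
    server.serve_with_listener(listener).await?;
    Ok(())
}
```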
Sources: extensions/muxio-tokio-rpc-server/Cargo.toml:1-23 extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:35-64
Tokio RPC Client
Relevant source files
- Cargo.lock
- extensions/muxio-rpc-service-caller/src/lib.rs
- extensions/muxio-rpc-service-caller/tests/dynamic_channel_tests.rs
- extensions/muxio-tokio-rpc-client/Cargo.toml
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs
- extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs
Purpose and Scope
This page documents the muxio-tokio-rpc-client crate, which provides a production-ready WebSocket client implementation for native Rust applications using the Tokio async runtime. This client connects to servers running muxio-tokio-rpc-server and enables type-safe RPC communication through shared service definitions.
Related Pages:
- For server-side implementation, see Tokio RPC Server
- For browser-based client implementation, see WASM RPC Client
- For the underlying RPC service caller abstraction, see Service Caller Interface
- For transport state management concepts, see Transport State Management
Architecture Overview
The RpcClient is a fully-featured WebSocket client that manages connection lifecycle, message routing, and RPC request/response correlation. It implements the RpcServiceCallerInterface trait, providing the same API as the WASM client while using native Tokio primitives.
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:25-32 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:131-267
graph TB
subgraph "Client Application"
APP["Application Code"]
end
subgraph "RpcClient Structure"
CLIENT["RpcClient"]
DISPATCHER["Arc<TokioMutex<RpcDispatcher>>"]
ENDPOINT["Arc<RpcServiceEndpoint<()>>"]
TX["mpsc::UnboundedSender<WsMessage>"]
STATE_HANDLER["RpcTransportStateChangeHandler"]
IS_CONNECTED["Arc<AtomicBool>"]
end
subgraph "Background Tasks"
HEARTBEAT["Heartbeat Task\n(JoinHandle)"]
RECV_LOOP["Receive Loop\n(JoinHandle)"]
SEND_LOOP["Send Loop\n(JoinHandle)"]
end
subgraph "WebSocket Connection"
WS_SENDER["ws_sender\n(SplitSink)"]
WS_RECEIVER["ws_receiver\n(SplitStream)"]
WS_STREAM["tokio_tungstenite::WebSocketStream"]
end
APP --> CLIENT
CLIENT --> DISPATCHER
CLIENT --> ENDPOINT
CLIENT --> TX
CLIENT --> STATE_HANDLER
CLIENT --> IS_CONNECTED
CLIENT --> HEARTBEAT
CLIENT --> RECV_LOOP
CLIENT --> SEND_LOOP
HEARTBEAT --> TX
TX --> SEND_LOOP
SEND_LOOP --> WS_SENDER
WS_RECEIVER --> RECV_LOOP
RECV_LOOP --> DISPATCHER
RECV_LOOP --> ENDPOINT
WS_SENDER --> WS_STREAM
WS_RECEIVER --> WS_STREAM
Core Components
RpcClient Structure
The RpcClient struct is the primary type exposed by this crate. It owns all connection resources and background tasks.
| Field | Type | Purpose |
|---|---|---|
| dispatcher | Arc<TokioMutex<RpcDispatcher<'static>>> | Manages RPC request correlation and frame encoding/decoding |
| endpoint | Arc<RpcServiceEndpoint<()>> | Handles server-to-client RPC calls (bidirectional support) |
| tx | mpsc::UnboundedSender<WsMessage> | Channel for sending messages to WebSocket |
| state_change_handler | RpcTransportStateChangeHandler | Optional callback for connection state changes |
| is_connected | Arc<AtomicBool> | Atomic flag tracking connection status |
| task_handles | Vec<JoinHandle<()>> | Handles for background tasks (aborted on drop) |
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:25-32
RpcServiceCallerInterface Implementation
The client implements the RpcServiceCallerInterface trait, which defines the contract for making RPC calls.
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:278-335
graph LR
subgraph "Trait Methods"
GET_DISPATCHER["get_dispatcher()"]
IS_CONNECTED["is_connected()"]
GET_EMIT["get_emit_fn()"]
SET_HANDLER["set_state_change_handler()"]
end
subgraph "RpcClient Implementation"
RETURN_DISP["Returns Arc<TokioMutex<RpcDispatcher>>"]
CHECK_FLAG["Checks Arc<AtomicBool>"]
CREATE_CLOSURE["Creates closure wrapping mpsc::Sender"]
STORE_CB["Stores callback in Arc<StdMutex>"]
end
GET_DISPATCHER --> RETURN_DISP
IS_CONNECTED --> CHECK_FLAG
GET_EMIT --> CREATE_CLOSURE
SET_HANDLER --> STORE_CB
Connection Lifecycle
Connection Establishment
The RpcClient::new() function establishes a WebSocket connection and spawns background tasks using Arc::new_cyclic() to enable weak references.
Connection URL Construction:
sequenceDiagram
participant App as "Application"
participant New as "RpcClient::new()"
participant WS as "tokio_tungstenite"
participant Tasks as "Background Tasks"
App->>New: new(host, port)
New->>New: construct websocket_url
New->>WS: connect_async(url)
WS-->>New: WebSocketStream + Response
New->>New: split stream into sender/receiver
New->>New: create mpsc channel
New->>New: Arc::new_cyclic(closure)
New->>Tasks: spawn heartbeat task
New->>Tasks: spawn receive loop
New->>Tasks: spawn send loop
New-->>App: Arc<RpcClient>
- If host parses as an IpAddr: ws://{ip}:{port}/ws
- Otherwise: ws://{host}:{port}/ws
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:110-271
Background Tasks
The client spawns three concurrent tasks that run for the lifetime of the connection:
1. Heartbeat Task
Sends periodic WebSocket ping frames to keep the connection alive and detect disconnections.
- Interval: 1 second
- Payload: Empty ping message
- Exit condition: Channel send failure
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:139-154
graph TB
START["ws_receiver.next().await"]
MATCH["Match message type"]
BINARY["WsMessage::Binary"]
PING["WsMessage::Ping"]
OTHER["Other messages"]
ERROR["Err(e)"]
LOCK_DISP["Lock dispatcher"]
READ_BYTES["endpoint.read_bytes()"]
SEND_PONG["Send pong response"]
LOG["Log message"]
SHUTDOWN["Call shutdown_async()"]
START --> MATCH
MATCH --> BINARY
MATCH --> PING
MATCH --> OTHER
MATCH --> ERROR
BINARY --> LOCK_DISP
LOCK_DISP --> READ_BYTES
PING --> SEND_PONG
OTHER --> LOG
ERROR --> SHUTDOWN
2. Receive Loop
Processes incoming WebSocket messages and routes them to the appropriate handlers.
Message Handling:
- Binary: decoded by RpcDispatcher and processed by RpcServiceEndpoint
- Ping: automatically responds with Pong
- Pong: logged (heartbeat responses)
- Error: triggers shutdown_async()
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:156-222
3. Send Loop
Drains the internal MPSC channel and transmits messages over the WebSocket.
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:224-257
Connection Shutdown
Shutdown can occur synchronously (on Drop) or asynchronously (on connection errors).
Synchronous Shutdown (shutdown_sync)
Called from Drop implementation. Uses swap to ensure single execution.
Process:
- Swap the is_connected flag to false
- If previously true, invoke the state change handler with Disconnected
- Abort all background tasks
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:55-77 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:42-52
Asynchronous Shutdown (shutdown_async)
Called from background tasks on connection errors.
Process:
- Swap the is_connected flag to false
- If previously true, invoke the state change handler with Disconnected
- Acquire the dispatcher lock
- Call fail_all_pending_requests() with FrameDecodeError::ReadAfterCancel
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:79-108
Message Flow
Client-to-Server RPC Call
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:289-313
sequenceDiagram
participant WS as "WebSocket"
participant RecvLoop as "Receive Loop Task"
participant Dispatcher as "RpcDispatcher"
participant Endpoint as "RpcServiceEndpoint"
participant Handler as "Registered Handler"
WS->>RecvLoop: Binary message
RecvLoop->>Dispatcher: Lock dispatcher
RecvLoop->>Endpoint: read_bytes(dispatcher, (), bytes, on_emit)
Endpoint->>Dispatcher: Decode frames
Dispatcher->>Endpoint: Route by METHOD_ID
Endpoint->>Handler: Invoke handler
Handler-->>Endpoint: Response bytes
Endpoint->>RecvLoop: on_emit(response_chunk)
RecvLoop->>WS: Send response
Server-to-Client RPC Call
The client includes an RpcServiceEndpoint to handle bidirectional RPC (server calling client).
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:164-177 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:273-275
State Management
Connection State Tracking
The client uses Arc<AtomicBool> for lock-free state checking and Arc<StdMutex<Option<Box<dyn Fn>>>> for the state change callback.
| State | is_connected Value | Trigger |
|---|---|---|
| Connected | true | Successful connect_async() |
| Disconnected | false | WebSocket error, explicit shutdown, or drop |
State Transition Events:
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs30 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:284-286
graph LR
SET["set_state_change_handler(handler)"]
LOCK["Lock state_change_handler mutex"]
STORE["Store Box<dyn Fn> in Option"]
CHECK["Check is_connected flag"]
CALL_INIT["Call handler(Connected)"]
SET --> LOCK
LOCK --> STORE
STORE --> CHECK
CHECK -->|true| CALL_INIT
State Change Handler
Applications can register a callback to be notified of connection state changes.
Handler Registration:
Important: If the client is already connected when the handler is set, it immediately invokes the handler with RpcTransportState::Connected.
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:315-334
Error Handling and Disconnection
graph TB
ERROR["Connection Error Detected"]
SHUTDOWN["shutdown_async()
called"]
LOCK["Acquire dispatcher lock"]
FAIL["dispatcher.fail_all_pending_requests()"]
NOTIFY["Waiting futures receive error"]
ERROR --> SHUTDOWN
SHUTDOWN --> LOCK
LOCK --> FAIL
FAIL --> NOTIFY
Pending Request Cancellation
When the connection is lost, all pending RPC requests are failed with FrameDecodeError::ReadAfterCancel.
Cancellation Flow:
Error Propagation:
- ReadAfterCancel → RpcServiceError::TransportError → Application receives Err
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:100-103
Send Failure Handling
If the send loop cannot transmit a message, it triggers shutdown.
Failure Scenarios:
- WebSocket send returns error → shutdown_async()
- Channel closed (receiver dropped) → Loop exits
- is_connected flag is false → Message dropped, loop exits
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:231-253
Receive Failure Handling
If the receive loop encounters an error, it triggers shutdown and exits.
Failure Scenarios:
- WebSocket receive returns error → shutdown_async(), break loop
- Stream ends (None from next()) → shutdown_async(), exit loop
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:186-199 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:205-220
Usage Patterns
Basic Client Creation
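A minimal sketch of connecting (the host/port parameter types and the exact error type are assumptions; the constructor is fallible, as the connection-failure test below shows):

```rust
use std::sync::Arc;
use muxio_tokio_rpc_client::RpcClient;

async fn connect() -> Arc<RpcClient> {
    // Connects to ws://127.0.0.1:8080/ws, spawns the heartbeat/receive/send
    // tasks, and returns the shared client handle.
    RpcClient::new("127.0.0.1", 8080)
        .await
        .expect("failed to connect to server")
}
```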
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:110-271
Making RPC Calls
The client implements RpcServiceCallerInterface, enabling use with any RpcMethodPrebuffered implementation:
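For example, using the Add method from the example service definition (import paths are assumptions; the call pattern follows the integration tests):

```rust
// Paths are illustrative; adjust to your workspace layout.
use example_muxio_rpc_service_definition::prebuffered::Add;
use muxio_rpc_service_caller::prebuffered::RpcCallPrebuffered;
use muxio_tokio_rpc_client::RpcClient;

async fn add_example(client: &RpcClient) {
    // The shared definition supplies METHOD_ID, encoding, and decoding.
    let sum = Add::call(client, vec![1.0, 2.0, 3.0])
        .await
        .expect("RPC call failed");
    assert_eq!(sum, 6.0);
}
```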
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:278-335
Monitoring Connection State
Register a handler to track connection lifecycle:
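A hedged sketch of handler registration (trait and enum import paths are assumptions):

```rust
// Paths are illustrative; set_state_change_handler comes from the caller interface trait.
use muxio_rpc_service_caller::{RpcServiceCallerInterface, RpcTransportState};
use muxio_tokio_rpc_client::RpcClient;

async fn watch_state(client: &RpcClient) {
    client
        .set_state_change_handler(|state| match state {
            RpcTransportState::Connected => println!("transport connected"),
            RpcTransportState::Disconnected => println!("transport disconnected"),
        })
        .await;
}
```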
Handler Invocation:
- Called immediately with Connected if client is already connected
- Called with Disconnected on any connection loss
- Called from shutdown paths (both sync and async)
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:315-334 extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:36-165
Bidirectional RPC (Server-to-Client Calls)
The client can handle RPC calls initiated by the server:
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:273-275
Implementation Details
Weak Reference Pattern
The client uses Arc::new_cyclic to allow background tasks to hold Weak<RpcClient> references. This prevents reference cycles while enabling tasks to access the client.
Benefits:
- Tasks can access client methods without preventing cleanup
- Client can be dropped while tasks are running
- Tasks gracefully exit when client is dropped
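The pattern looks roughly like this (a simplified sketch: the client's fields are omitted and a single spawned task stands in for the three background tasks):

```rust
use std::sync::{Arc, Weak};

struct Client; // placeholder for RpcClient's fields

impl Client {
    fn new() -> Arc<Self> {
        Arc::new_cyclic(|weak: &Weak<Client>| {
            let weak_for_task = weak.clone();
            // The task holds only a Weak handle, so it never keeps the client alive.
            tokio::spawn(async move {
                // upgrade() returns None once the client has been dropped,
                // at which point the task exits instead of continuing.
                if let Some(_client) = weak_for_task.upgrade() {
                    // ... use the client
                }
            });
            Client
        })
    }
}
```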
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs131 extensions/muxio-tokio-rpc-client/src/rpc_client.rs157 extensions/muxio-tokio-rpc-client/src/rpc_client.rs225
Lock-Free is_connected Check
The is_connected flag uses AtomicBool with Ordering::Relaxed for reads and Ordering::SeqCst for writes, enabling fast connection status checks without mutexes.
Memory Ordering:
- Read: Relaxed - No synchronization needed; the flag is only a hint
- Write: SeqCst - Strong ordering for state transitions
- Swap: SeqCst - Ensures single shutdown execution
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs30 extensions/muxio-tokio-rpc-client/src/rpc_client.rs61 extensions/muxio-tokio-rpc-client/src/rpc_client.rs85 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:284-286
Debug Implementation
The Debug trait is manually implemented to avoid exposing closures and function pointers, showing only the connection state.
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:34-40
Testing
Connection Failure Tests
Validates error handling when connecting to non-existent servers.
Test: test_client_errors_on_connection_failure
- Attempts connection to unused port
- Asserts io::ErrorKind::ConnectionRefused
Sources: extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:17-31
State Change Handler Tests
Validates state change callbacks are invoked correctly during connection lifecycle.
Test: test_transport_state_change_handler
- Spawns minimal WebSocket server
- Registers state change handler
- Verifies Connected callback
- Server closes connection
- Verifies Disconnected callback
Sources: extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:36-165
Pending Request Cancellation Tests
Validates that in-flight RPC requests fail when connection is lost.
Test: test_pending_requests_fail_on_disconnect
- Spawns server that accepts but doesn't respond
- Initiates RPC call (becomes pending)
- Server closes connection
- Asserts RPC call fails with cancellation error
Sources: extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:169-292
Dynamic Channel Tests
Validates streaming RPC functionality with bounded and unbounded channels.
Tests:
- test_dynamic_channel_bounded
- test_dynamic_channel_unbounded
Sources: extensions/muxio-rpc-service-caller/tests/dynamic_channel_tests.rs:101-167
Dependencies
| Crate | Purpose |
|---|---|
| tokio | Async runtime with "full" features |
| tokio-tungstenite | WebSocket protocol implementation |
| futures-util | Stream/sink utilities for WebSocket splitting |
| async-trait | Async trait method support |
| muxio | Core RPC dispatcher and framing |
| muxio-rpc-service | Service trait definitions |
| muxio-rpc-service-caller | Caller interface trait |
| muxio-rpc-service-endpoint | Server-to-client RPC handling |
Sources: extensions/muxio-tokio-rpc-client/Cargo.toml:11-22
WASM RPC Client
Relevant source files
- Cargo.lock
- extensions/muxio-rpc-service/Cargo.toml
- extensions/muxio-tokio-rpc-client/src/lib.rs
- extensions/muxio-wasm-rpc-client/Cargo.toml
- extensions/muxio-wasm-rpc-client/src/lib.rs
- extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs
- extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs
The WASM RPC Client provides a WebAssembly-compatible implementation of the RPC transport layer for browser environments. It bridges Rust code compiled to WASM with JavaScript's WebSocket API, enabling bidirectional RPC communication between WASM clients and native servers.
This page focuses on the client-side WASM implementation. For native Tokio-based clients, see Tokio RPC Client. For server-side implementations, see Tokio RPC Server. For the RPC abstraction layer, see RPC Framework.
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:1-182 extensions/muxio-wasm-rpc-client/Cargo.toml:1-30
Architecture Overview
The WASM RPC client operates as a bridge between Rust WASM code and JavaScript's WebSocket API. Unlike the Tokio client which manages its own WebSocket connection, the WASM client relies on JavaScript glue code to handle WebSocket events and delegates to Rust for RPC protocol processing.
graph TB
subgraph "Browser JavaScript"
WS["WebSocket API"]
GLUE["JavaScript Glue Code\nmuxioWriteBytes()"]
APP["Web Application"]
end
subgraph "WASM Module (Rust)"
STATIC["MUXIO_STATIC_RPC_CLIENT_REF\nthread_local!"]
CLIENT["RpcWasmClient"]
DISPATCHER["RpcDispatcher"]
ENDPOINT["RpcServiceEndpoint"]
CALLER["RpcServiceCallerInterface"]
end
subgraph "Core Layer"
MUXIO["muxio core\nBinary Framing"]
end
APP -->|create WebSocket| WS
WS -->|onopen| GLUE
WS -->|onmessage bytes| GLUE
WS -->|onerror/onclose| GLUE
GLUE -->|handle_connect| STATIC
GLUE -->|read_bytes bytes| STATIC
GLUE -->|handle_disconnect| STATIC
STATIC -.->|Arc reference| CLIENT
CLIENT -->|uses| DISPATCHER
CLIENT -->|uses| ENDPOINT
CLIENT -.->|implements| CALLER
CLIENT -->|emit_callback bytes| GLUE
GLUE -->|send bytes| WS
DISPATCHER --> MUXIO
ENDPOINT --> MUXIO
The architecture consists of three layers:
- JavaScript Layer : Manages WebSocket lifecycle and forwards events to WASM
- WASM Bridge Layer : RpcWasmClient and static client helpers
- Core RPC Layer : RpcDispatcher for multiplexing and RpcServiceEndpoint for handling incoming calls
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:16-35 extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:9-36
RpcWasmClient Structure
The RpcWasmClient struct manages bidirectional RPC communication in WASM environments. It combines client-side call capabilities with server-side request handling.
classDiagram
class RpcWasmClient {
-Arc~Mutex~RpcDispatcher~~ dispatcher
-Arc~RpcServiceEndpoint~()~~ endpoint
-Arc~dyn Fn(Vec~u8~)~ emit_callback
-RpcTransportStateChangeHandler state_change_handler
-Arc~AtomicBool~ is_connected
+new(emit_callback) RpcWasmClient
+handle_connect() async
+read_bytes(bytes) async
+handle_disconnect() async
+is_connected() bool
+get_endpoint() Arc~RpcServiceEndpoint~()~~
}
class RpcServiceCallerInterface {<<trait>>\n+get_dispatcher() Arc~Mutex~RpcDispatcher~~\n+get_emit_fn() Arc~dyn Fn(Vec~u8~)~\n+is_connected() bool\n+set_state_change_handler(handler) async}
class RpcDispatcher {
+read_bytes(bytes) Result
+respond(response, chunk_size, callback) Result
+is_rpc_request_finalized(id) bool
+delete_rpc_request(id) Option
+fail_all_pending_requests(error)
}
class RpcServiceEndpoint {+get_prebuffered_handlers() Arc}
RpcWasmClient ..|> RpcServiceCallerInterface
RpcWasmClient --> RpcDispatcher
RpcWasmClient --> RpcServiceEndpoint
| Field | Type | Purpose |
|---|---|---|
| dispatcher | Arc<Mutex<RpcDispatcher>> | Manages request/response correlation and stream multiplexing |
| endpoint | Arc<RpcServiceEndpoint<()>> | Handles incoming RPC requests from the server |
| emit_callback | Arc<dyn Fn(Vec<u8>)> | Callback to send bytes to JavaScript WebSocket |
| state_change_handler | Arc<Mutex<Option<Box<dyn Fn(RpcTransportState)>>>> | Optional callback for connection state changes |
| is_connected | Arc<AtomicBool> | Tracks connection status |
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:16-24 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:154-181
Connection Lifecycle
The WASM client relies on JavaScript to manage the WebSocket connection. Three lifecycle methods must be called from JavaScript glue code in response to WebSocket events:
stateDiagram-v2
[*] --> Disconnected : new()
Disconnected --> Connected : handle_connect()
Connected --> Processing : read_bytes(data)
Processing --> Connected
Connected --> Disconnected : handle_disconnect()
Disconnected --> [*]
note right of Connected
is_connected = true
state_change_handler(Connected)
end note
note right of Processing
1. read_bytes() into dispatcher
2. process_single_prebuffered_request()
3. respond() with results
end note
note right of Disconnected
is_connected = false
state_change_handler(Disconnected)
fail_all_pending_requests()
end note
handle_connect
Called when JavaScript's WebSocket onopen event fires. Updates connection state and notifies registered state change handlers.
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:37-44
sequenceDiagram
participant JS as JavaScript
participant Client as RpcWasmClient
participant Dispatcher as RpcDispatcher
participant Endpoint as RpcServiceEndpoint
participant Handler as User Handler
JS->>Client: read_bytes(bytes)
Note over Client,Dispatcher: Stage 1: Synchronous Reading
Client->>Dispatcher: lock().read_bytes(bytes)
Dispatcher-->>Client: request_ids[]
Client->>Dispatcher: is_rpc_request_finalized(id)
Dispatcher-->>Client: true/false
Client->>Dispatcher: delete_rpc_request(id)
Dispatcher-->>Client: RpcRequest
Note over Client: Release dispatcher lock
Note over Client,Handler: Stage 2: Asynchronous Processing
loop For each request
Client->>Endpoint: process_single_prebuffered_request()
Endpoint->>Handler: invoke(request)
Handler-->>Endpoint: response
Endpoint-->>Client: RpcResponse
end
Note over Client,Dispatcher: Stage 3: Synchronous Sending
Client->>Dispatcher: lock().respond(response)
Dispatcher->>Client: emit_callback(chunk)
Client->>JS: muxioWriteBytes(chunk)
JS->>JS: websocket.send(chunk)
read_bytes
The core message processing method, called when JavaScript's WebSocket onmessage event fires. Implements a three-stage pipeline to avoid holding the dispatcher lock during expensive async operations:
Stage 1 : Acquires dispatcher lock, reads bytes into frame buffer, identifies finalized requests, and extracts them for processing. Lock is released immediately.
Stage 2 : Processes all requests concurrently without holding the dispatcher lock. User handlers execute async logic here.
Stage 3 : Re-acquires dispatcher lock briefly to serialize and send responses back through the emit callback.
This design prevents deadlocks and allows concurrent request processing while maintaining thread safety.
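The lock-scoping pattern can be sketched generically as follows (the dispatcher type and request representation here are placeholders, not the crate's actual signatures):

```rust
use std::sync::Arc;
use tokio::sync::Mutex;

// Sketch of the three-stage pipeline with a placeholder dispatcher type.
async fn read_bytes_sketch(dispatcher: Arc<Mutex<Vec<Vec<u8>>>>, bytes: Vec<u8>) {
    // Stage 1: hold the lock only while extracting finalized requests.
    let requests: Vec<Vec<u8>> = {
        let mut guard = dispatcher.lock().await;
        guard.push(bytes);
        guard.drain(..).collect()
    }; // lock released here

    // Stage 2: run async handlers with no lock held.
    let mut responses = Vec::with_capacity(requests.len());
    for request in requests {
        responses.push(request); // placeholder for handler work
    }

    // Stage 3: re-acquire the lock briefly to serialize and emit responses.
    let _guard = dispatcher.lock().await;
    // ... respond(...) for each response, emitting chunks via the callback
}
```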
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:46-121
handle_disconnect
Called when JavaScript's WebSocket onclose or onerror events fire. Updates connection state, notifies handlers, and fails all pending requests with a cancellation error.
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:123-134
Static Client Pattern
For simplified JavaScript integration, the WASM client provides a static client pattern using thread-local storage. This eliminates the need to pass client instances through JavaScript and provides a global access point.
graph LR
subgraph "JavaScript"
INIT["init()"]
CALL["callSomeRpc()"]
end
subgraph "WASM Exports"
INIT_EXPORT["#[wasm_bindgen]\ninit_static_client()"]
RPC_EXPORT["#[wasm_bindgen]\nexported_rpc_function()"]
end
subgraph "Static Client Layer"
TLS["MUXIO_STATIC_RPC_CLIENT_REF\nthread_local!"]
WITH["with_static_client_async()"]
end
subgraph "Client Layer"
CLIENT["Arc<RpcWasmClient>"]
end
INIT --> INIT_EXPORT
INIT_EXPORT --> TLS
TLS -.stores.-> CLIENT
CALL --> RPC_EXPORT
RPC_EXPORT --> WITH
WITH --> TLS
TLS -.retrieves.-> CLIENT
WITH --> CLIENT
init_static_client
Initializes the thread-local static client reference. This function is idempotent—calling it multiple times has no effect after the first initialization. Typically called once during WASM module startup.
Sources: extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:25-36
with_static_client_async
Primary method for interacting with the static client from exported WASM functions. Accepts a closure that receives the Arc<RpcWasmClient> and returns a future. Converts the result to a JavaScript Promise.
| Parameter | Type | Description |
|---|---|---|
| f | FnOnce(Arc<RpcWasmClient>) -> Fut | Closure receiving client reference |
| Fut | Future<Output = Result<T, String>> | Future returned by closure |
| T | Into<JsValue> | Result type convertible to JavaScript value |
| Returns | Promise | JavaScript promise resolving to T or rejecting with error |
Sources: extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:54-72
get_static_client
Returns the current static client if initialized, otherwise returns None. Useful for conditional logic or direct access without promise conversion.
Sources: extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:79-81
JavaScript Integration
The WASM client requires JavaScript glue code to bridge WebSocket events to WASM function calls. Here's the typical integration pattern:
graph TB
subgraph "JavaScript WebSocket Events"
OPEN["ws.onopen"]
MESSAGE["ws.onmessage"]
ERROR["ws.onerror"]
CLOSE["ws.onclose"]
end
subgraph "WASM Bridge Functions"
WASM_CONNECT["wasm.handle_connect()"]
WASM_READ["wasm.read_bytes(event.data)"]
WASM_DISCONNECT["wasm.handle_disconnect()"]
end
subgraph "WASM Emit Callback"
EMIT["emit_callback(bytes)"]
WRITE["muxioWriteBytes(bytes)"]
end
OPEN --> WASM_CONNECT
MESSAGE --> WASM_READ
ERROR --> WASM_DISCONNECT
CLOSE --> WASM_DISCONNECT
EMIT --> WRITE
WRITE -->|ws.send bytes| MESSAGE
JavaScript Glue Layer
The JavaScript layer must:
- Create and manage a WebSocket connection
- Forward onopen events to handle_connect()
- Forward onmessage data to read_bytes()
- Forward onerror/onclose events to handle_disconnect()
- Implement muxioWriteBytes() to send data back through the WebSocket
Sources: extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:1-8 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:27-34
Making RPC Calls
The WASM client implements RpcServiceCallerInterface, enabling the same call patterns as the Tokio client. All methods defined in service definitions using RpcMethodPrebuffered are available.
sequenceDiagram
participant WASM as WASM Code
participant Caller as RpcServiceCallerInterface
participant Dispatcher as RpcDispatcher
participant Emit as emit_callback
participant JS as JavaScript
participant WS as WebSocket
participant Server as Server
WASM->>Caller: Add::call(client, params)
Caller->>Dispatcher: encode_request + METHOD_ID
Dispatcher->>Dispatcher: assign request_id
Dispatcher->>Emit: emit_callback(bytes)
Emit->>JS: muxioWriteBytes(bytes)
JS->>WS: websocket.send(bytes)
WS->>Server: transmit
Server->>WS: response
WS->>JS: onmessage(bytes)
JS->>Caller: read_bytes(bytes)
Caller->>Dispatcher: decode frames
Dispatcher->>Dispatcher: match request_id
Dispatcher->>Caller: decode_response
Caller-->>WASM: Result<Response>
Example Usage Pattern
From WASM code:
- Obtain client reference (either directly or via with_static_client_async)
- Call service methods using the trait (e.g., Add::call(&client, request).await)
- Handle the returned Result<Response, RpcServiceError>
The client automatically handles:
- Request serialization with bitcode
- METHOD_ID attachment for routing
- Request correlation via dispatcher
- Response deserialization
- Error propagation
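As a hedged sketch, an exported function performing these steps through the static client might look like the following; module paths, the error type's Display implementation, and the wasm-bindgen plumbing are assumptions:

```rust
use wasm_bindgen::prelude::*;

// Paths are illustrative; adjust to the actual crate layout.
use example_muxio_rpc_service_definition::prebuffered::Add;
use muxio_rpc_service_caller::prebuffered::RpcCallPrebuffered;
use muxio_wasm_rpc_client::static_lib::with_static_client_async;

#[wasm_bindgen]
pub fn add_numbers() -> js_sys::Promise {
    with_static_client_async(|client| async move {
        // Returns Result<f64, String>; f64 converts into a JsValue for the Promise.
        Add::call(client.as_ref(), vec![1.0, 2.0, 3.0])
            .await
            .map_err(|e| e.to_string())
    })
}
```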
Sources: extensions/muxio-wasm-rpc-client/src/lib.rs:6-9 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:154-181
graph TB
subgraph "JavaScript"
WS["WebSocket\nonmessage(bytes)"]
end
subgraph "RpcWasmClient"
READ["read_bytes(bytes)"]
STAGE1["Stage 1:\nExtract finalized requests"]
STAGE2["Stage 2:\nprocess_single_prebuffered_request()"]
STAGE3["Stage 3:\nrespond(response)"]
end
subgraph "RpcServiceEndpoint"
HANDLERS["get_prebuffered_handlers()"]
DISPATCH["dispatch by METHOD_ID"]
end
subgraph "User Code"
HANDLER["Registered Handler\nasync fn(request) -> response"]
end
WS --> READ
READ --> STAGE1
STAGE1 --> STAGE2
STAGE2 --> HANDLERS
HANDLERS --> DISPATCH
DISPATCH --> HANDLER
HANDLER --> STAGE2
STAGE2 --> STAGE3
STAGE3 -->|emit_callback| WS
Handling Incoming RPC Calls
The WASM client can also act as a server, handling RPC calls initiated by the remote endpoint. This enables bidirectional RPC where both client and server can initiate calls.
sequenceDiagram
participant Code as User Code
participant Client as RpcWasmClient
participant Endpoint as RpcServiceEndpoint
Code->>Client: get_endpoint()
Client-->>Code: Arc<RpcServiceEndpoint<()>>
Code->>Endpoint: register_prebuffered_handler::<Method>()
Note over Endpoint: Store handler by METHOD_ID
Registering Handlers
Handlers are registered with the RpcServiceEndpoint obtained via get_endpoint():
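A hedged sketch of registration, following the register_prebuffered(METHOD_ID, handler) pattern shown in the integration tests; the handler's parameter order, the context argument, the error type, and whether registration itself is async are all assumptions:

```rust
// Paths and exact signatures are illustrative.
use example_muxio_rpc_service_definition::prebuffered::Add;
use muxio_rpc_service::prebuffered::RpcMethodPrebuffered;
use muxio_wasm_rpc_client::RpcWasmClient;

fn register_handlers(client: &RpcWasmClient) {
    let endpoint = client.get_endpoint();
    endpoint.register_prebuffered(Add::METHOD_ID, |request_bytes: Vec<u8>, _ctx: ()| async move {
        // Decode with the shared definition, compute, and encode the reply.
        let numbers = Add::decode_request(&request_bytes).expect("invalid Add request");
        Add::encode_response(numbers.iter().sum::<f64>())
    });
}
```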
When an incoming request arrives:
- read_bytes() extracts the request from the dispatcher
- process_single_prebuffered_request() looks up the handler by METHOD_ID
- The handler executes asynchronously
- The response is serialized and sent via respond()
The context type for WASM client handlers is () since there is no per-connection state.
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:86-120 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:141-143
stateDiagram-v2
[*] --> Disconnected
Disconnected --> Connected : handle_connect()
Connected --> Disconnected : handle_disconnect()
state Connected {
[*] --> Ready
Ready --> Processing : read_bytes()
Processing --> Ready
}
note right of Connected
is_connected = true
emit state_change_handler(Connected)
end note
note right of Disconnected
is_connected = false
emit state_change_handler(Disconnected)
fail_all_pending_requests()
end note
State Management
The WASM client tracks connection state using an AtomicBool and provides optional state change notifications.
State Change Handler
Applications can register a callback to receive notifications when the connection state changes:
| State | Trigger | Actions |
|---|---|---|
| Connected | handle_connect() called | Handler invoked with RpcTransportState::Connected |
| Disconnected | handle_disconnect() called | Handler invoked with RpcTransportState::Disconnected, all pending requests failed |
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:168-180 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:22-23
Dependencies
The WASM client has minimal dependencies focused on WASM/JavaScript interop:
| Dependency | Purpose |
|---|---|
| wasm-bindgen | JavaScript/Rust FFI bindings |
| wasm-bindgen-futures | Convert Rust futures to JavaScript promises |
| js-sys | JavaScript standard library types |
| tokio | Async runtime (sync primitives only: Mutex) |
| futures | Future composition utilities |
| muxio | Core multiplexing and framing protocol |
| muxio-rpc-service | RPC trait definitions and METHOD_ID generation |
| muxio-rpc-service-caller | Client-side RPC call interface |
| muxio-rpc-service-endpoint | Server-side RPC handler interface |
Note: While tokio is included, the WASM client does not use Tokio's runtime. Only synchronization primitives like Mutex are used, which work in WASM environments.
Sources: extensions/muxio-wasm-rpc-client/Cargo.toml:11-22
Thread Safety
The WASM client is designed for single-threaded WASM environments:
- Arc is used for reference counting, but WASM is single-threaded
- Mutex guards shared state but never blocks (no contention)
- AtomicBool provides lock-free state access
- All callbacks use Send + Sync bounds for API consistency with native code
The three-stage read_bytes() pipeline ensures the dispatcher lock is held only during brief serialization/deserialization operations, not during handler execution.
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:46-121 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:7-14
Comparison with Tokio Client
| Feature | WASM Client | Tokio Client |
|---|---|---|
| WebSocket Management | Delegated to JavaScript | Built-in with tokio-tungstenite |
| Event Model | Callback-based (onopen, onmessage, etc.) | Async stream-based |
| Connection Initialization | handle_connect() | connect() |
| Data Reading | read_bytes() called from JS | read_loop() task |
| Async Runtime | None (WASM environment) | Tokio |
| State Tracking | AtomicBool + manual calls | Automatic with connection task |
| Bidirectional RPC | Yes, via RpcServiceEndpoint | Yes, via RpcServiceEndpoint |
| Static Client Pattern | Yes, via thread_local | Not applicable |
Both clients implement RpcServiceCallerInterface, ensuring identical call patterns and service definitions work across both environments.
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:16-35
Transport State Management
Relevant source files
- extensions/muxio-rpc-service-caller/src/lib.rs
- extensions/muxio-rpc-service-caller/tests/dynamic_channel_tests.rs
- extensions/muxio-tokio-rpc-client/src/lib.rs
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs
- extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs
- extensions/muxio-wasm-rpc-client/src/lib.rs
- extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs
- extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs
Purpose and Scope
This document explains how transport implementations track connection state, handle disconnection events, and notify application code through state change callbacks. All concrete transport implementations (RpcClient for Tokio and RpcWasmClient for WASM) provide consistent state management interfaces defined by the RpcServiceCallerInterface trait.
For information about the overall RPC caller interface and making RPC calls, see Service Caller Interface. For implementation details specific to each transport, see Tokio RPC Client and WASM RPC Client.
Transport State Types
The system defines connection state through the RpcTransportState enum, which represents the binary connection status of a transport.
Sources:
stateDiagram-v2
[*] --> Disconnected : Initial State
Disconnected --> Connected : Connection Established
Connected --> Disconnected : Connection Lost/Closed
Disconnected --> [*] : Client Dropped
- extensions/muxio-rpc-service-caller/src/transport_state.rs
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:22-23
- extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:13-14
RpcTransportState Variants
| Variant | Description | Usage |
|---|---|---|
| Connected | Transport has an active connection | Allows RPC calls to proceed |
| Disconnected | Transport connection is closed or failed | Blocks new RPC calls, cancels pending requests |
The state is exposed through the is_connected() method on RpcServiceCallerInterface, which returns a boolean indicating current connection status.
Sources:
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:284-286
- extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:136-139
State Tracking Mechanism
Both transport implementations use Arc<AtomicBool> for thread-safe state tracking. This allows state to be checked and modified atomically across multiple concurrent tasks without requiring locks.
Sources:
graph TB
subgraph "RpcClient (Tokio)"
RpcClient["RpcClient"]
TokioAtomicBool["is_connected: Arc<AtomicBool>"]
RpcClient --> TokioAtomicBool
end
subgraph "RpcWasmClient (WASM)"
RpcWasmClient["RpcWasmClient"]
WasmAtomicBool["is_connected: Arc<AtomicBool>"]
RpcWasmClient --> WasmAtomicBool
end
subgraph "RpcServiceCallerInterface"
IsConnected["is_connected() -> bool"]
GetEmitFn["get_emit_fn()
checks state"]
end
TokioAtomicBool -.reads.-> IsConnected
WasmAtomicBool -.reads.-> IsConnected
TokioAtomicBool -.reads.-> GetEmitFn
WasmAtomicBool -.reads.-> GetEmitFn
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs30
- extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs23
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:284-286
- extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:136-139
Atomic Ordering Semantics
The implementations use specific memory ordering for different operations:
| Operation | Ordering | Rationale |
|---|---|---|
| Initial state check | Relaxed | Non-critical reads for logging |
| State transition (swap) | SeqCst | Strong guarantee for state transitions |
| Send loop state check | Acquire | Synchronizes with state changes |
| State store | SeqCst | Ensures visibility to all threads |
Sources:
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs61
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs85
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs231
- extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs39
State Change Callbacks
Transport implementations allow application code to register callbacks that execute when connection state changes. This enables reactive patterns where applications can update UI, retry connections, or clean up resources.
Sources:
sequenceDiagram
participant App as "Application Code"
participant Client as "RpcClient/RpcWasmClient"
participant Handler as "State Change Handler"
participant Transport as "Underlying Transport"
App->>Client: set_state_change_handler(callback)
Client->>Client: Store handler in Arc<Mutex<Option<Box<...>>>>
alt Already Connected
Client->>Handler: callback(RpcTransportState::Connected)
Handler->>App: Initial state notification
end
Transport->>Client: Connection closed/error
Client->>Client: is_connected.swap(false)
Client->>Handler: callback(RpcTransportState::Disconnected)
Handler->>App: Disconnection notification
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:315-334
- extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:168-180
Handler Registration
The set_state_change_handler() method is defined by RpcServiceCallerInterface and implemented by both transport types:
async fn set_state_change_handler(
&self,
handler: impl Fn(RpcTransportState) + Send + Sync + 'static,
)
The handler is stored as Arc<StdMutex<Option<Box<dyn Fn(RpcTransportState) + Send + Sync>>>>, allowing it to be:
- Shared across multiple tasks via Arc
- Safely mutated when setting/clearing via Mutex
- Called from any thread via Send + Sync bounds
- Dynamically replaced via Option
Sources:
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:22-23
- extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:13-14
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:315-334
Initial State Notification
Both implementations immediately call the handler with RpcTransportState::Connected if already connected when the handler is registered. This ensures application code receives the current state without waiting for a transition.
Sources:
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:324-333
- extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:175-179
graph TB
subgraph "RpcClient State Lifecycle"
Constructor["RpcClient::new()"]
CreateState["Create is_connected: AtomicBool(true)"]
SpawnTasks["Spawn receive/send/heartbeat tasks"]
ReceiveLoop["Receive Loop Task"]
SendLoop["Send Loop Task"]
Heartbeat["Heartbeat Task"]
ErrorDetect["Error Detection"]
ShutdownAsync["shutdown_async()"]
ShutdownSync["shutdown_sync() (Drop)"]
FailPending["fail_all_pending_requests()"]
CallHandler["Call state_change_handler(Disconnected)"]
Constructor --> CreateState
CreateState --> SpawnTasks
SpawnTasks --> ReceiveLoop
SpawnTasks --> SendLoop
SpawnTasks --> Heartbeat
ReceiveLoop --> ErrorDetect
SendLoop --> ErrorDetect
ErrorDetect --> ShutdownAsync
ShutdownAsync --> CallHandler
ShutdownAsync --> FailPending
ShutdownSync --> CallHandler
end
Tokio Client State Management
The RpcClient manages connection state across multiple concurrent tasks: receive loop, send loop, and heartbeat task.
Sources:
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:110-271
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:79-108
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:42-52
Connection Establishment
When RpcClient::new() successfully connects:
- WebSocket connection is established at extensions/muxio-tokio-rpc-client/src/rpc_client.rs:118-125
- is_connected is initialized to true at extensions/muxio-tokio-rpc-client/src/rpc_client.rs134
- Background tasks are spawned (receive, send, heartbeat) at extensions/muxio-tokio-rpc-client/src/rpc_client.rs:137-257
- If a state change handler is later set, it immediately receives a Connected notification
Sources:
Disconnection Detection
The client detects disconnection through multiple mechanisms:
| Detection Point | Trigger | Handler Location |
|---|---|---|
| WebSocket receive error | ws_receiver.next() returns error | extensions/muxio-tokio-rpc-client/src/rpc_client.rs:186-198 |
| WebSocket stream end | ws_receiver.next() returns None | extensions/muxio-tokio-rpc-client/src/rpc_client.rs:206-220 |
| Send failure | ws_sender.send() returns error | extensions/muxio-tokio-rpc-client/src/rpc_client.rs:239-252 |
| Client drop | Drop::drop() is called | extensions/muxio-tokio-rpc-client/src/rpc_client.rs:42-52 |
All detection paths eventually call shutdown_async() to ensure clean disconnection handling.
Sources:
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:156-221
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:224-257
Shutdown Implementation
The client implements two shutdown paths:
Asynchronous Shutdown (shutdown_async):
- Used by background tasks when detecting errors
- Acquires dispatcher lock to prevent concurrent RPC calls
- Fails all pending requests with FrameDecodeError::ReadAfterCancel
- Calls state change handler with Disconnected
- Uses swap(false, Ordering::SeqCst) to ensure exactly-once semantics
Synchronous Shutdown (shutdown_sync):
- Used by Drop implementation
- Cannot await, so doesn't acquire dispatcher lock
- Calls state change handler with Disconnected
- Relies on task abortion to prevent further operations
Sources:
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:79-108
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:55-77
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:42-52
Send Loop State Guard
The send loop checks connection state before attempting each send operation to prevent writing to a closed connection:
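A minimal sketch of the guard (the channel payload type is a placeholder):

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use tokio::sync::mpsc;

// Sketch: stop draining the channel once another task has flagged disconnection.
async fn send_loop(
    mut rx: mpsc::UnboundedReceiver<Vec<u8>>,
    is_connected: Arc<AtomicBool>,
) {
    while let Some(_frame) = rx.recv().await {
        if !is_connected.load(Ordering::Acquire) {
            // Disconnection already detected elsewhere; drop the message and exit.
            break;
        }
        // ... ws_sender.send(frame).await would go here; on error, trigger shutdown.
    }
}
```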
This prevents unnecessary error logging and ensures clean shutdown when disconnection has been detected by another task.
Sources:
graph TB
subgraph "WASM Client State Management"
JSInit["JavaScript: new WebSocket()"]
WasmNew["RpcWasmClient::new()"]
InitState["Initialize is_connected: false"]
JSOnOpen["JavaScript: onopen event"]
HandleConnect["handle_connect()"]
SetConnected["is_connected.store(true)"]
NotifyConnect["Call handler(Connected)"]
JSOnMessage["JavaScript: onmessage event"]
ReadBytes["read_bytes(data)"]
ProcessRPC["Process RPC messages"]
JSOnCloseError["JavaScript: onclose/onerror"]
HandleDisconnect["handle_disconnect()"]
SwapFalse["is_connected.swap(false)"]
NotifyDisconnect["Call handler(Disconnected)"]
FailPending["fail_all_pending_requests()"]
JSInit --> WasmNew
WasmNew --> InitState
JSOnOpen --> HandleConnect
HandleConnect --> SetConnected
SetConnected --> NotifyConnect
JSOnMessage --> ReadBytes
ReadBytes --> ProcessRPC
JSOnCloseError --> HandleDisconnect
HandleDisconnect --> SwapFalse
SwapFalse --> NotifyDisconnect
SwapFalse --> FailPending
end
WASM Client State Management
The RpcWasmClient provides explicit methods for JavaScript code to manage connection state since WASM cannot directly observe WebSocket events.
Sources:
- extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:26-35
- extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:37-44
- extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:123-134
JavaScript Integration Points
The WASM client expects JavaScript glue code to call specific methods at WebSocket lifecycle events:
| WebSocket Event | WASM Method | Purpose |
|---|---|---|
| onopen | handle_connect() | Set connected state, notify handler |
| onmessage | read_bytes(data) | Process incoming RPC messages |
| onclose | handle_disconnect() | Set disconnected state, cancel requests |
| onerror | handle_disconnect() | Same as onclose |
Sources:
- extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:37-44
- extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:46-121
- extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:123-134
Initial Disconnected State
Unlike RpcClient, which starts connected after new() succeeds, RpcWasmClient::new() initializes with is_connected: false because:
- Constructor cannot know if JavaScript has established a WebSocket connection
- Connection state is entirely controlled by JavaScript
- Prevents race conditions if RPC calls occur before handle_connect()
Sources:
Disconnection Handling
When handle_disconnect() is called:
- Checks if already disconnected using swap(false, Ordering::SeqCst) to ensure exactly-once handling
- If transitioning from connected to disconnected:
  - Calls state change handler with Disconnected state
  - Acquires dispatcher lock
  - Fails all pending requests with FrameDecodeError::ReadAfterCancel
This ensures that any RPC calls in progress are properly terminated and their futures resolve with errors rather than hanging indefinitely.
Sources:
sequenceDiagram
participant App as "Application Code"
participant Client as "Transport Client"
participant Dispatcher as "RpcDispatcher"
participant Request as "Pending Request Future"
App->>Client: RpcMethod::call()
Client->>Dispatcher: register_request()
Dispatcher->>Request: Create future (pending)
Note over Client: Connection error detected
Client->>Client: is_connected.swap(false)
Client->>Dispatcher: fail_all_pending_requests(error)
Dispatcher->>Request: Resolve with error
Request-->>App: Err(RpcServiceError::...)
Pending Request Cancellation
When a transport disconnects, all pending RPC requests must be cancelled to prevent application code from waiting indefinitely. Both implementations use the dispatcher's fail_all_pending_requests() method.
Sources:
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:100-103
- extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:130-133
Cancellation Error Type
Pending requests are failed with FrameDecodeError::ReadAfterCancel, which propagates through the error handling chain as RpcServiceError::Transport. This error type specifically indicates that the request was cancelled due to connection closure rather than a protocol error or application error.
Sources:
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs102
- extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs131
Race Condition Prevention
The implementations use swap() to atomically transition state and check the previous value:
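A minimal sketch of the pattern:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// swap() returns the previous value, so only the first task to observe `true`
// runs the cleanup path, even if several tasks detect the error concurrently.
fn begin_shutdown(is_connected: &AtomicBool) -> bool {
    let was_connected = is_connected.swap(false, Ordering::SeqCst);
    if was_connected {
        // Exactly-once cleanup: notify the state change handler and
        // fail all pending requests.
    }
    was_connected
}
```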
This ensures that fail_all_pending_requests() is called exactly once even if multiple tasks detect disconnection simultaneously.
Sources:
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs61
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs85
- extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs125
Testing State Management
The codebase includes comprehensive tests for state management behavior:
Connection Failure Test
test_client_errors_on_connection_failure verifies that attempting to connect to a non-existent server produces an appropriate error rather than hanging or panicking.
Sources:
State Change Handler Test
test_transport_state_change_handler validates the complete state change callback lifecycle:
- Handler registered after connection established
- Receives immediate Connected notification
- Server closes connection
- Handler receives Disconnected notification
- States collected in correct order: [Connected, Disconnected]
Sources:
Pending Request Cancellation Test
test_pending_requests_fail_on_disconnect ensures that in-flight RPC calls are properly cancelled when connection closes:
- Client connects successfully
- RPC call spawned in background task (becomes pending)
- Server closes connection (triggered by test signal)
- Client detects disconnection
- Pending RPC call resolves with cancellation error
This test demonstrates the critical timing where the request must become pending in the dispatcher before disconnection occurs.
Sources:
Mock Client Implementation
The test suite includes MockRpcClient implementations that provide minimal state tracking for testing higher-level components without requiring actual network connections.
Sources:
Thread Safety Considerations
State management uses lock-free atomic operations where possible to minimize contention:
| Component | Synchronization Primitive | Access Pattern |
|---|---|---|
| is_connected | Arc<AtomicBool> | High-frequency reads, rare writes |
| state_change_handler | Arc<StdMutex<Option<...>>> | Rare reads (on state change), rare writes (handler registration) |
| dispatcher | Arc<TokioMutex<...>> | Moderate frequency for RPC operations |
The combination of atomics for frequent checks and mutexes for infrequent handler invocation provides good performance while maintaining correctness.
Sources:
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:25-32
- extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:17-24
Emit Function State Check
The get_emit_fn() implementation checks connection state before sending data to prevent writing to closed connections:
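A minimal sketch of the guard inside the emit closure (the underlying send function is a placeholder):

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};

// Sketch: bytes are silently dropped once the transport is known to be closed.
fn make_emit_fn(
    is_connected: Arc<AtomicBool>,
    send_bytes: Arc<dyn Fn(Vec<u8>) + Send + Sync>,
) -> Arc<dyn Fn(Vec<u8>) + Send + Sync> {
    Arc::new(move |bytes: Vec<u8>| {
        if !is_connected.load(Ordering::Relaxed) {
            return; // Connection already closed; do not attempt to write.
        }
        send_bytes(bytes);
    })
}
```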
This guard is essential because the emit function may be called from any task after disconnection has been detected by another task.
Sources:
Type Safety and Shared Definitions
Relevant source files
- README.md
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs
Purpose and Scope
This document explains how rust-muxio achieves compile-time type safety for RPC operations through shared service definitions. It covers the RpcMethodPrebuffered trait, how method definitions are shared between clients and servers, and the compile-time guarantees that eliminate entire classes of distributed system bugs.
For step-by-step instructions on implementing custom service definitions, see Creating Service Definitions. For details on METHOD_ID generation and collision prevention, see Method ID Generation. For serialization implementation details, see Serialization with Bitcode.
Architecture Overview
The type safety model in rust-muxio is built on a simple principle: both client and server implementations depend on the same service definition crate. This shared dependency ensures that any incompatibility between client expectations and server behavior results in a compile-time error rather than a runtime failure.
graph TB
subgraph ServiceDef["Service Definition Crate\n(example-muxio-rpc-service-definition)"]
Trait["RpcMethodPrebuffered trait"]
Add["Add struct\nMETHOD_ID: u64"]
Mult["Mult struct\nMETHOD_ID: u64"]
Echo["Echo struct\nMETHOD_ID: u64"]
end
subgraph Client["Client Implementations"]
TokioClient["muxio-tokio-rpc-client"]
WasmClient["muxio-wasm-rpc-client"]
ClientCode["Application Code:\nAdd::call(client, params)"]
end
subgraph Server["Server Implementation"]
TokioServer["muxio-tokio-rpc-server"]
ServerCode["Handler Registration:\nendpoint.register_prebuffered(Add::METHOD_ID, handler)"]
end
Trait --> Add
Trait --> Mult
Trait --> Echo
Add -.depends on.-> ClientCode
Add -.depends on.-> ServerCode
ClientCode --> TokioClient
ClientCode --> WasmClient
ServerCode --> TokioServer
Add -.enforces contract.-> ClientCode
Add -.enforces contract.-> ServerCode
Shared Definition Pattern
Sources:
- README.md49
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:1-14
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:21-37
The RpcMethodPrebuffered Trait
The RpcMethodPrebuffered trait defines the contract for a complete request/response RPC method. Each method implementation provides:
| Trait Element | Type | Purpose |
|---|---|---|
| METHOD_ID | u64 | Compile-time unique identifier (xxhash of method name) |
| Input | Associated Type | Request parameter type |
| Output | Associated Type | Response result type |
| encode_request() | Method | Serialize Input to bytes |
| decode_request() | Method | Deserialize bytes to Input |
| encode_response() | Method | Serialize Output to bytes |
| decode_response() | Method | Deserialize bytes to Output |
Type Flow Through System Layers
Sources:
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:10-21
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:35-42
Compile-Time Guarantees
Type Safety Enforcement
The shared service definition pattern enforces three critical guarantees at compile time:
- Parameter Type Matching : The client's call site must pass arguments that match the Input associated type
- Response Type Matching : The handler must return a value matching the Output associated type
- METHOD_ID Uniqueness : Duplicate method names result in compile-time constant collision
Example: Type Mismatch Detection
The following table illustrates how type mismatches are caught:
| Scenario | Client Code | Server Code | Result |
|---|---|---|---|
| Correct | Add::call(client, vec![1.0, 2.0]) | Handler returns f64 | ✓ Compiles |
| Wrong Input Type | Add::call(client, "invalid") | Handler expects Vec<f64> | ✗ Compile error: type mismatch |
| Wrong Output Type | Client expects f64 | Handler returns String | ✗ Compile error: trait bound not satisfied |
| Missing Handler | Add::call(client, params) | No handler registered | ✓ Compiles, runtime NotFound error |
Code Entity Mapping
Sources:
- README.md:69-117
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:81-95
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:226-239
Client-Server Contract Enforcement
RpcCallPrebuffered Implementation
The RpcCallPrebuffered trait provides the client-side call interface. It is automatically implemented for all types that implement RpcMethodPrebuffered:
This design ensures that:
- The call() method always receives Self::Input (enforced by trait bounds)
- The return type is always Result<Self::Output, _> (enforced by trait signature)
- Both encoding and decoding use the shared implementation from RpcMethodPrebuffered
Sources:
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:10-29
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:49-98
Cross-Platform Type Safety
Sources:
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:81-88
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:126-133
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:35-60
Integration Test Validation
The integration tests demonstrate identical usage patterns across both Tokio and WASM clients, validating the shared definition model:
Tokio Client Test Pattern
1. Server registers handler: endpoint.register_prebuffered(Add::METHOD_ID, handler)
2. Handler decodes: Add::decode_request(&request_bytes)
3. Handler encodes: Add::encode_response(sum)
4. Client calls: Add::call(&client, vec![1.0, 2.0, 3.0])
5. Assertion: assert_eq!(result, 6.0)
WASM Client Test Pattern
1. Server registers handler: endpoint.register_prebuffered(Add::METHOD_ID, handler)
2. Handler decodes: Add::decode_request(&request_bytes)
3. Handler encodes: Add::encode_response(sum)
4. Client calls: Add::call(&client, vec![1.0, 2.0, 3.0])
5. Assertion: assert_eq!(result, 6.0)
The patterns are identical except for the client type. This demonstrates that:
- Application logic is transport-agnostic
- Service definitions work identically across platforms
- Type safety is maintained regardless of transport implementation
Sources:
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:18-97
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:40-142
Error Handling and Type Safety
Runtime Error Detection
While type mismatches are caught at compile time, certain errors can only be detected at runtime:
| Error Type | Detection Time | Example |
|---|---|---|
| Type mismatch | Compile time | Add::call(client, "wrong type") → Compile error |
| Missing field in struct | Compile time | Client uses Add v2, server has Add v1 → Compile error if both recompile |
| Method not registered | Runtime | Client calls Add, server never registered handler → RpcServiceError::NotFound |
| Handler logic error | Runtime | Handler returns Err("Addition failed") → RpcServiceError::Rpc |
Runtime Error Propagation
Sources:
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:99-152
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:205-240
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:144-227
Large Payload Handling
The RpcCallPrebuffered implementation includes automatic handling for large payloads. The type system ensures that this complexity is transparent to application code:
| Payload Size | Transport Strategy | Type Safety Impact |
|---|---|---|
| < 64KB | Sent in rpc_param_bytes header field | None - same types |
| ≥ 64KB | Sent in rpc_prebuffered_payload_bytes, chunked automatically | None - same types |
The decision is made based on the serialized byte length, not the Rust type. The following code from extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:58-65 shows this logic:
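That snippet is not reproduced here; the following is an illustrative sketch of the same decision, with the request header reduced to the two relevant fields and the threshold value assumed:

```rust
// Illustrative only: a reduced header with just the two fields named above.
struct RpcRequestSketch {
    rpc_param_bytes: Option<Vec<u8>>,
    rpc_prebuffered_payload_bytes: Option<Vec<u8>>,
}

// Assumed value (~64KB); the real constant is DEFAULT_SERVICE_MAX_CHUNK_SIZE.
const DEFAULT_SERVICE_MAX_CHUNK_SIZE: usize = 64 * 1024;

fn place_encoded_args(encoded: Vec<u8>) -> RpcRequestSketch {
    if encoded.len() < DEFAULT_SERVICE_MAX_CHUNK_SIZE {
        // Small payload: travels in the request header.
        RpcRequestSketch {
            rpc_param_bytes: Some(encoded),
            rpc_prebuffered_payload_bytes: None,
        }
    } else {
        // Large payload: chunked automatically by the dispatcher.
        RpcRequestSketch {
            rpc_param_bytes: None,
            rpc_prebuffered_payload_bytes: Some(encoded),
        }
    }
}
```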
Application code calling Add::call(client, vec![1.0; 1_000_000]) receives the same type safety guarantees as Add::call(client, vec![1.0; 3]).
Sources:
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:30-65
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:154-203
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:229-312
Summary
The type safety model in rust-muxio provides the following guarantees:
- Compile-Time Contract Enforcement : Both client and server depend on the same service definition crate, ensuring API compatibility
- Type-Safe Method Calls : The RpcCallPrebuffered trait ensures that Input and Output types are correctly used at all call sites
- Transport Agnosticism : The same type-safe definitions work identically across Tokio and WASM clients
- Automatic Serialization : Encoding and decoding are encapsulated in the method definition, hidden from application code
- Early Error Detection : Type mismatches, missing fields, and incompatible changes result in compile errors, not runtime failures
This design eliminates an entire class of distributed system bugs where client and server implementations drift apart over time. Any breaking change to a service definition requires both client and server code to be updated simultaneously, and the compiler enforces this constraint.
Sources:
- README.md49
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:1-241
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-313
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:1-99
Creating Service Definitions
Relevant source files
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs
- extensions/muxio-rpc-service/Cargo.toml
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs
- extensions/muxio-wasm-rpc-client/Cargo.toml
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs
Purpose and Scope
This page provides a step-by-step guide to creating RPC service definitions by implementing the RpcMethodPrebuffered trait. Service definitions establish type-safe contracts between clients and servers, ensuring that both sides use identical data structures and serialization logic at compile time.
For conceptual background on the RpcMethodPrebuffered trait and its role in the architecture, see Service Definitions. For details on compile-time method ID generation, see Method ID Generation. For serialization internals, see Serialization with Bitcode.
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:1-97 extensions/muxio-rpc-service/Cargo.toml:1-18
Service Definition Structure
A service definition is a Rust struct that implements RpcMethodPrebuffered. Each struct represents a single RPC method and defines:
| Component | Purpose | Compile-Time or Runtime |
|---|---|---|
| METHOD_ID | Unique identifier for the method | Compile-time constant |
| Input type | Parameter structure | Compile-time type |
| Output type | Response structure | Compile-time type |
| encode_request | Serializes parameters to bytes | Runtime function |
| decode_request | Deserializes parameters from bytes | Runtime function |
| encode_response | Serializes result to bytes | Runtime function |
| decode_response | Deserializes result from bytes | Runtime function |
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:10-21 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:36-60
Required Trait Implementation
The RpcMethodPrebuffered trait is defined in muxio-rpc-service and must be implemented for each RPC method. The trait definition looks like this:
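A simplified sketch of the trait's shape, based on the elements listed below; exact signatures (by-value vs. by-reference parameters) may differ in the crate:

```rust
use std::io;
use serde::{de::DeserializeOwned, Serialize};

// Simplified sketch; consult muxio-rpc-service for the authoritative definition.
pub trait RpcMethodPrebuffered {
    /// Compile-time unique identifier (xxhash of the method name).
    const METHOD_ID: u64;

    type Input: Serialize + DeserializeOwned + Send + Sync;
    type Output: Serialize + DeserializeOwned + Send + Sync;

    fn encode_request(input: Self::Input) -> Result<Vec<u8>, io::Error>;
    fn decode_request(bytes: &[u8]) -> Result<Self::Input, io::Error>;
    fn encode_response(output: Self::Output) -> Result<Vec<u8>, io::Error>;
    fn decode_response(bytes: &[u8]) -> Result<Self::Output, io::Error>;
}
```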
Associated Types
| Type | Requirements | Purpose |
|---|---|---|
| Input | Serialize + DeserializeOwned + Send + Sync | Method parameter type |
| Output | Serialize + DeserializeOwned + Send + Sync | Method return type |
Sources: extensions/muxio-rpc-service/Cargo.toml:11-17
Step-by-Step Implementation
1. Create a Service Definition Crate
Service definitions should live in a separate crate that both client and server depend on. This ensures compile-time type safety across the network boundary.
example-muxio-rpc-service-definition/
├── Cargo.toml
└── src/
├── lib.rs
└── prebuffered.rs
The Cargo.toml must include:
- muxio-rpc-service (provides the RpcMethodPrebuffered trait)
- bitcode (for serialization)
- xxhash-rust (for method ID generation)
Sources: extensions/muxio-rpc-service/Cargo.toml:11-17
2. Define the Method Struct
Create a zero-sized struct for each RPC method:
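For example, matching the methods used by the example service definition:

```rust
// One zero-sized marker type per RPC method; each exists only to carry
// its RpcMethodPrebuffered implementation.
pub struct Add;
pub struct Mult;
pub struct Echo;
```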
These structs have no fields and exist purely to carry trait implementations.
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs1
3. Generate METHOD_ID
The METHOD_ID is a compile-time constant generated by hashing the method name. This must be unique across all methods in your service:
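For example (shown here as a free constant; in practice this expression supplies the trait's METHOD_ID associated constant):

```rust
use xxhash_rust::const_xxh3::xxh3_64;

// const fn: evaluated at compile time, so the ID is a plain u64 constant.
pub const ADD_METHOD_ID: u64 = xxh3_64(b"Add");
```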
The hash function is xxhash_rust::const_xxh3::xxh3_64, which can be evaluated at compile time. Using the method name as the hash input ensures readability while maintaining uniqueness.
Sources: extensions/muxio-rpc-service/Cargo.toml16
4. Implement RpcMethodPrebuffered
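A hedged sketch of an Add implementation, assuming the trait shape sketched earlier and bitcode's serde-based API (bitcode::serialize / bitcode::deserialize); the error conversion shown via io::Error::new is illustrative, and the authoritative version lives in the example service definition crate:

```rust
use std::io;
use xxhash_rust::const_xxh3::xxh3_64;
// Trait path is assumed: use muxio_rpc_service::RpcMethodPrebuffered;

pub struct Add;

impl RpcMethodPrebuffered for Add {
    const METHOD_ID: u64 = xxh3_64(b"Add");

    type Input = Vec<f64>;
    type Output = f64;

    fn encode_request(input: Self::Input) -> Result<Vec<u8>, io::Error> {
        bitcode::serialize(&input)
            .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))
    }

    fn decode_request(bytes: &[u8]) -> Result<Self::Input, io::Error> {
        bitcode::deserialize(bytes)
            .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))
    }

    fn encode_response(output: Self::Output) -> Result<Vec<u8>, io::Error> {
        bitcode::serialize(&output)
            .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))
    }

    fn decode_response(bytes: &[u8]) -> Result<Self::Output, io::Error> {
        bitcode::deserialize(bytes)
            .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))
    }
}
```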
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:23-29 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:38-41
graph LR
subgraph "Request Path"
INPUT["Input: Vec<f64>"]
ENC_REQ["encode_request"]
BYTES_REQ["Vec<u8>"]
DEC_REQ["decode_request"]
INPUT2["Input: Vec<f64>"]
end
subgraph "Response Path"
OUTPUT["Output: f64"]
ENC_RES["encode_response"]
BYTES_RES["Vec<u8>"]
DEC_RES["decode_response"]
OUTPUT2["Output: f64"]
end
INPUT --> ENC_REQ
ENC_REQ --> BYTES_REQ
BYTES_REQ --> DEC_REQ
DEC_REQ --> INPUT2
OUTPUT --> ENC_RES
ENC_RES --> BYTES_RES
BYTES_RES --> DEC_RES
DEC_RES --> OUTPUT2
Serialization Implementation
The encode/decode methods use bitcode for binary serialization. The pattern is consistent across all methods:
Error Handling
All encode/decode methods return Result<T, io::Error>. The bitcode library's errors must be converted to io::Error:
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:75-76
Client and Server Usage
Once a service definition is implemented, both client and server use it to ensure type safety.
Client-Side Usage
The RpcCallPrebuffered trait provides the call method automatically for any type implementing RpcMethodPrebuffered.
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:82-88
Server-Side Usage
The handler receives raw bytes, uses the service definition to decode them, processes the request, and uses the service definition to encode the response.
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:35-61
Code Entity Mapping
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:35-61 extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:49-97
Complete Example: Echo Service
Here is a complete implementation of an Echo service that returns its input unchanged:
Client Usage
Server Usage
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:53-60 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:86-87
Handling Large Payloads
The RpcCallPrebuffered trait automatically handles large payloads by detecting when encoded arguments exceed DEFAULT_SERVICE_MAX_CHUNK_SIZE (typically 64KB). When this occurs:
- Small payloads: Encoded arguments are placed in the rpc_param_bytes field of the request header
- Large payloads: Encoded arguments are placed in rpc_prebuffered_payload_bytes and automatically chunked by the RpcDispatcher
This is handled transparently by the framework. Service definitions do not need special logic for large payloads.
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:30-48 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:155-203
Best Practices
1. Use Separate Service Definition Crates
Create a dedicated crate for service definitions that both client and server depend on:
workspace/
├── my-service-definition/ # Shared definitions
├── my-server/ # Depends on my-service-definition
└── my-client/ # Depends on my-service-definition
2. Choose Descriptive Method Names
The METHOD_ID is generated from the method name, so choose names that clearly describe the operation:
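For illustration (hypothetical method names):

```rust
// Clear, specific names hash to well-separated IDs and read well in logs.
pub struct CreateUser;
pub struct DeleteUserById;
pub struct FetchAccountBalance;

// Avoid terse or ambiguous names such as `Do`, `Run`, or `Proc1`.
```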
3. Keep Input/Output Types Simple
Prefer simple, serializable types:
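For example (a hypothetical request type):

```rust
use serde::{Deserialize, Serialize};

// Plain data structures with Serialize/Deserialize derives serialize cleanly
// with bitcode on every platform.
#[derive(Serialize, Deserialize)]
pub struct CreateUserRequest {
    pub name: String,
    pub email: String,
}

// Avoid values that cannot meaningfully cross the wire: file handles,
// raw pointers, or platform-specific types.
```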
4. Test Service Definitions
Integration tests should verify both client and server usage:
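A sketch of such a round trip, with server/client setup reduced to a hypothetical helper (spawn_test_server_and_client is not part of muxio):

```rust
#[tokio::test]
async fn add_round_trip() {
    // Hypothetical helper: starts an RpcServer with the Add handler registered
    // and returns a connected client.
    let client = spawn_test_server_and_client().await;

    let result = Add::call(client.as_ref(), vec![1.0, 2.0, 3.0]).await.unwrap();
    assert_eq!(result, 6.0);
}
```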
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:16-97 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:39-142
5. Document Expected Errors
Service definitions should document what errors can occur during encode/decode:
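For example, as doc comments on the method struct (the specific error conditions listed here are illustrative):

```rust
/// Adds a list of numbers on the server and returns the sum.
///
/// # Errors
///
/// * `decode_request` / `decode_response` return `io::Error` with kind
///   `InvalidData` when the bytes are corrupt or were produced by an
///   incompatible version of this service definition.
/// * The server handler rejects an empty input list (hypothetical rule).
pub struct Add;
```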
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:99-152
Cross-Platform Compatibility
Service definitions work identically across native (Tokio) and WebAssembly clients:
| Platform | Client Type | Service Definition Usage |
|---|---|---|
| Native | muxio-tokio-rpc-client::RpcClient | Add::call(client.as_ref(), params).await |
| WASM | muxio-wasm-rpc-client::RpcWasmClient | Add::call(client.as_ref(), params).await |
Both platforms use the same service definition crate, ensuring API consistency across deployments.
Sources: extensions/muxio-wasm-rpc-client/Cargo.toml:11-22 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:126-142
Method ID Generation
Relevant source files
Purpose and Scope
This document explains the compile-time method ID generation mechanism in Muxio's RPC framework. Method IDs are unique 64-bit identifiers that enable type-safe RPC dispatch without runtime overhead. The system uses compile-time hashing of method names to generate these identifiers, ensuring that method routing is both efficient and collision-resistant.
For information about implementing RPC methods and the broader service definition patterns, see Creating Service Definitions. For details on how method IDs are used during RPC dispatch, see Service Endpoint Interface.
Overview
Method ID generation is a critical component of Muxio's type-safe RPC system. Each RPC method must have a unique identifier that both client and server use to route requests to the correct handler. Rather than using string-based method names at runtime (which would require string parsing and comparison), Muxio generates fixed 64-bit integer identifiers at compile-time by hashing method names.
This approach provides several key benefits:
| Benefit | Description |
|---|---|
| Zero Runtime Overhead | Method IDs are compile-time constants; no hashing occurs during request dispatch |
| Type Safety | Duplicate method names cause compile-time errors, preventing runtime routing conflicts |
| Efficient Comparison | Integer comparison is faster than string comparison for method dispatch |
| Network Efficiency | 8-byte integers are transmitted instead of variable-length strings |
| Cross-Platform Consistency | The same hash algorithm produces identical IDs on all platforms |
Sources:
- README.md49
- Diagram 4 from high-level architecture
The METHOD_ID Field
Every RPC method type in Muxio must implement the RpcMethodPrebuffered trait, which requires a METHOD_ID constant of type u64. This constant is the compile-time generated identifier for that method.
graph TB
subgraph "Client Side"
ClientCode["Application calls Add::call()"]
EncodeReq["Add::encode_request()"]
MethodID1["Attach Add::METHOD_ID"]
end
subgraph "Network Transport"
Frame["Binary frame with method_id field"]
end
subgraph "Server Side"
DecodeFrame["Extract method_id from frame"]
Dispatch["RpcDispatcher routes by method_id"]
Handler["Registered Add handler invoked"]
end
ClientCode --> EncodeReq
EncodeReq --> MethodID1
MethodID1 --> Frame
Frame --> DecodeFrame
DecodeFrame --> Dispatch
Dispatch --> Handler
MethodID1 -.METHOD_ID = 0xABCD1234.-> Dispatch
The METHOD_ID field serves as the primary routing key throughout the RPC system:
Diagram: Method ID Flow Through RPC Call
During an RPC call, the client includes the METHOD_ID in the request frame. The server's dispatcher extracts this ID and uses it to look up the appropriate handler in its registry. This integer-based routing is significantly faster than string-based method name matching.
Sources:
- README.md:101-117
- Diagram 3 from high-level architecture
Compile-Time Hash Generation
Method IDs are generated using the xxhash-rust crate, which provides fast, non-cryptographic hashing. The hash is computed at compile-time from the method's string name, producing a deterministic 64-bit identifier.
graph LR
subgraph "Compile Time"
MethodName["Method name string\n(e.g., 'Add')"]
HashFunc["xxhash algorithm"]
MethodID["METHOD_ID constant\n(u64)"]
end
subgraph "Runtime"
NoHash["No hashing occurs"]
DirectUse["METHOD_ID used directly"]
end
MethodName --> HashFunc
HashFunc --> MethodID
MethodID --> NoHash
NoHash --> DirectUse
style MethodName fill:#f9f9f9,stroke:#333
style MethodID fill:#f9f9f9,stroke:#333
Hash Generation Process
Diagram: Compile-Time vs Runtime Hash Computation
The hash generation function is typically implemented as a const function or macro that can be evaluated at compile-time. This ensures that the METHOD_ID is embedded directly in the compiled binary as a constant value.
Implementation Details
The implementation relies on xxhash's ability to produce consistent hashes across compilations and platforms. The hash is computed from the UTF-8 bytes of the method name string:
METHOD_ID = xxhash64(method_name.as_bytes())
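A minimal sketch of that computation using xxhash-rust's const_xxh3 module (feature "const_xxh3" assumed enabled):

```rust
use xxhash_rust::const_xxh3::xxh3_64;

// Computed by the compiler; identical on every platform and in every build.
pub const METHOD_ID: u64 = xxh3_64("Add".as_bytes());
```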
Key characteristics of the hash generation:
| Characteristic | Value/Behavior |
|---|---|
| Hash Algorithm | xxHash (64-bit variant) |
| Input | UTF-8 encoded method name string |
| Output | u64 constant |
| Computation Time | Compile-time only |
| Determinism | Always produces same hash for same input |
| Collision Resistance | Very low probability with 64-bit space |
Sources:
- extensions/muxio-rpc-service/Cargo.toml16
- Diagram 4 from high-level architecture
Hash Algorithm: xxHash
Muxio uses the xxHash algorithm for method ID generation, as declared in the dependency specification at extensions/muxio-rpc-service/Cargo.toml16. xxHash is chosen specifically for its properties:
Why xxHash?
| Property | Benefit for Method IDs |
|---|---|
| Extremely Fast | Minimal compile-time overhead |
| Good Distribution | Low collision probability in typical method name sets |
| Deterministic | Same input always produces same output across platforms |
| Non-Cryptographic | No security requirements; speed is prioritized |
| 64-bit Output | Large enough space to avoid collisions in practical use |
Collision Probability
With 64-bit hashes, the probability of collision follows the birthday problem:
- For 100 methods: roughly 2.7 × 10⁻¹⁶ (about 1 in 3.7 × 10¹⁵)
- For 1,000 methods: roughly 2.7 × 10⁻¹⁴ (about 1 in 3.7 × 10¹³)
- For 10,000 methods: roughly 2.7 × 10⁻¹² (about 1 in 3.7 × 10¹¹)
In practice, most RPC services define far fewer than 1,000 methods, making collisions extremely unlikely. The system also provides compile-time collision detection (see next section).
Sources:
- extensions/muxio-rpc-service/Cargo.toml16
- Diagram 6 from high-level architecture
graph TB
subgraph "Service Definition Crate"
Add["Add struct\nMETHOD_ID = hash('Add')"]
Mult["Mult struct\nMETHOD_ID = hash('Mult')"]
Add2["AddNumbers struct\nMETHOD_ID = hash('Add')\n(hypothetical collision)"]
end
subgraph "Server Registration"
Registry["Method Handler Registry\nHashMap<u64, Handler>"]
end
subgraph "Compilation Result"
Success["✓ Compiles successfully"]
Error["✗ Duplicate key error\n'Add' and 'AddNumbers'\nboth hash to same ID"]
end
Add --> Registry
Mult --> Registry
Add2 -.collides.-> Registry
Add --> Success
Mult --> Success
Add2 --> Error
Collision Detection and Prevention
While xxHash provides good distribution, Muxio's type system ensures that duplicate method IDs cause compile-time errors rather than silent runtime failures.
Compile-Time Collision Detection
Diagram: Collision Detection at Compile-Time
The collision detection mechanism works through Rust's type system and const evaluation:
- Method Registration: when handlers are registered, each METHOD_ID becomes a key in a compile-time checked registry
- Duplicate Detection: if two methods produce the same METHOD_ID, the registration code will fail to compile
- Clear Error Messages: the compiler reports which method names collided
Best Practices to Avoid Collisions
| Practice | Rationale |
|---|---|
| Use Descriptive Names | Longer, more specific names reduce collision probability |
| Namespace Methods | Prefix related methods (e.g., User_Create, User_Delete) |
| Avoid Abbreviations | Add is better than Ad or A |
| Test at Compile Time | Collisions are caught during build, not in production |
Sources:
- Diagram 4 from high-level architecture
- README.md49
sequenceDiagram
participant Client as "RpcClient"
participant Encoder as "RpcRequest encoder"
participant Network as "Binary protocol"
participant Decoder as "RpcRequest decoder"
participant Dispatcher as "RpcDispatcher"
participant Registry as "Handler registry"
participant Handler as "Method handler"
Client->>Encoder: METHOD_ID + params
Note over Encoder: METHOD_ID = 0xABCD1234
Encoder->>Network: Binary frame [id=0xABCD1234]
Network->>Decoder: Receive frame
Decoder->>Dispatcher: Extract METHOD_ID
Dispatcher->>Registry: lookup(0xABCD1234)
Registry->>Handler: Get registered handler
Handler->>Handler: Execute method
Handler->>Dispatcher: Return response
Note over Dispatcher,Registry: O(1) integer lookup\nvs O(n) string comparison
Method ID Usage in RPC Dispatch
Once generated, method IDs flow through the entire RPC pipeline to enable efficient request routing.
Request Dispatch Flow
Diagram: Method ID in Request Dispatch Sequence
The dispatcher maintains a HashMap<u64, Handler> mapping method IDs to their handler functions. When a request arrives:
- The binary frame is decoded to extract the method_id field
- The dispatcher performs a hash map lookup using the 64-bit integer key
- If found, the corresponding handler is invoked
- If not found, a "method not found" error is returned
graph TB
subgraph "Dispatcher State"
Registry["Handler Registry\nHashMap<u64, Box<dyn Handler>>"]
end
subgraph "Registered Methods"
Add["0xABCD1234\n'Add' handler"]
Mult["0xDEF56789\n'Mult' handler"]
Echo["0x12345678\n'Echo' handler"]
end
subgraph "Incoming Request"
Request["RpcRequest\nmethod_id: 0xABCD1234"]
end
Add --> Registry
Mult --> Registry
Echo --> Registry
Request --> Registry
Registry -.lookup.-> Add
This integer-based lookup is significantly faster than string-based routing and consumes less memory in the dispatch table.
Handler Registry Structure
Diagram: Handler Registry Data Structure
The registry uses Rust's HashMap for O(1) average-case lookup performance. Each entry maps a METHOD_ID (u64) to a handler function or closure that can process requests for that method.
Sources:
- README.md:101-117
- Diagram 3 from high-level architecture
- extensions/muxio-rpc-service/Cargo.toml:1-18
graph TB
subgraph "Service Definition Crate"
Trait["RpcMethodPrebuffered"]
AddDef["Add implementation\nMETHOD_ID\nencode_request\ndecode_response"]
end
subgraph "Client Crate"
ClientCall["Add::call()\nuses Add::METHOD_ID\nuses Add::encode_request"]
end
subgraph "Server Crate"
ServerReg["register_prebuffered(Add::METHOD_ID)\nuses Add::decode_request"]
end
subgraph "Compile-Time Checks"
Check1["✓ Same METHOD_ID on client and server"]
Check2["✓ Same data structures in encode/decode"]
Check3["✓ Type mismatches = compile error"]
end
Trait --> AddDef
AddDef --> ClientCall
AddDef --> ServerReg
ClientCall --> Check1
ServerReg --> Check1
ClientCall --> Check2
ServerReg --> Check2
AddDef --> Check3
Type Safety Guarantees
The method ID generation system provides several compile-time guarantees that prevent entire classes of runtime errors.
Shared Definition Enforcement
Diagram: Type Safety Through Shared Definitions
Key guarantees provided by the system:
| Guarantee | Enforcement Mechanism |
|---|---|
| Method ID Consistency | Both client and server use Add::METHOD_ID from shared crate |
| Parameter Type Safety | Shared encode_request and decode_request functions |
| Response Type Safety | Shared encode_response and decode_response functions |
| API Contract Enforcement | Any change to method signature requires recompilation of both client and server |
Prevented Error Classes
By using compile-time method IDs and shared definitions, the following runtime errors become impossible:
- Method Name Typos : Cannot call a method that doesn't exist
- Parameter Type Mismatches : Wrong parameter types fail at compile-time
- Version Skew : Incompatible client/server versions won't compile against same service definition
- Routing Errors : Impossible to route to wrong handler due to integer-based dispatch
Sources:
- README.md49
- Diagram 4 from high-level architecture
Implementation Example
Here's how method ID generation integrates into a complete service definition, as demonstrated in the example application:
Service Definition Structure
Service Definition Crate
├── RpcMethodPrebuffered trait implementations
│ ├── Add::METHOD_ID = hash("Add")
│ ├── Add::encode_request
│ ├── Add::decode_request
│ ├── Add::encode_response
│ └── Add::decode_response
├── Mult::METHOD_ID = hash("Mult")
│ └── ... (similar structure)
└── Echo::METHOD_ID = hash("Echo")
└── ... (similar structure)
Client Usage
From README.md:144-151, the client invokes methods using the shared definitions, e.g. Add::call(client.as_ref(), params).await.
Internally, this:
- Uses Add::METHOD_ID to identify the method
- Calls Add::encode_request to serialize parameters
- Sends the request with the method ID
- Calls Add::decode_response to deserialize the result
Server Registration
From README.md:101-106, the server registers handlers for the same IDs via register_prebuffered(Add::METHOD_ID, ...).
The registration explicitly uses Add::METHOD_ID, ensuring that:
- Client calls to "Add" route to this handler
- The handler uses the correct decode/encode functions
- Any mismatch in type definitions causes a compile error
Sources:
graph TB
subgraph "Shared Service Definition"
Source["Add::METHOD_ID = hash('Add')"]
end
subgraph "Native Linux Build"
Linux["Compiled METHOD_ID\n0xABCD1234"]
end
subgraph "Native Windows Build"
Windows["Compiled METHOD_ID\n0xABCD1234"]
end
subgraph "WASM Build"
WASM["Compiled METHOD_ID\n0xABCD1234"]
end
Source --> Linux
Source --> Windows
Source --> WASM
Linux -.identical.-> Windows
Windows -.identical.-> WASM
Cross-Platform Consistency
Method IDs are generated identically across all platforms where Muxio runs, including native (x86, ARM), WebAssembly, and different operating systems.
Platform-Independent Hash Generation
Diagram: Cross-Platform Method ID Consistency
This consistency is critical for the "write once, deploy everywhere" architecture. The same service definition crate can be:
- Compiled into a native Tokio server
- Compiled into a native Tokio client
- Compiled into a WASM client for browsers
- Used in all three simultaneously
All instances will generate and use identical method IDs, ensuring interoperability.
Sources:
- README.md47
- extensions/muxio-wasm-rpc-client/Cargo.toml:1-30
- Diagram 2 from high-level architecture
Performance Characteristics
The compile-time method ID generation provides significant performance benefits compared to string-based method identification:
| Aspect | String-Based | Method ID (u64) | Improvement |
|---|---|---|---|
| Network Size | Variable (4-30 bytes) | Fixed (8 bytes) | ~50-75% reduction |
| Comparison Speed | O(n) string compare | O(1) integer compare | 10-100x faster |
| Memory Overhead | String allocation | Integer copy | Minimal |
| Hash Computation | At runtime | At compile-time | Zero runtime cost |
| Cache Efficiency | Poor (pointer chase) | Excellent (value type) | Better CPU utilization |
Benchmark Scenarios
In typical RPC scenarios:
- Method Dispatch : Integer-based lookup is 10-100x faster than string comparison
- Network Transmission : Fixed 8-byte ID saves bandwidth and reduces serialization time
- Memory Pressure : No string allocations or hash map overhead for method names
These optimizations are particularly valuable in high-throughput scenarios where thousands of RPC calls per second are processed.
Sources:
- README.md32
- README.md45
- Diagram 6 from high-level architecture
Summary
Method ID generation is a foundational element of Muxio's type-safe RPC system. By hashing method names at compile-time into 64-bit integers, the system achieves:
- Zero runtime overhead for method identification
- Compile-time collision detection preventing routing errors
- Efficient network transmission using fixed-size identifiers
- Type-safe APIs through shared service definitions
- Cross-platform consistency ensuring interoperability
The combination of compile-time hashing, shared trait implementations, and integer-based dispatch creates a robust foundation for building distributed systems where API contracts are enforced by the compiler rather than runtime validation.
Sources:
- README.md:1-166
- extensions/muxio-rpc-service/Cargo.toml:1-18
- All architecture diagrams
Serialization with Bitcode
Relevant source files
- Cargo.lock
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs
Purpose and Scope
This document explains how the bitcode library is used for binary serialization in the rust-muxio RPC system. It covers the serialization architecture, the integration with service definitions, encoding strategies for different payload sizes, and performance characteristics.
For information about defining custom RPC services that use this serialization, see Creating Service Definitions. For information about compile-time method identification, see Method ID Generation.
Overview
The rust-muxio system uses bitcode version 0.6.6 as its primary serialization format for encoding and decoding RPC request parameters and response values. Bitcode is a compact, binary serialization library that integrates with Serde, providing efficient serialization with minimal overhead compared to text-based formats like JSON.
Sources: Cargo.lock:158-168
Why Bitcode
The system uses bitcode for several key reasons:
| Feature | Benefit |
|---|---|
| Binary Format | Significantly smaller payload sizes compared to JSON or other text formats |
| Serde Integration | Works seamlessly with Rust's standard serialization ecosystem via #[derive(Serialize, Deserialize)] |
| Type Safety | Preserves Rust's strong typing across the network boundary |
| Performance | Fast encoding and decoding with minimal CPU overhead |
| Cross-Platform | Identical binary representation on native and WASM targets |
The compact binary representation is particularly important for the multiplexed stream architecture, where multiple concurrent RPC calls share a single WebSocket connection. Smaller payloads mean lower latency and higher throughput.
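A direct bitcode round trip, independent of the RPC framework, illustrating the wire format the trait methods produce (uses bitcode's native encode/decode API):

```rust
fn main() {
    let values: Vec<f64> = vec![1.0, 2.0, 3.0];

    // Compact binary form...
    let bytes: Vec<u8> = bitcode::encode(&values);
    // ...and back to a typed value.
    let decoded: Vec<f64> = bitcode::decode(&bytes).expect("round trip");

    assert_eq!(values, decoded);
    println!("{} bytes on the wire", bytes.len());
}
```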
Sources: Cargo.lock:158-168 high-level architecture diagrams
Integration with RpcMethodPrebuffered
Bitcode serialization is accessed through the RpcMethodPrebuffered trait, which defines four core methods for encoding and decoding:
Sources: Service definition pattern from integration tests
graph TB
subgraph "RpcMethodPrebuffered Trait"
TRAIT[RpcMethodPrebuffered]
ENC_REQ["encode_request(Input)\n→ Result<Vec<u8>, io::Error>"]
DEC_REQ["decode_request(&[u8])\n→ Result<Input, io::Error>"]
ENC_RES["encode_response(Output)\n→ Result<Vec<u8>, io::Error>"]
DEC_RES["decode_response(&[u8])\n→ Result<Output, io::Error>"]
TRAIT --> ENC_REQ
TRAIT --> DEC_REQ
TRAIT --> ENC_RES
TRAIT --> DEC_RES
end
subgraph "Bitcode Library"
BITCODE_ENC["bitcode::encode()"]
BITCODE_DEC["bitcode::decode()"]
end
ENC_REQ -.uses.-> BITCODE_ENC
DEC_REQ -.uses.-> BITCODE_DEC
ENC_RES -.uses.-> BITCODE_ENC
DEC_RES -.uses.-> BITCODE_DEC
subgraph "Service Implementation"
ADD_SERVICE["Add Method\nInput: Vec<f64>\nOutput: f64"]
MULT_SERVICE["Mult Method\nInput: Vec<f64>\nOutput: f64"]
ECHO_SERVICE["Echo Method\nInput: Vec<u8>\nOutput: Vec<u8>"]
end
ADD_SERVICE -.implements.-> TRAIT
MULT_SERVICE -.implements.-> TRAIT
ECHO_SERVICE -.implements.-> TRAIT
Serialization Flow
The following diagram shows how data transforms from typed Rust values to binary bytes and back during a complete RPC roundtrip:
sequenceDiagram
participant AppClient as "Application Code\n(Client)"
participant EncReq as "encode_request\nbitcode::encode"
participant Transport as "WebSocket Transport\nBinary Frames"
participant DecReq as "decode_request\nbitcode::decode"
participant AppServer as "Application Code\n(Server)"
participant EncRes as "encode_response\nbitcode::encode"
participant DecRes as "decode_response\nbitcode::decode"
AppClient->>EncReq: Vec<f64> input
Note over EncReq: Serialize to bytes
EncReq->>Transport: Vec<u8> (binary)
Transport->>DecReq: Vec<u8> (binary)
Note over DecReq: Deserialize from bytes
DecReq->>AppServer: Vec<f64> input
Note over AppServer: Process request
AppServer->>EncRes: f64 output
Note over EncRes: Serialize to bytes
EncRes->>Transport: Vec<u8> (binary)
Transport->>DecRes: Vec<u8> (binary)
Note over DecRes: Deserialize from bytes
DecRes->>AppClient: f64 output
This flow ensures that application code works with native Rust types on both sides, while bitcode handles the conversion to and from the binary wire format.
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:49-97 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:36-43
Encoding Strategy for Large Payloads
The system implements an intelligent encoding strategy that adapts based on payload size. This is necessary because RPC header frames have size limitations imposed by the underlying transport protocol.
graph TB
INPUT["Typed Input\n(e.g., Vec<f64>)"]
ENCODE["encode_request()"]
BYTES["Encoded Bytes\nVec<u8>"]
SIZE_CHECK{"Size ≥\nDEFAULT_SERVICE_MAX_CHUNK_SIZE\n(64 KB)?"}
SMALL["Place in\nrpc_param_bytes\n(Header Field)"]
LARGE["Place in\nrpc_prebuffered_payload_bytes\n(Chunked Stream)"]
REQ["RpcRequest\nmethod_id + params/payload"]
DISPATCHER["RpcDispatcher\nMultiplexing Layer"]
INPUT --> ENCODE
ENCODE --> BYTES
BYTES --> SIZE_CHECK
SIZE_CHECK -->|No| SMALL
SIZE_CHECK -->|Yes| LARGE
SMALL --> REQ
LARGE --> REQ
REQ --> DISPATCHER
This dual-path strategy is implemented in extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:55-65:
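A hedged sketch of that branch; the RpcRequest field names follow this page, but the exact struct definition (field optionality, other members, a Default impl) is an assumption:

```rust
// Inside RpcCallPrebuffered::call, conceptually:
let encoded = Add::encode_request(input)?;

let (param_bytes, payload_bytes) = if encoded.len() < DEFAULT_SERVICE_MAX_CHUNK_SIZE {
    // Small: ride along in the header frame.
    (Some(encoded), None)
} else {
    // Large: let the RpcDispatcher chunk it across frames.
    (None, Some(encoded))
};

let request = RpcRequest {
    rpc_method_id: Add::METHOD_ID,                  // field name assumed
    rpc_param_bytes: param_bytes,
    rpc_prebuffered_payload_bytes: payload_bytes,
    ..Default::default()                            // Default impl assumed
};
```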
Small Payload Path
When encoded arguments are less than 64 KB (the value of DEFAULT_SERVICE_MAX_CHUNK_SIZE):
- Bytes are placed in RpcRequest::rpc_param_bytes
- Transmitted in the initial header frame
- Most efficient for typical RPC calls
Large Payload Path
When encoded arguments are 64 KB or larger:
- Bytes are placed in RpcRequest::rpc_prebuffered_payload_bytes
- Automatically chunked by the RpcDispatcher
- Streamed as multiple frames after the header
- Handles arbitrarily large arguments
The server-side endpoint interface contains corresponding logic to check both fields when extracting request parameters.
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:30-72 muxio-rpc-service constants
Practical Example: Add Method
Here's how serialization works in practice for the Add RPC method:
Client-Side Encoding
Server-Side Decoding and Processing
Client-Side Response Handling
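The three phases named above, condensed into one schematic function (real deployments run the client and server halves in separate processes; Add is the service definition from Creating Service Definitions):

```rust
use std::io;

fn add_round_trip_in_one_place() -> Result<(), io::Error> {
    // Client-side encoding (performed inside Add::call).
    let request_bytes = Add::encode_request(vec![1.0, 2.0, 3.0])?;

    // Server-side decoding, processing, and response encoding (inside the handler).
    let params = Add::decode_request(&request_bytes)?;
    let sum: f64 = params.iter().sum();
    let response_bytes = Add::encode_response(sum)?;

    // Client-side response handling (back inside Add::call).
    let result = Add::decode_response(&response_bytes)?;
    assert_eq!(result, 6.0);
    Ok(())
}
```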
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:36-43 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:82-88
Large Payload Handling
The system is tested with payloads up to 200x the chunk size (approximately 12.8 MB) to ensure that large data sets can be transmitted reliably:
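A sketch of such a test, reusing the hypothetical spawn_test_server_and_client helper and an Echo-style method whose input and output are Vec<u8>:

```rust
#[tokio::test]
async fn large_payload_round_trip() {
    let client = spawn_test_server_and_client().await; // hypothetical helper

    // Roughly 200x the 64 KB chunk limit, forcing the chunked payload path.
    let payload = vec![0xABu8; 64 * 1024 * 200];

    let echoed = Echo::call(client.as_ref(), payload.clone()).await.unwrap();
    assert_eq!(echoed, payload);
}
```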
The RpcDispatcher in the multiplexing layer handles chunking transparently. Neither the application code nor the serialization layer needs to be aware of this streaming behavior.
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:155-203 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:230-312
Error Handling
Bitcode serialization can fail for several reasons:
Encoding Errors
| Error Type | Cause | Example |
|---|---|---|
| Type Mismatch | Data type doesn't implement Serialize | Non-serializable field in struct |
| Encoding Failure | Bitcode internal error | Extremely nested structures |
These are returned as io::Error from encode_request and encode_response methods.
Decoding Errors
| Error Type | Cause | Example |
|---|---|---|
| Invalid Format | Corrupted bytes or version mismatch | Network corruption |
| Type Mismatch | Server/client type definitions don't match | Field added without updating both sides |
| Truncated Data | Incomplete transmission | Connection interrupted mid-stream |
These are returned as io::Error from decode_request and decode_response methods.
All serialization errors are converted to RpcServiceError::Transport by the caller interface, ensuring consistent error handling throughout the RPC stack.
Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:87-96
graph LR
subgraph "Service Definition Crate"
TYPES["Shared Type Definitions\nstruct with Serialize/Deserialize"]
end
subgraph "Native Client/Server"
NATIVE_ENC["bitcode::encode()\nx86_64 / ARM64"]
NATIVE_DEC["bitcode::decode()\nx86_64 / ARM64"]
end
subgraph "WASM Client"
WASM_ENC["bitcode::encode()\nWebAssembly"]
WASM_DEC["bitcode::decode()\nWebAssembly"]
end
subgraph "Wire Format"
BYTES["Binary Bytes\n(Platform-Independent)"]
end
TYPES -.defines.-> NATIVE_ENC
TYPES -.defines.-> NATIVE_DEC
TYPES -.defines.-> WASM_ENC
TYPES -.defines.-> WASM_DEC
NATIVE_ENC --> BYTES
WASM_ENC --> BYTES
BYTES --> NATIVE_DEC
BYTES --> WASM_DEC
Cross-Platform Compatibility
One of bitcode's key advantages is that it produces identical binary representations across different platforms:
This ensures that:
- Native Tokio clients can communicate with native Tokio servers
- WASM clients can communicate with native Tokio servers
- Any client can communicate with any server, as long as they share the same service definitions
The cross-platform compatibility is validated by integration tests that run identical test suites against both Tokio and WASM clients.
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs high-level architecture diagrams
Performance Characteristics
Bitcode provides several performance advantages:
Encoding Speed
Bitcode uses a zero-copy architecture where possible, minimizing memory allocations during serialization. For primitive types and simple structs, encoding is extremely fast.
Decoding Speed
The binary format is designed for fast deserialization. Unlike text formats that require parsing, bitcode can often deserialize directly into memory with minimal transformations.
Payload Size Comparison
| Format | Example Payload | Size |
|---|---|---|
| JSON | [1.0, 2.0, 3.0] | ~19 bytes |
| Bitcode | [1.0, 2.0, 3.0] | ~8 bytes (depends on encoding) |
The exact size savings vary by data type and structure, but bitcode consistently produces smaller payloads than text-based formats.
Memory Usage
Bitcode minimizes temporary allocations during encoding and decoding. The library is designed to work efficiently with Rust's ownership model, often allowing zero-copy operations for borrowed data.
Sources: Cargo.lock:158-168 (bitcode dependency), general bitcode library characteristics
Version Compatibility
The service definition crate specifies the exact bitcode version (0.6.6) used across the entire system. This is critical for compatibility:
- Shared Dependencies : All crates use the same bitcode version via the workspace
- Binary Stability : Bitcode maintains binary compatibility within minor versions
- Migration Path : Upgrading bitcode requires updating all clients and servers simultaneously
To ensure compatibility, the service definition crate that implements RpcMethodPrebuffered should be shared as a compiled dependency, not duplicated across codebases.
Sources: Cargo.lock:158-168 workspace structure from high-level diagrams
Summary
The bitcode serialization layer provides:
- Efficient binary encoding for request/response data
- Transparent integration via the RpcMethodPrebuffered trait
- Automatic chunking for large payloads via smart routing
- Cross-platform compatibility between native and WASM targets
- Type safety through compile-time Rust types
Application code remains abstraction-free, working with native Rust types while bitcode handles the low-level serialization details. The integration with the RPC framework ensures that serialization errors are properly propagated as RpcServiceError::Transport for consistent error handling.
Sources: All sections above
Error Handling
Relevant source files
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs
- extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs
- src/rpc/rpc_dispatcher.rs
- src/rpc/rpc_internals/rpc_respondable_session.rs
Purpose and Scope
This document describes the error handling architecture in the rust-muxio system. It covers error types at each layer (framing, RPC, transport), error propagation mechanisms, and patterns for handling failures in distributed RPC calls. For information about RPC service definitions and method dispatch, see RPC Framework. For transport-specific connection management, see Transport State Management.
Error Type Hierarchy
The rust-muxio system defines errors at three distinct layers, each with its own error type. These types compose to provide rich error context while maintaining clean layer separation.
Sources:
graph TB
subgraph "Application Layer Errors"
RpcServiceError["RpcServiceError\n(muxio-rpc-service)"]
RpcVariant["Rpc(RpcServiceErrorPayload)"]
TransportVariant["Transport(io::Error)"]
CancelledVariant["Cancelled"]
end
subgraph "RPC Error Codes"
NotFound["NotFound\nMETHOD_ID not registered"]
Fail["Fail\nHandler returned error"]
System["System\nInternal failure"]
Busy["Busy\nResource unavailable"]
end
subgraph "Framing Layer Errors"
FrameEncodeError["FrameEncodeError\n(muxio core)"]
FrameDecodeError["FrameDecodeError\n(muxio core)"]
CorruptFrame["CorruptFrame"]
end
RpcServiceError --> RpcVariant
RpcServiceError --> TransportVariant
RpcServiceError --> CancelledVariant
RpcVariant --> NotFound
RpcVariant --> Fail
RpcVariant --> System
RpcVariant --> Busy
TransportVariant --> FrameEncodeError
TransportVariant --> FrameDecodeError
FrameEncodeError --> CorruptFrame
FrameDecodeError --> CorruptFrame
- extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:5-9
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:2-5
RpcServiceError
RpcServiceError is the primary error type exposed to application code. It represents failures that can occur during RPC method invocation, from method lookup through response decoding.
Error Variants
| Variant | Description | Typical Cause |
|---|---|---|
| Rpc(RpcServiceErrorPayload) | Remote method handler returned an error | Handler logic failure, validation error, or internal server error |
| Transport(io::Error) | Low-level transport or encoding failure | Network disconnection, frame corruption, or serialization failure |
| Cancelled | Request was cancelled before completion | Connection dropped while request was pending |
RpcServiceErrorPayload
The RpcServiceErrorPayload structure provides detailed information about server-side errors:
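Its shape, as described on this page (derives and field visibility are assumptions):

```rust
pub struct RpcServiceErrorPayload {
    /// Category of the failure (NotFound, Fail, System, Busy).
    pub code: RpcServiceErrorCode,
    /// Human-readable description produced by the server.
    pub message: String,
}
```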
RpcServiceErrorCode
| Code | Usage | Example Scenario |
|---|---|---|
| NotFound | Method ID not registered on server | Client calls method that server doesn't implement |
| Fail | Handler explicitly returned an error | Business logic validation failed (e.g., "item does not exist") |
| System | Internal server panic or system failure | Handler panicked or server encountered critical error |
| Busy | Server cannot accept request | Resource exhaustion or rate limiting |
Sources:
- extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:155-158
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:140-144
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:217-222
Framing Layer Errors
The framing layer defines two error types for low-level binary protocol operations. These errors are typically wrapped in RpcServiceError::Transport before reaching application code.
FrameEncodeError
Occurs when encoding RPC requests or responses into binary frames fails. Common causes:
- Frame data exceeds maximum allowed size
- Corrupt internal state during encoding
- Memory allocation failure
FrameDecodeError
Occurs when decoding binary frames into RPC messages fails. Common causes:
- Malformed or truncated frame data
- Protocol version mismatch
- Corrupt frame headers
- Mutex poisoning in shared state
Sources:
Error Propagation Through RPC Layers
The following diagram shows how errors flow from their point of origin through the system layers to the application code:
Sources:
sequenceDiagram
participant App as "Application Code"
participant Caller as "RpcCallPrebuffered\ntrait"
participant Client as "RpcServiceCallerInterface\nimplementation"
participant Dispatcher as "RpcDispatcher"
participant Transport as "WebSocket or\nother transport"
participant Server as "RpcServiceEndpointInterface"
participant Handler as "Method Handler"
Note over App,Handler: Success Path (for context)
App->>Caller: method::call(args)
Caller->>Client: call_rpc_buffered()
Client->>Dispatcher: call()
Dispatcher->>Transport: emit frames
Transport->>Server: receive frames
Server->>Handler: invoke handler
Handler->>Server: Ok(response_bytes)
Server->>Transport: emit response
Transport->>Dispatcher: read_bytes()
Dispatcher->>Client: decode response
Client->>Caller: Ok(result)
Caller->>App: Ok(output)
Note over App,Handler: Error Path 1: Handler Failure
App->>Caller: method::call(args)
Caller->>Client: call_rpc_buffered()
Client->>Dispatcher: call()
Dispatcher->>Transport: emit frames
Transport->>Server: receive frames
Server->>Handler: invoke handler
Handler->>Server: Err("validation failed")
Server->>Transport: response with Fail code
Transport->>Dispatcher: read_bytes()
Dispatcher->>Client: decode RpcServiceError::Rpc
Client->>Caller: Err(RpcServiceError::Rpc)
Caller->>App: Err(RpcServiceError::Rpc)
Note over App,Handler: Error Path 2: Transport Failure
App->>Caller: method::call(args)
Caller->>Client: call_rpc_buffered()
Client->>Dispatcher: call()
Dispatcher->>Transport: emit frames
Transport--xClient: connection dropped
Dispatcher->>Dispatcher: fail_all_pending_requests()
Dispatcher->>Client: RpcStreamEvent::Error
Client->>Caller: Err(RpcServiceError::Cancelled)
Caller->>App: Err(RpcServiceError::Cancelled)
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:49-96
- src/rpc/rpc_dispatcher.rs:226-286
Dispatcher Error Handling
The RpcDispatcher is responsible for correlating requests with responses and managing error conditions during stream processing.
Mutex Poisoning Policy
The dispatcher uses a Mutex to protect the shared request queue. If this mutex becomes poisoned (i.e., a thread panicked while holding the lock), the dispatcher implements a "fail-fast" policy: lock acquisition failures are converted into hard errors rather than being silently ignored.
This design choice prioritizes safety over graceful degradation. A poisoned queue indicates partial state mutation, and continuing could lead to:
- Incorrect request/response correlation
- Data loss or duplication
- Undefined behavior in dependent code
Sources:
Stream Error Events
When decoding errors occur during stream processing, the dispatcher generates RpcStreamEvent::Error events.
These events are delivered to registered response handlers, allowing them to detect and react to mid-stream failures.
Sources:
graph TB
A["fail_all_pending_requests()"] --> B["Take ownership of\nresponse_handlers map"]
B --> C["Iterate over all\npending request IDs"]
C --> D["Create synthetic\nRpcStreamEvent::Error"]
D --> E["Invoke handler\nwith error event"]
E --> F["Handler wakes\nawaiting Future"]
F --> G["Future resolves to\nRpcServiceError::Cancelled"]
Connection Failure Cleanup
When a transport connection is dropped, all pending requests must be notified to prevent indefinite waiting. The fail_all_pending_requests() method handles this by synthesizing an error event for every pending request, as shown in the diagram above.
This ensures that all client code waiting for responses receives a timely error indication rather than hanging indefinitely.
Sources:
graph TB
Start["call(client, input)"] --> Encode["encode_request(input)"]
Encode -->|io::Error| EncodeErr["Return RpcServiceError::Transport"]
Encode -->|Success| CreateReq["Create RpcRequest"]
CreateReq --> CallBuffered["call_rpc_buffered()"]
CallBuffered -->|RpcServiceError| ReturnErr1["Return error directly"]
CallBuffered -->|Success| Unwrap["Unwrap nested result"]
Unwrap --> CheckInner["Check inner Result"]
CheckInner -->|Ok bytes| Decode["decode_response(bytes)"]
CheckInner -->|Err RpcServiceError| ReturnErr2["Return RpcServiceError"]
Decode -->|io::Error| DecodeErr["Wrap as Transport error"]
Decode -->|Success| Success["Return decoded output"]
Error Handling in RpcCallPrebuffered
The RpcCallPrebuffered trait implementation demonstrates the complete error handling flow from the client's perspective (see the flow diagram above).
The nested result structure (Result<Result<T, io::Error>, RpcServiceError>) separates transport-level errors from decoding errors:
- Outer Result: transport or RPC-level errors (RpcServiceError)
- Inner Result: decoding errors after successful transport (io::Error from deserialization)
Sources:
Testing Error Conditions
Handler Failures
Integration tests verify that handler errors propagate correctly to clients:
Sources:
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:113-151
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:155-227
Method Not Found
Tests verify that calling unregistered methods returns NotFound errors:
Sources:
Mock Client Error Injection
Unit tests use mock clients to inject specific error conditions:
Sources:
Error Code Mapping
The following table shows how different failure scenarios map to RpcServiceErrorCode values:
| Scenario | Code | Typical Message | Detected By |
|---|---|---|---|
| Method ID not in registry | NotFound | "Method not found" | Server endpoint |
| Handler returns Err(String) | System | Handler error message | Server endpoint |
| Handler panics | System | "Method has panicked" | Server endpoint (catch_unwind) |
| Business logic failure | Fail | Custom validation message | Handler implementation |
| Transport disconnection | N/A (Cancelled variant) | N/A | Dispatcher on connection drop |
| Frame decode error | N/A (Transport variant) | Varies | Framing layer |
| Serialization failure | N/A (Transport variant) | "Failed to encode/decode" | Bitcode layer |
Sources:
- extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:155-212
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:232-239
Best Practices
For Service Implementations
-
Use
Failfor Expected Errors: ReturnErr("descriptive message")from handlers for expected failure cases like validation errors or missing resources. -
Let System Handle Panics : If a handler panics, the server automatically converts it to a
Systemerror. No explicit panic handling is needed. -
Provide Descriptive Messages : Error messages are transmitted to clients and should contain enough context for debugging without exposing sensitive information.
For Client Code
-
Match on Error Variants : Distinguish between recoverable errors (
RpcwithFailcode) and fatal errors (Cancelled,Transport). -
Handle Connection Loss : Be prepared for
Cancellederrors and implement appropriate reconnection logic. -
Don't Swallow Transport Errors :
Transporterrors indicate serious issues like protocol corruption and should be logged or escalated.
For Testing
-
Test Both Success and Failure Paths : Every RPC method should have tests for successful calls and expected error conditions.
-
Verify Error Codes : Match on specific
RpcServiceErrorCodevalues rather than just checkingis_err(). -
Test Connection Failures : Simulate transport disconnection to ensure proper cleanup and error propagation.
Sources:
- extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:95-212
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:99-240
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:144-227
RPC Service Errors
Relevant source files
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs
- extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs
Purpose and Scope
This document describes the error handling system used by the RPC framework layer in rust-muxio. It covers the RpcServiceError type, its variants, error codes, and how errors propagate between client and server. For information about lower-level transport errors and connection failures, see Transport Errors. For details on how errors are created in service handlers, see Service Endpoint Interface.
Error Type Hierarchy
The RPC framework uses a structured error system centered around the RpcServiceError enum, which distinguishes between protocol-level RPC errors and underlying transport errors.
Sources:
graph TB
RSE["RpcServiceError"]
RSE --> Rpc["Rpc(RpcServiceErrorPayload)"]
RSE --> Transport["Transport(io::Error)"]
Rpc --> Payload["RpcServiceErrorPayload"]
Payload --> Code["code: RpcServiceErrorCode"]
Payload --> Message["message: String"]
Code --> NotFound["NotFound"]
Code --> Fail["Fail"]
Code --> System["System"]
NotFound -.- NF_Desc["Method not registered\non server"]
Fail -.- Fail_Desc["Application-level\nerror from handler"]
System -.- Sys_Desc["Server panic or\ninternal error"]
- extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs8
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs4
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs4
RpcServiceError Variants
The RpcServiceError type has two primary variants that separate RPC protocol errors from transport layer failures.
Rpc Variant
The Rpc variant wraps an RpcServiceErrorPayload and represents errors that occur at the RPC protocol layer. These errors are serialized and transmitted from the server to the client as part of the RPC protocol itself.
Structure:
- code: An RpcServiceErrorCode enum indicating the error category
- message: A human-readable error description string
Example usage from tests:
Sources:
Transport Variant
The Transport variant wraps a standard io::Error and represents errors that occur in the underlying transport layer, such as serialization failures or network issues that prevent the RPC from completing.
Example usage:
Sources:
RpcServiceErrorCode Values
The RpcServiceErrorCode enum defines standardized error categories used throughout the RPC system. Each code has specific semantics for how the error should be interpreted and potentially recovered from.
| Error Code | Meaning | Typical Scenario |
|---|---|---|
| NotFound | The requested method ID is not registered on the server | Client calls a method that the server doesn't implement |
| Fail | Application-level error returned by the handler | Business logic failure (e.g., "user not found", "invalid input") |
| System | Internal server error or panic in the handler | Handler panicked, internal consistency error, or unexpected condition |
NotFound Error Code
Used when a client attempts to invoke an RPC method that the server has not registered. This typically indicates a version mismatch between client and server service definitions.
Test example:
Sources:
Fail Error Code
Used for application-level errors that are part of normal business logic. These errors are expected and recoverable.
Test example:
Sources:
System Error Code
Used for internal server errors, including handler panics or unexpected system failures. These typically indicate bugs or serious issues on the server side.
Test example:
Sources:
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:114-150
- extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:197-200
sequenceDiagram
participant Client as "RpcCallPrebuffered\n::call()"
participant CallerIface as "RpcServiceCaller\nInterface"
participant Dispatcher as "RpcDispatcher"
participant Transport as "WebSocket"
participant ServerDisp as "Server\nRpcDispatcher"
participant Endpoint as "RpcServiceEndpoint\nInterface"
participant Handler as "Method Handler"
Client->>CallerIface: call_rpc_buffered(request)
CallerIface->>Dispatcher: Register request
Dispatcher->>Transport: Send RPC frames
Transport->>ServerDisp: Receive frames
ServerDisp->>Endpoint: dispatch by METHOD_ID
Endpoint->>Handler: invoke handler
Note over Handler: Handler returns\nErr("Addition failed")
Handler-->>Endpoint: Err(String)
Note over Endpoint: Convert to RpcServiceError::Rpc\nwith code=System
Endpoint-->>ServerDisp: Serialize error payload
ServerDisp-->>Transport: Send error response frames
Transport-->>Dispatcher: Receive error frames
Dispatcher-->>CallerIface: Deserialize error
CallerIface-->>Client: Err(RpcServiceError::Rpc)
Note over Client: Pattern match on error:\ncode and message available
Error Propagation Flow
This sequence diagram illustrates how errors propagate from a server-side handler back to the client, including serialization and protocol-level wrapping.
Sources:
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:100-152
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:49-97
Error Creation on Server Side
Server-side handlers can return errors in multiple ways, all of which are converted to RpcServiceError by the endpoint interface.
Handler Return Types
Handlers registered via register_prebuffered return Result<Vec<u8>, E> where E implements Into<RpcServiceError>. The most common patterns are:
Returning a String error:
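A sketch of a handler body that returns a String error; the surrounding register_prebuffered call is omitted because its exact signature is transport-specific:

```rust
// Pass a closure like this to register_prebuffered under Add::METHOD_ID.
// Returning Err(String) is converted by the endpoint into an RPC error
// with code = System.
let handle_add = |request_bytes: Vec<u8>| async move {
    let params = Add::decode_request(&request_bytes).map_err(|e| e.to_string())?;

    if params.is_empty() {
        // Expected business-rule failure, surfaced to the client.
        return Err("no operands supplied".to_string());
    }

    let sum: f64 = params.iter().sum();
    Add::encode_response(sum).map_err(|e| e.to_string())
};
```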
This automatically converts to RpcServiceError::Rpc with code: RpcServiceErrorCode::System.
Sources:
graph LR
Request["RpcRequest\nmethod_id=12345"] --> Lookup["Endpoint method\nlookup"]
Lookup -->|Not found| AutoError["Auto-generate\nRpcServiceError::Rpc"]
AutoError --> ErrorPayload["RpcServiceErrorPayload\ncode=NotFound"]
ErrorPayload --> Response["Send error response\nto client"]
Method Not Found Errors
When a client calls a method that isn't registered, the server's endpoint interface automatically generates a NotFound error without involving any handler:
Sources:
Error Handling on Client Side
Clients receive errors as part of the normal RPC response flow and must pattern match on the error type to handle different scenarios appropriately.
Pattern Matching on Error Codes
The recommended pattern for handling RPC errors is to match on the RpcServiceError variants and then examine the error code:
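A sketch of that pattern; the import paths are assumptions, while the variant and code names follow this page:

```rust
use muxio_rpc_service::error::{RpcServiceError, RpcServiceErrorCode}; // path assumed

async fn call_and_report(client: std::sync::Arc<muxio_tokio_rpc_client::RpcClient>) {
    match Add::call(client.as_ref(), vec![1.0, 2.0]).await {
        Ok(sum) => println!("sum = {sum}"),
        Err(RpcServiceError::Rpc(payload)) => match payload.code {
            RpcServiceErrorCode::NotFound => eprintln!("method not registered on server"),
            RpcServiceErrorCode::Fail => eprintln!("application error: {}", payload.message),
            RpcServiceErrorCode::System => eprintln!("server-side failure: {}", payload.message),
            _ => eprintln!("unexpected RPC error: {}", payload.message),
        },
        Err(RpcServiceError::Transport(io_err)) => eprintln!("transport failure: {io_err}"),
        Err(other) => eprintln!("call failed: {other:?}"),
    }
}
```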
Sources:
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:136-150
- extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:170-176
Error Propagation Through RpcCallPrebuffered
The RpcCallPrebuffered trait implementation automatically propagates errors from the underlying call_rpc_buffered method. It handles two levels of Result nesting:
- Outer Result: represents transport-level success or failure
- Inner Result: contains either the decoded response or RpcServiceError::Rpc
Sources:
Common Error Scenarios
Scenario 1: Unregistered Method Call
Situation: Client calls a method that the server doesn't have registered.
Error Flow:
- Client encodes request and sends to server
- Server dispatcher receives frames and extracts the METHOD_ID
- Endpoint lookup fails to find a handler
- Server automatically creates RpcServiceError::Rpc with code: NotFound
- Error is serialized and sent back to the client
- Client receives and deserializes the error
Client-side result:
Sources:
Scenario 2: Handler Returns Application Error
Situation: Handler logic determines that the request cannot be fulfilled due to business logic constraints.
Error Flow:
- Server invokes handler with decoded request
- Handler returns Err("validation failed")
- Endpoint converts the string to RpcServiceError::Rpc with code: System
- Error is serialized and transmitted
- Client receives error with code and message
Client-side result:
Sources:
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:100-152
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:144-227
Scenario 3: Deserialization Failure
Situation: Client successfully receives response bytes but fails to deserialize them into the expected type.
Error Flow:
- Server sends valid response bytes
- Client dispatcher reassembles payload
- decode_response closure returns io::Error
- Error is wrapped as RpcServiceError::Transport
Client-side result:
Sources:
Integration with Transport Layer
While this document focuses on RPC-level errors, it's important to understand how these errors interact with transport-level failures:
| Layer | Error Type | Scope |
|---|---|---|
| RPC Protocol | RpcServiceError::Rpc | Method not found, handler failures, application errors |
| Transport | RpcServiceError::Transport | Serialization, deserialization, io::Error |
| Connection | Connection state changes | Disconnection, reconnection (see Transport State Management) |
The separation ensures that RPC-level errors (which are part of the protocol) remain distinct from transport-level errors (which indicate infrastructure failures).
Sources:
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:1-99
- extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:1-213
Transport Errors
Relevant source files
- extensions/muxio-rpc-service-caller/src/lib.rs
- extensions/muxio-rpc-service-caller/tests/dynamic_channel_tests.rs
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs
- extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs
- src/rpc/rpc_dispatcher.rs
- src/rpc/rpc_internals/rpc_respondable_session.rs
Purpose and Scope
This document explains transport-level errors in the rust-muxio system—failures that occur at the connection and network layer rather than in RPC method execution. Transport errors include connection failures, unexpected disconnections, stream cancellations, and network I/O errors. These errors are distinct from RPC service errors (see RPC Service Errors), which represent application-level failures like method-not-found or serialization errors.
Transport errors affect the underlying communication channel and typically result in all pending requests being cancelled. The system provides automatic cleanup mechanisms to ensure that no requests hang indefinitely when transport fails.
Transport Error Categories
The system handles three primary categories of transport errors, each with distinct handling mechanisms and propagation paths.
Connection Establishment Errors
Connection errors occur during the initial WebSocket handshake or TCP connection setup. These are synchronous errors that prevent the client from being created.
Key Error Type : std::io::Error with kind ConnectionRefused
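A hedged sketch of handling that error at connection time; the RpcClient::new signature and returned error type are assumptions based on the description above:

```rust
async fn connect_or_report() {
    use std::io::ErrorKind;

    match muxio_tokio_rpc_client::RpcClient::new("127.0.0.1", 9999).await {
        Ok(_client) => println!("connected"),
        Err(err) if err.kind() == ErrorKind::ConnectionRefused => {
            eprintln!("server not reachable: {err}");
        }
        Err(err) => eprintln!("connection failed: {err}"),
    }
}
```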
extensions/muxio-tokio-rpc-client/src/rpc_client.rs:118-121
Sources:
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:110-126
- extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:17-31
Runtime Disconnection Errors
Disconnections occur after a successful connection when the WebSocket stream encounters an error or closes unexpectedly. These trigger automatic cleanup of all pending requests.
Key Error Type : FrameDecodeError::ReadAfterCancel
extensions/muxio-tokio-rpc-client/src/rpc_client.rs102
Sources:
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:156-220
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:80-108
Frame-Level Decode Errors
Frame decode errors occur when the binary framing protocol receives malformed data. These are represented by FrameDecodeError variants from the core library.
Key Error Types:
- FrameDecodeError::CorruptFrame - invalid frame structure
- FrameDecodeError::ReadAfterCancel - stream cancelled by transport
- FrameDecodeError::UnexpectedEOF - incomplete frame data
src/rpc/rpc_dispatcher.rs:187-206
Sources:
Transport Error Types and Code Entities
Sources:
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:110-126
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:80-108
- src/rpc/rpc_dispatcher.rs:428-456
- src/rpc/rpc_dispatcher.rs:187-206
- extensions/muxio-rpc-service-caller/src/transport_state.rs
Error Propagation Through RpcDispatcher
When a transport error occurs, the RpcDispatcher is responsible for propagating the error to all pending request handlers. This prevents requests from hanging indefinitely.
sequenceDiagram
participant Transport as "WebSocket Transport"
participant Client as "RpcClient"
participant Dispatcher as "RpcDispatcher"
participant Handlers as "Response Handlers"
participant App as "Application Code"
Note over Transport: Connection failure detected
Transport->>Client: ws_receiver error
Client->>Client: shutdown_async()
Note over Client: is_connected.swap(false)
Client->>Dispatcher: dispatcher.lock().await
Client->>Dispatcher: fail_all_pending_requests(error)
Note over Dispatcher: Take ownership of handlers\nstd::mem::take()
loop For each pending request
Dispatcher->>Handlers: Create RpcStreamEvent::Error
Note over Handlers: rpc_request_id: Some(id)\nframe_decode_error: ReadAfterCancel
Handlers->>Handlers: handler(error_event)
Handlers->>App: Resolve Future with error
end
Note over Dispatcher: response_handlers now empty
Note over Client: State change handler called\nRpcTransportState::Disconnected
Dispatcher Error Propagation Sequence
Sources:
- src/rpc/rpc_dispatcher.rs:428-456
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:80-108
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:186-198
Automatic Disconnection Handling
The RpcClient implements automatic disconnection handling through three concurrent tasks that monitor the WebSocket connection and coordinate cleanup.
Client Task Architecture
| Task | Responsibility | Error Detection | Cleanup Action |
|---|---|---|---|
| Receive Loop | Reads WebSocket messages | Detects ws_receiver.next() errors or None | Spawns shutdown_async() |
| Send Loop | Writes WebSocket messages | Detects ws_sender.send() errors | Spawns shutdown_async() |
| Heartbeat Loop | Periodic ping messages | Detects channel closed | Exits task |
extensions/muxio-tokio-rpc-client/src/rpc_client.rs:139-257
Shutdown Synchronization
The client provides both synchronous and asynchronous shutdown paths to handle different scenarios:
Asynchronous Shutdown (shutdown_async):
- Called by background tasks when detecting errors
- Acquires dispatcher lock to prevent new RPC calls
- Calls fail_all_pending_requests with a ReadAfterCancel error
- Invokes the state change handler with RpcTransportState::Disconnected
extensions/muxio-tokio-rpc-client/src/rpc_client.rs:80-108
Synchronous Shutdown (shutdown_sync):
- Called from the Drop implementation
- Does not acquire locks (avoids deadlock during cleanup)
- Only invokes state change handler
- Aborts all background tasks
extensions/muxio-tokio-rpc-client/src/rpc_client.rs:56-77
Key Synchronization Mechanism : AtomicBool::is_connected
The is_connected flag uses SeqCst ordering to ensure:
- Only one shutdown path executes
- Send loop drops messages if disconnected
- Emit function rejects outgoing RPC data
extensions/muxio-tokio-rpc-client/src/rpc_client.rs61 extensions/muxio-tokio-rpc-client/src/rpc_client.rs85 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:231-236 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:294-297
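The pattern below is a simplified, self-contained sketch (not the actual rpc_client.rs code) of how a SeqCst swap on an AtomicBool guarantees that only one shutdown path performs cleanup:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

/// Simplified stand-in for the client's connection flag.
struct ConnectionFlag {
    is_connected: Arc<AtomicBool>,
}

impl ConnectionFlag {
    /// Returns true only for the first caller that flips the flag from
    /// `true` to `false`; every later caller sees `false` and skips cleanup.
    fn begin_shutdown(&self) -> bool {
        self.is_connected.swap(false, Ordering::SeqCst)
    }
}

fn main() {
    let flag = ConnectionFlag {
        is_connected: Arc::new(AtomicBool::new(true)),
    };
    assert!(flag.begin_shutdown());  // first shutdown path wins
    assert!(!flag.begin_shutdown()); // subsequent attempts are no-ops
}
```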
Sources:
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:56-108
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:139-257
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:42-52
graph TB
subgraph "Before Disconnect"
PH1["response_handlers HashMap\nrequest_id → handler"]
PR1["Pending Request 1\nawaiting response"]
PR2["Pending Request 2\nawaiting response"]
PR3["Pending Request 3\nawaiting response"]
PH1 --> PR1
PH1 --> PR2
PH1 --> PR3
end
subgraph "Cancellation Process"
DC["Disconnect detected"]
FP["fail_all_pending_requests()\nstd::mem::take(&mut handlers)"]
DC --> FP
end
subgraph "Handler Invocation"
LE["Loop over handlers"]
CE["Create RpcStreamEvent::Error\nrpc_request_id: Some(id)\nframe_decode_error: ReadAfterCancel"]
CH["handler(error_event)"]
FP --> LE
LE --> CE
CE --> CH
end
subgraph "After Cancellation"
PH2["response_handlers HashMap\n(empty)"]
ER1["Request 1 fails with\nRpcServiceError::TransportError"]
ER2["Request 2 fails with\nRpcServiceError::TransportError"]
ER3["Request 3 fails with\nRpcServiceError::TransportError"]
CH --> ER1
CH --> ER2
CH --> ER3
PH2 -.handlers cleared.-> ER1
end
style DC fill:#ffcccc
style FP fill:#ffcccc
style ER1 fill:#ffcccc
style ER2 fill:#ffcccc
style ER3 fill:#ffcccc
Pending Request Cancellation
When a transport error occurs, all pending RPC requests must be cancelled to prevent application code from hanging indefinitely. The system achieves this through the fail_all_pending_requests method.
Cancellation Mechanism
Sources:
- src/rpc/rpc_dispatcher.rs:428-456
- extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:169-292
Implementation Details
The fail_all_pending_requests method in RpcDispatcher:
- Takes ownership of all response handlers using std::mem::take
  - Leaves response_handlers empty
  - Prevents new errors from affecting already-cancelled requests
- Creates synthetic error events for each pending request:
  - RpcStreamEvent::Error with FrameDecodeError::ReadAfterCancel
  - Includes rpc_request_id for correlation
  - Omits rpc_header and rpc_method_id (not needed for cancellation)
- Invokes each handler with the error event:
  - Wakes up the waiting Future in application code
  - Results in RpcServiceError::TransportError propagated to the caller
src/rpc/rpc_dispatcher.rs:428-456
Critical Design Note : Handler removal via std::mem::take prevents the catch-all handler from processing error events for already-cancelled requests, avoiding duplicate error notifications.
Sources:
- src/rpc/rpc_dispatcher.rs:428-456
Transport State Change Notifications
Applications can register a state change handler to receive notifications when the transport connection state changes. This enables reactive UI updates and connection retry logic.
State Change Handler Interface
RpcTransportState Enum
| State | Meaning | Triggered When |
|---|---|---|
| Connected | Transport is active | Client successfully connects, or handler registered on already-connected client |
| Disconnected | Transport has failed | WebSocket error detected, connection closed, or client dropped |
extensions/muxio-rpc-service-caller/src/transport_state.rs
Handler Invocation Guarantees
The state change handler is invoked with the following guarantees:
- Immediate callback on registration : If the client is already connected when set_state_change_handler is called, the handler is immediately invoked with Connected
- Single disconnection notification : The is_connected atomic flag ensures only one thread invokes the Disconnected handler
- Thread-safe invocation : The handler is called while holding the state_change_handler mutex, preventing concurrent modifications
extensions/muxio-tokio-rpc-client/src/rpc_client.rs:315-334
Sources:
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:315-334
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:56-108
- extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:36-165
- extensions/muxio-rpc-service-caller/src/transport_state.rs
Error Handling in Stream Events
The RpcDispatcher and RpcRespondableSession track stream-level errors through the RpcStreamEvent::Error variant. These errors are distinct from transport disconnections—they represent protocol-level decode failures during frame reassembly.
RpcStreamEvent::Error Structure
Error Event Processing
When RpcDispatcher::read_bytes encounters a decode error:
- Error logged : Tracing output includes method ID, header, and request ID context
- Queue unaffected : Unlike response events, error events do not remove entries from rpc_request_queue
- Handler not invoked : The catch-all response handler processes the error but does not delete the queue entry
src/rpc/rpc_dispatcher.rs:187-206
Design Rationale : Error events do not automatically clean up queue entries because:
- Partial streams may still be recoverable
- Application code may need to inspect incomplete payloads
- Explicit deletion via delete_rpc_request gives the caller control
TODO in codebase : Consider auto-removing errored requests from queue or marking them with error state.
Sources:
- src/rpc/rpc_dispatcher.rs:187-206
- src/rpc/rpc_dispatcher.rs:99-209
- src/rpc/rpc_respondable_session.rs:93-173
Mutex Poisoning and Error Recovery
The RpcDispatcher uses Mutex to protect the rpc_request_queue. If a thread panics while holding this lock, the mutex becomes "poisoned" and subsequent lock attempts return an error. The dispatcher treats poisoned mutexes as fatal errors.
Poisoning Handling Strategy
In catch-all response handler :
src/rpc/rpc_dispatcher.rs:104-118
In read_bytes :
src/rpc/rpc_dispatcher.rs:367-370
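Both call sites treat a poisoned lock as fatal. The following is a rough, self-contained sketch of that fail-fast approach (not the dispatcher's exact code):

```rust
use std::collections::VecDeque;
use std::sync::Mutex;

/// Simplified stand-in for the dispatcher's request queue.
struct Dispatcher {
    rpc_request_queue: Mutex<VecDeque<u32>>,
}

impl Dispatcher {
    fn read_bytes(&self, request_id: u32) {
        // A poisoned mutex means another thread panicked while mutating the
        // queue; continuing could silently corrupt request routing, so the
        // code fails fast instead of attempting partial recovery.
        let mut queue = self
            .rpc_request_queue
            .lock()
            .expect("rpc_request_queue mutex poisoned; shared state is inconsistent");
        queue.push_back(request_id);
    }
}

fn main() {
    let dispatcher = Dispatcher {
        rpc_request_queue: Mutex::new(VecDeque::new()),
    };
    dispatcher.read_bytes(42);
}
```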
Rationale : A poisoned queue indicates inconsistent shared state. Continuing could result in:
- Incorrect request routing
- Lost response data
- Silent data corruption
The dispatcher crashes fast to provide clear debugging signals rather than attempting partial recovery.
Sources:
- src/rpc/rpc_dispatcher.rs:104-118
- src/rpc/rpc_dispatcher.rs:367-370
Best Practices for Handling Transport Errors
1. Register State Change Handlers Early
Always register a state change handler before making RPC calls to ensure disconnection events are captured:
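A minimal sketch of early registration follows; the import paths and the exact signature of set_state_change_handler are assumptions for illustration, not copied from the crate:

```rust
// Illustrative sketch only: import paths and exact signatures are assumptions.
use muxio_rpc_service_caller::RpcTransportState; // assumed re-export location
use muxio_tokio_rpc_client::RpcClient;           // assumed import path

async fn connect_with_monitoring(host: &str, port: u16) -> RpcClient {
    let client = RpcClient::new(host, port)
        .await
        .expect("client should connect");

    // Register the handler *before* issuing any RPC calls so that a disconnect
    // occurring mid-call is always observed. (The real method may be async.)
    client.set_state_change_handler(|state| match state {
        RpcTransportState::Connected => eprintln!("transport connected"),
        RpcTransportState::Disconnected => eprintln!("transport disconnected"),
    });

    client
}
```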
2. Handle Cancellation Errors Gracefully
Pending RPC calls will fail with RpcServiceError::TransportError containing ReadAfterCancel. Application code should distinguish these from service-level errors:
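A hedged sketch of that distinction is shown below; the exact shape of RpcServiceError's variants is assumed here and may differ from the real enum:

```rust
// Illustrative sketch only: variant shapes are assumptions, not the crate's exact types.
match Add::call(&client, vec![1.0, 2.0, 3.0]).await {
    Ok(sum) => println!("sum = {sum}"),
    Err(err) => {
        if matches!(err, RpcServiceError::TransportError(_)) {
            // Connection dropped; all pending calls were cancelled with
            // ReadAfterCancel. Reconnect before retrying.
            eprintln!("transport failure: {err:?}");
        } else {
            // Service-level error; the connection is still usable.
            eprintln!("rpc error: {err:?}");
        }
    }
}
```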
3. Check Connection Before Making Calls
Use is_connected() to avoid starting RPC operations when transport is down:
extensions/muxio-tokio-rpc-client/src/rpc_client.rs:284-286
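A tiny sketch of this pre-flight check (the surrounding function and call signature are illustrative; only is_connected() is taken from the source):

```rust
// Illustrative sketch; is_connected() is the accessor cited above.
async fn try_add(client: &RpcClient, numbers: Vec<f64>) -> Option<f64> {
    // Skip the call entirely if the transport is already known to be down.
    if !client.is_connected() {
        eprintln!("transport is down; skipping RPC call");
        return None;
    }
    Add::call(client, numbers).await.ok()
}
```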
4. Understand Disconnect Timing
The send and receive loops detect disconnections independently:
- Receive loop : Detects server-initiated disconnects immediately
- Send loop : Detects errors when attempting to send data
- Heartbeat : May detect connection issues if both loops are idle
Do not assume instant disconnection detection for all failure modes.
Sources:
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:156-257
- extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:36-165
- extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:169-292
Differences from RPC Service Errors
Transport errors (page 7.2) differ from RPC service errors (RPC Service Errors) in critical ways:
| Aspect | Transport Errors | RPC Service Errors |
|---|---|---|
| Layer | Connection/framing layer | RPC protocol/application layer |
| Scope | Affects all pending requests | Affects single request |
| Recovery | Requires reconnection | Retry may succeed |
| Detection | WebSocket errors, frame decode failures | Method dispatch failures, serialization errors |
| Propagation | fail_all_pending_requests | Individual handler callbacks |
| Error Type | std::io::Error, FrameDecodeError | RpcServiceError variants |
When to use this page vs. page 7.1 :
- Use this page for connection failures, disconnections, stream cancellations
- Use RPC Service Errors for method-not-found, parameter validation, handler panics
Sources:
- src/rpc/rpc_dispatcher.rs:428-456
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:110-126
- extensions/muxio-rpc-service/src/error.rs
Testing
Relevant source files
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs
- src/rpc/rpc_internals/rpc_header.rs
- src/rpc/rpc_request_response.rs
- tests/rpc_dispatcher_tests.rs
Purpose and Scope
This document provides an overview of testing strategies and patterns used in the rust-muxio codebase. It covers the testing philosophy, test organization, common testing patterns, and available testing utilities. For detailed information about specific testing approaches, see Unit Testing and Integration Testing.
The rust-muxio system emphasizes compile-time correctness through shared type definitions and trait-based abstractions. This design philosophy directly influences the testing strategy: many potential bugs are prevented by the type system, allowing tests to focus on runtime behavior, protocol correctness, and cross-platform compatibility.
Testing Philosophy
The rust-muxio testing approach is built on three core principles:
Compile-Time Guarantees Reduce Runtime Test Burden : By using shared service definitions (example-muxio-rpc-service-definition), both clients and servers depend on the same RpcMethodPrebuffered trait implementations. This ensures that parameter encoding/decoding, method IDs, and data structures are consistent at compile time. Tests do not need to verify type mismatches or protocol version incompatibilities—these are caught by the compiler.
Layered Testing Mirrors Layered Architecture : The system's modular design (core multiplexing → RPC abstraction → transport implementations) enables focused testing at each layer. Unit tests verify RpcDispatcher behavior in isolation tests/rpc_dispatcher_tests.rs while integration tests validate the complete stack including network transports extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs
Cross-Platform Validation Is Essential : Because the same RPC service definitions work across Tokio-based native clients, WASM browser clients, and the server, tests must verify that all client types can communicate with the server correctly. This is achieved through parallel integration test suites that use identical test cases against different client implementations.
Sources : extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:1-97 tests/rpc_dispatcher_tests.rs:1-30
Test Organization in the Workspace
Test Location Strategy
Tests are organized by scope and purpose:
| Test Type | Location | Purpose | Example |
|---|---|---|---|
| Core Unit Tests | tests/ in workspace root | Validate RpcDispatcher logic without async runtime | rpc_dispatcher_tests.rs |
| Integration Tests | tests/ in extension crates | Validate full client-server communication | prebuffered_integration_tests.rs |
| Test Utilities | extensions/muxio-ext-test/ | Shared test helpers and mock implementations | N/A |
| Test Service Definitions | example-muxio-rpc-service-definition/ | Shared RPC methods for testing | Add, Mult, Echo |
This organization ensures that:
- Core library tests have no async runtime dependencies
- Extension tests can use their specific runtime environments
- Test service definitions are reusable across all client types
- Integration tests exercise the complete, realistic code paths
Sources : extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:1-18 tests/rpc_dispatcher_tests.rs:1-7
Integration Test Architecture
Integration tests create realistic client-server scenarios to validate end-to-end behavior. The following diagram illustrates the typical test setup:
Key Components
sequenceDiagram
participant Test as "Test Function\n#[tokio::test]"
participant Listener as "TcpListener\nRandom Port"
participant Server as "Arc<RpcServer>"
participant Endpoint as "RpcServiceEndpointInterface"
participant Client as "RpcClient\n(Tokio or WASM)"
participant ServiceDef as "Add/Mult/Echo\nService Definitions"
Test->>Listener: bind("127.0.0.1:0")
Test->>Server: RpcServer::new(None)
Test->>Server: server.endpoint()
Server-->>Endpoint: endpoint reference
Test->>Endpoint: register_prebuffered(Add::METHOD_ID, handler)
Test->>Endpoint: register_prebuffered(Mult::METHOD_ID, handler)
Test->>Endpoint: register_prebuffered(Echo::METHOD_ID, handler)
Test->>Test: tokio::spawn(server.serve_with_listener)
Test->>Client: RpcClient::new(host, port)
Test->>ServiceDef: Add::call(client, params)
ServiceDef->>Client: call_rpc_buffered(request)
Client->>Server: WebSocket binary frames
Server->>Endpoint: dispatch by METHOD_ID
Endpoint->>Endpoint: execute handler
Endpoint->>Client: response frames
Client->>ServiceDef: decode_response
ServiceDef-->>Test: Result<f64>
Test->>Test: assert_eq!(result, expected)
Random Port Binding : Tests bind to 127.0.0.1:0 to obtain a random available port, preventing conflicts when running multiple tests in parallel extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:21-23
Arc-Wrapped Server : The RpcServer is wrapped in Arc<RpcServer> to enable cloning into spawned tasks while maintaining shared state extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs28
Separate Endpoint Registration : Handlers are registered on the endpoint obtained via server.endpoint(), not directly on the server. This separation allows handler registration to complete before the server starts accepting connections extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:31-61
Background Server Task : The server runs in a spawned Tokio task, allowing the test to proceed with client operations on the main test task extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:64-70
Shared Service Definitions : Both client and server use the same Add, Mult, and Echo implementations from example-muxio-rpc-service-definition, ensuring type-safe, consistent encoding/decoding extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs1
Sources : extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:16-97 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-20
Common Test Patterns
Success Case Testing
The most fundamental test pattern validates that RPC calls complete successfully with correct results:
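A hedged sketch of this pattern is shown below; the service-definition call signatures are assumed from the surrounding description rather than copied from the tests:

```rust
// Illustrative only; mirrors the shape of the integration tests.
let (sum, product, echoed) = tokio::join!(
    Add::call(&client, vec![1.0, 2.0, 3.0]),
    Mult::call(&client, vec![2.0, 3.0, 4.0]),
    Echo::call(&client, b"hello".to_vec()),
);
assert_eq!(sum.unwrap(), 6.0);
assert_eq!(product.unwrap(), 24.0);
assert_eq!(echoed.unwrap(), b"hello".to_vec());
```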
This pattern uses tokio::join! to execute multiple concurrent RPC calls, verifying both concurrency and correctness extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:81-96
Error Propagation Testing
Tests verify that server-side errors are correctly propagated to clients with appropriate error codes:
This validates that errors are serialized, transmitted, and deserialized with correct error codes and messages extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:99-152
Large Payload Testing
Tests ensure that payloads exceeding the chunk size are correctly chunked and reassembled:
This pattern validates the streaming chunking mechanism for both requests and responses extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:154-203
Method Not Found Testing
Tests verify that calling unregistered methods returns the correct error code:
This ensures the server correctly identifies missing handlers extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:205-240
Sources : extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:81-240 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:126-142
WASM Client Testing with WebSocket Bridge
Testing the WASM client requires special handling because it is runtime-agnostic and designed for browser environments. Integration tests use a WebSocket bridge to connect the WASM client to a real Tokio server:
Bridge Implementation Details
graph TB
subgraph "Test Environment"
TEST["Test Function\n#[tokio::test]"]
end
subgraph "Server Side"
SERVER["RpcServer\nTokio-based"]
LISTENER["TcpListener\n127.0.0.1:random"]
HANDLERS["Registered Handlers\nAdd, Mult, Echo"]
end
subgraph "Bridge Infrastructure"
WS_CONN["WebSocket Connection\ntokio-tungstenite"]
TO_BRIDGE["mpsc channel\nto_bridge_rx"]
FROM_BRIDGE["ws_receiver\nStreamExt"]
BRIDGE_TX["Bridge Task\nClient→Server"]
BRIDGE_RX["Bridge Task\nServer→Client"]
end
subgraph "WASM Client Side"
WASM_CLIENT["RpcWasmClient\nRuntime-agnostic"]
DISPATCHER["RpcDispatcher\nblocking_lock()"]
OUTPUT_CB["Output Callback\nsend(bytes)"]
end
TEST --> SERVER
TEST --> LISTENER
SERVER --> HANDLERS
TEST --> WASM_CLIENT
TEST --> TO_BRIDGE
WASM_CLIENT --> OUTPUT_CB
OUTPUT_CB --> TO_BRIDGE
TO_BRIDGE --> BRIDGE_TX
BRIDGE_TX --> WS_CONN
WS_CONN --> SERVER
SERVER --> WS_CONN
WS_CONN --> FROM_BRIDGE
FROM_BRIDGE --> BRIDGE_RX
BRIDGE_RX --> DISPATCHER
DISPATCHER --> WASM_CLIENT
The WebSocket bridge consists of two spawned tasks:
Client to Server Bridge : Receives bytes from the WASM client's output callback and forwards them as WebSocket binary messages extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:98-108:
Server to Client Bridge : Receives WebSocket messages and feeds them to the WASM client's dispatcher using spawn_blocking to avoid blocking the async runtime extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:110-123:
Why spawn_blocking : The RpcWasmClient uses synchronous locking (blocking_lock()) and synchronous dispatcher methods because it targets WASM environments where true async is not available. In tests, this synchronous code must run on a blocking thread pool to prevent starving the Tokio runtime.
Sources : extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-142 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:83-123
graph LR
CLIENT_DISP["Client Dispatcher\nRpcDispatcher"]
OUT_BUF["Outgoing Buffer\nRc<RefCell<Vec<u8>>>"]
IN_BUF["Incoming Buffer\nSame as outgoing"]
SERVER_DISP["Server Dispatcher\nRpcDispatcher"]
CLIENT_DISP -->|call request, write_cb| OUT_BUF
OUT_BUF -->|chunks 4| IN_BUF
IN_BUF -->|read_bytes chunk| SERVER_DISP
SERVER_DISP -->|respond response, write_cb| OUT_BUF
OUT_BUF -->|read_bytes| CLIENT_DISP
Unit Testing the RpcDispatcher
The core RpcDispatcher can be tested in isolation without async runtimes or network transports. These tests use in-memory buffers to simulate data exchange:
Test Structure
Unit tests create two RpcDispatcher instances representing client and server, connected via a shared buffer tests/rpc_dispatcher_tests.rs:30-38:
Request Flow : Client creates RpcRequest, calls dispatcher.call() with a write callback that appends to the buffer tests/rpc_dispatcher_tests.rs:42-124:
Server Processing : Server reads from the buffer in chunks, processes requests, and writes responses back tests/rpc_dispatcher_tests.rs:126-203:
This pattern validates framing, chunking, correlation, and protocol correctness without external dependencies.
Sources : tests/rpc_dispatcher_tests.rs:30-203 tests/rpc_dispatcher_tests.rs:1-29
Test Coverage Matrix
The following table summarizes test coverage across different layers and client types:
| Test Scenario | Core Unit Tests | Tokio Integration | WASM Integration | Coverage Notes |
|---|---|---|---|---|
| Basic RPC Call | ✓ | ✓ | ✓ | All layers validated |
| Concurrent Calls | ✗ | ✓ | ✓ | Requires async runtime |
| Large Payloads | ✓ | ✓ | ✓ | Chunking tested at all levels |
| Error Propagation | ✓ | ✓ | ✓ | Error serialization validated |
| Method Not Found | ✗ | ✓ | ✓ | Requires endpoint dispatch |
| Framing Protocol | ✓ | Implicit | Implicit | Core tests focus on this |
| Request Correlation | ✓ | Implicit | Implicit | Core dispatcher tests |
| WebSocket Transport | ✗ | ✓ | ✓ (bridged) | Extension-level tests |
| Connection State | ✗ | ✓ | ✓ | Transport-specific |
Coverage Rationale
- Core unit tests validate the RpcDispatcher without runtime dependencies
- Tokio integration tests validate native client-server communication over real WebSocket connections
- WASM integration tests validate cross-platform compatibility by testing the WASM client against the same server
- Each layer is tested at the appropriate level of abstraction
Sources : tests/rpc_dispatcher_tests.rs:1-203 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:1-241 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-313
Shared Test Service Definitions
All integration tests use service definitions from example-muxio-rpc-service-definition/src/prebuffered.rs:
| Service Method | Input Type | Output Type | Purpose |
|---|---|---|---|
| Add::METHOD_ID | Vec<f64> | f64 | Sum of numbers |
| Mult::METHOD_ID | Vec<f64> | f64 | Product of numbers |
| Echo::METHOD_ID | Vec<u8> | Vec<u8> | Identity function |
These methods are intentionally simple to focus tests on protocol correctness rather than business logic. The Echo method is particularly useful for testing large payloads because it returns the exact input, making assertions straightforward.
Method ID Generation : Each method has a unique METHOD_ID generated at compile time by hashing the method name extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs18:
This ensures consistent method identification across all client and server implementations.
Sources : extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs1 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs21
Running Tests
Tests are executed using standard Cargo commands; running cargo test from the workspace root executes both the core unit tests and the extension integration test suites.
Test Execution Environment : Most integration tests require a Tokio runtime even when testing the WASM client, because the test infrastructure (server, WebSocket bridge) runs on Tokio. The WASM client itself remains runtime-agnostic.
For detailed information on specific testing approaches, see:
- Unit Testing - Patterns for testing individual components
- Integration Testing - End-to-end testing with real transports
Sources : extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs18 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs39
Unit Testing
Relevant source files
- extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs
- src/rpc/rpc_internals/rpc_header.rs
- src/rpc/rpc_request_response.rs
- tests/rpc_dispatcher_tests.rs
Purpose and Scope
This document covers patterns and strategies for unit testing individual components within the rust-muxio system. Unit tests focus on testing isolated functionality without requiring external services, network connections, or complex integration setups. These tests validate core logic such as RpcDispatcher request correlation, service caller interfaces, and custom service method implementations.
For end-to-end testing with real clients and servers communicating over actual transports, see Integration Testing. For error handling patterns in production code, see Error Handling.
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:1-213 tests/rpc_dispatcher_tests.rs:1-204
Unit Testing Architecture
The rust-muxio system is designed to enable comprehensive unit testing through several key architectural decisions:
- Runtime-agnostic core : The muxio core library does not depend on async runtimes, allowing synchronous unit tests
- Trait-based abstractions : Interfaces like RpcServiceCallerInterface can be easily mocked
- Callback-based emission : The RpcDispatcher uses callback functions for output, enabling in-memory testing
- Shared service definitions : Method traits enable testing both client and server sides independently
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:20-93 tests/rpc_dispatcher_tests.rs:30-39
Testing the RPC Dispatcher
Synchronous Test Pattern
The RpcDispatcher can be tested synchronously without any async runtime because it operates on byte buffers and callbacks. This enables fast, deterministic unit tests.
Sources: tests/rpc_dispatcher_tests.rs:30-203
graph LR
subgraph "Client Side"
ClientDisp["client_dispatcher\n(RpcDispatcher)"]
OutBuf["outgoing_buf\n(Rc<RefCell<Vec<u8>>>)"]
end
subgraph "Server Side"
ServerDisp["server_dispatcher\n(RpcDispatcher)"]
IncomingBuf["incoming_buf\n(buffer chunks)"]
end
subgraph "Test Flow"
CallReq["RpcRequest"]
ReadBytes["read_bytes()"]
ProcessReq["Process request"]
Respond["RpcResponse"]
end
CallReq -->|client_dispatcher.call| ClientDisp
ClientDisp -->|emit callback| OutBuf
OutBuf -->|chunked read| IncomingBuf
IncomingBuf -->|server_dispatcher| ReadBytes
ReadBytes --> ProcessReq
ProcessReq -->|server_dispatcher.respond| Respond
Respond -->|emit to client| ClientDisp
Key Components in Dispatcher Tests
| Component | Type | Purpose |
|---|---|---|
| RpcDispatcher::new() | Constructor | Creates empty dispatcher instances for client and server |
| client_dispatcher.call() | Method | Initiates RPC request with callback for emitted bytes |
| server_dispatcher.read_bytes() | Method | Processes incoming bytes and returns request IDs |
| is_rpc_request_finalized() | Method | Checks if request is complete for prebuffered handling |
| delete_rpc_request() | Method | Retrieves and removes complete request from dispatcher |
| server_dispatcher.respond() | Method | Sends response back through emit callback |
Sources: tests/rpc_dispatcher_tests.rs:37-38 tests/rpc_dispatcher_tests.rs:74-123 tests/rpc_dispatcher_tests.rs:130-198
Example: Testing Request-Response Flow
The dispatcher test demonstrates a complete round-trip with multiple concurrent requests:
Sources: tests/rpc_dispatcher_tests.rs:74-123 tests/rpc_dispatcher_tests.rs:127-202
sequenceDiagram
participant Test as "Test Code"
participant ClientDisp as "client_dispatcher"
participant OutBuf as "outgoing_buf"
participant ServerDisp as "server_dispatcher"
participant Handler as "Request Handler"
Test->>ClientDisp: call(ADD_METHOD_ID)
ClientDisp->>OutBuf: emit bytes
Test->>ClientDisp: call(MULT_METHOD_ID)
ClientDisp->>OutBuf: emit bytes
Test->>OutBuf: read chunks
OutBuf->>ServerDisp: read_bytes(chunk)
ServerDisp-->>Test: rpc_request_id
Test->>ServerDisp: is_rpc_request_finalized()
ServerDisp-->>Test: true
Test->>ServerDisp: delete_rpc_request()
ServerDisp-->>Handler: RpcRequest
Handler->>Handler: decode, compute, encode
Handler->>ServerDisp: respond(RpcResponse)
ServerDisp->>ClientDisp: emit response bytes
ClientDisp->>Test: RpcStreamEvent::PayloadChunk
Code Structure for Dispatcher Tests
The test at tests/rpc_dispatcher_tests.rs:30-203 demonstrates the following pattern:
- Setup : Create client and server dispatchers with shared buffer (tests/rpc_dispatcher_tests.rs:34-38)
- Call Phase : Client dispatcher emits requests to shared buffer (tests/rpc_dispatcher_tests.rs:42-124)
- Read Phase : Server dispatcher processes chunked bytes (tests/rpc_dispatcher_tests.rs:127-132)
- Handle Phase : Extract finalized requests and process (tests/rpc_dispatcher_tests.rs:134-189)
- Respond Phase : Server dispatcher emits response back to client (tests/rpc_dispatcher_tests.rs:192-198)
Sources: tests/rpc_dispatcher_tests.rs:30-203
Testing RPC Service Callers
classDiagram
class RpcServiceCallerInterface {
<<trait>>
+get_dispatcher() Arc~TokioMutex~RpcDispatcher~~
+get_emit_fn() Arc~Fn~
+is_connected() bool
+call_rpc_streaming() Result
+set_state_change_handler()
}
class MockRpcClient {
-response_sender_provider SharedResponseSender
-is_connected_atomic Arc~AtomicBool~
+get_dispatcher()
+get_emit_fn()
+is_connected()
+call_rpc_streaming()
+set_state_change_handler()
}
class TestCase {
+test_buffered_call_success()
+test_buffered_call_remote_error()
+test_prebuffered_trait_converts_error()
}
RpcServiceCallerInterface <|.. MockRpcClient
TestCase ..> MockRpcClient : uses
Mock Implementation Pattern
Testing client-side RPC logic requires mocking the RpcServiceCallerInterface. The mock implementation provides controlled behavior for test scenarios.
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:24-93
Mock Components
| Component | Type | Purpose |
|---|---|---|
| MockRpcClient | Struct | Test implementation of RpcServiceCallerInterface |
| SharedResponseSender | Type alias | Arc<Mutex<Option<DynamicSender>>> for providing responses |
| is_connected_atomic | Field | Arc<AtomicBool> for controlling connection state |
| DynamicChannelType | Enum | Specifies bounded or unbounded channel for responses |
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:22-28 extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:44-85
Mock Interface Implementation
The MockRpcClient implementation at extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:30-93 shows how to mock each trait method:
- get_dispatcher(): Returns a fresh RpcDispatcher instance (extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:32-34)
- get_emit_fn(): Returns a no-op closure since tests don't emit to the network (extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:36-38)
- is_connected(): Reads from a shared AtomicBool for state control (extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:40-42)
- call_rpc_streaming(): Creates a dynamic channel and stores the sender for test control (extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:44-85)
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:30-93
Testing Success Cases
Prebuffered Call Success Test
The test at extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:97-133 validates successful RPC call flow:
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:97-133
Test Structure
- Setup mock client : Initialize with sender provider and connection state (extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:99-105)
- Spawn response task : Background task simulates server sending response (extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:110-121)
- Create request : Build RpcRequest with method ID and parameters (extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:123-128)
- Call RPC : Invoke call_rpc_buffered() which awaits the response (extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs130)
- Verify result : Assert the response matches the expected payload (extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs132)
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:97-133
Testing Error Cases
Remote Error Handling Test
The test at extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:135-177 validates error propagation from server to client:
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:135-177
Error Types in Tests
The tests validate handling of RpcServiceError variants:
| Error Type | Code | Test Scenario |
|---|---|---|
| RpcServiceError::Rpc with RpcServiceErrorCode::Fail | Business logic error | Item not found, validation failure (extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:155-158) |
| RpcServiceError::Rpc with RpcServiceErrorCode::System | System error | Method panic, internal error (extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:197-200) |
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:155-158 extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:197-200
Testing Trait Methods
RpcMethodPrebuffered Integration
The test at extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:179-212 validates that high-level trait methods correctly propagate errors:
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:179-212
Key Test Points
- Direct trait method call : Use Echo::call() instead of lower-level APIs (extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs203)
- Error type preservation : Verify the RpcServiceError::Rpc variant is maintained (extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:205-210)
- Error code and message : Assert both the code and message fields are correct (extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:207-208)
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:203-211
graph TB
subgraph "Channel Types"
Unbounded["DynamicChannelType::Unbounded\n(mpsc::unbounded)"]
Bounded["DynamicChannelType::Bounded\n(mpsc::channel with size)"]
end
subgraph "Channel Components"
DynSender["DynamicSender\n(enum wrapper)"]
DynReceiver["DynamicReceiver\n(enum wrapper)"]
end
subgraph "Test Usage"
StoreProvider["Store in\nresponse_sender_provider"]
SpawnTask["Spawned task\npolls for sender"]
SendResponse["send_and_ignore()"]
end
Unbounded --> DynSender
Bounded --> DynSender
Unbounded --> DynReceiver
Bounded --> DynReceiver
DynSender --> StoreProvider
StoreProvider --> SpawnTask
SpawnTask --> SendResponse
DynReceiver --> StoreProvider
Mock Transport Patterns
Dynamic Channel Management
Tests use dynamic channels to control response timing and simulate various scenarios:
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:55-70
Channel Creation in Tests
The call_rpc_streaming() mock implementation at extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:44-85 shows the pattern:
- Match channel type : Handle both bounded and unbounded variants (extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:55-70)
- Create dummy encoder : Return a placeholder RpcStreamEncoder (extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:72-80)
- Store sender : Place the sender in the shared provider for test task access (extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs82)
- Return receiver : Client code awaits responses on returned receiver (extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs84)
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:44-85
Test Data Structures
Request and Response Test Fixtures
Unit tests define custom request/response types for validation:
Sources: tests/rpc_dispatcher_tests.rs:7-28 src/rpc/rpc_request_response.rs:9-76
Encoding and Decoding in Tests
The dispatcher tests at tests/rpc_dispatcher_tests.rs:44-49 show how to use bitcode for serialization:
- Request encoding : bitcode::encode(&AddRequestParams { numbers: vec![1.0, 2.0, 3.0] }) (tests/rpc_dispatcher_tests.rs:44-46)
- Response decoding : bitcode::decode::<AddResponseParams>(&bytes) (tests/rpc_dispatcher_tests.rs:103-105)
- Handler processing : Decode the request, compute the result, encode the response (tests/rpc_dispatcher_tests.rs:151-167)
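The snippet below is a self-contained sketch of that bitcode round trip; the struct names follow the test file, while the response field name is an assumption for illustration:

```rust
use bitcode::{Decode, Encode};

#[derive(Encode, Decode, Debug, PartialEq)]
struct AddRequestParams {
    numbers: Vec<f64>,
}

#[derive(Encode, Decode, Debug, PartialEq)]
struct AddResponseParams {
    sum: f64, // field name assumed for this sketch
}

fn main() {
    // Encode the request exactly as the dispatcher test does.
    let request_bytes = bitcode::encode(&AddRequestParams {
        numbers: vec![1.0, 2.0, 3.0],
    });

    // A handler would decode, compute, and encode the response.
    let params: AddRequestParams =
        bitcode::decode(&request_bytes).expect("request should decode");
    let response_bytes = bitcode::encode(&AddResponseParams {
        sum: params.numbers.iter().sum(),
    });

    let response: AddResponseParams =
        bitcode::decode(&response_bytes).expect("response should decode");
    assert_eq!(response, AddResponseParams { sum: 6.0 });
}
```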
Sources: tests/rpc_dispatcher_tests.rs:44-49 tests/rpc_dispatcher_tests.rs:103-105 tests/rpc_dispatcher_tests.rs:151-167
Common Test Patterns
Pattern: Synchronous In-Memory Testing
For components that don't require async:
Sources: tests/rpc_dispatcher_tests.rs:32-203
Pattern: Async Mock with Controlled Responses
For async components requiring controlled response timing:
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:97-133
Pattern: Error Case Validation
For testing error propagation:
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:135-177
Summary
The rust-muxio unit testing approach emphasizes:
- Isolation : Core components like RpcDispatcher can be tested without async runtimes or network connections
- Mock interfaces : Trait-based design enables easy creation of test doubles for RpcServiceCallerInterface
- In-memory buffers : Shared Rc<RefCell<Vec<u8>>> buffers simulate network transmission
- Dynamic channels : Controlled response delivery via DynamicSender/DynamicReceiver
- Error validation : Comprehensive testing of both success and error paths
- Type safety : Shared service definitions ensure compile-time correctness even in tests
These patterns enable fast, reliable unit tests that validate component behavior without external dependencies.
Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:1-213 tests/rpc_dispatcher_tests.rs:1-204
Integration Testing
Relevant source files
- Cargo.lock
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs
This page describes the patterns and practices for writing integration tests in the rust-muxio system. Integration tests verify end-to-end functionality by creating real server and client instances that communicate over actual network connections.
For unit testing patterns focused on individual components, see Unit Testing. For examples of complete applications demonstrating these patterns, see WebSocket RPC Application.
Overview
Integration tests in rust-muxio follow a full-fidelity approach: they instantiate real RpcServer instances, real client instances (RpcClient or RpcWasmClient), and communicate over actual network sockets. This ensures that all layers of the system—from binary framing through RPC dispatch to service handlers—are exercised together.
The integration test suites are located in:
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:1-241 for native Tokio client tests
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-313 for WASM client tests
Both test suites use the same server implementation and shared service definitions from example-muxio-rpc-service-definition, demonstrating the cross-platform compatibility of the system.
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:1-241 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-313
Test Architecture
Diagram: Integration Test Component Architecture
Integration tests create isolated environments where a real server listens on a random TCP port. Clients connect to this server using actual WebSocket connections. For WASM clients, a bridge component forwards bytes between the client's callback interface and the WebSocket connection.
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:16-97 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:39-142
Server Setup Pattern
Integration tests follow a consistent pattern for server initialization:
| Step | Component | Purpose |
|---|---|---|
| 1 | TcpListener::bind("127.0.0.1:0") | Bind to random available port |
| 2 | Arc::new(RpcServer::new(None)) | Create server with Arc for ownership |
| 3 | server.endpoint() | Obtain endpoint for handler registration |
| 4 | endpoint.register_prebuffered() | Register service method handlers |
| 5 | tokio::spawn(server.serve_with_listener()) | Spawn server in background task |
Diagram: Server Setup Sequence
The Arc<RpcServer> pattern is critical because the server needs to be cloned into the spawned task while handlers are being registered. The endpoint() method returns a reference that can register handlers before the server starts serving.
Example server setup from Tokio integration tests:
extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:20-71
Example server setup from WASM integration tests:
extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:42-78
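Both follow the same shape. A condensed, hedged sketch is shown below; the type and method names come from the table above, while the exact signatures (constructor arguments, serve method) are assumptions rather than verbatim test code:

```rust
// Illustrative sketch of the integration-test server setup.
use std::sync::Arc;
use muxio_tokio_rpc_server::RpcServer; // assumed import path
use tokio::net::TcpListener;

async fn start_test_server() -> std::io::Result<std::net::SocketAddr> {
    // 1. Bind to a random free port so parallel tests never collide.
    let listener = TcpListener::bind("127.0.0.1:0").await?;
    let addr = listener.local_addr()?;

    // 2. Wrap the server in Arc so it can be cloned into the serving task.
    let server = Arc::new(RpcServer::new(None));

    // 3. Register handlers on the endpoint *before* serving (see next section).
    let _endpoint = server.endpoint();

    // 4. Serve in the background while the test drives a client.
    let server_task = Arc::clone(&server);
    tokio::spawn(async move { server_task.serve_with_listener(listener).await });

    Ok(addr)
}
```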
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:20-71 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:42-78
Handler Registration
Service method handlers are registered using the register_prebuffered() method on the RpcServiceEndpointInterface. Each handler is an async closure that:
- Receives encoded request bytes and a context
- Decodes the request using MethodType::decode_request()
- Performs the business logic
- Encodes the response using MethodType::encode_response()
- Returns Result<Vec<u8>, RpcServiceError>
Diagram: Handler Registration Flow
graph LR
REG["endpoint.register_prebuffered()"]
METHOD_ID["METHOD_ID\nCompile-time constant"]
HANDLER["Async Closure\n/bytes, ctx/ async move"]
DECODE["decode_request(&bytes)"]
LOGIC["Business Logic\nsum(), product(), echo()"]
ENCODE["encode_response(result)"]
REG --> METHOD_ID
REG --> HANDLER
HANDLER --> DECODE
DECODE --> LOGIC
LOGIC --> ENCODE
ENCODE --> RETURN["Ok(Vec<u8>)"]
Example Add handler registration:
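A hedged sketch of that handler, following the decode → compute → encode flow described above (closure parameter names, the context type, and error conversions are illustrative):

```rust
// Illustrative sketch; mirrors the handler shape used in the tests.
endpoint.register_prebuffered(Add::METHOD_ID, |request_bytes, _ctx| async move {
    // Decode with the shared service definition.
    let numbers = Add::decode_request(&request_bytes)?;

    // Business logic: sum the numbers.
    let sum: f64 = numbers.iter().sum();

    // Encode the response with the same shared definition.
    Ok(Add::encode_response(sum)?)
});
```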
This pattern is consistent across all methods (Add, Mult, Echo) and appears identically in both Tokio and WASM integration tests.
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:34-61 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:51-69
Tokio Client Setup
Native Tokio clients connect directly to the server using WebSocket:
Diagram: Tokio Client Initialization
The client setup is straightforward:
- Wait briefly for server to start accepting connections
- Create RpcClient with host and port
- Make RPC calls using the high-level RpcCallPrebuffered trait
Example from integration tests:
extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:74-96
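A hedged sketch of the client side is shown below; the constructor and call signatures follow the diagram above and are not verbatim test code:

```rust
// Illustrative sketch of the Tokio client flow used by the tests.
use std::time::Duration;

async fn run_client(port: u16) {
    // Give the spawned server a moment to start accepting connections.
    tokio::time::sleep(Duration::from_millis(100)).await;

    // Connect over WebSocket.
    let client = RpcClient::new("127.0.0.1", port)
        .await
        .expect("client should connect");

    // RpcCallPrebuffered handles encode → send → decode for the caller.
    let sum = Add::call(&client, vec![1.0, 2.0, 3.0])
        .await
        .expect("RPC call should succeed");
    assert_eq!(sum, 6.0);
}
```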
The RpcCallPrebuffered trait provides the call() method that handles encoding, transport, and decoding automatically. See extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:10-98 for trait implementation.
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:74-96 extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:10-98
WASM Client Setup with Bridge
WASM client integration tests require additional infrastructure because the RpcWasmClient uses a callback-based interface rather than direct WebSocket access. The tests create a "bridge" that simulates the JavaScript glue code:
Diagram: WASM Client Bridge Architecture
graph TB
subgraph "Test Components"
TEST["Test Function"]
WASM_CLIENT["RpcWasmClient::new(callback)"]
end
subgraph "Bridge Infrastructure"
TX_CHANNEL["tokio_mpsc::unbounded_channel\nto_bridge_tx/rx"]
WS_CONNECTION["tokio_tungstenite::connect_async\nWebSocket"]
SENDER_TASK["Sender Task\nChannel → WebSocket"]
RECEIVER_TASK["Receiver Task\nWebSocket → Dispatcher"]
end
subgraph "Server"
RPC_SERVER["RpcServer\nWebSocket endpoint"]
end
TEST --> WASM_CLIENT
WASM_CLIENT -->|send_callback bytes| TX_CHANNEL
TX_CHANNEL --> SENDER_TASK
SENDER_TASK -->|Binary frames| WS_CONNECTION
WS_CONNECTION --> RPC_SERVER
RPC_SERVER -->|Binary frames| WS_CONNECTION
WS_CONNECTION --> RECEIVER_TASK
RECEIVER_TASK -->|dispatcher.read_bytes| WASM_CLIENT
The bridge consists of three components:
| Component | Implementation | Purpose |
|---|---|---|
| Send Callback | tokio_mpsc::unbounded_channel sender | Captures bytes from WASM client |
| Sender Task | tokio::spawn with channel receiver | Forwards bytes to WebSocket |
| Receiver Task | tokio::spawn with WebSocket receiver | Forwards bytes to client dispatcher |
Key bridge setup steps:
- Create unbounded channel for outgoing bytes: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:83-86
- Create WASM client with callback that sends to channel: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:84-86
- Connect to the server via tokio_tungstenite: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:89-92
- Call client.handle_connect() to simulate the JavaScript onopen event: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs95
- Spawn sender task: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:98-108
- Spawn receiver task with spawn_blocking for synchronous dispatcher calls: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:110-123
Important: The receiver task uses task::spawn_blocking() because the dispatcher's blocking_lock() and read_bytes() methods are synchronous. Running these on the async runtime would block the executor, so they are moved to a dedicated blocking thread.
Sources: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:83-123
Common Test Patterns
Success Roundtrip Test
The most basic integration test verifies successful request-response cycles:
Diagram: Success Roundtrip Test Flow
Test structure:
- Server registers handlers for multiple methods
- Client makes concurrent calls using tokio::join!()
- Test asserts each result matches the expected value
Example using tokio::join!() for concurrent calls:
extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:81-96
This pattern tests multiple methods in parallel, verifying that the multiplexing layer correctly correlates responses to requests.
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:19-97 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:40-142
graph LR
HANDLER["Handler returns\nErr(String)"]
ENCODE["Error encoded as\nRpcServiceError"]
TRANSPORT["Sent via\nWebSocket"]
DECODE["Client decodes\nerror"]
MATCH["Test matches\nerror variant"]
HANDLER --> ENCODE
ENCODE --> TRANSPORT
TRANSPORT --> DECODE
DECODE --> MATCH
VARIANT["RpcServiceError::Rpc\ncode: System\nmessage: 'Addition failed'"]
MATCH --> VARIANT
Error Propagation Test
Error handling tests verify that server-side errors are correctly propagated to clients:
Diagram: Error Propagation Flow
Test pattern:
- Register a handler that always returns Err()
- Make an RPC call
- Assert the result is Err
- Match on the specific error variant and check the error code and message
Example error handler registration:
extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:114-117
Example error assertion:
extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:136-151
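A hedged sketch of that assertion follows; the field layout of the Rpc variant (code and message members) is an assumption, while the error code and message values come from the description above:

```rust
// Illustrative sketch; the variant's field layout is an assumption.
let result = Add::call(&client, vec![1.0, 2.0]).await;
match result {
    Err(RpcServiceError::Rpc(err)) => {
        assert_eq!(err.code, RpcServiceErrorCode::System);
        assert_eq!(err.message, "Addition failed");
    }
    other => panic!("expected a server-side RPC error, got {other:?}"),
}
```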
The test explicitly matches on RpcServiceError::Rpc variant and verifies both the error code (RpcServiceErrorCode::System) and message.
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:100-152 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:145-227
graph TB
CREATE["Create payload\n200 × DEFAULT_SERVICE_MAX_CHUNK_SIZE\n≈ 12.8 MB"]
CALL["Echo::call(client, large_payload)"]
subgraph "Client Side Chunking"
CLIENT_CHUNK["RpcDispatcher chunks\ninto ~200 frames"]
CLIENT_SEND["Send frames sequentially"]
end
subgraph "Server Side Processing"
SERVER_RECV["Receive and reassemble\n~200 frames"]
SERVER_ECHO["Echo handler\nreturns same bytes"]
SERVER_CHUNK["Chunk response\ninto ~200 frames"]
SERVER_SEND["Send response frames"]
end
subgraph "Client Side Reassembly"
CLIENT_RECV["Receive and reassemble\n~200 frames"]
CLIENT_RETURN["Return decoded response"]
end
CREATE --> CALL
CALL --> CLIENT_CHUNK
CLIENT_CHUNK --> CLIENT_SEND
CLIENT_SEND --> SERVER_RECV
SERVER_RECV --> SERVER_ECHO
SERVER_ECHO --> SERVER_CHUNK
SERVER_CHUNK --> SERVER_SEND
SERVER_SEND --> CLIENT_RECV
CLIENT_RECV --> CLIENT_RETURN
CLIENT_RETURN --> ASSERT["assert_eq!(result, large_payload)"]
Large Payload Test
Large payload tests verify that the system correctly chunks and reassembles payloads that exceed the maximum frame size:
Diagram: Large Payload Chunking Flow
Test implementation:
- Create payload: vec![1u8; DEFAULT_SERVICE_MAX_CHUNK_SIZE * 200]
- Register an echo handler that returns the received bytes
- Call Echo::call() with the large payload
- Assert the response equals the request
Example from Tokio integration tests:
extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:155-203
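A hedged sketch of the round-trip assertion (constant and method names are taken from the test description; the call signature is assumed):

```rust
// Illustrative sketch of the large-payload round trip.
let large_payload = vec![1u8; DEFAULT_SERVICE_MAX_CHUNK_SIZE * 200];

let echoed = Echo::call(&client, large_payload.clone())
    .await
    .expect("large payload should round-trip");

// Chunking and reassembly are transparent: the bytes come back unchanged.
assert_eq!(echoed, large_payload);
```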
This test exercises the framing protocol's ability to handle payloads hundreds of times larger than the maximum frame size, verifying that chunking and reassembly work correctly in both directions.
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:155-203 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:230-312
graph LR
START["Start server\nNo handlers registered"]
CONNECT["Client connects"]
CALL["Add::call(client, params)"]
DISPATCH["Server dispatcher receives\nMETHOD_ID"]
LOOKUP["Lookup handler\nNot found!"]
ERROR["Return RpcServiceError::Rpc\ncode: NotFound"]
CLIENT_ERR["Client receives error"]
ASSERT["assert error code\n== NotFound"]
START --> CONNECT
CONNECT --> CALL
CALL --> DISPATCH
DISPATCH --> LOOKUP
LOOKUP --> ERROR
ERROR --> CLIENT_ERR
CLIENT_ERR --> ASSERT
Method Not Found Test
This test verifies error handling when a client calls a method that has no registered handler:
Diagram: Method Not Found Error Flow
Test pattern:
- Start server without registering any handlers
- Make RPC call for a method
- Assert the error is RpcServiceError::Rpc with RpcServiceErrorCode::NotFound
Example:
extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:206-240
This test validates that the RpcDispatcher correctly handles missing method IDs and returns appropriate error codes rather than panicking or hanging.
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:206-240
Test Execution
Integration tests use the #[tokio::test] macro to run in an async context:
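For reference, an integration test is simply an async function under the #[tokio::test] attribute; the body follows the setup/execute/assert phases described above (the test name matches the one listed in the table below):

```rust
// Skeleton of an integration test; the body follows the phases described above.
#[tokio::test]
async fn test_success_client_server_roundtrip() {
    // 1. Bind a listener, build Arc<RpcServer>, register handlers, spawn the server.
    // 2. Connect an RpcClient and issue concurrent calls with tokio::join!.
    // 3. Assert the decoded results.
}
```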
Running integration tests:
| Command | Scope |
|---|---|
| cargo test | All tests in workspace |
| cargo test --package muxio-tokio-rpc-client | Tokio client integration tests only |
| cargo test --package muxio-wasm-rpc-client | WASM client integration tests only |
| cargo test test_success_client_server_roundtrip | Specific test across all packages |
Note: WASM client integration tests run using native Tokio runtime with a bridge, not in actual WebAssembly. They validate the client's core logic but not WASM-specific browser APIs. For browser testing, see JavaScript/WASM Integration.
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs18 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs39
Test Organization
Integration tests are organized by client type in separate test files:
Diagram: Integration Test File Organization
| Test File | Purpose | Dependencies |
|---|---|---|
| muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs | Native Tokio client end-to-end tests | muxio-tokio-rpc-server, example-muxio-rpc-service-definition |
| muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs | WASM client end-to-end tests with bridge | muxio-tokio-rpc-server, tokio-tungstenite, example-muxio-rpc-service-definition |
Both test files use the same service definitions (Add, Mult, Echo) from example-muxio-rpc-service-definition, demonstrating that the same RPC interface works across platforms.
The muxio-ext-test crate provides shared testing utilities (see Cargo.lock:842-855 for dependencies), though the integration tests shown here use direct dependencies on the server and service definition crates.
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:1-11 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-37 Cargo.lock:842-855
Examples and Tutorials
Relevant source files
- README.md
- assets/Muxio-logo.svg
- extensions/README.md
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs
This page provides an overview of practical examples demonstrating rust-muxio usage patterns. It covers the core example application, common testing patterns, and basic usage workflows. For a detailed walkthrough of the WebSocket RPC application, see WebSocket RPC Application. For a step-by-step tutorial on building services, see Simple Calculator Service.
The examples in this system demonstrate:
- Service definition patterns using RpcMethodPrebuffered
- Server-side handler registration with RpcServiceEndpointInterface
- Client-side invocation using RpcServiceCallerInterface
- Cross-platform testing with native Tokio and WASM clients
- Error handling and large payload streaming
Sources: README.md:1-166 extensions/README.md:1-4
Available Example Resources
The codebase includes several resources demonstrating different aspects of the system:
| Resource | Location | Purpose |
|---|---|---|
| Main Example Application | examples/example-muxio-ws-rpc-app/ | Complete WebSocket RPC server and client |
| Shared Service Definitions | examples/example-muxio-rpc-service-definition/ | Reusable service contracts |
| Tokio Client Tests | extensions/muxio-tokio-rpc-client/tests/ | Native client integration tests |
| WASM Client Tests | extensions/muxio-wasm-rpc-client/tests/ | WASM client integration tests |
Sources: README.md:67-68 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:1-241 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-313
Service Definition to Client Call Flow
This diagram maps the complete flow from defining a service method to executing it on a client, using actual type and function names from the codebase:
Sources: README.md:69-161 extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:1-99 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:34-61
graph TB
subgraph "Service Definition Crate"
AddStruct["Add struct"]
RpcMethodPrebuffered["impl RpcMethodPrebuffered"]
METHOD_ID["Add::METHOD_ID"]
encode_request["Add::encode_request()"]
decode_response["Add::decode_response()"]
end
subgraph "Server Registration"
endpoint["RpcServiceEndpointInterface"]
register_prebuffered["endpoint.register_prebuffered()"]
handler["Handler closure"]
decode_req["Add::decode_request()"]
encode_resp["Add::encode_response()"]
end
subgraph "Client Invocation"
RpcClient["RpcClient or RpcWasmClient"]
RpcCallPrebuffered["RpcCallPrebuffered trait"]
call_method["Add::call()"]
call_rpc_buffered["client.call_rpc_buffered()"]
end
AddStruct --> RpcMethodPrebuffered
RpcMethodPrebuffered --> METHOD_ID
RpcMethodPrebuffered --> encode_request
RpcMethodPrebuffered --> decode_response
METHOD_ID --> register_prebuffered
register_prebuffered --> handler
handler --> decode_req
handler --> encode_resp
encode_request --> call_method
decode_response --> call_method
call_method --> RpcCallPrebuffered
RpcCallPrebuffered --> call_rpc_buffered
call_rpc_buffered --> RpcClient
Basic Server Setup Pattern
The standard pattern for creating and configuring an RPC server involves these steps:
sequenceDiagram
participant App as "Application Code"
participant TcpListener as "TcpListener"
participant RpcServer as "RpcServer::new()"
participant Endpoint as "server.endpoint()"
participant Task as "tokio::spawn()"
App->>TcpListener: bind("127.0.0.1:0")
App->>RpcServer: Create with optional config
App->>Endpoint: Get endpoint handle
loop For each RPC method
Endpoint->>Endpoint: register_prebuffered(METHOD_ID, handler)
end
App->>Task: Spawn server task
Task->>RpcServer: serve_with_listener(listener)
Note over RpcServer: Server now accepting\nWebSocket connections
The server is wrapped in Arc<RpcServer> to allow sharing between the registration code and the spawned task. The endpoint() method returns a handle implementing RpcServiceEndpointInterface for registering handlers before the server starts.
Sources: README.md:86-128 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:21-71
Handler Registration Code Pattern
Handler registration uses register_prebuffered() with the compile-time generated METHOD_ID and an async closure:
graph LR
subgraph "Handler Components"
METHOD_ID["Add::METHOD_ID\n(compile-time hash)"]
closure["Async closure:\n/request_bytes, _ctx/"]
decode["Add::decode_request()"]
logic["Business logic:\nrequest_params.iter().sum()"]
encode["Add::encode_response()"]
end
METHOD_ID --> register["endpoint.register_prebuffered()"]
closure --> register
closure --> decode
decode --> logic
logic --> encode
encode --> return["Ok(response_bytes)"]
Example handler registration from the tests:
- Decode request bytes using Add::decode_request(&request_bytes)?
- Execute business logic (e.g., request_params.iter().sum())
- Encode response using Add::encode_response(result)?
- Return Ok(response_bytes) or Err(...) for errors
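A minimal sketch of those four steps, given an endpoint handle obtained from server.endpoint(); the closure shape follows the description above, but the exact register_prebuffered signature is an assumption.

```rust
use example_muxio_rpc_service_definition::prebuffered::Add; // path assumed

// Hedged sketch of the handler body described above.
endpoint.register_prebuffered(Add::METHOD_ID, |request_bytes: Vec<u8>, _ctx| async move {
    // 1. Decode the raw bytes into the typed input (Vec<f64>).
    let request_params = Add::decode_request(&request_bytes)?;
    // 2. Run the business logic.
    let sum: f64 = request_params.iter().sum();
    // 3. Encode the typed result back into bytes.
    let response_bytes = Add::encode_response(sum)?;
    // 4. Ok(bytes) completes the call; Err(...) is propagated to the client.
    Ok(response_bytes)
});
```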
Sources: README.md:100-117 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:35-60
Client Connection and Call Pattern
Client setup and RPC invocation follow this structure:
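A sketch of that structure, assuming RpcClient::new(host, port) returns an Arc-wrapped client as described elsewhere in this documentation; module paths are assumptions.

```rust
use example_muxio_rpc_service_definition::prebuffered::Add; // path assumed
use muxio_tokio_rpc_client::RpcClient;                      // path assumed

async fn call_add(host: &str, port: u16) -> Result<f64, Box<dyn std::error::Error>> {
    // Connect to the server over WebSocket.
    let rpc_client = RpcClient::new(host, port).await?;
    // The RpcCallPrebuffered implementation encodes the arguments, sends the
    // request, and decodes the typed response.
    let sum = Add::call(&*rpc_client, vec![1.0, 2.0, 3.0]).await?;
    Ok(sum)
}
```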
The client implements RpcServiceCallerInterface, which is used by the RpcCallPrebuffered trait to handle all encoding, transmission, and decoding automatically.
Sources: README.md:130-161 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:75-96
Complete Integration Test Structure
Integration tests demonstrate the full client-server setup in a single test function:
graph TB
subgraph "Test Setup Phase"
bind["TcpListener::bind()"]
create_server["Arc::new(RpcServer::new())"]
get_endpoint["server.endpoint()"]
register["Register all handlers"]
spawn_server["tokio::spawn(server.serve())"]
end
subgraph "Test Execution Phase"
sleep["tokio::time::sleep()"]
create_client["RpcClient::new()"]
make_calls["Execute RPC calls with join!()"]
assertions["Assert results"]
end
bind --> create_server
create_server --> get_endpoint
get_endpoint --> register
register --> spawn_server
spawn_server --> sleep
sleep --> create_client
create_client --> make_calls
make_calls --> assertions
The join! macro enables concurrent RPC calls over a single connection, demonstrating multiplexing in action. All requests are sent and responses are awaited in parallel.
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:16-97
Example: Concurrent RPC Calls
The test files demonstrate making multiple concurrent RPC calls using tokio::join!:
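A sketch of that pattern, assuming a connected rpc_client and the Add, Mult, and Echo definitions are in scope; the parameter values mirror the calls described in these tests.

```rust
// Hedged sketch: six calls issued concurrently over one connection.
let (res1, res2, res3, res4, res5, res6) = tokio::join!(
    Add::call(&*rpc_client, vec![1.0, 2.0, 3.0]),
    Add::call(&*rpc_client, vec![8.0, 3.0, 7.0]),
    Mult::call(&*rpc_client, vec![8.0, 3.0, 7.0]),
    Mult::call(&*rpc_client, vec![1.5, 2.5, 8.5]),
    Echo::call(&*rpc_client, b"testing 1 2 3".to_vec()),
    Echo::call(&*rpc_client, b"testing 4 5 6".to_vec()),
);
assert_eq!(res1.unwrap(), 6.0);
assert_eq!(res3.unwrap(), 168.0);
assert_eq!(res5.unwrap(), b"testing 1 2 3".to_vec());
```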
All six calls execute concurrently over the same WebSocket connection. The RpcDispatcher assigns unique request IDs and correlates responses back to the correct futures.
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:81-96 README.md:144-151
Example: Error Handling
The integration tests show how RPC errors propagate from server to client:
Example error handling code from tests:
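A sketch of the client-side match, assuming RpcServiceError::Rpc wraps a payload carrying the code and message fields named on this page; the exact variant shape is an assumption.

```rust
// Hedged sketch: distinguish a handler failure from transport errors.
match Add::call(&*rpc_client, vec![1.0, 2.0, 3.0]).await {
    Ok(result) => println!("unexpected success: {result}"),
    Err(RpcServiceError::Rpc(payload)) => {
        // Handler failures arrive as System errors with the server's message.
        assert_eq!(payload.code, RpcServiceErrorCode::System);
        eprintln!("server-side failure: {}", payload.message);
    }
    Err(other) => eprintln!("transport or protocol error: {other:?}"),
}
```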
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:99-152
Example: Large Payload Handling
The system automatically chunks large payloads. The integration tests verify this with payloads 200x larger than the chunk size:
| Test Scenario | Payload Size | Mechanism |
|---|---|---|
| Small request | < DEFAULT_SERVICE_MAX_CHUNK_SIZE | Sent in rpc_param_bytes field |
| Large request | ≥ DEFAULT_SERVICE_MAX_CHUNK_SIZE | Sent via rpc_prebuffered_payload_bytes with automatic chunking |
| Large response | Any size | Automatically chunked by RpcDispatcher |
Test code creates a payload of DEFAULT_SERVICE_MAX_CHUNK_SIZE * 200 (approximately 12.8 MB) and verifies it round-trips correctly:
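A sketch of that round-trip, assuming a connected rpc_client and the DEFAULT_SERVICE_MAX_CHUNK_SIZE constant and Echo method are in scope.

```rust
// Hedged sketch: ~12.8 MB payload, chunked and reassembled automatically.
let large_payload = vec![0xABu8; DEFAULT_SERVICE_MAX_CHUNK_SIZE * 200];
let echoed = Echo::call(&*rpc_client, large_payload.clone())
    .await
    .expect("large echo round-trip failed");
assert_eq!(echoed, large_payload);
```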
The chunking and reassembly happen transparently in the RpcDispatcher layer.
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:154-203 extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:30-72
WASM Client Testing Pattern
WASM client tests use a bridge pattern to connect the client to a real server:
The bridge creates an unbounded_channel to receive bytes from the WASM client's output callback, then forwards them through a real WebSocket connection. Responses flow back through spawn_blocking to avoid blocking the async runtime when calling the synchronous dispatcher.blocking_lock().
Sources: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-142
Example: Method Not Found Error
When a client calls a method that hasn't been registered on the server, the system returns RpcServiceErrorCode::NotFound:
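A sketch of that check, under the same error-shape assumptions as the error-handling example above.

```rust
// Hedged sketch: the server has no handler registered for Add::METHOD_ID.
match Add::call(&*rpc_client, vec![1.0, 2.0]).await {
    Err(RpcServiceError::Rpc(payload)) => {
        assert_eq!(payload.code, RpcServiceErrorCode::NotFound);
    }
    other => panic!("expected NotFound error, got {other:?}"),
}
```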
This demonstrates the system's ability to distinguish between different error types: NotFound for missing handlers versus System for handler execution failures.
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:205-240
State Change Callbacks
Clients can register callbacks to monitor connection state changes:
Example from the README:
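A sketch of the callback registration; whether set_state_change_handler is async and the exact RpcTransportState variants are assumptions.

```rust
// Hedged sketch: log every connection state transition.
rpc_client.set_state_change_handler(|state: RpcTransportState| {
    println!("transport state changed: {state:?}");
});
```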
Sources: README.md:138-141
Service Definition Module Structure
Service definitions are typically organized in a separate crate shared by both client and server:
| Component | Purpose | Example |
|---|---|---|
RpcMethodPrebuffered impl | Defines method contract | impl RpcMethodPrebuffered for Add |
METHOD_ID constant | Compile-time generated hash | Add::METHOD_ID |
Input type | Request parameters type | Vec<f64> |
Output type | Response result type | f64 |
encode_request() | Serializes input | Uses bitcode::encode() |
decode_request() | Deserializes input | Uses bitcode::decode() |
encode_response() | Serializes output | Uses bitcode::encode() |
decode_response() | Deserializes output | Uses bitcode::decode() |
This shared definition ensures compile-time type safety between client and server implementations.
Sources: README.md:49-50 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:1-6
Testing Utilities
The muxio-ext-test crate provides utilities for testing, though the primary testing pattern uses real server and client instances as shown in the integration tests. The tests demonstrate:
- Using TcpListener::bind("127.0.0.1:0") for random available ports
- Extracting host and port with tcp_listener_to_host_port()
- Using tokio::time::sleep() for synchronization
- Spawning server tasks with tokio::spawn()
- Making concurrent calls with tokio::join!()
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:9-14
Common Patterns Summary
| Pattern | Implementation | Use Case |
|---|---|---|
| Server setup | Arc::new(RpcServer::new()) | Share between registration and serving |
| Handler registration | endpoint.register_prebuffered() | Before calling serve_with_listener() |
| Client creation | RpcClient::new(host, port) | Tokio-based native client |
| WASM client | RpcWasmClient::new(callback) | Browser/WASM environment |
| RPC invocation | Method::call(&client, params) | Type-safe method calls |
| Concurrent calls | tokio::join!(...) | Multiple simultaneous requests |
| Error handling | match RpcServiceError | Distinguish error types |
| Large payloads | Automatic chunking | Transparent for > 64KB data |
Sources: README.md:69-161 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:18-97
WebSocket RPC Application
Relevant source files
This page provides a detailed walkthrough of the example-muxio-ws-rpc-app demonstration, which showcases a complete WebSocket-based RPC application using the Muxio framework. This example illustrates how to create a server with registered RPC handlers, connect a client, perform concurrent RPC calls, and gracefully shut down.
Scope : This page focuses specifically on the example application structure and implementation. For information about creating custom service definitions, see Creating Service Definitions. For details about the Tokio server implementation, see Tokio RPC Server. For details about the Tokio client implementation, see Tokio RPC Client.
Overview
The WebSocket RPC application demonstrates the complete lifecycle of a Muxio-based RPC service:
- Server Setup : Binds to a random TCP port and registers RPC method handlers
- Client Connection : Connects to the server via WebSocket
- Concurrent RPC Execution : Makes multiple simultaneous RPC calls over a single connection
- State Management : Monitors connection state changes via callbacks
- Request/Response Verification : Validates that all responses match expected values
Sources : README.md:64-161
Application Structure
The example application is organized into two main workspace crates:
| Crate | Purpose | Key Components |
|---|---|---|
example-muxio-ws-rpc-app | Application executable | Server setup, client execution, main loop |
example-muxio-rpc-service-definition | Shared service contract | Add, Mult, Echo method definitions |
The shared service definition crate ensures compile-time type safety between client and server by providing a single source of truth for RPC method signatures, parameter types, and return types.
Sources : README.md:67-73
graph LR
subgraph "Shared Contract"
SERVICE_DEF["example-muxio-rpc-service-definition\nAdd, Mult, Echo"]
end
subgraph "Server Side"
SERVER_IMPL["Server Implementation\nDecode → Compute → Encode"]
end
subgraph "Client Side"
CLIENT_IMPL["Client Implementation\nEncode → Send → Decode"]
end
SERVICE_DEF -->|Defines API| SERVER_IMPL
SERVICE_DEF -->|Defines API| CLIENT_IMPL
CLIENT_IMPL -->|WebSocket Binary Frames| SERVER_IMPL
Service Definitions
The example uses three prebuffered RPC methods, all defined in the example-muxio-rpc-service-definition crate:
Method Inventory
| Method | Input Type | Output Type | Operation |
|---|---|---|---|
Add | Vec<f64> | f64 | Sum of all input values |
Mult | Vec<f64> | f64 | Product of all input values |
Echo | Vec<u8> | Vec<u8> | Returns input unchanged |
Each method implements the RpcMethodPrebuffered trait, which provides:
- Compile-time METHOD_ID : Generated by hashing the method name with xxhash-rust
- encode_request / decode_request : Serialization logic for parameters using bitcode
- encode_response / decode_response : Serialization logic for return values using bitcode
Sources : README.md:70-73
Server Implementation
The server setup involves creating an RpcServer instance, registering method handlers, and spawning the server task.
sequenceDiagram
participant Main as "main()"
participant Listener as "TcpListener"
participant Server as "RpcServer"
participant Endpoint as "endpoint()"
participant Task as "Server Task"
Main->>Listener: TcpListener::bind("127.0.0.1:0")
Listener-->>Main: Random port assigned
Main->>Server: RpcServer::new(None)
Main->>Server: Arc::new(server)
Main->>Endpoint: server.endpoint()
Main->>Endpoint: register_prebuffered(Add::METHOD_ID, handler)
Main->>Endpoint: register_prebuffered(Mult::METHOD_ID, handler)
Main->>Endpoint: register_prebuffered(Echo::METHOD_ID, handler)
Main->>Task: tokio::spawn(server.serve_with_listener)
Task->>Server: Begin accepting connections
Server Setup Sequence
Sources : README.md:86-128
Handler Registration
Handlers are registered using the register_prebuffered method from RpcServiceEndpointInterface. Each handler receives:
- request_bytes: Vec<u8> - The serialized request parameters
- _ctx - Request context (unused in this example)
The handler pattern follows these steps:
- Decode Request : README.md102 - Add::decode_request(&request_bytes)?
- Compute Result : README.md103 - let sum = request_params.iter().sum()
- Encode Response : README.md104 - Add::encode_response(sum)?
- Return Result : README.md105 - Ok(response_bytes)
Sources : README.md:101-117
Server Configuration Details
| Configuration | Value | Code Reference |
|---|---|---|
| Bind address | "127.0.0.1:0" | README.md87 |
| Port selection | Random available port | README.md87 |
| Server options | None (defaults) | README.md94 |
| Arc wrapping | Arc::new(RpcServer::new(None)) | README.md94 |
The server is wrapped in Arc to enable sharing across multiple tasks. The endpoint handle is obtained via README.md97 server.endpoint() and used for handler registration.
Sources : README.md:86-94
sequenceDiagram
participant Main as "main()"
participant Sleep as "tokio::time::sleep"
participant Client as "RpcClient"
participant Handler as "State Change Handler"
participant Methods as "RPC Methods"
Main->>Sleep: sleep(200ms)
Note over Sleep: Wait for server startup
Main->>Client: RpcClient::new(host, port)
Client-->>Main: Connected client
Main->>Client: set_state_change_handler(callback)
Client->>Handler: Register callback
Main->>Methods: Add::call(&client, params)
Main->>Methods: Mult::call(&client, params)
Main->>Methods: Echo::call(&client, params)
Note over Methods: All calls execute concurrently
Methods-->>Main: Results returned via join!
Client Implementation
The client connects to the server, sets up state monitoring, and performs concurrent RPC calls.
Client Connection Flow
Sources : README.md:130-160
State Change Monitoring
The client sets a state change handler at README.md:138-141 to monitor connection lifecycle events:
The handler receives RpcTransportState enum values indicating connection status. See Transport State Management for details on available states.
Sources : README.md:138-141
Concurrent RPC Execution
The example demonstrates concurrent request handling using Tokio's join! macro at README.md:144-151. Six RPC calls execute simultaneously over a single WebSocket connection:
| Call | Method | Parameters | Expected Result |
|---|---|---|---|
| res1 | Add::call | [1.0, 2.0, 3.0] | 6.0 |
| res2 | Add::call | [8.0, 3.0, 7.0] | 18.0 |
| res3 | Mult::call | [8.0, 3.0, 7.0] | 168.0 |
| res4 | Mult::call | [1.5, 2.5, 8.5] | 31.875 |
| res5 | Echo::call | b"testing 1 2 3" | b"testing 1 2 3" |
| res6 | Echo::call | b"testing 4 5 6" | b"testing 4 5 6" |
All requests are multiplexed over the single WebSocket connection, with the RpcDispatcher handling request correlation and response routing.
Sources : README.md:144-158
sequenceDiagram
participant App as "Application\nmain()"
participant AddCall as "Add::call()"
participant Client as "RpcClient"
participant Dispatcher as "RpcDispatcher"
participant WS as "WebSocket\nConnection"
participant ServerDisp as "Server RpcDispatcher"
participant Endpoint as "Endpoint"
participant Handler as "Add Handler"
App->>AddCall: Add::call(&client, [1.0, 2.0, 3.0])
AddCall->>AddCall: Add::encode_request(params)
AddCall->>Client: call_prebuffered(METHOD_ID, request_bytes)
Client->>Dispatcher: dispatch_request(METHOD_ID, bytes)
Note over Dispatcher: Assign unique request_id\nStore pending request
Dispatcher->>WS: Binary frames with METHOD_ID
WS->>ServerDisp: Receive binary frames
ServerDisp->>Endpoint: Route by METHOD_ID
Endpoint->>Handler: Invoke registered handler
Handler->>Handler: Add::decode_request(bytes)
Handler->>Handler: let sum = params.iter().sum()
Handler->>Handler: Add::encode_response(sum)
Handler-->>Endpoint: response_bytes
Endpoint-->>ServerDisp: response_bytes
ServerDisp->>WS: Binary frames with request_id
WS->>Dispatcher: Receive response frames
Note over Dispatcher: Match by request_id\nResolve pending future
Dispatcher-->>Client: response_bytes
Client-->>AddCall: response_bytes
AddCall->>AddCall: Add::decode_response(bytes)
AddCall-->>App: Result<f64>
Request/Response Flow
This diagram shows the complete path of a single RPC call through the system layers:
Sources : README.md:69-161
Running the Example
The example application requires the following dependencies in Cargo.toml:
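A sketch of what that dependency section might look like; the workspace-style version/path specifications below are placeholders, not the actual manifest.

```toml
[dependencies]
muxio-tokio-rpc-server = { workspace = true }
muxio-tokio-rpc-client = { workspace = true }
example-muxio-rpc-service-definition = { workspace = true }
tokio = { workspace = true, features = ["full"] }
tracing = { workspace = true }
tracing-subscriber = { workspace = true, features = ["env-filter"] }
```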
Execution Flow
- Initialize : README.md84 - tracing_subscriber::fmt().with_env_filter("info").init()
- Bind Port : README.md87 - TcpListener::bind("127.0.0.1:0").await.unwrap()
- Create Server : README.md94 - Arc::new(RpcServer::new(None))
- Register Handlers : README.md:100-118 - Register Add, Mult, Echo handlers
- Spawn Server : README.md:121-127 - tokio::spawn(server.serve_with_listener(listener))
- Wait for Startup : README.md133 - tokio::time::sleep(Duration::from_millis(200))
- Connect Client : README.md136 - RpcClient::new(&host, port).await.unwrap()
- Set Handler : README.md:138-141 - set_state_change_handler(callback)
- Make Calls : README.md:144-151 - Concurrent RPC invocations via join!
- Verify Results : README.md:153-158 - Assert expected values
Sources : README.md:82-161
Key Implementation Details
Server-Side Handler Closure Signature
Each handler registered via register_prebuffered has the signature:
|request_bytes: Vec<u8>, _ctx| async move { ... }
The handler must:
- Accept request_bytes: Vec<u8> and a context
- Return Result<Vec<u8>, RpcServiceError>
- Be async and move to capture necessary data
Sources : README.md:101-117
Client-Side Method Invocation
Each method provides a static call function with this pattern:
Method::call(&*rpc_client, params)
- Takes a reference to any type implementing RpcServiceCallerInterface
- Accepts typed parameters (e.g., Vec<f64>)
- Returns Result<T, RpcServiceError> with the typed response
The &* dereference at README.md:145-150 is needed because rpc_client is an Arc<RpcClient> (as returned by RpcClient::new()); dereferencing yields a plain &RpcClient reference, which satisfies the RpcServiceCallerInterface bound on the call.
Sources : README.md:144-151
Connection Lifecycle
The application demonstrates automatic connection management:
- Server Spawn : Server task runs independently in background
- Client Connect : Client establishes WebSocket connection
- State Tracking : Callback logs all state changes
- Request Processing : Multiple concurrent requests handled
- Implicit Cleanup : Server and client dropped when main() exits
No explicit shutdown code is needed; Tokio handles task cancellation when the runtime stops.
Sources : README.md:82-160
Related Examples
For additional examples and tutorials:
- Simple Calculator Service : #9.2 - Step-by-step tutorial building from scratch
- Cross-Platform Deployment : #10.1 - Deploying to native and WASM targets
- JavaScript/WASM Integration : #10.4 - Using WASM clients with JavaScript
Sources : README.md:64-161
Simple Calculator Service
Relevant source files
- README.md
- extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs
- extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs
- extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs
Purpose and Scope
This page provides a step-by-step tutorial for building a simple calculator RPC service using the rust-muxio framework. It demonstrates how to define service contracts, implement server-side handlers, and make client-side calls using the prebuffered RPC pattern. This tutorial covers the core workflow from service definition to testing.
For a walkthrough of the complete example application, see WebSocket RPC Application. For general information about creating service definitions, see Creating Service Definitions. For details on the prebuffered RPC mechanism, see Prebuffered RPC Calls.
Overview
The calculator service implements three basic mathematical operations:
- Add : Sums a list of floating-point numbers
- Mult : Multiplies a list of floating-point numbers
- Echo : Returns the input bytes unchanged
Each operation is defined as a separate RPC method implementing the RpcMethodPrebuffered trait, enabling type-safe, compile-time verified communication between client and server.
Sources: README.md:69-161 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:1-97
Service Architecture
Figure 1: Calculator Service Component Structure
This diagram illustrates how the shared service definitions in example_muxio_rpc_service_definition provide the contract between server handlers and client calls. Both sides depend on the same METHOD_ID and serialization logic.
Sources: README.md:69-161 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:1-97
Step 1: Define Service Methods
The calculator service requires three method definitions in the shared service definition crate. Each method implements RpcMethodPrebuffered, which requires:
| Required Item | Description | Example |
|---|---|---|
METHOD_ID | Compile-time generated constant identifying the method | Add::METHOD_ID |
Input | Associated type for method parameters | Vec<f64> |
Output | Associated type for method return value | f64 |
encode_request() | Serializes input to bytes | Uses bitcode |
decode_request() | Deserializes bytes to input | Uses bitcode |
encode_response() | Serializes output to bytes | Uses bitcode |
decode_response() | Deserializes bytes to output | Uses bitcode |
The Add, Mult, and Echo method definitions are located in the example_muxio_rpc_service_definition crate and used by both server and client implementations.
Sources: README.md:70-73 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:1-6
Step 2: Implement Server Handlers
Figure 2: Server Handler Registration and Execution Flow
Server Setup Pattern
The server setup follows a consistent pattern:
- Create RpcServer instance wrapped in Arc
- Get endpoint handle via server.endpoint()
- Register handlers using endpoint.register_prebuffered()
- Spawn server task with server.serve_with_listener()
Handler Implementation
Each handler is an async closure that:
- Receives request_bytes: Vec<u8> and an optional context
- Decodes the request using the method's decode_request()
- Performs the calculation
- Encodes the result using the method's encode_response()
- Returns Ok(response_bytes) or Err(e)
Example from README.md:
The Add handler implementation at README.md:101-106:
- Decodes request bytes to Vec<f64>
- Computes sum using request_params.iter().sum()
- Encodes result back to bytes
The Mult handler implementation at README.md:107-112:
- Decodes request bytes to Vec<f64>
- Computes product using request_params.iter().product()
- Encodes result back to bytes
The Echo handler implementation at README.md:113-117:
- Decodes request bytes to Vec<u8>
- Returns the same bytes unchanged
- Encodes result back to bytes
Sources: README.md:94-118 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:26-61
Step 3: Create Client Calls
Figure 3: Client Call Flow with Code Entities
Client Usage Pattern
The client-side code uses the RpcCallPrebuffered trait, which provides a high-level call() method. This trait is automatically implemented for all types that implement RpcMethodPrebuffered.
Key client steps:
- Create RpcClient connection to server
- Call methods using MethodName::call(&client, input)
- Await results or use tokio::join! for concurrent calls
- Handle Result<Output, RpcServiceError> responses
Example from README.md:
The client code at README.md:136-151 demonstrates:
- Creating client with RpcClient::new()
- Making concurrent calls using tokio::join!()
- Passing typed inputs directly (e.g., vec![1.0, 2.0, 3.0])
- Receiving typed outputs (e.g., f64 or Vec<u8>)
Argument Size Handling
The RpcCallPrebuffered implementation at extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:30-48 automatically handles large arguments:
| Argument Size | Transport Strategy | Location |
|---|---|---|
| < 64KB | Sent in rpc_param_bytes field | Header frame |
| ≥ 64KB | Sent in rpc_prebuffered_payload_bytes | Streamed after header |
This ensures that RPC calls with large argument sets (like the test at extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:155-203 using 12.8 MB payloads) do not fail due to transport limitations.
Sources: README.md:130-159 extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:10-98
Step 4: Full Integration Example
Complete Server Setup
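A sketch pulling Steps 1 and 2 together; module paths, the RpcServer::new(None) constructor, and the register_prebuffered signature are assumptions based on the patterns above.

```rust
use std::sync::Arc;
use tokio::net::TcpListener;
use example_muxio_rpc_service_definition::prebuffered::{Add, Echo, Mult}; // path assumed
use muxio_tokio_rpc_server::RpcServer;                                    // path assumed

async fn run_server() -> Result<(), Box<dyn std::error::Error>> {
    let listener = TcpListener::bind("127.0.0.1:0").await?;
    let server = Arc::new(RpcServer::new(None));
    let endpoint = server.endpoint();

    // Register the three calculator handlers.
    endpoint.register_prebuffered(Add::METHOD_ID, |bytes, _ctx| async move {
        let params = Add::decode_request(&bytes)?;
        let sum: f64 = params.iter().sum();
        Ok(Add::encode_response(sum)?)
    });
    endpoint.register_prebuffered(Mult::METHOD_ID, |bytes, _ctx| async move {
        let params = Mult::decode_request(&bytes)?;
        let product: f64 = params.iter().product();
        Ok(Mult::encode_response(product)?)
    });
    endpoint.register_prebuffered(Echo::METHOD_ID, |bytes, _ctx| async move {
        let params = Echo::decode_request(&bytes)?;
        Ok(Echo::encode_response(params)?)
    });

    // Serve until the task is cancelled or the process exits.
    let server_task = tokio::spawn({
        let server = Arc::clone(&server);
        async move { server.serve_with_listener(listener).await }
    });
    let _ = server_task.await?;
    Ok(())
}
```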
Complete Client Usage
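A sketch assembling Step 3 on the client side, under the same path and signature assumptions; the expected values follow the examples on this page.

```rust
use example_muxio_rpc_service_definition::prebuffered::{Add, Echo, Mult}; // path assumed
use muxio_tokio_rpc_client::RpcClient;                                    // path assumed

async fn run_client(host: &str, port: u16) -> Result<(), Box<dyn std::error::Error>> {
    let rpc_client = RpcClient::new(host, port).await?;

    // Concurrent, multiplexed calls over the single WebSocket connection.
    let (sum, product, echoed) = tokio::join!(
        Add::call(&*rpc_client, vec![1.0, 2.0, 3.0]),
        Mult::call(&*rpc_client, vec![8.0, 3.0, 7.0]),
        Echo::call(&*rpc_client, b"testing 1 2 3".to_vec()),
    );

    assert_eq!(sum?, 6.0);
    assert_eq!(product?, 168.0);
    assert_eq!(echoed?, b"testing 1 2 3".to_vec());
    Ok(())
}
```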
Sources: README.md:69-161
Testing Patterns
Success Path Testing
The integration tests at extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:16-97 demonstrate the complete success flow:
- Start server with handlers registered
- Wait for server to be ready
- Connect client
- Make multiple concurrent calls
- Assert all results are correct
Error Handling Testing
The error test at extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:99-152 shows:
- Register handler that returns Err("Addition failed".into())
- Make RPC call
- Verify error is RpcServiceError::Rpc variant
- Check error code is RpcServiceErrorCode::System
- Verify error message is propagated
Large Payload Testing
The large payload test at extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:154-203 validates:
- Payloads of 200× chunk size (12.8 MB) are handled correctly
- Request payload is automatically chunked
- Response payload is correctly reassembled
- Round-trip data integrity is maintained
Method Not Found Testing
The not-found test at extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:205-240 verifies:
- Server with no handlers registered
- Client calls method
- Error is RpcServiceError::Rpc with RpcServiceErrorCode::NotFound
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:1-241
Cross-Platform Compatibility
Figure 4: Cross-Platform Service Usage
The same calculator service definitions work across all client types:
| Client Type | Test File | Key Features |
|---|---|---|
| Tokio Client | extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs | Native async/await, Tokio runtime |
| WASM Client | extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs | Browser-compatible, JavaScript bridge |
Both test files at extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:81-88 and extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:126-133 use identical calling patterns:
This demonstrates the "write once, deploy everywhere" capability of the muxio framework.
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:1-97 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:39-142
Key Implementation Details
Handler Context
Handlers receive a context parameter (_ctx) at README.md101, README.md107, and README.md113, which can be used for:
- Access to connection metadata
- Request tracing
- Authentication state
- Custom per-connection data
Error Propagation
Errors flow through multiple layers:
- Handler returns Result<Vec<u8>, RpcServiceError>
- Serialization errors from encode_response() are automatically converted
- Client receives Result<Output, RpcServiceError>
- Error payload includes code and message
The error handling pattern at extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:140-150 shows proper error matching:
Concurrent Request Handling
Using tokio::join! at README.md:144-151 enables:
- Multiple concurrent RPC calls over single connection
- Request correlation by unique IDs
- Multiplexed responses arriving in any order
- Efficient connection utilization
Sources: README.md:94-161 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:99-152
Summary
The calculator service tutorial demonstrates the complete workflow for building type-safe RPC services with rust-muxio:
- Define methods implementing RpcMethodPrebuffered in shared crate
- Register handlers on server using endpoint.register_prebuffered()
- Make calls from client using MethodName::call(client, input)
- Test with integration tests covering success, errors, and edge cases
This pattern scales from simple calculators to complex production services while maintaining compile-time type safety and cross-platform compatibility.
Sources: README.md:69-161 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:1-241 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-313
Advanced Topics
Relevant source files
This section covers advanced features, optimization techniques, and deep implementation details for developers who need to extend muxio beyond standard usage patterns. Topics include custom transport implementations, cross-platform deployment strategies, performance tuning, and low-level protocol internals.
For basic RPC usage patterns, see RPC Framework. For standard transport implementations, see Transport Implementations. For testing approaches, see Testing.
Advanced Multiplexing Patterns
The RpcDispatcher supports patterns beyond simple request-response cycles. These patterns leverage the underlying frame multiplexing to achieve concurrent operations over a single connection.
Concurrent Request Pipelining
Multiple RPC requests can be issued without waiting for responses. The dispatcher assigns unique request IDs and correlates responses when they arrive, enabling high throughput even over high-latency connections.
sequenceDiagram
participant App as "Application"
participant Dispatcher as "RpcDispatcher"
participant Transport as "Transport"
App->>Dispatcher: Request A (ID=1)
Dispatcher->>Transport: Serialize frames
App->>Dispatcher: Request B (ID=2)
Dispatcher->>Transport: Serialize frames
App->>Dispatcher: Request C (ID=3)
Dispatcher->>Transport: Serialize frames
Transport->>Dispatcher: Response C (ID=3)
Dispatcher->>App: Return C result
Transport->>Dispatcher: Response A (ID=1)
Dispatcher->>App: Return A result
Transport->>Dispatcher: Response B (ID=2)
Dispatcher->>App: Return B result
The dispatcher maintains a HashMap of pending requests indexed by request ID. When frames arrive, the dispatcher extracts the request ID from the RpcHeader and routes the payload to the appropriate response channel.
Sources: muxio/src/rpc_dispatcher.rs
Interleaved Frame Transmission
Large payloads are automatically chunked into frames. Multiple concurrent requests can have their frames interleaved during transmission, ensuring no single large transfer monopolizes the connection.
| Frame Sequence | Request ID | Frame Type | Payload Size |
|---|---|---|---|
| Frame 1 | 42 | First | 64 KB |
| Frame 2 | 43 | First | 64 KB |
| Frame 3 | 42 | Middle | 64 KB |
| Frame 4 | 43 | Last | 32 KB |
| Frame 5 | 42 | Last | 32 KB |
This interleaving is transparent to application code. The framing protocol handles reassembly using the FrameType enum values: First, Middle, Last, and OnlyChunk.
Sources: muxio/src/rpc_request_response.rs
Request Cancellation
RPC requests can be cancelled mid-flight by dropping the response future on the client side. The dispatcher detects the dropped receiver and removes the pending request from its internal map.
The cancellation is local to the client; the server may continue processing unless explicit cancellation messages are implemented at the application level. For distributed cancellation, implement a custom RPC method that signals the server to abort processing.
Sources: muxio/src/rpc_dispatcher.rs
Transport Adapter Architecture
The core RpcDispatcher interacts with transports through a callback-based interface. This design enables integration with diverse runtime environments without coupling to specific I/O frameworks.
graph TB
subgraph "RpcDispatcher Core"
Dispatcher["RpcDispatcher"]
PendingMap["pending_requests\nHashMap<u32, ResponseSender>"]
ReassemblyMap["pending_responses\nHashMap<u32, Vec<Frame>>"]
end
subgraph "Transport Adapter"
ReadCallback["read_bytes callback\nCalled by transport"]
WriteCallback["write_bytes callback\nProvided to dispatcher"]
Transport["Transport Implementation\n(WebSocket/TCP/Custom)"]
end
subgraph "Application Layer"
RpcCaller["RpcServiceCallerInterface"]
RpcEndpoint["RpcServiceEndpointInterface"]
end
Transport -->|Incoming bytes| ReadCallback
ReadCallback -->|process_incoming_bytes| Dispatcher
Dispatcher -->|Uses internally| PendingMap
Dispatcher -->|Uses internally| ReassemblyMap
Dispatcher -->|Outgoing bytes| WriteCallback
WriteCallback -->|Send to network| Transport
RpcCaller -->|Send request| Dispatcher
Dispatcher -->|Deliver response| RpcCaller
RpcEndpoint -->|Send response| Dispatcher
Dispatcher -->|Deliver request| RpcEndpoint
Dispatcher Interface Contract
The dispatcher exposes process_incoming_bytes for feeding received data and accepts a write_bytes closure for transmitting serialized frames. This bidirectional callback model decouples the dispatcher from transport-specific details.
Sources: muxio/src/rpc_dispatcher.rs extensions/muxio-tokio-rpc-client/src/rpc_client.rs extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs
Custom Transport Implementation
To implement a new transport, provide three components:
- Connection Management : Establish and maintain the underlying connection (TCP, UDP, IPC, etc.)
- Read Integration : Call dispatcher.process_incoming_bytes() when data arrives
- Write Integration : Pass a closure to the dispatcher that transmits bytes
Example Transport Structure
The transport must handle buffering, error recovery, and connection state. The dispatcher remains unaware of these transport-specific concerns.
For reference implementations, see extensions/muxio-tokio-rpc-client/src/rpc_client.rs:50-200 for Tokio/WebSocket integration and extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:80-250 for WASM/browser integration.
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs
Performance Optimization Strategies
Serialization Overhead Reduction
The default service definition examples use bitcode for serialization. For performance-critical paths, consider these alternatives:
| Serialization Format | Use Case | Relative Speed | Binary Size |
|---|---|---|---|
bitcode | General purpose | Baseline (1x) | Compact |
bytemuck cast | Numeric arrays | 10-50x faster | Minimal |
| Custom binary layout | Fixed schemas | 5-20x faster | Optimal |
| Zero-copy views | Large buffers | Near-instant | Same as input |
For numeric array transfers (e.g., sensor data, time series), implement RpcMethodPrebuffered with bytemuck::cast_slice to avoid serialization overhead entirely. This approach requires fixed-size, layout-compatible types.
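A sketch of that idea for an f64 payload, using bytemuck::cast_slice on the outbound path and a copying decode that sidesteps alignment requirements; wiring these helpers into an RpcMethodPrebuffered implementation is left out and assumed.

```rust
// Hedged sketch: cast-based encode, copy-based decode for Vec<f64> payloads.
fn encode_f64s(values: &[f64]) -> Vec<u8> {
    // Casting f64 -> u8 cannot fail: u8 has alignment 1 and the byte length matches.
    bytemuck::cast_slice(values).to_vec()
}

fn decode_f64s(bytes: &[u8]) -> Vec<f64> {
    // Copy through fixed-size chunks to avoid alignment requirements on `bytes`.
    // A production version should also pin the byte order explicitly.
    bytes
        .chunks_exact(std::mem::size_of::<f64>())
        .map(|chunk| f64::from_ne_bytes(chunk.try_into().unwrap()))
        .collect()
}
```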
Sources: Cargo.toml53 examples/example-muxio-rpc-service-definition/src/prebuffered.rs
Frame Size Tuning
The framing protocol uses a default chunk size. Adjusting this affects throughput and latency:
- Smaller frames (8-16 KB) : Lower latency for concurrent requests, better interleaving
- Larger frames (64-128 KB) : Higher throughput, fewer frame headers, reduced CPU overhead
For bulk data transfer, prefer larger frames. For interactive applications, prefer smaller frames. The optimal size depends on the transport's MTU and the application's latency requirements.
Sources: muxio/src/rpc_request_response.rs
graph TB
subgraph "Application Threads"
Thread1["Thread 1"]
Thread2["Thread 2"]
Thread3["Thread 3"]
end
subgraph "Connection Pool"
Pool["Arc<RpcClient>"]
Connection1["Connection 1"]
Connection2["Connection 2"]
end
subgraph "Server"
Server["RpcServer"]
end
Thread1 -->|Shared ref| Pool
Thread2 -->|Shared ref| Pool
Thread3 -->|Shared ref| Pool
Pool -->|Uses| Connection1
Pool -->|Uses| Connection2
Connection1 -->|WebSocket| Server
Connection2 -->|WebSocket| Server
Connection Pooling
For native clients making many short-lived RPC calls, maintain a connection pool to amortize connection establishment overhead. The RpcClient is Send + Sync, allowing shared usage across threads.
Wrap clients in Arc and clone the Arc across threads. Each thread can concurrently issue requests over the same connection without blocking.
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs README.md:92-128
Memory Management for Streaming
When using streaming RPC calls (see Streaming RPC Calls), allocate channels with bounded capacity to prevent unbounded memory growth if the consumer cannot keep pace with the producer.
Bounded channels apply backpressure, blocking the sender when the buffer fills. This prevents memory exhaustion at the cost of potential deadlock if channels form cycles. For unidirectional streams, bounded channels are typically safe.
Sources: extensions/muxio-rpc-service-caller/src/streaming/ extensions/muxio-rpc-service-endpoint/src/streaming/
Cross-Platform Deployment Patterns
graph TB
subgraph "Shared Crate"
ServiceDef["example-muxio-rpc-service-definition"]
AddMethod["Add::METHOD_ID\nencode_request\ndecode_response"]
MultMethod["Mult::METHOD_ID\nencode_request\ndecode_response"]
end
subgraph "Native Client"
NativeApp["Native Application"]
RpcClient["RpcClient\n(Tokio)"]
end
subgraph "WASM Client"
WasmApp["WASM Application"]
RpcWasmClient["RpcWasmClient\n(wasm-bindgen)"]
end
subgraph "Server"
RpcServer["RpcServer\n(Tokio)"]
Handlers["Method Handlers"]
end
ServiceDef -->|Defines API| AddMethod
ServiceDef -->|Defines API| MultMethod
NativeApp -->|Uses| AddMethod
NativeApp -->|Calls via| RpcClient
WasmApp -->|Uses| AddMethod
WasmApp -->|Calls via| RpcWasmClient
RpcServer -->|Uses| AddMethod
RpcServer -->|Implements| Handlers
RpcClient -->|WebSocket| RpcServer
RpcWasmClient -->|WebSocket| RpcServer
Shared Service Definitions
The key to cross-platform deployment is defining RPC methods in a platform-agnostic crate that both native and WASM clients import.
All three environments compile the same service definition crate. The native and WASM clients use different transport implementations (RpcClient vs RpcWasmClient), but the method invocation syntax is identical.
Sources: examples/example-muxio-rpc-service-definition/ README.md:47-48
Conditional Compilation for Platform-Specific Features
Use Cargo features to enable platform-specific functionality:
Application code can conditionally compile different client types:
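A sketch of one common approach: re-export the platform-appropriate client under a single alias keyed on the compilation target (a Cargo feature flag could gate the same way); the module paths are assumptions.

```rust
// Hedged sketch: one alias, two transports. Downstream code that is generic
// over RpcServiceCallerInterface compiles unchanged on both targets.
#[cfg(not(target_arch = "wasm32"))]
pub use muxio_tokio_rpc_client::RpcClient as PlatformClient; // path assumed

#[cfg(target_arch = "wasm32")]
pub use muxio_wasm_rpc_client::RpcWasmClient as PlatformClient; // path assumed
```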
The service definitions remain unchanged. Only the transport layer varies.
Sources: extensions/muxio-tokio-rpc-client/Cargo.toml extensions/muxio-wasm-rpc-client/Cargo.toml
WASM Binary Size Optimization
WASM builds benefit from aggressive optimization flags:
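A sketch of typical size-oriented release settings; these are standard Cargo profile keys, not values taken from this workspace.

```toml
[profile.release]
opt-level = "z"   # optimize for size
lto = true        # link-time optimization across crates
codegen-units = 1 # fewer units, better optimization
strip = true      # strip symbols from the output
panic = "abort"   # drop unwinding machinery
```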
Additionally, ensure the WASM client does not transitively depend on native-only crates like tokio or tokio-tungstenite. The workspace structure in Cargo.toml:19-31 isolates WASM-specific dependencies to prevent bloat.
Sources: Cargo.toml extensions/muxio-wasm-rpc-client/Cargo.toml
Low-Level Protocol Details
Binary Frame Format
Each frame transmitted over the transport follows this structure:
| Offset | Size | Field | Description |
|---|---|---|---|
| 0 | 1 | FrameType | Enum: First=0, Middle=1, Last=2, OnlyChunk=3 |
| 1 | 4 | request_id | u32 unique identifier |
| 5 | N | payload | Serialized RPC data |
The FrameType enum enables the dispatcher to reassemble multi-frame messages. For single-frame messages, OnlyChunk (value 3) is used, avoiding intermediate buffering.
Sources: muxio/src/rpc_request_response.rs
RPC Header Structure
Within the frame payload, RPC messages include a header:
The method_id field uses the xxhash-rust XXH3 algorithm to hash method names at compile time. For example, "Add" hashes to a specific u64. This hash serves as the routing key on the server.
The optional fields support different RPC patterns:
- params_bytes : Small, inline parameters (e.g., method arguments)
- prebuffered_payload_bytes : Large, prebuffered data (e.g., file contents)
- Absence of both : Parameter-less methods
Sources: muxio/src/rpc_request_response.rs extensions/muxio-rpc-service/src/lib.rs
Method ID Collision Detection
Method IDs are generated at compile time using the xxhash crate:
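A sketch of the compile-time hash, assuming the xxhash-rust crate with its const_xxh3 feature enabled; the actual generation likely lives behind the RpcMethodPrebuffered machinery.

```rust
// Hedged sketch: a const method ID derived from the method name.
use xxhash_rust::const_xxh3::xxh3_64;

pub const ADD_METHOD_ID: u64 = xxh3_64(b"Add");
```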
While XXH3 is a high-quality hash, collisions are theoretically possible. The system does not automatically detect collisions across a codebase. If two methods hash to the same ID, the server will route requests to whichever handler was registered last.
To mitigate collision risk:
- Use descriptive, unique method names
- Implement integration tests that register all methods and verify correct routing
- Consider a build-time collision checker using build.rs
Sources: extensions/muxio-rpc-service/src/lib.rs Cargo.toml64
Advanced Error Handling Strategies
Layered Error Propagation
Errors flow through multiple system layers, each with its own error type:
Each layer handles errors appropriate to its abstraction level. Application errors are serialized and returned in RPC responses. Dispatcher errors indicate protocol violations. Transport errors trigger state changes.
Sources: muxio/src/rpc_request_response.rs extensions/muxio-rpc-service/src/error.rs
Handling Partial Failures
When a server handler fails, the error is serialized into the RpcResponse and transmitted to the client. The client's RpcServiceCallerInterface implementation deserializes the error and returns it to the application.
For transient failures (e.g., temporary resource unavailability), implement retry logic in the application layer. The transport layer does not retry failed RPC calls.
For permanent failures (e.g., method not implemented), the server returns RpcServiceError::MethodNotFound. Clients should not retry these errors.
Sources: extensions/muxio-rpc-service/src/error.rs extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs
Connection State Management
The RpcTransportState enum tracks connection lifecycle:
Clients can register a state change callback to implement custom reconnection strategies:
The handler is invoked on every state transition, enabling reactive error handling.
Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs README.md:138-141
graph LR
subgraph "Test Harness"
ClientDispatcher["Client RpcDispatcher"]
ServerDispatcher["Server RpcDispatcher"]
ClientToServer["mpsc channel"]
ServerToClient["mpsc channel"]
end
ClientDispatcher -->|write_bytes| ClientToServer
ClientToServer -->|read_bytes| ServerDispatcher
ServerDispatcher -->|write_bytes| ServerToClient
ServerToClient -->|read_bytes| ClientDispatcher
Testing Advanced Scenarios
Mock Transport for Unit Tests
Create an in-memory transport using channels to test RPC logic without network I/O:
This pattern isolates RPC logic from transport concerns, enabling deterministic tests of error conditions, cancellation, and concurrent request handling.
Sources: extensions/muxio-ext-test/
Integration Testing Across Platforms
Run integration tests that compile the same service definition for both native and WASM targets:
Use wasm-pack test --headless --chrome to run WASM tests in a browser environment. This validates that both client types correctly implement the RpcServiceCallerInterface.
Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs extensions/muxio-wasm-rpc-client/tests/
Load Testing and Benchmarking
Use the criterion crate to benchmark serialization overhead, frame processing throughput, and end-to-end latency:
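A sketch of a serialization micro-benchmark; it benches bitcode::encode directly and assumes criterion is declared as a dev-dependency with a corresponding [[bench]] target.

```rust
use criterion::{criterion_group, criterion_main, Criterion};
use std::hint::black_box;

// Hedged sketch: measure encoding cost for a 1024-element f64 vector.
fn bench_bitcode_encode(c: &mut Criterion) {
    let values: Vec<f64> = (0..1024).map(|i| i as f64).collect();
    c.bench_function("bitcode_encode_1024_f64", |b| {
        b.iter(|| bitcode::encode(black_box(&values)))
    });
}

criterion_group!(benches, bench_bitcode_encode);
criterion_main!(benches);
```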
For distributed load testing, spawn multiple client instances and measure:
- Requests per second
- 99th percentile latency
- Connection establishment time
- Memory usage under load
Sources: Cargo.toml54 DRAFT.md23
Monitoring and Observability
Tracing Integration
The system uses the tracing crate for structured logging. Enable verbose logging during development:
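For example (the muxio target name in the filter is an assumption):

```rust
// Hedged sketch: verbose filter during development; requires the
// tracing-subscriber "env-filter" feature.
tracing_subscriber::fmt()
    .with_env_filter("info,muxio=debug")
    .init();
```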
Key tracing events include:
- Request dispatch: request_id, method_id, params_size
- Response completion: request_id, elapsed_time_ms
- Connection state changes: old_state, new_state
- Frame processing: frame_type, payload_size
Sources: Cargo.toml37 README.md84
Custom Metrics Collection
Implement custom metrics by wrapping the RpcServiceCallerInterface:
Override trait methods to record metrics before delegating to the wrapped client. This pattern enables integration with Prometheus, statsd, or custom monitoring systems without modifying core muxio code.
Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs
Connection Health Monitoring
Implement heartbeat RPC methods to detect dead connections:
Periodically invoke Ping::call() and measure response time. Elevated latency or timeouts indicate network degradation.
Sources: examples/example-muxio-rpc-service-definition/src/prebuffered.rs
This page covers advanced usage patterns, optimization techniques, and low-level implementation details. For extension development guidelines, see extensions/README.md. For basic usage, start with Overview.
Cross-Platform Deployment
Relevant source files
- README.md
- extensions/muxio-rpc-service-caller/src/lib.rs
- extensions/muxio-rpc-service-caller/tests/dynamic_channel_tests.rs
- extensions/muxio-tokio-rpc-client/src/lib.rs
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs
- extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs
- extensions/muxio-wasm-rpc-client/src/lib.rs
- extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs
- extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs
Purpose and Scope
This document explains how to deploy rust-muxio RPC services across multiple platforms—specifically native environments using Tokio and web browsers using WebAssembly. The core principle is "write once, deploy everywhere": the same service definitions and application logic can be used by both native clients and WASM clients without modification.
For information about implementing custom transports beyond the provided Tokio and WASM clients, see Custom Transport Implementation. For details on JavaScript/WASM integration patterns, see JavaScript/WASM Integration. For service definition mechanics, see Service Definitions.
Sources : README.md:1-166 [Diagram 2 from high-level architecture]
Cross-Platform Architecture Overview
The rust-muxio system achieves cross-platform deployment through careful separation of concerns. The architecture layers are designed so that platform-specific code is isolated to the transport implementations, while the core multiplexing logic, RPC protocol, and service definitions remain platform-agnostic.
graph TB
subgraph "Platform Agnostic"
SERVICE_DEF["Service Definitions\nRpcMethodPrebuffered trait\nexample-muxio-rpc-service-definition"]
CORE["muxio Core\nRpcDispatcher\nBinary Framing"]
RPC_PROTOCOL["RPC Protocol\nRpcRequest/RpcResponse\nMethod ID routing"]
end
subgraph "Native Platform"
TOKIO_SERVER["muxio-tokio-rpc-server\nRpcServer"]
TOKIO_CLIENT["muxio-tokio-rpc-client\nRpcClient"]
TOKIO_RT["tokio runtime\ntokio-tungstenite"]
end
subgraph "Web Platform"
WASM_CLIENT["muxio-wasm-rpc-client\nRpcWasmClient"]
JS_BRIDGE["wasm-bindgen\nJavaScript WebSocket"]
BROWSER["Browser Environment"]
end
SERVICE_DEF --> CORE
CORE --> RPC_PROTOCOL
RPC_PROTOCOL --> TOKIO_SERVER
RPC_PROTOCOL --> TOKIO_CLIENT
RPC_PROTOCOL --> WASM_CLIENT
TOKIO_SERVER --> TOKIO_RT
TOKIO_CLIENT --> TOKIO_RT
WASM_CLIENT --> JS_BRIDGE
JS_BRIDGE --> BROWSER
TOKIO_CLIENT -.WebSocket.-> TOKIO_SERVER
WASM_CLIENT -.WebSocket.-> TOKIO_SERVER
Layered Abstraction Model
The critical architectural insight is that both RpcClient and RpcWasmClient implement the same RpcServiceCallerInterface trait extensions/muxio-rpc-service-caller/src/caller_interface.rs:1-11 This allows application code to be written against the interface rather than a specific implementation.
Sources : README.md:34-48 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:278-335 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:154-181
Shared Service Definitions
Cross-platform deployment relies on shared service definitions that work identically on all platforms. Service definitions are typically placed in a separate crate that both client and server depend on.
Service Definition Structure
| Component | Role | Platform Dependency |
|---|---|---|
RpcMethodPrebuffered trait | Defines method contract | None - pure Rust traits |
METHOD_ID | Compile-time generated hash | None - const expression |
encode_request / decode_request | Parameter serialization | None - uses bitcode |
encode_response / decode_response | Result serialization | None - uses bitcode |
The service definition crate is a standard Rust library with no platform-specific dependencies. Here's how different platforms use it:
graph LR
subgraph "example-muxio-rpc-service-definition"
ADD["Add::METHOD_ID\nAdd::encode_request\nAdd::decode_request"]
MULT["Mult::METHOD_ID\nMult::encode_request\nMult::decode_request"]
ECHO["Echo::METHOD_ID\nEcho::encode_request\nEcho::decode_request"]
end
subgraph "Native Client"
TOKIO_APP["Application Code"]
TOKIO_CLIENT["RpcClient"]
end
subgraph "WASM Client"
WASM_APP["Application Code"]
WASM_CLIENT["RpcWasmClient"]
end
subgraph "Server"
SERVER["RpcServer"]
ENDPOINT["RpcServiceEndpoint"]
end
ADD --> TOKIO_APP
ADD --> WASM_APP
ADD --> ENDPOINT
MULT --> TOKIO_APP
MULT --> WASM_APP
MULT --> ENDPOINT
ECHO --> TOKIO_APP
ECHO --> WASM_APP
ECHO --> ENDPOINT
TOKIO_APP --> TOKIO_CLIENT
WASM_APP --> WASM_CLIENT
ENDPOINT --> SERVER
Both native and WASM clients use identical invocation code. The only difference is how the client instance is created.
Sources : README.md:49-50 README.md:69-160
Native Deployment with Tokio
Native deployment uses the Tokio async runtime and provides full-featured client and server implementations.
graph TB
APP["Application main"]
SERVER["RpcServer::new"]
ENDPOINT["endpoint.register_prebuffered"]
LISTENER["TcpListener::bind"]
SERVE["server.serve_with_listener"]
APP --> SERVER
SERVER --> ENDPOINT
ENDPOINT --> |"handler: |bytes, ctx| async {...}"|ENDPOINT
APP --> LISTENER
LISTENER --> SERVE
subgraph "Per-Connection"
ACCEPT["Accept WebSocket"]
DISPATCHER["RpcDispatcher"]
HANDLER["Handler invocation"]
RESPOND["Send response"]
ACCEPT --> DISPATCHER
DISPATCHER --> HANDLER
HANDLER --> RESPOND
end
SERVE --> ACCEPT
Server Setup
The RpcServer [extensions/muxio-tokio-rpc-server/] uses Axum and Tokio-Tungstenite for WebSocket transport:
Server handlers are registered by METHOD_ID and receive deserialized requests. The server is platform-agnostic in its handler logic—handlers work with bytes and don't know if the client is native or WASM.
Client Setup
The RpcClient extensions/muxio-tokio-rpc-client/src/rpc_client.rs:54-271 establishes a WebSocket connection and manages concurrent RPC calls:
The client spawns three background tasks: heartbeat for connection health, receive loop for incoming data, and send loop for outgoing data. The Arc<RpcClient> is returned, allowing concurrent RPC calls from multiple tasks.
Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:110-271 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:278-335
WASM Deployment for Web Browsers
WASM deployment compiles the client to WebAssembly and bridges to JavaScript's WebSocket API. The key difference from native deployment is that the WASM client does not manage the WebSocket connection—JavaScript does.
graph TB
subgraph "Rust WASM"
WASM_CLIENT["RpcWasmClient::new(emit_callback)"]
DISPATCHER["RpcDispatcher"]
ENDPOINT["RpcServiceEndpoint"]
READ_BYTES["read_bytes(bytes)"]
HANDLE_CONNECT["handle_connect()"]
HANDLE_DISCONNECT["handle_disconnect()"]
end
subgraph "JavaScript Host"
WS["WebSocket"]
ON_OPEN["onopen"]
ON_MESSAGE["onmessage"]
ON_CLOSE["onclose"]
EMIT_FN["emit function"]
end
subgraph "Application Code"
INIT["init_static_client()"]
RPC_CALL["Method::call()"]
end
INIT --> WASM_CLIENT
WASM_CLIENT --> |callback|EMIT_FN
EMIT_FN --> WS
WS --> ON_OPEN
WS --> ON_MESSAGE
WS --> ON_CLOSE
ON_OPEN --> HANDLE_CONNECT
ON_MESSAGE --> READ_BYTES
ON_CLOSE --> HANDLE_DISCONNECT
RPC_CALL --> DISPATCHER
READ_BYTES --> DISPATCHER
READ_BYTES --> ENDPOINT
WASM Client Architecture
The RpcWasmClient extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:17-181 is constructed with an emit_callback that sends bytes to JavaScript. JavaScript manages the WebSocket lifecycle and calls Rust methods when events occur.
graph LR
JS_INIT["JavaScript: init()"]
RUST_INIT["init_static_client()"]
STATIC_REF["MUXIO_STATIC_RPC_CLIENT_REF\nthread_local RefCell"]
CLIENT["Arc<RpcWasmClient>"]
subgraph "Application Code"
GET["get_static_client()"]
WITH_ASYNC["with_static_client_async(closure)"]
RPC["Method::call()"]
end
JS_INIT --> RUST_INIT
RUST_INIT --> STATIC_REF
STATIC_REF --> CLIENT
GET --> STATIC_REF
WITH_ASYNC --> STATIC_REF
WITH_ASYNC --> RPC
Static Client Pattern
For WASM, a common pattern is to use a static global client instance initialized once at application startup:
The static client pattern extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:9-81 provides init_static_client() for initialization, get_static_client() for synchronous access, and with_static_client_async() for async operations that return JavaScript promises.
Sources : extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:26-152 extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:13-81
Platform-Specific Implementation Details
While service definitions and application logic are shared, each platform has implementation differences in how it manages connections and state.
Connection Management
| Aspect | Native (RpcClient) | WASM (RpcWasmClient) |
|---|---|---|
| Connection establishment | RpcClient::new() creates WebSocket | JavaScript creates WebSocket, then init_static_client() |
| Heartbeat | Automatic via background task | Managed by JavaScript |
| Reconnection | Must create new RpcClient instance | Managed by JavaScript |
| Disconnection detection | Receive loop detects broken connection | JavaScript calls handle_disconnect() |
| State change notification | Automatic via shutdown_async() | Manual via handle_connect() / handle_disconnect() |
State Handling Differences
Both clients implement RpcServiceCallerInterface, but state management differs:
Native Client extensions/muxio-tokio-rpc-client/src/rpc_client.rs:56-108:
- is_connected is an AtomicBool managed internally
- shutdown_async() and shutdown_sync() handle disconnection
- Background tasks automatically trigger state transitions
- State change handler invoked from background tasks
WASM Client extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:37-143:
- is_connected is an AtomicBool updated by explicit calls
- JavaScript must call handle_connect() and handle_disconnect()
- No background tasks; all events are synchronous from JavaScript
- State change handler invoked from explicit lifecycle methods
Error Propagation
Both clients fail pending requests on disconnection using fail_all_pending_requests() extensions/muxio-tokio-rpc-client/src/rpc_client.rs102 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:130-133 This ensures that RPC calls awaiting responses receive errors rather than hanging indefinitely.
Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:54-108 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:26-143
Build Process for Different Targets
Native Build
For native deployment, build is straightforward using standard Cargo:
# Server (includes Tokio server and service definition)
cargo build --release -p example-muxio-ws-rpc-app
# Client (includes Tokio client and service definition)
cargo build --release -p muxio-tokio-rpc-client
Both server and client depend on the same service definition crate. The binary includes the full Tokio runtime and WebSocket libraries.
WASM Build
WASM deployment requires building with the wasm32-unknown-unknown target:
# Install wasm32 target if not present
rustup target add wasm32-unknown-unknown
# Build WASM client
cargo build --release --target wasm32-unknown-unknown -p muxio-wasm-rpc-client
# Generate JavaScript bindings
wasm-bindgen target/wasm32-unknown-unknown/release/muxio_wasm_rpc_client.wasm \
--out-dir ./output \
--target web
The WASM build excludes Tokio dependencies and uses wasm-bindgen for JavaScript interop. The resulting .wasm file and JavaScript glue code can be loaded in any modern browser.
Conditional Compilation
The codebase uses feature flags and conditional compilation to handle platform differences. For example:
- Native client imports tokio and tokio-tungstenite
- WASM client imports wasm-bindgen and js-sys
- Service definitions have no platform-specific imports
Sources : README.md:53-61 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:1-11 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:1-19
Application Code Portability
The key benefit of cross-platform deployment is that application code can be written once and used on all platforms. Here's how this works in practice:
Generic Application Code Pattern
Application code can be written against RpcServiceCallerInterface:
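A sketch of such code; the do_work name comes from the paragraph below, while the trait and error paths and the exact Add::call signature are assumptions.

```rust
use example_muxio_rpc_service_definition::prebuffered::Add; // path assumed
use muxio_rpc_service::error::RpcServiceError;              // path assumed
use muxio_rpc_service_caller::RpcServiceCallerInterface;    // path assumed

// Hedged sketch: generic over the caller interface, so the same function
// works with RpcClient (native) and RpcWasmClient (browser).
async fn do_work<C: RpcServiceCallerInterface>(client: &C) -> Result<f64, RpcServiceError> {
    Add::call(client, vec![1.0, 2.0, 3.0]).await
}
```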
The same do_work function can accept either RpcClient or RpcWasmClient because both implement RpcServiceCallerInterface. The only platform-specific code is client instantiation.
Integration Example from Tests
The integration tests demonstrate cross-platform compatibility by running the same test logic against different client types extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:1-292:
Both test paths use identical RPC invocation code and assertion logic. The test validates that both clients produce identical results when communicating with the same server.
Sources : README.md:46-48 extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:15-165 extensions/muxio-rpc-service-caller/tests/dynamic_channel_tests.rs:15-88
Testing Cross-Platform Compatibility
The system provides multiple mechanisms for testing cross-platform code:
Mock Client Pattern
For unit tests, a mock client can implement RpcServiceCallerInterface without actual network communication extensions/muxio-rpc-service-caller/tests/dynamic_channel_tests.rs:19-88:
The mock client allows testing application logic without starting actual servers or WebSocket connections.
Integration Test Strategy
Integration tests validate that both client types work correctly with a real server:
| Test Type | Native Client | WASM Client | Server |
|---|---|---|---|
| Connection lifecycle | ✓ transport_state_tests.rs:36-165 | Simulated via manual state calls | ✓ Tokio server |
| Request cancellation | ✓ transport_state_tests.rs:169-292 | Simulated | ✓ Tokio server |
| Concurrent requests | ✓ | Via JavaScript concurrency | ✓ |
| Error propagation | ✓ | ✓ | ✓ |
The integration tests ensure that cross-platform abstractions work correctly in practice, not just in theory.
Sources : extensions/muxio-rpc-service-caller/tests/dynamic_channel_tests.rs:15-167 extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:15-292
Summary
Cross-platform deployment in rust-muxio is achieved through:
- Shared Service Definitions: RpcMethodPrebuffered trait enables type-safe, platform-agnostic service contracts
- Abstract Client Interface: RpcServiceCallerInterface allows application code to work with any client implementation
- Platform-Specific Transports: RpcClient for native Tokio and RpcWasmClient for WASM, both implementing the same interface
- Minimal Platform Code: Only client instantiation and connection management are platform-specific
- Consistent Testing: Mock clients and integration tests validate cross-platform compatibility
The architecture ensures that developers write service logic once and deploy to native and web environments without modification.
Performance Optimization
Relevant source files
This page describes techniques and strategies for optimizing throughput, latency, and memory usage in rust-muxio applications. It covers serialization efficiency, chunking strategies, memory allocation patterns, and profiling approaches. For general architectural patterns, see Core Concepts. For transport-specific tuning, see Transport Implementations.
Binary Serialization Efficiency
The system uses bitcode for binary serialization of RPC method parameters and responses. This provides compact encoding with minimal overhead compared to text-based formats like JSON.
graph LR A["Application\nRust Types"] --> B["encode_request\nbitcode::encode"] B --> C["Vec<u8>\nBinary Buffer"] C --> D["RpcHeader\nrpc_metadata_bytes"] D --> E["Frame Protocol\nLow-Level Transport"] E --> F["decode_request\nbitcode::decode"] F --> A
Serialization Strategy
Sources:
- src/rpc/rpc_dispatcher.rs:249-251
- Diagram 6 in high-level architecture
Optimization Guidelines
| Technique | Impact | Implementation |
|---|---|---|
| Use #[derive(bitcode::Encode, bitcode::Decode)] | Automatic optimal encoding | Applied in service definitions |
| Avoid nested Option<Option<T>> | Reduces byte overhead | Flatten data structures |
| Prefer fixed-size types over variable-length | Predictable buffer sizes | Use [u8; N] instead of Vec<u8> when size is known |
| Use u32 instead of u64 when range allows | Halves integer encoding size | RPC method IDs use u32 |
Sources:
- Service definition patterns in example-muxio-rpc-service-definition
- Cargo.lock:158-168 (bitcode dependencies)
Chunking Strategy and Throughput
The max_chunk_size parameter controls how large payloads are split into multiple frames. Optimal chunk size balances latency, memory usage, and transport efficiency.
graph TB
subgraph "Small Chunks (e.g., 1KB)"
A1["Lower Memory\nPer Request"] --> A2["More Frames"]
A2 --> A3["Higher CPU\nFraming Overhead"]
end
subgraph "Large Chunks (e.g., 64KB)"
B1["Higher Memory\nPer Request"] --> B2["Fewer Frames"]
B2 --> B3["Lower CPU\nFraming Overhead"]
end
subgraph "Optimal Range"
C1["8KB - 16KB"] --> C2["Balance of\nMemory & CPU"]
end
Chunk Size Selection
Performance Characteristics:
| Chunk Size | Latency | Memory | CPU | Best For |
|---|---|---|---|---|
| 1-2 KB | Excellent | Minimal | High overhead | Real-time, WASM |
| 4-8 KB | Very Good | Low | Moderate | Standard RPC |
| 16-32 KB | Good | Moderate | Low | Large payloads |
| 64+ KB | Fair | High | Minimal | Bulk transfers |
Sources:
- src/rpc/rpc_dispatcher.rs:230 (max_chunk_size parameter)
- src/rpc/rpc_dispatcher.rs:260-266 (encoder initialization with chunking)
Prebuffering vs Streaming
The system supports two payload transmission modes with different performance trade-offs.
graph TB
subgraph "Prebuffered Mode"
PB1["RpcRequest\nis_finalized=true"]
PB2["Single write_bytes"]
PB3["Immediate end_stream"]
PB4["Low Latency\nHigh Memory"]
PB1 --> PB2
PB2 --> PB3
PB3 --> PB4
end
subgraph "Streaming Mode"
ST1["RpcRequest\nis_finalized=false"]
ST2["Multiple write_bytes\ncalls"]
ST3["Delayed end_stream"]
ST4["Higher Latency\nLow Memory"]
ST1 --> ST2
ST2 --> ST3
ST3 --> ST4
end
Mode Comparison
Prebuffered Response Handling
The prebuffer_response flag controls whether response payloads are accumulated before delivery:
| Mode | Memory Usage | Latency | Use Case |
|---|---|---|---|
| prebuffer_response=true | Accumulates entire payload | Delivers complete response | Small responses, simpler logic |
| prebuffer_response=false | Streams chunks as received | Minimal per-chunk latency | Large responses, progress tracking |
Implementation Details:
- Prebuffering accumulates chunks in the prebuffered_responses HashMap
- Buffer is stored until RpcStreamEvent::End is received
- Handler is invoked once with complete payload
- Buffer is immediately cleared after handler invocation
Sources:
- src/rpc/rpc_dispatcher.rs:233 (prebuffer_response parameter)
- src/rpc/rpc_dispatcher.rs:269-283 (prebuffered payload handling)
- src/rpc/rpc_internals/rpc_respondable_session.rs:26-27 (prebuffering state)
- src/rpc/rpc_internals/rpc_respondable_session.rs:115-147 (prebuffering logic)
Memory Management Patterns
graph LR A["Inbound Frames"] --> B["RpcDispatcher\nread_bytes"] B --> C["Mutex Lock"] C --> D["VecDeque\nrpc_request_queue"] D --> E["Push/Update/Delete\nOperations"] E --> F["Mutex Unlock"] G["Application"] --> H["get_rpc_request"] H --> C
Request Queue Design
The RpcDispatcher maintains an internal request queue using Arc<Mutex<VecDeque<(u32, RpcRequest)>>>. This design has specific performance implications:
Lock Contention Considerations:
| Operation | Lock Duration | Frequency | Optimization |
|---|---|---|---|
| read_bytes | Per-frame decode | High | Minimize work under lock |
| get_rpc_request | Read access only | Medium | Returns guard, caller controls lock |
| delete_rpc_request | Single element removal | Low | Uses VecDeque::remove |
Memory Overhead:
- Each in-flight request: ~100-200 bytes base + payload size
- VecDeque capacity grows as needed
- Payload bytes accumulated until is_finalized=true
Sources:
- src/rpc/rpc_dispatcher.rs:50 (rpc_request_queue declaration)
- src/rpc/rpc_dispatcher.rs:362-374 (read_bytes implementation)
- src/rpc/rpc_dispatcher.rs:381-394 (get_rpc_request with lock guard)
- src/rpc/rpc_dispatcher.rs:411-420 (delete_rpc_request)
Preventing Memory Leaks
The dispatcher must explicitly clean up completed or failed requests:
Critical Pattern:
- Request added to queue on Header event
- Payload accumulated on PayloadChunk events
- Finalized on End event
- Application must call delete_rpc_request() to free memory
Failure to delete finalized requests causes unbounded memory growth.
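To make the lifecycle concrete, the following sketch expresses the cleanup rule against a small stand-in trait defined inline; it mirrors the dispatcher operations named on this page, but it is not the crate's API and the real signatures may differ.

```rust
// A stand-in interface defined only for this sketch; it mirrors the dispatcher
// operations named on this page but is NOT the real muxio API.
trait RequestQueueOps {
    fn read_bytes(&mut self, bytes: &[u8]) -> Vec<u32>;
    fn is_rpc_request_finalized(&self, id: u32) -> bool;
    fn delete_rpc_request(&mut self, id: u32);
}

// The essential pattern: every finalized request must be deleted once handled,
// otherwise the internal request queue grows without bound.
fn drain_finalized<D: RequestQueueOps>(dispatcher: &mut D, bytes: &[u8]) {
    for id in dispatcher.read_bytes(bytes) {
        if dispatcher.is_rpc_request_finalized(id) {
            // Hand the finalized request to a handler here, then free its entry.
            dispatcher.delete_rpc_request(id);
        }
    }
}
```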
Sources:
- src/rpc/rpc_dispatcher.rs:121-141 (request creation)
- src/rpc/rpc_dispatcher.rs:144-169 (payload accumulation)
- src/rpc/rpc_dispatcher.rs:171-185 (finalization)
- src/rpc/rpc_dispatcher.rs:411-420 (cleanup)
graph LR A["next_rpc_request_id\nu32 counter"] --> B["increment_u32_id()"] B --> C["Assign to\nRpcHeader"] C --> D["Store in\nresponse_handlers"] D --> E["Match on\ninbound response"]
Request Correlation Overhead
Each outbound request is assigned a unique u32 ID for response correlation. The system uses monotonic ID generation with wraparound.
ID Generation Strategy
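A minimal sketch of the wraparound scheme; the function name mirrors increment_u32_id from the diagram above, while the body is illustrative rather than the crate's exact code.

```rust
// Illustrative wraparound ID generation: the counter advances by one and wraps
// at u32::MAX instead of overflowing. Not the crate's exact implementation.
fn increment_u32_id(counter: &mut u32) -> u32 {
    let id = *counter;
    *counter = counter.wrapping_add(1);
    id
}
```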
Performance Characteristics:
| Aspect | Cost | Justification |
|---|---|---|
| ID generation | Minimal (single addition) | u32::wrapping_add(1) |
| HashMap insertion | O(1) average | response_handlers.insert() |
| Response lookup | O(1) average | response_handlers.get_mut() |
| Memory per handler | ~24 bytes + closure size | Box<dyn FnMut> overhead |
Concurrency Considerations:
- next_rpc_request_id is NOT thread-safe
- Each client connection should have its own RpcDispatcher
- Sharing a dispatcher across threads requires external synchronization
Sources:
- src/rpc/rpc_dispatcher.rs:42 (next_rpc_request_id field)
- src/rpc/rpc_dispatcher.rs:241-242 (ID assignment)
- src/rpc/rpc_internals/rpc_respondable_session.rs:24 (response_handlers HashMap)
graph TB A["Connection\nClosed"] --> B["fail_all_pending_requests"] B --> C["std::mem::take\nresponse_handlers"] C --> D["For each handler"] D --> E["Create\nRpcStreamEvent::Error"] E --> F["Invoke handler\nwith error"] F --> G["Drop handler\nboxed closure"]
Handler Cleanup and Backpressure
Failed Request Handling
When a transport connection drops, all pending response handlers must be notified to prevent resource leaks and hung futures:
Implementation:
The fail_all_pending_requests() method takes ownership of all handlers and invokes them with an error event. This ensures:
- Awaiting futures are woken with error result
- Callback memory is freed immediately
- No handlers remain registered after connection failure
Performance Impact:
- Invocation cost: O(n) where n = number of pending requests
- Each handler invocation is synchronous
- Memory freed immediately after iteration
Sources:
- src/rpc/rpc_dispatcher.rs:428-456 (fail_all_pending_requests implementation)
Benchmarking with Criterion
The codebase uses criterion for performance benchmarking; benchmarks are run with cargo bench.
Benchmark Structure
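The skeleton below is a hedged example of what a criterion benchmark target could look like for this workspace; the benchmark name and measured body are placeholders, not benchmarks that exist in the repository.

```rust
// Hedged criterion benchmark skeleton; the measured body is a placeholder.
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn bench_payload_roundtrip(c: &mut Criterion) {
    c.bench_function("payload_roundtrip_1kb", |b| {
        let payload = vec![0u8; 1024];
        b.iter(|| {
            // Encode/frame the payload here; black_box keeps the work from
            // being optimized away.
            black_box(payload.len())
        });
    });
}

criterion_group!(benches, bench_payload_roundtrip);
criterion_main!(benches);
```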
Key Metrics to Track:
| Metric | What It Measures | Target |
|---|---|---|
| Throughput | Bytes/sec processed | Maximize |
| Latency | Time per operation | Minimize |
| Allocation Rate | Heap allocations | Minimize |
| Frame Overhead | Protocol bytes vs payload | < 5% |
Sources:
- Cargo.lock:317-338 (criterion dependency)
- example-muxio-ws-rpc-app benchmarks
Platform-Specific Optimizations
Native (Tokio) vs WASM
Platform Tuning:
| Platform | Chunk Size | Buffer Strategy | Concurrency |
|---|---|---|---|
| Native (Tokio) | 8-16 KB | Reuse buffers | Multiple connections |
| WASM (Browser) | 2-4 KB | Small allocations | Single connection |
| Native (Server) | 16-32 KB | Pre-allocated pools | Connection pooling |
WASM-Specific Considerations:
- JavaScript boundary crossings have cost (~1-5 μs per call)
- Minimize calls to wasm-bindgen functions
- Use larger RPC payloads to amortize overhead
- Prebuffer responses when possible to reduce event callbacks
Sources:
- muxio-tokio-rpc-client vs muxio-wasm-rpc-client crate comparison
- Cargo.lock:935-953 (WASM dependencies)
Profiling and Diagnostics
Tracing Integration
The system uses tracing for instrumentation. Enable logging to identify bottlenecks:
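A minimal sketch, assuming the tracing-subscriber crate is available in the binary under test; this is not code from the repository.

```rust
// Hedged sketch: install a tracing subscriber so the instrumentation points
// listed below become visible on stdout.
fn init_tracing() {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::DEBUG) // raise to TRACE for frame-level detail
        .init();
}
```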
Key Trace Points:
| Location | Event | Performance Insight |
|---|---|---|
| RpcDispatcher::call | Request initiation | Call frequency, payload sizes |
| read_bytes | Frame processing | Decode latency, lock contention |
| Handler callbacks | Response processing | Handler execution time |
Sources:
- src/rpc/rpc_dispatcher.rs:12 (tracing import)
- src/rpc/rpc_dispatcher.rs:98 (#[instrument] macro usage)
Detecting Performance Issues
Tooling:
Sources:
- DRAFT.md:29-31 (coverage tooling)
- DRAFT.md:34-40 (module analysis)
Best Practices Summary
| Optimization | Technique | Impact |
|---|---|---|
| Minimize allocations | Reuse buffers, use Vec::with_capacity | High |
| Choose optimal chunk size | 8-16 KB for typical RPC | Medium |
| Prebuffer small responses | Enable prebuffer_response for responses under 64 KB | Medium |
| Clean up completed requests | Call delete_rpc_request() promptly | High |
| Use fixed-size types | Prefer [u8; N] over Vec<u8> in hot paths | Low |
| Profile before optimizing | Use criterion + flamegraph | Critical |
Sources:
- Performance patterns observed across src/rpc/rpc_dispatcher.rs
- Memory management in src/rpc/rpc_internals/rpc_respondable_session.rs
Custom Transport Implementation
Relevant source files
- Cargo.toml
- extensions/muxio-rpc-service-caller/src/lib.rs
- extensions/muxio-rpc-service-caller/tests/dynamic_channel_tests.rs
- extensions/muxio-rpc-service-endpoint/Cargo.toml
- extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs
- extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs
Purpose and Scope
This document explains how to implement custom transports for the muxio RPC system beyond the provided WebSocket implementations. Custom transports enable the use of alternative communication channels such as Unix domain sockets, named pipes, HTTP/2, QUIC, or in-memory channels for testing.
For information about the existing WebSocket transport implementations, see Tokio RPC Server, Tokio RPC Client, and WASM RPC Client. For the underlying protocol details, see Binary Framing Protocol and RPC Dispatcher.
Transport Architecture Overview
Custom transports act as adapters between the transport-agnostic RpcDispatcher and the specific communication mechanism. The transport layer is responsible for moving bytes bidirectionally while the dispatcher handles multiplexing, request correlation, and protocol encoding/decoding.
graph TB
APP["Application Code"]
subgraph "Custom Transport Layer"
CLIENT_IMPL["Custom Client Implementation\n(Your Code)"]
SERVER_IMPL["Custom Server Implementation\n(Your Code)"]
end
subgraph "RPC Interface Layer"
CALLER["RpcServiceCallerInterface"]
ENDPOINT["RpcServiceEndpointInterface"]
end
subgraph "Core Multiplexing Layer"
DISPATCHER_C["RpcDispatcher\n(Client-side)"]
DISPATCHER_S["RpcDispatcher\n(Server-side)"]
end
subgraph "Transport Medium"
CHANNEL["Custom I/O Channel\n(TCP, UDP, Unix Socket, etc.)"]
end
APP --> CLIENT_IMPL
APP --> SERVER_IMPL
CLIENT_IMPL -.implements.-> CALLER
SERVER_IMPL -.implements.-> ENDPOINT
CALLER --> DISPATCHER_C
ENDPOINT --> DISPATCHER_S
CLIENT_IMPL --> CHANNEL
SERVER_IMPL --> CHANNEL
DISPATCHER_C -.read_bytes/write_bytes.-> CLIENT_IMPL
DISPATCHER_S -.read_bytes/respond.-> SERVER_IMPL
Transport Integration Points
Sources:
- extensions/muxio-rpc-service-caller/src/lib.rs:1-11
- extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:8-138
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:25-276
Client Transport Implementation
Client transports must implement the RpcServiceCallerInterface trait to enable RPC method invocation. This trait provides the contract for establishing connections, sending requests, and receiving responses.
Required Interface Methods
The RpcServiceCallerInterface trait requires the following methods:
| Method | Return Type | Purpose |
|---|---|---|
| get_dispatcher() | Arc<TokioMutex<RpcDispatcher<'static>>> | Provides access to the dispatcher for request management |
| is_connected() | bool | Indicates current connection state |
| get_emit_fn() | Arc<dyn Fn(Vec<u8>) + Send + Sync> | Returns function for sending encoded bytes over transport |
| set_state_change_handler() | async fn | Registers callback for connection state changes |
| call_rpc_streaming() | Result<(RpcStreamEncoder, DynamicReceiver), RpcServiceError> | Initiates streaming RPC calls (optional, has default impl) |
Sources:
Client Transport Structure
Sources:
Minimal Client Implementation Pattern
A minimal client transport implementation follows this pattern:
- Dispatcher Management: Create and wrap RpcDispatcher in Arc<TokioMutex<_>>
- Connection State: Track connection state with Arc<AtomicBool>
- Emit Function: Implement function that sends bytes to the underlying transport
- Receive Loop: Spawn task that reads bytes and calls dispatcher.read_bytes()
- Send Loop: Spawn task that writes outgoing bytes to transport
- State Handler: Support registration of state change callbacks
graph TB
NEW["CustomClient::new()"]
subgraph "Initialization"
CONNECT["Connect to transport"]
CREATE_DISP["RpcDispatcher::new()"]
CREATE_CHANNEL["Create send/receive channels"]
end
subgraph "Task Spawning"
SPAWN_RECV["Spawn receive loop:\nReads bytes → dispatcher.read_bytes()"]
SPAWN_SEND["Spawn send loop:\nWrites bytes to transport"]
end
subgraph "Result"
RETURN["Return Arc<CustomClient>"]
end
NEW --> CONNECT
CONNECT --> CREATE_DISP
CREATE_DISP --> CREATE_CHANNEL
CREATE_CHANNEL --> SPAWN_RECV
SPAWN_RECV --> SPAWN_SEND
SPAWN_SEND --> RETURN
Example structure from mock test implementation:
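As a rough, hedged illustration of that structure (not the actual test code), the mock's state can be a handful of shared fields; the field names and the dispatcher import path are assumptions.

```rust
// Hedged sketch of a mock transport's state; the RpcServiceCallerInterface
// implementation itself is elided.
use std::sync::{atomic::AtomicBool, Arc};
use tokio::sync::Mutex as TokioMutex;

use muxio::rpc::RpcDispatcher; // import path is an assumption

struct MockRpcClient {
    dispatcher: Arc<TokioMutex<RpcDispatcher<'static>>>,
    emit_fn: Arc<dyn Fn(Vec<u8>) + Send + Sync>, // no-op in the mock: Arc::new(|_| {})
    is_connected: Arc<AtomicBool>,               // simple connection-state flag
}
```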
Sources:
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:110-271
- extensions/muxio-rpc-service-caller/tests/dynamic_channel_tests.rs:19-88
Server Transport Implementation
Server transports must implement the RpcServiceEndpointInterface trait to handle incoming RPC requests. This trait manages handler registration and request processing.
Required Server Interface Methods
The RpcServiceEndpointInterface trait requires:
| Method | Parameters | Purpose |
|---|---|---|
| get_prebuffered_handlers() | - | Returns handlers lock for accessing registered method handlers |
| register_prebuffered() | method_id: u64, handler: F | Registers handler function for a specific RPC method |
| read_bytes() | dispatcher: &mut RpcDispatcher, context: C, bytes: &[u8], on_emit: E | Processes incoming bytes and dispatches to handlers |
Sources:
sequenceDiagram
participant Transport as "Custom Transport"
participant ReadBytes as "read_bytes()"
participant Dispatcher as "RpcDispatcher"
participant Handlers as "Handler Registry"
participant Handler as "User Handler"
Note over Transport,Handler: Stage 1: Decode & Identify
Transport->>ReadBytes: bytes from transport
ReadBytes->>Dispatcher: dispatcher.read_bytes(bytes)
Dispatcher-->>ReadBytes: Vec<request_id>
loop For each finalized request
ReadBytes->>Dispatcher: is_rpc_request_finalized(id)
Dispatcher-->>ReadBytes: true
ReadBytes->>Dispatcher: delete_rpc_request(id)
Dispatcher-->>ReadBytes: RpcRequest
end
Note over Transport,Handler: Stage 2: Execute Handlers
loop For each request
ReadBytes->>Handlers: get handler for method_id
Handlers-->>ReadBytes: handler function
ReadBytes->>Handler: handler(request_bytes, context)
Handler-->>ReadBytes: response_bytes
end
Note over Transport,Handler: Stage 3: Encode & Emit
loop For each response
ReadBytes->>Dispatcher: dispatcher.respond(response)
Dispatcher->>Transport: on_emit(encoded_bytes)
end
Server Request Processing Flow
The read_bytes method implements a three-stage pipeline for request processing:
Sources:
Server Implementation Pattern
A server transport implementation typically:
- Accepts incoming connections on the transport medium
- For each connection, creates an RpcDispatcher instance
- Spawns receive loop that calls endpoint.read_bytes() with incoming bytes
- Provides on_emit closure that sends response bytes back over transport
- Manages connection lifecycle and cleanup
graph TB
ACCEPT["Accept connection"]
subgraph "Per-Connection Setup"
CREATE_DISP["Create RpcDispatcher"]
SPLIT["Split I/O into\nreader and writer"]
end
subgraph "Receive Loop"
READ["Read bytes from transport"]
PROCESS["endpoint.read_bytes(\ndispatcher,\ncontext,\nbytes,\non_emit)"]
DECODE["Stage 1: Decode frames"]
INVOKE["Stage 2: Invoke handlers"]
EMIT["Stage 3: Emit responses"]
end
subgraph "Emit Closure"
ON_EMIT["on_emit closure:\nsends bytes to writer"]
end
ACCEPT --> CREATE_DISP
CREATE_DISP --> SPLIT
SPLIT --> READ
READ --> PROCESS
PROCESS --> DECODE
DECODE --> INVOKE
INVOKE --> EMIT
EMIT --> ON_EMIT
ON_EMIT --> READ
Connection handler structure:
Sources:
stateDiagram-v2
[*] --> Created: RpcDispatcher::new()
Created --> Processing : Bytes received
Processing --> Decoding : read_bytes(bytes)
Decoding --> Pending : Partial request
Decoding --> Finalized : Complete request
Pending --> Processing : More bytes
Finalized --> HandlerInvoked : Server - invoke handler
Finalized --> ResponsePending : Client - await response
HandlerInvoked --> Responding : respond()
Responding --> Processing : More requests
ResponsePending --> ResponseReceived : read_bytes(response)
ResponseReceived --> Processing : More requests
Processing --> Shutdown : fail_all_pending_requests()
Shutdown --> [*]
Managing the RPC Dispatcher
The RpcDispatcher is the core component that handles protocol encoding/decoding, request/response correlation, and stream management. Both client and server transports must properly integrate with the dispatcher.
Dispatcher Lifecycle
Sources:
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:82-108
- extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:78-136
Key Dispatcher Operations
| Operation | Usage Context | Description |
|---|---|---|
| read_bytes(&mut self, bytes: &[u8]) | Client & Server | Decodes incoming bytes, returns IDs of affected requests |
| is_rpc_request_finalized(id: u32) | Server | Checks if request with given ID is complete |
| delete_rpc_request(id: u32) | Server | Removes and returns finalized request |
| respond(response, max_chunk_size, on_emit) | Server | Encodes and emits response |
| fail_all_pending_requests(error) | Client | Cancels all pending requests on disconnect |
Sources:
- extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:82-92
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:100-103
graph TB
EMIT_CALL["emit_fn(bytes: Vec<u8>)"]
subgraph "Checks"
CHECK_CONN["Check is_connected"]
CHECK_SIZE["Check byte length"]
end
subgraph "Queueing"
QUEUE["Send to internal channel\n(mpsc, crossbeam, etc.)"]
end
subgraph "Error Handling"
LOG_ERROR["Log failure"]
DROP["Drop bytes\n(don't panic)"]
end
EMIT_CALL --> CHECK_CONN
CHECK_CONN -->|connected| CHECK_SIZE
CHECK_CONN -->|disconnected| LOG_ERROR
CHECK_SIZE --> QUEUE
QUEUE -->|success| LOG_TRACE["Trace log success"]
QUEUE -->|failure| LOG_ERROR
LOG_ERROR --> DROP
LOG_TRACE --> RETURN["Return"]
DROP --> RETURN
Implementing the Emit Function
The emit function is the primary mechanism for sending encoded bytes to the underlying transport. It must handle queueing, flow control, and error conditions.
Emit Function Requirements
Sources:
graph LR
CALLER["RPC Caller invokes emit_fn"]
subgraph "get_emit_fn()
Implementation"
CLOSURE["Arc<dyn Fn(Vec<u8>)>"]
CHECK["is_connected check"]
CONVERT["Vec<u8> → WsMessage::Binary"]
SEND["tx.send(message)"]
end
subgraph "Send Loop Task"
RECV["app_rx.recv()"]
TRANSMIT["ws_sender.send(msg)"]
IO["WebSocket I/O"]
end
CALLER --> CLOSURE
CLOSURE --> CHECK
CHECK -->|connected| CONVERT
CHECK -->|disconnected| DROP["Drop & log warning"]
CONVERT --> SEND
SEND --> RECV
RECV --> TRANSMIT
TRANSMIT --> IO
WebSocket Client Emit Implementation
The Tokio WebSocket client implementation demonstrates a proper emit function pattern:
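The sketch below follows the pattern in the diagram (connection check, then queue to a send-loop task) but is a hedged approximation rather than the crate's code; the channel type and log messages are assumptions.

```rust
// Hedged sketch of an emit function: check the shared connection flag, then
// queue the bytes for a separate send-loop task that writes to the WebSocket.
use std::sync::{
    atomic::{AtomicBool, Ordering},
    Arc,
};
use tokio::sync::mpsc::UnboundedSender;

fn make_emit_fn(
    is_connected: Arc<AtomicBool>,
    tx: UnboundedSender<Vec<u8>>, // drained by the send-loop task
) -> Arc<dyn Fn(Vec<u8>) + Send + Sync> {
    Arc::new(move |bytes: Vec<u8>| {
        if !is_connected.load(Ordering::SeqCst) {
            // Disconnected: drop the frame and log, never panic.
            tracing::warn!("emit while disconnected; dropping {} bytes", bytes.len());
            return;
        }
        if tx.send(bytes).is_err() {
            tracing::warn!("send loop closed; dropping outbound frame");
        }
    })
}
```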
Sources:
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:289-313
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:224-257
graph TB
HANDLER["Handler returns response"]
RESPOND["dispatcher.respond(\nresponse,\nchunk_size,\non_emit)"]
subgraph "on_emit Closure"
EMIT["on_emit: impl RpcEmit"]
WRAPPER["Wraps send logic"]
SEND["Send to writer task"]
end
WRITE["Write to transport"]
HANDLER --> RESPOND
RESPOND --> EMIT
EMIT --> WRAPPER
WRAPPER --> SEND
SEND --> WRITE
Server Emit Closure Pattern
On the server side, the emit closure is typically provided as a parameter to read_bytes():
Sources:
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:166-173
- extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:132-134
stateDiagram-v2
[*] --> Connecting : new() called
Connecting --> Connected : Transport connected
Connecting --> Error : Connection failed
Connected --> Disconnecting : shutdown_async()
Connected --> Error : I/O error
Disconnecting --> Disconnected : State handler called
Error --> Disconnected : State handler called
Disconnected --> [*] : Resources cleaned up
State Management and Error Handling
Proper state management is critical for custom transports to ensure graceful handling of disconnections, errors, and resource cleanup.
Connection State Tracking
Sources:
State Change Handler Invocation
Transport implementations should invoke the state change handler at appropriate times:
| Event | State to Report | When to Invoke |
|---|---|---|
| Connection established | RpcTransportState::Connected | After successful connection, when handler is set |
| Connection lost (error) | RpcTransportState::Disconnected | When I/O error detected in receive loop |
| Explicit shutdown | RpcTransportState::Disconnected | When shutdown_async() called |
| Drop cleanup | RpcTransportState::Disconnected | In Drop::drop() implementation |
Example from Tokio client:
Sources:
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:315-334
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:79-108
Error Propagation Strategy
Custom transports should fail pending requests when disconnection occurs:
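A hedged sketch of that disconnect path is shown below: flip the shared flag once, then fail everything still in flight. The closure parameter stands in for locking the dispatcher and calling fail_all_pending_requests with a transport error; the concrete error type is left to the real implementation.

```rust
use std::sync::{
    atomic::{AtomicBool, Ordering},
    Arc,
};

// Hedged sketch (not the crate's code). `fail_pending` stands in for
// `dispatcher.lock().await.fail_all_pending_requests(<transport error>)`.
fn on_transport_closed<F: FnOnce()>(is_connected: Arc<AtomicBool>, fail_pending: F) {
    // swap() ensures the disconnect path runs only once.
    if is_connected.swap(false, Ordering::SeqCst) {
        // Wake every pending RPC future with an error instead of leaving it hanging.
        fail_pending();
    }
}
```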
Sources:
- extensions/muxio-tokio-rpc-client/src/rpc_client.rs:99-103
- extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:167-292
classDiagram
class MockRpcClient {
    +Arc~TokioMutex~RpcDispatcher~~ get_dispatcher()
    +Arc~dyn Fn~ get_emit_fn()
    +bool is_connected()
    +call_rpc_streaming()
    +set_state_change_handler()
}
class RpcServiceCallerInterface {<<interface>>}
class TestState {
+SharedResponseSender response_sender_provider
+Arc~AtomicBool~ is_connected_atomic
}
MockRpcClient ..|> RpcServiceCallerInterface
MockRpcClient --> TestState : contains
Example: Mock Transport for Testing
The test suite includes a minimal mock transport implementation that demonstrates the core patterns without actual I/O.
Mock Client Structure
Sources:
Mock Implementation Highlights
The mock transport demonstrates minimal required functionality:
- Dispatcher: Returns new dispatcher instance (no state sharing needed for mock)
- Emit Function: No-op closure Arc::new(|_| {})
- Connection State: Uses AtomicBool for simple state tracking
- Streaming Calls: Creates channels and returns them directly without actual I/O
- State Handler: No-op implementation
Key simplifications in mock vs. production transport:
| Aspect | Production Transport | Mock Transport |
|---|---|---|
| I/O | Actual socket/stream operations | No-op or in-memory channels |
| Task Spawning | Spawns send/receive loop tasks | No background tasks |
| Error Handling | Detects I/O errors, handles disconnects | Minimal error simulation |
| Lifecycle | Complex connection management | Simple atomic bool |
| Threading | Requires Send + Sync across tasks | Simpler synchronization |
Sources:
Testing Custom Transports
When implementing a custom transport, comprehensive testing should cover:
Test Coverage Areas
Sources:
Integration Test Pattern
Integration tests should verify end-to-end RPC communication:
Sources:
Key Test Scenarios
Test implementations should validate:
| Scenario | Verification |
|---|---|
| Connection failure | new() returns error for invalid address/port |
| State transitions | Handler called with Connected then Disconnected |
| Pending request cleanup | Pending calls fail with cancellation error on disconnect |
| RPC success path | Method invocation returns expected result |
| Concurrent requests | Multiple simultaneous RPCs complete correctly |
| Large payloads | Chunking and reassembly work correctly |
| Handler registration | Methods can be registered before connection |
| Error propagation | Transport errors surface as RPC errors |
Sources:
- extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:15-31
- extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:167-292
Summary
Implementing a custom transport requires:
- Client-side: Implement RpcServiceCallerInterface with dispatcher management, emit function, and state tracking
- Server-side: Implement RpcServiceEndpointInterface with handler registration and request processing
- Dispatcher Integration: Properly call read_bytes() and respond() methods
- Emit Function: Implement reliable byte transmission with error handling
- State Management: Track connection state and invoke state change handlers
- Error Handling: Fail pending requests on disconnect and propagate errors
- Testing: Comprehensive tests covering connection, data transfer, and error scenarios
The mock transport implementation in extensions/muxio-rpc-service-caller/tests/dynamic_channel_tests.rs:19-88 provides a minimal reference, while the Tokio WebSocket client in extensions/muxio-tokio-rpc-client/src/rpc_client.rs:25-336 demonstrates a production-ready implementation pattern.
JavaScript/WASM Integration
Relevant source files
- extensions/muxio-rpc-service/Cargo.toml
- extensions/muxio-tokio-rpc-client/src/lib.rs
- extensions/muxio-wasm-rpc-client/Cargo.toml
- extensions/muxio-wasm-rpc-client/src/lib.rs
- extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs
- extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs
Purpose and Scope
This document provides a detailed guide to integrating the muxio-wasm-rpc-client with JavaScript host environments. It covers the architecture of the WASM client, the callback-based communication model that bridges Rust and JavaScript, WebSocket event handling patterns, and the static client pattern used for WASM exports.
For information about the WASM client's core RPC capabilities, see WASM RPC Client. For general cross-platform deployment strategies, see Cross-Platform Deployment.
WASM Client Architecture
The RpcWasmClient provides a WebAssembly-compatible RPC client that bridges between Rust's async runtime and JavaScript's event-driven WebSocket APIs. Unlike native clients that manage their own WebSocket connections, the WASM client relies on JavaScript to handle the actual network operations and deliver events to Rust callbacks.
Core Structure
The RpcWasmClient struct contains five key components:
| Component | Type | Purpose |
|---|---|---|
| dispatcher | Arc<Mutex<RpcDispatcher>> | Manages request correlation and binary framing |
| endpoint | Arc<RpcServiceEndpoint<()>> | Handles incoming RPC requests from the host |
| emit_callback | Arc<dyn Fn(Vec<u8>)> | Sends binary data to JavaScript |
| state_change_handler | Arc<Mutex<Option<Box<dyn Fn(RpcTransportState)>>>> | Notifies application of connection state changes |
| is_connected | Arc<AtomicBool> | Tracks current connection status |
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:16-24
WASM/JavaScript Boundary Architecture
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:16-34 extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:9-36
Callback-Based Communication Model
The WASM client uses a callback-based architecture to send data to JavaScript because WASM cannot directly call JavaScript WebSocket APIs. During initialization, the client receives an emit_callback closure that bridges to JavaScript:
sequenceDiagram
participant App as "WASM Application"
participant Client as "RpcWasmClient"
participant EmitCb as "emit_callback"
participant WasmBind as "wasm-bindgen"
participant JS as "JavaScript Glue"
participant WS as "WebSocket"
App->>Client: call_rpc_method(request)
Client->>Client: dispatcher.request()
Client->>EmitCb: emit_callback(bytes)
EmitCb->>WasmBind: static_muxio_write_bytes(bytes)
WasmBind->>JS: invoke JS function
JS->>WS: ws.send(bytes)
WS->>WS: transmit over network
When the client needs to send data (either RPC requests or responses), it invokes this callback with the binary payload. The callback implementation typically forwards to a JavaScript function via wasm-bindgen.
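As a hedged illustration, the callback can forward to a JavaScript function imported through wasm-bindgen. The binding name below reuses static_muxio_write_bytes from the diagram above, but the import declaration and the commented constructor call are assumptions, not the crate's code.

```rust
use wasm_bindgen::prelude::*;

// Hedged sketch: declare a JavaScript function the emit callback can call.
// The JS side is expected to forward these bytes to ws.send().
#[wasm_bindgen]
extern "C" {
    fn static_muxio_write_bytes(bytes: &[u8]);
}

// Wiring it up (constructor signature is an assumption, shown only as a comment):
// let client = RpcWasmClient::new(|bytes: Vec<u8>| static_muxio_write_bytes(&bytes));
```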
Emit Callback Flow
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:27-35 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:149-151
Bidirectional Data Flow
The communication model is inherently bidirectional:
| Direction | Mechanism | Trigger |
|---|---|---|
| WASM → JavaScript | emit_callback() invoked by client | RPC request or response needs to be sent |
| JavaScript → WASM | read_bytes() called by JS glue | WebSocket onmessage event fires |
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:48-121
WebSocket Event Handling
The WASM client exposes three public methods that JavaScript glue code must call in response to WebSocket events. These methods implement the contract between JavaScript's event-driven WebSocket API and Rust's async model.
Connection Lifecycle Methods
handle_connect
Called when the JavaScript WebSocket's onopen event fires. Updates internal connection state and notifies any registered state change handlers:
Implementation:
- Sets the is_connected atomic flag to true
- Invokes state_change_handler with RpcTransportState::Connected
- Enables RPC calls by making is_connected() return true
read_bytes
Called when the JavaScript WebSocket's onmessage event delivers binary data. This method performs a three-stage processing pipeline:
Three-Stage Processing Pipeline:
Stage Details:
1. Synchronous Reading (lines 54-81):
   - Acquires dispatcher lock briefly
   - Calls dispatcher.read_bytes(bytes) to parse binary frames
   - Identifies finalized requests via is_rpc_request_finalized()
   - Extracts requests with delete_rpc_request() for processing
   - Critical: Releases lock before async processing
2. Asynchronous Processing (lines 85-103):
   - Creates futures for each request handler
   - Calls process_single_prebuffered_request() for each
   - Uses join_all() to execute handlers concurrently
   - No dispatcher lock held during user handler execution
3. Synchronous Sending (lines 107-120):
   - Re-acquires dispatcher lock
   - Calls dispatcher.respond() for each response
   - Invokes emit_callback to send response chunks to JavaScript
   - Releases lock after all responses are queued
This architecture prevents deadlocks and allows concurrent request processing while handlers execute.
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:48-121
handle_disconnect
Called when the JavaScript WebSocket's onclose or onerror event fires:
Implementation:
- Uses swap() to atomically check and update connection state
- Invokes state_change_handler with RpcTransportState::Disconnected
- Calls dispatcher.fail_all_pending_requests() to complete pending futures with errors
- Prevents redundant disconnect handling via atomic swap
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:124-134
Event Handling Integration Map
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:38-134
Static Client Pattern
The WASM client provides a static client pattern to simplify integration with wasm-bindgen exported functions. This pattern addresses a common WASM challenge: exported functions cannot easily access instance data.
Thread-Local Static Client
The static client is stored in a thread-local variable:
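A hedged sketch of this storage pattern; the variable name, cell type, and import path are assumptions rather than the crate's declaration.

```rust
use std::{cell::RefCell, sync::Arc};

use muxio_wasm_rpc_client::RpcWasmClient; // import path is an assumption

// Hedged sketch of the thread-local slot that holds the shared client.
thread_local! {
    static STATIC_RPC_WASM_CLIENT: RefCell<Option<Arc<RpcWasmClient>>> =
        RefCell::new(None);
}
```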
This allows any exported function to access the client without requiring explicit passing through the JavaScript boundary.
Initialization and Access Functions
| Function | Purpose | Returns |
|---|---|---|
| init_static_client() | Initializes the static client (idempotent) | Option<Arc<RpcWasmClient>> |
| get_static_client() | Retrieves the initialized client | Option<Arc<RpcWasmClient>> |
| with_static_client_async() | Executes async closure with client, returns JS Promise | Promise |
Sources: extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:13-82
Static Client Initialization Flow
Sources: extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:25-36
with_static_client_async Pattern
The with_static_client_async() function provides a convenient pattern for exported async RPC methods:
Usage Pattern:
- Retrieve static client from thread-local storage
- Execute user-provided async closure with client
- Convert Rust Result<T, String> to JavaScript Promise
- Handle initialization errors by rejecting the promise
This eliminates boilerplate in every exported function.
Sources: extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:54-72
wasm-bindgen Bridge
The WASM client relies on wasm-bindgen to create the bridge between Rust and JavaScript. Key dependencies enable this integration:
| Dependency | Version | Purpose |
|---|---|---|
| wasm-bindgen | 0.2.100 | Core FFI bindings between WASM and JavaScript |
| wasm-bindgen-futures | 0.4.50 | Converts Rust futures to JavaScript Promises |
| js-sys | 0.3.77 | Bindings to JavaScript standard types |
Sources: extensions/muxio-wasm-rpc-client/Cargo.toml:14-16
Promise Conversion
JavaScript expects Promise objects for asynchronous operations. The wasm-bindgen-futures::future_to_promise function handles this conversion:
Conversion Details:
- Rust Future<Output = Result<T, E>> → JavaScript Promise<T>
- Ok(value) → Promise resolves with value
- Err(e) → Promise rejects with error string
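The sketch below shows the conversion with wasm_bindgen_futures::future_to_promise; the wrapped future is a trivial placeholder rather than an actual RPC call.

```rust
use wasm_bindgen::JsValue;
use wasm_bindgen_futures::future_to_promise;

// Convert a Rust future into a JavaScript Promise: Ok resolves, Err rejects.
fn example_promise() -> js_sys::Promise {
    future_to_promise(async move {
        let result: Result<String, String> = Ok("done".to_string());
        result
            .map(|v| JsValue::from_str(&v))
            .map_err(|e| JsValue::from_str(&e))
    })
}
```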
Sources: extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:60-71
Type Marshaling
Types must be marshaled across the WASM/JavaScript boundary:
| Rust Type | JavaScript Type | Conversion Method |
|---|---|---|
| Vec<u8> | Uint8Array | Automatic via wasm-bindgen |
| String | string | JsValue::from_str() |
| Custom struct | Object | Implement From<T> for JsValue |
| Result<T, E> | Promise<T> | future_to_promise() |
graph TB
subgraph "Required JavaScript Components"
INIT["initializeMuxio()\n- Create WebSocket\n- Call initStaticClient()"]
OPEN_HANDLER["onopen handler\n- Call handleConnect()"]
MESSAGE_HANDLER["onmessage handler\n- Extract bytes\n- Call readBytes()"]
CLOSE_HANDLER["onclose/onerror\n- Call handleDisconnect()"]
WRITE_BRIDGE["writeBytes(bytes)\n- Call ws.send()"]
end
subgraph "WASM Exported Functions"
INIT_STATIC["initStaticClient()"]
HANDLE_CONNECT["handleConnect()"]
READ_BYTES["readBytes(bytes)"]
HANDLE_DISCONNECT["handleDisconnect()"]
end
INIT --> INIT_STATIC
OPEN_HANDLER --> HANDLE_CONNECT
MESSAGE_HANDLER --> READ_BYTES
CLOSE_HANDLER --> HANDLE_DISCONNECT
INIT_STATIC -.registers.-> WRITE_BRIDGE
JavaScript Glue Code Requirements
JavaScript developers must implement glue code that bridges native WebSocket events to the WASM client's Rust interface. The following sections detail the required components.
Minimal WebSocket Integration
Sources: extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:1-82 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:38-134
Binary Data Handling
WebSocket binary messages must be converted to byte arrays before passing to WASM:
Key Considerations:
- WebSocket can deliver Blob or ArrayBuffer depending on configuration
- WASM expects Uint8Array (maps to Rust &[u8])
- Conversion is asynchronous for Blob types
State Management Bridge
The JavaScript code should respond to connection state changes:
The WASM client provides is_connected() for synchronous state queries and set_state_change_handler() for reactive updates.
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:136-181
Complete Integration Example
The following example demonstrates a complete WASM/JavaScript integration showing both the Rust WASM module exports and the JavaScript glue code.
Rust WASM Module Exports
JavaScript Glue Code
Sources: extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:25-82 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:16-181
Integration Architecture Summary
Sources: All files in extensions/muxio-wasm-rpc-client/
Key Integration Considerations
Thread Safety
- Thread-local storage: The static client uses the thread_local! macro, making it single-threaded
- Arc usage: Internal components use Arc for safe sharing between callbacks
- No Send/Sync requirements: JavaScript is single-threaded, eliminating concurrency concerns
Memory Management
- Ownership: Binary data crosses the boundary by value (copied)
- Lifetime: The emit callback has a 'static lifetime to avoid lifetime issues
- Cleanup: WebSocket disconnect triggers fail_all_pending_requests() to prevent memory leaks
Error Propagation
- Rust → JavaScript: An Err in Result<T, String> converts to a rejected Promise
- JavaScript → Rust: Invalid bytes trigger error logs via tracing::error!
- Connection errors: Propagate through state_change_handler and is_connected() checks
Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:1-182 extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:1-82