This documentation is part of the "Projects with Books" initiative at zenOSmosis.
The source code for this project is available on GitHub.
RPC Framework
Purpose and Scope
This document provides a comprehensive overview of the RPC (Remote Procedure Call) abstraction layer in the rust-muxio system. The RPC framework is built on top of the core muxio multiplexing library and provides a structured, type-safe mechanism for defining and invoking remote methods across client-server boundaries.
The RPC framework consists of three primary components distributed across separate crates:
- Service definition traits and method identification (muxio-rpc-service)
- Client-side call invocation (muxio-rpc-service-caller)
- Server-side request handling (muxio-rpc-service-endpoint)
For details on specific transport implementations that use this RPC framework, see Transport Implementations. For information on the underlying multiplexing and framing protocol, see Core Library (muxio).
Architecture Overview
The RPC framework operates as a middleware layer between application code and the underlying muxio multiplexing protocol. It provides compile-time type safety while maintaining flexibility in serialization and transport choices.
graph TB
subgraph "Application Layer"
APP["Application Code\nType-safe method calls"]
end
subgraph "RPC Service Definition Layer"
SERVICE["muxio-rpc-service"]
TRAIT["RpcMethodPrebuffered\nRpcMethodStreaming traits"]
METHOD_ID["METHOD_ID generation\nxxhash at compile-time"]
ENCODE["encode_request/response\ndecode_request/response"]
SERVICE --> TRAIT
SERVICE --> METHOD_ID
SERVICE --> ENCODE
end
subgraph "Client Side"
CALLER["muxio-rpc-service-caller"]
CALLER_IFACE["RpcServiceCallerInterface"]
PREBUF_CALL["call_prebuffered"]
STREAM_CALL["call_streaming"]
CALLER --> CALLER_IFACE
CALLER_IFACE --> PREBUF_CALL
CALLER_IFACE --> STREAM_CALL
end
subgraph "Server Side"
ENDPOINT["muxio-rpc-service-endpoint"]
ENDPOINT_IFACE["RpcServiceEndpointInterface"]
REGISTER_PREBUF["register_prebuffered"]
REGISTER_STREAM["register_streaming"]
ENDPOINT --> ENDPOINT_IFACE
ENDPOINT_IFACE --> REGISTER_PREBUF
ENDPOINT_IFACE --> REGISTER_STREAM
end
subgraph "Core Multiplexing Layer"
DISPATCHER["RpcDispatcher"]
MUXIO_CORE["muxio core\nBinary framing protocol"]
DISPATCHER --> MUXIO_CORE
end
APP --> TRAIT
APP --> CALLER_IFACE
TRAIT -.shared definitions.-> CALLER
TRAIT -.shared definitions.-> ENDPOINT
CALLER --> DISPATCHER
ENDPOINT --> DISPATCHER
PREBUF_CALL -.invokes.-> DISPATCHER
STREAM_CALL -.invokes.-> DISPATCHER
REGISTER_PREBUF -.handles via.-> DISPATCHER
REGISTER_STREAM -.handles via.-> DISPATCHER
RPC Framework Component Structure
Sources:
- Cargo.toml:19-31
- extensions/muxio-rpc-service/Cargo.toml
- extensions/muxio-rpc-service-caller/Cargo.toml
- extensions/muxio-rpc-service-endpoint/Cargo.toml
- High-level architecture diagrams
Core RPC Components
The RPC framework is divided into three specialized crates, each with a distinct responsibility in the RPC lifecycle.
Component Responsibilities
| Crate | Primary Responsibility | Key Traits/Types | Dependencies |
|---|---|---|---|
| muxio-rpc-service | Service definition contracts | RpcMethodPrebuffered, RpcMethodStreaming, METHOD_ID | muxio, bitcode, xxhash-rust, num_enum |
| muxio-rpc-service-caller | Client-side invocation | RpcServiceCallerInterface, call_prebuffered, call_streaming | muxio, muxio-rpc-service, futures |
| muxio-rpc-service-endpoint | Server-side dispatch | RpcServiceEndpointInterface, register_prebuffered, register_streaming | muxio, muxio-rpc-service, muxio-rpc-service-caller |
RPC Method Definition and Identification
The foundation of the RPC framework is the method definition system, which establishes compile-time contracts between clients and servers.
graph LR
subgraph "Compile Time"
METHOD_NAME["Method Name String\ne.g., 'Add'"]
XXHASH["xxhash-rust\nconst_xxh3"]
METHOD_ID["METHOD_ID: u64\nCompile-time constant"]
METHOD_NAME --> XXHASH
XXHASH --> METHOD_ID
end
subgraph "Service Definition Trait"
TRAIT_IMPL["RpcMethodPrebuffered impl"]
CONST_ID["const METHOD_ID"]
ENCODE_REQ["encode_request"]
DECODE_REQ["decode_request"]
ENCODE_RESP["encode_response"]
DECODE_RESP["decode_response"]
TRAIT_IMPL --> CONST_ID
TRAIT_IMPL --> ENCODE_REQ
TRAIT_IMPL --> DECODE_REQ
TRAIT_IMPL --> ENCODE_RESP
TRAIT_IMPL --> DECODE_RESP
end
subgraph "Bitcode Serialization"
BITCODE["bitcode crate"]
PARAMS["Request/Response types\nSerialize + Deserialize"]
ENCODE_REQ --> BITCODE
DECODE_REQ --> BITCODE
ENCODE_RESP --> BITCODE
DECODE_RESP --> BITCODE
BITCODE --> PARAMS
end
METHOD_ID --> CONST_ID
Method ID Generation Process
The METHOD_ID is a u64 value generated at compile time by hashing the method name using xxhash-rust. This approach ensures:
- Collision prevention: Hash-based IDs virtually eliminate accidental collisions
- Zero runtime overhead: IDs are compile-time constants
- Version independence: Method IDs remain stable across compilations
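As a concrete illustration, a method ID can be derived as a const roughly like this (a minimal sketch assuming the xxhash-rust crate with its const_xxh3 feature enabled; the exact form generated by muxio-rpc-service may differ):

```rust
// Minimal sketch: hash the method name into a u64 at compile time.
// Assumes xxhash-rust with the "const_xxh3" feature enabled.
use xxhash_rust::const_xxh3::xxh3_64;

// Evaluated entirely at compile time; no hashing happens at runtime.
pub const ADD_METHOD_ID: u64 = xxh3_64(b"Add");
```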
Type Safety Through Shared Definitions
The RPC framework enforces type safety by requiring both client and server to depend on the same service definition crate. This creates a compile-time contract that prevents API mismatches.
sequenceDiagram
participant DEV as "Developer"
participant DEF as "Service Definition Crate"
participant CLIENT as "Client Crate"
participant SERVER as "Server Crate"
participant COMPILER as "Rust Compiler"
DEV->>DEF: Define RpcMethodPrebuffered
DEF->>DEF: Generate METHOD_ID
DEF->>DEF: Define Request/Response types
DEV->>CLIENT: Add dependency on DEF
DEV->>SERVER: Add dependency on DEF
CLIENT->>DEF: Import method traits
SERVER->>DEF: Import method traits
CLIENT->>COMPILER: Compile with encode_request
SERVER->>COMPILER: Compile with decode_request
alt Type Mismatch
COMPILER->>DEV: Compilation Error
else Types Match
COMPILER->>CLIENT: Successful build
COMPILER->>SERVER: Successful build
end
Note over CLIENT,SERVER: Both use identical\nMETHOD_ID and data structures
Shared Definition Workflow
This workflow demonstrates how compile-time validation eliminates an entire class of runtime errors. If the client attempts to send a request with a different structure than what the server expects, the code will not compile.
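A shared definition crate for the Add method used throughout this page might look like the following sketch. The types and the inherent impl are illustrative stand-ins for the actual RpcMethodPrebuffered trait, and the encode/decode bodies assume bitcode's serde integration:

```rust
use serde::{Deserialize, Serialize};
use xxhash_rust::const_xxh3::xxh3_64;

// Request/response types shared by both client and server crates.
#[derive(Serialize, Deserialize)]
pub struct AddRequest {
    pub numbers: Vec<f64>,
}

#[derive(Serialize, Deserialize)]
pub struct AddResponse {
    pub sum: f64,
}

pub struct Add;

// Illustrative inherent impl; the real crate expresses this via the
// RpcMethodPrebuffered trait.
impl Add {
    // Compile-time method identifier derived from the method name.
    pub const METHOD_ID: u64 = xxh3_64(b"Add");

    pub fn encode_request(req: &AddRequest) -> Result<Vec<u8>, bitcode::Error> {
        bitcode::serialize(req)
    }

    pub fn decode_request(bytes: &[u8]) -> Result<AddRequest, bitcode::Error> {
        bitcode::deserialize(bytes)
    }

    pub fn encode_response(resp: &AddResponse) -> Result<Vec<u8>, bitcode::Error> {
        bitcode::serialize(resp)
    }

    pub fn decode_response(bytes: &[u8]) -> Result<AddResponse, bitcode::Error> {
        bitcode::deserialize(bytes)
    }
}
```

Because both sides compile against this one crate, changing AddRequest immediately breaks whichever side has not been updated.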
RPC Call Flow
Understanding how an RPC call travels through the system is essential for debugging and optimization.
sequenceDiagram
participant APP as "Application Code"
participant METHOD as "Method::call()\nRpcMethodPrebuffered"
participant CALLER as "RpcServiceCallerInterface"
participant DISP as "RpcDispatcher"
participant FRAME as "Binary Framing Layer"
participant TRANSPORT as "Transport\n(WebSocket, etc.)"
participant ENDPOINT as "RpcServiceEndpointInterface"
participant HANDLER as "Registered Handler"
APP->>METHOD: call(params)
METHOD->>METHOD: encode_request(params) → bytes
METHOD->>CALLER: call_prebuffered(METHOD_ID, bytes)
CALLER->>DISP: send_request(method_id, request_bytes)
DISP->>DISP: Assign unique request_id
DISP->>FRAME: Serialize to binary frames
FRAME->>TRANSPORT: Transmit frames
TRANSPORT->>FRAME: Receive frames
FRAME->>DISP: Reassemble frames
DISP->>DISP: Lookup handler by METHOD_ID
DISP->>ENDPOINT: dispatch_to_handler(METHOD_ID, bytes)
ENDPOINT->>HANDLER: invoke(request_bytes, context)
HANDLER->>METHOD: decode_request(bytes) → params
HANDLER->>HANDLER: Process business logic
HANDLER->>METHOD: encode_response(result) → bytes
HANDLER->>ENDPOINT: Return response_bytes
ENDPOINT->>DISP: send_response(request_id, bytes)
DISP->>FRAME: Serialize to binary frames
FRAME->>TRANSPORT: Transmit frames
TRANSPORT->>FRAME: Receive frames
FRAME->>DISP: Reassemble frames
DISP->>DISP: Match request_id to pending call
DISP->>CALLER: resolve_future(request_id, bytes)
CALLER->>METHOD: decode_response(bytes) → result
METHOD->>APP: Return typed result
Complete RPC Invocation Sequence
Key observations:
- The METHOD_ID is used for routing on the server side
- The request_id (assigned by the dispatcher) is used for correlation
- All serialization/deserialization happens at the method trait level
- The dispatcher only handles raw bytes
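From application code, the entire sequence above collapses to a single awaited call, roughly as in this sketch (names follow the document's Add example and the RpcClient::new(host, port) shape from the class diagram in the client-side section below; exact signatures are assumptions):

```rust
// Sketch: one prebuffered RPC from the application's perspective.
async fn demo(client: &RpcClient) -> anyhow::Result<()> {
    // Encoding, framing, correlation, and decoding all happen beneath this call.
    let response = Add::call(client, AddRequest { numbers: vec![1.0, 2.0, 3.0] }).await?;
    assert_eq!(response.sum, 6.0);
    Ok(())
}
```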
Sources:
- README.md:70-160
- High-level architecture diagram 3 (RPC Communication Flow)
Prebuffered vs. Streaming RPC
The RPC framework supports two distinct calling patterns, each optimized for different use cases.
RPC Pattern Comparison
| Aspect | Prebuffered RPC | Streaming RPC |
|---|---|---|
| Request Size | Complete request buffered in memory | Request can be sent in chunks |
| Response Size | Complete response buffered in memory | Response can be received in chunks |
| Memory Usage | Higher for large payloads | Lower, constant memory footprint |
| Latency | Lower for small payloads | Higher initial latency, better throughput |
| Trait | RpcMethodPrebuffered | RpcMethodStreaming |
| Use Cases | Small to medium payloads (< 10MB) | Large payloads, file transfers, real-time data |
| Multiplexing | Multiple calls can be concurrent | Streams can be interleaved |
Sources:
- Section titles from table of contents
- README.md:28
Client-Side: RpcServiceCallerInterface
The client-side RPC invocation is abstracted through the RpcServiceCallerInterface trait, which allows different transport implementations to provide identical calling semantics.
classDiagram
class RpcServiceCallerInterface {<<trait>>\n+call_prebuffered(method_id: u64, params: Option~Vec~u8~~, payload: Option~Vec~u8~~) Future~Result~Vec~u8~~~\n+call_streaming(method_id: u64, params: Option~Vec~u8~~) Future~Result~StreamResponse~~\n+get_transport_state() RpcTransportState\n+set_state_change_handler(handler: Fn) Future}
class RpcTransportState {<<enum>>\nConnecting\nConnected\nDisconnected\nFailed}
class RpcClient {+new(host, port) RpcClient\nimplements RpcServiceCallerInterface}
class RpcWasmClient {+new(url) RpcWasmClient\nimplements RpcServiceCallerInterface}
class CustomClient {+new(...) CustomClient\nimplements RpcServiceCallerInterface}
RpcServiceCallerInterface <|.. RpcClient : implements
RpcServiceCallerInterface <|.. RpcWasmClient : implements
RpcServiceCallerInterface <|.. CustomClient : implements
RpcServiceCallerInterface --> RpcTransportState : returns
RpcServiceCallerInterface Contract
This design allows application code to be written once against RpcServiceCallerInterface and work with any compliant transport implementation (Tokio, WASM, custom transports, etc.).
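For example, a helper can be written once against the trait and reused with any client. In this sketch the trait bound is the one named in this document, while Add::call and the use of anyhow for errors are assumptions:

```rust
// Sketch: business logic that is generic over the transport.
// Works with RpcClient (Tokio), RpcWasmClient (browser), or any
// custom client implementing RpcServiceCallerInterface.
async fn sum_remotely<C: RpcServiceCallerInterface>(
    client: &C,
    numbers: Vec<f64>,
) -> anyhow::Result<f64> {
    let response = Add::call(client, AddRequest { numbers }).await?;
    Ok(response.sum)
}
```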
Server-Side: RpcServiceEndpointInterface
The server-side request handling is abstracted through the RpcServiceEndpointInterface trait, which manages method registration and dispatch.
classDiagram
class RpcServiceEndpointInterface {<<trait>>\n+register_prebuffered(method_id: u64, handler: Fn) Future~Result~~~\n+register_streaming(method_id: u64, handler: Fn) Future~Result~~~\n+unregister(method_id: u64) Future~Result~~~\n+is_registered(method_id: u64) Future~bool~}
class HandlerContext {+client_id: Option~String~\n+metadata: HashMap~String, String~}
class PrebufferedHandler {<<function>>\n+Fn(Vec~u8~, HandlerContext) Future~Result~Vec~u8~~~}
class StreamingHandler {<<function>>\n+Fn(Option~Vec~u8~~, DynamicChannel, HandlerContext) Future~Result~~~}
class RpcServer {
+new(config) RpcServer
+endpoint() Arc~RpcServiceEndpointInterface~
+serve_with_listener(listener) Future
}
RpcServiceEndpointInterface --> PrebufferedHandler : accepts
RpcServiceEndpointInterface --> StreamingHandler : accepts
RpcServiceEndpointInterface --> HandlerContext : provides
RpcServer --> RpcServiceEndpointInterface : provides
RpcServiceEndpointInterface Contract
Handlers are registered by METHOD_ID and receive:
- Request bytes: The serialized request parameters (for prebuffered) or initial params (for streaming)
- Context: Metadata about the client and connection
- Dynamic channel (streaming only): For incremental data transmission
Data Serialization with Bitcode
The RPC framework uses the bitcode crate for binary serialization. This provides compact, efficient encoding of Rust types.
Serialization Requirements
For a type to be used in RPC method definitions, it must implement:
- serde::Serialize: For encoding
- serde::Deserialize: For decoding
serde provides these implementations for most standard Rust types, including:
- Primitive types (u64, f64, bool, etc.)
- Standard collections (Vec<T>, HashMap<K, V>, etc.)
- Custom structs with #[derive(Serialize, Deserialize)]
Serialization Flow
The compact binary format of bitcode significantly reduces payload sizes compared to JSON or other text-based formats, contributing to the framework's low-latency characteristics.
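A round trip through bitcode looks like this (a self-contained sketch assuming bitcode's serde feature; AddRequest is the illustrative type used throughout this page):

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, PartialEq, Debug)]
struct AddRequest {
    numbers: Vec<f64>,
}

fn main() -> Result<(), bitcode::Error> {
    let request = AddRequest { numbers: vec![1.0, 2.0, 3.0] };

    // Encode to a compact binary buffer, typically much smaller than JSON.
    let bytes: Vec<u8> = bitcode::serialize(&request)?;

    // Decode back into the typed struct.
    let decoded: AddRequest = bitcode::deserialize(&bytes)?;
    assert_eq!(request, decoded);
    Ok(())
}
```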
Sources:
- Cargo.lock:158-168
- Cargo.toml:52
- README.md:32
- High-level architecture diagram 6 (Data Flow and Serialization Strategy)
Method Registration and Dispatch
On the server side, methods must be registered with the endpoint before they can be invoked. The registration process associates a METHOD_ID with a handler function.
Handler Registration Pattern
From the example application, the registration pattern looks roughly like the sketch below. The handler shape (request bytes plus context, returning response bytes) follows the PrebufferedHandler signature in the class diagram above; exact parameter order and error types are assumptions:
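```rust
// Sketch: associate Add::METHOD_ID with an async handler on the endpoint
// (runs inside the server's async setup code).
endpoint
    .register_prebuffered(Add::METHOD_ID, |request_bytes, _ctx| async move {
        // Decode the request using the shared service definition.
        let request = Add::decode_request(&request_bytes)?;

        // Business logic.
        let sum: f64 = request.numbers.iter().sum();

        // Encode the typed result back into response bytes.
        Ok(Add::encode_response(&AddResponse { sum })?)
    })
    .await?;
```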
Registration Lifecycle
Once registered, handlers remain active until explicitly unregistered or the server shuts down. Multiple concurrent invocations of the same handler are supported through the underlying multiplexing layer.
Cross-Platform RPC Invocation
A key design goal of the RPC framework is enabling the same application code to work across different platforms and transports. This is achieved through the abstraction provided by RpcServiceCallerInterface.
graph TB
subgraph "Shared Application Logic"
APP_CODE["Application Code\nPlatform-agnostic"]
METHOD_CALL["Method::call(&client, params)"]
APP_CODE --> METHOD_CALL
end
subgraph "Native Platform"
TOKIO_CLIENT["RpcClient\n(Tokio-based)"]
TOKIO_RUNTIME["Tokio async runtime"]
TOKIO_WS["tokio-tungstenite\nWebSocket"]
METHOD_CALL -.uses.-> TOKIO_CLIENT
TOKIO_CLIENT --> TOKIO_RUNTIME
TOKIO_CLIENT --> TOKIO_WS
end
subgraph "Web Platform"
WASM_CLIENT["RpcWasmClient\n(WASM-based)"]
WASM_RUNTIME["Browser event loop"]
WASM_WS["JavaScript WebSocket API\nvia wasm-bindgen"]
METHOD_CALL -.uses.-> WASM_CLIENT
WASM_CLIENT --> WASM_RUNTIME
WASM_CLIENT --> WASM_WS
end
subgraph "Custom Platform"
CUSTOM_CLIENT["Custom RpcClient\nimplements RpcServiceCallerInterface"]
CUSTOM_TRANSPORT["Custom Transport"]
METHOD_CALL -.uses.-> CUSTOM_CLIENT
CUSTOM_CLIENT --> CUSTOM_TRANSPORT
end
Platform-Agnostic Application Code
The application layer depends only on:
- The service definition crate (for method traits)
- The RpcServiceCallerInterface trait (for invocation)
This allows the same business logic to run in servers, native desktop applications, mobile apps, and web browsers with minimal platform-specific code.
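Constructing the right client per platform can then be isolated to a few lines, as in this sketch (constructor shapes follow the class diagrams above; cfg-gating is one possible approach, not necessarily the one used by the crates, and whether the constructors return Result is an assumption):

```rust
// Sketch: pick the transport at compile time; everything downstream
// of connect() is shared, platform-agnostic code.
#[cfg(not(target_arch = "wasm32"))]
async fn connect() -> RpcClient {
    // Tokio-based native client.
    RpcClient::new("127.0.0.1", 8080).await
}

#[cfg(target_arch = "wasm32")]
async fn connect() -> RpcWasmClient {
    // Browser client over the JavaScript WebSocket API.
    RpcWasmClient::new("ws://127.0.0.1:8080")
}
```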
Sources:
- README.md:47
- Cargo.lock:898-916 (tokio client)
- Cargo.lock:935-954 (wasm client)
- High-level architecture diagram 2 (Cross-Platform Deployment Model)
Error Handling in RPC
The RPC framework uses Rust's Result type throughout, with error types defined at the appropriate abstraction levels.
graph TD
subgraph "Application-Level Errors"
BIZ_ERR["Business Logic Errors\nDomain-specific"]
end
subgraph "RPC Framework Errors"
RPC_ERR["RpcServiceError"]
METHOD_NOT_FOUND["MethodNotFound\nInvalid METHOD_ID"]
ENCODING_ERR["EncodingError\nSerialization failure"]
SYSTEM_ERR["SystemError\nInternal dispatcher error"]
TRANSPORT_ERR["TransportError\nNetwork failure"]
RPC_ERR --> METHOD_NOT_FOUND
RPC_ERR --> ENCODING_ERR
RPC_ERR --> SYSTEM_ERR
RPC_ERR --> TRANSPORT_ERR
end
subgraph "Core Layer Errors"
CORE_ERR["Muxio Core Errors\nFraming protocol errors"]
end
BIZ_ERR -.propagates through.-> RPC_ERR
TRANSPORT_ERR -.wraps.-> CORE_ERR
RPC Error Hierarchy
Error handling patterns:
- Client-side: Errors are returned as Result<T, E> from RPC calls
- Server-side: Handler errors are serialized and transmitted back to the client
- Transport errors: Automatically trigger state changes (see RpcTransportState)
For detailed error type definitions, see Error Handling.
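In practice this means every call site handles a Result, as in this sketch (the call shape follows the document's Add example; the concrete error enum lives in the RPC crates and is not spelled out here):

```rust
// Sketch: business-logic and transport failures surface uniformly
// as the Err arm of the returned Result.
match Add::call(&client, AddRequest { numbers: vec![1.0, 2.0] }).await {
    Ok(response) => println!("sum = {}", response.sum),
    Err(err) => eprintln!("RPC failed: {err}"),
}
```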
Performance Characteristics
The RPC framework is designed for low latency and high throughput. Key performance features include:
Performance Optimizations
| Feature | Benefit | Implementation |
|---|---|---|
| Compile-time method IDs | Zero runtime hash overhead | xxhash-rust with const_xxh3 |
| Binary serialization | Smaller payload sizes | bitcode crate |
| Minimal frame headers | Reduced per-message overhead | Custom binary protocol |
| Request multiplexing | Concurrent calls over single connection | RpcDispatcher correlation |
| Zero-copy streaming | Reduced memory allocations | DynamicChannel for chunked data |
| Callback-driven dispatch | No polling overhead | Async handlers with futures |
The combination of these optimizations makes the RPC framework suitable for:
- Low-latency trading systems
- Real-time gaming
- Interactive remote tooling
- High-throughput data processing
Integration with Transport Layer
The RPC framework is designed to be transport-agnostic, with concrete implementations provided for common scenarios.
graph TB
subgraph "RPC Abstraction Layer"
CALLER_IF["RpcServiceCallerInterface"]
ENDPOINT_IF["RpcServiceEndpointInterface"]
end
subgraph "Core Dispatcher"
DISPATCHER["RpcDispatcher\nRequest correlation"]
SEND_CB["send_callback\nVec<u8> → ()"]
RECV_CB["receive_callback\n() → Vec<u8>"]
end
subgraph "Tokio WebSocket Transport"
TOKIO_SERVER["TokioRpcServer"]
TOKIO_CLIENT["TokioRpcClient"]
TUNGSTENITE["tokio-tungstenite"]
TOKIO_SERVER --> TUNGSTENITE
TOKIO_CLIENT --> TUNGSTENITE
end
subgraph "WASM WebSocket Transport"
WASM_CLIENT["WasmRpcClient"]
JS_BRIDGE["wasm-bindgen bridge"]
BROWSER_WS["Browser WebSocket API"]
WASM_CLIENT --> JS_BRIDGE
JS_BRIDGE --> BROWSER_WS
end
CALLER_IF -.implemented by.-> TOKIO_CLIENT
CALLER_IF -.implemented by.-> WASM_CLIENT
ENDPOINT_IF -.implemented by.-> TOKIO_SERVER
TOKIO_CLIENT --> DISPATCHER
WASM_CLIENT --> DISPATCHER
TOKIO_SERVER --> DISPATCHER
DISPATCHER --> SEND_CB
DISPATCHER --> RECV_CB
SEND_CB -.invokes.-> TUNGSTENITE
RECV_CB -.invokes.-> TUNGSTENITE
SEND_CB -.invokes.-> JS_BRIDGE
RECV_CB -.invokes.-> JS_BRIDGE
Transport Integration Points
The RpcDispatcher accepts callbacks for sending and receiving bytes, allowing it to work with any transport mechanism. This design enables:
- WebSocket transports (Tokio and WASM implementations provided)
- TCP socket transports
- In-memory transports (for testing)
- Custom transports (by providing appropriate callbacks)
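The seam itself is just byte-oriented callbacks, as in this self-contained sketch of an in-memory "transport" suitable for tests. It follows the send_callback label in the diagram above; the RpcDispatcher's real constructor and callback types are not shown in this document, so everything here is illustrative:

```rust
use std::sync::mpsc;

// Sketch of the callback seam: the dispatcher layer only needs a way to
// push outgoing bytes and a way to be fed incoming bytes. Queuing frames
// in memory like this is enough for an in-process test transport.
fn main() {
    let (tx, rx) = mpsc::channel::<Vec<u8>>();

    // send_callback: Vec<u8> -> () — "transmit" by queuing the frame.
    let send_callback = move |frame: Vec<u8>| {
        tx.send(frame).expect("receiver dropped");
    };

    send_callback(vec![0x01, 0x02, 0x03]);

    // Receive side: drain frames as if they arrived off the wire.
    while let Ok(frame) = rx.try_recv() {
        println!("received {} bytes", frame.len());
    }
}
```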
For implementation details of specific transports, see Transport Implementations.
Sources:
- README.md:34
- Cargo.lock:898-933 (transport implementations)
- High-level architecture diagram 1 (Overall System Architecture)