
RPC Framework

Purpose and Scope

This document provides a comprehensive overview of the RPC (Remote Procedure Call) abstraction layer in the rust-muxio system. The RPC framework is built on top of the core muxio multiplexing library and provides a structured, type-safe mechanism for defining and invoking remote methods across client-server boundaries.

The RPC framework consists of three primary components, each distributed as a separate crate: muxio-rpc-service (shared service definitions), muxio-rpc-service-caller (client-side invocation), and muxio-rpc-service-endpoint (server-side dispatch).

For details on specific transport implementations that use this RPC framework, see Transport Implementations. For information on the underlying multiplexing and framing protocol, see Core Library (muxio).


Architecture Overview

The RPC framework operates as a middleware layer between application code and the underlying muxio multiplexing protocol. It provides compile-time type safety while maintaining flexibility in serialization and transport choices.

graph TB
    subgraph "Application Layer"
        APP["Application Code\nType-safe method calls"]
    end

    subgraph "RPC Service Definition Layer"
        SERVICE["muxio-rpc-service"]
        TRAIT["RpcMethodPrebuffered\nRpcMethodStreaming traits"]
        METHOD_ID["METHOD_ID generation\nxxhash at compile-time"]
        ENCODE["encode_request/response\ndecode_request/response"]
        SERVICE --> TRAIT
        SERVICE --> METHOD_ID
        SERVICE --> ENCODE
    end

    subgraph "Client Side"
        CALLER["muxio-rpc-service-caller"]
        CALLER_IFACE["RpcServiceCallerInterface"]
        PREBUF_CALL["call_prebuffered"]
        STREAM_CALL["call_streaming"]
        CALLER --> CALLER_IFACE
        CALLER_IFACE --> PREBUF_CALL
        CALLER_IFACE --> STREAM_CALL
    end

    subgraph "Server Side"
        ENDPOINT["muxio-rpc-service-endpoint"]
        ENDPOINT_IFACE["RpcServiceEndpointInterface"]
        REGISTER_PREBUF["register_prebuffered"]
        REGISTER_STREAM["register_streaming"]
        ENDPOINT --> ENDPOINT_IFACE
        ENDPOINT_IFACE --> REGISTER_PREBUF
        ENDPOINT_IFACE --> REGISTER_STREAM
    end

    subgraph "Core Multiplexing Layer"
        DISPATCHER["RpcDispatcher"]
        MUXIO_CORE["muxio core\nBinary framing protocol"]
        DISPATCHER --> MUXIO_CORE
    end

    APP --> TRAIT
    APP --> CALLER_IFACE

    TRAIT -.shared definitions.-> CALLER
    TRAIT -.shared definitions.-> ENDPOINT

    CALLER --> DISPATCHER
    ENDPOINT --> DISPATCHER

    PREBUF_CALL -.invokes.-> DISPATCHER
    STREAM_CALL -.invokes.-> DISPATCHER
    REGISTER_PREBUF -.handles via.-> DISPATCHER
    REGISTER_STREAM -.handles via.-> DISPATCHER

RPC Framework Component Structure

Core RPC Components

The RPC framework is divided into three specialized crates, each with a distinct responsibility in the RPC lifecycle.

Component Responsibilities

| Crate | Primary Responsibility | Key Traits/Types | Dependencies |
|---|---|---|---|
| muxio-rpc-service | Service definition contracts | RpcMethodPrebuffered, RpcMethodStreaming, METHOD_ID | muxio, bitcode, xxhash-rust, num_enum |
| muxio-rpc-service-caller | Client-side invocation | RpcServiceCallerInterface, call_prebuffered, call_streaming | muxio, muxio-rpc-service, futures |
| muxio-rpc-service-endpoint | Server-side dispatch | RpcServiceEndpointInterface, register_prebuffered, register_streaming | muxio, muxio-rpc-service, muxio-rpc-service-caller |

RPC Method Definition and Identification

The foundation of the RPC framework is the method definition system, which establishes compile-time contracts between clients and servers.

graph LR
    subgraph "Compile Time"
        METHOD_NAME["Method Name String\ne.g., 'Add'"]
        XXHASH["xxhash-rust\nconst_xxh3"]
        METHOD_ID["METHOD_ID: u64\nCompile-time constant"]
        METHOD_NAME --> XXHASH
        XXHASH --> METHOD_ID
    end

    subgraph "Service Definition Trait"
        TRAIT_IMPL["RpcMethodPrebuffered impl"]
        CONST_ID["const METHOD_ID"]
        ENCODE_REQ["encode_request"]
        DECODE_REQ["decode_request"]
        ENCODE_RESP["encode_response"]
        DECODE_RESP["decode_response"]
        TRAIT_IMPL --> CONST_ID
        TRAIT_IMPL --> ENCODE_REQ
        TRAIT_IMPL --> DECODE_REQ
        TRAIT_IMPL --> ENCODE_RESP
        TRAIT_IMPL --> DECODE_RESP
    end

    subgraph "Bitcode Serialization"
        BITCODE["bitcode crate"]
        PARAMS["Request/Response types\nSerialize + Deserialize"]
        ENCODE_REQ --> BITCODE
        DECODE_REQ --> BITCODE
        ENCODE_RESP --> BITCODE
        DECODE_RESP --> BITCODE
        BITCODE --> PARAMS
    end

    METHOD_ID --> CONST_ID

Method ID Generation Process

The METHOD_ID is a u64 value generated at compile time by hashing the method name using xxhash-rust. This approach ensures:

  • Collision prevention : Hash-based IDs virtually eliminate accidental collisions
  • Zero runtime overhead : IDs are compile-time constants
  • Version independence : Method IDs remain stable across compilations
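
The sketch below shows the idea, assuming the xxhash-rust crate built with its const_xxh3 feature; the constant name is illustrative rather than the exact muxio-rpc-service API.

```rust
// Sketch: a compile-time method ID derived from the method name.
// Assumes xxhash-rust = { version = "0.8", features = ["const_xxh3"] }.
use xxhash_rust::const_xxh3::xxh3_64;

// const fn: evaluated entirely at compile time, so "Add" is never
// hashed at runtime.
pub const ADD_METHOD_ID: u64 = xxh3_64(b"Add");

fn main() {
    println!("METHOD_ID for \"Add\": {:#018x}", ADD_METHOD_ID);
}
```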

Type Safety Through Shared Definitions

The RPC framework enforces type safety by requiring both client and server to depend on the same service definition crate. This creates a compile-time contract that prevents API mismatches.

sequenceDiagram
    participant DEV as "Developer"
    participant DEF as "Service Definition Crate"
    participant CLIENT as "Client Crate"
    participant SERVER as "Server Crate"
    participant COMPILER as "Rust Compiler"
    
    DEV->>DEF: Define RpcMethodPrebuffered
    DEF->>DEF: Generate METHOD_ID
    DEF->>DEF: Define Request/Response types
    
    DEV->>CLIENT: Add dependency on DEF
    DEV->>SERVER: Add dependency on DEF
    
    CLIENT->>DEF: Import method traits
    SERVER->>DEF: Import method traits
    
    CLIENT->>COMPILER: Compile with encode_request
    SERVER->>COMPILER: Compile with decode_request
    
    alt Type Mismatch
        COMPILER->>DEV: Compilation Error
    else Types Match
        COMPILER->>CLIENT: Successful build
        COMPILER->>SERVER: Successful build
    end
    
    Note over CLIENT,SERVER: Both use identical\nMETHOD_ID and data structures

Shared Definition Workflow

This workflow demonstrates how compile-time validation eliminates an entire class of runtime errors. If the client attempts to send a request with a different structure than what the server expects, the code will not compile.
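
To make this concrete, a shared definition crate might look like the following sketch. The standalone struct shape here is an assumption for illustration; the real contract lives in muxio-rpc-service's RpcMethodPrebuffered trait.

```rust
// Sketch of a shared service definition both client and server depend on.
// Assumes serde, bitcode (with the "serde" feature), and xxhash-rust
// (with "const_xxh3") as dependencies, matching the crates named above.
use serde::{Deserialize, Serialize};
use xxhash_rust::const_xxh3::xxh3_64;

#[derive(Serialize, Deserialize)]
pub struct AddRequest {
    pub numbers: Vec<f64>,
}

#[derive(Serialize, Deserialize)]
pub struct AddResponse {
    pub sum: f64,
}

pub struct Add;

impl Add {
    // Identical constant on both sides of the wire.
    pub const METHOD_ID: u64 = xxh3_64(b"Add");

    pub fn encode_request(req: &AddRequest) -> Result<Vec<u8>, bitcode::Error> {
        bitcode::serialize(req)
    }

    pub fn decode_request(bytes: &[u8]) -> Result<AddRequest, bitcode::Error> {
        bitcode::deserialize(bytes)
    }

    pub fn encode_response(resp: &AddResponse) -> Result<Vec<u8>, bitcode::Error> {
        bitcode::serialize(resp)
    }

    pub fn decode_response(bytes: &[u8]) -> Result<AddResponse, bitcode::Error> {
        bitcode::deserialize(bytes)
    }
}
```

Because both binaries compile against this one crate, any change to AddRequest or AddResponse forces both sides to be rebuilt against the new shape.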

RPC Call Flow

Understanding how an RPC call travels through the system is essential for debugging and optimization.

sequenceDiagram
    participant APP as "Application Code"
    participant METHOD as "Method::call()\nRpcMethodPrebuffered"
    participant CALLER as "RpcServiceCallerInterface"
    participant DISP as "RpcDispatcher"
    participant FRAME as "Binary Framing Layer"
    participant TRANSPORT as "Transport\n(WebSocket, etc.)"
    participant ENDPOINT as "RpcServiceEndpointInterface"
    participant HANDLER as "Registered Handler"

    APP->>METHOD: call(params)
    METHOD->>METHOD: encode_request(params) → bytes
    METHOD->>CALLER: call_prebuffered(METHOD_ID, bytes)

    CALLER->>DISP: send_request(method_id, request_bytes)
    DISP->>DISP: Assign unique request_id
    DISP->>FRAME: Serialize to binary frames
    FRAME->>TRANSPORT: Transmit frames

    TRANSPORT->>FRAME: Receive frames
    FRAME->>DISP: Reassemble frames
    DISP->>DISP: Lookup handler by METHOD_ID
    DISP->>ENDPOINT: dispatch_to_handler(METHOD_ID, bytes)
    ENDPOINT->>HANDLER: invoke(request_bytes, context)

    HANDLER->>METHOD: decode_request(bytes) → params
    HANDLER->>HANDLER: Process business logic
    HANDLER->>METHOD: encode_response(result) → bytes
    HANDLER->>ENDPOINT: Return response_bytes

    ENDPOINT->>DISP: send_response(request_id, bytes)
    DISP->>FRAME: Serialize to binary frames
    FRAME->>TRANSPORT: Transmit frames

    TRANSPORT->>FRAME: Receive frames
    FRAME->>DISP: Reassemble frames
    DISP->>DISP: Match request_id to pending call
    DISP->>CALLER: resolve_future(request_id, bytes)
    CALLER->>METHOD: decode_response(bytes) → result
    METHOD->>APP: Return typed result

Complete RPC Invocation Sequence

Key observations:

  • The METHOD_ID is used for routing on the server side
  • The request_id (assigned by the dispatcher) is used for correlation
  • All serialization/deserialization happens at the method trait level
  • The dispatcher only handles raw bytes

Prebuffered vs. Streaming RPC

The RPC framework supports two distinct calling patterns, each optimized for different use cases.

RPC Pattern Comparison

| Aspect | Prebuffered RPC | Streaming RPC |
|---|---|---|
| Request size | Complete request buffered in memory | Request can be sent in chunks |
| Response size | Complete response buffered in memory | Response can be received in chunks |
| Memory usage | Higher for large payloads | Lower, constant memory footprint |
| Latency | Lower for small payloads | Higher initial latency, better throughput |
| Trait | RpcMethodPrebuffered | RpcMethodStreaming |
| Use cases | Small to medium payloads (< 10 MB) | Large payloads, file transfers, real-time data |
| Multiplexing | Multiple calls can be concurrent | Streams can be interleaved |

Client-Side: RpcServiceCallerInterface

The client-side RPC invocation is abstracted through the RpcServiceCallerInterface trait, which allows different transport implementations to provide identical calling semantics.

classDiagram
    class RpcServiceCallerInterface {
        <<trait>>
        +call_prebuffered(method_id: u64, params: Option~Vec~u8~~, payload: Option~Vec~u8~~) Future~Result~Vec~u8~~~
        +call_streaming(method_id: u64, params: Option~Vec~u8~~) Future~Result~StreamResponse~~
        +get_transport_state() RpcTransportState
        +set_state_change_handler(handler: Fn) Future
    }

    class RpcTransportState {
        <<enum>>
        Connecting
        Connected
        Disconnected
        Failed
    }

    class RpcClient {
        +new(host, port) RpcClient
    }

    class RpcWasmClient {
        +new(url) RpcWasmClient
    }

    class CustomClient {
        +new(...) CustomClient
    }

    RpcServiceCallerInterface <|.. RpcClient : implements
    RpcServiceCallerInterface <|.. RpcWasmClient : implements
    RpcServiceCallerInterface <|.. CustomClient : implements
    RpcServiceCallerInterface --> RpcTransportState : returns

RpcServiceCallerInterface Contract

This design allows application code to be written once against RpcServiceCallerInterface and work with any compliant transport implementation (Tokio, WASM, custom transports, etc.).
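
A minimal call-site sketch, reusing the Add definition sketched earlier; the call_prebuffered signature follows the contract above but should be treated as illustrative:

```rust
// Sketch: invoking a prebuffered method through any transport that
// implements RpcServiceCallerInterface. anyhow is used here only to
// keep the illustration short.
async fn add_numbers(
    client: &impl RpcServiceCallerInterface,
    numbers: Vec<f64>,
) -> anyhow::Result<f64> {
    let request_bytes = Add::encode_request(&AddRequest { numbers })?;

    // The caller traffics only in raw bytes; all typing lives in the
    // shared definition crate.
    let response_bytes = client
        .call_prebuffered(Add::METHOD_ID, Some(request_bytes), None)
        .await?;

    Ok(Add::decode_response(&response_bytes)?.sum)
}
```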

Server-Side: RpcServiceEndpointInterface

The server-side request handling is abstracted through the RpcServiceEndpointInterface trait, which manages method registration and dispatch.

classDiagram
    class RpcServiceEndpointInterface {
        <<trait>>
        +register_prebuffered(method_id: u64, handler: Fn) Future~Result~~~
        +register_streaming(method_id: u64, handler: Fn) Future~Result~~~
        +unregister(method_id: u64) Future~Result~~~
        +is_registered(method_id: u64) Future~bool~
    }

    class HandlerContext {
        +client_id: Option~String~
        +metadata: HashMap~String, String~
    }

    class PrebufferedHandler {
        <<function>>
        +Fn(Vec~u8~, HandlerContext) Future~Result~Vec~u8~~~
    }

    class StreamingHandler {
        <<function>>
        +Fn(Option~Vec~u8~~, DynamicChannel, HandlerContext) Future~Result~~~
    }

    class RpcServer {
        +new(config) RpcServer
        +endpoint() Arc~RpcServiceEndpointInterface~
        +serve_with_listener(listener) Future
    }

    RpcServiceEndpointInterface --> PrebufferedHandler : accepts
    RpcServiceEndpointInterface --> StreamingHandler : accepts
    RpcServiceEndpointInterface --> HandlerContext : provides
    RpcServer --> RpcServiceEndpointInterface : provides

RpcServiceEndpointInterface Contract

Handlers are registered by METHOD_ID and receive:

  1. Request bytes : The serialized request parameters (for prebuffered) or initial params (for streaming)
  2. Context : Metadata about the client and connection
  3. Dynamic channel (streaming only): For incremental data transmission

Data Serialization with Bitcode

The RPC framework uses the bitcode crate for binary serialization. This provides compact, efficient encoding of Rust types.

Serialization Requirements

For a type to be used in RPC method definitions, it must implement:

  • serde::Serialize - For encoding
  • serde::Deserialize - For decoding

The serde ecosystem already provides these implementations for most standard Rust types, which bitcode can then encode, including:

  • Primitive types (u64, f64, bool, etc.)
  • Standard collections (Vec<T>, HashMap<K, V>, etc.)
  • Custom structs with #[derive(Serialize, Deserialize)]

Serialization Flow

The compact binary format of bitcode significantly reduces payload sizes compared to JSON or other text-based formats, contributing to the framework's low-latency characteristics.
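
A small round-trip sketch, assuming bitcode is built with its serde feature:

```rust
// Sketch: round-tripping a custom struct through bitcode's serde API.
// Assumes bitcode = { version = "0.6", features = ["serde"] }.
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug, PartialEq)]
struct AddRequest {
    numbers: Vec<f64>,
}

fn main() -> Result<(), bitcode::Error> {
    let request = AddRequest { numbers: vec![1.0, 2.5, 4.0] };

    // Compact binary encoding; typically much smaller than JSON.
    let bytes: Vec<u8> = bitcode::serialize(&request)?;
    let decoded: AddRequest = bitcode::deserialize(&bytes)?;

    assert_eq!(request, decoded);
    println!("encoded {} bytes", bytes.len());
    Ok(())
}
```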

Method Registration and Dispatch

On the server side, methods must be registered with the endpoint before they can be invoked. The registration process associates a METHOD_ID with a handler function.

Handler Registration Pattern

From the example application, the registration pattern follows the sketch below (the handler signature shown is an assumption based on the endpoint contract above):
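
```rust
// Sketch: registering a prebuffered handler by METHOD_ID at server startup.
// The handler signature is an assumption based on the endpoint contract
// described above; Add/AddResponse come from the shared definition crate
// sketched earlier.
async fn register_handlers(
    endpoint: &impl RpcServiceEndpointInterface,
) -> anyhow::Result<()> {
    endpoint
        .register_prebuffered(Add::METHOD_ID, |request_bytes, _ctx| async move {
            // Decode with the same definition crate the client uses.
            let req = Add::decode_request(&request_bytes)?;
            let sum = req.numbers.iter().sum();
            // Hand raw bytes back to the dispatcher for framing.
            Add::encode_response(&AddResponse { sum })
        })
        .await?;
    Ok(())
}
```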

Registration Lifecycle

Once registered, handlers remain active until explicitly unregistered or the server shuts down. Multiple concurrent invocations of the same handler are supported through the underlying multiplexing layer.

Cross-Platform RPC Invocation

A key design goal of the RPC framework is enabling the same application code to work across different platforms and transports. This is achieved through the abstraction provided by RpcServiceCallerInterface.

graph TB
    subgraph "Shared Application Logic"
        APP_CODE["Application Code\nPlatform-agnostic"]
        METHOD_CALL["Method::call(&client, params)"]
        APP_CODE --> METHOD_CALL
    end

    subgraph "Native Platform"
        TOKIO_CLIENT["RpcClient\n(Tokio-based)"]
        TOKIO_RUNTIME["Tokio async runtime"]
        TOKIO_WS["tokio-tungstenite\nWebSocket"]
        METHOD_CALL -.uses.-> TOKIO_CLIENT
        TOKIO_CLIENT --> TOKIO_RUNTIME
        TOKIO_CLIENT --> TOKIO_WS
    end

    subgraph "Web Platform"
        WASM_CLIENT["RpcWasmClient\n(WASM-based)"]
        WASM_RUNTIME["Browser event loop"]
        WASM_WS["JavaScript WebSocket API\nvia wasm-bindgen"]
        METHOD_CALL -.uses.-> WASM_CLIENT
        WASM_CLIENT --> WASM_RUNTIME
        WASM_CLIENT --> WASM_WS
    end

    subgraph "Custom Platform"
        CUSTOM_CLIENT["Custom RpcClient\nimplements RpcServiceCallerInterface"]
        CUSTOM_TRANSPORT["Custom Transport"]
        METHOD_CALL -.uses.-> CUSTOM_CLIENT
        CUSTOM_CLIENT --> CUSTOM_TRANSPORT
    end

Platform-Agnostic Application Code

The application layer depends only on:

  1. The service definition crate (for method traits)
  2. The RpcServiceCallerInterface trait (for invocation)

This allows the same business logic to run in servers, native desktop applications, mobile apps, and web browsers with minimal platform-specific code.
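
A hedged sketch of what that looks like in practice, reusing the add_numbers helper from the client-side section:

```rust
// Sketch: platform-agnostic business logic. The same function compiles
// against a Tokio-based RpcClient natively and an RpcWasmClient in the
// browser, because both implement RpcServiceCallerInterface.
async fn report_sum(client: &impl RpcServiceCallerInterface) -> anyhow::Result<()> {
    let sum = add_numbers(client, vec![1.0, 2.0, 3.0]).await?;
    println!("server reports sum = {sum}");
    Ok(())
}
```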

Error Handling in RPC

The RPC framework uses Rust's Result type throughout, with error types defined at the appropriate abstraction levels.

graph TD
    subgraph "Application-Level Errors"
        BIZ_ERR["Business Logic Errors\nDomain-specific"]
    end

    subgraph "RPC Framework Errors"
        RPC_ERR["RpcServiceError"]
        METHOD_NOT_FOUND["MethodNotFound\nInvalid METHOD_ID"]
        ENCODING_ERR["EncodingError\nSerialization failure"]
        SYSTEM_ERR["SystemError\nInternal dispatcher error"]
        TRANSPORT_ERR["TransportError\nNetwork failure"]
        RPC_ERR --> METHOD_NOT_FOUND
        RPC_ERR --> ENCODING_ERR
        RPC_ERR --> SYSTEM_ERR
        RPC_ERR --> TRANSPORT_ERR
    end

    subgraph "Core Layer Errors"
        CORE_ERR["Muxio Core Errors\nFraming protocol errors"]
    end

    BIZ_ERR -.propagates through.-> RPC_ERR
    TRANSPORT_ERR -.wraps.-> CORE_ERR

RPC Error Hierarchy

Error handling patterns:

  • Client-side : Errors are returned as Result<T, E> from RPC calls
  • Server-side : Handler errors are serialized and transmitted back to the client
  • Transport errors : Automatically trigger state changes (see RpcTransportState)

For detailed error type definitions, see Error Handling.
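
As a client-side illustration, here is a hedged sketch of handling these errors, using a hypothetical enum that mirrors the hierarchy above:

```rust
// Hypothetical error enum mirroring the hierarchy above; the real
// RpcServiceError in muxio-rpc-service may be shaped differently.
#[derive(Debug)]
enum RpcServiceError {
    MethodNotFound,
    EncodingError(String),
    SystemError(String),
    TransportError(String),
}

fn handle(result: Result<f64, RpcServiceError>) {
    match result {
        Ok(sum) => println!("sum = {sum}"),
        Err(RpcServiceError::MethodNotFound) => {
            eprintln!("server does not recognize this METHOD_ID");
        }
        Err(RpcServiceError::TransportError(e)) => {
            eprintln!("transport failure: {e}; check RpcTransportState");
        }
        Err(other) => eprintln!("rpc failed: {other:?}"),
    }
}
```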

Performance Characteristics

The RPC framework is designed for low latency and high throughput. Key performance features include:

Performance Optimizations

| Feature | Benefit | Implementation |
|---|---|---|
| Compile-time method IDs | Zero runtime hash overhead | xxhash-rust with const_xxh3 |
| Binary serialization | Smaller payload sizes | bitcode crate |
| Minimal frame headers | Reduced per-message overhead | Custom binary protocol |
| Request multiplexing | Concurrent calls over a single connection | RpcDispatcher correlation |
| Zero-copy streaming | Reduced memory allocations | DynamicChannel for chunked data |
| Callback-driven dispatch | No polling overhead | Async handlers with futures |

The combination of these optimizations makes the RPC framework suitable for:

  • Low-latency trading systems
  • Real-time gaming
  • Interactive remote tooling
  • High-throughput data processing

Integration with Transport Layer

The RPC framework is designed to be transport-agnostic, with concrete implementations provided for common scenarios.

graph TB
    subgraph "RPC Abstraction Layer"
        CALLER_IF["RpcServiceCallerInterface"]
        ENDPOINT_IF["RpcServiceEndpointInterface"]
    end

    subgraph "Core Dispatcher"
        DISPATCHER["RpcDispatcher\nRequest correlation"]
        SEND_CB["send_callback\nVec&lt;u8&gt; → ()"]
        RECV_CB["receive_callback\n() → Vec&lt;u8&gt;"]
    end

    subgraph "Tokio WebSocket Transport"
        TOKIO_SERVER["TokioRpcServer"]
        TOKIO_CLIENT["TokioRpcClient"]
        TUNGSTENITE["tokio-tungstenite"]
        TOKIO_SERVER --> TUNGSTENITE
        TOKIO_CLIENT --> TUNGSTENITE
    end

    subgraph "WASM WebSocket Transport"
        WASM_CLIENT["WasmRpcClient"]
        JS_BRIDGE["wasm-bindgen bridge"]
        BROWSER_WS["Browser WebSocket API"]
        WASM_CLIENT --> JS_BRIDGE
        JS_BRIDGE --> BROWSER_WS
    end

    CALLER_IF -.implemented by.-> TOKIO_CLIENT
    CALLER_IF -.implemented by.-> WASM_CLIENT
    ENDPOINT_IF -.implemented by.-> TOKIO_SERVER

    TOKIO_CLIENT --> DISPATCHER
    WASM_CLIENT --> DISPATCHER
    TOKIO_SERVER --> DISPATCHER

    DISPATCHER --> SEND_CB
    DISPATCHER --> RECV_CB

    SEND_CB -.invokes.-> TUNGSTENITE
    RECV_CB -.invokes.-> TUNGSTENITE
    SEND_CB -.invokes.-> JS_BRIDGE
    RECV_CB -.invokes.-> JS_BRIDGE

Transport Integration Points

The RpcDispatcher accepts callbacks for sending and receiving bytes, allowing it to work with any transport mechanism. This design enables:

  • WebSocket transports (Tokio and WASM implementations provided)
  • TCP socket transports
  • In-memory transports (for testing)
  • Custom transports (by providing appropriate callbacks)

For implementation details of specific transports, see Transport Implementations.
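
The callback shape itself is easy to see with an in-memory stand-in for a socket; everything in this sketch besides the bytes-in, bytes-out contract is hypothetical:

```rust
// Sketch: an in-memory "transport" suitable for tests. Outbound frames are
// pushed into a shared queue instead of a socket; a reader drains them just
// as a WebSocket receive loop would.
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};

fn main() {
    let wire: Arc<Mutex<VecDeque<Vec<u8>>>> = Arc::new(Mutex::new(VecDeque::new()));

    // send_callback: Vec<u8> -> (). This is the hook a dispatcher would call
    // whenever it has serialized frames ready to transmit.
    let tx = Arc::clone(&wire);
    let send_callback = move |frame: Vec<u8>| {
        tx.lock().unwrap().push_back(frame);
    };

    send_callback(vec![0x01, 0x02, 0x03]);

    // Receive side: frames come back out in order, ready to be fed to the
    // dispatcher's reassembly logic.
    while let Some(frame) = wire.lock().unwrap().pop_front() {
        println!("received frame of {} bytes", frame.len());
    }
}
```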
