
Overview

Purpose and Scope

This document provides a high-level introduction to the rust-muxio repository, explaining its purpose as a toolkit for building high-performance, multiplexed RPC systems. It covers the foundational architecture, the layered design philosophy, and how the different components of the system work together to enable efficient, cross-platform communication.

For detailed information about specific subsystems, see:

Sources : README.md:1-18 Cargo.toml:9-17


What is Muxio?

Muxio is a layered transport toolkit for building multiplexed, binary RPC systems in Rust. It separates concerns across three distinct architectural layers:

| Layer | Primary Crates | Responsibility |
|---|---|---|
| Core Multiplexing | muxio | Binary framing protocol, stream multiplexing, non-async callback-driven primitives |
| RPC Framework | muxio-rpc-service, muxio-rpc-service-caller, muxio-rpc-service-endpoint | Service definitions, method ID generation, client/server abstractions, request correlation |
| Platform Extensions | muxio-tokio-rpc-server, muxio-tokio-rpc-client, muxio-wasm-rpc-client | Concrete implementations for native (Tokio) and web (WASM) environments |

The system is designed around two key principles:

  1. Runtime Agnosticism : The core library (muxio) uses a callback-driven, non-async model that works in any Rust environment—Tokio, WASM, single-threaded, or multi-threaded contexts.

  2. Layered Abstraction : Each layer provides a clean interface to the layer above, enabling developers to build custom transports or replace individual components without affecting the entire stack.

Sources : README.md:19-41 Cargo.toml:19-31


System Architecture

The following diagram illustrates the complete system architecture, showing how components in the workspace relate to each other:

Sources : Cargo.toml:19-31 README.md:23-41 Cargo.lock:830-954

graph TB
    subgraph Core["Core Foundation (muxio crate)"]
RpcDispatcher["RpcDispatcher\nRequest Correlation\nResponse Routing"]
RpcSession["RpcSession\nStream Multiplexing\nFrame Mux/Demux"]
FrameProtocol["Binary Framing\nRpcHeader + Payload Chunks"]
RpcDispatcher --> RpcSession
 
       RpcSession --> FrameProtocol
    end
    
    subgraph RPCFramework["RPC Framework Layer"]
RpcService["muxio-rpc-service\nRpcMethodPrebuffered Trait\nxxhash Method IDs"]
RpcCaller["muxio-rpc-service-caller\nRpcServiceCallerInterface\nGeneric Client Logic"]
RpcEndpoint["muxio-rpc-service-endpoint\nRpcServiceEndpointInterface\nHandler Registration"]
RpcCaller --> RpcService
 
       RpcEndpoint --> RpcService
    end
    
    subgraph NativeExt["Native Platform Extensions"]
TokioServer["muxio-tokio-rpc-server\nRpcServer + Axum\nWebSocket Transport"]
TokioClient["muxio-tokio-rpc-client\nRpcClient\ntokio-tungstenite"]
TokioServer --> RpcEndpoint
 
       TokioServer --> RpcCaller
 
       TokioClient --> RpcCaller
 
       TokioClient --> RpcEndpoint
    end
    
    subgraph WASMExt["WASM Platform Extensions"]
WasmClient["muxio-wasm-rpc-client\nRpcWasmClient\nwasm-bindgen Bridge"]
WasmClient --> RpcCaller
 
       WasmClient --> RpcEndpoint
    end
    
    subgraph AppLayer["Application Layer"]
ServiceDef["example-muxio-rpc-service-definition\nShared Service Contracts"]
ExampleApp["example-muxio-ws-rpc-app\nDemo Application"]
ServiceDef --> RpcService
 
       ExampleApp --> ServiceDef
 
       ExampleApp --> TokioServer
 
       ExampleApp --> TokioClient
    end
    
 
   RpcCaller --> RpcDispatcher
 
   RpcEndpoint --> RpcDispatcher
    
 
   TokioClient <-->|Binary Frames| TokioServer
 
   WasmClient <-->|Binary Frames| TokioServer

Core Components and Their Roles

muxio Core (muxio)

The foundational crate provides three critical components:

  • RpcSession : Manages stream multiplexing over a single connection. Allocates stream IDs, maintains per-stream decoders, and handles frame interleaving. Located at src/rpc/rpc_internals/rpc_session.rs:16-21

  • RpcDispatcher : Correlates RPC requests with responses using unique request IDs. Tracks pending requests in a HashMap and routes responses to the appropriate callback. Located at src/rpc/rpc_dispatcher.rs:19-30

  • Binary Framing Protocol : A schemaless, low-overhead protocol that chunks payloads into frames with minimal headers. Each frame contains a stream ID, flags, and a payload chunk. Defined at src/rpc/rpc_internals/rpc_session.rs:16-21

RPC Framework Layer

  • muxio-rpc-service : Defines the RpcMethodPrebuffered trait for creating shared service contracts. Uses xxhash-rust to generate compile-time method IDs from method names. Located at extensions/muxio-rpc-service/

  • muxio-rpc-service-caller : Provides the RpcServiceCallerInterface trait, which abstracts client-side RPC invocation. Any client implementation (Tokio, WASM, custom) must implement this interface. Located at extensions/muxio-rpc-service-caller/

  • muxio-rpc-service-endpoint : Provides the RpcServiceEndpointInterface trait for server-side handler registration and request dispatch. Located at extensions/muxio-rpc-service-endpoint/

Platform Extensions

  • muxio-tokio-rpc-server : Implements RpcServer using Axum for HTTP/WebSocket serving and tokio-tungstenite for WebSocket framing. Located at extensions/muxio-tokio-rpc-server/

  • muxio-tokio-rpc-client : Implements RpcClient with Arc-based lifecycle management and background tasks for connection handling. Located at extensions/muxio-tokio-rpc-client/

  • muxio-wasm-rpc-client : Implements RpcWasmClient using wasm-bindgen to bridge Rust to JavaScript. Communicates with browser WebSocket APIs via a static byte-passing interface. Located at extensions/muxio-wasm-rpc-client/

Sources : Cargo.toml:40-47 Cargo.lock:830-954 README.md:36-41


Key Design Characteristics

graph LR
    subgraph CorePrimitives["Core Primitives (Non-Async)"]
RpcDispatcher["RpcDispatcher\nBox<dyn Fn> Callbacks"]
RpcSession["RpcSession\nCallback-Driven Events"]
end
    
    subgraph TokioRuntime["Tokio Runtime"]
TokioClient["RpcClient\nArc + TokioMutex"]
TokioServer["RpcServer\ntokio::spawn Tasks"]
end
    
    subgraph WASMRuntime["WASM Runtime"]
WasmClient["RpcWasmClient\nthread_local RefCell"]
JSBridge["JavaScript Bridge\nstatic_muxio_write_bytes"]
end
    
    subgraph CustomRuntime["Custom Runtime"]
CustomImpl["Custom Implementation\nUser-Defined Threading"]
end
    
 
   CorePrimitives --> TokioRuntime
 
   CorePrimitives --> WASMRuntime
 
   CorePrimitives --> CustomRuntime
    
 
   TokioClient --> RpcDispatcher
 
   TokioServer --> RpcDispatcher
 
   WasmClient --> RpcDispatcher
 
   CustomImpl --> RpcDispatcher

Runtime-Agnostic Core

The following diagram shows how the non-async, callback-driven core enables cross-runtime compatibility:

Key Implementation Details :

  • Callback Closures : Both RpcDispatcher and RpcSession accept Box<dyn Fn(...)> callbacks rather than returning Futures, enabling use in any execution context.

  • No Async Core : The muxio crate itself has no async functions in its public API. All asynchrony is introduced by the platform extensions (muxio-tokio-rpc-client, etc.).

  • Flexible Synchronization : Platform extensions choose their own synchronization primitives—TokioMutex for Tokio, RefCell for WASM, or custom locking for other runtimes.

Sources : README.md:35-36 Cargo.lock:830-839


Type Safety Through Shared Definitions

Muxio enforces compile-time type safety by requiring shared service definitions between client and server:

| Component | Location | Purpose |
|---|---|---|
| Service Definition Crate | examples/example-muxio-rpc-service-definition | Defines RpcMethodPrebuffered implementations for each RPC method |
| Method ID Constants | Generated via xxhash-rust | Compile-time hashes of method names (e.g., Add::METHOD_ID) |
| Request/Response Types | Defined in service crate | Shared structs serialized with bitcode |

Both client and server depend on the same service definition crate. If a client attempts to call a method with the wrong parameter types, or if the server returns a response with the wrong type, the code will not compile.

Example from example-muxio-rpc-service-definition:
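Since the verbatim source is not reproduced here, the following is a minimal sketch under assumed names and signatures. The AddParams struct, the placeholder METHOD_ID value, and the helper functions are illustrative only; the real RpcMethodPrebuffered implementations in that crate may differ.

// Illustrative sketch of a shared, prebuffered method definition.
// The exact trait shape in `muxio-rpc-service` may differ.
use bitcode::{Decode, Encode};

#[derive(Encode, Decode, Debug, PartialEq)]
pub struct AddParams {
    pub a: f64,
    pub b: f64,
}

pub struct Add;

impl Add {
    // In the real crate this constant is derived from the method name via xxhash.
    pub const METHOD_ID: u64 = 0x0123_4567_89ab_cdef; // placeholder value

    pub fn encode_request(params: &AddParams) -> Vec<u8> {
        bitcode::encode(params)
    }

    pub fn decode_request(bytes: &[u8]) -> Result<AddParams, bitcode::Error> {
        bitcode::decode(bytes)
    }
}

fn main() {
    // Both client and server link this crate, so encode/decode always agree.
    let bytes = Add::encode_request(&AddParams { a: 2.0, b: 3.0 });
    let decoded = Add::decode_request(&bytes).expect("round-trip");
    assert_eq!(decoded, AddParams { a: 2.0, b: 3.0 });
}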

Sources : README.md:50-51 Cargo.lock:426-431 examples/example-muxio-rpc-service-definition/


graph TB
    subgraph ServerSide["Server-Side Deployment"]
RpcServer["RpcServer\nAxum + WebSocket"]
TokioRuntime["Tokio Runtime\nMulti-threaded Executor"]
RpcServer --> TokioRuntime
    end
    
    subgraph NativeClient["Native Client Deployment"]
RpcClient["RpcClient\ntokio-tungstenite"]
TokioClientRuntime["Tokio Runtime\nAsync Tasks"]
RpcClient --> TokioClientRuntime
    end
    
    subgraph WASMClient["WASM Browser Client"]
RpcWasmClient["RpcWasmClient\nwasm-bindgen"]
JSHost["JavaScript Host\nWebSocket APIs"]
RpcWasmClient --> JSHost
    end
    
    subgraph SharedContract["Shared Service Definition"]
ServiceDef["example-muxio-rpc-service-definition\nAdd, Mult, Echo Methods"]
end
    
 
   RpcClient <-->|Binary Protocol| RpcServer
 
   RpcWasmClient <-->|Binary Protocol| RpcServer
    
    RpcClient -.depends on.-> ServiceDef
    RpcWasmClient -.depends on.-> ServiceDef
    RpcServer -.depends on.-> ServiceDef

Deployment Configurations

Muxio supports multiple deployment configurations, all using the same binary protocol:

Key Characteristics :

  1. Same Service Definitions : All clients and servers depend on the same service definition crate, ensuring API consistency across platforms.

  2. Binary Protocol Compatibility : Native clients, WASM clients, and servers all communicate using identical binary framing and serialization (via bitcode).

  3. Platform-Specific Transports : Each platform extension provides its own WebSocket transport implementation—tokio-tungstenite for native, browser APIs for WASM.

Sources : README.md:66-161 Cargo.lock:898-954


Summary

Muxio provides a three-layer architecture for building efficient, cross-platform RPC systems:

  1. Core Layer (muxio): Binary framing, stream multiplexing, request/response correlation—all using callback-driven, non-async primitives.

  2. RPC Framework Layer : Service definitions with compile-time method IDs, generic caller/endpoint interfaces, and type-safe abstractions.

  3. Platform Extensions : Concrete implementations for Tokio (native) and WASM (browser) environments, both implementing the same abstract interfaces.

The system prioritizes low-latency communication (via compact binary protocol), type safety (via shared service definitions), and runtime flexibility (via callback-driven core).

For implementation details, see:

Sources : README.md:1-163 Cargo.toml:1-71 Cargo.lock:830-954


Core Concepts

This document explains the fundamental architectural concepts and design principles that underpin Muxio. It provides a conceptual overview of the binary protocol, multiplexing mechanisms, RPC abstraction layer, and cross-platform capabilities. For detailed implementation specifics of the core library, see Core Library (muxio). For RPC framework details, see RPC Framework. For transport implementation patterns, see Transport Implementations.

Scope and Purpose

Muxio is built on a layered architecture where each layer serves a distinct purpose:

  1. Binary Framing Layer : Low-level frame encoding/decoding with stream identification
  2. Stream Multiplexing Layer : Managing multiple concurrent streams over a single connection
  3. RPC Protocol Layer : Request/response semantics and correlation
  4. Transport Abstraction Layer : Runtime-agnostic interfaces for different environments

The design prioritizes transport agnosticism, cross-platform compatibility (native + WASM), and type safety through shared service definitions.

Sources: README.md:17-54

Binary Protocol Foundation

Frame-Based Communication

Muxio uses a compact binary framing protocol where all data is transmitted as discrete frames. The RpcFrame struct defines the wire format, and the RpcFrameType enum distinguishes frame purposes.

RpcFrame Structure:

| Field | Type | Purpose |
|---|---|---|
| stream_id | u32 | Identifies which logical stream the frame belongs to |
| frame_type | RpcFrameType | Enum indicating frame purpose |
| payload_bytes | Vec<u8> | Raw bytes specific to the frame type |

RpcFrameType Variants:

| Variant | Binary Value | Purpose |
|---|---|---|
| Header | 0x01 | Contains serialized RpcHeader structure |
| Data | 0x02 | Contains payload chunk (up to DEFAULT_MAX_CHUNK_SIZE) |
| End | 0x03 | Signals stream completion |
| Cancel | 0x04 | Aborts stream mid-transmission |

stateDiagram-v2
    [*] --> AwaitingHeader
    AwaitingHeader --> ReceivingPayload: RpcFrameType::Header
    AwaitingHeader --> Complete: RpcFrameType::End
    AwaitingHeader --> Cancelled: RpcFrameType::Cancel
    ReceivingPayload --> ReceivingPayload: RpcFrameType::Data
    ReceivingPayload --> Complete: RpcFrameType::End
    ReceivingPayload --> Cancelled: RpcFrameType::Cancel
    Complete --> [*]
    Cancelled --> [*]

This framing approach enables multiple independent data streams to be interleaved over a single physical connection without interference. The RpcStreamEncoder::write_frame() method emits frames, while RpcStreamDecoder::decode_frame() processes incoming frames.

Diagram: Frame Type State Machine
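To make the framing idea concrete, here is a small, self-contained sketch of encoding and decoding a single frame. The field order and sizes used here (little-endian stream_id, a one-byte frame type, an explicit length prefix) are assumptions for illustration and are not the actual muxio wire layout.

// Illustrative only: a toy frame layout, not the real muxio wire format.
#[derive(Debug, PartialEq)]
struct Frame {
    stream_id: u32,
    frame_type: u8, // e.g. 0x01 Header, 0x02 Data, 0x03 End, 0x04 Cancel
    payload: Vec<u8>,
}

fn encode_frame(f: &Frame) -> Vec<u8> {
    let mut out = Vec::with_capacity(9 + f.payload.len());
    out.extend_from_slice(&f.stream_id.to_le_bytes());
    out.push(f.frame_type);
    out.extend_from_slice(&(f.payload.len() as u32).to_le_bytes());
    out.extend_from_slice(&f.payload);
    out
}

fn decode_frame(bytes: &[u8]) -> Option<Frame> {
    if bytes.len() < 9 {
        return None; // incomplete header
    }
    let stream_id = u32::from_le_bytes(bytes[0..4].try_into().ok()?);
    let frame_type = bytes[4];
    let len = u32::from_le_bytes(bytes[5..9].try_into().ok()?) as usize;
    let payload = bytes.get(9..9 + len)?.to_vec();
    Some(Frame { stream_id, frame_type, payload })
}

fn main() {
    let frame = Frame { stream_id: 7, frame_type: 0x02, payload: b"chunk".to_vec() };
    let bytes = encode_frame(&frame);
    assert_eq!(decode_frame(&bytes), Some(frame));
}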

Sources: README.md:33-34 DRAFT.md:9-24 src/rpc/rpc_internals/rpc_stream_decoder.rs:1-100 src/rpc/rpc_internals/rpc_stream_encoder.rs:1-80

Non-Async Callback Design

The core multiplexing logic in muxio is implemented using synchronous control flow with callbacks rather than async/await. This design decision enables:

  • WASM Compatibility : Works in single-threaded JavaScript environments
  • Runtime Agnosticism : No dependency on Tokio, async-std, or any specific runtime
  • Flexible Integration : Can be wrapped with async interfaces when needed
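The sketch below shows the general shape of this callback-driven style; the type and method names are illustrative rather than the actual muxio API.

// Minimal sketch: the core never awaits or spawns tasks. The embedder feeds
// bytes in and receives events through a plain closure.
enum Event {
    Chunk(Vec<u8>),
    Done,
}

struct ByteReader {
    expected: usize,
    received: usize,
}

impl ByteReader {
    fn new(expected: usize) -> Self {
        Self { expected, received: 0 }
    }

    // Synchronous ingestion: can be driven by a Tokio task, a std::thread
    // loop, or a JavaScript WebSocket `onmessage` handler.
    fn read_bytes(&mut self, bytes: &[u8], mut on_event: impl FnMut(Event)) {
        self.received += bytes.len();
        on_event(Event::Chunk(bytes.to_vec()));
        if self.received >= self.expected {
            on_event(Event::Done);
        }
    }
}

fn main() {
    let mut reader = ByteReader::new(8);
    let mut total = 0;
    let mut handle = |event: Event| match event {
        Event::Chunk(chunk) => total += chunk.len(),
        Event::Done => println!("stream complete"),
    };
    reader.read_bytes(&[1, 2, 3, 4], &mut handle);
    reader.read_bytes(&[5, 6, 7, 8], &mut handle);
}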

Sources: README.md:35-36 DRAFT.md:48-52

Stream Multiplexing Architecture

RpcSession: Multi-Stream Management

The RpcSession component manages multiple concurrent logical streams over a single connection. It maintains per-stream state and ensures frames are correctly routed to their destination stream.

Key Methods:

| Method | Signature | Purpose |
|---|---|---|
| allocate_stream_id() | fn(&mut self) -> u32 | Assigns unique identifiers to new streams |
| init_request() | fn(&mut self, header: RpcHeader) -> u32 | Creates new stream, returns stream_id |
| read_bytes() | fn(&mut self, bytes: &[u8], callback: F) | Processes incoming frames, routes to decoders |
| remove_decoder() | fn(&mut self, stream_id: u32) | Cleans up completed/cancelled streams |

Internal State:

  • decoders: HashMap<u32, RpcStreamDecoder> - Per-stream decoder instances
  • next_stream_id: u32 - Monotonically increasing stream identifier counter
  • event_callback: F - Closure invoked for each RpcStreamEvent

Diagram: RpcSession Stream Routing

graph TB
    RawBytes["Raw Bytes from Transport"]
subgraph RpcSession["RpcSession"]
ReadBytes["read_bytes(bytes, callback)"]
DecoderMap["decoders: HashMap&lt;u32, RpcStreamDecoder&gt;"]
Decoder1["RpcStreamDecoder { stream_id: 1 }"]
Decoder2["RpcStreamDecoder { stream_id: 2 }"]
DecoderN["RpcStreamDecoder { stream_id: N }"]
RemoveDecoder["remove_decoder(stream_id)"]
end
    
    subgraph Events["RpcStreamEvent Variants"]
Header["Header { stream_id, header: RpcHeader }"]
Data["Data { stream_id, bytes: Vec&lt;u8&gt; }"]
End["End { stream_id }"]
Cancel["Cancel { stream_id }"]
end
    
 
   RawBytes --> ReadBytes
 
   ReadBytes --> DecoderMap
 
   DecoderMap --> Decoder1
 
   DecoderMap --> Decoder2
 
   DecoderMap --> DecoderN
    
 
   Decoder1 --> Header
 
   Decoder2 --> Data
 
   DecoderN --> End
 
   End --> RemoveDecoder
 
   Cancel --> RemoveDecoder

Each RpcStreamDecoder maintains its own state machine via the RpcStreamDecoderState enum:

  • AwaitingHeader - Waiting for initial RpcFrameType::Header frame
  • ReceivingPayload - Accumulating RpcFrameType::Data frames
  • Complete - Stream finalized after RpcFrameType::End
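The following simplified sketch shows the demultiplexing pattern described above (a map of per-stream decoders that is pruned as streams complete); it is not the actual RpcSession implementation.

// Illustrative per-stream demultiplexing: frames are routed to a decoder
// keyed by stream_id, and the decoder is dropped once its stream ends.
use std::collections::HashMap;

#[derive(Default)]
struct StreamDecoder {
    buffer: Vec<u8>,
}

struct Session {
    decoders: HashMap<u32, StreamDecoder>,
}

impl Session {
    fn new() -> Self {
        Self { decoders: HashMap::new() }
    }

    // Route one frame to its stream; returns the reassembled payload when
    // the stream completes (`end == true`).
    fn on_frame(&mut self, stream_id: u32, chunk: &[u8], end: bool) -> Option<Vec<u8>> {
        let decoder = self.decoders.entry(stream_id).or_default();
        decoder.buffer.extend_from_slice(chunk);
        if end {
            // Completed streams are removed so the map only tracks live streams.
            return self.decoders.remove(&stream_id).map(|d| d.buffer);
        }
        None
    }
}

fn main() {
    let mut session = Session::new();
    // Frames from two logical streams arrive interleaved over one connection.
    assert_eq!(session.on_frame(1, b"hel", false), None);
    assert_eq!(session.on_frame(2, b"wor", false), None);
    assert_eq!(session.on_frame(1, b"lo", true), Some(b"hello".to_vec()));
    assert_eq!(session.on_frame(2, b"ld", true), Some(b"world".to_vec()));
}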

Sources: README.md:29-30 src/rpc/rpc_internals/rpc_session.rs:1-150 src/rpc/rpc_internals/rpc_stream_decoder.rs:1-120

RpcDispatcher: Request/Response Correlation

The RpcDispatcher sits above RpcSession and provides RPC-specific semantics. It maintains a HashMap<u32, PendingRequest> to track in-flight requests.

Key Methods:

| Method | Signature | Purpose |
|---|---|---|
| call() | fn(&mut self, request: RpcRequest, on_response: F) | Initiates RPC call, registers response callback |
| respond() | fn(&mut self, response: RpcResponse) | Sends response back to caller |
| read_bytes() | fn(&mut self, bytes: &[u8]) | Processes incoming frames via RpcSession |
| cancel() | fn(&mut self, request_id: u32) | Aborts pending request |

Internal State:

  • session: RpcSession - Underlying multiplexing layer
  • pending_requests: HashMap<u32, PendingRequest> - Tracks active calls
  • next_request_id: u32 - Monotonically increasing request identifier
  • write_bytes_fn: F - Callback to emit frames to transport

Diagram: RpcDispatcher Call Flow with Code Entities

sequenceDiagram
    participant App as "Application Code"
    participant Dispatcher as "RpcDispatcher"
    participant Session as "RpcSession"
    participant Pending as "pending_requests: HashMap"
    participant Encoder as "RpcStreamEncoder"
    
    App->>Dispatcher: call(RpcRequest, on_response)
    Dispatcher->>Dispatcher: next_request_id++
    Dispatcher->>Pending: insert(request_id, PendingRequest)
    Dispatcher->>Session: init_request(RpcHeader)
    Session->>Session: allocate_stream_id()
    Session->>Encoder: write_frame(RpcFrameType::Header)
    Encoder->>Encoder: write_frame(RpcFrameType::Data)
    Encoder->>Encoder: write_frame(RpcFrameType::End)
    
    Note over App,Encoder: Response Path
    
    Session->>Dispatcher: RpcStreamEvent::Header
    Dispatcher->>Dispatcher: Buffer payload in PendingRequest
    Session->>Dispatcher: RpcStreamEvent::Data
    Session->>Dispatcher: RpcStreamEvent::End
    Dispatcher->>Pending: remove(request_id)
    Dispatcher->>App: on_response(RpcResponse)

The PendingRequest struct accumulates stream data:

struct PendingRequest {
    header: RpcHeader,
    accumulated_bytes: Vec<u8>,
    on_response: Box<dyn FnOnce(Result<RpcResponse>)>
}
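The correlation mechanism can be sketched in a few lines; the types below are simplified stand-ins for RpcDispatcher and its pending-request map, not the real API.

// Sketch of request/response correlation: each outbound call registers a
// one-shot callback under a fresh request_id; the matching response removes
// and invokes it.
use std::collections::HashMap;

type ResponseCallback = Box<dyn FnOnce(Vec<u8>)>;

struct Dispatcher {
    next_request_id: u32,
    pending_requests: HashMap<u32, ResponseCallback>,
}

impl Dispatcher {
    fn new() -> Self {
        Self { next_request_id: 0, pending_requests: HashMap::new() }
    }

    // Register a callback and return the request_id to embed in the header.
    fn call(&mut self, on_response: impl FnOnce(Vec<u8>) + 'static) -> u32 {
        let request_id = self.next_request_id;
        self.next_request_id += 1;
        self.pending_requests.insert(request_id, Box::new(on_response));
        request_id
    }

    // Route an incoming response to the callback registered for its request_id.
    fn on_response(&mut self, request_id: u32, payload: Vec<u8>) {
        if let Some(callback) = self.pending_requests.remove(&request_id) {
            callback(payload);
        }
    }
}

fn main() {
    let mut dispatcher = Dispatcher::new();
    let id = dispatcher.call(|bytes| println!("got {} response bytes", bytes.len()));
    dispatcher.on_response(id, vec![1, 2, 3]);
}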

Sources: README.md:29-30 src/rpc/rpc_dispatcher.rs:1-300 src/rpc/rpc_internals/rpc_stream_encoder.rs:1-100

RPC Protocol Layer

Request and Response Types

Muxio defines structured types for RPC communication. These types are serialized using bitcode for transmission.

RpcHeader Structure:

pub struct RpcHeader {
    pub msg_type: RpcMsgType,          // Call(0x01) or Response(0x02)
    pub request_id: u32,                // Correlation identifier
    pub method_id: u32,                 // xxhash of method name
    pub rpc_param_bytes: Option<Vec<u8>>, // Inline params (if small)
    pub metadata_bytes: Vec<u8>,        // Optional auxiliary data
}

RpcMsgType Enum:

| Variant | num_enum Value | Purpose |
|---|---|---|
| Call | 0x01 | Client-initiated request |
| Response | 0x02 | Server-generated response |

RpcRequest Structure:

pub struct RpcRequest {
    pub header: RpcHeader,              // Contains method_id, request_id
    pub param_bytes: Vec<u8>,           // Full serialized parameters
    pub param_stream_rx: Option<...>,   // Optional streaming channel
}

RpcResponse Structure:

pub struct RpcResponse {
    pub request_id: u32,                // Matches original request
    pub result_type: RpcResultType,     // Ok(0x01) or Err(0x02)
    pub result_bytes: Vec<u8>,          // Serialized return value or error
}

Diagram: Type Relationships

Sources: README.md:33-34 src/rpc/types/rpc_header.rs:1-50 src/rpc/types/rpc_request.rs:1-40 src/rpc/types/rpc_response.rs:1-40

Method Routing

RPC methods are identified by numeric method_id values generated at compile-time using xxhash::xxh32() of the method name. This enables:

  • Constant-time lookups : Direct HashMap<u32, Handler> access rather than string comparison
  • Type safety : Method IDs are generated from trait definitions shared between client and server
  • Compact representation : 4-byte method identifiers instead of variable-length strings
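A small sketch of hash-keyed dispatch follows. It uses std's DefaultHasher purely as a dependency-free stand-in for xxhash; the real project derives method IDs from method names at compile time.

// Sketch of hash-based method dispatch: handlers are looked up by numeric ID
// rather than by comparing method-name strings.
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

type Handler = Box<dyn Fn(&[u8]) -> Vec<u8>>;

fn method_id(name: &str) -> u64 {
    // Stand-in hash; the real crates use xxhash for stable, compact IDs.
    let mut hasher = std::collections::hash_map::DefaultHasher::new();
    name.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let mut handlers: HashMap<u64, Handler> = HashMap::new();

    // Register a handler under the numeric ID of its method name.
    handlers.insert(method_id("Add"), Box::new(|params| {
        // A real handler would decode params, compute, and encode a response.
        params.to_vec()
    }));

    // Dispatch is a constant-time map lookup by method_id; the method name
    // itself is never sent over the wire.
    let incoming_method_id = method_id("Add");
    if let Some(handler) = handlers.get(&incoming_method_id) {
        let response = handler(&[1, 2, 3]);
        assert_eq!(response, vec![1, 2, 3]);
    }
}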

RpcMethodPrebuffered Trait Definition:

Example Service Definition:

Method Dispatch Flow:

Sources: README.md:50-51 README.md:102-118 extensions/muxio-rpc-service/src/lib.rs:1-100

Transport Agnosticism

Generic Caller Interface

The RpcServiceCallerInterface trait abstracts the client-side transport layer, enabling the same application code to work with multiple implementations.

Trait Definition:

Concrete Implementations:

| Type | Module | Transport | Platform |
|---|---|---|---|
| RpcClient | muxio-tokio-rpc-client | tokio-tungstenite WebSocket | Native (Tokio) |
| RpcWasmClient | muxio-wasm-rpc-client | wasm-bindgen → JavaScript WebSocket | Browser (WASM) |

Diagram: Trait Implementation and Usage

This abstraction allows writing code once that compiles for multiple targets:

  • Native applications : Use RpcClient with tokio::spawn() background tasks
  • Browser/WASM : Use RpcWasmClient with static_muxio_write_bytes() JavaScript bridge
  • Custom transports : Implement the trait for specialized needs (e.g., IPC, embedded systems)

Sources: README.md:48-49 extensions/muxio-rpc-service-caller/src/caller_interface.rs:1-100 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:1-50 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:1-50

Generic Endpoint Interface

The RpcServiceEndpointInterface trait abstracts the server-side handler registration. Handlers are stored in a HashMap<u32, Handler> indexed by method_id.

Trait Definition:

RpcContext Structure:

Diagram: Handler Registration and Dispatch

Sources: README.md:98-119 extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:1-150

Cross-Platform Support

Single Codebase, Multiple Targets

Muxio enables true cross-platform RPC by separating concerns:

  1. Service Definition Layer : Platform-agnostic method definitions shared between client and server
  2. Transport Layer : Platform-specific implementations (Tokio, WASM) hidden behind traits
  3. Application Logic : Written once against the trait interface

The same Add::call(), Mult::call(), and Echo::call() method invocations work identically whether called from native code or WASM, as demonstrated in the example application.

Sources: README.md:48-49 README.md:64-161 Diagram 2 from high-level architecture

Type Safety Model

Compile-Time Contract Enforcement

Muxio enforces API contracts at compile-time through shared service definitions. Both client and server depend on the same crate containing method definitions, ensuring:

  • Parameter type mismatches are caught at compile-time
  • Return type mismatches are caught at compile-time
  • Method ID collisions are prevented by the build system
  • Serialization format consistency is guaranteed

The flow for type-safe RPC:

Any mismatch in the shared trait definition causes a compilation error, eliminating an entire class of runtime errors common in dynamically-typed RPC systems.

Sources: README.md:50-51 README.md:102-118

Serialization Layer

Muxio uses bitcode for efficient binary serialization, but the design is format-agnostic. The RpcMethodPrebuffered trait defines encode/decode methods, allowing alternative serialization libraries to be substituted if needed:

  • Bitcode : Default choice for compact binary format
  • Bincode : Alternative binary format
  • MessagePack : Cross-language compatibility
  • Custom formats : Full control over wire protocol

The key requirement is that both client and server use the same serialization implementation for a given method.
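As a concrete illustration of that requirement, the sketch below round-trips a parameter struct through a deliberately trivial hand-rolled codec; a real service definition would normally delegate these two functions to bitcode or another serialization library.

// Format-agnostic idea: a method only needs a pair of encode/decode functions
// that both sides agree on. Any scheme works as long as decode mirrors encode.
#[derive(Debug, PartialEq)]
struct EchoParams {
    message: String,
}

fn encode_request(params: &EchoParams) -> Vec<u8> {
    params.message.as_bytes().to_vec()
}

fn decode_request(bytes: &[u8]) -> Result<EchoParams, std::str::Utf8Error> {
    Ok(EchoParams { message: std::str::from_utf8(bytes)?.to_string() })
}

fn main() {
    let params = EchoParams { message: "hello".to_string() };
    let bytes = encode_request(&params);
    assert_eq!(decode_request(&bytes).unwrap(), params);
}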

Sources: README.md:33-34

Summary

Muxio’s core concepts revolve around:

  1. Binary Framing : Efficient, low-overhead frame-based protocol
  2. Stream Multiplexing : Multiple concurrent streams via RpcSession and per-stream decoders
  3. Request Correlation : Matching responses to requests via RpcDispatcher
  4. Transport Abstraction : Generic traits (RpcServiceCallerInterface, RpcServiceEndpointInterface) enable multiple implementations
  5. Non-Async Core : Callback-driven design supports WASM and multiple runtimes
  6. Type Safety : Shared service definitions provide compile-time contract enforcement
  7. Cross-Platform : Single codebase runs on native (Tokio) and browser (WASM) clients

These concepts are elaborated in subsequent sections: Design Philosophy covers the reasoning behind these choices, and Layered Architecture provides detailed implementation patterns.

Sources: README.md:17-54 DRAFT.md:9-52 All high-level architecture diagrams


Design Philosophy

Purpose and Scope

This document describes the fundamental design principles that guide the architecture and implementation of the rust-muxio framework. It covers four core tenets: runtime-agnostic architecture through a non-async callback-driven model, binary protocol with schemaless RPC design, cross-platform compatibility spanning native and WASM environments, and bidirectional symmetric communication patterns.

For details on how these principles manifest in the layered architecture, see Layered Architecture. For concrete platform implementations, see Platform Implementations.


Runtime Agnosticism: The Non-Async Core

The Callback-Driven Model

The muxio core library is deliberately implemented without async/await primitives. Instead, it uses a synchronous control flow with callback functions to handle events. This architectural choice enables the same core logic to function across fundamentally different runtime environments without modification.

graph TB
    subgraph CoreLibrary["muxio Core Library"]
RpcDispatcher["RpcDispatcher"]
RpcSession["RpcSession"]
FrameDecoder["FrameDecoder"]
end
    
    subgraph Callbacks["Callback Interfaces"]
OnRead["on_read_bytes()\nInvoked when data arrives"]
OnFrame["Frame callbacks\nInvoked per decoded frame"]
OnResponse["Response handlers\nInvoked on RPC completion"]
end
    
    subgraph Runtimes["Compatible Runtimes"]
Tokio["Tokio Multi-threaded\nasync runtime"]
StdThread["std::thread\nSingle-threaded or custom"]
WASM["WASM Browser\nJavaScript event loop"]
end
    
 
   RpcDispatcher -->|registers| OnResponse
 
   RpcSession -->|registers| OnFrame
 
   FrameDecoder -->|invokes| OnFrame
    
 
   Tokio -->|drives| CoreLibrary
 
   StdThread -->|drives| CoreLibrary
 
   WASM -->|drives| CoreLibrary
    
 
   CoreLibrary -->|invokes| Callbacks

The key insight is that the core library never blocks, never spawns tasks, and never assumes an async executor. Instead:

  • Data ingestion occurs through explicit read_bytes() calls src/rpc/rpc_dispatcher.rs:265-389
  • Event processing happens synchronously within the call stack
  • Downstream actions are delegated via callbacks, allowing the caller to decide whether to spawn async tasks, queue work, or handle synchronously
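A common integration pattern, used by the Tokio extensions, is to bridge a registered callback into a Future. The sketch below shows the idea with a tokio oneshot channel (assuming the tokio crate with its rt and macros features); the function names are illustrative, not the muxio API.

// Wrapping the callback-driven core in an async facade: a oneshot channel
// turns "invoke this callback when the response arrives" into an awaitable.
use tokio::sync::oneshot;

// Stand-in for a synchronous, callback-based dispatcher call.
fn call_with_callback(payload: Vec<u8>, on_response: impl FnOnce(Vec<u8>) + Send + 'static) {
    // In a real system the response arrives later from the transport;
    // here we echo immediately to keep the sketch self-contained.
    on_response(payload);
}

// Async facade: registers a callback that fulfills a oneshot sender.
async fn call_async(payload: Vec<u8>) -> Vec<u8> {
    let (tx, rx) = oneshot::channel();
    call_with_callback(payload, move |response| {
        let _ = tx.send(response);
    });
    rx.await.expect("response callback dropped")
}

#[tokio::main]
async fn main() {
    let response = call_async(vec![1, 2, 3]).await;
    assert_eq!(response, vec![1, 2, 3]);
}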

Sources: DRAFT.md:48-52 README.md:35-36

Benefits of Runtime Agnosticism

| Benefit | Description | Code Reference |
|---|---|---|
| WASM Compatibility | No reliance on thread-based async executors that don't exist in browser environments | extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:15-140 |
| Flexible Integration | Core can be wrapped in TokioMutex, StdMutex, or RefCell depending on context | extensions/muxio-tokio-rpc-client/src/rpc_client.rs:30-38 |
| Deterministic Execution | No hidden task spawning or scheduling; control flow is explicit | src/rpc/rpc_dispatcher.rs:1-47 |
| Testing Simplicity | Can test without async harness; unit tests run synchronously | src/rpc/rpc_dispatcher.rs:391-490 |

Sources: DRAFT.md:48-52 README.md:35-36 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:1-32


Binary Protocol and Schemaless Design

Low-Overhead Binary Framing

The muxio protocol operates entirely on raw byte sequences. Unlike text-based protocols (JSON, XML), every layer—from frame headers to RPC payloads—is transmitted as compact binary data. This design decision prioritizes performance and minimizes CPU overhead.

Sources: README.md:33-34 README.md:46-47

graph LR
    subgraph Application["Application Layer"]
RustStruct["Rust Struct\nAdd{a: f64, b: f64}"]
end
    
    subgraph Serialization["Serialization Layer"]
Bitcode["bitcode::encode()\nCompact binary"]
end
    
    subgraph RPC["RPC Protocol Layer"]
RpcRequest["RpcRequest\nmethod_id: u64\nparams: Vec&lt;u8&gt;"]
MethodID["xxhash(method_name)\nCompile-time constant"]
end
    
    subgraph Framing["Framing Layer"]
FrameHeader["Frame Header\nstream_id: u64\nflags: u8\npayload_len: u32"]
FramePayload["Binary Payload\nRaw bytes"]
end
    
 
   RustStruct -->|serialize| Bitcode
 
   Bitcode -->|Vec&lt;u8&gt;| RpcRequest
 
   RpcRequest -->|hash name| MethodID
 
   RpcRequest -->|encode| FramePayload
 
   FramePayload -->|wrap| FrameHeader

Schemaless RPC with Type Safety

The RPC layer is “schemaless” in that the protocol itself makes no assumptions about payload structure. Method IDs are 64-bit hashes computed at compile time via xxhash, and payloads are opaque byte vectors. However, type safety is enforced through shared service definitions :

This design achieves:

  • Compile-time verification : Mismatched types between client and server result in compilation errors, not runtime failures
  • Zero schema overhead : No runtime schema validation or parsing
  • Flexibility : Different services can use different serialization formats (bitcode, bincode, protobuf, etc.) as long as both sides agree

Sources: README.md:50-51 extensions/muxio-rpc-service/src/prebuffered/mod.rs:1-50

Performance Characteristics

| Aspect | Text-Based (JSON) | Binary (muxio) |
|---|---|---|
| Parsing Overhead | Parse UTF-8, validate syntax, construct AST | Direct byte copying, minimal validation |
| Payload Size | Verbose keys, quoted strings, escape sequences | Compact type encodings, no metadata |
| CPU Usage | High for serialization/deserialization | Low, mainly memcpy operations |
| Latency | Higher due to parsing | Lower due to binary processing |

Sources: README.md:46-47 DRAFT.md:11


Cross-Platform Compatibility

Platform-Specific Extensions on a Shared Core

The muxio architecture separates platform-agnostic logic from platform-specific implementations. The core library (muxio) contains all RPC logic, multiplexing, and framing. Platform extensions provide transport bindings:

Sources: README.md:37-41 README.md:48-49

graph TB
    subgraph Shared["Platform-Agnostic Core"]
MuxioCore["muxio crate\nRpcDispatcher, RpcSession, FrameDecoder"]
RpcService["muxio-rpc-service\nRpcMethodPrebuffered trait"]
RpcCaller["muxio-rpc-service-caller\nRpcServiceCallerInterface"]
RpcEndpoint["muxio-rpc-service-endpoint\nRpcServiceEndpointInterface"]
end
    
    subgraph Native["Native Platform (Tokio)"]
TokioClient["muxio-tokio-rpc-client\nRpcClient\ntokio-tungstenite"]
TokioServer["muxio-tokio-rpc-server\nRpcServer\naxum + tokio-tungstenite"]
end
    
    subgraph Browser["Browser Platform (WASM)"]
WasmClient["muxio-wasm-rpc-client\nRpcWasmClient\nwasm-bindgen + js-sys"]
JSBridge["JavaScript Bridge\nstatic_muxio_write_bytes()"]
end
    
    subgraph Application["Application Code"]
ServiceDef["example-muxio-rpc-service-definition\nShared service contracts"]
AppLogic["Application Logic\nSame code for both platforms"]
end
    
 
   MuxioCore --> RpcCaller
 
   MuxioCore --> RpcEndpoint
 
   RpcService --> RpcCaller
 
   RpcService --> RpcEndpoint
    
 
   RpcCaller --> TokioClient
 
   RpcCaller --> WasmClient
 
   RpcEndpoint --> TokioServer
    
 
   TokioClient --> AppLogic
 
   WasmClient --> AppLogic
 
   ServiceDef --> AppLogic
 
   ServiceDef --> RpcService
    
 
   WasmClient --> JSBridge

Write Once, Deploy Everywhere

Because the RpcServiceCallerInterface extensions/muxio-rpc-service-caller/src/caller_interface.rs:1-97 abstracts the underlying transport, application code that calls RPC methods is platform-independent :
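The sketch below illustrates the idea with a stand-in trait. RpcCaller, its invoke method, the placeholder method ID, and the LoopbackCaller stub are all assumptions for illustration; the real RpcServiceCallerInterface and the Add::call(..) helpers differ in detail. It assumes Rust 1.75+ (async fn in traits) and tokio for the executor.

// Application logic written once against a trait: it compiles unchanged for
// any caller implementation (Tokio WebSocket client, WASM client, test stub).
trait RpcCaller {
    async fn invoke(&self, method_id: u64, params: Vec<u8>) -> Vec<u8>;
}

async fn add_numbers<C: RpcCaller>(client: &C, a: f64, b: f64) -> f64 {
    // Encode params, invoke by method ID, decode the response.
    let mut params = Vec::with_capacity(16);
    params.extend_from_slice(&a.to_le_bytes());
    params.extend_from_slice(&b.to_le_bytes());
    let response = client.invoke(0xADD, params).await; // placeholder method ID
    f64::from_le_bytes(response[..8].try_into().expect("8-byte result"))
}

// A trivial in-process stub standing in for a real transport-backed client.
struct LoopbackCaller;

impl RpcCaller for LoopbackCaller {
    async fn invoke(&self, _method_id: u64, params: Vec<u8>) -> Vec<u8> {
        let a = f64::from_le_bytes(params[0..8].try_into().unwrap());
        let b = f64::from_le_bytes(params[8..16].try_into().unwrap());
        (a + b).to_le_bytes().to_vec()
    }
}

#[tokio::main]
async fn main() {
    let sum = add_numbers(&LoopbackCaller, 2.0, 3.0).await;
    assert_eq!(sum, 5.0);
}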

The same pattern applies to server-side handlers via RpcServiceEndpointInterface extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:1-137. Handlers registered on the endpoint work identically whether the server is Tokio-based or hypothetically implemented for another runtime.

Sources: README.md:48-49 Cargo.toml:20-41

WASM-Specific Considerations

The WASM client (muxio-wasm-rpc-client) demonstrates how the callback-driven core enables browser integration:

  1. No native async runtime : WASM doesn’t have threads or a native async executor. The callback model works directly with JavaScript’s event loop.
  2. Static singleton pattern : Uses thread_local! with RefCell to maintain a global client reference extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:10-12
  3. JavaScript bridge : Exposes a single function static_muxio_write_bytes() that JavaScript calls when WebSocket data arrives extensions/muxio-wasm-rpc-client/src/static_lib/mod.rs:63-75
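The singleton pattern can be sketched without the wasm-bindgen plumbing; the type and function names below are simplified stand-ins for RpcWasmClient and static_muxio_write_bytes, and the real entry point is additionally exported to JavaScript via #[wasm_bindgen].

// Thread-local singleton pattern for the WASM bridge (simplified stand-ins).
use std::cell::RefCell;

struct WasmClient {
    received: Vec<u8>,
}

impl WasmClient {
    fn read_bytes(&mut self, bytes: &[u8]) {
        // In the real client this feeds the dispatcher; here we just buffer.
        self.received.extend_from_slice(bytes);
    }
}

thread_local! {
    // Browser WASM is single-threaded, so a thread_local RefCell is enough
    // to hold the one global client instance.
    static STATIC_CLIENT: RefCell<Option<WasmClient>> = RefCell::new(None);
}

// Entry point JavaScript would call when WebSocket data arrives
// (exported via wasm-bindgen in the real crate).
pub fn static_write_bytes(bytes: &[u8]) {
    STATIC_CLIENT.with(|cell| {
        if let Some(client) = cell.borrow_mut().as_mut() {
            client.read_bytes(bytes);
        }
    });
}

fn main() {
    STATIC_CLIENT.with(|cell| *cell.borrow_mut() = Some(WasmClient { received: Vec::new() }));
    static_write_bytes(&[1, 2, 3]);
    STATIC_CLIENT.with(|cell| {
        assert_eq!(cell.borrow().as_ref().unwrap().received, vec![1, 2, 3]);
    });
}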

Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:1-140 extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:1-49


Bidirectional and Symmetric Communication

Client-Server Symmetry

Unlike traditional RPC systems where clients can only call servers, muxio treats both sides symmetrically. Every connection pairs a caller interface (for issuing outbound requests) with an endpoint interface (for handling inbound requests), on both the client and the server.

This enables:

  • Server-initiated calls : Servers can invoke methods on connected clients
  • Bidirectional streaming : Either side can send data streams
  • Event-driven architectures : Push notifications, real-time updates, etc.

Implementation Details:

Sources: DRAFT.md:25 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:30-38 extensions/muxio-tokio-rpc-server/src/rpc_server.rs:41-59


Streaming, Interleaving, and Cancellation

Support for Large Payloads

The multiplexing layer src/rpc/rpc_internals/rpc_session.rs:16-21 enables streaming operations:

Sources: src/rpc/rpc_internals/rpc_session.rs:16-21 src/rpc/rpc_internals/constants.rs:1-5 DRAFT.md:19-21

Cancellation Support

The system supports mid-stream cancellation:

  1. Client-side : Cancel by dropping the response future or explicitly signaling cancellation (implementation-dependent)
  2. Protocol-level : Stream decoders are removed when End or Error events are received src/rpc/rpc_internals/rpc_session.rs:52-120
  3. Resource cleanup : Pending requests are cleared from the dispatcher’s hashmap when completed or errored src/rpc/rpc_dispatcher.rs:265-389

Sources: DRAFT.md:21 src/rpc/rpc_dispatcher.rs:265-389 src/rpc/rpc_internals/rpc_session.rs:52-120


Design Trade-offs and Prioritization

What muxio Prioritizes

| Priority | Rationale | Implementation |
|---|---|---|
| Performance | Low latency and minimal overhead for high-throughput applications | Binary protocol, zero-copy where possible, compact serialization |
| Cross-platform | Same code works on native and WASM | Non-async core, callback-driven model |
| Type safety | Catch errors at compile time, not runtime | Shared service definitions, trait-based contracts |
| Simplicity | Easy to reason about, minimal magic | Explicit control flow, no hidden task spawning |

What muxio Does Not Prioritize

  • Human readability of wire protocol : The binary format is not human-inspectable (unlike JSON). Debugging requires tooling.
  • Built-in authentication/encryption : Transport security (TLS, authentication) is delegated to the transport layer (e.g., wss:// WebSockets).
  • Schema evolution : No built-in versioning. Breaking changes require careful service definition updates.
  • Automatic code generation : Service definitions are written manually using traits. No macros or code generation (by design, for transparency).

Sources: README.md:42-53 DRAFT.md:9-23


Summary

The muxio design philosophy centers on four pillars:

  1. Runtime agnosticism via non-async, callback-driven primitives
  2. Binary protocol with schemaless flexibility and compile-time type safety
  3. Cross-platform compatibility spanning native (Tokio) and WASM environments
  4. Bidirectional symmetry enabling client-server parity and streaming operations

These choices enable muxio to serve as a high-performance, flexible foundation for distributed systems that require low latency, cross-platform deployment, and type-safe communication.

Sources: README.md:42-53 DRAFT.md:9-52 Cargo.toml:20-41


Workspace Structure

Purpose and Scope

This document details the Cargo workspace organization of the rust-muxio repository. It catalogs all workspace member crates, their locations in the directory tree, their roles within the overall system architecture, and their dependency relationships. For information about the design philosophy and layered architecture principles, see Design Philosophy and Layered Architecture.


Workspace Overview

The rust-muxio repository is organized as a Cargo workspace containing 11 member crates. The workspace is configured with resolver version 2 and defines shared metadata (version, authors, license, repository) inherited by all member crates.

Sources:

graph TB
    subgraph "Root Directory"
        ROOT["muxio (Root Crate)"]
end
    
    subgraph "extensions/"
        EXT_TEST["muxio-ext-test"]
RPC_SERVICE["muxio-rpc-service"]
RPC_CALLER["muxio-rpc-service-caller"]
RPC_ENDPOINT["muxio-rpc-service-endpoint"]
TOKIO_SERVER["muxio-tokio-rpc-server"]
TOKIO_CLIENT["muxio-tokio-rpc-client"]
WASM_CLIENT["muxio-wasm-rpc-client"]
end
    
    subgraph "examples/"
        EXAMPLE_APP["example-muxio-ws-rpc-app"]
EXAMPLE_DEF["example-muxio-rpc-service-definition"]
end
    
 
   ROOT -->|extends| RPC_SERVICE
 
   RPC_SERVICE -->|extends| RPC_CALLER
 
   RPC_SERVICE -->|extends| RPC_ENDPOINT
 
   RPC_CALLER -->|implements| TOKIO_CLIENT
 
   RPC_CALLER -->|implements| WASM_CLIENT
 
   RPC_ENDPOINT -->|implements| TOKIO_SERVER
 
   EXAMPLE_DEF -->|uses| RPC_SERVICE
 
   EXAMPLE_APP -->|demonstrates| EXAMPLE_DEF

Workspace Member Listing

The workspace members are declared in the root Cargo.toml and organized into three categories:

| Crate Name | Directory Path | Category | Primary Purpose |
|---|---|---|---|
| muxio | . | Core | Binary framing protocol and stream multiplexing |
| muxio-rpc-service | extensions/muxio-rpc-service | RPC Framework | Service trait definitions and method ID generation |
| muxio-rpc-service-caller | extensions/muxio-rpc-service-caller | RPC Framework | Client-side RPC invocation interface |
| muxio-rpc-service-endpoint | extensions/muxio-rpc-service-endpoint | RPC Framework | Server-side handler registration and dispatch |
| muxio-tokio-rpc-server | extensions/muxio-tokio-rpc-server | Platform Extension | Tokio-based WebSocket RPC server |
| muxio-tokio-rpc-client | extensions/muxio-tokio-rpc-client | Platform Extension | Tokio-based WebSocket RPC client |
| muxio-wasm-rpc-client | extensions/muxio-wasm-rpc-client | Platform Extension | WASM browser-based RPC client |
| muxio-ext-test | extensions/muxio-ext-test | Testing | Integration test suite |
| example-muxio-rpc-service-definition | examples/example-muxio-rpc-service-definition | Example | Shared service definition for examples |
| example-muxio-ws-rpc-app | examples/example-muxio-ws-rpc-app | Example | Demonstration application |

Sources:


Core Library: muxio

The root crate muxio (located at repository root .) provides the foundational binary framing protocol and stream multiplexing primitives. This crate is runtime-agnostic and has minimal dependencies.

Key Components:

  • RpcDispatcher - Request correlation and response routing
  • RpcSession - Stream multiplexing and per-stream decoders
  • RpcStreamEncoder / RpcStreamDecoder - Frame encoding/decoding
  • RpcRequest / RpcResponse / RpcHeader - Core message types

Direct Dependencies:

  • bitcode - Binary serialization
  • chrono - Timestamp generation
  • once_cell - Lazy static initialization
  • tracing - Structured logging

Dev Dependencies:

  • rand - Random data generation for tests
  • tokio - Async runtime for tests

Sources:


RPC Framework Extensions

muxio-rpc-service

Located at extensions/muxio-rpc-service, this crate provides trait definitions for RPC services and compile-time method ID generation.

Key Exports:

  • RpcMethodPrebuffered trait - Defines prebuffered RPC method signatures
  • DynamicChannelReceiver / DynamicChannelSender - Channel abstractions for streaming
  • Method ID constants via xxhash-rust

Dependencies:

  • muxio - Core framing and session management
  • bitcode - Parameter/response serialization
  • xxhash-rust - Compile-time method ID generation
  • num_enum - Message type discrimination
  • async-trait - Async trait support
  • futures - Channel and stream utilities

Sources:

muxio-rpc-service-caller

Located at extensions/muxio-rpc-service-caller, this crate defines the RpcServiceCallerInterface trait for platform-agnostic client-side RPC invocation.

Key Exports:

  • RpcServiceCallerInterface trait - Abstract client interface
  • Helper functions for invoking prebuffered and streaming methods

Dependencies:

  • muxio - Core session and dispatcher
  • muxio-rpc-service - Service definitions
  • async-trait - Trait async support
  • futures - Future combinators
  • tracing - Instrumentation

Sources:

muxio-rpc-service-endpoint

Located at extensions/muxio-rpc-service-endpoint, this crate defines the RpcServiceEndpointInterface trait for platform-agnostic server-side handler registration and request processing.

Key Exports:

  • RpcServiceEndpointInterface trait - Abstract server interface
  • Handler registration and dispatch utilities

Dependencies:

  • muxio - Core dispatcher and session
  • muxio-rpc-service - Service trait definitions
  • muxio-rpc-service-caller - For bidirectional RPC (server-to-client calls)
  • bitcode - Request/response deserialization
  • async-trait - Trait async support
  • futures - Channel management

Sources:


Platform-Specific Extensions

muxio-tokio-rpc-server

Located at extensions/muxio-tokio-rpc-server, this crate provides a Tokio-based WebSocket RPC server implementation using Axum and tokio-tungstenite.

Key Exports:

  • RpcServer struct - Main server implementation
  • Axum WebSocket handler integration
  • Connection lifecycle management

Dependencies:

  • muxio - Core session primitives
  • muxio-rpc-service - Service definitions
  • muxio-rpc-service-caller - For bidirectional communication
  • muxio-rpc-service-endpoint - Server endpoint interface
  • axum - HTTP/WebSocket framework
  • tokio - Async runtime
  • tokio-tungstenite - WebSocket transport
  • bytes - Byte buffer utilities
  • futures-util - Stream combinators
  • async-trait - Async trait implementations

Sources:

muxio-tokio-rpc-client

Located at extensions/muxio-tokio-rpc-client, this crate provides a Tokio-based WebSocket RPC client with Arc-based lifecycle management and background task coordination.

Key Exports:

  • RpcClient struct - Main client implementation
  • Connection state tracking with RpcClientConnectionState
  • Arc-based shared ownership model

Dependencies:

  • muxio - Core dispatcher and session
  • muxio-rpc-service - Service definitions
  • muxio-rpc-service-caller - Client interface implementation
  • muxio-rpc-service-endpoint - For bidirectional communication
  • axum - (Used for shared types)
  • tokio - Async runtime
  • tokio-tungstenite - WebSocket transport
  • bytes - Byte buffer utilities
  • futures / futures-util - Async combinators
  • async-trait - Trait async support

Sources:

muxio-wasm-rpc-client

Located at extensions/muxio-wasm-rpc-client, this crate provides a WASM-compatible RPC client for browser environments using wasm-bindgen and JavaScript interop.

Key Exports:

  • RpcWasmClient struct - WASM client implementation
  • MUXIO_STATIC_RPC_CLIENT_REF - Thread-local static client reference
  • JavaScript bridge functions (static_muxio_write_bytes, etc.)

Dependencies:

  • muxio - Core framing and session
  • muxio-rpc-service - Service definitions
  • muxio-rpc-service-caller - Client interface
  • muxio-rpc-service-endpoint - For bidirectional communication
  • wasm-bindgen - JavaScript FFI
  • js-sys - JavaScript API bindings
  • wasm-bindgen-futures - Async/await in WASM
  • futures / futures-util - Future utilities
  • async-trait - Trait support

Sources:


Testing Infrastructure

muxio-ext-test

Located at extensions/muxio-ext-test, this integration test crate validates end-to-end functionality across client and server implementations.

Test Coverage:

  • Native client to native server communication
  • Service definition validation
  • Error handling and edge cases

Dependencies:

  • muxio-rpc-service - Service trait testing
  • muxio-rpc-service-caller - Client interface testing
  • muxio-rpc-service-endpoint - Server endpoint testing
  • muxio-tokio-rpc-client - Client implementation testing
  • muxio-tokio-rpc-server - Server implementation testing
  • example-muxio-rpc-service-definition - Test service definitions
  • tokio - Async test runtime
  • tracing / tracing-subscriber - Test instrumentation
  • bytemuck - Binary data utilities

Sources:


Example Applications

example-muxio-rpc-service-definition

Located at examples/example-muxio-rpc-service-definition, this crate defines shared RPC service contracts used across example applications.

Service Definitions:

  • Basic arithmetic operations (Add, Multiply)
  • Echo service
  • Demonstrates RpcMethodPrebuffered trait implementation

Dependencies:

  • muxio-rpc-service - Service trait definitions
  • bitcode - Serialization for parameters and responses

Sources:

example-muxio-ws-rpc-app

Located at examples/example-muxio-ws-rpc-app, this demonstration application shows complete client-server setup with WebSocket transport.

Demonstrates:

  • Server instantiation with RpcServer
  • Client connection with RpcClient
  • Service handler registration
  • RPC method invocation
  • Benchmarking with criterion

Dependencies:

  • example-muxio-rpc-service-definition - Shared service contracts
  • muxio - Core primitives
  • muxio-rpc-service-caller - Client calling
  • muxio-tokio-rpc-client - Client implementation
  • muxio-tokio-rpc-server - Server implementation
  • tokio - Async runtime
  • async-trait - Handler trait implementation
  • criterion - Performance benchmarking
  • futures - Async utilities
  • tracing / tracing-subscriber - Application logging
  • doc-comment - Documentation tests

Sources:


Workspace Dependency Graph

Sources:


Directory Structure to Code Entity Mapping

Sources:


Shared Workspace Configuration

All workspace member crates inherit common metadata from the workspace-level configuration:

| Property | Value | Purpose |
|---|---|---|
| version | 0.10.0-alpha | Synchronized versioning across all crates |
| edition | 2024 | Rust edition (preview edition) |
| authors | Jeremy Harris | Package authorship |
| repository | https://github.com/jzombie/rust-muxio | Source location |
| license | Apache-2.0 | Licensing terms |
| publish | true | crates.io publication enabled |
| resolver | 2 | Cargo feature resolver version |

Workspace-wide Third-Party Dependencies:

The workspace defines shared third-party dependency versions to ensure consistency:

| Dependency | Version | Used By |
|---|---|---|
| async-trait | 0.1.88 | RPC service traits |
| axum | 0.8.4 | Server framework |
| bitcode | 0.6.6 | Serialization |
| tokio | 1.45.1 | Async runtime |
| tokio-tungstenite | 0.26.2 | WebSocket transport |
| tracing | 0.1.41 | Logging infrastructure |
| tracing-subscriber | 0.3.20 | Log output formatting |
| xxhash-rust | 0.8.15 | Method ID hashing |
| num_enum | 0.7.3 | Enum discriminants |
| futures | 0.3.31 | Async utilities |

Sources:


Crate Size and Complexity Metrics

Based on Cargo.lock dependency counts:

| Crate | Direct Dependencies | Complexity |
|---|---|---|
| muxio | 5 | Low - Core primitives only |
| muxio-rpc-service | 6 | Medium - Trait definitions and hashing |
| muxio-rpc-service-caller | 5 | Low - Interface abstraction |
| muxio-rpc-service-endpoint | 6 | Low - Interface abstraction |
| muxio-tokio-rpc-server | 10 | High - Full server stack |
| muxio-tokio-rpc-client | 11 | High - Full client stack |
| muxio-wasm-rpc-client | 11 | High - WASM bridge complexity |
| muxio-ext-test | 8 | Medium - Integration testing |
| example-muxio-rpc-service-definition | 2 | Low - Simple definitions |
| example-muxio-ws-rpc-app | 9 | Medium - Demonstration code |

Sources:


Layered Architecture

Purpose and Scope

This document explains the layered transport kit design of the muxio system, describing how each layer builds upon the previous one to provide progressively higher-level abstractions. The architecture separates concerns into six distinct layers: binary framing, stream multiplexing, RPC protocol, RPC abstractions, service definitions, and platform extensions.

For information about the design principles that motivated this architecture, see Design Philosophy. For detailed implementation of individual layers, see Core Library (muxio) and RPC Framework.

Architectural Overview

The muxio system implements a layered transport kit where each layer has a well-defined responsibility and interacts only with adjacent layers. This separation enables runtime-agnostic operation and cross-platform deployment.

Sources:

graph TB
    subgraph "Layer 6: Application Code"
        APP["User Application\nBusiness Logic"]
end
    
    subgraph "Layer 5: Service Definition Layer"
        SD["RpcMethodPrebuffered Traits\nCompile-Time Method IDs\nShared Type Contracts"]
end
    
    subgraph "Layer 4: RPC Abstraction Layer"
        CALLER["RpcServiceCallerInterface\nPlatform-Agnostic Client API"]
ENDPOINT["RpcServiceEndpointInterface\nPlatform-Agnostic Server API"]
end
    
    subgraph "Layer 3: RPC Protocol Layer"
        DISPATCHER["RpcDispatcher\nRequest Correlation\nResponse Routing"]
end
    
    subgraph "Layer 2: Stream Multiplexing Layer"
        SESSION["RpcSession\nStream ID Allocation\nPer-Stream Decoders"]
end
    
    subgraph "Layer 1: Binary Framing Layer"
        ENCODER["RpcStreamEncoder\nFrame Construction"]
DECODER["RpcStreamDecoder\nFrame Reconstruction"]
end
    
    subgraph "Layer 0: Platform Extensions"
        TOKIO_CLIENT["RpcClient\ntokio + tokio-tungstenite"]
TOKIO_SERVER["RpcServer\naxum + tokio-tungstenite"]
WASM_CLIENT["RpcWasmClient\nwasm-bindgen + js-sys"]
end
    
 
   APP --> SD
 
   SD --> CALLER
 
   SD --> ENDPOINT
 
   CALLER --> DISPATCHER
 
   ENDPOINT --> DISPATCHER
 
   DISPATCHER --> SESSION
 
   SESSION --> ENCODER
 
   SESSION --> DECODER
 
   ENCODER --> TOKIO_CLIENT
 
   ENCODER --> TOKIO_SERVER
 
   ENCODER --> WASM_CLIENT
 
   DECODER --> TOKIO_CLIENT
 
   DECODER --> TOKIO_SERVER
 
   DECODER --> WASM_CLIENT

Layer 1: Binary Framing Protocol

The binary framing layer defines the wire format for all data transmission. It provides discrete message boundaries over byte streams using a compact header structure.

Frame Structure

Each frame consists of a fixed-size header followed by a variable-length payload chunk:

| Field | Type | Size | Description |
|---|---|---|---|
| stream_id | u32 | 4 bytes | Identifies which logical stream this frame belongs to |
| flags | u8 | 1 byte | Control flags (Start, End, Error, Cancelation) |
| payload | [u8] | Variable | Binary data chunk |

The frame header is defined in RpcHeader and serialized using bytemuck for zero-copy conversion.

Frame Types

Frames are categorized by their flags field, encoded using num_enum:

  • Start Frame : First frame of a stream, initializes decoder state
  • Data Frame : Intermediate payload chunk
  • End Frame : Final frame, triggers stream completion
  • Error Frame : Signals stream-level error
  • Cancelation Frame : Requests stream termination

Encoding and Decoding

The RpcStreamEncoder serializes data into frames with automatic chunking based on DEFAULT_MAX_CHUNK_SIZE. The RpcStreamDecoder reconstructs the original message from potentially out-of-order frames.
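The chunking behavior on the write path can be sketched as follows; the constant value, struct, and function here are illustrative rather than the actual RpcStreamEncoder API.

// Sketch of automatic chunking: a payload is split into fixed-size chunks,
// each emitted as a frame for the given stream_id, with the last flagged End.
const MAX_CHUNK_SIZE: usize = 4; // tiny value for demonstration only

#[derive(Debug)]
struct Frame<'a> {
    stream_id: u32,
    is_end: bool,
    chunk: &'a [u8],
}

fn encode_stream(stream_id: u32, payload: &[u8]) -> Vec<Frame<'_>> {
    let chunks: Vec<&[u8]> = payload.chunks(MAX_CHUNK_SIZE).collect();
    let last = chunks.len().saturating_sub(1);
    chunks
        .into_iter()
        .enumerate()
        .map(|(i, chunk)| Frame { stream_id, is_end: i == last, chunk })
        .collect()
}

fn main() {
    let frames = encode_stream(42, b"hello world");
    // 11 bytes with a 4-byte chunk size -> 3 frames, only the last marked End.
    assert_eq!(frames.len(), 3);
    assert!(frames[2].is_end && !frames[0].is_end);
}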

Sources:

graph LR
    INPUT["Input Bytes"]
CHUNK["Chunk into\nDEFAULT_MAX_CHUNK_SIZE"]
HEADER["Add RpcHeader\nstream_id + flags"]
FRAME["Binary Frame"]
INPUT --> CHUNK
 
   CHUNK --> HEADER
 
   HEADER --> FRAME
    
    RECV["Received Frames"]
DEMUX["Demultiplex by\nstream_id"]
BUFFER["Reassemble Chunks"]
OUTPUT["Output Bytes"]
RECV --> DEMUX
 
   DEMUX --> BUFFER
 
   BUFFER --> OUTPUT

Layer 2: Stream Multiplexing Layer

The stream multiplexing layer, implemented by RpcSession, manages multiple concurrent logical streams over a single connection. Each stream has independent state and lifecycle.

RpcSession Responsibilities

The RpcSession struct provides:

  • Stream ID Allocation : Monotonically increasing u32 identifiers
  • Per-Stream Decoders : HashMap<u32, RpcStreamDecoder> for concurrent reassembly
  • Frame Muxing : Interleaving frames from multiple streams
  • Frame Demuxing : Routing incoming frames to the correct decoder
  • Stream Lifecycle : Automatic decoder cleanup on End/Error events

Stream Lifecycle Management

The session maintains a decoder for each active stream in the decoders field. When a stream completes (End/Error/Cancelation), its decoder is removed from the map, freeing resources.

Concurrent Stream Operations

Multiple streams can be active simultaneously:

Sources:

Layer 3: RPC Protocol Layer

The RPC protocol layer, implemented by RpcDispatcher, adds request/response semantics on top of the stream multiplexer. It correlates requests with responses using unique request IDs.

RpcDispatcher Structure

Request Correlation

The dispatcher assigns each RPC call a unique request_id:

  1. Client calls RpcDispatcher::call(RpcRequest)
  2. Dispatcher assigns monotonic request_id from next_request_id
  3. Request is serialized with embedded request_id
  4. Dispatcher stores callback in pending_requests map
  5. Server processes request and returns RpcResponse with same request_id
  6. Dispatcher looks up callback in pending_requests and invokes it
  7. Entry is removed from pending_requests

RPC Message Types

The protocol uses num_enum to encode message types in frame payloads:

| Message Type | Direction | Contains |
|---|---|---|
| RpcRequest | Client → Server | request_id, method_id, params |
| RpcResponse | Server → Client | request_id, result or error |
| RpcStreamChunk | Bidirectional | request_id, chunk_data |
| RpcStreamEnd | Bidirectional | request_id |

Request/Response Flow with Code Entities

Sources:

Layer 4: RPC Abstraction Layer

The RPC abstraction layer defines platform-agnostic traits that enable the same application code to work across different runtime environments.

RpcServiceCallerInterface

The RpcServiceCallerInterface trait abstracts client-side RPC invocation:

This trait is implemented by:

  • RpcClient (Tokio-based native client)
  • RpcWasmClient (WASM browser client)

RpcServiceEndpointInterface

The RpcServiceEndpointInterface trait abstracts server-side handler registration:

Platform Abstraction Benefits

| Aspect | Implementation Detail | Abstracted By |
|---|---|---|
| Transport | WebSocket, TCP, Browser APIs | Caller/Endpoint traits |
| Runtime | Tokio, WASM event loop, std::thread | Async trait methods |
| Serialization | bitcode encoding/decoding | Vec<u8> byte interface |
| Error Handling | Platform-specific errors | RpcServiceError enum |

Sources:

Layer 5: Service Definition Layer

The service definition layer provides compile-time type safety through shared trait definitions between client and server.

RpcMethodPrebuffered Trait

Service methods are defined using the RpcMethodPrebuffered trait:

Compile-Time Method ID Generation

The METHOD_ID is computed at compile time using xxhash3_64 from the xxhash-rust crate:

graph TB
    DEF["Service Definition Crate\nRpcMethodPrebuffered impls"]
SERVER["Server Crate"]
CLIENT["Client Crate"]
DEF -->|depends on| SERVER
 
   DEF -->|depends on| CLIENT
    
 
   SERVER -->|register_prebuffered Add::METHOD_ID, handler| ENDPOINT["RpcServiceEndpoint"]
CLIENT -->|Add::call client, params| CALLER["RpcServiceCaller"]
ENDPOINT -->|decode_request| DEF
 
   ENDPOINT -->|encode_response| DEF
 
   CALLER -->|encode_request| DEF
 
   CALLER -->|decode_response| DEF
    
    Note1["Compile Error if:\n- Method name mismatch\n- Type signature mismatch\n- Serialization incompatibility"]

This ensures that method names are never transmitted on the wire—only their compact 8-byte hash values.

Type Safety Enforcement

Sources:

graph TB
    subgraph "Tokio Native Platform"
        TOKIO_CLIENT["RpcClient\nArc&lt;RpcClientInner&gt;"]
TOKIO_INNER["RpcClientInner\ndispatcher: TokioMutex&lt;RpcDispatcher&gt;\nendpoint: Arc&lt;RpcServiceEndpoint&gt;"]
TOKIO_TRANSPORT["tokio-tungstenite\nWebSocketStream"]
TOKIO_TASKS["Background Tasks\nread_task\nwrite_task"]
TOKIO_CLIENT -->|owns Arc| TOKIO_INNER
 
       TOKIO_INNER -->|uses| TOKIO_TRANSPORT
 
       TOKIO_CLIENT -->|spawns| TOKIO_TASKS
    end
    
    subgraph "WASM Browser Platform"
        WASM_CLIENT["RpcWasmClient\nRpcClientInner"]
WASM_BRIDGE["static_muxio_write_bytes\nJavaScript Bridge"]
WASM_STATIC["MUXIO_STATIC_RPC_CLIENT_REF\nthread_local! RefCell"]
WASM_WSAPI["Browser WebSocket API\njs-sys bindings"]
WASM_CLIENT -->|calls| WASM_BRIDGE
 
       WASM_BRIDGE -->|write to| WASM_WSAPI
 
       WASM_STATIC -->|holds| WASM_CLIENT
    end
    
    subgraph "Shared Abstractions"
        CALLER_TRAIT["RpcServiceCallerInterface"]
ENDPOINT_TRAIT["RpcServiceEndpointInterface"]
end
    
    TOKIO_CLIENT -.implements.-> CALLER_TRAIT
    WASM_CLIENT -.implements.-> CALLER_TRAIT
 
   TOKIO_INNER -->|owns| ENDPOINT_TRAIT
 
   WASM_CLIENT -->|owns| ENDPOINT_TRAIT

Layer 6: Platform Extensions

Platform extensions implement the abstraction layer traits for specific runtime environments, providing concrete transport mechanisms.

Platform Extension Architecture

Extension Crate Mapping

| Extension Crate | Implements | Runtime | Transport |
|---|---|---|---|
| muxio-tokio-rpc-client | RpcServiceCallerInterface, RpcServiceEndpointInterface | Tokio async | tokio-tungstenite WebSocket |
| muxio-tokio-rpc-server | RpcServiceEndpointInterface | Tokio + Axum | tokio-tungstenite WebSocket |
| muxio-wasm-rpc-client | RpcServiceCallerInterface, RpcServiceEndpointInterface | Browser event loop | wasm-bindgen + js-sys |

Tokio Client Lifecycle

The RpcClient manages lifecycle through Arc reference counting:

Background tasks (read_task, write_task) hold Arc clones and automatically clean up when the connection drops.

WASM Client Singleton Pattern

The WASM client uses a thread-local singleton for JavaScript interop:

This enables JavaScript to write bytes into the Rust dispatcher without async overhead.
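
The following sketch shows the general shape of such a thread-local singleton; the type and function names are illustrative stand-ins, not the actual muxio-wasm-rpc-client code.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Hypothetical stand-in for the real WASM client type.
struct RpcWasmClient;

impl RpcWasmClient {
    fn read_bytes(&self, bytes: &[u8]) {
        // In the real client this feeds the dispatcher; here it is a stub.
        let _ = bytes;
    }
}

thread_local! {
    // Single client instance per WASM thread; JavaScript callbacks borrow it
    // synchronously, so no async machinery or locking is required.
    static RPC_CLIENT: RefCell<Option<Rc<RpcWasmClient>>> = RefCell::new(None);
}

// Called once during initialization.
fn install_client(client: Rc<RpcWasmClient>) {
    RPC_CLIENT.with(|slot| *slot.borrow_mut() = Some(client));
}

// Called from the JavaScript bridge whenever WebSocket bytes arrive.
fn on_bytes_from_js(bytes: &[u8]) {
    RPC_CLIENT.with(|slot| {
        if let Some(client) = slot.borrow().as_ref() {
            client.read_bytes(bytes);
        }
    });
}

fn main() {
    install_client(Rc::new(RpcWasmClient));
    on_bytes_from_js(&[0x01, 0x02, 0x03]);
}
```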

Sources:

Cross-Cutting Concerns

Several subsystems span multiple layers:

Serialization (bitcode)

The bitcode crate provides compact binary serialization at Layer 5 (Service Definitions):

  • encode() in RpcMethodPrebuffered::encode_request/encode_response
  • decode() in RpcMethodPrebuffered::decode_request/decode_response
  • Configured in service definition crates, used by both client and server
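
As a rough illustration of this encode/decode pairing (the AddRequest type is hypothetical, not one of muxio's actual service definitions):

```rust
use bitcode::{Decode, Encode};

// Hypothetical request type for an "Add" method.
#[derive(Encode, Decode, Debug, PartialEq)]
struct AddRequest {
    lhs: f64,
    rhs: f64,
}

fn main() {
    let request = AddRequest { lhs: 2.0, rhs: 3.0 };

    // Client side: encode parameters into the Vec<u8> the transport carries.
    let bytes: Vec<u8> = bitcode::encode(&request);

    // Server side: decode the same bytes back into the typed struct.
    let decoded: AddRequest = bitcode::decode(&bytes).expect("valid bitcode payload");
    assert_eq!(decoded, request);
}
```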

Observability (tracing)

The tracing crate provides structured logging at Layers 2-4:

  • Frame-level events in RpcSession
  • Request/response correlation in RpcDispatcher
  • Connection state changes in platform extensions

Error Propagation

Errors flow upward through layers:

Each layer defines its own error type and converts lower-layer errors appropriately.

Sources:

Layer Interaction Patterns

Write Path (Client → Server)

Read Path (Server → Client)

Sources:

Summary

The layered architecture enables:

  1. Separation of Concerns : Each layer has a single, well-defined responsibility
  2. Runtime Agnosticism : Core layers (1-3) use non-async, callback-driven design
  3. Platform Extensibility : Layer 6 implements platform-specific transports
  4. Type Safety : Layer 5 enforces compile-time contracts
  5. Code Reuse : Same service definitions work across all platforms

This design allows the same business logic to execute in Tokio native environments, WASM browsers, and potentially other runtimes without modification to the core layers.

Sources:


Core Library (muxio)

Relevant source files

Purpose and Scope

This document describes the foundational muxio crate, which provides the low-level binary framing protocol and stream multiplexing capabilities that form the base layer of the Muxio framework. The core library is transport-agnostic, non-async, and callback-driven, enabling integration with any runtime environment including Tokio, standard library, and WebAssembly.

For higher-level RPC service abstractions built on top of this core, see RPC Framework. For concrete client and server implementations that use this core library, see Transport Implementations.

Sources: Cargo.toml:1-71 README.md:17-24

Architecture Overview

The muxio core library implements a layered architecture where each layer has a specific, well-defined responsibility. The design separates concerns into three distinct layers: binary framing, stream multiplexing, and RPC protocol.

Diagram: Core Library Component Layering

graph TB
    subgraph "Application Code"
        App["Application Logic"]
end
    
    subgraph "RPC Protocol Layer"
        Dispatcher["RpcDispatcher"]
Request["RpcRequest"]
Response["RpcResponse"]
Header["RpcHeader"]
end
    
    subgraph "Stream Multiplexing Layer"
        Session["RpcSession"]
StreamDecoder["RpcStreamDecoder"]
StreamEncoder["RpcStreamEncoder"]
StreamEvent["RpcStreamEvent"]
end
    
    subgraph "Binary Framing Layer"
        MuxDecoder["FrameMuxStreamDecoder"]
Frame["DecodedFrame"]
FrameKind["FrameKind"]
end
    
    subgraph "Transport Layer"
        Transport["Raw Bytes\nWebSocket/TCP/Custom"]
end
    
 
   App --> Dispatcher
 
   Dispatcher --> Request
 
   Dispatcher --> Response
 
   Request --> Header
 
   Response --> Header
    
 
   Dispatcher --> Session
 
   Session --> StreamDecoder
 
   Session --> StreamEncoder
 
   StreamDecoder --> StreamEvent
 
   StreamEncoder --> Header
    
 
   Session --> MuxDecoder
 
   MuxDecoder --> Frame
 
   Frame --> FrameKind
    
 
   MuxDecoder --> Transport
 
   Session --> Transport

Sources: src/rpc/rpc_internals/rpc_session.rs:1-118

Key Components

The core library consists of several primary components organized into distinct functional layers:

| Component | File Location | Layer | Primary Responsibility | Details |
|---|---|---|---|---|
| FrameMuxStreamDecoder | src/frame/ | Binary Framing | Decodes raw bytes into DecodedFrame structures | See Binary Framing Protocol |
| DecodedFrame | src/frame/ | Binary Framing | Container for decoded frame data with stream ID and payload | See Binary Framing Protocol |
| FrameKind | src/frame/ | Binary Framing | Frame type enumeration (Data, End, Cancel) | See Binary Framing Protocol |
| RpcSession | src/rpc/rpc_internals/rpc_session.rs:20-117 | Stream Multiplexing | Manages stream ID allocation and per-stream decoders | See Stream Multiplexing |
| RpcStreamDecoder | src/rpc/rpc_internals/rpc_stream_decoder.rs:11-186 | Stream Multiplexing | Maintains state machine for individual stream decoding | See Stream Multiplexing |
| RpcStreamEncoder | src/rpc/rpc_internals/ | Stream Multiplexing | Encodes RPC headers and payloads into frames | See Stream Multiplexing |
| RpcDispatcher | src/rpc/rpc_dispatcher.rs | RPC Protocol | Correlates requests with responses via request_id | See RPC Dispatcher |
| RpcRequest | src/rpc/ | RPC Protocol | Request data structure with method ID and parameters | See Request and Response Types |
| RpcResponse | src/rpc/ | RPC Protocol | Response data structure with result or error | See Request and Response Types |
| RpcHeader | src/rpc/rpc_internals/ | RPC Protocol | Contains RPC metadata (message type, IDs, metadata bytes) | See Request and Response Types |

Sources: src/rpc/rpc_internals/rpc_session.rs:15-24 src/rpc/rpc_internals/rpc_stream_decoder.rs:11-18

Component Interaction Flow

The following diagram illustrates how the core components interact during a typical request/response cycle, showing the actual method calls and data structures involved:

Diagram: Request/Response Data Flow Through Core Components

sequenceDiagram
    participant App as "Application"
    participant Disp as "RpcDispatcher"
    participant Sess as "RpcSession"
    participant Enc as "RpcStreamEncoder"
    participant Dec as "RpcStreamDecoder"
    participant Mux as "FrameMuxStreamDecoder"
    
    rect rgb(245, 245, 245)
        Note over App,Mux: Outbound Request Flow
        App->>Disp: call(RpcRequest)
        Disp->>Disp: assign request_id
        Disp->>Sess: init_request(RpcHeader, max_chunk_size, on_emit)
        Sess->>Sess: allocate stream_id via increment_u32_id()
        Sess->>Enc: RpcStreamEncoder::new(stream_id, header, on_emit)
        Enc->>Enc: encode header + payload into frames
        Enc->>App: emit bytes via on_emit callback
    end
    
    rect rgb(245, 245, 245)
        Note over App,Mux: Inbound Response Flow
        App->>Sess: read_bytes(input, on_rpc_stream_event)
        Sess->>Mux: FrameMuxStreamDecoder::read_bytes(input)
        Mux->>Sess: Iterator<Result<DecodedFrame>>
        Sess->>Dec: RpcStreamDecoder::decode_rpc_frame(frame)
        Dec->>Dec: state machine: AwaitHeader → AwaitPayload → Done
        Dec->>Sess: Vec<RpcStreamEvent>
        Sess->>App: on_rpc_stream_event(RpcStreamEvent::Header)
        Sess->>App: on_rpc_stream_event(RpcStreamEvent::PayloadChunk)
        Sess->>App: on_rpc_stream_event(RpcStreamEvent::End)
    end

Sources: src/rpc/rpc_internals/rpc_session.rs:35-50 src/rpc/rpc_internals/rpc_session.rs:53-117 src/rpc/rpc_internals/rpc_stream_decoder.rs:53-186

Non-Async, Callback-Driven Design

A fundamental design characteristic of the core library is its non-async, callback-driven architecture. This design choice enables the library to be used across different runtime environments without requiring a specific async runtime.

graph LR
    subgraph "Callback Trait System"
        RpcEmit["RpcEmit trait"]
RpcStreamEventHandler["RpcStreamEventDecoderHandler trait"]
RpcEmit --> TokioImpl["Tokio Implementation:\nasync fn + channels"]
RpcEmit --> WasmImpl["WASM Implementation:\nwasm_bindgen + JS bridge"]
RpcEmit --> CustomImpl["Custom Implementation:\nuser-defined"]
RpcStreamEventHandler --> DispatcherHandler["RpcDispatcher handler"]
RpcStreamEventHandler --> CustomHandler["Custom event handler"]
end

Callback Traits

The core library defines several callback traits that enable integration with different transport layers:

Diagram: Callback Trait Architecture

The callback-driven design means:

  1. No built-in async/await : Core methods like RpcSession::read_bytes() and RpcSession::init_request() are synchronous
  2. Emit callbacks : Output is sent via callback functions implementing the RpcEmit trait
  3. Event callbacks : Decoded events are delivered via RpcStreamEventDecoderHandler implementations
  4. Transport agnostic : Any byte transport can be used as long as it can call the core methods and handle callbacks

Sources: src/rpc/rpc_internals/rpc_session.rs:35-50 src/rpc/rpc_internals/rpc_session.rs:53-117 README.md:35-36
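
To make the pattern concrete, here is a tiny self-contained illustration of the same style (synchronous core, caller-supplied emit callback); it deliberately does not use muxio's actual trait names or signatures.

```rust
// Minimal illustration of the callback-driven style: the core never awaits,
// it just calls back with bytes, and the caller decides how to perform I/O.
struct EchoCore;

impl EchoCore {
    // Synchronous "read": processes input and emits output via a callback.
    fn read_bytes<F: FnMut(&[u8])>(&mut self, input: &[u8], mut on_emit: F) {
        on_emit(input);
    }
}

fn main() {
    let mut core = EchoCore;
    let mut outbox: Vec<u8> = Vec::new();

    // A Tokio task, a WASM bridge, or a plain thread could own this closure;
    // the core does not care which runtime drives it.
    core.read_bytes(b"hello", |bytes| outbox.extend_from_slice(bytes));

    assert_eq!(outbox, b"hello");
}
```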

Core Abstractions

The core library provides three fundamental abstractions that enable runtime-agnostic operation:

Stream Lifecycle Management

The RpcSession component provides stream multiplexing by maintaining per-stream state. Each outbound call allocates a unique stream_id via increment_u32_id(), and incoming frames are demultiplexed to their corresponding RpcStreamDecoder instances. For detailed information on stream multiplexing mechanics, see Stream Multiplexing.

Binary Protocol Format

All communication uses a binary framing protocol with fixed-size headers and variable-length payloads. The protocol supports chunking large messages using DEFAULT_MAX_CHUNK_SIZE and includes frame types (FrameKind::Data, FrameKind::End, FrameKind::Cancel) for stream control. For complete protocol specification, see Binary Framing Protocol.

Request Correlation

The RpcDispatcher component manages request/response correlation using unique request_id values. It maintains a HashMap of pending requests and routes incoming responses to the appropriate callback handlers. For dispatcher implementation details, see RPC Dispatcher.

Sources: src/rpc/rpc_internals/rpc_session.rs:20-33 src/rpc/rpc_internals/rpc_session.rs:53-117

Dependencies and Build Configuration

The core muxio crate has minimal dependencies to maintain its lightweight, transport-agnostic design:

| Dependency | Purpose | Version |
|---|---|---|
| chrono | Timestamp utilities | 0.4.41 |
| once_cell | Lazy static initialization | 1.21.3 |
| tracing | Logging and diagnostics | 0.1.41 |

Development dependencies include:

  • bitcode: Used in tests for serialization examples
  • rand: Random data generation for tests
  • tokio: Async runtime for integration tests

The crate is configured to publish to crates.io under the Apache-2.0 license (Cargo.toml:1-18).

Sources: Cargo.toml:34-71

Integration with Extensions

The core library is designed to be extended through separate crates in the workspace. The callback-driven, non-async design enables these extensions without modification to the core:

Diagram: Core Library Extension Architecture

graph TB
    subgraph "Core Library: muxio"
        Session["RpcSession\n(callback-driven)"]
Dispatcher["RpcDispatcher\n(callback-driven)"]
end
    
    subgraph "RPC Service Layer"
        ServiceTrait["muxio-rpc-service\nRpcMethodPrebuffered trait"]
Caller["muxio-rpc-service-caller\nRpcServiceCallerInterface"]
Endpoint["muxio-rpc-service-endpoint\nRpcServiceEndpointInterface"]
end
    
    subgraph "Runtime Implementations"
        TokioServer["muxio-tokio-rpc-server\nasync Tokio + Axum"]
TokioClient["muxio-tokio-rpc-client\nasync Tokio + WebSocket"]
WasmClient["muxio-wasm-rpc-client\nwasm-bindgen"]
end
    
 
   Session --> ServiceTrait
 
   Dispatcher --> Caller
 
   Dispatcher --> Endpoint
    
 
   Caller --> TokioClient
 
   Caller --> WasmClient
 
   Endpoint --> TokioServer

Extensions built on the core library:

  • RPC Service Layer RPC Framework: Provides method definition traits and abstractions
  • Tokio Server Tokio RPC Server: Async server implementation using tokio and axum
  • Tokio Client Tokio RPC Client: Async client using tokio-tungstenite WebSockets
  • WASM Client WASM RPC Client: Browser-based client using wasm-bindgen

Sources: Cargo.toml:19-31 README.md:37-41

Memory and Performance Characteristics

The core library is designed for efficiency with several key optimizations:

| Optimization | Implementation | Benefit |
|---|---|---|
| Zero-copy frame parsing | FrameMuxStreamDecoder processes bytes in-place | Eliminates unnecessary allocations during frame decoding |
| Shared headers | RpcHeader wrapped in Arc (src/rpc/rpc_internals/rpc_stream_decoder.rs:111) | Multiple events reference same header without cloning |
| Minimal buffering | Stream decoders emit chunks immediately after header parse | Low memory footprint for large payloads |
| Automatic cleanup | Streams removed from HashMap on End/Cancel (src/rpc/rpc_internals/rpc_session.rs:74) | Prevents memory leaks from completed streams |
| Configurable chunks | max_chunk_size parameter in init_request() (src/rpc/rpc_internals/rpc_session.rs:35-50) | Tune for different payload sizes and network conditions |

Typical Memory Usage

  • RpcSession: Approximately 48 bytes base + HashMap overhead
  • RpcStreamDecoder: Approximately 80 bytes + buffered payload size
  • RpcHeader (shared): 24 bytes + metadata length
  • Active stream overhead: ~160 bytes per concurrent stream

For performance tuning strategies and benchmarks, see Performance Considerations.

Sources: src/rpc/rpc_internals/rpc_session.rs:20-33 src/rpc/rpc_internals/rpc_session.rs:35-50 src/rpc/rpc_internals/rpc_stream_decoder.rs:11-18 src/rpc/rpc_internals/rpc_stream_decoder.rs:111-116

Error Handling

The core library uses Result types with specific error enums:

  • FrameDecodeError : Returned by FrameMuxStreamDecoder::read_bytes() and RpcStreamDecoder::decode_rpc_frame()
    • CorruptFrame: Invalid frame structure or header data
    • ReadAfterCancel: Attempt to read after stream cancellation
  • FrameEncodeError : Returned by encoding operations
    • Propagated from RpcStreamEncoder::new()

Error events are emitted as RpcStreamEvent::Error (src/rpc/rpc_internals/rpc_session.rs:84-91, src/rpc/rpc_internals/rpc_session.rs:103-110) and contain:

  • rpc_header: The header if available
  • rpc_request_id: The request ID if known
  • rpc_method_id: The method ID if parsed
  • frame_decode_error: The underlying error

For comprehensive error handling patterns, see Error Handling.

Sources: src/rpc/rpc_internals/rpc_session.rs:80-111


Binary Framing Protocol

Relevant source files

Purpose and Scope

The binary framing protocol defines the lowest-level data structure for all communication in muxio. This protocol operates below the RPC layer, providing a schemaless, ordered, chunked byte transport mechanism.

Core Responsibilities:

  • Define binary frame structure (7-byte header + variable payload)
  • Encode structured data into frames via FrameMuxStreamEncoder
  • Decode byte streams into DecodedFrame structures via FrameMuxStreamDecoder
  • Support frame types (Data, End, Cancel) for stream lifecycle management
  • Enable payload chunking with configurable max_chunk_size

The framing protocol is transport-agnostic and makes no assumptions about serialization formats. It operates purely on raw bytes. Higher-level concerns like RPC headers, method IDs, and serialization are handled by layers above this protocol.

Related Pages:

  • Stream multiplexing and per-stream decoders: #3.2
  • RPC protocol structures (RpcHeader, RpcRequest, RpcResponse): #3.4
  • RPC session management and stream ID allocation: #3.2

Sources: src/rpc/rpc_internals/rpc_session.rs:15-24 DRAFT.md:11-21


Architecture Overview

The framing protocol sits between raw transport (WebSocket, TCP) and the RPC session layer. It provides discrete message boundaries over continuous byte streams.

Component Diagram: Frame Processing Pipeline

graph TB
    RawBytes["Raw Byte Stream\nWebSocket/TCP"]
FrameEncoder["FrameMuxStreamEncoder\nencode frames"]
FrameDecoder["FrameMuxStreamDecoder\nparse frames"]
DecodedFrame["DecodedFrame\nstruct"]
RpcSession["RpcSession\nmultiplexer"]
RpcStreamEncoder["RpcStreamEncoder\nper-stream encoder"]
RpcSession -->|allocate stream_id| RpcStreamEncoder
 
   RpcStreamEncoder -->|emit frames| FrameEncoder
 
   FrameEncoder -->|write_bytes| RawBytes
    
 
   RawBytes -->|read_bytes| FrameDecoder
 
   FrameDecoder -->|yield| DecodedFrame
 
   DecodedFrame -->|route by stream_id| RpcSession

Key Classes:

  • FrameMuxStreamEncoder: Encodes frames into bytes (referenced indirectly via RpcStreamEncoder)
  • FrameMuxStreamDecoder: Parses incoming bytes into DecodedFrame structures
  • DecodedFrame: Represents a parsed frame with stream_id, kind, and payload
  • RpcSession: Manages frame multiplexing and per-stream decoder lifecycle

Sources: src/rpc/rpc_internals/rpc_session.rs:20-33 src/rpc/rpc_internals/rpc_session.rs:52-60


Frame Structure

Each binary frame consists of a fixed-size header followed by a variable-length payload. The frame format is designed for efficient parsing with minimal copying.

Binary Layout

| Field | Offset | Size | Type | Description |
|---|---|---|---|---|
| stream_id | 0 | 4 bytes | u32 (LE) | Logical stream identifier for multiplexing |
| kind | 4 | 1 byte | u8 enum | FrameKind: Data=0, End=1, Cancel=2 |
| payload_length | 5 | 2 bytes | u16 (LE) | Payload byte count (0-65535) |
| payload | 7 | payload_length | [u8] | Raw payload bytes |

Total Frame Size: 7 bytes (header) + payload_length

graph LR
    subgraph "Frame Header - 7 Bytes"
        StreamID["stream_id\n4 bytes\nu32 LE"]
Kind["kind\n1 byte\nu8"]
PayloadLen["payload_length\n2 bytes\nu16 LE"]
end
    
    subgraph "Frame Payload"
        Payload["payload\n0-65535 bytes\n[u8]"]
end
    
 
   StreamID --> Kind
 
   Kind --> PayloadLen
 
   PayloadLen --> Payload

All multi-byte integers use little-endian encoding. The u16 payload length field limits individual frames to 65,535 bytes, enforcing bounded memory consumption per frame.
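
Based on the layout above, a frame header can be packed and unpacked with plain little-endian conversions; the following is an illustrative sketch, not the crate's internal encoder or decoder.

```rust
// Sketch: packing and unpacking the 7-byte frame header described above.
// Offsets and sizes follow the table; this is not muxio's internal code.
fn encode_frame(stream_id: u32, kind: u8, payload: &[u8]) -> Vec<u8> {
    assert!(payload.len() <= u16::MAX as usize, "payload exceeds u16 length field");
    let mut frame = Vec::with_capacity(7 + payload.len());
    frame.extend_from_slice(&stream_id.to_le_bytes()); // bytes 0..4
    frame.push(kind);                                  // byte 4
    frame.extend_from_slice(&(payload.len() as u16).to_le_bytes()); // bytes 5..7
    frame.extend_from_slice(payload);                  // bytes 7..
    frame
}

fn decode_header(frame: &[u8]) -> Option<(u32, u8, u16)> {
    if frame.len() < 7 {
        return None; // incomplete header; a real decoder would buffer and wait
    }
    let stream_id = u32::from_le_bytes(frame[0..4].try_into().ok()?);
    let kind = frame[4];
    let payload_length = u16::from_le_bytes(frame[5..7].try_into().ok()?);
    Some((stream_id, kind, payload_length))
}

fn main() {
    let frame = encode_frame(42, 0 /* Data */, b"abc");
    assert_eq!(decode_header(&frame), Some((42, 0, 3)));
}
```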

Frame Header Diagram

Sources: src/rpc/rpc_internals/rpc_stream_decoder.rs:3-6 (frame structure constants)


stateDiagram-v2
    [*] --> AwaitingFirstFrame
    AwaitingFirstFrame --> AcceptingData: FrameKind::Data
    AcceptingData --> AcceptingData: FrameKind::Data
    AcceptingData --> StreamClosed: FrameKind::End
    AcceptingData --> StreamAborted: FrameKind::Cancel
    AwaitingFirstFrame --> StreamAborted: FrameKind::Cancel
    StreamClosed --> [*]
    StreamAborted --> [*]

Frame Types

The FrameKind enum (from crate::frame) defines frame semantics. Each frame’s kind field determines how the decoder processes it.

Frame Lifecycle State Machine

FrameKind Values

| Enum Variant | Wire Value | Purpose | Effect |
|---|---|---|---|
| FrameKind::Data | Implementation-defined | Payload chunk | Accumulate or emit payload bytes |
| FrameKind::End | Implementation-defined | Normal termination | Finalize stream, emit RpcStreamEvent::End |
| FrameKind::Cancel | Implementation-defined | Abnormal abort | Discard state, emit error or remove decoder |

Frame Type Semantics

  • Data frames: Carry payload bytes. A stream may consist of 1 to N data frames depending on chunking.
  • End frames: Signal successful completion. Payload length is typically 0. Decoder emits final event and removes stream state.
  • Cancel frames: Signal early termination. Decoder removes stream from rpc_stream_decoders map and may emit error event.

Sources: src/rpc/rpc_internals/rpc_session.rs:98-100 (decoder cleanup), src/rpc/rpc_internals/rpc_stream_decoder.rs:156-166 (End/Cancel handling)


Frame Encoding

The encoding process transforms logical data into binary frames suitable for transmission. The core encoder is FrameMuxStreamEncoder, though specific encoding details are handled by stream-level encoders like RpcStreamEncoder.

Frame Encoding Sequence

sequenceDiagram
    participant RpcSession
    participant RpcStreamEncoder
    participant on_emit
    participant Transport
    
    RpcSession->>RpcSession: allocate stream_id
    RpcSession->>RpcStreamEncoder: new(stream_id, max_chunk_size, header, on_emit)
    RpcStreamEncoder->>RpcStreamEncoder: encode RpcHeader into first frame
    RpcStreamEncoder->>on_emit: emit(Frame{stream_id, Data, header_bytes})
    on_emit->>Transport: write_bytes()
    
    loop "For each payload chunk"
        RpcStreamEncoder->>RpcStreamEncoder: chunk payload by max_chunk_size
        RpcStreamEncoder->>on_emit: emit(Frame{stream_id, Data, chunk})
        on_emit->>Transport: write_bytes()
    end
    
    RpcStreamEncoder->>on_emit: emit(Frame{stream_id, End, []})
    on_emit->>Transport: write_bytes()

Encoding Process

  1. Stream ID Allocation: RpcSession::init_request() allocates a unique stream_id via increment_u32_id()
  2. Encoder Creation: Creates RpcStreamEncoder with stream_id, max_chunk_size, and on_emit callback
  3. Frame Emission: Encoder calls on_emit for each frame. Callback receives raw frame bytes for transport writing.
  4. Chunking: If payload exceeds max_chunk_size, encoder emits multiple Data frames with same stream_id
  5. Finalization: Final End frame signals completion

The callback-based on_emit pattern enables non-async, runtime-agnostic operation. Callers provide their own I/O strategy.

Sources: src/rpc/rpc_internals/rpc_session.rs:35-50 (init_request method)


graph TB
    read_bytes["read_bytes(&[u8])"]
FrameMuxStreamDecoder["FrameMuxStreamDecoder\nstateful parser"]
DecodedFrame["DecodedFrame\nResult iterator"]
RpcSession["RpcSession"]
rpc_stream_decoders["rpc_stream_decoders\nHashMap&lt;u32, RpcStreamDecoder&gt;"]
RpcStreamDecoder["RpcStreamDecoder\nper-stream state"]
read_bytes -->|input bytes| FrameMuxStreamDecoder
 
   FrameMuxStreamDecoder -->|yield| DecodedFrame
 
   RpcSession -->|iterate frames| DecodedFrame
 
   RpcSession -->|route by stream_id| rpc_stream_decoders
 
   rpc_stream_decoders -->|or_default| RpcStreamDecoder
 
   RpcStreamDecoder -->|decode_rpc_frame| RpcStreamEvent

Frame Decoding

The decoding process parses incoming byte streams into structured DecodedFrame objects. The FrameMuxStreamDecoder maintains parsing state across multiple read_bytes calls to handle partial frame reception.

Decoding Architecture

Decoding Sequence

Decoder State Management

Per-Connection State (RpcSession):

  • frame_mux_stream_decoder: FrameMuxStreamDecoder - parses frame boundaries from byte stream
  • rpc_stream_decoders: HashMap<u32, RpcStreamDecoder> - maps stream_id to per-stream state

Per-Stream State (RpcStreamDecoder):

  • state: RpcDecoderState - AwaitHeader, AwaitPayload, or Done
  • header: Option<Arc<RpcHeader>> - parsed RPC header from first frame
  • buffer: Vec<u8> - accumulates bytes across frames
  • rpc_request_id: Option<u32> - extracted from header
  • rpc_method_id: Option<u64> - extracted from header

Lifecycle:

Sources: src/rpc/rpc_internals/rpc_session.rs:20-33 src/rpc/rpc_internals/rpc_stream_decoder.rs:11-42


graph LR
    Payload["Payload: 100 KB"]
Encoder["RpcStreamEncoder\nmax_chunk_size=16384"]
F1["Frame\nstream_id=42\nkind=Data\npayload=16KB"]
F2["Frame\nstream_id=42\nkind=Data\npayload=16KB"]
F3["Frame\nstream_id=42\nkind=Data\npayload=16KB"]
FN["Frame\nstream_id=42\nkind=Data\npayload=..."]
FEnd["Frame\nstream_id=42\nkind=End\npayload=0"]
Payload --> Encoder
 
   Encoder --> F1
 
   Encoder --> F2
 
   Encoder --> F3
 
   Encoder --> FN
 
   Encoder --> FEnd

Chunk Management

Large messages are automatically split into multiple frames to avoid memory exhaustion and enable incremental processing. The chunking mechanism operates transparently at the frame level.

Chunking Strategy

Chunk Size Configuration

max_chunk_size controls the number of payload bytes per frame. The system-defined constant DEFAULT_MAX_CHUNK_SIZE provides the default.

| Size Range | Latency | Overhead | Use Case |
|---|---|---|---|
| 4-8 KB | Lower | Higher | Interactive/real-time |
| 16-32 KB | Balanced | Moderate | General purpose |
| 64 KB | Higher | Lower | Bulk transfer |

Maximum frame payload is 65,535 bytes (u16::MAX) per frame structure. Practical values are typically 16-32 KB to balance latency and efficiency.
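
As a worked example, assuming a 16 KiB chunk size (matching the 16384-byte value shown in the diagram above), a 100 KiB payload splits into seven Data frames followed by one End frame:

```rust
fn main() {
    let payload_len: usize = 100 * 1024;   // 100 KiB payload
    let max_chunk_size: usize = 16 * 1024; // 16 KiB per Data frame

    // Ceiling division: number of Data frames needed for the payload.
    let data_frames = payload_len.div_ceil(max_chunk_size);
    let last_chunk = payload_len - (data_frames - 1) * max_chunk_size;

    println!("{data_frames} Data frames (last carries {last_chunk} bytes) + 1 End frame");
    assert_eq!(data_frames, 7);
    assert_eq!(last_chunk, 4096);
}
```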

Reassembly Process

Frames with matching stream_id are processed by the same RpcStreamDecoder:

| Frame Type | Decoder Action | State Transition |
|---|---|---|
| First Data (with RPC header) | Parse header, emit RpcStreamEvent::Header | AwaitHeader → AwaitPayload |
| Subsequent Data | Emit RpcStreamEvent::PayloadChunk | Remain in AwaitPayload |
| End | Emit RpcStreamEvent::End, remove decoder | AwaitPayload → removed from map |
| Cancel | Remove decoder, optionally emit error | Any state → removed from map |

Sources: src/rpc/rpc_internals/rpc_stream_decoder.rs:59-146 (decoder state machine), src/rpc/rpc_internals/rpc_session.rs:68-100 (decoder lifecycle)


RPC Header Layer

The framing protocol is payload-agnostic, but RPC usage places an RpcHeader structure in the first frame’s payload. This is an RPC-level concern, not a frame-level concern.

RPC Header Binary Structure

The first Data frame for an RPC stream contains a serialized RpcHeader:

| Field | Offset | Size | Type | Description |
|---|---|---|---|---|
| rpc_msg_type | 0 | 1 byte | u8 | RpcMessageType: Call=0, Response=1 |
| rpc_request_id | 1 | 4 bytes | u32 LE | Request correlation ID |
| rpc_method_id | 5 | 8 bytes | u64 LE | xxhash of method name |
| rpc_metadata_length | 13 | 2 bytes | u16 LE | Byte count of metadata |
| rpc_metadata_bytes | 15 | variable | [u8] | Serialized parameters or result |

Total: 15 bytes + rpc_metadata_length

RPC Header Constants

Constants from src/constants.rs define the layout:

| Constant | Value | Purpose |
|---|---|---|
| RPC_FRAME_FRAME_HEADER_SIZE | 15 | Minimum RPC header size |
| RPC_FRAME_MSG_TYPE_OFFSET | 0 | Offset to rpc_msg_type |
| RPC_FRAME_ID_OFFSET | 1 | Offset to rpc_request_id |
| RPC_FRAME_METHOD_ID_OFFSET | 5 | Offset to rpc_method_id |
| RPC_FRAME_METADATA_LENGTH_OFFSET | 13 | Offset to metadata length |
| RPC_FRAME_METADATA_LENGTH_SIZE | 2 | Size of metadata length field |

RpcStreamDecoder uses these constants to parse the header from the first frame’s payload.
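
Using the documented offsets, parsing the fixed portion of the header looks roughly like the sketch below; the real RpcStreamDecoder additionally buffers partial data and tracks decoder state.

```rust
// Sketch of parsing the fixed 15-byte RPC header using the documented offsets.
// Constant names mirror the table; this is not the crate's actual parser.
const RPC_FRAME_FRAME_HEADER_SIZE: usize = 15;
const RPC_FRAME_MSG_TYPE_OFFSET: usize = 0;
const RPC_FRAME_ID_OFFSET: usize = 1;
const RPC_FRAME_METHOD_ID_OFFSET: usize = 5;
const RPC_FRAME_METADATA_LENGTH_OFFSET: usize = 13;

struct ParsedRpcHeader {
    rpc_msg_type: u8,
    rpc_request_id: u32,
    rpc_method_id: u64,
    rpc_metadata_bytes: Vec<u8>,
}

fn parse_rpc_header(buffer: &[u8]) -> Option<ParsedRpcHeader> {
    if buffer.len() < RPC_FRAME_FRAME_HEADER_SIZE {
        return None; // keep buffering until the fixed header is complete
    }
    let rpc_msg_type = buffer[RPC_FRAME_MSG_TYPE_OFFSET];
    let rpc_request_id = u32::from_le_bytes(
        buffer[RPC_FRAME_ID_OFFSET..RPC_FRAME_METHOD_ID_OFFSET].try_into().ok()?,
    );
    let rpc_method_id = u64::from_le_bytes(
        buffer[RPC_FRAME_METHOD_ID_OFFSET..RPC_FRAME_METADATA_LENGTH_OFFSET].try_into().ok()?,
    );
    let meta_len = u16::from_le_bytes(
        buffer[RPC_FRAME_METADATA_LENGTH_OFFSET..RPC_FRAME_FRAME_HEADER_SIZE].try_into().ok()?,
    ) as usize;
    if buffer.len() < RPC_FRAME_FRAME_HEADER_SIZE + meta_len {
        return None; // metadata not fully received yet
    }
    let rpc_metadata_bytes =
        buffer[RPC_FRAME_FRAME_HEADER_SIZE..RPC_FRAME_FRAME_HEADER_SIZE + meta_len].to_vec();
    Some(ParsedRpcHeader { rpc_msg_type, rpc_request_id, rpc_method_id, rpc_metadata_bytes })
}
```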

Sources: src/rpc/rpc_internals/rpc_stream_decoder.rs:2-8 (constant imports), src/rpc/rpc_internals/rpc_stream_decoder.rs:64-125 (parsing logic)


Error Handling

Frame-level errors are represented by FrameDecodeError and FrameEncodeError enumerations. The decoder converts corrupted or invalid frames into error events rather than panicking.

Frame Decode Errors

| Error Variant | Cause | Decoder Behavior |
|---|---|---|
| FrameDecodeError::CorruptFrame | Invalid header, failed parsing | Remove stream_id from map, emit error event |
| FrameDecodeError::ReadAfterCancel | Data frame after Cancel frame | Return error, stop processing stream |
| Incomplete frame (not an error) | Insufficient bytes for full frame | FrameMuxStreamDecoder buffers, awaits more data |

Error Event Propagation

Error isolation ensures corruption in one stream (stream_id=42) does not affect other streams (stream_id=43, etc.) on the same connection.

Sources: src/rpc/rpc_internals/rpc_session.rs:80-94 (error handling and cleanup), src/rpc/rpc_internals/rpc_stream_decoder.rs:70-72 (error detection)


sequenceDiagram
    participant Client
    participant Connection
    participant Server
    
    Note over Client,Server: Three concurrent calls, interleaved frames
    
    Client->>Connection: Frame{stream_id=1, Data, RpcHeader{Add}}
    Client->>Connection: Frame{stream_id=2, Data, RpcHeader{Multiply}}
    Client->>Connection: Frame{stream_id=3, Data, RpcHeader{Echo}}
    
    Connection->>Server: Frames arrive in send order
    
    Note over Server: Server processes concurrently
    
    Server->>Connection: Frame{stream_id=2, Data, result}
    Server->>Connection: Frame{stream_id=2, End}
    
    Server->>Connection: Frame{stream_id=1, Data, result}
    Server->>Connection: Frame{stream_id=1, End}
    
    Server->>Connection: Frame{stream_id=3, Data, result}
    Server->>Connection: Frame{stream_id=3, End}
    
    Connection->>Client: Responses complete out-of-order

Multiplexing Example

Multiple concurrent RPC calls use distinct stream_id values to interleave over a single connection:

Concurrent Streams Over Single Connection

Frame multiplexing eliminates head-of-line blocking. A slow operation on stream_id=1 does not delay responses for stream_id=2 or stream_id=3.

Sources: src/rpc/rpc_internals/rpc_session.rs:20-33 (stream ID allocation), src/rpc/rpc_internals/rpc_session.rs:61-77 (decoder routing)


Summary

The binary framing protocol provides a lightweight, efficient mechanism for transmitting structured data over byte streams. Key characteristics:

  • Fixed-format frames: 7-byte header + variable payload (max 64 KB per frame)
  • Stream identification: stream_id field enables multiplexing
  • Lifecycle management: Data, End, and Cancel frame types control stream state
  • Chunking support: Large messages split automatically into multiple frames
  • Stateful decoding: Handles partial frame reception across multiple reads
  • Error isolation: Frame errors affect only the associated stream

This framing protocol forms the foundation for higher-level stream multiplexing (#3.2) and RPC protocol implementation (#3.4).


Stream Multiplexing

Relevant source files

Purpose and Scope

This document describes the stream multiplexing mechanism in the Muxio core library, specifically focusing on the RpcSession component. Stream multiplexing enables multiple independent RPC request/response streams to be transmitted concurrently over a single underlying connection without interference.

This page covers the low-level mechanics of stream ID allocation, per-stream state management, frame routing, and cleanup. For information about the binary framing protocol that underlies stream multiplexing, see Binary Framing Protocol. For information about higher-level request/response correlation and RPC lifecycle management, see RPC Dispatcher.

Sources: src/rpc/rpc_internals/rpc_session.rs:1-118 README.md:15-36


Overview

The RpcSession struct is the central component for stream multiplexing. It operates at a layer above binary frame encoding/decoding but below RPC protocol semantics. Its primary responsibilities include:

  • Allocating unique stream IDs for outbound requests
  • Routing incoming frames to the appropriate per-stream decoder
  • Managing the lifecycle of individual stream decoders
  • Emitting stream events (header, payload chunks, completion) for higher layers to process

Each logical RPC call (request or response) is assigned a unique stream_id. Multiple streams can be active simultaneously, with their frames interleaved at the transport level. The RpcSession ensures that frames are correctly demultiplexed and reassembled into coherent stream events.

Sources: src/rpc/rpc_internals/rpc_session.rs:15-24


Architecture

Component Structure

Diagram 1: RpcSession Architecture

This diagram illustrates how RpcSession sits between the binary framing layer and the RPC protocol layer. The frame_mux_stream_decoder field processes raw bytes into DecodedFrame instances, which are then routed to individual RpcStreamDecoder instances stored in the rpc_stream_decoders HashMap based on their stream_id.

Sources: src/rpc/rpc_internals/rpc_session.rs:20-33 src/rpc/rpc_internals/rpc_stream_decoder.rs:11-18 src/rpc/rpc_internals/rpc_session.rs:52-60


Stream ID Allocation

When an outbound RPC request or response is initiated, RpcSession must allocate a unique stream ID. This is managed by the next_stream_id counter, which is incremented for each new stream.

Allocation Process

| Step | Action | Implementation |
|---|---|---|
| 1 | Capture current next_stream_id value | src/rpc/rpc_internals/rpc_session.rs:44 |
| 2 | Increment counter via increment_u32_id() | src/rpc/rpc_internals/rpc_session.rs:45 |
| 3 | Create RpcStreamEncoder with allocated ID | src/rpc/rpc_internals/rpc_session.rs:47-48 |
| 4 | Return encoder to caller | src/rpc/rpc_internals/rpc_session.rs:49 |

The increment_u32_id() utility function (src/utils.rs) wraps the counter on overflow, ensuring continuous operation.
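
A wrapping counter with the described behavior can be written as follows; this is an illustrative sketch, and the actual utility in src/utils.rs may use a different signature.

```rust
// Illustrative wrapping ID counter; the real increment_u32_id() in
// src/utils.rs may have a different signature.
fn increment_u32_id(current: &mut u32) -> u32 {
    let id = *current;
    *current = current.wrapping_add(1); // wraps to 0 on overflow, never panics
    id
}

fn main() {
    let mut next_stream_id = u32::MAX;
    assert_eq!(increment_u32_id(&mut next_stream_id), u32::MAX);
    assert_eq!(next_stream_id, 0); // wrapped around
}
```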

Diagram 2: Stream ID Allocation Sequence

Sources: src/rpc/rpc_internals/rpc_session.rs:35-50 src/utils.rs5


Inbound Frame Processing

The read_bytes() method is the entry point for all incoming data. It processes raw bytes through the following pipeline:

graph LR
    RawBytes["input: &[u8]"] --> ReadBytes["RpcSession::read_bytes()"]
ReadBytes --> FrameMuxDecoder["frame_mux_stream_decoder\n.read_bytes(input)"]
FrameMuxDecoder --> |frames: Iterator&lt;Result&lt;DecodedFrame&gt;&gt;| FrameLoop["for frame_result in frames"]
FrameLoop --> ExtractStreamID["frame.inner.stream_id"]
ExtractStreamID --> LookupDecoder["rpc_stream_decoders\n.entry(stream_id)\n.or_default()"]
LookupDecoder --> DecodeFrame["rpc_stream_decoder\n.decode_rpc_frame(&frame)"]
DecodeFrame --> |Ok events: Vec&lt;RpcStreamEvent&gt;| EmitEvents["for event in events"]
EmitEvents --> CheckEnd["matches!(event,\nRpcStreamEvent::End)"]
CheckEnd --> |true| CleanupDecoder["rpc_stream_decoders\n.remove(&stream_id)"]
CheckEnd --> |false| CallbackInvoke["on_rpc_stream_event(event)"]
DecodeFrame --> |Err e| ErrorCleanup["rpc_stream_decoders\n.remove(&stream_id)\nemit Error event"]

Processing Pipeline

Diagram 3: Inbound Frame Processing Pipeline

Sources: src/rpc/rpc_internals/rpc_session.rs:52-117

Per-Stream Decoder Management

Each unique stream_id encountered gets its own RpcStreamDecoder instance, which maintains the decoding state for that stream. These decoders are stored in the rpc_stream_decoders: HashMap<u32, RpcStreamDecoder> field and are lazily created on first access using the entry().or_default() pattern.

Decoders are automatically cleaned up when:

  • An RpcStreamEvent::End is emitted after processing (line 73-75)
  • A FrameKind::Cancel or FrameKind::End frame is received (line 98-100)
  • A decoding error occurs in decode_rpc_frame() (line 82)

Sources: src/rpc/rpc_internals/rpc_session.rs68 src/rpc/rpc_internals/rpc_session.rs:73-75 src/rpc/rpc_internals/rpc_session.rs:80-82 src/rpc/rpc_internals/rpc_session.rs:98-100


RpcStreamDecoder State Machine

Each RpcStreamDecoder operates as a state machine that transitions through three states as it processes frames for a single stream.

stateDiagram-v2
    [*] --> AwaitHeader: RpcStreamDecoder::new()
    
    AwaitHeader --> AwaitHeader : buffer.len() < RPC_FRAME_FRAME_HEADER_SIZE or insufficient metadata
    AwaitHeader --> AwaitPayload : Header complete - extract RpcHeader, state = AwaitPayload, emit Header event
    
    AwaitPayload --> AwaitPayload: FrameKind::Data\nemit PayloadChunk event
    AwaitPayload --> Done: frame.inner.kind == FrameKind::End\nstate = Done,\nemit End event
    AwaitPayload --> [*]: frame.inner.kind == FrameKind::Cancel\nreturn ReadAfterCancel error
    
    Done --> [*] : Decoder removed from HashMap

State Transitions

Diagram 4: RpcStreamDecoder State Machine

Sources: src/rpc/rpc_internals/rpc_stream_decoder.rs:20-24 src/rpc/rpc_internals/rpc_stream_decoder.rs:53-186 src/rpc/rpc_internals/rpc_stream_decoder.rs:64-65 src/rpc/rpc_internals/rpc_stream_decoder.rs120 src/rpc/rpc_internals/rpc_stream_decoder.rs:156-157 src/rpc/rpc_internals/rpc_stream_decoder.rs:165-166

State Descriptions

| State | Purpose | Buffer Usage | Events Emitted |
|---|---|---|---|
| AwaitHeader | Accumulate bytes until complete RPC header is available | Accumulates all incoming bytes | RpcStreamEvent::Header when header complete |
| AwaitPayload | Process payload chunks after header extracted | Not used (data forwarded directly) | RpcStreamEvent::PayloadChunk for each frame |
| Done | Stream has completed | Not used | RpcStreamEvent::End on transition |

Sources: src/rpc/rpc_internals/rpc_stream_decoder.rs:11-42


Header Decoding

The RPC header is embedded at the beginning of each stream and contains metadata necessary for routing and processing. The header structure is fixed-size with a variable-length metadata field:

Header Structure

| Field | Offset Constant | Size | Description |
|---|---|---|---|
| rpc_msg_type | RPC_FRAME_MSG_TYPE_OFFSET (0) | 1 byte | RpcMessageType enum (Call or Response) |
| rpc_request_id | RPC_FRAME_ID_OFFSET (1) | 4 bytes | Request correlation ID |
| rpc_method_id | RPC_FRAME_METHOD_ID_OFFSET (5) | 8 bytes | Method identifier (xxhash) |
| metadata_length | RPC_FRAME_METADATA_LENGTH_OFFSET (13) | 2 bytes | Length of metadata bytes |
| rpc_metadata_bytes | 15 | variable | Serialized metadata |

sequenceDiagram
    participant Frame as "DecodedFrame"
    participant Decoder as "RpcStreamDecoder"
    participant Buffer as "self.buffer: Vec&lt;u8&gt;"
    participant Events as "events: Vec&lt;RpcStreamEvent&gt;"
    
    Note over Decoder: self.state = AwaitHeader
    Frame->>Decoder: decode_rpc_frame(&frame)
    Decoder->>Buffer: buffer.extend(&frame.inner.payload)
    Decoder->>Buffer: Check buffer.len() >= RPC_FRAME_FRAME_HEADER_SIZE
    
    alt "buffer.len() < RPC_FRAME_FRAME_HEADER_SIZE"
        Decoder-->>Events: return Ok(events) // empty
    else "Sufficient for header"
        Decoder->>Decoder: rpc_msg_type = RpcMessageType::try_from(buffer[RPC_FRAME_MSG_TYPE_OFFSET])
        Decoder->>Decoder: rpc_request_id = u32::from_le_bytes(buffer[RPC_FRAME_ID_OFFSET..RPC_FRAME_METHOD_ID_OFFSET])
        Decoder->>Decoder: rpc_method_id = u64::from_le_bytes(buffer[RPC_FRAME_METHOD_ID_OFFSET..RPC_FRAME_METADATA_LENGTH_OFFSET])
        Decoder->>Decoder: meta_len = u16::from_le_bytes(buffer[RPC_FRAME_METADATA_LENGTH_OFFSET..])
        Decoder->>Buffer: Check buffer.len() >= header_size + meta_len
        
        alt "Complete header + metadata available"
            Decoder->>Decoder: rpc_metadata_bytes = buffer[15..15+meta_len].to_vec()
            Decoder->>Decoder: header_arc = Arc::new(RpcHeader { ... })
            Decoder->>Decoder: self.header = Some(header_arc.clone())
            Decoder->>Decoder: self.state = AwaitPayload
            Decoder->>Buffer: buffer.drain(..15+meta_len)
            Decoder->>Events: events.push(RpcStreamEvent::Header { ... })
            
            opt "!buffer.is_empty()"
                Decoder->>Events: events.push(RpcStreamEvent::PayloadChunk { ... })
            end
        end
    end
    
    Decoder-->>Frame: return Ok(events)

The decoder buffers incoming data in buffer: Vec<u8> until at least RPC_FRAME_FRAME_HEADER_SIZE + metadata_length bytes are available, then extracts the header fields and transitions to AwaitPayload state.

Diagram 5: Header Decoding Process

Sources: src/rpc/rpc_internals/rpc_stream_decoder.rs:60-145 src/constants.rs:13-17


graph TB
    subgraph "RpcSession State at Time T"
        Session["RpcSession\nnext_stream_id = 104"]
subgraph "rpc_stream_decoders HashMap"
            Stream101["stream_id: 101\nState: AwaitPayload\nheader: Some(Add request)\nBuffered: 256 bytes"]
Stream102["stream_id: 102\nState: AwaitHeader\nheader: None\nBuffered: 8 bytes"]
Stream103["stream_id: 103\nState: Done\nheader: Some(Echo response)\nReady for cleanup"]
end
    end
    
 
   IncomingFrame["Incoming Frame\nstream_id: 102\npayload: 24 bytes"] --> Session
 
   Session --> |Route to| Stream102
 
   Stream102 --> |Now 32 bytes total Header complete!| HeaderEvent["RpcStreamEvent::Header\nfor stream 102"]
Stream103 --> |Remove from HashMap| Cleanup["Cleanup"]

Concurrent Stream Example

Multiple streams can be active simultaneously, each at different stages of processing. The following illustrates how three concurrent streams might be managed:

Diagram 6: Concurrent Stream Management Example

This illustrates a snapshot where stream 101 is actively receiving payload, stream 102 is still accumulating its header, and stream 103 has completed and is ready for cleanup.

Sources: src/rpc/rpc_internals/rpc_session.rs:22-32 src/rpc/rpc_internals/rpc_stream_decoder.rs:11-18


Stream Cleanup

Proper cleanup of stream decoders is essential to prevent memory leaks in long-lived connections with many sequential RPC calls. Cleanup occurs at several points:

Cleanup Triggers

| Trigger | Location | Cleanup Action |
|---|---|---|
| RpcStreamEvent::End emitted | src/rpc/rpc_internals/rpc_session.rs:73-75 | self.rpc_stream_decoders.remove(&stream_id) |
| FrameKind::End received | src/rpc/rpc_internals/rpc_session.rs:98-100 | self.rpc_stream_decoders.remove(&stream_id) |
| FrameKind::Cancel received | src/rpc/rpc_internals/rpc_session.rs:98-100 | self.rpc_stream_decoders.remove(&stream_id) |
| Decoding error in decode_rpc_frame() | src/rpc/rpc_internals/rpc_session.rs:82 | self.rpc_stream_decoders.remove(&stream_id) |

graph TD
 
   Event["Stream Event"] --> CheckType{"Event Type?"}
CheckType --> |RpcStreamEvent::End| Cleanup1["Remove decoder\nfrom HashMap"]
CheckType --> |FrameKind::End| Cleanup2["Remove decoder\nfrom HashMap"]
CheckType --> |FrameKind::Cancel| Cleanup3["Remove decoder\nfrom HashMap"]
CheckType --> |Decode Error| Cleanup4["Remove decoder\nfrom HashMap"]
CheckType --> |Other| Continue["Continue processing"]
Cleanup1 --> Done["Stream resources freed"]
Cleanup2 --> Done
 
   Cleanup3 --> Done
 
   Cleanup4 --> Done

All cleanup operations use the HashMap::remove(&stream_id) method to immediately deallocate the RpcStreamDecoder instance and its buffered data.

Diagram 7: Stream Cleanup Triggers

Sources: src/rpc/rpc_internals/rpc_session.rs:73-100


Error Handling

When errors occur during frame decoding or stream processing, the RpcSession generates RpcStreamEvent::Error events and performs cleanup:

Error Scenarios

| Error Type | Source | Response |
|---|---|---|
| Invalid frame structure | FrameMuxStreamDecoder | Emit error event, continue with other streams |
| Corrupt RPC header | RpcStreamDecoder | Emit error event, remove decoder, return error |
| Read after cancel | FrameKind::Cancel received | Return FrameDecodeError::ReadAfterCancel |

All error events include:

  • rpc_header: Header data if available
  • rpc_request_id: Request ID if header was decoded
  • rpc_method_id: Method ID if header was decoded
  • frame_decode_error: The underlying error type

This information allows higher layers to perform appropriate error handling and reporting.

Sources: src/rpc/rpc_internals/rpc_session.rs:80-111 src/rpc/rpc_internals/rpc_stream_decoder.rs:165-166


Key Implementation Details

Thread Safety

RpcSession itself is not thread-safe and does not implement Send or Sync. This design choice aligns with the core philosophy of using callbacks rather than async/await. Higher-level components (like RpcDispatcher) are responsible for managing thread safety if needed.

Memory Efficiency

  • Each stream decoder maintains its own buffer, but only while in AwaitHeader state
  • Once the header is extracted, payload chunks are forwarded directly without additional buffering
  • Completed streams are immediately cleaned up, preventing unbounded memory growth
  • The HashMap of decoders shrinks automatically as streams complete

Arc-Wrapped Headers

Headers are wrapped in Arc<RpcHeader> immediately upon decoding:

This allows multiple RpcStreamEvent instances for the same stream to share the same header data via Arc::clone() without deep cloning the rpc_metadata_bytes, which is particularly important for streams with large metadata.

Sources: src/rpc/rpc_internals/rpc_session.rs:15-33 src/rpc/rpc_internals/rpc_stream_decoder.rs:111-117 src/rpc/rpc_internals/rpc_stream_decoder.rs13


Integration with Other Components

Relationship to Binary Framing

RpcSession depends on FrameMuxStreamDecoder from the binary framing layer but does not implement frame encoding/decoding itself. It operates at a higher level of abstraction, concerned with RPC-specific concepts like headers and stream events rather than raw frame structures.

See Binary Framing Protocol for details on frame encoding/decoding.

Relationship to RPC Dispatcher

RpcDispatcher (see RPC Dispatcher) sits above RpcSession and consumes the RpcStreamEvent callbacks. The dispatcher correlates requests with responses and manages the RPC protocol semantics, while RpcSession handles only the mechanical aspects of stream multiplexing.

Sources: README.md:29-35


RPC Dispatcher

Relevant source files

Purpose and Scope

The RpcDispatcher is the central request coordination component in muxio’s Core Transport Layer. It sits above RpcSession and below the RPC Service Layer, managing the complete lifecycle of RPC requests and responses. The dispatcher handles request correlation via unique IDs, multiplexed stream management, and response routing over the binary framed transport.

This document covers the internal architecture, request/response flow, queue management, and usage patterns. For the underlying stream multiplexing, see Stream Multiplexing. For the RpcRequest and RpcResponse data structures, see Request and Response Types.

Sources: src/rpc/rpc_dispatcher.rs:1-458


Architectural Context

Diagram: RpcDispatcher in Layered Architecture

The RpcDispatcher operates in a non-async , callback-based model compatible with WASM and multithreaded runtimes. Key responsibilities:

| Responsibility | Implementation |
|---|---|
| Request Correlation | next_rpc_request_id: u32 with monotonic increment |
| Response Routing | Per-request handlers in response_handlers: HashMap<u32, Handler> |
| Stream Management | Wraps RpcRespondableSession for lifecycle control |
| Payload Accumulation | rpc_request_queue: Arc<Mutex<VecDeque<(u32, RpcRequest)>>> |
| Error Propagation | fail_all_pending_requests() on connection drop |

Important : Each RpcDispatcher instance is bound to a single connection. Do not share across connections.

Sources: src/rpc/rpc_dispatcher.rs:20-51 src/rpc/rpc_dispatcher.rs:36-51


Core Components and Data Structures

Diagram: RpcDispatcher Internal Structure

Primary Fields

| Field | Type | Line | Purpose |
|---|---|---|---|
| rpc_respondable_session | RpcRespondableSession<'a> | 37 | Delegates to RpcSession for frame encoding/decoding |
| next_rpc_request_id | u32 | 42 | Monotonic counter for outbound request ID generation |
| rpc_request_queue | Arc<Mutex<VecDeque<(u32, RpcRequest)>>> | 50 | Thread-safe queue tracking all active inbound requests |

RpcRespondableSession Internal State

| Field | Type | Purpose |
|---|---|---|
| rpc_session | RpcSession | Manages stream IDs and frame encoding/decoding |
| response_handlers | HashMap<u32, Box<dyn FnMut(RpcStreamEvent) + Send>> | Per-request response callbacks indexed by request_id |
| catch_all_response_handler | Option<Box<dyn FnMut(RpcStreamEvent) + Send>> | Global fallback handler for unmatched events |
| prebuffered_responses | HashMap<u32, Vec<u8>> | Accumulates payload bytes when prebuffering is enabled |
| prebuffering_flags | HashMap<u32, bool> | Tracks which requests should prebuffer responses |

Sources: src/rpc/rpc_dispatcher.rs:36-51 src/rpc/rpc_internals/rpc_respondable_session.rs:21-28


Request Lifecycle

Outbound Request Flow

Diagram: Outbound Request Encoding via RpcDispatcher::call()

The call() method at src/rpc/rpc_dispatcher.rs:227-286 executes the following steps:

  1. ID Assignment (line 241): Captures self.next_rpc_request_id as the unique rpc_request_id
  2. ID Increment (line 242): Advances next_rpc_request_id using increment_u32_id()
  3. Header Construction (lines 252-257): Creates RpcHeader struct with:
    • rpc_msg_type: RpcMessageType::Call
    • rpc_request_id (from step 1)
    • rpc_method_id (from RpcRequest.rpc_method_id)
    • rpc_metadata_bytes (converted from RpcRequest.rpc_param_bytes)
  4. Handler Registration (lines 260-266): Calls init_respondable_request() which:
    • Stores on_response handler in response_handlers HashMap
    • Sets prebuffering_flags[request_id] to control response buffering
  5. Payload Transmission (lines 270-276): If rpc_prebuffered_payload_bytes exists, writes to encoder
  6. Stream Finalization (lines 279-283): If is_finalized, calls flush() and end_stream()
  7. Encoder Return (line 285): Returns RpcStreamEncoder<E> for additional streaming

Sources: src/rpc/rpc_dispatcher.rs:227-286 src/rpc/rpc_internals/rpc_respondable_session.rs:42-68


Inbound Response Flow

Diagram: Inbound Response Processing via RpcDispatcher::read_bytes()

The read_bytes() method at src/rpc/rpc_dispatcher.rs:362-374 processes incoming transport data:

  1. Frame Decoding (line 364): Delegates to self.rpc_respondable_session.read_bytes(bytes)
  2. Event Dispatch (src/rpc/rpc_internals/rpc_respondable_session.rs:93-173): RpcSession decodes frames into RpcStreamEvent enum
  3. Handler Invocation: For each event:
    • Specific Handler (line 152): Calls response_handlers[rpc_request_id] if registered
    • Catch-All Handler (lines 102-208): Always invoked to populate rpc_request_queue
  4. Queue Mutations (via catch-all handler):
    • Header event: Creates new RpcRequest and pushes to queue (line 140)
    • PayloadChunk event: Extends rpc_prebuffered_payload_bytes (lines 154-157)
    • End event: Sets is_finalized = true (line 177)
  5. Active IDs Return (lines 367-371): Locks queue and returns Vec<u32> of all request_ids

Sources: src/rpc/rpc_dispatcher.rs:362-374 src/rpc/rpc_dispatcher.rs:99-209 src/rpc/rpc_internals/rpc_respondable_session.rs:93-173


Response Handling Mechanisms

Dual Handler System

Diagram: Dual Handler Dispatch System

The dispatcher uses two parallel handler mechanisms at src/rpc/rpc_internals/rpc_respondable_session.rs:93-173:

Specific Response Handlers

Storage: response_handlers: HashMap<u32, Box<dyn FnMut(RpcStreamEvent) + Send + 'a>> (line 24)

| Aspect | Implementation |
|---|---|
| Registration | In init_respondable_request() at lines 61-62 when on_response is Some |
| Invocation | At lines 152-154 for each RpcStreamEvent matching rpc_request_id |
| Removal | At lines 161-162 when End or Error event received |
| Use Case | Application-level response processing with custom callbacks |

Catch-All Response Handler

Storage: catch_all_response_handler: Option<Box<dyn FnMut(RpcStreamEvent) + Send + 'a>> (line 25)

| Aspect | Implementation |
|---|---|
| Registration | In set_catch_all_response_handler() at lines 86-91 during RpcDispatcher::new() |
| Invocation | At lines 165-167 for all events not handled by specific handlers |
| Primary Role | Populates rpc_request_queue for queue-based processing pattern |

Catch-All Handler Responsibilities (lines 102-208):

  • Header Event (lines 122-142): Creates new RpcRequest, pushes to rpc_request_queue
  • PayloadChunk Event (lines 144-169): Extends rpc_prebuffered_payload_bytes in existing request
  • End Event (lines 171-185): Sets is_finalized = true
  • Error Event (lines 187-206): Logs error (currently no queue removal)

Sources: src/rpc/rpc_internals/rpc_respondable_session.rs:24-25 src/rpc/rpc_dispatcher.rs:99-209


Prebuffering vs. Streaming Modes

The prebuffer_response: bool parameter in call() (line 233) controls response delivery mode:

| Mode | prebuffer_response | Implementation | Use Case |
|---|---|---|---|
| Prebuffering | true | Accumulates all PayloadChunk events into single buffer, delivers as one chunk when stream ends | Complete request/response RPCs where full payload needed before processing |
| Streaming | false | Delivers each PayloadChunk event immediately as received | Progressive rendering, large file transfers, real-time data |

Prebuffering Implementation (src/rpc/rpc_internals/rpc_respondable_session.rs:112-147):

Diagram: Prebuffering Control Flow

Key Data Structures:

  • prebuffering_flags: HashMap<u32, bool> (line 27): Tracks mode per request, set at lines 57-58
  • prebuffered_responses: HashMap<u32, Vec<u8>> (line 26): Accumulates bytes for prebuffered requests

Prebuffering Sequence (lines 112-147):

  1. Set Flag (lines 57-58): prebuffering_flags.insert(rpc_request_id, true) in init_respondable_request()
  2. Accumulate Chunks (line 127): buffer.extend_from_slice(bytes) for each PayloadChunk event
  3. Flush on End (lines 135-142): Emit synthetic PayloadChunk with full buffer, then emit End
  4. Cleanup (line 145): prebuffered_responses.remove(&rpc_id) after delivery

Sources: src/rpc/rpc_internals/rpc_respondable_session.rs:112-147 src/rpc/rpc_internals/rpc_respondable_session.rs:26-27


Request Queue Management

Queue Structure and Threading

Type: Arc<Mutex<VecDeque<(u32, RpcRequest)>>> at src/rpc/rpc_dispatcher.rs:50

Diagram: Request Queue Threading Model

The queue stores (request_id, RpcRequest) tuples for all active inbound requests. Each entry represents a request that has received at least a Header event.

RpcRequest Structure

Source: src/rpc/rpc_request_response.rs:10-33

| Field | Type | Mutability | Description |
|---|---|---|---|
| rpc_method_id | u64 | Set on Header | Method identifier from header |
| rpc_param_bytes | Option<Vec<u8>> | Set on Header | Metadata from RpcHeader.rpc_metadata_bytes |
| rpc_prebuffered_payload_bytes | Option<Vec<u8>> | Grows on Chunks | Accumulated via extend_from_slice() at line 157 |
| is_finalized | bool | Set on End | true when End event received at line 177 |

Sources: src/rpc/rpc_dispatcher.rs50 src/rpc/rpc_request_response.rs:10-33


Queue Operation Methods

Diagram: Queue Mutation and Access Operations

Public API Methods

get_rpc_request(header_id: u32)

Signature: -> Option<MutexGuard<'_, VecDeque<(u32, RpcRequest)>>>
Location: src/rpc/rpc_dispatcher.rs:381-394

Behavior:

  1. Acquires lock: self.rpc_request_queue.lock() (line 385)
  2. Searches for header_id: queue.iter().any(|(id, _)| *id == header_id) (line 389)
  3. Returns entire MutexGuard if found, None otherwise

Rationale: Cannot return reference to queue element—lifetime would outlive MutexGuard. Caller must re-search under guard.

Example Usage:
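
A minimal usage sketch, assuming a dispatcher: RpcDispatcher and a header_id: u32 are already in scope; only the method name comes from this page, and the surrounding setup is omitted:

```rust
// Hypothetical call site (setup omitted).
if let Some(queue) = dispatcher.get_rpc_request(header_id) {
    // The guard covers the whole queue, so re-search for the entry of interest.
    if let Some((_, request)) = queue.iter().find(|(id, _)| *id == header_id) {
        println!(
            "method_id={} finalized={}",
            request.rpc_method_id, request.is_finalized
        );
    }
} // Queue lock is released when `queue` (the MutexGuard) is dropped.
```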


is_rpc_request_finalized(header_id: u32)

Signature: -> Option<bool>
Location: src/rpc/rpc_dispatcher.rs:399-405

Returns:

  • Some(true): Request exists and is_finalized == true
  • Some(false): Request exists but not finalized
  • None: Request not found in queue

Implementation: lines 401-404 search the queue and return req.is_finalized


delete_rpc_request(header_id: u32)

Signature: -> Option<RpcRequest>
Location: src/rpc/rpc_dispatcher.rs:411-420

Behavior:

  1. Locks the queue: self.rpc_request_queue.lock() (line 412)
  2. Finds the index: queue.iter().position(|(id, _)| *id == header_id) (line 414)
  3. Removes the entry: queue.remove(index)? (line 416)
  4. Returns owned RpcRequest, discarding request ID

Typical Usage: Call after is_rpc_request_finalized() returns true to consume the completed request.

Sources: src/rpc/rpc_dispatcher.rs:381-420


Server-Side Response Sending

respond() Method Flow

Diagram: RpcDispatcher::respond() Execution Flow

Method Signature: src/rpc/rpc_dispatcher.rs:298-337

Key Differences from call()

| Aspect | call() (Client) | respond() (Server) |
|--------|-----------------|--------------------|
| Request ID | Generates new via next_rpc_request_id (line 241) | Uses rpc_response.rpc_request_id from the original request (line 308) |
| Message Type | RpcMessageType::Call (line 253) | RpcMessageType::Response (line 309) |
| Metadata | rpc_request.rpc_param_bytes converted to rpc_metadata_bytes (line 250) | Only the rpc_result_status byte if present (lines 313-317) |
| Handler Registration | Optionally registers an on_response handler (line 264) | No handler registration (responses don't receive responses) |
| Prebuffering | Supports the prebuffer_response parameter | Not applicable (prebuffering is for receiving, not sending) |

Metadata Encoding

The metadata field in response headers carries only the result status (lines 313-317):
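
The exact expression is not reproduced here; a sketch of the described behavior (a single status byte when present, otherwise empty metadata) looks like this:

```rust
// Sketch: build response-header metadata from an optional status byte.
fn response_metadata(rpc_result_status: Option<u8>) -> Vec<u8> {
    match rpc_result_status {
        Some(status) => vec![status], // one status byte (0 = success by convention)
        None => Vec::new(),           // empty metadata when no status is provided
    }
}
```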

Convention: While muxio core doesn’t enforce semantics, 0 typically indicates success. See RPC Service Errors for error code conventions.

Sources: src/rpc/rpc_dispatcher.rs:298-337 src/rpc/rpc_internals/rpc_respondable_session.rs:70-82


Error Handling and Cleanup

Mutex Poisoning Strategy

The rpc_request_queue uses a Mutex for synchronization. If a thread panics while holding the lock, the mutex becomes poisoned.

Poisoning Detection at src/rpc/rpc_dispatcher.rs:104-118:
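
The crate's exact code is not shown on this page; the fail-fast pattern it describes can be sketched as follows (the helper and its names are illustrative):

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};

// Illustrative helper: run a closure against the queue, panicking loudly if
// the mutex was poisoned by a panic on another thread.
fn with_queue<T>(
    queue: &Arc<Mutex<VecDeque<(u32, T)>>>,
    f: impl FnOnce(&mut VecDeque<(u32, T)>),
) {
    let mut guard = queue
        .lock()
        // Continuing with a poisoned queue could silently corrupt request
        // routing, so crash with a clear signal instead.
        .unwrap_or_else(|_| panic!("rpc_request_queue mutex poisoned"));
    f(&mut guard);
}
```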

Design Rationale:

| Aspect | Justification |
|--------|---------------|
| Panic on Poison | A poisoned mutex indicates another thread panicked during queue mutation |
| No Recovery | Inconsistent state could cause incorrect routing, data loss, or silent failures |
| Fast Failure | An explicit crash provides a clear debugging signal vs. undefined behavior |
| Production Safety | Better to fail loudly than corrupt application state |

Alternative Implementations (future consideration):

  • Configurable panic policy via builder pattern
  • Error reporting mechanism instead of panic
  • Queue reconstruction from handler state

Other Lock Sites:

  • read_bytes() at lines 367-370: Maps poison to FrameDecodeError::CorruptFrame
  • is_rpc_request_finalized() at line 400: Returns None on lock failure
  • delete_rpc_request() at line 412: Returns None on lock failure

Sources: src/rpc/rpc_dispatcher.rs:85-118 src/rpc/rpc_dispatcher.rs:362-374


Connection Failure Cleanup

When transport connection drops, fail_all_pending_requests() at src/rpc/rpc_dispatcher.rs:427-456 prevents indefinite hangs:

Diagram: fail_all_pending_requests() Execution Flow

Implementation Steps:

  1. Ownership Transfer (line 436): std::mem::take(&mut self.rpc_respondable_session.response_handlers)

    • Moves all handlers out of the HashMap
    • Leaves response_handlers empty (prevents further invocations)
  2. Synthetic Error Creation (lines 444-450; see the sketch after this list):

  3. Handler Invocation (line 452): Calls handler(error_event) for each pending request

    • Wakes async Futures waiting for responses
    • Triggers error handling in callback-based code
  4. Automatic Cleanup : Handlers dropped after loop completes
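
A self-contained sketch of this cleanup strategy, using simplified stand-ins for the real handler and event types:

```rust
use std::collections::HashMap;

// Simplified stand-ins for the crate's event and handler types.
enum StreamEvent {
    Error(String),
}
type ResponseHandler = Box<dyn FnMut(StreamEvent) + Send>;

// Sketch: take ownership of all pending handlers, then notify each one with
// a synthetic error so waiting callers unblock instead of hanging forever.
fn fail_all_pending(handlers: &mut HashMap<u32, ResponseHandler>) {
    // Step 1: move the handlers out, leaving the map empty.
    let drained = std::mem::take(handlers);
    for (request_id, mut handler) in drained {
        // Steps 2-3: deliver a synthetic error per pending request.
        handler(StreamEvent::Error(format!(
            "connection closed before response to request {request_id}"
        )));
    } // Step 4: handlers are dropped as the loop ends.
}
```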

Usage Context: Transport implementations (e.g., muxio-tokio-rpc-client, muxio-wasm-rpc-client) call this in WebSocket close handlers.

Sources: src/rpc/rpc_dispatcher.rs:427-456


Thread Safety

The dispatcher achieves thread safety through:

Shared Request Queue

  • Arc : Enables shared ownership across threads and callbacks
  • Mutex : Ensures exclusive access during mutations
  • VecDeque : Efficient push/pop operations for queue semantics

Handler Storage

Response handlers are stored as boxed trait objects:
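
The exact definition is not reproduced here; its shape is approximately the following (the names are assumptions):

```rust
use std::collections::HashMap;

// Illustrative stand-in; most variants elided.
enum RpcStreamEvent {
    End,
    // Header, PayloadChunk, Error, ...
}

// Approximate shape of the handler storage described above.
type RpcResponseHandler = Box<dyn FnMut(RpcStreamEvent) + Send>;

struct ResponseHandlers {
    // Keyed by rpc_request_id; the Send bound lets stored closures be invoked
    // from whichever thread drives read_bytes().
    by_request_id: HashMap<u32, RpcResponseHandler>,
}
```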

The Send bound allows handlers to be invoked from different threads if the dispatcher is shared across threads.

Sources: src/rpc/rpc_dispatcher.rs:50 src/rpc/rpc_internals/rpc_respondable_session.rs:24


Usage Patterns

Client-Side Pattern

1. Create RpcDispatcher
2. For each RPC call:
   a. Build RpcRequest with method_id, params, payload
   b. Call dispatcher.call() with on_response handler
   c. Write bytes to transport via on_emit callback
3. When receiving data from transport:
   a. Call dispatcher.read_bytes()
   b. Response handlers are invoked automatically
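
A hedged sketch of this flow in code; the method names come from this page, while the import path, the chunk-size argument, and the closure signatures are assumptions rather than verified API details:

```rust
use muxio::rpc::{RpcDispatcher, RpcRequest}; // assumed import path

fn sketch_client_call(dispatcher: &mut RpcDispatcher, request: RpcRequest) {
    // 2c. Bytes produced for this call are handed to the transport here.
    let on_emit = |bytes: &[u8]| {
        // e.g. websocket.send(bytes)
    };
    // 3b. Invoked automatically as response frames are decoded by read_bytes().
    let on_response = |_event| {
        // decode the payload, resolve a pending future, etc.
    };
    // 2b. Initiate the call; the trailing `true` requests prebuffered delivery.
    let _encoder = dispatcher.call(request, 4, on_emit, on_response, true);

    // 3a. Later, bytes received from the transport are fed back in:
    // dispatcher.read_bytes(&incoming_bytes);
}
```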

Sources: tests/rpc_dispatcher_tests.rs:32-124


Server-Side Pattern

1. Create RpcDispatcher
2. When receiving data from transport:
   a. Call dispatcher.read_bytes()
   b. Returns list of active request IDs
3. For each active request ID:
   a. Check is_rpc_request_finalized()
   b. If finalized, delete_rpc_request() to retrieve full request
   c. Process request (decode params, execute method)
   d. Build RpcResponse with result
   e. Call dispatcher.respond() to send response
4. Write response bytes to transport via on_emit callback
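
A hedged sketch of the server loop body; the method names come from this page, while signatures, return types, and the import path are assumptions:

```rust
use muxio::rpc::RpcDispatcher; // assumed import path

fn sketch_server_step(dispatcher: &mut RpcDispatcher, incoming: &[u8]) {
    // 2a/2b. Decode incoming frames; yields the IDs of requests they touched.
    let active_ids = dispatcher
        .read_bytes(incoming)
        .expect("frame decode failed"); // assumed to return a Result

    for request_id in active_ids {
        // 3a/3b. Only consume requests whose End frame has arrived.
        if dispatcher.is_rpc_request_finalized(request_id) == Some(true) {
            if let Some(_request) = dispatcher.delete_rpc_request(request_id) {
                // 3c. Decode params/payload and execute the method handler...
                // 3d/3e. ...then build an RpcResponse and send it back:
                // dispatcher.respond(response, 4, on_emit);
            }
        }
    }
}
```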

Sources: tests/rpc_dispatcher_tests.rs:126-202

graph TB
    Dispatcher["RpcDispatcher"]
subgraph "Client Role"
        OutboundCall["call()\nInitiate request"]
InboundResponse["read_bytes()\nReceive response"]
end
    
    subgraph "Server Role"
        InboundRequest["read_bytes()\nReceive request"]
OutboundResponse["respond()\nSend response"]
end
    
 
   Dispatcher --> OutboundCall
 
   Dispatcher --> InboundResponse
 
   Dispatcher --> InboundRequest
 
   Dispatcher --> OutboundResponse
    
    OutboundCall -.emits.-> Transport["Transport Layer"]
Transport -.delivers.-> InboundRequest
    OutboundResponse -.emits.-> Transport
    Transport -.delivers.-> InboundResponse

Bidirectional Pattern

The same dispatcher instance can handle both client and server roles simultaneously:

Diagram: Bidirectional Request/Response Flow

This pattern enables peer-to-peer architectures where both endpoints can initiate requests and respond to requests.

Sources: src/rpc/rpc_dispatcher.rs:20-51


Implementation Notes

Request ID Generation

Location: src/rpc/rpc_dispatcher.rs:241-242

Mechanism:

  1. Captures current self.next_rpc_request_id for the outgoing request
  2. Calls increment_u32_id() from src/utils/increment_u32_id.rs (implementation in crate::utils)
  3. Updates self.next_rpc_request_id with next value

Wraparound Behavior:

  • After reaching u32::MAX (4,294,967,295), wraps to 0 and continues
  • Provides monotonic sequence within 32-bit range
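
A self-contained sketch of this wraparound behavior; the crate's increment_u32_id() helper may differ in detail (for example, by reserving an ID of 0):

```rust
// Wrapping increment: after u32::MAX the counter returns to 0.
fn next_id(current: u32) -> u32 {
    current.wrapping_add(1)
}

fn main() {
    assert_eq!(next_id(41), 42);
    assert_eq!(next_id(u32::MAX), 0); // wraps after 4,294,967,295
}
```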

Collision Analysis:

| Connection Duration | Requests/Second | Time to Wraparound | Collision Risk |
|---------------------|-----------------|--------------------|----------------|
| Short-lived (hours) | 1,000 | 49.7 days | Negligible |
| Long-lived (days) | 10,000 | 4.97 days | Very low |
| High-throughput | 100,000 | 11.9 hours | Consider ID reuse detection |

Mitigation for Long-Running Connections:

  • Track active request IDs in a HashSet before assignment
  • Reject or delay requests if ID would collide with pending request
  • Not currently implemented (acceptable for typical usage)

Initialization: Starts at the first ID from increment_u32_id() at line 64 during RpcDispatcher::new()

Sources: src/rpc/rpc_dispatcher.rs:241-242 src/rpc/rpc_dispatcher.rs:64


Non-Async Design

The dispatcher uses callbacks instead of async/await for several reasons:

  1. WASM Compatibility : Avoids dependency on async runtimes that may not work in WASM
  2. Runtime Agnostic : Works with Tokio, async-std, or no runtime at all
  3. Deterministic : No hidden scheduling or context switching
  4. Zero-Cost : No Future state machines or executor overhead

Higher-level abstractions (like those in muxio-rpc-service-caller) can wrap the dispatcher with async interfaces when desired.

Sources: src/rpc/rpc_dispatcher.rs:26-27


Summary

The RpcDispatcher provides the core request/response coordination layer for muxio’s RPC framework:

| Responsibility | Mechanism |
|----------------|-----------|
| Request Correlation | Unique request IDs with monotonic generation |
| Response Routing | Per-request handlers + catch-all fallback |
| Stream Management | Wraps RpcRespondableSession for encoder lifecycle |
| Payload Accumulation | Optional prebuffering or streaming delivery |
| Queue Management | Thread-safe VecDeque for tracking active requests |
| Error Propagation | Synthetic error events on connection failure |
| Thread Safety | Arc<Mutex<>> for shared state |

The dispatcher’s non-async, callback-based design enables deployment across native and WASM environments while maintaining type safety and performance.

Sources: src/rpc/rpc_dispatcher.rs:1-458



Request and Response Types

Relevant source files

Purpose and Scope

This document defines the core data structures used to represent RPC requests and responses in the muxio framework: RpcRequest, RpcResponse, and RpcHeader. These types provide a runtime-agnostic, schemaless representation of method invocations and their corresponding replies. They serve as the fundamental building blocks for all RPC communication in the system.

For information about how these types are processed and routed, see RPC Dispatcher. For information about how they are serialized and transmitted at the binary level, see Binary Framing Protocol.


Type Hierarchy and Relationships

The request-response system is built on three primary types that work together to enable method invocation and correlation:

Sources:

graph TB
    subgraph "Application Layer"
        USER["User Code\nService Definitions"]
end
    
    subgraph "RPC Protocol Layer Types"
        REQ["RpcRequest\nsrc/rpc/rpc_request_response.rs"]
RESP["RpcResponse\nsrc/rpc/rpc_request_response.rs"]
HDR["RpcHeader\nsrc/rpc/rpc_internals/rpc_header.rs"]
end
    
    subgraph "Processing Components"
        DISP["RpcDispatcher::call()\nRpcDispatcher::respond()"]
SESSION["RpcRespondableSession"]
end
    
 
   USER -->|constructs| REQ
 
   USER -->|constructs| RESP
    
 
   REQ -->|converted to| HDR
 
   RESP -->|converted to| HDR
    
 
   HDR -->|can be converted back to| RESP
    
 
   DISP -->|processes| REQ
 
   DISP -->|processes| RESP
    
 
   DISP -->|delegates to| SESSION
    
    style REQ fill:#f9f9f9
    style RESP fill:#f9f9f9
    style HDR fill:#f9f9f9

RpcHeader Structure

RpcHeader is the internal protocol-level representation of RPC metadata transmitted in frame headers. It contains all information necessary to route and identify an RPC message.

Field Description

| Field | Type | Purpose |
|-------|------|---------|
| rpc_msg_type | RpcMessageType | Discriminates between Call, Response, Event, etc. |
| rpc_request_id | u32 | Unique identifier for request-response correlation |
| rpc_method_id | u64 | Identifier (typically a hash) of the method being invoked |
| rpc_metadata_bytes | Vec<u8> | Schemaless encoded parameters or status information |

Type Definition

The complete structure is defined at src/rpc/rpc_internals/rpc_header.rs:3-24:
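
The struct body is not reproduced on this page; based on the field table above, its shape is approximately the following (derives, visibility, and the message-type variants shown are assumptions):

```rust
// Placeholder discriminant; the full variant set is not shown here.
pub enum RpcMessageType {
    Call,
    Response,
    // Event, ...
}

// Approximate shape of RpcHeader inferred from the field table above.
pub struct RpcHeader {
    pub rpc_msg_type: RpcMessageType, // Call, Response, ...
    pub rpc_request_id: u32,          // request/response correlation ID
    pub rpc_method_id: u64,           // method identifier (typically a hash)
    pub rpc_metadata_bytes: Vec<u8>,  // params (calls) or status byte (responses)
}
```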

Metadata Semantics

The rpc_metadata_bytes field has different interpretations depending on the message type:

  • For Calls : Contains serialized method parameters (e.g., bitcode-encoded arguments)
  • For Responses : Contains a single-byte result status code, or is empty if no status is provided
  • Empty Vector : Valid and indicates no metadata is present

Sources:


RpcRequest Structure

RpcRequest represents an outbound RPC call initiated by a client. It is constructed by user code and passed to RpcDispatcher::call() for transmission.

Field Description

| Field | Type | Purpose |
|-------|------|---------|
| rpc_method_id | u64 | Unique method identifier (typically xxhash of the method name) |
| rpc_param_bytes | Option<Vec<u8>> | Serialized method parameters (None if no parameters) |
| rpc_prebuffered_payload_bytes | Option<Vec<u8>> | Optional payload to send after the header |
| is_finalized | bool | If true, stream ends immediately after header+payload |

Type Definition

Defined at src/rpc/rpc_request_response.rs:9-33:
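
The definition is not reproduced here; inferred from the field table above, its shape is approximately:

```rust
// Approximate shape of RpcRequest; not the verbatim definition.
pub struct RpcRequest {
    pub rpc_method_id: u64,                             // typically xxhash of the method name
    pub rpc_param_bytes: Option<Vec<u8>>,               // serialized parameters, if any
    pub rpc_prebuffered_payload_bytes: Option<Vec<u8>>, // optional payload sent after the header
    pub is_finalized: bool,                             // end the stream right after header+payload
}
```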

Usage Patterns

Finalized Request (Single-Frame RPC)

When the entire request is known upfront:
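
A construction sketch using the field names from the table above; ADD_METHOD_ID and encoded_params are placeholders:

```rust
// Single-shot request: everything is sent up front and the stream is closed.
let request = RpcRequest {
    rpc_method_id: ADD_METHOD_ID,
    rpc_param_bytes: Some(encoded_params),
    rpc_prebuffered_payload_bytes: None,
    is_finalized: true, // End frame follows the header (and payload, if any)
};
```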

This pattern is demonstrated at tests/rpc_dispatcher_tests.rs:42-49

Streaming Request (Multi-Frame RPC)

When payload will be written incrementally:
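
A construction sketch for the streaming case; payload chunks are written later through the encoder returned by call(), and UPLOAD_METHOD_ID is a placeholder:

```rust
// Streaming request: the header goes out now, payload chunks are written later.
let request = RpcRequest {
    rpc_method_id: UPLOAD_METHOD_ID, // placeholder
    rpc_param_bytes: Some(encoded_params),
    rpc_prebuffered_payload_bytes: None,
    is_finalized: false, // keep the stream open for incremental writes
};
```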

Conversion to RpcHeader

When RpcDispatcher::call() is invoked, the RpcRequest is converted to an RpcHeader at src/rpc/rpc_dispatcher.rs:249-257:

Sources:


RpcResponse Structure

RpcResponse represents the reply to a prior RPC request. It is constructed on the server side and passed to RpcDispatcher::respond() for transmission back to the client.

Field Description

| Field | Type | Purpose |
|-------|------|---------|
| rpc_request_id | u32 | Must match the original request's rpc_request_id for correlation |
| rpc_method_id | u64 | Should match the original request's rpc_method_id |
| rpc_result_status | Option<u8> | Optional status byte (by convention, 0 = success) |
| rpc_prebuffered_payload_bytes | Option<Vec<u8>> | Serialized response payload |
| is_finalized | bool | If true, stream ends immediately after header+payload |

Type Definition

Defined at src/rpc/rpc_request_response.rs:40-76:
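
The definition is not reproduced here; inferred from the field table above, its shape is approximately:

```rust
// Approximate shape of RpcResponse; not the verbatim definition.
pub struct RpcResponse {
    pub rpc_request_id: u32,                            // must echo the original request ID
    pub rpc_method_id: u64,                             // should echo the original method ID
    pub rpc_result_status: Option<u8>,                  // 0 = success by convention
    pub rpc_prebuffered_payload_bytes: Option<Vec<u8>>, // serialized response payload
    pub is_finalized: bool,                             // end the stream right after header+payload
}
```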

Construction from RpcHeader

The from_rpc_header() method provides a convenience constructor for server-side response creation at src/rpc/rpc_request_response.rs:90-103:

Usage Example

Server-side response construction from tests/rpc_dispatcher_tests.rs:161-167:
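
The test code itself is not reproduced here; a sketch of the same construction, with request, request_id, and encoded_result as placeholders:

```rust
// Build a reply for a finished inbound request.
let response = RpcResponse {
    rpc_request_id: request_id,           // echo the original request's ID
    rpc_method_id: request.rpc_method_id, // echo the method ID
    rpc_result_status: Some(0),           // 0 = success by convention
    rpc_prebuffered_payload_bytes: Some(encoded_result),
    is_finalized: true,
};
// Then hand it to the dispatcher, e.g. dispatcher.respond(response, ...).
```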

Conversion to RpcHeader

When RpcDispatcher::respond() is invoked, the RpcResponse is converted to an RpcHeader at src/rpc/rpc_dispatcher.rs:307-319:

Sources:


sequenceDiagram
    participant Client as "Client Code"
    participant Disp as "RpcDispatcher"
    participant IDGen as "next_rpc_request_id\n(u32 counter)"
    participant Queue as "rpc_request_queue\nArc<Mutex<VecDeque>>"
    
    Note over Client,Queue: Request Path
    Client->>Disp: call(RpcRequest)
    Disp->>IDGen: Allocate ID
    IDGen-->>Disp: rpc_request_id = N
    Disp->>Disp: Build RpcHeader with\nrpc_request_id=N
    Note over Disp: Transmit request frames...
    
    Note over Client,Queue: Response Path
    Disp->>Disp: Receive response frames
    Disp->>Disp: Parse RpcHeader from frames
    Disp->>Queue: Find request by rpc_request_id=N
    Queue-->>Disp: Matched RpcRequest entry
    Disp->>Disp: Invoke response handler
    Disp->>Queue: Remove entry on End/Error

Request-Response Correlation Mechanism

The system correlates requests and responses using the rpc_request_id field, which is managed by the dispatcher’s monotonic ID generator.

ID Generation

The dispatcher maintains a monotonic counter at src/rpc/rpc_dispatcher.rs:42:

Each call increments this counter using increment_u32_id() at src/rpc/rpc_dispatcher.rs:241-242:

Request Queue Management

Inbound requests (responses from the remote peer) are tracked in a shared queue at src/rpc/rpc_dispatcher.rs:50:

The queue is populated by the catch-all response handler at src/rpc/rpc_dispatcher.rs:122-141:

  • Header Event : Creates new RpcRequest entry with rpc_request_id
  • PayloadChunk Event : Appends bytes to matching request’s payload
  • End Event : Marks request as finalized

Requests can be retrieved and removed using get_rpc_request(), is_rpc_request_finalized(), and delete_rpc_request(), described under Queue Operation Methods on the RPC Dispatcher page.

Sources:


graph LR
    subgraph "Client Side - Outbound Call"
        C1["User constructs\nRpcRequest"]
C2["RpcDispatcher::call()"]
C3["Convert to RpcHeader\nrpc_msg_type=Call"]
C4["RpcStreamEncoder\nSerialize + Frame"]
C5["Bytes on wire"]
end
    
    subgraph "Server Side - Inbound Call"
        S1["Bytes received"]
S2["RpcSession::read_bytes()"]
S3["Decode to RpcHeader"]
S4["RpcStreamEvent::Header\n+PayloadChunk\n+End"]
S5["Reconstruct RpcRequest\nin rpc_request_queue"]
S6["User retrieves\ndelete_rpc_request()"]
end
    
    subgraph "Server Side - Outbound Response"
        R1["User constructs\nRpcResponse"]
R2["RpcDispatcher::respond()"]
R3["Convert to RpcHeader\nrpc_msg_type=Response"]
R4["RpcStreamEncoder\nSerialize + Frame"]
R5["Bytes on wire"]
end
    
    subgraph "Client Side - Inbound Response"
        R6["Bytes received"]
R7["RpcSession::read_bytes()"]
R8["Decode to RpcHeader"]
R9["RpcStreamEvent fired\nto response_handler"]
R10["User processes\nresponse payload"]
end
    
 
   C1 --> C2
 
   C2 --> C3
 
   C3 --> C4
 
   C4 --> C5
    
    C5 -.network.-> S1
 
   S1 --> S2
 
   S2 --> S3
 
   S3 --> S4
 
   S4 --> S5
 
   S5 --> S6
    
 
   R1 --> R2
 
   R2 --> R3
 
   R3 --> R4
 
   R4 --> R5
    
    R5 -.network.-> R6
 
   R6 --> R7
 
   R7 --> R8
 
   R8 --> R9
 
   R9 --> R10

Data Flow Through Type Transformations

The following diagram illustrates how request and response data flows through type transformations:

Sources:


Field Semantics and Special Cases

Prebuffered Payloads

Both RpcRequest and RpcResponse support rpc_prebuffered_payload_bytes:

  • Purpose : Allows sending the entire payload in a single transmission without manual streaming
  • Transmission : Sent immediately after header via encoder.write_bytes() at src/rpc/rpc_dispatcher.rs:270-276 and src/rpc/rpc_dispatcher.rs:327-329
  • Use Case : Suitable for small to medium-sized payloads where chunking overhead is undesirable

Finalization Flag

The is_finalized field controls stream lifecycle: when true, the stream is closed (an End frame is emitted) immediately after the header and any prebuffered payload are sent; when false, the stream remains open so additional payload chunks can be written through the returned encoder.

Result Status Conventions

While the core library does not enforce semantics for rpc_result_status, the following conventions are commonly used:

| Value | Meaning |
|-------|---------|
| Some(0) | Success |
| Some(1) | Generic error |
| Some(2+) | Custom error codes |
| None | No status information |

This convention is referenced in the documentation at src/rpc/rpc_request_response.rs:61-62

Sources:


Complete Request-Response Lifecycle Example

The following table illustrates a complete request-response cycle from the test suite at tests/rpc_dispatcher_tests.rs:42-198:

| Step | Location | Action | Data Structure |
|------|----------|--------|----------------|
| 1 | Client | Construct request | RpcRequest { method_id: ADD_METHOD_ID, param_bytes: Some(encoded), prebuffered_payload: None, is_finalized: true } |
| 2 | Client | Call dispatcher | client_dispatcher.call(rpc_request, 4, on_emit, on_response, true) |
| 3 | Client | Convert to header | RpcHeader { msg_type: Call, request_id: 1, method_id: ADD_METHOD_ID, metadata: encoded_params } |
| 4 | Transport | Transmit bytes | Binary frames written to outgoing_buf |
| 5 | Server | Receive bytes | server_dispatcher.read_bytes(chunk) |
| 6 | Server | Decode to events | RpcStreamEvent::Header, RpcStreamEvent::End |
| 7 | Server | Reconstruct request | Entry added to rpc_request_queue with (request_id, RpcRequest) |
| 8 | Server | Retrieve request | server_dispatcher.delete_rpc_request(request_id) |
| 9 | Server | Process and respond | RpcResponse { request_id: 1, method_id: ADD_METHOD_ID, result_status: Some(0), payload: encoded_result, is_finalized: true } |
| 10 | Server | Convert to header | RpcHeader { msg_type: Response, request_id: 1, method_id: ADD_METHOD_ID, metadata: [0] } |
| 11 | Transport | Transmit response | Binary frames via server_dispatcher.respond() |
| 12 | Client | Receive response | client_dispatcher.read_bytes() routes to on_response handler |
| 13 | Client | Process result | User code decodes payload from RpcStreamEvent::PayloadChunk |

Sources:



RPC Framework

Relevant source files

Purpose and Scope

This document provides a comprehensive overview of the RPC (Remote Procedure Call) abstraction layer in the rust-muxio system. The RPC framework is built on top of the core muxio multiplexing library and provides a structured, type-safe mechanism for defining and invoking remote methods across client-server boundaries.

The RPC framework consists of three primary components distributed across separate crates:

For details on specific transport implementations that use this RPC framework, see Transport Implementations. For information on the underlying multiplexing and framing protocol, see Core Library (muxio).


Architecture Overview

The RPC framework operates as a middleware layer between application code and the underlying muxio multiplexing protocol. It provides compile-time type safety while maintaining flexibility in serialization and transport choices.

graph TB
    subgraph "Application Layer"
        APP["Application Code\nType-safe method calls"]
end
    
    subgraph "RPC Service Definition Layer"
        SERVICE["muxio-rpc-service"]
TRAIT["RpcMethodPrebuffered\nRpcMethodStreaming traits"]
METHOD_ID["METHOD_ID generation\nxxhash at compile-time"]
ENCODE["encode_request/response\ndecode_request/response"]
SERVICE --> TRAIT
 
       SERVICE --> METHOD_ID
 
       SERVICE --> ENCODE
    end
    
    subgraph "Client Side"
        CALLER["muxio-rpc-service-caller"]
CALLER_IFACE["RpcServiceCallerInterface"]
PREBUF_CALL["call_prebuffered"]
STREAM_CALL["call_streaming"]
CALLER --> CALLER_IFACE
 
       CALLER_IFACE --> PREBUF_CALL
 
       CALLER_IFACE --> STREAM_CALL
    end
    
    subgraph "Server Side"
        ENDPOINT["muxio-rpc-service-endpoint"]
ENDPOINT_IFACE["RpcServiceEndpointInterface"]
REGISTER_PREBUF["register_prebuffered"]
REGISTER_STREAM["register_streaming"]
ENDPOINT --> ENDPOINT_IFACE
 
       ENDPOINT_IFACE --> REGISTER_PREBUF
 
       ENDPOINT_IFACE --> REGISTER_STREAM
    end
    
    subgraph "Core Multiplexing Layer"
        DISPATCHER["RpcDispatcher"]
MUXIO_CORE["muxio core\nBinary framing protocol"]
DISPATCHER --> MUXIO_CORE
    end
    
 
   APP --> TRAIT
 
   APP --> CALLER_IFACE
    
    TRAIT -.shared definitions.-> CALLER
    TRAIT -.shared definitions.-> ENDPOINT
    
 
   CALLER --> DISPATCHER
 
   ENDPOINT --> DISPATCHER
    
    PREBUF_CALL -.invokes.-> DISPATCHER
    STREAM_CALL -.invokes.-> DISPATCHER
    REGISTER_PREBUF -.handles via.-> DISPATCHER
    REGISTER_STREAM -.handles via.-> DISPATCHER

RPC Framework Component Structure

Sources:


Core RPC Components

The RPC framework is divided into three specialized crates, each with a distinct responsibility in the RPC lifecycle.

Component Responsibilities

| Crate | Primary Responsibility | Key Traits/Types | Dependencies |
|-------|------------------------|------------------|--------------|
| muxio-rpc-service | Service definition contracts | RpcMethodPrebuffered, RpcMethodStreaming, METHOD_ID | muxio, bitcode, xxhash-rust, num_enum |
| muxio-rpc-service-caller | Client-side invocation | RpcServiceCallerInterface, call_prebuffered, call_streaming | muxio, muxio-rpc-service, futures |
| muxio-rpc-service-endpoint | Server-side dispatch | RpcServiceEndpointInterface, register_prebuffered, register_streaming | muxio, muxio-rpc-service, muxio-rpc-service-caller |

Sources:


RPC Method Definition and Identification

The foundation of the RPC framework is the method definition system, which establishes compile-time contracts between clients and servers.

graph LR
    subgraph "Compile Time"
        METHOD_NAME["Method Name String\ne.g., 'Add'"]
XXHASH["xxhash-rust\nconst_xxh3"]
METHOD_ID["METHOD_ID: u64\nCompile-time constant"]
METHOD_NAME --> XXHASH
 
       XXHASH --> METHOD_ID
    end
    
    subgraph "Service Definition Trait"
        TRAIT_IMPL["RpcMethodPrebuffered impl"]
CONST_ID["const METHOD_ID"]
ENCODE_REQ["encode_request"]
DECODE_REQ["decode_request"]
ENCODE_RESP["encode_response"]
DECODE_RESP["decode_response"]
TRAIT_IMPL --> CONST_ID
 
       TRAIT_IMPL --> ENCODE_REQ
 
       TRAIT_IMPL --> DECODE_REQ
 
       TRAIT_IMPL --> ENCODE_RESP
 
       TRAIT_IMPL --> DECODE_RESP
    end
    
    subgraph "Bitcode Serialization"
        BITCODE["bitcode crate"]
PARAMS["Request/Response types\nSerialize + Deserialize"]
ENCODE_REQ --> BITCODE
 
       DECODE_REQ --> BITCODE
 
       ENCODE_RESP --> BITCODE
 
       DECODE_RESP --> BITCODE
 
       BITCODE --> PARAMS
    end
    
 
   METHOD_ID --> CONST_ID

Method ID Generation Process

The METHOD_ID is a u64 value generated at compile time by hashing the method name using xxhash-rust. This approach ensures:

  • Collision prevention : Hash-based IDs virtually eliminate accidental collisions
  • Zero runtime overhead : IDs are compile-time constants
  • Version independence : Method IDs remain stable across compilations

Sources:


Type Safety Through Shared Definitions

The RPC framework enforces type safety by requiring both client and server to depend on the same service definition crate. This creates a compile-time contract that prevents API mismatches.

sequenceDiagram
    participant DEV as "Developer"
    participant DEF as "Service Definition Crate"
    participant CLIENT as "Client Crate"
    participant SERVER as "Server Crate"
    participant COMPILER as "Rust Compiler"
    
    DEV->>DEF: Define RpcMethodPrebuffered
    DEF->>DEF: Generate METHOD_ID
    DEF->>DEF: Define Request/Response types
    
    DEV->>CLIENT: Add dependency on DEF
    DEV->>SERVER: Add dependency on DEF
    
    CLIENT->>DEF: Import method traits
    SERVER->>DEF: Import method traits
    
    CLIENT->>COMPILER: Compile with encode_request
    SERVER->>COMPILER: Compile with decode_request
    
    alt Type Mismatch
        COMPILER->>DEV: Compilation Error
    else Types Match
        COMPILER->>CLIENT: Successful build
        COMPILER->>SERVER: Successful build
    end
    
    Note over CLIENT,SERVER: Both use identical\nMETHOD_ID and data structures

Shared Definition Workflow

This workflow demonstrates how compile-time validation eliminates an entire class of runtime errors. If the client attempts to send a request with a different structure than what the server expects, the code will not compile.

Sources:


sequenceDiagram
    participant APP as "Application Code"
    participant METHOD as "Method::call()\nRpcMethodPrebuffered"
    participant CALLER as "RpcServiceCallerInterface"
    participant DISP as "RpcDispatcher"
    participant FRAME as "Binary Framing Layer"
    participant TRANSPORT as "Transport\n(WebSocket, etc.)"
    participant ENDPOINT as "RpcServiceEndpointInterface"
    participant HANDLER as "Registered Handler"
    
    APP->>METHOD: call(params)
    METHOD->>METHOD: encode_request(params) → bytes
    METHOD->>CALLER: call_prebuffered(METHOD_ID, bytes)
    
    CALLER->>DISP: send_request(method_id, request_bytes)
    DISP->>DISP: Assign unique request_id
    DISP->>FRAME: Serialize to binary frames
    FRAME->>TRANSPORT: Transmit frames
    
    TRANSPORT->>FRAME: Receive frames
    FRAME->>DISP: Reassemble frames
    DISP->>DISP: Lookup handler by METHOD_ID
    DISP->>ENDPOINT: dispatch_to_handler(METHOD_ID, bytes)
    ENDPOINT->>HANDLER: invoke(request_bytes, context)
    
    HANDLER->>METHOD: decode_request(bytes) → params
    HANDLER->>HANDLER: Process business logic
    HANDLER->>METHOD: encode_response(result) → bytes
    HANDLER->>ENDPOINT: Return response_bytes
    
    ENDPOINT->>DISP: send_response(request_id, bytes)
    DISP->>FRAME: Serialize to binary frames
    FRAME->>TRANSPORT: Transmit frames
    
    TRANSPORT->>FRAME: Receive frames
    FRAME->>DISP: Reassemble frames
    DISP->>DISP: Match request_id to pending call
    DISP->>CALLER: resolve_future(request_id, bytes)
    CALLER->>METHOD: decode_response(bytes) → result
    METHOD->>APP: Return typed result

RPC Call Flow

Understanding how an RPC call travels through the system is essential for debugging and optimization.

Complete RPC Invocation Sequence

Key observations:

  • The METHOD_ID is used for routing on the server side
  • The request_id (assigned by the dispatcher) is used for correlation
  • All serialization/deserialization happens at the method trait level
  • The dispatcher only handles raw bytes

Sources:


Prebuffered vs. Streaming RPC

The RPC framework supports two distinct calling patterns, each optimized for different use cases.

RPC Pattern Comparison

| Aspect | Prebuffered RPC | Streaming RPC |
|--------|-----------------|---------------|
| Request Size | Complete request buffered in memory | Request can be sent in chunks |
| Response Size | Complete response buffered in memory | Response can be received in chunks |
| Memory Usage | Higher for large payloads | Lower, constant memory footprint |
| Latency | Lower for small payloads | Higher initial latency, better throughput |
| Trait | RpcMethodPrebuffered | RpcMethodStreaming |
| Use Cases | Small to medium payloads (< 10MB) | Large payloads, file transfers, real-time data |
| Multiplexing | Multiple calls can be concurrent | Streams can be interleaved |

Sources:


classDiagram
    class RpcServiceCallerInterface {<<trait>>\n+call_prebuffered(method_id: u64, params: Option~Vec~u8~~, payload: Option~Vec~u8~~) Future~Result~Vec~u8~~~\n+call_streaming(method_id: u64, params: Option~Vec~u8~~) Future~Result~StreamResponse~~\n+get_transport_state() RpcTransportState\n+set_state_change_handler(handler: Fn) Future}
    
    class RpcTransportState {<<enum>>\nConnecting\nConnected\nDisconnected\nFailed}
    
    class RpcClient {+new(host, port) RpcClient\nimplements RpcServiceCallerInterface}
    
    class RpcWasmClient {+new(url) RpcWasmClient\nimplements RpcServiceCallerInterface}
    
    class CustomClient {+new(...) CustomClient\nimplements RpcServiceCallerInterface}
    
    RpcServiceCallerInterface <|.. RpcClient : implements
    RpcServiceCallerInterface <|.. RpcWasmClient : implements
    RpcServiceCallerInterface <|.. CustomClient : implements
    RpcServiceCallerInterface --> RpcTransportState : returns

Client-Side: RpcServiceCallerInterface

The client-side RPC invocation is abstracted through the RpcServiceCallerInterface trait, which allows different transport implementations to provide identical calling semantics.

RpcServiceCallerInterface Contract

This design allows application code to be written once against RpcServiceCallerInterface and work with any compliant transport implementation (Tokio, WASM, custom transports, etc.).

Sources:


classDiagram
    class RpcServiceEndpointInterface {<<trait>>\n+register_prebuffered(method_id: u64, handler: Fn) Future~Result~~~\n+register_streaming(method_id: u64, handler: Fn) Future~Result~~~\n+unregister(method_id: u64) Future~Result~~~\n+is_registered(method_id: u64) Future~bool~}
    
    class HandlerContext {+client_id: Option~String~\n+metadata: HashMap~String, String~}
    
    class PrebufferedHandler {<<function>>\n+Fn(Vec~u8~, HandlerContext) Future~Result~Vec~u8~~~}
    
    class StreamingHandler {<<function>>\n+Fn(Option~Vec~u8~~, DynamicChannel, HandlerContext) Future~Result~~~}
    
    class RpcServer {
        +new(config) RpcServer
        +endpoint() Arc~RpcServiceEndpointInterface~
        +serve_with_listener(listener) Future
    }
    
    RpcServiceEndpointInterface --> PrebufferedHandler : accepts
    RpcServiceEndpointInterface --> StreamingHandler : accepts
    RpcServiceEndpointInterface --> HandlerContext : provides
    RpcServer --> RpcServiceEndpointInterface : provides

Server-Side: RpcServiceEndpointInterface

The server-side request handling is abstracted through the RpcServiceEndpointInterface trait, which manages method registration and dispatch.

RpcServiceEndpointInterface Contract

Handlers are registered by METHOD_ID and receive:

  1. Request bytes : The serialized request parameters (for prebuffered) or initial params (for streaming)
  2. Context : Metadata about the client and connection
  3. Dynamic channel (streaming only): For incremental data transmission

Sources:


Data Serialization with Bitcode

The RPC framework uses the bitcode crate for binary serialization. This provides compact, efficient encoding of Rust types.

Serialization Requirements

For a type to be used in RPC method definitions, it must implement:

  • serde::Serialize - For encoding
  • serde::Deserialize - For decoding

The bitcode crate provides these implementations for most standard Rust types, including:

  • Primitive types (u64, f64, bool, etc.)
  • Standard collections (Vec<T>, HashMap<K, V>, etc.)
  • Custom structs with #[derive(Serialize, Deserialize)]
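
As an example, a pair of request/response parameter types for a hypothetical Add method could be declared as shown below; deriving serde's traits satisfies the bounds listed above, and the framework performs the actual bitcode encoding and decoding:

```rust
use serde::{Deserialize, Serialize};

// Hypothetical parameter and result types for an "Add" method.
#[derive(Serialize, Deserialize, Debug, PartialEq)]
pub struct AddRequestParams {
    pub values: Vec<f64>,
}

#[derive(Serialize, Deserialize, Debug, PartialEq)]
pub struct AddResponseParams {
    pub sum: f64,
}
```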

Serialization Flow

The compact binary format of bitcode significantly reduces payload sizes compared to JSON or other text-based formats, contributing to the framework’s low-latency characteristics.

Sources:


Method Registration and Dispatch

On the server side, methods must be registered with the endpoint before they can be invoked. The registration process associates a METHOD_ID with a handler function.

Handler Registration Pattern

From the example application, the registration pattern is:
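
The example code is not reproduced here; a hedged sketch follows. The closure shape follows the PrebufferedHandler signature shown earlier on this page (Fn(Vec<u8>, HandlerContext) returning a Future of Result<Vec<u8>>), while endpoint, the Add service definition, and the error handling are placeholders:

```rust
// Sketch only: registration of a prebuffered handler keyed by METHOD_ID.
endpoint
    .register_prebuffered(Add::METHOD_ID, |request_bytes, _ctx| async move {
        // Decode the request with the shared service definition...
        let params = Add::decode_request(&request_bytes)?;
        // ...run the business logic...
        let sum: f64 = params.values.iter().sum();
        // ...and encode the typed result back into bytes.
        Ok(Add::encode_response(sum)?)
    })
    .await?;
```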

Registration Lifecycle

Once registered, handlers remain active until explicitly unregistered or the server shuts down. Multiple concurrent invocations of the same handler are supported through the underlying multiplexing layer.

Sources:


graph TB
    subgraph "Shared Application Logic"
        APP_CODE["Application Code\nPlatform-agnostic"]
METHOD_CALL["Method::call(&client, params)"]
APP_CODE --> METHOD_CALL
    end
    
    subgraph "Native Platform"
        TOKIO_CLIENT["RpcClient\n(Tokio-based)"]
TOKIO_RUNTIME["Tokio async runtime"]
TOKIO_WS["tokio-tungstenite\nWebSocket"]
METHOD_CALL -.uses.-> TOKIO_CLIENT
 
       TOKIO_CLIENT --> TOKIO_RUNTIME
 
       TOKIO_CLIENT --> TOKIO_WS
    end
    
    subgraph "Web Platform"
        WASM_CLIENT["RpcWasmClient\n(WASM-based)"]
WASM_RUNTIME["Browser event loop"]
WASM_WS["JavaScript WebSocket API\nvia wasm-bindgen"]
METHOD_CALL -.uses.-> WASM_CLIENT
 
       WASM_CLIENT --> WASM_RUNTIME
 
       WASM_CLIENT --> WASM_WS
    end
    
    subgraph "Custom Platform"
        CUSTOM_CLIENT["Custom RpcClient\nimplements RpcServiceCallerInterface"]
CUSTOM_TRANSPORT["Custom Transport"]
METHOD_CALL -.uses.-> CUSTOM_CLIENT
 
       CUSTOM_CLIENT --> CUSTOM_TRANSPORT
    end

Cross-Platform RPC Invocation

A key design goal of the RPC framework is enabling the same application code to work across different platforms and transports. This is achieved through the abstraction provided by RpcServiceCallerInterface.

Platform-Agnostic Application Code

The application layer depends only on:

  1. The service definition crate (for method traits)
  2. The RpcServiceCallerInterface trait (for invocation)

This allows the same business logic to run in servers, native desktop applications, mobile apps, and web browsers with minimal platform-specific code.

Sources:


graph TD
    subgraph "Application-Level Errors"
        BIZ_ERR["Business Logic Errors\nDomain-specific"]
end
    
    subgraph "RPC Framework Errors"
        RPC_ERR["RpcServiceError"]
METHOD_NOT_FOUND["MethodNotFound\nInvalid METHOD_ID"]
ENCODING_ERR["EncodingError\nSerialization failure"]
SYSTEM_ERR["SystemError\nInternal dispatcher error"]
TRANSPORT_ERR["TransportError\nNetwork failure"]
RPC_ERR --> METHOD_NOT_FOUND
 
       RPC_ERR --> ENCODING_ERR
 
       RPC_ERR --> SYSTEM_ERR
 
       RPC_ERR --> TRANSPORT_ERR
    end
    
    subgraph "Core Layer Errors"
        CORE_ERR["Muxio Core Errors\nFraming protocol errors"]
end
    
    BIZ_ERR -.propagates through.-> RPC_ERR
    TRANSPORT_ERR -.wraps.-> CORE_ERR

Error Handling in RPC

The RPC framework uses Rust’s Result type throughout, with error types defined at the appropriate abstraction levels.

RPC Error Hierarchy

Error handling patterns:

  • Client-side : Errors are returned as Result<T, E> from RPC calls
  • Server-side : Handler errors are serialized and transmitted back to the client
  • Transport errors : Automatically trigger state changes (see RpcTransportState)

For detailed error type definitions, see Error Handling.

Sources:


Performance Characteristics

The RPC framework is designed for low latency and high throughput. Key performance features include:

Performance Optimizations

| Feature | Benefit | Implementation |
|---------|---------|----------------|
| Compile-time method IDs | Zero runtime hash overhead | xxhash-rust with const_xxh3 |
| Binary serialization | Smaller payload sizes | bitcode crate |
| Minimal frame headers | Reduced per-message overhead | Custom binary protocol |
| Request multiplexing | Concurrent calls over a single connection | RpcDispatcher correlation |
| Zero-copy streaming | Reduced memory allocations | DynamicChannel for chunked data |
| Callback-driven dispatch | No polling overhead | Async handlers with futures |

The combination of these optimizations makes the RPC framework suitable for:

  • Low-latency trading systems
  • Real-time gaming
  • Interactive remote tooling
  • High-throughput data processing

Sources:


graph TB
    subgraph "RPC Abstraction Layer"
        CALLER_IF["RpcServiceCallerInterface"]
ENDPOINT_IF["RpcServiceEndpointInterface"]
end
    
    subgraph "Core Dispatcher"
        DISPATCHER["RpcDispatcher\nRequest correlation"]
SEND_CB["send_callback\nVec&lt;u8&gt; → ()"]
RECV_CB["receive_callback\n() → Vec&lt;u8&gt;"]
end
    
    subgraph "Tokio WebSocket Transport"
        TOKIO_SERVER["TokioRpcServer"]
TOKIO_CLIENT["TokioRpcClient"]
TUNGSTENITE["tokio-tungstenite"]
TOKIO_SERVER --> TUNGSTENITE
 
       TOKIO_CLIENT --> TUNGSTENITE
    end
    
    subgraph "WASM WebSocket Transport"
        WASM_CLIENT["WasmRpcClient"]
JS_BRIDGE["wasm-bindgen bridge"]
BROWSER_WS["Browser WebSocket API"]
WASM_CLIENT --> JS_BRIDGE
 
       JS_BRIDGE --> BROWSER_WS
    end
    
    CALLER_IF -.implemented by.-> TOKIO_CLIENT
    CALLER_IF -.implemented by.-> WASM_CLIENT
    ENDPOINT_IF -.implemented by.-> TOKIO_SERVER
    
 
   TOKIO_CLIENT --> DISPATCHER
 
   WASM_CLIENT --> DISPATCHER
 
   TOKIO_SERVER --> DISPATCHER
    
 
   DISPATCHER --> SEND_CB
 
   DISPATCHER --> RECV_CB
    
    SEND_CB -.invokes.-> TUNGSTENITE
    RECV_CB -.invokes.-> TUNGSTENITE
    SEND_CB -.invokes.-> JS_BRIDGE
    RECV_CB -.invokes.-> JS_BRIDGE

Integration with Transport Layer

The RPC framework is designed to be transport-agnostic, with concrete implementations provided for common scenarios.

Transport Integration Points

The RpcDispatcher accepts callbacks for sending and receiving bytes, allowing it to work with any transport mechanism. This design enables:

  • WebSocket transports (Tokio and WASM implementations provided)
  • TCP socket transports
  • In-memory transports (for testing)
  • Custom transports (by providing appropriate callbacks)

For implementation details of specific transports, see Transport Implementations.

Sources:



Service Definitions

Relevant source files

Purpose and Scope

Service definitions provide compile-time type-safe RPC method contracts that are shared between client and server implementations. The muxio-rpc-service crate defines the core traits and utilities for declaring RPC methods with automatic method ID generation and efficient binary serialization. Service definitions serve as the single source of truth for RPC interfaces, ensuring that both sides of the communication agree on method signatures, parameter types, and return types at compile time.

For information about implementing client-side invocation logic, see Service Caller Interface. For server-side handler registration, see Service Endpoint Interface. For a step-by-step guide on creating your own service definitions, see Creating Service Definitions.


Core Architecture

The service definition layer sits at the top of the RPC abstraction layer, providing the foundation for type-safe communication. It bridges application-level Rust types with the underlying binary protocol.

Sources:

graph TB
    subgraph "Application Layer"
        APP["Application Code\nBusiness Logic"]
end
    
    subgraph "Service Definition Layer"
        TRAIT["RpcMethodPrebuffered Trait\nMethod Signature Declaration"]
METHODID["METHOD_ID Constant\nxxhash::xxh3_64(method_name)"]
PARAMS["Parameter Types\nSerialize + Deserialize"]
RESULT["Result Types\nSerialize + Deserialize"]
end
    
    subgraph "RPC Framework Layer"
        CALLER["RpcServiceCallerInterface\nClient Invocation"]
ENDPOINT["RpcServiceEndpointInterface\nServer Dispatch"]
SERIALIZER["bitcode::encode/decode\nBinary Serialization"]
end
    
 
   APP --> TRAIT
 
   TRAIT --> METHODID
 
   TRAIT --> PARAMS
 
   TRAIT --> RESULT
 
   CALLER --> METHODID
 
   ENDPOINT --> METHODID
 
   PARAMS --> SERIALIZER
 
   RESULT --> SERIALIZER

The muxio-rpc-service Crate

The muxio-rpc-service crate provides the foundational types and traits for defining RPC services. It has minimal dependencies to remain runtime-agnostic and platform-independent.

Dependencies

| Dependency | Purpose |
|------------|---------|
| async-trait | Enables async trait methods for service definitions |
| futures | Provides stream abstractions for streaming RPC methods |
| muxio | Core framing and multiplexing primitives |
| num_enum | Discriminated union encoding for message types |
| xxhash-rust | Fast compile-time hash generation for method IDs |
| bitcode | Compact binary serialization for parameters and results |

Sources:


The RpcMethodPrebuffered Trait

The RpcMethodPrebuffered trait is the primary mechanism for defining RPC methods. It specifies the method signature, parameter types, result types, and automatically generates a unique method identifier.

graph LR
    subgraph "RpcMethodPrebuffered Trait Definition"
        NAME["const NAME: &'static str\nHuman-readable method name"]
METHODID["const METHOD_ID: u64\nxxh3_64(NAME)
at compile time"]
PARAMS["type Params\nSerialize + Deserialize + Send"]
RESULT["type Result\nSerialize + Deserialize + Send"]
end
    
    subgraph "Example: AddMethod"
        NAME_EX["NAME = 'Add'"]
METHODID_EX["METHOD_ID = 0x5f8b3c4a2e1d6f90"]
PARAMS_EX["Params = (i32, i32)"]
RESULT_EX["Result = i32"]
end
    
 
   NAME --> NAME_EX
 
   METHODID --> METHODID_EX
 
   PARAMS --> PARAMS_EX
 
   RESULT --> RESULT_EX

Key Components

| Component | Type | Description |
|-----------|------|-------------|
| NAME | const &'static str | Human-readable method name (e.g., "Add", "Multiply") |
| METHOD_ID | const u64 | Compile-time hash of NAME using xxhash |
| Params | Associated Type | Input parameter type, must implement Serialize + Deserialize + Send |
| Result | Associated Type | Return value type, must implement Serialize + Deserialize + Send |
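
A hedged sketch matching the AddMethod example in the diagram above. A local sketch trait is used instead of the real RpcMethodPrebuffered (which also carries encode/decode helpers not listed in this table); the METHOD_ID computation mirrors the compile-time xxhash approach described here:

```rust
use xxhash_rust::const_xxh3::xxh3_64;

// Local sketch of the trait components from the table above.
pub trait RpcMethodPrebufferedSketch {
    const NAME: &'static str;
    const METHOD_ID: u64;
    type Params;
    type Result;
}

pub struct AddMethod;

impl RpcMethodPrebufferedSketch for AddMethod {
    const NAME: &'static str = "Add";
    // Compile-time hash of the method name.
    const METHOD_ID: u64 = xxh3_64("Add".as_bytes());
    type Params = (i32, i32);
    type Result = i32;
}
```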

Type Safety Guarantees

Service definitions enforce several type safety invariants:

  1. Compile-time method identification : Method IDs are computed at compile time from method names
  2. Consistent serialization : Both client and server use the same bitcode schema for parameters
  3. Type mismatch detection : Mismatched parameter or result types cause compilation errors
  4. Zero-cost abstraction : Method dispatch has no runtime overhead beyond the hash lookup

Sources:


Method ID Generation with xxhash

Method IDs are 64-bit unsigned integers generated at compile time by hashing the method name. This approach provides efficient dispatch while maintaining human-readable method names in code.

graph LR
    subgraph "Compile Time"
        METHNAME["Method Name (String)\ne.g., 'Add', 'Multiply', 'Echo'"]
HASH["xxhash::xxh3_64(bytes)"]
METHODID["METHOD_ID: u64\ne.g., 0x5f8b3c4a2e1d6f90"]
METHNAME --> HASH
 
       HASH --> METHODID
    end
    
    subgraph "Runtime - Client"
        CLIENT_CALL["Client calls method"]
CLIENT_ENCODE["RpcRequest::method_id = METHOD_ID"]
CLIENT_SEND["Send binary request"]
CLIENT_CALL --> CLIENT_ENCODE
 
       CLIENT_ENCODE --> CLIENT_SEND
    end
    
    subgraph "Runtime - Server"
        SERVER_RECV["Receive binary request"]
SERVER_DECODE["Extract method_id from RpcRequest"]
SERVER_MATCH["Match method_id to handler"]
SERVER_EXEC["Execute handler"]
SERVER_RECV --> SERVER_DECODE
 
       SERVER_DECODE --> SERVER_MATCH
 
       SERVER_MATCH --> SERVER_EXEC
    end
    
 
   METHODID --> CLIENT_ENCODE
 
   METHODID --> SERVER_MATCH

Method ID Properties

| Property | Description |
|----------|-------------|
| Size | 64-bit unsigned integer |
| Generation | Compile-time hash using the xxh3_64 algorithm |
| Collision Resistance | Extremely low probability of collision for reasonable method counts |
| Performance | Single integer comparison for method dispatch |
| Stability | Same method name always produces the same ID across compilations |

Benefits of Compile-Time Method IDs

  1. No string comparison overhead : Method dispatch uses integer comparison instead of string matching
  2. Compact wire format : Only 8 bytes sent over the network instead of method name strings
  3. Automatic generation : No manual assignment of method IDs required
  4. Type-safe verification : Compile-time guarantee that client and server agree on method IDs

Sources:


graph TB
    subgraph "Parameter Encoding"
        RUSTPARAM["Rust Type\ne.g., (i32, String, Vec<u8>)"]
BITCODEENC["bitcode::encode(params)"]
BINARY["Compact Binary Payload\nVariable-length encoding"]
RUSTPARAM --> BITCODEENC
 
       BITCODEENC --> BINARY
    end
    
    subgraph "RPC Request Structure"
        RPCREQ["RpcRequest"]
REQMETHOD["method_id: u64"]
REQPARAMS["params: Vec<u8>"]
RPCREQ --> REQMETHOD
 
       RPCREQ --> REQPARAMS
    end
    
    subgraph "Result Decoding"
        RESPBINARY["Binary Payload"]
BITCODEDEC["bitcode::decode::<T>(bytes)"]
RUSTRESULT["Rust Type\ne.g., Result<String, Error>"]
RESPBINARY --> BITCODEDEC
 
       BITCODEDEC --> RUSTRESULT
    end
    
 
   BINARY --> REQPARAMS
    REQPARAMS -.Wire Protocol.-> RESPBINARY

Serialization with Bitcode

All RPC parameters and results are serialized using the bitcode crate, which provides compact binary encoding with built-in support for Rust types.

Bitcode Characteristics

| Characteristic | Description |
|----------------|-------------|
| Encoding | Compact binary format with variable-length integers |
| Schema | Schemaless - structure implied by Rust types |
| Performance | Zero-copy deserialization where possible |
| Type Support | Built-in support for standard Rust types (primitives, tuples, Vec, HashMap, etc.) |
| Versioning | Field order and type changes require coordinated updates |

Serialization Requirements

For a type to be used as Params or Result in an RPC method definition, it must implement:

Serialize + Deserialize + Send

These bounds ensure:

  • The type can be encoded to binary (Serialize)
  • The type can be decoded from binary (Deserialize)
  • The type can be safely sent across thread boundaries (Send)

Sources:


graph TB
    subgraph "Service Definition Crate"
        CRATE["example-muxio-rpc-service-definition"]
subgraph "Method Definitions"
            ADD["AddMethod\nNAME: 'Add'\nParams: (i32, i32)\nResult: i32"]
MULT["MultiplyMethod\nNAME: 'Multiply'\nParams: (i32, i32)\nResult: i32"]
ECHO["EchoMethod\nNAME: 'Echo'\nParams: String\nResult: String"]
end
        
 
       CRATE --> ADD
 
       CRATE --> MULT
 
       CRATE --> ECHO
    end
    
    subgraph "Consumer Crates"
        CLIENT["Client Application\nUses methods via RpcServiceCallerInterface"]
SERVER["Server Application\nImplements handlers via RpcServiceEndpointInterface"]
end
    
 
   ADD --> CLIENT
 
   MULT --> CLIENT
 
   ECHO --> CLIENT
 
   ADD --> SERVER
 
   MULT --> SERVER
 
   ECHO --> SERVER

Service Definition Structure

A complete service definition typically consists of multiple method trait implementations grouped together. Here’s the conceptual structure:

Typical Crate Layout

example-muxio-rpc-service-definition/
├── Cargo.toml
│   ├── [dependency] muxio-rpc-service
│   └── [dependency] bitcode
└── src/
    └── lib.rs
        ├── struct AddMethod;
        ├── impl RpcMethodPrebuffered for AddMethod { ... }
        ├── struct MultiplyMethod;
        ├── impl RpcMethodPrebuffered for MultiplyMethod { ... }
        └── ...

Sources:


graph TB
    subgraph "Service Definition"
        SERVICEDEF["RpcMethodPrebuffered Implementation\n- NAME\n- METHOD_ID\n- Params\n- Result"]
end
    
    subgraph "Client Side"
        CALLERIFACE["RpcServiceCallerInterface"]
CALLER_INVOKE["call_method<<M: RpcMethodPrebuffered>>()"]
DISPATCHER["RpcDispatcher"]
CALLERIFACE --> CALLER_INVOKE
 
       CALLER_INVOKE --> DISPATCHER
    end
    
    subgraph "Server Side"
        ENDPOINTIFACE["RpcServiceEndpointInterface"]
ENDPOINT_REGISTER["register<<M: RpcMethodPrebuffered>>()"]
HANDLER_MAP["HashMap<u64, Handler>"]
ENDPOINTIFACE --> ENDPOINT_REGISTER
 
       ENDPOINT_REGISTER --> HANDLER_MAP
    end
    
    subgraph "Wire Protocol"
        RPCREQUEST["RpcRequest\nmethod_id: u64\nparams: Vec<u8>"]
RPCRESPONSE["RpcResponse\nrequest_id: u64\nresult: Vec<u8>"]
end
    
 
   SERVICEDEF --> CALLER_INVOKE
 
   SERVICEDEF --> ENDPOINT_REGISTER
 
   DISPATCHER --> RPCREQUEST
 
   RPCREQUEST --> HANDLER_MAP
 
   HANDLER_MAP --> RPCRESPONSE
 
   RPCRESPONSE --> DISPATCHER

Integration with the RPC Framework

Service definitions integrate with the broader RPC framework through well-defined interfaces:

Compile-Time Guarantees

The service definition system provides several compile-time guarantees:

| Guarantee | Mechanism |
|-----------|-----------|
| Type Safety | Generic trait bounds enforce matching types across client/server |
| Method ID Uniqueness | Hashing function produces consistent IDs for method names |
| Serialization Compatibility | Shared trait implementations ensure the same encoding/decoding |
| Parameter Validation | Rust type system validates parameter structure at compile time |

Runtime Flow

  1. Client : Invokes method through RpcServiceCallerInterface::call::<MethodType>(params)
  2. Serialization : Parameters are encoded using bitcode::encode(params)
  3. Request Construction : RpcRequest created with METHOD_ID and serialized params
  4. Server Dispatch : Request routed to handler based on method_id lookup
  5. Handler Execution : Handler deserializes params, executes logic, serializes result
  6. Response Delivery : RpcResponse sent back with serialized result
  7. Client Deserialization : Result decoded using bitcode::decode::<ResultType>(bytes)

Sources:


Cross-Platform Compatibility

Service definitions are completely platform-agnostic. The same service definition crate can be used by:

  • Native Tokio-based clients and servers
  • WASM browser-based clients
  • Custom transport implementations
  • Different runtime environments (async/sync)

This cross-platform capability is achieved because:

  • Service definitions contain no platform-specific code
  • Serialization is handled by platform-agnostic bitcode
  • Method IDs are computed at compile time without runtime dependencies
  • The trait system provides compile-time polymorphism

Sources:


Summary

Service definitions in muxio provide:

  1. Type-Safe Contracts : Compile-time verified method signatures shared between client and server
  2. Efficient Dispatch : 64-bit integer method IDs computed at compile time using xxhash
  3. Compact Serialization : Binary encoding using bitcode for minimal network overhead
  4. Platform Independence : Service definitions work across native, WASM, and custom platforms
  5. Zero Runtime Cost : All method resolution and type checking happens at compile time

The next sections cover how to use service definitions from the client side (Service Caller Interface) and server side (Service Endpoint Interface), as well as the specific patterns for prebuffered (Prebuffered RPC Calls) and streaming (Streaming RPC Calls) RPC methods.

Sources:



Service Caller Interface

Relevant source files

Purpose and Scope

The Service Caller Interface defines the core abstraction for client-side RPC invocation in Muxio. The RpcServiceCallerInterface trait provides a runtime-agnostic interface that allows application code to make RPC calls without depending on specific transport implementations. This abstraction enables the same client code to work across different runtimes (native Tokio, WASM) by implementing the trait for each platform.

For information about defining RPC services and methods, see Service Definitions. For server-side RPC handling, see Service Endpoint Interface. For concrete implementations of this interface, see Tokio RPC Client and WASM RPC Client.

Interface Overview

The RpcServiceCallerInterface is defined as an async trait that provides two primary RPC invocation patterns: streaming and buffered. It abstracts the underlying transport mechanism while exposing connection state and allowing customization through state change handlers.

Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:25-405

graph TB
    subgraph "Application Layer"
        AppCode["Application Code"]
ServiceDef["RPC Method Definitions\n(RpcMethodPrebuffered)"]
end
    
    subgraph "Service Caller Interface"
        CallerTrait["RpcServiceCallerInterface\nasync trait"]
CallBuffered["call_rpc_buffered()\nReturns complete response"]
CallStreaming["call_rpc_streaming()\nReturns stream of chunks"]
GetDispatcher["get_dispatcher()\nAccess to RpcDispatcher"]
IsConnected["is_connected()\nConnection status"]
StateHandler["set_state_change_handler()\nState notifications"]
end
    
    subgraph "Concrete Implementations"
        TokioImpl["muxio-tokio-rpc-client\nRpcClient (Tokio)"]
WasmImpl["muxio-wasm-rpc-client\nRpcWasmClient (WASM)"]
end
    
    subgraph "Core Components"
        Dispatcher["RpcDispatcher\nRequest correlation"]
EmitFn["Emit Function\nSend bytes to transport"]
end
    
 
   AppCode --> ServiceDef
 
   ServiceDef --> CallerTrait
 
   CallerTrait --> CallBuffered
 
   CallerTrait --> CallStreaming
 
   CallerTrait --> GetDispatcher
 
   CallerTrait --> IsConnected
 
   CallerTrait --> StateHandler
    
    TokioImpl -.implements.-> CallerTrait
    WasmImpl -.implements.-> CallerTrait
    
 
   GetDispatcher --> Dispatcher
 
   CallBuffered --> Dispatcher
 
   CallStreaming --> Dispatcher
 
   CallBuffered --> EmitFn
 
   CallStreaming --> EmitFn

Trait Definition

The RpcServiceCallerInterface trait requires implementors to provide access to core components and implement multiple invocation patterns:

| Method | Return Type | Purpose |
|--------|-------------|---------|
| get_dispatcher() | Arc<TokioMutex<RpcDispatcher<'static>>> | Provides access to the RPC dispatcher for request/response correlation |
| get_emit_fn() | Arc<dyn Fn(Vec<u8>) + Send + Sync> | Returns the function that sends binary frames to the transport |
| is_connected() | bool | Checks current connection state |
| call_rpc_streaming() | Result<(RpcStreamEncoder, DynamicReceiver), RpcServiceError> | Initiates a streaming RPC call with incremental data reception |
| call_rpc_buffered() | Result<(RpcStreamEncoder, Result<T, RpcServiceError>), RpcServiceError> | Initiates a buffered RPC call that returns a complete response |
| set_state_change_handler() | async fn | Registers a callback for transport state changes |

Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:26-31 extensions/muxio-rpc-service-caller/src/caller_interface.rs:32-405

Core Components Access

Dispatcher Access

The get_dispatcher() method returns an Arc<TokioMutex<RpcDispatcher>> that allows the implementation to register RPC calls and manage request/response correlation. The TokioMutex is used because these methods are async and may need to await the lock.

Emit Function

The get_emit_fn() method returns a closure that the caller interface uses to send binary frames to the underlying transport. This function is called by the RpcStreamEncoder when writing request payloads.

Connection State

The is_connected() method allows implementations to check the connection state before attempting RPC calls. When false, the call_rpc_streaming() method immediately returns a ConnectionAborted error.

Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:28-30 extensions/muxio-rpc-service-caller/src/caller_interface.rs:44-53

Streaming RPC Pattern

The call_rpc_streaming() method provides the foundation for all RPC calls, supporting incremental data reception through dynamic channels. This pattern is used directly for streaming RPC calls and internally by the buffered pattern.

sequenceDiagram
    participant App as "Application"
    participant Interface as "RpcServiceCallerInterface"
    participant Dispatcher as "RpcDispatcher"
    participant Channel as "DynamicChannel\n(mpsc)"
    participant SendFn as "Emit Function"
    participant RecvFn as "Response Handler\n(Closure)"
    
    App->>Interface: call_rpc_streaming(request)
    Interface->>Interface: Check is_connected()
    Interface->>Channel: Create mpsc channel\n(Bounded/Unbounded)
    Interface->>Interface: Create send_fn closure
    Interface->>Interface: Create recv_fn closure
    
    Interface->>Dispatcher: call(request, send_fn, recv_fn)
    Dispatcher-->>Interface: Return RpcStreamEncoder
    
    Note over Interface,Channel: Wait for readiness signal
    RecvFn->>RecvFn: Receive RpcStreamEvent::Header
    RecvFn->>Channel: Send readiness via oneshot
    
    Interface-->>App: Return (encoder, receiver)
    
    loop "For each response chunk"
        RecvFn->>RecvFn: Receive RpcStreamEvent::PayloadChunk
        RecvFn->>Channel: Send Ok(bytes) to DynamicReceiver
        App->>Channel: next().await
        Channel-->>App: Some(Ok(bytes))
    end
    
    RecvFn->>RecvFn: Receive RpcStreamEvent::End
    RecvFn->>Channel: Close sender (drop)
    App->>Channel: next().await
    Channel-->>App: None

Streaming Call Flow

Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:32-349

Dynamic Channel Types

The streaming method accepts a DynamicChannelType parameter that determines the channel buffering strategy:

| Channel Type | Buffer Size | Use Case |
|---|---|---|
| DynamicChannelType::Bounded | DEFAULT_RPC_STREAM_CHANNEL_BUFFER_SIZE | Controlled memory usage, backpressure |
| DynamicChannelType::Unbounded | Unlimited | Maximum throughput, simple buffering |

Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:56-73
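
A minimal sketch of selecting a channel strategy, assuming client is an RpcServiceCallerInterface implementor and request is a prepared RpcRequest:

```rust
// Bounded: at most DEFAULT_RPC_STREAM_CHANNEL_BUFFER_SIZE chunks are buffered
// before the response handler experiences backpressure.
let (encoder, receiver) = client
    .call_rpc_streaming(request, DynamicChannelType::Bounded)
    .await?;

// Unbounded trades backpressure for maximum throughput; memory growth is
// unbounded if the consumer lags behind the sender:
// client.call_rpc_streaming(request, DynamicChannelType::Unbounded).await?;
```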

Response Handler Implementation

The recv_fn closure handles all incoming RpcStreamEvent variants synchronously using StdMutex for WASM compatibility. The handler maintains internal state to track response status and buffer error payloads:

stateDiagram-v2
    [*] --> WaitingHeader : recv_fn created
    WaitingHeader --> ProcessingPayload: RpcStreamEvent::Header\nExtract RpcResultStatus
    WaitingHeader --> SendReadiness : Send readiness signal
    SendReadiness --> ProcessingPayload
    
    ProcessingPayload --> BufferSuccess: RpcResultStatus::Success
    ProcessingPayload --> BufferError: RpcResultStatus::*Error
    
    BufferSuccess --> ProcessingPayload : More chunks
    BufferError --> ProcessingPayload : More chunks
    
    ProcessingPayload --> CompleteSuccess: RpcStreamEvent::End\nSuccess status
    ProcessingPayload --> CompleteError: RpcStreamEvent::End\nError status
    ProcessingPayload --> HandleError: RpcStreamEvent::Error
    
    CompleteSuccess --> [*] : Close channel
    CompleteError --> [*] : Send RpcServiceError Close channel
    HandleError --> [*] : Send Transport error Close channel

The response handler uses StdMutex instead of TokioMutex because it operates in a synchronous context and must be compatible with WASM environments where Tokio mutexes are not available.

Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:91-287 extensions/muxio-rpc-service-caller/src/caller_interface.rs:75-79 extensions/muxio-rpc-service-caller/src/caller_interface.rs:94-96

Response Event Handling

The recv_fn closure processes four event types:

| Event Type | Actions | Channel Operation |
|---|---|---|
| RpcStreamEvent::Header | Extract RpcResultStatus from metadata, send readiness signal | Signal header received via oneshot |
| RpcStreamEvent::PayloadChunk | If success: forward to receiver; otherwise buffer error payload | sender.send_and_ignore(Ok(bytes)) |
| RpcStreamEvent::End | Process final status, convert errors to RpcServiceError | Send final error or close channel |
| RpcStreamEvent::Error | Send transport error to both readiness and data channels | sender.send_and_ignore(Err(error)) |

Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:118-286

Buffered RPC Pattern

The call_rpc_buffered() method provides a higher-level interface for RPC calls where the entire response is buffered before being returned. This method is built on top of call_rpc_streaming() and is used by RpcMethodPrebuffered implementations.

sequenceDiagram
    participant App as "Application"
    participant Buffered as "call_rpc_buffered()"
    participant Streaming as "call_rpc_streaming()"
    participant Stream as "DynamicReceiver"
    participant Decode as "decode: F"
    
    App->>Buffered: call_rpc_buffered(request, decode)
    Buffered->>Streaming: call_rpc_streaming(request, Unbounded)
    Streaming-->>Buffered: Return (encoder, stream)
    
    Buffered->>Buffered: Create empty success_buf
    
    loop "Stream consumption"
        Buffered->>Stream: stream.next().await
        Stream-->>Buffered: Some(Ok(chunk))
        Buffered->>Buffered: success_buf.extend(chunk)
    end
    
    alt "Stream completed successfully"
        Stream-->>Buffered: None
        Buffered->>Decode: decode(&success_buf)
        Decode-->>Buffered: Return T
        Buffered-->>App: Ok((encoder, Ok(decoded)))
    else "Stream yielded error"
        Stream-->>Buffered: Some(Err(e))
        Buffered-->>App: Ok((encoder, Err(e)))
    end

Buffered Call Implementation

The buffered pattern always uses DynamicChannelType::Unbounded to avoid backpressure complications when consuming the entire stream.

Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:351-399 extensions/muxio-rpc-service-caller/src/caller_interface.rs:368-370

Decode Function

The decode parameter is a closure that converts the buffered byte slice into the desired return type T. This function is provided by the RPC method implementation and typically uses bitcode::decode() for deserialization:

F: Fn(&[u8]) -> T + Send + Sync + 'static

Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:363-365
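
A hedged sketch of a buffered call with a bitcode-based decode closure; MyResponse, request, and client are placeholders, and MyResponse is assumed to implement bitcode's Decode:

```rust
let (_encoder, rpc_result) = client
    .call_rpc_buffered(request, |bytes: &[u8]| {
        // The closure maps the accumulated response bytes to T; here T is
        // Result<MyResponse, bitcode::Error>.
        bitcode::decode::<MyResponse>(bytes)
    })
    .await?; // outer Err: transport-level failure

let response: MyResponse = rpc_result? // inner Err: RPC-level failure
    .expect("response bytes did not decode as MyResponse");
```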

Return Value Structure

Both streaming and buffered methods return a tuple containing an RpcStreamEncoder and the response data. The encoder allows the caller to write request payloads after initiating the call:

| Method | Return Type | Encoder Purpose | Response Type |
|---|---|---|---|
| call_rpc_streaming() | (RpcStreamEncoder, DynamicReceiver) | Write request payload | Stream of Result<Vec<u8>, RpcServiceError> |
| call_rpc_buffered() | (RpcStreamEncoder, Result<T, RpcServiceError>) | Write request payload | Complete decoded response |

The RpcStreamEncoder is returned even if the response contains an error, allowing the caller to properly finalize the request payload.

Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:37-42 extensions/muxio-rpc-service-caller/src/caller_interface.rs:357-362

Connection State Management

State Change Handler

The set_state_change_handler() method allows applications to register callbacks that are invoked when the transport state changes. Implementations store these handlers and invoke them during connection lifecycle events.

The RpcTransportState enum indicates whether the transport is connected or disconnected. For more details on transport state management, see Transport State Management.

Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:401-405

graph LR
    CallStart["call_rpc_streaming()
invoked"]
CheckConn{"is_connected()?"}
RejectCall["Return ConnectionAborted error"]
ProceedCall["Create channels and proceed"]
CallStart --> CheckConn
 
   CheckConn -->|false| RejectCall
 
   CheckConn -->|true| ProceedCall

Connection Checks

The is_connected() method is checked at the beginning of call_rpc_streaming() to prevent RPC calls on disconnected transports, as illustrated in the flow above.

Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:44-53

Error Handling

The caller interface propagates errors from multiple sources through the RpcServiceError enum:

| Error Source | Error Type | Trigger |
|---|---|---|
| Disconnected client | RpcServiceError::Transport(ConnectionAborted) | is_connected() returns false |
| Dispatcher call failure | RpcServiceError::Transport(io::Error) | dispatcher.call() returns error |
| Readiness channel closed | RpcServiceError::Transport(io::Error) | Oneshot channel drops before header received |
| Method not found | RpcServiceError::Rpc(NotFound) | RpcResultStatus::MethodNotFound in response |
| Application error | RpcServiceError::Rpc(Fail) | RpcResultStatus::Fail in response |
| System error | RpcServiceError::Rpc(System) | RpcResultStatus::SystemError in response |
| Frame decode error | RpcServiceError::Transport(io::Error) | RpcStreamEvent::Error received |

For comprehensive error handling documentation, see RPC Service Errors.

Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:49-52 extensions/muxio-rpc-service-caller/src/caller_interface.rs:186-232 extensions/muxio-rpc-service-caller/src/caller_interface.rs:246-284

Implementation Requirements

Types implementing RpcServiceCallerInterface must:

  1. Provide thread-safe dispatcher access: Return Arc<TokioMutex<RpcDispatcher>> from get_dispatcher()
  2. Implement emit function: Return closure that sends bytes to underlying transport
  3. Track connection state: Maintain boolean state returned by is_connected()
  4. Manage state handlers: Store and invoke state change handlers at appropriate lifecycle points
  5. Satisfy trait bounds: Implement Send + Sync for cross-thread usage

The trait uses #[async_trait::async_trait] to support async methods in trait definitions.

Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:25-26
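
A partial skeleton of an implementor, assuming a hypothetical MyClient that owns a dispatcher, an emit closure, and an atomic connection flag; only the accessor methods are shown:

```rust
use std::sync::{
    atomic::{AtomicBool, Ordering},
    Arc,
};
use tokio::sync::Mutex as TokioMutex;

struct MyClient {
    dispatcher: Arc<TokioMutex<RpcDispatcher<'static>>>,
    emit_fn: Arc<dyn Fn(Vec<u8>) + Send + Sync>,
    connected: Arc<AtomicBool>,
}

#[async_trait::async_trait]
impl RpcServiceCallerInterface for MyClient {
    fn get_dispatcher(&self) -> Arc<TokioMutex<RpcDispatcher<'static>>> {
        self.dispatcher.clone()
    }

    fn get_emit_fn(&self) -> Arc<dyn Fn(Vec<u8>) + Send + Sync> {
        // The returned closure is what RpcStreamEncoder uses to push frames
        // to the transport.
        self.emit_fn.clone()
    }

    fn is_connected(&self) -> bool {
        self.connected.load(Ordering::SeqCst)
    }

    // call_rpc_streaming, call_rpc_buffered, and set_state_change_handler are
    // omitted from this sketch.
}
```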

graph TB
    subgraph "Trait Definition"
        Trait["RpcServiceCallerInterface\nextensions/muxio-rpc-service-caller"]
end
    
    subgraph "Native Implementation"
        TokioClient["RpcClient\nmuxio-tokio-rpc-client"]
TokioRuntime["Tokio async runtime\ntokio-tungstenite WebSocket"]
end
    
    subgraph "Browser Implementation"
        WasmClient["RpcWasmClient\nmuxio-wasm-rpc-client"]
WasmBridge["wasm-bindgen bridge\nBrowser WebSocket API"]
end
    
    Trait -.implemented by.-> TokioClient
    Trait -.implemented by.-> WasmClient
    
 
   TokioClient --> TokioRuntime
 
   WasmClient --> WasmBridge
    
    style Trait fill:#f9f9f9,stroke:#333,stroke-width:2px

Platform-Specific Implementations

The RpcServiceCallerInterface is implemented by two platform-specific clients:

  • RpcClient in muxio-tokio-rpc-client, for native Tokio environments
  • RpcWasmClient in muxio-wasm-rpc-client, for browser/WASM environments

Both implementations provide the same interface to application code while adapting to their respective runtime environments. For implementation details, see Tokio RPC Client and WASM RPC Client.

graph LR
    subgraph "Application Code"
        Call["Add::call(&client, params)"]
end
    
    subgraph "Method Definition"
        Method["RpcMethodPrebuffered::call()\nType-safe wrapper"]
Encode["encode_request(params)\nSerialize arguments"]
Decode["decode_response(bytes)\nDeserialize result"]
end
    
    subgraph "Caller Interface"
        CallBuffered["call_rpc_buffered(request, decode)"]
BuildRequest["Build RpcRequest\nwith method_id and params"]
end
    
 
   Call --> Method
 
   Method --> Encode
 
   Encode --> BuildRequest
 
   BuildRequest --> CallBuffered
    CallBuffered -.async.-> Decode
 
   Decode --> Method
 
   Method --> Call

Integration with RPC Methods

RPC method definitions use the caller interface through the RpcMethodPrebuffered trait, which provides a type-safe wrapper around call_rpc_buffered(), as illustrated in the diagram above.

For details on method definitions and the prebuffered pattern, see Service Definitions and Prebuffered RPC Calls.

Package Information

The RpcServiceCallerInterface is defined in the muxio-rpc-service-caller package, which provides generic, runtime-agnostic client logic:

| Package | Description | Key Dependencies |
|---|---|---|
| muxio-rpc-service-caller | Generic RPC client interface and logic | muxio, muxio-rpc-service, async-trait, futures, tokio (sync only) |

The package uses minimal Tokio features (only sync for TokioMutex) to remain as platform-agnostic as possible while supporting async methods.

Sources: extensions/muxio-rpc-service-caller/Cargo.toml:1-22



Service Endpoint Interface

Relevant source files

Purpose and Scope

This document describes the RpcServiceEndpointInterface trait, which provides the server-side abstraction for registering RPC method handlers and processing incoming requests in the muxio framework. This trait is runtime-agnostic and enables any platform-specific server implementation to handle RPC calls using a consistent interface.

For client-side RPC invocation, see Service Caller Interface. For details on defining RPC services, see Service Definitions.

Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:1-138


Trait Overview

The RpcServiceEndpointInterface<C> trait defines the server-side contract for handling incoming RPC requests. It is generic over a connection context type C, which allows handlers to access per-connection state or metadata.

Core Methods

| Method | Purpose |
|---|---|
| register_prebuffered | Registers an async handler for a specific method ID |
| read_bytes | Processes incoming transport bytes, routes to handlers, and sends responses |
| get_prebuffered_handlers | Provides access to the handler registry (implementation detail) |

Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:8-64


Trait Definition Structure

Diagram: Trait structure showing methods, associated types, and key dependencies

Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:8-14 extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:35-64


Handler Registration

The register_prebuffered method registers an asynchronous handler function for a specific RPC method. Handlers are identified by a 64-bit method ID, typically generated using the rpc_method_id! macro from the service definition layer.

Handler Function Signature

Each registered handler is stored as an RpcPrebufferedHandler<C>: an Arc-wrapped function that receives the request payload bytes (Vec<u8>) and a clone of the connection context C, and returns a pinned, boxed future producing the handler's result (see the registration sketch below).

Registration Flow

Diagram: Handler registration sequence showing duplicate detection

The handler is wrapped in an Arc and stored in a HashMap<u64, RpcPrebufferedHandler<C>>. If a handler for the given method_id already exists, registration fails with an RpcServiceEndpointError::Handler.

Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:35-64 extensions/muxio-rpc-service-endpoint/src/endpoint.rs:12-23
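
The following is a hedged sketch of handler registration, assuming an Add method defined via RpcMethodPrebuffered whose input is a list of numbers and a hypothetical context type MyCtx; the handler's exact error type, and whether registration must be awaited, depend on the endpoint's feature configuration:

```rust
endpoint
    .register_prebuffered(Add::METHOD_ID, |request_bytes: Vec<u8>, _ctx: MyCtx| async move {
        // Decode the request, run the application logic, encode the response.
        let params = Add::decode_request(&request_bytes)?;
        let sum: f64 = params.iter().sum();
        Add::encode_response(sum)
    })
    .await?;

// Registering the same method ID a second time fails with
// RpcServiceEndpointError::Handler.
```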


Three-Stage Request Processing Pipeline

The read_bytes method implements a three-stage pipeline for processing incoming RPC requests:

Stage 1: Decode and Identify

Incoming transport bytes are passed to the RpcDispatcher for frame decoding and stream demultiplexing. The dispatcher identifies which requests are now fully received (finalized) and ready for processing.

Diagram: Stage 1 - Synchronous decoding and request identification

graph LR
    BYTES["Incoming bytes[]"]
DISPATCHER["RpcDispatcher::read_bytes()"]
REQUEST_IDS["Vec&lt;request_id&gt;"]
FINALIZED["Finalized requests\nVec&lt;(id, RpcRequest)&gt;"]
BYTES --> DISPATCHER
 
   DISPATCHER --> REQUEST_IDS
 
   REQUEST_IDS --> FINALIZED

Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:78-97

Stage 2: Execute Handlers Asynchronously

Each finalized request is dispatched to its corresponding handler. Handlers execute concurrently using join_all, allowing the event loop to process multiple requests in parallel without blocking.

Diagram: Stage 2 - Concurrent handler execution

graph TB
    subgraph "Handler Execution"
        REQ1["Request 1"]
REQ2["Request 2"]
REQ3["Request 3"]
HANDLER1["Handler Future 1"]
HANDLER2["Handler Future 2"]
HANDLER3["Handler Future 3"]
JOIN["futures::join_all"]
RESULTS["Vec&lt;RpcResponse&gt;"]
end
    
 
   REQ1 --> HANDLER1
 
   REQ2 --> HANDLER2
 
   REQ3 --> HANDLER3
    
 
   HANDLER1 --> JOIN
 
   HANDLER2 --> JOIN
 
   HANDLER3 --> JOIN
    
 
   JOIN --> RESULTS

Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:99-126

graph LR
    RESPONSES["Handler Results\nVec&lt;RpcResponse&gt;"]
ENCODE["dispatcher.respond()"]
CHUNK["Chunk by max_chunk_size"]
EMIT["on_emit callback"]
TRANSPORT["Transport layer"]
RESPONSES --> ENCODE
 
   ENCODE --> CHUNK
 
   CHUNK --> EMIT
 
   EMIT --> TRANSPORT

Stage 3: Encode and Emit Responses

Handler results are synchronously encoded into the RPC protocol format and emitted back to the transport layer via the RpcDispatcher::respond method.

Diagram: Stage 3 - Synchronous response encoding and emission

Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:127-137

Complete Processing Flow

Diagram: Complete three-stage request processing sequence

Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:66-138


graph TB
    subgraph "RpcServiceEndpoint&lt;C&gt;"
        ENDPOINT["RpcServiceEndpoint&lt;C&gt;"]
HANDLERS["prebuffered_handlers:\nArc&lt;Mutex&lt;HashMap&lt;u64, Handler&gt;&gt;&gt;"]
PHANTOM["_context: PhantomData&lt;C&gt;"]
ENDPOINT --> HANDLERS
 
       ENDPOINT --> PHANTOM
    end
    
    subgraph "Handler Type Definition"
        HANDLER_TYPE["RpcPrebufferedHandler&lt;C&gt;"]
FN_TYPE["Arc&lt;Fn(Vec&lt;u8&gt;, C) -> Pin&lt;Box&lt;Future&gt;&gt;&gt;"]
HANDLER_TYPE -.alias.-> FN_TYPE
    end
    
    subgraph "Mutex Abstraction"
        FEATURE{{"tokio_support feature"}}
TOKIO_MUTEX["tokio::sync::Mutex"]
STD_MUTEX["std::sync::Mutex"]
FEATURE -->|enabled| TOKIO_MUTEX
 
       FEATURE -->|disabled| STD_MUTEX
    end
    
    HANDLERS -.type.-> HANDLER_TYPE
    HANDLERS -.implementation.-> FEATURE

Concrete Implementation

The RpcServiceEndpoint<C> struct provides a concrete implementation of the RpcServiceEndpointInterface trait.

Structure

Diagram: Structure of the concrete RpcServiceEndpoint implementation

Sources: extensions/muxio-rpc-service-endpoint/src/endpoint.rs:25-54

Key Implementation Details

| Component | Type | Purpose |
|---|---|---|
| prebuffered_handlers | Arc<Mutex<HashMap<u64, RpcPrebufferedHandler<C>>>> | Thread-safe handler registry |
| _context | PhantomData<C> | Zero-cost marker for generic type C |
| HandlersLock (associated type) | Mutex<HashMap<u64, RpcPrebufferedHandler<C>>> | Provides access pattern for handler registry |

Sources: extensions/muxio-rpc-service-endpoint/src/endpoint.rs:25-32 extensions/muxio-rpc-service-endpoint/src/endpoint.rs:56-67


Connection Context Type Pattern

The endpoint is generic over a context type C. At minimum, C must implement Clone, because a fresh copy of the context is handed to every handler invocation.

This context is passed to every handler invocation and enables:

  • Per-connection state : Track connection-specific data like authentication tokens, session IDs, or user information
  • Shared resources : Provide access to database pools, configuration, or other application state
  • Request metadata : Include connection metadata like remote address, protocol version, or timing information
graph LR
    TRANSPORT["Transport Layer\n(per connection)"]
CONTEXT["Context Instance\nC: Clone"]
ENDPOINT["RpcServiceEndpoint"]
READ_BYTES["read_bytes(context)"]
HANDLER["Handler(request, context)"]
TRANSPORT --> CONTEXT
 
   CONTEXT --> ENDPOINT
 
   ENDPOINT --> READ_BYTES
 
   READ_BYTES --> HANDLER

Context Flow

Diagram: Context propagation from transport to handler

When calling read_bytes, the transport layer provides a context instance which is cloned for each handler invocation. This allows handlers to access per-connection state without shared mutable references.

Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:9-12 extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:68-76
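
A hypothetical per-connection context, shown only to illustrate the pattern; the field names and the simplified read_bytes call in the trailing comment are assumptions:

```rust
use std::net::SocketAddr;
use std::sync::Arc;

// Clone is required because the endpoint hands each handler invocation its
// own copy of the context.
#[derive(Clone)]
struct ConnectionCtx {
    remote_addr: SocketAddr,
    session_id: u64,
    // Shared application state is typically wrapped in Arc so that cloning
    // the context stays cheap. AppConfig is a hypothetical type.
    shared_config: Arc<AppConfig>,
}

// The transport layer passes a context instance into read_bytes, which clones
// it for every handler it dispatches (signature simplified):
// endpoint.read_bytes(ctx.clone(), &incoming_bytes, on_emit).await?;
```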


Runtime-Agnostic Mutex Abstraction

The endpoint uses conditional compilation to select the appropriate mutex implementation based on the runtime environment:

Feature-Based Selection

| Feature | Mutex Type | Use Case |
|---|---|---|
| Default (no features) | std::sync::Mutex | Blocking, non-async environments |
| tokio_support | tokio::sync::Mutex | Async/await with Tokio runtime |

This abstraction allows the same endpoint code to work in both synchronous and asynchronous contexts without runtime overhead or complex trait abstractions.

Sources: extensions/muxio-rpc-service-endpoint/src/endpoint.rs:5-9 extensions/muxio-rpc-service-endpoint/Cargo.toml:23-27


graph TB
    TRAIT["WithHandlers&lt;C&gt; trait"]
METHOD["with_handlers&lt;F, R&gt;(f: F) -> R"]
STD_IMPL["impl for std::sync::Mutex"]
TOKIO_IMPL["impl for tokio::sync::Mutex"]
TRAIT --> METHOD
    TRAIT -.implemented by.-> STD_IMPL
    TRAIT -.implemented by.-> TOKIO_IMPL
    
 
   STD_IMPL --> LOCK_STD["lock().unwrap()"]
TOKIO_IMPL --> LOCK_TOKIO["lock().await"]

WithHandlers Trait

The WithHandlers<C> trait provides a uniform interface for accessing the handler registry regardless of the underlying mutex implementation:

Diagram: WithHandlers abstraction over different mutex types

This trait enables the register_prebuffered method to work identically regardless of the feature flag, maintaining the runtime-agnostic design principle.

Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:13 extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:46-62


Error Handling

The endpoint returns RpcServiceEndpointError for various failure conditions:

| Error Type | Cause | When |
|---|---|---|
| Handler(Box<dyn Error>) | Handler already registered | During register_prebuffered with duplicate method_id |
| Dispatcher(RpcDispatcherError) | Frame decode failure | During read_bytes if frames are malformed |
| Dispatcher(RpcDispatcherError) | Request tracking failure | During read_bytes if request state is inconsistent |

Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:33-34 extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:48-53 extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:74


Integration with Platform Implementations

The RpcServiceEndpointInterface is implemented by platform-specific server types, most notably RpcServer in muxio-tokio-rpc-server; the Tokio and WASM clients also embed an RpcServiceEndpoint so they can handle server-initiated calls.

All implementations use the same handler registration API and benefit from compile-time type safety through shared service definitions (see Service Definitions).

Sources: extensions/muxio-rpc-service-endpoint/Cargo.toml:1-33



Prebuffered RPC Calls

Relevant source files

Purpose and Scope

This document describes the prebuffered RPC call pattern in muxio, where the entire request payload is encoded and buffered before transmission, and the entire response payload is accumulated before being decoded and returned to the caller. This is in contrast to streaming RPC calls, which process data incrementally using channels (see Streaming RPC Calls).

Prebuffered calls are the simplest and most common RPC pattern, suitable for request/response operations where the entire input and output fit comfortably in memory. For service definition details, see Service Definitions. For caller interface abstractions, see Service Caller Interface.

Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:10-21


Overview

A prebuffered RPC call is a synchronous-style remote procedure invocation where:

  1. The caller encodes the entire request payload using bitcode serialization
  2. The request is transmitted as a complete unit to the server
  3. The server processes the request and generates a complete response
  4. The response is transmitted back as a complete unit
  5. The caller decodes the response and returns the result

The key characteristic is that both request and response are treated as atomic, indivisible units from the application’s perspective, even though the underlying transport may chunk them into multiple frames for transmission.

Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:30-48


The RpcCallPrebuffered Trait

The RpcCallPrebuffered trait defines the interface for making prebuffered RPC calls. It is automatically implemented for any type that implements RpcMethodPrebuffered from the muxio-rpc-service crate.

Trait Definition

The trait is generic over any client type C that implements RpcServiceCallerInterface, making it runtime-agnostic and usable with both native Tokio clients and WASM clients.

Automatic Implementation

The blanket implementation at extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:24-98 automatically provides the call method for any service that defines request/response types via the RpcMethodPrebuffered trait.

Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:10-28
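
An approximate shape of the trait, inferred from the blanket implementation described above; the exact bounds and return types in prebuffered/traits.rs may differ:

```rust
#[async_trait::async_trait]
pub trait RpcCallPrebuffered: RpcMethodPrebuffered {
    // Invoked as e.g. Add::call(&client, params); works with any client that
    // implements RpcServiceCallerInterface.
    async fn call<C>(
        client: &C,
        input: Self::Input,
    ) -> Result<Self::Output, RpcServiceError>
    where
        C: RpcServiceCallerInterface + Send + Sync;
}

// Blanket implementation: every RpcMethodPrebuffered automatically gets call().
// impl<T: RpcMethodPrebuffered> RpcCallPrebuffered for T { ... }
```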


Request/Response Flow

The following diagram illustrates the complete flow of a prebuffered RPC call from the caller’s perspective:

sequenceDiagram
    participant App as "Application Code"
    participant Trait as "RpcCallPrebuffered::call"
    participant Encode as "encode_request"
    participant Strategy as "Transport Strategy"
    participant Client as "RpcServiceCallerInterface"
    participant Buffered as "call_rpc_buffered"
    participant Decode as "decode_response"
    
    App->>Trait: Echo::call(client, input)
    Trait->>Encode: Self::encode_request(input)
    Encode-->>Trait: Vec<u8>
    
    Trait->>Strategy: Check encoded_args.len()
    
    alt "len < DEFAULT_SERVICE_MAX_CHUNK_SIZE"
        Strategy->>Strategy: Use rpc_param_bytes
        note over Strategy: Small payload: inline in header
    else "len >= DEFAULT_SERVICE_MAX_CHUNK_SIZE"
        Strategy->>Strategy: Use rpc_prebuffered_payload_bytes
        note over Strategy: Large payload: stream as chunks
    end
    
    Strategy->>Trait: RpcRequest struct
    Trait->>Client: call_rpc_buffered(request, decode_fn)
    
    Client->>Client: Transmit request frames
    Client->>Client: Await response frames
    Client->>Client: Accumulate response bytes
    
    Client-->>Trait: (encoder, Result<Vec<u8>, Error>)
    
    alt "Response is Ok(bytes)"
        Trait->>Decode: decode_response(bytes)
        Decode-->>Trait: Self::Output
        Trait-->>App: Ok(output)
    else "Response is Err"
        Trait-->>App: Err(RpcServiceError)
    end

Prebuffered RPC Call Sequence

Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:49-97


Smart Transport Strategy for Large Payloads

The prebuffered implementation uses a “smart transport strategy” to handle arguments of any size, automatically selecting the most efficient encoding method based on payload size.

Strategy Logic

| Condition | Field Used | Transmission Method |
|---|---|---|
| encoded_args.len() < DEFAULT_SERVICE_MAX_CHUNK_SIZE | rpc_param_bytes | Inline in initial header frame |
| encoded_args.len() >= DEFAULT_SERVICE_MAX_CHUNK_SIZE | rpc_prebuffered_payload_bytes | Chunked and streamed after header |

RpcRequest Structure

RpcRequest {
    rpc_method_id: u64,
    rpc_param_bytes: Option<Vec<u8>>,          // Used for small payloads
    rpc_prebuffered_payload_bytes: Option<Vec<u8>>,  // Used for large payloads
    is_finalized: bool,                         // Always true for prebuffered
}

Implementation Details

The decision logic is implemented at extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:58-65; a simplified sketch appears below.

This strategy ensures that:

  • Small requests avoid unnecessary chunking overhead
  • Large requests don’t fail due to header size limits (typically ~64KB)
  • The underlying RpcDispatcher automatically handles chunking for large payloads
  • Server-side endpoint logic transparently locates arguments in either field

Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:30-72
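
The following is a simplified sketch of that decision, not the exact code from traits.rs:58-65, using only the fields and constants documented above (imagined as part of the blanket call implementation):

```rust
// Encode the caller's input with the method's serializer.
let encoded_args = Self::encode_request(input)?;

// Small argument buffers ride inline in the header's param bytes; large ones
// go through the chunked prebuffered payload path.
let (rpc_param_bytes, rpc_prebuffered_payload_bytes) =
    if encoded_args.len() < DEFAULT_SERVICE_MAX_CHUNK_SIZE {
        (Some(encoded_args), None)
    } else {
        (None, Some(encoded_args))
    };

let request = RpcRequest {
    rpc_method_id: Self::METHOD_ID,
    rpc_param_bytes,
    rpc_prebuffered_payload_bytes,
    is_finalized: true, // prebuffered requests are always finalized
};
```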


graph TB
    Input["Application Input\n(Self::Input)"]
Encode["encode_request()\nbitcode serialization"]
EncodedBytes["Vec&lt;u8&gt;\nencoded_args"]
SizeCheck{"encoded_args.len() &gt;=\nDEFAULT_SERVICE_MAX_CHUNK_SIZE?"}
SmallPath["rpc_param_bytes:\nSome(encoded_args)"]
SmallNote["Inline in header frame\nSingle transmission"]
LargePath["rpc_prebuffered_payload_bytes:\nSome(encoded_args)"]
LargeNote["Chunked by RpcDispatcher\nMultiple frames"]
Request["RpcRequest struct\nis_finalized: true"]
Dispatcher["RpcDispatcher\nEncodes and transmits"]
Network["WebSocket Transport\nBinary frames"]
Input-->Encode
 
   Encode-->EncodedBytes
 
   EncodedBytes-->SizeCheck
    
 
   SizeCheck-->|No| SmallPath
 
   SmallPath-->SmallNote
 
   SmallNote-->Request
    
 
   SizeCheck-->|Yes| LargePath
 
   LargePath-->LargeNote
 
   LargeNote-->Request
    
 
   Request-->Dispatcher
 
   Dispatcher-->Network

Transport Strategy Data Flow

The following diagram shows how data flows through the different encoding paths:

Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:55-72 README.md:32-34


Error Propagation

Prebuffered RPC calls propagate errors through a nested Result structure to distinguish between transport errors and application-level errors.

graph TB
    Call["RpcCallPrebuffered::call"]
BufferedCall["call_rpc_buffered"]
OuterResult["Result&lt;(encoder, InnerResult), RpcServiceError&gt;"]
InnerResult["Result&lt;Self::Output, RpcServiceError&gt;"]
TransportErr["Transport Error\n(Connection failed, timeout, etc.)"]
RemoteErr["Remote Service Error\n(Handler returned Err)"]
DecodeErr["Decode Error\n(Malformed response)"]
Success["Successful Response\nSelf::Output"]
Call-->BufferedCall
 
   BufferedCall-->OuterResult
    
 
   OuterResult-->|Outer Err| TransportErr
 
   OuterResult-->|Outer Ok| InnerResult
    
 
   InnerResult-->|Inner Err RpcServiceError::Rpc| RemoteErr
 
   InnerResult-->|Inner Ok, decode fails| DecodeErr
 
   InnerResult-->|Inner Ok, decode succeeds| Success

Error Flow Diagram

Error Handling Code

The error unwrapping logic at extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:87-96:

Error Types

| Error Type | Cause | Example |
|---|---|---|
| RpcServiceError::Transport | Network failure, framing error, decode failure | Connection closed, malformed frame |
| RpcServiceError::Rpc | Remote handler returned error | "item does not exist", "Addition failed" |
| RpcServiceError::NotConnected | Client not connected when call initiated | WebSocket not established |

Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:79-96 extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:135-177


Usage Example

The tests referenced below demonstrate typical usage of prebuffered RPC calls. The test code is not reproduced here; an illustrative sketch follows this section's sources.

Basic Success Case

See extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:98-133, which verifies a successful request/response roundtrip.

Error Handling Example

See extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:179-212, which verifies that remote handler errors propagate to the caller.

Large Payload Test

See extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:296-312, which validates chunking for payloads roughly 200x the chunk size.

Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:98-212 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:296-312
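
An illustrative sketch of caller-side usage (not taken from the referenced tests), assuming an Add method defined via RpcMethodPrebuffered whose input is a Vec<f64> and whose output is f64, and a connected client implementing RpcServiceCallerInterface:

```rust
// Successful roundtrip: encode, transmit, await, decode.
let sum = Add::call(&client, vec![1.0, 2.0, 3.0]).await?;
assert_eq!(sum, 6.0);

// Error handling: distinguish remote failures from transport failures.
match Add::call(&client, vec![]).await {
    Ok(value) => println!("sum: {value}"),
    Err(err) => {
        // Remote handler failures arrive as RpcServiceError::Rpc variants;
        // connectivity problems arrive as RpcServiceError::Transport.
        eprintln!("call failed: {err:?}");
    }
}
```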


Integration with Service Definitions

Prebuffered RPC calls rely on service definitions that implement the RpcMethodPrebuffered trait. Each service method must provide:

Required Trait Methods

| Method | Purpose | Return Type |
|---|---|---|
| METHOD_ID | Compile-time constant identifying the method | u64 |
| encode_request() | Serialize input parameters | Result<Vec<u8>, io::Error> |
| decode_request() | Deserialize input parameters | Result<Self::Input, io::Error> |
| encode_response() | Serialize output result | Result<Vec<u8>, io::Error> |
| decode_response() | Deserialize output result | Result<Self::Output, io::Error> |

Example Service Usage

The RpcCallPrebuffered trait automatically implements the call method for any type implementing RpcMethodPrebuffered, providing compile-time type safety and zero-cost abstractions.

Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:1-11 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:21-27
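
A hedged sketch of such a service definition; the associated types, the exact trait signatures, and the hard-coded method ID are assumptions (IDs are normally derived via xxhash of the method name), and bitcode is used because the documentation names it as the typical serializer:

```rust
pub struct Echo;

impl RpcMethodPrebuffered for Echo {
    const METHOD_ID: u64 = 0x9e37_79b9_7f4a_7c15; // illustrative value only
    type Input = Vec<u8>;
    type Output = Vec<u8>;

    fn encode_request(input: Self::Input) -> Result<Vec<u8>, std::io::Error> {
        Ok(bitcode::encode(&input))
    }
    fn decode_request(bytes: &[u8]) -> Result<Self::Input, std::io::Error> {
        bitcode::decode(bytes)
            .map_err(|e| std::io::Error::new(std::io::ErrorKind::InvalidData, e))
    }
    fn encode_response(output: Self::Output) -> Result<Vec<u8>, std::io::Error> {
        Ok(bitcode::encode(&output))
    }
    fn decode_response(bytes: &[u8]) -> Result<Self::Output, std::io::Error> {
        bitcode::decode(bytes)
            .map_err(|e| std::io::Error::new(std::io::ErrorKind::InvalidData, e))
    }
}
```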


Comparison with Streaming Calls

| Aspect | Prebuffered Calls | Streaming Calls |
|---|---|---|
| Memory Usage | Entire payload in memory | Incremental processing |
| Latency | Higher (wait for complete payload) | Lower (process as data arrives) |
| Complexity | Simple request/response | Requires channel management |
| Use Cases | Small to medium payloads, simple operations | Large datasets, incremental results, progress tracking |
| Request Field | is_finalized: true | is_finalized: false initially |
| Response Handling | Single accumulated buffer | Stream of chunks via channels |

When to Use Prebuffered

  • Request and response fit comfortably in memory
  • Simple request/response semantics
  • No need for progress tracking or cancellation
  • Examples: database queries, RPC calculations, file uploads < 10MB

When to Use Streaming

  • Large payloads that don’t fit in memory
  • Need to process results incrementally
  • Progress tracking or cancellation required
  • Examples: video streaming, large file transfers, real-time data feeds

For detailed information on streaming RPC patterns, see Streaming RPC Calls.

Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:71 Table of Contents (4.4 vs 4.5)


graph TB
    subgraph "Application Layer"
        AppCode["Application Code\nBusiness Logic"]
end
    
    subgraph "RPC Service Layer"
        ServiceDef["Service Definition\nRpcMethodPrebuffered trait\nEcho, Add, Mult"]
MethodID["METHOD_ID constant\nxxhash generated"]
EncDec["encode_request()\ndecode_response()\nbitcode serialization"]
end
    
    subgraph "Caller Layer"
        CallTrait["RpcCallPrebuffered trait\ncall()
method"]
CallerIface["RpcServiceCallerInterface\ncall_rpc_buffered()"]
end
    
    subgraph "Dispatcher Layer"
        Dispatcher["RpcDispatcher\nRequest correlation\nResponse routing"]
Request["RpcRequest struct\nmethod_id\nparam_bytes or payload_bytes\nis_finalized: true"]
end
    
    subgraph "Transport Layer"
        Session["RpcSession\nStream multiplexing\nFrame encoding"]
Framing["Binary frames\nChunking strategy"]
end
    
 
   AppCode-->|Echo::call client, input| CallTrait
 
   CallTrait-->|implements for| ServiceDef
 
   CallTrait-->|uses| MethodID
 
   CallTrait-->|calls| EncDec
 
   CallTrait-->|invokes| CallerIface
    
 
   CallerIface-->|creates| Request
 
   CallerIface-->|sends to| Dispatcher
    
 
   Dispatcher-->|initializes stream in| Session
 
   Session-->|chunks and encodes| Framing
    
    ServiceDef-.defines.->MethodID
    ServiceDef-.implements.->EncDec

Component Relationships

The following diagram shows how prebuffered RPC components relate to each other:

Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:1-98 extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:1-18


Testing Strategy

The prebuffered RPC implementation includes comprehensive unit and integration tests:

Unit Tests

Located in extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:20-213:

  • MockRpcClient : Test implementation of RpcServiceCallerInterface
  • test_buffered_call_success : Verifies successful request/response roundtrip
  • test_buffered_call_remote_error : Tests error propagation from server
  • test_prebuffered_trait_converts_error : Validates error type conversion

Integration Tests

Located in extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-313:

  • Real server instance : Uses actual RpcServer from muxio-tokio-rpc-server
  • WebSocket bridge : Connects WASM client to real server over network
  • Large payload test : Validates chunking for payloads 200x chunk size
  • Cross-platform validation : Same test logic for both Tokio and WASM clients

Test Architecture

Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:1-213 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-313



Streaming RPC Calls

Relevant source files

Purpose and Scope

This document describes the streaming RPC mechanism in rust-muxio, which allows bidirectional data transfer over RPC calls with chunked payloads and asynchronous processing. Streaming RPC is used when responses are large, dynamic in size, or need to be processed incrementally.

For information about one-shot RPC calls with complete request/response buffers, see Prebuffered RPC Calls. For the underlying service definition traits, see Service Definitions. For client-side invocation patterns, see Service Caller Interface.

Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:1-406


Overview of Streaming RPC

Streaming RPC calls provide a mechanism for sending requests and receiving responses that may be too large to buffer entirely in memory, or where the response size is unknown at call time. Unlike prebuffered calls which return complete Result<T, RpcServiceError> values, streaming calls return:

  1. RpcStreamEncoder - For sending additional payload chunks to the server after the initial request
  2. DynamicReceiver - A stream that yields Result<Vec<u8>, RpcServiceError> chunks asynchronously

The streaming mechanism handles:

  • Chunked payload transmission and reassembly
  • Backpressure through bounded or unbounded channels
  • Error propagation and early termination
  • Request/response correlation across multiplexed streams

Key Distinction:

  • Prebuffered RPC : Entire response buffered in memory before returning to caller
  • Streaming RPC : Response chunks streamed incrementally as they arrive

Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:33-73


Initiating a Streaming Call

The call_rpc_streaming method on RpcServiceCallerInterface initiates a streaming RPC call:

Method Parameters

| Parameter | Type | Description |
|---|---|---|
| request | RpcRequest | Contains rpc_method_id, optional rpc_param_bytes, and optional rpc_prebuffered_payload_bytes |
| dynamic_channel_type | DynamicChannelType | Specifies Bounded or Unbounded channel for response streaming |

Return Value

On success, returns a tuple containing:

  • RpcStreamEncoder - Used to send additional payload chunks after the initial request
  • DynamicReceiver - Stream that yields response chunks as Result<Vec<u8>, RpcServiceError>

On failure, returns RpcServiceError::Transport if the client is disconnected or if dispatcher registration fails.

Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:32-54
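
A hedged sketch of initiating a streaming call and consuming the response; client, request, and process_chunk are placeholders, and the error conversion at the end is simplified:

```rust
use futures::StreamExt;

let (encoder, mut stream) = client
    .call_rpc_streaming(request, DynamicChannelType::Bounded)
    .await?;

// Additional request payload chunks may be written through `encoder` here
// before the request stream is finalized.

while let Some(item) = stream.next().await {
    match item {
        Ok(chunk) => process_chunk(&chunk),   // incremental response bytes
        Err(err) => return Err(err.into()),   // RPC- or transport-level failure
    }
}
// The stream yields None once the server's RpcStreamEvent::End closes the channel.
```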


Dynamic Channel Types

The DynamicChannelType enum determines the backpressure characteristics of the response stream:

graph LR
    DCT["DynamicChannelType"]
UNBOUNDED["Unbounded\nmpsc::unbounded()"]
BOUNDED["Bounded\nmpsc::channel(buffer_size)"]
DCT -->|No backpressure| UNBOUNDED
 
   DCT -->|Backpressure at buffer_size| BOUNDED
    
 
   UNBOUNDED -->|Creates| DS_UNBOUNDED["DynamicSender::Unbounded"]
UNBOUNDED -->|Creates| DR_UNBOUNDED["DynamicReceiver::Unbounded"]
BOUNDED -->|Creates| DS_BOUNDED["DynamicSender::Bounded"]
BOUNDED -->|Creates| DR_BOUNDED["DynamicReceiver::Bounded"]

Unbounded Channels

Created with DynamicChannelType::Unbounded. Uses mpsc::unbounded() internally, allowing unlimited buffering of response chunks. Suitable for:

  • Fast consumers that can process chunks quickly
  • Scenarios where response size is bounded and known to fit in memory
  • Testing and development

Risk: Unbounded channels can lead to unbounded memory growth if the receiver is slower than the sender.

Bounded Channels

Created with DynamicChannelType::Bounded. Uses mpsc::channel(DEFAULT_RPC_STREAM_CHANNEL_BUFFER_SIZE) where DEFAULT_RPC_STREAM_CHANNEL_BUFFER_SIZE is typically 8. Provides backpressure when the buffer is full. Suitable for:

  • Production systems with predictable memory usage
  • Long-running streams with unknown total size
  • Rate-limiting response processing

Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:56-73 extensions/muxio-rpc-service-caller/tests/dynamic_channel_tests.rs:50-65


RpcStreamEncoder and DynamicReceiver

RpcStreamEncoder

The RpcStreamEncoder is created by RpcDispatcher::call() and provides methods to send additional payload chunks after the initial request. It wraps an RpcEmit trait implementation that sends binary frames over the transport.

Key characteristics:

  • Created with max_chunk_size from DEFAULT_SERVICE_MAX_CHUNK_SIZE
  • Automatically chunks large payloads into frames
  • Shares the same rpc_request_id as the original request
  • Can send multiple chunks before finalizing the stream

DynamicReceiver

The DynamicReceiver is a unified abstraction over mpsc::UnboundedReceiver and mpsc::Receiver that implements Stream<Item = Result<Vec<u8>, RpcServiceError>>.

| Variant | Underlying Type | Backpressure |
|---|---|---|
| Unbounded | mpsc::UnboundedReceiver | None |
| Bounded | mpsc::Receiver | Yes |
graph TB
    subgraph "Call Flow"
        CALL["call_rpc_streaming()"]
DISPATCHER["RpcDispatcher::call()"]
ENCODER["RpcStreamEncoder"]
RECEIVER["DynamicReceiver"]
end
    
    subgraph "Response Flow"
        RECV_FN["recv_fn closure\n(RpcResponseHandler)"]
TX["DynamicSender"]
RX["DynamicReceiver"]
APP["Application code\n.next().await"]
end
    
 
   CALL -->|Creates channel| TX
 
   CALL -->|Creates channel| RX
 
   CALL -->|Registers| DISPATCHER
 
   DISPATCHER -->|Returns| ENCODER
 
   CALL -->|Returns| RECEIVER
    
 
   RECV_FN -->|send_and_ignore| TX
 
   TX -.->|mpsc| RX
 
   RX -->|yields chunks| APP

Both variants provide the same next() interface through the StreamExt trait, abstracting the channel type from the caller.

Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:56-73 extensions/muxio-rpc-service-caller/src/caller_interface.rs:289-323


stateDiagram-v2
    [*] --> Waiting : recv_fn registered
    Waiting --> HeaderReceived: RpcStreamEvent::Header
    HeaderReceived --> Streaming: RpcResultStatus::Success
    HeaderReceived --> ErrorBuffering: RpcResultStatus::MethodNotFound\nRpcResultStatus::Fail\nRpcResultStatus::SystemError
    Streaming --> Streaming: RpcStreamEvent::PayloadChunk
    ErrorBuffering --> ErrorBuffering: RpcStreamEvent::PayloadChunk
    Streaming --> Complete: RpcStreamEvent::End
    ErrorBuffering --> Complete: RpcStreamEvent::End
    Waiting --> Error: RpcStreamEvent::Error
    HeaderReceived --> Error: RpcStreamEvent::Error
    Streaming --> Error: RpcStreamEvent::Error
    ErrorBuffering --> Error: RpcStreamEvent::Error
    Complete --> [*]
    Error --> [*]

Stream Event Processing

The recv_fn closure registered with the dispatcher handles four types of RpcStreamEvent:

Event Types and State Machine

RpcStreamEvent::Header

Received first for every RPC response. Contains RpcHeader with:

  • rpc_msg_type - Should be RpcMessageType::Response
  • rpc_request_id - Correlation ID matching the request
  • rpc_method_id - Method identifier
  • rpc_metadata_bytes - First byte contains RpcResultStatus

The recv_fn extracts RpcResultStatus from rpc_metadata_bytes[0] and stores it for subsequent processing. A readiness signal is sent via the oneshot channel to unblock the call_rpc_streaming future.

Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:118-135

RpcStreamEvent::PayloadChunk

Contains a chunk of the response payload. Processing depends on the previously received RpcResultStatus:

| Status | Behavior |
|---|---|
| Success | Chunk sent to DynamicSender with send_and_ignore(Ok(bytes)) |
| MethodNotFound, Fail, SystemError | Chunk buffered in error_buffer for error message construction |
| None (not yet received) | Chunk buffered defensively |

The synchronous recv_fn uses StdMutex to protect shared state (tx_arc, status, error_buffer).

Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:136-174

RpcStreamEvent::End

Signals stream completion. Final actions depend on RpcResultStatus:

  1. RpcResultStatus::MethodNotFound : Constructs RpcServiceError::Rpc with RpcServiceErrorCode::NotFound and buffered error payload
  2. RpcResultStatus::Fail : Sends RpcServiceError::Rpc with RpcServiceErrorCode::Fail
  3. RpcResultStatus::SystemError : Sends RpcServiceError::Rpc with RpcServiceErrorCode::System and buffered error payload
  4. RpcResultStatus::Success : Closes the channel normally (no error sent)

The DynamicSender is taken from the Option wrapper and dropped, closing the channel.

Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:175-245

RpcStreamEvent::Error

Indicates a framing protocol error (e.g., malformed frames, decode errors). Sends RpcServiceError::Transport to the DynamicReceiver and also signals the readiness channel if still waiting for the header. The DynamicSender is dropped immediately.

Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:246-285


Error Handling in Streams

Error Propagation Path

Pre-Dispatch Errors

Before the dispatcher registers the request, errors are returned immediately from call_rpc_streaming():

  • Disconnected client : RpcServiceError::Transport(io::ErrorKind::ConnectionAborted)
  • Dispatcher registration failure : RpcServiceError::Transport with error details

Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:44-53 extensions/muxio-rpc-service-caller/src/caller_interface.rs:315-328

Post-Dispatch Errors

After the dispatcher registers the request, errors are sent through the DynamicReceiver stream:

  • Framing errors : RpcServiceError::Transport from RpcStreamEvent::Error
  • RPC-level errors : RpcServiceError::Rpc with appropriate RpcServiceErrorCode based on RpcResultStatus

Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:185-238 extensions/muxio-rpc-service-caller/src/caller_interface.rs:246-285


sequenceDiagram
    participant Caller as "call_rpc_streaming()"
    participant Dispatcher as "RpcDispatcher"
    participant RecvFn as "recv_fn closure"
    participant ReadyChan as "oneshot channel"
    
    Caller->>ReadyChan: Create (ready_tx, ready_rx)
    Caller->>Dispatcher: call(request, recv_fn)
    Dispatcher-->>Caller: Returns encoder
    Caller->>ReadyChan: .await on ready_rx
    
    Note over RecvFn: Transport receives response
    RecvFn->>RecvFn: RpcStreamEvent::Header
    RecvFn->>RecvFn: Extract RpcResultStatus
    RecvFn->>ReadyChan: ready_tx.send(Ok(()))
    
    ReadyChan-->>Caller: Ok(())
    Caller-->>Caller: Return (encoder, receiver)

Readiness Signaling

The call_rpc_streaming method uses a oneshot channel to signal when the RPC stream is ready to be consumed. This ensures the caller doesn’t begin processing until the header has been received and the RpcResultStatus is known.

Signaling Mechanism

Signaling on Error

If an error occurs before receiving the header (e.g., RpcStreamEvent::Error), the readiness channel is signaled with Err(io::Error) instead of Ok(()).

Implementation Details

The readiness sender is stored in Arc<StdMutex<Option<oneshot::Sender>>> and taken using mem::take() when signaling to ensure it’s only used once. The recv_fn closure acquires this mutex synchronously with .lock().unwrap().

Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:78-80 extensions/muxio-rpc-service-caller/src/caller_interface.rs:127-134 extensions/muxio-rpc-service-caller/src/caller_interface.rs:332-348


Complete Streaming RPC Flow

End-to-End Sequence

Synchronization Points

  1. Channel Creation : DynamicSender and DynamicReceiver created synchronously in call_rpc_streaming
  2. Dispatcher Registration : RpcDispatcher::call() registers the request and creates RpcStreamEncoder
  3. Readiness Await : call_rpc_streaming blocks on ready_rx.await until header received
  4. Header Processing : First RpcStreamEvent::Header unblocks the caller
  5. Chunk Processing : Each RpcStreamEvent::PayloadChunk flows through the channel
  6. Stream Termination : RpcStreamEvent::End closes the channel

Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs:33-349


Integration with Transport Implementations

Tokio RPC Client Usage

The RpcClient struct in muxio-tokio-rpc-client implements RpcServiceCallerInterface, providing the transport-specific get_emit_fn() that sends binary data over the WebSocket connection.

When streaming RPC is used:

  1. call_rpc_streaming() creates the channels and registers with dispatcher
  2. get_emit_fn() sends initial request frames via tx.send(WsMessage::Binary(chunk))
  3. Receive loop processes incoming WebSocket binary messages
  4. endpoint.read_bytes() called on received bytes, which dispatches to recv_fn
  5. recv_fn forwards chunks to DynamicSender, which application receives via DynamicReceiver

Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:158-178 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:289-313

Connection State Impact

If the client disconnects during streaming:

  1. is_connected() returns false
  2. Subsequent call_rpc_streaming() attempts fail immediately with ConnectionAborted
  3. Pending streams receive RpcStreamEvent::Error from dispatcher’s fail_all_pending_requests()
  4. Transport errors propagate through DynamicReceiver as RpcServiceError::Transport

Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:99-108 extensions/muxio-rpc-service-caller/src/caller_interface.rs:44-53


Testing Patterns

Mock Client Testing

The dynamic channel mechanism is exercised by creating mock implementations of RpcServiceCallerInterface; see the tests referenced below.

Sources: extensions/muxio-rpc-service-caller/tests/dynamic_channel_tests.rs:101-167

Integration Testing

Full integration tests with real client/server validate streaming across the WebSocket transport, testing scenarios like:

  • Large payloads chunked correctly
  • Bounded channel backpressure
  • Early disconnect cancels pending streams
  • Error status codes propagate correctly

Sources: extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:168-292



Platform Implementations

Relevant source files

Purpose and Scope

This page documents the concrete platform-specific implementations that enable muxio to run in different runtime environments. These implementations provide the bridge between the runtime-agnostic RPC framework core and actual platform capabilities, including async runtimes, network protocols, and JavaScript interop.

The three production-ready implementations are:

  • muxio-tokio-rpc-server : Native server using Tokio runtime and Axum framework
  • muxio-tokio-rpc-client : Native client using Tokio runtime and tokio-tungstenite
  • muxio-wasm-rpc-client : Browser client using wasm-bindgen and JavaScript WebSocket API

For details on the RPC abstraction layer these platforms implement, see RPC Framework. For guidance on creating custom platform implementations, see Extending the Framework.


Overview

The muxio framework provides three platform implementations, each targeting different deployment environments while sharing the same core RPC abstractions and service definitions. All implementations use WebSocket as the transport protocol and communicate using the same binary framing format.

| Implementation | Runtime Environment | Primary Dependencies | Typical Use Cases |
|---|---|---|---|
| muxio-tokio-rpc-server | Native (Tokio async) | axum, tokio-tungstenite | HTTP/WebSocket servers, microservices |
| muxio-tokio-rpc-client | Native (Tokio async) | tokio, tokio-tungstenite | CLI tools, native apps, integration tests |
| muxio-wasm-rpc-client | WebAssembly (browser) | wasm-bindgen, js-sys | Web applications, browser extensions |

All implementations are located in extensions/ and follow the workspace structure defined in Cargo.toml:19-31

Sources: Cargo.toml:19-31 README.md:38-40 Cargo.lock:897-954


Platform Integration Architecture

The following diagram shows how platform implementations integrate with the RPC framework and muxio core components:

Sources: Cargo.toml:39-47 Cargo.lock:897-954 README.md:38-51

graph TB
    subgraph "Application Layer"
        APP["Application Code\nService Methods"]
end
    
    subgraph "RPC Abstraction Layer"
        CALLER["RpcServiceCallerInterface\nClient-side trait"]
ENDPOINT["RpcServiceEndpointInterface\nServer-side trait"]
SERVICE["RpcMethodPrebuffered\nService definitions"]
end
    
    subgraph "Transport Implementations"
        TOKIO_SERVER["muxio-tokio-rpc-server\nRpcServer struct"]
TOKIO_CLIENT["muxio-tokio-rpc-client\nRpcClient struct"]
WASM_CLIENT["muxio-wasm-rpc-client\nRpcWasmClient struct"]
end
    
    subgraph "Core Layer"
        DISPATCHER["RpcDispatcher\nRequest correlation"]
FRAMING["Binary Framing Protocol\nStream multiplexing"]
end
    
    subgraph "Network Layer"
        WS_SERVER["tokio_tungstenite\nWebSocket server"]
WS_CLIENT_NATIVE["tokio_tungstenite\nWebSocket client"]
WS_CLIENT_WASM["Browser WebSocket API\nvia wasm_bindgen"]
end
    
 
   APP --> SERVICE
 
   SERVICE --> CALLER
 
   SERVICE --> ENDPOINT
    
 
   CALLER --> TOKIO_CLIENT
 
   CALLER --> WASM_CLIENT
    
 
   ENDPOINT --> TOKIO_SERVER
    
 
   TOKIO_SERVER --> DISPATCHER
 
   TOKIO_CLIENT --> DISPATCHER
 
   WASM_CLIENT --> DISPATCHER
    
 
   DISPATCHER --> FRAMING
    
 
   TOKIO_SERVER --> WS_SERVER
 
   TOKIO_CLIENT --> WS_CLIENT_NATIVE
 
   WASM_CLIENT --> WS_CLIENT_WASM
    
 
   FRAMING --> WS_SERVER
 
   FRAMING --> WS_CLIENT_NATIVE
 
   FRAMING --> WS_CLIENT_WASM

Tokio RPC Server

The extensions/muxio-tokio-rpc-server/ crate provides a production-ready WebSocket server implementation using the Tokio async runtime. The central type is RpcServer, which combines Axum’s HTTP/WebSocket capabilities with the RpcServiceEndpointInterface trait for handler registration.

graph TB
    subgraph "RpcServer Structure"
        SERVER["RpcServer\nArc-wrapped"]
ENDPOINT_FIELD["endpoint: Arc&lt;RpcServiceEndpoint&gt;"]
CALLER_FIELD["caller: Arc&lt;RpcServiceCaller&gt;"]
end
    
    subgraph "Axum Integration"
        ROUTER["axum::Router"]
WS_UPGRADE["WebSocketUpgrade handler"]
WS_ROUTE["/ws route"]
end
    
    subgraph "Connection Handler"
        ACCEPT_CONN["handle_websocket_connection()"]
TOKIO_SPAWN["tokio::spawn per connection"]
MSG_LOOP["Message read/write loop"]
end
    
    subgraph "Dependencies"
        AXUM_CRATE["axum v0.8.4"]
TOKIO_TUNG["tokio-tungstenite v0.26.2"]
TOKIO_RT["tokio v1.45.1"]
end
    
 
   SERVER --> ENDPOINT_FIELD
 
   SERVER --> CALLER_FIELD
 
   SERVER --> ROUTER
    
 
   ROUTER --> WS_ROUTE
 
   WS_ROUTE --> WS_UPGRADE
 
   WS_UPGRADE --> ACCEPT_CONN
    
 
   ACCEPT_CONN --> TOKIO_SPAWN
 
   TOKIO_SPAWN --> MSG_LOOP
    
 
   ROUTER --> AXUM_CRATE
 
   WS_UPGRADE --> TOKIO_TUNG
 
   TOKIO_SPAWN --> TOKIO_RT

Core Components

Sources: Cargo.lock:917-933 README.md:94-128

Server Lifecycle

The server follows this initialization and operation sequence:

| Phase | Method | Description |
|---|---|---|
| Construction | RpcServer::new(config) | Creates server with optional configuration, initializes endpoint and caller |
| Handler Registration | endpoint().register_prebuffered() | Registers RPC method handlers before starting server |
| Binding | serve_with_listener(listener) | Accepts a TcpListener and starts serving on it |
| Connection Acceptance | Internal | Axum router upgrades HTTP connections to WebSocket |
| Per-Connection Spawn | Internal | Each WebSocket connection gets its own tokio::spawn task |
| Message Processing | Internal | Reads WebSocket binary messages, feeds to RpcDispatcher |

A complete, runnable example appears in README.md:94-128; it is not reproduced here, but a simplified sketch follows the sources below.

Sources: README.md:94-128
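
A hedged sketch of server startup following the lifecycle table above; the constructor argument, the config type, and the surrounding error handling are assumptions:

```rust
use tokio::net::TcpListener;

let server = RpcServer::new(Default::default()); // config type assumed

// Handlers must be registered before serving (see Handler Registration on the
// Service Endpoint Interface page):
// server.endpoint().register_prebuffered(Add::METHOD_ID, ...).await?;

// Bind a listener and serve; the Axum router upgrades /ws connections to
// WebSocket internally and spawns one task per connection.
let listener = TcpListener::bind("127.0.0.1:8080").await?;
server.serve_with_listener(listener).await?;
```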

Integration with Axum

The server uses Axum’s router to expose a WebSocket endpoint at /ws. The implementation leverages:

  • axum::extract::ws::WebSocketUpgrade for protocol upgrade
  • axum::Router::new().route("/ws", get(handler)) for routing
  • Per-connection state isolation using Arc cloning

Sources: Cargo.lock:80-114 Cargo.lock:917-933


Tokio RPC Client

The extensions/muxio-tokio-rpc-client/ crate provides a native client implementation that establishes WebSocket connections and makes RPC calls using the Tokio runtime. The primary type is RpcClient, which implements RpcServiceCallerInterface.

graph TB
    subgraph "RpcClient Structure"
        CLIENT["RpcClient"]
INNER["ClientInner\nArc&lt;TokioMutex&lt;...&gt;&gt;"]
DISPATCHER_REF["dispatcher: Arc&lt;TokioMutex&lt;RpcDispatcher&gt;&gt;"]
ENDPOINT_REF["endpoint: Arc&lt;RpcServiceEndpoint&gt;"]
STATE_HANDLER["state_handler: Option&lt;Callback&gt;"]
end
    
    subgraph "Background Tasks"
        READ_TASK["tokio::spawn read_task"]
WRITE_TASK["tokio::spawn write_task"]
STATE_TASK["State change publisher"]
end
    
    subgraph "WebSocket Communication"
        WS_STREAM["WebSocketStream"]
SPLIT_WRITE["SplitSink&lt;write&gt;"]
SPLIT_READ["SplitStream&lt;read&gt;"]
end
    
    subgraph "Dependencies"
        TOKIO_TUNG_CLI["tokio-tungstenite v0.26.2"]
TOKIO_RT_CLI["tokio v1.45.1"]
FUTURES["futures-util"]
end
    
 
   CLIENT --> INNER
 
   INNER --> DISPATCHER_REF
 
   INNER --> ENDPOINT_REF
 
   INNER --> STATE_HANDLER
    
 
   CLIENT --> READ_TASK
 
   CLIENT --> WRITE_TASK
    
 
   READ_TASK --> SPLIT_READ
 
   WRITE_TASK --> SPLIT_WRITE
    
 
   SPLIT_READ --> WS_STREAM
 
   SPLIT_WRITE --> WS_STREAM
    
 
   WS_STREAM --> TOKIO_TUNG_CLI
 
   READ_TASK --> TOKIO_RT_CLI
 
   WRITE_TASK --> TOKIO_RT_CLI
 
   SPLIT_READ --> FUTURES
 
   SPLIT_WRITE --> FUTURES

Client Architecture

Sources: Cargo.lock:898-916 README.md:136-142

Connection Establishment

The client connection follows this sequence:

  1. DNS Resolution & TCP Connection: tokio_tungstenite::connect_async() establishes TCP connection
  2. WebSocket Handshake : HTTP upgrade to WebSocket protocol
  3. Stream Splitting : WebSocket stream split into separate read/write halves using futures::StreamExt::split()
  4. Background Task Spawn : Two tasks spawned for bidirectional communication
  5. State Notification : Connection state changes from Connecting to Connected

Example from README.md:136-142:
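The README listing is not embedded here; a minimal sketch of connection establishment and state observation, assuming the handler is passed as a plain closure and that RpcTransportState implements Debug (both assumptions):

```rust
use muxio_rpc_service_caller::{RpcServiceCallerInterface, RpcTransportState}; // paths assumed
use muxio_tokio_rpc_client::RpcClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // connect_async, the WebSocket handshake, and background task spawning
    // all happen inside new().
    let client = RpcClient::new("127.0.0.1", 8080).await?;

    // Observe the Connecting -> Connected -> Disconnected transitions.
    client
        .set_state_change_handler(|state: RpcTransportState| {
            println!("transport state: {state:?}");
        })
        .await;

    Ok(())
}
```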

Sources: README.md:136-142

Arc-Based Lifecycle Management

The client uses Arc reference counting for shared ownership:

  • RpcDispatcher wrapped in Arc<TokioMutex<>> for concurrent access
  • RpcServiceEndpoint wrapped in Arc<> for shared handler registry
  • Background tasks hold Arc clones to prevent premature cleanup
  • When all Arc references drop, connection automatically closes

This design enables:

  • Multiple concurrent RPC calls from different tasks
  • Bidirectional RPC (client can handle incoming calls from server)
  • Automatic cleanup on disconnect without manual resource management

Sources: Cargo.lock:898-916

Background Task Architecture

The client spawns two persistent Tokio tasks:

| Task | Purpose | Error Handling |
|---|---|---|
| Read Task | Reads WebSocket binary messages, feeds bytes to RpcDispatcher | Exits on error, triggers state change to Disconnected |
| Write Task | Receives bytes from RpcDispatcher write callback, sends as WebSocket messages | Exits on error, triggers state change to Disconnected |

Both tasks communicate through channels and callbacks, maintaining the non-async core design of muxio.

Sources: Cargo.lock:898-916


WASM RPC Client

The extensions/muxio-wasm-rpc-client/ crate enables RPC communication from browser environments by bridging Rust WebAssembly code with JavaScript’s native WebSocket API. This implementation demonstrates muxio’s cross-platform capability without requiring Tokio.

graph TB
    subgraph "Rust WASM Layer"
        WASM_CLIENT["RpcWasmClient"]
STATIC_REF["MUXIO_STATIC_RPC_CLIENT_REF\nthread_local! RefCell"]
DISPATCHER_WASM["RpcDispatcher"]
ENDPOINT_WASM["RpcServiceEndpoint"]
end
    
    subgraph "wasm-bindgen Bridge"
        EXTERN_FN["#[wasm_bindgen]\nstatic_muxio_write_bytes()"]
CLOSURE["Closure::wrap callbacks"]
JS_VALUE["JsValue conversions"]
end
    
    subgraph "JavaScript Environment"
        WS_API["new WebSocket(url)"]
ONMESSAGE["ws.onmessage event"]
ONERROR["ws.onerror event"]
ONOPEN["ws.onopen event"]
SEND["ws.send(bytes)"]
end
    
    subgraph "Dependencies"
        WASM_BINDGEN_CRATE["wasm-bindgen v0.2.100"]
JS_SYS_CRATE["js-sys v0.3.77"]
WASM_FUTURES_CRATE["wasm-bindgen-futures v0.4.50"]
end
    
 
   WASM_CLIENT --> STATIC_REF
 
   WASM_CLIENT --> DISPATCHER_WASM
 
   WASM_CLIENT --> ENDPOINT_WASM
    
 
   DISPATCHER_WASM --> EXTERN_FN
 
   EXTERN_FN --> SEND
    
 
   ONMESSAGE --> CLOSURE
 
   CLOSURE --> DISPATCHER_WASM
    
 
   EXTERN_FN --> WASM_BINDGEN_CRATE
 
   CLOSURE --> WASM_BINDGEN_CRATE
 
   JS_VALUE --> JS_SYS_CRATE
 
   WS_API --> JS_SYS_CRATE

WASM Bridge Architecture

Sources: Cargo.lock:934-954 README.md:51

JavaScript Interop Pattern

The WASM client uses a bidirectional byte-passing bridge between Rust and JavaScript:

Rust → JavaScript (outgoing data):

  1. RpcDispatcher invokes write callback with Vec<u8>
  2. Callback invokes #[wasm_bindgen] extern function static_muxio_write_bytes()
  3. JavaScript receives Uint8Array and calls WebSocket.send()

JavaScript → Rust (incoming data):

  1. JavaScript ws.onmessage receives ArrayBuffer
  2. Converts to Uint8Array and passes to Rust entry point
  3. Rust code accesses MUXIO_STATIC_RPC_CLIENT_REF and feeds bytes to RpcDispatcher

This design eliminates async runtime dependencies while maintaining compatibility with the same service definitions used by native clients.

Sources: Cargo.lock:934-954 README.md:51-52

Static Client Pattern

The WASM client uses a thread-local static reference for JavaScript access:
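The static reference itself is not shown in the extracted sources; a minimal sketch of the pattern, using the names documented in this section (the crate path is assumed, and the emit closure body is a placeholder for the wasm-bindgen call):

```rust
use std::cell::RefCell;
use std::sync::Arc;
use muxio_wasm_rpc_client::RpcWasmClient; // crate path assumed

thread_local! {
    // Single per-module slot that JavaScript-facing exports can reach.
    static MUXIO_STATIC_RPC_CLIENT_REF: RefCell<Option<Arc<RpcWasmClient>>> =
        RefCell::new(None);
}

/// Idempotent initializer: later calls leave the existing client in place.
pub fn init_static_client() {
    MUXIO_STATIC_RPC_CLIENT_REF.with(|cell| {
        let mut slot = cell.borrow_mut();
        if slot.is_none() {
            *slot = Some(Arc::new(RpcWasmClient::new(|bytes| {
                // In the real crate this forwards to the #[wasm_bindgen]
                // extern static_muxio_write_bytes(&bytes); elided here.
                let _ = bytes;
            })));
        }
    });
}
```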

This pattern enables:

  • Simple JavaScript API that doesn’t require passing Rust objects
  • Single global client instance per WebAssembly module
  • Automatic memory management through Arc reference counting

Sources: Cargo.lock:934-954

Browser Compatibility

The WASM client compiles to wasm32-unknown-unknown target and relies on standard browser APIs:

  • WebSocket constructor for connection establishment
  • WebSocket.send() for binary message transmission
  • WebSocket.onmessage for binary message reception
  • WebSocket.onerror and WebSocket.onclose for error handling

No polyfills or special browser features are required beyond standard WebSocket support, which is available in all modern browsers.

Sources: Cargo.lock:934-954 Cargo.lock:1637-1646 Cargo.lock:1663-1674


WebSocket Protocol Selection

All transport implementations use WebSocket as the underlying protocol for several reasons:

| Criterion | Rationale |
|---|---|
| Binary support | Native support for binary frames aligns with muxio’s binary framing protocol |
| Bidirectional | Full-duplex communication enables server-initiated messages and streaming |
| Browser compatibility | Widely supported in all modern browsers via standard JavaScript API |
| Connection persistence | Single long-lived connection reduces overhead of multiple HTTP requests |
| Framing built-in | WebSocket’s message framing complements muxio’s multiplexing layer |

WebSocket messages carry the binary-serialized RPC frames defined by the muxio core protocol. The transport layer is responsible for:

  1. Establishing and maintaining WebSocket connections
  2. Converting between WebSocket binary messages and byte slices
  3. Handling connection lifecycle events (connect, disconnect, errors)
  4. Providing state change notifications to application code

Sources: Cargo.lock:1446-1455 Cargo.lock:1565-1580 README.md:32


stateDiagram-v2
    [*] --> Disconnected : Initial state
    
    Disconnected --> Connecting: RpcClient::new() called
    
    Connecting --> Connected : WebSocket handshake success
    Connecting --> Disconnected : Connection failure DNS error Network timeout
    
    Connected --> Disconnected : Network error Server closes connection Client drop
    
    Disconnected --> [*] : All Arc references dropped
    
    note right of Disconnected
        RpcTransportState::Disconnected
    end note
    
    note right of Connecting
        RpcTransportState::Connecting
    end note
    
    note right of Connected
        RpcTransportState::Connected
    end note

Connection Lifecycle and State Management

All client implementations (RpcClient and RpcWasmClient) implement connection state tracking through the RpcTransportState enum. State changes are exposed to application code via callback handlers, enabling reactive connection management.

Connection State Machine

Sources: README.md:138-141

State Change Handler Registration

Applications register state change callbacks using set_state_change_handler():
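A hedged sketch of handler registration, written against the generic caller interface so it applies to both client types; the crate paths and the exact handler parameter type (a plain closure is shown) are assumptions:

```rust
use muxio_rpc_service_caller::{RpcServiceCallerInterface, RpcTransportState}; // paths assumed

// Log every transport state transition; works for RpcClient and RpcWasmClient
// because both implement RpcServiceCallerInterface.
async fn watch_transport_state<C: RpcServiceCallerInterface>(client: &C) {
    client
        .set_state_change_handler(|state| {
            let label = match state {
                RpcTransportState::Connected => "connected: ready for RPC calls",
                RpcTransportState::Connecting => "connecting: hold outgoing requests",
                RpcTransportState::Disconnected => "disconnected: safe to drop the client",
                _ => "state changed", // guards against additional variants
            };
            println!("{label}");
        })
        .await;
}
```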

The handler receives RpcTransportState enum variants:

  • RpcTransportState::Disconnected - Not connected, safe to drop client
  • RpcTransportState::Connecting - Connection in progress, should not send requests
  • RpcTransportState::Connected - Fully connected, ready for RPC calls

Sources: README.md:138-141 README.md:75-76

Automatic Cleanup on Disconnect

Both client implementations use Arc reference counting for automatic resource cleanup:

| Resource | Cleanup Mechanism | Trigger |
|---|---|---|
| WebSocket connection | Dropped when read/write tasks exit | Network error, server close, last Arc dropped |
| Background tasks | tokio::spawn tasks exit naturally | Connection close detected |
| Pending requests | RpcDispatcher returns errors for in-flight requests | State change to Disconnected |
| Stream decoders | Removed from RpcSession decoder map | Stream end or connection close |

When the last Arc<RpcClient> reference is dropped:

  1. Destructor signals background tasks to exit
  2. Read/write tasks complete their current iteration and exit
  3. WebSocket connection closes gracefully (if still open)
  4. All pending request futures resolve with connection errors
  5. State transitions to Disconnected

This design eliminates manual cleanup and prevents resource leaks.

Sources: Cargo.lock:898-916 Cargo.lock:934-954

State-Based Application Logic

Common patterns using state callbacks (a sketch follows the list):

  • UI connection indicator: update a status badge whenever the transport state changes
  • Automatic reconnection: schedule a reconnect attempt after a Disconnected event
  • Request queueing: buffer outgoing requests until the state returns to Connected
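A minimal sketch of the reconnection pattern, assuming a Tokio client and the documented RpcTransportState variants; the supervisor channel is purely illustrative and not part of the crate:

```rust
use std::sync::Arc;
use muxio_rpc_service_caller::{RpcServiceCallerInterface, RpcTransportState}; // paths assumed
use muxio_tokio_rpc_client::RpcClient;
use tokio::sync::mpsc;

// Forward Disconnected events to a supervisor task (illustrative) that can
// decide when to rebuild the client.
async fn install_reconnect_hook(client: Arc<RpcClient>, supervisor: mpsc::Sender<()>) {
    client
        .set_state_change_handler(move |state| {
            if matches!(state, RpcTransportState::Disconnected) {
                // try_send so the non-async callback never blocks
                let _ = supervisor.try_send(());
            }
        })
        .await;
}
```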

Sources: README.md:138-141


graph TD
    subgraph "Tokio Server Stack"
        TOKIO_SRV["muxio-tokio-rpc-server"]
AXUM["axum\nv0.8.4"]
TOKIO_1["tokio\nv1.45.1"]
TUNGSTENITE_1["tokio-tungstenite\nv0.26.2"]
end
    
    subgraph "Tokio Client Stack"
        TOKIO_CLI["muxio-tokio-rpc-client"]
TOKIO_2["tokio\nv1.45.1"]
TUNGSTENITE_2["tokio-tungstenite\nv0.26.2"]
end
    
    subgraph "WASM Client Stack"
        WASM_CLI["muxio-wasm-rpc-client"]
WASM_BINDGEN["wasm-bindgen\nv0.2.100"]
JS_SYS_DEP["js-sys\nv0.3.77"]
WASM_FUTURES["wasm-bindgen-futures\nv0.4.50"]
end
    
    subgraph "Shared RPC Layer"
        RPC_SERVICE["muxio-rpc-service"]
RPC_CALLER["muxio-rpc-service-caller"]
RPC_ENDPOINT["muxio-rpc-service-endpoint"]
end
    
    subgraph "Core"
        MUXIO_CORE["muxio"]
end
    
 
   TOKIO_SRV --> AXUM
 
   TOKIO_SRV --> TOKIO_1
 
   TOKIO_SRV --> TUNGSTENITE_1
 
   TOKIO_SRV --> RPC_ENDPOINT
    
 
   TOKIO_CLI --> TOKIO_2
 
   TOKIO_CLI --> TUNGSTENITE_2
 
   TOKIO_CLI --> RPC_CALLER
    
 
   WASM_CLI --> WASM_BINDGEN
 
   WASM_CLI --> JS_SYS_DEP
 
   WASM_CLI --> WASM_FUTURES
 
   WASM_CLI --> RPC_CALLER
    
 
   RPC_ENDPOINT --> RPC_SERVICE
 
   RPC_CALLER --> RPC_SERVICE
 
   RPC_SERVICE --> MUXIO_CORE

Dependency Graph

The diagram above shows the concrete dependency relationships between transport implementations and their supporting crates.

Sources: Cargo.lock:917-933 Cargo.lock:898-916 Cargo.lock:934-954 Cargo.toml:39-64


Cross-Platform Service Definition Sharing

A key design principle is that all transport implementations can consume the same service definitions. This is achieved through the RpcMethodPrebuffered trait, which defines methods with compile-time generated method IDs and encoding/decoding logic.

| Component | Role | Shared Across Transports |
|---|---|---|
| RpcMethodPrebuffered trait | Defines RPC method signature | ✓ Yes |
| encode_request() / decode_request() | Parameter serialization | ✓ Yes |
| encode_response() / decode_response() | Result serialization | ✓ Yes |
| METHOD_ID constant | Compile-time hash of method name | ✓ Yes |
| Transport connection logic | WebSocket handling | ✗ No (platform-specific) |

Example service definition usage from README.md:144-151:
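The README listing is not reproduced here; as a hedged sketch, a shared method such as Add from example-muxio-rpc-service-definition can be invoked through any caller implementation. The RpcCallPrebuffered helper trait, the call signature, and the Vec<f64> -> f64 types are assumptions:

```rust
use example_muxio_rpc_service_definition::prebuffered::Add; // shared contract crate
use muxio_rpc_service_caller::prebuffered::RpcCallPrebuffered; // helper trait name assumed
use muxio_rpc_service_caller::RpcServiceCallerInterface;

// The body never names a concrete transport, so the same function works with
// RpcClient (Tokio) and RpcWasmClient (WASM).
async fn add_numbers<C: RpcServiceCallerInterface>(
    client: &C,
    numbers: Vec<f64>,
) -> Result<f64, Box<dyn std::error::Error>> {
    let sum = Add::call(client, numbers).await?;
    Ok(sum)
}
```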

The same service definitions work identically with RpcClient (Tokio), RpcWasmClient (WASM), and any future transport implementations that implement RpcServiceCallerInterface.

Sources: README.md:47-49 README.md:69-73 README.md:144-151 Cargo.toml:42


Implementation Selection Guidelines

Choose the appropriate transport implementation based on your deployment target:

Use muxio-tokio-rpc-server when:

  • Building server-side applications
  • Need to handle multiple concurrent client connections
  • Require integration with existing Tokio/Axum infrastructure
  • Operating in native Rust environments

Use muxio-tokio-rpc-client when:

  • Building native client applications (CLI tools, desktop apps)
  • Writing integration tests for server implementations
  • Need Tokio’s async runtime features
  • Operating in native Rust environments

Use muxio-wasm-rpc-client when:

  • Building web applications that run in browsers
  • Creating browser extensions
  • Need to communicate with servers from JavaScript contexts
  • Targeting the wasm32-unknown-unknown platform

For detailed usage examples of each transport, refer to the subsections Tokio RPC Server, Tokio RPC Client, and WASM RPC Client.

Sources: README.md:38-51 Cargo.toml:19-31


Tokio RPC Server

Relevant source files

Purpose and Scope

The muxio-tokio-rpc-server crate provides a production-ready WebSocket RPC server implementation for native Rust environments using the Tokio async runtime. This server integrates the muxio RPC framework with Axum’s HTTP/WebSocket capabilities and tokio-tungstenite’s WebSocket protocol handling.

This document covers the server-side implementation for native applications. For client-side Tokio implementations, see Tokio RPC Client. For browser-based clients, see WASM RPC Client. For general information about service definitions and endpoint interfaces, see Service Endpoint Interface.

Sources:


Architecture Overview

The Tokio RPC Server sits at the intersection of three major subsystems: Axum’s HTTP framework, the muxio core multiplexing layer, and the RPC service framework. It provides a complete server implementation that accepts WebSocket connections, manages multiple concurrent clients, and dispatches RPC requests to registered handlers.

graph TB
    subgraph "Application Layer"
        APP["Application Code\nService Handler Implementations"]
end
    
    subgraph "Server Interface Layer - muxio-tokio-rpc-server"
        RPC_SERVER["RpcServer\nMain server struct\nHandler registration\nConnection management"]
WS_HANDLER["WebSocket Handler\nhandle_websocket_connection\nPer-connection async task"]
end
    
    subgraph "RPC Framework Integration"
        ENDPOINT["RpcServiceEndpointInterface\nDispatcher to handlers\nMethod routing"]
CALLER["RpcServiceCallerInterface\nServer-to-client calls\nBidirectional RPC"]
DISPATCHER["RpcDispatcher\nRequest correlation\nResponse routing"]
end
    
    subgraph "Transport Layer"
        AXUM["Axum Framework\nHTTP routing\nWebSocket upgrade"]
TUNGSTENITE["tokio-tungstenite\nWebSocket protocol\nFrame handling"]
end
    
    subgraph "Core Multiplexing"
        SESSION["RpcSession\nStream multiplexing\nFrame mux/demux"]
end
    
 
   APP --> RPC_SERVER
 
   RPC_SERVER --> WS_HANDLER
 
   RPC_SERVER --> ENDPOINT
 
   WS_HANDLER --> CALLER
 
   WS_HANDLER --> ENDPOINT
 
   WS_HANDLER --> DISPATCHER
 
   WS_HANDLER --> AXUM
 
   AXUM --> TUNGSTENITE
 
   DISPATCHER --> SESSION
 
   ENDPOINT --> DISPATCHER
 
   CALLER --> DISPATCHER
 
   TUNGSTENITE --> SESSION
    
    style RPC_SERVER fill:#e1f5ff
    style WS_HANDLER fill:#e1f5ff

Server Component Stack

Description: The server is built in layers. The RpcServer struct provides the high-level interface for configuring the server and registering handlers. When a WebSocket connection arrives via Axum, a dedicated handle_websocket_connection task is spawned. This task instantiates both an RpcServiceEndpointInterface (for handling incoming client requests) and an RpcServiceCallerInterface (for making server-initiated calls to the client), both sharing a single RpcDispatcher and RpcSession for the connection.

Sources:


Key Components

RpcServer Structure

The main server structure manages the overall server lifecycle, route configuration, and handler registry. It integrates with Axum’s routing system to expose WebSocket endpoints.

| Component | Type | Purpose |
|---|---|---|
| RpcServer | Main struct | Server configuration, handler registration, Axum router integration |
| Handler registry | Internal storage | Maintains registered RPC service handlers |
| Axum router | axum::Router | HTTP routing and WebSocket upgrade handling |
| Connection tracking | State management | Tracks active connections and connection state |

Key Responsibilities:

  • Registering service handlers via RpcServiceEndpointInterface
  • Creating Axum routes for WebSocket endpoints
  • Spawning per-connection tasks
  • Managing server lifecycle (start, stop, graceful shutdown)

Sources:

WebSocket Connection Handler

Each WebSocket connection is handled by a dedicated async task that manages the complete lifecycle of that connection.

Description: The connection handler orchestrates all communication for a single client. It creates a shared RpcDispatcher, instantiates both endpoint and caller interfaces, and manages the read/write loops for WebSocket frames. This enables full bidirectional RPC - clients can call server methods, and the server can call client methods over the same connection.

sequenceDiagram
    participant Axum as "Axum Router"
    participant Handler as "handle_websocket_connection"
    participant Endpoint as "RpcServiceEndpointInterface"
    participant Caller as "RpcServiceCallerInterface"
    participant Dispatcher as "RpcDispatcher"
    participant WS as "WebSocket\n(tokio-tungstenite)"
    
    Axum->>Handler: WebSocket upgrade
    Handler->>Handler: Create RpcDispatcher
    Handler->>Endpoint: Instantiate with dispatcher
    Handler->>Caller: Instantiate with dispatcher
    Handler->>Handler: Spawn read loop
    Handler->>Handler: Spawn write loop
    
    Note over Handler,WS: Connection Active
    
    WS->>Handler: Binary frame received
    Handler->>Dispatcher: Feed bytes
    Dispatcher->>Endpoint: Route request
    Endpoint->>Handler: Handler result
    Handler->>Dispatcher: Send response
    Dispatcher->>WS: Binary frames
    
    Note over Handler,WS: Bidirectional RPC Support
    
    Handler->>Caller: call(request)
    Caller->>Dispatcher: Outgoing request
    Dispatcher->>WS: Binary frames
    WS->>Handler: Response frames
    Handler->>Dispatcher: Feed bytes
    Dispatcher->>Caller: Deliver response
    
    Note over Handler,WS: Connection Closing
    
    WS->>Handler: Close frame
    Handler->>Endpoint: Cleanup
    Handler->>Caller: Cleanup
    Handler->>Handler: Task exit

Sources:


WebSocket Integration with Axum

The server leverages Axum’s built-in WebSocket support to handle the HTTP upgrade handshake and ongoing WebSocket communication.

graph LR
    subgraph "Axum Router Configuration"
        ROUTER["axum::Router"]
ROUTE["WebSocket Route\n/ws or custom path"]
UPGRADE["WebSocket Upgrade Handler\naxum::extract::ws"]
end
    
    subgraph "Handler Function"
        WS_FN["handle_websocket_connection\nAsync task per connection"]
EXTRACT["Extract WebSocket\nfrom HTTP upgrade"]
end
    
    subgraph "Connection State"
        DISPATCHER_STATE["Arc<Mutex<RpcDispatcher>>"]
ENDPOINT_STATE["Arc<RpcServiceEndpointInterface>"]
HANDLER_REGISTRY["Handler Registry\nRegistered service handlers"]
end
    
 
   ROUTER --> ROUTE
 
   ROUTE --> UPGRADE
 
   UPGRADE --> WS_FN
 
   WS_FN --> EXTRACT
 
   WS_FN --> DISPATCHER_STATE
 
   WS_FN --> ENDPOINT_STATE
 
   WS_FN --> HANDLER_REGISTRY
    
    style ROUTER fill:#fff4e1
    style WS_FN fill:#e1f5ff

Route Configuration

Description: The server configures an Axum route (typically at /ws) that handles WebSocket upgrades. When a client connects, Axum invokes the handler function with the upgraded WebSocket. The handler then creates the necessary RPC infrastructure (dispatcher, endpoint, caller) and enters the connection loop.

Integration Points:

  • axum::extract::ws::WebSocketUpgrade - HTTP to WebSocket upgrade
  • axum::extract::ws::WebSocket - Bidirectional WebSocket stream
  • axum::routing::get() - Route registration
  • tokio::spawn() - Connection task spawning

Sources:


Handler Registration and Dispatch

Service handlers are registered with the server before it starts, allowing the endpoint to route incoming requests to the appropriate handler implementations.

Handler Registration Flow

| Step | Action | Code Entity |
|---|---|---|
| 1 | Create RpcServer instance | RpcServer::new() or ::builder() |
| 2 | Register service handlers | register_handler<T>() method |
| 3 | Build Axum router | Internal router configuration |
| 4 | Start server | serve() or run() method |

Request Dispatch Mechanism

Description: When a complete RPC request is received and decoded, the RpcServiceEndpointInterface looks up the registered handler by method ID. The handler is invoked asynchronously, and its result is serialized and sent back through the same connection. Error handling is built into each layer, with errors propagated back to the client as RpcServiceError responses.

Sources:


graph TB
    subgraph "Per-Connection State"
        SHARED_DISPATCHER["Arc<TokioMutex<RpcDispatcher>>\nShared between endpoint & caller"]
end
    
    subgraph "Server-to-Client Direction"
        CALLER["RpcServiceCallerInterface\nServer-initiated calls"]
CALLER_LOGIC["Call Logic\nawait response from client"]
end
    
    subgraph "Client-to-Server Direction"
        ENDPOINT["RpcServiceEndpointInterface\nClient-initiated calls"]
HANDLER["Registered Handlers\nServer-side implementations"]
end
    
    subgraph "Transport Sharing"
        READ_LOOP["Read Loop\nReceive WebSocket frames\nFeed to dispatcher"]
WRITE_LOOP["Write Loop\nSend WebSocket frames\nFrom dispatcher"]
WEBSOCKET["WebSocket Connection\nSingle TCP connection"]
end
    
 
   CALLER --> SHARED_DISPATCHER
 
   ENDPOINT --> SHARED_DISPATCHER
 
   SHARED_DISPATCHER --> READ_LOOP
 
   SHARED_DISPATCHER --> WRITE_LOOP
 
   READ_LOOP --> WEBSOCKET
 
   WRITE_LOOP --> WEBSOCKET
 
   CALLER_LOGIC --> CALLER
 
   HANDLER --> ENDPOINT
    
    style SHARED_DISPATCHER fill:#e1f5ff
    style WEBSOCKET fill:#fff4e1

Bidirectional RPC Support

A key feature of the Tokio RPC Server is support for server-initiated RPC calls to connected clients. This enables push notifications, server-side events, and bidirectional request/response patterns.

Bidirectional Communication Architecture

Description: Both the RpcServiceEndpointInterface and RpcServiceCallerInterface share the same RpcDispatcher and underlying RpcSession. This allows both directions to multiplex their requests over the same WebSocket connection. The dispatcher ensures that request IDs are unique and that responses are routed to the correct awaiting caller, whether that’s a client waiting for a server response or a server waiting for a client response.

Use Cases:

  • Server pushing real-time updates to clients
  • Server requesting information from clients
  • Peer-to-peer style communication patterns
  • Event-driven architectures where both sides can initiate actions

Sources:


Connection Lifecycle Management

The server manages the complete lifecycle of each WebSocket connection, from initial upgrade through active communication to graceful or abrupt termination.

Connection States

| State | Description | Transitions |
|---|---|---|
| Connecting | WebSocket upgrade in progress | Connected |
| Connected | Active RPC communication | Disconnecting, Error |
| Disconnecting | Graceful shutdown initiated | Disconnected |
| Disconnected | Connection closed cleanly | Terminal state |
| Error | Connection error occurred | Disconnected |

Lifecycle State Machine

Description: The connection lifecycle is managed by the per-connection task. State transitions are tracked internally, and state change callbacks (if registered) are invoked to notify application code of connection events. Resources are properly cleaned up in all terminal states.

Resource Cleanup:

  • WebSocket stream closed
  • Per-stream decoders removed from RpcSession
  • Pending requests in RpcDispatcher completed with error
  • Connection removed from server’s tracking state
  • Task exit and memory deallocation

Sources:


graph LR
    subgraph "Inbound Path"
        WS_MSG["WebSocket Binary Message\ntokio_tungstenite::Message"]
BYTES["Bytes Extract\nVec<u8> or Bytes"]
FEED["dispatcher.feed_bytes()\nProcess frame data"]
DECODE["RpcSession Decode\nReconstruct streams"]
EVENT["RpcDispatcher Events\nComplete requests/responses"]
end
    
    subgraph "Outbound Path"
        RESPONSE["Response Data\nFrom handlers or caller"]
ENCODE["RpcSession Encode\nFrame into chunks"]
WRITE_QUEUE["Write Queue\nPending frames"]
WS_SEND["WebSocket Send\nBinary message"]
end
    
 
   WS_MSG --> BYTES
 
   BYTES --> FEED
 
   FEED --> DECODE
 
   DECODE --> EVENT
    
 
   RESPONSE --> ENCODE
 
   ENCODE --> WRITE_QUEUE
 
   WRITE_QUEUE --> WS_SEND
    
    style FEED fill:#e1f5ff
    style ENCODE fill:#e1f5ff

Binary Frame Processing

The server handles the low-level details of converting between WebSocket binary messages and muxio’s framing protocol.

Frame Processing Pipeline

Description: WebSocket messages arrive as binary frames. The server extracts the byte payload and feeds it to the RpcDispatcher, which uses the RpcSession to demultiplex and decode the frames. In the outbound direction, responses are encoded by RpcSession into chunked frames and queued for WebSocket transmission.

Frame Format Details:

  • Binary WebSocket frames only (text frames rejected)
  • Frames may be chunked at the WebSocket layer (transparent to muxio)
  • muxio’s internal chunking is based on DEFAULT_MAX_CHUNK_SIZE
  • Stream multiplexing allows multiple concurrent operations

Sources:


Server Configuration and Builder Pattern

The server typically provides a builder pattern for configuration before starting.

Configuration Options

| Option | Purpose | Default |
|---|---|---|
| Bind address | TCP address and port | 127.0.0.1:3000 (typical) |
| WebSocket path | Route path for WebSocket upgrade | /ws (typical) |
| Connection limits | Max concurrent connections | Unlimited (configurable) |
| Timeout settings | Connection and request timeouts | Platform defaults |
| Handler registry | Registered RPC service handlers | Empty (must register) |
| State change handlers | Lifecycle event callbacks | None (optional) |

Typical Server Setup Pattern

Description: The server is configured using a builder pattern. Handlers are registered before the server starts. Once build() is called, the server configures its Axum router with the WebSocket route. The run() or serve() method starts the server’s event loop.

Sources:


Integration with Tokio Async Runtime

The server is fully integrated with the Tokio async runtime, using async/await throughout.

Async Task Structure

| Task Type | Purpose | Lifetime |
|---|---|---|
| Server listener | Accept incoming connections | Until server shutdown |
| Per-connection task | Handle one WebSocket connection | Until connection closes |
| Read loop | Process incoming WebSocket frames | Per-connection lifetime |
| Write loop | Send outgoing WebSocket frames | Per-connection lifetime |
| Handler execution | Execute registered RPC handlers | Per-request duration |

Async Patterns:

  • async fn for all I/O operations
  • tokio::spawn() for concurrent task creation
  • tokio::select! for graceful shutdown coordination
  • TokioMutex for shared state protection
  • futures::stream::StreamExt for frame processing

Concurrency Model:

  • One Tokio task per connection
  • Handlers executed concurrently on Tokio thread pool
  • Multiplexed streams allow concurrent RPC calls per connection
  • Backpressure handled at WebSocket layer

Sources:


graph TB
    subgraph "Error Sources"
        WS_ERR["WebSocket Errors\nConnection failures\nProtocol violations"]
DECODE_ERR["Decode Errors\nInvalid frames\nMalformed data"]
HANDLER_ERR["Handler Errors\nBusiness logic failures\nRpcServiceError"]
DISPATCHER_ERR["Dispatcher Errors\nUnknown request ID\nMutex poisoning"]
end
    
    subgraph "Error Handling"
        LOG["tracing::error!\nStructured logging"]
CLIENT_RESP["Error Response\nRpcServiceError to client"]
DISCONNECT["Connection Close\nUnrecoverable errors"]
STATE_CALLBACK["State Change Handler\nNotify application"]
end
    
 
   WS_ERR --> LOG
 
   WS_ERR --> DISCONNECT
 
   DECODE_ERR --> LOG
 
   DECODE_ERR --> CLIENT_RESP
 
   HANDLER_ERR --> LOG
 
   HANDLER_ERR --> CLIENT_RESP
 
   DISPATCHER_ERR --> LOG
 
   DISPATCHER_ERR --> DISCONNECT
    
 
   DISCONNECT --> STATE_CALLBACK
    
    style LOG fill:#fff4e1
    style DISCONNECT fill:#ffe1e1

Error Handling and Observability

The server provides comprehensive error handling and integrates with the tracing ecosystem for observability.

Error Propagation

Description: Errors are categorized by severity and handled appropriately. Handler errors are serialized and sent to the client as RpcServiceError responses. Transport and protocol errors are logged and result in connection termination. All errors are emitted as structured tracing events for monitoring and debugging.

Tracing Integration:

  • #[tracing::instrument] on key functions
  • Span context for each connection
  • Structured fields for method IDs, request IDs, error codes
  • Error, warn, info, debug, and trace level events

Sources:


Performance Considerations

The Tokio RPC Server is designed for high-performance scenarios with multiple concurrent connections and high request throughput.

Performance Characteristics

| Metric | Behavior | Tuning |
|---|---|---|
| Connections | O(n) tasks, minimal per-connection overhead | Tokio thread pool size |
| Throughput | Stream multiplexing enables pipelining | DEFAULT_MAX_CHUNK_SIZE |
| Latency | Low - single async task hop per request | Handler execution time |
| Memory | Incremental - per-stream decoders only | Connection limits |

Optimization Strategies:

  • Stream multiplexing reduces head-of-line blocking
  • Binary protocol minimizes serialization overhead
  • Zero-copy where possible (using bytes::Bytes)
  • Efficient buffer management in RpcSession
  • Concurrent handler execution on Tokio thread pool

Sources:


Tokio RPC Client

Relevant source files

Purpose and Scope

This document describes the muxio-tokio-rpc-client crate, which provides a Tokio-based native RPC client implementation for connecting to muxio WebSocket servers. The client uses tokio-tungstenite for WebSocket transport and implements the platform-agnostic RpcServiceCallerInterface trait to enable type-safe RPC calls.

For server-side implementation details, see Tokio RPC Server. For the browser-based WASM client alternative, see WASM RPC Client. For general connection lifecycle concepts applicable to both platforms, see Connection Lifecycle and State Management.

Sources : extensions/muxio-tokio-rpc-client/Cargo.toml:1-31 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:1-336


Architecture Overview

The RpcClient is built on Tokio’s asynchronous runtime and manages a persistent WebSocket connection to a muxio server. It encapsulates bidirectional RPC functionality through two primary abstractions:

  • Client-side calling : Via the RpcServiceCallerInterface trait, enabling outbound RPC invocations
  • Server-to-client calling : Via an embedded RpcServiceEndpoint, enabling the server to invoke methods on the client
graph TB
    subgraph "RpcClient Structure"
        CLIENT["RpcClient"]
DISPATCHER["Arc&lt;TokioMutex&lt;RpcDispatcher&gt;&gt;\nRequest correlation & routing"]
ENDPOINT["Arc&lt;RpcServiceEndpoint&lt;()&gt;&gt;\nServer-to-client RPC handler"]
TX["mpsc::UnboundedSender&lt;WsMessage&gt;\nInternal message queue"]
STATE_HANDLER["Arc&lt;StdMutex&lt;Option&lt;Box&lt;dyn Fn&gt;&gt;&gt;&gt;\nState change callback"]
IS_CONNECTED["Arc&lt;AtomicBool&gt;\nConnection state flag"]
TASK_HANDLES["Vec&lt;JoinHandle&lt;()&gt;&gt;\nBackground task handles"]
end
    
    subgraph "Background Tasks"
        HEARTBEAT["Heartbeat Task\nSends Ping every 1s"]
RECV["Receive Loop Task\nProcesses WS frames"]
SEND["Send Loop Task\nTransmits WS messages"]
end
    
    subgraph "WebSocket Layer"
        WS_STREAM["tokio_tungstenite::WebSocketStream"]
WS_SENDER["SplitSink\nWrite half"]
WS_RECEIVER["SplitStream\nRead half"]
end
    
 
   CLIENT -->|owns| DISPATCHER
 
   CLIENT -->|owns| ENDPOINT
 
   CLIENT -->|owns| TX
 
   CLIENT -->|owns| STATE_HANDLER
 
   CLIENT -->|owns| IS_CONNECTED
 
   CLIENT -->|owns| TASK_HANDLES
    
 
   TASK_HANDLES -->|contains| HEARTBEAT
 
   TASK_HANDLES -->|contains| RECV
 
   TASK_HANDLES -->|contains| SEND
    
 
   HEARTBEAT -->|sends Ping via| TX
 
   SEND -->|reads from| TX
 
   SEND -->|writes to| WS_SENDER
    
 
   RECV -->|reads from| WS_RECEIVER
 
   RECV -->|processes via| DISPATCHER
 
   RECV -->|processes via| ENDPOINT
    
 
   WS_SENDER -->|part of| WS_STREAM
 
   WS_RECEIVER -->|part of| WS_STREAM

The client spawns three background Tokio tasks to manage the WebSocket lifecycle: a heartbeat task for periodic pings, a receive loop for processing incoming WebSocket frames, and a send loop for transmitting outbound messages.

High-Level Component Structure

Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:25-32 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:131-267


Core Components

RpcClient Structure

The RpcClient struct holds all state and resources necessary for maintaining an active WebSocket connection and processing RPC operations.

| Field | Type | Purpose |
|---|---|---|
| dispatcher | Arc<TokioMutex<RpcDispatcher<'static>>> | Manages request ID allocation, response correlation, and stream multiplexing |
| endpoint | Arc<RpcServiceEndpoint<()>> | Handles server-to-client RPC method dispatch |
| tx | mpsc::UnboundedSender<WsMessage> | Internal channel for queuing outbound WebSocket messages |
| state_change_handler | RpcTransportStateChangeHandler | User-registered callback invoked on connection state changes |
| is_connected | Arc<AtomicBool> | Atomic flag tracking current connection status |
| task_handles | Vec<JoinHandle<()>> | Handles to the three background Tokio tasks |

Type Alias : RpcTransportStateChangeHandler is defined as Arc<StdMutex<Option<Box<dyn Fn(RpcTransportState) + Send + Sync>>>>, allowing thread-safe storage and invocation of the state change callback.

Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:22-32

Debug and Drop Implementations

The RpcClient implements Debug to display connection status and Drop to ensure proper cleanup:
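The actual implementations live at rpc_client.rs:34-52 and are not reproduced here; a minimal sketch of the documented behavior, using the field names from the table above (crate-internal code, shown for illustration only):

```rust
use std::fmt;
use std::sync::atomic::Ordering;

impl fmt::Debug for RpcClient {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Only the connection status is surfaced; internal handles are omitted.
        f.debug_struct("RpcClient")
            .field("is_connected", &self.is_connected.load(Ordering::Relaxed))
            .finish()
    }
}

impl Drop for RpcClient {
    fn drop(&mut self) {
        // Abort the heartbeat/receive/send tasks, then run the blocking
        // shutdown path so the state handler observes Disconnected.
        for handle in &self.task_handles {
            handle.abort();
        }
        self.shutdown_sync();
    }
}
```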

The Drop implementation ensures that when the last Arc<RpcClient> reference is dropped, all background tasks are aborted and the state change handler is notified of disconnection.

Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:34-52


Lifecycle Management

Client Creation

The RpcClient::new method establishes a WebSocket connection and initializes all client components. It uses Arc::new_cyclic to enable background tasks to hold weak references to the client, preventing circular reference cycles.

sequenceDiagram
    participant App as "Application Code"
    participant New as "RpcClient::new"
    participant WS as "tokio_tungstenite"
    participant Cyclic as "Arc::new_cyclic"
    participant Tasks as "Background Tasks"
    
    App->>New: RpcClient::new(host, port)
    New->>New: Build websocket_url
    New->>WS: connect_async(url)
    WS-->>New: WebSocketStream + Response
    New->>New: ws_stream.split()
    New->>New: mpsc::unbounded_channel()
    
    New->>Cyclic: "Arc::new_cyclic(|weak_client|)"
    Cyclic->>Tasks: Spawn heartbeat task
    Cyclic->>Tasks: Spawn receive loop task
    Cyclic->>Tasks: Spawn send loop task
    Cyclic-->>New: Arc<RpcClient>
    
    New-->>App: Ok(Arc<RpcClient>)

Connection Establishment Flow

Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:110-271

Key Initialization Steps

  1. WebSocket URL Construction : Parses the host to determine if it’s an IP address or hostname, constructing the appropriate ws:// URL (rpc_client.rs:112-115)
  2. Connection Establishment : Calls connect_async from tokio-tungstenite (rpc_client.rs:118-121)
  3. Stream Splitting : Splits the WebSocket stream into separate read and write halves (rpc_client.rs:127)
  4. Channel Creation : Creates an unbounded MPSC channel for internal message passing (rpc_client.rs:128)
  5. Cyclic Arc Initialization : Uses Arc::new_cyclic to allow tasks to hold weak references (rpc_client.rs:131)
  6. Component Initialization : Creates RpcDispatcher, RpcServiceEndpoint, and state tracking (rpc_client.rs:132-136)
  7. Task Spawning : Spawns the three background tasks (rpc_client.rs:139-257)

Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:110-271

Shutdown Mechanisms

The client provides both synchronous and asynchronous shutdown paths to handle different termination scenarios.

Shutdown Flow Comparison

| Aspect | shutdown_sync() | shutdown_async() |
|---|---|---|
| Context | Called from Drop implementation | Called from background tasks on error |
| Locking | Uses StdMutex::lock() (blocking) | Uses TokioMutex::lock().await (async) |
| Dispatcher | Does not acquire dispatcher lock | Acquires dispatcher lock to fail pending requests |
| Pending Requests | Left in dispatcher (client is dropping) | Explicitly failed with ReadAfterCancel error |
| State Handler | Invokes with Disconnected if connected | Invokes with Disconnected if connected |

Synchronous Shutdown (rpc_client.rs:56-77):

  • Used when the client is being dropped
  • Checks is_connected and swaps to false atomically
  • Invokes the state change handler if connected
  • Does not fail pending requests (client is being destroyed)

Asynchronous Shutdown (rpc_client.rs:79-108):

  • Used when background tasks detect connection errors
  • Swaps is_connected to false atomically
  • Acquires the dispatcher lock asynchronously
  • Calls fail_all_pending_requests with FrameDecodeError::ReadAfterCancel
  • Invokes the state change handler if connected

Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:56-108


graph LR
    subgraph "Task Lifecycle"
        SPAWN["Arc::new_cyclic\ncreates Weak references"]
HEARTBEAT_TASK["Heartbeat Task"]
RECV_TASK["Receive Loop Task"]
SEND_TASK["Send Loop Task"]
SPAWN --> HEARTBEAT_TASK
 
       SPAWN --> RECV_TASK
 
       SPAWN --> SEND_TASK
    end
    
    subgraph "Shared State"
        APP_TX["mpsc::UnboundedSender\nMessage queue"]
WS_SENDER["WebSocket write half"]
WS_RECEIVER["WebSocket read half"]
WEAK_CLIENT["Weak<RpcClient>\nUpgradable reference"]
end
    
 
   HEARTBEAT_TASK -->|send Ping| APP_TX
 
   SEND_TASK -->|recv| APP_TX
 
   SEND_TASK -->|send msg| WS_SENDER
 
   RECV_TASK -->|next| WS_RECEIVER
 
   RECV_TASK -->|upgrade| WEAK_CLIENT
 
   SEND_TASK -->|upgrade| WEAK_CLIENT

Background Tasks

The client spawns three independent Tokio tasks during initialization, each serving a specific role in the connection lifecycle.

Task Architecture

Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:131-267

Heartbeat Task

The heartbeat task generates periodic ping messages to maintain connection liveness and detect silent disconnections.

Implementation (rpc_client.rs:139-154):
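The crate's implementation is not reproduced here; a sketch of the documented behavior, using a free function in place of the inline tokio::spawn inside new():

```rust
use std::time::Duration;
use tokio::sync::mpsc;
use tokio_tungstenite::tungstenite::Message as WsMessage;

// Sketch of the heartbeat task: tick once per second and queue a Ping on the
// internal channel; exit once the channel (and therefore the client) is gone.
fn spawn_heartbeat(tx: mpsc::UnboundedSender<WsMessage>) -> tokio::task::JoinHandle<()> {
    tokio::spawn(async move {
        let mut interval = tokio::time::interval(Duration::from_secs(1));
        loop {
            interval.tick().await;
            if tx.send(WsMessage::Ping(Default::default())).is_err() {
                break; // receiver dropped: client is shutting down
            }
        }
    })
}
```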

Behavior :

  • Ticks every 1 second using tokio::time::interval
  • Sends WsMessage::Ping to the internal message queue
  • Exits when the channel is closed (client dropped)

Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:139-154

Receive Loop Task

The receive loop processes incoming WebSocket frames and routes them through the RPC dispatcher and endpoint.

Implementation (rpc_client.rs:157-222):

Frame Processing :

| Message Type | Action |
|---|---|
| Binary | Lock dispatcher, call endpoint.read_bytes() to process RPC frames |
| Ping | Send Pong response via internal channel |
| Pong | Log and ignore (response to our heartbeat pings) |
| Text | Log and ignore (protocol uses binary frames only) |
| Close | Break loop (connection closed) |
| Error | Spawn shutdown_async() task and break loop |
ErrorSpawn shutdown_async() task and break loop

Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:157-222

Send Loop Task

The send loop drains the internal message queue and transmits messages to the WebSocket.

Implementation (rpc_client.rs:224-257):
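The crate's implementation is not reproduced here; a sketch of the documented behavior over a generic WebSocket sink (the shutdown_async() call on error is elided):

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use futures_util::SinkExt;
use tokio::sync::mpsc;
use tokio_tungstenite::tungstenite::Message as WsMessage;

// Sketch of the send loop: drain the internal queue and forward messages to
// the WebSocket write half, stopping once disconnected or on send failure.
fn spawn_send_loop<S>(
    mut rx: mpsc::UnboundedReceiver<WsMessage>,
    mut ws_sender: S,
    is_connected: Arc<AtomicBool>,
) -> tokio::task::JoinHandle<()>
where
    S: futures_util::Sink<WsMessage> + Unpin + Send + 'static,
{
    tokio::spawn(async move {
        while let Some(msg) = rx.recv().await {
            // Acquire pairs with the SeqCst swap performed by the shutdown paths.
            if !is_connected.load(Ordering::Acquire) {
                break;
            }
            if ws_sender.send(msg).await.is_err() {
                // The real client spawns shutdown_async() here; elided.
                break;
            }
        }
    })
}
```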

Behavior :

  • Receives messages from the MPSC channel
  • Checks is_connected before sending (prevents sending after disconnect signal)
  • Uses Ordering::Acquire for memory synchronization with the shutdown path
  • Spawns shutdown_async() on send errors
  • Exits when the channel is closed or disconnected

Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:224-257


Connection State Management

is_connected Flag

The is_connected field is an Arc<AtomicBool> that tracks the current connection status. It uses atomic operations to ensure thread-safe updates across multiple tasks.

Memory Ordering :

  • Ordering::SeqCst for swapping in shutdown paths (strongest guarantee)
  • Ordering::Acquire in send loop for reading (synchronizes with shutdown writes)
  • Ordering::Relaxed for general reads (no synchronization needed)

State Transitions :

Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:30 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:61 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:85

State Change Handlers

Applications can register a state change handler to be notified of connection and disconnection events.

Handler Registration (rpc_client.rs:315-334):

Invocation Points :

  1. Connected : Immediately after setting the handler if already connected
  2. Disconnected : In shutdown_sync() or shutdown_async() when connection ends

Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:315-334


graph TB
    subgraph "RpcServiceCallerInterface Trait"
        GET_DISPATCHER["get_dispatcher()\nReturns Arc&lt;TokioMutex&lt;RpcDispatcher&gt;&gt;"]
IS_CONNECTED["is_connected()\nReturns bool"]
GET_EMIT_FN["get_emit_fn()\nReturns Arc&lt;dyn Fn(Vec&lt;u8&gt;)&gt;"]
SET_STATE_HANDLER["set_state_change_handler()\nRegisters callback"]
end
    
    subgraph "RpcClient Implementation"
        IMPL_GET_DISP["Clone dispatcher Arc"]
IMPL_IS_CONN["Load is_connected atomic"]
IMPL_EMIT["Closure sending to tx channel"]
IMPL_SET_STATE["Store handler, invoke if connected"]
end
    
 
   GET_DISPATCHER --> IMPL_GET_DISP
 
   IS_CONNECTED --> IMPL_IS_CONN
 
   GET_EMIT_FN --> IMPL_EMIT
 
   SET_STATE_HANDLER --> IMPL_SET_STATE

RpcServiceCallerInterface Implementation

The RpcClient implements the platform-agnostic RpcServiceCallerInterface trait, enabling it to be used with shared RPC service definitions.

Trait Methods Implementation

Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:278-335

get_dispatcher

Returns a clone of the Arc<TokioMutex<RpcDispatcher>>, allowing RPC call implementations to access the dispatcher for request correlation and stream management.

Implementation (rpc_client.rs:280-282):

Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:280-282

is_connected

Returns the current connection status by loading the atomic boolean with relaxed ordering.

Implementation (rpc_client.rs:284-286):

Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:284-286

get_emit_fn

Returns a closure that captures the internal message channel and is_connected flag. This closure is used by the RPC dispatcher to emit binary frames for transmission.

Implementation (rpc_client.rs:289-313):
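The crate's implementation is not reproduced here; a sketch of the closure's documented behavior, written as a free constructor over the captured state:

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use tokio::sync::mpsc;
use tokio_tungstenite::tungstenite::Message as WsMessage;

// Sketch of the closure returned by get_emit_fn(): it captures the internal
// sender and the connection flag, and silently drops frames once disconnected.
fn make_emit_fn(
    tx: mpsc::UnboundedSender<WsMessage>,
    is_connected: Arc<AtomicBool>,
) -> Arc<dyn Fn(Vec<u8>) + Send + Sync> {
    Arc::new(move |bytes: Vec<u8>| {
        if !is_connected.load(Ordering::Relaxed) {
            return; // prevents writes after a disconnect has been signaled
        }
        // A send error means the channel is closed and the client is dropping.
        let _ = tx.send(WsMessage::Binary(bytes.into()));
    })
}
```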

Behavior :

  • Checks is_connected before sending (prevents writes after disconnect)
  • Converts Vec<u8> to WsMessage::Binary
  • Sends to the internal MPSC channel (non-blocking unbounded send)
  • Ignores send errors (channel closed means client is dropping)

Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:289-313


WebSocket Transport Integration

tokio-tungstenite

The client uses tokio-tungstenite for WebSocket protocol implementation over Tokio’s async I/O.

Dependencies (Cargo.toml:16):

Connection Flow :

  1. URL Construction : Builds ws://host:port/ws URL string
  2. Async Connect : Calls connect_async(&websocket_url) which returns (WebSocketStream, Response)
  3. Stream Split : Calls ws_stream.split() to obtain (SplitSink, SplitStream)
  4. Task Distribution : Distributes the sink to send loop, stream to receive loop

Sources : extensions/muxio-tokio-rpc-client/Cargo.toml:16 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:19 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:118-127

Binary Frame Processing

All RPC communication uses WebSocket binary frames. The receive loop processes binary frames by passing them to the endpoint for decoding.

Binary Frame Flow :

Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:163-178

Ping/Pong Handling

The client automatically responds to server pings and sends periodic pings via the heartbeat task.

Ping/Pong Matrix :

| Direction | Message Type | Sender | Handler |
|---|---|---|---|
| Client → Server | Ping | Heartbeat task (1s interval) | Server responds with Pong |
| Server → Client | Ping | Server | Receive loop sends Pong |
| Server → Client | Pong | Server (response to our Ping) | Receive loop logs and ignores |

Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:139-154 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:179-182


Error Handling and Cleanup

Error Sources

The client handles errors from multiple sources:

| Error Source | Handling Strategy | Cleanup Action |
|---|---|---|
| connect_async failure | Return io::Error from new() | No cleanup needed (not created) |
| WebSocket receive error | Spawn shutdown_async(), break receive loop | Fail pending requests, notify handler |
| WebSocket send error | Spawn shutdown_async(), break send loop | Fail pending requests, notify handler |
| MPSC channel closed | Break task loop | Task exits naturally |
| Client drop | Abort tasks, call shutdown_sync() | Notify handler, abort background tasks |

Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:118-121 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:186-198 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:239-253

Pending Request Cleanup

When the connection is lost, all pending RPC requests in the dispatcher must be failed to prevent callers from waiting indefinitely.

Cleanup Flow (rpc_client.rs:100-103):
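The crate-internal code is not reproduced here; a sketch of the documented steps, with the FrameDecodeError module path assumed:

```rust
use muxio::frame::FrameDecodeError; // module path assumed
use muxio::rpc::RpcDispatcher;
use tokio::sync::Mutex as TokioMutex;

// Inside shutdown_async(): take the dispatcher lock so no new requests can be
// interleaved, then fail every in-flight request so waiting futures resolve
// with RpcServiceError::Transport.
async fn fail_pending(dispatcher: &TokioMutex<RpcDispatcher<'static>>) {
    let mut dispatcher = dispatcher.lock().await;
    dispatcher.fail_all_pending_requests(FrameDecodeError::ReadAfterCancel);
}
```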

Error Propagation :

  1. Dispatcher lock is acquired (prevents new requests)
  2. fail_all_pending_requests is called with ReadAfterCancel error
  3. All pending request callbacks receive RpcServiceError::Transport
  4. Waiting futures resolve with error

Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:100-103

Task Abort on Drop

The Drop implementation ensures clean shutdown even if the client is dropped while background tasks are running.

Drop Sequence (rpc_client.rs:42-52):

  1. Iterate over task_handles and call abort() on each
  2. Call shutdown_sync() to notify state handlers
  3. Background tasks receive abort signal and terminate

Abort Safety : Aborting tasks is safe because:

  • The receive loop holds a weak reference (won’t prevent drop)
  • The send loop checks is_connected before sending
  • The heartbeat task only sends pings (no critical state)
  • All critical state is owned by the RpcClient itself

Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:42-52


Usage Examples

Basic Connection and Call
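The original example listing is not embedded in this page; a minimal sketch, assuming the Add method from example-muxio-rpc-service-definition, an RpcCallPrebuffered-style call helper, and a Vec<f64> -> f64 request/response shape (all assumptions):

```rust
use example_muxio_rpc_service_definition::prebuffered::Add; // shared contract crate
use muxio_rpc_service_caller::prebuffered::RpcCallPrebuffered; // helper trait name assumed
use muxio_tokio_rpc_client::RpcClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // RpcClient::new(host, port) connects and spawns the background tasks.
    let client = RpcClient::new("127.0.0.1", 8080).await?;

    // Prebuffered call through the shared definition.
    let sum = Add::call(client.as_ref(), vec![1.0, 2.0, 3.0]).await?;
    println!("sum = {sum}");

    // Dropping the last Arc<RpcClient> aborts the background tasks.
    Ok(())
}
```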

This example demonstrates:

  • Creating a client with RpcClient::new(host, port)
  • Making a prebuffered RPC call via the trait method
  • The client is automatically cleaned up when dropped

Sources : Example pattern from integration tests

State Change Handler Registration

The handler is invoked:

  • Immediately with Connected if already connected
  • With Disconnected when the connection is lost or client is dropped

Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:315-334

Server-to-Client RPC

The client can also handle incoming RPC calls from the server by registering handlers on its embedded endpoint.
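A hedged sketch of registering a client-side handler, assuming an Echo method in the example service definition, a get_endpoint() accessor on RpcClient, and a two-argument handler closure (all assumptions):

```rust
use example_muxio_rpc_service_definition::prebuffered::Echo; // method name assumed
use muxio_rpc_service::prebuffered::RpcMethodPrebuffered; // trait path assumed
use muxio_tokio_rpc_client::RpcClient;

// Handlers registered here can be invoked by the server over the same
// WebSocket connection the client uses for its own outbound calls.
async fn register_client_handlers(client: &RpcClient) -> Result<(), Box<dyn std::error::Error>> {
    client
        .get_endpoint()
        .register_prebuffered(Echo::METHOD_ID, |_ctx, request_bytes: Vec<u8>| async move {
            // Decode with the shared definition and echo the payload back.
            let message = Echo::decode_request(&request_bytes)?;
            Ok(Echo::encode_response(message)?)
        })
        .await?;
    Ok(())
}
```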

This demonstrates bidirectional RPC:

  • The client can call server methods via RpcServiceCallerInterface
  • The server can call client methods via the registered endpoint handlers

Sources : extensions/muxio-tokio-rpc-client/src/rpc_client.rs:273-275


Testing

The crate includes comprehensive integration tests covering connection lifecycle, error handling, and state management.

Test Coverage

| Test | File | Purpose |
|---|---|---|
| test_client_errors_on_connection_failure | tests/transport_state_tests.rs:16-31 | Verifies connection errors are returned properly |
| test_transport_state_change_handler | tests/transport_state_tests.rs:34-165 | Validates state handler invocations |
| test_pending_requests_fail_on_disconnect | tests/transport_state_tests.rs:167-292 | Ensures pending requests fail on disconnect |

Sources : extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:1-293

Mock Implementations

The muxio-rpc-service-caller crate tests include mock client implementations demonstrating the trait contract:

MockRpcClient Structure (dynamic_channel_tests.rs:19-88):

  • Implements RpcServiceCallerInterface for testing
  • Uses Arc<Mutex<Option<DynamicSender>>> to provide response senders
  • Demonstrates dynamic channel handling (bounded/unbounded)
  • Uses Arc<AtomicBool> for connection state simulation

Sources : extensions/muxio-rpc-service-caller/tests/dynamic_channel_tests.rs:15-88


WASM RPC Client

Relevant source files

The WASM RPC Client provides a WebAssembly-compatible implementation of the RPC transport layer for browser environments. It bridges Rust code compiled to WASM with JavaScript’s WebSocket API, enabling bidirectional RPC communication between WASM clients and native servers.

This page focuses on the client-side WASM implementation. For native Tokio-based clients, see Tokio RPC Client. For server-side implementations, see Tokio RPC Server. For the RPC abstraction layer, see RPC Framework.

Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:1-182 extensions/muxio-wasm-rpc-client/Cargo.toml:1-30

Architecture Overview

The WASM RPC client bridges Rust WASM code with JavaScript’s WebSocket API. Unlike the Tokio client which manages its own WebSocket connection, RpcWasmClient relies on JavaScript glue code to handle WebSocket events and delegates to Rust for RPC protocol processing.

Diagram: WASM Client Architecture and Data Flow

graph TB
    subgraph "Browser JavaScript"
        WS[WebSocket]
        WRITE_BYTES["static_muxio_write_bytes()"]
APP["Web Application"]
end
    
    subgraph "WASM Module"
        TLS["MUXIO_STATIC_RPC_CLIENT_REF\nRefCell<Option<Arc<RpcWasmClient>>>"]
CLIENT["RpcWasmClient"]
DISP["Arc<Mutex<RpcDispatcher>>"]
EP["Arc<RpcServiceEndpoint<()>>"]
EMIT["emit_callback: Arc<dyn Fn(Vec<u8>)>"]
CONN["is_connected: Arc<AtomicBool>"]
end
    
    subgraph "Core Layer"
        MUXIO["muxio::rpc::RpcDispatcher\nmuxio::frame"]
end
    
 
   APP -->|new WebSocket| WS
 
   WS -->|onopen| TLS
 
   WS -->|onmessage bytes| TLS
 
   WS -->|onerror/onclose| TLS
    
 
   TLS -->|handle_connect| CLIENT
 
   TLS -->|read_bytes bytes| CLIENT
 
   TLS -->|handle_disconnect| CLIENT
    
 
   CLIENT --> DISP
 
   CLIENT --> EP
 
   CLIENT --> EMIT
 
   CLIENT --> CONN
    
 
   EMIT -->|invoke| WRITE_BYTES
 
   WRITE_BYTES -->|websocket.send| WS
    
 
   DISP --> MUXIO
 
   EP --> MUXIO

The architecture consists of three layers:

  1. JavaScript Layer : Manages WebSocket lifecycle (onopen, onmessage, onclose) and forwards events to WASM
  2. WASM Bridge Layer : RpcWasmClient with emit_callback for outbound data and lifecycle methods for inbound events
  3. Core RPC Layer : RpcDispatcher for request/response correlation and RpcServiceEndpoint<()> for handling incoming calls

Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:16-35 extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:1-11 extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:25-36

RpcWasmClient Structure

The RpcWasmClient struct manages bidirectional RPC communication in WASM environments. It implements RpcServiceCallerInterface for outbound calls and uses RpcServiceEndpoint<()> for inbound request handling.

Diagram: RpcWasmClient Class Structure

classDiagram
    class RpcWasmClient {
        -Arc~Mutex~RpcDispatcher~~ dispatcher
        -Arc~RpcServiceEndpoint~()~~ endpoint
        -Arc~dyn Fn(Vec~u8~)~ emit_callback
        -RpcTransportStateChangeHandler state_change_handler
        -Arc~AtomicBool~ is_connected
        +new(emit_callback) RpcWasmClient
        +handle_connect() async
        +read_bytes(bytes: &[u8]) async
        +handle_disconnect() async
        +is_connected() bool
        +get_endpoint() Arc~RpcServiceEndpoint~()~~
        -dispatcher() Arc~Mutex~RpcDispatcher~~
        -emit() Arc~dyn Fn(Vec~u8~)~
    }
    
    class RpcServiceCallerInterface {<<trait>>\n+get_dispatcher() Arc~Mutex~RpcDispatcher~~\n+get_emit_fn() Arc~dyn Fn(Vec~u8~)~\n+is_connected() bool\n+set_state_change_handler(handler) async}
    
    class RpcDispatcher {
        +read_bytes(bytes: &[u8]) Result~Vec~u32~, FrameDecodeError~
        +respond(response, chunk_size, callback) Result
        +is_rpc_request_finalized(id: u32) Option~bool~
        +delete_rpc_request(id: u32) Option~RpcRequest~
        +fail_all_pending_requests(error)
    }
    
    class RpcServiceEndpoint {+get_prebuffered_handlers() Arc\n+register_prebuffered_handler()}
    
    RpcWasmClient ..|> RpcServiceCallerInterface
    RpcWasmClient --> RpcDispatcher
    RpcWasmClient --> RpcServiceEndpoint

| Field | Type | Purpose |
|---|---|---|
| dispatcher | Arc<Mutex<RpcDispatcher<'static>>> | Manages request/response correlation via request_id and stream multiplexing |
| endpoint | Arc<RpcServiceEndpoint<()>> | Dispatches incoming RPC requests to registered handlers by METHOD_ID |
| emit_callback | Arc<dyn Fn(Vec<u8>) + Send + Sync> | Callback invoked to send bytes to JavaScript’s static_muxio_write_bytes() |
| state_change_handler | Arc<Mutex<Option<Box<dyn Fn(RpcTransportState) + Send + Sync>>>> | Optional callback for Connected/Disconnected state transitions |
| is_connected | Arc<AtomicBool> | Lock-free connection status tracking |

Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:16-35 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:154-181

Connection Lifecycle

The WASM client relies on JavaScript to manage the WebSocket connection. Three lifecycle methods must be called from JavaScript glue code in response to WebSocket events:

stateDiagram-v2
    [*] --> Disconnected : new()
    Disconnected --> Connected : handle_connect()
    Connected --> Processing : read_bytes(data)
    Processing --> Connected
    Connected --> Disconnected : handle_disconnect()
    Disconnected --> [*]
    
    note right of Connected
        is_connected = true
        state_change_handler(Connected)
    end note
    
    note right of Processing
        1. read_bytes() into dispatcher
        2. process_single_prebuffered_request()
        3. respond() with results
    end note
    
    note right of Disconnected
        is_connected = false
        state_change_handler(Disconnected)
        fail_all_pending_requests()
    end note

handle_connect

Called when JavaScript’s WebSocket onopen event fires. Sets is_connected to true via AtomicBool::store() and invokes the registered state_change_handler with RpcTransportState::Connected.

Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:38-44

read_bytes

The core message processing method, called when JavaScript’s WebSocket onmessage event fires. Implements a three-stage pipeline to avoid holding the dispatcher lock during expensive async operations.

Diagram: read_bytes Three-Stage Pipeline

sequenceDiagram
    participant JS as "JavaScript\nonmessage"
    participant RB as "read_bytes()"
    participant Disp as "Mutex<RpcDispatcher>"
    participant Proc as "process_single_prebuffered_request()"
    participant Handler as "User Handler"
    
    JS->>RB: read_bytes(bytes: &[u8])
    
    rect rgb(240, 240, 240)
    Note over RB,Disp: Stage 1: Synchronous Reading (lines 56-81)
    RB->>Disp: lock().await
    RB->>Disp: dispatcher.read_bytes(bytes)
    Disp-->>RB: Ok(request_ids: Vec<u32>)
    loop "for id in request_ids"
        RB->>Disp: is_rpc_request_finalized(id)
        Disp-->>RB: Some(true)
        RB->>Disp: delete_rpc_request(id)
        Disp-->>RB: Some(RpcRequest)
    end
    Note over RB: Lock dropped here
    end
    
    rect rgb(245, 245, 245)
    Note over RB,Handler: Stage 2: Async Processing (lines 83-103)
    loop "for (request_id, request)"
        RB->>Proc: process_single_prebuffered_request()
        Proc->>Handler: handler(context, request)
        Handler-->>Proc: Result<Vec<u8>, RpcServiceError>
        Proc-->>RB: RpcResponse
    end
    RB->>RB: join_all(response_futures).await
    end
    
    rect rgb(240, 240, 240)
    Note over RB,JS: Stage 3: Synchronous Sending (lines 105-120)
    RB->>Disp: lock().await
    loop "for response"
        RB->>Disp: dispatcher.respond(response, chunk_size, callback)
        Disp->>RB: emit_callback(chunk)
        RB->>JS: static_muxio_write_bytes(chunk)
    end
    Note over RB: Lock dropped here
    end

Stage 1 (lines 56-81) : Acquires dispatcher lock via Mutex::lock().await, calls dispatcher.read_bytes(bytes) to decode frames, identifies finalized requests using is_rpc_request_finalized(), extracts them with delete_rpc_request(), then releases lock.

Stage 2 (lines 83-103) : Without holding any locks, calls process_single_prebuffered_request() for each request. This invokes user handlers asynchronously and collects RpcResponse results using join_all().

Stage 3 (lines 105-120) : Re-acquires dispatcher lock, calls dispatcher.respond() for each response, which invokes emit_callback synchronously to send chunks via static_muxio_write_bytes().

This three-stage design prevents deadlocks by releasing the lock during handler execution and enables concurrent request processing.

Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:48-121

handle_disconnect

Called when JavaScript’s WebSocket onclose or onerror events fire. Uses AtomicBool::swap() to atomically set is_connected to false, invokes the state_change_handler with RpcTransportState::Disconnected, and calls dispatcher.fail_all_pending_requests() with FrameDecodeError::ReadAfterCancel to terminate all in-flight RPC calls.
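A corresponding sketch, reusing the simplified shape from the handle_connect example above; a closure stands in for the locked dispatcher call:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

pub fn handle_disconnect(
    is_connected: &AtomicBool,
    state_change_handler: Option<&dyn Fn(RpcTransportState)>,
    fail_all_pending_requests: impl FnOnce(),
) {
    // swap() returns the previous value, so cleanup runs only on an actual
    // Connected -> Disconnected transition.
    if is_connected.swap(false, Ordering::SeqCst) {
        if let Some(handler) = state_change_handler {
            handler(RpcTransportState::Disconnected);
        }
        // In the real client this is dispatcher.fail_all_pending_requests(...)
        // with FrameDecodeError::ReadAfterCancel, behind the dispatcher lock.
        fail_all_pending_requests();
    }
}
```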

Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:124-134

Static Client Pattern

For simplified JavaScript integration, the WASM client provides a static client pattern using thread-local storage via MUXIO_STATIC_RPC_CLIENT_REF. This eliminates the need to pass client instances through JavaScript’s FFI boundary.

Diagram: Static Client Initialization and Access Flow

graph TB
    subgraph "JavaScript"
        INIT["init()"]
CALL["callRpcMethod()"]
end
    
    subgraph "WASM Exports"
        INIT_EXPORT["#[wasm_bindgen]\ninit_static_client()"]
RPC_EXPORT["#[wasm_bindgen]\nexported_rpc_fn()"]
end
    
    subgraph "Static Client Layer"
        TLS["MUXIO_STATIC_RPC_CLIENT_REF\nthread_local!\nRefCell<Option<Arc<RpcWasmClient>>>"]
WITH["with_static_client_async()"]
GET["get_static_client()"]
end
    
    subgraph "Client"
        CLIENT["Arc<RpcWasmClient>"]
end
    
 
   INIT --> INIT_EXPORT
 
   INIT_EXPORT -->|cell.borrow_mut| TLS
 
   TLS -.->|stores| CLIENT
    
 
   CALL --> RPC_EXPORT
 
   RPC_EXPORT --> WITH
 
   WITH -->|cell.borrow .clone| TLS
 
   TLS -.->|retrieves| CLIENT
 
   WITH -->|FnOnce Arc<RpcWasmClient>| CLIENT

init_static_client

Initializes MUXIO_STATIC_RPC_CLIENT_REF thread-local storage with Arc<RpcWasmClient>. The function is idempotent—subsequent calls have no effect. The client is constructed with RpcWasmClient::new(|bytes| static_muxio_write_bytes(&bytes)) to bridge outbound data to JavaScript.
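A minimal sketch of the thread-local storage pattern, with a stand-in Client type in place of RpcWasmClient:

```rust
use std::cell::RefCell;
use std::sync::Arc;

struct Client; // stand-in for RpcWasmClient

thread_local! {
    // Mirrors MUXIO_STATIC_RPC_CLIENT_REF: RefCell<Option<Arc<RpcWasmClient>>>.
    static STATIC_CLIENT: RefCell<Option<Arc<Client>>> = RefCell::new(None);
}

// Idempotent: only the first call stores a client; later calls are no-ops.
fn init_static_client(client: Arc<Client>) {
    STATIC_CLIENT.with(|cell| {
        let mut slot = cell.borrow_mut();
        if slot.is_none() {
            *slot = Some(client);
        }
    });
}
```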

Sources: extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:25-36 extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:9-11

with_static_client_async

Primary method for interacting with the static client from #[wasm_bindgen] exported functions. Retrieves Arc<RpcWasmClient> from MUXIO_STATIC_RPC_CLIENT_REF.with(), invokes the provided closure, and converts the result to a JavaScript Promise via future_to_promise().

| Parameter | Type | Description |
| --- | --- | --- |
| f | FnOnce(Arc<RpcWasmClient>) -> Fut + 'static | Closure receiving client reference |
| Fut | Future<Output = Result<T, String>> + 'static | Future returned by closure |
| T | Into<JsValue> | Result type convertible to JavaScript value |
| Returns | Promise | JavaScript promise resolving to T or rejecting with error string |

If the static client has not been initialized, the promise rejects with "RPC client not initialized".
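A hedged usage sketch from a #[wasm_bindgen] export; Add is a hypothetical prebuffered method from a shared service definition, and formatting the error via Debug is an assumption:

```rust
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub fn add_numbers() -> js_sys::Promise {
    with_static_client_async(|client| async move {
        // Returns Result<f64, String>; f64 converts into a JsValue for the Promise.
        Add::call(&client, vec![1.0, 2.0, 3.0])
            .await
            .map_err(|e| format!("{e:?}"))
    })
}
```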

Sources: extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:54-72

get_static_client

Returns the current static client if initialized, otherwise returns None. Useful for conditional logic or direct access without promise conversion.

Sources: extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:79-81

JavaScript Integration

The WASM client requires JavaScript glue code to bridge WebSocket events to WASM function calls. The integration relies on the emit_callback mechanism for outbound data and lifecycle methods for inbound events.

Diagram: JavaScript-WASM Bridge

graph TB
    subgraph "JavaScript WebSocket Events"
        OPEN["ws.onopen"]
MESSAGE["ws.onmessage"]
ERROR["ws.onerror"]
CLOSE["ws.onclose"]
end
    
    subgraph "WASM Exported Functions"
        WASM_CONNECT["handle_connect()"]
WASM_READ["read_bytes(event.data)"]
WASM_DISCONNECT["handle_disconnect()"]
end
    
    subgraph "WASM Emit Path"
        EMIT["emit_callback(bytes: Vec<u8>)"]
STATIC_WRITE["static_muxio_write_bytes(&bytes)"]
end
    
    subgraph "JavaScript Bridge"
        WRITE_FN["muxioWriteBytes(bytes)"]
end
    
 
   OPEN -->|await| WASM_CONNECT
 
   MESSAGE -->|await read_bytes new Uint8Array| WASM_READ
 
   ERROR -->|await| WASM_DISCONNECT
 
   CLOSE -->|await| WASM_DISCONNECT
    
 
   EMIT -->|invoke| STATIC_WRITE
 
   STATIC_WRITE -->|#[wasm_bindgen]| WRITE_FN
 
   WRITE_FN -->|websocket.send bytes| MESSAGE

The JavaScript layer must:

  1. Create and manage a WebSocket instance
  2. Forward onopen events to await handle_connect()
  3. Forward onmessage data to await read_bytes(new Uint8Array(event.data))
  4. Forward onerror/onclose events to await handle_disconnect()
  5. Implement muxioWriteBytes() function to receive data from static_muxio_write_bytes() and call websocket.send(bytes)

The emit_callback is constructed with |bytes| static_muxio_write_bytes(&bytes) when creating the client, which bridges to the JavaScript muxioWriteBytes() function.

Sources: extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:1-8 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:27-34

Making RPC Calls

The WASM client implements RpcServiceCallerInterface, enabling the same call patterns as the Tokio client. All methods defined with the RpcMethodPrebuffered trait in service definitions are available via call_rpc_buffered().

Diagram: Outbound RPC Call Flow

sequenceDiagram
    participant WASM as "WASM Code"
    participant Method as "Add::call()"
    participant Caller as "RpcServiceCallerInterface"
    participant Disp as "RpcDispatcher"
    participant Emit as "emit_callback"
    participant JS as "static_muxio_write_bytes()"
    participant WS as "WebSocket"
    
    WASM->>Method: Add::call(&client, request).await
    Method->>Method: encode_request(params)\nwith METHOD_ID
    Method->>Caller: call_rpc_buffered(RpcRequest)
    Caller->>Disp: get_dispatcher().lock()
    Disp->>Disp: assign request_id
    Disp->>Disp: encode frames
    Disp->>Emit: get_emit_fn()(bytes)
    Emit->>JS: static_muxio_write_bytes(&bytes)
    JS->>WS: websocket.send(bytes)
    
    WS->>JS: onmessage(response_bytes)
    JS->>Caller: read_bytes(response_bytes)
    Caller->>Disp: decode frames
    Disp->>Disp: match request_id
    Disp->>Method: decode_response(bytes)
    Method-->>WASM: Result<Response, RpcServiceError>

Call Mechanics

From WASM code, RPC calls follow this pattern:

  1. Obtain Arc<RpcWasmClient> via with_static_client_async() or direct reference
  2. Call service methods: SomeMethod::call(&client, request).await
  3. The trait implementation calls client.call_rpc_buffered() which:
    • Serializes the request with bitcode::encode()
    • Attaches METHOD_ID constant in the RpcHeader
    • Invokes dispatcher.call() with a unique request_id
    • Emits encoded frames via emit_callback
  4. Awaits response correlation by request_id in the dispatcher
  5. Returns Result<DecodedResponse, RpcServiceError>

Sources: extensions/muxio-wasm-rpc-client/src/lib.rs:6-9 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:154-167

graph TB
    subgraph "JavaScript"
        WS["WebSocket.onmessage"]
end
    
    subgraph "read_bytes()
Pipeline"
        STAGE1["Stage 1:\ndelete_rpc_request(id)"]
STAGE2["Stage 2:\nprocess_single_prebuffered_request()"]
STAGE3["Stage 3:\ndispatcher.respond()"]
end
    
    subgraph "RpcServiceEndpoint<()>"
        HANDLERS["get_prebuffered_handlers()\nHashMap<u32, Box<Handler>>"]
LOOKUP["lookup by METHOD_ID"]
end
    
    subgraph "User Handler"
        HANDLER["async fn(context: (), request: Vec<u8>)\n-> Result<Vec<u8>, RpcServiceError>"]
end
    
 
   WS -->|bytes| STAGE1
 
   STAGE1 -->|Vec< u32, RpcRequest >| STAGE2
 
   STAGE2 --> HANDLERS
 
   HANDLERS --> LOOKUP
 
   LOOKUP --> HANDLER
 
   HANDLER -->|Result| STAGE2
 
   STAGE2 -->|Vec<RpcResponse>| STAGE3
 
   STAGE3 -->|emit_callback| WS

Handling Incoming RPC Calls

The WASM client supports bidirectional RPC by handling incoming calls from the server. The RpcServiceEndpoint<()> dispatches requests to registered handlers by METHOD_ID.

Diagram: Inbound RPC Request Processing

sequenceDiagram
    participant Code as "User Code"
    participant Client as "RpcWasmClient"
    participant EP as "RpcServiceEndpoint<()>"
    participant Map as "HashMap<u32, Handler>"
    
    Code->>Client: get_endpoint()
    Client-->>Code: Arc<RpcServiceEndpoint<()>>
    Code->>EP: register_prebuffered_handler::<Method>(handler)
    EP->>Map: insert(Method::METHOD_ID, Box<handler>)
    Note over Map: Handler stored for Method::METHOD_ID

Registering Handlers

Handlers are registered with the RpcServiceEndpoint<()> obtained via get_endpoint():

Diagram: Handler Registration Flow

When an incoming request arrives in read_bytes():

  1. Stage 1 : dispatcher.delete_rpc_request(id) extracts the RpcRequest containing METHOD_ID in its header
  2. Stage 2 : process_single_prebuffered_request() looks up the handler via get_prebuffered_handlers() using METHOD_ID
  3. Handler executes: handler(context: (), request.rpc_prebuffered_payload_bytes)
  4. Stage 3 : dispatcher.respond() serializes the response and invokes emit_callback

The context type for WASM client handlers is () since there is no per-connection state in WASM environments.
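A hedged sketch of registering a handler on the WASM client's endpoint, following the registration flow shown above; the exact method name and handler signature are defined in muxio-rpc-service-endpoint and may differ slightly:

```rust
// `client` is an Arc<RpcWasmClient>; `Echo` is a hypothetical prebuffered method.
let endpoint = client.get_endpoint();

endpoint.register_prebuffered_handler::<Echo>(|_context: (), request_bytes: Vec<u8>| async move {
    // Echo the raw payload back; a real handler would decode, process, and encode.
    Ok(request_bytes)
});
```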

Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:86-120 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:141-143

stateDiagram-v2
    [*] --> Disconnected
    Disconnected --> Connected : handle_connect()
    Connected --> Disconnected : handle_disconnect()
    
    state Connected {
        [*] --> Ready
        Ready --> Processing : read_bytes()
        Processing --> Ready
    }
    
    note right of Connected
        is_connected = true
        emit state_change_handler(Connected)
    end note
    
    note right of Disconnected
        is_connected = false
        emit state_change_handler(Disconnected)
        fail_all_pending_requests()
    end note

State Management

The WASM client tracks connection state using an AtomicBool and provides optional state change notifications.

State Change Handler

Applications can register a callback to receive notifications when the connection state changes:

| State | Trigger | Actions |
| --- | --- | --- |
| Connected | handle_connect() called | Handler invoked with RpcTransportState::Connected |
| Disconnected | handle_disconnect() called | Handler invoked with RpcTransportState::Disconnected, all pending requests failed |

Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:168-180 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:22-23

Dependencies

The WASM client has minimal dependencies focused on WASM/JavaScript interop:

| Dependency | Version | Purpose |
| --- | --- | --- |
| wasm-bindgen | 0.2.100 | JavaScript/Rust FFI bindings via #[wasm_bindgen] |
| wasm-bindgen-futures | 0.4.50 | Convert Rust Future to JavaScript Promise via future_to_promise() |
| js-sys | 0.3.77 | JavaScript standard library types (Promise, Uint8Array) |
| tokio | workspace | Async runtime (only tokio::sync::Mutex used, not the executor) |
| futures | workspace | Future composition (join_all() for concurrent request processing) |
| async-trait | workspace | Async trait implementations (#[async_trait] for RpcServiceCallerInterface) |
| muxio | workspace | Core multiplexing (RpcDispatcher, RpcSession, frame encoding) |
| muxio-rpc-service | workspace | RPC trait definitions (RpcMethodPrebuffered, METHOD_ID) |
| muxio-rpc-service-caller | workspace | RpcServiceCallerInterface trait |
| muxio-rpc-service-endpoint | workspace | RpcServiceEndpoint, process_single_prebuffered_request() |
| tracing | workspace | Logging macros (tracing::error!) |

Note: While tokio is included, the WASM client does not use Tokio’s executor. Only synchronization primitives like tokio::sync::Mutex are used, which work in single-threaded WASM environments.

Sources: extensions/muxio-wasm-rpc-client/Cargo.toml:11-22

Thread Safety and Concurrency

The WASM client is designed for single-threaded WASM environments but uses thread-safe primitives for API consistency with native code:

| Primitive | Purpose | WASM Behavior |
| --- | --- | --- |
| Arc<T> | Reference counting for shared ownership | Works in single-threaded context, no actual atomics needed |
| tokio::sync::Mutex<RpcDispatcher> | Guards dispatcher state during frame encoding/decoding | Never contends (single-threaded), provides interior mutability |
| Arc<AtomicBool> | Lock-free is_connected tracking | load()/store()/swap() operations work without OS threads |
| Send + Sync bounds | Trait bounds on callbacks and handlers | Satisfied for API consistency, no actual thread migration |

The three-stage read_bytes() pipeline ensures the dispatcher lock is held only during:

  • Stage 1: read_bytes(), is_rpc_request_finalized(), delete_rpc_request() (lines 58-81)
  • Stage 3: respond() calls (lines 108-119)

The lock is not held during Stage 2’s async handler execution (lines 85-103), which enables concurrent request processing via join_all().

Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:48-121 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:7-14

Comparison with Tokio Client

| Feature | WASM Client | Tokio Client |
| --- | --- | --- |
| WebSocket Management | Delegated to JavaScript | Built-in with tokio-tungstenite |
| Event Model | Callback-based (onopen, onmessage, etc.) | Async stream-based |
| Connection Initialization | handle_connect() | connect() |
| Data Reading | read_bytes() called from JS | read_loop() task |
| Async Runtime | None (WASM environment) | Tokio |
| State Tracking | AtomicBool + manual calls | Automatic with connection task |
| Bidirectional RPC | Yes, via RpcServiceEndpoint | Yes, via RpcServiceEndpoint |
| Static Client Pattern | Yes, via thread_local | Not applicable |

Both clients implement RpcServiceCallerInterface, ensuring identical call patterns and service definitions work across both environments.

Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:16-35


Connection Lifecycle and State Management

Relevant source files

Purpose and Scope

This document describes how muxio tracks connection state, manages lifecycle events, and performs automatic cleanup during disconnection across both Tokio-based native clients (5.2) and WASM browser clients (5.3). It covers the RpcTransportState representation, state change handlers, lifecycle transitions, and cleanup mechanisms that ensure pending requests are properly failed and resources are released when connections terminate.

Connection State Representation

RpcTransportState Enum

The RpcTransportState enum defines the two possible connection states:

| State | Description |
| --- | --- |
| Connected | WebSocket connection is established and operational |
| Disconnected | Connection has been closed or failed |

This enum is shared across all client implementations and is used as the parameter type for state change handlers.
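Its shape is approximately the following (the derives shown are assumptions):

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum RpcTransportState {
    Connected,
    Disconnected,
}
```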

Sources: extensions/muxio-rpc-service-caller/src/transport_state.rs

Atomic Connection Tracking

Both RpcClient and RpcWasmClient maintain an is_connected field of type Arc<AtomicBool> to track the current connection state:

The use of AtomicBool enables lock-free reads from multiple concurrent tasks (Tokio) or callbacks (WASM). The Arc wrapper allows the flag to be shared with background tasks and closures without ownership transfer. State changes use SeqCst (sequentially consistent) ordering to ensure all threads observe changes in a consistent order.
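In isolation, the pattern looks like this:

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};

let is_connected: Arc<AtomicBool> = Arc::new(AtomicBool::new(false));

// Connect: publish the new state to all tasks and callbacks.
is_connected.store(true, Ordering::SeqCst);

// Disconnect: swap() returns the previous value, so only the first caller
// observes the true -> false transition and performs cleanup.
let was_connected = is_connected.swap(false, Ordering::SeqCst);
assert!(was_connected);
```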

Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs30 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs23

State Change Handlers

Applications can register a callback to be notified of connection state changes using the set_state_change_handler() method from the RpcServiceCallerInterface trait:

The handler is stored in an RpcTransportStateChangeHandler:

When set, the handler is immediately invoked with the current state if the client is connected. This ensures the application receives the initial Connected event without race conditions.
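A hedged usage sketch; the exact setter signature (including whether it is async) is defined by RpcServiceCallerInterface and may differ from what is shown here:

```rust
client.set_state_change_handler(|state: RpcTransportState| match state {
    RpcTransportState::Connected => tracing::info!("transport connected"),
    RpcTransportState::Disconnected => {
        tracing::warn!("transport disconnected; scheduling reconnect");
    }
});
```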

Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:22-334 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:13-180

Connection Lifecycle Diagram

Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:79-221 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:38-134

Connection Establishment

Tokio Client Initialization

The Tokio RpcClient::new() method establishes the connection and sets initial state:

  1. Connects to WebSocket endpoint via connect_async()
  2. Splits the stream into sender and receiver
  3. Creates MPSC channel for outbound messages
  4. Sets is_connected to true atomically
  5. Spawns three background tasks:
    • Heartbeat task (sends pings every 1 second)
    • Receive loop (processes incoming WebSocket messages)
    • Send loop (drains outbound MPSC channel)

The client uses Arc::new_cyclic() to allow background tasks to hold weak references, preventing reference cycles that would leak memory.

Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:111-271

WASM Client Initialization

The WASM RpcWasmClient::new() method creates the client structure but does not establish the connection immediately:

  1. Creates a new RpcWasmClient with is_connected set to false
  2. Stores the emit callback for sending data to JavaScript
  3. Returns the client instance

Connection establishment is triggered later by JavaScript calling the exported handle_connect() method when the browser’s WebSocket onopen event fires:

  1. Sets is_connected to true atomically
  2. Invokes the state change handler with RpcTransportState::Connected

Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:27-44

Active Connection Management

Heartbeat Mechanism (Tokio Only)

The Tokio client spawns a dedicated heartbeat task that sends WebSocket ping frames every 1 second to keep the connection alive and detect failures:

If the channel is closed (indicating shutdown), the task exits cleanly. Pong responses from the server are handled by the receive loop.

Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:139-154

Connection Status Checks

Both implementations provide an is_connected() method via the RpcServiceCallerInterface trait:

This method is checked before initiating RPC calls or emitting data. If the client is disconnected, operations are aborted early to prevent sending data over a closed connection.

Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:284-296 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:137-166

Disconnection and Cleanup

Shutdown Paths

The Tokio RpcClient implements two shutdown methods:

shutdown_sync()

Used during Drop to ensure cleanup happens synchronously. This method:

  1. Swaps is_connected to false using SeqCst ordering
  2. Acquires the state change handler lock (best-effort)
  3. Invokes the handler with RpcTransportState::Disconnected
  4. Does NOT fail pending requests (dispatcher lock not acquired)

Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:55-77

shutdown_async()

Used when errors are detected during background task execution. This method:

  1. Swaps is_connected to false using SeqCst ordering
  2. Acquires the state change handler lock (best-effort)
  3. Invokes the handler with RpcTransportState::Disconnected
  4. Acquires the dispatcher lock (async)
  5. Calls fail_all_pending_requests() to cancel all in-flight RPCs

Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:79-108

WASM Disconnect Handling

The WASM client’s handle_disconnect() method performs similar cleanup:

  1. Swaps is_connected to false using SeqCst ordering
  2. Invokes the state change handler with RpcTransportState::Disconnected
  3. Acquires the dispatcher lock
  4. Calls fail_all_pending_requests() with FrameDecodeError::ReadAfterCancel

Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:124-134

Disconnect Sequence Diagram

Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:42-108 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:124-134

Pending Request Cleanup

When a connection is lost, all pending RPC requests must be failed to prevent the application from waiting indefinitely. The RpcDispatcher::fail_all_pending_requests() method accomplishes this:

  1. Iterates over all pending requests in the dispatcher’s request map
  2. For each pending request, extracts the response callback
  3. Invokes the callback with a FrameDecodeError (typically ReadAfterCancel)
  4. Clears the pending request map

This ensures that all outstanding RPC calls return an error immediately, allowing application code to handle the failure and retry or report the error to the user.

Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs102 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:130-132

Drop Behavior and Resource Cleanup

Tokio Client Drop Implementation

The Tokio RpcClient implements the Drop trait to ensure proper cleanup when the client is no longer in use:

This implementation:

  1. Aborts all background tasks (heartbeat, send loop, receive loop)
  2. Calls shutdown_sync() to trigger state change handler
  3. Does NOT fail pending requests (to avoid blocking in destructor)

The Drop implementation ensures that even if the application drops the client without explicit cleanup, resources are released and handlers are notified.

Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:42-52

WASM Client Lifecycle

The WASM RpcWasmClient does not implement Drop because:

  1. WASM uses cooperative JavaScript event handling rather than background tasks
  2. Connection state is explicitly managed by JavaScript callbacks (handle_connect(), handle_disconnect())
  3. No OS-level resources need cleanup (WebSocket is owned by JavaScript)

The JavaScript glue code is responsible for calling handle_disconnect() when the WebSocket closes.

Sources: extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:16-35

Component Relationship Diagram

Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:25-108 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:17-134

Platform-Specific Differences

| Aspect | Tokio Client | WASM Client |
| --- | --- | --- |
| Connection Initialization | Automatic in ::new() via connect_async() | Manual via JavaScript handle_connect() call |
| Heartbeat | Automatic (1 second ping interval) | None (handled by browser) |
| Background Tasks | 3 spawned tasks (heartbeat, send, receive) | None (JavaScript event-driven) |
| Disconnect Detection | Automatic (WebSocket error, stream end) | Manual via JavaScript handle_disconnect() call |
| Drop Behavior | Aborts tasks, calls shutdown_sync() | No special Drop behavior |
| Cleanup Timing | Async (shutdown_async()) or sync (shutdown_sync()) | Async only (handle_disconnect()) |
| Pending Request Failure | Via shutdown_async() or explicitly triggered | Via handle_disconnect() |
| Threading Model | Multi-threaded with Tokio executor | Single-threaded WASM environment |

Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs

Ordering Guarantees

Both implementations use SeqCst (sequentially consistent) ordering when swapping the is_connected flag:

This ensures:

  1. Only one thread/task performs cleanup (the swap returns true only once)
  2. All subsequent reads of is_connected see false across all threads
  3. No operations are reordered across the swap boundary

The use of Relaxed ordering for reads (is_connected() method) is acceptable because the flag only transitions from true to false, never the reverse, making eventual consistency sufficient for read operations.

Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:61-284 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:39-138

Testing Connection Lifecycle

The codebase includes comprehensive integration tests verifying lifecycle behavior:

Test: Connection Failure

Verifies that attempting to connect to a non-listening port returns a ConnectionRefused error.

Sources: extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:15-31

Test: State Change Handler

Verifies that:

  1. Handler is called with Connected when connection is established
  2. Handler is called with Disconnected when server closes the connection
  3. Events occur in the correct order

Sources: extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:33-165

Test: Pending Requests Fail on Disconnect

Verifies that:

  1. RPC calls can be initiated while connected
  2. If the connection is lost before a response arrives, pending requests fail
  3. The error message indicates cancellation or transport failure

This test uses a oneshot::channel to capture the result of a spawned RPC task and validates that it receives an error after the server closes the connection.

Sources: extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:167-292

Best Practices

  1. Always Set State Change Handler Early : Register the handler immediately after creating the client to avoid missing the initial Connected event.

  2. Handle Disconnection Gracefully : Applications should assume connections can fail at any time and implement retry logic or user notifications.

  3. Check Connection Before Critical Operations : Use is_connected() to avoid attempting operations that will fail immediately.

  4. Avoid Long-Running Handlers : State change handlers should complete quickly to avoid blocking disconnect processing. Spawn separate tasks for expensive operations.

  5. WASM: Call handle_disconnect() Reliably : Ensure JavaScript glue code calls handle_disconnect() in both onclose and onerror WebSocket event handlers.

  6. Testing: Use Adequate Delays : Integration tests should allow sufficient time for background tasks to register pending requests before triggering disconnects.

Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:316-334 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:168-180 extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:220-246


Type Safety and Shared Definitions

Relevant source files

Purpose and Scope

This document explains how muxio achieves compile-time type safety through shared service definitions. It covers the architecture that enables both client and server to depend on a single source of truth for RPC method signatures, parameter types, and return types. This design eliminates a common class of runtime errors by ensuring that any mismatch between client and server results in a compile-time error rather than a runtime failure.

For implementation details about creating service definitions, see Creating Service Definitions. For details about method ID generation, see Method ID Generation. For serialization specifics, see Serialization with Bitcode.


The Type Safety Challenge in RPC Systems

Traditional RPC systems often face a fundamental challenge: ensuring that client and server agree on method signatures, parameter types, and return types. Common approaches include:

| Approach | Compile-Time Safety | Cross-Platform | Example |
| --- | --- | --- | --- |
| Schema files | Partial (code generation) | Yes | gRPC with .proto files |
| Runtime validation | No | Yes | JSON-RPC with runtime checks |
| Separate definitions | No | Yes | OpenAPI with client/server divergence |
| Shared type definitions | Yes | Platform-dependent | Shared Rust code |

Muxio uses the shared type definitions approach, but extends it to work across platforms (native and WASM) through its runtime-agnostic design. The core innovation is the RpcMethodPrebuffered trait, which defines a contract that both client and server implementations must satisfy.

Sources:


Shared Service Definitions Architecture

Analysis: The shared definition crate (example-muxio-rpc-service-definition) defines RPC methods by implementing the RpcMethodPrebuffered trait. Both server and client applications depend on this shared crate. The server uses decode_request() and encode_response() methods, while the client uses encode_request() and decode_response() (via the call() function). Because all implementations derive from the same trait implementation, the Rust compiler enforces type consistency at compile time.

Sources:


Compile-Time Type Safety Guarantees

Type Flow Diagram

Analysis: The type flow ensures that parameters pass through multiple type-checked boundaries. On the client side, the call() function requires parameters matching the Request associated type. The encode_request() method accepts this exact type. On the server side, decode_request() produces the same type, which the handler must accept. The response follows the reverse path with identical type checking. Any mismatch at any stage results in a compilation error, not a runtime error.

Sources:


The RpcMethodPrebuffered Trait Contract

Trait Structure

Analysis: Each RPC method implements the RpcMethodPrebuffered trait, defining its unique METHOD_ID, associated Request and Response types, and the required encoding/decoding methods. The trait enforces that all four encoding/decoding methods use consistent types. The call() method provides a high-level client interface that internally uses encode_request() and decode_response(), ensuring type safety throughout the call chain.

Sources:


Compile-Time Error Prevention

Type Mismatch Detection

The Rust compiler enforces type safety through multiple mechanisms:

| Mismatch Type | Detection Point | Compiler Error |
| --- | --- | --- |
| Parameter type mismatch | call() invocation | Expected Vec<f64>, found Vec<i32> |
| Handler input type mismatch | decode_request() call | Type mismatch in closure parameter |
| Handler output type mismatch | encode_response() call | Expected f64, found String |
| Response type mismatch | call() result binding | Expected f64, found i32 |
| Method ID collision | Trait implementation | Duplicate associated constant |

Example: Type Mismatch Prevention

Analysis: The Rust type system prevents type mismatches before the code ever runs. When a developer attempts to call an RPC method with incorrect parameter types, the compiler immediately flags the error by comparing the provided type against the Request associated type defined in the shared trait implementation. This catch-early approach eliminates an entire class of integration bugs that would otherwise only surface at runtime, potentially in production.

Sources:


Cross-Platform Type Safety

Shared Definitions Across Client Types

Analysis: The shared definition crate enables identical type safety guarantees across all client platforms. Both RpcClient (native Tokio) and RpcWasmClient (WASM browser) depend on the same example-muxio-rpc-service-definition crate. Application code written for one client type can be ported to another client type with minimal changes, because both implement RpcServiceCallerInterface and both use the same call() methods with identical type signatures. The Rust compiler enforces that all platforms use matching types.

Sources:


Integration with Method IDs and Serialization

Type Safety Dependencies

Analysis: Type safety in muxio is achieved through the integration of three components. The RpcMethodPrebuffered trait defines the type contract. xxhash-rust generates unique METHOD_ID constants at compile time, enabling the compiler to detect method ID collisions. bitcode provides type-preserving binary serialization, ensuring that the types decoded on the server match the types encoded on the client. The Rust compiler verifies that all encoding, network transmission, and decoding operations preserve type integrity.

Sources:


Benefits Summary

The shared service definitions architecture provides several concrete benefits:

| Benefit | Mechanism | Impact |
| --- | --- | --- |
| Compile-time error detection | Rust type system enforces trait contracts | Bugs caught before runtime |
| API consistency | Single source of truth for method signatures | No client-server divergence |
| Refactoring safety | Type changes propagate to all dependents | Compiler guides migration |
| Cross-platform uniformity | Same types work on native and WASM | Code reuse across platforms |
| Zero runtime overhead | All checks happen at compile time | No validation cost in production |
| Documentation through types | Type signatures are self-documenting | Reduced documentation burden |

Sources:


Creating Service Definitions

Relevant source files

Purpose and Scope

This page provides a step-by-step guide for creating RPC service definitions using the RpcMethodPrebuffered trait. Service definitions are shared Rust crates that define the contract between client and server, enabling compile-time type safety across platform boundaries.

For conceptual background on service definitions and their role in the architecture, see Service Definitions. For details on how method IDs are generated, see Method ID Generation. For serialization internals, see Serialization with Bitcode.


What is a Service Definition?

A service definition is a Rust struct that implements the RpcMethodPrebuffered trait. It serves as a shared contract between client and server, defining:

  • Input type : The parameters passed to the RPC method
  • Output type : The value returned from the RPC method
  • Method ID : A unique identifier for the method (compile-time generated hash)
  • Serialization logic : How to encode/decode request and response data

Service definitions are typically packaged in a separate crate that both client and server applications depend on. This ensures that any mismatch in data structures results in a compile-time error rather than a runtime failure.

Sources: README.md50 README.md:71-74


The RpcMethodPrebuffered Trait Structure

The RpcMethodPrebuffered trait defines the contract that all prebuffered service definitions must implement. This trait is automatically extended with the RpcCallPrebuffered trait, which provides the high-level call() method for client invocation.

graph TB
    RpcMethodPrebuffered["RpcMethodPrebuffered\n(Core trait)"]
RpcCallPrebuffered["RpcCallPrebuffered\n(Auto-implemented)"]
UserStruct["User Service Struct\n(e.g., Add, Echo, Mult)"]
RpcMethodPrebuffered --> RpcCallPrebuffered
    UserStruct -.implements.-> RpcMethodPrebuffered
    UserStruct -.gets.-> RpcCallPrebuffered
    
 
   RpcMethodPrebuffered --> Input["Associated Type: Input"]
RpcMethodPrebuffered --> Output["Associated Type: Output"]
RpcMethodPrebuffered --> MethodID["Constant: METHOD_ID"]
RpcMethodPrebuffered --> EncodeReq["fn encode_request()"]
RpcMethodPrebuffered --> DecodeReq["fn decode_request()"]
RpcMethodPrebuffered --> EncodeRes["fn encode_response()"]
RpcMethodPrebuffered --> DecodeRes["fn decode_response()"]

Trait Hierarchy

Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:10-21

Required Components

| Component | Type | Purpose |
| --- | --- | --- |
| Input | Associated Type | Type of the request parameters |
| Output | Associated Type | Type of the response value |
| METHOD_ID | u64 constant | Unique identifier for the method |
| encode_request() | Function | Serializes Input to Vec<u8> |
| decode_request() | Function | Deserializes Vec<u8> to Input |
| encode_response() | Function | Serializes Output to Vec<u8> |
| decode_response() | Function | Deserializes Vec<u8> to Output |

Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:1-6


Step-by-Step: Creating a Service Definition

Step 1: Create a Shared Crate

Create a new library crate that will be shared between client and server:

Add dependencies to Cargo.toml:

Sources: README.md71

Step 2: Define Request and Response Types

Define Rust structs for your input and output data. These must implement serde::Serialize and serde::Deserialize:

Note: The actual types can be as simple or complex as needed. They can be primitives, byte vectors (Vec<u8>), tuples, or complex nested structures.
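For example, a hypothetical Add method might use types like these (names are illustrative):

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq)]
pub struct AddRequestParams {
    pub numbers: Vec<f64>,
}

#[derive(Serialize, Deserialize, Debug, Clone, PartialEq)]
pub struct AddResponseParams {
    pub sum: f64,
}
```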

Sources: README.md:146-151

Step 3: Implement the RpcMethodPrebuffered Trait

Create a struct for your service and implement the trait:
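The following sketch shows the general shape of such an implementation for a hypothetical Add method; the exact signatures (return types, how METHOD_ID is produced) are defined by muxio-rpc-service and may differ:

```rust
use std::io;

pub struct Add;

impl RpcMethodPrebuffered for Add {
    type Input = Vec<f64>;
    type Output = f64;

    // In the real crate this constant is the xxhash of the method name,
    // computed at compile time; the literal here is only a placeholder.
    const METHOD_ID: u64 = 0x0000_0000_0000_0001;

    fn encode_request(input: Self::Input) -> io::Result<Vec<u8>> {
        Ok(bitcode::encode(&input))
    }

    fn decode_request(bytes: &[u8]) -> io::Result<Self::Input> {
        bitcode::decode(bytes)
            .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e.to_string()))
    }

    fn encode_response(output: Self::Output) -> io::Result<Vec<u8>> {
        Ok(bitcode::encode(&output))
    }

    fn decode_response(bytes: &[u8]) -> io::Result<Self::Output> {
        bitcode::decode(bytes)
            .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e.to_string()))
    }
}
```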

Sources: README.md:102-106 README.md146

Step 4: Use the Service Definition

Client-Side Usage

Import the service definition and call it using the RpcCallPrebuffered trait:
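A minimal call sketch, inside an async context with a connected client that implements RpcServiceCallerInterface:

```rust
// `Add` comes from the shared definition crate; the sum is computed server-side.
let sum = Add::call(&client, vec![1.0, 2.0, 3.0]).await?;
assert_eq!(sum, 6.0);
```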

Sources: README.md146 extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:50-53

Server-Side Usage

Register a handler using the service definition’s METHOD_ID and decode/encode methods:
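A hedged sketch of the registration shape; the endpoint's exact registration method and error conversion are defined by muxio-rpc-service-endpoint and are approximated here:

```rust
endpoint.register_prebuffered_handler::<Add>(|_context, request_bytes: Vec<u8>| async move {
    // Decode with the shared definition, run the business logic, encode the result.
    let numbers = Add::decode_request(&request_bytes).expect("malformed Add request");
    let sum: f64 = numbers.iter().sum();
    let response_bytes = Add::encode_response(sum).expect("failed to encode Add response");
    Ok(response_bytes)
});
```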

Sources: README.md:102-107 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:52-56


sequenceDiagram
    participant Client as "Client Application"
    participant CallTrait as "RpcCallPrebuffered::call()"
    participant ServiceDef as "Service Definition\n(Add struct)"
    participant Network as "Network Transport"
    participant ServerEndpoint as "Server Endpoint Handler"
    
    Note over Client,ServerEndpoint: Request Path
    
    Client->>CallTrait: Add::call(client, vec![1.0, 2.0, 3.0])
    CallTrait->>ServiceDef: encode_request(vec![1.0, 2.0, 3.0])
    ServiceDef->>ServiceDef: bitcode::encode()
    ServiceDef-->>CallTrait: Vec<u8> (binary data)
    CallTrait->>Network: Send with METHOD_ID
    
    Network->>ServerEndpoint: Binary frame received
    ServerEndpoint->>ServiceDef: decode_request(&bytes)
    ServiceDef->>ServiceDef: bitcode::decode()
    ServiceDef-->>ServerEndpoint: Vec<f64>
    ServerEndpoint->>ServerEndpoint: Execute: sum = 6.0
    
    Note over Client,ServerEndpoint: Response Path
    
    ServerEndpoint->>ServiceDef: encode_response(6.0)
    ServiceDef->>ServiceDef: bitcode::encode()
    ServiceDef-->>ServerEndpoint: Vec<u8> (binary data)
    ServerEndpoint->>Network: Send binary response
    
    Network->>CallTrait: Binary frame received
    CallTrait->>ServiceDef: decode_response(&bytes)
    ServiceDef->>ServiceDef: bitcode::decode()
    ServiceDef-->>CallTrait: f64: 6.0
    CallTrait-->>Client: Result: 6.0

Service Definition Data Flow

The following diagram shows how data flows through a service definition during an RPC call:

Sources: README.md:92-161 extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:50-97


Organizing Multiple Services

A service definition crate typically contains multiple service definitions. The recommended organization pattern is:

File Structure

my-service-definition/
├── Cargo.toml
└── src/
    ├── lib.rs
    └── prebuffered/
        ├── mod.rs
        ├── add.rs
        ├── multiply.rs
        └── echo.rs

Module Organization

src/lib.rs:

src/prebuffered/mod.rs:

This structure allows clients to import services with a clean syntax:
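A sketch of the corresponding modules and the resulting import path (crate and method names are illustrative):

```rust
// src/lib.rs
pub mod prebuffered;

// src/prebuffered/mod.rs
mod add;
mod echo;
mod multiply;

pub use add::Add;
pub use echo::Echo;
// ...one re-export per method struct

// Client or server code:
use my_service_definition::prebuffered::{Add, Echo};
```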

Sources: README.md:71-74 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs21


Service Definition Component Mapping

This diagram maps the conceptual components to their code entities:

Sources: README.md:71-119 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:21-68


Large Payload Handling

Service definitions automatically handle large payloads through a smart transport strategy. When encoded request parameters exceed DEFAULT_SERVICE_MAX_CHUNK_SIZE, they are automatically sent as a chunked payload rather than inline in the request header.

Transport Strategy Selection

| Encoded Size | Transport Method | Field Used |
| --- | --- | --- |
| < DEFAULT_SERVICE_MAX_CHUNK_SIZE | Inline in header | rpc_param_bytes |
| ≥ DEFAULT_SERVICE_MAX_CHUNK_SIZE | Chunked payload | rpc_prebuffered_payload_bytes |

This strategy is implemented automatically by the RpcCallPrebuffered trait and requires no special handling in service definitions:

Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:30-48 extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:58-65

Large Payload Test Example

Sources: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:295-311


Best Practices

Type Design

| Practice | Rationale |
| --- | --- |
| Use #[derive(Serialize, Deserialize)] | Required for bitcode serialization |
| Add #[derive(Debug, Clone)] | Helpful for testing and debugging |
| Keep types simple | Simpler types serialize more efficiently |
| Use Vec<u8> for binary data | Avoids double-encoding overhead |

Method Naming

| Practice | Example | Rationale |
| --- | --- | --- |
| Use PascalCase for struct names | Add, GetUserProfile | Rust convention for type names |
| Use descriptive names | CalculateSum vs Calc | Improves code readability |
| Match domain concepts | AuthenticateUser | Makes intent clear |

Error Handling

Service definitions should use io::Error for encoding/decoding failures:
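For example, a decode helper might map a deserialization failure into io::Error like this (a sketch; the concrete error conversion is up to the definition author):

```rust
use std::io;

fn decode_request(bytes: &[u8]) -> io::Result<Vec<f64>> {
    bitcode::decode(bytes)
        .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e.to_string()))
}
```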

This ensures consistent error propagation through the RPC framework.

Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:6-7

Versioning

When evolving service definitions:

  1. Additive changes are safe : Adding new optional fields to request/response types
  2. Breaking changes require new methods : Changing input/output types requires a new METHOD_ID
  3. Maintain backwards compatibility : Keep old service definitions until all clients migrate

Sources: README.md50


Complete Example: Echo Service

Here is a complete example of a simple Echo service definition:
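A compact sketch following the same pattern as the Add example in Step 3; because the payload is already Vec<u8>, the encode/decode methods can pass bytes through unchanged:

```rust
use std::io;

pub struct Echo;

impl RpcMethodPrebuffered for Echo {
    type Input = Vec<u8>;
    type Output = Vec<u8>;

    // Placeholder; in practice this is the compile-time xxhash of the method name.
    const METHOD_ID: u64 = 0x0000_0000_0000_0002;

    fn encode_request(input: Self::Input) -> io::Result<Vec<u8>> {
        Ok(input)
    }
    fn decode_request(bytes: &[u8]) -> io::Result<Self::Input> {
        Ok(bytes.to_vec())
    }
    fn encode_response(output: Self::Output) -> io::Result<Vec<u8>> {
        Ok(output)
    }
    fn decode_response(bytes: &[u8]) -> io::Result<Self::Output> {
        Ok(bytes.to_vec())
    }
}
```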

Usage:
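Client-side usage, assuming a connected client (sketch):

```rust
let payload = b"hello muxio".to_vec();
let echoed = Echo::call(&client, payload.clone()).await?;
assert_eq!(echoed, payload);
```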

Sources: README.md:114-118 README.md:150-151 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:64-67 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:131-132


Method ID Generation

Relevant source files

This page explains how the muxio RPC framework generates unique identifiers for RPC methods at compile time using the xxhash algorithm. Method IDs enable efficient method dispatch without runtime string comparisons or schema files.

For information about defining RPC services using these method IDs, see Creating Service Definitions. For details about the serialization format that carries method IDs over the wire, see Serialization with Bitcode.


Overview

The muxio RPC framework generates a unique 64-bit method identifier for each RPC method by hashing the method’s fully-qualified name at compile time. This approach provides several key benefits:

| Aspect | Implementation |
| --- | --- |
| Hash Algorithm | xxhash (xxHash64 variant) |
| Input | Fully-qualified method name as UTF-8 string |
| Output | 64-bit unsigned integer (u64) |
| Generation Time | Compile time (zero runtime overhead) |
| Collision Handling | Deterministic; same name always produces same ID |

The generated method IDs are embedded directly into service definition constants, eliminating the need for runtime string hashing or lookup tables. Both client and server compile against the same service definitions, ensuring that method IDs match across platforms.

Sources:

  • Cargo.lock:1886-1889 (xxhash-rust dependency)
  • extensions/muxio-rpc-service/Cargo.toml:16 (xxhash-rust in dependencies)

The xxhash Algorithm

The muxio framework uses the xxhash-rust crate to generate method IDs. xxhash was selected for its specific characteristics:

graph LR
    Input["Method Name String\n(UTF-8 bytes)"]
XXH64["xxHash64 Algorithm"]
Output["64-bit Method ID\n(u64)"]
Input --> XXH64
 
   XXH64 --> Output
    
    Props["Properties:\n- Deterministic\n- Fast (compile-time only)\n- Non-cryptographic\n- Low collision rate\n- Platform-independent"]
XXH64 -.-> Props

Algorithm Characteristics

Why xxhash?

| Property | Benefit |
| --- | --- |
| Deterministic | Same method name always produces the same ID across all platforms and compilations |
| Fast | Minimal compile-time overhead; speed matters less since generation is compile-time only |
| Non-cryptographic | No security requirements for method IDs; simpler algorithm reduces dependencies |
| Low collision rate | Statistical likelihood of two different method names producing the same ID is negligible |
| 64-bit output | Large enough keyspace (2^64 possible IDs) to avoid collisions in practice |

The framework does not perform collision detection because:

  1. The 64-bit keyspace makes collisions statistically improbable for any reasonable number of methods
  2. Methods are identified by fully-qualified names (including trait and module paths), further reducing collision likelihood
  3. If a collision does occur, it will manifest as a method routing error detectable during testing

Sources:

  • Cargo.lock:1886-1889 (xxhash-rust package metadata)
  • extensions/muxio-rpc-service/Cargo.toml:3 (core traits and method ID generation description)

Compile-Time Generation Mechanism

Method IDs are computed during compilation, not at runtime. The generation process integrates with Rust’s trait system and constant evaluation capabilities.

sequenceDiagram
    participant Source as "Service Definition Source Code"
    participant Compiler as "Rust Compiler"
    participant XXHash as "xxhash-rust Crate"
    participant Binary as "Compiled Binary"
    
    Note over Source: RpcMethodPrebuffered trait\nwith method name
    Source->>Compiler: Compile service definition
    Compiler->>XXHash: Hash method name constant
    XXHash-->>Compiler: Return u64 method ID
    Compiler->>Compiler: Embed ID as constant (METHOD_ID)
    Compiler->>Binary: Include ID in binary
    Note over Binary: Method ID available at runtime\nas a simple constant

Generation Flow

Constant Evaluation

The method ID is generated using Rust’s const fn capabilities, making the hash computation part of compile-time constant evaluation:

  1. Service definition declares a method name as a string constant
  2. The xxhash function (or wrapper) is invoked at compile time
  3. The resulting u64 is stored as a const associated with the method
  4. This constant is directly embedded in the compiled binary

This approach means:

  • Zero runtime cost : No hashing occurs during program execution
  • Type safety : Method IDs are compile-time constants that cannot be accidentally modified
  • Cross-platform consistency : The same source code produces identical method IDs on all platforms

Sources:

  • extensions/muxio-rpc-service/Cargo.toml:3 (compile-time method ID generation description)
  • Cargo.lock:858-867 (muxio-rpc-service dependencies including xxhash-rust)

Integration with Service Definitions

Method IDs are tightly integrated with the RpcMethodPrebuffered trait, which defines the contract for RPC methods.

graph TB
    Trait["RpcMethodPrebuffered Trait"]
MethodName["METHOD_NAME: &'static str\ne.g., 'MyService::calculate'"]
MethodID["METHOD_ID: u64\nxxhash64(METHOD_NAME)"]
Params["Params Type\n(Serializable)"]
Response["Response Type\n(Serializable)"]
Trait --> MethodName
 
   Trait --> MethodID
 
   Trait --> Params
 
   Trait --> Response
    
    MethodName -.generates.-> MethodID
    
    RpcRequest["RpcRequest Structure"]
RpcRequest --> MethodIDField["method_id: u64"]
RpcRequest --> ParamsField["params: Vec<u8>\n(serialized)"]
MethodID -.copied into.-> MethodIDField
    Params -.serialized into.-> ParamsField

Method ID in Service Definition Structure

Method ID Storage and Usage

Each service definition provides:

| Component | Type | Purpose |
| --- | --- | --- |
| METHOD_NAME | &'static str | Human-readable method identifier (used for debugging/logging) |
| METHOD_ID | u64 | Hashed identifier used in binary protocol |
| Params | Associated type | Request parameter structure |
| Response | Associated type | Response result structure |

The METHOD_ID constant is used when:

  • Encoding requests : Client includes method ID in RpcRequest header
  • Dispatching requests : Server uses method ID to route to the appropriate handler
  • Validating responses : Client verifies the response corresponds to the correct method

Sources:

  • extensions/muxio-rpc-service/Cargo.toml:3 (core traits and types description)
  • Cargo.lock:858-867 (muxio-rpc-service package with xxhash dependency)

sequenceDiagram
    participant Client as "RPC Client"
    participant Network as "Binary Transport"
    participant Dispatcher as "RpcDispatcher"
    participant Endpoint as "RpcServiceEndpoint"
    participant Handler as "Method Handler"
    
    Note over Client: METHOD_ID = xxhash64("Add")
    Client->>Client: Create RpcRequest with METHOD_ID
    Client->>Network: Serialize and send request
    Network->>Dispatcher: Receive binary frames
    Dispatcher->>Dispatcher: Extract method_id from RpcRequest
    Dispatcher->>Endpoint: Route by method_id (u64 comparison)
    
    alt Method ID Registered
        Endpoint->>Handler: Invoke handler for METHOD_ID
        Handler-->>Endpoint: Return response
        Endpoint-->>Dispatcher: Send RpcResponse
    else Method ID Unknown
        Endpoint-->>Dispatcher: Return MethodNotFound error
    end
    
    Dispatcher-->>Network: Serialize and send response
    Network-->>Client: Deliver response

Method Dispatch Using IDs

At runtime, method IDs enable efficient dispatch without string comparisons or lookup tables.

Request Processing Flow

Dispatch Performance Characteristics

The use of 64-bit integer method IDs provides:

| Characteristic | Benefit |
| --- | --- |
| O(1) lookup | Hash map dispatch using HashMap<u64, Handler> |
| No string allocation | Method names never allocated at runtime |
| Cache-friendly | Integer comparison much faster than string comparison |
| Minimal memory | 8 bytes per method ID vs. variable-length strings |

The endpoint maintains a dispatch table:

HashMap<u64, Arc<dyn MethodHandler>>
  key: METHOD_ID (e.g., 0x12ab34cd56ef7890)
  value: Handler function for that method

When a request arrives with method_id = 0x12ab34cd56ef7890, the dispatcher performs a simple hash map lookup to find the handler.

Sources:

  • extensions/muxio-rpc-service/Cargo.toml:3 (service traits and method dispatch)
  • Cargo.lock:883-895 (muxio-rpc-service-endpoint package)

graph TB
    subgraph "Development Time"
        Define["Define Service:\ntrait MyMethod"]
Name["METHOD_NAME =\n'MyService::add'"]
Hash["Compile-time xxhash64"]
Constant["const METHOD_ID: u64 =\n0x3f8a4b2c1d9e7654"]
Define --> Name
 
       Name --> Hash
 
       Hash --> Constant
    end
    
    subgraph "Client Runtime"
        CallSite["Call my_method(params)"]
CreateReq["Create RpcRequest:\nmethod_id: 0x3f8a4b2c1d9e7654\nparams: serialized"]
Encode["Encode to binary frames"]
CallSite --> CreateReq
        Constant -.embedded in.-> CreateReq
 
       CreateReq --> Encode
    end
    
    subgraph "Network"
        Transport["WebSocket/TCP Transport\nBinary protocol"]
end
    
    subgraph "Server Runtime"
        Decode["Decode binary frames"]
ExtractID["Extract method_id:\n0x3f8a4b2c1d9e7654"]
Lookup["HashMap lookup:\nhandlers[0x3f8a4b2c1d9e7654]"]
Execute["Execute handler function"]
Decode --> ExtractID
 
       ExtractID --> Lookup
 
       Lookup --> Execute
        Constant -.registered in.-> Lookup
    end
    
 
   Encode --> Transport
 
   Transport --> Decode
    
    style Constant fill:#f9f9f9
    style CreateReq fill:#f9f9f9
    style Lookup fill:#f9f9f9

Complete Method ID Lifecycle

This diagram traces a method ID from definition through compilation, serialization, and dispatch:

Sources:

  • extensions/muxio-rpc-service/Cargo.toml:1-9 (service definition crate overview)
  • Cargo.lock:858-867 (muxio-rpc-service dependencies)
  • Cargo.lock:883-895 (muxio-rpc-service-endpoint for handler registration)

Benefits and Design Trade-offs

Advantages

| Benefit | Explanation |
| --- | --- |
| Zero Runtime Cost | Hash computation happens once at compile time; runtime operations use simple integer comparisons |
| Type Safety | Method IDs are compile-time constants; cannot be accidentally modified or corrupted |
| Platform Independence | Same method name produces identical ID on all platforms (Windows, Linux, macOS, WASM) |
| No Schema Files | Service definitions are Rust code; no external IDL or schema generation tools required |
| Fast Dispatch | Integer hash map lookup is faster than string comparison or reflection-based dispatch |
| Compact Wire Format | 8-byte method ID vs. variable-length method name string |

Design Considerations

| Consideration | Mitigation |
| --- | --- |
| Hash Collisions | 64-bit keyspace makes collisions statistically improbable; detected during testing if they occur |
| Method Versioning | Changing a method name produces a new ID; requires coordinated client/server updates |
| Debugging | Method names are logged alongside IDs for human readability during development |
| Binary Compatibility | Changing method names breaks wire compatibility; version management required at application level |

The framework prioritizes simplicity and performance over elaborate versioning mechanisms. Applications requiring complex API evolution strategies should implement versioning at a higher level (e.g., versioned service definitions or API paths).

Sources:

  • extensions/muxio-rpc-service/Cargo.toml:3 (compile-time method ID generation)
  • Cargo.lock:1886-1889 (xxhash-rust algorithm choice)

Example: Method ID Generation in Practice

Consider a simple service definition:
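A sketch of the compile-time hashing, assuming the xxhash-rust const_xxh64 API with a zero seed; the helper actually used by muxio-rpc-service may differ:

```rust
use xxhash_rust::const_xxh64::xxh64;

pub const METHOD_NAME: &str = "calculator::Add";
// Evaluated entirely at compile time; the result is embedded as a plain u64.
pub const METHOD_ID: u64 = xxh64(METHOD_NAME.as_bytes(), 0);
```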

Method Definition Components

Step-by-Step Process

  1. Definition : Developer writes struct AddMethod implementing RpcMethodPrebuffered
  2. Naming : Trait defines const METHOD_NAME: &'static str = "calculator::Add"
  3. Hashing : Compiler invokes xxhash64("calculator::Add") at compile time
  4. Constant : Result (e.g., 0x9c5f3a2b8d7e4f61) stored as const METHOD_ID: u64
  5. Client Usage : When calling Add, client creates RpcRequest { method_id: 0x9c5f3a2b8d7e4f61, ... }
  6. Server Registration : Server registers handler: handlers.insert(0x9c5f3a2b8d7e4f61, add_handler)
  7. Dispatch : Server receives request, extracts method_id, performs HashMap lookup, invokes handler

This entire flow occurs with zero string operations at runtime, providing efficient method dispatch across client and server implementations.

Sources:

  • extensions/muxio-rpc-service/Cargo.toml:1-18 (service definition crate structure)
  • Cargo.lock:858-867 (muxio-rpc-service dependencies including xxhash-rust)

Serialization with Bitcode

Relevant source files

This page documents how the bitcode library provides compact binary serialization for RPC parameters, responses, and metadata throughout the muxio framework. For information about defining service methods that use these serialized types, see Creating Service Definitions. For details on how method identifiers are generated, see Method ID Generation.

Purpose and Role

The bitcode crate serves as the serialization layer that transforms strongly-typed Rust structs into compact binary representations suitable for network transmission. This enables type-safe RPC communication while maintaining minimal payload sizes and efficient encoding/decoding performance.

Sources:

Bitcode in the Data Pipeline

Diagram: Bitcode serialization flow between application types and wire format

Sources:

Core Traits and Functions

| Trait/Function | Purpose | Usage Context |
| --- | --- | --- |
| bitcode::Encode | Derive macro for serialization | Applied to request/response parameter structs |
| bitcode::Decode | Derive macro for deserialization | Applied to request/response parameter structs |
| bitcode::encode(&T) | Encodes a value to Vec<u8> | Used before sending RPC requests/responses |
| bitcode::decode::<T>(&[u8]) | Decodes bytes to type T | Used after receiving RPC requests/responses |

Sources:

Type Definitions with Bitcode

Types used in RPC communication must derive both Encode and Decode traits. These derivations are typically combined with Debug and PartialEq for testing and debugging purposes.

Diagram: Type definition pattern for serializable RPC parameters

graph TB
    subgraph "Example Type Definition"
        Struct["#[derive(Encode, Decode, PartialEq, Debug)]\nstruct AddRequestParams"]
Field1["numbers: Vec&lt;f64&gt;"]
Struct --> Field1
    end
    
    subgraph "Bitcode Derive Macros"
        EncodeMacro["bitcode_derive::Encode"]
DecodeMacro["bitcode_derive::Decode"]
end
    
    subgraph "Generated Implementations"
        EncodeImpl["impl Encode for AddRequestParams"]
DecodeImpl["impl Decode for AddRequestParams"]
end
    
 
   Struct -.->|expands to| EncodeMacro
 
   Struct -.->|expands to| DecodeMacro
    
 
   EncodeMacro --> EncodeImpl
 
   DecodeMacro --> DecodeImpl

Example from test suite:

tests/rpc_dispatcher_tests.rs:10-28 demonstrates the standard pattern:
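The pattern is essentially the following (the numbers field matches the diagram above; the sum field on the response type is an assumption):

```rust
use bitcode::{Decode, Encode};

#[derive(Encode, Decode, PartialEq, Debug)]
pub struct AddRequestParams {
    pub numbers: Vec<f64>,
}

#[derive(Encode, Decode, PartialEq, Debug)]
pub struct AddResponseParams {
    pub sum: f64,
}
```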

Sources:

Integration with RPC Request/Response Types

The serialized bytes produced by bitcode::encode() are stored in specific fields of the RPC protocol structures:

| RPC Type | Field | Purpose |
| --- | --- | --- |
| RpcRequest | rpc_param_bytes: Option<Vec<u8>> | Encoded method parameters sent in header metadata |
| RpcRequest | rpc_prebuffered_payload_bytes: Option<Vec<u8>> | Encoded payload data for prebuffered requests |
| RpcResponse | rpc_prebuffered_payload_bytes: Option<Vec<u8>> | Encoded response payload |
| RpcHeader | rpc_metadata_bytes: Vec<u8> | Encoded metadata (parameters or status) |

Sources:

Request Encoding Pattern

Diagram: Encoding flow for RPC request parameters

Example from test suite:

tests/rpc_dispatcher_tests.rs:42-49 demonstrates encoding request parameters:

Sources:

Response Decoding Pattern

Diagram: Decoding flow for RPC response payload

Example from test suite:

tests/rpc_dispatcher_tests.rs:100-116 demonstrates decoding response payloads:

Sources:

graph LR
    subgraph "Receive Phase"
        ReqBytes["rpc_param_bytes"]
DecodeReq["bitcode::decode\n&lt;AddRequestParams&gt;"]
ReqParams["AddRequestParams"]
ReqBytes --> DecodeReq
 
       DecodeReq --> ReqParams
    end
    
    subgraph "Processing"
        Logic["Business Logic\n(sum numbers)"]
RespParams["AddResponseParams"]
ReqParams --> Logic
 
       Logic --> RespParams
    end
    
    subgraph "Send Phase"
        EncodeResp["bitcode::encode"]
RespBytes["rpc_prebuffered_payload_bytes"]
RespParams --> EncodeResp
 
       EncodeResp --> RespBytes
    end

Server-Side Processing Pattern

The server decodes incoming request parameters, processes them, and encodes response payloads:

Diagram: Complete encode-process-decode cycle on server

Example from test suite:

tests/rpc_dispatcher_tests.rs:151-167 demonstrates the complete server-side pattern:
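As a stand-in for that test, here is a hedged sketch of the decode → process → encode cycle using the AddRequestParams/AddResponseParams names from the diagrams; the response struct's sum field is illustrative, not taken from the source:

```rust
use bitcode::{Decode, Encode};

#[derive(Encode, Decode, PartialEq, Debug)]
struct AddRequestParams {
    numbers: Vec<f64>,
}

// Response shape is illustrative; the real test defines its own struct.
#[derive(Encode, Decode, PartialEq, Debug)]
struct AddResponseParams {
    sum: f64,
}

// Decode the request bytes, run the business logic, and encode the response
// bytes that would be placed into rpc_prebuffered_payload_bytes.
fn handle_add(request_bytes: &[u8]) -> Result<Vec<u8>, bitcode::Error> {
    let request: AddRequestParams = bitcode::decode(request_bytes)?;
    let response = AddResponseParams {
        sum: request.numbers.iter().sum(),
    };
    Ok(bitcode::encode(&response))
}

fn main() {
    let request_bytes = bitcode::encode(&AddRequestParams { numbers: vec![1.0, 2.0, 3.0] });
    let response_bytes = handle_add(&request_bytes).expect("decode failed");
    let response: AddResponseParams = bitcode::decode(&response_bytes).unwrap();
    assert_eq!(response.sum, 6.0);
}
```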

Sources:

Supported Types

Bitcode supports a wide range of Rust types through its derive macros:

| Type Category | Examples | Notes |
| --- | --- | --- |
| Primitives | i32, u64, f64, bool | Direct binary encoding |
| Standard collections | Vec<T>, HashMap<K,V>, Option<T> | Length-prefixed encoding |
| Tuples | (T1, T2, T3) | Sequential encoding |
| Structs | Custom types with #[derive(Encode, Decode)] | Field-by-field encoding |
| Enums | Tagged unions with variants | Discriminant + variant data |

Sources:

graph TB
    Bytes["Incoming Bytes"]
Decode["bitcode::decode&lt;T&gt;()"]
Success["Ok(T)"]
Error["Err(bitcode::Error)"]
Bytes --> Decode
 
   Decode -->|Valid binary| Success
 
   Decode -->|Invalid/incompatible| Error
    
 
   Error -->|Handle| ErrorHandler["Error Handler\n(log, return error response)"]

Error Handling

Decoding operations return Result types to handle malformed or incompatible binary data:

Diagram: Error handling in bitcode deserialization

The test code demonstrates using .unwrap() for simplicity, but production code should handle decode errors gracefully:

tests/rpc_dispatcher_tests.rs:152-153 shows unwrapping (test code):
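A hedged sketch of the graceful alternative, mapping a bitcode decode failure into an error value instead of panicking; the step of wrapping this into an RpcServiceError response is omitted:

```rust
use bitcode::{Decode, Encode};

#[derive(Encode, Decode, Debug)]
struct AddRequestParams {
    numbers: Vec<f64>,
}

// Return a descriptive error instead of unwrapping; a real handler would
// typically map this into an RpcServiceErrorCode::System response.
fn decode_params(bytes: &[u8]) -> Result<AddRequestParams, String> {
    bitcode::decode::<AddRequestParams>(bytes)
        .map_err(|e| format!("failed to decode AddRequestParams: {e}"))
}

fn main() {
    let bytes = bitcode::encode(&AddRequestParams { numbers: vec![1.0, 2.0] });
    match decode_params(&bytes) {
        Ok(params) => println!("decoded {} numbers", params.numbers.len()),
        // Corrupt or incompatible input lands here instead of panicking.
        Err(e) => eprintln!("{e}"),
    }
}
```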

Sources:

Compact Binary Format

Bitcode produces compact binary representations compared to text-based formats like JSON. The format characteristics include:

| Feature | Benefit |
| --- | --- |
| No field names in output | Reduces payload size by relying on struct definition order |
| Variable-length integer encoding | Smaller values use fewer bytes |
| No schema overhead | Binary is decoded based on compile-time type information |
| Aligned data structures | Optimized for fast deserialization via bytemuck |

Sources:

Dependencies and Ecosystem Integration

The bitcode crate integrates with the broader Rust ecosystem:

Diagram: Bitcode dependency graph

Sources:

Usage in Muxio Ecosystem

Bitcode is used throughout the muxio workspace:

| Crate | Usage |
| --- | --- |
| muxio | Core RPC protocol structures |
| muxio-rpc-service | Service definition trait bounds |
| example-muxio-rpc-service-definition | Shared RPC parameter types |
| muxio-rpc-service-endpoint | Server-side deserialization |
| muxio-rpc-service-caller | Client-side serialization |

Sources:



Error Handling

Relevant source files

Purpose and Scope

This document describes error handling strategies, error types, and failure modes throughout the rust-muxio RPC system. It covers how errors are detected, propagated across layers, and delivered to calling code. This includes transport failures, framing errors, RPC-level failures, and critical system failures like mutex poisoning.

For information about defining service errors in your own RPC methods, see Creating Service Definitions. For connection lifecycle management and state changes, see Connection Lifecycle and State Management.


Error Type Hierarchy

The muxio system uses a layered error model that mirrors its architectural layers. Each layer defines specific error types appropriate to its abstraction level.

graph TB
    RpcServiceError["RpcServiceError"]
RpcError["RpcServiceError::Rpc"]
TransportError["RpcServiceError::Transport"]
RpcServiceErrorPayload["RpcServiceErrorPayload"]
RpcServiceErrorCode["RpcServiceErrorCode"]
IoError["std::io::Error"]
FrameDecodeError["FrameDecodeError"]
FrameEncodeError["FrameEncodeError"]
RpcResultStatus["RpcResultStatus"]
RpcServiceError --> RpcError
 
   RpcServiceError --> TransportError
    
 
   RpcError --> RpcServiceErrorPayload
 
   RpcServiceErrorPayload --> RpcServiceErrorCode
    
 
   TransportError --> IoError
    IoError -.wraps.-> FrameDecodeError
    
 
   RpcServiceErrorCode --> NotFound["NotFound"]
RpcServiceErrorCode --> Fail["Fail"]
RpcServiceErrorCode --> System["System"]
RpcResultStatus --> Success["Success"]
RpcResultStatus --> MethodNotFound["MethodNotFound"]
RpcResultStatus --> FailStatus["Fail"]
RpcResultStatus --> SystemError["SystemError"]
RpcResultStatus -.maps_to.-> RpcServiceErrorCode

Error Type Relationships

Sources:

RpcServiceError

RpcServiceError is the primary error type exposed to application code when making RPC calls. It has two variants:

| Variant | Description | Contains |
| --- | --- | --- |
| Rpc | Remote service returned an error | RpcServiceErrorPayload with code and message |
| Transport | Connection or framing failure | std::io::Error |

Sources:

RpcServiceErrorCode

Application-level error codes that indicate why an RPC call failed:

| Code | Meaning | Typical Cause |
| --- | --- | --- |
| NotFound | Method does not exist | Client calls unregistered method or method ID mismatch |
| Fail | Method executed but failed | Handler returned an error |
| System | Internal system error | Serialization failure, internal panic, resource exhaustion |

Sources:

RpcResultStatus

Wire-format status codes transmitted in RPC response headers. These are converted to RpcServiceErrorCode on the client side:

| Status | Wire Byte | Maps To |
| --- | --- | --- |
| Success | 0x00 | (no error) |
| MethodNotFound | N/A | RpcServiceErrorCode::NotFound |
| Fail | N/A | RpcServiceErrorCode::Fail |
| SystemError | N/A | RpcServiceErrorCode::System |

Sources:

FrameDecodeError and FrameEncodeError

Low-level errors in the binary framing protocol:

  • FrameDecodeError : Occurs when incoming bytes cannot be parsed as valid frames (corrupt header, invalid stream ID, etc.)
  • FrameEncodeError : Occurs when outgoing data cannot be serialized into frames (buffer issues, invalid state, etc.)

These errors are typically wrapped in io::Error and surfaced as RpcServiceError::Transport.

Sources:


Error Propagation Through Layers

Errors flow through multiple layers before reaching application code. The propagation path depends on whether the error originates from transport, framing, RPC protocol, or service logic.

sequenceDiagram
    participant App as "Application Code"
    participant Caller as "RpcServiceCallerInterface"
    participant Dispatcher as "RpcDispatcher"
    participant Session as "RpcSession"
    participant Transport as "WebSocket Transport"
    
    Note over Transport: Transport Error
    Transport->>Session: read_bytes() returns Err
    Session->>Dispatcher: FrameDecodeError
    Dispatcher->>Dispatcher: fail_all_pending_requests()
    Dispatcher->>Caller: RpcStreamEvent::Error
    Caller->>Caller: Convert to RpcServiceError::Transport
    Caller->>App: Err(RpcServiceError::Transport)
    
    Note over Transport: RPC Method Error
    Transport->>Session: Valid frames, status=Fail
    Session->>Dispatcher: RpcStreamEvent::Header (status byte)
    Dispatcher->>Caller: status=RpcResultStatus::Fail
    Caller->>Caller: Convert to RpcServiceError::Rpc
    Caller->>App: Err(RpcServiceError::Rpc)

Error Flow Diagram

Sources:

Streaming vs Buffered Error Delivery

The system supports two error delivery modes depending on the RPC call type:

graph LR
 
   Error["Error Occurs"] --> RecvFn["recv_fn closure"]
RecvFn --> Status["Parse RpcResultStatus"]
Status --> ErrorBuffer["Buffer error payload"]
ErrorBuffer --> End["RpcStreamEvent::End"]
End --> Send["sender.send(Err(...))"]
Send --> AppCode["Application receives Err from stream"]

Streaming Error Delivery

For streaming RPC calls using call_rpc_streaming(), errors are sent through the DynamicReceiver channel as they occur:

Sources:

graph LR
 
   Stream["call_rpc_streaming"] --> Loop["while let Some(result)"]
Loop --> CheckResult{"result?"}
CheckResult -->|Ok| Accumulate["success_buf.extend"]
CheckResult -->|Err| StoreError["err = Some(e); break"]
Accumulate --> Loop
 
   StoreError --> Return["Err(rpc_service_error)"]
Loop -->|None| Decode["decode(success_buf)"]
Decode --> ReturnOk["Ok(T)"]

Buffered Error Delivery

For prebuffered RPC calls using call_rpc_buffered(), errors are accumulated until the stream ends, then returned as a Result<T, RpcServiceError>:

Sources:


graph TB
    RecvFn["recv_fn(RpcStreamEvent)"]
Header["Header Event"]
Payload["PayloadChunk Event"]
End["End Event"]
Error["Error Event"]
RecvFn --> Header
 
   RecvFn --> Payload
 
   RecvFn --> End
 
   RecvFn --> Error
    
 
   Header --> ParseStatus["Parse RpcResultStatus from metadata"]
ParseStatus --> StoreStatus["Store in status Mutex"]
StoreStatus --> SendReady["Send readiness signal"]
Payload --> CheckStatus{"status?"}
CheckStatus -->|Success| SendChunk["sender.send(Ok(bytes))"]
CheckStatus -->|Error status| BufferError["error_buffer.extend(bytes)"]
End --> FinalStatus{"final status?"}
FinalStatus -->|MethodNotFound| SendNotFound["sender.send(Err(NotFound))"]
FinalStatus -->|Fail| SendFail["sender.send(Err(Fail))"]
FinalStatus -->|SystemError| SendSystem["sender.send(Err(SystemError))"]
FinalStatus -->|Success| Close["Close channel normally"]
Error --> CreateError["Create Transport error"]
CreateError --> SendError["sender.send(Err(Transport))"]
SendError --> DropSender["Drop sender"]

Error Handling in recv_fn Closure

The recv_fn closure in RpcServiceCallerInterface is the primary mechanism for receiving and transforming RPC stream events into application-level errors. It handles four event types:

RpcStreamEvent Processing

Sources:

Error Buffering Logic

When a non-Success status is received, payload chunks are buffered into error_buffer instead of being sent to the application. This allows the complete error message to be assembled:

| Event Sequence | Status | Action |
| --- | --- | --- |
| Header arrives | MethodNotFound | Store status, buffer subsequent payloads |
| PayloadChunk arrives | MethodNotFound | Append to error_buffer |
| PayloadChunk arrives | MethodNotFound | Append to error_buffer |
| End arrives | MethodNotFound | Decode error_buffer as error message, send Err(...) |

Sources:


Disconnection and Transport Errors

Transport-level failures require special handling to prevent hanging requests and ensure prompt error delivery.

Connection State Checks

Before initiating any RPC call, the caller checks connection state:

Sources:

graph TB
 
   DispatcherCall["dispatcher.call()"] --> WaitReady["ready_rx.await"]
WaitReady --> CheckResult{"Result?"}
CheckResult -->|Ok| ReturnEncoder["Return (encoder, rx)"]
CheckResult -->|Err| ReturnError["Return Err(Transport)"]
TransportFail["Transport fails"] --> SendError["ready_tx.send(Err(io::Error))"]
SendError --> WaitReady
    
 
   ChannelDrop["Handler drops ready_tx"] --> ChannelClosed["ready_rx returns Err"]
ChannelClosed --> CheckResult

Readiness Channel Errors

The call_rpc_streaming() method uses a oneshot channel to signal when the RPC call is ready (header received). If this channel closes prematurely, it indicates a transport failure:

Sources:

graph TB
 
   FrameError["FrameDecodeError occurs"] --> CreateEvent["RpcStreamEvent::Error"]
CreateEvent --> RecvFn["recv_fn(Error event)"]
RecvFn --> WrapError["Wrap as io::Error::ConnectionAborted"]
WrapError --> NotifyReady["Send to ready_tx if pending"]
NotifyReady --> NotifyStream["Send to DynamicSender"]
NotifyStream --> Drop["Drop sender, close channel"]

RpcStreamEvent::Error Handling

When a FrameDecodeError occurs during stream processing, the session generates an RpcStreamEvent::Error:

Sources:


Critical Failure Modes

Certain failures are considered unrecoverable and result in immediate panic or cleanup.

graph TB
 
   LockAttempt["queue.lock()"] --> CheckResult{"Result?"}
CheckResult -->|Ok| ProcessEvent["Process RpcStreamEvent"]
CheckResult -->|Err poisoned| Panic["panic!()"]
Panic --> CrashMsg["'Request queue mutex poisoned'"]
CrashMsg --> Note["Note: Prevents data corruption\nand undefined behavior"]

Mutex Poisoning

The rpc_request_queue in RpcDispatcher is protected by a Mutex. If a thread panics while holding this lock, the mutex becomes “poisoned.” This is treated as a critical failure:

Mutex Poisoning Handling

The rationale for panicking on mutex poisoning is documented in src/rpc/rpc_dispatcher.rs:85-97:

If the lock is poisoned, it likely means another thread panicked while holding the mutex. The internal state of the request queue may now be inconsistent or partially mutated. Continuing execution could result in incorrect dispatch behavior, undefined state transitions, or silent data loss. This should be treated as a critical failure and escalated appropriately.

Sources:

graph LR
 
   ReadBytes["read_bytes()"] --> SessionRead["rpc_respondable_session.read_bytes()"]
SessionRead --> LockQueue["rpc_request_queue.lock()"]
LockQueue --> CheckLock{"lock()?"}
CheckLock -->|Ok| ReturnIds["Ok(active_request_ids)"]
CheckLock -->|Err poisoned| CorruptFrame["Err(FrameDecodeError::CorruptFrame)"]

FrameDecodeError as Critical Failure

When read_bytes() returns a FrameDecodeError, the dispatcher may also fail to lock the queue and return FrameDecodeError::CorruptFrame:

Sources:


graph TB
 
   ConnDrop["Connection Dropped"] --> FailAll["fail_all_pending_requests(error)"]
FailAll --> TakeHandlers["mem::take(response_handlers)"]
TakeHandlers --> Iterate["For each (request_id, handler)"]
Iterate --> CreateSynthetic["Create synthetic Error event"]
CreateSynthetic --> CallHandler["handler(error_event)"]
CallHandler --> WakesFuture["Wakes waiting Future/stream"]
WakesFuture --> Iterate
    
 
   Iterate --> Done["All handlers notified"]

fail_all_pending_requests Cleanup

When a connection drops, all pending RPC requests must be notified to prevent hanging futures. The fail_all_pending_requests() method performs this cleanup:

Cleanup Flow

The synthetic error event structure:

RpcStreamEvent::Error {
    rpc_header: None,
    rpc_request_id: Some(request_id),
    rpc_method_id: None,
    frame_decode_error: error.clone(),
}

Sources:

Handler Cleanup Guarantee

Taking ownership of the handlers (mem::take) ensures:

  1. The response_handlers map is immediately cleared
  2. No new events can be routed to removed handlers
  3. Each handler is called exactly once with the error
  4. Waiting futures/streams are unblocked promptly

Sources:


graph TB
    Prebuffering{"prebuffer_response?"}
Prebuffering -->|true| AccumulateMode["Accumulate mode"]
Prebuffering -->|false| StreamMode["Stream mode"]
AccumulateMode --> HeaderEvt["Header Event"]
HeaderEvt --> CallHandler["Call handler with Header"]
HeaderEvt --> PayloadEvt["PayloadChunk Events"]
PayloadEvt --> BufferBytes["buffer.extend_from_slice(bytes)"]
BufferBytes --> PayloadEvt
 
   PayloadEvt --> EndEvt["End Event"]
EndEvt --> SendAll["Send entire buffer at once"]
SendAll --> CallEndHandler["Call handler with End"]
StreamMode --> StreamHeader["Header Event"]
StreamHeader --> StreamPayload["PayloadChunk Events"]
StreamPayload --> CallHandlerImmediate["Call handler for each chunk"]
CallHandlerImmediate --> StreamPayload
 
   StreamPayload --> StreamEnd["End Event"]

Error Handling in Prebuffering

The RpcRespondableSession supports prebuffering mode where response payloads are accumulated before delivery. Error handling in this mode differs from streaming:

Prebuffering Error Accumulation

In prebuffering mode, if an error status is detected, the entire error payload is still buffered until the End event, then delivered as a single chunk.

Sources:


Standard Error Handling Patterns

Pattern 1: Immediate Rejection on Disconnect

Always check connection state before starting expensive operations:
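A minimal sketch of this pattern, using a local stand-in for RpcServiceError (only the Transport variant is modeled); in the real crates this check happens inside the caller's call methods:

```rust
use std::io;

// Stand-in for the real RpcServiceError; only the Transport variant is
// sketched here so the pattern compiles on its own.
#[derive(Debug)]
enum RpcServiceError {
    Transport(io::Error),
}

// Pattern 1: reject a call up front when the transport is already gone,
// instead of registering a handler that can never be resolved.
fn start_call(is_connected: bool) -> Result<(), RpcServiceError> {
    if !is_connected {
        return Err(RpcServiceError::Transport(io::Error::new(
            io::ErrorKind::ConnectionAborted,
            "cannot issue RPC call: client is disconnected",
        )));
    }
    // ... in the real caller, dispatcher.call() would proceed from here ...
    Ok(())
}

fn main() {
    assert!(start_call(false).is_err());
    assert!(start_call(true).is_ok());
}
```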

Sources:

Pattern 2: Error Conversion at Boundaries

Convert lower-level errors to RpcServiceError at API boundaries:
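A hedged sketch of the boundary conversion, again using local stand-ins rather than the crate's own error definitions:

```rust
use std::io;

// Stand-ins for the document's error types, just enough to show the conversion.
#[derive(Debug)]
struct FrameDecodeError(String);

#[derive(Debug)]
enum RpcServiceError {
    Transport(io::Error),
}

// Pattern 2: wrap a low-level framing error into io::Error, then surface it
// as RpcServiceError::Transport at the caller-facing API boundary.
impl From<FrameDecodeError> for RpcServiceError {
    fn from(err: FrameDecodeError) -> Self {
        RpcServiceError::Transport(io::Error::new(
            io::ErrorKind::ConnectionAborted,
            format!("frame decode failed: {}", err.0),
        ))
    }
}

fn main() {
    let rpc_err: RpcServiceError = FrameDecodeError("corrupt header".into()).into();
    println!("{rpc_err:?}");
}
```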

Sources:

Pattern 3: Synchronous Error Handling in Callbacks

The recv_fn closure is synchronous and uses StdMutex to avoid async context issues:
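An illustrative sketch of a synchronous callback guarding shared state with std::sync::Mutex; the event type is simplified to a Result, whereas the real closure receives RpcStreamEvent values:

```rust
use std::sync::{Arc, Mutex as StdMutex};

// Pattern 3 sketch: the receive callback runs synchronously, so shared state
// is guarded with std::sync::Mutex instead of an async-aware lock.
fn make_recv_fn() -> impl FnMut(Result<Vec<u8>, String>) {
    let last_error: Arc<StdMutex<Option<String>>> = Arc::new(StdMutex::new(None));
    move |event: Result<Vec<u8>, String>| {
        if let Err(e) = event {
            // Locking never awaits, so this is safe from a non-async callback.
            *last_error.lock().expect("recv_fn state mutex poisoned") = Some(e);
        }
    }
}

fn main() {
    let mut recv_fn = make_recv_fn();
    recv_fn(Ok(vec![1, 2, 3]));
    recv_fn(Err("simulated transport failure".to_string()));
}
```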

Sources:

Pattern 4: Tracing for Error Diagnosis

All error paths include structured logging using tracing:
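A brief sketch of what such a log site might look like with the tracing macros; the field names are illustrative, not the exact ones used in muxio:

```rust
// Structured logging on an error path; requires the `tracing` crate.
fn log_transport_failure(rpc_request_id: u32, err: &std::io::Error) {
    tracing::warn!(
        rpc_request_id,
        error = %err,
        "transport failure while awaiting RPC response"
    );
}
```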

Sources:


Summary Table of Error Types and Handling

| Error Type | Layer | Handling Strategy | Recoverable? |
| --- | --- | --- | --- |
| RpcServiceError::Rpc | Application | Return to caller | Yes |
| RpcServiceError::Transport | Transport | Return to caller, cleanup handlers | No (requires reconnect) |
| FrameDecodeError | Framing | Wrapped in io::Error, propagated up | No |
| FrameEncodeError | Framing | Wrapped in io::Error, propagated up | No |
| Mutex poisoning | Internal | panic!() | No |
| Connection closed | Transport | fail_all_pending_requests() | No (requires reconnect) |
| Method not found | RPC Protocol | RpcServiceErrorCode::NotFound | Yes |
| Handler failure | Application | RpcServiceErrorCode::Fail or System | Yes |

Sources:



RPC Service Errors

Relevant source files

Purpose and Scope

This page documents the RPC-level error types, error codes, and error propagation mechanisms in the muxio RPC framework. It covers how service method failures are represented, encoded in the binary protocol, and delivered to callers. For information about lower-level transport and framing errors (such as connection failures and frame decode errors), see Transport and Framing Errors.


Error Type Hierarchy

The muxio RPC framework defines a structured error system that distinguishes between RPC-level errors (method failures, not found errors) and transport-level errors (connection issues, protocol violations).

RpcServiceError Enum

The RpcServiceError is the primary error type returned by RPC calls. It has two variants:

| Variant | Description | Example Use Cases |
| --- | --- | --- |
| Rpc(RpcServiceErrorPayload) | An error originating from the remote service method execution | Method not found, business logic failure, panics in handler |
| Transport(io::Error) | An error in the underlying transport or protocol layer | Connection dropped, timeout, frame decode failure |

Sources:

RpcServiceErrorPayload Structure

When an RPC method fails on the server side, the error details are transmitted using RpcServiceErrorPayload:

| Field | Type | Description |
| --- | --- | --- |
| code | RpcServiceErrorCode | Categorizes the error (NotFound, Fail, System) |
| message | String | Human-readable error message from the service handler |

Sources:

RpcServiceErrorCode Enum

The RpcServiceErrorCode categorizes RPC errors into three types:

| Code | Meaning | When Used |
| --- | --- | --- |
| NotFound | The requested method ID does not exist on the server | Method dispatch fails due to unregistered handler |
| Fail | The method executed but returned a business logic error | Handler returns Err in its result type |
| System | A system-level error occurred during method execution | Panics, internal errors, resource exhaustion |

Sources:


Error Propagation Flow

The following diagram illustrates how errors flow from the server-side handler through the protocol layers to the client:

Sources:

sequenceDiagram
    participant Handler as "Service Handler"
    participant Endpoint as "RpcServiceEndpoint"
    participant ServerDisp as "Server RpcDispatcher"
    participant Protocol as "Binary Protocol"
    participant ClientDisp as "Client RpcDispatcher"
    participant RecvFn as "recv_fn Callback"
    participant Channel as "DynamicSender"
    participant Caller as "RPC Caller"

    Note over Handler,Caller: Error Case: Handler Fails
    Handler->>Endpoint: Return Err(error)
    Endpoint->>ServerDisp: respond() with RpcResultStatus
    ServerDisp->>Protocol: Encode status in metadata_bytes
    Protocol->>Protocol: Transmit error message as payload
    
    Protocol->>ClientDisp: RpcStreamEvent::Header
    ClientDisp->>RecvFn: Event with RpcResultStatus
    RecvFn->>RecvFn: Extract status from metadata_bytes[0]
    RecvFn->>RecvFn: Buffer error payload
    
    Protocol->>ClientDisp: RpcStreamEvent::PayloadChunk
    ClientDisp->>RecvFn: Error message bytes
    RecvFn->>RecvFn: Accumulate in error_buffer
    
    Protocol->>ClientDisp: RpcStreamEvent::End
    ClientDisp->>RecvFn: Stream complete
    RecvFn->>RecvFn: Convert RpcResultStatus to RpcServiceError
    RecvFn->>Channel: Send Err(RpcServiceError::Rpc)
    Channel->>Caller: Receive error from stream

Error Encoding in RPC Protocol

RpcResultStatus Enum

The RpcResultStatus is encoded in the first byte of the rpc_metadata_bytes field in the RpcHeader. This status indicates whether the RPC call succeeded or failed, and if it failed, what category of failure occurred.

| Status Value | Description | Maps to RpcServiceErrorCode |
| --- | --- | --- |
| Success | Method executed successfully | N/A (no error) |
| MethodNotFound | Handler not registered for method ID | NotFound |
| Fail | Handler returned error | Fail |
| SystemError | Handler panicked or system error | System |
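As an illustration of the mapping in this table, here is a small self-contained sketch with locally defined enums; wire byte values other than Success = 0x00 are intentionally not assumed:

```rust
// Local illustrative enums; the real definitions live in the muxio crates.
#[derive(Debug, Clone, Copy, PartialEq)]
enum RpcResultStatus {
    Success,
    MethodNotFound,
    Fail,
    SystemError,
}

#[derive(Debug, Clone, Copy, PartialEq)]
enum RpcServiceErrorCode {
    NotFound,
    Fail,
    System,
}

// Success produces no error code; every failure status maps to exactly one code.
fn to_error_code(status: RpcResultStatus) -> Option<RpcServiceErrorCode> {
    match status {
        RpcResultStatus::Success => None,
        RpcResultStatus::MethodNotFound => Some(RpcServiceErrorCode::NotFound),
        RpcResultStatus::Fail => Some(RpcServiceErrorCode::Fail),
        RpcResultStatus::SystemError => Some(RpcServiceErrorCode::System),
    }
}

fn main() {
    assert_eq!(to_error_code(RpcResultStatus::Success), None);
    assert_eq!(
        to_error_code(RpcResultStatus::Fail),
        Some(RpcServiceErrorCode::Fail)
    );
}
```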

Sources:

Status Extraction and Error Construction

The recv_fn callback extracts the status from the response header and buffers any error payload:

Sources:

graph TB
    Header["RpcStreamEvent::Header\nrpc_metadata_bytes[0]"]
Extract["Extract RpcResultStatus\nvia try_from(byte)"]
Store["Store status in\nStdMutex&lt;Option&lt;RpcResultStatus&gt;&gt;"]
PayloadChunk["RpcStreamEvent::PayloadChunk\nbytes"]
CheckStatus{"Status is\nSuccess?"}
SendSuccess["Send Ok(bytes)\nto DynamicSender"]
BufferError["Accumulate in\nerror_buffer"]
End["RpcStreamEvent::End"]
MatchStatus{"Match\nfinal status"}
CreateNotFound["RpcServiceError::Rpc\ncode: NotFound"]
CreateFail["RpcServiceError::Rpc\ncode: Fail"]
CreateSystem["RpcServiceError::Rpc\ncode: System\nmessage: from buffer"]
SendError["Send Err(error)\nto DynamicSender"]
Header --> Extract
 
   Extract --> Store
    
 
   PayloadChunk --> CheckStatus
 
   CheckStatus -->|Yes| SendSuccess
 
   CheckStatus -->|No| BufferError
    
 
   End --> MatchStatus
 
   MatchStatus -->|MethodNotFound| CreateNotFound
 
   MatchStatus -->|Fail| CreateFail
 
   MatchStatus -->|SystemError| CreateSystem
 
   CreateNotFound --> SendError
 
   CreateFail --> SendError
 
   CreateSystem --> SendError

Error Handling in Caller Interface

The RpcServiceCallerInterface trait defines error handling at two levels: during call setup and during response processing.

Connection State Validation

Before initiating an RPC call, the caller checks the connection state:

Sources:

Response Stream Error Handling

The recv_fn callback processes three types of events that can result in errors:

graph TD
    ErrorEvent["RpcStreamEvent::Error\nframe_decode_error"]
CreateTransportErr["Create RpcServiceError::Transport\nio::Error::ConnectionAborted\nmessage: frame_decode_error.to_string()"]
NotifyReady["Send Err to ready_rx\nif not yet signaled"]
SendToChannel["Send Err to DynamicSender\nif still available"]
DropSender["Drop DynamicSender\nclose channel"]
ErrorEvent --> CreateTransportErr
 
   CreateTransportErr --> NotifyReady
 
   NotifyReady --> SendToChannel
 
   SendToChannel --> DropSender

Error Event from Transport

When a RpcStreamEvent::Error is received (indicating a frame decode error or transport failure):

Sources:

RPC-Level Error from End Event

When a RpcStreamEvent::End is received with a non-success status:

Sources:


graph TD
    StartBuffered["call_rpc_buffered()"]
CallStreaming["call_rpc_streaming()\nDynamicChannelType::Unbounded"]
InitBuffers["success_buf = Vec::new()\nerr = None"]
LoopStart{"stream.next().await"}
MatchResult{"Match result"}
OkChunk["Ok(chunk)"]
ErrValue["Err(e)"]
ExtendBuf["success_buf.extend(chunk)"]
StoreErr["err = Some(e)\nbreak loop"]
CheckErr{"err.is_some()?"}
ReturnErr["Return Ok(encoder, Err(err))"]
Decode["decode(&success_buf)"]
ReturnOk["Return Ok(encoder, Ok(decoded))"]
StartBuffered --> CallStreaming
 
   CallStreaming --> InitBuffers
 
   InitBuffers --> LoopStart
    
 
   LoopStart -->|Some result| MatchResult
 
   LoopStart -->|None| CheckErr
    
 
   MatchResult --> OkChunk
 
   MatchResult --> ErrValue
    
 
   OkChunk --> ExtendBuf
 
   ExtendBuf --> LoopStart
    
 
   ErrValue --> StoreErr
 
   StoreErr --> CheckErr
    
 
   CheckErr -->|Yes| ReturnErr
 
   CheckErr -->|No| Decode
 
   Decode --> ReturnOk

Buffered Call Error Aggregation

The call_rpc_buffered method consumes a streaming response and returns either the complete success payload or the first error encountered:
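The aggregation loop can be sketched as follows, with the stream item type simplified to Result<Vec<u8>, String> and the final bitcode decode step elided; this mirrors the diagram above rather than reproducing the crate's code:

```rust
use futures::{Stream, StreamExt};

// Accumulate success chunks until the stream ends, or stop at the first error.
// The caller would then decode `success_buf` into the typed response.
async fn aggregate_buffered<S>(mut stream: S) -> Result<Vec<u8>, String>
where
    S: Stream<Item = Result<Vec<u8>, String>> + Unpin,
{
    let mut success_buf = Vec::new();
    while let Some(result) = stream.next().await {
        match result {
            Ok(chunk) => success_buf.extend(chunk),
            Err(e) => return Err(e),
        }
    }
    Ok(success_buf)
}

#[tokio::main]
async fn main() {
    let chunks = futures::stream::iter(vec![Ok(vec![1u8, 2]), Ok(vec![3u8])]);
    assert_eq!(aggregate_buffered(chunks).await, Ok(vec![1, 2, 3]));
}
```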

Sources:


Test Examples

Handling MethodNotFound Error

Test demonstrating a method not found scenario:

Pattern:

  1. Mock sender emits Err(RpcServiceError::Rpc) with code: NotFound
  2. Caller receives the error through the buffered call
  3. Error code and message are preserved

Sources:

Handling System Error

Test demonstrating a system-level error (e.g., panic in handler):

Pattern:

  1. Mock sender emits Err(RpcServiceError::Rpc) with code: System
  2. Error message includes panic details
  3. Prebuffered trait call propagates the error to caller

Sources:

Error Variant Matching

When handling RpcServiceError in application code, match on the variants:

match result {
    Ok(value) => { /* handle success */ },
    Err(RpcServiceError::Rpc(payload)) => {
        match payload.code {
            RpcServiceErrorCode::NotFound => { /* method not registered */ },
            RpcServiceErrorCode::Fail => { /* business logic error */ },
            RpcServiceErrorCode::System => { /* handler panic or internal error */ },
        }
    },
    Err(RpcServiceError::Transport(io_err)) => { /* connection or protocol error */ },
}

Sources:


Summary Table

| Error Origin | Status/Variant | RpcServiceErrorCode | Description |
| --- | --- | --- | --- |
| Handler not registered | MethodNotFound | NotFound | The requested method ID has no registered handler on the server |
| Handler returns Err | Fail | Fail | The handler executed but returned a business logic error |
| Handler panics | SystemError | System | The handler panicked or encountered an internal system error |
| Connection dropped | N/A | N/A | Wrapped in RpcServiceError::Transport(io::Error) |
| Frame decode failure | N/A | N/A | Wrapped in RpcServiceError::Transport(io::Error) |
| Disconnected call attempt | N/A | N/A | io::ErrorKind::ConnectionAborted in Transport variant |

Sources:



Transport and Framing Errors

Relevant source files

This page documents low-level transport and framing errors in the muxio system. These errors occur at the binary protocol layer, connection management layer, and dispatcher coordination layer. For application-level RPC service errors (method not found, invalid parameters, handler exceptions), see RPC Service Errors.

Scope : This page covers FrameDecodeError, FrameEncodeError, connection failures, transport state transitions, dispatcher mutex poisoning, and cleanup mechanisms when connections drop unexpectedly.


Overview of Transport and Framing Error Types

The muxio system defines errors at multiple layers of the transport stack. Each layer reports failures using specific error types that propagate upward through the system.

graph TB
    subgraph "Low-Level Frame Errors"
        FDE["FrameDecodeError"]
FEE["FrameEncodeError"]
end
    
    subgraph "Connection Errors"
        CE["io::Error\nConnectionRefused\nConnectionReset"]
TE["Transport Errors\nWebSocket failures\nNetwork timeouts"]
end
    
    subgraph "Dispatcher Coordination Errors"
        MP["Mutex Poisoning\nPoisonError"]
PC["Pending Call Failures\nReadAfterCancel"]
end
    
    subgraph "RPC Layer Errors"
        RSE["RpcServiceError\nTransport variant"]
end
    
 
   FDE -->|wrapped in| RSE
 
   CE -->|converted to| RSE
 
   TE -->|triggers| PC
 
   MP -->|panics| PANIC["System Panic"]
PC -->|error events to| RSE
    
    style PANIC fill:#ffcccc

Error Type Hierarchy

Sources :


Framing Protocol Errors

FrameDecodeError

FrameDecodeError represents failures when parsing incoming binary frames. These errors occur in the RpcSession decoder when frame headers are malformed, stream state is inconsistent, or data is corrupted.

| Variant | Description | When It Occurs |
| --- | --- | --- |
| CorruptFrame | Frame header is invalid or stream state is inconsistent | Malformed binary data, protocol violation |
| ReadAfterCancel | Attempt to read from a cancelled stream | Connection dropped mid-stream, explicit cancellation |
| UnexpectedEnd | Stream ended prematurely without End frame | Transport closed unexpectedly |
| Other variants | (Implementation-specific) | Various protocol violations |

Sources :

FrameEncodeError

FrameEncodeError represents failures when encoding outbound frames. These are less common than decode errors since encoding is deterministic, but can occur when stream state is invalid or resources are exhausted.

| Variant | Description | When It Occurs |
| --- | --- | --- |
| CorruptFrame | Internal state inconsistency | Invalid encoder state, logic error |
| Other variants | (Implementation-specific) | Resource exhaustion, invalid input |

Sources :


Connection and Transport Errors

Connection Establishment Failures

When a client attempts to connect to a non-existent or unreachable server, the connection fails immediately with an io::Error of kind ConnectionRefused.

Connection Flow with Error :

Sources :

Connection Drop During Operation

When a connection drops while RPC calls are in flight, the client must:

  1. Detect the disconnection
  2. Fail all pending requests
  3. Notify state change handlers
  4. Prevent new requests from being sent

Disconnect Detection and Handling :

Sources :


Dispatcher Error Handling

Mutex Poisoning

The RpcDispatcher uses a Mutex<VecDeque<(u32, RpcRequest)>> to track pending inbound responses. If a thread panics while holding this mutex, the mutex becomes “poisoned” and all subsequent lock attempts return Err(PoisonError).

Design Decision : The dispatcher treats mutex poisoning as a critical, unrecoverable error and panics immediately rather than attempting recovery.

Rationale (from src/rpc/rpc_dispatcher.rs:85-97):

If the lock is poisoned, it likely means another thread panicked while holding the mutex. The internal state of the request queue may now be inconsistent or partially mutated. Continuing execution could result in incorrect dispatch behavior, undefined state transitions, or silent data loss. This should be treated as a critical failure and escalated appropriately.

Mutex Poisoning Detection :

Poisoning Sites :

| Location | Purpose | Behavior on Poisoned Lock |
| --- | --- | --- |
| src/rpc/rpc_dispatcher.rs:104-118 | init_catch_all_response_handler | Panics if queue lock is poisoned |
| src/rpc/rpc_dispatcher.rs:367-370 | read_bytes queue access | Returns FrameDecodeError::CorruptFrame |

Sources :

Failing Pending Requests on Disconnect

When a transport connection drops, the RpcDispatcher::fail_all_pending_requests() method ensures that all in-flight RPC calls are notified of the failure. This prevents deadlocks where application code waits indefinitely for responses that will never arrive.
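A simplified sketch of this cleanup, using std::mem::take to drain a handler map; the handler and event types are local stand-ins for the crate's own:

```rust
use std::collections::HashMap;
use std::mem;

// Stand-in for the real response handler type, which receives RpcStreamEvent values.
type ResponseHandler = Box<dyn FnMut(String) + Send>;

fn fail_all_pending_requests(
    response_handlers: &mut HashMap<u32, ResponseHandler>,
    error: &str,
) {
    // Taking ownership empties the map immediately, so no new events can be
    // routed to these handlers while they are being notified.
    for (request_id, mut handler) in mem::take(response_handlers) {
        // Each handler is invoked exactly once with a synthetic error event,
        // waking whatever future or stream is waiting on it.
        handler(format!("request {request_id} failed: {error}"));
    }
}

fn main() {
    let mut handlers: HashMap<u32, ResponseHandler> = HashMap::new();
    handlers.insert(7, Box::new(|msg: String| eprintln!("{msg}")));
    fail_all_pending_requests(&mut handlers, "connection dropped");
    assert!(handlers.is_empty());
}
```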

Cleanup Sequence :

Implementation Details (src/rpc/rpc_dispatcher.rs:422-456):

  1. Take Ownership : std::mem::take(&mut self.rpc_respondable_session.response_handlers) moves all handlers out of the map, leaving it empty
  2. Synthetic Error : Creates RpcStreamEvent::Error { frame_decode_error: error, ... } for each handler
  3. Invoke Handlers : Calls each handler with the error event, waking any awaiting futures
  4. Memory Safety : Handlers are dropped after invocation, preventing leaks

Sources :


graph TB
    subgraph "Transport Layer"
        T1["TCP/IP Error\nio::Error"]
T2["WebSocket Error\ntungstenite::Error"]
end
    
    subgraph "Framing Layer"
        F1["FrameDecodeError\nRpcSession::read_bytes()"]
F2["FrameEncodeError\nRpcStreamEncoder"]
end
    
    subgraph "RPC Protocol Layer"
        R1["RpcDispatcher::read_bytes()\nReturns Vec<u32> or Error"]
R2["RpcDispatcher::call()\nReturns Encoder or Error"]
end
    
    subgraph "RPC Service Layer"
        S1["RpcServiceError::Transport\nWrapped frame errors"]
S2["RpcServiceCallerInterface::call_rpc_prebuffered()\nReturns Result"]
end
    
    subgraph "Application Layer"
        A1["Application Code\nReceives Result<T, RpcServiceError>"]
end
    
 
   T1 -->|Connection drops| T2
 
   T2 -->|Stream error in ws_receiver.next| F1
 
   F1 -->|propagated via read_bytes| R1
 
   R1 -->|FrameDecodeError| S1
    
 
   F2 -->|Encode failure| R2
 
   R2 -->|FrameEncodeError| S1
    
 
   S1 --> S2
 
   S2 --> A1
    
    style T1 fill:#ffe6e6
    style F1 fill:#fff0e6
    style S1 fill:#e6f7ff

Error Propagation Through Layers

Errors flow upward through the muxio layer stack, with each layer translating or wrapping errors as appropriate for its abstraction level.

Error Flow Diagram

Sources :


Error Handling in Stream Processing

Per-Stream Error Events

When a stream encounters a decode error, the RpcSession emits an RpcStreamEvent::Error event to the registered handler. This allows stream-specific error handling without affecting other concurrent streams.

Error Event Structure :

Handler Processing (src/rpc/rpc_dispatcher.rs:187-206):

Sources :

Catch-All Response Handler

The dispatcher installs a catch-all handler to process incoming response events that don’t have a specific registered handler. This handler is responsible for error logging and queue management.

Handler Registration (src/rpc/rpc_dispatcher.rs:98-209):

Sources :


stateDiagram-v2
    [*] --> Connecting: RpcClient::new() called
    Connecting --> Connected : WebSocket handshake complete
    Connecting --> [*]: Connection error\n(io::Error returned)
    
    Connected --> Disconnecting : WebSocket error detected
    Connected --> Disconnecting : shutdown_async() called
    
    Disconnecting --> Disconnected : is_connected.swap(false)
    Disconnected --> HandlerNotified : Call state_change_handler
    HandlerNotified --> RequestsFailed : fail_all_pending_requests()
    RequestsFailed --> [*] : Cleanup complete
    
    note right of Connected
        Heartbeat pings sent
        RPC calls processed
    end note
    
    note right of RequestsFailed
        All pending RPC calls
        resolved with errors
    end note

Connection State Management

The RpcClient tracks connection state using an AtomicBool (is_connected) and notifies application code via state change handlers.

State Transition Diagram

State Change Handler Contract :

| State | When Called | Guarantees |
| --- | --- | --- |
| RpcTransportState::Connected | Immediately after set_state_change_handler() if connected | Client is ready for RPC calls |
| RpcTransportState::Disconnected | On connection drop, explicit shutdown, or client Drop | All pending requests have been failed |

Sources :


Recovery and Cleanup Strategies

Automatic Cleanup on Drop

The RpcClient implements Drop to ensure graceful shutdown when the client is destroyed:

Drop Implementation (extensions/muxio-tokio-rpc-client/src/rpc_client.rs:42-52):

Limitations : The synchronous Drop trait cannot await async operations, so fail_all_pending_requests() is only called when shutdown_async() is explicitly invoked by background tasks detecting errors.

Sources :

Manual Disconnect Handling

Applications can register state change handlers to implement custom cleanup logic:
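A hedged sketch of such a handler; RpcTransportState is redefined locally so the snippet stands alone, and the exact closure signature accepted by set_state_change_handler() is not assumed here:

```rust
// Local stand-in for the client crate's transport state enum.
enum RpcTransportState {
    Connected,
    Disconnected,
}

// Handler body to register via set_state_change_handler(); keep it fast and
// non-blocking, since it runs on the disconnect-detection path.
fn on_transport_state_change(state: RpcTransportState) {
    match state {
        RpcTransportState::Connected => {
            println!("transport connected; RPC calls may proceed");
        }
        RpcTransportState::Disconnected => {
            // All in-flight calls have already been failed by
            // fail_all_pending_requests(); schedule reconnection or other
            // async cleanup from a spawned task rather than doing it here.
            println!("transport disconnected; scheduling cleanup");
        }
    }
}

fn main() {
    on_transport_state_change(RpcTransportState::Disconnected);
}
```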

Best Practices :

  1. Always handle Disconnected : Assume all in-flight RPC calls have failed
  2. Avoid blocking operations : Handler is called synchronously from disconnect detection path
  3. Use channels for async work : Spawn tasks rather than awaiting in the handler

Sources :


Error Handling Patterns in Tests

Testing Connection Failures

The test suite validates error handling through various scenarios:

Connection Refusal Test (extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:15-31):

  • Attempts connection to unused port
  • Verifies io::Error with ErrorKind::ConnectionRefused
  • Confirms no panic or hang

Disconnect During Operation Test (extensions/muxio-tokio-rpc-client/tests/transport_state_tests.rs:167-292):

  • Establishes connection
  • Spawns RPC call
  • Closes server connection
  • Verifies pending call fails with error containing “cancelled stream” or “Transport error”
sequenceDiagram
    participant Test as Test Code
    participant Server as Mock Server
    participant Client as RpcClient
    participant RPC as Pending RPC Call
    
    Test->>Server: Start listener
    Test->>Client: RpcClient::new()
    Test->>RPC: Spawn Echo::call() in background
    Test->>Test: Sleep to let RPC become pending
    
    Note over RPC: RPC call waiting in\ndispatcher.response_handlers
    
    Test->>Server: Signal to close connection
    Server->>Client: Close WebSocket
    Client->>Client: Detect error in ws_receiver
    Client->>Client: shutdown_async()
    Client->>Client: fail_all_pending_requests()
    Client->>RPC: Emit Error event
    RPC-->>Test: Return Err(RpcServiceError)
    
    Test->>Test: Assert error contains\n"cancelled stream"

Test Pattern :

Sources :


Summary Table: Error Types and Handling

| Error Type | Layer | Cause | Handling Strategy |
| --- | --- | --- | --- |
| FrameDecodeError::CorruptFrame | Framing | Malformed binary data | Log error, drop stream |
| FrameDecodeError::ReadAfterCancel | Framing | Stream cancelled | Propagate to RPC layer |
| FrameEncodeError | Framing | Encoder state error | Return error to caller |
| io::Error::ConnectionRefused | Transport | Server not reachable | Return from new() |
| tungstenite::Error | Transport | WebSocket failure | Trigger shutdown_async() |
| PoisonError<T> | Dispatcher | Thread panic with lock | PANIC (critical failure) |
| RpcServiceError::Transport | RPC Service | Wrapped lower-level error | Return to application |

Sources :



Testing

Relevant source files

Purpose and Scope

This document provides an overview of testing strategies and patterns used in the rust-muxio codebase. It covers the testing philosophy, test organization, common testing patterns, and available testing utilities. For detailed information about specific testing approaches, see Unit Testing and Integration Testing.

The rust-muxio system emphasizes compile-time correctness through shared type definitions and trait-based abstractions. This design philosophy directly influences the testing strategy: many potential bugs are prevented by the type system, allowing tests to focus on runtime behavior, protocol correctness, and cross-platform compatibility.

Testing Philosophy

The rust-muxio testing approach is built on three core principles:

Compile-Time Guarantees Reduce Runtime Test Burden : By using shared service definitions from example-muxio-rpc-service-definition, both clients and servers depend on the same RpcMethodPrebuffered trait implementations. The METHOD_ID constants are generated at compile time via xxhash, ensuring parameter encoding/decoding, method identification, and data structures remain consistent. Tests focus on runtime behavior rather than type mismatches—the compiler prevents protocol incompatibilities.

Layered Testing Mirrors Layered Architecture : The system’s modular design (core muxio → muxio-rpc-service → transport extensions) enables focused testing at each layer. Unit tests in tests/rpc_dispatcher_tests.rs verify RpcDispatcher::call() and RpcDispatcher::respond() behavior without async dependencies, while integration tests in extension crates validate the complete stack including tokio-tungstenite WebSocket transports.

Cross-Platform Validation Is Essential : Because the same RpcMethodPrebuffered traits work across muxio-tokio-rpc-client, muxio-wasm-rpc-client, and muxio-tokio-rpc-server, tests verify that all client types communicate correctly. This is achieved through parallel integration test suites that use identical Add, Mult, and Echo service methods against different client implementations.

Sources : extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:1-97 tests/rpc_dispatcher_tests.rs:1-30

Test Organization in the Workspace

Test Location Strategy

Tests are organized by scope and purpose:

| Test Type | Location | Purpose | Key Functions Tested |
| --- | --- | --- | --- |
| Core Unit Tests | tests/rpc_dispatcher_tests.rs | Validate RpcDispatcher::call(), RpcDispatcher::respond(), RpcDispatcher::read_bytes() without async runtime | rpc_dispatcher_call_and_echo_response() |
| Tokio Integration Tests | extensions/muxio-tokio-rpc-client/tests/ | Validate RpcClient ↔ RpcServer communication over tokio-tungstenite | test_success_client_server_roundtrip(), test_error_client_server_roundtrip() |
| WASM Integration Tests | extensions/muxio-wasm-rpc-client/tests/ | Validate RpcWasmClient ↔ RpcServer with WebSocket bridge | test_success_client_server_roundtrip(), test_large_prebuffered_payload_roundtrip_wasm() |
| Test Service Definitions | example-muxio-rpc-service-definition/src/prebuffered.rs | Shared RpcMethodPrebuffered implementations | Add::METHOD_ID, Mult::METHOD_ID, Echo::METHOD_ID |

This organization ensures that:

  • Core library tests have no async runtime dependencies
  • Extension tests can use their specific runtime environments
  • Test service definitions are reusable across all client types
  • Integration tests exercise the complete, realistic code paths

Sources : extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:1-18 tests/rpc_dispatcher_tests.rs:1-7

Integration Test Architecture

Integration tests create realistic client-server scenarios to validate end-to-end behavior. The following diagram illustrates the typical test setup:

Key Components

sequenceDiagram
    participant Test as "test_success_client_server_roundtrip()"
    participant Listener as "TcpListener"
    participant Server as "Arc&lt;RpcServer&gt;"
    participant Endpoint as "RpcServiceEndpointInterface"
    participant Client as "RpcClient"
    participant Add as "Add::call()"
    
    Test->>Listener: TcpListener::bind("127.0.0.1:0")
    Test->>Server: RpcServer::new(None)
    Test->>Server: server.endpoint()
    Server-->>Endpoint: endpoint reference
    
    Test->>Endpoint: register_prebuffered(Add::METHOD_ID, handler)
    Note over Endpoint: Handler: |request_bytes, _ctx| async move
    Test->>Endpoint: register_prebuffered(Mult::METHOD_ID, handler)
    Test->>Endpoint: register_prebuffered(Echo::METHOD_ID, handler)
    
    Test->>Test: tokio::spawn(server.serve_with_listener(listener))
    
    Test->>Client: RpcClient::new(host, port).await
    
    Test->>Add: Add::call(client.as_ref(), vec![1.0, 2.0, 3.0])
    Add->>Add: Add::encode_request(input)
    Add->>Client: call_rpc_buffered(RpcRequest)
    Client->>Server: WebSocket binary frames
    Server->>Endpoint: Dispatch by Add::METHOD_ID
    Endpoint->>Endpoint: Execute registered handler
    Endpoint->>Client: RpcResponse frames
    Client->>Add: Buffered response bytes
    Add->>Add: Add::decode_response(&bytes)
    Add-->>Test: Ok(6.0)
    
    Test->>Test: assert_eq!(res1.unwrap(), 6.0)

Random Port Binding : Tests use TcpListener::bind("127.0.0.1:0").await to obtain a random available port, preventing conflicts when running multiple tests in parallel extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:21-23

Arc-Wrapped Server : The RpcServer instance is wrapped in Arc::new(RpcServer::new(None)) to enable cloning into spawned tasks while maintaining shared state extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs28

Separate Endpoint Registration : Handlers are registered using endpoint.register_prebuffered(Add::METHOD_ID, handler).await, not directly on the server. The endpoint is obtained via server.endpoint(). This separation allows handler registration to complete before server.serve_with_listener() begins accepting connections extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:31-61

Background Server Task : The server runs via tokio::spawn(server.serve_with_listener(listener)), allowing the test to proceed with client operations on the main test task extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:64-70

Shared Service Definitions : Both client and server invoke the same Add::call(), Mult::call(), and Echo::call() methods, which internally use Add::encode_request(), Add::decode_response(), etc., ensuring type-safe, consistent serialization via bitcode extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs1

Sources : extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:16-97 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-20

Common Test Patterns

Success Case Testing

The most fundamental test pattern validates that RPC calls complete successfully with correct results. The pattern uses tokio::join! to execute multiple concurrent calls, verifying both concurrency handling and result correctness:
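A hedged sketch of the shape of such a test, assuming the Add, Mult, and Echo definitions from example-muxio-rpc-service-definition and a connected client: Arc<RpcClient>; the server setup shown in the sequence diagram above is elided, and the Mult/Echo inputs here are illustrative rather than copied from the cited test:

```rust
// Not a verbatim copy of the cited test; a sketch of its structure.
#[tokio::test]
async fn concurrent_calls_return_expected_results() {
    // ... bind a TcpListener, register handlers, spawn the server, and
    //     connect an RpcClient as described above ...

    let (res1, res2, res3) = tokio::join!(
        Add::call(client.as_ref(), vec![1.0, 2.0, 3.0]),
        Mult::call(client.as_ref(), vec![2.0, 3.0, 4.0]),
        Echo::call(client.as_ref(), b"hello".to_vec()),
    );

    assert_eq!(res1.unwrap(), 6.0);  // 1 + 2 + 3
    assert_eq!(res2.unwrap(), 24.0); // 2 * 3 * 4
    assert_eq!(res3.unwrap(), b"hello".to_vec());
}
```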

Each call internally invokes RpcServiceCallerInterface::call_rpc_buffered() with an RpcRequest containing the appropriate METHOD_ID and bitcode-encoded parameters extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:81-96

Error Propagation Testing

Tests verify that server-side errors are correctly propagated to clients with appropriate RpcServiceErrorCode values:

The server encodes the error into the RpcResponse.rpc_result_status field, which the client’s RpcDispatcher::read_bytes() method decodes back into a structured RpcServiceError extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:99-152

Large Payload Testing

Tests ensure that payloads exceeding DEFAULT_SERVICE_MAX_CHUNK_SIZE are correctly chunked by RpcDispatcher and reassembled:

The request is automatically chunked in RpcRequest.rpc_prebuffered_payload_bytes via RpcDispatcher::call(), transmitted as multiple frames, buffered by RpcStreamDecoder, and reassembled before the handler executes. The response follows the same chunking path via RpcDispatcher::respond() extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:154-203 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:230-312

Method Not Found Testing

Tests verify that calling unregistered methods returns the correct error code:

This ensures the server correctly identifies missing handlers extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:205-240

Sources : extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:81-240 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:126-142

WASM Client Testing with WebSocket Bridge

Testing the WASM client requires special handling because it is runtime-agnostic and designed for browser environments. Integration tests use a WebSocket bridge to connect the WASM client to a real Tokio server:

Bridge Implementation Details

graph TB
    subgraph "Test Environment"
        TEST["test_success_client_server_roundtrip()"]
end
    
    subgraph "Server Side"
        SERVER["Arc&lt;RpcServer&gt;"]
LISTENER["TcpListener::bind()"]
HANDLERS["endpoint.register_prebuffered()"]
end
    
    subgraph "Bridge Infrastructure"
        WS_CONN["connect_async(server_url)"]
TO_BRIDGE["tokio_mpsc::unbounded_channel()"]
WS_SENDER["ws_sender.send(WsMessage::Binary)"]
WS_RECEIVER["ws_receiver.next()"]
BRIDGE_TX["tokio::spawn(bridge_tx_task)"]
BRIDGE_RX["tokio::spawn(bridge_rx_task)"]
end
    
    subgraph "WASM Client Side"
        WASM_CLIENT["Arc&lt;RpcWasmClient&gt;::new()"]
DISPATCHER["client.get_dispatcher()"]
OUTPUT_CB["Output Callback:\nto_bridge_tx.send(bytes)"]
READ_BYTES["dispatcher.blocking_lock().read_bytes()"]
end
    
 
   TEST --> SERVER
 
   TEST --> LISTENER
 
   SERVER --> HANDLERS
    
 
   TEST --> WASM_CLIENT
 
   TEST --> TO_BRIDGE
 
   WASM_CLIENT --> OUTPUT_CB
 
   OUTPUT_CB --> TO_BRIDGE
    
 
   TO_BRIDGE --> BRIDGE_TX
 
   BRIDGE_TX --> WS_SENDER
 
   WS_SENDER --> WS_CONN
 
   WS_CONN --> SERVER
    
 
   SERVER --> WS_CONN
 
   WS_CONN --> WS_RECEIVER
 
   WS_RECEIVER --> BRIDGE_RX
 
   BRIDGE_RX --> READ_BYTES
 
   READ_BYTES --> DISPATCHER
 
   DISPATCHER --> WASM_CLIENT

The WebSocket bridge consists of two spawned tasks that connect RpcWasmClient to the real RpcServer:

Client to Server Bridge : Receives bytes from RpcWasmClient’s output callback (invoked during RpcDispatcher::call()) and forwards them as WsMessage::Binary extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:98-108:

Server to Client Bridge : Receives WsMessage::Binary from the server and feeds them to RpcDispatcher::read_bytes() via task::spawn_blocking() extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:110-123:

Why spawn_blocking : RpcWasmClient::get_dispatcher() returns a type that uses blocking_lock() (a synchronous mutex) and RpcDispatcher::read_bytes() is synchronous. These are required for WASM compatibility where async is unavailable. In tests running on Tokio, synchronous blocking operations must run on the blocking thread pool via task::spawn_blocking() to prevent starving the async runtime.

Sources : extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-142 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:83-123

graph LR
    CLIENT_DISP["client_dispatcher:\nRpcDispatcher::new()"]
OUT_BUF["outgoing_buf:\nRc&lt;RefCell&lt;Vec&lt;u8&gt;&gt;&gt;"]
SERVER_DISP["server_dispatcher:\nRpcDispatcher::new()"]
CLIENT_DISP -->|call rpc_request, 4, write_cb| OUT_BUF
 
   OUT_BUF -->|chunks 4| SERVER_DISP
 
   SERVER_DISP -->|read_bytes chunk| SERVER_DISP
 
   SERVER_DISP -->|is_rpc_request_finalized| SERVER_DISP
 
   SERVER_DISP -->|delete_rpc_request| SERVER_DISP
 
   SERVER_DISP -->|respond rpc_response, 4, write_cb| OUT_BUF
 
   OUT_BUF -->|client.read_bytes| CLIENT_DISP

Unit Testing the RpcDispatcher

The core RpcDispatcher can be tested in isolation without async runtimes or network transports. These tests use in-memory buffers to simulate data exchange:

Test Structure

The rpc_dispatcher_call_and_echo_response() test creates two RpcDispatcher instances representing client and server, connected via a shared Rc<RefCell<Vec<u8>>> buffer tests/rpc_dispatcher_tests.rs:30-38:

Request Flow : Client creates RpcRequest with rpc_method_id set to ADD_METHOD_ID or MULT_METHOD_ID, then invokes RpcDispatcher::call() with a write callback that appends to the buffer tests/rpc_dispatcher_tests.rs:42-124:

Server Processing : Server reads from the buffer in 4-byte chunks via RpcDispatcher::read_bytes(), checks is_rpc_request_finalized(), retrieves the request with delete_rpc_request(), processes it, and sends the response via RpcDispatcher::respond() tests/rpc_dispatcher_tests.rs:126-203:

This validates the complete request/response cycle including framing, chunking, request correlation via rpc_request_id, and method dispatch via rpc_method_id.

Sources : tests/rpc_dispatcher_tests.rs:30-203 tests/rpc_dispatcher_tests.rs:1-29

Test Coverage Matrix

The following table summarizes test coverage across different layers and client types:

Test ScenarioCore Unit TestsTokio IntegrationWASM IntegrationKey Functions Validated
Basic RPC CallRpcDispatcher::call(), RpcDispatcher::respond()
Concurrent Callstokio::join! with multiple RpcCallPrebuffered::call()
Large Payloads (> DEFAULT_SERVICE_MAX_CHUNK_SIZE)RpcStreamEncoder::write(), RpcStreamDecoder::process_chunk()
Error PropagationRpcServiceError::Rpc, RpcServiceErrorCode
Method Not FoundRpcServiceEndpointInterface::dispatch()
Framing ProtocolImplicitImplicitRpcDispatcher::read_bytes() chunking
Request CorrelationImplicitImplicitrpc_request_id in RpcHeader
WebSocket Transport✓ (bridged)tokio-tungstenite, WsMessage::Binary
Connection Stateclient.handle_connect(), client.handle_disconnect()

Coverage Rationale

  • Core unit tests validate the RpcDispatcher without runtime dependencies
  • Tokio integration tests validate native client-server communication over real WebSocket connections
  • WASM integration tests validate cross-platform compatibility by testing the WASM client against the same server
  • Each layer is tested at the appropriate level of abstraction

Sources : tests/rpc_dispatcher_tests.rs:1-203 extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs:1-241 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-313

Shared Test Service Definitions

All integration tests use service definitions from example-muxio-rpc-service-definition/src/prebuffered.rs:

| Service Method | Input Type | Output Type | Implementation | Purpose |
| --- | --- | --- | --- | --- |
| Add::METHOD_ID | Vec<f64> | f64 | request_params.iter().sum() | Sum of numbers |
| Mult::METHOD_ID | Vec<f64> | f64 | request_params.iter().product() | Product of numbers |
| Echo::METHOD_ID | Vec<u8> | Vec<u8> | Identity function | Round-trip validation |

These methods are intentionally simple to focus tests on protocol correctness rather than business logic. The Echo::METHOD_ID method is particularly useful for testing large payloads because it returns the exact input, enabling straightforward assert_eq!() assertions.

Method ID Generation : Each method has a unique METHOD_ID constant generated at compile time by xxhash::xxh3_64 hashing of the method name. This is defined in the RpcMethodPrebuffered trait implementation:
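A hedged sketch of how such a constant can be produced at compile time, assuming the const-capable hasher from the xxhash-rust crate (feature "const_xxh3") and the method name string as the hash input; the real RpcMethodPrebuffered trait defines where this constant actually lives:

```rust
use xxhash_rust::const_xxh3::xxh3_64;

pub struct Add;

impl Add {
    // Hash of the method name, evaluated at compile time.
    pub const METHOD_ID: u64 = xxh3_64(b"Add");
}

fn main() {
    // The same hash is computed on client and server, so a mismatch in the
    // method name would surface as MethodNotFound at runtime.
    println!("Add::METHOD_ID = {:#018x}", Add::METHOD_ID);
}
```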

The Add::METHOD_ID, Mult::METHOD_ID, and Echo::METHOD_ID constants are used both in test code (Add::call()) and in server handler registration (endpoint.register_prebuffered(Add::METHOD_ID, handler)), ensuring consistent method identification across all implementations extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs18

Sources : extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs1 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs21

Running Tests

Tests are executed using standard Cargo commands: cargo test (optionally with --workspace) runs the test suites across the workspace, and cargo test -p <crate-name> targets a single crate.

Test Execution Environment : Most integration tests require a Tokio runtime even when testing the WASM client, because the test infrastructure (server, WebSocket bridge) runs on Tokio. The WASM client itself remains runtime-agnostic.

For detailed information on specific testing approaches, see:

Sources : extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs18 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs39



Unit Testing

Relevant source files

Purpose and Scope

This document covers unit testing practices and patterns within the muxio codebase. Unit tests verify individual components in isolation using mock implementations and controlled test scenarios. These tests focus on validating the behavior of core RPC components including RpcDispatcher, RpcServiceCallerInterface, and protocol-level request/response handling.

For information about end-to-end testing with real server instances and WebSocket connections, see Integration Testing. For general testing infrastructure overview, see Testing.

Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:1-213 tests/rpc_dispatcher_tests.rs:1-204


Unit Testing Architecture

The unit testing strategy in muxio uses isolated component testing with mock implementations to verify behavior without requiring full runtime environments or network transports.

Architecture Overview: Unit tests instantiate mock implementations that satisfy core trait interfaces (RpcServiceCallerInterface) without requiring actual network connections or async runtimes. Mock implementations use synchronization primitives (Arc<Mutex>, Arc<AtomicBool>) to coordinate test behavior and response injection. Tests verify both success paths and error handling using pattern matching against RpcServiceError types.

graph TB
    subgraph "Test Layer"
        TEST["Test Functions\n(#[tokio::test])"]
MOCK["MockRpcClient"]
ASSERTIONS["Test Assertions\nassert_eq!, match patterns"]
end
    
    subgraph "Component Under Test"
        INTERFACE["RpcServiceCallerInterface\ntrait implementation"]
DISPATCHER["RpcDispatcher"]
CALL["call_rpc_buffered\ncall_rpc_streaming"]
end
    
    subgraph "Supporting Test Infrastructure"
        CHANNELS["DynamicSender/Receiver\nmpsc::unbounded, mpsc::channel"]
ENCODER["RpcStreamEncoder\ndummy instances"]
ATOMIC["Arc<AtomicBool>\nconnection state"]
MUTEX["Arc<Mutex<Option<DynamicSender>>>\nresponse coordination"]
end
    
 
   TEST --> MOCK
    MOCK -.implements.-> INTERFACE
 
   MOCK --> CHANNELS
 
   MOCK --> ENCODER
 
   MOCK --> ATOMIC
 
   MOCK --> MUTEX
    
 
   TEST --> CALL
 
   CALL --> DISPATCHER
    
 
   TEST --> ASSERTIONS

Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:20-93 tests/rpc_dispatcher_tests.rs:30-39


Mock Implementation Pattern

Unit tests use mock implementations of core interfaces to isolate component behavior. The primary mock pattern implements RpcServiceCallerInterface with controllable behavior.

MockRpcClient Structure

| Component | Type | Purpose |
| --- | --- | --- |
| response_sender_provider | Arc<Mutex<Option<DynamicSender>>> | Shared reference to response channel sender for injecting test responses |
| is_connected_atomic | Arc<AtomicBool> | Atomic flag controlling connection state returned by is_connected() |
| get_dispatcher() | Returns Arc<TokioMutex<RpcDispatcher>> | Provides fresh dispatcher instance for each test |
| get_emit_fn() | Returns Arc<dyn Fn(Vec<u8>)> | No-op emit function (network writes not needed) |
| call_rpc_streaming() | Returns (RpcStreamEncoder, DynamicReceiver) | Creates test channels and stores sender for response injection |

Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:22-93
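
A minimal sketch of the mock's shape, based on the fields listed above. The actual test also implements the RpcServiceCallerInterface trait methods, whose signatures are defined by muxio-rpc-service-caller and are not reproduced here; the mutex type shown is an assumption.

use std::sync::{atomic::AtomicBool, Arc, Mutex};

// Field names and types follow the table above; `DynamicSender` is provided by
// muxio-rpc-service-caller. Standard-library sync primitives are shown here;
// the real test may use a different mutex type.
struct MockRpcClient {
    response_sender_provider: Arc<Mutex<Option<DynamicSender>>>,
    is_connected_atomic: Arc<AtomicBool>,
}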

Mock Implementation Details

The MockRpcClient struct implements the RpcServiceCallerInterface trait with minimal functionality required for unit testing:

Key Implementation Characteristics:

graph LR
    subgraph "MockRpcClient Implementation"
        MOCK["MockRpcClient"]
PROVIDER["response_sender_provider\nArc<Mutex<Option<DynamicSender>>>"]
CONNECTED["is_connected_atomic\nArc<AtomicBool>"]
end
    
    subgraph "Trait Methods"
        GET_DISP["get_dispatcher()\nreturns new RpcDispatcher"]
GET_EMIT["get_emit_fn()\nreturns no-op closure"]
IS_CONN["is_connected()\nreads atomic bool"]
CALL_STREAM["call_rpc_streaming()\ncreates channels, stores sender"]
end
    
 
   MOCK --> PROVIDER
 
   MOCK --> CONNECTED
    MOCK -.implements.-> GET_DISP
    MOCK -.implements.-> GET_EMIT
    MOCK -.implements.-> IS_CONN
    MOCK -.implements.-> CALL_STREAM
    
 
   CALL_STREAM --> PROVIDER
 
   IS_CONN --> CONNECTED
  1. Fresh Dispatcher: Each call to get_dispatcher() returns a new RpcDispatcher instance wrapped in Arc<TokioMutex> extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:32-34

  2. No-op Emit: The get_emit_fn() returns a closure that discards bytes, as network transmission is not needed in unit tests extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:36-38

  3. Channel Creation: call_rpc_streaming() creates either bounded or unbounded channels based on DynamicChannelType, stores the sender in shared state for later response injection, and returns a dummy RpcStreamEncoder extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:44-85

  4. Connection State Control: Tests control the is_connected() return value by modifying the AtomicBool extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:40-42

Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:24-93


Unit Test Structure and Patterns

Prebuffered RPC Call Tests

Tests for prebuffered RPC calls follow a standard pattern: instantiate mock client, spawn response injection task, invoke RPC method, verify result.

Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:97-133

sequenceDiagram
    participant Test as "Test Function"
    participant Mock as "MockRpcClient"
    participant Task as "tokio::spawn\nresponse task"
    participant Sender as "DynamicSender"
    participant Call as "call_rpc_buffered"
    participant Result as "Test Assertion"
    
    Test->>Mock: Instantiate with Arc<Mutex<Option<DynamicSender>>>
    Test->>Task: spawn background task
    Test->>Call: invoke with RpcRequest
    Call->>Mock: call_rpc_streaming()
    Mock->>Mock: create channels
    Mock->>Sender: store sender in Arc<Mutex>
    Mock-->>Call: return (encoder, receiver)
    Task->>Sender: poll for sender availability
    Task->>Sender: send_and_ignore(Ok(response_bytes))
    Call->>Call: await response from receiver
    Call-->>Test: return (encoder, Result<T>)
    Test->>Result: assert_eq! or match pattern

Test Case: Successful Buffered Call

The test_buffered_call_success function demonstrates the success path for prebuffered RPC calls:

Test Setup:

Response Injection:

Invocation:

Assertion:

Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:97-133

Test Case: Remote Error Handling

The test_buffered_call_remote_error function verifies error propagation from service handlers:

Error Injection:

Error Verification:

Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:135-177

Test Case: Trait-Level Error Conversion

The test_prebuffered_trait_converts_error function verifies that the RpcMethodPrebuffered trait correctly propagates service errors:

Trait Invocation:

Error Verification:

Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:179-212


graph TB
    subgraph "Test Setup"
        TEST["rpc_dispatcher_call_and_echo_response"]
OUTBUF["outgoing_buf\nRc<RefCell<Vec<u8>>>"]
end
    
    subgraph "Client Dispatcher"
        CLIENT_DISP["client_dispatcher\nRpcDispatcher::new()"]
CLIENT_CALL["dispatcher.call()\nwith emit closure"]
CLIENT_STREAM["RpcStreamEvent handler\nreceives responses"]
end
    
    subgraph "Server Dispatcher"
        SERVER_DISP["server_dispatcher\nRpcDispatcher::new()"]
SERVER_READ["dispatcher.read_bytes()\nreturns request IDs"]
SERVER_DELETE["dispatcher.delete_rpc_request()\nretrieves buffered request"]
SERVER_RESPOND["dispatcher.respond()\nemits response frames"]
end
    
    subgraph "Method Handlers"
        ADD_HANDLER["ADD_METHOD_ID\nsum numbers"]
MULT_HANDLER["MULT_METHOD_ID\nmultiply numbers"]
end
    
 
   TEST --> CLIENT_DISP
 
   TEST --> SERVER_DISP
 
   TEST --> OUTBUF
    
 
   CLIENT_CALL --> OUTBUF
 
   OUTBUF --> SERVER_READ
 
   SERVER_READ --> SERVER_DELETE
 
   SERVER_DELETE --> ADD_HANDLER
 
   SERVER_DELETE --> MULT_HANDLER
 
   ADD_HANDLER --> SERVER_RESPOND
 
   MULT_HANDLER --> SERVER_RESPOND
 
   SERVER_RESPOND --> CLIENT_STREAM

RPC Dispatcher Unit Tests

The RpcDispatcher component is tested using a client-server pair of dispatchers that communicate through shared buffers, simulating network transmission without actual I/O.

Test Architecture: Dispatcher Echo Test

Sources: tests/rpc_dispatcher_tests.rs:30-203

Dispatcher Test Flow

The rpc_dispatcher_call_and_echo_response test demonstrates the complete request-response cycle at the dispatcher level:

Phase 1: Request Encoding and Transmission

| Step | Component | Action |
|---|---|---|
| 1 | Test | Creates AddRequestParams and MultRequestParams structs tests/rpc_dispatcher_tests.rs:10-28 |
| 2 | Test | Encodes parameters using bitcode::encode() tests/rpc_dispatcher_tests.rs:44-46 |
| 3 | Test | Constructs RpcRequest with method ID, param bytes, is_finalized: true tests/rpc_dispatcher_tests.rs:42-49 |
| 4 | Client Dispatcher | Calls dispatcher.call() with chunk size 4, emit closure that writes to outgoing_buf, and stream event handler tests/rpc_dispatcher_tests.rs:74-122 |
| 5 | Emit Closure | Extends outgoing_buf with emitted bytes tests/rpc_dispatcher_tests.rs:80-82 |

Phase 2: Request Reception and Processing

| Step | Component | Action |
|---|---|---|
| 6 | Test | Chunks outgoing_buf into 4-byte segments tests/rpc_dispatcher_tests.rs:127-129 |
| 7 | Server Dispatcher | Calls dispatcher.read_bytes(chunk) returning RPC request IDs tests/rpc_dispatcher_tests.rs:130-132 |
| 8 | Server Dispatcher | Checks is_rpc_request_finalized() for each request ID tests/rpc_dispatcher_tests.rs:135-142 |
| 9 | Server Dispatcher | Calls delete_rpc_request() to retrieve and remove finalized request tests/rpc_dispatcher_tests.rs:144 |
| 10 | Test | Decodes param bytes using bitcode::decode() tests/rpc_dispatcher_tests.rs:152-153 |

Phase 3: Response Generation and Transmission

| Step | Component | Action |
|---|---|---|
| 11 | Method Handler | Processes request (sum for ADD, product for MULT) tests/rpc_dispatcher_tests.rs:151-187 |
| 12 | Test | Encodes response using bitcode::encode() tests/rpc_dispatcher_tests.rs:157-159 |
| 13 | Test | Constructs RpcResponse with rpc_request_id, method ID, status, and payload tests/rpc_dispatcher_tests.rs:161-167 |
| 14 | Server Dispatcher | Calls dispatcher.respond() with chunk size 4 and emit closure tests/rpc_dispatcher_tests.rs:193-197 |
| 15 | Emit Closure | Calls client_dispatcher.read_bytes() directly tests/rpc_dispatcher_tests.rs:195 |
| 16 | Client Stream Handler | Receives RpcStreamEvent::Header and RpcStreamEvent::PayloadChunk tests/rpc_dispatcher_tests.rs:86-118 |
| 17 | Client Stream Handler | Decodes response bytes and logs results tests/rpc_dispatcher_tests.rs:102-112 |

Sources: tests/rpc_dispatcher_tests.rs:30-203

classDiagram
    class AddRequestParams {+Vec~f64~ numbers}
    
    class MultRequestParams {+Vec~f64~ numbers}
    
    class AddResponseParams {+f64 result}
    
    class MultResponseParams {+f64 result}
    
    AddRequestParams ..|> Encode
    AddRequestParams ..|> Decode
    MultRequestParams ..|> Encode
    MultRequestParams ..|> Decode
    AddResponseParams ..|> Encode
    AddResponseParams ..|> Decode
    MultResponseParams ..|> Encode
    MultResponseParams ..|> Decode

Request and Response Types

The dispatcher test defines custom request and response types that demonstrate the serialization pattern:

Request Types:

All types derive Encode and Decode from the bitcode crate, enabling efficient binary serialization tests/rpc_dispatcher_tests.rs:10-28
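
As a hedged illustration of that pattern, the following round-trip uses the standard bitcode derives and encode/decode calls; field names follow the class diagram above.

use bitcode::{Decode, Encode};

#[derive(Encode, Decode, Debug, PartialEq)]
struct AddRequestParams {
    numbers: Vec<f64>,
}

#[derive(Encode, Decode, Debug, PartialEq)]
struct AddResponseParams {
    result: f64,
}

fn roundtrip_example() {
    let request = AddRequestParams { numbers: vec![1.0, 2.0, 3.0] };

    // Serialize to a compact binary payload, then decode it back.
    let bytes = bitcode::encode(&request);
    let decoded: AddRequestParams = bitcode::decode(&bytes).expect("valid payload");

    assert_eq!(request, decoded);
}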

Method ID Constants:

Sources: tests/rpc_dispatcher_tests.rs:7-28


Test Coverage Areas

Components Under Test

| Component | Test File | Key Aspects Verified |
|---|---|---|
| RpcDispatcher | tests/rpc_dispatcher_tests.rs | Request correlation, chunked I/O, response routing, finalization detection |
| RpcServiceCallerInterface | extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs | Prebuffered call success, error propagation, trait-level invocation |
| call_rpc_buffered | extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs | Response deserialization, error handling, channel coordination |
| RpcMethodPrebuffered trait | extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs | Trait method invocation, error conversion |

Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:1-213 tests/rpc_dispatcher_tests.rs:1-204

Assertion Patterns

Equality Assertions:

Pattern Matching:

match result {
    Err(RpcServiceError::Rpc(err)) => {
        assert_eq!(err.code, RpcServiceErrorCode::Fail);
        assert_eq!(err.message, "item does not exist");
    }
    _ => panic!("Expected a RemoteError"),
}

extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:170-176

Boolean Assertions:

Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:132-211 tests/rpc_dispatcher_tests.rs:91


Running Unit Tests

Unit tests use the standard Rust test infrastructure with async support via tokio::test:

Test Attributes:
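
Async unit tests are annotated with the tokio::test attribute so they run on a Tokio test runtime; a minimal skeleton looks like this (the body is elided here):

#[tokio::test]
async fn test_buffered_call_success() {
    // Arrange: build the MockRpcClient and spawn the response-injection task.
    // Act: invoke the RPC method under test.
    // Assert: compare the decoded result against the expected value.
}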

Running Tests:
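
Typical invocations (package names follow the workspace layout described in this documentation):

cargo test                                   # tests for the current package
cargo test --workspace                       # all workspace unit and integration tests
cargo test -p muxio-rpc-service-caller       # a single extension crate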

Test Organization:

Unit tests are located in two primary locations:

  1. Workspace-level tests: tests/ directory at repository root contains core component tests tests/rpc_dispatcher_tests.rs:1
  2. Package-level tests: extensions/<package>/tests/ directories contain extension-specific tests extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:1

Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:97 tests/rpc_dispatcher_tests.rs:30-31


Integration Testing

This document describes integration testing patterns in Muxio that validate full client-server communication flows over real network connections. Integration tests verify the complete RPC stack from method invocation through binary framing to response decoding, ensuring all layers work correctly together.

For unit testing of individual components like RpcDispatcher and RpcStreamDecoder, see Unit Testing. For end-to-end application examples, see WebSocket RPC Application.

Purpose and Scope

Integration tests in Muxio validate:

  • Complete RPC request/response cycles across real network connections
  • Interoperability between different client implementations (native Tokio and WASM) with the server
  • Proper handling of large payloads that require chunking and streaming
  • Error propagation from server to client through all layers
  • WebSocket transport behavior under realistic conditions

These tests use actual RpcServer instances and establish real WebSocket connections, providing high-fidelity validation of the entire system.

Sources: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-20

Integration Test Architecture

Three-Component Test Setup

Muxio integration tests for WASM clients use a three-component architecture to simulate realistic browser-server communication within a Tokio test environment:

Component Roles:

graph TB
    subgraph "Test Environment (Tokio Runtime)"
        Server["RpcServer\n(muxio-tokio-rpc-server)\nListening on TCP"]
Bridge["WebSocket Bridge\ntokio-tungstenite\nConnect & Forward"]
Client["RpcWasmClient\nRuntime-agnostic\nCallback-based"]
end
    
 
   Client -->|Callback: send bytes| Bridge
 
   Bridge -->|WebSocket Binary| Server
 
   Server -->|WebSocket Binary| Bridge
 
   Bridge -->|dispatcher.read_bytes| Client
    
 
   TestCode["Test Code"] -->|Echo::call| Client
 
   Server -->|Register handlers| Handlers["RpcServiceEndpointInterface\nAdd, Mult, Echo"]
style Server fill:#f9f9f9
    style Bridge fill:#f9f9f9
    style Client fill:#f9f9f9

| Component | Type | Purpose |
|---|---|---|
| RpcServer | Real server instance | Actual production server from muxio-tokio-rpc-server |
| RpcWasmClient | Client under test | WASM-compatible client that sends bytes via callback |
| WebSocket Bridge | Test infrastructure | Connects client callback to server socket via real network |

This architecture ensures the WASM client is tested against the actual server implementation rather than mocks, providing realistic validation.

Sources: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-19 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:39-78

WebSocket Bridge Pattern

Bridge Implementation

The WebSocket bridge connects the RpcWasmClient’s callback-based output to a real WebSocket connection. This pattern is essential because WASM clients are designed to be runtime-agnostic and cannot directly create network connections in a Tokio test environment.

sequenceDiagram
    participant Test as "Test Code"
    participant Client as "RpcWasmClient"
    participant ToBridge as "to_bridge_rx\ntokio_mpsc channel"
    participant WsSender as "WebSocket\nSender"
    participant WsReceiver as "WebSocket\nReceiver"
    participant Server as "RpcServer"
    
    Test->>Client: Echo::call(data)
    Client->>Client: encode_request()
    Client->>ToBridge: Callback: send(bytes)
    
    Note over ToBridge,WsSender: Bridge Task 1: Client → Server
    ToBridge->>WsSender: recv() bytes
    WsSender->>Server: WsMessage::Binary
    
    Server->>Server: Process RPC
    Server->>WsReceiver: WsMessage::Binary
    
    Note over WsReceiver,Client: Bridge Task 2: Server → Client
    WsReceiver->>Client: dispatcher.read_bytes()
    Client->>Test: Return decoded result

Bridge Setup Code

The bridge consists of two async tasks and a channel:

  1. Client Output Channel - Created when constructing RpcWasmClient:

extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:83-86

  2. Bridge Task 1: Client to Server - Forwards bytes from callback to WebSocket:

extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:98-108
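
A hedged sketch of what this forwarding task looks like: `to_bridge_rx` (an mpsc receiver of byte buffers) and `ws_sender` (the WebSocket sink half) come from the surrounding test setup, and the tungstenite Message payload type varies between crate versions.

use futures_util::SinkExt;
use tokio_tungstenite::tungstenite::Message;

// Forward every byte buffer emitted by the WASM client's callback to the
// server over the real WebSocket connection.
tokio::spawn(async move {
    while let Some(bytes) = to_bridge_rx.recv().await {
        if ws_sender.send(Message::Binary(bytes.into())).await.is_err() {
            break; // server closed the connection
        }
    }
});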

  3. Bridge Task 2: Server to Client - Forwards bytes from WebSocket to dispatcher:

extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:110-123

Blocking Dispatcher Access

A critical detail is the use of task::spawn_blocking for dispatcher operations. The RpcDispatcher uses a blocking mutex (parking_lot::Mutex) because the core Muxio library is non-async to support WASM. In a Tokio runtime, blocking operations must be moved to dedicated threads:
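
A hedged sketch of the pattern; the way received bytes reach the client's dispatcher is simplified to a hypothetical handle_incoming_bytes call standing in for the real code that drives dispatcher.read_bytes().

use futures_util::StreamExt;
use tokio::task;
use tokio_tungstenite::tungstenite::Message;

// Server → client: every binary WebSocket frame is fed to the client's
// dispatcher on a blocking thread so the async runtime is never stalled.
tokio::spawn(async move {
    while let Some(Ok(Message::Binary(bytes))) = ws_receiver.next().await {
        let client = client.clone();
        task::spawn_blocking(move || {
            // Hypothetical helper wrapping the blocking `dispatcher.read_bytes()` call.
            client.handle_incoming_bytes(&bytes);
        })
        .await
        .expect("dispatcher task panicked");
    }
});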

This prevents freezing the async test runtime while maintaining compatibility with the non-async core design.

Sources: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:83-123 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:116-120

Server Setup for Integration Tests

Creating the Test Server

Integration tests create a real RpcServer listening on a random port:

The server is wrapped in Arc for safe sharing between the test code and the spawned server task:

extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:42-48

Handler Registration

Handlers are registered using the RpcServiceEndpointInterface:

extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:51-69

The server is then spawned in the background:

extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:72-78

Sources: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:42-78

graph TB
 
   Test["Test Code"] -->|join!| Calls["Multiple RPC Calls"]
Calls --> Add1["Add::call\nvec![1.0, 2.0, 3.0]"]
Calls --> Add2["Add::call\nvec![8.0, 3.0, 7.0]"]
Calls --> Mult1["Mult::call\nvec![8.0, 3.0, 7.0]"]
Calls --> Mult2["Mult::call\nvec![1.5, 2.5, 8.5]"]
Calls --> Echo1["Echo::call\nb'testing 1 2 3'"]
Calls --> Echo2["Echo::call\nb'testing 4 5 6'"]
Add1 --> Assert1["assert_eq! 6.0"]
Add2 --> Assert2["assert_eq! 18.0"]
Mult1 --> Assert3["assert_eq! 168.0"]
Mult2 --> Assert4["assert_eq! 31.875"]
Echo1 --> Assert5["assert_eq! bytes"]
Echo2 --> Assert6["assert_eq! bytes"]

Test Scenarios

Success Path Testing

Basic integration tests validate correct request/response handling for multiple concurrent RPC calls:

The test makes six concurrent RPC calls using tokio::join!:

extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:126-142

This validates that:

  • Multiple concurrent requests are correctly multiplexed
  • Request/response correlation works via request_id matching
  • Type-safe encoding/decoding produces correct results
  • The RpcDispatcher handles concurrent streams properly

Sources: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:39-142

sequenceDiagram
    participant Test as "Test Code"
    participant Client as "RpcWasmClient"
    participant Bridge as "WebSocket Bridge"
    participant Server as "RpcServer"
    participant Handler as "Failing Handler"
    
    Test->>Client: Add::call(vec![1.0, 2.0, 3.0])
    Client->>Bridge: RpcRequest binary
    Bridge->>Server: WebSocket message
    Server->>Handler: Invoke handler
    Handler->>Handler: Return Err("Addition failed")
    Handler->>Server: Error response
    Server->>Bridge: RpcResponse with error code
    Bridge->>Client: Binary frames
    Client->>Client: Decode RpcServiceError
    Client->>Test: Err(RpcServiceError::Rpc)
    
    Test->>Test: match error.code\nassert_eq! System

Error Propagation Testing

Integration tests verify that server-side errors propagate correctly through all layers to the client:

Error Handler Registration

The test registers a handler that always fails:

extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:155-160

Error Assertion

The test verifies the error type, code, and message:

extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:210-226

This validates:

  • RpcServiceError serialization and deserialization
  • Error code preservation (RpcServiceErrorCode::System)
  • Error message transmission
  • Proper error variant matching on client side

Sources: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:144-227

graph TB
 
   Test["Test: Large Payload"] --> Create["Create payload\n200 × DEFAULT_SERVICE_MAX_CHUNK_SIZE"]
Create --> Call["Echo::call(client, large_payload)"]
Call --> Encode["RpcCallPrebuffered\nencode_request()"]
Encode --> Check{"Size >= chunk_size?"}
Check -->|Yes| Payload["rpc_prebuffered_payload_bytes"]
Check -->|No| Param["rpc_param_bytes"]
Payload --> Stream["RpcDispatcher\nStream as chunks"]
Stream --> Frames["Hundreds of binary frames"]
Frames --> Server["Server receives & reassembles"]
Server --> Echo["Echo handler returns"]
Echo --> Response["Stream response chunks"]
Response --> Client["Client reassembles"]
Client --> Assert["assert_eq! result == input"]

Large Payload Testing

Chunked Streaming Validation

A critical integration test validates that large payloads exceeding the chunk size are properly streamed as multiple frames:

Test Implementation

The test creates a payload 200 times the chunk size to force extensive streaming:

extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:295-311

What This Tests

| Layer | Validation |
|---|---|
| RpcCallPrebuffered | Large argument handling logic (extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:58-65) |
| RpcDispatcher | Request chunking and response reassembly |
| RpcSession | Stream multiplexing with many frames |
| Binary Framing | Frame encoding/decoding at scale |
| WebSocket Transport | Large binary message handling |
| Server Endpoint | Payload buffering and processing |

This end-to-end test validates the complete stack can handle payloads requiring hundreds of frames without data corruption or performance issues.

Sources: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:229-312 extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:30-73

Connection Lifecycle Testing

Handle Connect Pattern

Integration tests must explicitly call handle_connect() to simulate the browser’s WebSocket onopen event:

extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:95

Without this call, the client remains in Disconnected state and RPC calls will fail. This pattern reflects the WASM client’s design where connection state is managed by the JavaScript runtime in production.

Test Timing Considerations

Tests include explicit delays to ensure server initialization:

extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:80

This prevents race conditions where the client attempts to connect before the server is listening.

Sources: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:80 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:95

Common Integration Test Patterns

Test Structure Template

All integration tests follow a consistent structure:

Code Structure

  1. Server Initialization:

    • Create TcpListener with port 0 for random port
    • Wrap RpcServer in Arc for shared ownership
    • Register handlers via endpoint.register_prebuffered()
    • Spawn server with tokio::spawn
  2. Client and Bridge Creation:

    • Create tokio_mpsc channel for callback output
    • Instantiate RpcWasmClient with callback closure
    • Connect to server using tokio_tungstenite::connect_async
    • Call client.handle_connect() to mark connection ready
  3. Bridge Tasks:

    • Spawn task forwarding from channel to WebSocket
    • Spawn task forwarding from WebSocket to dispatcher (using spawn_blocking)
  4. Test Execution:

    • Use tokio::join! for concurrent RPC calls
    • Use async move blocks where necessary for ownership
  5. Assertions:

    • Validate success results with assert_eq!
    • Match error variants explicitly
    • Check error codes and messages

Sources: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:39-312

Comparison with Unit Tests

Integration tests differ fundamentally from unit tests in Muxio:

| Aspect | Unit Tests | Integration Tests |
|---|---|---|
| Components | Mock RpcServiceCallerInterface | Real RpcServer and RpcWasmClient |
| Transport | In-memory channels | Real WebSocket connections |
| Response Simulation | Manual sender injection | Server processes actual requests |
| Scope | Single component (e.g., dispatcher) | Complete RPC stack |
| Example File | extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs | extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs |

Unit Test Pattern

Unit tests use a MockRpcClient that provides controlled response injection:

extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:24-28

The mock allows direct control of responses without network communication:

extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:110-121

This approach is suitable for testing client-side logic in isolation but does not validate network behavior or server processing.

Sources: extensions/muxio-rpc-service-caller/tests/prebuffered_caller_tests.rs:20-93 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-20

Best Practices

Resource Management

  1. Use Arc for Server Sharing: Wrap the RpcServer in Arc before spawning the server task. This prevents the server from being moved into the spawn closure while still allowing endpoint access.

  2. Proper Channel Cleanup: WebSocket bridge tasks naturally terminate when channels close, preventing resource leaks.

  3. Use spawn_blocking for Synchronous Operations: Always wrap blocking dispatcher operations in task::spawn_blocking to prevent runtime stalls.

Test Reliability

  1. Random Port Allocation: Use port 0 to avoid conflicts: TcpListener::bind("127.0.0.1:0")

  2. Server Initialization Delay: Include brief sleep after spawning server: tokio::time::sleep(Duration::from_millis(200))

  3. Explicit Connection Lifecycle: Always call client.handle_connect() before making RPC calls

  4. Concurrent Call Testing: Use tokio::join! to validate request multiplexing and correlation

Error Testing

  1. Test Multiple Error Codes: Validate NotFound, Fail, and System error codes separately

  2. Check Error Message Preservation: Assert exact error messages to ensure proper serialization

  3. Test Both Success and Failure: Each integration test suite should include both happy path and error scenarios

Sources: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:39-312

Integration Test Limitations

What Integration Tests Do Not Cover

Integration tests in Muxio focus on RPC protocol correctness over real network connections. They do not address:

  1. Browser JavaScript Integration:

    • Tests run in Tokio, not actual browser environment
    • wasm-bindgen JavaScript glue code not exercised
    • Browser WebSocket API behavior not validated
  2. Network Failure Scenarios:

    • Connection drops and reconnection logic
    • Network partition handling
    • Timeout behavior under slow networks
  3. Concurrent Client Stress:

    • High connection count scenarios
    • Server resource exhaustion
    • Backpressure handling at scale
  4. Security:

    • TLS/SSL connection validation
    • Authentication and authorization flows
    • Message tampering detection

For end-to-end browser testing, see JavaScript/WASM Integration. For performance testing, see Performance Optimization.

Sources: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-20


Examples and Tutorials

This section provides practical, hands-on examples demonstrating how to build applications using the muxio framework. The tutorials progress from understanding the complete example application to creating your own service definitions and implementations.

For information about the underlying architecture and design principles, see Core Concepts. For detailed documentation on specific platform implementations, see Platform Implementations.


Available Examples

The muxio repository includes a complete working example demonstrating the end-to-end process of building an RPC application:

| Example Crate | Purpose | Location |
|---|---|---|
| example-muxio-rpc-service-definition | Shared service contract definitions | examples/example-muxio-rpc-service-definition/ |
| example-muxio-ws-rpc-app | Complete WebSocket RPC application | examples/example-muxio-ws-rpc-app/ |

The example demonstrates:

  • Defining shared service contracts using RpcMethodPrebuffered
  • Setting up a Tokio-based RPC server with WebSocket transport
  • Creating an RPC client and making concurrent calls
  • Proper error handling and state management
  • Connection lifecycle management

Sources : Cargo.lock:426-449 README.md:64-162


Example Application Architecture

The following diagram shows how the example application components relate to each other and to the muxio framework:

Sources : README.md:70-82 Cargo.lock:426-449

graph TB
    subgraph "Shared Service Definition"
        DEF["example-muxio-rpc-service-definition"]
ADD["Add::METHOD_ID\nRpcMethodPrebuffered"]
MULT["Mult::METHOD_ID\nRpcMethodPrebuffered"]
ECHO["Echo::METHOD_ID\nRpcMethodPrebuffered"]
DEF --> ADD
 
       DEF --> MULT
 
       DEF --> ECHO
    end
    
    subgraph "Server Side (example-muxio-ws-rpc-app)"
        SERVER["RpcServer::new()"]
ENDPOINT["server.endpoint()"]
HANDLERS["endpoint.register_prebuffered()"]
LISTENER["TcpListener::bind()"]
SERVER --> ENDPOINT
 
       ENDPOINT --> HANDLERS
 
       SERVER --> LISTENER
    end
    
    subgraph "Client Side (example-muxio-ws-rpc-app)"
        CLIENT["RpcClient::new()"]
CALLS["Add::call()\nMult::call()\nEcho::call()"]
STATE["set_state_change_handler()"]
CLIENT --> CALLS
 
       CLIENT --> STATE
    end
    
    subgraph "Muxio Framework"
        CALLER_IF["RpcServiceCallerInterface"]
ENDPOINT_IF["RpcServiceEndpointInterface"]
DISPATCHER["RpcDispatcher"]
CALLER_IF --> DISPATCHER
 
       ENDPOINT_IF --> DISPATCHER
    end
    
 
   DEF -.->|imported by| HANDLERS
 
   DEF -.->|imported by| CALLS
    
 
   HANDLERS -->|implements| ENDPOINT_IF
 
   CALLS -->|uses| CALLER_IF
    
 
   CLIENT <-.->|WebSocket frames| SERVER

Project Structure for an Example Application

When building a muxio application, the typical project structure separates concerns into distinct crates:

graph LR
    subgraph "Workspace Root"
        CARGO["Cargo.toml\n[workspace]"]
end
    
    subgraph "Service Definition Crate"
        DEF_CARGO["service-definition/Cargo.toml"]
DEF_LIB["service-definition/src/lib.rs"]
DEF_PREBUF["RpcMethodPrebuffered impls"]
DEF_CARGO --> DEF_LIB
 
       DEF_LIB --> DEF_PREBUF
    end
    
    subgraph "Application Crate"
        APP_CARGO["app/Cargo.toml"]
APP_MAIN["app/src/main.rs"]
SERVER_CODE["Server setup + handlers"]
CLIENT_CODE["Client setup + calls"]
APP_CARGO --> APP_MAIN
 
       APP_MAIN --> SERVER_CODE
 
       APP_MAIN --> CLIENT_CODE
    end
    
 
   CARGO -->|members| DEF_CARGO
 
   CARGO -->|members| APP_CARGO
    
 
   APP_CARGO -.->|depends on| DEF_CARGO
    
 
   DEF_CARGO -.->|muxio-rpc-service| MUXIO_RPC["muxio-rpc-service"]
APP_CARGO -.->|muxio-tokio-rpc-server| TOKIO_SERVER["muxio-tokio-rpc-server"]
APP_CARGO -.->|muxio-tokio-rpc-client| TOKIO_CLIENT["muxio-tokio-rpc-client"]

This structure ensures:

  • Service definitions are shared between client and server (compile-time type safety)
  • Application code depends on the definition crate
  • Changes to the service contract require recompilation of both sides

Sources : Cargo.lock:426-449 README.md:70-74


Understanding the Request-Response Flow

The following sequence diagram traces a complete RPC call through the example application:

Sources : README.md:92-162

sequenceDiagram
    participant Main as "main() function"
    participant Client as "RpcClient"
    participant AddTrait as "Add::call()"
    participant Caller as "RpcServiceCallerInterface"
    participant WS as "WebSocket Transport"
    participant Server as "RpcServer"
    participant Endpoint as "RpcServiceEndpointInterface"
    participant Handler as "register_prebuffered handler"
    
    Note over Main,Handler: Server Setup Phase
    Main->>Server: RpcServer::new()
    Main->>Server: server.endpoint()
    Server->>Endpoint: returns endpoint handle
    Main->>Endpoint: endpoint.register_prebuffered(Add::METHOD_ID, handler)
    Endpoint->>Handler: stores async handler
    Main->>Server: server.serve_with_listener(listener)
    
    Note over Main,Handler: Client Connection Phase
    Main->>Client: RpcClient::new(host, port)
    Client->>WS: WebSocket connection established
    
    Note over Main,Handler: RPC Call Phase
    Main->>AddTrait: Add::call(&client, vec![1.0, 2.0, 3.0])
    AddTrait->>AddTrait: Add::encode_request(params)
    AddTrait->>Caller: client.call_prebuffered(METHOD_ID, encoded_bytes)
    Caller->>WS: Binary frames with stream_id, method_id
    
    WS->>Server: Receive frames
    Server->>Endpoint: Dispatch to registered handler
    Endpoint->>Handler: invoke with request_bytes, ctx
    Handler->>Handler: Add::decode_request(&request_bytes)
    Handler->>Handler: let sum = request_params.iter().sum()
    Handler->>Handler: Add::encode_response(sum)
    Handler->>Endpoint: Ok(response_bytes)
    Endpoint->>WS: Binary frames with result
    
    WS->>Caller: Receive frames
    Caller->>AddTrait: response_bytes
    AddTrait->>AddTrait: Add::decode_response(&response_bytes)
    AddTrait->>Main: Ok(6.0)

Detailed Code Flow Through the Example

The following table maps natural language steps to specific code locations:

| Step | Description | Code Location |
|---|---|---|
| 1. Bind server socket | Create TCP listener on random port | README.md:88 |
| 2. Create server instance | Instantiate RpcServer wrapped in Arc | README.md:95 |
| 3. Get endpoint handle | Retrieve endpoint for handler registration | README.md:98 |
| 4. Register Add handler | Register async handler for Add::METHOD_ID | README.md:102-107 |
| 5. Register Mult handler | Register async handler for Mult::METHOD_ID | README.md:108-113 |
| 6. Register Echo handler | Register async handler for Echo::METHOD_ID | README.md:114-118 |
| 7. Spawn server task | Start server with serve_with_listener() | README.md:126 |
| 8. Create client | Connect to server via RpcClient::new() | README.md:137 |
| 9. Set state handler | Register callback for connection state changes | README.md:139-142 |
| 10. Make concurrent calls | Use join! macro to await multiple RPC calls | README.md:145-152 |
| 11. Verify results | Assert response values match expected results | README.md:154-159 |

Sources : README.md:83-161


Service Definition Pattern

The service definition crate defines the contract between client and server. Each RPC method is implemented as a unit struct that implements the RpcMethodPrebuffered trait:

graph TB
    subgraph "Service Definition Structure"
        TRAIT["RpcMethodPrebuffered trait"]
ADD_STRUCT["pub struct Add"]
ADD_IMPL["impl RpcMethodPrebuffered for Add"]
ADD_METHOD_ID["const METHOD_ID: u64"]
ADD_REQUEST["type Request = Vec<f64>"]
ADD_RESPONSE["type Response = f64"]
MULT_STRUCT["pub struct Mult"]
MULT_IMPL["impl RpcMethodPrebuffered for Mult"]
ECHO_STRUCT["pub struct Echo"]
ECHO_IMPL["impl RpcMethodPrebuffered for Echo"]
TRAIT -.->|implemented by| ADD_IMPL
 
       TRAIT -.->|implemented by| MULT_IMPL
 
       TRAIT -.->|implemented by| ECHO_IMPL
        
 
       ADD_STRUCT --> ADD_IMPL
 
       ADD_IMPL --> ADD_METHOD_ID
 
       ADD_IMPL --> ADD_REQUEST
 
       ADD_IMPL --> ADD_RESPONSE
        
 
       MULT_STRUCT --> MULT_IMPL
 
       ECHO_STRUCT --> ECHO_IMPL
    end
    
    subgraph "Generated by Trait"
        ENCODE_REQ["encode_request()"]
DECODE_REQ["decode_request()"]
ENCODE_RESP["encode_response()"]
DECODE_RESP["decode_response()"]
CALL_METHOD["call()
async fn"]
TRAIT -.->|provides| ENCODE_REQ
 
       TRAIT -.->|provides| DECODE_REQ
 
       TRAIT -.->|provides| ENCODE_RESP
 
       TRAIT -.->|provides| DECODE_RESP
 
       TRAIT -.->|provides| CALL_METHOD
    end
    
    subgraph "Serialization"
        BITCODE["bitcode::encode/decode"]
ENCODE_REQ --> BITCODE
 
       DECODE_REQ --> BITCODE
 
       ENCODE_RESP --> BITCODE
 
       DECODE_RESP --> BITCODE
    end
    
    subgraph "Method ID Generation"
        XXHASH["xxhash-rust"]
METHOD_NAME["Method name string"]
ADD_METHOD_ID -.->|hash of| METHOD_NAME
 
       METHOD_NAME --> XXHASH
    end

The RpcMethodPrebuffered trait provides default implementations for encoding/decoding using bitcode serialization and a call() method that works with any RpcServiceCallerInterface implementation.

Sources : README.md:71-74 Cargo.lock:426-431


Handler Registration Pattern

Server-side handlers are registered on the endpoint using the register_prebuffered() method. Each handler is an async closure that:

  1. Receives raw request bytes and a context object
  2. Decodes the request using the service definition’s decode_request() method
  3. Performs the business logic
  4. Encodes the response using encode_response()
  5. Returns Result<Vec<u8>, RpcServiceError>

Sources : README.md:101-119

graph LR
    subgraph "Handler Registration Flow"
        ENDPOINT["endpoint.register_prebuffered()"]
METHOD_ID["Add::METHOD_ID"]
CLOSURE["async closure"]
ENDPOINT -->|key| METHOD_ID
 
       ENDPOINT -->|value| CLOSURE
    end
    
    subgraph "Handler Execution Flow"
        REQ_BYTES["request_bytes: Vec<u8>"]
CTX["_ctx: Arc<RpcContext>"]
DECODE["Add::decode_request(&request_bytes)"]
LOGIC["Business logic: iter().sum()"]
ENCODE["Add::encode_response(sum)"]
RESULT["Ok(response_bytes)"]
REQ_BYTES --> DECODE
 
       CTX -.->|available but unused| LOGIC
 
       DECODE --> LOGIC
 
       LOGIC --> ENCODE
 
       ENCODE --> RESULT
    end
    
 
   CLOSURE -.->|contains| DECODE

graph TB
    subgraph "Client Call Flow"
        APP_CODE["Application code"]
CALL_METHOD["Add::call(&client, vec![1.0, 2.0, 3.0])"]
APP_CODE --> CALL_METHOD
    end
    
    subgraph "Inside call()
implementation"
        ENCODE["encode_request(params)"]
CALL_PREBUF["client.call_prebuffered(METHOD_ID, bytes)"]
AWAIT["await response"]
DECODE["decode_response(&response_bytes)"]
RETURN["Ok(Response)"]
CALL_METHOD --> ENCODE
 
       ENCODE --> CALL_PREBUF
 
       CALL_PREBUF --> AWAIT
 
       AWAIT --> DECODE
 
       DECODE --> RETURN
    end
    
    subgraph "Client Implementation"
        CLIENT["RpcClient\n(implements RpcServiceCallerInterface)"]
DISPATCHER["RpcDispatcher"]
SESSION["RpcSession"]
CALL_PREBUF -.->|delegates to| CLIENT
 
       CLIENT --> DISPATCHER
 
       DISPATCHER --> SESSION
    end
    
 
   RETURN --> APP_CODE

Client Call Pattern

Client-side calls use the service definition’s call() method, which is automatically provided by the RpcMethodPrebuffered trait:
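
A hedged sketch of a single typed call; the connection step mirrors the RpcClient::new(host, port) description elsewhere in this documentation, and the exact async/fallible shape of each call is an assumption here.

// Connect once, then issue typed calls; encoding, transport, and decoding are
// handled by the generated `call()` method.
let client = RpcClient::new("127.0.0.1".to_string(), port).await?;

let sum = Add::call(&client, vec![1.0, 2.0, 3.0]).await?;
assert_eq!(sum, 6.0);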

The call() method handles all serialization, transport, and deserialization automatically. Application code works with typed Rust structs, never touching raw bytes.

Sources : README.md:145-152


graph TB
    subgraph "Concurrent Calls with join!"
        JOIN["join!()
macro"]
CALL1["Add::call(&client, vec![1.0, 2.0, 3.0])"]
CALL2["Add::call(&client, vec![8.0, 3.0, 7.0])"]
CALL3["Mult::call(&client, vec![8.0, 3.0, 7.0])"]
CALL4["Mult::call(&client, vec![1.5, 2.5, 8.5])"]
CALL5["Echo::call(&client, b\"testing 1 2 3\")"]
CALL6["Echo::call(&client, b\"testing 4 5 6\")"]
JOIN --> CALL1
 
       JOIN --> CALL2
 
       JOIN --> CALL3
 
       JOIN --> CALL4
 
       JOIN --> CALL5
 
       JOIN --> CALL6
    end
    
    subgraph "Multiplexing Layer"
        SESSION["RpcSession"]
STREAM1["Stream ID: 1"]
STREAM2["Stream ID: 2"]
STREAM3["Stream ID: 3"]
STREAM4["Stream ID: 4"]
STREAM5["Stream ID: 5"]
STREAM6["Stream ID: 6"]
SESSION --> STREAM1
 
       SESSION --> STREAM2
 
       SESSION --> STREAM3
 
       SESSION --> STREAM4
 
       SESSION --> STREAM5
 
       SESSION --> STREAM6
    end
    
    subgraph "Single WebSocket Connection"
        WS["Binary frames interleaved\nover single connection"]
end
    
 
   CALL1 -.->|assigned| STREAM1
 
   CALL2 -.->|assigned| STREAM2
 
   CALL3 -.->|assigned| STREAM3
 
   CALL4 -.->|assigned| STREAM4
 
   CALL5 -.->|assigned| STREAM5
 
   CALL6 -.->|assigned| STREAM6
    
 
   SESSION --> WS
    
    subgraph "Result Tuple"
        RESULTS["(res1, res2, res3, res4, res5, res6)"]
end
    
 
   JOIN --> RESULTS

Concurrent Request Handling

The example demonstrates concurrent request handling using Tokio’s join! macro:

Each concurrent call is assigned a unique stream ID, allowing frames to be interleaved over the single WebSocket connection. The join! macro waits for all responses before proceeding.

Sources : README.md:145-152


State Change Handling

The client supports registering a state change handler to track connection lifecycle:

| State | Description | Typical Response |
|---|---|---|
| Connecting | Initial connection attempt | Log connection start |
| Connected | WebSocket established | Enable UI, start heartbeat |
| Disconnected | Connection lost | Disable UI, attempt reconnect |
| Error | Connection error occurred | Log error, notify user |

The handler is registered using set_state_change_handler():
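
A hedged sketch; the callback signature Fn(RpcTransportState) follows the description in this documentation, while whether registration itself is async is not shown here.

// React to connection lifecycle changes without blocking application logic.
client.set_state_change_handler(|state: RpcTransportState| {
    tracing::info!("Transport state changed to: {:?}", state);
});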

This callback-driven pattern enables reactive behavior without blocking the main application logic.

Sources : README.md:139-142


graph LR
    subgraph "Handler Error Flow"
        HANDLER["Handler function"]
DECODE_ERR["decode_request()
fails"]
LOGIC_ERR["Business logic fails"]
ENCODE_ERR["encode_response()
fails"]
HANDLER --> DECODE_ERR
 
       HANDLER --> LOGIC_ERR
 
       HANDLER --> ENCODE_ERR
    end
    
    subgraph "Error Propagation"
        RPC_ERR["Err(RpcServiceError)"]
ENDPOINT["RpcServiceEndpointInterface"]
DISPATCHER["RpcDispatcher"]
TRANSPORT["Binary error frame"]
DECODE_ERR --> RPC_ERR
 
       LOGIC_ERR --> RPC_ERR
 
       ENCODE_ERR --> RPC_ERR
        
 
       RPC_ERR --> ENDPOINT
 
       ENDPOINT --> DISPATCHER
 
       DISPATCHER --> TRANSPORT
    end
    
    subgraph "Client Side"
        CLIENT_CALL["call()
method"]
RESULT["Err(RpcServiceError)"]
APP["Application code"]
TRANSPORT --> CLIENT_CALL
 
       CLIENT_CALL --> RESULT
 
       RESULT --> APP
    end

Error Handling in Handlers

Handlers return Result<Vec<u8>, RpcServiceError>. The framework automatically propagates errors to the client:

Common error scenarios:

  • Malformed request bytes (deserialization failure)
  • Business logic errors (e.g., division by zero, validation failure)
  • Resource errors (e.g., database unavailable)

All errors are serialized and transmitted back to the client as part of the RPC protocol.

Sources : README.md:102-118


Running the Example

To run the complete example application:

  1. Clone the repository
  2. Navigate to the workspace root
  3. Execute the example:
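
For example, from the workspace root (assuming the example binary is exposed under the package name shown above):

cargo run -p example-muxio-ws-rpc-app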

Expected output includes:

  • Server binding to random port
  • Handler registration confirmations
  • Client connection establishment
  • State change callback invocations
  • Successful assertion of all RPC results

The example demonstrates a complete lifecycle:

  • Server starts and binds to a port
  • Handlers are registered for three methods
  • Client connects via WebSocket
  • Six concurrent RPC calls are made
  • All responses are verified
  • The application exits cleanly

Sources : README.md:64-162


Best Practices from the Example

The example application demonstrates several important patterns:

| Pattern | Implementation | Benefit |
|---|---|---|
| Arc-wrapped server | Arc::new(RpcServer::new(None)) | Safe sharing across async tasks |
| Random port binding | TcpListener::bind("127.0.0.1:0") | Avoids port conflicts in testing |
| Concurrent registration | join!() for handler registration | Parallel setup reduces startup time |
| Async handler closures | async move { ... } | Enables async business logic |
| Destructured joins | let (res1, res2, ...) = join!(...) | Clear result assignment |
| State change callbacks | set_state_change_handler() | Reactive connection management |

These patterns can be adapted for production applications with appropriate error handling, logging, and configuration management.

Sources : README.md:83-161


WebSocket RPC Application Example

Purpose and Scope

This document provides a complete walkthrough of the example-muxio-ws-rpc-app demonstration application, showing how to build a functional WebSocket-based RPC system using muxio. The example covers server initialization, handler registration, client connection, and executing RPC calls with shared type-safe service definitions.

For information about defining custom RPC services, see Defining a Simple RPC Service. For details about cross-platform deployment strategies, see Cross-Platform Deployment. For the underlying transport mechanisms, see Tokio RPC Server and Tokio RPC Client.


Application Architecture

The example application demonstrates a complete client-server interaction cycle using three main components:

| Component | Crate | Role |
|---|---|---|
| Service Definitions | example-muxio-rpc-service-definition | Shared RPC method contracts (Add, Mult, Echo) |
| Server | example-muxio-ws-rpc-app (server code) | Tokio-based WebSocket server with handler registration |
| Client | example-muxio-ws-rpc-app (client code) | Tokio-based client making concurrent RPC calls |

Diagram: Example Application Component Structure

Sources :


Shared Service Definitions

The example uses three RPC methods defined in example-muxio-rpc-service-definition:

| Method | Purpose | Request Type | Response Type |
|---|---|---|---|
| Add | Sum a vector of floats | Vec<f64> | f64 |
| Mult | Multiply a vector of floats | Vec<f64> | f64 |
| Echo | Echo back binary data | Vec<u8> | Vec<u8> |

Each method implements the RpcMethodPrebuffered trait, providing:

  • METHOD_ID: Compile-time generated method identifier
  • call(): Client-side invocation function
  • encode_request() / decode_request(): Request serialization
  • encode_response() / decode_response(): Response serialization

Diagram: Service Definition Structure

graph LR
    subgraph "RpcMethodPrebuffered Trait"
        TRAIT["const METHOD_ID: u64\nencode_request\ndecode_request\nencode_response\ndecode_response\ncall"]
end
    
    subgraph "Implementations"
        ADD_IMPL["Add struct\nMETHOD_ID = xxhash('Add')\nRequest: Vec&lt;f64&gt;\nResponse: f64"]
MULT_IMPL["Mult struct\nMETHOD_ID = xxhash('Mult')\nRequest: Vec&lt;f64&gt;\nResponse: f64"]
ECHO_IMPL["Echo struct\nMETHOD_ID = xxhash('Echo')\nRequest: Vec&lt;u8&gt;\nResponse: Vec&lt;u8&gt;"]
end
    
    TRAIT -.implemented by.-> ADD_IMPL
    TRAIT -.implemented by.-> MULT_IMPL
    TRAIT -.implemented by.-> ECHO_IMPL

Sources :


Server Setup and Initialization

The server initialization occurs in a dedicated block within main():

Diagram: Server Initialization Flow

sequenceDiagram
    participant MAIN as main function
    participant LISTENER as TcpListener
    participant SERVER as RpcServer
    participant ENDPOINT as endpoint
    participant TASK as tokio::spawn
    
    MAIN->>LISTENER: TcpListener::bind("127.0.0.1:0")
    Note over MAIN: Binds to random available port
    
    MAIN->>MAIN: tcp_listener_to_host_port
    Note over MAIN: Extracts host and port
    
    MAIN->>SERVER: RpcServer::new(None)
    MAIN->>SERVER: Arc::new(server)
    Note over SERVER: Wrapped in Arc for sharing
    
    MAIN->>ENDPOINT: server.endpoint()
    Note over ENDPOINT: Get handler registration interface
    
    MAIN->>ENDPOINT: register_prebuffered(Add::METHOD_ID, handler)
    MAIN->>ENDPOINT: register_prebuffered(Mult::METHOD_ID, handler)
    MAIN->>ENDPOINT: register_prebuffered(Echo::METHOD_ID, handler)
    Note over ENDPOINT: Handlers registered concurrently with join!
    
    MAIN->>TASK: tokio::spawn(server.serve_with_listener)
    Note over TASK: Server runs in background task

Sources :

TcpListener Binding

The server binds to a random available port using port 0:
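
A hedged sketch of the binding step; the example itself uses the tcp_listener_to_host_port utility named below, while local_addr() is shown here as a plainer stand-in.

// Port 0 asks the OS for any free port, avoiding conflicts between runs.
let listener = tokio::net::TcpListener::bind("127.0.0.1:0").await?;
let local_addr = listener.local_addr()?;
println!("listening on {local_addr}");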

Key Code Entities :

  • TcpListener::bind(): Tokio’s async TCP listener
  • tcp_listener_to_host_port(): Utility to extract host/port from listener

Sources :

RpcServer Creation

The RpcServer is created and wrapped in Arc for shared ownership:
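
A hedged sketch based on the entities listed below:

use std::sync::Arc;

// `None` selects the default server configuration.
let server = Arc::new(RpcServer::new(None));

// The endpoint handle is used to register method handlers.
let endpoint = server.endpoint();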

Key Code Entities :

  • RpcServer::new(None): Creates server with default configuration
  • server.endpoint(): Returns RpcServiceEndpointInterface for handler registration
  • Arc: Enables shared ownership across async tasks

Sources :


graph TB
    ENDPOINT["endpoint\n(RpcServiceEndpointInterface)"]
subgraph "Handler Closure Signature"
        CLOSURE["async move closure\n/request_bytes: Vec&lt;u8&gt;, _ctx/ -&gt; Result"]
end
    
    subgraph "Handler Implementation Steps"
        DECODE["1. Decode request bytes\nMethod::decode_request(&request_bytes)"]
PROCESS["2. Process business logic\n(sum, product, echo)"]
ENCODE["3. Encode response\nMethod::encode_response(result)"]
end
    
 
   ENDPOINT -->|register_prebuffered METHOD_ID, closure| CLOSURE
 
   CLOSURE --> DECODE
 
   DECODE --> PROCESS
 
   PROCESS --> ENCODE
 
   ENCODE -->|Ok response_bytes| ENDPOINT

Handler Registration

Handlers are registered using the register_prebuffered method, which accepts a method ID and an async closure:

Diagram: Handler Registration Pattern

Sources :

Add Handler Example

The Add handler sums a vector of floats:
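
A hedged sketch of the handler: the closure shape (request bytes plus a context argument, returning Result<Vec<u8>, RpcServiceError>) follows the description in this documentation, but the precise signature of register_prebuffered is defined by muxio-rpc-service-endpoint.

endpoint
    .register_prebuffered(Add::METHOD_ID, |request_bytes: Vec<u8>, _ctx| async move {
        // 1. Decode the request using the shared service definition.
        let numbers = Add::decode_request(&request_bytes)?;

        // 2. Business logic: sum the inputs.
        let sum: f64 = numbers.iter().sum();

        // 3. Encode the typed result back into response bytes.
        let response_bytes = Add::encode_response(sum)?;
        Ok(response_bytes)
    })
    .await;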

Key Operations :

  1. Add::decode_request(): Deserializes request bytes into Vec<f64>
  2. iter().sum(): Computes sum using standard iterator methods
  3. Add::encode_response(): Serializes f64 result back to bytes

Sources :

Concurrent Handler Registration

All handlers are registered concurrently using tokio::join!:
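
A hedged sketch of the registration step; add_handler, mult_handler, and echo_handler are placeholders for closures like the one shown above.

// Register all three methods before the server starts accepting connections.
let _ = tokio::join!(
    endpoint.register_prebuffered(Add::METHOD_ID, add_handler),
    endpoint.register_prebuffered(Mult::METHOD_ID, mult_handler),
    endpoint.register_prebuffered(Echo::METHOD_ID, echo_handler),
);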

This ensures all handlers are registered before the server begins accepting connections.

Sources :

Server Task Spawning

The server is spawned into a background task:
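
A hedged sketch; serve_with_listener is named below, and its exact signature and return type are assumptions here.

// Run the accept loop in the background so the rest of main() can continue.
let server_handle = {
    let server = Arc::clone(&server);
    tokio::spawn(async move { server.serve_with_listener(listener).await })
};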

Key Code Entities :

  • tokio::spawn(): Spawns async task on Tokio runtime
  • Arc::clone(&server): Clones Arc reference for task ownership
  • serve_with_listener(): Accepts connections and dispatches to handlers

Sources :


sequenceDiagram
    participant MAIN as main function
    participant CLIENT as RpcClient
    participant HANDLER as state_change_handler
    participant SERVER as RpcServer
    
    MAIN->>MAIN: tokio::time::sleep(200ms)
    Note over MAIN: Wait for server startup
    
    MAIN->>CLIENT: RpcClient::new(host, port)
    CLIENT->>SERVER: WebSocket connection
    SERVER-->>CLIENT: Connection established
    
    MAIN->>CLIENT: set_state_change_handler(callback)
    Note over HANDLER: Callback invoked on state changes
    
    MAIN->>CLIENT: Method::call(&client, params)
    Note over MAIN: Ready to make RPC calls

Client Connection and Configuration

The client establishes a WebSocket connection to the server:

Diagram: Client Initialization Flow

Sources :

RpcClient Creation

Key Code Entities :

  • RpcClient::new(): Creates client and initiates WebSocket connection
  • Parameters: Server host (String) and port (u16)
  • Returns: Result<RpcClient> on successful connection

Sources :

State Change Handling

The client supports optional state change callbacks:

Key Code Entities :

  • set_state_change_handler(): Registers callback for transport state changes
  • RpcTransportState: Enum representing connection states (Connected, Disconnected, etc.)
  • Callback signature: Fn(RpcTransportState)

Sources :


graph LR
    CLIENT["client code"]
CALL_METHOD["Method::call\n(client, params)"]
CALLER_IF["RpcServiceCallerInterface\ncall_prebuffered"]
DISPATCHER["RpcDispatcher\nassign request_id\ntrack pending"]
SESSION["RpcSession\nallocate stream_id\nencode frames"]
WS["WebSocket transport"]
CLIENT --> CALL_METHOD
 
   CALL_METHOD --> CALLER_IF
 
   CALLER_IF --> DISPATCHER
 
   DISPATCHER --> SESSION
 
   SESSION --> WS
    
    WS -.response frames.-> SESSION
    SESSION -.decoded response.-> DISPATCHER
    DISPATCHER -.correlated result.-> CALLER_IF
    CALLER_IF -.deserialized.-> CALL_METHOD
    CALL_METHOD -.return value.-> CLIENT

Making RPC Calls

RPC calls are made using the static call() method on each service definition:

Diagram: RPC Call Execution Pattern

Sources :

Concurrent Call Execution

Multiple calls can be executed concurrently using tokio::join!:
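
A hedged sketch of the concurrent invocation; argument and expected values mirror the ones listed in this documentation.

let (res1, res2, res3, res4, res5, res6) = tokio::join!(
    Add::call(&client, vec![1.0, 2.0, 3.0]),
    Add::call(&client, vec![8.0, 3.0, 7.0]),
    Mult::call(&client, vec![8.0, 3.0, 7.0]),
    Mult::call(&client, vec![1.5, 2.5, 8.5]),
    Echo::call(&client, b"testing 1 2 3".to_vec()),
    Echo::call(&client, b"testing 4 5 6".to_vec()),
);

// Each future resolves independently; responses are correlated by request ID.
assert_eq!(res1?, 6.0);
assert_eq!(res3?, 168.0);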

Key Features :

  • All six calls execute concurrently over the same WebSocket connection
  • Each call gets a unique request ID for response correlation
  • Stream multiplexing allows interleaved responses
  • join! waits for all responses before proceeding

Sources :

Call Syntax and Type Safety

The call() method is generic and type-safe:

| Call | Input Type | Return Type | Implementation |
|---|---|---|---|
| Add::call(&client, vec![1.0, 2.0, 3.0]) | Vec<f64> | Result<f64> | Sums inputs |
| Mult::call(&client, vec![8.0, 3.0, 7.0]) | Vec<f64> | Result<f64> | Multiplies inputs |
| Echo::call(&client, b"test".into()) | Vec<u8> | Result<Vec<u8>> | Echoes input |

Type Safety Guarantees :

  • Compile-time verification of parameter types
  • Compile-time verification of return types
  • Mismatched types result in compilation errors, not runtime errors

Sources :

Result Validation

The example validates all results with assertions:

Sources :


Complete Execution Flow

Diagram: End-to-End Request/Response Flow

Sources :


Running the Example

The example is located in the example-muxio-ws-rpc-app crate and can be executed with:

Expected Output :

[INFO] Transport state changed to: Connected
[INFO] All assertions passed

The example demonstrates:

  1. Server starts on random port
  2. Handlers registered for Add, Mult, Echo
  3. Client connects via WebSocket
  4. Six concurrent RPC calls execute successfully
  5. All responses validated with assertions
  6. Automatic cleanup on completion

Sources :


Key Takeaways

| Concept | Implementation |
|---|---|
| Shared Definitions | Service methods defined once in example-muxio-rpc-service-definition, used by both client and server |
| Type Safety | Compile-time verification of request/response types via RpcMethodPrebuffered trait |
| Concurrency | Multiple RPC calls multiplex over single WebSocket connection |
| Async Handlers | Server handlers are async closures, enabling non-blocking execution |
| State Management | Optional state change callbacks for connection monitoring |
| Zero Boilerplate | Method calls use simple Method::call(client, params) syntax |

This example provides a foundation for building production WebSocket RPC applications. For streaming RPC patterns, see Streaming RPC Calls. For WASM client integration, see WASM RPC Client.

Sources :


Defining a Simple RPC Service

Purpose and Scope

This page provides a step-by-step tutorial for defining RPC services in the muxio framework. It demonstrates how to create a shared service definition crate containing type-safe RPC method contracts that are used by both clients and servers. The tutorial uses the example services Add, Mult, and Echo as practical demonstrations.

For information about implementing server-side handlers that process these requests, see Service Endpoint Interface. For client-side invocation patterns, see Service Caller Interface. For the complete working example application, see WebSocket RPC Application Example.


Overview: Service Definitions as Shared Contracts

Service definitions in muxio are compile-time type-safe contracts shared between clients and servers. By implementing the RpcMethodPrebuffered trait in a common crate, both sides of the communication agree on:

  • The method’s unique identifier (via compile-time xxhash)
  • The input parameter types
  • The output return types
  • The serialization/deserialization logic

This approach eliminates an entire class of runtime errors by catching type mismatches at compile time.

Sources: README.md:48-50 README.md:70-74
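
As a hedged illustration of the compile-time identifier, a u64 method ID can be derived from the method name with a const xxhash function; which hash variant and seed muxio-rpc-service actually uses is not specified in this document.

use xxhash_rust::const_xxh3::xxh3_64;

// Evaluated at compile time: the same name always yields the same u64 ID on
// both the client and the server.
pub const ADD_METHOD_ID: u64 = xxh3_64(b"Add");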


Architecture: Service Definition Flow

The following diagram illustrates how a service definition crate sits between client and server implementations:

Diagram: Service Definition Shared Between Client and Server

graph TB
    subgraph "Shared Service Definition Crate"
        SD["example-muxio-rpc-service-definition"]
AddDef["Add Service\nRpcMethodPrebuffered"]
MultDef["Mult Service\nRpcMethodPrebuffered"]
EchoDef["Echo Service\nRpcMethodPrebuffered"]
SD --> AddDef
 
       SD --> MultDef
 
       SD --> EchoDef
    end
    
    subgraph "Server Implementation"
        Server["RpcServer"]
Endpoint["RpcServiceEndpoint"]
AddHandler["Add::decode_request\nAdd::encode_response"]
MultHandler["Mult::decode_request\nMult::encode_response"]
EchoHandler["Echo::decode_request\nEcho::encode_response"]
Server --> Endpoint
 
       Endpoint --> AddHandler
 
       Endpoint --> MultHandler
 
       Endpoint --> EchoHandler
    end
    
    subgraph "Client Implementation"
        Client["RpcClient / RpcWasmClient"]
Caller["RpcServiceCallerInterface"]
AddCall["Add::call"]
MultCall["Mult::call"]
EchoCall["Echo::call"]
Client --> Caller
 
       Caller --> AddCall
 
       Caller --> MultCall
 
       Caller --> EchoCall
    end
    
    AddDef -.provides.-> AddHandler
    AddDef -.provides.-> AddCall
    
    MultDef -.provides.-> MultHandler
    MultDef -.provides.-> MultCall
    
    EchoDef -.provides.-> EchoHandler
    EchoDef -.provides.-> EchoCall
    
    AddHandler -.uses METHOD_ID.-> AddDef
    MultHandler -.uses METHOD_ID.-> MultDef
    EchoHandler -.uses METHOD_ID.-> EchoDef

Sources: README.md:66-162


Step 1: Create the Shared Service Definition Crate

The service definitions live in a dedicated crate that is referenced by both client and server applications. In the muxio examples, this is example-muxio-rpc-service-definition.

Crate Dependencies

The service definition crate requires these dependencies:

| Dependency | Purpose |
|---|---|
| muxio-rpc-service | Provides RpcMethodPrebuffered trait |
| bitcode | Binary serialization (with derive feature) |
| serde | Required by bitcode for derive macros |

Sources: Inferred from README.md:71-74 extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:1-6


Step 2: Define Input and Output Types

Each RPC method requires input and output types that implement Serialize and Deserialize from bitcode. For prebuffered methods, these types represent the complete request/response payloads.

Example Type Definitions

For the Add service that sums a vector of floating-point numbers:

  • Input: Vec<f64> (the numbers to sum)
  • Output: f64 (the result)

For the Echo service that returns data unchanged:

  • Input: Vec<u8> (arbitrary binary data)
  • Output: Vec<u8> (same binary data)

For the Mult service that multiplies a vector of floating-point numbers:

  • Input: Vec<f64> (the numbers to multiply)
  • Output: f64 (the product)

Sources: README.md:102-118 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:21-27
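
The example services above use primitive types, so no custom structs are needed. Richer methods can define dedicated request/response structs; the sketch below assumes bitcode's derive feature (the example crate may instead rely on bitcode's serde integration), and CreateUserRequest is a hypothetical type used only for illustration.

```rust
// Hypothetical request type for a richer method; not part of the example crate.
#[derive(bitcode::Encode, bitcode::Decode, Debug, PartialEq)]
pub struct CreateUserRequest {
    pub name: String,
    pub age: u32,
}

fn main() {
    let request = CreateUserRequest { name: "alice".to_string(), age: 30 };
    // Round-trip through the same compact binary encoding the RPC layer uses.
    let bytes = bitcode::encode(&request);
    let decoded: CreateUserRequest = bitcode::decode(&bytes).unwrap();
    assert_eq!(decoded, request);
}
```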


Step 3: Implement the RpcMethodPrebuffered Trait

The RpcMethodPrebuffered trait is the core abstraction for defining prebuffered RPC methods. Each service implements this trait to specify its contract.

classDiagram
    class RpcMethodPrebuffered {<<trait>>\n+Input: type\n+Output: type\n+METHOD_NAME: &'static str\n+METHOD_ID: u64\n+encode_request(input) Result~Vec~u8~~\n+decode_request(bytes) Result~Input~\n+encode_response(output) Result~Vec~u8~~\n+decode_response(bytes) Result~Output~}
    
    class Add {+Input = Vec~f64~\n+Output = f64\n+METHOD_NAME = "Add"\n+METHOD_ID = xxhash64("Add")}
    
    class Mult {+Input = Vec~f64~\n+Output = f64\n+METHOD_NAME = "Mult"\n+METHOD_ID = xxhash64("Mult")}
    
    class Echo {+Input = Vec~u8~\n+Output = Vec~u8~\n+METHOD_NAME = "Echo"\n+METHOD_ID = xxhash64("Echo")}
    
    RpcMethodPrebuffered <|.. Add
    RpcMethodPrebuffered <|.. Mult
    RpcMethodPrebuffered <|.. Echo

Trait Structure

Diagram: RpcMethodPrebuffered Trait Implementation Pattern

Required Associated Types and Constants

| Member | Description |
| --- | --- |
| Input | The type of the method's parameters |
| Output | The type of the method's return value |
| METHOD_NAME | A unique string identifier (used for ID generation) |
| METHOD_ID | A compile-time generated u64 via xxhash of METHOD_NAME |

Required Methods

| Method | Purpose |
| --- | --- |
| encode_request(input: Self::Input) | Serialize input parameters to bytes using bitcode |
| decode_request(bytes: &[u8]) | Deserialize bytes to input parameters using bitcode |
| encode_response(output: Self::Output) | Serialize output value to bytes using bitcode |
| decode_response(bytes: &[u8]) | Deserialize bytes to output value using bitcode |

Sources: Inferred from extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:10-21 README.md:102-118
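
The following self-contained sketch mirrors the members listed above using a local stand-in trait; the real RpcMethodPrebuffered trait in muxio-rpc-service has the same conceptual shape, but its exact signatures, error types, and compile-time METHOD_ID generation may differ.

```rust
// Simplified stand-in for RpcMethodPrebuffered; illustrative only.
use std::io;

pub trait PrebufferedMethod {
    type Input;
    type Output;
    const METHOD_NAME: &'static str;
    const METHOD_ID: u64;

    fn encode_request(input: Self::Input) -> io::Result<Vec<u8>>;
    fn decode_request(bytes: &[u8]) -> io::Result<Self::Input>;
    fn encode_response(output: Self::Output) -> io::Result<Vec<u8>>;
    fn decode_response(bytes: &[u8]) -> io::Result<Self::Output>;
}

pub struct Add;

impl PrebufferedMethod for Add {
    type Input = Vec<f64>;
    type Output = f64;
    const METHOD_NAME: &'static str = "Add";
    // In the real crate this is derived from METHOD_NAME at compile time via
    // xxhash; a placeholder literal stands in here.
    const METHOD_ID: u64 = 0x0A11_0000_0000_0001;

    fn encode_request(input: Vec<f64>) -> io::Result<Vec<u8>> {
        Ok(bitcode::encode(&input))
    }
    fn decode_request(bytes: &[u8]) -> io::Result<Vec<f64>> {
        bitcode::decode(bytes)
            .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e.to_string()))
    }
    fn encode_response(output: f64) -> io::Result<Vec<u8>> {
        Ok(bitcode::encode(&output))
    }
    fn decode_response(bytes: &[u8]) -> io::Result<f64> {
        bitcode::decode(bytes)
            .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e.to_string()))
    }
}
```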


sequenceDiagram
    participant Dev as "Developer"
    participant Trait as "RpcMethodPrebuffered Trait"
    participant Compile as "Compile Time"
    participant Runtime as "Runtime"
    
    Note over Dev,Runtime: Service Definition Phase
    Dev->>Trait: Define Add struct
    Dev->>Trait: Set Input = Vec<f64>
    Dev->>Trait: Set Output = f64
    Dev->>Trait: Set METHOD_NAME = "Add"
    
    Compile->>Compile: Generate METHOD_ID via xxhash("Add")
    
    Dev->>Trait: Implement encode_request
    Note right of Trait: Uses bitcode::encode
    Dev->>Trait: Implement decode_request
    Note right of Trait: Uses bitcode::decode
    Dev->>Trait: Implement encode_response
    Note right of Trait: Uses bitcode::encode
    Dev->>Trait: Implement decode_response
    Note right of Trait: Uses bitcode::decode
    
    Note over Dev,Runtime: Usage Phase
    Runtime->>Trait: Call Add::encode_request(vec![1.0, 2.0])
    Trait->>Runtime: Returns Vec<u8> (serialized)
    Runtime->>Trait: Call Add::decode_response(bytes)
    Trait->>Runtime: Returns f64 (deserialized sum)

Step 4: Implementation Example

The following diagram shows the complete implementation flow for a single service:

Diagram: Service Implementation and Usage Flow

Sources: README.md:102-106 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:52-56


Step 5: Using Service Definitions on the Server

Once defined, services are registered on the server using their METHOD_ID constant. The server-side handler receives serialized bytes and uses the service’s decode_request and encode_response methods.

Server Registration Pattern

The server registration follows this pattern:

  1. Access the RpcServiceEndpoint from the server instance
  2. Call register_prebuffered with the service’s METHOD_ID
  3. Provide an async handler that:
    • Decodes the request using ServiceType::decode_request
    • Performs the business logic
    • Encodes the response using ServiceType::encode_response

Diagram: Server-Side Service Registration Flow

Example: Add Service Handler

From the example application README.md:102-106:

  • Handler receives request_bytes: Vec<u8>
  • Calls Add::decode_request(&request_bytes)? to get Vec<f64>
  • Computes sum: request_params.iter().sum()
  • Calls Add::encode_response(sum)? to get response bytes
  • Returns Ok(response_bytes)

Sources: README.md:100-119 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:51-69
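
The registration pattern can be sketched without the real server API as a handler map keyed by METHOD_ID. The snippet below is a self-contained simulation of that flow, not the muxio-tokio-rpc-server interface; ADD_METHOD_ID is a placeholder value.

```rust
// Simulation of register-by-METHOD_ID dispatch; not the muxio server API.
use std::collections::HashMap;

const ADD_METHOD_ID: u64 = 1; // placeholder; real IDs come from xxhash of the method name

type PrebufferedHandler = Box<dyn Fn(&[u8]) -> Result<Vec<u8>, String>>;

fn main() {
    let mut handlers: HashMap<u64, PrebufferedHandler> = HashMap::new();

    // "register_prebuffered": store a handler keyed by the method's ID.
    handlers.insert(
        ADD_METHOD_ID,
        Box::new(|request_bytes: &[u8]| match bitcode::decode::<Vec<f64>>(request_bytes) {
            Ok(params) => {
                // Business logic: sum the decoded parameters.
                let sum: f64 = params.iter().sum();
                Ok(bitcode::encode(&sum))
            }
            Err(e) => Err(e.to_string()),
        }),
    );

    // Simulated dispatch of one incoming request.
    let request_bytes = bitcode::encode(&vec![1.0_f64, 2.0, 3.0]);
    let handler = &handlers[&ADD_METHOD_ID];
    let response_bytes = handler(&request_bytes).expect("handler failed");
    let sum: f64 = bitcode::decode(&response_bytes).expect("decode failed");
    assert_eq!(sum, 6.0);
}
```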


Step 6: Using Service Definitions on the Client

Clients invoke RPC methods using the RpcCallPrebuffered trait, which is automatically implemented for all types that implement RpcMethodPrebuffered.

Client Invocation Pattern

The RpcCallPrebuffered::call method provides a high-level interface:

Diagram: Client-Side RPC Call Flow

Example: Client Call

From the example application README.md:145-152:

  • Call Add::call(&*rpc_client, vec![1.0, 2.0, 3.0])
  • Returns Result<f64, RpcServiceError>
  • The call method handles all encoding, transport, and decoding

Sources: README.md:144-159 extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:49-97


graph TD
    EncodeArgs["Encode Input Arguments"]
CheckSize{"Size >= DEFAULT_SERVICE_MAX_CHUNK_SIZE?"}
SmallPath["Place in rpc_param_bytes\nSingle frame in header"]
LargePath["Place in rpc_prebuffered_payload_bytes\nStreamed after header"]
Transport["Send to RpcDispatcher"]
EncodeArgs --> CheckSize
 
   CheckSize -->|No| SmallPath
 
   CheckSize -->|Yes| LargePath
 
   SmallPath --> Transport
 
   LargePath --> Transport

Step 7: Smart Transport for Large Payloads

The RpcCallPrebuffered implementation includes automatic handling of large argument sets that exceed the default chunk size.

Transport Strategy Decision

Diagram: Automatic Large Payload Handling

| Condition | Strategy | Location |
| --- | --- | --- |
| encoded_args.len() < DEFAULT_SERVICE_MAX_CHUNK_SIZE | Send in header frame | rpc_param_bytes field |
| encoded_args.len() >= DEFAULT_SERVICE_MAX_CHUNK_SIZE | Stream as payload | rpc_prebuffered_payload_bytes field |

This ensures that RPC calls work regardless of argument size without requiring application-level chunking logic.

Sources: extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:30-65 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:229-312
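
A hedged sketch of this branch using local stand-in names; the real RpcCallPrebuffered implementation performs the equivalent check before handing the request to the dispatcher, and the 64 KB value is an assumption taken from the Performance Considerations page.

```rust
// Illustrative size check; struct and constant are stand-ins, not the muxio API.
const DEFAULT_SERVICE_MAX_CHUNK_SIZE: usize = 64 * 1024; // assumed 64 KB

#[derive(Default)]
struct OutboundRequest {
    rpc_param_bytes: Option<Vec<u8>>,               // small args ride in the header frame
    rpc_prebuffered_payload_bytes: Option<Vec<u8>>, // large args are streamed after it
}

fn place_encoded_args(encoded_args: Vec<u8>) -> OutboundRequest {
    if encoded_args.len() < DEFAULT_SERVICE_MAX_CHUNK_SIZE {
        OutboundRequest { rpc_param_bytes: Some(encoded_args), ..Default::default() }
    } else {
        OutboundRequest { rpc_prebuffered_payload_bytes: Some(encoded_args), ..Default::default() }
    }
}

fn main() {
    assert!(place_encoded_args(vec![0; 1_000]).rpc_param_bytes.is_some());
    assert!(place_encoded_args(vec![0; 100_000]).rpc_prebuffered_payload_bytes.is_some());
}
```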


Complete Service Definition Structure

The following table summarizes the complete structure of a service definition:

| Component | Implementation | Example for Add |
| --- | --- | --- |
| Service struct | Zero-sized type | pub struct Add; |
| Input type | Associated type | Vec<f64> |
| Output type | Associated type | f64 |
| Method name | Static string | "Add" |
| Method ID | Compile-time hash | xxhash64("Add") |
| Request encoder | Bitcode serialize | bitcode::encode(input) |
| Request decoder | Bitcode deserialize | bitcode::decode(bytes) |
| Response encoder | Bitcode serialize | bitcode::encode(output) |
| Response decoder | Bitcode deserialize | bitcode::decode(bytes) |

Sources: README.md:71-74 extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:1-6


Testing Service Definitions

Service definitions can be tested end-to-end using the integration test pattern shown in extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:39-142:

  1. Start a real RpcServer with handlers registered
  2. Connect a client (native or WASM)
  3. Invoke services using the call method
  4. Assert on the results
graph TB
    Server["Start RpcServer\non random port"]
Register["Register handlers\nAdd, Mult, Echo"]
Client["Create RpcClient\nor RpcWasmClient"]
CallService["Invoke Add::call"]
Assert["Assert results"]
Server --> Register
 
   Register --> Client
 
   Client --> CallService
 
   CallService --> Assert

Test Pattern

Diagram: End-to-End Service Test Pattern

This pattern validates the complete round-trip: serialization, transport, dispatch, execution, and deserialization.

Sources: extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:39-142 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:229-312
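
For a quick, transport-free check of the serialization contract alone, a plain unit test can round-trip the request and response encodings. This is a simplified sketch and does not replace the networked integration tests cited above.

```rust
// Transport-free round-trip checks of the Add method's encodings.
#[cfg(test)]
mod tests {
    #[test]
    fn add_request_round_trips() {
        let params = vec![1.0_f64, 2.0, 3.0];
        let bytes = bitcode::encode(&params);
        let decoded: Vec<f64> = bitcode::decode(&bytes).expect("decode should succeed");
        assert_eq!(decoded, params);
    }

    #[test]
    fn add_response_round_trips() {
        let bytes = bitcode::encode(&6.0_f64);
        let decoded: f64 = bitcode::decode(&bytes).expect("decode should succeed");
        assert_eq!(decoded, 6.0);
    }
}
```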


Summary

Defining a simple RPC service in muxio requires:

  1. Creating a shared crate for service definitions
  2. Defining input/output types with bitcode serialization
  3. Implementing the RpcMethodPrebuffered trait with encode/decode methods
  4. Using compile-time generated METHOD_ID for registration and dispatch
  5. Registering handlers on the server using the service’s static methods
  6. Invoking methods on the client via the RpcCallPrebuffered trait's call method

This pattern ensures compile-time type safety, eliminates a large class of runtime errors, and enables the same service definitions to work across native and WASM clients without modification.

Sources: README.md:66-162 extensions/muxio-rpc-service-caller/src/prebuffered/traits.rs:1-99 extensions/muxio-wasm-rpc-client/tests/prebuffered_integration_tests.rs:1-312



Advanced Topics

Relevant source files

Purpose and Scope

This section covers advanced usage patterns, optimization techniques, and extension points for the muxio framework. Topics include cross-platform deployment strategies, performance tuning, custom transport implementations, and deep integration with JavaScript environments via WASM.

For basic usage patterns, see Overview. For specific platform implementations, see Platform Implementations. For service definition basics, see Service Definitions.


Cross-Platform Deployment

Deployment Architecture

The muxio framework enables a single service definition to be deployed across multiple runtime environments without code duplication. The key enabler is the runtime-agnostic core design, which separates platform-specific concerns (transport, async runtime) from business logic.

Sources : README.md:48-52 README.md:66-84 extensions structure from Cargo.toml

graph TB
    subgraph "Shared_Service_Contract"
        ServiceDef["example-muxio-rpc-service-definition\nRpcMethodPrebuffered traits\nMETHOD_ID constants\nencode_request/decode_response"]
end
    
    subgraph "Native_Tokio_Deployment"
        NativeApp["Native Application\nRust Binary"]
RpcClient["RpcClient\nArc&lt;TokioMutex&lt;RpcDispatcher&gt;&gt;\ntokio-tungstenite WebSocket"]
RpcServer["RpcServer\nAxum HTTP Server\ntokio::spawn task pool"]
NativeTransport["TCP/IP Network\nNative OS Stack"]
NativeApp -->|implements| ServiceDef
 
       NativeApp -->|uses| RpcClient
 
       NativeApp -->|serves| RpcServer
 
       RpcClient <-->|WebSocket frames| NativeTransport
 
       RpcServer <-->|WebSocket frames| NativeTransport
    end
    
    subgraph "WASM_Browser_Deployment"
        WasmApp["Web Application\nwasm-bindgen Module"]
RpcWasmClient["RpcWasmClient\nMUXIO_STATIC_RPC_CLIENT_REF\nthread_local RefCell"]
JsBridge["static_muxio_write_bytes\nJavaScript Callback\nWebSocket.send"]
BrowserTransport["Browser WebSocket API\nJavaScript Runtime"]
WasmApp -->|implements| ServiceDef
 
       WasmApp -->|uses| RpcWasmClient
 
       RpcWasmClient -->|calls| JsBridge
 
       JsBridge -->|invokes| BrowserTransport
    end
    
 
   NativeTransport <-->|Binary Protocol| BrowserTransport
    
    BuildConfig["Build Configuration\ncargo build --target x86_64\ncargo build --target wasm32"]
BuildConfig -.->|native| NativeApp
 
   BuildConfig -.->|wasm32-unknown-unknown| WasmApp

Shared Service Definition Pattern

Both native and WASM clients use the same service definition crate, ensuring compile-time type safety. The service definition declares methods via the RpcMethodPrebuffered trait, which generates:

  • METHOD_ID - A compile-time u64 constant derived from the method name via xxhash
  • encode_request / decode_request - Request serialization and deserialization using bitcode
  • encode_response / decode_response - Response serialization and deserialization using bitcode

| Component | Native Client | WASM Client | Server |
| --- | --- | --- | --- |
| Service Definition | Shared crate dependency | Shared crate dependency | Shared crate dependency |
| Caller Interface | RpcClient implements RpcServiceCallerInterface | RpcWasmClient implements RpcServiceCallerInterface | N/A |
| Endpoint Interface | Optional (bidirectional) | Optional (bidirectional) | RpcServer exposes RpcServiceEndpointInterface |
| Transport | tokio-tungstenite | JavaScript WebSocket | tokio-tungstenite via Axum |
| Async Runtime | Tokio | Browser event loop | Tokio |

Sources : README.md:50 README.md:72-74

Build Configuration Strategy

The workspace uses Cargo features and target-specific dependencies to enable cross-platform builds:

The muxio-wasm-rpc-client crate uses conditional compilation to ensure WASM-specific dependencies like wasm-bindgen, js-sys, and web-sys are only included for WASM targets.

Sources : README.md:39-40 workspace structure from Cargo.toml


Performance Considerations

Data Flow and Chunking Strategy

The muxio framework employs a multi-layered data transformation pipeline optimized for low latency and minimal memory overhead. Understanding this pipeline is critical for performance tuning.

Sources : README.md architecture overview, bitcode usage from Cargo.lock, chunking from src/rpc/rpc_internals/rpc_session.rs

graph LR
    subgraph "Application_Layer"
        AppData["Rust Struct\nVec&lt;f64&gt; or Custom Type"]
end
    
    subgraph "Serialization_Layer"
        BitcodeEncode["bitcode::encode\nCompact Binary\n~70% smaller than JSON"]
SerializedBytes["Vec&lt;u8&gt;\nSerialized Payload"]
end
    
    subgraph "RPC_Protocol_Layer"
        RpcRequest["RpcRequest struct\nmethod_id: u64\nparams: Vec&lt;u8&gt;"]
RpcHeader["RpcHeader\nmessage_type: MessageType\nflags: HeaderFlags"]
end
    
    subgraph "Chunking_Layer"
        ChunkLogic["DEFAULT_MAX_CHUNK_SIZE\n8KB chunks\nRpcStreamEncoder"]
Chunk1["Chunk 1\n8KB"]
Chunk2["Chunk 2\n8KB"]
ChunkN["Chunk N\nRemaining"]
end
    
    subgraph "Multiplexing_Layer"
        StreamId["stream_id assignment\nRpcSession allocator"]
FrameHeader["Frame Header\n4 bytes: stream_id + flags"]
Frame1["Frame 1"]
Frame2["Frame 2"]
end
    
    subgraph "Transport_Layer"
        WsFrame["WebSocket Binary Frame"]
Network["TCP/IP Network"]
end
    
 
   AppData -->|serialize| BitcodeEncode
 
   BitcodeEncode --> SerializedBytes
 
   SerializedBytes --> RpcRequest
 
   RpcRequest --> RpcHeader
 
   RpcHeader --> ChunkLogic
 
   ChunkLogic --> Chunk1
 
   ChunkLogic --> Chunk2
 
   ChunkLogic --> ChunkN
 
   Chunk1 --> StreamId
 
   Chunk2 --> StreamId
 
   ChunkN --> StreamId
 
   StreamId --> FrameHeader
 
   FrameHeader --> Frame1
 
   FrameHeader --> Frame2
 
   Frame1 --> WsFrame
 
   Frame2 --> WsFrame
 
   WsFrame --> Network

Chunking Configuration

The default chunk size is defined in the core library and affects memory usage, latency, and throughput trade-offs:

| Chunk Size | Memory Impact | Latency | Throughput | Use Case |
| --- | --- | --- | --- | --- |
| 4KB | Lower peak memory | Higher (more frames) | Lower | Memory-constrained environments |
| 8KB (default) | Balanced | Balanced | Balanced | General purpose |
| 16KB | Higher peak memory | Lower (fewer frames) | Higher | High-bandwidth scenarios |
| 32KB+ | High peak memory | Lowest | Highest | Large file transfers |

The chunk size is currently a compile-time constant (DEFAULT_MAX_CHUNK_SIZE). Custom transports can override this by implementing custom encoding logic in their RpcStreamEncoder implementations.
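
A minimal illustration of fixed-size chunking using slice::chunks; this is not the RpcStreamEncoder implementation, it only shows how an 8 KB chunk size splits a payload into frames.

```rust
// Fixed-size chunking illustration; 8 KB taken from the table above.
const DEFAULT_MAX_CHUNK_SIZE: usize = 8 * 1024;

fn main() {
    let payload = vec![0u8; 20_000];
    let chunks: Vec<&[u8]> = payload.chunks(DEFAULT_MAX_CHUNK_SIZE).collect();
    // 20 000 bytes -> two full 8 KB chunks plus a 3 616-byte remainder.
    assert_eq!(chunks.len(), 3);
    assert_eq!(chunks[2].len(), 20_000 - 2 * DEFAULT_MAX_CHUNK_SIZE);
}
```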

Sources : src/rpc/rpc_internals/rpc_session.rs frame chunking logic

Prebuffering vs Streaming Trade-offs

The framework supports two RPC invocation patterns, each with distinct performance characteristics:

Prebuffered Pattern (RpcMethodPrebuffered):

  • Entire request payload buffered in memory before processing
  • Entire response payload buffered before returning
  • Lower latency for small payloads (< 8KB)
  • Simpler error handling (atomic success/failure)
  • Example: README.md:102-118 handler registrations

Streaming Pattern (dynamic channels):

  • Incremental processing via bounded_channel or unbounded_channel
  • Memory usage proportional to channel buffer size
  • Enables processing before full payload arrives
  • Supports backpressure via bounded channels
  • Required for payloads > available memory

| Scenario | Recommended Pattern | Rationale |
| --- | --- | --- |
| Small JSON-like data (< 8KB) | Prebuffered | Single allocation, minimal overhead |
| Medium data (8KB - 1MB) | Prebuffered or Streaming | Depends on memory constraints |
| Large data (> 1MB) | Streaming | Prevents OOM, enables backpressure |
| Real-time data feeds | Streaming | Continuous processing required |
| File uploads/downloads | Streaming | Predictable memory usage |

Sources : README.md:102-118 prebuffered examples, streaming concepts from architecture overview

Smart Transport Strategy for Large Payloads

For payloads exceeding several megabytes, consider implementing a hybrid approach:

  1. Send small metadata message via muxio RPC
  2. Transfer large payload via alternative channel (HTTP multipart, object storage presigned URL)
  3. Send completion notification via muxio RPC

This pattern avoids WebSocket frame size limitations and allows specialized optimization for bulk data transfer while maintaining RPC semantics for control flow.

Example Flow:

Client -> Server: UploadRequest { file_id: "abc", size: 500MB }
Server -> Client: UploadResponse { presigned_url: "https://..." }
Client -> Storage: PUT to presigned_url (outside muxio)
Client -> Server: UploadComplete { file_id: "abc" }
Server -> Client: ProcessingResult { ... }

Sources : Design patterns implied by README.md:46-47 low-latency focus


Extending the Framework

graph TB
    subgraph "Core_Traits"
        CallerInterface["RpcServiceCallerInterface\nasync fn call_prebuffered\nasync fn call_streaming"]
EndpointInterface["RpcServiceEndpointInterface\nasync fn register_prebuffered\nasync fn register_streaming"]
end
    
    subgraph "Transport_Abstraction"
        ReadBytes["read_bytes callback\nfn(&apos;static [u8])"]
WriteBytes["write_bytes implementation\nfn send(&self Vec&lt;u8&gt;)"]
end
    
    subgraph "Provided_Implementations"
        RpcClient["RpcClient\nTokio + WebSocket"]
RpcWasmClient["RpcWasmClient\nwasm-bindgen bridge"]
RpcServer["RpcServer\nAxum + WebSocket"]
end
    
    subgraph "Custom_Implementation_Example"
        CustomTransport["CustomRpcClient\nYour transport layer"]
CustomDispatcher["Arc&lt;Mutex&lt;RpcDispatcher&gt;&gt;\nRequest correlation"]
CustomSession["RpcSession\nStream multiplexing"]
CustomSend["Custom send_bytes\ne.g. UDP, IPC, gRPC"]
CustomTransport -->|owns| CustomDispatcher
 
       CustomDispatcher -->|owns| CustomSession
 
       CustomTransport -->|implements| CustomSend
    end
    
    RpcClient -.implements.-> CallerInterface
    RpcWasmClient -.implements.-> CallerInterface
    RpcServer -.implements.-> EndpointInterface
    
    CustomTransport -.implements.-> CallerInterface
 
   CustomTransport -->|uses| ReadBytes
 
   CustomTransport -->|uses| WriteBytes
    
    CallerInterface -.requires.-> ReadBytes
    CallerInterface -.requires.-> WriteBytes

Extension Points and Custom Transports

The muxio framework exposes several well-defined extension points for custom implementations:

Sources : README.md:48 RpcServiceCallerInterface description, extensions/muxio-rpc-service-caller/src/caller_interface.rs extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs

Implementing a Custom Transport

To create a custom transport (e.g., for UDP, Unix domain sockets, or custom protocols), implement the following pattern:

Required Components:

  1. Transport Handler - Manages the underlying I/O
  2. RpcDispatcher - Handles request correlation (reuse from core)
  3. RpcSession - Handles stream multiplexing (reuse from core)
  4. Trait Implementation - Implements RpcServiceCallerInterface

Key Integration Points:

| Component | Responsibility | Implementation Required |
| --- | --- | --- |
| read_bytes callback | Feed received bytes to dispatcher | Yes - transport-specific |
| write_bytes function | Send frames to network | Yes - transport-specific |
| RpcDispatcher::call() | Initiate RPC requests | No - use core implementation |
| RpcDispatcher::read() | Process incoming frames | No - use core implementation |
| State management | Track connection lifecycle | Yes - transport-specific |

Example Structure:

custom_transport/
├── src/
│   ├── lib.rs
│   ├── custom_client.rs       # Implements RpcServiceCallerInterface
│   ├── custom_transport.rs    # Transport-specific I/O
│   └── custom_framing.rs      # Adapts RpcSession to transport

Reference implementations: extensions/muxio-tokio-rpc-client/src/rpc_client.rs for async pattern, extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs for callback-driven pattern.

Sources : README.md:35-36 runtime agnostic design, extensions/muxio-tokio-rpc-client/src/rpc_client.rs client structure

Runtime-Specific Adapters

The core library’s non-async, callback-driven design enables integration with diverse runtime environments:

Tokio Integration Pattern:

WASM Integration Pattern:

Custom Single-Threaded Runtime:

  • Wrap RpcDispatcher in Rc<RefCell<_>>
  • Process read_bytes on main thread
  • Use callback-based async pattern
  • No thread spawning required

Custom Multi-Threaded Runtime:

  • Wrap RpcDispatcher in Arc<StdMutex<_>>
  • Create thread pool for request handlers
  • Use channels for cross-thread communication
  • Example pattern from Tokio implementation

Sources : DRAFT.md:48-52 runtime model description, extensions/muxio-tokio-rpc-client/src/rpc_client.rs extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs
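
The sketch below shows the two wrapping patterns side by side, using a stand-in Dispatcher type in place of the real RpcDispatcher.

```rust
// `Dispatcher` is a stand-in used only to show the wrapping patterns above.
use std::cell::RefCell;
use std::rc::Rc;
use std::sync::{Arc, Mutex};

struct Dispatcher {
    bytes_seen: usize,
}

impl Dispatcher {
    fn read_bytes(&mut self, bytes: &[u8]) {
        self.bytes_seen += bytes.len();
    }
}

fn main() {
    // Single-threaded runtime (e.g. WASM): Rc<RefCell<_>> gives shared,
    // interior-mutable access without synchronization overhead.
    let single = Rc::new(RefCell::new(Dispatcher { bytes_seen: 0 }));
    single.borrow_mut().read_bytes(&[1, 2, 3]);

    // Multi-threaded runtime: Arc<Mutex<_>> lets a read loop on another
    // thread share the same dispatcher with callers.
    let shared = Arc::new(Mutex::new(Dispatcher { bytes_seen: 0 }));
    let handle = {
        let shared = Arc::clone(&shared);
        std::thread::spawn(move || shared.lock().unwrap().read_bytes(&[4, 5, 6]))
    };
    handle.join().unwrap();

    assert_eq!(single.borrow().bytes_seen, 3);
    assert_eq!(shared.lock().unwrap().bytes_seen, 3);
}
```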


graph TB
    subgraph "Rust_WASM_Module"
        WasmApp["Application Code\ncalls RPC methods"]
RpcWasmClient["RpcWasmClient\nimplements RpcServiceCallerInterface"]
StaticClient["MUXIO_STATIC_RPC_CLIENT_REF\nthread_local RefCell"]
Dispatcher["RpcDispatcher\nrequest correlation"]
Session["RpcSession\nstream multiplexing"]
WasmApp -->|uses| RpcWasmClient
 
       RpcWasmClient -->|stores in| StaticClient
 
       RpcWasmClient -->|owns| Dispatcher
 
       Dispatcher -->|owns| Session
    end
    
    subgraph "FFI_Boundary"
        WriteBytes["#[wasm_bindgen]\nstatic_muxio_write_bytes\nfn(Vec&lt;u8&gt;)"]
JsCallback["JavaScript Callback\nwindow.muxioWriteBytes\n= function(bytes)"]
WriteBytes -->|invokes| JsCallback
    end
    
    subgraph "JavaScript_Runtime"
        WebSocket["WebSocket Instance\nws.send(bytes)"]
EventLoop["Browser Event Loop"]
OnMessage["ws.onmessage handler"]
ReadBytes["Read Path\ncalls Rust static_muxio_read_bytes"]
JsCallback -->|calls| WebSocket
 
       WebSocket -->|send| EventLoop
 
       EventLoop -->|receive| OnMessage
 
       OnMessage -->|invokes| ReadBytes
    end
    
    subgraph "Rust_Read_Path"
        StaticReadFn["#[wasm_bindgen]\nstatic_muxio_read_bytes\nfn(&[u8])"]
DispatcherRead["dispatcher.read()\nframe decoding"]
ReadBytes -->|calls| StaticReadFn
 
       StaticReadFn -->|delegates to| DispatcherRead
    end
    
 
   Session -->|generates frames| WriteBytes
 
   DispatcherRead -->|delivers responses| WasmApp

JavaScript and WASM Integration

WASM Bridge Architecture

The WASM client integrates with JavaScript through a minimal FFI bridge that passes byte arrays between Rust and JavaScript:

Sources : extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs README.md:52 FFI description

Static Client Pattern

The WASM client uses a static singleton pattern due to WASM’s limitations with owned callbacks:

Problem : JavaScript callbacks cannot capture owned Rust data (no 'static lifetime guarantees)

Solution : Store client in thread-local static storage

Key Characteristics:

  • thread_local! ensures single-threaded access (WASM is single-threaded)
  • RefCell provides interior mutability
  • Option allows initialization after module load
  • All public API functions access the static client

Sources : extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:10-12 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs
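
A minimal, platform-neutral sketch of this pattern using a stand-in client type; in the real crate the stored value is an initialized RpcWasmClient held in MUXIO_STATIC_RPC_CLIENT_REF, and the access points are #[wasm_bindgen] exports. Names here are illustrative.

```rust
// Stand-in for the static singleton pattern; not the real RpcWasmClient.
use std::cell::RefCell;

struct WasmClientStandIn {
    bytes_received: usize,
}

thread_local! {
    // Option allows initialization after the module is loaded.
    static STATIC_CLIENT: RefCell<Option<WasmClientStandIn>> = RefCell::new(None);
}

// In the real crate this would be a #[wasm_bindgen] export invoked when the
// JavaScript WebSocket opens.
fn init_client() {
    STATIC_CLIENT.with(|slot| {
        *slot.borrow_mut() = Some(WasmClientStandIn { bytes_received: 0 });
    });
}

// In the real crate this would be the #[wasm_bindgen] export invoked from the
// WebSocket onmessage handler.
fn on_bytes_received(bytes: &[u8]) {
    STATIC_CLIENT.with(|slot| {
        let mut guard = slot.borrow_mut();
        if let Some(client) = guard.as_mut() {
            client.bytes_received += bytes.len();
        }
    });
}

fn main() {
    init_client();
    on_bytes_received(&[0, 1, 2]);
    STATIC_CLIENT.with(|slot| {
        assert_eq!(slot.borrow().as_ref().unwrap().bytes_received, 3);
    });
}
```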

JavaScript Interop Patterns

The JavaScript side must implement a minimal bridge to complete the integration:

Required JavaScript Setup:

| Function | Purpose | Implementation |
| --- | --- | --- |
| window.muxioWriteBytes | Rust -> JS data transfer | Callback that sends bytes via WebSocket |
| static_muxio_read_bytes() | JS -> Rust data transfer | WASM export invoked from onmessage |
| static_muxio_create_client() | Initialize WASM client | WASM export called on WebSocket open |
| static_muxio_handle_state_change() | Connection lifecycle | WASM export called on WebSocket state changes |

Example JavaScript Integration:

Sources : extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs architecture, README.md:52 wasm-bindgen bridge description

Memory Management Across FFI Boundary

The WASM FFI boundary requires careful memory management to prevent leaks and use-after-free issues:

Rust -> JavaScript (Write Path):

  • Vec<u8> ownership transferred to JavaScript via wasm-bindgen
  • JavaScript holds Uint8Array view into WASM linear memory
  • Critical : JavaScript must not retain references after async operations
  • Data copied by WebSocket.send(), safe to release

JavaScript -> Rust (Read Path):

  • JavaScript creates Uint8Array from WebSocket data
  • Passes slice reference to Rust via wasm-bindgen
  • Rust copies data into owned structures (Vec<u8>)
  • JavaScript can release buffer after function returns

Best Practices:

  • Always copy data across FFI boundary, never share references
  • Use wasm-bindgen type conversions (Vec<u8>, &[u8])
  • Avoid storing JavaScript arrays in Rust (lifetime issues)
  • Avoid storing Rust pointers in JavaScript (invalidation risk)

Sources : extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs implementation, wasm-bindgen safety patterns

WASM Build and Deployment

Building and deploying the WASM client requires specific toolchain configuration:

Build Steps:

Integration into Web Application:

Sources : extensions/muxio-wasm-rpc-client/ structure, WASM build patterns from Cargo.toml target configuration



Cross-Platform Deployment

Relevant source files

Purpose and Scope

This page explains strategies for deploying muxio-based RPC services across multiple runtime environments using a single shared service definition. It covers how to structure projects, configure builds, and deploy the same business logic to native applications (using Tokio) and web browsers (using WebAssembly).

For details on creating service definitions themselves, see Creating Service Definitions. For information on the WASM JavaScript bridge architecture, see JavaScript and WASM Integration. For runtime-specific implementation details, see Tokio RPC Client and WASM RPC Client.


Cross-Platform Architecture Overview

Muxio achieves cross-platform compatibility through a layered architecture where the core multiplexing and RPC logic remain platform-agnostic, while platform-specific extensions provide concrete implementations for different environments.

graph TB
    subgraph "Shared Business Logic"
        APP["Application Code\nType-safe RPC calls"]
SERVICE_DEF["Service Definition Crate\nRpcMethodPrebuffered traits\nMETHOD_ID constants"]
end
    
    subgraph "Abstraction Layer"
        CALLER_INTERFACE["RpcServiceCallerInterface\nPlatform-agnostic trait"]
ENDPOINT_INTERFACE["RpcServiceEndpointInterface\nHandler registration"]
end
    
    subgraph "Native Platform"
        TOKIO_CLIENT["RpcClient\nArc-based lifecycle\ntokio-tungstenite"]
TOKIO_SERVER["RpcServer\nAxum + WebSocket"]
end
    
    subgraph "Web Platform"
        WASM_CLIENT["RpcWasmClient\nwasm-bindgen\nJS WebSocket bridge"]
end
    
    subgraph "Core Layer"
        MUXIO_CORE["muxio core\nCallback-driven\nNo async runtime"]
RPC_FRAMEWORK["muxio-rpc-service\nMethod ID generation\nbitcode serialization"]
end
    
 
   APP --> SERVICE_DEF
 
   APP --> CALLER_INTERFACE
 
   SERVICE_DEF --> CALLER_INTERFACE
    
    CALLER_INTERFACE -.implemented by.-> TOKIO_CLIENT
    CALLER_INTERFACE -.implemented by.-> WASM_CLIENT
    
 
   TOKIO_CLIENT --> RPC_FRAMEWORK
 
   WASM_CLIENT --> RPC_FRAMEWORK
 
   TOKIO_SERVER --> ENDPOINT_INTERFACE
    
 
   RPC_FRAMEWORK --> MUXIO_CORE
 
   ENDPOINT_INTERFACE --> RPC_FRAMEWORK

Platform Independence Strategy

Sources : README.md:35-41 README.md:48-49 Cargo.lock:426-431 Cargo.lock:898-915 Cargo.lock:935-953


Shared Service Definition Pattern

The foundation of cross-platform deployment is a shared service definition crate that contains no platform-specific code. This crate defines the RPC methods, request/response types, and method identifiers that both client and server implementations depend on.

Service Definition Crate Structure

| Component | Purpose | Example |
| --- | --- | --- |
| RPC Method Traits | Define method signatures and serialization | impl RpcMethodPrebuffered for Add |
| Method ID Constants | Compile-time generated identifiers | Add::METHOD_ID |
| Request/Response Types | Shared data structures | Vec<f64> parameters |
| Serialization Logic | Encoding/decoding with bitcode | encode_request(), decode_response() |

Sources : README.md:50-51 README.md:71-74 Cargo.lock:426-431


Platform-Specific Client Implementations

While service definitions remain identical, client implementations differ based on the target platform. Both implementations satisfy the same RpcServiceCallerInterface trait, enabling application code to remain platform-agnostic.

Native (Tokio) Client

The muxio-tokio-rpc-client crate provides a native Rust client implementation using:

  • Async runtime : Tokio for concurrent I/O
  • Transport : tokio-tungstenite for WebSocket connections
  • Lifecycle : Arc<RpcClient> for shared ownership across async tasks
  • State management : Background task for connection monitoring

Sources : Cargo.lock:898-915 README.md:75-77 README.md:137-142

WebAssembly Client

The muxio-wasm-rpc-client crate provides a browser-compatible client using:

  • Platform : WebAssembly compiled from Rust
  • Transport : JavaScript WebSocket APIs via wasm-bindgen
  • Lifecycle : Thread-local static reference (MUXIO_STATIC_RPC_CLIENT_REF)
  • Bridge : static_muxio_write_bytes() function for JS-to-Rust communication

Sources : Cargo.lock:935-953 README.md:40-41 README.md:52-53

Unified Interface Usage

Application code can use the same RPC calling pattern regardless of platform:

Sources : README.md:145-152


graph TB
    subgraph "Core Layer"
        MUXIO["muxio\nno_std compatible"]
RPC_SERVICE["muxio-rpc-service\nplatform agnostic"]
end
    
    subgraph "Framework Layer"
        CALLER["muxio-rpc-service-caller"]
ENDPOINT["muxio-rpc-service-endpoint"]
end
    
    subgraph "Native Extensions"
        TOKIO_CLIENT["muxio-tokio-rpc-client\ntarget: native"]
TOKIO_SERVER["muxio-tokio-rpc-server\ntarget: native"]
end
    
    subgraph "WASM Extensions"
        WASM_CLIENT["muxio-wasm-rpc-client\ntarget: wasm32-unknown-unknown"]
end
    
    subgraph "Application Layer"
        SERVICE_DEF["example-muxio-rpc-service-definition\nplatform agnostic"]
EXAMPLE_APP["example-muxio-ws-rpc-app\nnative only"]
end
    
 
   RPC_SERVICE --> MUXIO
 
   CALLER --> RPC_SERVICE
 
   ENDPOINT --> RPC_SERVICE
    
 
   TOKIO_CLIENT --> CALLER
 
   TOKIO_SERVER --> ENDPOINT
 
   WASM_CLIENT --> CALLER
    
 
   SERVICE_DEF --> RPC_SERVICE
 
   EXAMPLE_APP --> SERVICE_DEF
 
   EXAMPLE_APP --> TOKIO_CLIENT
 
   EXAMPLE_APP --> TOKIO_SERVER

Cargo Workspace Configuration

Muxio uses a Cargo workspace to organize crates by layer and platform target. This structure enables selective compilation based on target architecture.

Workspace Structure

Dependency Requirements by Platform :

| Crate | Native | WASM | Notes |
| --- | --- | --- | --- |
| muxio | Yes | Yes | Core, no async required |
| muxio-rpc-service | Yes | Yes | Platform-agnostic serialization |
| muxio-tokio-rpc-client | Yes | No | Requires Tokio runtime |
| muxio-tokio-rpc-server | Yes | No | Requires Tokio + Axum |
| muxio-wasm-rpc-client | No | Yes | Requires wasm-bindgen |

Sources : Cargo.lock:830-839 Cargo.lock:858-867 Cargo.lock:898-915 Cargo.lock:918-932 Cargo.lock:935-953


Build Targets and Compilation

Cross-platform deployment requires configuring separate build targets for native and WASM outputs.

Native Build Configuration

Native applications compile with the standard Rust toolchain:

Key dependencies for native builds :

  • tokio with full features
  • tokio-tungstenite for WebSocket transport
  • axum for HTTP server (server-side only)

Sources : Cargo.lock:1417-1432 Cargo.lock:1446-1455 Cargo.lock:80-114

WebAssembly Build Configuration

WASM applications require the wasm32-unknown-unknown target:

Key dependencies for WASM builds :

  • wasm-bindgen for Rust-JavaScript interop
  • wasm-bindgen-futures for async/await in WASM
  • js-sys for JavaScript standard library access
  • No Tokio or native I/O dependencies

Sources : Cargo.lock:1637-1646 Cargo.lock:1663-1673 Cargo.lock:745-752


Deployment Patterns

Pattern 1: Unified Service Definition

Create a dedicated crate for service definitions that contains no platform-specific code:

my-rpc-services/
├── Cargo.toml          # Only depend on muxio-rpc-service + bitcode
└── src/
    ├── lib.rs
    └── methods/
        ├── user_auth.rs
        ├── data_sync.rs
        └── notifications.rs

Dependencies :

This crate can be used by all platforms.

Sources : Cargo.lock:426-431

Pattern 2: Platform-Specific Applications

Structure application code to depend on platform-appropriate client implementations:

my-project/
├── shared-services/         # Platform-agnostic service definitions
├── native-app/              # Tokio-based desktop/server app
│   ├── Cargo.toml           # Depends on muxio-tokio-rpc-client
│   └── src/
├── wasm-app/                # Browser-based web app
│   ├── Cargo.toml           # Depends on muxio-wasm-rpc-client
│   └── src/
└── server/                  # Backend server
    ├── Cargo.toml           # Depends on muxio-tokio-rpc-server
    └── src/

Pattern 3: Conditional Compilation

Use Cargo features to conditionally compile platform-specific code:

Application code uses conditional compilation:
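
A small sketch of the Rust side of this pattern: the cfg attributes are standard Rust, while the comments indicate where platform-specific client construction would go. The function name is an assumption for illustration.

```rust
// Conditional compilation by target architecture.
#[cfg(target_arch = "wasm32")]
fn platform_name() -> &'static str {
    // Compiled only for wasm32-unknown-unknown builds; this is where code
    // would construct an RpcWasmClient.
    "wasm"
}

#[cfg(not(target_arch = "wasm32"))]
fn platform_name() -> &'static str {
    // Compiled only for native builds; this is where code would construct a
    // Tokio-based RpcClient.
    "native"
}

fn main() {
    println!("running on the {} platform", platform_name());
}
```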

Sources : Cargo.lock:434-449


sequenceDiagram
    participant JS as "JavaScript Host"
    participant WS as "WebSocket"
    participant WASM as "WASM Module"
    participant CLIENT as "RpcWasmClient"
    
    Note over JS,CLIENT: Initialization
    JS->>WASM: Import WASM module
    WASM->>CLIENT: Initialize static client
    JS->>WS: Create WebSocket connection
    
    Note over JS,CLIENT: Outbound RPC Call
    CLIENT->>WASM: Emit bytes via callback
    WASM->>JS: Call write_bytes_to_js()
    JS->>WS: WebSocket.send(bytes)
    
    Note over JS,CLIENT: Inbound Response
    WS->>JS: onmessage event
    JS->>WASM: static_muxio_write_bytes(bytes)
    WASM->>CLIENT: Process response
    CLIENT->>CLIENT: Resolve awaited call

WebAssembly-Specific Considerations

JavaScript Bridge Requirements

WASM clients require a thin JavaScript layer to handle WebSocket communication:

Key Functions :

  • static_muxio_write_bytes(bytes: &[u8]): Entry point for JavaScript to pass received bytes to WASM
  • write_bytes_to_js(): Callback function exposed to WASM for outbound data

Sources : README.md:52-53

Memory Management

WASM clients use different ownership patterns than native clients:

| Aspect | Native (Tokio) | WASM |
| --- | --- | --- |
| Client storage | Arc<RpcClient> | thread_local! RefCell<RpcWasmClient> |
| Dispatcher mutex | TokioMutex | StdMutex |
| Task spawning | tokio::spawn() | Direct execution, no spawning |
| Async runtime | Tokio multi-threaded | Single-threaded, promise-based |

Sources : Cargo.lock:898-915 Cargo.lock:935-953

Build Optimization for WASM

Optimize WASM builds for size:

graph TB
    subgraph "Server Process"
        LISTENER["TcpListener\nBind to port"]
SERVER["Arc<RpcServer>"]
ENDPOINT["RpcServiceEndpoint\nHandler registry"]
HANDLERS["Registered handlers"]
end
    
    subgraph "Per-Connection"
        WS_UPGRADE["WebSocket upgrade"]
CONNECTION["Connection handler"]
DISPATCHER["RpcDispatcher"]
SESSION["RpcSession"]
end
    
 
   LISTENER --> WS_UPGRADE
 
   WS_UPGRADE --> CONNECTION
 
   SERVER --> ENDPOINT
 
   ENDPOINT --> HANDLERS
 
   CONNECTION --> DISPATCHER
 
   DISPATCHER --> SESSION
 
   HANDLERS --> DISPATCHER

Additional size reduction via wasm-opt:


Server Deployment Considerations

While clients differ by platform, servers typically run in native environments using Tokio.

Server Architecture

Deployment Checklist :

  • Single server binary serves all client types (native and WASM)
  • WebSocket endpoint accessible to both platforms
  • Same binary protocol used for all connections
  • Service handlers registered once, used by all clients

Sources : README.md:83-128


Integration Example

The following example shows how shared service definitions enable identical calling patterns across platforms:

Shared Service Definition

Native Client Usage

WASM Client Usage

Server Implementation

Sources : README.md:70-161


Benefits of Cross-Platform Deployment

| Benefit | Description | Implementation |
| --- | --- | --- |
| Code Reuse | Write business logic once | Shared service definition crate |
| Type Safety | Compile-time API contract | RpcMethodPrebuffered trait |
| Binary Efficiency | Same protocol across platforms | bitcode serialization |
| Single Server | One backend serves all clients | Platform-agnostic RpcServer |
| Flexibility | Easy to add new platforms | Implement RpcServiceCallerInterface |

Sources : README.md:48-49 README.md:50-51



Performance Considerations

Relevant source files

Purpose and Scope

This document covers performance characteristics, optimization strategies, and trade-offs in the muxio RPC framework. Topics include binary protocol efficiency, chunking strategies, payload size management, prebuffering versus streaming patterns, and memory management considerations.

For general architecture and design principles, see Design Philosophy. For detailed information about streaming RPC patterns, see Streaming RPC Calls. For cross-platform deployment strategies, see Cross-Platform Deployment.


Binary Protocol Efficiency

The muxio framework is designed for low-overhead communication through several architectural decisions:

Compact Binary Serialization

The framework uses bitcode for serialization instead of text-based formats like JSON. This provides:

  • Smaller payload sizes : Binary encoding reduces network transfer costs
  • Faster encoding/decoding : No string parsing or formatting overhead
  • Type safety : Compile-time verification of serialized structures
  • Zero schema overhead : No field names transmitted in messages

The serialization occurs at the RPC service definition layer, where RpcMethodPrebuffered::encode_request and RpcMethodPrebuffered::decode_response handle type conversion.

Schemaless Framing Protocol

The underlying framing protocol is schema-agnostic, meaning:

  • No metadata about message structure is transmitted
  • Frame headers contain only essential routing information (stream ID, flags)
  • Method identification uses 64-bit xxhash values computed at compile time
  • Response correlation uses numeric request IDs

This minimalist approach reduces per-message overhead while maintaining full type safety through shared service definitions.



Chunking and Payload Size Management

DEFAULT_SERVICE_MAX_CHUNK_SIZE

The framework defines a constant chunk size used for splitting large payloads:
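
A sketch of the constant's likely shape; the 64 KB value is taken from the rationale table below, and the exact declaration in the source may differ.

```rust
// Assumed declaration; value inferred from the 64 KB rationale below.
pub const DEFAULT_SERVICE_MAX_CHUNK_SIZE: usize = 64 * 1024; // 64 KB per chunk
```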

This value represents the maximum size of a single frame’s payload. Any data exceeding this size is automatically chunked by the RpcDispatcher and RpcSession layers.

Rationale for 64 KB chunks:

| Factor | Consideration |
| --- | --- |
| WebSocket compatibility | Many WebSocket implementations handle 64 KB frames efficiently |
| Memory footprint | Limits per-stream buffer requirements |
| Latency vs throughput | Balances sending small chunks quickly vs fewer total frames |
| TCP segment alignment | Aligns reasonably with typical TCP maximum segment sizes |

Smart Transport Strategy for Large Payloads

The framework implements an adaptive strategy for transmitting RPC arguments based on their encoded size:

Small Payload Path (< 64 KB):

flowchart TB
    EncodeArgs["RpcCallPrebuffered::call\nEncode input arguments"]
CheckSize{"encoded_args.len() >=\nDEFAULT_SERVICE_MAX_CHUNK_SIZE?"}
SmallPath["Small payload path:\nSet rpc_param_bytes\nHeader contains full args"]
LargePath["Large payload path:\nSet rpc_prebuffered_payload_bytes\nStreamed after header"]
Dispatcher["RpcDispatcher::call\nCreate RpcRequest"]
Session["RpcSession::write_bytes\nChunk if needed"]
Transport["WebSocket transport"]
EncodeArgs --> CheckSize
 
   CheckSize -->|< 64 KB| SmallPath
 
   CheckSize -->|>= 64 KB| LargePath
 
   SmallPath --> Dispatcher
 
   LargePath --> Dispatcher
 
   Dispatcher --> Session
 
   Session --> Transport
    
    style CheckSize fill:#f9f9f9
    style SmallPath fill:#f0f0f0
    style LargePath fill:#f0f0f0

The encoded arguments fit in the rpc_param_bytes field of the RpcRequest structure. This field is transmitted as part of the initial request header frame, minimizing round-trips.

Large Payload Path (>= 64 KB):

The encoded arguments are placed in rpc_prebuffered_payload_bytes. The RpcDispatcher automatically chunks this data into multiple frames, each with its own stream ID and sequence flags.

This prevents request header frames from exceeding transport limitations while ensuring arguments of any size can be transmitted.



Prebuffering vs Streaming Trade-offs

The framework provides two distinct patterns for RPC calls, each with different performance characteristics:

Prebuffered RPC Pattern

Characteristics:

  • Entire request payload buffered in memory before transmission begins
  • Entire response payload buffered before processing begins
  • Uses RpcCallPrebuffered trait and call_rpc_buffered method
  • Set is_finalized: true on RpcRequest

Performance implications:

| Aspect | Impact |
| --- | --- |
| Memory usage | Higher - full payload in memory simultaneously |
| Latency | Higher initial latency - must encode entire payload first |
| Throughput | Optimal for small-to-medium payloads |
| Simplicity | Simpler error handling - all-or-nothing semantics |
| Backpressure | None - sender controls pacing |

Optimal use cases:

  • Small payloads (< 10 MB)
  • Computations requiring full dataset before processing
  • Simple request/response patterns
  • Operations where atomicity is important

Streaming RPC Pattern

Characteristics:

  • Incremental transmission using dynamic channels
  • Processing begins before entire payload arrives
  • Uses RpcMethodStreaming trait (bounded or unbounded channels)
  • Supports bidirectional streaming

Performance implications:

| Aspect | Impact |
| --- | --- |
| Memory usage | Lower - processes data incrementally |
| Latency | Lower initial latency - processing begins immediately |
| Throughput | Better for large payloads |
| Complexity | Requires async channel management |
| Backpressure | Supported via bounded channels |

Optimal use cases:

  • Large payloads (> 10 MB)
  • Real-time streaming data
  • Long-running operations
  • File uploads/downloads
  • Bidirectional communication



Memory Management and Buffering

Per-Stream Decoder Allocation

The RpcSession maintains a separate decoder instance for each active stream:

Memory characteristics:

  • Per-stream overhead : Each active stream allocates a decoder with internal buffer
  • Buffer growth : Buffers grow dynamically as chunks arrive
  • Cleanup timing : Decoders removed on End or Error events
  • Peak memory : (concurrent_streams × average_payload_size) + overhead

Example calculation for prebuffered calls:

Scenario: 10 concurrent RPC calls, each with 5 MB response
Peak memory ≈ 10 × 5 MB = 50 MB (excluding overhead)

Encoder Lifecycle

The RpcStreamEncoder is created per-request and manages outbound chunking:

  • Created when RpcDispatcher::call initiates a request
  • Holds reference to payload bytes during transmission
  • Automatically chunks data based on DEFAULT_SERVICE_MAX_CHUNK_SIZE
  • Dropped after final chunk transmitted

For prebuffered calls, the encoder is returned to the caller, allowing explicit lifecycle management:

Pending Request Tracking

The RpcDispatcher maintains a HashMap of pending requests:

Entry lifecycle:

  1. Inserted when call or call_rpc_buffered invoked
  2. Maintained until response received or timeout
  3. Removed on successful response, error, or explicit cleanup
  4. Each entry holds oneshot::Sender or callback for result delivery

Memory impact : Proportional to number of in-flight requests. Each entry contains minimal overhead (sender channel + metadata).
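
The sketch below is a stand-in, not RpcDispatcher internals: it shows the insert-on-call / remove-on-response lifecycle described above, with one delivery callback per pending entry.

```rust
// Stand-in pending-request map keyed by request ID; illustrative only.
use std::collections::HashMap;

type ResponseCallback = Box<dyn FnOnce(Vec<u8>)>;

struct PendingRequests {
    next_id: u32,
    pending: HashMap<u32, ResponseCallback>,
}

impl PendingRequests {
    fn insert(&mut self, on_response: ResponseCallback) -> u32 {
        let id = self.next_id;
        self.next_id = self.next_id.wrapping_add(1);
        self.pending.insert(id, on_response); // inserted when a call starts
        id
    }

    fn complete(&mut self, id: u32, response: Vec<u8>) {
        // Removed on response; the entry's callback delivers the result.
        if let Some(callback) = self.pending.remove(&id) {
            callback(response);
        }
    }
}

fn main() {
    let mut requests = PendingRequests { next_id: 0, pending: HashMap::new() };
    let id = requests.insert(Box::new(|bytes: Vec<u8>| assert_eq!(bytes, vec![42])));
    requests.complete(id, vec![42]);
    assert!(requests.pending.is_empty());
}
```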



graph LR
    subgraph "Async/Await Model"
        A1["Task spawn overhead"]
A2["Future state machine"]
A3["Runtime scheduler"]
A4["Context switching"]
end
    
    subgraph "muxio Callback Model"
        M1["Direct function calls"]
M2["No state machines"]
M3["No runtime dependency"]
M4["Deterministic execution"]
end
    
    A1 -.higher overhead.-> M1
    A2 -.higher overhead.-> M2
    A3 -.higher overhead.-> M3
    A4 -.higher overhead.-> M4

Non-Async Callback Model Performance

The framework’s non-async, callback-driven architecture provides specific performance characteristics:

Runtime Overhead Comparison

Performance advantages:

| Factor | Benefit |
| --- | --- |
| No async runtime | Eliminates scheduler overhead |
| Direct callbacks | No future polling or waker mechanisms |
| Deterministic flow | Predictable execution timing |
| WASM compatible | Works in single-threaded browser contexts |
| Memory efficiency | No per-task stack allocation |

Performance limitations:

| Factor | Impact |
| --- | --- |
| Synchronous processing | Long-running callbacks block progress |
| No implicit parallelism | Concurrency must be managed explicitly |
| Callback complexity | Deep callback chains increase stack usage |

Read/Write Operation Flow

This synchronous model means:

  • Low latency : No context switching between read and callback invocation
  • Predictable timing : Callback invoked immediately when data complete
  • Stack-based execution : Entire chain executes on single thread/stack
  • No allocations : No heap allocation for task state



Connection and Stream Multiplexing Efficiency

Stream ID Allocation Strategy

The RpcSession allocates stream IDs sequentially:
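
A minimal illustration of a sequential, wrapping allocator consistent with the characteristics listed below; it is not the RpcSession implementation and does not show how client and server ID spaces are kept separate.

```rust
// Sequential, wrapping stream ID allocator; illustrative only.
struct StreamIdAllocator {
    next: u32,
}

impl StreamIdAllocator {
    // O(1): no lookup, just increment with wrap-around after u32 exhaustion.
    fn allocate(&mut self) -> u32 {
        let id = self.next;
        self.next = self.next.wrapping_add(1);
        id
    }
}

fn main() {
    let mut ids = StreamIdAllocator { next: 0 };
    assert_eq!((ids.allocate(), ids.allocate(), ids.allocate()), (0, 1, 2));

    let mut near_max = StreamIdAllocator { next: u32::MAX };
    assert_eq!(near_max.allocate(), u32::MAX);
    assert_eq!(near_max.allocate(), 0); // wraps after exhausting the u32 range
}
```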

Efficiency characteristics:

  • O(1) allocation : No data structure lookup required
  • Collision-free : Client/server use separate number spaces
  • Reuse strategy : IDs wrap after exhaustion (u32 range)
  • No cleanup needed : Decoders removed, IDs naturally recycled
graph TB
    SingleConnection["Single WebSocket Connection"]
Multiplexer["RpcSession Multiplexer"]
subgraph "Interleaved Streams"
        S1["Stream 1\nLarge file upload\n1000 chunks"]
S2["Stream 3\nQuick query\n1 chunk"]
S3["Stream 5\nMedium response\n50 chunks"]
end
    
 
   SingleConnection --> Multiplexer
 
   Multiplexer --> S1
 
   Multiplexer --> S2
 
   Multiplexer --> S3
    
    Timeline["Frame sequence: [1,3,1,1,5,3,1,5,1,...]"]
Multiplexer -.-> Timeline
    
    Note1["Stream 3 completes quickly\ndespite Stream 1 still transmitting"]
S2 -.-> Note1

Concurrent Request Handling

The framework supports concurrent requests over a single connection through stream multiplexing:

Performance benefits:

  1. Head-of-line avoidance : Small requests don’t wait for large transfers
  2. Resource efficiency : Single connection handles all operations
  3. Lower latency : No connection establishment overhead per request
  4. Fairness : Chunks from different streams interleave naturally

Example throughput:

Scenario: 1 large transfer (100 MB) + 10 small queries (10 KB each)
Without multiplexing: Small queries wait ~seconds for large transfer
With multiplexing: Small queries complete in ~milliseconds



Best Practices and Recommendations

Payload Size Guidelines

| Payload Size | Recommended Pattern | Rationale |
| --- | --- | --- |
| < 64 KB | Prebuffered, inline params | Single frame, no chunking overhead |
| 64 KB - 10 MB | Prebuffered, payload_bytes | Automatic chunking, simple semantics |
| 10 MB - 100 MB | Streaming (bounded channels) | Backpressure control, lower memory |
| > 100 MB | Streaming (bounded channels) | Essential for memory constraints |

Concurrent Request Optimization

For high-throughput scenarios:

Maximum concurrent requests = min(
    server_handler_capacity,
    client_memory_budget / average_payload_size
)

Example calculation:

Server: 100 concurrent handlers
Client memory budget: 500 MB
Average response size: 2 MB

Optimal concurrency = min(100, 500/2) = min(100, 250) = 100 requests

Chunking Strategy Selection

When DEFAULT_SERVICE_MAX_CHUNK_SIZE (64 KB) is optimal:

  • General-purpose RPC with mixed payload sizes
  • WebSocket transport (browser or native)
  • Balanced latency/throughput requirements

When to consider smaller chunks (e.g., 16 KB):

  • Real-time streaming with low-latency requirements
  • Bandwidth-constrained networks
  • Interactive applications requiring immediate feedback

When to consider larger chunks (e.g., 256 KB):

  • High-bandwidth, low-latency networks
  • Bulk data transfer scenarios
  • When minimizing frame overhead is critical

Note: Chunk size is currently a compile-time constant. Custom chunk sizes require modifying DEFAULT_SERVICE_MAX_CHUNK_SIZE and recompiling.

Memory Optimization Patterns

Pattern 1: Limit concurrent streams
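
One way to apply this pattern, sketched with Tokio's Semaphore. The muxio crates do not mandate a particular mechanism, so treat the snippet as an illustration of bounding in-flight calls rather than a framework API.

```rust
// Cap in-flight calls with a semaphore; the RPC call itself is elided.
use std::sync::Arc;
use tokio::sync::Semaphore;

#[tokio::main]
async fn main() {
    // Allow at most 8 concurrent RPC calls from this client.
    let limiter = Arc::new(Semaphore::new(8));

    let mut tasks = Vec::new();
    for i in 0..32 {
        let limiter = Arc::clone(&limiter);
        tasks.push(tokio::spawn(async move {
            // Holding a permit bounds peak memory: at most 8 responses in flight.
            let _permit = limiter.acquire().await.expect("semaphore closed");
            // ... issue the prebuffered RPC call here ...
            i
        }));
    }

    for task in tasks {
        task.await.unwrap();
    }
}
```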

Pattern 2: Streaming for large data

Use streaming RPC methods instead of prebuffered when dealing with large datasets to process data incrementally.

Pattern 3: Connection pooling

For client-heavy scenarios, consider connection pooling to distribute load across multiple connections, avoiding single-connection bottlenecks.

Monitoring and Profiling

The framework uses tracing for observability. Key metrics to monitor:

  • RpcDispatcher::call : Request initiation timing
  • RpcSession::write_bytes : Frame transmission timing
  • RpcStreamDecoder : Chunk reassembly timing
  • Pending request count : Memory pressure indicator
  • Active stream count : Multiplexing efficiency indicator



Performance Testing Results

The integration test suite includes performance validation scenarios:

Large Payload Test (200x Chunk Size)

Test configuration:

  • Payload size: 200 × 64 KB = 12.8 MB
  • Pattern: Prebuffered echo (round-trip)
  • Transport: WebSocket over TCP
  • Client: WASM client with bridge

Results demonstrate:

  • Successful transmission of 12.8 MB payload
  • Automatic chunking into 200 frames
  • Correct reassembly and verification
  • No memory leaks or decoder issues

This validates the framework’s ability to handle multi-megabyte payloads using the prebuffered pattern with automatic chunking.




Extending the Framework

Relevant source files

Purpose and Scope

This document provides guidance for extending the muxio framework with custom transport implementations, runtime-specific client and server implementations, and platform-specific adaptations. It covers the architectural extension points, required trait implementations, and patterns for integrating new runtime environments.

For information about deploying existing implementations across platforms, see Cross-Platform Deployment. For details about the existing Tokio and WASM implementations, see Platform Implementations.

Sources: Cargo.toml:19-31 extensions/muxio-rpc-service-endpoint/Cargo.toml:1-33


Extension Points in the Architecture

The muxio framework provides several well-defined extension points that enable custom implementations without modifying core logic. The layered architecture isolates platform-specific concerns from runtime-agnostic abstractions.

Diagram 1: Framework Extension Points

graph TB
    subgraph "Core Layer - No Extensions Needed"
        CORE["muxio Core"]
DISPATCHER["RpcDispatcher"]
SESSION["RpcSession"]
FRAME["Binary Framing Protocol"]
end
    
    subgraph "Framework Layer - Extension Points"
        CALLER["RpcServiceCallerInterface\nTrait for Client Logic"]
ENDPOINT["RpcServiceEndpointInterface\nTrait for Server Logic"]
SERVICE["RpcMethodPrebuffered\nService Definitions"]
end
    
    subgraph "Platform Layer - Your Extensions"
        CUSTOM_CLIENT["Custom RPC Client\nYour Transport"]
CUSTOM_SERVER["Custom RPC Server\nYour Runtime"]
CUSTOM_TRANSPORT["Custom Transport Layer\nYour Protocol"]
end
    
 
   CORE --> DISPATCHER
 
   CORE --> SESSION
 
   CORE --> FRAME
    
 
   DISPATCHER --> CALLER
 
   DISPATCHER --> ENDPOINT
 
   SESSION --> CALLER
 
   SESSION --> ENDPOINT
    
 
   CALLER --> SERVICE
 
   ENDPOINT --> SERVICE
    
    CUSTOM_CLIENT -.implements.-> CALLER
    CUSTOM_SERVER -.implements.-> ENDPOINT
    CUSTOM_TRANSPORT -.uses.-> CORE
    
 
   CUSTOM_CLIENT --> CUSTOM_TRANSPORT
 
   CUSTOM_SERVER --> CUSTOM_TRANSPORT

Key Extension Points

| Extension Point | Location | Purpose | Required Traits |
| --- | --- | --- | --- |
| Client Implementation | Custom crate | Platform-specific RPC client | RpcServiceCallerInterface |
| Server Implementation | Custom crate | Platform-specific RPC server | RpcServiceEndpointInterface |
| Transport Layer | Any | Custom wire protocol or runtime | None (callback-driven) |
| Service Definitions | Shared crate | Business logic contracts | RpcMethodPrebuffered |
| Feature Flags | Cargo.toml | Conditional compilation | N/A |

Sources: Cargo.toml:20-31 extensions/muxio-rpc-service-endpoint/Cargo.toml:23-27


Creating Custom Transports

Custom transports wrap the RpcDispatcher and provide platform-specific I/O mechanisms. The core dispatcher is runtime-agnostic and callback-driven, enabling integration with any execution model.

Diagram 2: Custom Transport Integration Pattern

graph LR
    subgraph "Your Custom Transport"
        INIT["Initialize Transport\nCustom I/O Setup"]
WRITE["Write Callback\nfn(Vec<u8>)"]
READ["Read Loop\nPlatform-Specific"]
LIFECYCLE["Connection Lifecycle\nState Management"]
end
    
    subgraph "Core muxio Components"
        DISPATCHER["RpcDispatcher"]
SESSION["RpcSession"]
end
    
 
   INIT --> DISPATCHER
 
   DISPATCHER --> WRITE
 
   READ --> DISPATCHER
 
   LIFECYCLE --> DISPATCHER
    
 
   DISPATCHER --> SESSION

Transport Implementation Requirements

  1. Initialize RpcDispatcher with a write callback that sends binary frames via your transport
  2. Implement read loop that feeds received bytes to RpcDispatcher::read()
  3. Handle lifecycle events such as connection, disconnection, and errors
  4. Manage concurrency model appropriate for your runtime (async, sync, thread-per-connection, etc.)

Example Transport Structure
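
A skeletal sketch of this structure, using a stand-in dispatcher type so the snippet stays self-contained; the struct and method names are illustrative rather than the actual muxio API.

```rust
// Skeleton of a custom transport wrapper around a dispatcher stand-in.
use std::sync::{Arc, Mutex};

/// Stand-in for muxio's RpcDispatcher: consumes inbound bytes and emits
/// outbound frames through a write callback.
struct DispatcherStandIn {
    write_bytes: Box<dyn Fn(Vec<u8>) + Send>,
}

impl DispatcherStandIn {
    fn read(&self, incoming: &[u8]) {
        // The real dispatcher decodes frames and correlates responses here;
        // this stand-in just echoes the bytes back out.
        (self.write_bytes)(incoming.to_vec());
    }
}

/// Custom transport wrapper: owns the dispatcher and wires it to your I/O.
struct CustomRpcClient {
    dispatcher: Arc<Mutex<DispatcherStandIn>>,
}

impl CustomRpcClient {
    fn new(send_to_network: impl Fn(Vec<u8>) + Send + 'static) -> Self {
        let dispatcher = DispatcherStandIn { write_bytes: Box::new(send_to_network) };
        Self { dispatcher: Arc::new(Mutex::new(dispatcher)) }
    }

    /// Called by your read loop whenever the transport receives bytes.
    fn on_bytes_received(&self, bytes: &[u8]) {
        self.dispatcher.lock().unwrap().read(bytes);
    }
}

fn main() {
    // "Network" here is just a closure capturing a frame sink.
    let sent = Arc::new(Mutex::new(Vec::<Vec<u8>>::new()));
    let sink = Arc::clone(&sent);
    let client = CustomRpcClient::new(move |frame| sink.lock().unwrap().push(frame));

    client.on_bytes_received(&[1, 2, 3]);
    assert_eq!(sent.lock().unwrap().len(), 1);
}
```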

Key patterns:

  • Write Callback : Closure or function that writes bytes to the transport [extensions/muxio-tokio-rpc-client/src/rpc_client.rs:100-226](https://github.com/jzombie/rust-muxio/blob/30450c98/extensions/muxio-tokio-rpc-client/src/rpc_client.rs#L100-L226)
  • Read Integration : Feed incoming bytes to RpcDispatcher::read() [src/rpc/rpc_dispatcher.rs:130-264](https://github.com/jzombie/rust-muxio/blob/30450c98/src/rpc/rpc_dispatcher.rs#L130-L264)
  • State Management : Track connection lifecycle with callbacks [extensions/muxio-tokio-rpc-client/src/rpc_client.rs:30-38](https://github.com/jzombie/rust-muxio/blob/30450c98/extensions/muxio-tokio-rpc-client/src/rpc_client.rs#L30-L38)

Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs:30-226 src/rpc/rpc_dispatcher.rs:130-264


Implementing Platform-Specific Clients

Platform-specific clients implement the RpcServiceCallerInterface trait to provide type-safe RPC invocation. The trait is defined in muxio-rpc-service-caller.

Diagram 3: Client Implementation Architecture

graph TB
    subgraph "Your Client Implementation"
        CLIENT["CustomRpcClient\nPlatform-Specific Lifecycle"]
CALLER_IMPL["impl RpcServiceCallerInterface"]
end
    
    subgraph "Required Integration"
        DISPATCHER["RpcDispatcher\nOwned or Arc Reference"]
MUTEX["Sync Primitive\nMutex, TokioMutex, etc."]
WRITE["write_callback\nPlatform I/O"]
READ["read_loop\nBackground Task/Thread"]
end
    
    subgraph "Trait Methods to Implement"
        CALL["async fn call()\nRpcRequest -> RpcResponse"]
GET_DISP["fn get_dispatcher()\nAccess to Dispatcher"]
end
    
 
   CLIENT --> CALLER_IMPL
 
   CLIENT --> DISPATCHER
 
   CLIENT --> MUTEX
 
   CLIENT --> WRITE
 
   CLIENT --> READ
    
    CALLER_IMPL -.implements.-> CALL
    CALLER_IMPL -.implements.-> GET_DISP
    
 
   CALL --> GET_DISP
 
   GET_DISP --> DISPATCHER

Required Trait Implementation

The RpcServiceCallerInterface trait requires implementing two core methods:

| Method | Signature | Purpose |
| --- | --- | --- |
| call | async fn call(&self, request: RpcRequest) -> Result<RpcResponse> | Send RPC request and await response |
| get_dispatcher | fn get_dispatcher(&self) -> Arc<...> | Provide access to underlying dispatcher |

Client Lifecycle Considerations

  1. Construction : Initialize RpcDispatcher with write callback
  2. Connection : Establish transport and start read loop
  3. Operation : Handle call() invocations by delegating to dispatcher
  4. Cleanup : Close connection and stop background tasks on drop

Reference Implementation Pattern

For Tokio-based clients, see extensions/muxio-tokio-rpc-client/src/rpc_client.rs:30-226 which demonstrates:

  • Arc-based shared ownership of RpcDispatcher
  • TokioMutex for async-safe access
  • Background task for read loop using tokio::spawn
  • Connection state tracking with callbacks

For WASM-based clients, see extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:15-32 which demonstrates:

  • Single-threaded execution model
  • JavaScript bridge integration
  • Static singleton pattern for WASM constraints

Sources: extensions/muxio-rpc-service-caller/ extensions/muxio-tokio-rpc-client/src/rpc_client.rs:30-226 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:15-32


Implementing Platform-Specific Servers

Platform-specific servers implement the RpcServiceEndpointInterface trait to handle incoming RPC requests and dispatch them to registered handlers. The trait is defined in muxio-rpc-service-endpoint.

Diagram 4: Server Implementation Flow

graph TB
    subgraph "Your Server Implementation"
        SERVER["CustomRpcServer\nPlatform-Specific Server"]
ENDPOINT_IMPL["impl RpcServiceEndpointInterface"]
HANDLER_REG["Handler Registration\nregister_handler()"]
end
    
    subgraph "Request Processing Pipeline"
        ACCEPT["Accept Connection\nPlatform-Specific"]
DISPATCHER["RpcDispatcher\nPer-Connection"]
READ["Read Loop\nFeed to Dispatcher"]
ENDPOINT["RpcServiceEndpoint\nHandler Registry"]
end
    
    subgraph "Handler Execution"
        DECODE["Decode RpcRequest\nDeserialize Parameters"]
DISPATCH["Dispatch to Handler\nMatch method_id"]
EXECUTE["Execute Handler\nUser Business Logic"]
RESPOND["Encode RpcResponse\nSend via Dispatcher"]
end
    
 
   SERVER --> ENDPOINT_IMPL
 
   SERVER --> HANDLER_REG
    
 
   ACCEPT --> DISPATCHER
 
   DISPATCHER --> READ
 
   READ --> ENDPOINT
    
 
   ENDPOINT --> DECODE
 
   DECODE --> DISPATCH
 
   DISPATCH --> EXECUTE
 
   EXECUTE --> RESPOND
 
   RESPOND --> DISPATCHER

Required Trait Implementation

The RpcServiceEndpointInterface trait provides default implementations but allows customization:

| Method | Default Implementation | Override When |
| --- | --- | --- |
| register_handler | Registers handler in internal map | Custom dispatch logic needed |
| handle_finalized_request | Deserializes and invokes handler | Custom request processing required |
| handle_stream_open | Routes streamed requests | Custom stream handling needed |

Server Architecture Patterns

Connection Management:

  • Create new RpcDispatcher instance per connection
  • Create new RpcServiceEndpoint instance per connection
  • Share handler registrations across connections (using Arc)

Handler Registration:

  • Register handlers at server startup or connection time
  • Use RpcServiceEndpoint::register_handler() with a closure
  • Handlers receive deserialized parameters and return typed results

Reference Implementation Pattern

For Tokio-based servers, see extensions/muxio-tokio-rpc-server/ which demonstrates:

  • Axum framework integration for HTTP/WebSocket serving
  • Per-connection RpcDispatcher and RpcServiceEndpoint
  • Handler registration with async closures
  • Graceful shutdown handling

Sources: extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:65-137 extensions/muxio-tokio-rpc-server/


Feature Flags and Conditional Compilation

The muxio ecosystem uses Cargo feature flags to enable optional dependencies and platform-specific code. This pattern allows extensions to remain lightweight while supporting multiple runtimes.

Diagram 5: Feature Flag Pattern

graph LR
    subgraph "Extension Crate Cargo.toml"
        DEFAULT["default = []\nMinimal Dependencies"]
FEATURE1["tokio_support\ndep:tokio"]
FEATURE2["async_std_support\ndep:async-std"]
FEATURE3["custom_feature\nYour Feature"]
end
    
    subgraph "Conditional Code"
        CFG1["#[cfg(feature = tokio_support)]\nTokio-Specific Impl"]
CFG2["#[cfg(feature = async_std_support)]\nasync-std Impl"]
CFG3["#[cfg(not(any(...)))]\nFallback Impl"]
end
    
 
   DEFAULT --> CFG3
 
   FEATURE1 --> CFG1
 
   FEATURE2 --> CFG2
 
   FEATURE3 --> CFG1

Feature Flag Best Practices

In Cargo.toml:
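
A minimal sketch of what such a manifest section can look like; the feature and dependency names below are illustrative, not copied from any crate in the workspace.

```toml
[features]
default = []                          # keep the default build dependency-free
tokio_support = ["dep:tokio"]         # opt-in Tokio runtime integration
async_std_support = ["dep:async-std"] # opt-in async-std integration

[dependencies]
tokio = { version = "1", optional = true, features = ["sync", "rt"] }
async-std = { version = "1", optional = true }
```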

In source code:
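
A correspondingly hedged sketch of gating code on those features; the type alias name is hypothetical.

```rust
// Only compiled when the `tokio_support` feature is enabled.
#[cfg(feature = "tokio_support")]
pub type SharedState<T> = std::sync::Arc<tokio::sync::Mutex<T>>;

// Fallback used when no async runtime feature is selected.
#[cfg(not(any(feature = "tokio_support", feature = "async_std_support")))]
pub type SharedState<T> = std::sync::Arc<std::sync::Mutex<T>>;
```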

Example: muxio-rpc-service-endpoint

The muxio-rpc-service-endpoint crate demonstrates this pattern at extensions/muxio-rpc-service-endpoint/Cargo.toml:23-27:

This enables Tokio-specific mutex types while maintaining compatibility with synchronous code.

Sources: extensions/muxio-rpc-service-endpoint/Cargo.toml:23-27 Cargo.toml:39-65


Integration Patterns

When creating a new platform extension, follow these patterns to ensure compatibility with the muxio ecosystem.

Workspace Integration

Directory Structure:

rust-muxio/
├── extensions/
│   ├── muxio-custom-client/
│   │   ├── Cargo.toml
│   │   └── src/
│   │       └── lib.rs
│   └── muxio-custom-server/
│       ├── Cargo.toml
│       └── src/
│           └── lib.rs

Add to workspace: Cargo.toml:19-31
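
For example, the new crates would be appended to the workspace member list; only the two hypothetical extension crates below are new, the existing members are elided.

```toml
[workspace]
members = [
    # ...existing members...
    "extensions/muxio-custom-client",
    "extensions/muxio-custom-server",
]
```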

Dependency Configuration

Workspace Dependencies: Cargo.toml:39-48

Extension Crate Dependencies:
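
A hedged sketch of an extension crate's dependency table, assuming the referenced crates are declared under [workspace.dependencies]:

```toml
[dependencies]
muxio = { workspace = true }
muxio-rpc-service = { workspace = true }
muxio-rpc-service-caller = { workspace = true }     # for client extensions
# muxio-rpc-service-endpoint = { workspace = true } # for server extensions
```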

Metadata Inheritance

Use workspace metadata inheritance to maintain consistency across crates, as configured in Cargo.toml:1-17:
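
A sketch of inherited package metadata; the exact field list in the workspace manifest is not reproduced here.

```toml
[package]
name = "muxio-custom-client"
version.workspace = true
edition.workspace = true
license.workspace = true
repository.workspace = true
```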

Testing Integration

Create dev dependencies for integration testing:
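
For instance (a sketch; whether the example crate is exposed as a workspace dependency is an assumption):

```toml
[dev-dependencies]
example-muxio-rpc-service-definition = { workspace = true }
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
```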

Reference the example service definitions for end-to-end tests, as demonstrated in extensions/muxio-rpc-service-endpoint/Cargo.toml:29-33

Sources: Cargo.toml:1-71 extensions/muxio-rpc-service-endpoint/Cargo.toml:1-33


Extension Checklist

When implementing a new platform extension, ensure the following:

For Client Extensions

  • Implement RpcServiceCallerInterface trait
  • Create RpcDispatcher with platform-specific write callback
  • Start read loop that feeds bytes to RpcDispatcher::read()
  • Manage connection lifecycle with state tracking
  • Provide async or sync API appropriate for runtime
  • Handle connection errors and cleanup
  • Add feature flags for optional dependencies
  • Document platform-specific requirements

For Server Extensions

  • Implement RpcServiceEndpointInterface trait
  • Accept connections and create per-connection RpcDispatcher
  • Create per-connection RpcServiceEndpoint
  • Register handlers from shared definitions
  • Implement request routing and execution
  • Send responses via RpcDispatcher
  • Handle graceful shutdown
  • Document server setup and configuration

For Transport Extensions

  • Define platform-specific connection type
  • Implement write callback for outgoing frames
  • Implement read mechanism for incoming frames
  • Handle connection establishment and teardown
  • Provide error handling and recovery
  • Document transport-specific limitations

Sources: extensions/muxio-tokio-rpc-client/ extensions/muxio-tokio-rpc-server/ extensions/muxio-wasm-rpc-client/


JavaScript and WASM Integration

Relevant source files

Purpose and Scope

This page documents the WebAssembly (WASM) integration layer that enables muxio RPC clients to run in browser environments. It covers the bridge architecture between Rust WASM code and JavaScript, the static client pattern used for managing singleton instances, and the specific integration points required for WebSocket communication.

For general information about the WASM RPC Client platform implementation, see WASM RPC Client. For cross-platform deployment strategies, see Cross-Platform Deployment.

Sources : extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:1-182 extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:1-82


WASM Bridge Architecture

The muxio WASM integration uses a byte-passing bridge pattern to connect Rust code compiled to WebAssembly with JavaScript host environments. This design is lightweight and avoids complex FFI patterns by restricting communication to byte arrays (Vec<u8> in Rust, Uint8Array in JavaScript).

Core Bridge Components

The bridge consists of three primary components:

| Component | Type | Role |
| --- | --- | --- |
| RpcWasmClient | Rust struct | Manages RPC dispatcher, endpoint, and connection state |
| static_muxio_write_bytes | JavaScript-callable function | Emits bytes from Rust to JavaScript |
| MUXIO_STATIC_RPC_CLIENT_REF | Thread-local singleton | Stores the static client instance |

The RpcWasmClient structure extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:16-24 contains:

  • dispatcher: An Arc<tokio::sync::Mutex<RpcDispatcher>> for request/response correlation
  • endpoint: An Arc<RpcServiceEndpoint<()>> for handling incoming RPC calls
  • emit_callback: An Arc<dyn Fn(Vec<u8>)> that bridges to JavaScript
  • state_change_handler: Callback for connection state changes
  • is_connected: Atomic boolean tracking connection status
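
Taken together, the struct has roughly the following shape; this is a sketch in which import paths are elided and the state-change handler's exact type is an assumption, not the crate's definition.

```rust
use std::sync::Arc;
use std::sync::atomic::AtomicBool;

// Sketch only; see extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs for the real definition.
pub struct RpcWasmClient {
    dispatcher: Arc<tokio::sync::Mutex<RpcDispatcher>>,
    endpoint: Arc<RpcServiceEndpoint<()>>,
    emit_callback: Arc<dyn Fn(Vec<u8>) + Send + Sync>,
    state_change_handler: std::sync::Mutex<Option<Box<dyn Fn(bool)>>>, // exact type assumed
    is_connected: AtomicBool,
}
```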

Diagram: WASM-JavaScript Bridge Architecture

graph TB
    subgraph "JavaScript/Browser Environment"
        JS_WS["WebSocket API\n(Browser Native)"]
JS_BRIDGE["JavaScript Bridge Layer\nstatic_muxio_write_bytes"]
JS_APP["Web Application Code"]
end
    
    subgraph "WASM Module (Rust Compiled)"
        WASM_CLIENT["RpcWasmClient"]
WASM_DISP["RpcDispatcher"]
WASM_EP["RpcServiceEndpoint"]
STATIC_REF["MUXIO_STATIC_RPC_CLIENT_REF\nthread_local RefCell"]
WASM_CLIENT -->|owns Arc| WASM_DISP
 
       WASM_CLIENT -->|owns Arc| WASM_EP
 
       STATIC_REF -->|stores Arc| WASM_CLIENT
    end
    
 
   JS_WS -->|onmessage bytes| JS_BRIDGE
 
   JS_BRIDGE -->|read_bytes &[u8]| WASM_CLIENT
 
   WASM_CLIENT -->|emit_callback Vec<u8>| JS_BRIDGE
 
   JS_BRIDGE -->|send Uint8Array| JS_WS
    
 
   JS_APP -->|RPC method calls| WASM_CLIENT
 
   WASM_CLIENT -->|responses| JS_APP
    
 
   JS_WS -->|onopen| WASM_CLIENT
 
   JS_WS -->|onclose/onerror| WASM_CLIENT

Sources : extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:16-35 extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:9-11


Static Client Pattern

The WASM client uses a singleton pattern implemented with thread_local! storage and RefCell for interior mutability. This pattern is necessary because:

  1. WASM runs single-threaded, making thread-local storage equivalent to global storage
  2. Multiple JavaScript functions may need access to the same client instance
  3. The client must survive across multiple JavaScript->Rust function call boundaries

Thread-Local Storage Implementation

The static client is stored in MUXIO_STATIC_RPC_CLIENT_REF extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:9-11:
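
Reconstructed from the characteristics listed below, the declaration has roughly this shape (a sketch, not the verbatim source):

```rust
use std::cell::RefCell;
use std::sync::Arc;

thread_local! {
    // WASM is single-threaded, so this thread-local slot effectively acts as a global.
    static MUXIO_STATIC_RPC_CLIENT_REF: RefCell<Option<Arc<RpcWasmClient>>> =
        RefCell::new(None);
}
```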

Key characteristics:

  • thread_local! : Creates a separate instance per thread (single thread in WASM)
  • RefCell<Option<Arc<T>>>: Enables runtime-checked mutable borrowing with optional value
  • Arc<RpcWasmClient>: Allows cloning references without copying the entire client

Initialization and Access Functions

Three functions manage the static client lifecycle:

| Function | Purpose | Idempotency |
| --- | --- | --- |
| init_static_client() | Creates the client if not present | Yes - multiple calls are safe |
| get_static_client() | Retrieves the current client | N/A - read-only |
| with_static_client_async() | Executes async operations with the client | N/A - wrapper |

The init_static_client() function extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:25-36 checks for existing initialization before creating a new instance:
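
A sketch of the idempotent check; the real function's signature and how it constructs the RpcWasmClient are not shown here, so taking the client as a parameter is an assumption.

```rust
pub fn init_static_client(client: Arc<RpcWasmClient>) {
    MUXIO_STATIC_RPC_CLIENT_REF.with(|cell| {
        let mut slot = cell.borrow_mut();
        // Idempotent: repeated calls leave an already-stored client untouched.
        if slot.is_none() {
            *slot = Some(client);
        }
    });
}
```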

Diagram: Static Client Initialization Flow

Sources : extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:25-36


JavaScript Integration Points

The WASM client exposes specific methods designed to be called from JavaScript in response to WebSocket events. These methods bridge the gap between JavaScript’s event-driven API and Rust’s async/await model.

Connection Lifecycle Methods

Three methods handle WebSocket lifecycle events:

| Method | JavaScript Event | Purpose |
| --- | --- | --- |
| handle_connect() | WebSocket.onopen | Sets connected state, invokes state change handler |
| read_bytes(&[u8]) | WebSocket.onmessage | Processes incoming binary data |
| handle_disconnect() | WebSocket.onclose / WebSocket.onerror | Clears connected state, fails pending requests |

handle_connect Implementation

The handle_connect() method extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:37-44 updates connection state and notifies handlers:

Key operations:

  • Atomic flag update : Uses AtomicBool::store with SeqCst ordering for thread-safe state change
  • Handler invocation : Calls registered state change callback if present
  • Async execution : Returns Future that JavaScript must await

read_bytes Implementation

The read_bytes() method extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:48-121 implements a three-stage processing pipeline:

Stage 1: Synchronous Reading (dispatcher lock held briefly)

  • Acquires dispatcher mutex
  • Calls dispatcher.read_bytes(bytes) to parse frames
  • Extracts finalized requests from dispatcher
  • Releases dispatcher lock

Stage 2: Asynchronous Handler Execution (no locks held)

  • Calls process_single_prebuffered_request() for each request
  • User handlers execute concurrently via join_all()
  • Dispatcher remains unlocked during handler execution

Stage 3: Synchronous Response Writing (dispatcher lock re-acquired)

  • Re-acquires dispatcher mutex
  • Calls dispatcher.respond() for each result
  • Emits response bytes via emit_callback
  • Releases dispatcher lock

Diagram: read_bytes Processing Pipeline

graph TB
    JS_MSG["JavaScript: WebSocket.onmessage"]
subgraph "Stage 1: Synchronous Reading"
        LOCK1["Acquire dispatcher lock"]
READ["dispatcher.read_bytes(bytes)"]
EXTRACT["Extract finalized requests"]
UNLOCK1["Release dispatcher lock"]
end
    
    subgraph "Stage 2: Async Handler Execution"
        PROCESS["process_single_prebuffered_request()"]
HANDLERS["User handlers execute"]
JOIN["join_all()
responses"]
end
    
    subgraph "Stage 3: Synchronous Response"
        LOCK2["Re-acquire dispatcher lock"]
RESPOND["dispatcher.respond()"]
EMIT["emit_callback(bytes)"]
UNLOCK2["Release dispatcher lock"]
end
    
 
   JS_MSG --> LOCK1
 
   LOCK1 --> READ
 
   READ --> EXTRACT
 
   EXTRACT --> UNLOCK1
    
 
   UNLOCK1 --> PROCESS
 
   PROCESS --> HANDLERS
 
   HANDLERS --> JOIN
    
 
   JOIN --> LOCK2
 
   LOCK2 --> RESPOND
 
   RESPOND --> EMIT
 
   EMIT --> UNLOCK2
    
 
   UNLOCK2 --> JS_EMIT["JavaScript: WebSocket.send()"]

This staged approach prevents deadlocks by ensuring user handlers never execute while the dispatcher is locked.


Sources : extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:48-121

handle_disconnect Implementation

The handle_disconnect() method extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:123-134 performs cleanup:

Key operations:

  • Atomic swap : Uses swap() to atomically test-and-set connection flag
  • Conditional execution : Only processes if connection was previously established
  • Error propagation : Calls fail_all_pending_requests() to reject outstanding futures
  • State notification : Invokes registered disconnect handler

Sources : extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:37-44 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:48-121 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:123-134


wasm-bindgen Integration

The WASM client uses wasm-bindgen to generate JavaScript bindings for Rust code. Key integration patterns include:

Type Conversions

| Rust Type | JavaScript Type | Conversion Method |
| --- | --- | --- |
| Vec<u8> | Uint8Array | Automatic via wasm-bindgen |
| Result<T, String> | Promise<T> | future_to_promise() |
| T: Into<JsValue> | Any JS value | .into() conversion |

Promise-Based Async Functions

The with_static_client_async() helper extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:54-72 wraps Rust async functions for JavaScript consumption:
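
A sketch of this wrapper, under the assumptions that get_static_client() returns an Option<Arc<RpcWasmClient>> and that the closure's success value converts into JsValue; the real generic bounds may differ.

```rust
use std::future::Future;
use std::sync::Arc;

use js_sys::Promise;
use wasm_bindgen::JsValue;
use wasm_bindgen_futures::future_to_promise;

pub fn with_static_client_async<F, Fut, T>(f: F) -> Promise
where
    F: FnOnce(Arc<RpcWasmClient>) -> Fut + 'static,
    Fut: Future<Output = Result<T, String>> + 'static,
    T: Into<JsValue>,
{
    future_to_promise(async move {
        // Clone the Arc out of thread-local storage (accessor's return type assumed).
        let client = get_static_client()
            .ok_or_else(|| JsValue::from_str("muxio static client not initialized"))?;
        // Run the user closure and map Result<T, String> onto the JS promise.
        match f(client).await {
            Ok(value) => Ok(value.into()),
            Err(err) => Err(JsValue::from_str(&err)),
        }
    })
}
```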

This pattern:

  • Retrieves static client : Clones Arc from thread-local storage
  • Executes user closure : Awaits the provided async function
  • Converts result : Transforms Result<T, String> to JavaScript Promise
  • Error handling : Converts Rust errors to rejected promises

Diagram: JavaScript-Rust Promise Interop

sequenceDiagram
    participant JS as "JavaScript Code"
    participant BINDGEN as "wasm-bindgen Layer"
    participant WRAPPER as "with_static_client_async()"
    participant TLS as "Thread-Local Storage"
    participant CLOSURE as "User Closure<F>"
    participant CLIENT as "RpcWasmClient"
    
    JS->>BINDGEN: Call exported WASM function
    BINDGEN->>WRAPPER: Invoke wrapper function
    WRAPPER->>TLS: Get static client
    TLS->>WRAPPER: Arc<RpcWasmClient>
    
    WRAPPER->>CLOSURE: Execute f(client).await
    CLOSURE->>CLIENT: Perform RPC operations
    CLIENT->>CLOSURE: Result<T, String>
    
    CLOSURE->>WRAPPER: Return result
    
    alt "Success"
        WRAPPER->>BINDGEN: Ok(value.into())
        BINDGEN->>JS: Promise resolves with value
    else "Error"
        WRAPPER->>BINDGEN: Err(JsValue)
        BINDGEN->>JS: Promise rejects with error
    end

Sources : extensions/muxio-wasm-rpc-client/src/static_lib/static_client.rs:54-72


Bidirectional Data Flow

The WASM integration enables bidirectional RPC communication, where both JavaScript and WASM can initiate calls and handle responses.

Outbound Flow (WASM → JavaScript → Server)

  1. Application code calls RPC method on RpcWasmClient
  2. RpcServiceCallerInterface implementation encodes request
  3. RpcDispatcher serializes and chunks data
  4. emit_callback invokes static_muxio_write_bytes
  5. JavaScript bridge sends bytes via WebSocket.send()

Inbound Flow (Server → JavaScript → WASM)

  1. JavaScript receives WebSocket.onmessage event
  2. Bridge calls RpcWasmClient.read_bytes(&[u8])
  3. Dispatcher parses frames and routes to endpoint
  4. Endpoint dispatches to registered handlers
  5. Handlers execute and return responses
  6. Responses emitted back through emit_callback

Diagram: Complete Bidirectional Message Flow

graph TB
    subgraph "JavaScript Layer"
        JS_APP["Web Application"]
WS_API["WebSocket API"]
BRIDGE_OUT["static_muxio_write_bytes"]
BRIDGE_IN["read_bytes handler"]
end
    
    subgraph "WASM Client (RpcWasmClient)"
        CALLER["RpcServiceCallerInterface"]
ENDPOINT["RpcServiceEndpoint"]
DISPATCHER["RpcDispatcher"]
EMIT["emit_callback"]
end
    
    subgraph "Server"
        SERVER["Tokio RPC Server"]
end
    
 
   JS_APP -->|Call RPC method| CALLER
 
   CALLER -->|Encode request| DISPATCHER
 
   DISPATCHER -->|Serialize frames| EMIT
 
   EMIT -->|Vec<u8>| BRIDGE_OUT
 
   BRIDGE_OUT -->|Uint8Array| WS_API
 
   WS_API -->|Binary message| SERVER
    
 
   SERVER -->|Binary response| WS_API
 
   WS_API -->|onmessage event| BRIDGE_IN
 
   BRIDGE_IN -->|&[u8]| DISPATCHER
 
   DISPATCHER -->|Route request| ENDPOINT
 
   ENDPOINT -->|Invoke handler| ENDPOINT
 
   ENDPOINT -->|Response| DISPATCHER
 
   DISPATCHER -->|Serialize frames| EMIT
    
 
   DISPATCHER -->|Deliver result| CALLER
 
   CALLER -->|Return typed result| JS_APP

Sources : extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:16-35 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:48-121


RpcServiceCallerInterface Implementation

The RpcWasmClient implements the RpcServiceCallerInterface trait extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:154-181, enabling it to be used interchangeably with the Tokio-based RpcClient. This trait provides:

| Method | Purpose | Return Type |
| --- | --- | --- |
| get_dispatcher() | Returns dispatcher for request/response management | Arc<Mutex<RpcDispatcher>> |
| get_emit_fn() | Returns callback for byte emission | Arc<dyn Fn(Vec<u8>)> |
| is_connected() | Checks current connection status | bool |
| set_state_change_handler() | Registers connection state callback | async fn |

The implementation delegates to internal methods:
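
As a rough sketch, the delegation looks like the following; the signatures are inferred from the table above and may not match the real trait exactly.

```rust
// Sketch only; the trait's async plumbing and bounds are simplified here.
impl RpcServiceCallerInterface for RpcWasmClient {
    fn get_dispatcher(&self) -> Arc<tokio::sync::Mutex<RpcDispatcher>> {
        self.dispatcher.clone()
    }

    fn get_emit_fn(&self) -> Arc<dyn Fn(Vec<u8>) + Send + Sync> {
        self.emit_callback.clone()
    }

    fn is_connected(&self) -> bool {
        self.is_connected.load(std::sync::atomic::Ordering::SeqCst)
    }

    // set_state_change_handler (async per the table above) simply stores the
    // callback for handle_connect/handle_disconnect to invoke later.
}
```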

This implementation ensures that WASM and Tokio clients share the same interface, enabling code reuse at the application layer. The trait abstraction is documented in detail in Service Caller Interface.

Sources : extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:154-181 extensions/muxio-wasm-rpc-client/src/lib.rs:1-10


Module Structure and Re-exports

The muxio-wasm-rpc-client crate is organized with clear module boundaries:

| Module | Contents | Purpose |
| --- | --- | --- |
| rpc_wasm_client | RpcWasmClient struct | Core client implementation |
| static_lib | Static client utilities | Singleton pattern helpers |
| Root (lib.rs) | Re-exports | Simplified imports |

The root module extensions/muxio-wasm-rpc-client/src/lib.rs:1-10 re-exports key types for convenience:
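
A sketch of the kind of re-exports involved; the exact list in lib.rs is not reproduced here.

```rust
// Sketch; the real lib.rs may expose additional items.
pub mod rpc_wasm_client;
pub mod static_lib;

pub use rpc_wasm_client::RpcWasmClient;
pub use static_lib::{init_static_client, with_static_client_async};
```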

This allows consumers to import all necessary types from a single crate:
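
For example, assuming the re-exports sketched above:

```rust
use muxio_wasm_rpc_client::{RpcWasmClient, init_static_client};
```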

Sources : extensions/muxio-wasm-rpc-client/src/lib.rs:1-10


Comparison with Tokio Client

While both RpcWasmClient and RpcClient implement RpcServiceCallerInterface, their internal architectures differ due to platform constraints:

| Aspect | RpcWasmClient | RpcClient |
| --- | --- | --- |
| Runtime | Single-threaded WASM | Multi-threaded Tokio |
| Storage | thread_local! + RefCell | Arc + background tasks |
| Transport | JavaScript WebSocket API | tokio-tungstenite |
| Emit | Callback to static_muxio_write_bytes | Channel to writer task |
| Lifecycle | Manual JS event handlers | Automatic via async tasks |
| Initialization | init_static_client() | RpcClient::new() |

Despite these differences, both clients expose identical RPC method calling interfaces, enabling cross-platform application code. The platform-specific details are encapsulated behind the RpcServiceCallerInterface abstraction.

Sources : extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:16-35 extensions/muxio-tokio-rpc-client/src/lib.rs:1-8 README.md:48-49