
Advanced Topics

This section covers advanced features, optimization techniques, and deep implementation details for developers who need to extend muxio beyond standard usage patterns. Topics include custom transport implementations, cross-platform deployment strategies, performance tuning, and low-level protocol internals.

For basic RPC usage patterns, see RPC Framework. For standard transport implementations, see Transport Implementations. For testing approaches, see Testing.


Advanced Multiplexing Patterns

The RpcDispatcher supports patterns beyond simple request-response cycles. These patterns leverage the underlying frame multiplexing to achieve concurrent operations over a single connection.

Concurrent Request Pipelining

Multiple RPC requests can be issued without waiting for responses. The dispatcher assigns unique request IDs and correlates responses when they arrive, enabling high throughput even over high-latency connections.

```mermaid
sequenceDiagram
    participant App as "Application"
    participant Dispatcher as "RpcDispatcher"
    participant Transport as "Transport"

    App->>Dispatcher: Request A (ID=1)
    Dispatcher->>Transport: Serialize frames
    App->>Dispatcher: Request B (ID=2)
    Dispatcher->>Transport: Serialize frames
    App->>Dispatcher: Request C (ID=3)
    Dispatcher->>Transport: Serialize frames
    Transport->>Dispatcher: Response C (ID=3)
    Dispatcher->>App: Return C result
    Transport->>Dispatcher: Response A (ID=1)
    Dispatcher->>App: Return A result
    Transport->>Dispatcher: Response B (ID=2)
    Dispatcher->>App: Return B result
```

The dispatcher maintains a HashMap of pending requests indexed by request ID. When frames arrive, the dispatcher extracts the request ID from the RpcHeader and routes the payload to the appropriate response channel.
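
A minimal sketch of pipelining from application code, assuming the Add and Mult prebuffered methods from the examples crate (import paths follow the project README and may differ):

```rust
use example_muxio_rpc_service_definition::prebuffered::{Add, Mult};
use muxio_rpc_service_caller::prebuffered::RpcCallPrebuffered;
use muxio_tokio_rpc_client::RpcClient;

async fn pipelined(client: &RpcClient) {
    // Both requests hit the wire before either response arrives; the
    // dispatcher correlates each reply to its caller by request ID.
    let (sum, product) = tokio::join!(
        Add::call(client, vec![1.0, 2.0, 3.0]),
        Mult::call(client, vec![4.0, 5.0]),
    );
    println!("sum = {sum:?}, product = {product:?}");
}
```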

Sources: muxio/src/rpc_dispatcher.rs

Interleaved Frame Transmission

Large payloads are automatically chunked into frames. Multiple concurrent requests can have their frames interleaved during transmission, ensuring no single large transfer monopolizes the connection.

| Frame Sequence | Request ID | Frame Type | Payload Size |
|----------------|------------|------------|--------------|
| Frame 1 | 42 | First | 64 KB |
| Frame 2 | 43 | First | 64 KB |
| Frame 3 | 42 | Middle | 64 KB |
| Frame 4 | 43 | Last | 32 KB |
| Frame 5 | 42 | Last | 32 KB |

This interleaving is transparent to application code. The framing protocol handles reassembly using the FrameType enum values: First, Middle, Last, and OnlyChunk.

Sources: muxio/src/rpc_request_response.rs

Request Cancellation

RPC requests can be cancelled mid-flight by dropping the response future on the client side. The dispatcher detects the dropped receiver and removes the pending request from its internal map.

The cancellation is local to the client; the server may continue processing unless explicit cancellation messages are implemented at the application level. For distributed cancellation, implement a custom RPC method that signals the server to abort processing.
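
A client-side sketch, reusing the assumed Add method and imports from the earlier example: wrapping the call in tokio::time::timeout drops the response future when the deadline passes, which is exactly the local cancellation path described here.

```rust
use std::time::Duration;
use tokio::time::timeout;

async fn call_with_deadline(client: &RpcClient) {
    // On expiry the response future is dropped; the dispatcher notices the
    // closed receiver and evicts the pending entry. The server may still
    // finish the work unless application-level cancellation is added.
    match timeout(Duration::from_millis(500), Add::call(client, vec![1.0, 2.0])).await {
        Ok(result) => println!("completed: {result:?}"),
        Err(_) => println!("abandoned after 500 ms"),
    }
}
```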

Sources: muxio/src/rpc_dispatcher.rs


Transport Adapter Architecture

The core RpcDispatcher interacts with transports through a callback-based interface. This design enables integration with diverse runtime environments without coupling to specific I/O frameworks.

```mermaid
graph TB
    subgraph "RpcDispatcher Core"
        Dispatcher["RpcDispatcher"]
        PendingMap["pending_requests\nHashMap<u32, ResponseSender>"]
        ReassemblyMap["pending_responses\nHashMap<u32, Vec<Frame>>"]
    end

    subgraph "Transport Adapter"
        ReadCallback["read_bytes callback\nCalled by transport"]
        WriteCallback["write_bytes callback\nProvided to dispatcher"]
        Transport["Transport Implementation\n(WebSocket/TCP/Custom)"]
    end

    subgraph "Application Layer"
        RpcCaller["RpcServiceCallerInterface"]
        RpcEndpoint["RpcServiceEndpointInterface"]
    end

    Transport -->|Incoming bytes| ReadCallback
    ReadCallback -->|process_incoming_bytes| Dispatcher
    Dispatcher -->|Uses internally| PendingMap
    Dispatcher -->|Uses internally| ReassemblyMap
    Dispatcher -->|Outgoing bytes| WriteCallback
    WriteCallback -->|Send to network| Transport

    RpcCaller -->|Send request| Dispatcher
    Dispatcher -->|Deliver response| RpcCaller
    RpcEndpoint -->|Send response| Dispatcher
    Dispatcher -->|Deliver request| RpcEndpoint
```

Dispatcher Interface Contract

The dispatcher exposes process_incoming_bytes for feeding received data and accepts a write_bytes closure for transmitting serialized frames. This bidirectional callback model decouples the dispatcher from transport-specific details.

Sources: muxio/src/rpc_dispatcher.rs extensions/muxio-tokio-rpc-client/src/rpc_client.rs extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs

Custom Transport Implementation

To implement a new transport, provide three components:

  1. Connection Management: Establish and maintain the underlying connection (TCP, UDP, IPC, etc.)
  2. Read Integration: Call dispatcher.process_incoming_bytes() when data arrives
  3. Write Integration: Pass a closure to the dispatcher that transmits bytes

Example Transport Structure

The transport must handle buffering, error recovery, and connection state. The dispatcher remains unaware of these transport-specific concerns.
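
A structural sketch with stand-in types; the real RpcDispatcher surface differs, so treat the reference implementations cited below as the source of truth:

```rust
use std::sync::{Arc, Mutex};

/// Stand-in for the real dispatcher type; only the method named on this
/// page is sketched here.
struct Dispatcher;
impl Dispatcher {
    fn process_incoming_bytes(&mut self, _bytes: &[u8]) {
        // frame reassembly + routing to pending requests / handlers
    }
}

/// Hypothetical custom transport wiring (names are illustrative).
struct MyTransport {
    dispatcher: Arc<Mutex<Dispatcher>>,
    // Pushes serialized frames onto the wire (socket, channel, postMessage).
    write_bytes: Box<dyn Fn(&[u8]) + Send + Sync>,
}

impl MyTransport {
    // 1. Connection management lives wherever `write_bytes` sends to.

    // 2. Read integration: the receive loop feeds raw bytes in.
    fn on_bytes_received(&self, bytes: &[u8]) {
        self.dispatcher.lock().unwrap().process_incoming_bytes(bytes);
    }

    // 3. Write integration: the dispatcher serializes outgoing frames and
    //    invokes `write_bytes`, which transmits them over the connection.
}
```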

For reference implementations, see extensions/muxio-tokio-rpc-client/src/rpc_client.rs:50-200 for Tokio/WebSocket integration and extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:80-250 for WASM/browser integration.

Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs


Performance Optimization Strategies

Serialization Overhead Reduction

The default service definition examples use bitcode for serialization. For performance-critical paths, consider these alternatives:

| Serialization Format | Use Case | Relative Speed | Binary Size |
|----------------------|----------|----------------|-------------|
| bitcode | General purpose | Baseline (1x) | Compact |
| bytemuck cast | Numeric arrays | 10-50x faster | Minimal |
| Custom binary layout | Fixed schemas | 5-20x faster | Optimal |
| Zero-copy views | Large buffers | Near-instant | Same as input |

For numeric array transfers (e.g., sensor data, time series), implement RpcMethodPrebuffered with bytemuck::cast_slice to avoid serialization overhead entirely. This approach requires fixed-size, layout-compatible types.
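
For instance, an f32 sample buffer can cross the wire as raw bytes. The bytemuck calls below are real; wiring them into your method's encode/decode functions is an assumption that depends on your service definition:

```rust
/// Zero-serialization encode: view the float buffer as bytes.
fn encode_samples(samples: &[f32]) -> Vec<u8> {
    // f32 is `Pod`, so this cast is a free reinterpretation, not a re-encode.
    bytemuck::cast_slice(samples).to_vec()
}

/// Decode side: `pod_collect_to_vec` copies, so it tolerates unaligned
/// input. Panics if `bytes.len()` is not a multiple of 4.
fn decode_samples(bytes: &[u8]) -> Vec<f32> {
    bytemuck::pod_collect_to_vec(bytes)
}
```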

Sources: Cargo.toml:53 examples/example-muxio-rpc-service-definition/src/prebuffered.rs

Frame Size Tuning

The framing protocol uses a default chunk size. Adjusting this affects throughput and latency:

  • Smaller frames (8-16 KB): Lower latency for concurrent requests, better interleaving
  • Larger frames (64-128 KB): Higher throughput, fewer frame headers, reduced CPU overhead

For bulk data transfer, prefer larger frames. For interactive applications, prefer smaller frames. The optimal size depends on the transport's MTU and the application's latency requirements.

Sources: muxio/src/rpc_request_response.rs

Connection Pooling

For native clients making many short-lived RPC calls, maintain a connection pool to amortize connection establishment overhead. The RpcClient is Send + Sync, allowing shared usage across threads.

```mermaid
graph TB
    subgraph "Application Threads"
        Thread1["Thread 1"]
        Thread2["Thread 2"]
        Thread3["Thread 3"]
    end

    subgraph "Connection Pool"
        Pool["Arc<RpcClient>"]
        Connection1["Connection 1"]
        Connection2["Connection 2"]
    end

    subgraph "Server"
        Server["RpcServer"]
    end

    Thread1 -->|Shared ref| Pool
    Thread2 -->|Shared ref| Pool
    Thread3 -->|Shared ref| Pool

    Pool -->|Uses| Connection1
    Pool -->|Uses| Connection2

    Connection1 -->|WebSocket| Server
    Connection2 -->|WebSocket| Server
```

Wrap clients in Arc and clone the Arc across threads. Each thread can concurrently issue requests over the same connection without blocking.
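
A sketch of the sharing pattern, again assuming the Add method from the examples crate:

```rust
use std::sync::Arc;
use muxio_tokio_rpc_client::RpcClient;

async fn fan_out(client: RpcClient) {
    let client = Arc::new(client);
    let mut tasks = Vec::new();
    for i in 0..8 {
        let client = Arc::clone(&client);
        // Every task multiplexes over the same underlying connection; the
        // dispatcher keeps the interleaved requests apart by request ID.
        tasks.push(tokio::spawn(async move {
            Add::call(&*client, vec![i as f64, 1.0]).await
        }));
    }
    for task in tasks {
        let _ = task.await;
    }
}
```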

Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs README.md:92-128

Memory Management for Streaming

When using streaming RPC calls (see Streaming RPC Calls), allocate channels with bounded capacity to prevent unbounded memory growth if the consumer cannot keep pace with the producer.

Bounded channels apply backpressure, blocking the sender when the buffer fills. This prevents memory exhaustion at the cost of potential deadlock if channels form cycles. For unidirectional streams, bounded channels are typically safe.
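
A sketch using tokio's bounded mpsc channel; the 32-chunk capacity is an arbitrary illustration:

```rust
use tokio::sync::mpsc;

async fn bounded_stream() {
    // Bounded buffer: once 32 chunks are in flight, `send` awaits instead
    // of queueing unboundedly, applying backpressure to the producer.
    let (tx, mut rx) = mpsc::channel::<Vec<u8>>(32);

    let consumer = tokio::spawn(async move {
        while let Some(chunk) = rx.recv().await {
            // ... process the chunk at the consumer's own pace ...
            let _ = chunk;
        }
    });

    for _ in 0..1000 {
        // Suspends (asynchronously) whenever the buffer is full.
        if tx.send(vec![0u8; 64 * 1024]).await.is_err() {
            break;
        }
    }
    drop(tx);
    let _ = consumer.await;
}
```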

Sources: extensions/muxio-rpc-service-caller/src/streaming/ extensions/muxio-rpc-service-endpoint/src/streaming/


Cross-Platform Deployment Patterns

```mermaid
graph TB
    subgraph "Shared Crate"
        ServiceDef["example-muxio-rpc-service-definition"]
        AddMethod["Add::METHOD_ID\nencode_request\ndecode_response"]
        MultMethod["Mult::METHOD_ID\nencode_request\ndecode_response"]
    end

    subgraph "Native Client"
        NativeApp["Native Application"]
        RpcClient["RpcClient\n(Tokio)"]
    end

    subgraph "WASM Client"
        WasmApp["WASM Application"]
        RpcWasmClient["RpcWasmClient\n(wasm-bindgen)"]
    end

    subgraph "Server"
        RpcServer["RpcServer\n(Tokio)"]
        Handlers["Method Handlers"]
    end

    ServiceDef -->|Defines API| AddMethod
    ServiceDef -->|Defines API| MultMethod

    NativeApp -->|Uses| AddMethod
    NativeApp -->|Calls via| RpcClient

    WasmApp -->|Uses| AddMethod
    WasmApp -->|Calls via| RpcWasmClient

    RpcServer -->|Uses| AddMethod
    RpcServer -->|Implements| Handlers

    RpcClient -->|WebSocket| RpcServer
    RpcWasmClient -->|WebSocket| RpcServer
```

Shared Service Definitions

The key to cross-platform deployment is defining RPC methods in a platform-agnostic crate that both native and WASM clients import.

All three environments compile the same service definition crate. The native and WASM clients use different transport implementations (RpcClient vs RpcWasmClient), but the method invocation syntax is identical.
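
Illustratively, a helper that compiles for both targets; the trait name follows this page, but the exact bounds and the Add::call signature are assumptions to verify against the caller crate:

```rust
use example_muxio_rpc_service_definition::prebuffered::Add;
use muxio_rpc_service_caller::RpcServiceCallerInterface;

/// Builds for native and wasm32 alike; only the concrete client type
/// passed in differs between the two builds.
async fn add_numbers<C: RpcServiceCallerInterface>(client: &C) -> f64 {
    Add::call(client, vec![1.0, 2.0, 3.0])
        .await
        .expect("rpc failed")
}
```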

Sources: examples/example-muxio-rpc-service-definition/ README.md:47-48

Conditional Compilation for Platform-Specific Features

Use Cargo features to enable platform-specific functionality:
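
A hypothetical application-side Cargo.toml; feature names, paths, and versions are placeholders:

```toml
[features]
default = ["native"]
native = ["dep:muxio-tokio-rpc-client", "dep:tokio"]
wasm = ["dep:muxio-wasm-rpc-client"]

[dependencies]
# Shared service definitions compile on every target.
example-muxio-rpc-service-definition = { path = "../example-muxio-rpc-service-definition" }
muxio-tokio-rpc-client = { version = "*", optional = true }
muxio-wasm-rpc-client = { version = "*", optional = true }
tokio = { version = "1", features = ["full"], optional = true }
```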

Application code can conditionally compile different client types:
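
For example (the alias name is invented for illustration):

```rust
// One alias, two transports; everything downstream stays platform-agnostic.
#[cfg(not(target_arch = "wasm32"))]
pub type PlatformClient = muxio_tokio_rpc_client::RpcClient;

#[cfg(target_arch = "wasm32")]
pub type PlatformClient = muxio_wasm_rpc_client::RpcWasmClient;
```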

The service definitions remain unchanged. Only the transport layer varies.

Sources: extensions/muxio-tokio-rpc-client/Cargo.toml extensions/muxio-wasm-rpc-client/Cargo.toml

WASM Binary Size Optimization

WASM builds benefit from aggressive optimization flags:
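
Typical settings look like the following; these are standard Cargo profile keys, though whether this workspace uses exactly these values is not guaranteed:

```toml
[profile.release]
opt-level = "z"   # optimize for size rather than speed
lto = true        # cross-crate inlining and dead-code elimination
codegen-units = 1 # slower builds, smaller and faster output
panic = "abort"   # drop unwinding tables from the binary
strip = true      # remove debug symbols

# Post-processing with `wasm-opt -Oz` usually shaves off additional bytes.
```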

Additionally, ensure the WASM client does not transitively depend on native-only crates like tokio or tokio-tungstenite. The workspace structure in Cargo.toml:19-31 isolates WASM-specific dependencies to prevent bloat.

Sources: Cargo.toml extensions/muxio-wasm-rpc-client/Cargo.toml


Low-Level Protocol Details

Binary Frame Format

Each frame transmitted over the transport follows this structure:

| Offset | Size | Field | Description |
|--------|------|-------|-------------|
| 0 | 1 | FrameType | Enum: First=0, Middle=1, Last=2, OnlyChunk=3 |
| 1 | 4 | request_id | u32 unique identifier |
| 5 | N | payload | Serialized RPC data |

The FrameType enum enables the dispatcher to reassemble multi-frame messages. For single-frame messages, OnlyChunk (value 3) is used, avoiding intermediate buffering.
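
An illustrative decoder for this layout; byte order is an assumption here, and the crate's framing code remains authoritative:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum FrameType {
    First = 0,
    Middle = 1,
    Last = 2,
    OnlyChunk = 3,
}

/// Splits a raw frame into (type, request_id, payload) per the table above.
fn parse_frame(buf: &[u8]) -> Option<(FrameType, u32, &[u8])> {
    let frame_type = match *buf.first()? {
        0 => FrameType::First,
        1 => FrameType::Middle,
        2 => FrameType::Last,
        3 => FrameType::OnlyChunk,
        _ => return None,
    };
    // Little-endian is assumed; check rpc_request_response.rs.
    let request_id = u32::from_le_bytes(buf.get(1..5)?.try_into().ok()?);
    Some((frame_type, request_id, &buf[5..]))
}
```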

Sources: muxio/src/rpc_request_response.rs

RPC Header Structure

Within the frame payload, RPC messages include a header:
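
A sketch of its shape, reconstructed from the field descriptions on this page rather than the crate's literal definition:

```rust
/// Illustrative layout; the authoritative struct lives in
/// muxio/src/rpc_request_response.rs.
struct RpcHeader {
    /// XXH3 hash of the method name; the server's routing key.
    method_id: u64,
    /// Small, inline parameters (e.g., method arguments).
    params_bytes: Option<Vec<u8>>,
    /// Large, prebuffered data (e.g., file contents).
    prebuffered_payload_bytes: Option<Vec<u8>>,
}
```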

The method_id field uses the xxhash-rust XXH3 algorithm to hash method names at compile time. For example, "Add" hashes to a specific u64. This hash serves as the routing key on the server.

The optional fields support different RPC patterns:

  • params_bytes: Small, inline parameters (e.g., method arguments)
  • prebuffered_payload_bytes: Large, prebuffered data (e.g., file contents)
  • Absence of both: Parameter-less methods

Sources: muxio/src/rpc_request_response.rs extensions/muxio-rpc-service/src/lib.rs

Method ID Collision Detection

Method IDs are generated at compile time using the xxhash crate:
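
For example, with xxhash-rust's const_xxh3 feature enabled (the method name is illustrative):

```rust
use xxhash_rust::const_xxh3::xxh3_64;

// Evaluated at compile time; the resulting u64 is the method's routing key.
pub const ADD_METHOD_ID: u64 = xxh3_64(b"Add");
```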

While XXH3 is a high-quality hash, collisions are theoretically possible. The system does not automatically detect collisions across a codebase. If two methods hash to the same ID, the server will route requests to whichever handler was registered last.

To mitigate collision risk:

  1. Use descriptive, unique method names
  2. Implement integration tests that register all methods and verify correct routing
  3. Consider a build-time collision checker using build.rs, as sketched below
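
A sketch of such a checker; the method-name list is hand-maintained and hypothetical, and xxhash-rust must be listed under [build-dependencies]:

```rust
// build.rs
use std::collections::HashMap;
use xxhash_rust::const_xxh3::xxh3_64;

fn main() {
    // Keep this list in sync with the methods your service defines.
    let method_names = ["Add", "Mult", "Echo"];
    let mut seen: HashMap<u64, &str> = HashMap::new();
    for name in method_names {
        if let Some(prev) = seen.insert(xxh3_64(name.as_bytes()), name) {
            panic!("METHOD_ID collision: `{prev}` and `{name}` share a hash");
        }
    }
}
```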

Sources: extensions/muxio-rpc-service/src/lib.rs Cargo.toml:64


Advanced Error Handling Strategies

Layered Error Propagation

Errors flow through multiple system layers, each with its own error type: transport-level I/O errors, dispatcher-level protocol errors, and application-level RpcServiceError values returned by handlers.

Each layer handles errors appropriate to its abstraction level. Application errors are serialized and returned in RPC responses. Dispatcher errors indicate protocol violations. Transport errors trigger state changes.

Sources: muxio/src/rpc_request_response.rs extensions/muxio-rpc-service/src/error.rs

Handling Partial Failures

When a server handler fails, the error is serialized into the RpcResponse and transmitted to the client. The client's RpcServiceCallerInterface implementation deserializes the error and returns it to the application.

For transient failures (e.g., temporary resource unavailability), implement retry logic in the application layer. The transport layer does not retry failed RPC calls.

For permanent failures (e.g., method not implemented), the server returns RpcServiceError::MethodNotFound. Clients should not retry these errors.
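
A sketch of application-level retry with exponential backoff, again assuming the example Add method; classifying errors as transient or permanent is left to the application:

```rust
use std::time::Duration;

// Hypothetical retry wrapper: back off on transient failures, and never
// retry permanent ones such as RpcServiceError::MethodNotFound (inspect
// the error inside the Err arms to tell them apart).
async fn call_with_retry(client: &RpcClient) -> Option<f64> {
    let mut delay = Duration::from_millis(50);
    for attempt in 0..3 {
        match Add::call(client, vec![1.0, 2.0]).await {
            Ok(value) => return Some(value),
            Err(_) if attempt < 2 => {
                tokio::time::sleep(delay).await;
                delay *= 2; // 50 ms, then 100 ms
            }
            Err(_) => return None,
        }
    }
    None
}
```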

Sources: extensions/muxio-rpc-service/src/error.rs extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs

Connection State Management

The RpcTransportState enum tracks connection lifecycle:
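
Its variants, as suggested by the README's callback example (verify against the client crate):

```rust
// Indicative shape only; see extensions/muxio-tokio-rpc-client for the
// authoritative definition.
pub enum RpcTransportState {
    Connected,
    Disconnected,
}
```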

Clients can register a state change callback to implement custom reconnection strategies:
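
A registration sketch in the README's style; the exact method signature is an assumption to verify:

```rust
async fn watch_connection(client: &RpcClient) {
    client
        .set_state_change_handler(|state: RpcTransportState| {
            // Runs on every transition; kick off reconnect logic on
            // Disconnected, resubscribe on Connected, and so on.
            tracing::info!(?state, "transport state changed");
        })
        .await;
}
```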

The handler is invoked on every state transition, enabling reactive error handling.

Sources: extensions/muxio-tokio-rpc-client/src/rpc_client.rs README.md:138-141


Testing Advanced Scenarios

Mock Transport for Unit Tests

Create an in-memory transport using channels to test RPC logic without network I/O:

```mermaid
graph LR
    subgraph "Test Harness"
        ClientDispatcher["Client RpcDispatcher"]
        ServerDispatcher["Server RpcDispatcher"]
        ClientToServer["mpsc channel"]
        ServerToClient["mpsc channel"]
    end

    ClientDispatcher -->|write_bytes| ClientToServer
    ClientToServer -->|read_bytes| ServerDispatcher
    ServerDispatcher -->|write_bytes| ServerToClient
    ServerToClient -->|read_bytes| ClientDispatcher
```

This pattern isolates RPC logic from transport concerns, enabling deterministic tests of error conditions, cancellation, and concurrent request handling.
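
A wiring sketch of the harness above, using tokio channels as the "wire"; the dispatcher hookups are left as comments because their exact signatures belong to the crate:

```rust
use tokio::sync::mpsc;

async fn in_memory_transport() {
    // Two unidirectional byte pipes stand in for a socket.
    let (client_tx, mut server_rx) = mpsc::unbounded_channel::<Vec<u8>>();
    let (server_tx, mut client_rx) = mpsc::unbounded_channel::<Vec<u8>>();

    // Server loop: bytes written by the client arrive here.
    tokio::spawn(async move {
        while let Some(bytes) = server_rx.recv().await {
            // server_dispatcher.process_incoming_bytes(&bytes);
            // Responses travel back through server_tx.
            let _ = (&server_tx, bytes);
        }
    });

    // Client loop: bytes written by the server arrive here.
    tokio::spawn(async move {
        while let Some(bytes) = client_rx.recv().await {
            // client_dispatcher.process_incoming_bytes(&bytes);
            let _ = bytes;
        }
    });

    // The client dispatcher's write_bytes closure sends into client_tx.
    let _ = client_tx;
}
```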

Sources: extensions/muxio-ext-test/

Integration Testing Across Platforms

Run integration tests that compile the same service definition for both native and WASM targets.

Use wasm-pack test --headless --chrome to run WASM tests in a browser environment. This validates that both client types correctly implement the RpcServiceCallerInterface.

Sources: extensions/muxio-tokio-rpc-client/tests/prebuffered_integration_tests.rs extensions/muxio-wasm-rpc-client/tests/

Load Testing and Benchmarking

Use the criterion crate to benchmark serialization overhead, frame processing throughput, and end-to-end latency:
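
A criterion skeleton; the Add::encode_request call referenced in the comment is an assumed stand-in for whatever hot path you want to measure:

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn bench_encode(c: &mut Criterion) {
    c.bench_function("add_encode_request", |b| {
        b.iter(|| {
            // e.g. Add::encode_request(black_box(vec![1.0, 2.0, 3.0]))
            black_box(vec![1.0f64, 2.0, 3.0])
        })
    });
}

criterion_group!(benches, bench_encode);
criterion_main!(benches);
```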

For distributed load testing, spawn multiple client instances and measure:

  • Requests per second
  • 99th percentile latency
  • Connection establishment time
  • Memory usage under load

Sources: Cargo.toml:54 DRAFT.md:23


Monitoring and Observability

Tracing Integration

The system uses the tracing crate for structured logging. Enable verbose logging during development:
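
For example, with tracing-subscriber's env-filter feature (the muxio=trace target name is an assumption about the crates' span targets):

```rust
fn init_dev_logging() {
    // RUST_LOG-style directive: trace-level for muxio, info elsewhere.
    tracing_subscriber::fmt()
        .with_env_filter("muxio=trace,info")
        .init();
}
```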

Key tracing events include:

  • Request dispatch: request_id, method_id, params_size
  • Response completion: request_id, elapsed_time_ms
  • Connection state changes: old_state, new_state
  • Frame processing: frame_type, payload_size

Sources: Cargo.toml:37 README.md:84

Custom Metrics Collection

Implement custom metrics by wrapping the RpcServiceCallerInterface:
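
A decorator sketch; the wrapped type and the delegation point are placeholders, since the trait's concrete methods are defined in the caller crate:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::Instant;

/// Wraps any caller-like value and counts/times the calls routed through it.
struct InstrumentedCaller<C> {
    inner: C,
    requests: AtomicU64,
}

impl<C> InstrumentedCaller<C> {
    /// Time one RPC future; call this from your trait-method overrides
    /// before delegating to `self.inner`.
    async fn timed<T>(&self, fut: impl std::future::Future<Output = T>) -> T {
        self.requests.fetch_add(1, Ordering::Relaxed);
        let started = Instant::now();
        let out = fut.await;
        // Swap this for a Prometheus/statsd export in production.
        tracing::debug!(elapsed_ms = started.elapsed().as_millis() as u64, "rpc call finished");
        out
    }
}
```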

Override trait methods to record metrics before delegating to the wrapped client. This pattern enables integration with Prometheus, statsd, or custom monitoring systems without modifying core muxio code.

Sources: extensions/muxio-rpc-service-caller/src/caller_interface.rs

Connection Health Monitoring

Implement heartbeat RPC methods to detect dead connections:
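
A polling sketch; Ping is a hypothetical prebuffered method you would define next to Add/Mult in the shared crate, and its call signature is assumed:

```rust
use std::time::{Duration, Instant};

async fn heartbeat(client: &RpcClient) {
    let mut ticker = tokio::time::interval(Duration::from_secs(10));
    loop {
        ticker.tick().await;
        let started = Instant::now();
        let call = Ping::call(client, vec![]);
        match tokio::time::timeout(Duration::from_secs(2), call).await {
            Ok(Ok(_)) => {
                tracing::debug!(rtt_ms = started.elapsed().as_millis() as u64, "pong");
            }
            // Timeout or error: treat the connection as degraded or dead.
            _ => tracing::warn!("heartbeat failed"),
        }
    }
}
```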

Periodically invoke Ping::call() and measure response time. Elevated latency or timeouts indicate network degradation.

Sources: examples/example-muxio-rpc-service-definition/src/prebuffered.rs


This page covers advanced usage patterns, optimization techniques, and low-level implementation details. For extension development guidelines, see extensions/README.md. For basic usage, start with Overview.
