This documentation is part of the "Projects with Books" initiative at zenOSmosis.

The source code for this project is available on GitHub.

Overview

Purpose and Scope

This document provides a high-level introduction to the rust-muxio system, a toolkit for building efficient, transport-agnostic multiplexed communication systems with type-safe RPC capabilities. This page explains what Muxio is, its architectural layers, and core design principles.

For detailed information about specific subsystems, see the dedicated pages of this documentation.

Sources: README.md:1-166 DRAFT.md:9-53


What is Muxio?

Muxio is a high-performance Rust framework that provides two primary capabilities:

  1. Binary Stream Multiplexing: A low-level framing protocol that manages multiple concurrent data streams over a single connection, handling frame interleaving, reassembly, and ordering.

  2. Lightweight RPC Framework: A minimalist RPC abstraction built on the multiplexing layer, providing request correlation, method dispatch, and bidirectional communication without imposing opinions about serialization or transport.

The system is designed around a "core + extensions" architecture. The muxio core library (Cargo.toml:10) provides runtime-agnostic, transport-agnostic primitives. Extension crates build concrete implementations for specific environments (Tokio async runtime, WebAssembly/browser, etc.).

Sources: README.md:18-23 Cargo.toml:10-17


System Architecture

The following diagram illustrates the layered architecture and primary components:

Layered Architecture Overview

graph TB
    subgraph "Application Layer"
        APP["Application Code\nTyped RPC Calls"]
    end

    subgraph "Service Definition Layer"
        SERVICE_DEF["Service Definition Crate\nRpcMethodPrebuffered implementations\nMETHOD_ID generation"]
    end

    subgraph "RPC Abstraction Layer"
        CALLER["muxio-rpc-service-caller\nRpcServiceCallerInterface"]
        ENDPOINT["muxio-rpc-service-endpoint\nRpcServiceEndpointInterface"]
        SERVICE["muxio-rpc-service\nRpcMethodPrebuffered trait"]
    end

    subgraph "Core Multiplexing Layer"
        DISPATCHER["RpcDispatcher\nRequest correlation\nStream management"]
        FRAMING["Binary Framing Protocol\nFrame chunking and reassembly"]
    end

    subgraph "Transport Implementations"
        TOKIO_SERVER["muxio-tokio-rpc-server\nRpcServer\nAxum + WebSocket"]
        TOKIO_CLIENT["muxio-tokio-rpc-client\nRpcClient\nTokio + tungstenite"]
        WASM_CLIENT["muxio-wasm-rpc-client\nRpcWasmClient\nwasm-bindgen"]
    end

    APP --> SERVICE_DEF
    SERVICE_DEF --> CALLER
    SERVICE_DEF --> ENDPOINT
    SERVICE_DEF --> SERVICE
    CALLER --> DISPATCHER
    ENDPOINT --> DISPATCHER
    SERVICE --> CALLER
    SERVICE --> ENDPOINT
    DISPATCHER --> FRAMING
    FRAMING --> TOKIO_SERVER
    FRAMING --> TOKIO_CLIENT
    FRAMING --> WASM_CLIENT

Layer | Crates | Responsibilities
Application | User code | Invokes typed RPC methods, receives typed responses
Service Definition | example-muxio-rpc-service-definition | Defines shared service contracts with compile-time METHOD_ID generation
RPC Abstraction | muxio-rpc-service, muxio-rpc-service-caller, muxio-rpc-service-endpoint | Provides traits for method definition, client invocation, and server dispatch
Core Multiplexing | muxio | Manages request correlation, stream multiplexing, and binary framing
Transport | muxio-tokio-rpc-server, muxio-tokio-rpc-client, muxio-wasm-rpc-client | Concrete implementations for specific runtimes and platforms

Each layer depends only on the layers below it, enabling modular composition. The core muxio library has zero knowledge of RPC concepts, and the RPC layer has zero knowledge of specific transports.

Sources: README.md:14-40 Cargo.toml:19-31 DRAFT.md:9-26


Key Design Principles

Binary Protocol

All data transmission uses a compact binary format. The framing protocol uses minimal headers to reduce overhead. RPC payloads are serialized as raw bytes, with no assumptions about the serialization format (though extensions commonly use bitcode for efficiency).

Key characteristics:

  • Frame headers contain only essential metadata
  • No text-based parsing overhead
  • Supports arbitrary binary payloads
  • Low CPU and bandwidth requirements

Transport Agnostic

The muxio core library implements all multiplexing logic through callback interfaces. This design allows integration with any transport mechanism:

  • WebSocket: Used by the Tokio server/client and the WASM client
  • TCP: Can be implemented with custom transports
  • In-memory channels: Used for testing
  • Any byte-oriented transport: Custom implementations possible

The core library never directly performs I/O. Instead, it accepts bytes via callbacks and emits bytes through return values or callbacks.
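
To make this callback-driven model concrete, the sketch below wires a minimal frame engine to an in-memory "transport". The FrameEngine type and its handle_inbound/on_outbound names are hypothetical illustrations of the pattern, not muxio's actual API; the real core exposes its own types such as RpcDispatcher.

// Hypothetical sketch of the callback-driven I/O pattern described above.
// None of these names come from muxio's real API; they only illustrate how
// a core that never performs I/O itself can be glued to any transport.

/// Core-side component: consumes raw bytes and emits encoded frames through
/// a caller-supplied callback instead of writing to a socket.
struct FrameEngine {
    on_outbound: Box<dyn FnMut(&[u8])>,
}

impl FrameEngine {
    fn new(on_outbound: impl FnMut(&[u8]) + 'static) -> Self {
        Self { on_outbound: Box::new(on_outbound) }
    }

    /// Feed bytes received from *any* transport (WebSocket, TCP, in-memory).
    fn handle_inbound(&mut self, bytes: &[u8]) {
        // A real implementation would reassemble frames and dispatch them.
        // Here we simply pass the bytes back through the outbound callback.
        (self.on_outbound)(bytes);
    }
}

fn main() {
    // The "transport" is just a closure standing in for a socket write.
    let mut engine = FrameEngine::new(|frame: &[u8]| {
        println!("would write {} bytes to the transport", frame.len());
    });

    // Bytes arriving from the wire are pushed in synchronously; no async
    // runtime is required, which is also what makes the core runtime-agnostic.
    engine.handle_inbound(&[0x01, 0x02, 0x03]);
}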

Runtime Agnostic

The core muxio library uses synchronous control flow with callbacks, avoiding dependencies on specific async runtimes:

  • No async/await in core library
  • Compatible with Tokio, async-std, or no runtime at all
  • WASM-compatible (runs in single-threaded browser environment)
  • Extension crates adapt the core to specific runtimes (e.g., muxio-tokio-rpc-server uses Tokio)

This design enables the same core logic to work across radically different execution environments.

Cross-Platform Deployment

The architecture supports "write once, deploy everywhere" through shared service definitions:

graph LR
    subgraph "Shared Definition"
        SERVICE["example-muxio-rpc-service-definition\nAdd, Mult, Echo methods\nRpcMethodPrebuffered implementations"]
    end

    subgraph "Native Server"
        SERVER["RpcServer\nTokio runtime\nLinux/macOS/Windows"]
    end

    subgraph "Native Client"
        NATIVE["RpcClient\nTokio runtime\nCommand-line tools"]
    end

    subgraph "Web Client"
        WASM["RpcWasmClient\nWebAssembly\nBrowser JavaScript"]
    end

    SERVICE -.shared contract.-> SERVER
    SERVICE -.shared contract.-> NATIVE
    SERVICE -.shared contract.-> WASM

    NATIVE <-.WebSocket.-> SERVER
    WASM <-.WebSocket.-> SERVER

All implementations depend on the same service definition crate, ensuring API compatibility at compile time. A single server can handle requests from both native and WASM clients simultaneously.
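
As a rough illustration of the "write once" idea, the sketch below writes application logic against a stand-in caller abstraction. The real trait is RpcServiceCallerInterface from muxio-rpc-service-caller; the Caller, add_via, and LoopbackClient names here are assumptions made only to keep the example self-contained and runnable.

// Illustrative stand-in for the client-side caller abstraction. The real
// trait (RpcServiceCallerInterface) differs in detail; this sketch only
// shows why a shared definition lets one piece of application code target
// a Tokio-based client on native platforms or a wasm-bindgen client in the
// browser.
trait Caller {
    fn invoke(&self, method_id: u64, request: Vec<u8>) -> Vec<u8>;
}

// Application logic written once against the abstraction.
fn add_via<C: Caller>(client: &C, values: &[f64]) -> f64 {
    const ADD_METHOD_ID: u64 = 1; // placeholder; the real ID is hashed from "Add"
    let request: Vec<u8> = values.iter().flat_map(|v| v.to_le_bytes()).collect();
    let response = client.invoke(ADD_METHOD_ID, request);
    f64::from_le_bytes(response.as_slice().try_into().expect("8-byte response"))
}

// A loopback "client" standing in for RpcClient (Tokio) or RpcWasmClient (browser).
struct LoopbackClient;

impl Caller for LoopbackClient {
    fn invoke(&self, _method_id: u64, request: Vec<u8>) -> Vec<u8> {
        let sum: f64 = request
            .chunks_exact(8)
            .map(|c| f64::from_le_bytes(c.try_into().unwrap()))
            .sum();
        sum.to_le_bytes().to_vec()
    }
}

fn main() {
    let client = LoopbackClient;
    assert_eq!(add_via(&client, &[1.0, 2.0, 3.0]), 6.0);
}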

Sources: README.md:41-52 DRAFT.md:48-52 README.md:63-160


Repository Structure

The repository uses a Cargo workspace with the following organization:

Core Library

  • muxio (Cargo.toml:10): The foundational crate providing stream multiplexing and binary framing. This crate has minimal dependencies and makes no assumptions about RPC, serialization, or transport.

RPC Extensions

Located in extensions/ (Cargo.toml:20-28):

Crate | Purpose
muxio-rpc-service | Defines the RpcMethodPrebuffered trait for service contracts
muxio-rpc-service-caller | Provides RpcServiceCallerInterface for client-side RPC invocation
muxio-rpc-service-endpoint | Provides RpcServiceEndpointInterface for server-side RPC dispatch
muxio-tokio-rpc-server | Tokio-based server with Axum and WebSocket support
muxio-tokio-rpc-client | Tokio-based client with connection management
muxio-wasm-rpc-client | WebAssembly client for browser environments
muxio-ext-test | Testing utilities for integration tests

Examples

Located in examples/ (Cargo.toml:29-30):

  • example-muxio-rpc-service-definition: Demonstrates shared service definitions with Add, Mult, and Echo methods
  • example-muxio-ws-rpc-app: Complete WebSocket RPC application showing server and client usage

Dependency Flow

Extensions depend on the core library and build progressively more opinionated abstractions. Applications depend on extensions and service definitions, never directly on the core library.

Sources: Cargo.toml:19-31 Cargo.toml:39-47 README.md:61-62


Type Safety Through Shared Definitions

The system achieves compile-time type safety by requiring both clients and servers to depend on the same service definition crate. The RpcMethodPrebuffered trait defines the contract:

graph TB
    subgraph "Service Definition"
        TRAIT["RpcMethodPrebuffered trait"]
        ADD["Add struct\nMETHOD_ID = xxhash('Add')\nencode_request(Vec&lt;f64&gt;)\ndecode_response() -> f64"]
        MULT["Mult struct\nMETHOD_ID = xxhash('Mult')\nencode_request(Vec&lt;f64&gt;)\ndecode_response() -> f64"]
    end

    subgraph "Client Usage"
        CLIENT_CALL["Add::call(&client, vec![1.0, 2.0, 3.0])\nResult&lt;f64, RpcServiceError&gt;"]
    end

    subgraph "Server Handler"
        SERVER_HANDLER["endpoint.register_prebuffered\n(Add::METHOD_ID, handler_fn)"]
        HANDLER_FN["handler_fn(request_bytes) ->\ndecode -> compute -> encode"]
    end

    TRAIT --> ADD
    TRAIT --> MULT

    ADD -.compile-time guarantee.-> CLIENT_CALL
    ADD -.compile-time guarantee.-> SERVER_HANDLER

    SERVER_HANDLER --> HANDLER_FN

Compile-Time Guarantees:

  1. Method ID Consistency: Each method's METHOD_ID is generated at compile time by hashing the method name with xxhash-rust. The same name always produces the same ID.

  2. Type Consistency: Both encode_request/decode_request and encode_response/decode_response use shared type definitions. Changing a parameter type breaks compilation for both client and server.

  3. Collision Detection: Duplicate method names produce duplicate METHOD_ID values, causing runtime panics during handler registration (which surface during integration tests).

This design eliminates a common class of distributed system bugs where client and server APIs drift out of sync.
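
As a rough illustration of how such a shared contract can look, the following sketch defines an Add method against a simplified trait modeled on the RpcMethodPrebuffered description above. The trait shape, the FNV-based method_id helper, and the little-endian encoding are assumptions chosen to keep the example dependency-free; the real crates derive METHOD_ID with xxhash-rust and commonly serialize payloads with bitcode.

// Simplified, self-contained sketch of a shared service definition. The trait
// below is modeled on the RpcMethodPrebuffered description above, but its
// exact shape is an assumption; consult muxio-rpc-service for the real one.

const fn method_id(name: &str) -> u64 {
    // Deterministic compile-time hash of the method name (FNV-1a used here as
    // a dependency-free stand-in for the xxhash-based ID in the real crates).
    let bytes = name.as_bytes();
    let mut hash = 0xcbf29ce484222325u64;
    let mut i = 0;
    while i < bytes.len() {
        hash ^= bytes[i] as u64;
        hash = hash.wrapping_mul(0x100000001b3);
        i += 1;
    }
    hash
}

trait PrebufferedMethod {
    const METHOD_ID: u64;
    type Request;
    type Response;
    fn encode_request(req: &Self::Request) -> Vec<u8>;
    fn decode_response(bytes: &[u8]) -> Self::Response;
}

struct Add;

impl PrebufferedMethod for Add {
    const METHOD_ID: u64 = method_id("Add");
    type Request = Vec<f64>;
    type Response = f64;

    fn encode_request(req: &Self::Request) -> Vec<u8> {
        req.iter().flat_map(|v| v.to_le_bytes()).collect()
    }

    fn decode_response(bytes: &[u8]) -> Self::Response {
        f64::from_le_bytes(bytes.try_into().expect("response must be 8 bytes"))
    }
}

fn main() {
    // Client and server both depend on this one definition, so renaming the
    // method or changing Request/Response types is caught on both ends.
    let wire = Add::encode_request(&vec![1.0, 2.0, 3.0]);
    assert_eq!(wire.len(), 24);
    assert_eq!(Add::decode_response(&6.0f64.to_le_bytes()), 6.0);
    println!("Add::METHOD_ID = {:#x}", Add::METHOD_ID);
}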

Sources: README.md:49 README.md:69-118 Cargo.toml:52 Cargo.toml:64


Communication Flow

The following diagram traces a complete RPC call from application code through all system layers:

sequenceDiagram
    participant App as "Application"
    participant Method as "Add::call"
    participant Client as "RpcClient\n(or RpcWasmClient)"
    participant Dispatcher as "RpcDispatcher"
    participant Transport as "WebSocket"
    participant Server as "RpcServer"
    participant Handler as "Add handler"

    App->>Method: call(&client, vec![1.0, 2.0, 3.0])
    Method->>Method: encode_request() -> bytes
    Method->>Client: invoke(METHOD_ID, bytes)
    Client->>Dispatcher: send_request(METHOD_ID, bytes)
    Dispatcher->>Dispatcher: assign request_id
    Dispatcher->>Dispatcher: serialize to frames
    Dispatcher->>Transport: write binary frames

    Transport->>Server: receive frames
    Server->>Dispatcher: process_incoming_bytes
    Dispatcher->>Dispatcher: reassemble frames
    Dispatcher->>Dispatcher: route by METHOD_ID
    Dispatcher->>Handler: invoke(request_bytes)
    Handler->>Handler: decode -> compute -> encode
    Handler->>Dispatcher: return response_bytes

    Dispatcher->>Dispatcher: serialize response
    Dispatcher->>Transport: write binary frames
    Transport->>Client: receive frames
    Client->>Dispatcher: process_incoming_bytes
    Dispatcher->>Dispatcher: match request_id
    Dispatcher->>Client: resolve with bytes
    Client->>Method: return bytes
    Method->>Method: decode_response() -> f64
    Method->>App: return Result<f64>

Key Observations:

  1. Application code works with typed values (Vec<f64> in, f64 out)
  2. Service definitions handle encoding/decoding
  3. RpcDispatcher manages request correlation and multiplexing
  4. Multiple requests can be in-flight simultaneously over a single connection
  5. The binary framing protocol handles interleaved frames from concurrent requests
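
The sketch below condenses this round trip into a single process, collapsing the WebSocket transport and the RpcDispatcher into an in-memory routing table. The Endpoint type and the register_prebuffered/dispatch signatures are illustrative assumptions that mirror the call shapes in the diagram, not the real crate APIs.

// Self-contained sketch of the round trip above. Names like
// register_prebuffered echo the diagram, but the signatures here are
// illustrative assumptions rather than muxio's real API.
use std::collections::HashMap;

type Handler = Box<dyn Fn(&[u8]) -> Vec<u8>>;

struct Endpoint {
    handlers: HashMap<u64, Handler>,
}

impl Endpoint {
    fn new() -> Self {
        Self { handlers: HashMap::new() }
    }

    /// Server side: associate a METHOD_ID with a bytes-in/bytes-out handler.
    fn register_prebuffered(&mut self, method_id: u64, handler: impl Fn(&[u8]) -> Vec<u8> + 'static) {
        self.handlers.insert(method_id, Box::new(handler));
    }

    /// Stand-in for the dispatcher's "route by METHOD_ID" step.
    fn dispatch(&self, method_id: u64, request: &[u8]) -> Vec<u8> {
        let handler = self.handlers.get(&method_id).expect("unknown METHOD_ID");
        handler(request)
    }
}

const ADD_METHOD_ID: u64 = 1; // placeholder; the real crates hash the name "Add"

fn main() {
    let mut endpoint = Endpoint::new();

    // Handler: decode request bytes -> compute -> encode response bytes.
    endpoint.register_prebuffered(ADD_METHOD_ID, |req: &[u8]| {
        let sum: f64 = req
            .chunks_exact(8)
            .map(|c| f64::from_le_bytes(c.try_into().unwrap()))
            .sum();
        sum.to_le_bytes().to_vec()
    });

    // "Client" side: encode typed arguments, send, then decode the typed result.
    let request: Vec<u8> = [1.0f64, 2.0, 3.0].iter().flat_map(|v| v.to_le_bytes()).collect();
    let response = endpoint.dispatch(ADD_METHOD_ID, &request);
    let result = f64::from_le_bytes(response.as_slice().try_into().unwrap());
    assert_eq!(result, 6.0);
}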

Sources: README.md:69-160


Development Status

The project is currently in alpha status (Cargo.toml:3) and under active development (README.md:14). The core architecture is stable, but APIs may change before the 1.0 release.

Current Version: 0.10.0-alpha

Sources: README.md:14 Cargo.toml:3
