

Core Concepts

This document explains the fundamental architectural concepts and design principles that underpin Muxio. It provides a conceptual overview of the binary protocol, multiplexing mechanisms, RPC abstraction layer, and cross-platform capabilities. For detailed implementation specifics of the core library, see Core Library (muxio). For RPC framework details, see RPC Framework. For transport implementation patterns, see Transport Implementations.

Scope and Purpose

Muxio is built on a layered architecture where each layer serves a distinct purpose:

  1. Binary Framing Layer: Low-level frame encoding/decoding with stream identification
  2. Stream Multiplexing Layer: Managing multiple concurrent streams over a single connection
  3. RPC Protocol Layer: Request/response semantics and correlation
  4. Transport Abstraction Layer: Runtime-agnostic interfaces for different environments

The design prioritizes transport agnosticism, cross-platform compatibility (native + WASM), and type safety through shared service definitions.

Sources: README.md:17-54

Binary Protocol Foundation

Frame-Based Communication

Muxio uses a compact binary framing protocol where all data is transmitted as discrete frames. The RpcFrame struct defines the wire format, and the RpcFrameType enum distinguishes frame purposes.

RpcFrame Structure:

| Field | Type | Purpose |
|---|---|---|
| stream_id | u32 | Identifies which logical stream the frame belongs to |
| frame_type | RpcFrameType | Enum indicating frame purpose |
| payload_bytes | Vec<u8> | Raw bytes specific to the frame type |

RpcFrameType Variants:

| Variant | Binary Value | Purpose |
|---|---|---|
| Header | 0x01 | Contains serialized RpcHeader structure |
| Data | 0x02 | Contains payload chunk (up to DEFAULT_MAX_CHUNK_SIZE) |
| End | 0x03 | Signals stream completion |
| Cancel | 0x04 | Aborts stream mid-transmission |
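
Read together, the two tables map onto a Rust sketch like the following (field and variant names follow the tables above; derives, visibility, and helper methods are assumptions):

```rust
// Sketch of the wire types described above; the exact definitions live in
// the muxio core crate.
pub struct RpcFrame {
    pub stream_id: u32,           // which logical stream this frame belongs to
    pub frame_type: RpcFrameType, // how to interpret payload_bytes
    pub payload_bytes: Vec<u8>,   // header bytes, payload chunk, or empty
}

#[repr(u8)]
pub enum RpcFrameType {
    Header = 0x01, // carries a serialized RpcHeader
    Data = 0x02,   // carries a payload chunk (up to DEFAULT_MAX_CHUNK_SIZE)
    End = 0x03,    // signals stream completion
    Cancel = 0x04, // aborts the stream mid-transmission
}
```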
Diagram: Frame Type State Machine

```mermaid
stateDiagram-v2
    [*] --> AwaitingHeader
    AwaitingHeader --> ReceivingPayload: RpcFrameType::Header
    AwaitingHeader --> Complete: RpcFrameType::End
    AwaitingHeader --> Cancelled: RpcFrameType::Cancel
    ReceivingPayload --> ReceivingPayload: RpcFrameType::Data
    ReceivingPayload --> Complete: RpcFrameType::End
    ReceivingPayload --> Cancelled: RpcFrameType::Cancel
    Complete --> [*]
    Cancelled --> [*]
```

This framing approach enables multiple independent data streams to be interleaved over a single physical connection without interference. The RpcStreamEncoder::write_frame() method emits frames, while RpcStreamDecoder::decode_frame() processes incoming frames.


Sources: README.md:33-34 DRAFT.md:9-24 src/rpc/rpc_internals/rpc_stream_decoder.rs:1-100 src/rpc/rpc_internals/rpc_stream_encoder.rs:1-80

Non-Async Callback Design

The core multiplexing logic in muxio is implemented using synchronous control flow with callbacks rather than async/await. This design decision enables:

  • WASM Compatibility: Works in single-threaded JavaScript environments
  • Runtime Agnosticism: No dependency on Tokio, async-std, or any specific runtime
  • Flexible Integration: Can be wrapped with async interfaces when needed (see the sketch below)
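
As an example of that last point, a thin Tokio wrapper can adapt the callback API into async/await. This is a sketch under assumed signatures (call() follows the method table in the RpcDispatcher section below; muxio's RpcDispatcher, RpcRequest, RpcResponse, and Result alias are assumed in scope), not library code:

```rust
use tokio::sync::oneshot;

// Sketch: bridging the callback-driven core to async/await without
// touching the runtime-free core itself.
async fn call_async(
    dispatcher: &mut RpcDispatcher,
    request: RpcRequest,
) -> Result<RpcResponse> {
    let (tx, rx) = oneshot::channel();
    // The callback fires once the response's End frame has been processed.
    dispatcher.call(request, move |result| {
        let _ = tx.send(result);
    });
    // Suspend the caller until the callback delivers the result.
    rx.await.expect("dispatcher dropped without responding")
}
```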

Sources: README.md:35-36 DRAFT.md:48-52

Stream Multiplexing Architecture

RpcSession: Multi-Stream Management

The RpcSession component manages multiple concurrent logical streams over a single connection. It maintains per-stream state and ensures frames are correctly routed to their destination stream.

Key Methods:

| Method | Signature | Purpose |
|---|---|---|
| allocate_stream_id() | fn(&mut self) -> u32 | Assigns unique identifiers to new streams |
| init_request() | fn(&mut self, header: RpcHeader) -> u32 | Creates a new stream, returns the stream_id |
| read_bytes() | fn(&mut self, bytes: &[u8], callback: F) | Processes incoming frames, routes to decoders |
| remove_decoder() | fn(&mut self, stream_id: u32) | Cleans up completed/cancelled streams |

Internal State:

  • decoders: HashMap<u32, RpcStreamDecoder> - Per-stream decoder instances
  • next_stream_id: u32 - Monotonically increasing stream identifier counter
  • event_callback: F - Closure invoked for each RpcStreamEvent
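
In use, feeding raw transport bytes into the session looks roughly like this (a sketch: the RpcStreamEvent variant shapes follow the diagram below, and exact field names are assumptions):

```rust
// Sketch: one read_bytes call routes any number of interleaved frames to
// their per-stream decoders and surfaces the resulting events.
session.read_bytes(&incoming_bytes, |event| match event {
    RpcStreamEvent::Header { stream_id, .. } => {
        // first frame of a new logical stream
    }
    RpcStreamEvent::Data { stream_id, bytes } => {
        // payload chunk for an existing stream
    }
    RpcStreamEvent::End { stream_id } => {
        // stream finished; remove_decoder(stream_id) cleans up
    }
    RpcStreamEvent::Cancel { stream_id } => {
        // stream aborted mid-transmission
    }
});
```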

Diagram: RpcSession Stream Routing

```mermaid
graph TB
    RawBytes["Raw Bytes from Transport"]

    subgraph RpcSession["RpcSession"]
        ReadBytes["read_bytes(bytes, callback)"]
        DecoderMap["decoders: HashMap&lt;u32, RpcStreamDecoder&gt;"]
        Decoder1["RpcStreamDecoder { stream_id: 1 }"]
        Decoder2["RpcStreamDecoder { stream_id: 2 }"]
        DecoderN["RpcStreamDecoder { stream_id: N }"]
        RemoveDecoder["remove_decoder(stream_id)"]
    end

    subgraph Events["RpcStreamEvent Variants"]
        Header["Header { stream_id, header: RpcHeader }"]
        Data["Data { stream_id, bytes: Vec&lt;u8&gt; }"]
        End["End { stream_id }"]
        Cancel["Cancel { stream_id }"]
    end

    RawBytes --> ReadBytes
    ReadBytes --> DecoderMap
    DecoderMap --> Decoder1
    DecoderMap --> Decoder2
    DecoderMap --> DecoderN
    Decoder1 --> Header
    Decoder2 --> Data
    DecoderN --> End
    End --> RemoveDecoder
    Cancel --> RemoveDecoder
```

Each RpcStreamDecoder maintains its own state machine via the RpcStreamDecoderState enum:

  • AwaitingHeader - Waiting for initial RpcFrameType::Header frame
  • ReceivingPayload - Accumulating RpcFrameType::Data frames
  • Complete - Stream finalized after RpcFrameType::End

Sources: README.md:29-30 src/rpc/rpc_internals/rpc_session.rs:1-150 src/rpc/rpc_internals/rpc_stream_decoder.rs:1-120

RpcDispatcher: Request/Response Correlation

The RpcDispatcher sits above RpcSession and provides RPC-specific semantics. It maintains a HashMap<u32, PendingRequest> to track in-flight requests.

Key Methods:

| Method | Signature | Purpose |
|---|---|---|
| call() | fn(&mut self, request: RpcRequest, on_response: F) | Initiates an RPC call, registers a response callback |
| respond() | fn(&mut self, response: RpcResponse) | Sends a response back to the caller |
| read_bytes() | fn(&mut self, bytes: &[u8]) | Processes incoming frames via RpcSession |
| cancel() | fn(&mut self, request_id: u32) | Aborts a pending request |

Internal State:

  • session: RpcSession - Underlying multiplexing layer
  • pending_requests: HashMap<u32, PendingRequest> - Tracks active calls
  • next_request_id: u32 - Monotonically increasing request identifier
  • write_bytes_fn: F - Callback to emit frames to transport

Diagram: RpcDispatcher Call Flow with Code Entities

```mermaid
sequenceDiagram
    participant App as "Application Code"
    participant Dispatcher as "RpcDispatcher"
    participant Session as "RpcSession"
    participant Pending as "pending_requests: HashMap"
    participant Encoder as "RpcStreamEncoder"

    App->>Dispatcher: call(RpcRequest, on_response)
    Dispatcher->>Dispatcher: next_request_id++
    Dispatcher->>Pending: insert(request_id, PendingRequest)
    Dispatcher->>Session: init_request(RpcHeader)
    Session->>Session: allocate_stream_id()
    Session->>Encoder: write_frame(RpcFrameType::Header)
    Encoder->>Encoder: write_frame(RpcFrameType::Data)
    Encoder->>Encoder: write_frame(RpcFrameType::End)

    Note over App,Encoder: Response Path

    Session->>Dispatcher: RpcStreamEvent::Header
    Dispatcher->>Dispatcher: Buffer payload in PendingRequest
    Session->>Dispatcher: RpcStreamEvent::Data
    Session->>Dispatcher: RpcStreamEvent::End
    Dispatcher->>Pending: remove(request_id)
    Dispatcher->>App: on_response(RpcResponse)
```

The PendingRequest struct accumulates stream data:

```rust
struct PendingRequest {
    header: RpcHeader,          // header from the response stream
    accumulated_bytes: Vec<u8>, // Data frame payloads appended in order
    on_response: Box<dyn FnOnce(Result<RpcResponse>)>, // fired on End/Cancel
}
```
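
A call site then looks roughly like this, following the call() signature in the table above (a sketch; the Ok/Err arms are placeholders):

```rust
// Sketch: the closure runs once the response stream's End frame has been
// processed and the payload is fully buffered.
dispatcher.call(request, |result| match result {
    Ok(response) => {
        // response.result_bytes holds the serialized return value or error
    }
    Err(_err) => {
        // the call failed before a complete response was received
    }
});
```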

Sources: README.md:29-30 src/rpc/rpc_dispatcher.rs:1-300 src/rpc/rpc_internals/rpc_stream_encoder.rs:1-100

RPC Protocol Layer

Request and Response Types

Muxio defines structured types for RPC communication. These types are serialized using bitcode for transmission.

RpcHeader Structure:

```rust
pub struct RpcHeader {
    pub msg_type: RpcMsgType,             // Call(0x01) or Response(0x02)
    pub request_id: u32,                  // Correlation identifier
    pub method_id: u32,                   // xxhash of method name
    pub rpc_param_bytes: Option<Vec<u8>>, // Inline params (if small)
    pub metadata_bytes: Vec<u8>,          // Optional auxiliary data
}
```

RpcMsgType Enum:

| Variant | num_enum Value | Purpose |
|---|---|---|
| Call | 0x01 | Client-initiated request |
| Response | 0x02 | Server-generated response |

RpcRequest Structure:

```rust
pub struct RpcRequest {
    pub header: RpcHeader,            // Contains method_id, request_id
    pub param_bytes: Vec<u8>,         // Full serialized parameters
    pub param_stream_rx: Option<...>, // Optional streaming channel
}
```

RpcResponse Structure:

```rust
pub struct RpcResponse {
    pub request_id: u32,            // Matches original request
    pub result_type: RpcResultType, // Ok(0x01) or Err(0x02)
    pub result_bytes: Vec<u8>,      // Serialized return value or error
}
```
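
As an illustration, a client issuing a call would populate the header along these lines (a sketch: the xxhash-rust crate stands in for whichever xxh32 implementation is actually used, and the field values are arbitrary):

```rust
// Sketch: building a call header per the structures above. xxh32 of the
// method name yields the compact 4-byte method identifier.
let header = RpcHeader {
    msg_type: RpcMsgType::Call,
    request_id: 1,                                   // correlation identifier
    method_id: xxhash_rust::xxh32::xxh32(b"Add", 0), // hashed method name
    rpc_param_bytes: None,                           // params sent as Data frames instead
    metadata_bytes: Vec::new(),
};
```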

Diagram: Type Relationships

Sources: README.md:33-34 src/rpc/types/rpc_header.rs:1-50 src/rpc/types/rpc_request.rs:1-40 src/rpc/types/rpc_response.rs:1-40

Method Routing

RPC methods are identified by numeric method_id values generated at compile-time using xxhash::xxh32() of the method name. This enables:

  • Constant-time lookups: Direct HashMap<u32, Handler> access rather than string comparison
  • Type safety: Method IDs are generated from trait definitions shared between client and server
  • Compact representation: 4-byte method identifiers instead of variable-length strings

RpcMethodPrebuffered Trait Definition, Example Service Definition, and Method Dispatch Flow: see extensions/muxio-rpc-service/src/lib.rs and the README examples cited below.
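
The sketch below is illustrative only: the trait shape, method names, and the fixed-width encoding are assumptions, not the actual muxio-rpc-service API (which serializes with bitcode).

```rust
// Hypothetical sketch of a shared, prebuffered RPC method definition.
pub trait RpcMethodPrebuffered {
    /// Compile-time method identifier (conceptually xxh32 of the method name).
    const METHOD_ID: u32;
    type Input;
    type Output;

    fn encode_request(input: &Self::Input) -> Vec<u8>;
    fn decode_request(bytes: &[u8]) -> Self::Input;
    fn encode_response(output: &Self::Output) -> Vec<u8>;
    fn decode_response(bytes: &[u8]) -> Self::Output;
}

/// Lives in a crate shared by client and server, so both sides agree on
/// METHOD_ID and the parameter/return types.
pub struct Add;

impl RpcMethodPrebuffered for Add {
    const METHOD_ID: u32 = 0x2a; // placeholder; conceptually xxh32("Add")
    type Input = (f64, f64);
    type Output = f64;

    fn encode_request(input: &Self::Input) -> Vec<u8> {
        // Fixed-width little-endian keeps the sketch self-contained; the
        // real crate would use bitcode here.
        let mut out = input.0.to_le_bytes().to_vec();
        out.extend_from_slice(&input.1.to_le_bytes());
        out
    }
    fn decode_request(bytes: &[u8]) -> Self::Input {
        let a = f64::from_le_bytes(bytes[0..8].try_into().unwrap());
        let b = f64::from_le_bytes(bytes[8..16].try_into().unwrap());
        (a, b)
    }
    fn encode_response(output: &Self::Output) -> Vec<u8> {
        output.to_le_bytes().to_vec()
    }
    fn decode_response(bytes: &[u8]) -> Self::Output {
        f64::from_le_bytes(bytes[0..8].try_into().unwrap())
    }
}
```

Dispatch then reduces to the constant-time lookup described above: the server keeps a HashMap<u32, Handler> and routes each incoming RpcHeader.method_id to the handler registered under Add::METHOD_ID.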

Sources: README.md:50-51 README.md:102-118 extensions/muxio-rpc-service/src/lib.rs:1-100

Transport Agnosticism

Generic Caller Interface

The RpcServiceCallerInterface trait abstracts the client-side transport layer, enabling the same application code to work with multiple implementations.

Trait Definition: see extensions/muxio-rpc-service-caller/src/caller_interface.rs (cited below).

Concrete Implementations:

| Type | Module | Transport | Platform |
|---|---|---|---|
| RpcClient | muxio-tokio-rpc-client | tokio-tungstenite WebSocket | Native (Tokio) |
| RpcWasmClient | muxio-wasm-rpc-client | wasm-bindgen → JavaScript WebSocket | Browser (WASM) |

Diagram: Trait Implementation and Usage

This abstraction allows writing code once that compiles for multiple targets:

  • Native applications: Use RpcClient with tokio::spawn() background tasks
  • Browser/WASM: Use RpcWasmClient with static_muxio_write_bytes() JavaScript bridge
  • Custom transports: Implement the trait for specialized needs (e.g., IPC, embedded systems)
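
A sketch of such write-once logic, assuming Add::call takes the caller plus a typed input (the real signature is in the README examples cited below):

```rust
// Generic over the transport: the same function compiles against
// RpcClient (native) and RpcWasmClient (browser).
async fn add_on_any_transport<C: RpcServiceCallerInterface>(client: &C) -> f64 {
    Add::call(client, (2.0, 3.0))
        .await
        .expect("RPC failed")
}
```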

Sources: README.md:48-49 extensions/muxio-rpc-service-caller/src/caller_interface.rs:1-100 extensions/muxio-tokio-rpc-client/src/rpc_client.rs:1-50 extensions/muxio-wasm-rpc-client/src/rpc_wasm_client.rs:1-50

Generic Endpoint Interface

The RpcServiceEndpointInterface trait abstracts the server-side handler registration. Handlers are stored in a HashMap<u32, Handler> indexed by method_id.

Trait Definition and RpcContext Structure: see extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs (cited below).
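
A hypothetical registration sketch, reusing the Add definition from the Method Routing section (the register method name and handler signature are assumptions; only the HashMap<u32, Handler> storage keyed by method_id is documented here):

```rust
// Sketch: registering a handler under Add::METHOD_ID. The endpoint stores
// it in its HashMap<u32, Handler> and invokes it for matching requests.
endpoint.register(Add::METHOD_ID, |_ctx, request_bytes: Vec<u8>| {
    let (a, b) = Add::decode_request(&request_bytes);
    Add::encode_response(&(a + b))
});
```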

Diagram: Handler Registration and Dispatch

Sources: README.md:98-119 extensions/muxio-rpc-service-endpoint/src/endpoint_interface.rs:1-150

Cross-Platform Support

Single Codebase, Multiple Targets

Muxio enables true cross-platform RPC by separating concerns:

  1. Service Definition Layer: Platform-agnostic method definitions shared between client and server
  2. Transport Layer: Platform-specific implementations (Tokio, WASM) hidden behind traits
  3. Application Logic: Written once against the trait interface

The same Add::call(), Mult::call(), and Echo::call() method invocations work identically whether called from native code or WASM, as demonstrated in the example application.
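
A sketch of what this looks like per platform (the constructor is hypothetical; the real APIs live in muxio-tokio-rpc-client and muxio-wasm-rpc-client):

```rust
// Native (Tokio) target; RpcClient::new's signature is assumed here.
async fn native_example() -> f64 {
    let client = RpcClient::new("ws://127.0.0.1:8080")
        .await
        .expect("connect failed");
    // Same generic function from the caller section; a browser build would
    // pass an RpcWasmClient instead, with no other changes.
    add_on_any_transport(&client).await
}
```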

Sources: README.md:48-49 README.md:64-161 Diagram 2 from high-level architecture

Type Safety Model

Compile-Time Contract Enforcement

Muxio enforces API contracts at compile-time through shared service definitions. Both client and server depend on the same crate containing method definitions, ensuring:

  • Parameter type mismatches are caught at compile-time
  • Return type mismatches are caught at compile-time
  • Method ID collisions are prevented by the build system
  • Serialization format consistency is guaranteed

The flow for type-safe RPC runs from the shared trait definition, through the compile-time method_id and the typed encode/decode functions, to identical serialization on both ends of the connection.

Any mismatch in the shared trait definition causes a compilation error, eliminating an entire class of runtime errors common in dynamically-typed RPC systems.
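
Using the Add sketch from the Method Routing section (with Add::call's signature assumed), a parameter mismatch is rejected before the program ever runs:

```rust
async fn demo<C: RpcServiceCallerInterface>(client: &C) {
    let _ok = Add::call(client, (2.0, 3.0)).await;    // compiles: Input is (f64, f64)
    // let bad = Add::call(client, ("2", "3")).await; // compile error: mismatched types
}
```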

Sources: README.md:50-51 README.md:102-118

Serialization Layer

Muxio uses bitcode for efficient binary serialization, but the design is format-agnostic. The RpcMethodPrebuffered trait defines encode/decode methods, allowing alternative serialization libraries to be substituted if needed:

  • Bitcode: Default choice for compact binary format
  • Bincode: Alternative binary format
  • MessagePack: Cross-language compatibility
  • Custom formats: Full control over wire protocol

The key requirement is that both client and server use the same serialization implementation for a given method.
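
A sketch of what an encode/decode pair might look like when backed by bitcode (the params struct and helper names are illustrative):

```rust
use bitcode::{Decode, Encode};

// Any type deriving Encode/Decode can cross the wire.
#[derive(Encode, Decode, Debug, PartialEq)]
struct AddParams {
    a: f64,
    b: f64,
}

// Swapping in bincode or MessagePack would change only these two bodies;
// the framing and dispatch layers never inspect the payload bytes.
fn encode_params(params: &AddParams) -> Vec<u8> {
    bitcode::encode(params)
}

fn decode_params(bytes: &[u8]) -> AddParams {
    bitcode::decode(bytes).expect("malformed params")
}
```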

Sources: README.md:33-34

Summary

Muxio’s core concepts revolve around:

  1. Binary Framing: Efficient, low-overhead frame-based protocol
  2. Stream Multiplexing: Multiple concurrent streams via RpcSession and per-stream decoders
  3. Request Correlation: Matching responses to requests via RpcDispatcher
  4. Transport Abstraction: Generic traits (RpcServiceCallerInterface, RpcServiceEndpointInterface) enable multiple implementations
  5. Non-Async Core: Callback-driven design supports WASM and multiple runtimes
  6. Type Safety: Shared service definitions provide compile-time contract enforcement
  7. Cross-Platform: Single codebase runs on native (Tokio) and browser (WASM) clients

These concepts are elaborated in subsequent sections: Design Philosophy covers the reasoning behind these choices, and Layered Architecture provides detailed implementation patterns.

Sources: README.md:17-54 DRAFT.md:9-52 All high-level architecture diagrams