
GitHub

This documentation is part of the "Projects with Books" initiative at zenOSmosis.

The source code for this project is available on GitHub.

Core Library (muxio)


Purpose and Scope

This document describes the foundational muxio crate, which provides the low-level binary framing protocol and stream multiplexing capabilities that form the base layer of the Muxio framework. The core library is transport-agnostic, non-async, and callback-driven, enabling integration with any runtime environment, including Tokio, the standard library, and WebAssembly.

For higher-level RPC service abstractions built on top of this core, see RPC Framework. For concrete client and server implementations that use this core library, see Transport Implementations.

Sources: Cargo.toml:1-71 README.md:17-24

Architecture Overview

The muxio core library implements a layered architecture where each layer has a specific, well-defined responsibility. The design separates concerns into three distinct layers: binary framing, stream multiplexing, and RPC protocol.

Diagram: Core Library Component Layering

graph TB
    subgraph "Application Code"
        App["Application Logic"]
    end

    subgraph "RPC Protocol Layer"
        Dispatcher["RpcDispatcher"]
        Request["RpcRequest"]
        Response["RpcResponse"]
        Header["RpcHeader"]
    end

    subgraph "Stream Multiplexing Layer"
        Session["RpcSession"]
        StreamDecoder["RpcStreamDecoder"]
        StreamEncoder["RpcStreamEncoder"]
        StreamEvent["RpcStreamEvent"]
    end

    subgraph "Binary Framing Layer"
        MuxDecoder["FrameMuxStreamDecoder"]
        Frame["DecodedFrame"]
        FrameKind["FrameKind"]
    end

    subgraph "Transport Layer"
        Transport["Raw Bytes\nWebSocket/TCP/Custom"]
    end

    App --> Dispatcher
    Dispatcher --> Request
    Dispatcher --> Response
    Request --> Header
    Response --> Header

    Dispatcher --> Session
    Session --> StreamDecoder
    Session --> StreamEncoder
    StreamDecoder --> StreamEvent
    StreamEncoder --> Header

    Session --> MuxDecoder
    MuxDecoder --> Frame
    Frame --> FrameKind

    MuxDecoder --> Transport
    Session --> Transport
Sources: src/rpc/rpc_internals/rpc_session.rs:1-118

Key Components

The core library consists of several primary components organized into distinct functional layers:

| Component | File Location | Layer | Primary Responsibility | Details |
| --- | --- | --- | --- | --- |
| FrameMuxStreamDecoder | src/frame/ | Binary Framing | Decodes raw bytes into DecodedFrame structures | See Binary Framing Protocol |
| DecodedFrame | src/frame/ | Binary Framing | Container for decoded frame data with stream ID and payload | See Binary Framing Protocol |
| FrameKind | src/frame/ | Binary Framing | Frame type enumeration (Data, End, Cancel) | See Binary Framing Protocol |
| RpcSession | src/rpc/rpc_internals/rpc_session.rs:20-117 | Stream Multiplexing | Manages stream ID allocation and per-stream decoders | See Stream Multiplexing |
| RpcStreamDecoder | src/rpc/rpc_internals/rpc_stream_decoder.rs:11-186 | Stream Multiplexing | Maintains state machine for individual stream decoding | See Stream Multiplexing |
| RpcStreamEncoder | src/rpc/rpc_internals/ | Stream Multiplexing | Encodes RPC headers and payloads into frames | See Stream Multiplexing |
| RpcDispatcher | src/rpc/rpc_dispatcher.rs | RPC Protocol | Correlates requests with responses via request_id | See RPC Dispatcher |
| RpcRequest | src/rpc/ | RPC Protocol | Request data structure with method ID and parameters | See Request and Response Types |
| RpcResponse | src/rpc/ | RPC Protocol | Response data structure with result or error | See Request and Response Types |
| RpcHeader | src/rpc/rpc_internals/ | RPC Protocol | Contains RPC metadata (message type, IDs, metadata bytes) | See Request and Response Types |

Sources: src/rpc/rpc_internals/rpc_session.rs:15-24 src/rpc/rpc_internals/rpc_stream_decoder.rs:11-18

Component Interaction Flow

The following diagram illustrates how the core components interact during a typical request/response cycle, showing the actual method calls and data structures involved:

Diagram: Request/Response Data Flow Through Core Components

sequenceDiagram
    participant App as "Application"
    participant Disp as "RpcDispatcher"
    participant Sess as "RpcSession"
    participant Enc as "RpcStreamEncoder"
    participant Dec as "RpcStreamDecoder"
    participant Mux as "FrameMuxStreamDecoder"
    
    rect rgb(245, 245, 245)
        Note over App,Mux: Outbound Request Flow
        App->>Disp: call(RpcRequest)
        Disp->>Disp: assign request_id
        Disp->>Sess: init_request(RpcHeader, max_chunk_size, on_emit)
        Sess->>Sess: allocate stream_id via increment_u32_id()
        Sess->>Enc: RpcStreamEncoder::new(stream_id, header, on_emit)
        Enc->>Enc: encode header + payload into frames
        Enc->>App: emit bytes via on_emit callback
    end
    
    rect rgb(245, 245, 245)
        Note over App,Mux: Inbound Response Flow
        App->>Sess: read_bytes(input, on_rpc_stream_event)
        Sess->>Mux: FrameMuxStreamDecoder::read_bytes(input)
        Mux->>Sess: Iterator<Result<DecodedFrame>>
        Sess->>Dec: RpcStreamDecoder::decode_rpc_frame(frame)
        Dec->>Dec: state machine: AwaitHeader → AwaitPayload → Done
        Dec->>Sess: Vec<RpcStreamEvent>
        Sess->>App: on_rpc_stream_event(RpcStreamEvent::Header)
        Sess->>App: on_rpc_stream_event(RpcStreamEvent::PayloadChunk)
        Sess->>App: on_rpc_stream_event(RpcStreamEvent::End)
    end

Sources: src/rpc/rpc_internals/rpc_session.rs:35-50 src/rpc/rpc_internals/rpc_session.rs:53-117 src/rpc/rpc_internals/rpc_stream_decoder.rs:53-186
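The AwaitHeader → AwaitPayload → Done progression in the diagram can be sketched as a tiny state machine. This is a hypothetical simplification: the type and state names mirror the diagram, but the real RpcStreamDecoder also handles metadata, cancellation, and error events.

```rust
// Hypothetical sketch of the per-stream decode state machine; the real
// RpcStreamDecoder in src/rpc/rpc_internals/rpc_stream_decoder.rs is richer.
#[derive(Debug, PartialEq)]
enum DecodeState {
    AwaitHeader,
    AwaitPayload,
    Done,
}

struct StreamDecoder {
    state: DecodeState,
    header_len: usize,
    buffer: Vec<u8>,
}

impl StreamDecoder {
    fn new(header_len: usize) -> Self {
        StreamDecoder {
            state: DecodeState::AwaitHeader,
            header_len,
            buffer: Vec::new(),
        }
    }

    // Feed one frame's bytes; returns the header bytes once enough have arrived.
    fn push(&mut self, bytes: &[u8], is_end: bool) -> Option<Vec<u8>> {
        self.buffer.extend_from_slice(bytes);
        let mut header = None;
        if self.state == DecodeState::AwaitHeader && self.buffer.len() >= self.header_len {
            header = Some(self.buffer.drain(..self.header_len).collect());
            self.state = DecodeState::AwaitPayload;
        }
        if is_end {
            self.state = DecodeState::Done;
        }
        header
    }
}

fn main() {
    let mut dec = StreamDecoder::new(4);
    assert_eq!(dec.push(&[1, 2], false), None); // header still incomplete
    assert_eq!(dec.push(&[3, 4, 9], false), Some(vec![1, 2, 3, 4]));
    dec.push(&[], true);
    assert_eq!(dec.state, DecodeState::Done);
}
```

Because frames may arrive fragmented, the decoder must buffer until the fixed-size header is complete before emitting any payload events.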

Non-Async, Callback-Driven Design

A fundamental design characteristic of the core library is its non-async, callback-driven architecture. This design choice enables the library to be used across different runtime environments without requiring a specific async runtime.

Callback Traits

The core library defines several callback traits that enable integration with different transport layers:

Diagram: Callback Trait Architecture

graph LR
    subgraph "Callback Trait System"
        RpcEmit["RpcEmit trait"]
        RpcStreamEventHandler["RpcStreamEventDecoderHandler trait"]
        RpcEmit --> TokioImpl["Tokio Implementation:\nasync fn + channels"]
        RpcEmit --> WasmImpl["WASM Implementation:\nwasm_bindgen + JS bridge"]
        RpcEmit --> CustomImpl["Custom Implementation:\nuser-defined"]
        RpcStreamEventHandler --> DispatcherHandler["RpcDispatcher handler"]
        RpcStreamEventHandler --> CustomHandler["Custom event handler"]
    end

The callback-driven design means:

  1. No built-in async/await : Core methods like RpcSession::read_bytes() and RpcSession::init_request() are synchronous
  2. Emit callbacks : Output is sent via callback functions implementing the RpcEmit trait
  3. Event callbacks : Decoded events are delivered via RpcStreamEventDecoderHandler implementations
  4. Transport agnostic : Any byte transport can be used as long as it can call the core methods and handle callbacks
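To make the points above concrete, here is a minimal sketch of a closure-based emit callback. The Emit trait below is a simplified stand-in for the RpcEmit concept, not the crate's actual trait definition:

```rust
// Hypothetical simplified stand-in for an RpcEmit-style callback trait.
trait Emit {
    fn emit(&mut self, bytes: &[u8]);
}

// Blanket impl so any closure over raw bytes can act as a transport sink.
impl<F: FnMut(&[u8])> Emit for F {
    fn emit(&mut self, bytes: &[u8]) {
        self(bytes)
    }
}

// A synchronous "encoder" that writes output through the callback instead of
// awaiting a socket; this is what keeps the core runtime-agnostic.
fn encode_and_emit<E: Emit>(payload: &[u8], out: &mut E) {
    out.emit(payload);
}

fn main() {
    let mut wire: Vec<u8> = Vec::new();
    encode_and_emit(b"hello", &mut |bytes: &[u8]| wire.extend_from_slice(bytes));
    assert_eq!(wire, b"hello");
}
```

In a real integration the closure would forward bytes into a Tokio channel, a WebSocket send, or a wasm_bindgen bridge; the core never needs to know which.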

Sources: src/rpc/rpc_internals/rpc_session.rs:35-50 src/rpc/rpc_internals/rpc_session.rs:53-117 README.md:35-36

Core Abstractions

The core library provides three fundamental abstractions that enable runtime-agnostic operation:

Stream Lifecycle Management

The RpcSession component provides stream multiplexing by maintaining per-stream state. Each outbound call allocates a unique stream_id via increment_u32_id(), and incoming frames are demultiplexed to their corresponding RpcStreamDecoder instances. For detailed information on stream multiplexing mechanics, see Stream Multiplexing.
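As a rough illustration, an increment_u32_id()-style allocator can be sketched as a wrapping counter; this is an assumption about its shape, and the crate's actual implementation may differ:

```rust
// Hypothetical sketch of a stream-ID allocator in the spirit of
// increment_u32_id(); returns the current ID and advances the counter.
fn increment_u32_id(current: &mut u32) -> u32 {
    let id = *current;
    // Wrap on overflow so long-lived sessions never panic.
    *current = current.wrapping_add(1);
    id
}

fn main() {
    let mut next_id: u32 = 0;
    let a = increment_u32_id(&mut next_id);
    let b = increment_u32_id(&mut next_id);
    assert_eq!((a, b), (0, 1));

    // Wrap-around behavior at u32::MAX.
    let mut at_max = u32::MAX;
    increment_u32_id(&mut at_max);
    assert_eq!(at_max, 0);
}
```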

Binary Protocol Format

All communication uses a binary framing protocol with fixed-size headers and variable-length payloads. The protocol supports chunking large messages using DEFAULT_MAX_CHUNK_SIZE and includes frame types (FrameKind::Data, FrameKind::End, FrameKind::Cancel) for stream control. For complete protocol specification, see Binary Framing Protocol.
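To illustrate the chunking idea, here is a hedged sketch that splits a payload into Data frames no larger than max_chunk_size and terminates the stream with an End frame. The FrameKind enum below is a simplified stand-in; the actual wire layout is defined by the Binary Framing Protocol:

```rust
// Simplified stand-in for the frame-kind enumeration (Cancel omitted).
#[derive(Debug, PartialEq)]
enum FrameKind {
    Data,
    End,
}

// Split a payload into Data frames of at most max_chunk_size bytes,
// followed by a zero-payload End frame that terminates the stream.
fn chunk_payload(payload: &[u8], max_chunk_size: usize) -> Vec<(FrameKind, Vec<u8>)> {
    let mut frames: Vec<(FrameKind, Vec<u8>)> = payload
        .chunks(max_chunk_size)
        .map(|c| (FrameKind::Data, c.to_vec()))
        .collect();
    frames.push((FrameKind::End, Vec::new()));
    frames
}

fn main() {
    // 10 bytes with max_chunk_size = 4 yields chunks of 4 + 4 + 2, then End.
    let frames = chunk_payload(&[0u8; 10], 4);
    assert_eq!(frames.len(), 4);
    assert_eq!(frames[2].1.len(), 2);
    assert_eq!(frames[3].0, FrameKind::End);
}
```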

Request Correlation

The RpcDispatcher component manages request/response correlation using unique request_id values. It maintains a HashMap of pending requests and routes incoming responses to the appropriate callback handlers. For dispatcher implementation details, see RPC Dispatcher.
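A hedged sketch of the correlation pattern, assuming a simplified dispatcher that maps request_id to a one-shot response callback (the real RpcDispatcher also handles streaming responses and errors):

```rust
use std::collections::HashMap;

// Hypothetical simplified dispatcher: pending handlers keyed by request_id.
struct Dispatcher {
    next_request_id: u32,
    pending: HashMap<u32, Box<dyn FnOnce(Vec<u8>)>>,
}

impl Dispatcher {
    fn new() -> Self {
        Dispatcher { next_request_id: 0, pending: HashMap::new() }
    }

    // Register a response handler; returns the id to stamp on the outbound request.
    fn call(&mut self, on_response: Box<dyn FnOnce(Vec<u8>)>) -> u32 {
        let id = self.next_request_id;
        self.next_request_id = self.next_request_id.wrapping_add(1);
        self.pending.insert(id, on_response);
        id
    }

    // Route an inbound response to its pending handler, removing it afterward.
    fn on_response(&mut self, request_id: u32, payload: Vec<u8>) {
        if let Some(handler) = self.pending.remove(&request_id) {
            handler(payload);
        }
    }
}

fn main() {
    let mut dispatcher = Dispatcher::new();
    let result = std::rc::Rc::new(std::cell::RefCell::new(Vec::new()));
    let sink = result.clone();
    let id = dispatcher.call(Box::new(move |payload| sink.borrow_mut().extend(payload)));
    dispatcher.on_response(id, vec![1, 2, 3]);
    assert_eq!(*result.borrow(), vec![1, 2, 3]);
}
```

Removing the handler on delivery is what prevents completed requests from accumulating in the map.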

Sources: src/rpc/rpc_internals/rpc_session.rs:20-33 src/rpc/rpc_internals/rpc_session.rs:53-117

Dependencies and Build Configuration

The core muxio crate has minimal dependencies to maintain its lightweight, transport-agnostic design:

| Dependency | Purpose | Version |
| --- | --- | --- |
| chrono | Timestamp utilities | 0.4.41 |
| once_cell | Lazy static initialization | 1.21.3 |
| tracing | Logging and diagnostics | 0.1.41 |

Development dependencies include:

  • bitcode: Used in tests for serialization examples
  • rand: Random data generation for tests
  • tokio: Async runtime for integration tests

The crate is configured to publish to crates.io under the Apache-2.0 license (Cargo.toml:1-18).

Sources: Cargo.toml:34-71

Integration with Extensions

The core library is designed to be extended through separate crates in the workspace. The callback-driven, non-async design enables these extensions without modification to the core:

Diagram: Core Library Extension Architecture

graph TB
    subgraph "Core Library: muxio"
        Session["RpcSession\n(callback-driven)"]
Dispatcher["RpcDispatcher\n(callback-driven)"]
end
    
    subgraph "RPC Service Layer"
        ServiceTrait["muxio-rpc-service\nRpcMethodPrebuffered trait"]
Caller["muxio-rpc-service-caller\nRpcServiceCallerInterface"]
Endpoint["muxio-rpc-service-endpoint\nRpcServiceEndpointInterface"]
end
    
    subgraph "Runtime Implementations"
        TokioServer["muxio-tokio-rpc-server\nasync Tokio + Axum"]
TokioClient["muxio-tokio-rpc-client\nasync Tokio + WebSocket"]
WasmClient["muxio-wasm-rpc-client\nwasm-bindgen"]
end
    
 
   Session --> ServiceTrait
 
   Dispatcher --> Caller
 
   Dispatcher --> Endpoint
    
 
   Caller --> TokioClient
 
   Caller --> WasmClient
 
   Endpoint --> TokioServer

Extensions built on the core library:

  • RPC Service Layer RPC Framework: Provides method definition traits and abstractions
  • Tokio Server Tokio RPC Server: Async server implementation using tokio and axum
  • Tokio Client Tokio RPC Client: Async client using tokio-tungstenite WebSockets
  • WASM Client WASM RPC Client: Browser-based client using wasm-bindgen

Sources: Cargo.toml:19-31 README.md:37-41

Memory and Performance Characteristics

The core library is designed for efficiency with several key optimizations:

| Optimization | Implementation | Benefit |
| --- | --- | --- |
| Zero-copy frame parsing | FrameMuxStreamDecoder processes bytes in-place | Eliminates unnecessary allocations during frame decoding |
| Shared headers | RpcHeader wrapped in Arc (src/rpc/rpc_internals/rpc_stream_decoder.rs:111) | Multiple events reference same header without cloning |
| Minimal buffering | Stream decoders emit chunks immediately after header parse | Low memory footprint for large payloads |
| Automatic cleanup | Streams removed from HashMap on End/Cancel (src/rpc/rpc_internals/rpc_session.rs:74) | Prevents memory leaks from completed streams |
| Configurable chunks | max_chunk_size parameter in init_request() (src/rpc/rpc_internals/rpc_session.rs:35-50) | Tune for different payload sizes and network conditions |
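The shared-header optimization can be sketched as follows; the RpcHeader fields here are a simplified assumption, but the mechanism (cloning an Arc pointer instead of the header) is the point:

```rust
use std::sync::Arc;

// Hypothetical simplified header; the real RpcHeader carries more metadata.
#[derive(Debug)]
struct RpcHeader {
    rpc_request_id: u32,
    rpc_method_id: u64,
}

fn main() {
    let header = Arc::new(RpcHeader { rpc_request_id: 1, rpc_method_id: 42 });
    // Simulate three payload-chunk events referencing the same header:
    // each clone bumps a reference count rather than copying the struct.
    let events: Vec<Arc<RpcHeader>> = (0..3).map(|_| Arc::clone(&header)).collect();
    assert_eq!(Arc::strong_count(&header), 4); // 1 original + 3 events
    assert!(events.iter().all(|h| h.rpc_request_id == 1));
}
```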

Typical Memory Usage

  • RpcSession: Approximately 48 bytes base + HashMap overhead
  • RpcStreamDecoder: Approximately 80 bytes + buffered payload size
  • RpcHeader (shared): 24 bytes + metadata length
  • Active stream overhead: ~160 bytes per concurrent stream

For performance tuning strategies and benchmarks, see Performance Considerations.

Sources: src/rpc/rpc_internals/rpc_session.rs:20-33 src/rpc/rpc_internals/rpc_session.rs:35-50 src/rpc/rpc_internals/rpc_stream_decoder.rs:11-18 src/rpc/rpc_internals/rpc_stream_decoder.rs:111-116

Error Handling

The core library uses Result types with specific error enums:

  • FrameDecodeError : Returned by FrameMuxStreamDecoder::read_bytes() and RpcStreamDecoder::decode_rpc_frame()
    • CorruptFrame: Invalid frame structure or header data
    • ReadAfterCancel: Attempt to read after stream cancellation
  • FrameEncodeError : Returned by encoding operations
    • Propagated from RpcStreamEncoder::new()

Error events are emitted as RpcStreamEvent::Error (src/rpc/rpc_internals/rpc_session.rs:84-91, 103-110), containing:

  • rpc_header: The header if available
  • rpc_request_id: The request ID if known
  • rpc_method_id: The method ID if parsed
  • frame_decode_error: The underlying error
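A hedged sketch of consuming such events, with the Error variant carrying the fields listed above. The types are simplified stand-ins (the error is a String here rather than FrameDecodeError), and optional fields reflect that a frame may fail before its header is parsed:

```rust
// Hypothetical simplified event enum; field names mirror the prose above.
#[derive(Debug)]
enum RpcStreamEvent {
    PayloadChunk { bytes: Vec<u8> },
    End,
    Error {
        rpc_request_id: Option<u32>,
        frame_decode_error: String, // simplified stand-in for FrameDecodeError
    },
}

// Turn an event into a human-readable description, handling the case where
// the error occurred before any request ID was known.
fn describe(event: &RpcStreamEvent) -> String {
    match event {
        RpcStreamEvent::PayloadChunk { bytes } => format!("chunk of {} bytes", bytes.len()),
        RpcStreamEvent::End => "stream complete".to_string(),
        RpcStreamEvent::Error { rpc_request_id, frame_decode_error } => match rpc_request_id {
            Some(id) => format!("request {id} failed: {frame_decode_error}"),
            None => format!("decode failed before header: {frame_decode_error}"),
        },
    }
}

fn main() {
    let event = RpcStreamEvent::Error {
        rpc_request_id: Some(7),
        frame_decode_error: "CorruptFrame".to_string(),
    };
    assert_eq!(describe(&event), "request 7 failed: CorruptFrame");
}
```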

For comprehensive error handling patterns, see Error Handling.

Sources: src/rpc/rpc_internals/rpc_session.rs:80-111