Layered Architecture

Purpose and Scope

This document explains the layered design of the muxio transport kit, describing how each layer builds on the one below it to provide progressively higher-level abstractions. The architecture separates concerns into six distinct layers: binary framing, stream multiplexing, RPC protocol, RPC abstractions, service definitions, and platform extensions.

For the design principles that motivated this architecture, see Design Philosophy. For implementation details of individual layers, see Core Library (muxio) and RPC Framework.

Architectural Overview

The muxio system implements a layered transport kit where each layer has a well-defined responsibility and interacts only with adjacent layers. This separation enables runtime-agnostic operation and cross-platform deployment.

```mermaid
graph TB
    subgraph "Application Code"
        APP["User Application\nBusiness Logic"]
    end

    subgraph "Layer 5: Service Definition Layer"
        SD["RpcMethodPrebuffered Traits\nCompile-Time Method IDs\nShared Type Contracts"]
    end

    subgraph "Layer 4: RPC Abstraction Layer"
        CALLER["RpcServiceCallerInterface\nPlatform-Agnostic Client API"]
        ENDPOINT["RpcServiceEndpointInterface\nPlatform-Agnostic Server API"]
    end

    subgraph "Layer 3: RPC Protocol Layer"
        DISPATCHER["RpcDispatcher\nRequest Correlation\nResponse Routing"]
    end

    subgraph "Layer 2: Stream Multiplexing Layer"
        SESSION["RpcSession\nStream ID Allocation\nPer-Stream Decoders"]
    end

    subgraph "Layer 1: Binary Framing Layer"
        ENCODER["RpcStreamEncoder\nFrame Construction"]
        DECODER["RpcStreamDecoder\nFrame Reconstruction"]
    end

    subgraph "Layer 6: Platform Extensions"
        TOKIO_CLIENT["RpcClient\ntokio + tokio-tungstenite"]
        TOKIO_SERVER["RpcServer\naxum + tokio-tungstenite"]
        WASM_CLIENT["RpcWasmClient\nwasm-bindgen + js-sys"]
    end

    APP --> SD
    SD --> CALLER
    SD --> ENDPOINT
    CALLER --> DISPATCHER
    ENDPOINT --> DISPATCHER
    DISPATCHER --> SESSION
    SESSION --> ENCODER
    SESSION --> DECODER
    ENCODER --> TOKIO_CLIENT
    ENCODER --> TOKIO_SERVER
    ENCODER --> WASM_CLIENT
    DECODER --> TOKIO_CLIENT
    DECODER --> TOKIO_SERVER
    DECODER --> WASM_CLIENT
```

Layer 1: Binary Framing Protocol

The binary framing layer defines the wire format for all data transmission. It provides discrete message boundaries over byte streams using a compact header structure.

Frame Structure

Each frame consists of a fixed-size header followed by a variable-length payload chunk:

| Field | Type | Size | Description |
| --- | --- | --- | --- |
| stream_id | u32 | 4 bytes | Identifies which logical stream this frame belongs to |
| flags | u8 | 1 byte | Control flags (Start, End, Error, Cancelation) |
| payload | [u8] | Variable | Binary data chunk |

The frame header is defined in RpcHeader and serialized using bytemuck for zero-copy conversion.
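
As a rough illustration, a bytemuck-compatible header matching the table above could look like this (the real RpcHeader layout may differ):

```rust
use bytemuck::{Pod, Zeroable};

// Packed so the struct is exactly 5 bytes with no padding, as Pod requires.
#[repr(C, packed)]
#[derive(Clone, Copy, Pod, Zeroable)]
struct FrameHeader {
    stream_id: u32, // logical stream this frame belongs to
    flags: u8,      // Start / End / Error / Cancelation
}

// Zero-copy view of the header bytes, ready to prepend to a payload chunk.
fn header_bytes(header: &FrameHeader) -> &[u8] {
    bytemuck::bytes_of(header)
}
```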

Frame Types

Frames are categorized by their flags field, encoded using num_enum:

  • Start Frame : First frame of a stream, initializes decoder state
  • Data Frame : Intermediate payload chunk
  • End Frame : Final frame, triggers stream completion
  • Error Frame : Signals stream-level error
  • Cancelation Frame : Requests stream termination

Encoding and Decoding

The RpcStreamEncoder serializes data into frames with automatic chunking based on DEFAULT_MAX_CHUNK_SIZE. The RpcStreamDecoder reconstructs the original message from potentially out-of-order frames.
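
A minimal sketch of the write-side chunking (the constant's actual value lives in the muxio crate; 64 KiB here is an assumption):

```rust
// Hypothetical value for illustration only.
const DEFAULT_MAX_CHUNK_SIZE: usize = 64 * 1024;

// Split a payload into frame-sized chunks; the encoder marks the first
// chunk with the Start flag and the final chunk with the End flag.
fn chunk_payload(payload: &[u8]) -> Vec<&[u8]> {
    payload.chunks(DEFAULT_MAX_CHUNK_SIZE).collect()
}
```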

```mermaid
graph LR
    INPUT["Input Bytes"]
    CHUNK["Chunk into\nDEFAULT_MAX_CHUNK_SIZE"]
    HEADER["Add RpcHeader\nstream_id + flags"]
    FRAME["Binary Frame"]
    INPUT --> CHUNK
    CHUNK --> HEADER
    HEADER --> FRAME

    RECV["Received Frames"]
    DEMUX["Demultiplex by\nstream_id"]
    BUFFER["Reassemble Chunks"]
    OUTPUT["Output Bytes"]
    RECV --> DEMUX
    DEMUX --> BUFFER
    BUFFER --> OUTPUT
```

Layer 2: Stream Multiplexing Layer

The stream multiplexing layer, implemented by RpcSession, manages multiple concurrent logical streams over a single connection. Each stream has independent state and lifecycle.

RpcSession Responsibilities

The RpcSession struct provides the following (a condensed sketch follows the list):

  • Stream ID Allocation : Monotonically increasing u32 identifiers
  • Per-Stream Decoders : HashMap<u32, RpcStreamDecoder> for concurrent reassembly
  • Frame Muxing : Interleaving frames from multiple streams
  • Frame Demuxing : Routing incoming frames to the correct decoder
  • Stream Lifecycle : Automatic decoder cleanup on End/Error events
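
A condensed sketch of these responsibilities, with names assumed to mirror the description (the real RpcSession is more involved):

```rust
use std::collections::HashMap;

struct Session {
    next_stream_id: u32,
    decoders: HashMap<u32, StreamDecoder>, // per-stream reassembly state
}

struct StreamDecoder {
    buf: Vec<u8>,
}

impl Session {
    fn allocate_stream_id(&mut self) -> u32 {
        let id = self.next_stream_id;
        self.next_stream_id += 1; // monotonically increasing IDs
        id
    }

    // Route an incoming frame to its stream's decoder (demuxing); drop the
    // decoder once the stream ends so its resources are freed.
    fn on_frame(&mut self, stream_id: u32, is_end: bool, payload: &[u8]) {
        let decoder = self
            .decoders
            .entry(stream_id)
            .or_insert_with(|| StreamDecoder { buf: Vec::new() });
        decoder.buf.extend_from_slice(payload);
        if is_end {
            let _complete = self.decoders.remove(&stream_id);
            // hand the reassembled bytes up to Layer 3 here
        }
    }
}
```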

Stream Lifecycle Management

The session maintains a decoder for each active stream in the decoders field. When a stream completes (End/Error/Cancelation), its decoder is removed from the map, freeing resources.

Concurrent Stream Operations

Multiple streams can be active simultaneously; their frames are interleaved on the wire and reassembled independently by the per-stream decoders.

Layer 3: RPC Protocol Layer

The RPC protocol layer, implemented by RpcDispatcher, adds request/response semantics on top of the stream multiplexer. It correlates requests with responses using unique request IDs.

RpcDispatcher Structure

The dispatcher's core state is a monotonically increasing next_request_id counter and a pending_requests map from request ID to response callback; both appear in the correlation flow below.

Request Correlation

The dispatcher assigns each RPC call a unique request_id (a code sketch follows these steps):

  1. Client calls RpcDispatcher::call(RpcRequest)
  2. Dispatcher assigns monotonic request_id from next_request_id
  3. Request is serialized with embedded request_id
  4. Dispatcher stores callback in pending_requests map
  5. Server processes request and returns RpcResponse with same request_id
  6. Dispatcher looks up callback in pending_requests and invokes it
  7. Entry is removed from pending_requests
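
A condensed sketch of this correlation logic, with field names taken from the steps above but all signatures assumed:

```rust
use std::collections::HashMap;

type ResponseCallback = Box<dyn FnOnce(Vec<u8>)>;

struct Dispatcher {
    next_request_id: u32,
    pending_requests: HashMap<u32, ResponseCallback>,
}

impl Dispatcher {
    // Steps 1-4: assign an ID, remember the callback, emit the request.
    fn call(&mut self, request_bytes: Vec<u8>, on_response: ResponseCallback) -> u32 {
        let request_id = self.next_request_id;
        self.next_request_id += 1;
        self.pending_requests.insert(request_id, on_response);
        let _ = request_bytes; // serialize with request_id embedded, then send
        request_id
    }

    // Steps 5-7: look up the callback by request_id, invoke it, drop the entry.
    fn on_response(&mut self, request_id: u32, response_bytes: Vec<u8>) {
        if let Some(callback) = self.pending_requests.remove(&request_id) {
            callback(response_bytes);
        }
    }
}
```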

RPC Message Types

The protocol uses num_enum to encode message types in frame payloads:

| Message Type | Direction | Contains |
| --- | --- | --- |
| RpcRequest | Client → Server | request_id, method_id, params |
| RpcResponse | Server → Client | request_id, result or error |
| RpcStreamChunk | Bidirectional | request_id, chunk_data |
| RpcStreamEnd | Bidirectional | request_id |

Layer 4: RPC Abstraction Layer

The RPC abstraction layer defines platform-agnostic traits that enable the same application code to work across different runtime environments.

RpcServiceCallerInterface

The RpcServiceCallerInterface trait abstracts client-side RPC invocation:
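
A hedged sketch of the shape such a trait abstracts (method names, signatures, and error types here are assumptions, not the crate's actual API):

```rust
// Sketch only: send encoded request bytes for a method ID and await the
// encoded response bytes, independent of the underlying transport.
pub trait RpcServiceCallerInterface {
    type Error;

    fn call_prebuffered(
        &self,
        method_id: u64,
        request_bytes: Vec<u8>,
    ) -> impl std::future::Future<Output = Result<Vec<u8>, Self::Error>>;
}
```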

This trait is implemented by:

  • RpcClient (Tokio-based native client)
  • RpcWasmClient (WASM browser client)

RpcServiceEndpointInterface

The RpcServiceEndpointInterface trait abstracts server-side handler registration:
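
A hedged sketch of the registration shape (signatures assumed):

```rust
// Sketch only: associate a handler with a compile-time METHOD_ID; the
// handler maps encoded request bytes to encoded response bytes.
pub trait RpcServiceEndpointInterface {
    type Error;

    fn register_prebuffered<F>(&self, method_id: u64, handler: F)
    where
        F: Fn(Vec<u8>) -> Result<Vec<u8>, Self::Error> + Send + Sync + 'static;
}
```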

Platform Abstraction Benefits

| Aspect | Implementation Detail | Abstracted By |
| --- | --- | --- |
| Transport | WebSocket, TCP, Browser APIs | Caller/Endpoint traits |
| Runtime | Tokio, WASM event loop, std::thread | Async trait methods |
| Serialization | bitcode encoding/decoding | Vec<u8> byte interface |
| Error Handling | Platform-specific errors | RpcServiceError enum |

Layer 5: Service Definition Layer

The service definition layer provides compile-time type safety through shared trait definitions between client and server.

RpcMethodPrebuffered Trait

Service methods are defined using the RpcMethodPrebuffered trait:
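
A hedged sketch of the trait's shape (the actual definition in muxio may differ, e.g. with fallible encode/decode):

```rust
// Sketch only; names beyond RpcMethodPrebuffered and METHOD_ID are assumed.
pub trait RpcMethodPrebuffered {
    const METHOD_ID: u64; // compile-time hash of the method name

    type Input;
    type Output;

    fn encode_request(input: Self::Input) -> Vec<u8>;
    fn decode_request(bytes: &[u8]) -> Self::Input;
    fn encode_response(output: Self::Output) -> Vec<u8>;
    fn decode_response(bytes: &[u8]) -> Self::Output;
}
```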

Compile-Time Method ID Generation

The METHOD_ID is computed at compile time using xxhash3_64 from the xxhash-rust crate:
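
A hedged illustration using the const API from xxhash-rust (the Add method name matches the diagram below; the exact call site is assumed):

```rust
use xxhash_rust::const_xxh3::xxh3_64;

// Illustrative: the method ID is the xxh3 hash of the method name,
// evaluated at compile time, so only the resulting u64 exists at runtime.
pub struct Add;

impl Add {
    pub const METHOD_ID: u64 = xxh3_64(b"add");
}
```

The dependency diagram below shows how the shared definition crate ties client and server together: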

```mermaid
graph TB
    DEF["Service Definition Crate\nRpcMethodPrebuffered impls"]
    SERVER["Server Crate"]
    CLIENT["Client Crate"]
    SERVER -->|"depends on"| DEF
    CLIENT -->|"depends on"| DEF

    SERVER -->|"register_prebuffered(Add::METHOD_ID, handler)"| ENDPOINT["RpcServiceEndpoint"]
    CLIENT -->|"Add::call(client, params)"| CALLER["RpcServiceCaller"]
    ENDPOINT -->|"decode_request"| DEF
    ENDPOINT -->|"encode_response"| DEF
    CALLER -->|"encode_request"| DEF
    CALLER -->|"decode_response"| DEF

    Note1["Compile Error if:\n- Method name mismatch\n- Type signature mismatch\n- Serialization incompatibility"]
```

This ensures that method names are never transmitted on the wire—only their compact 8-byte hash values.

Type Safety Enforcement

Because client and server compile against the same RpcMethodPrebuffered implementations, a method name mismatch, type signature mismatch, or serialization incompatibility surfaces as a compile error rather than a runtime failure.


Layer 6: Platform Extensions

Platform extensions implement the abstraction layer traits for specific runtime environments, providing concrete transport mechanisms.

Platform Extension Architecture

```mermaid
graph TB
    subgraph "Tokio Native Platform"
        TOKIO_CLIENT["RpcClient\nArc&lt;RpcClientInner&gt;"]
        TOKIO_INNER["RpcClientInner\ndispatcher: TokioMutex&lt;RpcDispatcher&gt;\nendpoint: Arc&lt;RpcServiceEndpoint&gt;"]
        TOKIO_TRANSPORT["tokio-tungstenite\nWebSocketStream"]
        TOKIO_TASKS["Background Tasks\nread_task\nwrite_task"]
        TOKIO_CLIENT -->|"owns Arc"| TOKIO_INNER
        TOKIO_INNER -->|"uses"| TOKIO_TRANSPORT
        TOKIO_CLIENT -->|"spawns"| TOKIO_TASKS
    end

    subgraph "WASM Browser Platform"
        WASM_CLIENT["RpcWasmClient\nRpcClientInner"]
        WASM_BRIDGE["static_muxio_write_bytes\nJavaScript Bridge"]
        WASM_STATIC["MUXIO_STATIC_RPC_CLIENT_REF\nthread_local! RefCell"]
        WASM_WSAPI["Browser WebSocket API\njs-sys bindings"]
        WASM_CLIENT -->|"calls"| WASM_BRIDGE
        WASM_BRIDGE -->|"write to"| WASM_WSAPI
        WASM_STATIC -->|"holds"| WASM_CLIENT
    end

    subgraph "Shared Abstractions"
        CALLER_TRAIT["RpcServiceCallerInterface"]
        ENDPOINT_TRAIT["RpcServiceEndpointInterface"]
    end

    TOKIO_CLIENT -. implements .-> CALLER_TRAIT
    WASM_CLIENT -. implements .-> CALLER_TRAIT
    TOKIO_INNER -->|"owns"| ENDPOINT_TRAIT
    WASM_CLIENT -->|"owns"| ENDPOINT_TRAIT
```

Extension Crate Mapping

| Extension Crate | Implements | Runtime | Transport |
| --- | --- | --- | --- |
| muxio-tokio-rpc-client | RpcServiceCallerInterface, RpcServiceEndpointInterface | Tokio async | tokio-tungstenite WebSocket |
| muxio-tokio-rpc-server | RpcServiceEndpointInterface | Tokio + Axum | tokio-tungstenite WebSocket |
| muxio-wasm-rpc-client | RpcServiceCallerInterface, RpcServiceEndpointInterface | Browser event loop | wasm-bindgen + js-sys |

Tokio Client Lifecycle

The RpcClient manages its lifecycle through Arc reference counting.

Background tasks (read_task, write_task) hold Arc clones and automatically clean up when the connection drops.
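
A sketch of the ownership pattern described above (struct and task shapes assumed):

```rust
use std::sync::Arc;

struct RpcClientInner {
    // dispatcher, endpoint, write half of the WebSocket, ...
}

#[derive(Clone)]
struct RpcClientSketch {
    inner: Arc<RpcClientInner>,
}

impl RpcClientSketch {
    fn spawn_background_tasks(&self) {
        let read_ref = Arc::clone(&self.inner);
        let write_ref = Arc::clone(&self.inner);
        // Each task holds an Arc clone; when the connection drops and both
        // tasks exit, the strong count reaches zero and inner is freed.
        tokio::spawn(async move {
            let _inner = read_ref; // read loop
        });
        tokio::spawn(async move {
            let _inner = write_ref; // write loop
        });
    }
}
```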

WASM Client Singleton Pattern

The WASM client uses a thread-local singleton for JavaScript interop:
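
A hedged sketch of the pattern (only MUXIO_STATIC_RPC_CLIENT_REF is named in this document; everything else is illustrative):

```rust
use std::cell::RefCell;
use std::rc::Rc;

struct RpcWasmClientSketch {
    // dispatcher, endpoint, ...
}

thread_local! {
    // Holds the singleton client so exported functions can reach it.
    static MUXIO_STATIC_RPC_CLIENT_REF: RefCell<Option<Rc<RpcWasmClientSketch>>> =
        RefCell::new(None);
}

// A wasm-bindgen export could route bytes from the JavaScript WebSocket
// into the singleton's dispatcher (function name assumed).
fn on_bytes_from_js(bytes: &[u8]) {
    MUXIO_STATIC_RPC_CLIENT_REF.with(|slot| {
        if let Some(client) = slot.borrow().as_ref() {
            let _ = (client, bytes); // feed the client's dispatcher here
        }
    });
}
```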

This enables JavaScript to write bytes into the Rust dispatcher without async overhead.

Cross-Cutting Concerns

Several subsystems span multiple layers:

Serialization (bitcode)

The bitcode crate provides compact binary serialization at Layer 5 (Service Definitions); a round-trip example follows the list:

  • encode() in RpcMethodPrebuffered::encode_request/encode_response
  • decode() in RpcMethodPrebuffered::decode_request/decode_response
  • Configured in service definition crates, used by both client and server
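
A small round-trip example under assumed types (AddParams and its fields are illustrative, not from the crate):

```rust
use bitcode::{Decode, Encode};

// Service definition crates derive Encode/Decode for params and results.
#[derive(Encode, Decode, PartialEq, Debug)]
struct AddParams {
    lhs: f64,
    rhs: f64,
}

fn roundtrip() {
    let params = AddParams { lhs: 2.0, rhs: 3.0 };
    let bytes: Vec<u8> = bitcode::encode(&params); // compact binary form
    let decoded: AddParams = bitcode::decode(&bytes).expect("decode failed");
    assert_eq!(params, decoded);
}
```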

Observability (tracing)

The tracing crate provides structured logging at Layers 2-4; a small example follows the list:

  • Frame-level events in RpcSession
  • Request/response correlation in RpcDispatcher
  • Connection state changes in platform extensions
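
An illustrative instrumentation point at the session layer (function and field names assumed):

```rust
use tracing::{debug, info_span};

// Attach a span carrying the stream ID, then emit a frame-level event.
fn on_frame_received(stream_id: u32, payload_len: usize) {
    let span = info_span!("rpc_session", stream_id);
    let _enter = span.enter();
    debug!(payload_len, "frame received");
}
```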

Error Propagation

Errors flow upward through the layers: each layer defines its own error type and converts lower-layer errors into it as they propagate toward application code.
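
A minimal sketch of the conversion chain; only RpcServiceError is named in this document, and all variants shown are assumptions:

```rust
// Lower-layer errors are wrapped as they cross each layer boundary.
#[derive(Debug)]
enum FrameError {
    MalformedHeader,
}

#[derive(Debug)]
enum SessionError {
    Frame(FrameError),
    UnknownStream(u32),
}

impl From<FrameError> for SessionError {
    fn from(e: FrameError) -> Self {
        SessionError::Frame(e)
    }
}

#[derive(Debug)]
enum RpcServiceError {
    Session(SessionError),
    Remote(String),
}

impl From<SessionError> for RpcServiceError {
    fn from(e: SessionError) -> Self {
        RpcServiceError::Session(e)
    }
}
```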

Layer Interaction Patterns

Write Path (Client → Server)

A call travels downward through the stack: application code invokes a generated service method (Layer 5), which encodes the request and passes it to the caller interface (Layer 4); the RpcDispatcher (Layer 3) assigns a request_id, the RpcSession (Layer 2) allocates a stream_id, and the RpcStreamEncoder (Layer 1) chunks the bytes into frames that the platform transport (Layer 6) writes to the connection.

Read Path (Server → Client)

Incoming bytes travel the reverse route: the transport delivers frames to the RpcStreamDecoder, the RpcSession demultiplexes them by stream_id, the RpcDispatcher matches the reassembled response to its pending request_id, and the decoded result is handed back to application code.

Summary

The layered architecture enables:

  1. Separation of Concerns : Each layer has a single, well-defined responsibility
  2. Runtime Agnosticism : Core layers (1-3) use non-async, callback-driven design
  3. Platform Extensibility : Layer 6 implements platform-specific transports
  4. Type Safety : Layer 5 enforces compile-time contracts
  5. Code Reuse : Same service definitions work across all platforms

This design allows the same business logic to execute in Tokio native environments, WASM browsers, and potentially other runtimes without modification to the core layers.
