This documentation is part of the "Projects with Books" initiative at zenOSmosis.
The source code for this project is available on GitHub.
Platform Implementations
Purpose and Scope
This page documents the concrete platform-specific implementations that enable muxio to run in different runtime environments. These implementations provide the bridge between the runtime-agnostic RPC framework core and actual platform capabilities, including async runtimes, network protocols, and JavaScript interop.
The three production-ready implementations are:
- muxio-tokio-rpc-server: Native server using the Tokio runtime and Axum framework
- muxio-tokio-rpc-client: Native client using the Tokio runtime and tokio-tungstenite
- muxio-wasm-rpc-client: Browser client using wasm-bindgen and the JavaScript WebSocket API
For details on the RPC abstraction layer these platforms implement, see RPC Framework. For guidance on creating custom platform implementations, see Extending the Framework.
Overview
The muxio framework provides three platform implementations, each targeting different deployment environments while sharing the same core RPC abstractions and service definitions. All implementations use WebSocket as the transport protocol and communicate using the same binary framing format.
| Implementation | Runtime Environment | Primary Dependencies | Typical Use Cases |
|---|---|---|---|
| muxio-tokio-rpc-server | Native (Tokio async) | axum, tokio-tungstenite | HTTP/WebSocket servers, microservices |
| muxio-tokio-rpc-client | Native (Tokio async) | tokio, tokio-tungstenite | CLI tools, native apps, integration tests |
| muxio-wasm-rpc-client | WebAssembly (browser) | wasm-bindgen, js-sys | Web applications, browser extensions |
All implementations are located in extensions/ and follow the workspace structure defined in Cargo.toml:19-31.
Sources: Cargo.toml:19-31 README.md:38-40 Cargo.lock:897-954
Platform Integration Architecture
The following diagram shows how platform implementations integrate with the RPC framework and muxio core components:
Sources: Cargo.toml:39-47 Cargo.lock:897-954 README.md:38-51
```mermaid
graph TB
subgraph "Application Layer"
APP["Application Code\nService Methods"]
end
subgraph "RPC Abstraction Layer"
CALLER["RpcServiceCallerInterface\nClient-side trait"]
ENDPOINT["RpcServiceEndpointInterface\nServer-side trait"]
SERVICE["RpcMethodPrebuffered\nService definitions"]
end
subgraph "Transport Implementations"
TOKIO_SERVER["muxio-tokio-rpc-server\nRpcServer struct"]
TOKIO_CLIENT["muxio-tokio-rpc-client\nRpcClient struct"]
WASM_CLIENT["muxio-wasm-rpc-client\nRpcWasmClient struct"]
end
subgraph "Core Layer"
DISPATCHER["RpcDispatcher\nRequest correlation"]
FRAMING["Binary Framing Protocol\nStream multiplexing"]
end
subgraph "Network Layer"
WS_SERVER["tokio_tungstenite\nWebSocket server"]
WS_CLIENT_NATIVE["tokio_tungstenite\nWebSocket client"]
WS_CLIENT_WASM["Browser WebSocket API\nvia wasm_bindgen"]
end
APP --> SERVICE
SERVICE --> CALLER
SERVICE --> ENDPOINT
CALLER --> TOKIO_CLIENT
CALLER --> WASM_CLIENT
ENDPOINT --> TOKIO_SERVER
TOKIO_SERVER --> DISPATCHER
TOKIO_CLIENT --> DISPATCHER
WASM_CLIENT --> DISPATCHER
DISPATCHER --> FRAMING
TOKIO_SERVER --> WS_SERVER
TOKIO_CLIENT --> WS_CLIENT_NATIVE
WASM_CLIENT --> WS_CLIENT_WASM
FRAMING --> WS_SERVER
FRAMING --> WS_CLIENT_NATIVE
FRAMING --> WS_CLIENT_WASM
```
Tokio RPC Server
The extensions/muxio-tokio-rpc-server/ crate provides a production-ready WebSocket server implementation using the Tokio async runtime. The central type is RpcServer, which combines Axum’s HTTP/WebSocket capabilities with the RpcServiceEndpointInterface trait for handler registration.
```mermaid
graph TB
subgraph "RpcServer Structure"
SERVER["RpcServer\nArc-wrapped"]
ENDPOINT_FIELD["endpoint: Arc<RpcServiceEndpoint>"]
CALLER_FIELD["caller: Arc<RpcServiceCaller>"]
end
subgraph "Axum Integration"
ROUTER["axum::Router"]
WS_UPGRADE["WebSocketUpgrade handler"]
WS_ROUTE["/ws route"]
end
subgraph "Connection Handler"
ACCEPT_CONN["handle_websocket_connection()"]
TOKIO_SPAWN["tokio::spawn per connection"]
MSG_LOOP["Message read/write loop"]
end
subgraph "Dependencies"
AXUM_CRATE["axum v0.8.4"]
TOKIO_TUNG["tokio-tungstenite v0.26.2"]
TOKIO_RT["tokio v1.45.1"]
end
SERVER --> ENDPOINT_FIELD
SERVER --> CALLER_FIELD
SERVER --> ROUTER
ROUTER --> WS_ROUTE
WS_ROUTE --> WS_UPGRADE
WS_UPGRADE --> ACCEPT_CONN
ACCEPT_CONN --> TOKIO_SPAWN
TOKIO_SPAWN --> MSG_LOOP
ROUTER --> AXUM_CRATE
WS_UPGRADE --> TOKIO_TUNG
TOKIO_SPAWN --> TOKIO_RT
```
Core Components
Sources: Cargo.lock:917-933 README.md:94-128
Server Lifecycle
The server follows this initialization and operation sequence:
| Phase | Method | Description |
|---|---|---|
| Construction | RpcServer::new(config) | Creates server with optional configuration, initializes endpoint and caller |
| Handler Registration | endpoint().register_prebuffered() | Registers RPC method handlers before starting server |
| Binding | serve_with_listener(listener) | Accepts a TcpListener and starts serving on it |
| Connection Acceptance | Internal | Axum router upgrades HTTP connections to WebSocket |
| Per-Connection Spawn | Internal | Each WebSocket connection gets its own tokio::spawn task |
| Message Processing | Internal | Reads WebSocket binary messages, feeds to RpcDispatcher |
Example from README.md:94-128:
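A condensed sketch of that flow; the Add service definition, the handler signature, and the configuration argument are assumptions layered on the lifecycle table above, not the verbatim README listing:

```rust
// Hedged sketch of the server lifecycle; `Add`, its METHOD_ID, and the
// exact handler/constructor signatures are assumptions.
use std::sync::Arc;
use muxio_tokio_rpc_server::RpcServer;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Construction: create the server (configuration argument assumed).
    let server = Arc::new(RpcServer::new(Default::default()));

    // Handler Registration: bind a prebuffered method before serving.
    server
        .endpoint()
        .register_prebuffered(Add::METHOD_ID, |request_bytes, _ctx| async move {
            let numbers = Add::decode_request(&request_bytes)?;
            let sum: f64 = numbers.iter().sum();
            Ok(Add::encode_response(sum)?)
        })
        .await?;

    // Binding: hand the server a pre-bound TcpListener.
    let listener = tokio::net::TcpListener::bind("127.0.0.1:8080").await?;
    server.serve_with_listener(listener).await?;
    Ok(())
}
```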
Sources: README.md:94-128
Integration with Axum
The server uses Axum’s router to expose a WebSocket endpoint at /ws. The implementation leverages:
- axum::extract::ws::WebSocketUpgrade for protocol upgrade
- axum::Router::new().route("/ws", get(handler)) for routing
- Per-connection state isolation using Arc cloning
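A sketch of that wiring; the handler body is illustrative and omits the RpcDispatcher plumbing:

```rust
// Sketch of the /ws route and upgrade handler described above.
use axum::{
    extract::ws::{WebSocket, WebSocketUpgrade},
    response::IntoResponse,
    routing::get,
    Router,
};

fn router() -> Router {
    // The single WebSocket route exposed by the server.
    Router::new().route("/ws", get(ws_handler))
}

async fn ws_handler(ws: WebSocketUpgrade) -> impl IntoResponse {
    // Upgrade the HTTP request, then drive the connection in its own task.
    ws.on_upgrade(|mut socket: WebSocket| async move {
        // Read loop: binary messages would be fed to the RpcDispatcher.
        while let Some(Ok(_msg)) = socket.recv().await {}
    })
}
```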
Sources: Cargo.lock:80-114 Cargo.lock:917-933
Tokio RPC Client
The extensions/muxio-tokio-rpc-client/ crate provides a native client implementation that establishes WebSocket connections and makes RPC calls using the Tokio runtime. The primary type is RpcClient, which implements RpcServiceCallerInterface.
```mermaid
graph TB
subgraph "RpcClient Structure"
CLIENT["RpcClient"]
INNER["ClientInner\nArc<TokioMutex<...>>"]
DISPATCHER_REF["dispatcher: Arc<TokioMutex<RpcDispatcher>>"]
ENDPOINT_REF["endpoint: Arc<RpcServiceEndpoint>"]
STATE_HANDLER["state_handler: Option<Callback>"]
end
subgraph "Background Tasks"
READ_TASK["tokio::spawn read_task"]
WRITE_TASK["tokio::spawn write_task"]
STATE_TASK["State change publisher"]
end
subgraph "WebSocket Communication"
WS_STREAM["WebSocketStream"]
SPLIT_WRITE["SplitSink<write>"]
SPLIT_READ["SplitStream<read>"]
end
subgraph "Dependencies"
TOKIO_TUNG_CLI["tokio-tungstenite v0.26.2"]
TOKIO_RT_CLI["tokio v1.45.1"]
FUTURES["futures-util"]
end
CLIENT --> INNER
INNER --> DISPATCHER_REF
INNER --> ENDPOINT_REF
INNER --> STATE_HANDLER
CLIENT --> READ_TASK
CLIENT --> WRITE_TASK
READ_TASK --> SPLIT_READ
WRITE_TASK --> SPLIT_WRITE
SPLIT_READ --> WS_STREAM
SPLIT_WRITE --> WS_STREAM
WS_STREAM --> TOKIO_TUNG_CLI
READ_TASK --> TOKIO_RT_CLI
WRITE_TASK --> TOKIO_RT_CLI
SPLIT_READ --> FUTURES
SPLIT_WRITE --> FUTURES
```
Client Architecture
Sources: Cargo.lock:898-916 README.md:136-142
Connection Establishment
The client connection follows this sequence:
- DNS Resolution & TCP Connection: tokio_tungstenite::connect_async() establishes the TCP connection
- WebSocket Handshake: HTTP upgrade to the WebSocket protocol
- Stream Splitting: the WebSocket stream is split into separate read/write halves using futures::StreamExt::split()
- Background Task Spawn: two tasks are spawned for bidirectional communication
- State Notification: the connection state changes from Connecting to Connected
Example from README.md:136-142:
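A condensed sketch of the client flow; the URL-based constructor and the Add call signature are assumptions:

```rust
// Hedged sketch of client usage; constructor arguments and the `Add`
// call signature are assumptions.
use muxio_tokio_rpc_client::RpcClient;

async fn run() -> Result<(), Box<dyn std::error::Error>> {
    // Resolves, connects, performs the WebSocket handshake, and spawns the
    // background read/write tasks described below.
    let client = RpcClient::new("ws://127.0.0.1:8080/ws").await?;

    // The same shared service definition used by the server.
    let sum = Add::call(&client, vec![1.0, 2.0, 3.0]).await?;
    assert_eq!(sum, 6.0);
    Ok(())
}
```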
Sources: README.md:136-142
Arc-Based Lifecycle Management
The client uses Arc reference counting for shared ownership:
- RpcDispatcher wrapped in Arc<TokioMutex<>> for concurrent access
- RpcServiceEndpoint wrapped in Arc<> for a shared handler registry
- Background tasks hold Arc clones to prevent premature cleanup
- When all Arc references drop, the connection automatically closes
This design enables:
- Multiple concurrent RPC calls from different tasks
- Bidirectional RPC (client can handle incoming calls from server)
- Automatic cleanup on disconnect without manual resource management
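For example, shared ownership permits issuing calls from several tasks at once; a sketch, with Add and Mult as hypothetical prebuffered methods:

```rust
// Illustrative concurrency enabled by Arc-based sharing; `Add` and `Mult`
// are hypothetical RpcMethodPrebuffered definitions.
use std::sync::Arc;

async fn concurrent_calls(client: Arc<RpcClient>) {
    let c1 = Arc::clone(&client);
    let c2 = Arc::clone(&client);

    // Both calls are multiplexed over the single WebSocket connection.
    let (sum, product) = tokio::join!(
        async move { Add::call(&c1, vec![1.0, 2.0]).await },
        async move { Mult::call(&c2, vec![3.0, 4.0]).await },
    );
    let _ = (sum, product);
}
```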
Sources: Cargo.lock:898-916
Background Task Architecture
The client spawns two persistent Tokio tasks:
| Task | Purpose | Error Handling |
|---|---|---|
| Read Task | Reads WebSocket binary messages, feeds bytes to RpcDispatcher | Exits on error, triggers state change to Disconnected |
| Write Task | Receives bytes from RpcDispatcher write callback, sends as WebSocket messages | Exits on error, triggers state change to Disconnected |
Both tasks communicate through channels and callbacks, maintaining the non-async core design of muxio.
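An illustrative shape for the two tasks follows; this is not the actual muxio source, and the channel wiring and dispatcher hookup are assumptions:

```rust
// Illustrative read/write task skeleton; not the actual muxio source.
use futures_util::{SinkExt, StreamExt};
use tokio_tungstenite::{tungstenite::Message, MaybeTlsStream, WebSocketStream};

async fn spawn_io_tasks(ws: WebSocketStream<MaybeTlsStream<tokio::net::TcpStream>>) {
    let (mut ws_sink, mut ws_read) = ws.split();
    let (write_tx, mut write_rx) = tokio::sync::mpsc::unbounded_channel::<Vec<u8>>();

    // Write task: bytes emitted by the dispatcher's write callback arrive on
    // the channel and are forwarded as WebSocket binary messages.
    tokio::spawn(async move {
        while let Some(bytes) = write_rx.recv().await {
            if ws_sink.send(Message::Binary(bytes.into())).await.is_err() {
                break; // connection gone; task exits, state -> Disconnected
            }
        }
    });

    // Read task: incoming binary frames are handed to the RpcDispatcher for
    // request correlation (dispatcher wiring omitted).
    tokio::spawn(async move {
        while let Some(Ok(Message::Binary(_bytes))) = ws_read.next().await {
            // dispatcher.lock().await.read_bytes(&_bytes);
        }
    });

    let _ = write_tx; // in practice, held by the dispatcher's write callback
}
```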
Sources: Cargo.lock:898-916
WASM RPC Client
The extensions/muxio-wasm-rpc-client/ crate enables RPC communication from browser environments by bridging Rust WebAssembly code with JavaScript’s native WebSocket API. This implementation demonstrates muxio’s cross-platform capability without requiring Tokio.
```mermaid
graph TB
subgraph "Rust WASM Layer"
WASM_CLIENT["RpcWasmClient"]
STATIC_REF["MUXIO_STATIC_RPC_CLIENT_REF\nthread_local! RefCell"]
DISPATCHER_WASM["RpcDispatcher"]
ENDPOINT_WASM["RpcServiceEndpoint"]
end
subgraph "wasm-bindgen Bridge"
EXTERN_FN["#[wasm_bindgen]\nstatic_muxio_write_bytes()"]
CLOSURE["Closure::wrap callbacks"]
JS_VALUE["JsValue conversions"]
end
subgraph "JavaScript Environment"
WS_API["new WebSocket(url)"]
ONMESSAGE["ws.onmessage event"]
ONERROR["ws.onerror event"]
ONOPEN["ws.onopen event"]
SEND["ws.send(bytes)"]
end
subgraph "Dependencies"
WASM_BINDGEN_CRATE["wasm-bindgen v0.2.100"]
JS_SYS_CRATE["js-sys v0.3.77"]
WASM_FUTURES_CRATE["wasm-bindgen-futures v0.4.50"]
end
WASM_CLIENT --> STATIC_REF
WASM_CLIENT --> DISPATCHER_WASM
WASM_CLIENT --> ENDPOINT_WASM
DISPATCHER_WASM --> EXTERN_FN
EXTERN_FN --> SEND
ONMESSAGE --> CLOSURE
CLOSURE --> DISPATCHER_WASM
EXTERN_FN --> WASM_BINDGEN_CRATE
CLOSURE --> WASM_BINDGEN_CRATE
JS_VALUE --> JS_SYS_CRATE
WS_API --> JS_SYS_CRATE
```
WASM Bridge Architecture
Sources: Cargo.lock:934-954 README.md:51
JavaScript Interop Pattern
The WASM client uses a bidirectional byte-passing bridge between Rust and JavaScript:
Rust → JavaScript (outgoing data):
- RpcDispatcher invokes the write callback with Vec<u8>
- The callback invokes the #[wasm_bindgen] extern function static_muxio_write_bytes()
- JavaScript receives a Uint8Array and calls WebSocket.send()
JavaScript → Rust (incoming data):
- JavaScript ws.onmessage receives an ArrayBuffer
- Converts it to a Uint8Array and passes it to the Rust entry point
- Rust code accesses MUXIO_STATIC_RPC_CLIENT_REF and feeds the bytes to RpcDispatcher
This design eliminates async runtime dependencies while maintaining compatibility with the same service definitions used by native clients.
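A minimal sketch of the Rust side of this bridge; only static_muxio_write_bytes appears in the diagram above, so the inbound entry-point name and the client's read method are assumptions:

```rust
// Sketch of the byte-passing bridge; inbound names are assumed.
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
extern "C" {
    // Implemented in JavaScript: forwards outgoing frame bytes to ws.send().
    fn static_muxio_write_bytes(bytes: &[u8]);
}

// Hypothetical inbound entry point, called from ws.onmessage with the
// Uint8Array view of the received ArrayBuffer.
#[wasm_bindgen]
pub fn static_muxio_read_bytes(bytes: &[u8]) {
    MUXIO_STATIC_RPC_CLIENT_REF.with(|cell| {
        if let Some(client) = cell.borrow().as_ref() {
            client.read_bytes(bytes); // feed into the RpcDispatcher
        }
    });
}
```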
Sources: Cargo.lock:934-954 README.md:51-52
Static Client Pattern
The WASM client uses a thread-local static reference for JavaScript access:
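A sketch of the declaration; the diagram above shows thread_local! with RefCell, while the Option wrapper and Arc inner type are assumptions:

```rust
// Sketch of the static client slot; the exact inner type is an assumption.
use std::cell::RefCell;
use std::sync::Arc;

thread_local! {
    // JavaScript-facing entry points resolve the client through this slot
    // instead of holding a Rust object reference across the FFI boundary.
    static MUXIO_STATIC_RPC_CLIENT_REF: RefCell<Option<Arc<RpcWasmClient>>> =
        RefCell::new(None);
}
```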
This pattern enables:
- Simple JavaScript API that doesn’t require passing Rust objects
- Single global client instance per WebAssembly module
- Automatic memory management through Arc reference counting
Sources: Cargo.lock:934-954
Browser Compatibility
The WASM client compiles to the wasm32-unknown-unknown target and relies on standard browser APIs:
- WebSocket constructor for connection establishment
- WebSocket.send() for binary message transmission
- WebSocket.onmessage for binary message reception
- WebSocket.onerror and WebSocket.onclose for error handling
No polyfills or special browser features are required beyond standard WebSocket support, which is available in all modern browsers.
Sources: Cargo.lock:934-954 Cargo.lock:1637-1646 Cargo.lock:1663-1674
WebSocket Protocol Selection
All transport implementations use WebSocket as the underlying protocol for several reasons:
| Criterion | Rationale |
|---|---|
| Binary support | Native support for binary frames aligns with muxio’s binary framing protocol |
| Bidirectional | Full-duplex communication enables server-initiated messages and streaming |
| Browser compatibility | Widely supported in all modern browsers via standard JavaScript API |
| Connection persistence | Single long-lived connection reduces overhead of multiple HTTP requests |
| Framing built-in | WebSocket’s message framing complements muxio’s multiplexing layer |
WebSocket messages carry the binary-serialized RPC frames defined by the muxio core protocol. The transport layer is responsible for:
- Establishing and maintaining WebSocket connections
- Converting between WebSocket binary messages and byte slices
- Handling connection lifecycle events (connect, disconnect, errors)
- Providing state change notifications to application code
Sources: Cargo.lock:1446-1455 Cargo.lock:1565-1580 README.md:32
```mermaid
stateDiagram-v2
[*] --> Disconnected : Initial state
Disconnected --> Connecting: RpcClient::new() called
Connecting --> Connected : WebSocket handshake success
    Connecting --> Disconnected : Connection failure, DNS error, network timeout
    Connected --> Disconnected : Network error, server closes connection, client drop
Disconnected --> [*] : All Arc references dropped
note right of Disconnected
RpcTransportState::Disconnected
end note
note right of Connecting
RpcTransportState::Connecting
end note
note right of Connected
RpcTransportState::Connected
end note
```
Connection Lifecycle and State Management
All client implementations (RpcClient and RpcWasmClient) implement connection state tracking through the RpcTransportState enum. State changes are exposed to application code via callback handlers, enabling reactive connection management.
Connection State Machine
Sources: README.md:138-141
State Change Handler Registration
Applications register state change callbacks using set_state_change_handler():
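A sketch of the registration; the closure signature is inferred from the variants listed below:

```rust
// Sketch; the exact set_state_change_handler signature is an assumption.
fn watch_connection(client: &RpcClient) {
    client.set_state_change_handler(|state: RpcTransportState| match state {
        RpcTransportState::Connected => println!("ready for RPC calls"),
        RpcTransportState::Connecting => println!("connecting..."),
        RpcTransportState::Disconnected => println!("connection lost"),
    });
}
```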
The handler receives RpcTransportState enum variants:
- RpcTransportState::Disconnected - Not connected; safe to drop the client
- RpcTransportState::Connecting - Connection in progress; requests should not be sent yet
- RpcTransportState::Connected - Fully connected; ready for RPC calls
Sources: README.md:138-141 README.md:75-76
Automatic Cleanup on Disconnect
Both client implementations use Arc reference counting for automatic resource cleanup:
| Resource | Cleanup Mechanism | Trigger |
|---|---|---|
| WebSocket connection | Dropped when read/write tasks exit | Network error, server close, last Arc dropped |
| Background tasks | tokio::spawn tasks exit naturally | Connection close detected |
| Pending requests | RpcDispatcher returns errors for in-flight requests | State change to Disconnected |
| Stream decoders | Removed from RpcSession decoder map | Stream end or connection close |
When the last Arc<RpcClient> reference is dropped:
- Destructor signals background tasks to exit
- Read/write tasks complete their current iteration and exit
- WebSocket connection closes gracefully (if still open)
- All pending request futures resolve with connection errors
- State transitions to Disconnected
This design eliminates manual cleanup and prevents resource leaks.
Sources: Cargo.lock:898-916 Cargo.lock:934-954
State-Based Application Logic
Common patterns using state callbacks:

- UI Connection Indicator: reflect the current RpcTransportState in the user interface
- Automatic Reconnection: schedule a reconnect attempt when the state changes to Disconnected (see the sketch below)
- Request Queueing: buffer outgoing requests while Connecting and flush them once Connected
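A sketch of the reconnection pattern; the channel wiring is illustrative and not part of muxio:

```rust
// Hypothetical auto-reconnect loop driven by the state callback.
use std::time::Duration;
use tokio::sync::mpsc;

fn install_reconnect(client: &RpcClient) {
    let (tx, mut rx) = mpsc::unbounded_channel::<()>();

    client.set_state_change_handler(move |state: RpcTransportState| {
        if matches!(state, RpcTransportState::Disconnected) {
            let _ = tx.send(()); // wake the reconnect task
        }
    });

    tokio::spawn(async move {
        while rx.recv().await.is_some() {
            // Back off briefly, then build a new RpcClient and swap it into
            // application state (omitted).
            tokio::time::sleep(Duration::from_secs(1)).await;
        }
    });
}
```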
Sources: README.md:138-141
```mermaid
graph TD
subgraph "Tokio Server Stack"
TOKIO_SRV["muxio-tokio-rpc-server"]
AXUM["axum\nv0.8.4"]
TOKIO_1["tokio\nv1.45.1"]
TUNGSTENITE_1["tokio-tungstenite\nv0.26.2"]
end
subgraph "Tokio Client Stack"
TOKIO_CLI["muxio-tokio-rpc-client"]
TOKIO_2["tokio\nv1.45.1"]
TUNGSTENITE_2["tokio-tungstenite\nv0.26.2"]
end
subgraph "WASM Client Stack"
WASM_CLI["muxio-wasm-rpc-client"]
WASM_BINDGEN["wasm-bindgen\nv0.2.100"]
JS_SYS_DEP["js-sys\nv0.3.77"]
WASM_FUTURES["wasm-bindgen-futures\nv0.4.50"]
end
subgraph "Shared RPC Layer"
RPC_SERVICE["muxio-rpc-service"]
RPC_CALLER["muxio-rpc-service-caller"]
RPC_ENDPOINT["muxio-rpc-service-endpoint"]
end
subgraph "Core"
MUXIO_CORE["muxio"]
end
TOKIO_SRV --> AXUM
TOKIO_SRV --> TOKIO_1
TOKIO_SRV --> TUNGSTENITE_1
TOKIO_SRV --> RPC_ENDPOINT
TOKIO_CLI --> TOKIO_2
TOKIO_CLI --> TUNGSTENITE_2
TOKIO_CLI --> RPC_CALLER
WASM_CLI --> WASM_BINDGEN
WASM_CLI --> JS_SYS_DEP
WASM_CLI --> WASM_FUTURES
WASM_CLI --> RPC_CALLER
RPC_ENDPOINT --> RPC_SERVICE
RPC_CALLER --> RPC_SERVICE
RPC_SERVICE --> MUXIO_CORE
```
Dependency Graph
The following diagram shows the concrete dependency relationships between transport implementations and their supporting crates:
Sources: Cargo.lock:917-933 Cargo.lock:898-916 Cargo.lock:934-954 Cargo.toml:39-64
Cross-Platform Service Definition Sharing
A key design principle is that all transport implementations can consume the same service definitions. This is achieved through the RpcMethodPrebuffered trait, which defines methods with compile-time generated method IDs and encoding/decoding logic.
| Component | Role | Shared Across Transports |
|---|---|---|
| RpcMethodPrebuffered trait | Defines RPC method signature | ✓ Yes |
| encode_request() / decode_request() | Parameter serialization | ✓ Yes |
| encode_response() / decode_response() | Result serialization | ✓ Yes |
| METHOD_ID constant | Compile-time hash of method name | ✓ Yes |
| Transport connection logic | WebSocket handling | ✗ No (platform-specific) |
Example service definition usage from README.md:144-151:
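A sketch of that usage; the Add definition, its call signature, and the caller-trait bound are assumptions:

```rust
// Sketch; `Add` is a hypothetical shared RpcMethodPrebuffered definition,
// and the generic bound is the trait both clients implement.
use muxio_rpc_service_caller::RpcServiceCallerInterface;

async fn add_numbers<C: RpcServiceCallerInterface>(
    client: &C,
) -> Result<f64, Box<dyn std::error::Error>> {
    // Works unchanged with RpcClient (Tokio) or RpcWasmClient (WASM).
    Ok(Add::call(client, vec![1.0, 2.0, 3.0]).await?)
}
```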
The same service definitions work identically with RpcClient (Tokio), RpcWasmClient (WASM), and any future transport implementations that implement RpcServiceCallerInterface.
Sources: README.md:47-49 README.md:69-73 README.md:144-151 Cargo.toml:42
Implementation Selection Guidelines
Choose the appropriate transport implementation based on your deployment target:
Use muxio-tokio-rpc-server when:
- Building server-side applications
- Need to handle multiple concurrent client connections
- Require integration with existing Tokio/Axum infrastructure
- Operating in native Rust environments
Use muxio-tokio-rpc-client when:
- Building native client applications (CLI tools, desktop apps)
- Writing integration tests for server implementations
- Need Tokio’s async runtime features
- Operating in native Rust environments
Use muxio-wasm-rpc-client when:
- Building web applications that run in browsers
- Creating browser extensions
- Need to communicate with servers from JavaScript contexts
- Targeting the wasm32-unknown-unknown platform
For detailed usage examples of each transport, refer to the subsections Tokio RPC Server, Tokio RPC Client, and WASM RPC Client.
Sources: README.md:38-51 Cargo.toml:19-31