2MB async protocol-translation bridge for dead zone communication systems.
Float Protocols bridges existing dead zone communication systems (Iridium, Inmarsat, VSAT, HF/VHF, RockBLOCK) to AST SpaceMobile's direct-to-cell network. Users bring their own ASTS account details for authentication. The system integrates with telemetry for accurate ping monitoring.
Design Principles:
- Async-first architecture for low latency
- 99.9% uptime with circuit breakers, retries, and health checks
- Ultra-lightweight: small enough to run within the existing RAM of deployed devices
- Zero-allocation where possible using stack-allocated buffers
- Fixed-size buffers for memory efficiency
- Memory sharding for immediate deadzone uplink (InferX pattern)
- Snapshotting for fast uplink building
- Inspired by vLLM batching and LMCache caching patterns
Supported Protocols:
- Iridium SBD - Iridium Short Burst Data (340 bytes max)
- Inmarsat C - Inmarsat teletype format (128 bytes max)
- VSAT - VSAT IP packets with compression
- HF/VHF - HF/VHF radio with codec translation
- RockBLOCK - RockBLOCK IoT satellite communication
- Samsara - Samsara fleet management cellular broadband (1MB typical)
- AST SpaceMobile - Direct-to-cell cellular format
Key Features:
- Zero-Allocation Hot Path: Iridium SBD to ASTS Protobuf translation with no heap allocations
- Protocol Translation: Async translation between legacy protocols and AST SpaceMobile
- Bi-Temporal Logic: Dual timestamps (t_event, t_system) for insurance underwriting and trade compliance
- Spread Calculation: Deterministic mark between event time and system time for compliance
- Intelligent Batching: vLLM-inspired message batching with emergency bypass
- Distributed Caching: LMCache-inspired caching with TTL and invalidation
- Memory Sharding: Pre-sharded memory for immediate uplink when deadzone is detected
- Snapshotting: Fast uplink building from pre-computed message batches
- Reliability: Circuit breakers, retry policies, and health checks for 99.9% uptime
- Telemetry Integration: Accurate ping monitoring and metrics
- BYO Authentication: Users bring their own ASTS account details
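The vLLM-inspired batching with emergency bypass described above can be sketched in plain Rust. The `Batcher` type and its `push` method below are hypothetical illustrations of the pattern, not the crate's actual API:

```rust
/// Sketch of message batching with emergency bypass (illustrative names,
/// not the crate's API). Messages accumulate until the batch is full;
/// emergency messages skip the queue entirely.
struct Batcher {
    max_batch: usize,
    pending: Vec<Vec<u8>>,
}

impl Batcher {
    fn new(max_batch: usize) -> Self {
        Self { max_batch, pending: Vec::new() }
    }

    /// Returns Some(batch) when something should be sent now: either the
    /// message is an emergency (sent alone, bypassing the queue) or the
    /// pending queue has reached max_batch.
    fn push(&mut self, msg: Vec<u8>, emergency: bool) -> Option<Vec<Vec<u8>>> {
        if emergency {
            return Some(vec![msg]); // bypass: do not wait for the batch to fill
        }
        self.pending.push(msg);
        if self.pending.len() >= self.max_batch {
            Some(std::mem::take(&mut self.pending))
        } else {
            None
        }
    }
}
```

The bypass path never touches the pending queue, so an emergency message is never delayed behind routine traffic waiting for a batch to fill.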
Architecture:
┌─────────────────┐
│ Legacy System │
│ (Iridium, etc) │
└────────┬────────┘
│
▼
┌─────────────────────────────────────────┐
│ Float Protocols Gateway │
│ ┌─────────────┐ ┌──────────────┐ │
│ │ Translator │→ │ Batcher │ │
│ └─────────────┘ └──────────────┘ │
│ ↓ ↓ │
│ ┌─────────────┐ ┌──────────────┐ │
│ │ Cache │ │ Reliability │ │
│ └─────────────┘ └──────────────┘ │
│ ↓ ↓ │
│ ┌─────────────┐ ┌──────────────┐ │
│ │ Sharding │ │ Snapshotting │ │
│ └─────────────┘ └──────────────┘ │
└────────┬────────────────────────────────┘
│
▼
┌─────────────────┐
│ AST SpaceMobile │
│ (BYO Account) │
└─────────────────┘
cargo install float-protocols
# AST SpaceMobile BYO Credentials (optional - for ASTS integration)
export ASTS_ACCOUNT_ID="your_account_id"
export ASTS_API_KEY="your_api_key"
export ASTS_MNO_PARTNER_ID="partner_id" # optional
# Telemetry Configuration
export TELEMETRY_ENABLED="true"
export TELEMETRY_ENDPOINT="https://your-telemetry-endpoint.com"
export TELEMETRY_PING_INTERVAL_MS="5000"
# Logging
export RUST_LOG="float_protocols=info,tokio=warn"
cargo run --release
# Run with test message
FLOAT_PROTOCOLS_TEST=1 cargo run --release
Float Protocols implements a zero-allocation hot path for Iridium SBD to ASTS Protobuf translation:
- No Heap Allocations: The critical translation path uses only stack-allocated buffers
- Zero-Copy Parsing: Iridium SBD messages are parsed directly from input buffer
- Stack-Allocated Buffers: Fixed-size buffers on stack, no dynamic allocation
- Zero-Copy Translation: Payload is copied directly to output buffer without intermediate allocations
use float_protocols::{IridiumSBDMessage, ZeroCopyTranslator};
// Parse Iridium SBD (zero-allocation)
let iridium_msg = IridiumSBDMessage::parse(&iridium_data).unwrap();
// Translate to ASTS Protobuf (zero-allocation)
let mut translator = ZeroCopyTranslator::new();
let mut output_buffer = [0u8; 2048];
let size = translator.translate(&iridium_msg, &mut output_buffer).unwrap();
// Output buffer now contains ASTS Protobuf data
For maximum performance in the critical hot path, use the synchronous API:
use float_protocols::translate_iridium_to_asts_sync;
let mut buffer = [0u8; 2048];
let size = translate_iridium_to_asts_sync(&iridium_data, &mut buffer)?;
// buffer[..size] now contains ASTS Protobuf data
The async architecture (Tokio) requires heap allocations for:
- Task spawning and scheduling
- Channel buffers
- Arc reference counting
However, the core protocol parsing (IridiumSBDMessage::parse, ZeroCopyTranslator::translate) is genuinely zero-allocation. Use the synchronous API when you need:
- Maximum performance in the hot path
- No async overhead
- Direct control over memory allocation
Use the async Gateway when you need:
- Bi-temporal storage
- Caching
- Reliability patterns (circuit breakers, retries)
- Telemetry integration
Float Protocols addresses several critical problems in protocol translation:
Problem: Zero-allocation shifts the burden to lifetime management. If the translator returns a slice tied to a temporary buffer, developer experience (DX) suffers.
Solution: Hybrid translation layer providing both zero-allocation and lifetime-safe paths:
use float_protocols::{HybridTranslator, SafeTranslationResult};
// Lifetime-safe translation (returns owned Bytes)
let mut translator = HybridTranslator::new(8, 2048);
let result = translator.translate_safe(&iridium_msg).unwrap();
// result.data is owned Bytes, no lifetime issues
Problem: Device internal clocks may drift from ASTS network time during dead zone bursts, producing timestamps that are logically consistent but factually wrong.
Solution: Clock reconciliation algorithm with offset tracking:
use std::time::Duration;
use float_protocols::{ClockReconciler, ClockOffset, NetworkTimeSource};
let mut reconciler = ClockReconciler::new(10, Duration::from_secs(300), NetworkTimeSource::ASTS);
// Register device clock offset
let offset = ClockOffset::new(device_id, 1000, 0.9);
reconciler.register_offset(offset);
// Reconcile device timestamp to network time
let network_time = reconciler.reconcile(device_id, device_timestamp_ms).unwrap();
Problem: Fixed-size stack buffers may overflow with unexpected payloads from legacy systems, and hard-coding buffer sizes breaks with protocol updates.
Solution: Dynamic buffer with bounds checking and graceful degradation:
use float_protocols::{DynamicBuffer, DynamicBufferPool};
let mut buffer = DynamicBuffer::new(10, 10000).unwrap();
buffer.write(&large_payload)?; // Grows automatically up to max_capacity
Problem: Iridium is high-latency/low-bandwidth (heartbeat once per hour), while ASTS is low-latency/high-bandwidth (heartbeat every 5 seconds). Managing the differing cadences is complex.
Solution: Cadence translation layer with configurable rules:
use float_protocols::{CadenceTranslator, CadenceRule, MessageCadence, CadenceTransformation};
let mut translator = CadenceTranslator::new();
translator.add_rule(CadenceRule {
source_protocol: Protocol::IridiumSBD,
source_cadence: MessageCadence::OncePerHour,
target_protocol: Protocol::ASTSpaceMobile,
target_cadence: MessageCadence::Every5Seconds,
message_type: "heartbeat".to_string(),
transformation: CadenceTransformation::Duplicate { max_duplicates: 720 },
});
let action = translator.translate_message("heartbeat", Protocol::IridiumSBD, Priority::Operational);
Float Protocols implements bi-temporal modeling for high-end insurance underwriting and global trade compliance:
- t_event (Valid Time): When the sensor actually recorded the event in the physical world
- t_system (Transaction Time): When your system first learned about that event
- Spread Calculation: Deterministic mark between t_event and t_system for compliance
This enables critical queries:
- "What did we believe the state of the fleet was at 3 PM yesterday?" (transaction time query)
- "What actually happened at 3 PM yesterday?" (valid time query)
// Query by valid time (what actually happened)
let actual_events = gateway.query_valid_time(start_ms, end_ms).await;
// Query by transaction time (what system believed)
let system_beliefs = gateway.query_transaction_time(start_ms, end_ms).await;
// Get spread statistics for insurance underwriting
let spread_stats = gateway.spread_stats(start_ms, end_ms).await;
println!("Average delay: {} seconds", spread_stats.avg_spread_seconds());
// Get system belief at specific timestamp
let belief = gateway.system_belief_at(timestamp_ms).await;
// Get actual state at specific timestamp
let actual = gateway.actual_state_at(timestamp_ms).await;
The spread between t_event and t_system is calculated as:
spread_ms = t_system - t_event
- Positive spread: Message was delayed (system learned about it after it happened)
- Negative spread: the recorded event time is after the system ingest time, which usually indicates device clock skew rather than a message "from the future"
- Zero spread: Real-time processing
This deterministic mark is critical for:
- Insurance underwriting (proving when events actually occurred)
- Trade compliance (demonstrating timely reporting)
- Audit trails (reconstructing historical states)
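The spread formula and its three cases can be sketched directly. The `SpreadClass` enum and `classify_spread` helper below are hypothetical, shown only to make the sign convention concrete:

```rust
/// Sign of the bi-temporal spread between event time and system time.
/// Illustrative types, not part of the crate's API.
#[derive(Debug, PartialEq)]
enum SpreadClass {
    Delayed,    // positive spread: system learned of the event after it happened
    ClockSkew,  // negative spread: device clock ahead of system clock
    RealTime,   // zero spread
}

/// spread_ms = t_system - t_event (both in milliseconds since epoch)
fn spread_ms(t_event_ms: i64, t_system_ms: i64) -> i64 {
    t_system_ms - t_event_ms
}

fn classify_spread(spread_ms: i64) -> SpreadClass {
    match spread_ms {
        s if s > 0 => SpreadClass::Delayed,
        s if s < 0 => SpreadClass::ClockSkew,
        _ => SpreadClass::RealTime,
    }
}
```

Because the formula is a plain subtraction of two fixed timestamps, the mark is deterministic: re-running it over the same stored records always reproduces the same audit trail.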
Float Protocols uses memory sharding (InferX pattern) to provide immediate uplink when a deadzone is detected:
- Dedicated Deadzone Shard: Pre-allocated buffer for emergency messages
- Load Balancing: Regular shards distribute load across available memory
- Zero Allocation: Pre-allocated buffers eliminate allocation latency during critical transitions
- Immediate Uplink: When deadzone detected, messages route to dedicated shard without blocking
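A minimal sketch of this sharding pattern, assuming a dedicated pre-allocated deadzone shard plus round-robin regular shards; `ShardedPool` and its methods are illustrative names, not the crate's API:

```rust
/// Sketch of InferX-style memory sharding: one pre-allocated shard reserved
/// for deadzone traffic, with regular traffic load-balanced round-robin.
struct ShardedPool {
    deadzone_shard: Vec<u8>,      // capacity reserved up front for emergencies
    regular_shards: Vec<Vec<u8>>, // load-balanced shards
    next: usize,                  // round-robin cursor
}

impl ShardedPool {
    fn new(shards: usize, shard_bytes: usize) -> Self {
        Self {
            deadzone_shard: Vec::with_capacity(shard_bytes),
            regular_shards: (0..shards)
                .map(|_| Vec::with_capacity(shard_bytes))
                .collect(),
            next: 0,
        }
    }

    /// Route a message: deadzone traffic goes straight to the dedicated shard,
    /// everything else round-robins across regular shards. Capacity was
    /// reserved at construction, so writes under shard_bytes do not reallocate.
    fn route(&mut self, msg: &[u8], deadzone: bool) -> usize {
        if deadzone {
            self.deadzone_shard.extend_from_slice(msg);
            usize::MAX // sentinel index for the deadzone shard
        } else {
            let i = self.next;
            self.regular_shards[i].extend_from_slice(msg);
            self.next = (self.next + 1) % self.regular_shards.len();
            i
        }
    }
}
```

Pre-reserving the deadzone shard's capacity is what removes allocation latency from the critical transition: the emergency path is a plain memcpy into memory that already exists.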
Snapshotting enables fast uplink building by creating pre-computed message batches:
- Instant Uplink: Retrieve snapshots without reprocessing
- Protocol-Specific: Separate snapshots per protocol type
- TTL-Based: Expired snapshots automatically evicted
- Memory Efficient: Fixed-size snapshot pool with LRU eviction
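The TTL-based, protocol-keyed snapshot behavior can be sketched as follows; `SnapshotStore` is a hypothetical illustration of the pattern, and the real pool additionally applies LRU eviction under memory pressure:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Sketch of a TTL-based snapshot store keyed by protocol name.
/// Illustrative only; the crate's actual snapshot types may differ.
struct SnapshotStore {
    ttl: Duration,
    snapshots: HashMap<String, (Instant, Vec<u8>)>,
}

impl SnapshotStore {
    fn new(ttl: Duration) -> Self {
        Self { ttl, snapshots: HashMap::new() }
    }

    /// Store a pre-computed uplink batch for a protocol.
    fn put(&mut self, protocol: &str, batch: Vec<u8>) {
        self.snapshots
            .insert(protocol.to_string(), (Instant::now(), batch));
    }

    /// Retrieve a snapshot if it has not expired; expired entries are evicted
    /// lazily on access.
    fn get(&mut self, protocol: &str) -> Option<&[u8]> {
        let expired = self
            .snapshots
            .get(protocol)
            .map_or(false, |(created, _)| created.elapsed() > self.ttl);
        if expired {
            self.snapshots.remove(protocol);
        }
        self.snapshots.get(protocol).map(|(_, b)| b.as_slice())
    }
}
```

A hit returns the pre-computed batch with no reprocessing, which is what makes uplink building effectively instant when a snapshot is fresh.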
Float Protocols is designed for 99.9% uptime:
- Circuit Breakers: Prevent cascading failures
- Retry Policies: Exponential backoff for transient failures
- Health Checks: Continuous monitoring of system health
- Graceful Degradation: Non-critical features disabled under stress
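The circuit-breaker behavior can be illustrated with a minimal state machine; this sketch (hypothetical names, simplified to two states with no half-open probe) shows only the open-on-consecutive-failures logic, not the crate's actual reliability types:

```rust
/// Minimal circuit-breaker sketch: after `threshold` consecutive failures
/// the breaker opens and rejects further calls, preventing cascading
/// failures; a success closes it again. Illustrative, not the crate's API.
#[derive(Debug, PartialEq)]
enum BreakerState { Closed, Open }

struct CircuitBreaker {
    state: BreakerState,
    failures: u32,
    threshold: u32,
}

impl CircuitBreaker {
    fn new(threshold: u32) -> Self {
        Self { state: BreakerState::Closed, failures: 0, threshold }
    }

    /// Whether a call may proceed; false once the breaker is open.
    fn allow(&self) -> bool {
        self.state == BreakerState::Closed
    }

    fn record_success(&mut self) {
        self.failures = 0;
        self.state = BreakerState::Closed;
    }

    fn record_failure(&mut self) {
        self.failures += 1;
        if self.failures >= self.threshold {
            self.state = BreakerState::Open;
        }
    }
}
```

A production breaker would typically add a half-open state that lets a single probe request through after a cooldown before fully closing again.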
- Binary Size: <2MB optimized with LTO
- Memory Footprint: <50MB with default configuration
- Latency: <2ms for emergency messages
- Throughput: 10,000+ messages/second
- Cache Hit Rate: >80% for repeated translations
cargo build --release
cargo test
cargo clippy --all-targets --all-features -- -D warnings
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Contributions welcome! Please open an issue or submit a pull request.
- Inspired by vLLM's batching and optimization patterns
- Inspired by LMCache's distributed caching architecture
- Inspired by InferX's memory sharding for bursty workloads