Alloy.Rx: Native Reactivity in Fidelity

Porting a Push Model with Principled Performance

The integration of reactive programming into the Fidelity framework presents a fascinating challenge at the intersection of practical engineering and algorithmic integrity. While exploring various reactive models, we discovered valuable insights from Ken Okabe’s Timeline library - a minimalist F# implementation that demonstrated how powerful reactive systems could be built with remarkably little code. This simplicity was a key inspiration for Alloy.Rx, though we’ve evolved the concepts to align with Fidelity’s unique architectural requirements.

The key insight driving Alloy.Rx is that reactive programming patterns represent a distinct form of codata - one where the observation method involves registration of callbacks rather than explicit pulling. This mathematical foundation enables the Firefly compiler to apply the same sophisticated analysis techniques to reactive code that it uses for async operations, while maintaining the transparency and efficiency that define the Fidelity framework.

Truth in Math: Push vs Pull Codata

To understand why Alloy.Rx is essential to the Fidelity ecosystem, we must first examine the mathematical distinction between pull-based and push-based codata structures.

Pull-Based Codata (Async/Await)

In category theory, pull-based codata is defined by its elimination rule - how we observe or consume it:

\[\text{Stream}\langle A \rangle = \nu X. 1 + A \times X\] \[\text{observe} : \text{Stream}\langle A \rangle \to 1 + A \times \text{Stream}\langle A \rangle\]

This translates to F# async patterns where the consumer explicitly requests each value:

let processData() = async {
    let! sensor = readSensor()        // Pull: "I need sensor data now"
    let! processed = transform sensor  // Pull: "I need transformation now"
    return processed
}

The coeffect analysis recognizes this pattern and tracks context requirements:

  • readSensor has coeffect: AsyncBoundary @ ResourceAccess(Sensor)
  • transform has coeffect: Pure @ Computation

Push-Based Codata (Observable Streams)

Push-based codata inverts the control flow. Instead of consumers pulling values, producers push updates to registered observers:

\[\text{Observable}\langle A \rangle = \nu X. A \times \text{List}\langle A \to \text{Unit} \rangle \times (A \to X)\] \[\text{register} : (A \to \text{Unit}) \to \text{Observable}\langle A \rangle \to \text{Observable}\langle A \rangle\] \[\text{update} : A \to \text{Observable}\langle A \rangle \to \text{Unit}\]

This manifests in the Alloy.Rx API:

let counter = Observable.create 0

// Register observer (push-based)
counter |> Rx.map (fun count -> 
    printfn "Counter changed to: %d" count
)

// Producer pushes updates
counter |> Rx.next 1  // All observers notified immediately

The mathematical elegance lies in recognizing both patterns as codata with different observation strategies. This insight enables a unified optimization approach in the Firefly compiler.
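
To make the duality concrete, here is a minimal sketch with simplified, illustrative types (not the actual Alloy.Rx definitions): pull-based codata exposes an observation function that the consumer calls, while push-based codata exposes a registration function through which the producer calls back:

type PullStream<'a> = 
    { Observe: unit -> ('a * PullStream<'a>) option }  // consumer pulls each value

type PushStream<'a> = 
    { Register: ('a -> unit) -> unit   // consumer registers a callback
      Update: 'a -> unit }             // producer pushes; callbacks fire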

The Elegance of Simplicity

What makes Alloy.Rx remarkable is how little code is required to implement a complete reactive framework. The entire core can be expressed in under 200 lines of F#, yet it provides all the power needed for complex reactive applications. This isn’t minimalism for its own sake - it’s a direct consequence of honoring the right mathematical abstractions and applying them through a principled, grounded engineering approach.

Consider the complete implementation of the core reactive type:

type Observable<'a> = 
    { mutable Last: 'a
      mutable Observers: ResizeArray<'a -> unit> }

// No hidden state machines, no runtime services, no allocation pools.

This simplicity yields profound benefits. Every operation is transparent. Every subscription is visible. Every update path is traceable. When something goes wrong, you would be able to set a breakpoint and see exactly what’s happening - no diving through layers of runtime abstraction.

Coeffect Composition in Reactive Operations

Just as with async operations, reactive operations in Alloy.Rx form a coeffect algebra that enables compositional analysis. When we compose reactive operations, their coeffects combine according to precise mathematical rules:

\[\frac{\text{source} @ R_1 \vdash \text{Observable}\langle A \rangle \quad f @ R_2 \vdash A \to B}{\text{source.map}(f) @ R_1 \sqcup R_2 \vdash \text{Observable}\langle B \rangle}\]

This rule states that mapping a function over an observable combines the coeffects of both the source observable and the mapping function. The \(\sqcup\) operator represents the least upper bound in the coeffect semilattice.

For reactive operations, the coeffect composition follows these patterns:

\[\begin{align} \text{Pure} \sqcup \text{Pure} &= \text{Pure} \\ \text{Pure} \sqcup \text{UIThread} &= \text{UIThread} \\ \text{UIThread} \sqcup \text{BackgroundThread} &= \text{CrossThread} \\ \text{ResourceAccess}(S_1) \sqcup \text{ResourceAccess}(S_2) &= \text{ResourceAccess}(S_1 \cup S_2) \end{align}\]

This structure enables the Firefly compiler to make optimal decisions about where and how to execute reactive pipelines. For example, when the compiler detects a CrossThread coeffect, it would be able to automatically insert efficient synchronization primitives.
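
As a concrete illustration, the semilattice join might be implemented as in the following sketch. The ContextRequirement cases here are simplified stand-ins for the compiler’s internal representation, not the actual Firefly types:

// A minimal model of the coeffect join (least upper bound)
type ContextRequirement =
    | Pure
    | UIThread
    | BackgroundThread
    | CrossThread
    | ResourceAccess of Set<string>

let join r1 r2 =
    match r1, r2 with
    | Pure, other | other, Pure -> other
    | UIThread, UIThread -> UIThread
    | BackgroundThread, BackgroundThread -> BackgroundThread
    | UIThread, BackgroundThread | BackgroundThread, UIThread -> CrossThread
    | ResourceAccess s1, ResourceAccess s2 -> ResourceAccess (Set.union s1 s2)
    | _ -> CrossThread  // conservative upper bound for remaining mixed cases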

Observability: Transparency vs Runtime Opacity

One of the most striking differences between Alloy.Rx and traditional .NET reactive implementations lies in observability. To understand this contrast, we need to examine what happens under the hood in each approach.

The .NET Reactive Extensions (Rx.NET) Black Box

In traditional .NET reactive programming, even simple operations involve multiple layers of abstraction:

// What looks simple in C#...
observable
    .Where(x => x > 0)
    .Select(x => x * 2)
    .Subscribe(Console.WriteLine);

// ...actually creates:
// - Multiple heap-allocated operator objects
// - Hidden subscription chains
// - Internal schedulers with thread pools
// - Disposable wrappers for cleanup
// - Concurrent collections for thread safety

When debugging, you encounter:

  • Stack traces filled with internal framework methods
  • Heap dumps showing mysterious AnonymousObserver<T> instances
  • Performance profiles dominated by allocation and GC overhead
  • Race conditions hidden in the depths of scheduler implementations

The runtime machinery becomes a black box that obscures the actual data flow. When a value doesn’t appear where expected, you’re left wondering: Is it stuck in a scheduler queue? Lost in a concurrent collection? Blocked by a synchronization primitive?

The Alloy.Rx Transparency Principle

Alloy.Rx takes a radically different approach. There is no hidden runtime machinery, but that doesn’t mean there’s no coordination - instead, synchronization is explicit and observable:

// In Alloy.Rx, what you see is what you get
let observable = { Last = 0; Observers = ResizeArray() }

// Direct observation within a single actor
observable.Observers.Add(fun x -> printfn "%d" x)
observable.Last <- 42
observable.Observers |> Seq.iter (fun f -> f observable.Last)

Every aspect is visible and debuggable:

  • Direct observation: Set a breakpoint and see exactly which observers are registered
  • Explicit updates: Watch values propagate through the system in real-time
  • Stack-allocated operations: No hidden heap allocations or GC pressure
  • Actor-based synchronization: Cross-thread coordination happens through explicit message passing

This transparency extends to production diagnostics. When an issue arises, you can:

  • Inspect the exact observer list at any moment
  • Trace value propagation with simple logging
  • Profile without wading through framework overhead
  • Reason about concurrency because it’s explicit through actor boundaries

Zero-Allocation Reactive Patterns

The simplicity of Alloy.Rx enables something remarkable: truly zero-allocation reactive programming. Because observers are just functions in a resizable array, common patterns compile to efficient loops:

// This reactive pipeline...
source 
|> Rx.map (fun x -> x * 2)
|> Rx.filter (fun x -> x > threshold)
|> Rx.subscribe handler

// ...compiles to something like:
source.Observers.Add(fun x ->
    let doubled = x * 2
    if doubled > threshold then
        handler doubled
)

No intermediate observables. No allocation per event. Just direct function calls. This is only possible because we’ve stripped away the runtime machinery and embraced the simplicity afforded by the mathematical foundations on which this model is based.

What Fidelity Provides “For Free”

The Firefly compiler’s coeffect and codata analysis already provides substantial infrastructure that Alloy.Rx can leverage:

1. Efficient Suspension and Resumption

When async operations compose with reactive operations, the compiler recognizes the suspension points:

observable |> Rx.bind(fun value -> async {
    let! data = fetchFromServer value  // Coeffect: AsyncBoundary @ Network
    let resultObservable = Observable.create NoValue
    resultObservable |> Rx.next data
    return resultObservable
})

The compiler can optimize this pattern by:

  • Recognizing the async boundary within the reactive bind
  • Preserving delimited continuations for the async portion
  • Compiling the observable update to direct memory operations

2. Automatic Resource Management

RAII principles extend naturally to observable subscriptions:

use subscription = 
    sensorObservable 
    |> Rx.map processSensorData
    |> Rx.subscribe ignore

// Subscription automatically cleaned up at scope exit

The compiler tracks subscription lifetimes as resources, ensuring deterministic cleanup without garbage collection.

3. Zero-Allocation Streaming

When observable operations form pipelines, the compiler can eliminate intermediate allocations:

// Developer writes:
dataStream
|> Rx.map transform1
|> Rx.map transform2  
|> Rx.map transform3

// Compiler recognizes codata pattern and fuses:
dataStream
|> Rx.map (transform1 >> transform2 >> transform3)

This optimization emerges from recognizing that observable transformations form a category where composition is associative.

The Push-Based Reactivity Gap

Despite these powerful optimizations, async/await patterns cannot naturally express certain reactive scenarios:

Multiple Concurrent Observers

let marketData = Observable.create<MarketPrice> NoValue

// Multiple UI components observe the same stream
let priceDisplay = marketData |> Rx.map updatePriceLabel
let chart = marketData |> Rx.map updateChart
let alerts = marketData |> Rx.filter (fun p -> p.Change > 0.05) 
                        |> Rx.map triggerAlert

This pattern requires maintaining a list of active observers - something async doesn’t provide.

Decoupled Producer-Consumer Communication

// Producer (background thread)
async {
    while true do
        let! data = pollSensor()
        sensorObservable |> Rx.next data
        do! Async.Sleep 1000
} |> Async.Start

// Consumers (UI thread) register independently
sensorObservable |> Rx.map updateUI
sensorObservable |> Rx.filter criticalCondition |> Rx.map sendAlert

The producer doesn’t need to know about consumers - a fundamental inversion of control.

Temporal Composition

let mouseClicks = Observable.create<Point> NoValue
let keyPresses = Observable.create<Key> NoValue

// Combine temporal streams
let shortcuts = 
    Rx.And mouseClicks keyPresses
    |> Rx.map (fun result ->
        match result.result with
        | [click; key] -> detectShortcut click key
        | _ -> NoShortcut
    )

This represents temporal correlation - a pattern that requires maintaining state across time.

Alloy.Rx Design Principles and API

The design of Alloy.Rx embodies a hybrid approach that presents developers with a pure, immutable interface while allowing the compiler to choose optimal implementations based on usage patterns. This design philosophy gives us the elegant programming model of functional code with the performance characteristics of carefully crafted mutable implementations where the math shows it’s appropriate.

Core Type Design: Immutable Interface, Flexible Implementation

module Alloy.Rx

// What developers see: pure, immutable types
type Observable<'a> = 
    private {
        Id: Guid                           // Stable identity used by subscriptions
        Last: 'a
        Observers: ObserverTree<'a>        // Persistent data structure
        Version: uint64                    // Generation tracking
        Coeffects: ContextRequirement Set
        SourceLocation: SourceInfo option
    }

// Linear types ensure safe ownership transfer
and LinearObservable<'a> = 
    | LinearObs of Observable<'a> * LinearToken

// Persistent tree structure for efficient observer management
and ObserverTree<'a> =
    | Empty
    | Leaf of Observer<'a>
    | Branch of left: ObserverTree<'a> * right: ObserverTree<'a> * count: int

and Observer<'a> = 
    { Id: ObserverId
      Handler: 'a -> unit
      Coeffects: ContextRequirement Set }

// Principled handling of reactive values
type ReactiveValue<'a> =
    | HasValue of 'a
    | NoValue
    | Error of exn

The key innovation is that while developers work with immutable types, the compiler can transform these to efficient mutable implementations when it detects that old versions aren’t retained.
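
Conceptually, this rests on a uniqueness argument: if the old version is provably dead after an update, the record copy degenerates to a field write. A before/after sketch (illustrative, not actual compiler output):

// Source: the developer writes a pure update and never touches obs again
let obs' = obs |> Rx.next 42

// Lowered: because obs is not retained, the compiler may emit in-place writes:
//   obs.Last    <- 42
//   obs.Version <- obs.Version + 1UL
//   let obs' = obs   // same storage, no copy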

Core API Design: Pure Functions with Smart Compilation

module Rx =
    // Creation functions return immutable observables
    let create (initial: 'a) : Observable<'a> = 
        { Id = Guid.NewGuid()
          Last = initial
          Observers = Empty
          Version = 0UL
          Coeffects = Set.singleton Pure
          SourceLocation = getCallSite() }
    
    // All operations return new observables (from developer's perspective)
    let next (value: 'a) (obs: Observable<'a>) : Observable<'a> =
        // Developer sees this pure function
        { obs with 
            Last = value
            Version = obs.Version + 1UL }
        // Compiler may transform to in-place update when safe
    
    // Subscribe returns both the new observable and a subscription token
    let subscribe (handler: 'a -> unit) (obs: Observable<'a>) : Observable<'a> * Subscription =
        let observer = {
            Id = ObserverId.generate()
            Handler = handler
            Coeffects = inferCoeffects handler
        }
        let newObservers = ObserverTree.add observer obs.Observers
        let newObs = { obs with Observers = newObservers }
        let subscription = { ObservableId = obs.Id; ObserverId = observer.Id }
        (newObs, subscription)
    
    // Map preserves immutability while tracking coeffects
    let map (f: 'a -> 'b) (source: Observable<'a>) : Observable<'b> =
        let coeffects = Set.union source.Coeffects (inferCoeffects f)
        let target = create (f source.Last)
        
        // This looks pure but compiler can optimize to mutation
        let (sourceWithObserver, _) = 
            source |> subscribe (fun a -> 
                target |> next (f a) |> ignore)
        
        { target with Coeffects = coeffects }

Linear Types for Performance-Critical Scenarios

When developers need guaranteed performance, they can opt into linear types:

// Linear API for zero-copy updates
module LinearRx =
    // Take ownership of an observable
    let acquire (obs: Observable<'a>) : LinearObservable<'a> =
        LinearObs(obs, LinearToken.create())
    
    // Update with guaranteed no allocation
    let update (value: 'a) (LinearObs(obs, token)) : LinearObservable<'a> =
        // Compiler knows this is safe to mutate
        LinearObs({ obs with Last = value }, token)
    
    // Must explicitly release
    let release (LinearObs(obs, token)) : Observable<'a> =
        LinearToken.release token
        obs

// Usage example
let processHighFrequencyData data =
    let linear = LinearRx.acquire myObservable
    
    // Process with zero allocations
    let updated = 
        data 
        |> Array.fold (fun lo value -> 
            LinearRx.update value lo) linear
    
    LinearRx.release updated

Compiler Transformation Strategy

The Firefly compiler analyzes usage patterns to choose implementations:

// What developer writes
let counter =
    Observable.create 0
    |> Rx.scan (+) 0
    |> Rx.map (fun x -> x * 2)
// PSG derives MLIR representation:
func @counter() -> !rx.observable<i32> {
  %0 = rx.create %c0 : i32
  %1 = rx.scan %0, @add : (!rx.observable<i32>, (i32, i32) -> i32) -> !rx.observable<i32>
  %2 = rx.map %1, @multiply_by_2 : (!rx.observable<i32>, (i32) -> i32) -> !rx.observable<i32>
  return %2 : !rx.observable<i32>
}

The PSG recognizes reactive patterns as first-class constructs, enabling cross-operation optimization at the MLIR level:

// Observable fusion patterns the compiler recognizes
let mapFusion source =
    source 
    |> Rx.map f 
    |> Rx.map g 
    |> Rx.map h
    // Compiler fuses to: source |> Rx.map (f >> g >> h)

// Thread transition based on coeffect analysis
let processUIEvents (uiEvents: Observable<Event>) =  // Coeffect: UIThread
    uiEvents
    |> Rx.map heavyComputation  // Coeffect: Background - compiler inserts thread transition
    |> Rx.map updateDisplay     // Coeffect: UIThread - compiler returns to UI thread

The thread transitions happen through TableGen pattern matching on coeffect attributes:

// Functions annotated with coeffect attributes
func @heavyComputation(%x: f64) -> f64 attributes {coeffect.thread = "background"}
func @updateDisplay(%x: f64) -> !ui.element attributes {coeffect.thread = "ui"}

// After TableGen pattern application - using async dialect
func @processUIEvents_lowered(%event: !ui.event) -> !ui.element {
  // Pattern inserts async.execute for thread transition
  %token = async.execute {
    %result = call @heavyComputation(%event) : (f64) -> f64
    async.yield %result : f64
  } : !async.token
  
  %computed = async.await %token : !async.token
  
  // Return to UI thread
  %ui_token = async.execute [%ui_executor] {
    %display = call @updateDisplay(%computed) : (f64) -> !ui.element
    async.yield %display : !ui.element
  } : !async.token
  
  %result = async.await %ui_token : !async.token
  return %result : !ui.element
}

When this feature is finalized, the compiler will automatically insert thread transitions when coeffect boundaries are detected. Depending on the deployment context determined by PSG analysis, these patterns could alternatively use the dcont dialect for delimited continuations or inet dialect for distributed execution. The choice of dialect emerges from the comprehensive analysis of the program’s dataflow graph, control flow graph, and zipper-based traversal annotations that capture the full computational context.

Observable Collections: Reactive Data Structures

Beyond single-value observables, Alloy.Rx provides reactive collections that maintain the same immutable interface with efficient implementations. The power of this design lies in how it adapts to different levels of the Fidelity memory model, providing appropriate synchronization strategies for each tier without forcing developers into a single concurrency paradigm.

The Memory Model Gradient

Reactive collections in Alloy.Rx follow the same progression as Fidelity’s overall memory architecture, starting from simple single-threaded scenarios and scaling up to distributed systems. This gradual approach ensures developers only pay for the complexity they need:

%%{init: {'theme': 'neutral'}}%%
graph TD
    subgraph "Memory Model Tiers"
        L1[Level 1: Zero Allocation<br/>Single-threaded, Stack-based]
        L2[Level 2: Arena Memory<br/>Controlled allocation, RAII]
        L3[Level 3: Linear Types<br/>Ownership transfer, Zero-copy]
        L4[Level 4: Actor Model<br/>Distributed, Message-based]
    end
    subgraph "Reactive Collection Strategies"
        S1[Pure Functional<br/>No synchronization needed]
        S2[Arena-scoped<br/>Deterministic lifetime]
        S3[Linear ownership<br/>Explicit transfer]
        S4[Actor-coordinated<br/>Location transparent]
    end
    L1 --> S1
    L2 --> S2
    L3 --> S3
    L4 --> S4
    style L1 fill:#e8f5e8
    style L2 fill:#fff3e0
    style L3 fill:#e8eaf6
    style L4 fill:#ffebee

Level 1: Pure Functional Collections (Zero Allocation)

At the simplest level, when coeffect analysis determines that a collection is used within a single execution context with no concurrent access, Alloy.Rx generates the most efficient possible code:

// Single-threaded scenario - no synchronization needed
let processLocalData (data: float[]) =
    // Compiler detects: Pure + SingleThreaded coeffects
    let results = ObservableList.empty<float>
    
    // All operations happen on the stack
    for value in data do
        if value > threshold then
            results |> ObservableList.add (transform value)
    
    // No allocations, no synchronization, just direct memory operations
    results |> ObservableList.toArray

In this scenario, the compiler recognizes that:

  • No concurrent access is possible
  • All operations happen within a single stack frame
  • The collection can be stack-allocated with zero heap usage

The generated code is as efficient as hand-written imperative code, but maintains functional semantics.
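
For intuition, the lowered form might resemble the following hand-written equivalent - a sketch of the conceptual output rather than literal Firefly-generated code, reusing the threshold and transform bindings from above:

// Conceptual lowering of processLocalData: the ObservableList collapses
// into a plain accumulation loop with no observable machinery at all
let processLocalData_lowered (data: float[]) =
    let results = ResizeArray<float>()  // stand-in for a stack-allocated buffer
    for value in data do
        if value > threshold then
            results.Add(transform value)
    results.ToArray()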

Level 2: Arena-Scoped Collections (Controlled Allocation)

When collections need to outlive stack frames but remain within controlled boundaries, arena allocation provides deterministic memory management without forcing actor adoption:

type DataProcessor() =
    // Arena provides lifetime management
    let arena = Arena.create { Size = 10<MB>; Strategy = Sequential }
    
    // Collections live within arena scope
    let cache = arena.allocate<ObservableMap<string, ProcessedData>>()
    let results = arena.allocate<ObservableList<Result>>()
    
    member this.Process(input: DataBatch) =
        // Arena-scoped updates - no actor required
        arena.transaction (fun () ->
            for item in input.Items do
                match cache |> ObservableMap.tryFind item.Key with
                | Some cached -> 
                    results |> ObservableList.add cached
                | None ->
                    let processed = processItem item
                    cache |> ObservableMap.set item.Key processed
                    results |> ObservableList.add processed
        )
        
    interface IDisposable with
        member this.Dispose() =
            // Entire arena cleaned up at once
            arena.Dispose()

Arena-scoped collections provide:

  • Deterministic cleanup: When the arena is disposed, all collections within it are reclaimed
  • Cache-friendly allocation: Related data stays together in memory
  • Transaction boundaries: Updates can be grouped for consistency
  • No GC pressure: Arena cleanup is immediate and predictable

Level 3: Linear Types for Ownership Transfer

For scenarios requiring explicit ownership transfer without the overhead of message passing, linear types provide zero-copy collection updates:

// Linear types enable safe mutation with ownership tracking
module LinearCollections =
    type LinearList<'a> = 
        private | Linear of ObservableList<'a> * LinearToken
    
    // Acquire exclusive access
    let acquire (list: ObservableList<'a>) : LinearList<'a> =
        // Compiler ensures no other references exist
        Linear(list, LinearToken.create())
    
    // Operations consume and return ownership
    let add (item: 'a) (Linear(list, token)) : LinearList<'a> =
        // Safe mutation - we own the only reference
        let updated = list.InPlaceAdd(item)  // Zero-copy update
        Linear(updated, token)
    
    // Release back to shared immutable
    let release (Linear(list, token)) : ObservableList<'a> =
        LinearToken.consume token
        list.ToImmutable()

// Usage example: High-frequency trading scenario
let processMarketData (feed: MarketDataFeed) =
    let mutable orderBook = ObservableList.empty<Order>
    
    for update in feed.Updates do
        // Acquire linear ownership for batch updates
        let linear = LinearCollections.acquire orderBook
        
        let updated = 
            update.Orders
            |> Seq.fold (fun book order ->
                LinearCollections.add order book) linear
        
        // Release creates immutable snapshot
        orderBook <- LinearCollections.release updated
        
        // Observers see consistent snapshot
        notifyObservers orderBook

Linear collections excel in scenarios with:

  • High-frequency updates from a single source
  • Clear ownership boundaries
  • Need for zero-copy performance
  • Periodic snapshot requirements

Level 4: Copy-on-Write with Epoch-Based Synchronization

Before reaching full actor-based coordination, Alloy.Rx provides an intermediate synchronization strategy inspired by Linux kernel’s RCU (Read-Copy-Update):

// Epoch-based synchronization for multi-reader scenarios
type EpochMap<'k, 'v when 'k : comparison> = {
    mutable Current: Map<'k, 'v>
    mutable Epoch: uint64
    Observers: ObserverTree<MapChange<'k, 'v>>
}

module EpochCollections =
    // Readers always see consistent snapshots
    let read (map: EpochMap<'k, 'v>) =
        // Capture current epoch
        let epoch = Volatile.Read(&map.Epoch)
        let snapshot = map.Current
        
        // Memory barrier ensures we see all writes from this epoch
        Thread.MemoryBarrier()
        
        (snapshot, epoch)
    
    // Writers create new epochs atomically
    let update (f: Map<'k, 'v> -> Map<'k, 'v>) (map: EpochMap<'k, 'v>) =
        // Create new version
        let newMap = f map.Current
        
        // Atomic epoch increment and map swap
        lock map (fun () ->
            map.Current <- newMap
            map.Epoch <- map.Epoch + 1UL
        )
        
        // Notify observers of changes
        notifyObservers map.Observers newMap

// Multi-threaded sensor processing without actors
let processSensorNetwork (sensors: Sensor[]) =
    let sensorData = EpochCollections.create<SensorId, Reading>()
    
    // Multiple reader threads
    let readers = 
        [|1..4|] |> Array.map (fun i ->
            async {
                while true do
                    let (snapshot, epoch) = EpochCollections.read sensorData
                    processSnapshot snapshot epoch
                    do! Async.Sleep 100
            })
    
    // Single writer thread aggregating updates
    let writer = async {
        while true do
            let updates = collectSensorUpdates sensors
            EpochCollections.update (fun current ->
                updates |> Seq.fold (fun map (id, reading) ->
                    Map.add id reading map) current
            ) sensorData
            do! Async.Sleep 10
    }

This approach provides:

  • Wait-free reads: Readers never block
  • Consistent snapshots: Each read sees a complete state
  • Predictable write latency: Writers don’t wait for readers
  • No actor overhead: Direct memory operations

Level 5: Actor-Based Coordination (Distributed Systems)

For truly distributed scenarios or when location transparency is required, the actor model provides the most powerful coordination mechanism:

// Actor-based collections for distributed scenarios
type DistributedCache() =
    inherit Actor<CacheMessage>()
    
    let cache = ObservableMap.empty<string, CachedValue>
    let replicas = ResizeArray<ActorRef>()
    
    override this.Receive message =
        match message with
        | Get(key, replyTo) ->
            match cache |> ObservableMap.tryFind key with
            | Some value -> replyTo <! CacheHit(value)
            | None -> replyTo <! CacheMiss
            
        | Put(key, value) ->
            // Update local cache
            cache |> ObservableMap.set key value
            
            // Replicate to peers
            for replica in replicas do
                replica <! Replicate(key, value)
            
        | Subscribe(subscriber) ->
            cache.Observers.Add(fun change ->
                subscriber <! CacheChanged(change))

Actor-based collections enable:

  • Location transparency: Collections can span process boundaries
  • Fault tolerance: Actor supervision handles failures
  • Elastic scaling: Add/remove replicas dynamically
  • Zero-copy messaging: Via BAREWire when actors are local

Choosing the Right Level

The Firefly compiler uses coeffect analysis to automatically select the appropriate synchronization strategy:

// Developer writes simple code
let analyzeData (source: DataSource) =
    let results = ObservableList.empty<Analysis>
    
    source.Items
    |> Seq.map analyze
    |> Seq.filter isSignificant
    |> Seq.iter (fun item -> results |> ObservableList.add item |> ignore)
    
    results

// Compiler determines based on coeffects:
// - Pure + SingleThreaded → Stack allocation, no sync
// - Pure + MultiThreaded → Epoch-based RCU
// - ResourceAccess + Local → Arena-scoped
// - ResourceAccess + Distributed → Actor-coordinated

Performance Characteristics by Level

| Level | Synchronization | Allocation | Latency | Throughput | Use Case |
|-------|-----------------|------------|---------|------------|----------|
| Pure Functional | None | Stack only | ~0ns | Maximum | Single-threaded algorithms |
| Arena-Scoped | RAII boundaries | Arena pool | ~10ns | Very High | Request processing |
| Linear Types | Ownership transfer | Zero-copy | ~5ns | Very High | Streaming updates |
| Epoch-Based | RCU-style | Copy-on-write | ~50ns | High | Multi-reader caches |
| Actor-Based | Message passing | Per-message | ~1μs | Moderate | Distributed systems |

Gradual Complexity, Consistent API

The beauty of this design is that the API remains consistent across all levels. Developers write the same functional code, and the compiler chooses the optimal implementation:

// This code works at any level
let updatePrices (updates: PriceUpdate seq) (book: ObservableMap<Symbol, Price>) =
    updates
    |> Seq.fold (fun b update ->
        b |> ObservableMap.set update.Symbol update.Price) book

// Level 1: Compiles to direct memory updates
// Level 2: Scoped within arena transaction  
// Level 3: Requires linear ownership
// Level 4: Uses epoch-based synchronization
// Level 5: Sends actor messages

This gradual approach to reactive collections ensures that:

  • Simple scenarios remain simple and fast
  • Complex scenarios are possible without rewriting
  • Performance characteristics are predictable
  • Developers can reason about their code at the appropriate level of abstraction

The progression from zero-allocation pure functional collections to distributed actor-based systems mirrors Fidelity’s overall philosophy: provide the right tool for each problem scale while maintaining consistent principles and APIs throughout.

Compiler Optimizations for Collections

The compiler applies sophisticated optimizations to observable collections, adapting to the detected memory model tier:

// Developer writes
let items = 
    ObservableList.empty
    |> ObservableList.add "First"
    |> ObservableList.add "Second"
    |> ObservableList.add "Third"

// Compiler optimization varies by coeffect analysis:

// Tier 1: Pure single-threaded → Stack allocation
let items_stack = 
    let mutable array = [||]  // Stack array
    array <- Array.append array [|"First"|]
    array <- Array.append array [|"Second"|]
    array <- Array.append array [|"Third"|]
    ObservableList.fromArray array  // Zero allocation

// Tier 2: Arena-scoped → Batch arena allocation
let items_arena = 
    use arena = Arena.current()
    arena.batchAllocate (fun () ->
        ObservableList.ofSeq ["First"; "Second"; "Third"])

// Tier 3: Linear ownership → In-place builder
let items_linear = 
    let builder = LinearBuilder.create()
    builder.Add("First")
    builder.Add("Second") 
    builder.Add("Third")
    builder.ToObservableList()  // Single allocation

// Tier 4: Multi-threaded → Epoch-based batch
let items_epoch = 
    EpochCollections.batchUpdate (fun current ->
        current 
        |> List.add "First"
        |> List.add "Second"
        |> List.add "Third")

// Tier 5: Actor-based → Message batching
let items_actor = 
    Actor.current().Tell(
        BatchAdd ["First"; "Second"; "Third"])

This adaptive optimization ensures that simple cases remain allocation-free while complex scenarios get appropriate synchronization. The compiler’s coeffect analysis determines the optimal strategy automatically, maintaining the principle that you only pay for the complexity you actually need.

RAII Integration: Memory Management for Reactive Systems

While the hybrid API design provides an elegant programming model, the real power of Alloy.Rx emerges when integrated with Fidelity’s broader RAII-based memory management architecture. This integration ensures that reactive programming doesn’t become a source of memory leaks or unpredictable resource consumption, problems that plague traditional reactive frameworks.

RAII Across the Memory Model Tiers

Just as reactive collections adapt to different synchronization needs, RAII integration scales from simple stack-based cleanup to sophisticated distributed resource management:

%%{init: {'theme': 'neutral'}}%%
graph LR
    subgraph "RAII Patterns"
        R1[Stack Scope<br/>Automatic cleanup]
        R2[Arena Scope<br/>Batch cleanup]
        R3[Linear Scope<br/>Explicit transfer]
        R4[Actor Lifetime<br/>Supervised cleanup]
    end
    subgraph "Resource Types"
        RT1[Subscriptions]
        RT2[Memory Buffers]
        RT3[External Resources]
        RT4[Cross-Process Refs]
    end
    R1 --> RT1
    R2 --> RT2
    R3 --> RT3
    R4 --> RT4
    style R1 fill:#e8f5e8
    style R2 fill:#fff3e0
    style R3 fill:#e8eaf6
    style R4 fill:#ffebee

Tier 1: Stack-Based RAII for Simple Subscriptions

At the simplest level, reactive subscriptions follow standard RAII patterns using F#’s use bindings:

// Simple subscription with stack-based cleanup
let processEvents (source: Observable<Event>) =
    use subscription = source |> Rx.subscribe handleEvent
    
    // Process until condition met
    while continueProcessing() do
        doWork()
    
    // Subscription automatically disposed at scope exit
    // No heap allocation, no GC pressure

// Even simpler with computation expressions
let processAsync source = async {
    use! sub = source |> Rx.subscribeAsync handleEventAsync
    do! importantWork()
    // Cleanup guaranteed even if work throws
}

Tier 2: Arena-Based RAII for Collection Hierarchies

When reactive systems grow beyond simple subscriptions, arena-based RAII provides efficient batch cleanup:

type DataPipeline() =
    // Arena owns all reactive resources
    let arena = Arena.create { 
        Size = 50<MB>
        Strategy = GrowthOptimized 
    }
    
    // All collections allocated within arena
    let sensors = arena.allocate<ObservableMap<SensorId, SensorData>>()
    let processed = arena.allocate<ObservableList<ProcessedReading>>()
    let errors = arena.allocate<ObservableList<ProcessingError>>()
    
    // Subscriptions also arena-managed
    let subscriptions = arena.allocate<ResizeArray<IDisposable>>()
    
    member this.ConnectSensor(sensor: Sensor) =
        arena.transaction (fun () ->
            // Create subscription within arena
            let sub = sensor.DataStream 
                     |> Rx.subscribe (fun data ->
                         match processReading data with
                         | Ok result -> processed.Add(result)
                         | Error e -> errors.Add(e))
            
            subscriptions.Add(sub)
            sensors.Add(sensor.Id, sensor.Info)
        )
    
    interface IDisposable with
        member this.Dispose() =
            // Single arena disposal cleans up everything:
            // - All observable collections
            // - All subscriptions
            // - All internal buffers
            arena.Dispose()

Arena-based RAII provides deterministic cleanup for entire object graphs without the complexity of tracking individual resources.

Tier 3: Linear Types for Transfer Semantics

Linear types extend RAII to support ownership transfer, crucial for streaming scenarios:

// Linear ownership ensures exactly-once resource handling
type StreamProcessor = 
    | Active of LinearObservable<StreamData> * IDisposable
    | Transferred
    | Completed

let processStream (source: DataSource) =
    // Acquire linear ownership of stream
    let stream = source.AcquireStream()
    let linearObs = LinearRx.acquire (Observable.create<StreamData> NoValue)
    
    // Create processor with cleanup responsibility
    let mutable processor = Active(linearObs, stream)
    
    try
        // Process until complete or transferred
        while not (isComplete processor) do
            match processor with
            | Active(linear, resource) ->
                // Safe processing with linear ownership
                let data = resource.ReadNext()
                let updated = LinearRx.update data linear
                
                if shouldTransfer() then
                    // Transfer ownership to another processor
                    let transferred = transferOwnership updated resource
                    processor <- Transferred
                else
                    processor <- Active(updated, resource)
                    
            | _ -> ()
    finally
        // RAII cleanup based on final state
        match processor with
        | Active(linear, resource) ->
            LinearRx.release linear |> ignore
            resource.Dispose()
        | Transferred -> 
            () // New owner responsible for cleanup
        | Completed -> 
            () // Already cleaned up

Tier 4: Actor-Based RAII for Distributed Resources

When moving to distributed systems, actor supervision provides RAII at the process level:

type MarketDataActor() =
    inherit Actor<MarketMessage>()
    
    // Actor's arena provides memory for reactive collections
    let arena = Arena.forCurrentActor()
    
    // Observable collections allocated within actor's arena
    let priceStreams = ObservableMap.empty<Symbol, Observable<Price>>
    let orderBook = ObservableList.empty<Order>
    
    override this.Receive message =
        match message with
        | PriceUpdate(symbol, price) ->
            // Updates happen within arena boundaries
            match priceStreams |> ObservableMap.tryFind symbol with
            | Some stream -> 
                stream |> Rx.next price |> ignore
            | None ->
                // New observable allocated in actor's arena
                let newStream = Observable.create price
                priceStreams |> ObservableMap.set symbol newStream |> ignore
        
        | OrderReceived order ->
            // List operations use arena memory
            orderBook |> ObservableList.add order |> ignore
    
    interface IDisposable with
        member this.Dispose() =
            // Arena cleanup automatically handles all observables
            arena.Dispose()

The key insight is that when an actor terminates, its entire arena is reclaimed, automatically cleaning up all observable collections and their subscriptions. This provides the same deterministic cleanup guarantees for reactive programming that RAII provides for traditional resource management.

Cross-Tier Resource Management

Real applications often span multiple tiers, and RAII adapts accordingly:

// Application using multiple RAII tiers
type HybridApplication() =
    // Tier 2: Arena for application lifetime
    let appArena = Arena.create { Size = 100<MB> }
    
    // Tier 4: Actors for distributed components  
    let dataActor = Prospero.spawn<DataProcessor>()
    let uiActor = Prospero.spawn<UIUpdater>()
    
    member this.ProcessRequest(request: Request) = async {
        // Tier 1: Stack-based for request scope
        use requestTimer = startTimer()
        
        // Tier 3: Linear for streaming data
        let! streamData = acquireLinearStream request.DataSource
        
        try
            // Process with appropriate tier
            match request.Type with
            | SimpleQuery ->
                // Tier 1: Stack-based processing
                use subscription = 
                    Observable.create request.Query
                    |> Rx.subscribe (fun result ->
                        printfn "Result: %A" result)
                return! processSimple request
                
            | StreamingQuery ->
                // Tier 3: Linear ownership for efficiency
                return! processWithLinearStream streamData request
                
            | DistributedQuery ->
                // Tier 4: Actor-based coordination
                let! result = dataActor.Ask(ProcessDistributed request)
                return result
        finally
            // Cleanup happens at appropriate tier
            releaseLinearStream streamData
    }
    
    interface IDisposable with
        member this.Dispose() =
            // Cleanup in reverse order of creation
            Prospero.terminate uiActor
            Prospero.terminate dataActor
            appArena.Dispose()

Memory Safety Guarantees

The RAII integration provides strong guarantees at each tier:

| Tier | Guarantee | Mechanism | Cleanup Trigger |
|------|-----------|-----------|-----------------|
| Stack | Deterministic | Scope exit | Function return/exception |
| Arena | Batch cleanup | Arena disposal | Explicit/scope exit |
| Linear | Exactly-once | Type system | Ownership transfer/release |
| Actor | Supervised | Actor lifecycle | Termination/supervision |

Zero-Cost Abstractions Through RAII

The beauty of RAII integration is that cleanup code compiles away to simple pointer arithmetic:

// What you write
use subscription = observable |> Rx.subscribe handler

// What executes (conceptually)
let subscription = {
    Observer = handler
    Cleanup = fun () -> removeObserver(observable, handler)
}
try
    // Your code
finally
    subscription.Cleanup()  // Inlined, no allocation

// At assembly level: just stack pointer adjustment

This ensures that reactive programming in Fidelity maintains zero-cost abstractions while providing memory safety through proven RAII patterns rather than garbage collection.

Cross-Boundary Reactive Streams

Reactive streams often need to cross boundaries - whether thread boundaries, process boundaries, or network boundaries. Alloy.Rx provides appropriate mechanisms at each level:

Thread Boundaries (Epoch-Based)

// Efficient cross-thread streaming without actors
let crossThreadPipeline (source: Observable<InputData>) =
    // Create thread-safe pipeline using epochs
    let pipeline = EpochCollections.createPipeline()
    
    // Producer thread
    let producer = async {
        source |> Rx.subscribe (fun data ->
            pipeline.Publish(ProcessingStage1, data)
        )
    }
    
    // Consumer thread  
    let consumer = async {
        pipeline.Subscribe(ProcessingStage1, fun data ->
            let processed = heavyComputation data
            updateUI processed
        )
    }
    
    // No actors needed - just efficient epoch-based coordination
    Async.Parallel [producer; consumer]

Process Boundaries (Memory-Mapped)

// Zero-copy cross-process streaming via BAREWire
let crossProcessPipeline (targetProcess: ProcessId) =
    // Create memory-mapped channel
    let channel = BAREWire.createChannel<MarketData> {
        Size = 100<MB>
        Mode = ProducerConsumer
        Target = targetProcess
    }
    
    // Producer process writes directly to shared memory
    marketDataFeed |> Rx.subscribe (fun data ->
        channel.Write(data)  // Zero-copy write
    )
    
    // Consumer process reads without serialization
    channel.AsObservable()
    |> Rx.map processMarketData
    |> Rx.subscribe updateTradingStrategy

Actor Boundaries (When Distribution Needed)

// Actor-based only when true distribution is required
type DashboardActor() =
    inherit Actor<DashboardMessage>()
    
    let arena = Arena.forCurrentActor()
    let mutable analyticsSubscription: Subscription option = None
    
    override this.Receive message =
        match message with
        | ConnectToAnalytics analyticsRef ->
            // Cross-actor subscription with proper cleanup
            let subscription = 
                analyticsRef.GetObservable<MetricsUpdate>()
                |> Rx.map (fun metrics -> 
                    // Processing happens in UI actor's arena
                    updateDashboard metrics)
                |> Rx.subscribe ignore
            
            // Track subscription for cleanup
            analyticsSubscription <- Some subscription
            
        | Disconnect ->
            // Explicit cleanup when needed
            analyticsSubscription |> Option.iter (fun s -> s.Dispose())
            analyticsSubscription <- None

Reference Sentinels for Failure Detection

For scenarios requiring rich failure information across boundaries (not just actor-based), the Reference Sentinel pattern provides sophisticated state tracking:

// Sentinels work with any remote reference, not just actors
type RemoteObservable<'a> = {
    LocalProxy: Observable<'a>
    Sentinel: ReferenceSentinel
    ConnectionType: ConnectionType
}

and ConnectionType =
    | InProcess        // Same process, different thread
    | CrossProcess     // Different process, same machine  
    | NetworkRemote    // Different machine
    | ActorRemote      // Actor-based distribution

module RemoteRx =
    let connect (remote: RemoteReference) : RemoteObservable<'a> =
        let localProxy = Observable.create NoValue
        let sentinel = remote.Sentinel
        
        match remote.Type with
        | InProcess ->
            // Simple epoch-based coordination
            EpochCollections.bridge remote.Source localProxy
            
        | CrossProcess ->
            // BAREWire memory-mapped channel
            BAREWire.bridge remote.Channel localProxy
            
        | NetworkRemote ->
            // Network protocol with retry logic
            Network.bridge remote.Endpoint localProxy sentinel
            
        | ActorRemote ->
            // Full actor coordination
            Prospero.bridge remote.ActorRef localProxy sentinel
        
        { LocalProxy = localProxy
          Sentinel = sentinel
          ConnectionType = remote.Type }

Choosing the Right Boundary Crossing

The Firefly compiler helps choose the appropriate mechanism:

// Developer writes boundary-agnostic code
let analyzeDataStream (source: Observable<SensorData>) =
    source
    |> Rx.window (TimeSpan.FromSeconds 10.0)
    |> Rx.map computeStatistics
    |> Rx.filter isAnomalous
    |> Rx.subscribe alert

// Compiler chooses based on deployment:
// - Same thread: Direct function calls
// - Cross-thread: Epoch-based coordination
// - Cross-process: BAREWire zero-copy
// - Cross-machine: Network protocol
// - Elastic/fault-tolerant: Actor model

This flexibility ensures that simple scenarios remain simple while complex distributed scenarios are fully supported when needed.

Context-Aware Memory Optimization

The compiler’s choice between immutable and mutable implementations for observable collections considers the full context of usage, not just arena allocation patterns:

// Compiler analysis determines optimal implementation
[<CompilerGenerated>]
module ObservableOptimization =
    // Single-threaded context: mutable for performance
    let createForComputation (coeffects: ContextRequirement Set) =
        if coeffects.Contains(Pure) && coeffects.Contains(SingleThreaded) then
            ObservableList.createMutable()  // Direct mutation safe
        else
            ObservableList.createImmutable() // Structural sharing needed
    
    // Arena context: batch allocation strategies
    let createForArena (arena: Arena) =
        match arena.Strategy with
        | Sequential -> 
            // Linear allocation for sequential access
            ObservableList.createLinearIn arena
        | RandomAccess ->
            // Tree structure for efficient random access
            ObservableList.createTreeIn arena
        | Streaming ->
            // Ring buffer for streaming scenarios
            ObservableList.createRingBufferIn arena
    
    // Cross-boundary context: appropriate synchronization
    let createForBoundary (boundary: BoundaryType) =
        match boundary with
        | ProcessLocal ->
            // Epoch-based synchronization
            ObservableList.createWithEpochs()
        | MachineLocal ->
            // Memory-mapped via BAREWire
            ObservableList.createMemoryMapped()
        | NetworkDistributed ->
            // Actor-based coordination
            ObservableList.createActorBased()

This context-aware compilation ensures that reactive collections use memory efficiently within the constraints of their deployment environment, whether that’s a single thread, an arena, or a distributed system.

Performance Implications by Context

The adaptive implementation strategy yields dramatic performance differences:

| Context | Implementation | Memory Usage | Update Cost | Read Cost |
|---------|----------------|--------------|-------------|-----------|
| Pure Single-Thread | Mutable array | Minimal | O(1) amortized | O(1) |
| Multi-Thread Local | Epoch-based | 2x snapshots | O(1) + fence | O(1) |
| Arena-Scoped | Custom allocator | Arena-local | O(1) | O(1) |
| Cross-Process | Memory-mapped | Shared pages | O(1) + IPC | O(1) |
| Distributed | Actor-coordinated | Per-actor | Message cost | Message cost |

Alloy.Rx with Fidelity’s adaptive memory management demonstrates that reactive programming can be both elegant and efficient at any scale.

By providing a unified API that adapts to different contexts - from stack allocation to distributed systems - we eliminate the traditional trade-offs between safety, performance, and expressiveness. This approach enables developers to write reactive code that automatically optimizes for its deployment context while maintaining the predictability required for systems programming.

Advanced Operators

Our approach also enables sophisticated temporal operators:

// Temporal alignment operator
let align (obs1: Observable<'a>) (obs2: Observable<'b>) : Observable<'a * 'b> =
    let target = create NoValue
    
    // Track latest values using structural sharing
    let state = ref (None, None)
    
    let updateIfBoth() =
        match !state with
        | (Some a, Some b) -> target |> next (HasValue (a, b)) |> ignore
        | _ -> ()
    
    obs1 |> subscribe (fun a -> 
        state := (Some a, snd !state)
        updateIfBoth()
    ) |> ignore
    
    obs2 |> subscribe (fun b ->
        state := (fst !state, Some b)
        updateIfBoth()
    ) |> ignore
    
    target

// Categorical composition - Kleisli composition for the Observable monad
// In categorical notation: (f >=> g) a = f a |> bind g
let (>=>) (f: 'a -> Observable<'b>) (g: 'b -> Observable<'c>) : 'a -> Observable<'c> =
    fun a -> f a |> bind g

// Applicative operators
let (<*>) (obsF: Observable<'a -> 'b>) (obsA: Observable<'a>) : Observable<'b> =
    align obsF obsA 
    |> map (fun (f, a) -> f a)
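
A brief usage sketch (the observables here are illustrative): combining two reactive inputs applicatively, so the derived value recomputes whenever either input changes:

// Derived reactive value built from the applicative operators above
let width  = Rx.create 2.0
let height = Rx.create 3.0

let area =
    Rx.create (fun w h -> w * h)  // observable holding a curried function
    <*> width
    <*> height
// area updates whenever width or height pushes a new value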

Memory Management Integration

Alloy.Rx integrates with Fidelity’s RAII-based memory management:

type Subscription = 
    { ObservableId: Guid
      ObserverId: ObserverId }
    interface IDisposable with
        member this.Dispose() =
            // Cleanup handled by RAII principles
            Registry.unsubscribe this

// Safe subscription with automatic cleanup
let subscribe (handler: 'a -> unit) (obs: Observable<'a>) : Subscription =
    let (newObs, subscription) = Rx.subscribe handler obs
    Registry.register subscription newObs
    subscription

// Usage with RAII
let processEvents events =
    use sub = events |> Rx.subscribe processEvent
    // Subscription automatically cleaned up
    doWork()

Compiler Optimizations

The Firefly compiler can apply sophisticated optimizations to Alloy.Rx code by leveraging coeffect information:

Pattern Recognition and Fusion

// Pattern: Multiple maps
source |> Rx.map f |> Rx.map g |> Rx.map h

// Compiler recognizes functor laws and fuses:
source |> Rx.map (f >> g >> h)

// Pattern: Filter-map combination
source |> Rx.filter predicate |> Rx.map transform

// Compiler generates:
source |> Rx.choose (fun x -> if predicate x then Some (transform x) else None)

Cross-Thread Optimization

When coeffects indicate thread boundaries:

// UI thread observable
let uiEvents = Observable.create<UIEvent> NoValue  // Coeffect: UIThread

// Background computation
let processed = 
    uiEvents 
    |> Rx.map (fun e -> heavyComputation e)  // Coeffect: Computation

// Compiler recognizes thread transition and generates:
let processed = 
    uiEvents 
    |> Rx.observeOn backgroundScheduler
    |> Rx.map heavyComputation
    |> Rx.observeOn uiScheduler

Zero-Allocation Transformations

For pure transformations, the compiler can eliminate intermediate observables:

// Developer writes:
let pipeline =
    sensor
    |> Rx.map normalize
    |> Rx.filter (fun x -> x > threshold)
    |> Rx.scan (+) 0.0

// Compiler generates single-pass iteration (conceptual form):
let mutable runningSum = 0.0
let pipeline = {
    Last = 0.0
    Observers = ResizeArray()
    Handler = fun sensorValue ->
        let normalized = normalize sensorValue
        if normalized > threshold then
            runningSum <- runningSum + normalized
            notifyObservers runningSum
}

Integration with Fabulous.Fidelity

The true power of Alloy.Rx emerges in UI scenarios where Fabulous.Fidelity provides declarative bindings:

// Define reactive model
type Model = {
    SearchText: Observable<string>
    Results: Observable<SearchResult list>
    IsLoading: Observable<bool>
}

// Fabulous view with reactive bindings
let view (model: Model) dispatch =
    VStack() {
        // Two-way binding
        SearchBox()
            .bindText(model.SearchText)
            .onTextChanged(fun text ->
                model.SearchText |> Rx.next text
                dispatch (Search text))
        
        // Conditional rendering based on observable
        model.IsLoading
        |> Rx.map (fun loading ->
            if loading then
                ProgressBar() :> View
            else
                ResultList()
                    .bindItems(model.Results) :> View
        )
        |> ReactiveContent
    }

The compiler recognizes these patterns and generates efficient update code (a sketch follows the list):

  • Property bindings compile to direct property setters
  • Conditional rendering compiles to efficient view swapping
  • Collection bindings use incremental updates
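
As a sketch of what this lowering means in practice - the SearchBox members here are hypothetical stand-ins for whatever the real control exposes - a two-way text binding reduces to a pair of direct connections with no diffing layer in between:

// Hypothetical lowering of .bindText (illustrative only)
let bindText (box: SearchBox) (text: Observable<string>) =
    // Observable -> control: each update calls the property setter directly
    text |> Rx.subscribe (fun s -> box.SetText s) |> ignore
    // Control -> observable: UI edits push back through the same observable
    box.OnTextChanged(fun s -> text |> Rx.next s |> ignore)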

The Architectural Synthesis

Alloy.Rx represents a synthesis of functional reactive programming with Fidelity’s systems programming goals.

This synthesis isn’t merely a combination of existing ideas but rather a fundamental rethinking of how reactive systems should work when freed from the constraints of managed runtimes. By grounding our design in mathematical principles and refusing to accept the performance penalties traditionally associated with reactive programming, we’ve arrived at a principled path that improves on many of the patterns used in today’s technology landscape.

Mathematical Rigor

The algorithmic foundations of Alloy.Rx go beyond academic exercise - they provide the conceptual framework that makes everything else possible. When we recognize observables as forming a proper monad with well-defined laws, we unlock the ability to reason about reactive programs with the same confidence with which mathematicians reason about algebraic structures.

  • Observables form a monad with proper laws
  • Temporal operators have categorical semantics
  • Coeffect tracking ensures context-aware compilation

The temporal operators we’ve defined have categorical semantics that ensure composability at any scale, from simple event handlers to complex distributed systems. Most importantly, our coeffect tracking doesn’t just annotate code - it provides a mathematical proof of where and how reactive operations can execute, enabling the compiler to make optimization decisions that would be unsafe without this formal foundation.
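
For reference, these are the standard monad laws that Observable’s bind (with create playing the role of return) is expected to satisfy:

\[\begin{align} \text{bind}\ f\ (\text{create}\ a) &= f\ a && \text{(left identity)} \\ \text{bind}\ \text{create}\ m &= m && \text{(right identity)} \\ \text{bind}\ g\ (\text{bind}\ f\ m) &= \text{bind}\ (\lambda x.\ \text{bind}\ g\ (f\ x))\ m && \text{(associativity)} \end{align}\]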

Performance Excellence

The performance characteristics of Alloy.Rx demonstrate what becomes possible when we challenge the assumption that abstraction must come at a cost. Our zero-allocation transformations for pure reactive pipelines show that functional programming patterns can compile to code as efficient as hand-written loops.

  • Zero-allocation for pure transformations
  • RAII-based subscription management
  • Compiler fusion of observable pipelines

The RAII-based subscription management eliminates the garbage collection pressure that makes traditional reactive frameworks unsuitable for real-time systems. Perhaps most remarkably, our compiler fusion of observable pipelines means that complex reactive graphs can execute with the same efficiency as if a systems programmer had manually optimized every operation. This isn’t theoretical - it’s the practical result of building on solid foundations.

Developer Ergonomics

While performance and correctness are crucial, we’ve never lost sight of the developer experience. F# developers encountering Alloy.Rx will find familiar functional reactive programming patterns that work exactly as expected. The progressive disclosure of complexity means that simple reactive programs remain simple, while sophisticated scenarios have clear paths to implementation.

The seamless integration with async workflows acknowledges that real applications need both push and pull patterns, often in the same codebase.

By making the right thing also the easy thing, we ensure that developers naturally write efficient, correct reactive code without having to become experts in compiler optimization or memory management.

  • Familiar FRP patterns for F# developers
  • Progressive disclosure of complexity
  • Seamless integration with async workflows

The integration succeeds because it recognizes push-based reactivity as a fundamental pattern that complements pull-based async, both unified under the mathematical framework of codata. This isn’t about choosing between approaches - it’s about providing the right tool for each scenario while maintaining the performance and safety guarantees that make Fidelity unique.

Future Directions

As Alloy.Rx evolves within the Fidelity ecosystem, several exciting possibilities emerge. These aren’t speculative features but natural extensions of the mathematical and architectural foundations we’ve established. Each represents a frontier where the principles driving Alloy.Rx can unlock new capabilities that were previously thought to require specialized systems or accept significant performance compromises.

Compile-Time Reactive Graphs

The ability to analyze entire reactive graphs at compile time opens doors that remain closed to traditional reactive frameworks. When the Firefly compiler can see the complete topology of reactive relationships, it can perform global optimizations that go far beyond local transformations. Imagine dead code elimination that understands which event streams will never fire, or automatic parallelization that recognizes independent subgraphs. The mathematical structure of our observables makes these analyses not just possible but tractable, turning what would be NP-hard problems in general programs into polynomial-time optimizations in the reactive domain.

The compiler could analyze entire reactive graphs at compile time, optimizing data flow across component boundaries. We’re very excited to consider this future work, and are hoping the community will join us on that portion of the journey after the initial tooling is in place and can be made available for review and feedback.

Hardware-Accelerated Events

The transparency of Alloy.Rx’s implementation creates unprecedented opportunities for hardware integration. On embedded systems, reactive streams could compile directly to interrupt service routines, with the compiler verifying that handlers meet real-time constraints. The coeffect system would ensure that only appropriate operations occur in interrupt context, while the RAII integration guarantees bounded memory usage.

This isn’t about building a special embedded version of Alloy.Rx - it’s the natural result of a design that respects the realities of hardware from the ground up.

For embedded systems, reactive streams could compile directly to interrupt handlers with minimal overhead. The same reactive code that coordinates UI updates on a desktop could drive motor control on a robot, with the compiler generating the appropriate implementation for each target.

Distributed Reactive Systems

The ultimate validation of Alloy.Rx’s design will emerge in our roadmap for distributed systems, where the combination of actor model integration, reference sentinels, and coeffect tracking enables reactive streams that span process and machine boundaries as naturally as local observers. When reactive streams cross network boundaries, the compiler can automatically insert the appropriate serialization and error handling based on coeffect analysis.

Our unique actor model integration will enable reactive streams that span process boundaries efficiently.

Reference sentinels provide rich failure information that traditional null references cannot express, enabling sophisticated retry and failover strategies. The RAII integration ensures that distributed subscriptions are cleaned up properly even when remote processes fail. This creates a unified programming model where location transparency isn’t an aspiration but an architectural guarantee.

A Paradigm Shift in Reactive Programming

What we’ve designed with Alloy.Rx represents a significant departure not just from .NET’s Reactive Extensions, but from the broader landscape of reactive programming implementations industry-wide. Frameworks like RxJS, RxJava, and ReactiveX have accepted runtime overhead as the price of abstraction. Systems like Rust’s futures have chosen zero-cost abstractions at the expense of ergonomics. And OCaml’s React uses weak references and finalizers for dependency tracking. By contrast, our approach isn’t a compromise or a fragile balancing act - it’s a fundamental recognition that the right mathematical abstractions compile naturally to powerful, safe code.

The design of Alloy.Rx demonstrates that we can have both elegant expression and “close to the metal” performance.

By understanding push-based reactivity as a form of codata dual to the pull-based codata of async streams, we have shown how both patterns can be unified under a single compilation strategy. Coeffects track context requirements. RAII integration ensures deterministic cleanup. The Olivier actor model provides natural concurrency boundaries. None of these choices are arbitrary engineering decisions. Each emerges naturally from the mathematical structure once we commit to preserving that structure through compilation rather than erasing it at runtime.

This approach doesn’t just serve today’s systems well - it’s designed for the emerging landscape of heterogeneous computing that will define the next era of technology. Processors will become increasingly specialized. Edge computing will push intelligence to every device. Distributed systems will span from IoT sensors to quantum computers. The need for programming models that can efficiently target this diversity becomes critical. Alloy.Rx provides that model, not through platform-specific adaptations but through mathematical principles that remain valid whether compiling to a microcontroller interrupt handler or a distributed stream processing cluster. The future of computing is heterogeneous, distributed, and reactive - and with Alloy.Rx, we’re ready to meet it with a proper balance of mathematical rigor, elegance, and engineering efficiency.

Author: Houston Haynes
Date: August 03, 2025
Category: Design
