The Rust programming ecosystem has transformed how the software industry views systems programming. By pioneering its ownership system with “borrowing” and “lifetimes”, Rust brought compile-time memory safety into mainstream development. Beyond memory management, Rust’s innovations in zero-cost abstractions, trait-based generics, and “fearless concurrency” philosophy have influenced an entire generation of language designers.
🔄 Updated July 26, 2025
- Comparison of Prospero/BAREWire capability model vs Rust’s ownership model
- Delimited continuations for deterministic async vs Rust’s runtime-dependent state machines
- Detailing optionality of MLIR’s multi-level optimizations vs Rust’s direct LLVM compilation
- RAII through Fidelity continuation boundaries vs Rust’s explicit Drop trait implementations
- Zipper traversal for control and data graph analysis vs Rust’s forward-only compilation
At SpeakEZ, we’ve reviewed some of Rust’s design choices while developing the Fidelity Framework. This analysis explores both the similarities stemming from our shared OCaml heritage and the contrasts arising from different design philosophies, particularly around async programming models, ecosystem coherence, and developer ergonomics.
Honoring Rust’s Novel Approach
Rust deserves credit for showing that memory safety doesn’t require sacrificing performance. Its ownership and trait systems demonstrated that compile-time formalism can deliver efficiency, making imperative programming dramatically safer. The language’s zero-cost abstractions showed that high-level programming constructs could compile down to machine code as efficient as hand-written C. This created a sea change in technology circles that continues to resonate today.
Interestingly, despite fundamental differences in design philosophy, both Rust and F# share a connection to OCaml. Rust’s first compiler was written in OCaml, and traces of this heritage can be seen in features like pattern matching and in the shape of its type system. F# began explicitly as an adaptation of OCaml for the .NET ecosystem, and later the Fable compiler (F# to JavaScript) emerged and flourished. This shared lineage manifests in similar constructs: anonymous records, algebraic data types, and pattern matching. The languages have nevertheless evolved toward decidedly different goals. Where F# embraced functional-first programming with immutability by default, Rust pursued an imperative design with its ownership system for memory safety.
The Fidelity Framework acknowledges these important contributions while exploring a unique path, drawing from F#’s twenty-year history of innovation. Since Don Syme first developed F# in the early 2000s, the language has pioneered its own approaches to type safety and expressiveness. Units of measure, discriminated unions, type providers, and computation expressions each represent significant advances in type system design and functional programming patterns. Beyond its OCaml foundations, F# also incorporated the actor model through its MailboxProcessor, drawing inspiration from Erlang’s approach to concurrent, fault-tolerant systems. This synthesis of OCaml’s type safety with Erlang’s actor-based concurrency created unique capabilities that Fidelity now extends further.
This blog entry explores some of the similarities and differences we found while charting a unique systems programming path for F# in the Fidelity framework.
Rust’s Async Runtime Challenge
Rust’s ecosystem has matured remarkably over the past decade, and with that maturity comes valuable lessons about the challenges of system design at scale. Perhaps nowhere is this more apparent than in the evolution of Rust’s async story.
The Hidden Complexity of Async Rust
While Rust’s async/await syntax appears straightforward, the underlying reality involves significant complexity that impacts code clarity and debugging:
// What you write - seemingly simple
async fn fetch_and_process(url: &str) -> Result<Data, Error> {
    let response = fetch_url(url).await?;
    let processed = process_data(response).await?;
    Ok(processed)
}

// What actually happens - opaque runtime machinery
// 1. This becomes a state machine
// 2. Which runtime executes it? Tokio? async-std? smol?
// 3. Each runtime has different:
//    - Scheduling algorithms
//    - Performance characteristics
//    - Feature sets
//    - Debugging capabilities
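To make that transformation concrete, here is a deliberately simplified sketch of the kind of state machine an async fn desugars to. The enum, its stage names, and the synchronous `step` driver are our own illustrative inventions; real generated futures implement `Future::poll` with wakers and pinning, which is exactly the machinery that obscures stack traces.

```rust
// Simplified model of the state machine an async fn desugars to.
// Stage names and the synchronous `step` driver are illustrative only.

enum FetchAndProcess {
    Start(String),           // holds the url until the first await
    AwaitingFetch(String),   // suspended at `fetch_url(url).await`
    AwaitingProcess(String), // suspended at `process_data(response).await`
    Done,
}

impl FetchAndProcess {
    // Each call to `step` advances past one suspension point.
    fn step(self) -> (Self, Option<String>) {
        match self {
            FetchAndProcess::Start(url) => (FetchAndProcess::AwaitingFetch(url), None),
            FetchAndProcess::AwaitingFetch(url) => {
                let response = format!("response from {url}"); // stands in for fetch_url
                (FetchAndProcess::AwaitingProcess(response), None)
            }
            FetchAndProcess::AwaitingProcess(response) => {
                let processed = format!("processed {response}"); // stands in for process_data
                (FetchAndProcess::Done, Some(processed))
            }
            FetchAndProcess::Done => (FetchAndProcess::Done, None),
        }
    }
}

fn main() {
    let mut machine = FetchAndProcess::Start("example.com".to_string());
    let mut output = None;
    while output.is_none() {
        let (next, out) = machine.step();
        if matches!(next, FetchAndProcess::Done) && out.is_none() {
            break;
        }
        machine = next;
        output = out;
    }
    assert_eq!(output.as_deref(), Some("processed response from example.com"));
}
```

The real machinery additionally records which await point to resume at when a waker fires, which is why a panic inside a state surfaces through layers of runtime frames rather than through the source-level call chain.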
The fragmentation creates several challenges:
- Runtime Lock-in: Once you choose Tokio or some other runtime, your entire dependency tree must use libraries that are “locked” to that early choice.
- Opaque Execution: The transformation from async/await to state machines makes debugging difficult. Stack traces become nearly unreadable:
thread 'tokio-runtime-worker' panicked at 'error', src/main.rs:42:5
stack backtrace:
   0: rust_begin_unwind
   1: core::panicking::panic_fmt
   2: <tokio::runtime::task::harness::Harness<T,S> as core::future::Future>::poll
   3: tokio::runtime::task::raw::poll
   4: tokio::runtime::scheduler::multi_thread::worker::Context::run_task
   5: tokio::runtime::scheduler::multi_thread::worker::run
   ...20 more frames of runtime internals...
- Incompatible Ecosystems: Libraries must often provide multiple implementations:
// A library author's dilemma
#[cfg(feature = "tokio")]
pub async fn do_io_tokio() -> Result<String, Error> {
    tokio::fs::read_to_string("file.txt").await
}

#[cfg(feature = "async-std")]
pub async fn do_io_async_std() -> Result<String, Error> {
    async_std::fs::read_to_string("file.txt").await
}

#[cfg(feature = "smol")]
pub async fn do_io_smol() -> Result<String, Error> {
    smol::fs::read_to_string("file.txt").await
}
F#’s Unified Async Model
In contrast, F# has maintained a single, coherent async model since 2007. F#’s async workflows were pioneering, predating C#’s async/await (2012), Python’s (2015), and JavaScript’s (2017). This early innovation influenced the async designs of numerous languages that followed:
// F# async - consistent since 2007
let fetchAndProcess url = async {
    let! response = fetchUrl url
    let! processed = processData response
    return processed
}

// Clear stack traces that directly map to your code
// No runtime machinery obscuring the execution path
// Same model works everywhere - client, server, embedded
The benefits of this unified approach become clear in practice:
- Predictable execution: You know exactly how your async code will behave
- Clear debugging: Stack traces point directly to your code
- Universal compatibility: All F# async code works together
- Simplified mental model: One way to think about concurrency
Fidelity’s Innovation: Compile-Time Determinism
While F# established the async pattern, Fidelity takes it further by using delimited continuations and zipper traversal to achieve compile-time determinism in async state machines:
// What you write - standard F# async
let processAsync data = async {
    let! validated = validateAsync data
    let! transformed = transformAsync validated
    return transformed
}

// What Fidelity does - delimited continuations create explicit control flow
// that the compiler can analyze and optimize at compile time
let processWithDcont data =
    reset (fun () ->
        let validated = shift (fun k -> validateAsync data |> continueWith k)
        let transformed = shift (fun k -> transformAsync validated |> continueWith k)
        transformed
    )

// Zipper traversal enables bidirectional analysis of the state machine
// allowing MLIR to generate optimal code with known state transitions
This approach provides several advantages over runtime-based async implementations:
- Compile-time state machine generation: MLIR knows all possible state transitions at compile time
- Deterministic memory layout: Stack frame sizes and allocations are predetermined
- Optimal scheduling: The compiler can make informed decisions about parallelism and resource usage
- Zero runtime overhead: No dynamic dispatch or runtime state machine interpretation
The combination of delimited continuations for control flow and zipper traversal for program analysis allows Fidelity to generate MLIR code that is both more efficient and more predictable than traditional runtime-based async implementations.
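Delimited continuations may be unfamiliar; a rough analogue is explicit continuation-passing style, where each stage receives “the rest of the computation” as a value. The sketch below uses toy Rust functions of our own invention, not Fidelity or Rust APIs, purely to show how this makes control flow an explicit value a compiler can analyze.

```rust
// Toy continuation-passing sketch: each stage receives the rest of the
// computation (the continuation) as an explicit closure, roughly the way
// shift/reset make control flow first-class in Fidelity.
// `validate` and `transform` are hypothetical stand-ins.

fn validate<K: FnOnce(i32) -> i32>(data: i32, k: K) -> i32 {
    // pretend-validation, then hand the result to the continuation
    k(data.abs())
}

fn transform<K: FnOnce(i32) -> i32>(validated: i32, k: K) -> i32 {
    k(validated * 2)
}

fn process_with_continuations(data: i32) -> i32 {
    // The nesting makes every suspension point and its resumption explicit,
    // which is what lets a compiler see the whole control graph statically.
    validate(data, |validated| transform(validated, |transformed| transformed))
}

fn main() {
    assert_eq!(process_with_continuations(-21), 42);
}
```

Because each continuation is a concrete closure rather than runtime state-machine bookkeeping, the full control graph is visible at compile time, which is the property Fidelity exploits when lowering to MLIR.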
Shared Approach: Result-Based Error Handling
Interestingly, Rust and Fidelity framework have converged on very similar approaches to error handling, both using Result types with monadic composition:
// Rust: Result with ? operator
fn process_file(path: &str) -> Result<Summary, ProcessError> {
    let contents = std::fs::read_to_string(path)?;
    let parsed = parse_contents(&contents)?;
    let validated = validate_data(parsed)?;
    let summary = summarize(validated)?;
    Ok(summary)
}

// F#: Result with computation expressions
let processFile path = result {
    let! contents = File.readAllText path
    let! parsed = parseContents contents
    let! validated = validateData parsed
    let! summary = summarize validated
    return summary
}
Both approaches:
- Use algebraic data types for errors (Rust’s enums, F#’s discriminated unions)
- Provide syntactic sugar for error propagation (? vs let!)
- Maintain type safety throughout the error handling flow
- Support composable error transformations
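To make the parallel concrete, here is a self-contained sketch of the Rust pipeline above with toy stages; `ProcessError` and the stage bodies are illustrative inventions, not a real API.

```rust
// Self-contained sketch of a ?-based Result pipeline with toy stages.

#[derive(Debug, PartialEq)]
enum ProcessError {
    Empty,
    TooLong,
}

fn parse_contents(contents: &str) -> Result<Vec<&str>, ProcessError> {
    if contents.is_empty() {
        return Err(ProcessError::Empty); // an error short-circuits the pipeline
    }
    Ok(contents.split(',').collect())
}

fn validate_data(parsed: Vec<&str>) -> Result<Vec<&str>, ProcessError> {
    if parsed.len() > 3 {
        Err(ProcessError::TooLong)
    } else {
        Ok(parsed)
    }
}

fn summarize(validated: Vec<&str>) -> Result<usize, ProcessError> {
    Ok(validated.len())
}

// `?` unwraps an Ok or returns the Err immediately, much like `let!`
// inside an F# `result { ... }` computation expression.
fn process(contents: &str) -> Result<usize, ProcessError> {
    let parsed = parse_contents(contents)?;
    let validated = validate_data(parsed)?;
    summarize(validated)
}

fn main() {
    assert_eq!(process("a,b,c"), Ok(3));
    assert_eq!(process(""), Err(ProcessError::Empty));
}
```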
Type System Philosophies: Traits vs. Type Classes and Beyond
While both languages draw inspiration from functional programming, they’ve evolved different approaches to polymorphism and type abstraction.
Rust’s Trait System
Rust’s traits provide a powerful system for defining shared behavior:
// Rust traits with associated types and lifetimes
trait Container<'a> {
    type Item: 'a;

    fn get(&'a self, index: usize) -> Option<&'a Self::Item>;
    fn len(&self) -> usize;

    fn is_empty(&self) -> bool {
        self.len() == 0 // Default implementation
    }
}

// Implementation requires explicit lifetime management
impl<'a, T: 'a> Container<'a> for Vec<T> {
    type Item = T;

    fn get(&'a self, index: usize) -> Option<&'a T> {
        self.as_slice().get(index) // as_slice avoids resolving to the trait method
    }

    fn len(&self) -> usize {
        self.as_slice().len()
    }
}
F#’s Structural Type System and Type Providers
F# takes a different approach with structural typing and innovative features like type providers:
// F# uses structural typing and type inference
type Container<'T> = {
    Get: int -> 'T option
    Length: int
}

// Type providers generate types from external data sources
type Database = SqlDataConnection<"Server=...;Database=...">

// This creates strongly-typed access to database schema at compile time!
let getCustomers() =
    query {
        for customer in Database.Customers do
        where (customer.Country = "USA")
        select customer
    }

// Units of measure provide zero-cost domain modeling
[<Measure>] type USD
[<Measure>] type EUR

let convertCurrency (amount: decimal<USD>) (rate: decimal<EUR/USD>) : decimal<EUR> =
    amount * rate // Type-safe currency conversion
F#’s type providers represent a unique innovation, generating types at compile time from external schemas, something that would require complex procedural macros in Rust. Type providers are beyond the current scope of the Fidelity framework, but we’re working to keep that key feature “on the board” as our roadmap develops.
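Although Rust lacks units of measure, a rough approximation uses phantom-typed newtypes. The sketch below is our own illustration of that idea, not an established crate API; the types are erased at runtime just as F#’s measures are.

```rust
use std::marker::PhantomData;

// Phantom marker types stand in for F#'s [<Measure>] declarations (illustrative).
struct USD;
struct EUR;

// Amount<C> is a zero-cost wrapper: the currency exists only in the type
// system and occupies no space at runtime.
struct Amount<C>(f64, PhantomData<C>);

impl<C> Amount<C> {
    fn new(value: f64) -> Self {
        Amount(value, PhantomData)
    }
}

// An exchange rate is tagged with both its source and target currency.
struct Rate<From, To>(f64, PhantomData<(From, To)>);

// Conversion only type-checks when the amount's currency matches the
// rate's source currency.
fn convert<From, To>(amount: Amount<From>, rate: Rate<From, To>) -> Amount<To> {
    Amount::new(amount.0 * rate.0)
}

fn main() {
    let dollars: Amount<USD> = Amount::new(100.0);
    let rate: Rate<USD, EUR> = Rate(0.5, PhantomData); // illustrative rate
    let euros: Amount<EUR> = convert(dollars, rate);
    assert_eq!(euros.0, 50.0);
    // convert(euros, rate) would not compile: the rate converts USD, not EUR
}
```

The difference in ergonomics is real, though: F#’s measures compose arithmetically (EUR/USD) for free, whereas the Rust encoding needs a new impl for each operation.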
Functional-First Design With Strategic Imperative Code
Rust’s Innovation: Rust’s ownership system manages memory through strict rules about who “owns” data and how it can be borrowed, enforced by the borrow checker at compile time. This represents a fundamental breakthrough in systems programming.
Consider how Rust handles a common pattern, processing data that needs both reading and mutation:
// Rust: Explicit lifetime management with borrow checker
fn process_data<'a>(data: &'a mut Vec<DataPoint>, refs: &'a [Reference])
    -> Result<Statistics, Error> {
    // Must carefully manage mutable and immutable borrows
    let stats = calculate_stats(&*data)?; // Immutable borrow

    for (i, point) in data.iter_mut().enumerate() { // Mutable borrow
        if refs.get(i).map_or(false, |r| r.should_update) {
            point.value *= 2.0;
        }
    }
    Ok(stats)
}
Fidelity’s Complementary Approach: While respecting Rust’s ownership model, Fidelity explores a capability-based architecture that offers different ergonomic benefits. Using Prospero for capability marshaling and BAREWire for zero-copy operations, we achieve memory safety with more flexibility than traditional ownership models:
// Domain layer: Pure functional code focusing on business logic
module Analytics =
    // Express complex algorithms without memory management concerns
    let identifyAnomalies (timeSeries: TimeSeries) : AnomalyReport =
        timeSeries
        |> TimeSeries.rollingWindow 20
        |> Seq.map calculateVariance
        |> Seq.filter (fun v -> v > anomalyThreshold)
        |> Seq.map createAnomalyEvent
        |> AnomalyReport.fromEvents

// Infrastructure layer: Capability-based zero-copy operations
module TimeSeriesLoader =
    // Prospero manages capabilities, BAREWire enables zero-copy access
    let loadFromBuffer (bufferCap: BufferCapability) : Result<TimeSeries, LoadError> =
        // BAREWire provides zero-copy view into the buffer
        let dataView = BAREWire.createView bufferCap

        // Prospero ensures capability validity across async boundaries
        Prospero.withCapability bufferCap (fun () ->
            dataView
            |> BAREWire.decode<TimeSeriesData> // Zero-copy deserialization
            |> Result.map TimeSeries.fromRawPoints
        )

    // Capabilities can be safely shared across async operations
    let processMultipleBuffers (caps: BufferCapability list) = async {
        let! results =
            caps
            |> List.map loadFromBuffer
            |> Async.Parallel
        return results |> Array.choose Result.toOption
    }
This capability model provides several advantages for Fidelity:
- Zero-copy access: BAREWire enables direct memory access without copying
- Flexible sharing: Capabilities can be safely shared across async boundaries
- Fast resolution: Prospero’s capability marshaling provides efficient runtime checks
- Composable: Capabilities compose naturally with F#’s functional patterns
RAII Integration: Different Approaches to Deterministic Cleanup
One of Fidelity’s key design choices is the integration of RAII (Resource Acquisition Is Initialization) principles into F#’s functional paradigm, providing automatic resource management without explicit interface implementations.
Let’s compare how both languages handle RAII patterns:
// Rust: RAII with Drop trait
struct DataProcessor {
    arena: Arena,
    cache: HashMap<String, ProcessedData>,
}

impl DataProcessor {
    fn new() -> Self {
        Self {
            arena: Arena::with_capacity(100 * MB),
            cache: HashMap::new(),
        }
    }

    fn process(&mut self, data: RawData) -> Result<(), Error> {
        // Allocations happen in the arena
        let processed = self.arena.alloc(|| {
            transform_data(&data)
        })?;
        self.cache.insert(data.id.clone(), processed);
        Ok(())
    }
}

impl Drop for DataProcessor {
    fn drop(&mut self) {
        // Arena cleaned up through its own Drop impl
        println!("Cleaning up processor resources");
    }
}

// Usage - RAII ensures cleanup
{
    let mut processor = DataProcessor::new();
    processor.process(raw_data)?;
} // Drop called here automatically

// Fidelity: RAII with automatic resource management
type DataProcessor() =
    // Resources automatically managed
    let arena = Arena.allocate(100 * MB)
    let cache = Dictionary<string, ProcessedData>()

    member this.Process(data: RawData) =
        // 'use' provides automatic cleanup for scoped resources
        use scope = arena.CreateScope()
        let processed = transformData data
        cache.[data.Id] <- processed

// Cleanup happens automatically when DataProcessor goes out of scope
// No explicit interface implementation needed

// Usage - 'use' ensures cleanup
use processor = new DataProcessor()
processor.Process(rawData)
// Resources cleaned up automatically
F#’s approach to RAII is fundamentally different from Rust’s. In Fidelity, delimited continuations provide natural scope boundaries where memory release happens automatically - no disposal patterns or interfaces needed. Where Rust requires explicit Drop trait implementation, Fidelity’s delimited continuations handle scope exit and resource cleanup as a natural consequence of the control flow structure.
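Rust’s side of this comparison can be seen in a minimal runnable sketch; the `Tracked` type below is a toy of our own invention that records drop order, showing RAII cleanup firing deterministically at scope exit.

```rust
use std::cell::RefCell;

// Records the order in which values are dropped, to show RAII cleanup
// firing deterministically at scope exit (toy example).
struct Tracked<'a> {
    name: &'static str,
    log: &'a RefCell<Vec<&'static str>>,
}

impl Drop for Tracked<'_> {
    fn drop(&mut self) {
        self.log.borrow_mut().push(self.name);
    }
}

fn drop_order() -> Vec<&'static str> {
    let log = RefCell::new(Vec::new());
    {
        let _first = Tracked { name: "first", log: &log };
        let _second = Tracked { name: "second", log: &log };
        // scope ends here: locals drop in reverse declaration order
    }
    log.into_inner()
}

fn main() {
    assert_eq!(drop_order(), vec!["second", "first"]);
}
```

The determinism is the point of agreement between the two designs; the disagreement is whether the programmer writes the cleanup hook (Rust’s `Drop`) or the language’s scope structure implies it (Fidelity’s continuation boundaries).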
Type-Level Memory Safety: Different Paths to the Same Goal
Rust’s Innovation: Rust extended type systems to enforce memory safety rules at compile time, using lifetime annotations and borrowing rules to eliminate memory errors without runtime checks.
Here’s how both languages approach type-safe memory access:
// Rust: Lifetime annotations for memory safety
struct MemoryRegion<'a, T> {
    data: &'a mut [T],
    _phantom: PhantomData<T>,
}

impl<'a, T> MemoryRegion<'a, T> {
    fn access(&self, offset: usize) -> Option<&T> {
        self.data.get(offset)
    }

    fn access_mut(&mut self, offset: usize) -> Option<&mut T> {
        self.data.get_mut(offset)
    }
}

// Safe usage with compile-time checks
fn process_memory(region: &mut MemoryRegion<'_, i32>) {
    if let Some(value) = region.access_mut(0) {
        *value += 42; // Safe mutation
    }
    // The region's lifetime parameter ensures the underlying slice stays valid
}

// This won't compile - the reference would outlive the local data
// fn return_ref() -> &'static i32 {
//     let mut local = [1, 2, 3];
//     let region = MemoryRegion { data: &mut local, _phantom: PhantomData };
//     region.access(0).unwrap() // Error: cannot return reference to local data
// }
Fidelity’s Extension: Building on F#’s units of measure system, Fidelity provides compile-time memory safety through a mechanism that may be more familiar to F# developers. Non-numeric units of measure not only provide design-time support; they also drive memory mapping through BAREWire. Once that work is done, they are fully erased in the final stages of lowering, yielding a “zero cost” abstraction that serves multiple goals without sacrificing speed:
// Fidelity: Memory safety through F#'s units of measure
[<Measure>] type address
[<Measure>] type offset
[<Measure>] type bytes

// Type-safe memory operations
let accessMemory (region: MemoryRegion<'T>) (offset: int<offset>) : 'T =
    if int offset >= region.Length / sizeof<'T> then
        failwith "Memory access out of bounds"
    else
        region.Data[int offset]

// The type system prevents common errors
let example region =
    let addr = 0x1000<address>
    let off = 16<offset>
    // Compile error: cannot add address to offset without explicit conversion
    // let invalid = addr + off
    let data = accessMemory region off // Type-safe access
    data
Both approaches prevent memory errors at compile time, but through different mechanisms. Rust uses lifetime tracking to ensure references remain valid, while Fidelity uses units of measure to prevent type confusion and ensure correct usage of memory addresses and offsets at zero runtime cost.
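The units-of-measure discipline can be approximated in Rust with newtypes and a deliberately restricted `Add` implementation. This is our own sketch, not a standard library pattern, but it shows how both languages can make address/offset confusion a compile error at zero runtime cost.

```rust
use std::ops::Add;

// Newtype wrappers play the role of F#'s <address> and <offset> measures:
// both are a plain usize at runtime, but the type system keeps them apart.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Address(usize);

#[derive(Debug, Clone, Copy, PartialEq)]
struct Offset(usize);

// Only address + offset is defined; address + address is a compile error.
impl Add<Offset> for Address {
    type Output = Address;
    fn add(self, rhs: Offset) -> Address {
        Address(self.0 + rhs.0)
    }
}

fn main() {
    let base = Address(0x1000);
    let off = Offset(16);
    assert_eq!(base + off, Address(0x1010));
    // let bad = base + base; // does not compile: no Add<Address> for Address
    // let bad = off + base;  // does not compile: no Add defined for Offset
}
```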
Concurrency: Building on Strong Foundations
Rust’s Achievement: Rust’s ownership system prevents data races at compile time, delivering on its promise of “fearless concurrency” without runtime overhead.
Let’s compare how both languages handle concurrent data processing:
// Rust: Ownership ensures thread safety
use std::sync::{Arc, Mutex};
use rayon::prelude::*;

fn process_concurrent(data: Vec<DataPoint>) -> Vec<Result<Processed, Error>> {
    // Parallel iterator with automatic work distribution
    data.par_iter()
        .map(|point| {
            // Each thread gets immutable access
            process_single(point)
        })
        .collect()
}

// Shared mutable state requires explicit synchronization
fn process_with_shared_state(data: Vec<DataPoint>) -> Result<Summary, Error> {
    let shared_state = Arc::new(Mutex::new(Summary::default()));

    data.par_iter()
        .try_for_each(|point| {
            let result = process_single(point)?;
            // Must lock to access shared state
            let mut state = shared_state.lock().unwrap();
            state.update(result);
            Ok(())
        })?;

    // Extract final result
    Ok(Arc::try_unwrap(shared_state)
        .unwrap()
        .into_inner()
        .unwrap())
}
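The rayon example above depends on external types; a self-contained, std-only sketch of the same shared-state pattern (with a toy `Summary` and plain threads in place of rayon) shows the locking discipline end to end.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Std-only version of the shared-state pattern: each worker must lock the
// mutex before touching the shared summary (toy types, illustrative only).
#[derive(Default, Debug)]
struct Summary {
    total: i64,
    count: usize,
}

fn summarize_concurrently(data: Vec<i64>) -> Summary {
    let shared = Arc::new(Mutex::new(Summary::default()));

    let handles: Vec<_> = data
        .into_iter()
        .map(|point| {
            let shared = Arc::clone(&shared);
            thread::spawn(move || {
                let mut state = shared.lock().unwrap(); // explicit synchronization
                state.total += point;
                state.count += 1;
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    // All threads have joined, so this Arc is the last one standing.
    Arc::try_unwrap(shared).unwrap().into_inner().unwrap()
}

fn main() {
    let summary = summarize_concurrently(vec![1, 2, 3, 4]);
    assert_eq!(summary.total, 10);
    assert_eq!(summary.count, 4);
}
```

The ownership rules make the coordination visible at every step: the `Arc` clone per thread, the lock before mutation, and the unwrap that only succeeds once every other owner is gone.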
F#’s Heritage and Fidelity’s Extension: F# pioneered compositional asynchronous programming with async workflows in 2007, years before similar features appeared in other languages. Fidelity builds on this foundation while incorporating a capability-based model through Prospero and BAREWire for zero-copy operations:
// F#/Fidelity: Compositional concurrency with capability-based isolation
let processStreamConcurrently = coldStream {
    // BAREWire provides zero-copy buffer access via capabilities
    let! bufferCap = BAREWire.receiveCapability messagePort

    // Prospero marshals capabilities across async boundaries
    let! results =
        Prospero.withCapability bufferCap (fun cap ->
            cap
            |> BAREWire.createView // Zero-copy view
            |> ColdStream.map processChunk
            |> ColdStream.parallel maxConcurrency
            |> ColdStream.withTimeout timeout
        )
    return aggregateResults results
}

// Actor-based approach with capability management
type SummaryActor() =
    inherit Actor<SummaryMessage>()
    let mutable state = Summary.empty

    override this.Receive message =
        match message with
        | UpdateWithCapability(dataCap) ->
            // Prospero ensures capability validity
            let data = BAREWire.readWithCapability dataCap
            state <- state.update data
        | GetSummary replyChannel ->
            replyChannel.Reply state

// Usage: Capabilities provide safe, fast access across actors
let processConcurrentWithActors data = async {
    let summaryActor = spawn <| SummaryActor()

    // Create capabilities for zero-copy sharing
    let! capabilities = BAREWire.createCapabilities data

    let! results =
        capabilities
        |> List.map (fun cap -> async {
            let! result = processWithCapability cap
            summaryActor <! UpdateWithCapability cap
            return result
        })
        |> Async.Parallel

    let! summary = summaryActor <? GetSummary
    return summary
}
Both approaches prevent data races, but through different mechanisms. Rust uses its ownership system with explicit synchronization primitives, while F#/Fidelity leverages multiple concurrency patterns - from actor isolation to capability-based sharing - all unified by Prospero’s capability marshaling and BAREWire’s zero-copy operations. This provides flexibility to choose the right concurrency pattern for each use case while maintaining safety and performance, all with a consistent idiomatic F# development experience at design-time.
Compilation Architecture: MLIR vs Direct LLVM
A fundamental architectural difference between Rust and Fidelity lies in their compilation strategies, which has profound implications for optimization opportunities and system targets.
Rust’s Direct LLVM Approach
Rust compiles directly to LLVM IR, which provides excellent low-level optimizations but limits higher-level transformations:
// Rust source -> HIR -> MIR -> LLVM IR -> Machine Code
fn process_data(data: &[u8]) -> Result<(), Error> {
    // Rust's MIR (Mid-level IR) has limited abstraction
    // LLVM sees low-level operations, not high-level intent
    data.chunks(1024)
        .map(|chunk| transform(chunk))
        .collect()
}
Fidelity’s MLIR Advantage
Fidelity leverages MLIR (Multi-Level Intermediate Representation), which sits above LLVM and provides multiple abstraction levels:
// F# source -> PSG -> MLIR Dialects -> LLVM IR -> Machine Code
let processData data =
    data
    |> Array.chunkBySize 1024
    |> Array.map transform
    |> Array.concat

// MLIR preserves high-level semantics through multiple dialects:
// - Async dialect (understands async operations)
// - Memory dialect (tracks allocations and lifetimes)
// - Parallel dialect (expresses parallelism opportunities)
// - Target-specific dialects (GPU, TPU, etc.)
This architectural difference provides Fidelity with optimization opportunities that Rust cannot access:
- Cross-domain optimizations: MLIR can optimize across async boundaries, memory operations, and parallelism in ways LLVM cannot
- Target-specific transformations: Different MLIR dialects for different hardware (CPU, GPU, TPU) while maintaining the same source code
- High-level pattern matching: MLIR can recognize and optimize high-level patterns that are lost in LLVM’s low-level representation
- Progressive lowering: Optimizations can happen at the appropriate abstraction level
- Alternate compiler pathways: Using MLIR means that Fidelity also preserves options for producing code through mechanisms other than LLVM.
The result is that Fidelity can make more informed decisions because it maintains semantic information longer in the compilation pipeline and preserves optionality throughout the process.
Developer Experience: Explicit Control vs. Compositional Flow
The daily experience of working with each language reflects their different design priorities.
Rust’s Explicit Everything
Rust makes nearly every decision explicit, which provides control but requires constant attention:
// Rust: Every decision is visible
use std::rc::Rc;
use std::cell::RefCell;

struct SharedState {
    data: Rc<RefCell<Vec<String>>>, // Explicit reference counting + interior mutability
}

impl SharedState {
    fn add(&self, item: String) {
        self.data.borrow_mut().push(item); // Explicit borrow
    }

    fn process(&self) -> Result<(), Error> {
        let data = self.data.borrow(); // Explicit immutable borrow
        if data.len() > 100 {
            drop(data); // Must explicitly drop before mutable borrow
            self.data.borrow_mut().clear(); // Now we can mutably borrow
        }
        Ok(())
    }
}
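A runnable version of that sketch (with the threshold shrunk to 3 so the clearing branch is easy to exercise, and the error type dropped for brevity) shows the explicit borrow-and-drop dance in action:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Runnable variant of the SharedState sketch above; the threshold is 3
// instead of 100 purely so the clearing branch is easy to demonstrate.
struct SharedState {
    data: Rc<RefCell<Vec<String>>>,
}

impl SharedState {
    fn new() -> Self {
        SharedState { data: Rc::new(RefCell::new(Vec::new())) }
    }

    fn add(&self, item: String) {
        self.data.borrow_mut().push(item); // explicit mutable borrow
    }

    fn process(&self) -> usize {
        let data = self.data.borrow(); // explicit immutable borrow
        if data.len() > 3 {
            drop(data); // must end the immutable borrow before borrowing mutably
            self.data.borrow_mut().clear();
        }
        self.data.borrow().len()
    }
}

fn main() {
    let state = SharedState::new();
    for i in 0..5 {
        state.add(format!("item {i}"));
    }
    // 5 items exceed the threshold, so process clears the vector
    assert_eq!(state.process(), 0);
}
```

Forgetting the `drop(data)` turns this into a runtime panic (`already borrowed`), which is exactly the kind of bookkeeping the text describes as requiring constant attention.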
F#’s Compositional Abstraction
F# keeps deeper control within reach while leading with abstraction, preserving an idiomatic F# experience:
// F#: Complexity hidden behind clean abstractions
type SharedState() =
    let data = ResizeArray<string>()

    member _.Add(item) = data.Add(item)

    member _.Process() =
        if data.Count > 100 then
            data.Clear()
        Ok()

// Or using immutable approach with no explicit memory management
type ImmutableState = {
    Data: string list
}

let add item state =
    { state with Data = item :: state.Data }

let process state =
    if List.length state.Data > 100 then
        { state with Data = [] }
    else
        state
The F# approach maintains full developer control with sensible defaults, while Rust trades conciseness for explicit control. Neither is inherently superior. They serve different needs and preferences.
Memory Management: One Aspect of Many
Memory management remains key, and it’s a dimension where Rust and F#/Fidelity take materially different approaches. Both languages achieve memory safety but through divergent mechanisms:
- Rust: Monolithic ownership model with explicit lifetimes and RAII through the Drop trait, compiling directly to LLVM
- Fidelity: Graduated approach where delimited continuations provide natural scope boundaries for automatic resource cleanup, leveraging MLIR’s multi-level optimization capabilities
The key difference is architectural: Rust makes memory management explicit everywhere with direct LLVM compilation, while Fidelity uses F# idioms and background structure such as delimited continuations to create automatic cleanup at scope boundaries. This can then be fed into MLIR as mapped memory with type annotations to further inform any needed transformations.
Conclusion: Distinct Paths in Systems Programming
This analysis reveals how Rust and F#/Fidelity, despite their shared OCaml heritage, have evolved along remarkably different trajectories. Each language embodies distinct philosophies about systems programming:
Rust’s Achievements:
- Proved memory safety without garbage collection is possible and practical
- Created a powerful trait system for zero-cost abstractions
- Built a vibrant ecosystem exploring the boundaries of systems programming
- Demonstrated that functional programming concepts can enhance imperative languages
Rust’s Ongoing Challenges:
- Async runtime fragmentation creating ecosystem silos
- Complex error handling boilerplate
- Steep learning curve with the borrow checker
- Debugging opacity in async code
F#/Fidelity’s Different Path:
- Unified async model providing consistent behavior across platforms
- Compositional error handling reducing visual noise
- Type providers and units of measure offer uniquely powerful tools
- Concurrency patterns emerge naturally from delimited continuations
The key insight from this comparison is that there’s no single “correct” approach to systems programming. Rust’s explicit control and safety guarantees appeal to developers who want to understand every detail of their program’s behavior. F#/Fidelity’s compositional approach and unified abstractions appeal to those who prefer to focus on domain logic with options to take a more imperative approach where the domain calls for it.
Moreover, Fidelity’s use of MLIR rather than direct LLVM compilation provides additional optimization opportunities that Rust cannot access. By maintaining higher-level semantic information through multiple compilation stages, Fidelity can perform cross-domain optimizations and even re-target to other compilers beyond LLVM as needed.
Rather than viewing these languages in opposition, we should view them as complementary explorations of the systems programming design space. Whether you’re drawn to Rust’s explicit control or F#’s compositional elegance, both languages are in the vanguard of what’s possible in safe, performant systems programming.
The future belongs not to a single language but to an ecosystem of languages pushing the solution space forward, and both Rust and the Fidelity framework with F# have much to show the world as they continue on their own respective paths.