Blog posts exploring the concept "Distributed-Systems"
The story of distributed systems in F# begins with two distinct programming traditions that converge in the language in unique ways. From OCaml came the functional programming foundation and type-system rigor. From Erlang came the MailboxProcessor pattern and, with it, a ground-breaking approach to fault-tolerant distributed systems. Don Syme's innovations, which fused true concurrency into the primitives of a high-level programming language, were a revelation. What emerged in F# was neither a simple port nor a mere combination, but something distinctly new: a language that could express actor-based concurrency with type safety, integrate with existing ecosystems, and compile to multiple target platforms.
Read More
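The MailboxProcessor pattern mentioned above can be illustrated with a minimal sketch: an in-process actor that owns private state and is reachable only through typed messages. The `CounterMsg` type and the counter itself are illustrative choices, not code from the posts.

```fsharp
// A minimal F# MailboxProcessor sketch: the actor owns `total` and
// no other thread can touch it except by sending a message.
type CounterMsg =
    | Increment of int
    | Get of AsyncReplyChannel<int>

let counter =
    MailboxProcessor<CounterMsg>.Start(fun inbox ->
        // The recursive loop is the actor's lifetime; its argument is
        // the state, threaded from one message to the next.
        let rec loop total = async {
            let! msg = inbox.Receive()
            match msg with
            | Increment n -> return! loop (total + n)
            | Get reply ->
                reply.Reply total
                return! loop total
        }
        loop 0)

counter.Post(Increment 2)
counter.Post(Increment 3)
printfn "%d" (counter.PostAndReply Get)  // prints 5
```

`Post` is fire-and-forget, while `PostAndReply` blocks for an answer over the reply channel, which is the same request/response shape Erlang programmers know as `gen_server:call`.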
The actor model isn’t new. Carl Hewitt introduced it at MIT in 1973, the same year that Ethernet was invented. For fifty years, this elegant model of computation, where independent actors maintain state and communicate through messages, has powered everything from Erlang’s telecom switches to WhatsApp’s billions of messages. But until now it has required specialized runtimes, complex deployment, or significant infrastructure overhead. Today’s “AI agents” are essentially rediscovering what distributed systems engineers have known for decades: isolated, message-passing actors are the natural way to build resilient, scalable systems.
Read More
The promise of edge computing for AI workloads has evolved from experimental optimization to production-ready enterprise architecture. What began as our exploration of WASM efficiency gains has matured into a comprehensive platform strategy that leverages Cloudflare's full spectrum of services, from Workers and AI inference to containers, durable execution, and Zero Trust security.

A Pragmatic Approach

Our initial focus on pure WASM compilation through the Fidelity framework revealed both the tremendous potential and the practical limitations of edge-first development.
Read More
The future of AI inference lies not in ever-larger transformer models demanding massive GPU clusters, but in a diverse ecosystem of specialized architectures optimized for specific deployment scenarios. At SpeakEZ, we’re developing the infrastructure that could make this future a reality. While our “Beyond Transformers” analysis explored the theoretical foundations of matmul-free and sub-quadratic models, this article outlines how our Fidelity Framework could transform these innovations into practical, high-performance inference systems that would span from edge devices to distributed data centers.
Read More
As a companion to our exploration of CXL and memory coherence, this article examines how the Fidelity framework could extend its zero-copy paradigm beyond single-system boundaries. While our BAREWire protocol is designed to enable high-performance, zero-copy communication within a system, modern computing workloads often span multiple machines or data centers. Remote Direct Memory Access (RDMA) technologies represent a promising avenue for extending BAREWire’s zero-copy semantics across network boundaries. This planned integration of RDMA capabilities with BAREWire’s memory model would allow Fidelity to provide consistent zero-copy semantics from local processes all the way to cross-datacenter communication, expressed through F#’s elegant functional programming paradigm.
Read More
Note: This article was updated September 27, 2025, incorporating insights from recent research and a recent Richard Sutton interview that affirm many of the tenets we have put forward over the years. We’re considering designs with innovative approaches to distributed training of models that extend beyond the constraints of matrix multiplication. Matrix multiplication has served as the computational cornerstone of deep learning for over a decade, yet examining its dominance reveals an architectural assumption that may be limiting the field’s potential.
Read More
Erlang emerged in the late 1980s at Ericsson, during an epoch when distributed systems were in their infancy and reliability was becoming a critical concern in telecommunications. Born out of the practical need to build telephone exchanges that could achieve the mythical "nine nines" (99.9999999%) of uptime, Erlang introduced a paradigm shift in how we approach concurrency and fault tolerance.

A Pioneer in Reliable Distributed Computing

In an era dominated by object-oriented programming and shared-state concurrency, Erlang boldly embraced functional programming with immutable data and the actor model.
Read More
Last year, we explored how F#’s type system could transform threshold signature security through FROST. Today, we’re tackling an even more challenging problem: the conspicuous absence of end-to-end encryption in group messaging. While Signal has admirably protected one-to-one conversations for years, their group chat implementation remains a study in compromise. Telegram simply gave up, offering no end-to-end encryption for groups at all. The reasons aren’t mysterious. Group encryption faces fundamental mathematical challenges that individual encryption elegantly sidesteps.
Read More
In the world of distributed systems, trust is fundamentally a mathematical problem. For decades, organizations have relied on single points of failure: a master key, a root certificate, a privileged administrator. But what if we told you that the mathematics of secure multi-party computation, pioneered by Adi Shamir in 1979 and refined through Schnorr signatures, has reached a point where distributed trust is not just theoretically possible, but practically superior to centralized approaches?
Read More
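The Shamir construction referenced above has a compact functional shape: encode the secret as the constant term of a random polynomial over a prime field, hand out point evaluations as shares, and recover the secret by Lagrange interpolation at zero. The following is a toy sketch only; the names (`prime`, `split`, `reconstruct`) are illustrative, the prime is far too small for real use, and production code needs cryptographically random coefficients and constant-time arithmetic.

```fsharp
// Toy Shamir secret sharing over Z_p with p = 2^31 - 1 (fits int64 math).
let prime = 2147483647L

// Evaluate secret + c1*x + c2*x^2 + ... mod p via Horner's rule.
let evalPoly (coeffs: int64 list) (x: int64) =
    List.foldBack (fun c acc -> (acc * x + c) % prime) coeffs 0L

// Issue shares (x, f(x)); any (length coeffs + 1) of them rebuild the secret.
let split secret (coeffs: int64 list) shareCount =
    [ for x in 1L .. int64 shareCount -> x, evalPoly (secret :: coeffs) x ]

// Modular inverse by Fermat's little theorem: a^(p-2) mod p.
let modInv a =
    let rec pow b e acc =
        if e = 0L then acc
        elif e % 2L = 1L then pow (b * b % prime) (e / 2L) (acc * b % prime)
        else pow (b * b % prime) (e / 2L) acc
    pow ((a % prime + prime) % prime) (prime - 2L) 1L

// Lagrange interpolation at x = 0 recovers the constant term (the secret).
let reconstruct (shares: (int64 * int64) list) =
    shares
    |> List.sumBy (fun (xi, yi) ->
        let weight =
            shares
            |> List.filter (fun (xj, _) -> xj <> xi)
            |> List.fold (fun acc (xj, _) ->
                acc * xj % prime * modInv ((xj - xi) % prime) % prime) 1L
        yi * weight % prime)
    |> fun s -> ((s % prime) + prime) % prime

let shares = split 1234L [ 166L; 94L ] 5   // degree-2 polynomial: 3-of-5
reconstruct (shares |> List.take 3)         // = 1234L
```

Any three of the five shares suffice, while two reveal nothing about the secret; threshold schemes like FROST build their signing protocols on exactly this property.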