Blog posts exploring the concept "Formal-Verification"
A Confession and a Vision
A personal note from the founder of SpeakEZ Technologies, Houston Haynes
I must admit something upfront: when I began designing the Fidelity framework in 2020, I was driven by practical engineering frustrations, particularly with AI development. The limitations of a managed runtime, the endless battle with numeric precision, machine learning framework quirks, constant bug chasing: these weren't just inconveniences, they felt like fundamental architectural flaws.
Every software engineering team knows the testing treadmill. Write code, write tests, run tests, fix failures, write more tests to catch what you missed, maintain those tests forever. We've accepted this cycle as the cost, in calendar time and staffing, that standard approaches to software quality demand. But what if this entire cycle represents a fundamental inefficiency: a workaround in the absence of something better? The Fidelity framework's proof-aware compilation offers a well-established alternative path: mathematical certainty at compile time, eliminating entire categories of tests while actually increasing safety.
Software verification has always forced a cruel choice: accept runtime overhead for safety checks, or trust that your optimizing compiler won't break critical invariants. Traditional compilers treat proofs as obstacles to optimization, while proof assistants have a reputation (earned or not) for generating code too conservative for production use. But what if verification and optimization weren't opposing forces but complementary dimensions of the same compilation process? The Fidelity Framework's hypergraph architecture makes this vision real by treating proof obligations as first-class hyperedges that guide, not hinder, aggressive optimization.
The Fidelity Framework faces a fascinating challenge: how do we identify opportunities for massive parallelism hidden within sequential-looking F# code? The answer lies in an elegant application of graph coloring to our Program Hypergraph (PHG), using bidirectional zippers to traverse and analyze control flow and data flow patterns. This approach, inspired by insights from Ramsey graph theory, enables automatic discovery of where async and continuation-based code can be transformed into interaction nets for parallel execution.
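To give a flavor of the idea (this is a toy sketch, not the Fidelity PHG implementation; all names are illustrative): coloring a dependency graph assigns each node a color such that no two adjacent nodes share one, so nodes of the same color have no dependency edge between them and could, in principle, execute in parallel.

```fsharp
// Greedy coloring of a dependency graph (adjacency as a Map).
// Nodes that end up with the same color are mutually independent
// and are candidates for parallel scheduling.
let greedyColor (adj: Map<int, int list>) : Map<int, int> =
    (Map.empty, adj |> Map.toList |> List.map fst)
    ||> List.fold (fun colors node ->
        // Colors already taken by this node's neighbors.
        let used =
            adj.[node]
            |> List.choose (fun n -> Map.tryFind n colors)
            |> Set.ofList
        // Smallest color not used by any neighbor.
        let c =
            Seq.initInfinite id
            |> Seq.find (fun c -> not (Set.contains c used))
        Map.add node c colors)

// Example: node 0 depends on 1 and 2, which are independent of each other.
let deps = Map.ofList [ 0, [1; 2]; 1, [0]; 2, [0] ]
let coloring = greedyColor deps
// Nodes 1 and 2 receive the same color: they may run in parallel.
```

The real analysis operates over hyperedges and zipper traversals rather than a flat adjacency map, but the coloring intuition is the same.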
The cybersecurity landscape has shifted dramatically in recent years, with memory safety vulnerabilities accounting for approximately 70% of critical security issues in systems software. This reality has prompted governments and industries to mandate transitions to memory-safe languages for critical infrastructure. Yet the economics of wholesale rewrites are daunting: decades of refined C and C++ code represent trillions of dollars in intellectual property and domain expertise. What if, instead of rewriting everything, we could wrap existing code in provably safe interfaces?
Creating software with strong correctness guarantees has traditionally forced developers to choose between practical languages and formal verification. The Fidelity Framework addresses this challenge through a groundbreaking integration of F# code, F* proofs, and MLIR's semantic dialects. This essay explores how the Fidelity Framework builds upon the semantic verification foundations introduced in "First-Class Verification Dialects for MLIR" (Fehr et al., 2025) to create a unique pipeline that preserves formal verification from source code to optimized binary.
At SpeakEZ, we are working on transformative approaches to transfer learning that combine convolutional neural networks (CNNs) with Topological Object Classification (TopOC) methods. This memo outlines our design approach to creating dimensionally-constrained models that maintain representational integrity throughout the transfer learning process while enabling deployment to resource-constrained hardware through our Fidelity Framework compilation pipeline. By leveraging F#'s Units of Measure (UMX) system to enforce dimensional constraints across the entire model architecture, we achieve not only safer and more reliable models but also significantly more efficient computational patterns that can be directly compiled to FPGAs and custom ASICs.
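As a minimal illustration of the general technique (a hypothetical sketch, not SpeakEZ's actual model code; the measure names and `Vec` type are invented for this example): F# units of measure can tag a vector's logical axis, so that passing a tensor along the wrong dimension becomes a compile-time type error rather than a runtime shape mismatch.

```fsharp
// Hypothetical axis tags: input pixels vs. output classes.
[<Measure>] type px
[<Measure>] type cls

// A vector whose logical axis is carried in a phantom measure parameter.
type Vec<[<Measure>] 'axis> = { data: float[] }

// softmax is only defined over the class axis.
let softmax (v: Vec<cls>) : Vec<cls> =
    let m = Array.max v.data
    let e = v.data |> Array.map (fun x -> exp (x - m))
    let s = Array.sum e
    { data = e |> Array.map (fun x -> x / s) }

// let wrong = softmax ({ data = [| 0.1; 0.9 |] } : Vec<px>)
// ^ would not compile: Vec<px> is not Vec<cls>.
```

Because the measure is erased at compile time, the check carries zero runtime cost, which is what makes this style of constraint compatible with compilation to FPGAs and ASICs.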
In the coming waves of "AI" innovation, the computing landscape will continue to fragment into an increasingly divergent array of hardware choices. From embedded microcontrollers to mobile devices, workstations, and accelerated compute clusters, developers will face a challenging decision: build with distinctly different "stacks" for each target or accept the deep compromises of existing cross-platform frameworks. Meanwhile, Python continues its paradoxical ascent, simultaneously becoming the lingua franca of modern computing while quietly imposing an unsustainable tax on engineering resources.