Blog posts discussing the technical implementation detail "Dataflow"
AI’s Berlin Wall
In our exploration of neuromorphic computing, we examined how specialized hardware might finally deliver on AI’s efficiency promises. But hardware alone cannot solve AI’s most fundamental limitation: the artificial wall between how systems learn and how they operate. 🔄 Updated October 22, 2025: this article now includes cross-references to related blog entries, connecting the broader concepts presented here to detailed technical explorations elsewhere. These inline links serve as entry points for readers seeking deeper dives into various topics, while this entry illuminates our broader vision.
Read More
The actor model isn’t new. Carl Hewitt introduced it at MIT in 1973, the same year that Ethernet was invented. For fifty years, this elegant model of computation, where independent actors maintain state and communicate through messages, has powered everything from Erlang’s telecom switches to WhatsApp’s billions of messages. But until now it has required specialized runtimes, complex deployment, or significant infrastructure overhead. Today’s “AI agents” are essentially rediscovering what distributed systems engineers have known for decades: isolated, message-passing actors are the natural way to build resilient, scalable systems.
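The pattern is small enough to sketch directly. Below is a minimal, illustrative actor in plain Python - a thread draining a queue that serves as the mailbox. The `Actor` and `Counter` names are ours for illustration, not any particular framework’s API:

```python
import threading
import queue

class Actor:
    """Minimal actor: private state plus a mailbox drained by one thread."""
    def __init__(self):
        self._mailbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message):
        # The only way to interact with an actor: enqueue a message.
        self._mailbox.put(message)

    def join(self):
        # Block until every message sent so far has been processed.
        self._mailbox.join()

    def _run(self):
        while True:
            message = self._mailbox.get()
            self.receive(message)
            self._mailbox.task_done()

    def receive(self, message):
        raise NotImplementedError

class Counter(Actor):
    def __init__(self):
        self.count = 0  # private state, touched only by the actor's own thread
        super().__init__()

    def receive(self, message):
        if message == "increment":
            self.count += 1

counter = Counter()
for _ in range(1000):
    counter.send("increment")
counter.join()
print(counter.count)  # 1000, with no locks anywhere
```

Because only the actor’s own thread ever touches its state, there is no shared-memory synchronization to get wrong - the same property Erlang and today’s agent frameworks rely on at much larger scale.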
Read More
The technology industry has developed an unfortunate habit of wrapping straightforward engineering advances in mystical language. When websites boast of “blurring the lines between P and NP,” they’re usually describing something far more mundane: solving practical problems more efficiently. The mathematical complexity classes remain unchanged, and invoking them shouldn’t become a barrier to understanding the practicalities. This isn’t cheating or transcending mathematics - it’s recognizing that most real-world performance barriers come from architectural mismatches, not algorithmic limits.
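That claim is easy to demonstrate: two reductions with identical asymptotic cost can differ sharply in wall time purely because of memory layout. A small sketch, assuming numpy is available; the exact ratio is machine-dependent:

```python
import time
import numpy as np

def bench(label, array):
    start = time.perf_counter()
    total = array.sum()  # identical O(n) reduction in both runs
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed * 1e3:6.1f} ms  (sum={total:.2f})")

data = np.random.rand(64_000_000)  # ~512 MB of float64

contiguous = data[: len(data) // 2]  # 32M elements, unit stride
strided = data[::2]                  # 32M elements, stride-2 view of the same buffer

# Same element count, same algorithm, same asymptotic complexity.
# The strided traversal typically runs measurably slower because each
# cache line fetched carries useful data in only half of its bytes -
# an architectural effect, not an algorithmic one.
bench("contiguous", contiguous)
bench("strided   ", strided)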
Read More
In a recent YouTube interview, Tri Dao, architect of Flash Attention and contributor to Mamba, delivered an insight worth exploring here: “If you’re a startup you have to make a bet … you have to make an outsized bet” [8:29]. This admission, made in the flow of a discussion about AI, reveals the fundamental tension in today’s technology infrastructure landscape. Most of the industry has converged on a single big bet: pouring the majority of its resources into GPU-centric architectures, with matrix multiplication as the foundation of intelligence, even as the limits of Moore’s Law and the laws of thermodynamics loom large.
Read More
The industry is witnessing an unprecedented $4 billion investment to finally set aside the 80-year-old Harvard/von Neumann computer design pattern. Companies like NextSilicon, Groq, and Tenstorrent are building novel alternative architectures that eliminate the traditional bottlenecks between memory and program execution. Yet compiler architectures remain trapped in antiquated patterns - forcing inherently parallel programs into artificial sequential constructions, obscuring their natural alignment with the emerging dominance of dataflow patterns. What if the key to targeting both traditional and revolutionary architectures lies not in choosing sides, but in recognizing that programs are “hypergraphs” by nature?
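One way to make the hypergraph framing concrete: model each value as a hyperedge linking its single producer to every consumer, so the same representation can be linearized for a von Neumann target or handed to dataflow hardware as-is. The toy IR below is a sketch under that assumption, not any shipping compiler’s design:

```python
from dataclasses import dataclass, field

@dataclass
class Value:
    """A hyperedge: produced once, consumed by arbitrarily many ops."""
    name: str
    producer: "Op | None" = None
    consumers: list["Op"] = field(default_factory=list)

@dataclass
class Op:
    """A node in the program hypergraph."""
    kind: str
    inputs: list[Value]

    def __post_init__(self):
        self.output = Value(f"%{self.kind}", producer=self)
        for value in self.inputs:
            value.consumers.append(self)

# Build (a + b) * (a + b): the value of a + b fans out through a single
# hyperedge to both operands of the multiply; no instruction order is implied.
a, b = Value("a"), Value("b")
add = Op("add", [a, b])
mul = Op("mul", [add.output, add.output])

# A sequential (von Neumann) backend must linearize this graph into an
# instruction stream; a dataflow backend can fire each node as soon as
# its input hyperedges carry values.
print([op.kind for op in add.output.consumers])  # ['mul', 'mul']
```

Nothing in the structure privileges either target: sequential order is something a backend imposes, not something the program carries.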
Read More