Introducing Conclave

A Reliable Partner for Delivering Robust AI Systems

Conclave: A sequestered location for confidential deliberation and decision-making.

Organizations considering agentic AI systems face a multitude of challenges. The first hurdle is building and deploying agentic systems in a way the organization can effectively manage. There are myriad decision points: controlling which agents can communicate, preventing runaway costs from cascading tool calls, ensuring data stays within security boundaries, and maintaining responsive service at scale. Traditional architectures force a choice between expensive synchronous orchestration that kills performance and chaotic async patterns that sacrifice control.

%%{init: {'theme': 'neutral'}}%%
graph TD
    START([Implement AI Decision<br/>Support System]) --> ARCH{Architecture<br/>Pattern?}
    ARCH -->|Synchronous<br/>Orchestration| SYNC[High Control<br/>Poor Performance<br/>High Cost]
    ARCH -->|Async<br/>Workflow| ASYNC[Low Control<br/>Runaway Costs<br/>Security Risks]
    SYNC --> COST1{Accept Trade-offs?}
    ASYNC --> COST2{Accept Trade-offs?}
    COST1 -->|No| RETHINK[Reconsider<br/>Approach]
    COST2 -->|No| RETHINK
    COST1 -->|Yes| PROBLEMS1[Performance Bottlenecks<br/>Scaling Challenges]
    COST2 -->|Yes| PROBLEMS2[Security Vulnerabilities<br/>Cost Overruns]
    style START fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
    style SYNC fill:#ffebee,stroke:#c62828,stroke-width:2px
    style ASYNC fill:#ffebee,stroke:#c62828,stroke-width:2px
    style PROBLEMS1 fill:#ffcdd2,stroke:#c62828,stroke-width:2px
    style PROBLEMS2 fill:#ffcdd2,stroke:#c62828,stroke-width:2px
    style RETHINK fill:#fff3e0,stroke:#ef6c00,stroke-width:2px

A Cohesive Solution for a Fragmented Era

SpeakEZ addresses these challenges with a grounded, scalable platform: Conclave. It resolves this tension through a proven “tell-first” actor architecture providing millisecond-scale message delivery, capability-based security enforced at the platform level, and automatic scaling across Cloudflare’s 320+ edge locations. The result is AI orchestration that costs a fraction of traditional approaches while delivering orders-of-magnitude gains in speed and throughput over other services, all built with security as a first-class consideration.

And this isn’t a half-baked platform-of-the-week spun out by the marketing department of some random AI startup. This architectural pattern goes back decades and has been proven at global scale time and again. Don Syme’s early decision to include the MailboxProcessor as a built-in F# primitive brought Erlang’s actor model into the type-safe world of functional programming. As explored in our analysis of this unexpected fusion, F# uniquely combines OCaml’s type system rigor with Erlang’s message-passing concurrency for a robust, cohesive system that has no peer in technology circles.
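The MailboxProcessor already captures the tell/ask distinction at the heart of this model. A minimal, self-contained sketch using only the F# standard library:

module MailboxSketch

// A fire-and-forget "tell" actor using F#'s built-in MailboxProcessor.
// Post is non-blocking: the sender continues immediately while the agent
// processes queued messages sequentially.
type CounterMessage =
    | Increment of int
    | Report of AsyncReplyChannel<int>

let counter =
    MailboxProcessor.Start(fun inbox ->
        let rec loop total = async {
            let! msg = inbox.Receive()
            match msg with
            | Increment n -> return! loop (total + n)
            | Report reply ->
                reply.Reply total
                return! loop total
        }
        loop 0)

// "Tell": returns immediately, no response expected
counter.Post(Increment 5)
counter.Post(Increment 3)

// "Ask": blocks only when a reply is genuinely needed
let total = counter.PostAndReply(Report)  // 8

Conclave extends this same shape across a global network rather than a single process.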

Coupled with Cloudflare’s recent Worker Loader API, this gives us the infrastructure to dynamically deploy these actor patterns at global scale. It enables systems where actors can spawn actors with controlled capabilities, communicate through direct bindings, and adapt their topology at runtime. Our Conclave platform implements these key services as a managed service built on our CloudflareFS bindings, leveraging established software engineering principles with industry-leading edge computing infrastructure.

%%{init: {'theme': 'neutral'}}%%
graph TD
    CONCLAVE[Conclave Platform<br/>Actor Patterns & Agentic AI]
    CFS[CloudflareFS<br/>F# Bindings & Type Safety]
    CF[Cloudflare Services]
    CONCLAVE --> CFS
    CFS --> CF
    CF --> W[Workers API]
    CF --> DO[Durable Objects]
    CF --> SB[Platform Bindings]
    CF --> WL[LLM Services]
    style CONCLAVE fill:#e3f2fd,stroke:#1976d2,stroke-width:2px
    style CFS fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px
    style CF fill:#fff3e0,stroke:#ef6c00,stroke-width:2px

This convergence of actor patterns, functional programming, and edge computing enables agentic AI systems that naturally map to controlled, well-managed distributed architectures, as detailed in our vision for lean, smart AI cloud systems.

The layered architecture shows how Conclave builds on proven foundations. At the base, Cloudflare’s global edge compute network provides Workers, Durable Objects, and component bindings as primitive building blocks. CloudflareFS wraps these services with F#’s type safety and functional programming patterns, eliminating entire classes of runtime errors. Conclave then layers actor model abstractions and agentic AI capabilities on top, creating a complete solution backplane where intelligent systems can spawn, communicate, and evolve with enforced security boundaries.

Worker Loader: Dynamic Actors 🔑

Kenton Varda’s article about Dynamic Worker Loaders described their pivotal mechanism for dynamically spawning Workers with custom bindings. The implementation enables actor-spawning semantics with capability-based security on a global edge compute network:

module Conclave.Core

// Actors spawn actors with specific capabilities
type ActorSpawner() =
    inherit DurableObject()
    
    member this.spawnWithCapabilities(spec: ActorSpec) = async {
        // Props become unforgeable capability tokens
        let capabilities = {|
            canTell = spec.authorizedTargets
            canQuery = spec.queryPermissions  
            canSpawn = spec.spawnRights
        |}
        
        // Direct bindings replace service discovery
        let directBindings = 
            spec.authorizedTargets
            |> List.map (fun target -> 
                target.id, this.env.service(target.id))
            |> Map.ofList
        
        // Spawned actor receives these exact capabilities
        let! actor = this.env.loader.loadWorker {
            script = spec.implementation
            bindings = {|
                ACTORS = directBindings  // Direct message paths
                SUPERVISOR = this.env.SELF
            |}
            props = capabilities  // Cannot be modified by child
        }
        
        return ActorRef(actor)
    }

This pattern provides actors with their communication topology as bindings at spawn time. Messages route directly between actors without HTTP overhead, service meshes, or discovery protocols. The TypeScript SDK documentation covers the basic API, but the actor model implications extend further: this enables runtime topology evolution with security boundaries enforced at the platform level.

Tell-First Architecture

The actor model’s “fire-and-forget” tell semantics assume asynchronous, one-way communication. Conclave implements this as the primary messaging pattern:

module Conclave.Messaging

// Tell operations complete immediately
type TellActor<'State, 'Message>() =
    inherit ConclaveActor<'State>()
    
    // Direct tell through projected bindings
    member this.tell(target: ActorId, message: 'Message) =
        match this.env.ACTORS.TryFind(target) with
        | Some binding ->
            // Direct worker-to-worker call
            binding.invoke(serialize message)
            // Returns immediately
        | None ->
            // Actor lacks authorization
            this.logUnauthorized(target)
    
    // Parallel message distribution
    member this.tellMultiple(targets: ActorId list, message: 'Message) =
        targets 
        |> List.iter (fun target -> this.tell(target, message))
    
    // State-based conditional messaging
    member this.tellIf(predicate: 'State -> bool) 
                      (target: ActorId) 
                      (message: 'Message) =
        if predicate this.state then
            this.tell(target, message)

Traditional RPC patterns block on every interaction. Conclave uses tells for most communication, reserving ask patterns for specific requirements:

// Ask available when synchronous response required
type AskPattern() =
    member this.ask(target: ActorRef, query: Query) = async {
        let replyChannel = AsyncReplyChannel<Response>()
        target.tell(QueryMessage(query, replyChannel))
        
        // Blocks only when response is essential
        let! response = replyChannel.Receive(timeout = 5000)
        return response
    }
    
// Preferred: tell with continuation actor
type ContinuationPattern() =
    member this.tellWithContinuation(target: ActorRef, message: Message) =
        let continuationActor = this.spawnEphemeral(fun response ->
            this.handleResponse response
        )
        target.tell(MessageWithReplyTo(message, continuationActor))
        // Control returns immediately

Direct Bindings Replace HTTP

The Worker Loader documentation describes service bindings. Conclave uses this mechanism to create direct actor-to-actor communication paths:

%%{init: {'theme': 'neutral'}}%%
graph TD
    subgraph "Conclave Direct Tell"
        C1[Actor A] -->|Direct Binding| C2[Actor B]
    end
    subgraph "Traditional Services"
        A1[Actor A] -->|HTTP Request| LB[Load Balancer]
        LB --> SM[Service Mesh]
        SM --> SD[Service Discovery]
        SD --> A2[Actor B]
    end
    style C1 fill:#e8f5e9,stroke:#2e7d32,stroke-width:3px
    style C2 fill:#e8f5e9,stroke:#2e7d32,stroke-width:3px
    style A1 fill:#ffebee,stroke:#c62828
    style LB fill:#ffebee,stroke:#c62828

Message delivery happens at memory speed when actors are co-located within V8 isolates, and at network speed when distributed. The platform handles routing without HTTP parsing, header processing, or protocol negotiation.

Supervision Without Servers

Traditional frameworks require servers or containers to run as supervisors. Conclave leverages Workers with Durable Objects to create serverless supervision hierarchies that maintain state across the global network:

module Conclave.Supervision

type SupervisorStrategy =
    | OneForOne  // Restart only the failed actor
    | AllForOne  // Restart all children if one fails
    | RestForOne // Restart failed actor and all spawned after it

type DistributedSupervisor() =
    inherit DurableObject()
    
    let mutable children = Map.empty<ActorId, ActorRef * ActorSpec>
    
    member this.supervise(strategy: SupervisorStrategy) = async {
        // Supervision loop runs without a server
        let! monitoring = this.state.storage.get<MonitoringState>("monitoring")
        
        match monitoring.detectFailure() with
        | Some(failedId, error) ->
            match strategy with
            | OneForOne ->
                // Restart just the failed actor with its original capabilities
                let (_, spec) = children.[failedId]
                let! newActor = this.spawnWithCapabilities(spec)
                children <- children.Add(failedId, (newActor, spec))
                
                // Tell dependent actors about the restart
                this.tellDependents(failedId, ActorRestarted)
                
            | AllForOne ->
                // Restart entire actor system
                for (id, (ref, spec)) in Map.toSeq children do
                    ref.tell(Shutdown)
                    let! newActor = this.spawnWithCapabilities(spec)
                    children <- children.Add(id, (newActor, spec))
                    
            | RestForOne ->
                // Restart the failed actor and every sibling spawned after it
                let ordered = children |> Map.toList
                let restartFrom =
                    ordered |> List.findIndex (fun (id, _) -> id = failedId)
                for (id, (ref, spec)) in ordered |> List.skip restartFrom do
                    ref.tell(Shutdown)
                    let! newActor = this.spawnWithCapabilities(spec)
                    children <- children.Add(id, (newActor, spec))
        | None -> ()
    }

This supervision model operates continuously without dedicated infrastructure. The platform handles durability, the supervisor handles logic, and the system maintains resilience from regional to global scale.

Agentic AI Systems as Actors

AI agents map directly to actors: they maintain state, process messages (prompts), and interact asynchronously. Conclave implements this correspondence:

module Conclave.Agents

type AgentCapability =
    | CanAccessModel of model: string
    | CanQueryKnowledge of domain: string  
    | CanCollaborateWith of agents: ActorId list
    | CanSpawnSubAgents of limit: int

type AIAgent(capabilities: Set<AgentCapability>) =
    inherit ConclaveActor<AgentState>()
    
    // Broadcast discoveries without waiting
    member this.shareInsight(insight: Insight) =
        this.collaborators 
        |> Set.iter (fun peer ->
            this.tell(peer, NewInsight insight)
        )
    
    // Spawn specialized agents for task decomposition
    member this.decomposeTask(task: ComplexTask) = async {
        let subtasks = this.analyzeTask(task)
        
        for subtask in subtasks do
            let! subAgent = this.spawn {
                implementation = this.selectImplementation(subtask)
                capabilities = this.projectCapabilities(subtask)
                authorizedTargets = [this.self; this.coordinator]
            }
            
            subAgent.tell(Execute subtask)
    }
    
    // Build collaboration networks through messages
    member this.formCollaborationNetwork(goal: Goal) =
        let relevantPeers = this.findPeersWithCapabilities(goal.requirements)
        
        relevantPeers |> List.iter (fun peer ->
            this.tell(peer, ProposeCollaboration(goal, this.self))
        )

Agents spawn specialized sub-agents with specific capabilities for their tasks. The Worker Loader’s props mechanism enforces these capability boundaries at the platform level. Systems evolve their communication topology through runtime message exchange rather than central coordination.
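The enforcement idea can be sketched in a few lines. The guard function below is hypothetical (not part of the platform API); AgentCapability and ActorId are the types from the example above:

// Hypothetical guard: a sub-agent may only message targets that appear
// in a CanCollaborateWith capability granted to it at spawn time.
let isAuthorized (capabilities: Set<AgentCapability>) (target: ActorId) =
    capabilities
    |> Set.exists (function
        | CanCollaborateWith agents -> List.contains target agents
        | _ -> false)

// A tell that consults the capability set before sending
let guardedTell capabilities target message (tell: ActorId -> 'Msg -> unit) =
    if isAuthorized capabilities target then
        tell target message
        Ok ()
    else
        Error (sprintf "No capability to message %A" target)

Because the capability set is projected through props at spawn time, a child actor cannot grant itself a wider set; the check above only ever narrows what was given.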

Runtime Evolution Through Capability Projection

The combination of Worker Loaders and actor projections enables systems that adapt their topology dynamically:

module Conclave.Evolution

type EvolvingSystem() =
    inherit SystemSupervisor()
    
    // System learns optimal actor topology over time
    member this.evolveTopology(performanceMetrics: Metrics) = async {
        let analysis = this.analyzeInteractionPatterns(performanceMetrics)
        
        match analysis with
        | HighLatencyBetween(actorA, actorB) ->
            // Spawn a mediator with bindings to both
            let! mediator = this.spawn {
                implementation = MediatorActor.implementation
                authorizedTargets = [actorA; actorB]
                capabilities = {| 
                    canCache = true
                    canTransform = true 
                |}
            }
            
            // Update A and B to talk through mediator
            actorA.tell(RouteThrough(actorB.id, mediator))
            actorB.tell(RouteThrough(actorA.id, mediator))
            
        | BottleneckAt(actor) ->
            // Spawn replicas with shared capability projection
            let! replicas = 
                [1..3] 
                |> List.map (fun i -> this.spawnReplica(actor))
                |> Async.Parallel
                
            // Tell router about new replicas
            this.router.tell(AddReplicas(actor.id, replicas))
            
        | UnusedCapability(actor, capability) ->
            // Revoke unused capabilities by respawning
            let newSpec = { actor.spec with 
                capabilities = actor.spec.capabilities.Remove(capability) }
            let! restricted = this.spawn(newSpec)
            
            // Tell dependents about the replacement
            this.tellDependents(actor.id, ReplacedWith restricted)
    }

This evolutionary capability, powered by the Worker Loader’s dynamic spawning and Cloudflare’s global data centers, enables systems that optimize themselves continuously, without tooling that slows down design and deployment.

Performance Through Principled Design

The tell-first architecture delivers measurable performance advantages:

%%{init: {'theme': 'neutral'}}%%
sequenceDiagram
    participant Client
    participant ActorA as Actor A
    participant ActorB as Actor B
    participant ActorC as Actor C
    Note over Client,ActorC: Traditional RPC (Sequential)
    Client->>ActorA: Request
    ActorA-->>Client: Response
    Client->>ActorB: Request
    ActorB-->>Client: Response
    Client->>ActorC: Request
    ActorC-->>Client: Response
    Note right of ActorC: Total: 3 round trips
    Note over Client,ActorC: Conclave Tell Pattern (Parallel)
    Client->>ActorA: Tell
    Client->>ActorB: Tell
    Client->>ActorC: Tell
    Note right of ActorC: Total: Fire and forget
    ActorA->>Client: Tell (when ready)
    ActorB->>Client: Tell (when ready)
    ActorC->>Client: Tell (when ready)
    Note right of ActorC: Responses arrive asynchronously
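The timing difference between the two patterns can be sketched with plain F# async, where simulatedCall is a stand-in for a 50 ms actor interaction:

module TimingSketch

// Simulate a remote interaction that takes ~50 ms
let simulatedCall () = async { do! Async.Sleep 50 }

// Ask-style: wait for each response before issuing the next request
let sequential = async {
    do! simulatedCall ()
    do! simulatedCall ()
    do! simulatedCall ()   // total ≈ 150 ms of wall-clock time
}

// Tell-style: fire all three at once; no response is awaited inline
let fanOut = async {
    do! [ simulatedCall (); simulatedCall (); simulatedCall () ]
        |> Async.Parallel
        |> Async.Ignore    // total ≈ 50 ms of wall-clock time
}

The gap widens with fan-out: the sequential version grows linearly with the number of actors contacted, while the tell version stays bounded by the slowest single interaction.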

Estimates from our lean AI systems entry show promise:

  • 10-100x reduction in coordination overhead
  • Single-millisecond message delivery times within regions
  • Near-zero memory overhead for message passing
  • Linear scalability with actor count

The Stack

Conclave represents a carefully layered architecture where each level provides distinct value:

module Conclave.Stack

// Layer 1: CloudflareFS - Raw platform bindings
module CloudflareFS =
    // Direct mappings to Cloudflare services
    let worker = Bindings.Worker
    let durableObject = Bindings.DurableObject
    let loader = Bindings.WorkerLoader
    
// Layer 2: Conclave Core - Actor abstractions
module ConclaveCore =
    // Actor model primitives
    type Actor<'State, 'Message> = 
        inherit CloudflareFS.DurableObject
        abstract member tell: 'Message -> unit
        abstract member spawn: ActorSpec -> Async<ActorRef>
    
// Layer 3: Conclave Patterns - Domain solutions
module ConclavePatterns =
    // Pre-built patterns for common scenarios
    type AgentOrchestrator = inherit Actor<OrchestratorState, AgentMessage>
    type EventSourcedActor = inherit Actor<EventLog, Event>
    type SagaCoordinator = inherit Actor<SagaState, SagaCommand>
    
// Layer 4: Domain Applications
module YourApplication =
    // Your business logic here
    type OrderProcessor = inherit ConclavePatterns.EventSourcedActor
    type FraudDetector = inherit ConclavePatterns.AgentOrchestrator

Each layer can be used independently, but together they provide a complete solution for building distributed intelligent systems.

Why This Matters Now

The convergence of several factors makes this the right moment for actor-based architectures:

  1. AI Needs Actors: Agentic systems are inherently actor-like. The impedance mismatch between synchronous APIs and asynchronous agents creates unnecessary complexity.

  2. Edge Computing Enables It: Cloudflare’s global network of data centers provides the infrastructure actors need without the operational overhead of traditional deployments.

  3. Worker Loader Is A Game-Changer: Dynamic serverless spawning with capability projection wasn’t broadly available until now. Kenton Varda’s innovation unlocks patterns that were previously confined to specialized frameworks.

  4. Tell Semantics Match Reality: Real-world systems are asynchronous. The tell-first approach aligns with how distributed systems actually behave, rather than forcing synchronous abstractions onto naturally asynchronous patterns.

Migration Without Disruption

While many of our projects are centered on building new capabilities, there are significant “shared edges” with existing technology that can serve as a bridge to and from these Conclave implementations. Organizations can adopt Conclave incrementally, starting with new services or wrapping existing ones as sources of knowledge for new “AI-enhanced” functionality:

module Conclave.Migration

// Wrap existing HTTP services as tell-first actors
type HttpServiceActor(serviceUrl: string) =
    inherit ConclaveActor<unit>()
    
    override this.handleTell msg = 
        // Convert tell to HTTP POST asynchronously
        fetch(serviceUrl, {| 
            method = "POST"
            body = serialize msg 
        }) |> ignore  // Fire and forget
        
// Bridge existing WebSocket connections
type WebSocketBridge(ws: WebSocket) =
    inherit ConclaveActor<BridgeState>()
    
    member this.initialize() =
        ws.addEventListener("message", fun e ->
            let msg = deserialize e.data
            // Convert WebSocket messages to tells
            this.tell(this.router, RouteMessage msg)
        )

This approach preserves existing investments while enabling gradual modernization toward an intelligent solution that can support your company in the new era of AI. There’s no need to over-invest: the solution adapts to your needs and allows you to make prudent, results-based assessments of the application’s return on value.

The Actor Model Meets Its Moment

While due credit goes to Don Syme for integrating Erlang mechanics into the F# language, Carl Hewitt’s 2010 reprint of his writings from 1973 reminds us of the inspirational thinking that described “universal modular actors that communicate by passing messages.” Today, Cloudflare’s Worker Loader API provides the infrastructure to implement this model at global scale. The timing aligns with the rise of agentic AI systems that require the asynchronous, fault-tolerant patterns actors provide.

Conclave implements actors on Cloudflare by recognizing that their new “Worker Loader” enables runtime actor spawning with critical “capability” projection. Actors establish direct communication channels through bindings. The system operates without bloated cloud infrastructure by leveraging Cloudflare’s Durable Objects for state persistence and supervision. It’s an elegant, scalable pattern tailor-made for this emerging era of intelligent systems.

For organizations considering agentic AI systems as part of their portfolio, this architecture eliminates the impedance mismatches between LLMs and traditional systems. Bloated container orchestration adds complexity without solving core problems, while unconstrained workflows create runaway costs and security vulnerabilities. SpeakEZ’s Conclave as a Managed Service offers agentic solutions tailored to your needs, with an adaptive topology that makes these systems shine. The Conclave managed service delivers direct, efficient compute while maintaining the flexibility businesses need to realize high-value solutions.

Sequester as a Service

Over the decades, the actor model has demonstrated its value from telecom to distributed systems, and even social applications such as WhatsApp. SpeakEZ Technologies designed Conclave as a secure agentic service delivery to bridge the gap between proven architectural principles and the developing landscape of modern AI orchestration. Cloudflare’s Worker Loader API now elevates these patterns and makes them practical at global scale without the operational overhead of traditional infrastructure, or the brittleness of workflow scripting experiments marketed as “agentic platforms”.

By combining our engineering expertise with Cloudflare’s edge infrastructure, organizations can partner with us to build truly robust agentic AI systems to serve their needs, delivering millisecond response times, ensuring security boundaries at the platform level, and scaling elastically across 320+ global locations. Our support for an incremental adoption path preserves your existing technology investments while enabling new capabilities where they provide the most value. Cloudflare has the infrastructure that makes this approach possible, and with SpeakEZ’s expertise and the Conclave managed service you can maximize the return on your next AI initiative.

Author: Houston Haynes
Date: November 9, 2025
Category: Architecture

We want to hear from you!

Contact Us