Cloudflare Agent Cloud Now Runs on OpenAI GPT-5 and Codex

Enterprises have been promised “AI agents that actually work” for about two years now. Most of what’s shipped has been demos, wrappers, and a lot of wishful thinking. So when Cloudflare announced it’s bringing OpenAI’s GPT-5 and Codex directly into its Agent Cloud platform, it’s worth paying attention — because this one has real infrastructure behind it, not just a press release. The official announcement from OpenAI dropped on April 13, 2026, and it signals something meaningful: the race to own enterprise agentic infrastructure is heating up fast.

How We Got Here: The Push Toward Enterprise-Grade AI Agents

Cloudflare has spent the last three years quietly becoming one of the most important pieces of internet infrastructure most people never think about. It started with DDoS protection and CDN services, then moved into serverless computing with Cloudflare Workers, and more recently into AI inference with its Workers AI platform.

The Agent Cloud launch isn’t a sudden pivot — it’s the logical next step. Cloudflare already has edge nodes in over 300 cities worldwide, which means latency advantages that centralized cloud providers like AWS or Azure simply can’t match for certain workloads. Layering agentic AI on top of that infrastructure makes strategic sense.

OpenAI’s side of this story is equally deliberate. GPT-4 proved the model. GPT-4o made it fast and multimodal. GPT-5 is where OpenAI is betting that reasoning quality finally crosses the threshold enterprises actually need — not just for answering questions, but for completing complex, multi-step tasks autonomously. Codex, OpenAI’s code-generation model, adds a second capability layer that’s particularly relevant for enterprises looking to automate developer workflows or integrate agents into software pipelines.

The timing also responds directly to competitive pressure. Google has been pushing its Gemini-powered agent tools through Google Cloud. Anthropic is deepening Claude’s enterprise integrations. Microsoft has Copilot Studio baked into the Azure stack. Cloudflare and OpenAI pairing up is essentially a third-party answer to those vertically integrated plays — one that lets enterprises avoid full lock-in to any single hyperscaler.

What Cloudflare Agent Cloud Actually Does

Let’s get specific, because “AI agent platform” can mean anything from a glorified chatbot to a genuinely autonomous workflow engine. Here’s what the Cloudflare Agent Cloud integration with OpenAI delivers:

  • GPT-5 inference at the edge: Enterprises can run GPT-5 powered agents directly on Cloudflare’s global edge network, reducing round-trip latency for agent reasoning steps compared to sending everything back to a centralized data center.
  • Codex integration for code-centric tasks: Agents built on Agent Cloud can use Codex to write, review, debug, and execute code as part of automated workflows — not just generate it for a human to copy-paste.
  • Durable Objects for stateful agents: Cloudflare’s existing Durable Objects technology gives agents persistent memory within a session or across sessions, solving one of the core problems with stateless LLM calls.
  • Workers AI orchestration layer: Agents can chain multiple model calls, tool uses, and external API calls through Cloudflare’s existing Workers runtime, which already handles trillions of requests per day.
  • Built-in security controls: Because it runs on Cloudflare’s network, enterprises get rate limiting, access controls, and data residency options baked in — not bolted on afterward.
  • Zero Trust integration: Agent Cloud connects with Cloudflare’s Zero Trust networking products, meaning enterprises can define exactly which internal systems an agent can reach and under what conditions.
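To make the first few bullets concrete, here is a minimal sketch of an agent loop with persistent memory. The real Agent Cloud API surface isn't shown in the announcement, so every name below is hypothetical: the `AgentMemory` class is an in-memory stand-in for a Durable Object, and `callModel` is a stub where a real agent would call GPT-5.

```typescript
// Illustrative sketch only — all names are hypothetical, not the Agent Cloud API.

type AgentState = { history: string[] };

// Stand-in for a Durable Object: keyed, per-agent persistent storage.
class AgentMemory {
  private store = new Map<string, AgentState>();
  load(agentId: string): AgentState {
    return this.store.get(agentId) ?? { history: [] };
  }
  save(agentId: string, state: AgentState): void {
    this.store.set(agentId, state);
  }
}

// Stand-in for a GPT-5 call; a real agent would hit the OpenAI API here.
function callModel(prompt: string, history: string[]): string {
  return `step-for:${prompt}:${history.length}`;
}

// One agent turn: load state, reason with the model, persist the result.
function runAgentTurn(memory: AgentMemory, agentId: string, task: string): string {
  const state = memory.load(agentId);
  const step = callModel(task, state.history);
  state.history.push(step);
  memory.save(agentId, state);
  return step;
}

const memory = new AgentMemory();
runAgentTurn(memory, "agent-1", "triage ticket");
runAgentTurn(memory, "agent-1", "draft reply");
// State survives across turns: the second call saw one prior step.
console.log(memory.load("agent-1").history.length); // 2
```

The point of the pattern is the load/save bracket around each reasoning step: it's what turns a stateless LLM call into an agent that can pick up where it left off.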

Who’s the Target Customer?

This isn’t aimed at a startup spinning up a side project. The target is a mid-to-large enterprise that already uses Cloudflare for security or networking, has developers comfortable with the Workers runtime, and wants to run agents that touch sensitive internal systems without routing everything through a third-party cloud that also happens to be a competitor.

Think financial services firms, healthcare networks, and logistics companies — the kinds of organizations where data sovereignty and audit trails aren’t optional. OpenAI has been building exactly this playbook in financial services, as we covered in our piece on OpenAI’s financial services AI strategy. Agent Cloud is where that strategy meets deployable infrastructure.

Pricing and Availability

Cloudflare hasn’t published standalone pricing for Agent Cloud — it layers on top of existing Workers and Workers AI pricing. GPT-5 access through the integration follows OpenAI’s API pricing, which for GPT-5 sits at approximately $15 per million input tokens and $60 per million output tokens at current enterprise rates. Codex API pricing is separate and usage-based. Enterprise contracts with either Cloudflare or OpenAI will likely include negotiated volume discounts for high-throughput agentic workloads.
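Those per-token rates make back-of-envelope budgeting straightforward. The sketch below uses the $15/$60 per-million figures quoted above; the workload numbers (tokens per task, tasks per day) are invented for illustration, and real enterprise contracts will differ once volume discounts are negotiated.

```typescript
// Rates quoted in the article: $15 / 1M input tokens, $60 / 1M output tokens.
const INPUT_USD_PER_M = 15;
const OUTPUT_USD_PER_M = 60;

function estimateCostUSD(inputTokens: number, outputTokens: number): number {
  return (inputTokens / 1e6) * INPUT_USD_PER_M
       + (outputTokens / 1e6) * OUTPUT_USD_PER_M;
}

// Hypothetical workload: an agent task with ~6 reasoning calls might consume
// roughly 30k input / 6k output tokens.
const perTask = estimateCostUSD(30_000, 6_000); // ≈ $0.81
const perDay = perTask * 10_000;                // ≈ $8,100 at 10k tasks/day
console.log(perTask.toFixed(2), perDay.toFixed(0));
```

At that scale, model cost dwarfs the Workers infrastructure cost underneath it, which is why the negotiated OpenAI rate is the number that matters most in these contracts.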

The integration is available now for enterprises with existing Cloudflare and OpenAI API access. There’s no waitlist announced as of the publication date.

What This Actually Means for the AI Agent Market

Here’s the thing about AI agents: the model quality matters, but it’s not the only thing that matters. Deployment infrastructure, security, observability, and cost at scale are often the real bottlenecks stopping enterprises from going beyond a pilot.

Cloudflare’s edge network solves the latency problem in a way that most pure-play AI companies can’t. When an agent needs to make five or six reasoning calls in sequence to complete a task, shaving 50ms off each call adds up to a quarter-second or more per task. That’s the difference between an agent that feels snappy and one that feels like it’s thinking too hard.

The security angle is arguably just as important. Right now, a lot of enterprise AI agent deployments involve awkward data flows — internal data leaves the corporate network, goes to an AI provider, comes back as output. Cloudflare’s Zero Trust integration and the ability to define precise access policies for agents changes that calculus. An agent can be authorized to read from a specific internal database, write to a specific ticketing system, and nothing else. That’s the kind of control legal and compliance teams need before they’ll sign off on autonomous agents touching production systems.

I wouldn’t be surprised if this deal accelerates similar announcements from Cloudflare’s competitors. Fastly, Akamai, and the hyperscaler CDN arms of AWS and Google Cloud all have reason to respond. The question is whether they move fast enough or get outpaced by Cloudflare’s head start in the serverless AI inference space.

Where Competitors Stand

Google’s Gemini agents on Google Cloud benefit from deep integration with Workspace and BigQuery, which is a genuine advantage for Google-heavy enterprises. But Google Cloud’s edge network isn’t as globally distributed for low-latency inference as Cloudflare’s purpose-built edge. Microsoft’s Copilot Studio is deeply embedded in the Azure and Microsoft 365 stack, which is powerful for Microsoft shops but creates a walled garden. Anthropic’s Claude doesn’t yet have a comparable edge deployment partnership at this scale. Meta’s Llama models offer a self-hosted option for enterprises that want full control, but that brings its own operational overhead.

None of those options give you Cloudflare’s network with GPT-5’s reasoning capability in a single integrated product. That’s the gap this fills. Whether it fills it well enough to capture meaningful market share depends heavily on how the developer experience shakes out in practice — documentation, debugging tools, and pricing transparency will matter enormously.

For a look at how enterprise teams are already applying OpenAI’s models in production workflows, the full breakdown of OpenAI’s application areas gives useful context for where agents fit into a broader deployment strategy.

Key Takeaways for Enterprise Teams

  • Cloudflare Agent Cloud now supports GPT-5 and Codex, enabling production-grade AI agents with real infrastructure backing.
  • Edge deployment means lower latency for multi-step agentic reasoning — a meaningful difference for complex, real-time workflows.
  • Zero Trust integration gives security and compliance teams the access controls they need before approving autonomous agents in production.
  • This is available now for enterprises with Cloudflare and OpenAI API access — no waitlist required.
  • Pricing follows existing OpenAI API rates plus Cloudflare’s Workers infrastructure costs — enterprise contracts will negotiate from there.
  • The competitive impact is real: Google, Microsoft, and Anthropic all have reason to accelerate their own edge-AI partnerships in response.

Enterprises already building with OpenAI’s APIs should also look at how AI models are being used to turn raw data into operational decisions — because that’s exactly the kind of workflow Agent Cloud is designed to run autonomously at scale.

Frequently Asked Questions

What is Cloudflare Agent Cloud?

Cloudflare Agent Cloud is an enterprise platform for building, deploying, and scaling AI agents using Cloudflare’s global edge network. With the new OpenAI integration, it supports GPT-5 and Codex as the underlying AI models powering those agents. It’s built on top of Cloudflare Workers and Durable Objects, giving enterprises familiar infrastructure with added AI capabilities.

How does GPT-5 differ from GPT-4 for agentic use cases?

GPT-5 offers significantly improved multi-step reasoning, better instruction-following in long contexts, and lower error rates on complex tasks — all of which matter when an agent is executing a workflow autonomously rather than just answering a single question. For agentic use, the reduction in reasoning errors is the most practically important improvement over the GPT-4 family, including GPT-4o.

Is this secure enough for regulated industries?

Cloudflare’s Zero Trust integration and built-in access controls make it more viable for regulated industries than many alternatives. Agents can be scoped to specific data sources and systems, and Cloudflare’s network offers data residency options for enterprises with geographic compliance requirements. That said, individual compliance sign-off will depend on each organization’s specific regulatory environment.

How does this compare to building AI agents on AWS or Azure?

AWS and Azure offer AI agent tooling that’s deeply integrated with their respective cloud services, which is advantageous if you’re already heavily invested in those ecosystems. Cloudflare Agent Cloud’s main advantage is edge distribution and the ability to avoid full hyperscaler lock-in while still accessing frontier models like GPT-5. For enterprises already using Cloudflare for networking and security, the integration overhead is also significantly lower.

The enterprise AI agent market is moving from proof-of-concept to production faster than most organizations are ready for. Cloudflare and OpenAI are betting that the enterprises who get there first will be the ones with the right infrastructure from the start — not just the best model. Given how often infrastructure wins in enterprise software, that’s not a bad bet to make.