Code Mode: Why Letting LLMs Write TypeScript Beats Direct Tool Calls

This post explains a new way to connect LLM agents to tools using Code Mode.
It focuses on turning Model Context Protocol (MCP) tools into a TypeScript API that the model writes against, instead of exposing raw tools directly.
Why this matters to you:

  • It reduces token and orchestration overhead when agents chain multiple calls.
  • It leverages the large amount of real-world TypeScript LLMs have been trained on.
  • It preserves sandboxing and uniform authorization via MCP while improving reliability.

Imagine asking an agent to coordinate several services and return a single, accurate summary.
Do you want the model to switch context dozens of times and pass JSON blobs back and forth?
Or would you rather it write a small script that calls the right APIs and returns only the result you need?
Cloudflare’s new Code Mode follows the second path.

What Code Mode changes

Traditional MCP exposes tools as RPC-like calls that the LLM must invoke through special tool-calling tokens.
Those tokens are synthetic and rarely appear in the real-world text models see during training.
Models struggle when too many tools, or tools with complex schemas, are presented this way.
Code Mode converts an MCP schema into a typed TypeScript API with doc comments.
The agent then writes and runs code that calls that API directly.
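
As a rough illustration, here is how an MCP tool schema might surface to the model as a typed interface. This is a hedged sketch: the `WeatherApi` names and shapes are invented for the example, not Cloudflare's actual generated bindings.

```typescript
// Hypothetical sketch: an MCP tool schema surfaced as a typed TypeScript API
// that the model codes against. Names and shapes are illustrative only.

/** Result of the weather lookup tool, derived from the tool's JSON Schema. */
interface GetWeatherResult {
  temperatureC: number;
  conditions: string;
}

/** API surface generated from the MCP server's tool listing. */
interface WeatherApi {
  /**
   * Look up current weather for a city.
   * @param city - City name, e.g. "Berlin".
   */
  getWeather(city: string): Promise<GetWeatherResult>;
}

// The agent-written script simply receives `api` and calls it; the runtime
// translates each call back into an MCP tool invocation.
async function example(api: WeatherApi): Promise<string> {
  const weather = await api.getWeather("Berlin");
  return `It is ${weather.temperatureC}°C and ${weather.conditions}.`;
}
```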

Why TypeScript helps

LLMs have seen massive amounts of code, including TypeScript, in training corpora.
That familiarity translates into better code generation than tool-calling syntax generation.
When an agent can write code, it can compose multiple calls locally and return only the final result.
That saves tokens, reduces error propagation, and speeds up execution.
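
To make the composition point concrete, here is a sketch of the kind of script an agent might write. The `crm` and `billing` wrappers and their shapes are assumptions for the example, not a real SDK surface.

```typescript
// Illustrative only: `crm` and `billing` stand in for typed wrappers over
// MCP tools; the interfaces below are assumed for the sketch.

interface Customer { id: string; name: string; }
interface Invoice { id: string; amountDue: number; }

interface Crm {
  lookupCustomer(email: string): Promise<Customer>;
}
interface Billing {
  listInvoices(customerId: string): Promise<Invoice[]>;
}

// One agent-written script chains both calls locally. The intermediate JSON
// (the customer record, the full invoice list) never passes back through the
// model's context; only the short summary string does.
async function unpaidSummary(crm: Crm, billing: Billing, email: string): Promise<string> {
  const customer = await crm.lookupCustomer(email);
  const invoices = await billing.listInvoices(customer.id);
  const totalDue = invoices.reduce((sum, inv) => sum + inv.amountDue, 0);
  return `${customer.name} has ${invoices.length} open invoices totaling $${totalDue.toFixed(2)}.`;
}
```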

Practical implications

For engineers and product leaders, this means simpler integrations and more capable agents.
You get predictable authorization and schema discovery from MCP.
You also get the flexibility of full APIs without exposing your entire backend to the model.
Cloudflare has implemented this approach in its Agents SDK and documents the pattern in its announcement post: https://blog.cloudflare.com/code-mode/
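
One way to read the "full APIs without exposing your backend" point: the model-written code only ever sees a narrow, typed facade, and the host translates each facade call into an MCP tool invocation that the server still authorizes. A minimal sketch follows, with an assumed generic `callTool` shape rather than any particular SDK's client.

```typescript
// Minimal sketch of the binding layer. `McpClient` is an assumed generic
// shape, not a specific SDK's client; real implementations will differ.

interface McpClient {
  callTool(name: string, args: Record<string, unknown>): Promise<unknown>;
}

/** The only surface the sandboxed, model-written code can see. */
interface AgentFacade {
  searchDocuments(query: string): Promise<{ id: string; title: string }[]>;
  getDocument(id: string): Promise<string>;
  // Deliberately no update/delete/admin methods: they are simply not exposed.
}

// The host builds the facade over MCP; the server still authorizes every
// underlying tool call, so the facade narrows access rather than replacing auth.
function buildFacade(mcp: McpClient): AgentFacade {
  return {
    searchDocuments: async (query) =>
      (await mcp.callTool("documents.search", { query })) as { id: string; title: string }[],
    getDocument: async (id) =>
      (await mcp.callTool("documents.get", { id })) as string,
  };
}
```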

Short case study

I recently prototyped an assistant that books meetings and fetches documents.
In the tool-call pattern the model produced several intermediate calls and often mismatched parameters.
In Code Mode the same assistant generated a small TypeScript workflow that executed reliably the first time.
The result required fewer API round trips and simpler error handling.
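
For flavor, the generated workflow looked roughly like the sketch below. Names are simplified and `calendar` and `docs` are hypothetical typed wrappers, not the exact tools from the prototype.

```typescript
// Sketch in the spirit of the generated workflow; interfaces are assumed.

interface Slot { start: string; end: string; }
interface Calendar {
  findFreeSlot(participants: string[], durationMinutes: number): Promise<Slot>;
  bookMeeting(slot: Slot, title: string, participants: string[]): Promise<string>; // event id
}
interface Docs {
  fetchAgenda(topic: string): Promise<string>;
}

// Three tool calls composed in one script, with a single result returned.
async function scheduleReview(calendar: Calendar, docs: Docs): Promise<string> {
  const participants = ["alice@example.com", "bob@example.com"];
  const slot = await calendar.findFreeSlot(participants, 30);
  const agenda = await docs.fetchAgenda("Q3 design review");
  const eventId = await calendar.bookMeeting(slot, "Q3 design review", participants);
  return `Booked event ${eventId} at ${slot.start}; agenda preview: ${agenda.slice(0, 80)}...`;
}
```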

Where this goes next

Expect agents to become better at multi-step tasks as more SDKs adopt Code Mode.
We’ll see richer developer tooling, sandboxed runtimes, and finer-grained API browsing for agents.
Will this change how you design integrations?
Probably yes, if you value reliability and efficiency in agent-driven automation.

If you want to explore the implementation and examples, read Cloudflare’s detailed write-up at the link above.
This approach is a pragmatic step toward more capable, economical agent systems that fit enterprise constraints.
