OpenAI SDK Meets MCP (Model Context Protocol)

· 6 min read
Adrian Escutia
La Rebelion Founder

An evolution of tool calling with MCP, thanks to OpenAI's latest SDK updates.

Step-by-step, I'll guide you through setting up an MCP server, integrating it with the OpenAI SDK, and running a complete example that showcases dynamic tool calling. By the end of this post, you'll be equipped to leverage MCP in your own OpenAI-powered applications.

We'll cover an end-to-end example: setting up an MCP server, integrating with an OpenAI LLM, and running some tests to see it in action.

MCP, or Model Context Protocol, is a framework that enhances how AI models interact with their operational context. By integrating MCP, you ensure your AI models are not only aware of the data they process but also the environment in which they operate. This leads to more accurate and contextually relevant outputs.

Why Integrate OpenAI with MCP?

OpenAI's powerful language models take tool calling to the next level when you leverage MCP. This integration enables dynamic interactions, letting you build applications that adapt to various scenarios and user needs. I explained how MCP tool calling works in a previous post. Now, I'll show you how OpenAI's implementation impressed me.

With a simple, clever setup, you create applications that understand user queries in natural language and fetch relevant tools from MCP servers. This is extremely powerful—it enables seamless, efficient interactions with AI models, making them more useful in real-world applications. Imagine an automated customer-support bot using MCP: it instantly retrieves necessary tools from MCP servers to resolve complex queries, such as billing discrepancies or technical troubleshooting, on the first call. This opens new possibilities for building intelligent applications that respond to user needs with greater precision and context-awareness.

Anthropic's Claude Desktop app (MCP Client) was groundbreaking almost a year ago. Early adopters were impressed, and now OpenAI seamlessly integrates these capabilities into their SDKs. As the Spanish saying goes, "Nadie sabe para quien trabaja" (Nobody knows for whom they work), but MCP is clearly shaping the future of AI applications. OpenAI is making significant strides by implementing Anthropic's legacy directly in the LLM.

You can build smarter, more adaptive applications by letting AI models fetch and use tools from MCP servers, reducing manual integration and boosting context-awareness.

note

OpenAI's MCP integration is currently in beta, so expect some rough edges. Still, the potential is immense, and I'm excited to see how it evolves.

Three Approaches to Tool Calling

You have three main ways to implement tool calling:

| Approach | Description | Key Strength |
| --- | --- | --- |
| MCP Clients | Intermediary applications bridging models and external tool servers. | Dynamic retrieval and adaptation |
| Direct Tool Calling | LLMs with built-in function calling via structured JSON. | Precise programmatic control |
| Native MCP Integration | OpenAI's latest approach combining both worlds. | Automatic discovery and zero-config operation |

When choosing, remember: Direct Tool Calling is easy to set up but requires manual tool management. MCP Clients offer dynamic adaptability, fetching tools as needed and calling them for the user. LLMs with Native MCP Support provide the most seamless experience, automatically integrating and using tools from MCP Servers, though you may need some upfront knowledge about OpenAI's system and SDKs.

Summary:
Select the approach that best fits your workflow—manual, dynamic, or fully automated tool integration.
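To make the trade-off concrete, here is a minimal sketch contrasting the two tool definitions you'd hand to the Responses API. The `get_menu` function schema is a hypothetical example I made up; the MCP entry mirrors the one used later in this post. With direct tool calling you author and maintain every schema yourself, while the MCP entry is just a pointer to a server that advertises its own tools.

```javascript
// Direct tool calling: you write and maintain each tool's JSON schema yourself.
// "get_menu" is a hypothetical tool name for illustration.
const directTool = {
  type: "function",
  name: "get_menu",
  description: "Return the current taco menu",
  parameters: {
    type: "object",
    properties: { category: { type: "string" } },
    required: [],
  },
};

// Native MCP integration: you point the model at a server and it discovers
// the available tools on its own — no schemas to maintain on your side.
const mcpTool = {
  type: "mcp",
  server_label: "tacosmcp",
  server_url: "https://tacostore.run.mcp.com.ai",
  require_approval: "never",
};

console.log(directTool.name, mcpTool.server_label);
```

Notice that the MCP entry carries no schema at all: when the server adds or changes a tool, your application code doesn't change.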

note

OpenAI's MCP integration may not call the tools from the model directly; my guess is that it uses an internal MCP Client to fetch and call them, similar to how QBot works.

End-to-End Example

Quick check:

```shell
curl -X POST "https://api.openai.com/v1/responses" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4.1",
    "tools": [
      {
        "type": "mcp",
        "server_label": "tacosmcp",
        "server_url": "https://tacostore.run.mcp.com.ai",
        "require_approval": "never"
      }
    ],
    "input": "What tacos do you have in the menu?"
  }'
```

Let's walk through the process: set up an MCP server, integrate the OpenAI SDK, write the code, run it, and see the results. These steps help you use MCP to enhance your applications.

Summary:
Follow these steps to quickly connect OpenAI models to MCP and unlock dynamic tool calling.

1. Set Up an MCP Server

First, set up an MCP server hosting the tools you want to use. You can run your own MCP server or use an existing one. For this example, use the HAPI MCP Server—a simple way to get started. Find the HAPI MCP Server docs here.

Open a terminal, ensure Bun is installed, and run:

```shell
# Start a HAPI MCP Server
bun dev serve tacos --headless
```

This starts a local MCP server hosting a tacos online store at http://localhost:3000. I asked Postman Agent Builder to create a simple tacos store API, which I used for the MCP server.

2. Create an OpenAI Account

If you don't have one, sign up for an OpenAI account and get your API key at the OpenAI platform.

3. Install the OpenAI SDK

Use the OpenAI SDK in your preferred language. For this example, use JavaScript with Bun. In another terminal, run:

```shell
# Initialize a new Bun project
bun init -y -m
# Install the OpenAI SDK
bun add openai
```

4. Write the Code

Create a new file, for example, index.js, and add:

```javascript
import OpenAI from "openai"

const client = new OpenAI({
  apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted
})

const resp = await client.responses.create({
  model: "gpt-5",
  tools: [
    {
      type: "mcp",
      server_label: "tacosmcp",
      server_description: "The Tacos MCP Client",
      server_url: "https://tacostore.run.mcp.com.ai",
      require_approval: "never",
    },
  ],
  input: "What tacos do you have in the menu?",
})

console.log(resp.output_text)
```

5. Run the Code

Set your OpenAI API key in the OPENAI_API_KEY environment variable, then run:

```shell
bun run index.js
```

6. See the Magic

The model fetches tools from the MCP server and uses them to answer your query. You should see a response listing the tacos available on the menu. To reinforce your learning, validate the returned taco list against the API response. This simple verification ensures tool execution works correctly and builds confidence in using MCP.
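One lightweight way to do that validation is to check which menu items from the API actually appear in the model's reply. The sketch below uses sample data standing in for the real API and model outputs, since the exact response shapes of the tacos store API are an assumption here:

```javascript
// Minimal validation sketch: which menu items did the model actually mention?
// menuFromApi stands in for the list returned by the tacos store API, and
// modelAnswer for resp.output_text — both are sample data for illustration.
function extractMentionedTacos(modelText, menuNames) {
  const text = modelText.toLowerCase();
  // Keep only the menu names that appear in the model's reply.
  return menuNames.filter((name) => text.includes(name.toLowerCase()));
}

const menuFromApi = ["Al Pastor", "Carnitas", "Barbacoa"];
const modelAnswer = "We offer Al Pastor, Carnitas, and Barbacoa tacos today.";

const mentioned = extractMentionedTacos(modelAnswer, menuFromApi);
console.log(mentioned); // → [ 'Al Pastor', 'Carnitas', 'Barbacoa' ]
```

If the two lists diverge, either the tool call failed or the model hallucinated an item — a cheap sanity check while you're getting started.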

7. Explore Further

Modify the input query to ask different questions or explore other tools on the MCP server. The possibilities are endless!
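As a sketch of one direction to explore: the MCP tool entry accepts an `allowed_tools` list to restrict which server tools the model may call. The tool name `list_tacos` below is an assumption — substitute whatever your MCP server actually exposes.

```javascript
// Sketch: the same request, with a different question and the server's tools
// narrowed via allowed_tools. "list_tacos" is a hypothetical tool name —
// replace it with a tool your MCP server actually advertises.
const request = {
  model: "gpt-5",
  tools: [
    {
      type: "mcp",
      server_label: "tacosmcp",
      server_url: "https://tacostore.run.mcp.com.ai",
      allowed_tools: ["list_tacos"], // restrict which MCP tools the model may call
      require_approval: "never",
    },
  ],
  input: "Which tacos are vegetarian?",
};

console.log(request.tools[0].allowed_tools);
```

Passing this object to `client.responses.create(...)` works exactly like the earlier example; only the tool surface and the question change.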

Summary:
You can set up, connect, and validate MCP-powered tool calling in minutes, then experiment to discover more capabilities.

Demo Video

Conclusion

Integrating OpenAI with MCP unlocks a world of possibilities for building intelligent applications that adapt to various scenarios and user needs. By leveraging MCP, you create dynamic, context-aware interactions with AI models, leading to more accurate and relevant outputs.

Summary:
Use MCP to make your AI applications smarter, more flexible, and better suited to real-world challenges.
