OpenAI SDK Meets MCP (Model Context Protocol)
MCP, or Model Context Protocol, is an open protocol that standardizes how AI models connect to external tools and data sources. By integrating MCP, you ensure your AI models are not only aware of the data they process but also of the environment in which they operate. This leads to more accurate and contextually relevant outputs.
Summary:
You will learn how OpenAI's MCP integration empowers models to deliver smarter, context-aware results by connecting them to their operational environment.
Why Integrate OpenAI with MCP?
OpenAI's powerful language models take tool calling to the next level when you leverage MCP. This integration enables dynamic interactions, letting you build applications that adapt to various scenarios and user needs. I explained how MCP tool calling works in a previous post. Now, I'll show you how OpenAI's implementation impressed me.
With a simple, clever setup, you create applications that understand user queries in natural language and fetch relevant tools from MCP servers. This is extremely powerful—it enables seamless, efficient interactions with AI models, making them more useful in real-world applications. Imagine an automated customer-support bot using MCP: it instantly retrieves necessary tools from MCP servers to resolve complex queries, such as billing discrepancies or technical troubleshooting, on the first call. This opens new possibilities for building intelligent applications that respond to user needs with greater precision and context-awareness.
Anthropic's Claude Desktop app (an MCP Client) was groundbreaking almost a year ago. Early adopters were impressed, and now OpenAI seamlessly integrates these capabilities into their SDKs. As the Spanish saying goes, "Nadie sabe para quién trabaja" (Nobody knows for whom they work), but MCP is clearly shaping the future of AI applications. OpenAI is making significant strides by building support for Anthropic's protocol directly into their models and SDKs.
You can build smarter, more adaptive applications by letting AI models fetch and use tools from MCP servers, reducing manual integration and boosting context-awareness.
OpenAI's MCP integration is currently in beta, so expect some rough edges. However, the potential is immense, and I'm excited to see how this evolves.
Three Approaches to Tool Calling
You have three main ways to implement tool calling:
- Direct Tool Calling: Use LLMs that support tool calling directly, such as Llama 3, GPT-4o, and others. You provide tools to the model manually (programmatically).
- MCP Clients: Use applications that fetch tools from MCP servers and provide them to LLMs. MCP Clients act as intermediaries, enabling dynamic tool retrieval based on user queries.
- LLMs with Native MCP Support: OpenAI now integrates native MCP support in their models, letting them fetch and use tools from MCP servers directly. Just plug in the MCP Server URL, and the model handles the rest.
When choosing, remember: Direct Tool Calling is easy to set up but requires manual tool management. MCP Clients offer dynamic adaptability, fetching tools as needed and calling them for the user. LLMs with Native MCP Support provide the most seamless experience, automatically integrating and using tools from MCP Servers, though you may need some upfront knowledge about OpenAI's system and SDKs.
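To make the contrast concrete, here is a minimal sketch of what Direct Tool Calling requires you to manage by hand: a tool schema you declare yourself and a dispatcher that executes the model's tool calls. The `get_menu` tool and its handler are illustrative names I made up for this example, not part of any real API; with native MCP support, this plumbing is what the model handles for you.

```javascript
// Direct Tool Calling: you declare each tool's schema manually
// and pass the array to the model alongside the user's input.
const tools = [
  {
    type: "function",
    name: "get_menu",
    description: "Return the list of tacos on the menu",
    parameters: { type: "object", properties: {}, additionalProperties: false },
  },
]

// You also implement each tool yourself, keyed by name.
const toolHandlers = {
  get_menu: () => ["al pastor", "carnitas", "barbacoa"],
}

// When the model emits a tool call, you look up the handler and run it,
// then feed the result back to the model in a follow-up request.
function executeToolCall(name, args) {
  const handler = toolHandlers[name]
  if (!handler) throw new Error(`Unknown tool: ${name}`)
  return handler(args)
}

console.log(executeToolCall("get_menu"))
```

Every new tool means another schema, another handler, and another round trip you wire up yourself, which is exactly the manual integration MCP is designed to remove.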
Summary:
Select the approach that best fits your workflow—manual, dynamic, or fully automated tool integration.
OpenAI's MCP integration may not be called by the model directly; my guess is that it uses an internal MCP client to fetch and call the tools, similar to how QBot works.
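If that guess is right, the internal client would speak the standard MCP JSON-RPC methods: `tools/list` to discover what the server offers, then `tools/call` to invoke one on the model's behalf. The messages below use the real MCP method names, but the flow itself is my speculation about OpenAI's internals, not documented behavior, and `get_menu` is a hypothetical tool name.

```javascript
// Discover the server's tools (standard MCP method).
const listRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
}

// Invoke one tool on the model's behalf (standard MCP method).
// "get_menu" is a hypothetical tool name for illustration.
const callRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: { name: "get_menu", arguments: {} },
}

console.log(JSON.stringify(listRequest))
console.log(JSON.stringify(callRequest))
```

Either way, the point stands: something between the model and the MCP server is doing this discovery-and-dispatch loop so you don't have to.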
End-to-End Example
Quick check:
curl -X POST "https://api.openai.com/v1/responses" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4.1",
    "tools": [
      {
        "type": "mcp",
        "server_label": "tacosmcp",
        "server_url": "https://tacostore.run.mcp.com.ai",
        "require_approval": "never"
      }
    ],
    "input": "What tacos do you have in the menu?"
  }'
Let's walk through the process: set up an MCP server, integrate the OpenAI SDK, write the code, run it, and see the results. These steps help you use MCP to enhance your applications.
Summary:
Follow these steps to quickly connect OpenAI models to MCP and unlock dynamic tool calling.
1. Set Up an MCP Server
First, set up an MCP server hosting the tools you want to use. You can run your own MCP server or use an existing one. For this example, use the HAPI MCP Server—a simple way to get started. Find the HAPI MCP Server docs here.
Open a terminal, ensure Bun is installed, and run:
# Start a HAPI MCP Server
bun dev serve tacos --headless
This starts a local MCP server hosting a tacos online store at http://localhost:3000. I asked Postman Agent Builder to create a simple tacos store API, which I used for the MCP server.
2. Create an OpenAI Account
If you don't have one, sign up for an OpenAI account and get your API key at the OpenAI platform.
3. Install the OpenAI SDK
Use the OpenAI SDK in your preferred language. For this example, use JavaScript with Bun. In another terminal, run:
# Initialize a new Bun project
bun init -y -m
# Install the OpenAI SDK
bun add openai
4. Write the Code
Create a new file, for example, index.js, and add:
import OpenAI from "openai"

const client = new OpenAI({
  apiKey: process.env["OPENAI_API_KEY"], // This is the default and can be omitted
})

const resp = await client.responses.create({
  model: "gpt-5",
  tools: [
    {
      type: "mcp",
      server_label: "tacosmcp",
      server_description: "The Tacos MCP Server",
      server_url: "https://tacostore.run.mcp.com.ai",
      require_approval: "never",
    },
  ],
  input: "What tacos do you have in the menu?",
})

console.log(resp.output_text)
5. Run the Code
Set your OpenAI API key in the OPENAI_API_KEY environment variable, then run:
bun run index.js
6. See the Magic
The model fetches tools from the MCP server and uses them to answer your query. You should see a response listing the tacos available on the menu. To reinforce your learning, validate the returned taco list against the API response. This simple verification ensures tool execution works correctly and builds confidence in using MCP.
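A quick way to do that validation is a small check that every taco you fetched from the store API actually appears in the model's answer. The reference list and sample response below are illustrative stand-ins, not real store data; in practice you would fetch the reference list from the API yourself.

```javascript
// Returns true if the model's answer mentions every item on the menu.
// The menu here is an illustrative placeholder; fetch the real one
// from your store API when validating for real.
const referenceMenu = ["al pastor", "carnitas", "barbacoa"]

function mentionsAllItems(responseText, menu) {
  const text = responseText.toLowerCase()
  return menu.every((item) => text.includes(item.toLowerCase()))
}

const sample = "We have Al Pastor, Carnitas, and Barbacoa tacos."
console.log(mentionsAllItems(sample, referenceMenu)) // true
```

A substring check like this is deliberately loose, since the model may reorder or rephrase the menu; it only confirms that the tool's data made it into the answer.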
7. Explore Further
Modify the input query to ask different questions or explore other tools on the MCP server. The possibilities are endless!
Summary:
You can set up, connect, and validate MCP-powered tool calling in minutes, then experiment to discover more capabilities.
Demo Video
[PLACEHOLDER] - video coming soon
Conclusion
Integrating OpenAI with MCP unlocks a world of possibilities for building intelligent applications that adapt to various scenarios and user needs. By leveraging MCP, you create dynamic, context-aware interactions with AI models, leading to more accurate and relevant outputs.
Summary:
Use MCP to make your AI applications smarter, more flexible, and better suited to real-world challenges.