Separate Design from Runtime in AI Agent Development

Clear thinking about the Model Context Protocol: limitations, misconceptions, and practical patterns for building reliable agent systems. Learn from deep dives and hands-on guidance.

Read the MCP Docs

Model Context Protocol (MCP)

OAS · MCP · Arazzo · LLMs

We see HAPI MCP as the convergence layer between OpenAPI (OAS), MCP, Arazzo, and LLMs — “OAS v4” for AI and agentic workflows. It separates design from runtime, bringing modularity, reliability, and observability to AI systems.

Key Benefits

Modularity

Separate design artifacts (prompts, tools, policies) from runtime execution for faster iteration.

Scalability

Standardized interfaces enable interoperable agents and services across teams and infra.

Reliability

Deterministic contracts with validation make agent behavior predictable and testable.

Observability

Inspect flows, events, and metrics to continuously improve quality and safety.

Security

Principle-of-least-privilege access for tools and data, with clear boundaries.

Interop

OAS + MCP + Arazzo + LLMs provide a portable, vendor-neutral foundation.

Use Cases

Customer Service Automation

Assist human support agents, triage requests, and automate resolutions with grounded context and safe tool use.

Enterprise Knowledge Management

Unify knowledge graphs and services for accurate, explainable retrieval-augmented workflows.

Data Analysis & Reporting

Governed analysis pipelines with human-in-the-loop approvals and reproducible outputs.

What Teams Say

“MCP gave us predictable agent behavior and faster releases.”
CTO, Fintech
“The design/runtime split made compliance sign‑off straightforward.”
Head of Risk, Enterprise
“We moved from prototypes to production with clear contracts.”
Director of AI, SaaS

FAQs

Is MCP a replacement for OpenAPI?

No. We see MCP as complementary — it brings agent context and tool protocols. Together with OAS and Arazzo, you get a cohesive contract for agentic workflows.

How does HAPI MCP fit in?

HAPI MCP implements the protocol and adds production‑ready features: auth, routing, orchestration, and observability across services and agents.

Does this work with any LLM?

Yes. The interfaces are model‑agnostic. You can bring your preferred providers and switch without changing integration contracts.

Can I keep my existing microservices?

Absolutely. MCP builds on service boundaries. Wrap capabilities as tools, describe contracts, and compose workflows safely.
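As a minimal sketch of "wrap capabilities as tools, describe contracts": the service URL, tool name, and schema below are hypothetical, but they illustrate the pattern of exposing an existing microservice endpoint behind an explicit, validated contract before an agent can call it.

```python
import json
import urllib.request

# Hypothetical tool contract for an existing order-lookup microservice.
ORDER_LOOKUP_TOOL = {
    "name": "lookup_order",
    "description": "Fetch the status of an order by its ID.",
    "input_schema": {
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
        "required": ["order_id"],
    },
}

def validate_args(args: dict, tool: dict) -> None:
    """Reject any call that does not satisfy the tool's declared contract."""
    schema = tool["input_schema"]
    for field in schema["required"]:
        if field not in args:
            raise ValueError(f"missing required field: {field}")
    for field, rules in schema["properties"].items():
        if field in args and rules["type"] == "string" and not isinstance(args[field], str):
            raise ValueError(f"{field} must be a string")

def lookup_order(args: dict, base_url: str = "http://orders.internal") -> dict:
    """Validate against the contract, then delegate to the existing service."""
    validate_args(args, ORDER_LOOKUP_TOOL)
    with urllib.request.urlopen(f"{base_url}/orders/{args['order_id']}") as resp:
        return json.load(resp)
```

The key design choice: the contract is a standalone design artifact, so it can be reviewed, versioned, and tested independently of the runtime service it fronts.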

What about security and privacy?

Use scoped credentials, audited tool calls, and least‑privilege access. HAPI MCP adds guardrails and observability for regulated environments.

Subscribe to the Newsletter

Insights on MCP, patterns, pitfalls, and hands-on examples delivered monthly.

Request a Demo

See HAPI MCP in action — the convergence of OAS, MCP, Arazzo, and LLMs.

Recent Posts

View all

An evolution of tool calling with MCP, thanks to OpenAI's latest SDK updates.

Step-by-step, I'll guide you through setting up an MCP server, integrating it with the OpenAI SDK, and running a complete example that showcases dynamic tool calling. By the end of this post, you'll be equipped to leverage MCP in your own OpenAI-powered applications.

An end-to-end example: setting up an MCP server, integrating it with an OpenAI LLM, and running tests to see it in action.

Prompt injection is a critical security risk for any system using large language models (LLMs), including those built with Model Context Protocol (MCP). You must understand how prompt injection works, why MCP cannot prevent it, and what steps you should take to protect your users and applications (MCP Clients).
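One concrete mitigation, sketched below with illustrative tool names and an assumed policy: since MCP cannot stop a model from *requesting* a dangerous tool call, the MCP client should refuse to execute calls that fail a client-side policy check, escalating sensitive ones to a human.

```python
# Illustrative client-side guardrail; tool names and risk tiers are
# assumptions, not part of the MCP spec.
READ_ONLY_TOOLS = {"search_docs", "lookup_order"}
SENSITIVE_TOOLS = {"delete_record", "send_email"}

def authorize_tool_call(tool_name: str, user_approved: bool = False) -> bool:
    """Return True only if the requested tool call passes policy."""
    if tool_name in READ_ONLY_TOOLS:
        return True            # low-risk: execute automatically
    if tool_name in SENSITIVE_TOOLS:
        return user_approved   # high-risk: require a human in the loop
    return False               # unknown tools are denied by default
```

Deny-by-default matters here: an injected prompt may invent tool names or arguments, so anything outside the known allowlist should never execute.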

🚨 "OpenAPI Specification (OAS) v4 is out" — I wish. But that's the kind of headline I expect to see soon, because OAS can easily be extended so that RESTful APIs work seamlessly with AI.

By the end of this article, you'll know how to let any LLM call your REST tools automatically using OAS.
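The core idea can be sketched in a few lines. This is an assumption-laden illustration, not the article's implementation: it derives a generic, model-agnostic function-calling tool definition from a single OAS operation object, so an LLM with tool support can invoke the underlying REST endpoint.

```python
# Hypothetical converter: one OpenAPI operation -> one LLM tool definition.
def oas_operation_to_tool(path: str, method: str, operation: dict) -> dict:
    properties = {
        p["name"]: {
            "type": p.get("schema", {}).get("type", "string"),
            "description": p.get("description", ""),
        }
        for p in operation.get("parameters", [])
    }
    required = [p["name"] for p in operation.get("parameters", [])
                if p.get("required")]
    return {
        "name": operation.get("operationId", f"{method}_{path.strip('/')}"),
        "description": operation.get("summary", ""),
        "parameters": {"type": "object", "properties": properties,
                       "required": required},
    }

# Minimal example operation, in the shape OAS 3.x uses.
spec_op = {
    "operationId": "getOrder",
    "summary": "Fetch an order by ID",
    "parameters": [{"name": "orderId", "in": "path", "required": True,
                    "schema": {"type": "string"}}],
}
tool = oas_operation_to_tool("/orders/{orderId}", "get", spec_op)
```

A real converter would also handle request bodies, `$ref` resolution, and auth, but the mapping above is the essence of letting an OAS document double as a tool catalog.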
