Model Context Protocol (MCP) Integration
Easily integrate OK!Gotcha with LLM frameworks using the Model Context Protocol
What is MCP Integration?
The Model Context Protocol (MCP) integration is a lightweight approach to adding human approval workflows to your AI applications and agents. It's designed for use with Large Language Models (LLMs) such as GPT and Claude, and integrates seamlessly with popular AI frameworks.
MCP integration is ideal for AI tools and frameworks where simplicity is important and you want minimal changes to your existing code.
Benefits of MCP Integration
Minimal Code Changes
Integrate with just a few lines of code in your existing LLM applications.
Framework Agnostic
Works with OpenAI, Anthropic, LangChain, AutoGen, and more.
Zero Context Overhead
No need to add approval-specific instructions to your LLM prompts.
Automatic Tool Validation
Functions requiring approval are automatically validated before execution.
Getting Started
Install the SDK
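Install the OK!Gotcha Python SDK with pip, for example `pip install okgotcha`. The package name `okgotcha` is assumed here and in the examples below as a placeholder; substitute the actual SDK package name.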
Create Your Tool Functions
Define your tool functions as normal, then wrap the ones that require approval:
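For example (a minimal sketch; the `okgotcha` package and its `require_approval` decorator are assumed here as placeholders for the actual SDK entry points):

```python
from okgotcha import require_approval  # assumed import; check the SDK for the real entry point


def get_weather(city: str) -> str:
    """An ordinary tool function that needs no approval."""
    return f"It is sunny in {city}."


@require_approval  # hypothetical decorator: pauses execution until an approver responds
def delete_customer_record(customer_id: str) -> str:
    """A sensitive tool function that should run only after human approval."""
    # ... perform the deletion ...
    return f"Deleted customer {customer_id}."
```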
Define OpenAI Tool Schema
Define the tool schema as you normally would for OpenAI:
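For instance, a schema for the `delete_customer_record` function above might look like this. It uses the standard OpenAI function-calling format; nothing OK!Gotcha-specific is needed:

```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "delete_customer_record",
            "description": "Permanently delete a customer record. Requires human approval.",
            "parameters": {
                "type": "object",
                "properties": {
                    "customer_id": {
                        "type": "string",
                        "description": "ID of the customer to delete",
                    },
                },
                "required": ["customer_id"],
            },
        },
    }
]
```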
Use with OpenAI
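A minimal sketch of the request loop, assuming the wrapped function and `tools` schema from the previous steps. The OpenAI calls use the standard `openai` Python client; only the approval-gated function is specific to OK!Gotcha:

```python
import json

from openai import OpenAI

client = OpenAI()

messages = [{"role": "user", "content": "Please delete customer 42."}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=tools,
)

# Execute any tool calls the model requested. Calling the wrapped function
# triggers the OK!Gotcha approval flow before the deletion actually runs.
for tool_call in response.choices[0].message.tool_calls or []:
    if tool_call.function.name == "delete_customer_record":
        args = json.loads(tool_call.function.arguments)
        result = delete_customer_record(**args)  # blocks until approved or rejected
        print(result)
```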
Framework Integrations
LangChain Integration
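Because the approval check happens inside the wrapped function, it can be registered like any other LangChain tool. A sketch, reusing `delete_customer_record` from above:

```python
from langchain_core.tools import Tool

# Register the approval-gated function as an ordinary LangChain tool.
# No agent-side changes are needed; the approval flow runs inside the function.
delete_tool = Tool(
    name="delete_customer_record",
    func=delete_customer_record,
    description="Permanently delete a customer record. Requires human approval.",
)

# Pass delete_tool to your agent or chain alongside your other tools,
# e.g. tools=[delete_tool, ...] when constructing the agent.
```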
Approval Management
When using MCP integration, approvals are managed the same way as with the SDK integration:
- Approvers receive notifications through configured channels
- Approvers can approve or reject requests through the OK!Gotcha dashboard
- Upon approval, the function executes and results are returned to the agent
- If rejected, an error is returned to the agent (see the sketch below)
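For example, a rejection can be handled in your tool-calling loop like this. The `ApprovalRejectedError` name is an assumption; the actual SDK may signal rejection differently:

```python
from okgotcha import ApprovalRejectedError  # assumed exception name

try:
    result = delete_customer_record(customer_id="42")
except ApprovalRejectedError as exc:
    # Surface the rejection back to the agent instead of crashing the app.
    result = f"Action was rejected by an approver: {exc}"
```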
You can also check approval status programmatically:
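A sketch of a status lookup; the client class, the `approvals.get` method, and the field names are assumptions about the SDK surface:

```python
from okgotcha import OkGotchaClient  # assumed client class

client = OkGotchaClient(api_key="YOUR_API_KEY")

# Look up a pending or completed approval request by its ID.
approval = client.approvals.get("approval_request_id")
print(approval.status)  # e.g. "pending", "approved", or "rejected"
```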
Next Steps
- Learn about audit trails for logging all approval activities
- Explore notification options for alerting approvers
- Check our specific framework integrations for detailed guides