What is MCP Integration?
The Model Context Protocol (MCP) integration is a lightweight way to add human approval workflows to your AI applications and agents. It is designed for use with Large Language Models (LLMs) such as GPT and Claude, and integrates seamlessly with popular AI frameworks.
MCP integration is ideal for AI tools and frameworks where simplicity is important and you want minimal changes to your existing code.
Benefits of MCP Integration
Minimal Code Changes: Integrate with just a few lines of code in your existing LLM applications.
Framework Agnostic: Works with OpenAI, Anthropic, LangChain, AutoGen, and more.
Zero Context Overhead: No need to add approval-specific instructions to your LLM prompts.
Automatic Tool Validation: Functions requiring approval are automatically validated before execution.
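Conceptually, the validation works like a higher-order wrapper around your function. The sketch below is illustrative only, assuming a simplified in-memory approval store; `PendingApprovalError` mirrors the SDK surface used later in this page, but the internals here are not the real implementation:

```typescript
// Illustrative sketch: an approval wrapper intercepts a call and checks
// approval status before the underlying implementation ever runs.
class PendingApprovalError extends Error {
  constructor(public approvalId: string) {
    super(`Pending approval: ${approvalId}`);
  }
}

// Stand-in for the approval store; the real SDK would call its backend.
const approvedIds = new Set<string>(["apr-1"]);

function requireApproval(config: { title: string; approvers: string[] }) {
  return <A extends unknown[], R>(fn: (...args: A) => Promise<R>) =>
    async (...args: A): Promise<R> => {
      const approvalId = "apr-1"; // would be generated per invocation
      if (!approvedIds.has(approvalId)) {
        // Not yet approved: surface a pending error instead of executing
        throw new PendingApprovalError(approvalId);
      }
      return fn(...args); // approved: run the wrapped function
    };
}

const transfer = requireApproval({
  title: "Fund Transfer",
  approvers: ["finance-team"],
})(async (from: string, to: string, amount: number) => amount);
```

Because the check happens inside the wrapper, the LLM can call the tool normally and the approval gate is enforced without any prompt changes.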
Getting Started
Install the SDK
npm install @okgotcha/sdk
Define your tool functions as normal, then wrap the ones that require approval:
import { OkGotcha } from '@okgotcha/sdk';

// Initialize OkGotcha
const ok = OkGotcha();

// Define your tools
const tools = {
  // Regular function - no approval needed
  getAccountBalance: async (accountId: string): Promise<number> => {
    // Implementation here
    return 10000; // Example
  },

  // Function requiring approval
  transferFunds: ok.requireApproval({
    title: "Fund Transfer",
    description: "Transfer funds between accounts",
    approvers: ["finance-team", "security-team"]
  })(async (fromAccount: string, toAccount: string, amount: number): Promise<boolean> => {
    // Implementation here
    return true;
  })
};
Define the tool schema as you normally would for OpenAI:
// Define OpenAI tools schema
const openaiTools = [
  {
    type: "function",
    function: {
      name: "getAccountBalance",
      description: "Get the current balance of an account",
      parameters: {
        type: "object",
        properties: {
          accountId: { type: "string" }
        },
        required: ["accountId"]
      }
    }
  },
  {
    type: "function",
    function: {
      name: "transferFunds",
      description: "Transfer funds between accounts",
      parameters: {
        type: "object",
        properties: {
          fromAccount: { type: "string" },
          toAccount: { type: "string" },
          amount: { type: "number" }
        },
        required: ["fromAccount", "toAccount", "amount"]
      }
    }
  }
];
Use with OpenAI
import OpenAI from 'openai';

// Create OpenAI client
const openai = new OpenAI();

// Example of using tools with OpenAI
async function processUserRequest(userInput: string) {
  const messages = [{ role: "user", content: userInput }];

  let response = await openai.chat.completions.create({
    model: "gpt-3.5-turbo-1106",
    messages,
    tools: openaiTools,
    tool_choice: "auto"
  });

  while (response.choices[0].message.tool_calls) {
    messages.push(response.choices[0].message);

    // Handle tool calls
    for (const toolCall of response.choices[0].message.tool_calls) {
      const toolName = toolCall.function.name;
      const toolArgs = JSON.parse(toolCall.function.arguments);

      try {
        // This will automatically handle approval if needed
        const toolResult = await tools[toolName](...Object.values(toolArgs));
        messages.push({
          role: "tool",
          name: toolName,
          content: JSON.stringify(toolResult),
          tool_call_id: toolCall.id
        });
      } catch (error) {
        if (error instanceof OkGotcha.PendingApprovalError) {
          messages.push({
            role: "tool",
            name: toolName,
            content: JSON.stringify({
              status: "pending_approval",
              message: "This action requires human approval. Waiting for approval...",
              approvalId: error.approvalId
            }),
            tool_call_id: toolCall.id
          });
        } else {
          messages.push({
            role: "tool",
            name: toolName,
            content: JSON.stringify({ error: error.message }),
            tool_call_id: toolCall.id
          });
        }
      }
    }

    // Continue the conversation
    response = await openai.chat.completions.create({
      model: "gpt-3.5-turbo-1106",
      messages,
      tools: openaiTools
    });
  }

  return response.choices[0].message.content;
}

// Use the tools
const result = await processUserRequest(
  "Transfer $5000 from account 123 to account 456"
);
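One caveat in the loop above: spreading `Object.values(toolArgs)` into the tool relies on the JSON key order matching the function's parameter order. A safer pattern is an explicit dispatch table that maps named arguments per tool. This sketch is illustrative; the stub implementations stand in for the real tool functions defined earlier:

```typescript
type JsonArgs = Record<string, unknown>;

// Stubs standing in for the real implementations defined earlier.
const getAccountBalance = async (accountId: string) => 10000;
const transferFunds = async (from: string, to: string, amount: number) => true;

// Each entry pulls arguments out by name, so key order no longer matters.
const dispatch: Record<string, (args: JsonArgs) => Promise<unknown>> = {
  getAccountBalance: (a) => getAccountBalance(a.accountId as string),
  transferFunds: (a) =>
    transferFunds(a.fromAccount as string, a.toAccount as string, a.amount as number),
};

// Unknown tool names fail fast with a clear error instead of a TypeError.
async function callTool(name: string, rawArgs: string): Promise<unknown> {
  const handler = dispatch[name];
  if (!handler) throw new Error(`Unknown tool: ${name}`);
  return handler(JSON.parse(rawArgs));
}
```

Inside the tool-call loop, `await callTool(toolName, toolCall.function.arguments)` would replace the positional spread.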
Framework Integrations
LangChain Integration
import { OkGotcha } from '@okgotcha/sdk';
import { ChatOpenAI } from "langchain/chat_models/openai";
import {
  ChatPromptTemplate,
  MessagesPlaceholder
} from "langchain/prompts";
import { DynamicTool } from "langchain/tools";
import { AgentExecutor, createOpenAIFunctionsAgent } from "langchain/agents";

const ok = OkGotcha();

// Create your tools with approval requirements
const tools = [
  new DynamicTool({
    name: "get_account_balance",
    description: "Get the current balance of an account",
    func: async (accountId: string) => {
      return "10000";
    },
  }),
  new DynamicTool({
    name: "transfer_funds",
    description: "Transfer funds between accounts",
    func: ok.requireApproval({
      title: "Fund Transfer",
      description: "Transfer funds between accounts",
      approvers: ["finance-team"]
    })(async (input: string) => {
      const { fromAccount, toAccount, amount } = JSON.parse(input);
      // Transfer implementation
      return "Transfer initiated successfully";
    }),
  }),
];

// Create the agent
const chat = new ChatOpenAI();

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful banking assistant."],
  new MessagesPlaceholder("chat_history"),
  ["human", "{input}"],
  new MessagesPlaceholder("agent_scratchpad"),
]);

const agent = await createOpenAIFunctionsAgent({
  llm: chat,
  tools,
  prompt,
});

const agentExecutor = AgentExecutor.fromAgentAndTools({
  agent,
  tools,
});

// Execute the agent
const result = await agentExecutor.invoke({
  input: "Transfer $5000 from account 123 to account 456",
  chat_history: [],
});
Approval Management
When using MCP integration, approvals are managed the same way as with the SDK integration:
Approvers receive notifications through configured channels
Approvers can approve or reject requests through the OK!Gotcha dashboard
Upon approval, the function executes and results are returned to the agent
If rejected, an error is returned to the agent
You can also check approval status programmatically:
const status = await ok.getApprovalStatus(approvalId);
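For long-running agents you may want to poll until the request resolves. The helper below is a hypothetical sketch: it accepts any status-lookup function shaped like `ok.getApprovalStatus`, and the `"approved"` / `"rejected"` / pending status strings are assumptions, so check the SDK reference for the actual response shape:

```typescript
// Hypothetical polling helper; the status-lookup function is injected so
// the sketch stays self-contained. Status string values are assumed.
async function waitForApproval(
  getStatus: (id: string) => Promise<{ status: string }>,
  approvalId: string,
  intervalMs = 5000,
  maxAttempts = 60
): Promise<boolean> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const { status } = await getStatus(approvalId);
    if (status === "approved") return true;  // safe to run the action
    if (status === "rejected") return false; // report rejection to the agent
    // Still pending: wait before the next check
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Approval ${approvalId} still pending after ${maxAttempts} checks`);
}
```

Usage would look like `await waitForApproval((id) => ok.getApprovalStatus(id), approvalId)`.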
Next Steps