MCP Server Integration: Connecting AI Agents to External Services
Master the Model Context Protocol to give AI agents secure, authenticated access to third-party services like GitHub, Slack, CRMs, and databases.

TL;DR
Jump to MCP fundamentals · Jump to Integration patterns · Jump to Authentication strategies · Jump to Building custom servers
AI agents need access to real-world data and services to be useful. You can't just ask an agent to "create a GitHub issue" or "query our customer database" without giving it authenticated access to those systems. Model Context Protocol (MCP) solves this by providing a standardized way for agents to discover and invoke external service capabilities.
This guide covers integrating MCP servers into your AI agent architecture, with patterns from Athenic's implementation where we connect agents to 40+ external services including GitHub, Supabase, Apollo, Sentry, and custom internal APIs.
Key takeaways
- MCP standardizes service integration: agents discover tools dynamically rather than hardcoding API calls.
- Authentication should be scoped per-organization with encrypted credential storage.
- Hosted MCP servers (Smithery, official providers) reduce maintenance; custom servers give flexibility.
- Monitor MCP call latency and error rates to identify flaky integrations early.
MCP fundamentals
MCP is an open protocol (JSON-RPC based) that defines how AI systems interact with external data sources and services. Think of it as a plugin architecture for LLMs.
Three core primitives:
- Tools: functions the agent can invoke (e.g. create_github_issue, query_database).
- Resources: data the server exposes for the agent to read (files, schemas, records).
- Prompts: reusable prompt templates the server provides.
Instead of hardcoding integrations like this:
// Brittle, non-portable
if (task.includes('create issue')) {
const result = await fetch('https://api.github.com/repos/owner/repo/issues', {
method: 'POST',
headers: { Authorization: `token ${GITHUB_TOKEN}` },
body: JSON.stringify({ title, body }),
});
}
You use MCP:
// Dynamic, discoverable
const tools = await mcpClient.listTools();
const githubTool = tools.find(t => t.name === 'create_github_issue');
const result = await mcpClient.callTool(githubTool.name, {
repository: 'owner/repo',
title: 'Bug report',
body: 'Description here',
});
The agent discovers available tools at runtime, making integrations portable across different agent systems.
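On the wire, `listTools` and `callTool` map to JSON-RPC 2.0 requests. A simplified sketch of the frames (`tools/list` and `tools/call` are the MCP method names; the field values here are illustrative):

```typescript
// Simplified JSON-RPC 2.0 frames behind listTools() and callTool().
const listRequest = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/list',
};

const callRequest = {
  jsonrpc: '2.0',
  id: 2,
  method: 'tools/call',
  params: {
    name: 'create_github_issue',
    arguments: { repository: 'owner/repo', title: 'Bug report', body: 'Description here' },
  },
};

// The server replies with a result (or error) keyed to the same id.
const exampleResponse = {
  jsonrpc: '2.0',
  id: 2,
  result: { content: [{ type: 'text', text: 'Issue created' }] },
};
```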
Before MCP: every agent framework needed bespoke integration code for every service, an N×M explosion of one-off connectors.
With MCP: each service exposes one server that any MCP-compatible client can use, and each client speaks one protocol.
Anthropic released MCP as an open standard in November 2024, and official and community servers for mainstream services have proliferated since.
"Agent orchestration is where the real value lives. Individual AI capabilities matter less than how well you coordinate them into coherent workflows." - James Park, Founder of AI Infrastructure Labs
Integration patterns
Three primary patterns exist for integrating MCP servers.
Pattern 1: Hosted MCP servers
Use pre-built MCP servers from registries like Smithery or official provider libraries.
Example: GitHub MCP server
import { McpClient } from '@modelcontextprotocol/sdk';
const githubServer = await McpClient.connect({
server: 'github',
transport: 'hosted',
endpoint: 'https://mcp.smithery.ai/github',
auth: {
type: 'bearer',
token: process.env.GITHUB_TOKEN,
},
});
// Discover tools
const tools = await githubServer.listTools();
// Returns: [{ name: 'create_issue', ... }, { name: 'list_prs', ... }]
// Invoke tool
const issue = await githubServer.callTool('create_issue', {
repository: 'athenic/platform',
title: 'Agent-reported bug',
body: 'Agent detected anomaly in deployment logs.',
});
Pros: no infrastructure to run or patch; maintained by the provider; fast to adopt.
Cons: a third-party dependency in your critical path; your data and credentials transit the host; limited customization.
Use when: Integrating with mainstream services (GitHub, Slack, Google Drive, databases).
Pattern 2: Self-hosted STDIO servers
Run MCP servers as local processes communicating via standard input/output streams.
import { StdioMcpClient } from '@modelcontextprotocol/sdk';
const customServer = new StdioMcpClient({
command: 'node',
args: ['./mcp-servers/custom-crm/index.js'],
env: {
CRM_API_KEY: process.env.CRM_API_KEY,
CRM_BASE_URL: 'https://api.ourcrm.com',
},
});
await customServer.connect();
The MCP server runs as a subprocess. Your agent sends JSON-RPC requests via stdin, receives responses via stdout.
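A minimal sketch of what that transport does under the hood, with framing simplified to newline-delimited JSON (the server path is the hypothetical one above):

```typescript
import { spawn, ChildProcess } from 'child_process';

// Serialize one JSON-RPC message for a newline-delimited stdio transport.
function frame(msg: object): string {
  return JSON.stringify(msg) + '\n';
}

// Spawn an MCP server subprocess; stdin carries requests, stdout carries responses.
function connectStdio(command: string, args: string[], env: Record<string, string>): ChildProcess {
  return spawn(command, args, {
    env: { ...process.env, ...env },
    stdio: ['pipe', 'pipe', 'inherit'],
  });
}

// Usage (assumes ./mcp-servers/custom-crm/index.js exists):
// const child = connectStdio('node', ['./mcp-servers/custom-crm/index.js'], { CRM_API_KEY: '...' });
// child.stdin!.write(frame({ jsonrpc: '2.0', id: 1, method: 'tools/list' }));
```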
Pros: full control over the code; credentials never leave your infrastructure; easy to customize.
Cons: you maintain the server yourself; subprocess lifecycle management; one process per agent host, so it doesn't scale horizontally.
Use when: Building custom integrations for internal APIs or services without official MCP support.
Pattern 3: HTTP (remote) servers
Deploy MCP servers as HTTP services behind load balancers.
const mcpClient = await McpClient.connect({
transport: 'http',
endpoint: 'https://mcp-internal.company.com/supabase',
auth: {
type: 'api-key',
header: 'X-API-Key',
value: process.env.MCP_API_KEY,
},
});
Pros: scales horizontally behind a load balancer; one deployment serves many agents; works across regions.
Cons: more operational overhead (deployments, TLS, monitoring); network latency on every call; you must secure the endpoint yourself.
Use when: Production systems with high request volume or multi-region deployments.
At Athenic, we use hosted servers for mainstream services (GitHub, Sentry) and self-hosted STDIO for custom integrations (internal CRM, proprietary analytics).
Authentication strategies
MCP servers need credentials to access external services. Security is critical: leaked credentials can compromise entire integrations.
Strategy 1: Environment variables
Pass credentials via environment variables to MCP server processes.
const server = new StdioMcpClient({
command: 'node',
args: ['mcp-servers/github/index.js'],
env: {
GITHUB_TOKEN: process.env.GITHUB_TOKEN,
},
});
Pros: Simple for local development
Cons: credentials are visible in process listings and logs; every tenant shares the same credentials; rotation requires a redeploy.
Never use in production with multi-tenant systems.
Strategy 2: Encrypted per-organization storage
Store credentials encrypted in your database and decrypt on demand when initializing MCP servers.
// Store credentials (one-time setup)
await db.mcpServerAuth.insert({
org_id: 'acme.com',
server_name: 'github',
credentials: encrypt({
token: userProvidedToken,
}, ENCRYPTION_KEY),
created_at: new Date(),
});
// Retrieve and use
async function getMcpClient(orgId: string, serverName: string) {
const auth = await db.mcpServerAuth.findOne({ org_id: orgId, server_name: serverName });
const credentials = decrypt(auth.credentials, ENCRYPTION_KEY);
return await McpClient.connect({
server: serverName,
auth: { type: 'bearer', token: credentials.token },
});
}
Encryption tip: Use AES-256-GCM with organization-specific keys derived from a master secret + org ID. This ensures one compromised org doesn't expose others.
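A sketch of that derivation using Node's built-in crypto module (the master-secret handling and the info-string format are assumptions, not Athenic's exact scheme):

```typescript
import { hkdfSync, createCipheriv, createDecipheriv, randomBytes } from 'crypto';

// Derive a stable per-org AES-256 key from a master secret + org ID via HKDF-SHA256.
// Same inputs always yield the same key; different orgs get independent keys.
function deriveOrgKey(masterSecret: string, orgId: string): Buffer {
  return Buffer.from(hkdfSync('sha256', masterSecret, Buffer.alloc(0), `mcp-auth:${orgId}`, 32));
}

// AES-256-GCM encrypt; store the random 12-byte IV and auth tag with the ciphertext.
function encryptForOrg(plaintext: string, masterSecret: string, orgId: string) {
  const key = deriveOrgKey(masterSecret, orgId);
  const iv = randomBytes(12);
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decryptForOrg(
  blob: { iv: Buffer; ciphertext: Buffer; tag: Buffer },
  masterSecret: string,
  orgId: string,
): string {
  const decipher = createDecipheriv('aes-256-gcm', deriveOrgKey(masterSecret, orgId), blob.iv);
  decipher.setAuthTag(blob.tag);
  return Buffer.concat([decipher.update(blob.ciphertext), decipher.final()]).toString('utf8');
}
```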
Strategy 3: OAuth flows
For services requiring user consent (Google Drive, Slack workspaces), implement OAuth.
// Step 1: Redirect user to OAuth provider
// (in production, add a `state` parameter to prevent CSRF)
app.get('/connect/github', (req, res) => {
const authUrl = `https://github.com/login/oauth/authorize?client_id=${GITHUB_CLIENT_ID}&scope=repo`;
res.redirect(authUrl);
});
// Step 2: Handle callback
app.get('/connect/github/callback', async (req, res) => {
const { code } = req.query;
const tokenResponse = await fetch('https://github.com/login/oauth/access_token', {
method: 'POST',
headers: { 'Content-Type': 'application/json', Accept: 'application/json' },
body: JSON.stringify({
client_id: GITHUB_CLIENT_ID,
client_secret: GITHUB_CLIENT_SECRET,
code,
}),
});
const { access_token } = await tokenResponse.json();
// Store encrypted token for this user/org
await db.mcpServerAuth.insert({
org_id: req.user.org_id,
user_id: req.user.id,
server_name: 'github',
credentials: encrypt({ token: access_token }, ENCRYPTION_KEY),
});
res.send('GitHub connected successfully!');
});
Scoping: Store tokens per-user or per-organization depending on use case. User-scoped = individual GitHub accounts; org-scoped = shared GitHub org token.
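That resolution order can be sketched as a small lookup: prefer a user-scoped row, fall back to the org-wide one (the interface mirrors the hypothetical table fields used above):

```typescript
// Resolve which stored credential an agent run should use.
interface StoredAuth {
  org_id: string;
  user_id?: string; // absent for org-scoped credentials
  server_name: string;
  token: string;
}

function resolveToken(
  rows: StoredAuth[],
  orgId: string,
  serverName: string,
  userId?: string,
): string | undefined {
  const matches = rows.filter(r => r.org_id === orgId && r.server_name === serverName);
  const userRow = userId ? matches.find(r => r.user_id === userId) : undefined;
  const orgRow = matches.find(r => r.user_id === undefined);
  return (userRow ?? orgRow)?.token;
}
```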
Strategy 4: Service accounts
For internal services (databases, monitoring), create service accounts with minimal permissions.
-- Create read-only database user for analytics MCP server
CREATE USER mcp_analytics_readonly WITH PASSWORD 'generated_secure_password';
GRANT SELECT ON ALL TABLES IN SCHEMA public TO mcp_analytics_readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO mcp_analytics_readonly;
Store service account credentials encrypted in your MCP auth table. Never use admin/root credentials.
| Service | Auth method | Scope | Rotation frequency |
|---|---|---|---|
| GitHub | OAuth (user) | Per-user | 90 days |
| Supabase | Service key | Org-wide | 180 days |
| Internal API | API key | Org-wide | 60 days |
| PostgreSQL | Service account | Read-only | 180 days |
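The rotation windows above can be enforced with a periodic check; a sketch (server keys and field names assumed):

```typescript
// Rotation windows in days, mirroring the table above.
const rotationDays: Record<string, number> = {
  github: 90,
  supabase: 180,
  internal_api: 60,
  postgres: 180,
};

// True if a credential has outlived its rotation window.
function isDueForRotation(serverName: string, createdAt: Date, now: Date = new Date()): boolean {
  const days = rotationDays[serverName] ?? 60; // unknown services get the strictest window
  return now.getTime() - createdAt.getTime() > days * 24 * 60 * 60 * 1000;
}
```

Run this as a scheduled job and alert owners before credentials expire rather than after calls start failing.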
Building custom servers
When official servers don't exist for your service, build a custom MCP server.
import { McpServer } from '@modelcontextprotocol/sdk/server';
import { z } from 'zod';
const server = new McpServer({
name: 'custom-crm',
version: '1.0.0',
});
// Define a tool
server.tool({
name: 'search_contacts',
description: 'Search CRM contacts by name or email',
parameters: z.object({
query: z.string().describe('Search query'),
limit: z.number().default(10),
}),
execute: async ({ query, limit }) => {
const response = await fetch(`https://api.ourcrm.com/contacts/search?q=${encodeURIComponent(query)}&limit=${limit}`, {
headers: { Authorization: `Bearer ${process.env.CRM_API_KEY}` },
});
const contacts = await response.json();
return {
content: contacts.map(c => ({
name: c.full_name,
email: c.email,
company: c.company,
})),
};
},
});
// Define a resource
server.resource({
uri: 'crm://schema',
name: 'CRM Database Schema',
description: 'Schema documentation for CRM database',
mimeType: 'text/markdown',
execute: async () => {
return {
content: `
# CRM Schema
- **contacts**: id, full_name, email, company
- **deals**: id, contact_id, value, stage, created_at
`,
};
},
});
// Start server
await server.start({
transport: 'stdio',
});
Wrap external API calls in retry logic with exponential backoff.
function sleep(ms: number): Promise<void> {
return new Promise(resolve => setTimeout(resolve, ms));
}
async function withRetry<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
for (let attempt = 1; attempt <= maxRetries; attempt++) {
try {
return await fn();
} catch (error) {
if (attempt === maxRetries) throw error;
const delay = Math.pow(2, attempt) * 1000; // 2s, 4s, 8s
console.warn(`Attempt ${attempt} failed, retrying in ${delay}ms...`);
await sleep(delay);
}
}
throw new Error('unreachable'); // satisfies TypeScript's return-path analysis
}
server.tool({
name: 'get_deal',
parameters: z.object({ deal_id: z.string() }),
execute: async ({ deal_id }) => {
return await withRetry(async () => {
const response = await fetch(`https://api.crm.com/deals/${deal_id}`, {
headers: { Authorization: `Bearer ${API_KEY}` },
});
if (!response.ok) {
throw new Error(`API error: ${response.status}`);
}
return await response.json();
});
},
});
Use Zod schemas to validate parameters before calling external APIs.
server.tool({
name: 'create_deal',
parameters: z.object({
contact_id: z.string().uuid(),
value: z.number().positive().max(1000000),
stage: z.enum(['prospecting', 'negotiation', 'closed_won', 'closed_lost']),
notes: z.string().max(5000).optional(),
}),
execute: async (params) => {
// params are already validated by Zod
const response = await fetch('https://api.crm.com/deals', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${API_KEY}`,
},
body: JSON.stringify(params),
});
return await response.json();
},
});
Invalid inputs are rejected before execution, preventing malformed API requests.
Dynamic tool discovery
Agents should discover MCP tools at runtime, not at build time. This enables adding new integrations without redeploying agent code.
Different organizations have different MCP servers configured. Filter available tools based on organization context.
async function getAvailableTools(orgId: string) {
// Fetch MCP servers configured for this org
const orgServers = await db.mcpServers.findAll({ org_id: orgId, enabled: true });
const allTools = [];
for (const serverConfig of orgServers) {
const client = await initMcpClient(serverConfig);
const tools = await client.listTools();
allTools.push(...tools.map(t => ({
...t,
server_name: serverConfig.name,
})));
}
return allTools;
}
// Inject into agent
const agent = new Agent({
name: 'orchestrator',
tools: await getAvailableTools(user.org_id),
});
Now if Organization A has GitHub and Supabase MCP servers, their agents see those tools. Organization B with only Slack sees different tools.
When you have 50+ MCP tools, naive "list all tools" approaches overwhelm agents. Use vector similarity to surface relevant tools.
import { generateEmbedding } from './embeddings';
// Embed tool descriptions offline
for (const tool of allTools) {
const embedding = await generateEmbedding(tool.description);
await db.toolEmbeddings.insert({
tool_name: tool.name,
description: tool.description,
embedding,
});
}
// At runtime, find relevant tools via similarity search
async function findRelevantTools(userQuery: string, topK = 10) {
const queryEmbedding = await generateEmbedding(userQuery);
// pgvector expects the vector serialized as text, e.g. '[0.1,0.2,...]'
const results = await db.query(`
SELECT tool_name, description
FROM tool_embeddings
ORDER BY embedding <=> $1::vector
LIMIT $2
`, [JSON.stringify(queryEmbedding), topK]);
return results.rows;
}
// Use in agent
const relevantTools = await findRelevantTools("Create a GitHub issue for this bug");
// Returns: [{ tool_name: 'create_github_issue', ... }, { tool_name: 'get_github_repo', ... }]
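For small tool catalogs you can skip the vector database entirely; an in-memory sketch, assuming embeddings are plain number arrays:

```typescript
// Cosine similarity between two equal-length embedding vectors.
function cosineSim(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank tools by similarity to the query embedding, keeping the top K.
function rankTools(
  queryEmbedding: number[],
  tools: { tool_name: string; embedding: number[] }[],
  topK = 10,
) {
  return tools
    .map(t => ({ tool_name: t.tool_name, score: cosineSim(queryEmbedding, t.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}
```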
This reduced our average agent latency by 35% by preventing tool overload.
Monitoring MCP server health
Track MCP server health to detect integration failures early.
| Metric | Description | Alert threshold |
|---|---|---|
| Call latency (p95) | 95th percentile MCP tool invocation time | >3s |
| Error rate | % of MCP calls returning errors | >5% |
| Auth failures | Failed authentication attempts | >2% |
| Timeout rate | % of calls exceeding timeout | >1% |
async function instrumentedMcpCall(client: McpClient, toolName: string, params: any) {
const startTime = Date.now();
try {
const result = await client.callTool(toolName, params);
const duration = Date.now() - startTime;
metrics.histogram('mcp.call.duration', duration, { tool: toolName, status: 'success' });
return result;
} catch (error) {
const duration = Date.now() - startTime;
metrics.histogram('mcp.call.duration', duration, { tool: toolName, status: 'error' });
metrics.increment('mcp.call.errors', { tool: toolName, error_type: error.code });
throw error;
}
}
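The timeout-rate metric implies a per-call deadline; a sketch of a wrapper that converts hung calls into explicit errors (the error message format is an assumption):

```typescript
// Reject a promise that doesn't settle within `ms`, so a hung MCP server
// surfaces as a timeout error instead of blocking the agent indefinitely.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error(`MCP call timed out after ${ms}ms`)), ms);
    promise.then(
      value => { clearTimeout(timer); resolve(value); },
      err => { clearTimeout(timer); reject(err); },
    );
  });
}

// Usage: const result = await withTimeout(client.callTool(name, params), 5000);
```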
We send MCP metrics to Sentry for alerting. When GitHub MCP error rates spiked to 12% last month, we discovered GitHub was rate-limiting us and implemented backoff.
Case study: partnership discovery
Our partnership discovery agent uses 5 MCP servers:
Workflow:
Agent: "Find 20 fintech companies using Stripe and Next.js"
1. Apollo MCP → search_companies({ industry: 'fintech', tech_stack: ['Stripe'] })
Returns: 150 companies
2. GitHub MCP → check_public_repos({ companies, tech: 'Next.js' })
Returns: 45 companies with Next.js
3. Supabase MCP → store_leads({ companies })
Stores in partnerships.leads table
4. Custom LinkedIn MCP → enrich_contacts({ companies })
Finds decision-makers
5. SendGrid MCP → send_email({ contacts, template: 'partnership_intro' })
Sends personalized outreach
The MCP architecture also let us swap Apollo for Clay when Apollo's pricing changed: agents adapted instantly without code changes.
Browse our MCP server registry and connect your first external service in under 10 minutes.
FAQ
Can one agent use multiple MCP servers at once?
Yes. Agents can connect to dozens of MCP servers simultaneously. Use dynamic discovery and tool filtering to prevent overwhelming the agent with too many options.
What happens when an MCP server is down or flaky?
Implement fallback strategies: retry with exponential backoff, degrade gracefully to cached results, or escalate to human operators for critical workflows.
How much latency does MCP add?
MCP adds 10-30ms of JSON-RPC serialization overhead. For most use cases, this is negligible compared to external API latency (100-500ms).
Can I version MCP servers?
Yes. Include version numbers in server names (github-v2) and maintain multiple versions during migration periods. Agents specify which version to use.
How do I test MCP integrations?
Mock MCP server responses in tests using tools like nock, or run local MCP servers with test credentials pointed at staging APIs.
MCP standardizes how AI agents integrate with external services, enabling dynamic tool discovery, secure authentication, and portable integrations. Use hosted servers for common services and build custom servers for proprietary APIs.