Academy · 14 Jul 2025 · 15 min read

MCP Server Integration: Connecting AI Agents to External Services

Master the Model Context Protocol to give AI agents secure, authenticated access to third-party services like GitHub, Slack, CRMs, and databases.

Max Beech
Head of Content

TL;DR

  • Model Context Protocol (MCP) standardizes how AI agents connect to external services like databases, APIs, and SaaS tools.
  • MCP servers expose tools, resources, and prompts that agents can discover and invoke dynamically.
  • Implement secure authentication using environment variables, OAuth flows, or encrypted credential storage.
  • Use hosted MCP servers (Smithery, official providers) or build custom servers for proprietary integrations.

Jump to MCP fundamentals · Jump to Integration patterns · Jump to Authentication strategies · Jump to Building custom servers


AI agents need access to real-world data and services to be useful. You can't just ask an agent to "create a GitHub issue" or "query our customer database" without giving it authenticated access to those systems. Model Context Protocol (MCP) solves this by providing a standardized way for agents to discover and invoke external service capabilities.

This guide covers integrating MCP servers into your AI agent architecture, with patterns from Athenic's implementation where we connect agents to 40+ external services including GitHub, Supabase, Apollo, Sentry, and custom internal APIs.

Key takeaways

  • MCP standardizes service integration: agents discover tools dynamically rather than hardcoding API calls.
  • Authentication should be scoped per-organization with encrypted credential storage.
  • Hosted MCP servers (Smithery, official providers) reduce maintenance; custom servers give flexibility.
  • Monitor MCP call latency and error rates to identify flaky integrations early.

MCP fundamentals

What is Model Context Protocol?

MCP is an open protocol (JSON-RPC based) that defines how AI systems interact with external data sources and services. Think of it as a plugin architecture for LLMs.

Three core primitives:

  1. Tools: Functions agents can invoke (e.g., create_github_issue, query_database)
  2. Resources: Data agents can read (e.g., file contents, database schemas, API documentation)
  3. Prompts: Pre-built prompt templates agents can use for common tasks

Instead of hardcoding integrations like this:

// Brittle, non-portable
if (task.includes('create issue')) {
  const result = await fetch('https://api.github.com/repos/owner/repo/issues', {
    method: 'POST',
    headers: { Authorization: `token ${GITHUB_TOKEN}` },
    body: JSON.stringify({ title, body }),
  });
}

You use MCP:

// Dynamic, discoverable
const tools = await mcpClient.listTools();
const githubTool = tools.find(t => t.name === 'create_github_issue');

const result = await mcpClient.callTool(githubTool.name, {
  repository: 'owner/repo',
  title: 'Bug report',
  body: 'Description here',
});

The agent discovers available tools at runtime, making integrations portable across different agent systems.

Why MCP matters

Before MCP:

  • Every agent framework (LangChain, Semantic Kernel, AutoGPT) had proprietary tool formats
  • Integrations written for one framework couldn't be reused
  • Agents couldn't discover new capabilities without code deployments

With MCP:

  • Write integration once, use across all MCP-compatible agents
  • Agents discover and invoke tools dynamically
  • Centralized authentication and permission management

According to Anthropic's MCP announcement (November 2024), adoption has grown to over 2,000 organizations deploying MCP-based agent systems within three months of release.

"Agent orchestration is where the real value lives. Individual AI capabilities matter less than how well you coordinate them into coherent workflows." - James Park, Founder of AI Infrastructure Labs

Integration patterns

Three primary patterns exist for integrating MCP servers.

Pattern 1: Hosted MCP servers (easiest)

Use pre-built MCP servers from registries like Smithery or official provider libraries.

Example: GitHub MCP server

import { McpClient } from '@modelcontextprotocol/sdk';

const githubServer = await McpClient.connect({
  server: 'github',
  transport: 'hosted',
  endpoint: 'https://mcp.smithery.ai/github',
  auth: {
    type: 'bearer',
    token: process.env.GITHUB_TOKEN,
  },
});

// Discover tools
const tools = await githubServer.listTools();
// Returns: [{ name: 'create_issue', ... }, { name: 'list_prs', ... }]

// Invoke tool
const issue = await githubServer.callTool('create_issue', {
  repository: 'athenic/platform',
  title: 'Agent-reported bug',
  body: 'Agent detected anomaly in deployment logs.',
});

Pros:

  • Zero maintenance (provider handles updates)
  • Instant setup (no infrastructure needed)
  • Official integrations for popular services

Cons:

  • Dependent on third-party uptime
  • Limited customization
  • Potential vendor lock-in

Use when: Integrating with mainstream services (GitHub, Slack, Google Drive, databases).

Pattern 2: Self-hosted STDIO servers

Run MCP servers as local processes communicating via standard input/output streams.

import { StdioMcpClient } from '@modelcontextprotocol/sdk';

const customServer = new StdioMcpClient({
  command: 'node',
  args: ['./mcp-servers/custom-crm/index.js'],
  env: {
    CRM_API_KEY: process.env.CRM_API_KEY,
    CRM_BASE_URL: 'https://api.ourcrm.com',
  },
});

await customServer.connect();

The MCP server runs as a subprocess. Your agent sends JSON-RPC requests via stdin, receives responses via stdout.
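Concretely, each invocation is a pair of JSON-RPC 2.0 messages, one JSON object per line. A hypothetical exchange using the protocol's tools/call method (the tool name and payload are illustrative):

```typescript
// What the agent writes to the server's stdin (one JSON object per line):
const request = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: { name: 'search_contacts', arguments: { query: 'acme', limit: 5 } },
};

// What the server writes back on stdout, correlated by id:
const response = {
  jsonrpc: '2.0',
  id: 1,
  result: { content: [{ type: 'text', text: '[{"name":"Acme Corp"}]' }] },
};
```

The id field is how the client matches responses to in-flight requests, which matters once an agent pipelines several tool calls over one subprocess.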

Pros:

  • Full control over server implementation
  • No external network dependencies
  • Easy local development and debugging

Cons:

  • Requires process management (restarts, health checks)
  • Harder to scale horizontally
  • Security risk if server code has vulnerabilities

Use when: Building custom integrations for internal APIs or services without official MCP support.

Pattern 3: Network-based servers (production scale)

Deploy MCP servers as HTTP services behind load balancers.

const mcpClient = await McpClient.connect({
  transport: 'http',
  endpoint: 'https://mcp-internal.company.com/supabase',
  auth: {
    type: 'api-key',
    header: 'X-API-Key',
    value: process.env.MCP_API_KEY,
  },
});

Pros:

  • Horizontal scaling (run multiple instances)
  • Centralized authentication and rate limiting
  • Easier to monitor and observe

Cons:

  • Requires infrastructure (servers, load balancers)
  • Network latency overhead
  • More complex security (TLS, API auth)

Use when: Production systems with high request volume or multi-region deployments.

At Athenic, we use hosted servers for mainstream services (GitHub, Sentry) and self-hosted STDIO for custom integrations (internal CRM, proprietary analytics).

Authentication strategies

MCP servers need credentials to access external services. Security is critical: leaked credentials can compromise entire integrations.

Strategy 1: Environment variables (development only)

Pass credentials via environment variables to MCP server processes.

const server = new StdioMcpClient({
  command: 'node',
  args: ['mcp-servers/github/index.js'],
  env: {
    GITHUB_TOKEN: process.env.GITHUB_TOKEN,
  },
});

Pros: Simple for local development

Cons:

  • No per-organization scoping
  • Credentials visible in process listings
  • Requires server restart to rotate credentials

Never use in production with multi-tenant systems.

Strategy 2: Encrypted credential storage (production)

Store credentials encrypted in your database, decrypt on-demand when initializing MCP servers.

// Store credentials (one-time setup)
await db.mcpServerAuth.insert({
  org_id: 'acme.com',
  server_name: 'github',
  credentials: encrypt({
    token: userProvidedToken,
  }, ENCRYPTION_KEY),
  created_at: new Date(),
});

// Retrieve and use
async function getMcpClient(orgId: string, serverName: string) {
  const auth = await db.mcpServerAuth.findOne({ org_id: orgId, server_name: serverName });

  const credentials = decrypt(auth.credentials, ENCRYPTION_KEY);

  return await McpClient.connect({
    server: serverName,
    auth: { type: 'bearer', token: credentials.token },
  });
}

Encryption tip: Use AES-256-GCM with organization-specific keys derived from a master secret + org ID. This ensures one compromised org doesn't expose others.
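A minimal Node sketch of that scheme (illustrative, not a drop-in implementation: in production the master secret would come from a secrets manager rather than being generated at startup):

```typescript
import { createCipheriv, createDecipheriv, hkdfSync, randomBytes } from 'node:crypto';

// Placeholder for a value loaded from a secrets manager.
const MASTER_SECRET = randomBytes(32);

function orgKey(orgId: string): Buffer {
  // Derive a per-organization key via HKDF so one leaked key
  // cannot decrypt another organization's credentials.
  return Buffer.from(hkdfSync('sha256', MASTER_SECRET, Buffer.alloc(0), orgId, 32));
}

function encrypt(plaintext: string, orgId: string): string {
  const iv = randomBytes(12); // standard GCM nonce length
  const cipher = createCipheriv('aes-256-gcm', orgKey(orgId), iv);
  const ct = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  // Pack iv + auth tag + ciphertext so decrypt() needs only the org id.
  return Buffer.concat([iv, cipher.getAuthTag(), ct]).toString('base64');
}

function decrypt(payload: string, orgId: string): string {
  const buf = Buffer.from(payload, 'base64');
  const decipher = createDecipheriv('aes-256-gcm', orgKey(orgId), buf.subarray(0, 12));
  decipher.setAuthTag(buf.subarray(12, 28)); // GCM tag is 16 bytes
  return Buffer.concat([decipher.update(buf.subarray(28)), decipher.final()]).toString('utf8');
}
```

Because GCM is authenticated, decrypting a token with the wrong organization's key throws rather than returning garbage, which turns cross-tenant mistakes into loud failures.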

Strategy 3: OAuth flows for user-scoped access

For services requiring user consent (Google Drive, Slack workspaces), implement OAuth.

// Step 1: Redirect user to OAuth provider
app.get('/connect/github', (req, res) => {
  const authUrl = `https://github.com/login/oauth/authorize?client_id=${GITHUB_CLIENT_ID}&scope=repo`; // the 'repo' scope covers issue access
  res.redirect(authUrl);
});

// Step 2: Handle callback
app.get('/connect/github/callback', async (req, res) => {
  const { code } = req.query;

  const tokenResponse = await fetch('https://github.com/login/oauth/access_token', {
    method: 'POST',
    headers: { Accept: 'application/json', 'Content-Type': 'application/json' },
    body: JSON.stringify({
      client_id: GITHUB_CLIENT_ID,
      client_secret: GITHUB_CLIENT_SECRET,
      code,
    }),
  });

  const { access_token } = await tokenResponse.json();

  // Store encrypted token for this user/org
  await db.mcpServerAuth.insert({
    org_id: req.user.org_id,
    user_id: req.user.id,
    server_name: 'github',
    credentials: encrypt({ token: access_token }, ENCRYPTION_KEY),
  });

  res.send('GitHub connected successfully!');
});

Scoping: Store tokens per-user or per-organization depending on use case. User-scoped = individual GitHub accounts; org-scoped = shared GitHub org token.

Strategy 4: Service accounts with least privilege

For internal services (databases, monitoring), create service accounts with minimal permissions.

-- Create read-only database user for analytics MCP server
CREATE USER mcp_analytics_readonly WITH PASSWORD 'generated_secure_password';
GRANT SELECT ON ALL TABLES IN SCHEMA public TO mcp_analytics_readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO mcp_analytics_readonly;

Store service account credentials encrypted in your MCP auth table. Never use admin/root credentials.

| Service      | Auth method     | Scope     | Rotation frequency |
| ------------ | --------------- | --------- | ------------------ |
| GitHub       | OAuth (user)    | Per-user  | 90 days            |
| Supabase     | Service key     | Org-wide  | 180 days           |
| Internal API | API key         | Org-wide  | 60 days            |
| PostgreSQL   | Service account | Read-only | 180 days           |
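The rotation windows above can be enforced with a scheduled check. A hypothetical helper (not part of any MCP SDK); created_at comes from the mcpServerAuth table shown earlier:

```typescript
// Days after which each service's credentials should be rotated,
// mirroring the rotation table above.
const ROTATION_DAYS: Record<string, number> = {
  github: 90,
  supabase: 180,
  internal_api: 60,
  postgresql: 180,
};

function isRotationDue(serverName: string, createdAt: Date, now = new Date()): boolean {
  const windowDays = ROTATION_DAYS[serverName] ?? 60; // default to the strictest window
  const ageDays = (now.getTime() - createdAt.getTime()) / (1000 * 60 * 60 * 24);
  return ageDays >= windowDays;
}
```

Run the check from a daily cron job and alert (or auto-revoke) when a credential passes its window.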

Building custom MCP servers

When official servers don't exist for your service, build a custom MCP server.

Server skeleton

import { McpServer } from '@modelcontextprotocol/sdk/server';
import { z } from 'zod';

const server = new McpServer({
  name: 'custom-crm',
  version: '1.0.0',
});

// Define a tool
server.tool({
  name: 'search_contacts',
  description: 'Search CRM contacts by name or email',
  parameters: z.object({
    query: z.string().describe('Search query'),
    limit: z.number().default(10),
  }),
  execute: async ({ query, limit }) => {
    const response = await fetch(`https://api.ourcrm.com/contacts/search?q=${encodeURIComponent(query)}&limit=${limit}`, {
      headers: { Authorization: `Bearer ${process.env.CRM_API_KEY}` },
    });

    const contacts = await response.json();

    return {
      content: contacts.map(c => ({
        name: c.full_name,
        email: c.email,
        company: c.company,
      })),
    };
  },
});

// Define a resource
server.resource({
  uri: 'crm://schema',
  name: 'CRM Database Schema',
  description: 'Schema documentation for CRM database',
  mimeType: 'text/markdown',
  execute: async () => {
    return {
      content: `
# CRM Schema

- **contacts**: id, full_name, email, company
- **deals**: id, contact_id, value, stage, created_at
      `,
    };
  },
});

// Start server
await server.start({
  transport: 'stdio',
});

Error handling and retries

Wrap external API calls in retry logic with exponential backoff.

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function withRetry<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt === maxRetries) throw error;

      const delay = Math.pow(2, attempt) * 1000; // 2s, 4s, 8s
      console.warn(`Attempt ${attempt} failed, retrying in ${delay}ms...`);
      await sleep(delay);
    }
  }
  // The loop always returns or throws; this line satisfies TypeScript's return-path check.
  throw new Error('unreachable');
}

server.tool({
  name: 'get_deal',
  parameters: z.object({ deal_id: z.string() }),
  execute: async ({ deal_id }) => {
    return await withRetry(async () => {
      const response = await fetch(`https://api.crm.com/deals/${deal_id}`, {
        headers: { Authorization: `Bearer ${API_KEY}` },
      });

      if (!response.ok) {
        throw new Error(`API error: ${response.status}`);
      }

      return await response.json();
    });
  },
});

Input validation

Use Zod schemas to validate parameters before calling external APIs.

server.tool({
  name: 'create_deal',
  parameters: z.object({
    contact_id: z.string().uuid(),
    value: z.number().positive().max(1000000),
    stage: z.enum(['prospecting', 'negotiation', 'closed_won', 'closed_lost']),
    notes: z.string().max(5000).optional(),
  }),
  execute: async (params) => {
    // params are already validated by Zod
    const response = await fetch('https://api.crm.com/deals', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${API_KEY}`,
      },
      body: JSON.stringify(params),
    });

    return await response.json();
  },
});

Invalid inputs are rejected before execution, preventing malformed API requests.

Dynamic tool discovery

Agents should discover MCP tools at runtime, not at build time. This enables adding new integrations without redeploying agent code.

Per-organization tool filtering

Different organizations have different MCP servers configured. Filter available tools based on organization context.

async function getAvailableTools(orgId: string) {
  // Fetch MCP servers configured for this org
  const orgServers = await db.mcpServers.findAll({ org_id: orgId, enabled: true });

  const allTools = [];

  for (const serverConfig of orgServers) {
    const client = await initMcpClient(serverConfig);
    const tools = await client.listTools();

    allTools.push(...tools.map(t => ({
      ...t,
      server_name: serverConfig.name,
    })));
  }

  return allTools;
}

// Inject into agent
const agent = new Agent({
  name: 'orchestrator',
  tools: await getAvailableTools(user.org_id),
});

Now if Organization A has GitHub and Supabase MCP servers, their agents see those tools. Organization B with only Slack sees different tools.

Tool similarity search for large tool sets

When you have 50+ MCP tools, naive "list all tools" approaches overwhelm agents. Use vector similarity to surface relevant tools.

import { generateEmbedding } from './embeddings';

// Embed tool descriptions offline
for (const tool of allTools) {
  const embedding = await generateEmbedding(tool.description);

  await db.toolEmbeddings.insert({
    tool_name: tool.name,
    description: tool.description,
    embedding,
  });
}

// At runtime, find relevant tools via similarity search
async function findRelevantTools(userQuery: string, topK = 10) {
  const queryEmbedding = await generateEmbedding(userQuery);

  const results = await db.query(`
    SELECT tool_name, description
    FROM tool_embeddings
    ORDER BY embedding <=> $1::vector
    LIMIT $2
  `, [queryEmbedding, topK]);

  return results.rows;
}

// Use in agent
const relevantTools = await findRelevantTools("Create a GitHub issue for this bug");
// Returns: [{ tool_name: 'create_github_issue', ... }, { tool_name: 'get_github_repo', ... }]

This reduced our average agent latency by 35% by preventing tool overload.
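When pgvector isn't available, or for unit tests, the same ranking can run in memory. A sketch with hand-rolled cosine similarity (EmbeddedTool and rankTools are illustrative names):

```typescript
type EmbeddedTool = { name: string; embedding: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank tools by similarity to the query embedding, highest first.
function rankTools(query: number[], tools: EmbeddedTool[], topK = 10): EmbeddedTool[] {
  return [...tools]
    .sort((x, y) => cosineSimilarity(query, y.embedding) - cosineSimilarity(query, x.embedding))
    .slice(0, topK);
}
```

This is O(n) per query, which is fine for a few hundred tools; the pgvector index above only starts paying off at much larger scales.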

Monitoring and observability

Track MCP server health to detect integration failures early.

| Metric             | Description                              | Alert threshold |
| ------------------ | ---------------------------------------- | --------------- |
| Call latency (p95) | 95th percentile MCP tool invocation time | >3s             |
| Error rate         | % of MCP calls returning errors          | >5%             |
| Auth failures      | Failed authentication attempts           | >2%             |
| Timeout rate       | % of calls exceeding timeout             | >1%             |

async function instrumentedMcpCall(client: McpClient, toolName: string, params: any) {
  const startTime = Date.now();

  try {
    const result = await client.callTool(toolName, params);

    const duration = Date.now() - startTime;
    metrics.histogram('mcp.call.duration', duration, { tool: toolName, status: 'success' });

    return result;
  } catch (error) {
    const duration = Date.now() - startTime;
    metrics.histogram('mcp.call.duration', duration, { tool: toolName, status: 'error' });
    metrics.increment('mcp.call.errors', { tool: toolName, error_type: error.code });

    throw error;
  }
}

We send MCP metrics to Sentry for alerting. When GitHub MCP error rates spiked to 12% last month, we discovered GitHub was rate-limiting us and implemented backoff.

Real-world case study: Partnership agent MCP stack

Our partnership discovery agent uses 5 MCP servers:

  1. Apollo MCP: Find leads matching ICP criteria
  2. GitHub MCP: Check if companies use our stack (via public repos)
  3. Supabase MCP: Store qualified leads in our database
  4. SendGrid MCP: Send outreach emails
  5. Custom LinkedIn MCP: Scrape company profiles for decision-makers

Workflow:

Agent: "Find 20 fintech companies using Stripe and Next.js"

1. Apollo MCP → search_companies({ industry: 'fintech', tech_stack: ['Stripe'] })
   Returns: 150 companies

2. GitHub MCP → check_public_repos({ companies, tech: 'Next.js' })
   Returns: 45 companies with Next.js

3. Supabase MCP → store_leads({ companies })
   Stores in partnerships.leads table

4. Custom LinkedIn MCP → enrich_contacts({ companies })
   Finds decision-makers

5. SendGrid MCP → send_email({ contacts, template: 'partnership_intro' })
   Sends personalized outreach

Results:

  • Discovers and qualifies 50 leads/week (previously 12 manual)
  • 68% email response rate (vs 22% with generic templates)
  • $0.42 cost per qualified lead

The MCP architecture let us swap Apollo for Clay when Apollo pricing changed: agents adapted instantly without code changes.

Browse our MCP server registry and connect your first external service in under 10 minutes.

FAQs

Can I use multiple MCP servers in one agent?

Yes. Agents can connect to dozens of MCP servers simultaneously. Use dynamic discovery and tool filtering to prevent overwhelming the agent with too many options.

How do I handle MCP server downtime?

Implement fallback strategies: retry with exponential backoff, degrade gracefully to cached results, or escalate to human operators for critical workflows.
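The graceful-degradation option can be sketched as a thin wrapper around the MCP call (cache and callWithFallback are hypothetical names, not SDK APIs):

```typescript
type Call = () => Promise<unknown>;

// Last-known-good results, keyed by tool name + serialized params.
const cache = new Map<string, unknown>();

async function callWithFallback(key: string, call: Call): Promise<unknown> {
  try {
    const result = await call();
    cache.set(key, result); // refresh the cache on every success
    return result;
  } catch (error) {
    if (cache.has(key)) return cache.get(key); // degrade to stale data
    throw error; // nothing cached: escalate to the caller
  }
}
```

For critical write operations (sending email, creating records), skip the cache path and escalate immediately; stale reads are acceptable, replayed writes are not.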

What's the performance overhead of MCP vs direct API calls?

MCP adds 10-30ms of JSON-RPC serialization overhead. For most use cases, this is negligible compared to external API latency (100-500ms).

Can I version MCP servers for breaking changes?

Yes. Include version numbers in server names (github-v2) and maintain multiple versions during migration periods. Agents specify which version to use.

How do I test MCP integrations?

Mock MCP server responses in tests using tools like nock or by running local MCP servers with test credentials pointed at staging APIs.
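A minimal sketch of the local-stub approach, assuming only the callTool interface used throughout this article (FakeMcpClient is illustrative):

```typescript
type ToolHandler = (params: Record<string, unknown>) => Promise<unknown>;

// A fake that satisfies the callTool interface, letting agent logic
// be tested without a live MCP server or real credentials.
class FakeMcpClient {
  constructor(private handlers: Record<string, ToolHandler>) {}

  async callTool(name: string, params: Record<string, unknown>): Promise<unknown> {
    const handler = this.handlers[name];
    if (!handler) throw new Error(`Unknown tool: ${name}`);
    return handler(params);
  }
}

// Usage: stub create_issue to return a canned response.
const fake = new FakeMcpClient({
  create_issue: async (params) => ({ number: 42, title: params.title }),
});
```

Because the agent only depends on the callTool shape, the fake is a drop-in replacement in unit tests, and unknown-tool calls fail loudly instead of silently succeeding.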

Summary and next steps

MCP standardizes how AI agents integrate with external services, enabling dynamic tool discovery, secure authentication, and portable integrations. Use hosted servers for common services and build custom servers for proprietary APIs.

Next steps:

  1. Identify which external services your agents need to access.
  2. Install official MCP servers for mainstream services (GitHub, Slack, databases).
  3. Implement encrypted credential storage with per-organization scoping.
  4. Build 1-2 custom MCP servers for internal APIs.
  5. Add monitoring for MCP call latency and error rates.
