Overview
The Model Context Protocol (MCP) is transforming how AI coding assistants interact with external tools and data sources. If you're a GTM engineer using Cursor IDE, configuring MCP servers unlocks capabilities that go far beyond basic code completion—think direct CRM queries, live enrichment data, and automated research workflows all accessible from your editor.
This guide walks through the complete setup process for MCP servers in Cursor, from basic configuration to advanced multi-server architectures. Whether you're building AI pipelines that run themselves or just want to pull prospect data without leaving your IDE, you'll find the practical steps here.
What Is MCP and Why Does It Matter for Cursor?
MCP (Model Context Protocol) is an open standard that lets AI models connect to external data sources and tools through a unified interface. Instead of copying and pasting data into your prompts, MCP servers expose resources, tools, and prompts that the AI can access directly.
For Cursor users, this means:
- Live data access: Query your CRM, databases, or APIs without switching windows
- Tool execution: Run scripts, fetch web content, or trigger webhooks from within the AI context
- Persistent context: MCP servers maintain state across conversations, so you don't lose context between sessions
- Custom integrations: Build your own servers to connect proprietary systems
For teams doing automated research, scoring, and email copy generation, MCP bridges the gap between your development environment and your GTM stack.
Prerequisites
Before configuring MCP servers, ensure you have:
| Requirement | Version | Notes |
|---|---|---|
| Cursor IDE | 0.43+ | MCP support added in late 2024 |
| Node.js | 18+ | Required for most MCP servers |
| Python | 3.10+ | Optional, for Python-based servers |
| npx or uvx | Latest | For running server packages |
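Since most MCP servers run on Node.js, it's worth confirming your runtime meets the version floor in the table before debugging anything else. A quick sketch:

```javascript
// Check that the local Node.js meets the assumed 18+ requirement.
const major = Number(process.versions.node.split(".")[0]);

if (major >= 18) {
  console.log(`Node.js ${process.versions.node}: OK`);
} else {
  console.log(`Node.js ${process.versions.node} is too old for most MCP servers`);
}
```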
While VS Code also supports MCP through extensions, Cursor has native MCP integration in its AI features. The configuration files are similar but stored in different locations.
Basic MCP Configuration in Cursor
Cursor stores MCP configuration in a JSON file. The location depends on your operating system and whether you want project-specific or global settings.
Configuration File Locations
| Scope | macOS/Linux | Windows |
|---|---|---|
| Global | ~/.cursor/mcp.json | %APPDATA%\Cursor\mcp.json |
| Project | .cursor/mcp.json | .cursor\mcp.json |
Basic Configuration Structure
The configuration file follows this structure:
```json
{
  "mcpServers": {
    "server-name": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/directory"],
      "env": {
        "API_KEY": "your-api-key"
      }
    }
  }
}
```
Each server entry includes:
- command: The executable to run (npx, uvx, node, python, etc.)
- args: Command-line arguments passed to the server
- env: Environment variables for API keys and configuration
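Because the file is plain JSON, you can inspect it programmatically as a sanity check. A sketch (the `listServers` helper is ours, not part of any SDK):

```javascript
// Hypothetical helper: parse an mcp.json document and summarize
// the servers it defines as { name, command } pairs.
function listServers(configText) {
  const config = JSON.parse(configText);
  const servers = config.mcpServers ?? {};
  return Object.entries(servers).map(([name, def]) => ({
    name,
    command: [def.command, ...(def.args ?? [])].join(" "),
  }));
}

// Example input matching the structure shown above
const sample = JSON.stringify({
  mcpServers: {
    filesystem: {
      command: "npx",
      args: ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
    },
  },
});

console.log(listServers(sample));
// → [ { name: 'filesystem', command: 'npx -y @modelcontextprotocol/server-filesystem /tmp' } ]
```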
Step-by-Step Setup Guide
Let's walk through setting up your first MCP server in Cursor.
Create the Configuration File
Open your terminal and create the global configuration directory and file:
```bash
# macOS/Linux
mkdir -p ~/.cursor
touch ~/.cursor/mcp.json
```

```powershell
# Windows (PowerShell)
New-Item -ItemType Directory -Force -Path "$env:APPDATA\Cursor"
New-Item -ItemType File -Path "$env:APPDATA\Cursor\mcp.json"
```
Add Your First Server
Start with the filesystem server—it's the simplest way to verify your setup works. Edit mcp.json:
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/yourname/projects"
      ]
    }
  }
}
```
Replace /Users/yourname/projects with a directory you want the AI to access.
Restart Cursor
Close and reopen Cursor completely. MCP servers are loaded at startup, so configuration changes require a restart.
Verify the Connection
Open Cursor's AI chat and ask: "What MCP tools do you have access to?" The AI should list the filesystem tools if configured correctly.
Test the Server
Try a simple command: "List the files in my projects directory." The AI should use the MCP filesystem server to return actual directory contents.
Popular MCP Servers for GTM Engineering
Beyond the basic filesystem server, several MCP servers are particularly useful for GTM engineering workflows.
Web and Research Servers
| Server | Package | Use Case |
|---|---|---|
| Fetch | @modelcontextprotocol/server-fetch | Retrieve web content and APIs |
| Puppeteer | @modelcontextprotocol/server-puppeteer | Browser automation and scraping |
| Brave Search | @modelcontextprotocol/server-brave-search | Web search integration |
Database and CRM Servers
| Server | Package | Use Case |
|---|---|---|
| PostgreSQL | @modelcontextprotocol/server-postgres | Direct database queries |
| SQLite | @modelcontextprotocol/server-sqlite | Local database access |
| Airtable | @anthropic/mcp-server-airtable | Airtable base access |
For teams working on CRM sync workflows, the PostgreSQL server lets you query Salesforce or HubSpot data warehouses directly from Cursor.
Example Multi-Server Configuration
Here's a production-ready configuration combining multiple servers:
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/gtm-projects"]
    },
    "fetch": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-fetch"]
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://user:pass@host:5432/db"]
    },
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {
        "BRAVE_API_KEY": "your-brave-api-key"
      }
    }
  }
}
```

Note that the reference PostgreSQL server takes its connection string as a positional argument rather than an environment variable.
Security Best Practices
MCP servers can access sensitive systems. Follow these practices to maintain security, especially when building systems that handle API keys and credentials.
Environment Variables
Never hardcode API keys in mcp.json. Instead, reference environment variables:
```json
{
  "mcpServers": {
    "my-server": {
      "command": "node",
      "args": ["server.js"],
      "env": {
        "API_KEY": "${MYSERVICE_API_KEY}"
      }
    }
  }
}
```
Then set the variable in your shell profile or a .env file that's excluded from version control.
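If your Cursor build doesn't interpolate `${VAR}` placeholders itself, a thin launcher script can expand them from the parent environment before spawning the server. A minimal sketch (the `expandEnv` helper is hypothetical, not a Cursor or SDK API):

```javascript
// Hypothetical helper: substitute ${VAR} placeholders in an env block
// with values from a source environment (defaults to process.env).
function expandEnv(envBlock, source = process.env) {
  const out = {};
  for (const [key, value] of Object.entries(envBlock)) {
    out[key] = value.replace(/\$\{(\w+)\}/g, (_, name) => source[name] ?? "");
  }
  return out;
}

// Example: resolve the API key from a named environment variable
const resolved = expandEnv(
  { API_KEY: "${MYSERVICE_API_KEY}" },
  { MYSERVICE_API_KEY: "sk-test-123" }
);
console.log(resolved); // { API_KEY: 'sk-test-123' }
```

From there, a launcher could call `child_process.spawn(command, args, { env: { ...process.env, ...expandEnv(envBlock) } })` so the server process sees the resolved values while the config file on disk never contains a secret.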
Principle of Least Privilege
- Filesystem access: Only expose directories the AI genuinely needs
- Database permissions: Use read-only credentials when possible
- API scopes: Request minimal permissions for third-party integrations
Use project-specific .cursor/mcp.json files to limit server access per project. A Clay integration project doesn't need access to your production database credentials.
Troubleshooting Common Issues
When MCP servers fail to connect, these are the most common culprits.
Server Not Appearing in Cursor
- Check JSON syntax: Use a JSON validator—missing commas break the entire file
- Verify the path: Ensure your mcp.json is in the correct location
- Restart completely: Close all Cursor windows, wait 5 seconds, reopen
- Check Cursor logs: Open the Output panel and look for MCP-related errors
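A tiny script can catch the two most common configuration failures, malformed JSON and a missing `mcpServers` key, before you bother restarting Cursor. A sketch (the `validateMcpConfig` helper is ours):

```javascript
// Sketch: validate an mcp.json document before loading it into Cursor.
function validateMcpConfig(text) {
  let config;
  try {
    config = JSON.parse(text);
  } catch (err) {
    return `invalid JSON: ${err.message}`;
  }
  if (typeof config.mcpServers !== "object" || config.mcpServers === null) {
    return "missing mcpServers object";
  }
  return "ok";
}

console.log(validateMcpConfig('{"mcpServers": {}}'));  // ok
console.log(validateMcpConfig('{"mcpServers": {},}')); // invalid JSON (trailing comma)
```

In practice you would read the file with `fs.readFileSync` and run the check against its contents.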
Server Crashes on Startup
```bash
# Test the server manually
npx -y @modelcontextprotocol/server-filesystem /your/path

# Check for missing dependencies
npm install -g @modelcontextprotocol/server-filesystem
```
Permission Denied Errors
On macOS, you may need to grant Cursor Full Disk Access in System Preferences if accessing directories outside your home folder.
Environment Variables Not Loading
Cursor may not inherit shell environment variables. Explicitly define them in the env block or use absolute paths to credential files.
Building Custom MCP Servers
Pre-built servers cover common use cases, but GTM teams often need custom integrations. Here's a minimal Node.js server template:
```javascript
// my-gtm-server.js
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "my-gtm-server", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// Advertise the tools this server exposes
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [{
    name: "lookup_company",
    description: "Look up company data from our enrichment API",
    inputSchema: {
      type: "object",
      properties: {
        domain: { type: "string", description: "Company domain" }
      },
      required: ["domain"]
    }
  }]
}));

// Dispatch tool invocations
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "lookup_company") {
    const { domain } = request.params.arguments;
    // Your enrichment logic here
    const data = await fetchCompanyData(domain);
    return { content: [{ type: "text", text: JSON.stringify(data) }] };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

const transport = new StdioServerTransport();
await server.connect(transport);
```

Note that `setRequestHandler` takes the request schema objects exported from the SDK's `types.js`, not method-name strings.
Register it in mcp.json:
```json
{
  "mcpServers": {
    "my-gtm-server": {
      "command": "node",
      "args": ["/path/to/my-gtm-server.js"]
    }
  }
}
```
This approach works well for teams building reusable AI workflows that need to query internal systems.
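Because the transport is plain JSON-RPC over stdio, you can smoke-test a custom server outside Cursor by piping request frames into its stdin. A sketch of how those frames are built (the `frame` helper is ours; the `initialize` params follow the MCP spec, and the protocol version string is an assumption that may need updating):

```javascript
// Sketch: build JSON-RPC request frames the way an MCP client would.
function frame(id, method, params = {}) {
  return JSON.stringify({ jsonrpc: "2.0", id, method, params });
}

// An initialize request must precede any tools/* call
console.log(frame(1, "initialize", {
  protocolVersion: "2024-11-05",
  capabilities: {},
  clientInfo: { name: "smoke-test", version: "0.0.1" },
}));
console.log(frame(2, "tools/list"));
```

Piping these frames, one per line, into the server's stdin is a quick way to see raw responses without involving Cursor at all.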
The Context Problem at Scale
Setting up MCP servers for a single developer is straightforward. Managing them across a team of 10 GTM engineers gets complicated fast. Each person has different API keys, different directory structures, and different server configurations.
The challenges compound when you consider:
- Shared credentials need secure distribution
- Server configurations drift between team members
- Updates to enrichment logic require manual propagation
- No centralized view of what data the AI is actually accessing
What teams actually need is a unified context layer that handles credential management, maintains consistent configurations, and ensures everyone's AI assistant has access to the same up-to-date data. This is exactly what platforms like Octave provide—instead of manually configuring MCP servers for each tool in your stack, Octave maintains a unified context graph that keeps your GTM data synchronized and accessible. For teams running Clay, CRM, and sequencer workflows at volume, it eliminates the configuration sprawl that makes MCP hard to maintain.
FAQ
**Does MCP work on Cursor's free plan?**
Yes, MCP server support is available on all Cursor tiers. However, the AI features that use MCP tools require Cursor Pro or Business for full functionality.

**How many MCP servers can I run at once?**
There's no hard limit, but each server consumes system resources. Most setups work well with 3-5 servers. If you need more, consider building a single custom server that aggregates multiple data sources.

**Can Cursor's agent mode use MCP tools?**
Yes, and this is where MCP shines. Agent mode can autonomously use MCP tools to gather information, modify files, and execute multi-step workflows without manual intervention.

**How should teams share MCP configurations?**
Commit .cursor/mcp.json to your repository for project-specific servers. Use environment variables for credentials so each team member can set their own keys. For team-wide consistency, document your standard server stack in your engineering wiki.

**How do MCP servers differ from Cursor's @-commands?**
Cursor's @-commands (like @web or @docs) are built-in shortcuts. MCP servers extend this with custom tools—you could build a @company-lookup command that queries your specific enrichment stack.

**Do MCP servers send my data over the network?**
MCP servers run locally and communicate via stdio (standard input/output), not network requests. Data doesn't leave your machine unless the server explicitly makes external API calls. Always audit servers before installing them.
Conclusion
MCP servers transform Cursor from a smart code editor into a connected GTM development environment. The setup isn't complicated—create a JSON file, add your servers, restart Cursor—but the impact on your workflow is substantial.
Start with the filesystem and fetch servers to get comfortable with the basics. Then add database connections as your workflows mature. For teams serious about productionizing AI for sales, custom MCP servers that connect to your specific enrichment and outreach tools become essential infrastructure.
The key is starting simple. Get one server working, verify it with a basic query, then expand. The patterns here scale from individual experiments to team-wide tooling—you just need to take the first step.
