Overview
Every GTM stack eventually becomes an integration project. Your CRM needs enrichment data from Clay. Your sequencer needs scoring results from your qualification pipeline. Your data warehouse needs activity logs from six different platforms. And each connection means learning another API, wiring up authentication, handling pagination, and praying the vendor doesn't push a breaking change on a Friday afternoon.
Cursor changes the economics of this work. As an AI-native IDE built on VS Code, it understands your codebase, your conventions, and the API documentation you feed it. Instead of context-switching between docs tabs, Stack Overflow threads, and your editor, you describe what you need and Cursor generates code that follows your existing patterns. For GTM Engineers who spend half their time building and maintaining integrations, that's a meaningful shift in how quickly you can connect new tools and keep existing connections reliable.
This guide walks through the full lifecycle of API integration development in Cursor: setting up project context with documentation, generating typed client code, building authentication handlers, creating reusable wrapper functions, testing your integrations, and implementing error handling patterns that survive production traffic. Every example is grounded in the kind of work GTM Engineers actually do.
Setting Up Project Context with API Docs
Cursor's code generation quality is directly proportional to the context you give it. For API integration work, that means teaching Cursor about the APIs you're connecting to before you ask it to write a single line of code.
Structuring Your Documentation Folder
Create a docs/ directory at the root of your integration project. This is where Cursor looks when it needs to understand an API's behavior, endpoints, and data shapes.
docs/
  apis/
    hubspot/
      contacts-api.md
      deals-api.md
      auth-guide.md
      example-responses/
        contact-response.json
        deal-response.json
    clay/
      tables-api.md
      webhooks.md
      example-responses/
        table-row.json
    outreach/
      prospects-api.md
      sequences-api.md
      oauth-flow.md
The key insight: include actual API response examples as JSON files. When you later ask Cursor to parse a response or build type definitions, it references these concrete examples instead of hallucinating field names. Teams running coordinated Clay, CRM, and sequencer workflows often deal with subtle field naming differences between systems. Having the real responses in your project catches these mismatches early.
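A fixture like docs/apis/hubspot/example-responses/contact-response.json might look like this (values are illustrative, trimmed to the fields the integration actually reads):

```json
{
  "id": "12345",
  "properties": {
    "email": "jane.doe@example.com",
    "firstname": "Jane",
    "lastname": "Doe",
    "company": "Acme Corp",
    "lifecyclestage": "lead"
  },
  "createdAt": "2026-01-15T10:00:00Z",
  "updatedAt": "2026-02-20T14:30:00Z"
}
```

Capture these from real API calls rather than writing them by hand, so the field names and casing match what the vendor actually returns.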
Writing a .cursorrules File for Integration Work
The .cursorrules file in your project root tells Cursor how your team writes integration code. Be specific about patterns that matter for API work:
// .cursorrules
Project: GTM Integration Service
Stack:
- TypeScript 5.x with strict mode
- Node.js 20+ with native fetch
- Zod for runtime validation
- Vitest for testing
API Integration Standards:
- All API clients must implement the BaseApiClient interface
- Use environment variables for secrets (never hardcode)
- Implement exponential backoff with jitter for retries
- Log every external API call with: endpoint, method, status, duration, request ID
- Parse all responses through Zod schemas before returning
- Handle rate limits by reading X-RateLimit-Remaining headers
- Use async generators for paginated endpoints
- Include JSDoc with @example for all public methods
- Throw typed errors (ApiError, AuthError, RateLimitError, ValidationError)
If your team uses OpenAPI or Swagger specs, drop them directly into your docs/ folder. Cursor can read these and generate clients that match the spec precisely, including proper type definitions for every endpoint.
Adding Context via @-References
When working in Cursor's chat or Composer, reference specific files with @ mentions. Instead of a vague "build me a HubSpot client," try: "Build a HubSpot contacts client following the patterns in @src/shared/base-client.ts, using the API reference in @docs/apis/hubspot/contacts-api.md." This focuses Cursor's generation on exactly the right context, producing code that fits your existing architecture.
Generating API Client Code
With your documentation indexed and rules defined, Cursor can generate typed API clients that follow your conventions from the start. The approach matters: building on a shared base client keeps your integration code consistent as your stack grows.
Building the Base HTTP Client
Start by describing your base client requirements in Cursor's Composer. This becomes the foundation every integration inherits from:
// src/shared/base-client.ts
import { z } from 'zod';
import { ValidationError } from './errors';

interface RequestOptions {
  method: 'GET' | 'POST' | 'PUT' | 'PATCH' | 'DELETE';
  path: string;
  body?: unknown;
  params?: Record<string, string>;
  headers?: Record<string, string>;
  timeout?: number;
}

export abstract class BaseApiClient {
  protected abstract baseUrl: string;
  protected abstract serviceName: string;

  protected async request<T>(
    options: RequestOptions,
    schema: z.ZodType<T>
  ): Promise<T> {
    const requestId = crypto.randomUUID();
    const startTime = Date.now();
    const url = this.buildUrl(options.path, options.params);

    try {
      const response = await fetch(url, {
        method: options.method,
        headers: {
          'Content-Type': 'application/json',
          'X-Request-ID': requestId,
          ...this.getAuthHeaders(),
          ...options.headers
        },
        body: options.body ? JSON.stringify(options.body) : undefined,
        signal: AbortSignal.timeout(options.timeout ?? 30000)
      });

      this.logRequest(options, response.status, Date.now() - startTime, requestId);

      if (!response.ok) {
        throw await this.handleErrorResponse(response, requestId);
      }

      const data = await response.json();
      return schema.parse(data);
    } catch (error) {
      if (error instanceof z.ZodError) {
        throw new ValidationError(this.serviceName, error, requestId);
      }
      throw error;
    }
  }

  protected abstract getAuthHeaders(): Record<string, string>;

  // buildUrl, logRequest, and handleErrorResponse are omitted for brevity.
  // handleErrorResponse maps 429s to RateLimitError so retry logic can
  // honor the Retry-After header.
}
The Zod validation layer is critical. APIs change their response shapes without warning, and runtime validation catches these breaking changes before bad data propagates through your field mapping logic and into your CRM.
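To see why the guard matters in isolation, here is a dependency-free sketch of the same idea (parseContact is a hypothetical stand-in for a Zod schema, not part of the client above): a parser that checks shapes at runtime turns silent drift into a loud error.

```typescript
// Hypothetical stand-in for a Zod schema: validate the response shape at
// runtime and fail loudly instead of letting bad data flow downstream.
interface ContactSummary {
  id: string;
  email: string;
}

function parseContact(data: unknown): ContactSummary {
  const d = (data ?? null) as { id?: unknown; email?: unknown } | null;
  if (d === null || typeof d.id !== 'string' || typeof d.email !== 'string') {
    throw new Error(`Unexpected contact response shape: ${JSON.stringify(data)}`);
  }
  return { id: d.id, email: d.email };
}
```

If a vendor renames `email` to `primaryEmail` tomorrow, this throws at the API boundary instead of writing blanks into your CRM.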
Generating Endpoint-Specific Methods
With your base client defined, ask Cursor to generate endpoint methods for each API you're integrating. The key is specificity in your prompt:
// src/integrations/hubspot/contacts-client.ts
import { BaseApiClient } from '../../shared/base-client';
import { z } from 'zod';

export const HubSpotContactSchema = z.object({
  id: z.string(),
  properties: z.object({
    email: z.string().email(),
    firstname: z.string().optional(),
    lastname: z.string().optional(),
    company: z.string().optional(),
    lifecyclestage: z.string().optional()
  }),
  createdAt: z.string(),
  updatedAt: z.string()
});

export type HubSpotContact = z.infer<typeof HubSpotContactSchema>;

const ContactListSchema = z.object({
  results: z.array(HubSpotContactSchema),
  paging: z.object({
    next: z.object({
      after: z.string()
    }).optional()
  }).optional()
});

export class HubSpotContactsClient extends BaseApiClient {
  protected baseUrl = 'https://api.hubapi.com/crm/v3';
  protected serviceName = 'hubspot';

  /**
   * Retrieve a single contact by ID
   * @example
   * const contact = await client.getContact('12345');
   */
  async getContact(contactId: string): Promise<HubSpotContact> {
    return this.request(
      { method: 'GET', path: `/objects/contacts/${contactId}` },
      HubSpotContactSchema
    );
  }

  /**
   * Iterate through all contacts matching filters
   * Handles pagination automatically via async generator
   */
  async *listContacts(
    properties: string[] = ['email', 'firstname', 'lastname']
  ): AsyncGenerator<HubSpotContact> {
    let after: string | undefined;

    do {
      const params: Record<string, string> = {
        limit: '100',
        properties: properties.join(',')
      };
      if (after) params.after = after;

      const response = await this.request(
        { method: 'GET', path: '/objects/contacts', params },
        ContactListSchema
      );

      for (const contact of response.results) {
        yield contact;
      }

      after = response.paging?.next?.after;
    } while (after);
  }
}
Notice how Cursor follows the patterns from .cursorrules: Zod schemas for validation, async generators for pagination, JSDoc with examples. Once you establish these conventions in one client, ask Cursor to apply the same pattern when building clients for additional APIs.
When generating your second and third API clients, reference the first one explicitly: "Generate a Salesforce contacts client following the same structure as @src/integrations/hubspot/contacts-client.ts." This keeps your integration layer uniform, making maintenance far easier as you scale your GTM engineering platform.
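The pagination mechanics generalize to any cursor-driven endpoint. A stripped-down sketch (fetchPage and the fake page data are stand-ins for a real API call) shows the shape without the HubSpot specifics:

```typescript
// Stand-in for a paginated API: each "page" carries results plus a cursor.
type Page = { results: number[]; nextCursor?: string };

const fakePages: Record<string, Page> = {
  start: { results: [1, 2], nextCursor: 'p2' },
  p2: { results: [3] },
};

async function fetchPage(cursor: string): Promise<Page> {
  return fakePages[cursor];
}

// Same pattern as listContacts: fetch → yield → advance cursor until exhausted.
async function* listAll(): AsyncGenerator<number> {
  let cursor: string | undefined = 'start';
  while (cursor) {
    const page = await fetchPage(cursor);
    yield* page.results;
    cursor = page.nextCursor;
  }
}
```

Callers simply `for await` over the generator; the cursor never leaks into calling code, which is what keeps wrapper functions clean.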
Building Authentication Handlers
Authentication is the foundation that keeps your integrations running. Get it wrong, and your entire pipeline fails silently at 3 AM. Cursor can generate solid auth patterns, but you need to guide it toward the specific flows your integrations require.
API Key Authentication
The simplest auth pattern, used by most enrichment providers and many internal services:
// src/auth/api-key-auth.ts
export class ApiKeyAuth {
  private readonly apiKey: string;
  private readonly headerName: string;
  private readonly prefix: string;

  constructor(options: {
    envVar: string;
    headerName?: string;
    prefix?: string;
  }) {
    const key = process.env[options.envVar];
    if (!key) {
      throw new Error(
        `Missing required environment variable: ${options.envVar}`
      );
    }
    this.apiKey = key;
    this.headerName = options.headerName ?? 'Authorization';
    this.prefix = options.prefix ?? 'Bearer';
  }

  getHeaders(): Record<string, string> {
    // Skip the prefix when it's empty so we never emit a leading space
    const value = this.prefix ? `${this.prefix} ${this.apiKey}` : this.apiKey;
    return { [this.headerName]: value };
  }
}

// Usage for different API styles
const clayAuth = new ApiKeyAuth({
  envVar: 'CLAY_API_KEY',
  headerName: 'Authorization',
  prefix: 'Bearer'
});

const enrichmentAuth = new ApiKeyAuth({
  envVar: 'ENRICHMENT_API_KEY',
  headerName: 'X-API-Key',
  prefix: ''
});
For teams managing multiple API keys across integrations, a centralized approach like this prevents credential sprawl. See our guide on secrets management for sales AI keys for production-grade patterns.
OAuth 2.0 with Automatic Token Refresh
Most CRM and sequencer integrations use OAuth. The critical requirement is automatic token refresh that works without human intervention:
// src/auth/oauth-manager.ts
import { AuthError } from '../shared/errors';

interface TokenSet {
  accessToken: string;
  refreshToken: string;
  expiresAt: number;
}

interface TokenStore {
  load(connectionId: string): Promise<TokenSet | null>;
  save(connectionId: string, tokens: TokenSet): Promise<void>;
  markDisconnected(connectionId: string): Promise<void>;
}

export class OAuthManager {
  private tokens: Map<string, TokenSet> = new Map();

  constructor(
    private config: {
      clientId: string;
      clientSecret: string;
      tokenUrl: string;
      store: TokenStore;
    }
  ) {}

  async getAccessToken(connectionId: string): Promise<string> {
    let tokenSet = this.tokens.get(connectionId)
      ?? await this.config.store.load(connectionId);

    if (!tokenSet) {
      throw new AuthError('oauth', `No tokens found for connection: ${connectionId}`);
    }

    // Refresh 5 minutes before expiry to avoid race conditions
    if (tokenSet.expiresAt - Date.now() < 5 * 60 * 1000) {
      tokenSet = await this.refreshTokens(connectionId, tokenSet.refreshToken);
    }

    return tokenSet.accessToken;
  }

  private async refreshTokens(
    connectionId: string,
    refreshToken: string
  ): Promise<TokenSet> {
    const response = await fetch(this.config.tokenUrl, {
      method: 'POST',
      headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
      body: new URLSearchParams({
        grant_type: 'refresh_token',
        refresh_token: refreshToken,
        client_id: this.config.clientId,
        client_secret: this.config.clientSecret
      })
    });

    if (!response.ok) {
      const error = await response.text();
      // Token revoked or expired beyond refresh
      if (response.status === 400 || response.status === 401) {
        await this.config.store.markDisconnected(connectionId);
        throw new AuthError(
          'oauth',
          `OAuth connection ${connectionId} requires re-authorization: ${error}`
        );
      }
      throw new AuthError('oauth', `Token refresh failed: ${error}`);
    }

    const data = await response.json();
    const tokenSet: TokenSet = {
      accessToken: data.access_token,
      refreshToken: data.refresh_token ?? refreshToken,
      expiresAt: Date.now() + (data.expires_in * 1000)
    };

    this.tokens.set(connectionId, tokenSet);
    await this.config.store.save(connectionId, tokenSet);
    return tokenSet;
  }
}
When asking Cursor to generate OAuth handlers, specify the exact vendor flow: "Generate an OAuth 2.0 handler for Salesforce's authorization code flow, including automatic refresh with the 5-minute buffer pattern from @src/auth/oauth-manager.ts." Cursor hallucinates less when you name the specific vendor and grant type. For a deeper dive into authentication methods, check our REST API authentication guide.
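The refresh-buffer decision itself is worth isolating into a pure function so it's trivially testable. This is a hypothetical helper, not part of the manager above, but it encodes the same 5-minute rule:

```typescript
// Refresh when the token is within the buffer window of expiry, so a request
// already in flight never races an expiring token.
const REFRESH_BUFFER_MS = 5 * 60 * 1000;

function needsRefresh(expiresAt: number, now: number = Date.now()): boolean {
  return expiresAt - now < REFRESH_BUFFER_MS;
}
```

A token expiring in one minute needs a refresh; one expiring in ten minutes does not. Pulling the comparison out of the manager makes that rule a one-line unit test instead of a mocked OAuth round trip.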
Creating Reusable Wrapper Functions
Raw API client code handles individual requests. Wrapper functions handle business logic: the actual operations your GTM workflows perform. This is where Cursor's codebase awareness shines, because wrappers connect multiple clients and data transformations.
Enrichment-to-CRM Sync Wrapper
A common GTM pattern: take enriched data from Clay and sync it to your CRM with proper field mapping and deduplication:
// src/workflows/enrich-and-sync.ts
// (crmClient, CrmContactFields, and SyncError are imported from elsewhere in the project)

interface EnrichmentResult {
  email: string;
  company: string;
  title: string;
  enrichmentScore: number;
  signals: string[];
}

interface SyncResult {
  created: number;
  updated: number;
  skipped: number;
  errors: SyncError[];
}

export async function enrichAndSync(
  enrichmentData: EnrichmentResult[],
  options: { dryRun?: boolean; minScore?: number } = {}
): Promise<SyncResult> {
  const { dryRun = false, minScore = 50 } = options;
  const result: SyncResult = { created: 0, updated: 0, skipped: 0, errors: [] };

  // Filter by minimum enrichment score
  const qualified = enrichmentData.filter(d => d.enrichmentScore >= minScore);
  result.skipped = enrichmentData.length - qualified.length;

  for (const record of qualified) {
    try {
      // Check for existing contact
      const existing = await crmClient.findContactByEmail(record.email);
      const mappedFields = mapEnrichmentToCrm(record);

      if (dryRun) {
        console.log(`[DRY RUN] Would ${existing ? 'update' : 'create'}: ${record.email}`);
        continue;
      }

      if (existing) {
        await crmClient.updateContact(existing.id, mappedFields);
        result.updated++;
      } else {
        await crmClient.createContact(mappedFields);
        result.created++;
      }
    } catch (error) {
      result.errors.push({ email: record.email, error: String(error) });
    }
  }

  return result;
}

function mapEnrichmentToCrm(data: EnrichmentResult): CrmContactFields {
  return {
    email: data.email.toLowerCase().trim(),
    company: data.company,
    job_title: data.title,
    lead_score: data.enrichmentScore,
    enrichment_signals: data.signals.join('; '),
    enriched_at: new Date().toISOString(),
    lead_source: 'clay_enrichment'
  };
}
Notice the dryRun option. When building wrappers that write to production systems, always include a dry run mode. It's the difference between confidently testing a new workflow and discovering at midnight that you just overwrote 500 CRM records.
Webhook Handler Wrappers
For real-time webhook-driven workflows, wrapper functions bridge the gap between raw webhook payloads and your downstream integrations:
// src/webhooks/handlers/clay-enrichment-handler.ts
// (ClayWebhookSchema, WebhookResponse, and processEnrichedRow are imported from elsewhere)

export async function handleClayEnrichmentWebhook(
  payload: unknown
): Promise<WebhookResponse> {
  // Validate the incoming payload
  const validated = ClayWebhookSchema.safeParse(payload);
  if (!validated.success) {
    return {
      status: 422,
      body: { error: 'Invalid payload', details: validated.error.issues }
    };
  }

  const { rows, tableId, webhookId } = validated.data;

  // Process each enriched row; allSettled isolates per-row failures
  const results = await Promise.allSettled(
    rows.map(row => processEnrichedRow(row))
  );

  const succeeded = results.filter(r => r.status === 'fulfilled').length;
  const failed = results.filter(r => r.status === 'rejected').length;

  return {
    status: 200,
    body: {
      processed: rows.length,
      succeeded,
      failed,
      tableId
    }
  };
}
Getting Cursor to Generate Wrappers Effectively
Wrappers are where vague prompts produce the worst results. Be explicit about the business logic:
Weak prompt: "Create a function to sync data between Clay and HubSpot."
Strong prompt: "Create a sync wrapper that takes an array of Clay enrichment results (schema in @docs/apis/clay/example-responses/table-row.json), maps them to HubSpot contact fields using the mapping in @src/config/field-mappings.ts, deduplicates by email, and upserts to HubSpot. Include dry run mode, minimum score filtering, and return a detailed result object with counts and any errors."
This level of specificity produces code that is far more likely to work on the first attempt, reducing the ad-hoc debugging cycles that eat into your development time.
Testing and Debugging API Calls
Integration code that works in development and fails in production is the norm, not the exception. A proper testing strategy catches the difference before your pipeline goes silent.
Unit Testing with Mocked API Responses
Use Cursor to generate test suites that cover happy paths, error scenarios, and edge cases. Start by describing the behavior, not the implementation:
// src/integrations/hubspot/__tests__/contacts-client.test.ts
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { HubSpotContactsClient, type HubSpotContact } from '../contacts-client';
import { ValidationError } from '../../../shared/errors';
// makeContact and validContactResponse are shared test helpers (not shown)

describe('HubSpotContactsClient', () => {
  let client: HubSpotContactsClient;
  let fetchMock: ReturnType<typeof vi.fn>;

  beforeEach(() => {
    fetchMock = vi.fn();
    global.fetch = fetchMock;
    client = new HubSpotContactsClient();
  });

  describe('getContact', () => {
    it('parses a valid contact response', async () => {
      fetchMock.mockResolvedValueOnce(
        new Response(JSON.stringify({
          id: '123',
          properties: {
            email: 'test@example.com',
            firstname: 'Jane',
            lastname: 'Doe'
          },
          createdAt: '2026-01-15T10:00:00Z',
          updatedAt: '2026-02-20T14:30:00Z'
        }), { status: 200 })
      );

      const contact = await client.getContact('123');
      expect(contact.id).toBe('123');
      expect(contact.properties.email).toBe('test@example.com');
    });

    it('throws ValidationError when response shape is unexpected', async () => {
      fetchMock.mockResolvedValueOnce(
        new Response(JSON.stringify({
          id: '123',
          // Missing required 'properties' field
          createdAt: '2026-01-15T10:00:00Z'
        }), { status: 200 })
      );

      await expect(client.getContact('123')).rejects.toThrow(ValidationError);
    });

    it('retries on 429 rate limit responses', async () => {
      fetchMock
        .mockResolvedValueOnce(
          new Response('Rate limited', {
            status: 429,
            headers: { 'Retry-After': '1' }
          })
        )
        .mockResolvedValueOnce(
          new Response(JSON.stringify(validContactResponse), { status: 200 })
        );

      const contact = await client.getContact('123');
      expect(fetchMock).toHaveBeenCalledTimes(2);
      expect(contact.id).toBe('123');
    });
  });

  describe('listContacts', () => {
    it('paginates through all results', async () => {
      fetchMock
        .mockResolvedValueOnce(
          new Response(JSON.stringify({
            results: [makeContact('1'), makeContact('2')],
            paging: { next: { after: 'cursor-abc' } }
          }), { status: 200 })
        )
        .mockResolvedValueOnce(
          new Response(JSON.stringify({
            results: [makeContact('3')],
            paging: {}
          }), { status: 200 })
        );

      const contacts: HubSpotContact[] = [];
      for await (const contact of client.listContacts()) {
        contacts.push(contact);
      }

      expect(contacts).toHaveLength(3);
      expect(fetchMock).toHaveBeenCalledTimes(2);
    });
  });
});
Contract Testing Against Real API Responses
Save actual API responses as fixtures and validate your Zod schemas against them. This catches schema drift before it breaks production:
// src/integrations/hubspot/__tests__/contract.test.ts
import { describe, it, expect } from 'vitest';
import { HubSpotContactSchema } from '../contacts-client';
import { HubSpotDealSchema } from '../deals-client';
import contactFixture from '../../../../docs/apis/hubspot/example-responses/contact-response.json';
import dealFixture from '../../../../docs/apis/hubspot/example-responses/deal-response.json';

describe('HubSpot API Contracts', () => {
  it('contact response matches our schema', () => {
    const result = HubSpotContactSchema.safeParse(contactFixture);
    expect(result.success).toBe(true);
  });

  it('deal response matches our schema', () => {
    const result = HubSpotDealSchema.safeParse(dealFixture);
    expect(result.success).toBe(true);
  });
});
Set up a script that pulls fresh API responses monthly and saves them as fixtures. When your contract tests break, you know the API changed before your workflows do. This is essential for maintaining reliable automated workflows that run without constant supervision.
Debugging with Cursor's Chat
When an integration breaks in production, Cursor's chat is your fastest diagnostic tool. The key is providing full context:
Paste the full error including response body
Don't just share the status code. Include the response body, headers, and the request payload that triggered it. Cursor needs this context to identify the root cause.
Reference the relevant client code
Use @-mentions to point Cursor at the specific client file: "This error comes from @src/integrations/hubspot/contacts-client.ts when calling getContact."
Describe what changed recently
Did you update a dependency? Did the vendor announce API changes? Did you modify field mappings? This narrows the search space for Cursor's analysis.
Ask Cursor to trace the data flow
For multi-step workflows where the bug could be anywhere, ask: "Trace the data transformation from the Clay webhook payload through to the Salesforce upsert and identify where the company name field gets lost."
Error Handling Patterns for Production Integrations
The difference between a prototype and a production integration is error handling. Cursor generates decent happy-path code by default, but you need to explicitly guide it toward the failure modes that matter for GTM workflows.
Typed Error Hierarchy
Define a clear error hierarchy so your wrapper functions can make intelligent decisions about what to retry, what to skip, and what to escalate:
// src/shared/errors.ts
import { z } from 'zod';

export class ApiError extends Error {
  constructor(
    public service: string,
    public statusCode: number,
    public responseBody: unknown,
    public requestId: string
  ) {
    super(`${service} API error ${statusCode}: ${JSON.stringify(responseBody)}`);
    this.name = 'ApiError';
  }

  get isRetryable(): boolean {
    return [429, 502, 503, 504].includes(this.statusCode);
  }

  get isAuthError(): boolean {
    return [401, 403].includes(this.statusCode);
  }

  get isClientError(): boolean {
    return this.statusCode >= 400 && this.statusCode < 500;
  }
}

export class RateLimitError extends ApiError {
  constructor(
    service: string,
    public retryAfter: number,
    requestId: string
  ) {
    super(service, 429, { retryAfter }, requestId);
    this.name = 'RateLimitError';
  }
}

export class AuthError extends ApiError {
  constructor(service: string, message: string, requestId?: string) {
    super(service, 401, { message }, requestId ?? 'unknown');
    this.name = 'AuthError';
  }
}

export class ValidationError extends Error {
  constructor(
    public service: string,
    public zodError: z.ZodError,
    public requestId: string
  ) {
    super(`${service} response validation failed: ${zodError.message}`);
    this.name = 'ValidationError';
  }
}
Retry with Exponential Backoff and Jitter
For high-volume operations like managing Clay rate limits and API quotas, a robust retry function prevents cascading failures:
// src/shared/retry.ts
import { ApiError, RateLimitError } from './errors';

interface RetryOptions {
  maxAttempts?: number;
  baseDelayMs?: number;
  maxDelayMs?: number;
  onRetry?: (attempt: number, error: Error, delayMs: number) => void;
}

export async function withRetry<T>(
  operation: () => Promise<T>,
  options: RetryOptions = {}
): Promise<T> {
  const {
    maxAttempts = 3,
    baseDelayMs = 1000,
    maxDelayMs = 30000,
    onRetry
  } = options;

  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (error) {
      // Don't retry non-retryable errors
      if (error instanceof ApiError && !error.isRetryable) {
        throw error;
      }
      if (attempt === maxAttempts) {
        throw error;
      }

      // Use Retry-After header if available
      let delayMs: number;
      if (error instanceof RateLimitError && error.retryAfter > 0) {
        delayMs = error.retryAfter * 1000;
      } else {
        // Exponential backoff with jitter
        delayMs = Math.min(
          baseDelayMs * Math.pow(2, attempt - 1) + Math.random() * 1000,
          maxDelayMs
        );
      }

      onRetry?.(attempt, error as Error, delayMs);
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }

  // TypeScript requires this, but we'll never reach it
  throw new Error('Retry logic error');
}
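Without the jitter term, the schedule this produces is easy to verify by hand:

```typescript
// The deterministic core of the backoff above (jitter omitted for clarity).
function backoffDelay(attempt: number, baseMs = 1000, maxMs = 30000): number {
  return Math.min(baseMs * Math.pow(2, attempt - 1), maxMs);
}

// attempts 1..6 → 1000, 2000, 4000, 8000, 16000, 30000 (capped at maxMs)
```

The random jitter (up to one second here) is then added on top so that a fleet of workers hitting the same rate limit doesn't retry in lockstep and re-trigger it.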
Circuit Breaker for Degraded APIs
When an API is consistently failing, continuing to hammer it wastes resources and can get your credentials throttled. A circuit breaker protects both sides:
// src/shared/circuit-breaker.ts
export class CircuitBreaker {
  private failures = 0;
  private lastFailure = 0;
  private state: 'closed' | 'open' | 'half-open' = 'closed';

  constructor(
    private readonly threshold: number = 5,
    private readonly resetTimeoutMs: number = 60000
  ) {}

  async execute<T>(operation: () => Promise<T>): Promise<T> {
    if (this.state === 'open') {
      if (Date.now() - this.lastFailure > this.resetTimeoutMs) {
        this.state = 'half-open';
      } else {
        throw new Error('Circuit breaker is open — API is degraded');
      }
    }

    try {
      const result = await operation();
      this.onSuccess();
      return result;
    } catch (error) {
      this.onFailure();
      throw error;
    }
  }

  private onSuccess(): void {
    this.failures = 0;
    this.state = 'closed';
  }

  private onFailure(): void {
    this.failures++;
    this.lastFailure = Date.now();
    if (this.failures >= this.threshold) {
      this.state = 'open';
    }
  }
}
This pattern is especially valuable when orchestrating multi-step workflows. If your enrichment provider is down, you want to stop early rather than process 500 records through a pipeline that will fail at the enrichment step. Teams building reliability guardrails for automated outbound should consider circuit breakers essential infrastructure.
| Error Type | Action | Example |
|---|---|---|
| 429 Rate Limit | Retry with backoff using Retry-After header | Clay API, HubSpot bulk operations |
| 401 Unauthorized | Refresh OAuth token, then retry once | Salesforce token expiry |
| 403 Forbidden | Log and alert, do not retry | Missing API scopes |
| 404 Not Found | Skip record, log for review | Deleted CRM contact |
| 422 Validation | Log payload, skip record | Invalid email format |
| 500+ Server Error | Retry with backoff, circuit break after threshold | Vendor outage |
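That decision table translates almost mechanically into a dispatcher your wrapper functions can share. A hedged sketch (the action names are illustrative, not from the codebase above):

```typescript
type ErrorAction = 'retry' | 'refresh-auth' | 'alert' | 'skip';

// One place that encodes the table: wrappers branch on the returned action
// instead of re-deriving policy from raw status codes at every call site.
function classifyHttpError(status: number): ErrorAction {
  if (status === 429 || status >= 500) return 'retry';  // backoff, circuit break
  if (status === 401) return 'refresh-auth';            // then retry once
  if (status === 403) return 'alert';                   // missing scopes: never retry
  return 'skip';                                        // 404, 422, other 4xx: log and move on
}
```

Centralizing the policy means that when a vendor starts returning a new status code, you change one function instead of auditing every integration.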
Beyond Solo Integration Work
Building individual API integrations in Cursor is efficient. You set up your docs, generate typed clients, write tests, and ship. But the real challenge for GTM teams isn't building a single integration. It's maintaining fifteen of them, keeping data consistent across all of them, and ensuring changes in one system propagate correctly to the rest.
Consider what happens as your integration portfolio grows. Your Clay enrichment pipeline writes to HubSpot. Your scoring model reads from HubSpot and writes results back. Outreach pulls contacts from HubSpot and tracks engagement. Your data warehouse aggregates everything for reporting. Each integration has its own authentication, field mappings, error handling, and sync schedule. Your Cursor-generated code handles each individual connection well. The problem is coordination.
When your product team updates the ICP definition, that change needs to ripple through scoring logic, enrichment criteria, and sequencer enrollment rules. When a lead's status changes in the CRM, downstream systems need to know immediately. When your enrichment provider returns new data, it needs to reach every system that uses it. Managing this with point-to-point integrations means maintaining an exponentially growing web of connections.
This is where the architecture shifts from individual integrations to a context layer. Instead of each tool maintaining its own version of the truth, you need a unified system that keeps all your GTM data synchronized and accessible. Platforms like Octave approach this through a context graph that maintains relationships between accounts, contacts, signals, and messaging across your entire stack. Your Cursor-built integrations become simpler because they connect to one central layer instead of negotiating with every other tool directly.
For teams exploring how this works in practice, Octave's MCP server integration means your AI coding assistant can query unified GTM context directly. Your Cursor session doesn't just have access to API docs. It has access to your ICP definitions, persona mappings, competitive positioning, and enrichment data, all resolved at runtime from a single source of truth.
FAQ
How does Cursor handle APIs with sparse or outdated documentation?
Cursor works best with documentation you explicitly add to your project. If an API's official docs are sparse, create your own reference by capturing actual request/response pairs and saving them in your docs/ folder. This gives Cursor concrete examples to work from, which produces more accurate code than relying on general training data. For APIs that change frequently, refresh your documentation fixtures monthly.
Should I use TypeScript or Python for API integration work in Cursor?
Both work well, but TypeScript has an edge for API integration work. Static types catch mismatched field names and incorrect response shapes at compile time, and libraries like Zod add runtime validation. Cursor's type inference is also stronger with TypeScript, producing more accurate code suggestions. That said, if your existing stack is Python-based, Cursor handles Python integration code effectively with Pydantic for validation.
How do I keep Cursor from hardcoding credentials in generated code?
Add explicit rules to your .cursorrules file: "Never hardcode API keys, tokens, or secrets. Always read credentials from environment variables." Also include a .env.example file in your project listing every required variable. Cursor references this and generates code that reads from process.env consistently. Always review generated code for accidental credential exposure before committing.
Can Cursor generate integration tests that call real APIs?
Yes, but use it carefully. Ask Cursor to generate integration tests that target staging or sandbox environments with unique test data identifiers and cleanup steps. Never point generated tests at production APIs. A good pattern: create a separate .env.test file with sandbox credentials and configure your test runner to use it.
What do I do when a vendor changes their API response format?
If your Zod validation catches the change (and it should), the error includes the exact schema mismatch. Paste the error and the raw API response into Cursor's chat, reference your schema file, and ask it to update the schema and any dependent code. Cursor can trace the impact across your codebase and propose changes to every affected file using Composer.
How do I get reliable code for complex multi-step workflows?
Break complex workflows into individual steps and generate each separately. Instead of "build me a Clay-to-HubSpot-to-Outreach pipeline," start with the Clay data extraction, then the transformation logic, then the HubSpot upsert, then the Outreach enrollment. Reference completed steps when generating subsequent ones. This produces more reliable code and makes debugging easier when something breaks at a specific stage.
Conclusion
Cursor transforms API integration development from tedious boilerplate work into focused engineering. The patterns covered here, from structured documentation to typed clients to production-grade error handling, represent how effective GTM Engineers build integrations that actually survive contact with production traffic.
The investment in setup pays for itself quickly. A well-configured .cursorrules file and organized documentation folder mean every subsequent integration follows your team's established patterns. Zod schemas catch API changes before they break workflows. Typed error hierarchies let your wrapper functions make intelligent retry decisions. Circuit breakers protect your systems when vendor APIs degrade.
Start with one integration end-to-end: documentation, client, authentication, wrapper, tests, and error handling. Refine your patterns on that first integration, then let Cursor propagate those patterns across your stack. The goal isn't just faster code generation. It's building an integration layer that your team can maintain and extend without accumulating technical debt that slows everything down six months from now.
