Overview
Self-hosting n8n gives GTM teams complete control over their automation infrastructure. Unlike cloud-hosted solutions, running n8n on your own servers means no per-execution pricing, full data sovereignty, and the flexibility to customize every aspect of your workflow engine. For teams running high-volume outbound campaigns or handling sensitive prospect data, this control is not optional—it is foundational.
This guide walks through the complete Docker setup for n8n, covering architecture decisions, configuration best practices, and the operational considerations that separate a hobby deployment from production infrastructure. Whether you are a GTM Engineer building your first self-hosted stack or scaling an existing deployment, you will find practical guidance for every stage.
Why Self-Host n8n for GTM Operations
The decision to self-host is not just about cost—though that matters. At scale, cloud-hosted automation platforms can charge per execution, per active workflow, or per connected integration. When you are running hands-off outbound pipelines that process thousands of leads daily, those costs compound quickly.
Cost Predictability
Self-hosting transforms variable per-execution costs into fixed infrastructure costs. A modest VPS running Docker can handle the same workload that would cost hundreds in monthly SaaS fees. For teams serious about budgeting for AI-powered outbound, this predictability is invaluable.
Data Control and Compliance
When your workflows process prospect information, CRM data, and enrichment results, you need to know exactly where that data lives. Self-hosting means your data never leaves infrastructure you control. This is particularly important for teams handling compliance-safe qualification workflows or operating under strict data residency requirements.
Customization and Integration Depth
Self-hosted n8n allows custom node development, direct database access, and integration patterns impossible with hosted solutions. You can build workflows that connect directly to your internal APIs, run custom Python scripts, or integrate with proprietary tools—no waiting for official connectors.
n8n Architecture: What You Need to Know
Before deploying, understanding n8n's architecture helps you make informed decisions about scaling and reliability.
Core Components
n8n consists of several key components that work together:
- Main process: Handles the web interface, workflow execution, and API endpoints
- Worker processes: Optional separate processes for executing workflows (queue mode)
- Database: Stores workflow definitions, credentials, and execution history
- Webhook processor: Handles incoming webhook triggers from external services
Execution Modes
n8n supports two primary execution modes:
| Mode | Best For | Scaling Approach |
|---|---|---|
| Regular (default) | Small teams, lower volume | Vertical (bigger server) |
| Queue | High volume, production | Horizontal (more workers) |
For GTM operations processing significant lead volumes, queue mode with separate workers provides better reliability and the ability to scale horizontally. This matters when you are coordinating Clay, CRM, and sequencer workflows that need to execute reliably at scale.
Docker Setup: Step by Step
Docker provides the cleanest deployment path for n8n. It handles dependencies, simplifies updates, and makes your setup reproducible across environments.
Prerequisites
Before starting, ensure you have:
- A server with at least 2GB RAM (4GB recommended for production)
- Docker and Docker Compose installed
- A domain name pointing to your server (for HTTPS)
- Basic familiarity with command line operations
Basic Docker Compose Configuration
Start with this foundation and build from it:
Create a docker-compose.yml file in your project directory with the following structure. Adjust environment variables based on your specific requirements.
Create the Project Directory
Set up a dedicated directory for your n8n deployment. This keeps configuration files, data volumes, and logs organized.
```shell
mkdir -p /opt/n8n
cd /opt/n8n
```
Configure Environment Variables
Create a .env file for sensitive configuration. Never commit this file to version control—treat it with the same care as you would API keys for your sales AI tools.
```
N8N_ENCRYPTION_KEY=your-strong-encryption-key-here
N8N_BASIC_AUTH_USER=admin
N8N_BASIC_AUTH_PASSWORD=secure-password
POSTGRES_PASSWORD=database-password
WEBHOOK_URL=https://n8n.yourdomain.com/
```
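The encryption key protects every credential stored in n8n, so it should be long and random rather than hand-typed. One way to generate a suitable value, assuming `openssl` is installed on your server:

```shell
# Generate a 32-byte random value, base64-encoded, to use as N8N_ENCRYPTION_KEY
openssl rand -base64 32
```

Paste the output into your .env file and store a copy somewhere safe: if you lose the key, stored credentials cannot be decrypted.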
Write the Docker Compose File
This configuration includes PostgreSQL for production-ready data persistence and proper networking:
```yaml
version: '3.8'

services:
  n8n:
    image: n8nio/n8n:latest
    restart: always
    ports:
      - "5678:5678"
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=${N8N_BASIC_AUTH_USER}
      - N8N_BASIC_AUTH_PASSWORD=${N8N_BASIC_AUTH_PASSWORD}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      - WEBHOOK_URL=${WEBHOOK_URL}
      - EXECUTIONS_DATA_PRUNE=true
      - EXECUTIONS_DATA_MAX_AGE=168
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      - postgres

  postgres:
    image: postgres:15
    restart: always
    environment:
      - POSTGRES_USER=n8n
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=n8n
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  n8n_data:
  postgres_data:
```
Launch the Stack
Start your n8n instance with Docker Compose:
```shell
docker-compose up -d
```
Verify both containers are running:
```shell
docker-compose ps
```
Production Configuration Essentials
A development setup is not a production setup. Before routing real GTM workflows through your instance, address these critical configuration areas.
Reverse Proxy and HTTPS
Never expose n8n directly to the internet without HTTPS. Use a reverse proxy like Nginx or Traefik. For most deployments, adding Traefik to your Docker Compose stack provides automatic Let's Encrypt certificate management:
Adding Traefik labels to your n8n service automates SSL certificate provisioning and renewal. This eliminates one of the most common failure points in self-hosted deployments.
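As a sketch of what those labels look like on the n8n service: the example below assumes a Traefik instance whose static configuration defines a certificate resolver named `letsencrypt` and an HTTPS entrypoint named `websecure` (both names come from your Traefik setup, not from n8n):

```yaml
services:
  n8n:
    labels:
      - traefik.enable=true
      - traefik.http.routers.n8n.rule=Host(`n8n.yourdomain.com`)
      - traefik.http.routers.n8n.entrypoints=websecure
      - traefik.http.routers.n8n.tls.certresolver=letsencrypt
      - traefik.http.services.n8n.loadbalancer.server.port=5678
```

With Traefik terminating TLS, you can also remove the `ports` mapping from the n8n service so the instance is only reachable through the proxy.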
Database Backups
Your n8n database contains workflow definitions, credentials, and execution history. Losing it means rebuilding everything from scratch. Implement automated backups:
```shell
# Daily PostgreSQL backup: run pg_dump inside the postgres container
# (the Compose file above does not publish port 5432 to the host)
docker-compose exec -T postgres pg_dump -U n8n n8n | gzip > /backups/n8n-$(date +%Y%m%d).sql.gz
```
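To run the backup on a schedule and keep disk usage bounded, pair a cron entry with a retention pass. The paths and the 14-day window below are illustrative; adjust them to your environment:

```shell
# Example crontab line: run an assumed backup script nightly at 02:00
# (install it with `crontab -e`; /opt/n8n/backup.sh is a hypothetical path)
echo '0 2 * * * /opt/n8n/backup.sh >> /var/log/n8n-backup.log 2>&1'

# Retention: remove compressed dumps older than 14 days
find /backups -name 'n8n-*.sql.gz' -mtime +14 -delete 2>/dev/null || true
```

Test a restore periodically as well; a backup you have never restored is a hope, not a plan.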
Execution History Management
By default, n8n stores every execution indefinitely. This consumes disk space rapidly and can slow down the interface. The configuration above includes pruning settings that retain 7 days (168 hours) of execution history. Adjust based on your debugging needs and storage capacity.
Resource Limits
Add resource constraints to prevent runaway workflows from crashing your server:
```yaml
services:
  n8n:
    deploy:
      resources:
        limits:
          memory: 2G
          cpus: '2'
```
Scaling with Queue Mode
When single-process execution becomes a bottleneck, queue mode separates workflow execution from the main n8n process. This is essential for teams running webhook-triggered outbound workflows that need consistent response times.
When to Switch to Queue Mode
Consider queue mode when you experience:
- Webhook timeouts during peak processing
- UI slowdowns while workflows execute
- Need for horizontal scaling across multiple servers
- Requirement for workflow execution redundancy
Queue Mode Architecture
Queue mode adds Redis as a message broker between the main process and worker processes. The main process queues execution requests, and workers pull from the queue independently.
Queue mode requires Redis. Add it to your Docker Compose stack and configure n8n to use it as the queue backend.
A production queue mode setup includes: one main n8n instance for the UI and webhook reception, multiple worker containers processing executions, Redis for queue management, and PostgreSQL for persistent storage.
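A sketch of the additional services in Compose, extending the stack above. This assumes n8n's documented queue-mode settings (`EXECUTIONS_MODE=queue` and the `QUEUE_BULL_REDIS_*` variables, which must also be set on the main n8n service); the worker runs the same image with the `worker` command:

```yaml
services:
  redis:
    image: redis:7
    restart: always

  n8n-worker:
    image: n8nio/n8n:latest
    restart: always
    command: worker
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      # Workers must share the main instance's encryption key and database
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
    depends_on:
      - redis
      - postgres
```

Scale horizontally by adding worker replicas, for example `docker-compose up -d --scale n8n-worker=3`.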
Integration Patterns for GTM Workflows
With n8n running, the real value comes from how you integrate it with your GTM stack. Here are patterns that work well for sales automation.
CRM Synchronization
Build workflows that keep your CRM data consistent across tools. n8n can watch for changes in one system and propagate updates automatically. This supports the kind of Clay to CRM sync that keeps qualification scores and enrichment data flowing.
Enrichment Pipelines
Chain multiple enrichment sources in sequence, with fallback logic when one provider returns empty. This implements the waterfall enrichment pattern that maximizes data coverage while controlling costs.
Event-Driven Sequences
Trigger outreach based on real-time signals: website visits, product usage events, or intent data. n8n's webhook nodes make it straightforward to receive events and route them to the appropriate event-driven sequence.
Monitoring and Maintenance
Self-hosting means self-maintaining. Build observability into your deployment from day one.
Health Checks
n8n exposes a health endpoint at /healthz. Monitor it with your preferred uptime service. For Docker deployments, add container health checks:
```yaml
healthcheck:
  test: ["CMD", "wget", "--spider", "-q", "http://localhost:5678/healthz"]
  interval: 30s
  timeout: 10s
  retries: 3
```
Log Management
Docker logs can be forwarded to centralized logging systems. For production, ship logs to a dedicated logging service rather than relying on local files. This becomes critical when debugging failed workflows in your daily outbound maintenance routines.
Update Strategy
n8n releases updates regularly. For production deployments:
- Pin to specific version tags rather than `latest`
- Test updates in a staging environment first
- Maintain database backups before any upgrade
- Review release notes for breaking changes
FAQ
What resources does a self-hosted n8n instance need?
For development and light usage, 1 CPU core and 1GB RAM works. For production GTM workflows, plan for at least 2 CPU cores and 4GB RAM. If running queue mode with workers, each worker needs its own resources—start with 1GB RAM per worker and adjust based on workflow complexity.
Does self-hosted n8n have the same features as the cloud version?
Self-hosted n8n has feature parity with the cloud version. You get all nodes, integrations, and workflow capabilities. The main differences are operational: you handle updates, backups, and scaling yourself. Some cloud-specific features like team collaboration and SSO require the Enterprise self-hosted license.
Do I need Kubernetes, or is Docker Compose enough?
A basic VPS with Docker Compose is sufficient for most GTM team deployments. Kubernetes adds operational complexity that is only worthwhile if you already have Kubernetes infrastructure or need to scale beyond what a single server can handle. Start simple and evolve your deployment as requirements grow.
What happens when a workflow execution fails?
n8n includes built-in retry logic for failed executions. Configure retry settings at the workflow level or globally via environment variables. For critical workflows, implement error handling nodes that send notifications to Slack or create tickets when execution fails.
How are credentials protected?
n8n encrypts credentials at rest using the encryption key you provide. Keep this key secure and separate from your deployment configuration. For maximum security, consider integrating with a secrets manager like HashiCorp Vault rather than storing credentials directly in n8n.
How do I migrate workflows from n8n Cloud to self-hosted?
Export workflows from cloud n8n as JSON files and import them into your self-hosted instance. Credentials must be recreated manually since they cannot be exported for security reasons. Test all workflows after migration since webhook URLs and environment-specific configurations will need updating.
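The import side can be scripted with n8n's bundled CLI. A sketch, run against the Compose stack from this guide (the file path is an example, and `n8n` before the subcommand is the Compose service name):

```shell
# Import a previously exported workflow JSON file into the self-hosted instance.
# This assumes the file was first copied into the n8n_data volume
# at /home/node/.n8n/workflows.json (an example path).
docker-compose exec n8n n8n import:workflow --input=/home/node/.n8n/workflows.json
```

The matching `n8n export:workflow --all --output=...` command works the same way if you later migrate between self-hosted instances.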
What Changes at Scale
Running n8n for a small team is straightforward. Running it as the automation backbone for enterprise GTM operations is a different challenge entirely.
At 50 workflows processing 500 leads daily, single-instance n8n handles the load fine. At 500 workflows processing 10,000 leads, you start hitting limits: execution queue backlogs, webhook timeouts, and database query latency. The data flowing through your automation engine—enrichment results, qualification scores, engagement history—needs to stay consistent across every tool in your stack.
This is where point-to-point automation reaches its limits. You can build individual workflows connecting each tool pair, but maintaining data consistency across a dozen integrations becomes exponentially complex. What you actually need is a context layer that unifies all this data—automatically syncing prospect information, company insights, and engagement signals across your entire GTM stack.
This is exactly what context platforms like Octave are designed to handle. Rather than building custom n8n workflows for every data synchronization need, Octave maintains a unified context graph that keeps CRM, enrichment, and engagement data in sync. For teams running AI-powered outbound at scale, it is the difference between managing dozens of brittle integrations and having a single source of truth that every workflow can rely on.
Conclusion
Self-hosting n8n provides GTM teams with a powerful, cost-effective automation foundation. Docker makes deployment straightforward, and the patterns covered here—from basic setup through production configuration and queue mode scaling—give you a clear path from development to production-ready infrastructure.
The key is matching your deployment complexity to your actual needs. Start with the basic Docker Compose setup, add production hardening as you validate your workflows, and scale to queue mode when execution volume demands it. Keep your runbooks and SOPs updated as your deployment evolves, and maintain regular backups regardless of deployment size.
For teams ready to move beyond individual workflow automation toward unified GTM data orchestration, the combination of self-hosted execution engines and context platforms creates infrastructure that scales with your ambitions.
