n8n_MCP_server_complete

MCP.Pizza Chef: dopehunter

The n8n_MCP_server_complete is a fully featured MCP server that integrates n8n workflow automation with AI agents via the Model Context Protocol. It lets LLMs list, view, execute, and monitor n8n workflows, pass parameters to them dynamically, and manage workflow executions. The server exposes a standardized MCP-compatible interface, enabling AI-driven workflow orchestration and automation within environments like Cursor.

Use This MCP Server To

  • List available n8n workflows for AI-driven automation
  • Execute n8n workflows with dynamic parameters from LLMs
  • Monitor real-time execution status of n8n workflows
  • Retrieve detailed information about specific workflows
  • Integrate n8n workflow management into AI agent environments
  • Automate complex task sequences using n8n via MCP
  • Enable AI agents to trigger and control workflow executions

README

n8n MCP Server

A Model Context Protocol (MCP) server that lets LLMs and AI agents manage n8n workflows directly through MCP.


Features

  • List available workflows from n8n
  • View workflow details
  • Execute workflows
  • Monitor workflow executions
  • Pass parameters to workflows
  • MCP-compatible interface for AI agents

Getting Started

Quick Start

  1. Install the package

    npm install @dopehunter/n8n-mcp-server
  2. Create a .env file

    cp .env.example .env
  3. Configure your n8n connection by editing the .env file and setting:

    • N8N_BASE_URL: URL to your n8n instance (e.g., http://localhost:5678/api)
    • N8N_API_KEY: Your n8n API key (generate this in n8n settings)
  4. Start the server

    npm start
  5. Test the server

    curl -X POST http://localhost:3000/mcp -H "Content-Type: application/json" \
      -d '{"jsonrpc":"2.0","id":"1","method":"mcp.tools.list","params":{}}'

Common Issues and Troubleshooting

  • Connection Refused Errors: Make sure your n8n instance is running and accessible at the URL specified in N8N_BASE_URL
  • API Key Issues: Verify your n8n API key is correct and has appropriate permissions
  • Docker Issues: Ensure Docker is running before attempting to build or run the Docker image

For more detailed troubleshooting, see the Troubleshooting Guide.

Components

Tools

  • n8n_list_workflows

    • List all workflows in the n8n instance
    • Input: None
  • n8n_get_workflow

    • Get details of a specific workflow
    • Input: workflowId (string, required): ID of the workflow to retrieve
  • n8n_execute_workflow

    • Execute an n8n workflow
    • Inputs:
      • workflowId (string, required): ID of the workflow to execute
      • data (object, optional): Data to pass to the workflow
  • n8n_get_executions

    • Get execution history for a workflow
    • Inputs:
      • workflowId (string, required): ID of the workflow to get executions for
      • limit (number, optional): Maximum number of executions to return
  • n8n_activate_workflow

    • Activate a workflow
    • Input: workflowId (string, required): ID of the workflow to activate
  • n8n_deactivate_workflow

    • Deactivate a workflow
    • Input: workflowId (string, required): ID of the workflow to deactivate
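The tool inputs above map directly onto a JSON-RPC tool-call payload. The sketch below invokes `n8n_execute_workflow`; note that the `mcp.tools.call` method name is an assumption modeled on the `mcp.tools.list` call from the Quick Start, and `wf_123` is a placeholder workflow ID:

```python
import json

# Hypothetical request body for invoking the n8n_execute_workflow tool.
# The tool name and its inputs (workflowId, data) come from the Tools
# list above; "mcp.tools.call" is an assumed method name and wf_123 a
# placeholder ID.
request = {
    "jsonrpc": "2.0",
    "id": "2",
    "method": "mcp.tools.call",
    "params": {
        "name": "n8n_execute_workflow",
        "arguments": {
            "workflowId": "wf_123",        # required: workflow to execute
            "data": {"customer": "acme"},  # optional: input passed to the workflow
        },
    },
}
body = json.dumps(request)
```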

Prerequisites

  • Node.js (v14+)
  • n8n instance with API access
  • An LLM or AI agent that supports the Model Context Protocol

Configuration Options

Docker Configuration

{
  "mcpServers": {
    "n8n": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "--init", "-e", "N8N_API_KEY=$N8N_API_KEY", "-e", "N8N_BASE_URL=$N8N_BASE_URL", "mcp/n8n-mcp-server"]
    }
  }
}

NPX Configuration

{
  "mcpServers": {
    "n8n": {
      "command": "npx",
      "args": ["-y", "@dopehunter/n8n-mcp-server"]
    }
  }
}

Installation

NPM

npm install @dopehunter/n8n-mcp-server

Direct Usage with npx

npx @dopehunter/n8n-mcp-server

From Source

git clone https://github.com/dopehunter/n8n_MCP_server_complete.git
cd n8n_MCP_server_complete
npm install
cp .env.example .env
# Edit the .env file with your n8n API details

Development

Start the development server:

npm run start:dev

Build the project:

npm run build

Run tests:

npm test

Usage With Claude or Other LLMs

  1. Start the MCP server:

    npm start
    
  2. Configure your LLM client to use the MCP server:

    • For Claude Desktop, use the configuration from the "Configuration Options" section.
    • For other clients, point to the server URL (e.g., http://localhost:3000/mcp).
  3. Your LLM can now use n8n workflows directly through MCP commands.

Building Docker Image

docker build -t mcp/n8n-mcp-server .
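The built image can then be run with the same environment variables used in the Docker configuration shown earlier, for example:

```shell
# Run the freshly built image, passing the n8n connection settings
# through the environment (variable names from the .env configuration).
docker run -i --rm --init \
  -e N8N_API_KEY="$N8N_API_KEY" \
  -e N8N_BASE_URL="$N8N_BASE_URL" \
  mcp/n8n-mcp-server
```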

API Documentation

See the API Documentation for details on the available MCP functions.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

This project is licensed under the ISC License.

n8n_MCP_server_complete FAQ

How do I install the n8n MCP server?
Install it via npm with `npm install @dopehunter/n8n-mcp-server`, then configure your .env file with your n8n URL and API key.
How does the server authenticate with n8n?
It uses an API key configured in the .env file to securely connect to your n8n instance.
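As a rough sketch of what such a call looks like: n8n's public API expects the key in an `X-N8N-API-KEY` header. The environment variable names match this server's .env configuration; the `/workflows` path and the exact internals of this server are assumptions:

```python
import os
import urllib.request

# Sketch of an authenticated request to the n8n API. n8n reads the key
# from the X-N8N-API-KEY header; env variable names match the server's
# .env file. The /workflows route is an assumption for illustration.
base_url = os.environ.get("N8N_BASE_URL", "http://localhost:5678/api")
api_key = os.environ.get("N8N_API_KEY", "placeholder-key")

req = urllib.request.Request(
    f"{base_url}/workflows",
    headers={"X-N8N-API-KEY": api_key, "Accept": "application/json"},
)
# urllib.request.urlopen(req) would then return the workflow list as JSON.
```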
Can I pass parameters to workflows when executing them?
Yes, the server supports passing parameters dynamically to workflows during execution.
Is this MCP server compatible with multiple LLM providers?
Yes, it supports integration with various LLMs including OpenAI, Anthropic Claude, and Google Gemini through the MCP interface.
How can I monitor workflow execution status?
The server provides real-time monitoring of workflow executions accessible via the MCP interface.
Does this server support listing all workflows available in n8n?
Yes, it can list all available workflows from your connected n8n instance.
What environment variables are required to configure the server?
You need to set N8N_BASE_URL and N8N_API_KEY in your .env file to connect to your n8n instance.
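A minimal .env, using the example values from the Quick Start (the API key is a placeholder):

```shell
# .env — connection settings for the n8n MCP server
N8N_BASE_URL=http://localhost:5678/api
N8N_API_KEY=your-n8n-api-key
```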
Can this MCP server be used in production environments?
Yes, it is designed for seamless integration and reliable operation in production workflow automation setups.