openrouterai

MCP.Pizza Chef: heltonteixeira

The OpenRouter.ai MCP server is a TypeScript-based Model Context Protocol server that provides seamless integration with the OpenRouter.ai model ecosystem. It offers direct access to a wide range of AI models through a unified, type-safe interface, with automatic model validation, capability checking, and support for a configurable default model. The server includes built-in caching, rate limiting, and robust error handling to optimize performance and reliability. Designed for developers building AI-enhanced workflows and agents, it simplifies interaction with OpenRouter.ai models while keeping communication secure and efficient.

Use This MCP Server To

  • Access multiple OpenRouter.ai models via a single interface
  • Cache model information to reduce latency and API usage
  • Enforce rate limiting on model API calls
  • Validate AI model capabilities automatically
  • Configure default AI models for workflows
  • Integrate OpenRouter.ai models into AI agents
  • Handle errors gracefully during model interactions

README

OpenRouter MCP Server

A Model Context Protocol (MCP) server providing seamless integration with OpenRouter.ai's diverse model ecosystem. Access various AI models through a unified, type-safe interface with built-in caching, rate limiting, and error handling.

Features

  • Model Access

    • Direct access to all OpenRouter.ai models
    • Automatic model validation and capability checking
    • Default model configuration support
  • Performance Optimization

    • Smart model information caching (1-hour expiry)
    • Automatic rate limit management
    • Exponential backoff for failed requests (see the sketch after this list)
  • Unified Response Format

    • Consistent ToolResult structure for all responses
    • Clear error identification with isError flag
    • Structured error messages with context
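
The exponential backoff mentioned above is not spelled out in this README; the sketch below shows one common way such a retry loop is implemented. The function name, retry count, and delay constants are illustrative assumptions, not the server's actual code.

// Illustrative sketch of exponential backoff; names and constants are
// assumptions, not the server's actual implementation.
async function fetchWithBackoff(
  url: string,
  init: RequestInit,
  maxRetries = 3
): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const response = await fetch(url, init);
    // Retry on rate limiting (429) or server errors (5xx) while retries remain
    if ((response.status === 429 || response.status >= 500) && attempt < maxRetries) {
      const delayMs = 1000 * 2 ** attempt; // 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
      continue;
    }
    return response;
  }
}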

Installation

pnpm install @mcpservers/openrouterai

Configuration

Prerequisites

  1. Get your OpenRouter API key from OpenRouter Keys
  2. Choose a default model (optional)

Environment Variables

OPENROUTER_API_KEY=your-api-key-here
OPENROUTER_DEFAULT_MODEL=optional-default-model

Setup

Add to your MCP settings configuration file (cline_mcp_settings.json or claude_desktop_config.json):

{
  "mcpServers": {
    "openrouterai": {
      "command": "npx",
      "args": ["@mcpservers/openrouterai"],
      "env": {
        "OPENROUTER_API_KEY": "your-api-key-here",
        "OPENROUTER_DEFAULT_MODEL": "optional-default-model"
      }
    }
  }
}

Response Format

All tools return responses in a standardized structure:

interface ToolResult {
  isError: boolean;
  content: Array<{
    type: "text";
    text: string; // JSON string or error message
  }>;
}
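
Since every payload arrives as a JSON string inside content[0].text, a small helper can centralize unwrapping. The function below is a sketch built on the ToolResult interface above, not a utility shipped with the package.

// Sketch of a generic unwrapper for ToolResult payloads (not part of the
// package): throws on error results, parses the JSON string on success.
function unwrapToolResult<T>(result: ToolResult): T {
  const text = result.content[0]?.text ?? "";
  if (result.isError) {
    throw new Error(text);
  }
  return JSON.parse(text) as T;
}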

Success Example:

{
  "isError": false,
  "content": [{
    "type": "text",
    "text": "{\"id\": \"gen-123\", ...}"
  }]
}

Error Example:

{
  "isError": true,
  "content": [{
    "type": "text",
    "text": "Error: Model validation failed - 'invalid-model' not found"
  }]
}

Available Tools

chat_completion

Send messages to OpenRouter.ai models:

interface ChatCompletionRequest {
  model?: string;
  messages: Array<{role: "user"|"system"|"assistant", content: string}>;
  temperature?: number; // 0-2
}

// Response: ToolResult with chat completion data or error
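
For example, the arguments for a chat_completion call might look like this; the model ID is an illustrative OpenRouter-style identifier, and model can be omitted when OPENROUTER_DEFAULT_MODEL is set:

// Example chat_completion arguments; the model ID is illustrative
const chatArgs: ChatCompletionRequest = {
  model: "openai/gpt-4o", // optional when OPENROUTER_DEFAULT_MODEL is set
  messages: [
    { role: "system", content: "You are a concise assistant." },
    { role: "user", content: "Explain the Model Context Protocol in one sentence." }
  ],
  temperature: 0.7 // within the 0-2 range
};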

search_models

Search and filter available models:

interface ModelSearchRequest {
  query?: string;
  provider?: string;
  minContextLength?: number;
  capabilities?: {
    functions?: boolean;
    vision?: boolean;
  };
}

// Response: ToolResult with model list or error
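
A request combining these filters might look like the following; the query, provider, and context-length values are illustrative:

// Example search_models arguments; filter values are illustrative
const searchArgs: ModelSearchRequest = {
  query: "claude",
  provider: "anthropic",
  minContextLength: 100000,
  capabilities: { vision: true }
};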

get_model_info

Get detailed information about a specific model:

{
  model: string;           // Model identifier
}

validate_model

Check if a model ID is valid:

interface ModelValidationRequest {
  model: string;
}

// Response: 
// Success: { isError: false, valid: true }
// Error: { isError: true, error: "Model not found" }
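
A common pattern is to validate a model ID before requesting its details. The sketch below assumes an MCP client whose callTool method takes a tool name and arguments (as in the MCP TypeScript SDK); the client variable and model ID are illustrative:

// Illustrative flow: validate a model ID, then fetch its details.
// `client` is assumed to be a connected MCP client; the model ID is an example.
const modelId = "anthropic/claude-3.5-sonnet";

const validation = await client.callTool({
  name: "validate_model",
  arguments: { model: modelId }
});

if (!validation.isError) {
  const info = await client.callTool({
    name: "get_model_info",
    arguments: { model: modelId }
  });
  // info.content[0].text contains the model details as a JSON string
}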

Error Handling

The server provides structured errors with contextual information:

// Error response structure
{
  isError: true,
  content: [{
    type: "text",
    text: "Error: [Category] - Detailed message"
  }]
}

Common Error Categories:

  • Validation Error: Invalid input parameters
  • API Error: OpenRouter API communication issues
  • Rate Limit: Request throttling detection
  • Internal Error: Server-side processing failures

Handling Responses:

async function handleResponse(result: ToolResult) {
  if (result.isError) {
    // Error text follows the "Error: [Category] - Detailed message" format
    const errorMessage = result.content[0].text;
    if (errorMessage.startsWith('Error: Rate Limit')) {
      // Handle rate limiting (e.g., wait and retry)
    }
    // Other error handling
  } else {
    const data = JSON.parse(result.content[0].text);
    // Process successful response
  }
}
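
Wired up to a tool call, the handler might be used like this; the call shape is an assumption based on the MCP TypeScript SDK, not part of this package:

// Illustrative usage: send a chat_completion request and route the result
// through the handleResponse function defined above.
const result = await client.callTool({
  name: "chat_completion",
  arguments: {
    messages: [{ role: "user", content: "Hello!" }]
  }
});

await handleResponse(result as ToolResult);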

Development

See CONTRIBUTING.md for detailed information about:

  • Development setup
  • Project structure
  • Feature implementation
  • Error handling guidelines
  • Tool usage examples

# Install dependencies
pnpm install

# Build project
pnpm run build

# Run tests
pnpm test

Changelog

See CHANGELOG.md for recent updates including:

  • Unified response format implementation
  • Enhanced error handling system
  • Type-safe interface improvements

License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

openrouterai FAQ

How do I install the OpenRouter.ai MCP server?
You can install it with a package manager such as pnpm (pnpm install @mcpservers/openrouterai) or run it directly via npx, as shown in the setup instructions above.

Does the server support automatic model validation?
Yes, it automatically validates models and checks their capabilities before use.

How does caching work in this MCP server?
The server caches model information with a 1-hour expiry to reduce latency and API usage, improving performance.

Can I configure default models for my applications?
Yes, the server supports default model configuration to streamline workflow integration.

How is rate limiting handled?
Built-in rate limiting controls the frequency of API calls to prevent overuse and ensure stability.

Is error handling included?
Yes, the server includes robust error handling to manage API failures and unexpected issues.

What programming language is the server written in?
The server is implemented in TypeScript for type safety and maintainability.

Can this server be used with multiple LLM providers?
While focused on OpenRouter.ai, it can be used alongside MCP servers for other providers, such as Claude and Gemini, in MCP workflows.