Deepseek-Thinking-Claude-3.5-Sonnet-CLINE-MCP

MCP.Pizza Chef: newideas99

Deepseek-Thinking-Claude-3.5-Sonnet-CLINE-MCP is an MCP server that integrates DeepSeek R1's advanced reasoning with Claude 3.5 Sonnet's response generation via OpenRouter. It employs a two-stage process: DeepSeek first produces structured reasoning within a 50k character context, and that reasoning is then injected into Claude's 600k character context for final response generation. The server also provides smart conversation management, handling multiple concurrent conversations while maintaining context continuity, which enables sophisticated retrieval-augmented thinking workflows.

Use This MCP Server To

  • Combine structured reasoning with large-context LLM responses
  • Maintain multi-turn conversation context across sessions
  • Integrate DeepSeek reasoning tokens into Claude 3.5 responses
  • Manage multiple concurrent conversations intelligently
  • Enable retrieval-augmented thinking workflows with LLMs
  • Use the OpenRouter API to unify access to multiple LLMs
  • Process large context windows for complex query handling

README

Deepseek-Thinking-Claude-3.5-Sonnet-CLINE-MCP

A Model Context Protocol (MCP) server that combines DeepSeek R1's reasoning capabilities with Claude 3.5 Sonnet's response generation through OpenRouter. This implementation uses a two-stage process where DeepSeek provides structured reasoning which is then incorporated into Claude's response generation.

Features

  • Two-Stage Processing:

    • Uses DeepSeek R1 for initial reasoning (50k character context)
    • Uses Claude 3.5 Sonnet for final response (600k character context)
    • Both models accessed through OpenRouter's unified API
    • Injects DeepSeek's reasoning tokens into Claude's context
  • Smart Conversation Management:

    • Detects active conversations using file modification times
    • Handles multiple concurrent conversations
    • Filters out ended conversations automatically
    • Supports context clearing when needed
  • Optimized Parameters:

    • Model-specific context limits:
      • DeepSeek: 50,000 characters for focused reasoning
      • Claude: 600,000 characters for comprehensive responses
    • Recommended settings:
      • temperature: 0.7 for balanced creativity
      • top_p: 1.0 for full probability distribution
      • repetition_penalty: 1.0 (neutral baseline; values above 1.0 penalize repetition)

Installation

Installing via Smithery

To install DeepSeek Thinking with Claude 3.5 Sonnet for Claude Desktop automatically via Smithery:

npx -y @smithery/cli install @newideas99/Deepseek-Thinking-Claude-3.5-Sonnet-CLINE-MCP --client claude

Manual Installation

  1. Clone the repository:
git clone https://github.com/yourusername/Deepseek-Thinking-Claude-3.5-Sonnet-CLINE-MCP.git
cd Deepseek-Thinking-Claude-3.5-Sonnet-CLINE-MCP
  2. Install dependencies:
npm install
  3. Create a .env file with your OpenRouter API key:
# Required: OpenRouter API key for both DeepSeek and Claude models
OPENROUTER_API_KEY=your_openrouter_api_key_here

# Optional: Model configuration (defaults shown below)
DEEPSEEK_MODEL=deepseek/deepseek-r1  # DeepSeek model for reasoning
CLAUDE_MODEL=anthropic/claude-3.5-sonnet:beta  # Claude model for responses
  4. Build the server:
npm run build

Usage with Cline

Add to your Cline MCP settings (usually in ~/.vscode/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json):

{
  "mcpServers": {
    "deepseek-claude": {
      "command": "/path/to/node",
      "args": ["/path/to/Deepseek-Thinking-Claude-3.5-Sonnet-CLINE-MCP/build/index.js"],
      "env": {
        "OPENROUTER_API_KEY": "your_key_here"
      },
      "disabled": false,
      "autoApprove": []
    }
  }
}

Tool Usage

The server provides two tools for generating and monitoring responses:

generate_response

Main tool for generating responses with the following parameters:

{
  "prompt": string,            // Required: The question or prompt
  "showReasoning"?: boolean,   // Optional: Show DeepSeek's reasoning process
  "clearContext"?: boolean,    // Optional: Clear conversation history
  "includeHistory"?: boolean   // Optional: Include Cline conversation history
}

check_response_status

Tool for checking the status of a response generation task:

{
  "taskId": string  // Required: The task ID from generate_response
}

Response Polling

The server uses a polling mechanism to handle long-running requests:

  1. Initial Request:

    • generate_response returns immediately with a task ID
    • Response format: {"taskId": "uuid-here"}
  2. Status Checking:

    • Use check_response_status to poll the task status
    • Note: Responses can take up to 60 seconds to complete
    • Status progresses through: pending → reasoning → responding → complete

Example usage in Cline:

// Initial request
const result = await use_mcp_tool({
  server_name: "deepseek-claude",
  tool_name: "generate_response",
  arguments: {
    prompt: "What is quantum computing?",
    showReasoning: true
  }
});

// Get taskId from result
const taskId = JSON.parse(result.content[0].text).taskId;

// Poll for status (may need multiple checks over ~60 seconds)
const status = await use_mcp_tool({
  server_name: "deepseek-claude",
  tool_name: "check_response_status",
  arguments: { taskId }
});

// Example status response when complete:
{
  "status": "complete",
  "reasoning": "...",  // If showReasoning was true
  "response": "..."    // The final response
}
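The polling steps above can be wrapped in a small helper. In this sketch, `checkStatus` stands in for the `check_response_status` MCP call, and the 2-second interval and 60-second deadline are illustrative choices, not values prescribed by the server:

```javascript
// Sketch: poll a task until it completes or a deadline passes.
// `checkStatus` stands in for the check_response_status MCP call.
async function waitForResponse(taskId, checkStatus, { intervalMs = 2000, timeoutMs = 60000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const status = await checkStatus(taskId); // e.g. {status: "reasoning"} → {status: "complete", response: "..."}
    if (status.status === "complete") return status;
    if (status.status === "error") throw new Error(status.error || "generation failed");
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`task ${taskId} did not complete within ${timeoutMs} ms`);
}
```

Keeping the interval well under the 60-second completion ceiling avoids adding noticeable latency once the task finishes.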

Development

For development with auto-rebuild:

npm run watch

How It Works

  1. Reasoning Stage (DeepSeek R1):

    • Uses OpenRouter's reasoning tokens feature
    • Prompt is modified to output 'done' while capturing reasoning
    • Reasoning is extracted from response metadata
  2. Response Stage (Claude 3.5 Sonnet):

    • Receives the original prompt and DeepSeek's reasoning
    • Generates final response incorporating the reasoning
    • Maintains conversation context and history
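The two stages above can be sketched as a single pipeline. Here `callModel` stands in for an OpenRouter chat-completions call, and the `<reasoning>` wrapping is an assumption about how the injection could work, not the server's exact code:

```javascript
// Sketch of the two-stage flow: DeepSeek reasons, Claude responds.
// `callModel(modelId, messages)` stands in for an OpenRouter request;
// the <reasoning> tags are an illustrative injection format.
async function generateWithReasoning(prompt, callModel) {
  // Stage 1: ask DeepSeek R1 for structured reasoning.
  const reasoning = await callModel("deepseek/deepseek-r1", [
    { role: "user", content: prompt },
  ]);

  // Stage 2: hand Claude the original prompt plus the captured reasoning.
  const response = await callModel("anthropic/claude-3.5-sonnet:beta", [
    {
      role: "user",
      content: `${prompt}\n\n<reasoning>\n${reasoning}\n</reasoning>`,
    },
  ]);
  return { reasoning, response };
}
```

Because the reasoning travels inside Claude's prompt, Claude needs no special support for DeepSeek's output format; only the server has to understand both models.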

License

MIT License - See LICENSE file for details.

Credits

Based on the RAT (Retrieval Augmented Thinking) concept by Skirano, which enhances AI responses through structured reasoning and knowledge retrieval.

This implementation specifically combines DeepSeek R1's reasoning capabilities with Claude 3.5 Sonnet's response generation through OpenRouter's unified API.

Deepseek-Thinking-Claude-3.5-Sonnet-CLINE-MCP FAQ

How does the two-stage processing work?
DeepSeek R1 performs initial reasoning with a 50k character context, then Claude 3.5 Sonnet generates the final response using a 600k character context, incorporating DeepSeek's reasoning tokens.
What models does this MCP server support?
It supports DeepSeek R1 for reasoning and Claude 3.5 Sonnet for response generation, accessed via OpenRouter's unified API.
How does it manage multiple conversations?
It detects active conversations using file modification times and can handle multiple concurrent conversations with context continuity.
Can this server be used with other LLM providers?
Yes. The model IDs are configurable through the DEEPSEEK_MODEL and CLAUDE_MODEL environment variables, so in principle any model available on OpenRouter (for example OpenAI, Anthropic, or Mistral models) can be substituted.
What is retrieval augmented thinking (RAT)?
RAT is a method combining retrieval of relevant information with structured reasoning to enhance LLM responses.
How large is the context window supported?
It supports up to 600k characters in Claude 3.5 Sonnet's context window for final response generation.
Is this MCP server suitable for real-time applications?
It is designed for interactive, multi-turn use, but response generation can take up to 60 seconds, so it suits conversational workflows better than hard real-time applications.
How does it integrate DeepSeek reasoning into Claude responses?
It injects DeepSeek's reasoning tokens directly into Claude's context to enrich response quality.