deepseek-thinker-mcp

MCP.Pizza Chef: ruixingshi

Deepseek Thinker MCP Server provides structured reasoning content from Deepseek's Chain-of-Thought (CoT) models to MCP-enabled AI clients like Claude Desktop. It supports dual modes: accessing Deepseek's API service or running locally via an Ollama server, enabling flexible integration. This server captures and delivers Deepseek's focused reasoning outputs, enhancing AI workflows with transparent thought processes and structured reasoning data.

Use This MCP Server To

  • Integrate Deepseek reasoning into MCP-enabled AI clients
  • Access Deepseek's Chain-of-Thought via API or local server
  • Enable transparent AI reasoning in applications
  • Support multi-model reasoning workflows with OpenAI and Ollama
  • Provide structured reasoning outputs for complex queries
  • Enhance AI assistants with Deepseek's thought process data

README

Deepseek Thinker MCP Server

An MCP (Model Context Protocol) server that provides Deepseek reasoning content to MCP-enabled AI clients, such as Claude Desktop. It supports access to Deepseek's thought processes either through the Deepseek API service or from a local Ollama server.

Core Features

  • 🤖 Dual Mode Support
    • OpenAI API mode support
    • Ollama local mode support
  • 🎯 Focused Reasoning
    • Captures Deepseek's thinking process
    • Provides reasoning output

Available Tools

get-deepseek-thinker

  • Description: Perform reasoning using the Deepseek model
  • Input Parameters:
    • originPrompt (string): User's original prompt
  • Returns: Structured text response containing the reasoning process
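
For illustration, here is a minimal sketch of how an MCP client could invoke this tool using the official TypeScript SDK. This is not part of the project itself: the client name and prompt are placeholders, the transport settings mirror the Claude Desktop configuration shown below, and the assumption that the reasoning arrives in the result's content array follows from the tool description above.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the server over stdio, mirroring the Claude Desktop config below.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "deepseek-thinker-mcp"],
  env: { API_KEY: "<Your API Key>", BASE_URL: "<Your Base URL>" },
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Invoke the tool with the user's original prompt.
const result = await client.callTool({
  name: "get-deepseek-thinker",
  arguments: { originPrompt: "Why is the sky blue?" },
});

// The reasoning process is returned as structured text content.
console.log(result.content);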

Environment Configuration

OpenAI API Mode

Set the following environment variables:

API_KEY=<Your OpenAI API Key>
BASE_URL=<API Base URL>

Ollama Mode

Set the following environment variable:

USE_OLLAMA=true
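
The README does not show how the server chooses between the two modes. A plausible sketch, assuming it simply branches on USE_OLLAMA and that an Ollama server is already running locally on its default port (the function name createDeepseekClient is hypothetical; API_KEY and BASE_URL are the variables documented above):

import OpenAI from "openai";

// Hypothetical sketch of mode selection from the documented variables.
function createDeepseekClient(): OpenAI {
  if (process.env.USE_OLLAMA === "true") {
    // Ollama exposes an OpenAI-compatible endpoint on port 11434 by default.
    return new OpenAI({
      baseURL: "http://localhost:11434/v1",
      apiKey: "ollama", // Ollama ignores the key, but the SDK requires one.
    });
  }
  // OpenAI API mode: point the SDK at the endpoint from BASE_URL.
  return new OpenAI({
    baseURL: process.env.BASE_URL,
    apiKey: process.env.API_KEY,
  });
}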

Usage

Integration with an AI client, such as Claude Desktop

Add the following configuration to your claude_desktop_config.json:

{
  "mcpServers": {
    "deepseek-thinker": {
      "command": "npx",
      "args": [
        "-y",
        "deepseek-thinker-mcp"
      ],
      "env": {
        "API_KEY": "<Your API Key>",
        "BASE_URL": "<Your Base URL>"
      }
    }
  }
}

Using Ollama Mode

{
  "mcpServers": {
    "deepseek-thinker": {
      "command": "npx",
      "args": [
        "-y",
        "deepseek-thinker-mcp"
      ],
      "env": {
        "USE_OLLAMA": "true"
      }
    }
  }
}

Local Server Configuration

{
  "mcpServers": {
    "deepseek-thinker": {
      "command": "node",
      "args": [
        "/your-path/deepseek-thinker-mcp/build/index.js"
      ],
      "env": {
        "API_KEY": "<Your API Key>",
        "BASE_URL": "<Your Base URL>"
      }
    }
  }
}

Development Setup

# Install dependencies
npm install

# Build project
npm run build

# Run service
node build/index.js

FAQ

What if I get a response like "MCP error -32001: Request timed out"?

This error occurs when the Deepseek API responds too slowly, or when the reasoning output is long enough that the MCP request exceeds its timeout before the response completes.
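
When the server runs inside Claude Desktop, the timeout is managed by the client itself. If you call the tool programmatically, however, the MCP TypeScript SDK accepts a per-request timeout (the default is 60 seconds). A sketch, reusing the client from the earlier example:

import { CallToolResultSchema } from "@modelcontextprotocol/sdk/types.js";

// Allow up to five minutes for a long chain-of-thought, instead of the
// SDK's default 60-second request timeout.
const result = await client.callTool(
  { name: "get-deepseek-thinker", arguments: { originPrompt: "..." } },
  CallToolResultSchema,
  { timeout: 300_000 },
);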

Tech Stack

  • TypeScript
  • @modelcontextprotocol/sdk
  • OpenAI API
  • Ollama
  • Zod (parameter validation)

License

This project is licensed under the MIT License. See the LICENSE file for details.

deepseek-thinker-mcp FAQ

How do I switch between OpenAI API mode and Ollama local mode?
Set USE_OLLAMA=true to use a local Ollama server, or provide API_KEY and BASE_URL to use the Deepseek API service, as described under Environment Configuration above.

What kind of reasoning output does this server provide?
It delivers structured Chain-of-Thought reasoning content from Deepseek models, allowing clients to access the detailed thought process behind AI responses.

Can this server be used with AI clients other than Claude Desktop?
Yes, any MCP-enabled AI client can integrate with the Deepseek Thinker MCP server to leverage Deepseek's reasoning capabilities.

Is it possible to run the Deepseek Thinker server entirely locally?
Yes. In Ollama mode the server runs without relying on an external API service, which helps with data privacy and offline use.

What input parameters are required to perform reasoning?
The only required input is the originPrompt string: the user's original prompt for which reasoning is requested.

How does this server enhance AI workflows?
By exposing transparent, structured reasoning output, it lets AI clients perform multi-step reasoning and improve response quality.

Does the server support multiple LLM providers?
It supports two backends, the Deepseek API service and local models served by Ollama, and its reasoning output can be consumed by any MCP-compatible client.