
model-enhancement-servers

MCP.Pizza Chef: peragus-dev

model-enhancement-servers is a suite of MCP servers designed to extend large language models with advanced cognitive capabilities. These servers enable formal reasoning, visual and analogical thinking, scientific hypothesis testing, metacognitive monitoring, decision analysis, and collaborative problem solving. They empower LLMs to perform structured, multi-step reasoning and complex workflows beyond basic text generation, enhancing their utility in diverse applications.

Use This MCP Server To

  - Enable formal dialectical reasoning in LLM workflows
  - Support diagrammatic and spatial reasoning tasks
  - Facilitate scientific hypothesis testing and evidence evaluation
  - Implement structured metaphorical and analogical thinking
  - Track knowledge confidence and perform metacognitive monitoring
  - Conduct structured decision analysis for complex choices
  - Enable multi-perspective collaborative reasoning processes

README

Cognitive Enhancement MCP Servers

A collection of Model Context Protocol servers that provide cognitive enhancement tools for large language models.

Servers

This monorepo contains the following MCP servers:

  1. Structured Argumentation - A server for formal dialectical reasoning
  2. Visual Reasoning - A server for diagrammatic thinking and spatial representation
  3. Scientific Method - A server for hypothesis testing and evidence evaluation
  4. Analogical Reasoning - A server for structured metaphorical thinking
  5. Metacognitive Monitoring - A server for knowledge assessment and confidence tracking
  6. Decision Framework - A server for structured decision analysis
  7. Collaborative Reasoning - A server for multi-perspective problem solving

Installation

Each server can be installed individually:

# Using npm
npm install @waldzellai/structured-argumentation

# Using yarn
yarn add @waldzellai/structured-argumentation

Usage with Claude Desktop

Add this to your claude_desktop_config.json:

{
  "mcpServers": {
    "structured-argumentation": {
      "command": "npx",
      "args": [
        "-y",
        "@waldzellai/structured-argumentation"
      ]
    }
  }
}

Docker

All servers are available as Docker images:

docker run --rm -i waldzellai/structured-argumentation
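Claude Desktop can also be pointed at the Docker image instead of npx. This is a minimal sketch, assuming the image tag from the command above; the shape of the entry mirrors the npx example earlier:

```json
{
  "mcpServers": {
    "structured-argumentation": {
      "command": "docker",
      "args": ["run", "--rm", "-i", "waldzellai/structured-argumentation"]
    }
  }
}
```

The `-i` flag keeps stdin open, which is required because the server communicates over stdio.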

Development

Clone the repository and install dependencies:

git clone https://github.com/waldzellai/model-enhancement-servers.git
cd model-enhancement-servers
npm install

Build all packages:

npm run build

License

This project is licensed under the MIT License - see the LICENSE file for details.

model-enhancement-servers FAQ

How do I install individual servers from model-enhancement-servers?
Each server can be installed separately via npm or yarn, e.g., npm install @waldzellai/structured-argumentation.
Can these servers be used with multiple LLM providers?
Yes, they are designed to be provider-agnostic and work with OpenAI, Claude, Gemini, and others.
What programming languages are supported for integration?
These MCP servers are primarily JavaScript/TypeScript packages installable via npm or yarn, suitable for Node.js environments.
How do these servers improve LLM reasoning?
They provide structured frameworks for complex cognitive tasks like argumentation, decision making, and metacognition, enabling more rigorous and explainable outputs.
Are these servers modular or do I need to install the entire suite?
They are modular; you can install and use only the servers relevant to your application.
Can these servers be combined in workflows?
Yes, multiple servers can be orchestrated together to create sophisticated multi-step reasoning pipelines.
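In the simplest case, orchestration just means registering several servers side by side in claude_desktop_config.json so the model can call tools from each within one conversation. A hedged sketch: the second package name (`@waldzellai/decision-framework`) is assumed from the naming pattern of the packages listed above, not confirmed from the registry:

```json
{
  "mcpServers": {
    "structured-argumentation": {
      "command": "npx",
      "args": ["-y", "@waldzellai/structured-argumentation"]
    },
    "decision-framework": {
      "command": "npx",
      "args": ["-y", "@waldzellai/decision-framework"]
    }
  }
}
```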
Is there documentation or examples available?
The GitHub repository includes readme files and usage examples for each server to help with integration and development.
Do these servers support real-time interaction with LLMs?
Yes, they are designed to feed structured context and receive model outputs in real time for interactive workflows.
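Under the hood, that real-time interaction is carried over JSON-RPC 2.0, the wire format defined by the Model Context Protocol: the client sends a `tools/call` request to the server's stdin and reads the result from its stdout. The sketch below shows only the request envelope; the tool name `"structured-argumentation"` and its arguments are hypothetical placeholders, not the server's documented tool schema:

```typescript
// Shape of an MCP tools/call request (JSON-RPC 2.0 envelope).
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: {
    name: string;
    arguments: Record<string, unknown>;
  };
}

// Build a tools/call request for a given tool and argument object.
function buildToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>
): ToolCallRequest {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

// Hypothetical invocation: tool name and argument keys are illustrative only.
const request = buildToolCall(1, "structured-argumentation", {
  claim: "Remote work improves productivity",
  argumentType: "thesis",
});

// Serialized JSON is what actually travels over the server's stdio stream.
console.log(JSON.stringify(request));
```

In practice you would not hand-roll this envelope; the official MCP SDKs manage request ids, transports, and responses for you, but the message shape is useful to know when debugging a server over stdio.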