
Memgpt-MCP-Server

MCP.Pizza Chef: Vic563

MemGPT MCP Server is a TypeScript-based Model Context Protocol server designed to provide persistent memory management and multi-model large language model (LLM) support. It enables chat interactions with various LLM providers such as OpenAI, Anthropic, OpenRouter, and Ollama, while maintaining conversation history for context continuity. Key features include tools for sending messages, retrieving and clearing conversation memory, and switching between providers and models dynamically. This server facilitates advanced, context-aware AI workflows by preserving dialogue history and supporting multiple LLM ecosystems in a unified interface.

Use This MCP Server To

  • Maintain persistent conversation history for LLM chats
  • Switch dynamically between multiple LLM providers
  • Retrieve chronological memory logs for context-aware responses
  • Clear stored conversation memory on demand
  • Support multi-model selection per LLM provider
  • Enable multi-LLM workflows with unified memory management

README

MemGPT MCP Server

A TypeScript-based MCP server that implements a memory system for LLMs. It provides tools for chatting with different LLM providers while maintaining conversation history.

Features

Tools

  • chat - Send a message to the current LLM provider

    • Takes a message parameter
    • Supports multiple providers (OpenAI, Anthropic, OpenRouter, Ollama)
  • get_memory - Retrieve conversation history

    • Optional limit parameter to specify number of memories to retrieve
    • Pass limit: null for unlimited memory retrieval
    • Returns memories in chronological order with timestamps
  • clear_memory - Clear conversation history

    • Removes all stored memories
  • use_provider - Switch between different LLM providers

    • Supports OpenAI, Anthropic, OpenRouter, and Ollama
    • Persists provider selection
  • use_model - Switch to a different model for the current provider

    • Supports provider-specific models:
      • Anthropic Claude Models:
        • Claude 3 Series:
          • claude-3-haiku: Fastest response times, ideal for tasks like customer support and content moderation
          • claude-3-sonnet: Balanced performance for general-purpose use
          • claude-3-opus: Advanced model for complex reasoning and high-performance tasks
        • Claude 3.5 Series:
          • claude-3.5-haiku: Enhanced speed and cost-effectiveness
          • claude-3.5-sonnet: Superior performance with computer interaction capabilities
      • OpenAI: 'gpt-4o', 'gpt-4o-mini', 'gpt-4-turbo'
      • OpenRouter: Any model in 'provider/model' format (e.g., 'openai/gpt-4', 'anthropic/claude-2')
      • Ollama: Any locally available model (e.g., 'llama2', 'codellama')
    • Persists model selection
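As a rough sketch, a typical sequence of calls to the tools above might look like the following. The tool names and the `message` parameter come from this README; the JSON-RPC `tools/call` envelope is the standard MCP shape, and the argument keys `provider` and `model` plus the exact model ID are illustrative assumptions, not confirmed from the server's schema:

```typescript
// Hypothetical MCP tools/call payloads for this server's tools.
// Tool names (use_provider, use_model, chat) are from the README;
// the argument key names "provider" and "model" are assumptions.

interface ToolCall {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

function toolCall(
  id: number,
  name: string,
  args: Record<string, unknown>
): ToolCall {
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

// Switch provider, pick a model, then chat with memory preserved.
const steps: ToolCall[] = [
  toolCall(1, "use_provider", { provider: "anthropic" }),
  toolCall(2, "use_model", { model: "claude-3.5-sonnet" }),
  toolCall(3, "chat", { message: "Summarize our conversation so far." }),
];
```

Because provider and model selection persist, later `chat` calls continue against the chosen model without re-sending the selection.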

Development

Install dependencies:

npm install

Build the server:

npm run build

For development with auto-rebuild:

npm run watch

Installation

To use with Claude Desktop, add the server config:

On macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
On Windows: %APPDATA%/Claude/claude_desktop_config.json

{
  "mcpServers": {
    "letta-memgpt": {
      "command": "/path/to/memgpt-server/build/index.js",
      "env": {
        "OPENAI_API_KEY": "your-openai-key",
        "ANTHROPIC_API_KEY": "your-anthropic-key",
        "OPENROUTER_API_KEY": "your-openrouter-key"
      }
    }
  }
}

Environment Variables

  • OPENAI_API_KEY - Your OpenAI API key
  • ANTHROPIC_API_KEY - Your Anthropic API key
  • OPENROUTER_API_KEY - Your OpenRouter API key

Debugging

Since MCP servers communicate over stdio, debugging can be challenging. We recommend using the MCP Inspector:

npm run inspector

The Inspector will provide a URL to access debugging tools in your browser.

Recent Updates

Claude 3 and 3.5 Series Support (March 2024)

  • Added support for latest Claude models:
    • Claude 3 Series (Haiku, Sonnet, Opus)
    • Claude 3.5 Series (Haiku, Sonnet)

Unlimited Memory Retrieval

  • Added support for retrieving unlimited conversation history
  • Use { "limit": null } with the get_memory tool to retrieve all stored memories
  • Use { "limit": n } to retrieve the n most recent memories
  • Default limit is 10 if not specified
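The limit semantics described above can be sketched as a small function (this is an illustrative model of the documented behavior, not the server's actual implementation; the `Memory` shape is assumed from the "chronological order with timestamps" description):

```typescript
// Illustrative model of get_memory's limit semantics:
// null -> all memories, n -> the n most recent, default -> 10.

interface Memory {
  timestamp: string; // assumed shape; memories are timestamped per the README
  content: string;
}

function getMemory(store: Memory[], limit: number | null = 10): Memory[] {
  if (limit === null) {
    return [...store]; // unlimited retrieval: full chronological history
  }
  // slice(-n) keeps the n most recent entries, still in chronological order
  return store.slice(-limit);
}
```

So `getMemory(store)` returns at most 10 entries, `getMemory(store, null)` returns everything, and `getMemory(store, 3)` returns the three most recent memories, oldest first.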

Memgpt-MCP-Server FAQ

How does MemGPT MCP Server manage conversation history?
It stores memories chronologically with timestamps and allows retrieval with optional limits or clearing all stored memories.

Can I switch between different LLM providers during a session?
Yes, MemGPT MCP Server supports switching between OpenAI, Anthropic, OpenRouter, and Ollama providers with persistent selection.

Does MemGPT MCP Server support multiple models per provider?
Yes, it allows switching to different models specific to each provider, such as Anthropic's Claude 3 series.

What programming language is MemGPT MCP Server built with?
It is implemented in TypeScript, ensuring strong typing and modern development practices.

How does MemGPT MCP Server handle multi-model LLM support?
It provides tools to select and switch models dynamically within the current provider context.

Is the conversation memory retrieval limited in size?
You can specify a limit or retrieve unlimited memories by passing null to the limit parameter.

Can MemGPT MCP Server be integrated with other MCP clients?
Yes, it follows the MCP protocol, making it compatible with any MCP client that supports the protocol standards.

What are the main tools provided by MemGPT MCP Server?
The main tools include chat, get_memory, clear_memory, use_provider, and use_model for comprehensive memory and model management.