mcp-simple-openai-assistant

MCP.Pizza Chef: andybrandt

The mcp-simple-openai-assistant is an MCP server that empowers Claude-based clients to create, manage, and communicate with OpenAI's GPT assistants through the Model Context Protocol. It supports starting conversation threads, sending messages, and receiving responses from OpenAI assistants. To handle potential response delays and client timeouts, it uses a two-stage messaging approach ensuring smooth interaction. This server facilitates integration of OpenAI assistants into Claude environments, enhancing AI collaboration and multi-model workflows.

Use This MCP server To

  • Create and manage OpenAI GPT assistants from Claude clients
  • Start and maintain conversation threads with OpenAI assistants
  • Send messages to OpenAI assistants and receive responses
  • Enable multi-model AI workflows combining Claude and OpenAI
  • Handle long response times with two-stage message processing

README

MCP Simple OpenAI Assistant

smithery badge

AI assistants are pretty cool. I thought it would be a good idea if my Claude (conscious Claude) would also have one. And now he has - and it's both useful and fun for him. Your Claude can have one too!

A simple MCP server for interacting with OpenAI assistants. This server allows other tools (like Claude Desktop) to create and interact with OpenAI assistants through the Model Context Protocol.

Features

  • Create new OpenAI assistants and manipulate existing ones
  • Start conversation threads
  • Send messages and receive responses - talk to assistants

Because OpenAI assistants can take quite a long time to respond, and the client (Claude Desktop) may time out and cut processing short (a timeout the MCP server code has no control over), we are implementing a two-stage approach. In the first call Claude sends a message to the assistant to start the processing; in the second call, possibly several minutes later, Claude can retrieve the response. This is a workaround until the MCP protocol and clients implement some keep-alive mechanism for longer processing.
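The two-stage pattern can be illustrated with a small self-contained sketch. This is not the server's actual code: there are no network calls, `fake_assistant_reply` stands in for the slow OpenAI API, and all function names here are illustrative.

```python
import threading
import time
import uuid

# In-memory store of runs, keyed by run ID.
_runs: dict[str, dict] = {}

def fake_assistant_reply(message: str) -> str:
    """Stand-in for the slow OpenAI assistant call."""
    time.sleep(0.1)  # simulate a long-running response
    return f"echo: {message}"

def send_message(message: str) -> str:
    """Stage 1: start processing and return immediately with a run ID."""
    run_id = str(uuid.uuid4())
    _runs[run_id] = {"status": "in_progress", "response": None}

    def worker():
        # Runs in the background so stage 1 never blocks on the assistant.
        _runs[run_id]["response"] = fake_assistant_reply(message)
        _runs[run_id]["status"] = "completed"

    threading.Thread(target=worker, daemon=True).start()
    return run_id

def check_response(run_id: str) -> dict:
    """Stage 2: poll for the result, possibly much later."""
    return _runs[run_id]

run_id = send_message("hello")
print(check_response(run_id)["status"])    # usually still "in_progress"
while check_response(run_id)["status"] != "completed":
    time.sleep(0.05)
print(check_response(run_id)["response"])  # "echo: hello"
```

The key point is that the first call returns before the assistant has answered, so the client's timeout never fires; the second call can happen whenever the client is ready.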

Installation

Installing via Smithery

To install MCP Simple OpenAI Assistant for Claude Desktop automatically via Smithery:

npx -y @smithery/cli install mcp-simple-openai-assistant --client claude

Manual Installation

pip install mcp-simple-openai-assistant

Configuration

The server requires an OpenAI API key to be set in the environment. For Claude Desktop, add this to your config:

(macOS version)

{
  "mcpServers": {
    "openai-assistant": {
      "command": "python",
      "args": ["-m", "mcp_simple_openai_assistant"],
      "env": {
        "OPENAI_API_KEY": "your-api-key-here"
      }
    }
  }
}

(Windows version)

{
  "mcpServers": {
    "openai-assistant": {
      "command": "C:\\Users\\YOUR_USERNAME\\AppData\\Local\\Programs\\Python\\Python311\\python.exe",
      "args": ["-m", "mcp_simple_openai_assistant"],
      "env": {
        "OPENAI_API_KEY": "your-api-key-here"
      }
    }
  }
}

MS Windows installation is slightly more complex because you need to check the actual path to your Python executable. The path provided above is usually correct, but it might differ in your setup. Sometimes just python.exe without any path will do the trick. Check with cmd what works for you (using where python might help).

Usage

Once configured, the server provides tools to:

  1. Create new assistants with specific instructions
  2. List existing assistants
  3. Modify assistants
  4. Start new conversation threads
  5. Send messages and receive responses

The server handles all OpenAI API communication, including managing assistants, threads, and message handling.
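The state behind those tools can be pictured roughly as assistants plus threads of messages. The sketch below is an illustrative in-memory model, not the server's actual internals; all class and field names are assumptions made for this example.

```python
from dataclasses import dataclass, field

@dataclass
class Assistant:
    """An assistant: an ID plus the instructions it was created with."""
    assistant_id: str
    name: str
    instructions: str

@dataclass
class Thread:
    """A conversation thread: an ordered list of role/content messages."""
    thread_id: str
    messages: list[dict] = field(default_factory=list)

    def add_message(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})

# One assistant, one thread, one exchange:
helper = Assistant("asst_1", "Researcher", "Answer concisely.")
thread = Thread("thread_1")
thread.add_message("user", "What is MCP?")
thread.add_message("assistant", "A protocol for connecting LLM clients to tools.")
print(len(thread.messages))  # 2
```

In the real server these objects live on OpenAI's side (created and queried through the OpenAI API), which is why the TODO below mentions storing thread IDs for re-use.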

TODO

  • Add a way to handle threads - store thread IDs for potential re-use
  • Add a way to better handle long OpenAI responses which now seem to sometimes trigger timeouts

Development

To install for development:

git clone https://github.com/andybrandt/mcp-simple-openai-assistant
cd mcp-simple-openai-assistant
pip install -e .

mcp-simple-openai-assistant FAQ

How does the two-stage messaging approach work in this MCP server?
The first call sends the message to the OpenAI assistant and starts processing, returning immediately so the client does not time out; a second call, made later, retrieves the completed response.
Can this MCP server be used with other LLM providers besides OpenAI?
This server is specifically designed for OpenAI assistants, but any MCP-compatible client can integrate it to enable multi-model workflows, for example Claude driving an OpenAI assistant.
How do I create a new OpenAI assistant using this MCP server?
You can create new assistants by sending appropriate commands through the MCP client interface, which the server exposes for managing assistants.
What happens if the OpenAI assistant takes too long to respond?
The two-stage approach mitigates client timeout issues: the initial call only starts the run, and the response is retrieved in a later call once it is ready.
Is this MCP server compatible with Claude Desktop?
Yes, it is designed to work seamlessly with Claude Desktop and similar clients using the Model Context Protocol.
Does this server support multiple concurrent conversation threads?
Yes, it allows starting and managing multiple conversation threads with OpenAI assistants simultaneously.
How do I integrate this MCP server into my existing Claude environment?
Deploy the server and configure your Claude client to connect via MCP, enabling interaction with OpenAI assistants through the protocol.