Dive

MCP.Pizza Chef: OpenAgentPlatform

Dive is an open-source MCP Host Desktop Application designed to integrate seamlessly with any large language model (LLM) that supports function calling. It acts as an MCP client, orchestrating context flow and tool interactions and enabling real-time, multi-step reasoning and interaction with the environment. Dive empowers developers and users to build AI-enhanced workflows and agents with flexible model integration.

Use This MCP client To

  • Integrate multiple LLMs with function calling in a desktop environment
  • Orchestrate real-time context flow between LLMs and tools
  • Build AI agents that interact with local and remote data sources
  • Enable multi-step reasoning workflows with LLMs on desktop
  • Manage and route tool calls from LLMs seamlessly
  • Develop custom AI-enhanced workflows with flexible model support
  • Test and prototype MCP-based AI agents locally
  • Coordinate interactions between LLMs and MCP servers or tools

README

Dive AI Agent 🀿 πŸ€–

Dive is an open-source MCP Host Desktop Application that seamlessly integrates with any LLM that supports function calling. ✨

Dive Demo

Features 🎯

  • 🌐 Universal LLM Support: Compatible with OpenAI (ChatGPT), Anthropic, Ollama, and any OpenAI-compatible models
  • πŸ’» Cross-Platform: Available for Windows, macOS, and Linux
  • πŸ”„ Model Context Protocol: Enables seamless MCP AI agent integration over both stdio and SSE transports
  • 🌍 Multi-Language Support: Traditional Chinese, Simplified Chinese, English, Spanish, and Japanese, with more coming soon
  • βš™οΈ Advanced API Management: Multiple API keys and model switching support
  • πŸ’‘ Custom Instructions: Personalized system prompts for tailored AI behavior
  • πŸ”„ Auto-Update Mechanism: Automatically checks for and installs the latest application updates

Recent Updates (2025/4/21)

  • πŸš€ Dive MCP Host v0.8.0: DiveHost has been rewritten in Python and now lives in a separate project, dive-mcp-host
  • βš™οΈ Enhanced LLM Settings: Add, modify, and delete LLM provider API keys and custom model IDs
  • πŸ” Model Validation: Validate, or skip validation for, models that support tool/function calling
  • πŸ”§ Improved MCP Configuration: Add, edit, and delete MCP tools directly from the UI
  • 🌍 Japanese Translation: Added Japanese language support
  • πŸ€– Extended Model Support: Added integration for Google Gemini and Mistral AI models

Important: Because DiveHost was migrated from TypeScript to Python in v0.8.0, configuration files and chat history records will not be upgraded automatically. If you need access to your old data after upgrading, you can downgrade to a previous version.

Download and Install ⬇️

Get the latest version of Dive: Download

For Windows users: πŸͺŸ

  • Download the .exe version
  • Python and Node.js environments are pre-installed

For macOS users: 🍎

  • Download the .dmg version
  • You need to install Python and Node.js (so that the uvx and npx commands are available) yourself; see the example commands below
  • Follow the installation prompts to complete setup
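
If you use Homebrew, one way to install these prerequisites is shown below (this assumes Homebrew is already set up; the uv package provides the uvx command):

brew install python
brew install node
brew install uv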

For Linux users: 🐧

  • Download the .AppImage version
  • You need to install Python and Node.js (so that the uvx and npx commands are available) yourself
  • For Ubuntu/Debian users:
    • Run chmod +x on the downloaded file to make the AppImage executable
    • You may need to add the --no-sandbox parameter, or modify your system settings to allow sandboxing
    • See the example commands after this list
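
Example commands, assuming the downloaded file matches Dive-*.AppImage (the exact filename varies by release):

chmod +x Dive-*.AppImage
./Dive-*.AppImage --no-sandbox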

MCP Tips

While the system comes with a default echo MCP Server, your LLM can access more powerful tools through MCP. Here's how to get started with two beginner-friendly tools, Fetch and yt-dlp-mcp (the configuration below also enables the Filesystem server).

Set MCP

Quick Setup

Add this JSON configuration to your Dive MCP settings to enable these tools:

 "mcpServers":{
    "fetch": {
      "command": "uvx",
      "args": [
        "mcp-server-fetch",
        "--ignore-robots-txt"
      ],
      "enabled": true
    },
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/allowed/files"
      ],
      "enabled": true
    },
    "youtubedl": {
      "command": "npx",
      "args": [
        "@kevinwatt/yt-dlp-mcp"
      ],
      "enabled": true
    }
  }
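
Note that /path/to/allowed/files is a placeholder: replace it with an absolute path to a directory the Filesystem server may access. Likewise, you can omit the --ignore-robots-txt argument if you prefer the Fetch server to respect robots.txt.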

Using SSE Server for MCP

You can also connect to an external MCP server via SSE (Server-Sent Events). Add this configuration to your Dive MCP settings:

{
  "mcpServers": {
    "MCP_SERVER_NAME": {
      "enabled": true,
      "transport": "sse",
      "url": "YOUR_SSE_SERVER_URL"
    }
  }
}
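
Here MCP_SERVER_NAME and YOUR_SSE_SERVER_URL are placeholders. Before adding the entry, you can sanity-check that the endpoint actually streams events by connecting to it with curl (-N disables buffering so events print as they arrive):

curl -N YOUR_SSE_SERVER_URL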

Additional Setup for yt-dlp-mcp

yt-dlp-mcp requires the yt-dlp package. Install it based on your operating system:

Windows

winget install yt-dlp

macOS

brew install yt-dlp

Linux

pip install yt-dlp
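
After installing, you can verify that yt-dlp is available on your PATH:

yt-dlp --version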

Build πŸ› οΈ

See BUILD.md for more details.

Dive FAQ

How does Dive integrate with different LLMs?
Dive supports any LLM with function calling capabilities, enabling seamless integration and interaction.
Can Dive be used with multiple LLM providers?
Yes, Dive is provider-agnostic and works with OpenAI, Anthropic Claude, Google Gemini, and others.
Is Dive suitable for local development and testing?
Yes, Dive is a desktop application ideal for local prototyping and development of MCP-based AI workflows.
Does Dive support real-time context management?
Yes, Dive orchestrates real-time context flow between LLMs and connected tools or servers.
Can Dive manage multi-step reasoning processes?
Absolutely, Dive enables complex multi-step reasoning by coordinating LLM interactions and tool calls.
Is Dive open-source and customizable?
Yes, Dive is fully open-source, allowing developers to customize and extend its capabilities.
What platforms does Dive support?
Dive is a desktop application supporting Windows, macOS, and Linux.
How does Dive handle security and scoped model interactions?
Dive follows MCP principles ensuring secure, scoped, and observable interactions between models and tools.