gemini-desktop

MCP.Pizza Chef: kkrishnan90

Gemini-desktop is a cross-platform Electron-based desktop client that provides a seamless chat interface to Google's Gemini AI models. It supports extensible capabilities through the Model Context Protocol (MCP), allowing integration with external Python or command-based MCP servers. Designed for macOS and Windows, it offers a user-friendly UI with real-time tool status feedback during model interactions, enabling developers and users to build rich AI-enhanced workflows with Gemini models.

Use This MCP client To

  • Chat with Google's Gemini AI models on desktop
  • Integrate Python-based MCP servers for extended functionality
  • Connect command-based MCP servers via JSON configuration
  • Monitor MCP tool call status in real time
  • Develop AI workflows combining Gemini and external tools
  • Run a unified chat interface across macOS and Windows

README

GemCP Chat Logo

GemCP Chat

The GemCP app is a cross-platform desktop application that provides a seamless chat interface for Google's Gemini AI models, with extensible capabilities through the Model Context Protocol (MCP).

✨ Features

  • 🤖 Gemini Integration: Seamless chat interface with Google's Gemini models (configurable via Settings).
  • 🔧 Extensible Tools (MCP): Connect external tools and data sources via the Model Context Protocol.
    • Supports Python-based MCP servers (added via file path).
    • Supports command-based MCP servers (e.g., Node.js) defined in a JSON configuration file (a sample layout is sketched after this list).
  • 🖥️ Cross-Platform: Runs on macOS and Windows (Electron build).
  • 📊 Tool Status UI: Provides visual feedback when Gemini is calling an MCP tool and whether it succeeded or failed.
  • ⚙️ Model Selection: Choose different Gemini models (e.g., 1.5 Flash, 1.5 Pro, 2.5 Pro Exp) through the Settings dialog.
  • 📝 Markdown & LaTeX Rendering: Displays AI responses with formatting.
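
For command-based servers, the JSON configuration names each server and the command used to launch it. The exact schema is defined by the GemCP app itself; the snippet below is a hypothetical sketch following the mcpServers convention used by many MCP clients, so the server name, command, arguments, and environment variable shown here are placeholders rather than fields GemCP is guaranteed to expect.

    {
      "mcpServers": {
        "weather": {
          "command": "node",
          "args": ["path/to/weather-server/index.js"],
          "env": { "WEATHER_API_KEY": "your_key_here" }
        }
      }
    }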

Screenshots

Weather Tool Example
Example of the weather tool in action

Chat Interface
Main chat interface with Gemini

Calculator Tool Example
Using the calculator tool with Gemini

Overview

This repository contains both a Python backend and an Electron-based desktop application for interacting with Gemini.

Project Structure

  • mcp-gemini-desktop/: Frontend Electron application
  • python_backend/: Python backend server

Running the Python Backend

Prerequisites

  • Python 3.13+ (as specified in pyproject.toml)
  • uv (Modern Python package installer and resolver)

Setup

  1. Navigate to the Python backend directory:

    cd python_backend
  2. Install uv if you don't have it already:

    pip install uv
  3. Install the required dependencies using uv:

    uv pip install .
  4. Set your Google API key as an environment variable:

    # For Linux/macOS
    export GOOGLE_API_KEY=your_api_key_here
    
    # For Windows (Command Prompt)
    set GOOGLE_API_KEY=your_api_key_here
    
    # For Windows (PowerShell)
    $env:GOOGLE_API_KEY="your_api_key_here"
  5. Start the Python backend server:

    python main.py

    The server should start running on http://localhost:5000
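
To quickly confirm the backend is listening, you can hit it from another terminal. The exact routes are defined in main.py, so even an error response here indicates the server is up:

    curl -i http://localhost:5000/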

Running the Frontend Electron App

Prerequisites

  • Node.js (v16+)
  • npm (Node package manager)

Setup

  1. Navigate to the Electron app directory:

    cd mcp-gemini-desktop
  2. Install the required dependencies:

    npm install
  3. Start the Electron app in development mode:

    npm start

Building the App

To build the Electron app for your platform:

cd mcp-gemini-desktop
npm run build

This will create platform-specific binaries in the dist folder.

Using Pre-built Binaries

If you prefer to use pre-built binaries directly:

  1. Navigate to the mcp-gemini-desktop/dist directory
  2. For macOS: Install the .dmg file
  3. For Windows: Run the .exe installer

Available Binaries

  • macOS (Apple Silicon): GemCP Chat-0.1.0-arm64.dmg
  • Windows: Check the dist folder for .exe files

Example Servers

The repository includes example server implementations in mcp-gemini-desktop/mcp_example_servers/:

  • mcp_server_calc.py: Calculator server example
  • mcp_server_weather.py: Weather information server example
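
Both example servers are written in Python. As a rough sketch of what a minimal server of this kind can look like when built with the mcp Python SDK's FastMCP helper (this is illustrative, not the repository's mcp_server_calc.py; the tool name and logic are placeholders), assuming the mcp package is installed:

    # minimal_calc_server.py -- illustrative sketch, not the repository's code
    from mcp.server.fastmcp import FastMCP
    
    # Name the server; this is what the client shows when listing tools.
    mcp = FastMCP("calculator")
    
    @mcp.tool()
    def add(a: float, b: float) -> float:
        """Add two numbers and return the result."""
        return a + b
    
    if __name__ == "__main__":
        # Serve over stdio so a desktop client can launch it as a subprocess.
        mcp.run(transport="stdio")

A server like this can then be added to GemCP by pointing the app at the script's file path, as described in the Features section.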

Troubleshooting

  • Ensure the Python backend is running before starting the Electron app
  • Check that your Google API key is correctly set
  • Verify that the required ports (e.g., 5000 for the Python backend) are not blocked by a firewall

License

This project is licensed under the MIT License - see the LICENSE file for details.

Contributing

Please raise a PR for any contributions. PRs will be reviewed and, once approved, merged into the main branch.

gemini-desktop FAQ

How do I add external MCP servers to gemini-desktop?
You can add Python-based MCP servers by specifying their file paths, or command-based MCP servers via a JSON configuration file, within the app settings.
Does gemini-desktop support platforms other than macOS and Windows?
Currently, gemini-desktop supports macOS and Windows through its Electron build; other platforms are not officially supported yet.
How does gemini-desktop show the status of MCP tool calls?
The app provides a visual UI indicator showing when Gemini is calling an MCP tool and whether the call succeeded or failed.
Can I customize which Gemini AI model gemini-desktop uses?
Yes, gemini-desktop allows configuration of the Gemini AI model used via its settings interface.
Is gemini-desktop limited to Gemini AI models only?
While primarily designed for Gemini AI, gemini-desktop's MCP framework can potentially integrate other models if adapted accordingly.
What programming languages are supported for MCP servers in gemini-desktop?
It supports Python-based MCP servers and command-based servers such as those built with Node.js, configured via JSON.
How secure is the integration of external MCP servers?
MCP is designed around scoped, observable tool interactions, and gemini-desktop follows these conventions; as with any integration, only connect MCP servers you trust.
Can gemini-desktop be used to build multi-step AI workflows?
Yes, by connecting multiple MCP servers and tools, users can create complex, multi-step reasoning workflows with Gemini AI.