multimodal-mcp-client

MCP.Pizza Chef: Ejb503

The multimodal-mcp-client is an MCP client designed to enable voice-powered agentic workflows using natural speech and multimodal inputs. It integrates with LLM providers such as Google Gemini and Anthropic, letting users interact with AI through voice commands and other input modes. The client feeds real-time, multimodal context into models, improving productivity and user experience. Currently in early access, it supports Chrome on Linux, Windows, and macOS, with ongoing development to expand compatibility and features.

Use This MCP client To

  • Enable voice-controlled AI workflows
  • Integrate multimodal inputs for AI interaction
  • Control AI agents via natural speech
  • Enhance productivity with voice commands
  • Test AI workflows on Chrome across OSes
  • Develop voice-powered AI applications
  • Experiment with multimodal context feeding

README

Systemprompt Multimodal MCP Client

A modern voice-controlled AI interface powered by Google Gemini and Anthropic's Model Context Protocol (MCP). Transform how you interact with AI through natural speech and multimodal inputs.

⚠️ Important Note: This open-source project is in active development and early access. It is not currently compatible with Safari but has been tested on Chrome on Linux, Windows, and macOS. If you run into any problems, please let us know on Discord or GitHub.

If you find this project useful, please consider:

  • ⭐ Starring it on GitHub
  • 🔄 Sharing it with others
  • 💬 Joining our Discord community

🌟 Overview

A modern Vite + TypeScript application that enables voice-controlled AI workflows through MCP (Model Context Protocol). This project changes how you interact with AI systems by combining Google Gemini's multimodal capabilities with MCP's extensible tooling system.

The client supports both custom (user-provided and configured) and Systemprompt MCP servers. Systemprompt MCP servers can be installed through the UI with a free Systemprompt API key.

Custom MCP servers are not pre-configured and require a custom configuration file.

Create a local file named mcp.config.custom.json in the config directory and add your MCP server configuration, for example:

{
  "mcpServers": {
    "my-custom-server": {
      "id": "my-custom-server",
      "env": {
        "xxx": "xxx"
      },
      "command": "node",
      "args": [
        "/my-custom-server/build/index.js"
      ]
    }
  }
}
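
The command and args entries point at a standalone MCP server process that the client launches for you. For orientation only, here is a minimal sketch of such a server written with the official @modelcontextprotocol/sdk TypeScript package; the "echo" tool and the file path are illustrative assumptions, not part of this repository.

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

// Declare the server and the capabilities it exposes.
const server = new Server(
  { name: "my-custom-server", version: "0.1.0" },
  { capabilities: { tools: {} } }
);

// Advertise a single illustrative "echo" tool.
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "echo",
      description: "Echo back the provided text",
      inputSchema: {
        type: "object",
        properties: { text: { type: "string" } },
        required: ["text"],
      },
    },
  ],
}));

// Handle tool calls issued by the client.
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "echo") {
    const text = String(request.params.arguments?.text ?? "");
    return { content: [{ type: "text", text }] };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

// Communicate over stdio so the client can spawn it via "command"/"args".
await server.connect(new StdioServerTransport());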

🎯 Why Systemprompt MCP?

Transform your AI interactions with a powerful voice-first interface that combines:

  • 🗣️ Multimodal AI: Understand and process text, voice, and visual inputs naturally
  • 🛠️ MCP (Model Context Protocol): Execute complex AI workflows with a robust tooling system
  • 🎙️ Voice-First Design: Control everything through natural speech, making AI interaction more intuitive

Perfect for: Developers building voice-controlled AI applications and looking for innovative ways to use multimodal AI.

✨ Core Features

🎙️ Voice & Multimodal Intelligence

  • Natural Voice Control: Speak naturally to control AI workflows and execute commands
  • Multimodal Understanding: Process text, voice, and visual inputs simultaneously
  • Real-time Voice Synthesis: Get instant audio responses from your AI interactions
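
The voice features above build on the browser's Web Speech API, which is also why Chrome is the supported browser for now. As a generic illustration (not code taken from this project), capturing a spoken command and speaking a reply can look roughly like this in TypeScript:

// Illustrative only: wiring voice input and output with the Web Speech API.
// Chrome exposes recognition as webkitSpeechRecognition.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

const recognition = new SpeechRecognitionImpl();
recognition.lang = "en-US";
recognition.interimResults = false;

recognition.onresult = (event: any) => {
  const transcript: string = event.results[0][0].transcript;
  // Hand the transcript to the AI workflow, e.g. as a prompt for Gemini.
  console.log("Heard:", transcript);
};

recognition.start();

// Speak a response back using speech synthesis.
function speak(text: string) {
  speechSynthesis.speak(new SpeechSynthesisUtterance(text));
}

speak("Listening for your command.");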

🔄 AI Workflow Orchestration

  • Extensible Tool System: Add custom tools and workflows through MCP
  • Workflow Automation: Chain multiple AI operations with voice commands
  • State Management: Robust handling of complex, multi-step AI interactions
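
Under the hood, the tool system rides on MCP's standard client/server interface. The snippet below is a rough sketch, using the official TypeScript SDK, of how any MCP client can discover and invoke a tool; it is not this project's actual orchestration code, and the server path and tool name are placeholders.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the configured server process and connect over stdio.
const transport = new StdioClientTransport({
  command: "node",
  args: ["/my-custom-server/build/index.js"],
});

const client = new Client(
  { name: "example-client", version: "0.1.0" },
  { capabilities: {} }
);
await client.connect(transport);

// Discover the tools the server advertises...
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

// ...and call one of them, e.g. after a voice command has been transcribed.
const result = await client.callTool({
  name: "echo",
  arguments: { text: "Hello from a voice command" },
});
console.log(result);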

💻 Developer Experience

  • Modern Tech Stack: Built with Vite, React, TypeScript, and NextUI
  • Type Safety: Full TypeScript support with comprehensive type definitions
  • Hot Module Replacement: Fast development with instant feedback
  • Comprehensive Testing: Built-in testing infrastructure with high coverage

🚀 Getting Started

Prerequisites

  • Node.js 16.x or higher
  • npm 7.x or higher
  • A modern browser with Web Speech API support

Quick Start

  1. Clone the repository

    git clone https://github.com/Ejb503/multimodal-mcp-client.git
    cd multimodal-mcp-client
  2. Install dependencies

    npm install
    cd proxy
    npm install
  3. Configure the application

    # Navigate to config directory
    cd config
    
    # Create local configuration files
    cp mcp.config.example.json mcp.config.custom.json

    Required API Keys:

    Add your keys to .env (see .env.example for reference). Note that the VITE_ prefix is required to share the keys with the MCP server and client. A purely illustrative .env sketch is shown after these setup steps.

  4. Start development server

    npm run dev

    Access the development server at http://localhost:5173
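
For step 3, the keys go into a .env file at the project root. The variable names below are assumptions for illustration only; the authoritative list is in .env.example.

# Illustrative only; copy the real variable names from .env.example.
# The VITE_ prefix is what exposes the values to the client build.
VITE_GEMINI_API_KEY=your-google-gemini-api-key
VITE_SYSTEMPROMPT_API_KEY=your-systemprompt-api-key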

🤝 Support & Community

  • 💬 Discord: Join our community
  • 🐛 Issues: GitHub Issues
  • 📚 Docs: Documentation

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🔮 Future Development

We're actively working on expanding the capabilities of Systemprompt MCP Client with exciting new features and extensions. Stay tuned for updates!

multimodal-mcp-client FAQ

How do I install the multimodal-mcp-client?
You can install it by cloning the GitHub repository and following the setup instructions in the documentation at systemprompt.io/documentation. It requires the Chrome browser on Linux, Windows, or macOS for best compatibility.
Is the multimodal-mcp-client compatible with Safari?
Currently, the client is not compatible with Safari. It has been tested and works on Chrome on Linux, Windows, and macOS.
Which LLM providers does the multimodal-mcp-client support?
It supports Google Gemini and Anthropic LLMs, leveraging the Model Context Protocol (MCP) for seamless integration and multimodal input handling.
Can I use the multimodal-mcp-client for real-time voice interaction?
Yes, the client is designed for real-time voice-powered AI workflows, enabling natural speech commands and multimodal inputs.
Is the multimodal-mcp-client open source?
Yes, it is an open source project currently in early access, allowing developers to contribute and customize it.
What platforms are supported by the multimodal-mcp-client?
It supports the Chrome browser on Linux, Windows, and macOS. Support for other browsers and platforms is planned.
Where can I get help or support for the multimodal-mcp-client?
Support is available via the project's Discord community, Twitter, and LinkedIn channels linked on the GitHub page and systemprompt.io website.