mcp-client-any-llm

MCP.Pizza Chef: rxyshww

The mcp-client-any-llm is a modern web client built with Next.js for interacting with large language models (LLMs) through the Model Context Protocol (MCP). It supports multiple LLM providers, such as OpenAI and Google, and offers a clean chat interface with markdown rendering and syntax highlighting. Features include dark/light mode, local conversation history, and real-time streaming responses, making it a versatile, user-friendly platform for developers and users who want to engage with different AI models while maintaining conversation context.

Use This MCP client To

  • Chat with multiple LLM providers in one interface
  • Maintain conversation context across sessions
  • Render markdown with syntax highlighting in chats
  • Store and access local conversation history
  • Stream real-time LLM responses during chat
  • Switch between dark and light UI modes

README

MCP Client for Any LLM

A modern web client built with Next.js that allows you to interact with various LLM models using the Model Context Protocol (MCP). This client provides a clean and intuitive interface for chatting with different AI models while maintaining conversation context.

Features

  • 🤖 Support for multiple LLM providers (OpenAI, Google, etc.)
  • 💬 Clean chat interface with markdown support
  • 🌙 Dark/Light mode support
  • 📝 Markdown rendering with syntax highlighting
  • 💾 Local conversation history
  • 🔄 Real-time streaming responses
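
For a sense of how the streaming feature works from the client side, here is a minimal sketch of consuming a streamed chat response over HTTP. The /api/chat route name and request body shape are assumptions for illustration, not this project's documented API:

    // Hedged sketch: consumes a streamed chat response chunk by chunk.
    // The /api/chat endpoint and request shape are assumptions.
    async function streamChat(prompt: string, onToken: (token: string) => void) {
      const res = await fetch("/api/chat", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ messages: [{ role: "user", content: prompt }] }),
      });
      if (!res.ok || !res.body) throw new Error(`Request failed: ${res.status}`);

      const reader = res.body.getReader();
      const decoder = new TextDecoder();
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        // Append each decoded chunk to the UI as it arrives.
        onToken(decoder.decode(value, { stream: true }));
      }
    }

Appending each chunk to the message view as it arrives is what produces the real-time typing effect.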

Prerequisites

Before you begin, ensure you have installed:

  • Node.js (v18 or higher)
  • pnpm (recommended) or npm

Quick Start

  1. Clone the repository:
git clone <your-repo-url>
cd mcp-client-any-llm
  2. Install dependencies:
pnpm install
  3. Configure environment variables: Create a .env.local file in the root directory with the following variables:

    # Required: OpenAI API Configuration
    OPENAI_API_KEY=your_openai_api_key
    OPENAI_API_BASE_URL=https://api.openai.com/v1  # Optional: Custom base URL if using a proxy
    OPENAI_API_MODEL=gpt-3.5-turbo  # Optional: Default model to use
    
    # Optional: Google AI Configuration
    GOOGLE_API_KEY=your_google_api_key
    GOOGLE_API_MODEL=gemini-pro  # Default Google AI model
    
    # Optional: Azure OpenAI Configuration
    AZURE_OPENAI_API_KEY=your_azure_openai_key
    AZURE_OPENAI_ENDPOINT=your_azure_endpoint
    AZURE_OPENAI_MODEL=your_azure_model_deployment_name
    
    # Optional: Anthropic Configuration
    ANTHROPIC_API_KEY=your_anthropic_key
    ANTHROPIC_API_MODEL=claude-2  # Default Anthropic model

    Note: Only the OpenAI configuration is required by default. Other providers are optional.

  4. Start the development server:

pnpm dev
  5. Open http://localhost:3000 in your browser to start chatting!

Environment Variables

Required Variables

  • OPENAI_API_KEY: Your OpenAI API key

Optional Variables

  • OPENAI_API_BASE_URL: Custom base URL for OpenAI API (useful for proxies)
  • OPENAI_API_MODEL: Default OpenAI model to use
  • GOOGLE_API_KEY: Google AI API key
  • GOOGLE_API_MODEL: Default Google AI model
  • AZURE_OPENAI_API_KEY: Azure OpenAI API key
  • AZURE_OPENAI_ENDPOINT: Azure OpenAI endpoint URL
  • AZURE_OPENAI_MODEL: Azure OpenAI model deployment name
  • ANTHROPIC_API_KEY: Anthropic API key
  • ANTHROPIC_API_MODEL: Default Anthropic model
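
For reference, a configuration helper could resolve these variables at startup. This is a hedged sketch; getOpenAIConfig is a hypothetical name, and the fallbacks mirror the defaults shown in the .env.local example above:

    // Hedged sketch: resolves the OpenAI settings documented above.
    // getOpenAIConfig is a hypothetical helper, not part of this project.
    function getOpenAIConfig() {
      const apiKey = process.env.OPENAI_API_KEY;
      if (!apiKey) {
        throw new Error("OPENAI_API_KEY is required; see the .env.local setup above");
      }
      return {
        apiKey,
        // Fallbacks mirror the optional defaults from the .env.local example.
        baseURL: process.env.OPENAI_API_BASE_URL ?? "https://api.openai.com/v1",
        model: process.env.OPENAI_API_MODEL ?? "gpt-3.5-turbo",
      };
    }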

Technology Stack

  • Next.js (React framework)
  • Node.js v18 or higher
  • pnpm (recommended) or npm for package management

Development

To run the development server:

pnpm dev

For production build:

pnpm build
pnpm start

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

[Add your license information here]

mcp-client-any-llm FAQ

How do I install the mcp-client-any-llm?
Clone the repository, install dependencies with pnpm or npm, and configure environment variables as per the README.
Which Node.js version is required?
Node.js version 18 or higher is required to run the client.
How do I configure API keys for LLM providers?
Set the required environment variables in a .env.local file, including keys for OpenAI, Google, or other supported providers.
Does the client support real-time streaming responses?
Yes, it supports real-time streaming of LLM responses for a smooth chat experience.
Can I save and access past conversations?
Yes, the client stores local conversation history for later access.
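
For a sense of what browser-local persistence can look like, here is a minimal sketch using localStorage; the storage key and message shape are illustrative assumptions, not the project's actual implementation:

    // Hypothetical localStorage-backed history; names are assumptions.
    type Message = { role: "user" | "assistant"; content: string };

    const HISTORY_KEY = "mcp-client-conversations";

    function saveConversation(id: string, messages: Message[]) {
      const all = JSON.parse(localStorage.getItem(HISTORY_KEY) ?? "{}");
      all[id] = messages;
      localStorage.setItem(HISTORY_KEY, JSON.stringify(all));
    }

    function loadConversation(id: string): Message[] {
      const all = JSON.parse(localStorage.getItem(HISTORY_KEY) ?? "{}");
      return all[id] ?? [];
    }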
Is markdown supported in chat messages?
Yes, the client supports markdown rendering with syntax highlighting.
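
In a React/Next.js client this is commonly implemented with a library such as react-markdown plus a highlighting plugin; whether this project uses those exact libraries is an assumption, and the sketch below only illustrates the pattern:

    // Hedged sketch using react-markdown and rehype-highlight; the
    // project's actual rendering libraries are not confirmed here.
    import ReactMarkdown from "react-markdown";
    import rehypeHighlight from "rehype-highlight";

    export function ChatMessage({ content }: { content: string }) {
      return (
        <ReactMarkdown rehypePlugins={[rehypeHighlight]}>{content}</ReactMarkdown>
      );
    }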
Does the client support dark mode?
Yes, it includes both dark and light mode UI options.
Can I use this client with any LLM provider?
The client supports multiple providers like OpenAI and Google, and can be extended to others via MCP.
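
As a purely illustrative sketch of how such a provider abstraction could be shaped (none of these names come from the project):

    // Illustrative provider interface; all names are assumptions.
    interface ChatProvider {
      readonly name: string;
      // Streams assistant tokens for the given conversation history.
      chat(messages: { role: string; content: string }[]): AsyncIterable<string>;
    }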