mcp-client-langchain-py

MCP.Pizza Chef: hideya

mcp-client-langchain-py is a Python-based CLI client implementing the Model Context Protocol (MCP) using the LangChain ReAct Agent framework. It simplifies interaction with multiple MCP servers by converting their tools into LangChain-compatible formats, enabling parallel initialization and efficient tool orchestration. Supporting LLMs from Anthropic, OpenAI, and Groq, this client facilitates advanced AI workflows and real-time context integration. A TypeScript counterpart is also available, making it versatile for different development environments.

Use This MCP client To

  • Integrate multiple MCP servers into LangChain workflows
  • Enable parallel tool initialization for efficient processing
  • Use the LangChain ReAct Agent to orchestrate MCP tools
  • Leverage a Python CLI for quick MCP client deployment
  • Support multi-provider LLMs such as Anthropic, OpenAI, and Groq
  • Convert MCP server tools to LangChain-compatible tools

README

MCP Client Using LangChain / Python

License: MIT

This simple Model Context Protocol (MCP) client demonstrates the use of MCP server tools by a LangChain ReAct agent.

It leverages the utility function convert_mcp_to_langchain_tools() from langchain_mcp_tools.
This function handles the parallel initialization of multiple specified MCP servers and converts their available tools into a list of LangChain-compatible tools (List[BaseTool]).
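For illustration, here is a minimal sketch of how this typically fits together, assuming the (tools, cleanup) return shape documented by langchain_mcp_tools; the server definitions and model name below are illustrative, not taken from this repo's config:

    import asyncio

    from langchain_anthropic import ChatAnthropic
    from langchain_mcp_tools import convert_mcp_to_langchain_tools
    from langgraph.prebuilt import create_react_agent

    async def main() -> None:
        # Server definitions mirror the mcp_servers section of
        # llm_mcp_config.json5 (names and arguments here are illustrative).
        mcp_servers = {
            "fetch": {"command": "uvx", "args": ["mcp-server-fetch"]},
            "filesystem": {
                "command": "npx",
                "args": ["-y", "@modelcontextprotocol/server-filesystem", "."],
            },
        }

        # Initializes all listed servers in parallel and returns
        # LangChain-compatible tools plus a cleanup coroutine.
        tools, cleanup = await convert_mcp_to_langchain_tools(mcp_servers)
        try:
            llm = ChatAnthropic(model="claude-3-5-haiku-latest")
            agent = create_react_agent(llm, tools)
            result = await agent.ainvoke(
                {"messages": [("user", "Summarize ./README.md")]}
            )
            print(result["messages"][-1].content)
        finally:
            await cleanup()

    asyncio.run(main())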

LLMs from Anthropic, OpenAI, and Groq are currently supported.

A TypeScript version of this MCP client is available here.

Prerequisites

  • Python 3.11+
  • [optional] uv (uvx) installed to run Python package-based MCP servers
  • [optional] npm 7+ (npx) to run Node.js package-based MCP servers
  • API keys from Anthropic, OpenAI, and/or Groq as needed

Setup

  1. Install dependencies:

    make install
  2. Setup API keys:

    cp .env.template .env
    • Update .env as needed.
    • .gitignore is configured to ignore .env to prevent accidental commits of the credentials.
  3. Configure the LLM and MCP server settings in llm_mcp_config.json5 as needed.

    • The MCP server configuration format follows the same structure as Claude for Desktop, with one difference: the key name mcpServers has been changed to mcp_servers to follow the snake_case convention commonly used in JSON configuration files.
    • The file format is JSON5, which allows comments and trailing commas.
    • The format is further extended to replace ${...} notations with the values of the corresponding environment variables.
    • Keep all credentials and private info in the .env file and refer to them with the ${...} notation as needed (see the sketch after this list).
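    For reference, a hedged sketch of what such a configuration might look like; the mcp_servers key and the ${...} expansion are described above, while the llm key names are assumptions to be checked against the bundled llm_mcp_config.json5:

    {
      // LLM selection; these key names are assumptions, not the
      // authoritative schema.
      llm: {
        model_provider: "anthropic",
        model: "claude-3-5-haiku-latest",
      },

      // Same shape as Claude for Desktop's config, but with the
      // snake_case key name mcp_servers.
      mcp_servers: {
        fetch: {
          command: "uvx",
          args: ["mcp-server-fetch"],
        },
        "brave-search": {
          command: "npx",
          args: ["-y", "@modelcontextprotocol/server-brave-search"],
          env: {
            // ${...} is replaced with the value from the .env file.
            BRAVE_API_KEY: "${BRAVE_API_KEY}",  // trailing commas are OK in JSON5
          },
        },
      },
    }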

Usage

Run the app:

make start

It takes a while on the first run.

Run in verbose mode:

make start-v

See command-line options:

make start-h

At the prompt, you can simply press Enter to use example queries that invoke MCP server tools.

Example queries can be configured in llm_mcp_config.json5 (see the sketch below).
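For example, the canned prompts might be declared with a list-valued key like the following; the key name example_queries is an assumption based on the app's behavior, so check the shipped file for the exact schema:

    {
      // Hypothetical key name: pressing Enter at the prompt runs these.
      example_queries: [
        "Summarize the file ./LICENSE",
        "Fetch bbc.com and report the headlines",
      ],
    }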

mcp-client-langchain-py FAQ

How do I install mcp-client-langchain-py?
Clone the repository and, with Python 3.11+ available, run make install; this installs the dependencies, including langchain-mcp-tools.
Which LLM providers are supported by mcp-client-langchain-py?
It supports Anthropic, OpenAI, and Groq LLMs for versatile AI model integration.
Can I use mcp-client-langchain-py with multiple MCP servers simultaneously?
Yes, it supports parallel initialization and management of multiple MCP servers.
Is there a non-Python version of this MCP client?
Yes, a TypeScript version is available for developers preferring that environment.
What is the role of convert_mcp_to_langchain_tools()?
It initializes the configured MCP servers in parallel and converts their tools into a list of LangChain-compatible tools (List[BaseTool]).
What Python version is required?
Python 3.11 or higher is required to run mcp-client-langchain-py.
Does mcp-client-langchain-py support asynchronous operations?
Yes. MCP servers are initialized in parallel, and the converted tools build on LangChain's asynchronous tool handling where applicable.
How do I contribute or report issues?
Contributions and issues can be managed via the GitHub repository linked in the project documentation.