
litemcp

Author: yanmxa

litemcp is a minimal, lightweight MCP client designed to simplify the integration of AI SDKs such as LangChain and the OpenAI Agent SDK into MCP projects. It emphasizes simplicity, flexibility, and minimal dependencies, giving developers a streamlined interface for connecting MCP servers to multiple LLM runtimes.

Use This MCP Client To

  • Integrate MCP servers with the LangChain SDK for AI workflows
  • Connect MCP tools to OpenAI Agent SDK environments
  • Embed MCP server tools into custom LLM runtime setups
  • Rapidly prototype AI applications with minimal client overhead
  • Simplify SDK adoption in MCP-based AI projects
  • Facilitate multi-SDK interoperability within MCP clients

README

✨ litemcp

A minimal, lightweight client designed to simplify adopting AI SDKs in MCP projects.

litemcp enables rapid and intuitive integration of various AI SDKs (e.g., LangChain, Agent SDK) into your MCP projects, emphasizing simplicity, flexibility, and minimal dependencies.

🌟 Key Features

  • Simplicity: Streamlined interfaces ensure easy integration.
  • Flexibility: Quickly adopt diverse SDKs with minimal effort.
  • Lightweight: Designed with minimal dependencies to maximize clarity and performance.

🛠 Installation

Install via pip:

pip install litemcp

🚀 Quick Start

litemcp allows you to integrate tools from an MCP server into various LLM runtimes, including the OpenAI Agent SDK, LangChain, and direct OpenAI API calls.

Below are three examples showing how to use litemcp in different contexts:
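All three snippets assume that MCPServerManager is litemcp's top-level export, that it is used as an async context manager, and that config (or sys.argv[1]) is the path to an MCP configuration file following the schema described later in this README.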

✅ OpenAI Agent SDK Integration

import asyncio
import sys

# Agent and Runner come from the OpenAI Agents SDK
from agents import Agent, Runner

from litemcp import MCPServerManager


async def main():
    async with MCPServerManager(sys.argv[1]) as server_manager:
        # Expose the MCP server's tools in the Agent SDK's tool format
        mcp_server_tools = await server_manager.agent_sdk_tools()

        agent = Agent(
            name="assistant",
            instructions="You are an AI assistant.",
            tools=mcp_server_tools,
        )

        result = await Runner.run(agent, "List all the kubernetes clusters")
        print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())

✅ LangChain Integration

import asyncio
import sys
from typing import List

from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.tools import BaseTool
from langchain_openai import ChatOpenAI

from litemcp import MCPServerManager


async def main(config):
    chat = ChatOpenAI(model="gpt-3.5-turbo-0125")
    async with MCPServerManager(config) as server_manager:

        # Bind the MCP server's tools to the chat model
        tools: List[BaseTool] = await server_manager.langchain_tools()
        chat_with_tools = chat.bind_tools(tools, tool_choice="any")

        messages = [
            SystemMessage(content="You're a helpful assistant"),
            HumanMessage(content="List the dirs in the /Users"),
        ]
        tool_calls = chat_with_tools.invoke(messages).tool_calls

        # Run each tool call the model requested
        tool_map = {tool.name: tool for tool in tools}
        for tool_call in tool_calls:
            selected_tool = tool_map[tool_call["name"].lower()]
            tool_output = await selected_tool.ainvoke(tool_call["args"])
            print(tool_output)


if __name__ == "__main__":
    asyncio.run(main(sys.argv[1]))
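The loop above prints each tool's raw output. If you instead want the model to turn those results into a final answer, the standard LangChain pattern (not litemcp-specific) is to append the assistant message plus one ToolMessage per call and invoke the model again; a minimal sketch, reusing tool_map from the example:

from langchain_core.messages import ToolMessage

# Record the assistant turn, run its tool calls, then let the model summarize.
ai_message = chat_with_tools.invoke(messages)
messages.append(ai_message)

for tool_call in ai_message.tool_calls:
    selected_tool = tool_map[tool_call["name"].lower()]
    tool_output = await selected_tool.ainvoke(tool_call["args"])
    messages.append(ToolMessage(content=str(tool_output), tool_call_id=tool_call["id"]))

# The unbound model now sees the tool results and can answer in plain text
print(chat.invoke(messages).content)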

✅ Direct OpenAI API Integration

import asyncio
import sys

from openai import OpenAI

from litemcp import MCPServerManager


async def main(config):
    client = OpenAI()

    async with MCPServerManager(config) as server_manager:
        # OpenAI-compatible JSON schemas for the MCP server's tools
        schemas = await server_manager.schemas()

        completion = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": "List the dirs in the /Users"}],
            tools=schemas,
        )

        print(completion.choices[0].message.tool_calls)

        # Execute the tool the model selected
        tool_call = completion.choices[0].message.tool_calls[0]
        result = await server_manager.tool_call(
            tool_call.function.name, tool_call.function.arguments
        )
        print(result.content[0].text)


if __name__ == "__main__":
    asyncio.run(main(sys.argv[1]))
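To let the model produce a final answer from the tool output, send the result back in a follow-up request. The message shapes below follow the standard OpenAI tool-calling protocol rather than anything litemcp-specific; a minimal sketch continuing from the example:

# Append the assistant turn and a matching "tool" message, then ask again.
follow_up = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "List the dirs in the /Users"},
        completion.choices[0].message,  # assistant turn containing the tool call
        {
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": result.content[0].text,
        },
    ],
)
print(follow_up.choices[0].message.content)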

πŸ” Tool Call Validator(Optional)

You can add a custom validation function to control MCP tool calls. This helps prevent server tools from accessing your system without permission, for example by inserting a human-in-the-loop confirmation step.

1. Define the Validator

import sys
from typing import Optional

from rich.console import Console

console = Console()


def applier_validator(func_args) -> Optional[str]:
    """
    Return:
    - None: allow the tool call
    - str : block the tool call and return this message instead
    """
    # Assumes func_args is a dict of tool arguments with a "cluster" key (hypothetical)
    cluster = func_args.get("cluster", "unknown")
    user_input = console.input(
        f"  🛠  Cluster - [yellow]{cluster}[/yellow] ⎈ Proceed with this YAML? (yes/no): "
    ).strip().lower()

    if user_input in {"yes", "y"}:
        return None
    if user_input in {"no", "n"}:
        console.print("[red]Exiting process.[/red]")
        sys.exit(0)
    return user_input
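Validators need not be interactive. The same contract (return None to allow, a string to block) also supports purely programmatic policies; a minimal sketch, with a hypothetical "command" argument key:

def readonly_validator(func_args) -> Optional[str]:
    # Block anything that looks destructive; "command" is a hypothetical key
    command = str(func_args.get("command", ""))
    if "delete" in command or "rm -rf" in command:
        return "Blocked by policy: destructive commands are not allowed."
    return None  # allow the tool call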

2. Register the Validator with the MCP Server

async with MCPServerManager(sys.argv[1]) as server_manager:
    # Guard the tool named "yaml_applier" with the validator defined above
    server_manager.register_validator("yaml_applier", applier_validator)

    mcp_server_tools = await server_manager.agent_sdk_tools()

    engineer = Agent(...)

📖 MCP Configuration Schema

Configure your MCP environment; individual servers can be enabled or disabled, and specific tools excluded:

{
  "mcpServers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    },
    "youtube": {
      "command": "npx",
      "args": ["-y", "github:anaisbetts/mcp-youtube"],
      "exclude_tools": ["..."]
    },
    "mcp-server-commands": {
      "command": "npx",
      "args": ["mcp-server-commands"],
      "requires_confirmation": [
        "run_command",
        "run_script"
      ],
      "enabled": false
    },
     "multicluster-mcp-server": {
      "command": "node",
      "args": [".../multicluster-mcp-server/build/index.js"],
      "enabled": false
    }
  }
}
  • Use "enabled": true/false to activate or deactivate servers.
  • Use "exclude_tools" to omit unnecessary tools from the current MCP server.

📖 Documentation

Detailed documentation coming soon!

📒 Contributing

Contributions and suggestions are welcome! Please open an issue or submit a pull request.

📜 License

litemcp is available under the MIT License.

litemcp FAQ

How do I install litemcp?
You can install litemcp easily via pip using the command `pip install litemcp`.
What AI SDKs does litemcp support?
litemcp integrates with popular AI SDKs such as LangChain and the OpenAI Agent SDK, supports direct OpenAI API calls, and can be extended to others.
Is litemcp suitable for production environments?
Yes, litemcp is designed to be lightweight and minimal, making it suitable for both prototyping and production use.
Does litemcp add significant dependencies to my project?
No, litemcp is designed with minimal dependencies to keep your project lightweight and maintainable.
Can litemcp be used with multiple LLM providers?
Yes. Through its LangChain integration, litemcp can be used with any LangChain-supported chat model, including OpenAI, Anthropic Claude, and Google Gemini.
How does litemcp simplify SDK adoption?
By providing streamlined interfaces and flexible integration patterns, litemcp reduces the complexity of connecting MCP servers to different AI SDKs.
Is litemcp asynchronous or synchronous?
litemcp is asynchronous; its interfaces are awaited inside an asyncio event loop, as the Quick Start examples show.
Where can I find examples to get started with litemcp?
The GitHub repository includes quick start examples demonstrating integration with OpenAI Agent SDK, LangChain, and direct API usage.