ai-meta-mcp-server

MCP.Pizza Chef: alxspiker

The ai-meta-mcp-server is a dynamic MCP server that empowers AI models to define, manage, and execute custom tools at runtime using a meta-function architecture. It supports multiple runtime environments including JavaScript, Python, and Shell, running all code in sandboxed environments for security. The server includes features like persistent tool storage, flexible tool registry management, and a human approval workflow to ensure safe operation. This enables AI to extend its capabilities dynamically while maintaining strict control and safety measures.

Use This MCP Server To

  • Create and execute custom AI tools dynamically at runtime
  • Run AI-defined scripts in JavaScript, Python, or Shell securely
  • Manage a registry of AI-generated tools with update and delete functions
  • Persist custom tool definitions across sessions for reuse
  • Implement human approval workflows for tool creation and execution
  • Isolate tool execution in sandboxes to prevent security risks

README

AI Meta MCP Server

A dynamic MCP server that allows AI models to create and execute their own custom tools through a meta-function architecture. This server provides a mechanism for AI to extend its own capabilities by defining custom functions at runtime.

Features

  • Dynamic Tool Creation: AI can define new tools with custom implementations
  • Multiple Runtime Environments: Support for JavaScript, Python, and Shell execution
  • Sandboxed Security: Tools run in isolated sandboxes for safety
  • Persistence: Store and load custom tool definitions between sessions
  • Flexible Tool Registry: Manage, list, update, and delete custom tools
  • Human Approval Flow: Requires explicit human approval for tool creation and execution

Security Considerations

⚠️ WARNING: This server allows for dynamic code execution. Use with caution and only in trusted environments.

  • All code executes in sandboxed environments
  • Human-in-the-loop approval required for tool creation and execution
  • Tool execution privileges configurable through environment variables
  • Audit logging for all operations

Installation

npm install ai-meta-mcp-server

Usage

Running the server

npx ai-meta-mcp-server

Configuration

Environment variables:

  • ALLOW_JS_EXECUTION: Enable JavaScript execution (default: true)
  • ALLOW_PYTHON_EXECUTION: Enable Python execution (default: false)
  • ALLOW_SHELL_EXECUTION: Enable Shell execution (default: false)
  • PERSIST_TOOLS: Save tools between sessions (default: true)
  • TOOLS_DB_PATH: Path to store tools database (default: "./tools.json")
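
For example, to launch the server with only JavaScript execution enabled and tools persisted to a custom path (`./my-tools.json` here is just an illustrative location):

```shell
# Python and Shell stay disabled unless explicitly opted in
ALLOW_JS_EXECUTION=true \
ALLOW_PYTHON_EXECUTION=false \
ALLOW_SHELL_EXECUTION=false \
PERSIST_TOOLS=true \
TOOLS_DB_PATH=./my-tools.json \
npx ai-meta-mcp-server
```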

Running with Claude Desktop

Add this to your claude_desktop_config.json:

{
  "mcpServers": {
    "ai-meta-mcp": {
      "command": "npx",
      "args": ["-y", "ai-meta-mcp-server"],
      "env": {
        "ALLOW_JS_EXECUTION": "true",
        "ALLOW_PYTHON_EXECUTION": "false",
        "ALLOW_SHELL_EXECUTION": "false"
      }
    }
  }
}

Tool Creation Example

In Claude Desktop, you can create a new tool like this:

Can you create a tool called "calculate_compound_interest" that computes compound interest given principal, rate, time, and compounding frequency?

Claude will use the define_function meta-tool to create your new tool, which becomes available for immediate use.
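
The implementation Claude generates for such a tool might look roughly like this (a hypothetical sketch; the exact function body the model produces will vary):

```javascript
// Hypothetical body for a "calculate_compound_interest" tool,
// using the standard formula A = P * (1 + r/n)^(n*t).
function calculateCompoundInterest(principal, rate, time, frequency) {
  const amount = principal * Math.pow(1 + rate / frequency, frequency * time);
  return { amount, interest: amount - principal };
}

// $1,000 at 5% annual interest, compounded monthly for 10 years
const result = calculateCompoundInterest(1000, 0.05, 10, 12);
console.log(result.amount.toFixed(2)); // ≈ 1647.01
```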

Architecture

The server implements the Model Context Protocol (MCP) and provides a meta-tool architecture that enables AI-driven function registration and execution within safe boundaries.
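
Conceptually, the meta-tool flow amounts to a registry that a define-style tool writes into and an execute step reads from. A simplified sketch with hypothetical names (the package's real internals, approval flow, and multi-runtime dispatch are not shown in this README):

```javascript
// Minimal sketch of a dynamic tool registry behind a define/execute pair.
const registry = new Map();

function defineFunction({ name, description, runtime, code }) {
  // The real server gates this step behind human approval.
  registry.set(name, { description, runtime, code });
  return `Tool "${name}" registered`;
}

function executeFunction(name, args) {
  const tool = registry.get(name);
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  // JS-only here; the real server also dispatches Python/Shell to sandboxes.
  const fn = new Function('args', tool.code);
  return fn(args);
}

defineFunction({
  name: 'double',
  description: 'Doubles a number',
  runtime: 'javascript',
  code: 'return args.x * 2;',
});
const out = executeFunction('double', { x: 21 });
```

Once registered, the tool is immediately callable by name, which is what lets the AI extend its own toolset within a single session.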

License

MIT

ai-meta-mcp-server FAQ

How does the ai-meta-mcp-server ensure security when running dynamic code?
It runs all custom tools in isolated sandboxed environments and requires explicit human approval for tool creation and execution to prevent unauthorized or harmful actions.

Can I use multiple programming languages for custom tools?
Yes, the server supports JavaScript, Python, and Shell environments for executing custom tools.

How are custom tools managed within the server?
The server provides a flexible tool registry allowing you to list, update, and delete custom tools as needed.

Is it possible to save custom tools for future sessions?
Yes, the server supports persistence, enabling storage and loading of custom tool definitions between sessions.

What precautions should I take when using this server?
Use the server only in trusted environments due to the risks of dynamic code execution, and always utilize the human approval flow for safety.

Does the server support integration with multiple LLM providers?
While the server is model-agnostic, it is designed to work with various LLMs including OpenAI, Claude, and Gemini by exposing dynamic tool capabilities.

How does the human approval flow work?
Before any custom tool is created or executed, explicit human approval is required to ensure control over dynamic code execution.

Can this server be used to extend AI capabilities on the fly?
Yes, it allows AI models to dynamically extend their functionality by creating and running new tools at runtime securely.