neuron-ai

MCP.Pizza Chef: inspector-apm

Neuron-ai is an open source PHP client framework designed to create full-featured AI agents powered by Inspector.dev. It supports multiple LLM providers, integrates MCP server connectors, and enables advanced features like RAG systems, structured output, and observability. Neuron-ai facilitates building intelligent agents that interact with tools and APIs in real-time, leveraging PHP 8.1+ environments for scalable AI workflows.

Use This MCP client To

  • Build AI agents that interact with multiple LLM providers
  • Integrate MCP servers for real-time context and tool access
  • Implement retrieval-augmented generation (RAG) systems in PHP
  • Create conversational agents with structured output capabilities
  • Monitor and observe AI agent interactions and performance
  • Develop AI-powered workflows using PHP and the MCP protocol
  • Connect AI agents to external APIs and tools via MCP
  • Rapidly prototype AI assistants with PHP and open source tools

README


Before moving on, support the community by giving the project a GitHub star ⭐️. Thank you!

Requirements

  • PHP: ^8.1

Official documentation

Go to the official documentation

Forum

You can post questions and feedback on the Inspector Forum.

Examples

Install the latest version of the package:

composer require inspector-apm/neuron-ai

Neuron provides the Agent class you can extend to inherit the main features of the framework and create fully functional agents. This class automatically manages advanced mechanisms for you, such as memory, tools, and function calls, up to full RAG systems. You can go deeper into these aspects in the documentation. In the meantime, let's create a first agent by extending the NeuronAI\Agent class:

use NeuronAI\Agent;
use NeuronAI\SystemPrompt;
use NeuronAI\Providers\AIProviderInterface;
use NeuronAI\Providers\Anthropic\Anthropic;

class YouTubeAgent extends Agent
{
    public function provider(): AIProviderInterface
    {
        return new Anthropic(
            key: 'ANTHROPIC_API_KEY',
            model: 'ANTHROPIC_MODEL',
        );
    }

    public function instructions(): string
    {
        return new SystemPrompt(
            background: ["You are an AI Agent specialized in writing YouTube video summaries."],
            steps: [
                "Get the url of a YouTube video, or ask the user to provide one.",
                "Use the tools you have available to retrieve the transcription of the video.",
                "Write the summary.",
            ],
            output: [
                "Write a summary in a paragraph without using lists. Use just fluent text.",
                "After the summary add a list of three sentences as the three most important take away from the video.",
            ]
        );
    }
}

The SystemPrompt class is designed to take your base instructions and build a consistent prompt for the underlying model, reducing the effort required for prompt engineering.

Send a prompt to the agent to get a response from the underlying LLM:

use NeuronAI\Chat\Messages\UserMessage;

$agent = YouTubeAgent::make();

$response = $agent->run(new UserMessage("Hi, I'm Valerio. Who are you?"));
echo $response->getContent();
// I'm a friendly YouTube assistant to help you summarize videos.


$response = $agent->run(
    new UserMessage("Do you know my name?")
);
echo $response->getContent();
// Your name is Valerio, as you said in your introduction.

As you can see in the example above, the Agent automatically has memory of the ongoing conversation. Learn more about memory in the documentation.

With Neuron you can switch between LLM providers with just one line of code, without any impact on your agent implementation. See the documentation for the full list of supported providers.
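As a sketch, switching providers only means returning a different implementation from provider(); the OpenAI class name and constructor arguments below are assumptions based on the pattern shown above, so verify the exact namespace and parameters in the documentation:

```php
<?php

use NeuronAI\Agent;
use NeuronAI\Providers\AIProviderInterface;
// Assumed provider class; check the documentation for the exact namespace.
use NeuronAI\Providers\OpenAI\OpenAI;

class YouTubeAgent extends Agent
{
    public function provider(): AIProviderInterface
    {
        // The only line that changes when switching providers.
        return new OpenAI(
            key: 'OPENAI_API_KEY',
            model: 'OPENAI_MODEL',
        );
    }

    // instructions(), tools(), and the rest of the agent stay unchanged.
}
```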

You can add the ability to perform concrete tasks to your Agent with an array of Tool:

use NeuronAI\Agent;
use NeuronAI\SystemPrompt;
use NeuronAI\Providers\AIProviderInterface;
use NeuronAI\Providers\Anthropic\Anthropic;
use NeuronAI\Tools\Tool;
use NeuronAI\Tools\ToolProperty;

class YouTubeAgent extends Agent
{
    public function provider(): AIProviderInterface
    {
        return new Anthropic(
            key: 'ANTHROPIC_API_KEY',
            model: 'ANTHROPIC_MODEL',
        );
    }

    public function instructions(): string
    {
        return new SystemPrompt(
            background: ["You are an AI Agent specialized in writing YouTube video summaries."],
            steps: [
                "Get the url of a YouTube video, or ask the user to provide one.",
                "Use the tools you have available to retrieve the transcription of the video.",
                "Write the summary.",
            ],
            output: [
                "Write a summary in a paragraph without using lists. Use just fluent text.",
                "After the summary add a list of three sentences as the three most important take away from the video.",
            ]
        );
    }

    public function tools(): array
    {
        return [
            Tool::make(
                'get_transcription',
                'Retrieve the transcription of a YouTube video.',
            )->addProperty(
                new ToolProperty(
                    name: 'video_url',
                    type: 'string',
                    description: 'The URL of the YouTube video.',
                    required: true
                )
            )->setCallable(function (string $video_url) {
                // ... retrieve the video transcription
            })
        ];
    }
}

Learn more about Tools in the documentation.
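With the tool registered, running the agent works the same as before; the model decides on its own when to invoke get_transcription. A minimal usage sketch (the video URL is a placeholder, and the UserMessage import assumes the chat message namespace used throughout these examples):

```php
<?php

use NeuronAI\Chat\Messages\UserMessage;

$agent = YouTubeAgent::make();

// The model can choose to call the get_transcription tool,
// passing the URL it extracts from the message.
$response = $agent->run(
    new UserMessage("Can you summarize this video? https://www.youtube.com/watch?v=dQw4w9WgXcQ")
);

echo $response->getContent();
```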

Instead of implementing tools manually, you can connect tools exposed by an MCP server with the McpConnector component:

use NeuronAI\Agent;
use NeuronAI\MCP\McpConnector;
use NeuronAI\Providers\AIProviderInterface;
use NeuronAI\Providers\Anthropic\Anthropic;
use NeuronAI\Tools\Tool;
use NeuronAI\Tools\ToolProperty;

class YouTubeAgent extends Agent
{
    public function provider(): AIProviderInterface
    {
        return new Anthropic(
            key: 'ANTHROPIC_API_KEY',
            model: 'ANTHROPIC_MODEL',
        );
    }

    public function instructions(): string
    {
        return new SystemPrompt(
            background: ["Act as an expert of SEO (Search Engine Optimization)."],
            steps: [
                "Analyze a text of an article.",
                "Provide suggestions on how the content can be improved to get a better rank on Google search."
            ],
            output: ["Structure your analysis in sections. One for each suggestion."]
        );
    }

    public function tools(): array
    {
        return [
            // Connect an MCP server
            ...McpConnector::make([
                'command' => 'npx',
                'args' => ['-y', '@modelcontextprotocol/server-everything'],
            ])->tools(),

            // Implement your custom tools
            Tool::make(
                'get_transcription',
                'Retrieve the transcription of a YouTube video.',
            )->addProperty(
                new ToolProperty(
                    name: 'video_url',
                    type: 'string',
                    description: 'The URL of the YouTube video.',
                    required: true
                )
            )->setCallable(function (string $video_url) {
                // ... retrieve the video transcription
            })
        ];
    }
}

Learn more about MCP connector in the documentation.

For RAG use cases, you must extend the NeuronAI\RAG\RAG class instead of the default Agent class.

To create a RAG you need to attach some additional components besides the AI provider, such as a vector store and an embeddings provider.

Here is an example of a RAG implementation:

use NeuronAI\Providers\AIProviderInterface;
use NeuronAI\Providers\Anthropic\Anthropic;
use NeuronAI\RAG\Embeddings\EmbeddingsProviderInterface;
use NeuronAI\RAG\Embeddings\VoyageEmbeddingProvider;
use NeuronAI\RAG\RAG;
use NeuronAI\RAG\VectorStore\PineconeVectorStore;
use NeuronAI\RAG\VectorStore\VectorStoreInterface;

class MyChatBot extends RAG
{
    public function provider(): AIProviderInterface
    {
        return new Anthropic(
            key: 'ANTHROPIC_API_KEY',
            model: 'ANTHROPIC_MODEL',
        );
    }

    public function embeddings(): EmbeddingsProviderInterface
    {
        return new VoyageEmbeddingProvider(
            key: 'VOYAGE_API_KEY',
            model: 'VOYAGE_MODEL'
        );
    }

    public function vectorStore(): VectorStoreInterface
    {
        return new PineconeVectorStore(
            key: 'PINECONE_API_KEY',
            indexUrl: 'PINECONE_INDEX_URL'
        );
    }
}
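Querying the RAG then looks like a normal agent call: the question is embedded, similar documents are fetched from the vector store, and the LLM answers using that context. A minimal sketch (the answer() method name and the example question are assumptions; verify the RAG API in the documentation):

```php
<?php

use NeuronAI\Chat\Messages\UserMessage;

$chatBot = MyChatBot::make();

// Retrieval happens automatically before the model generates a reply.
$response = $chatBot->answer(
    new UserMessage("How do I configure the billing module?")
);

echo $response->getContent();
```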

Learn more about RAG in the documentation.

For many applications, such as chatbots, Agents need to respond to users directly in natural language. However, there are scenarios where we need Agents to understand natural language, but output in a structured format.

One common use case is extracting data from text to insert into a database or pass to some other downstream system. This guide covers a few strategies for getting structured output from the agent.

use NeuronAI\Chat\Messages\UserMessage;
use NeuronAI\StructuredOutput\SchemaProperty;

// Define the output structure with a PHP class, including validation constraints.
class Person
{
    #[SchemaProperty(description: 'The user name')]
    public string $name;

    #[SchemaProperty(description: 'What the user loves to eat')]
    public string $preference;
}


// Talk to the agent requiring the structured output
$person = MyAgent::make()->structured(
    new UserMessage("I'm John and I like pizza!"),
    Person::class
);

echo $person->name . ' likes ' . $person->preference;
// John likes pizza

Learn more about Structured Output in the documentation.

Neuron offers a built-in integration with Inspector.dev to monitor the performance of your agents and detect unexpected errors in real time.

You have to install the Inspector package based on your development environment. We provide integration packages for PHP, Laravel, Symfony, CodeIgniter, Drupal.

Attach the AgentMonitoring component to the agent to monitor the internal execution timeline in the Inspector dashboard. If the agent throws an error, you will be alerted in real time. You can connect several notification channels like email, Slack, Discord, Telegram, and more. Here is a code example in a legacy PHP script:

use NeuronAI\Observability\AgentMonitoring;
use Inspector\Configuration;
use Inspector\Inspector;

// The Inspector instance in your application
$inspector = new Inspector(
    new Configuration('YOUR-INGESTION-KEY')
);

// Attach monitoring to the Agent
$response = MyAgent::make()
    ->observe(
        new AgentMonitoring($inspector)
    )
    ->chat(...);

If you use a framework like Laravel, Symfony, or CodeIgniter, the connection is even easier, since you already have the Inspector instance in the container.

Learn more about Observability in the documentation.

Go to the official documentation

neuron-ai FAQ

How do I install neuron-ai?
Install neuron-ai via Composer requiring PHP 8.1 or higher, following the official documentation at neuron.inspector.dev.
Which LLM providers does neuron-ai support?
Neuron-ai supports multiple LLM providers including OpenAI, Claude, and Gemini for flexible AI model integration.
Can neuron-ai connect to MCP servers?
Yes, neuron-ai includes MCP server connectors to enable real-time context sharing and tool invocation.
Does neuron-ai support retrieval-augmented generation (RAG)?
Yes, it provides built-in support for implementing RAG systems to enhance AI responses with external knowledge.
How can I create conversational agents with neuron-ai?
Use the framework's agent creation APIs to build interactive chatbots with structured output and tool integration.
Is neuron-ai suitable for production environments?
Yes, neuron-ai is designed for scalable PHP applications and includes observability features for monitoring agent performance.
Where can I get help or provide feedback?
You can post questions and feedback on the Inspector Forum linked in the official documentation.
What PHP version is required for neuron-ai?
Neuron-ai requires PHP version 8.1 or higher to run.