
knowledge-base-mcp-server

MCP.Pizza Chef: jeanibarz

The Knowledge Base MCP Server is a modular MCP server that enables listing and retrieving content from various knowledge bases. It provides structured access to knowledge repositories, allowing LLMs to query and fetch relevant information in real time. This server supports integration with multiple knowledge bases, facilitating enhanced context retrieval for AI workflows and applications.

Use This MCP Server To

  • List available knowledge bases for querying
  • Retrieve specific documents or articles from knowledge bases
  • Integrate knowledge base content into AI-driven workflows
  • Enable real-time access to organizational knowledge repositories
  • Support multi-knowledge base search and retrieval in LLM applications
  • Provide structured content retrieval for context-aware AI agents

README

Knowledge Base MCP Server

This MCP server provides tools for listing and retrieving content from different knowledge bases.


Setup Instructions

These instructions assume you have Node.js and npm installed on your system.

Installing via Smithery

To install Knowledge Base Server for Claude Desktop automatically via Smithery:

npx -y @smithery/cli install @jeanibarz/knowledge-base-mcp-server --client claude

Manual Installation

Prerequisites

  • Node.js (version 16 or higher)
  • npm (Node Package Manager)
  1. Clone the repository:

    git clone <repository_url>
    cd knowledge-base-mcp-server
  2. Install dependencies:

    npm install
  3. Configure environment variables:

    • The server requires the HUGGINGFACE_API_KEY environment variable to be set. This is the API key for the Hugging Face Inference API, which is used to generate embeddings for the knowledge base content. You can obtain a free API key from the Hugging Face website (https://huggingface.co/).
    • The server requires the KNOWLEDGE_BASES_ROOT_DIR environment variable to be set. This variable specifies the directory where the knowledge base subdirectories are located. If you don't set this variable, it will default to $HOME/knowledge_bases, where $HOME is the current user's home directory.
    • The server supports the FAISS_INDEX_PATH environment variable to specify the path to the FAISS index. If not set, it will default to $HOME/knowledge_bases/.faiss.
    • The server supports the HUGGINGFACE_MODEL_NAME environment variable to specify the Hugging Face model to use for generating embeddings. If not set, it will default to sentence-transformers/all-MiniLM-L6-v2.
    • You can set these environment variables in your .bashrc or .zshrc file, or directly in the MCP settings.
  4. Build the server:

    npm run build
  5. Add the server to the MCP settings:

    • Edit the cline_mcp_settings.json file located in your VS Code globalStorage settings directory (e.g., ~/.vscode-server/data/User/globalStorage/saoudrizwan.claude-dev/settings/).
    • Add the following configuration to the mcpServers object:
    "knowledge-base-mcp": {
      "command": "node",
      "args": [
        "/path/to/knowledge-base-mcp-server/build/index.js"
      ],
      "disabled": false,
      "autoApprove": [],
      "env": {
        "KNOWLEDGE_BASES_ROOT_DIR": "/path/to/knowledge_bases",
        "HUGGINGFACE_API_KEY": "YOUR_HUGGINGFACE_API_KEY"
      },
      "description": "Retrieves similar chunks from the knowledge base based on a query."
    },
    • Replace /path/to/knowledge-base-mcp-server with the actual path to the server directory.
    • Replace /path/to/knowledge_bases with the actual path to the knowledge bases directory.
  6. Create knowledge base directories:

    • Create subdirectories within the KNOWLEDGE_BASES_ROOT_DIR for each knowledge base (e.g., company, it_support, onboarding).
    • Place text files (e.g., .txt, .md) containing the knowledge base content within these subdirectories.
  • The server recursively reads all text files (e.g., .txt, .md) within the specified knowledge base subdirectories.
  • The server skips hidden files and directories (those starting with a .).
  • For each file, the server calculates the SHA256 hash and stores it in a file with the same name in a hidden .index subdirectory. This hash is used to determine if the file has been modified since the last indexing.
  • The file content is split into chunks using the MarkdownTextSplitter from langchain/text_splitter.
  • The content of each chunk is then added to a FAISS index, which is used for similarity search.
  • The FAISS index is automatically initialized when the server starts. It checks for changes in the knowledge base files and updates the index accordingly.
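The hash-based change detection described above can be sketched as follows. This is a minimal illustration using only the Node.js standard library; the helper names (needsReindex, recordHash) are assumptions based on the description, not the server's actual code.

```typescript
import { createHash } from "crypto";
import { mkdirSync, readFileSync, writeFileSync, existsSync } from "fs";
import { join, dirname, basename } from "path";

// Compute the SHA256 hash of a file's content.
function sha256OfFile(filePath: string): string {
  return createHash("sha256").update(readFileSync(filePath)).digest("hex");
}

// Path of the stored hash: a file with the same name inside a hidden
// `.index` subdirectory next to the source file.
function hashPathFor(filePath: string): string {
  return join(dirname(filePath), ".index", basename(filePath));
}

// A file needs re-indexing when no stored hash exists yet, or when the
// stored hash no longer matches the file's current content.
function needsReindex(filePath: string): boolean {
  const hashPath = hashPathFor(filePath);
  if (!existsSync(hashPath)) return true;
  return readFileSync(hashPath, "utf8") !== sha256OfFile(filePath);
}

// After (re-)indexing a file, record its current hash.
function recordHash(filePath: string): void {
  const hashPath = hashPathFor(filePath);
  mkdirSync(dirname(hashPath), { recursive: true });
  writeFileSync(hashPath, sha256OfFile(filePath));
}
```

On startup, a loop over all non-hidden text files would call needsReindex, re-chunk and re-embed only the changed files, then call recordHash.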

Usage

The server exposes two tools:

  • list_knowledge_bases: Lists the available knowledge bases.
  • retrieve_knowledge: Retrieves similar chunks from the knowledge base based on a query. If a knowledge base is specified, only that one is searched; otherwise, all available knowledge bases are considered. By default, at most 10 document chunks with a similarity score below a threshold of 2 are returned. A different threshold can be provided using the optional threshold parameter.

You can use these tools through the MCP interface.
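For example, a retrieve_knowledge request might look like the following. The query and knowledge_base argument names are illustrative (only threshold is documented above); check the server's tool schema for the exact parameter names.

```json
{
  "name": "retrieve_knowledge",
  "arguments": {
    "query": "How do I reset my VPN password?",
    "knowledge_base": "it_support",
    "threshold": 1.5
  }
}
```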

The retrieve_knowledge tool performs a semantic search using a FAISS index. The index is automatically updated when the server starts or when a file in a knowledge base is modified.
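The thresholded retrieval described above amounts to: embed the query, compute the distance from the query embedding to every indexed chunk, keep chunks whose distance falls below the threshold, and return the closest matches first. A minimal sketch with plain arrays, assuming FAISS's default squared-L2 distance (the embedding step itself is omitted):

```typescript
type Chunk = { content: string; source: string; embedding: number[] };

// Squared L2 distance between two vectors of equal length.
function l2(a: number[], b: number[]): number {
  return a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0);
}

// Return up to `k` chunks whose distance to the query embedding is
// below `threshold`, closest first (lower score = more similar).
function retrieve(
  queryEmbedding: number[],
  chunks: Chunk[],
  threshold = 2,
  k = 10,
): { chunk: Chunk; score: number }[] {
  return chunks
    .map((chunk) => ({ chunk, score: l2(queryEmbedding, chunk.embedding) }))
    .filter((r) => r.score < threshold)
    .sort((a, b) => a.score - b.score)
    .slice(0, k);
}
```

Note that a distance-based score means lower is better, which is why results are filtered to scores *below* the threshold rather than above it.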

The output of the retrieve_knowledge tool is a markdown formatted string with the following structure:

## Semantic Search Results

**Result 1:**

[Content of the most similar chunk]

**Source:**
```json
{
  "source": "[Path to the file containing the chunk]"
}
```

---

**Result 2:**

[Content of the second most similar chunk]

**Source:**
```json
{
  "source": "[Path to the file containing the chunk]"
}
```

> **Disclaimer:** The provided results might not all be relevant. Please cross-check the relevance of the information.

Each result includes the content of the matching chunk and the path to its source file.

knowledge-base-mcp-server FAQ

How do I install the Knowledge Base MCP Server?
You can install it via Smithery CLI using 'npx -y @smithery/cli install @jeanibarz/knowledge-base-mcp-server --client claude' or manually with Node.js and npm.
What prerequisites are needed to run this MCP server?
You need Node.js version 16 or higher and npm installed on your system.
Can this server connect to multiple types of knowledge bases?
Yes, it supports listing and retrieving content from different knowledge bases, enabling flexible integration.
Is this MCP server compatible with various LLM providers?
Yes, it is provider-agnostic and works with models like OpenAI, Claude, and Gemini.
How does this server enhance AI workflows?
By providing real-time, structured access to knowledge base content, it enables more informed and context-rich AI responses.
Can I customize which knowledge bases are accessible?
Yes, the server can be configured to connect to specific knowledge bases as needed.
Does the server support secure access to private knowledge bases?
Security depends on the configuration and access controls of the connected knowledge bases, which the server respects.
How do I update the server to the latest version?
Use npm or Smithery CLI to update the package to the latest release.