A Model Context Protocol (MCP) implementation for secure text embeddings with privacy-preserving features using the Mirror SDK.
The Secure Embedding MCP Server provides a robust interface for processing text data at various security levels while generating embeddings for semantic search and analysis. It leverages the Mirror SDK for advanced security features, including:
- Format-preserving encryption (FPE) for sensitive entities
- Vector encryption for secure embeddings
- Role-based access control (RBAC) for fine-grained security policies
- Entity detection for PII and sensitive information
- Unified Text Processing: Single entry point for various text operations with appropriate security measures
- Multiple Operation Modes:
  - embed: Generate text embeddings
  - secure: Apply security measures to text and embeddings
  - analyze: Detect and analyze sensitive information
  - mask: Anonymize sensitive entities
  - auto: Automatically determine the appropriate operation
- Configurable Security Levels:
  - none: No security measures
  - low: Basic vector encryption
  - medium: Entity encryption with FPE
  - high: Full encryption with RBAC
  - auto: Automatically determine the security level based on content
- Natural Language Interface: Process requests in natural language
- Batch Processing: Handle multiple texts efficiently
- Semantic Search: Search across documents using embeddings
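To make the operation modes concrete, here is a minimal sketch of how such a dispatcher could be structured. All function names and the auto-detection heuristic are illustrative assumptions, not the server's actual API:

```python
# Illustrative sketch of dispatching the server's operation modes.
# None of these function names come from the actual server code;
# each real operation would call the embedding model or Mirror SDK.

def embed(text):
    return {"operation": "embed", "vector_dim": 768}  # stubbed embedding

def secure(text):
    return {"operation": "secure"}   # stubbed encryption

def analyze(text):
    return {"operation": "analyze"}  # stubbed entity analysis

def mask(text):
    return {"operation": "mask"}     # stubbed anonymization

OPERATIONS = {"embed": embed, "secure": secure, "analyze": analyze, "mask": mask}

def process(text, operation="auto"):
    if operation == "auto":
        # naive stand-in heuristic: secure anything that looks sensitive
        operation = "secure" if "ssn" in text.lower() else "embed"
    return OPERATIONS[operation](text)

print(process("hello world")["operation"])            # auto -> embed
print(process("my SSN is 123-45-6789")["operation"])  # auto -> secure
```

The real server's auto mode uses entity detection rather than a keyword check, but the dispatch shape is the same.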
- Python 3.10+
- Mirror SDK
- LangChain with Hugging Face integration
- MCP Server framework
- Claude Desktop (for integration)
- Visit the Mirror Platform
- Click on "Sign Up" or "Register"
- Fill in your details and create an account
- Verify your email address
- Log in to your Mirror Platform account
- Navigate to the "API Keys" section in your dashboard
- Click "Generate New Key"
- Save both the API Key and Secret securely
- These keys will be used in your environment variables or configuration file
This implementation demonstrates a subset of Mirror's capabilities. For full enterprise features including:
- Advanced encryption algorithms
- Custom security policies
- Enterprise-grade RBAC
- Advanced entity detection
- Custom model integration
- Dedicated support
Please contact our team at
uv is a fast Python package installer and resolver that we'll use for our environment setup.
MacOS/Linux:

```shell
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Windows:

```powershell
# Using PowerShell
irm https://astral.sh/uv/install.ps1 | iex
```

Make sure to restart your terminal afterwards to ensure that the uv command gets picked up.
We provide two automatic installation scripts for different operating systems:
Run the following command in PowerShell or Command Prompt:
```shell
.\setup_claude_config.bat
```

This script will:
- Install uv if not present
- Set up the virtual environment
- Install required dependencies
- Configure Claude Desktop integration
- Create necessary configuration files
Run the following command in your terminal:
```shell
chmod +x setup_claude_config.sh
./setup_claude_config.sh
```

This script will:
- Check for required dependencies
- Install uv if not present
- Set up the virtual environment
- Install required dependencies
- Configure Claude Desktop integration
- Create necessary configuration files
Both scripts will create a log file (setup_log.txt) in the project directory for troubleshooting purposes.
If you prefer to install manually or if you already have the project cloned:
- Clone the repository:
```shell
git clone https://github.com/yourusername/mirror-vectax-mcp-server.git
cd mirror-vectax-mcp-server
```

- Set up a virtual environment with uv:

```shell
uv venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
```

- Install the MCP CLI:

```shell
uv add "mcp[cli]" httpx
```

- Install dependencies:

```shell
uv add -r requirements.txt
```

- Install the Mirror SDK:
Download the Mirror SDK from the platform and copy the .whl files into the dist folder. Replace <version> with the downloaded version.

```shell
uv add .\dist\mirror_sdk-<version>.whl
uv add .\dist\mirror_enc-<version>.whl
```

- Set up environment variables:
```shell
export MIRROR_API_KEY="your-mirror-api-key"
export MIRROR_SECRET="your-mirror-secret"
export MIRROR_SERVER_URL="https://your-mirror-server-url/v1"
export EMBEDDING_MODEL="nomic-ai/nomic-embed-text-v1.5"  # Optional
export EMBEDDING_DEVICE="cpu"  # Or "cuda" for GPU acceleration
```

- Alternatively, create a configuration file secure_search_config.json with the following content:
```json
{
  "api_key": "your-mirror-api-key",
  "secret": "your-mirror-secret",
  "server_url": "https://your-mirror-server-url/v1",
  "policy_eval_enabled": false,
  "app_policy": {
    "roles": ["admin", "researcher", "user", "analyst"],
    "groups": ["ai_team", "ml_team", "nlp_team"],
    "departments": ["research", "engineering", "IT"]
  }
}
```

- Download Claude Desktop from Anthropic's website
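Since credentials can come from either the environment variables or the JSON file, a small loader can resolve them in one place. This is a hypothetical helper for illustration (`load_config` is not part of the server's actual code), assuming the config file takes precedence when present:

```python
# Illustrative config loader: prefers secure_search_config.json, falls
# back to the MIRROR_* environment variables when the file is absent.
import json
import os

def load_config(path="secure_search_config.json"):
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {
        "api_key": os.environ.get("MIRROR_API_KEY", ""),
        "secret": os.environ.get("MIRROR_SECRET", ""),
        "server_url": os.environ.get("MIRROR_SERVER_URL", ""),
    }

cfg = load_config("nonexistent.json")  # no file here, so env fallback
print(sorted(cfg))
```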
- Install and launch Claude Desktop
- Sign in with your Anthropic account
- Create a claude_desktop_config.json file in your Claude Desktop configuration directory:
Windows:
%APPDATA%\Claude\claude_desktop_config.json
macOS:
~/Library/Application Support/Claude/claude_desktop_config.json
Linux:
~/.config/Claude/claude_desktop_config.json
- Add the following configuration (adjust paths as needed):
```json
{
  "mcpServers": {
    "secure-embedding": {
      "command": "uv",
      "args": [
        "--directory",
        "/ABSOLUTE/PATH/TO/YOUR/PROJECT/mirror-vectax-mcp-server",
        "run",
        "mirror_vectax_server.py"
      ]
    }
  }
}
```

- Find the full path to the uv executable:
  - macOS/Linux: `which uv`
  - Windows: `where uv`
- Update the command field in the config with the full path if needed
- Save the file and restart Claude Desktop
- Look for the hammer icon in Claude Desktop to confirm the MCP tools are available
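Because the config file lives in a different place on each operating system, a short helper can compute the expected path for the platform you are on. This is an illustrative sketch, not part of the project:

```python
# Illustrative helper: resolve the Claude Desktop config file location
# per platform, matching the three paths listed above.
import os
import platform

def claude_config_path():
    system = platform.system()
    if system == "Windows":
        return os.path.join(os.environ.get("APPDATA", ""), "Claude",
                            "claude_desktop_config.json")
    if system == "Darwin":  # macOS
        return os.path.expanduser(
            "~/Library/Application Support/Claude/claude_desktop_config.json")
    # Linux and everything else
    return os.path.expanduser("~/.config/Claude/claude_desktop_config.json")

print(claude_config_path())
```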
- Open Claude Desktop
- Look for the hammer icon in the interface
- Try a simple test:
Can you create an embedding for this sentence: "Machine learning models can process large amounts of data efficiently."
You can start the server using one of the following methods:
- Make the script executable:
```shell
chmod +x run_mcp_server.sh
```

- Run the server:

```shell
./run_mcp_server.sh
```

This script sets up the environment variables and runs the server with the proper Python settings:
- Adds local bin to PATH
- Sets Python to unbuffered mode
- Enables Python debug mode
- Changes to the correct directory
- Runs the server with Python 3
Alternatively, run the server directly:

```shell
uv run mirror_vectax_server.py
```

or via the MCP CLI:

```shell
mcp run mirror_vectax_server.py
```

To verify the server is running correctly, you can use the MCP CLI to list available tools:

```shell
mcp list-tools --transport stdio --binary "python mirror_vectax_server.py"
```

This should output a list of all the tools provided by the server.
We can use mcp inspector to test the tools:
```shell
npx @modelcontextprotocol/inspector \
  uv \
  --directory /ABSOLUTE/PATH/TO/YOUR/PROJECT/mirror-vectax-mcp-server \
  run \
  mirror_vectax_server.py
```

After running the MCP Inspector, we can test a tool, for example:

If you need to modify the server startup configuration, you can edit the run_mcp_server.sh script. Here's what each line does:
```bash
#!/bin/bash

# Add local bin to PATH
export PATH="$PATH:/Users/Yourname/.local/bin"

# Run Python in unbuffered mode and enable debug mode
export PYTHONUNBUFFERED=1
export PYTHONDEBUG=1

# Change to the project directory
cd /Users/your_downloaded_path/mirror-vectax-mcp-server-main

# Run the server with Python 3
exec python3 -u mirror_vectax_server.py
```

Make sure to:

- Update the paths to match your system
- Keep the script executable (`chmod +x run_mcp_server.sh`)
- Run it from the project directory
The server consists of two main services:
- EmbeddingService: Creates and manages text embeddings using HuggingFace models.
- EncryptionService: Provides encryption capabilities using the Mirror SDK.
These services are initialized during server startup and made available to the MCP tools.
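The two-service layout described above can be sketched as follows. Class and method names here are assumptions for illustration, not the server's real API; the stubs stand in for the HuggingFace model and the Mirror SDK:

```python
# Illustrative sketch of the server's two-service architecture.
class EmbeddingService:
    """Creates text embeddings. A real implementation would invoke a
    HuggingFace model; here we stub it with a fixed-size vector."""
    def embed(self, text):
        return [float(len(text))] * 4  # stub 4-dimensional vector

class EncryptionService:
    """Encrypts embeddings. A real implementation would call the
    Mirror SDK; here we just wrap and tag the vector."""
    def encrypt(self, vector):
        return {"ciphertext": vector, "encrypted": True}

# At startup, both services are constructed and handed to the MCP tools;
# a "secure" operation chains them: embed first, then encrypt.
embedder = EmbeddingService()
encryptor = EncryptionService()
secured = encryptor.encrypt(embedder.embed("hello"))
print(secured["encrypted"])
```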
Can you create an embedding for this sentence: "Machine learning models can process large amounts of data efficiently."
Expected outcome: Should use the process tool with "embed" operation and return embedding information.
Can you analyze this text for sensitive information? "My social security number is 123-45-6789 and my email is
test@example.com"
Expected outcome: Should use the process tool with "analyze" operation, detect SSN and email entities, and return analysis details.
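To show the kind of result the analyze operation produces, here is a toy regex-based detector for the two entity types in the prompt above. The server's real entity detection is done by the Mirror SDK, not these regexes:

```python
# Illustrative regex-based entity detection for SSN and email patterns.
# This is a stand-in for the Mirror SDK's entity detection, shown only
# to make the "analyze" output shape concrete.
import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def detect_entities(text):
    # Return only entity types that actually matched, with their values.
    return {name: pat.findall(text)
            for name, pat in PATTERNS.items() if pat.findall(text)}

found = detect_entities(
    "My social security number is 123-45-6789 and my email is test@example.com")
print(found)  # {'ssn': ['123-45-6789'], 'email': ['test@example.com']}
```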
Please securely process this text: "My credit card is 4111-1111-1111-1111 and my phone number is (555) 123-4567."
Expected outcome: Should use the process tool with "secure" operation at medium/high security level, encrypt sensitive entities, and potentially encrypt the embedding.
I need to encrypt and protect this confidential medical information: "Patient John Doe (DOB: 01/15/1980) has been diagnosed with hypertension."
Expected outcome: Should use the natural-language-process tool to determine intent (secure/mask) and apply appropriate security measures.
I need embeddings for the following phrases:
- "Artificial intelligence is transforming industries."
- "Data privacy is an important concern for organizations."
- "Secure embeddings protect sensitive information during processing."
Expected outcome: Should use the batch-process tool to create embeddings for all three phrases.
Generate a user key for a data analyst who belongs to the research department and has user-level access.
Expected outcome: Should use the generate-user-key tool with appropriate roles, groups, and departments.
First, create embeddings for these phrases:
- "Security is a top priority for financial institutions."
- "Privacy regulations impact how companies handle data."
- "Machine learning models require careful validation."

Then search these documents for information about "data privacy".
Expected outcome: Should first use the batch-process tool to create embeddings, then use the search tool to find relevant documents, with the second phrase likely scoring highest.
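The ranking step behind the search tool is cosine similarity between the query embedding and each document embedding. A toy sketch with made-up 3-dimensional vectors (real embeddings come from the HuggingFace model) shows how the closest document wins:

```python
# Illustrative cosine-similarity search over toy embedding vectors.
# The 3-d vectors below are invented for the example; real embeddings
# would have hundreds of dimensions.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

docs = {
    "security":   [0.9, 0.1, 0.0],
    "privacy":    [0.1, 0.9, 0.1],
    "validation": [0.0, 0.1, 0.9],
}
query = [0.2, 0.8, 0.1]  # stand-in for an embedding of "data privacy"

ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked[0])  # "privacy" scores highest for this toy query
```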
Can you analyze this text: ""
Expected outcome: Should handle empty input gracefully, possibly returning an error message or minimal analysis.
Please secure this document: [Insert 1000+ word document about sensitive financial information]
Expected outcome: Should handle large input without issues, possibly detecting multiple entities and recommending high security level.
Analyze this text for sensitive information: "Mi número de pasaporte es AB123456 y mi dirección es Calle Principal 123, Madrid, España." (Spanish for: "My passport number is AB123456 and my address is Calle Principal 123, Madrid, Spain.")
Expected outcome: Should detect entities in non-English text if the underlying entity detection supports it.
- First, create secure embeddings for a set of medical records that contain patient information.
- Generate a user key for a doctor in the cardiology department.
- Search the embeddings for information about "heart conditions" using the doctor's access key.
Expected outcome: Should demonstrate a full workflow involving multiple tools - process for secure embeddings, generate-user-key for RBAC, and search with the key.
Process this text and determine the appropriate security measures automatically: "This quarterly financial report contains projections for Q3 2025 and includes account numbers for our top clients."
Expected outcome: Should use the process tool with "auto" operation and security level, detect sensitive content, and apply appropriate security measures based on the content.
- Claude Desktop Not Recognizing Tools
  - Ensure the claude_desktop_config.json file is in the correct location
  - Verify the paths in the configuration are absolute and correct
  - Restart Claude Desktop after making changes
- Server Connection Issues
  - Check if the server is running (`uv run mirror_vectax_server.py`)
  - Verify environment variables are set correctly
  - Check the logs for any error messages
- Memory Issues with Batch Processing
  - The server processes texts one at a time to prevent memory issues
  - If you experience disconnections, try processing fewer texts at once
  - Monitor system memory usage during processing
- Model Loading Issues
  - Ensure you have sufficient disk space for model caching
  - Check internet connectivity for the initial model download
  - Verify the model cache directory is writable
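The disk-space and cache-directory checks above can be automated with a short pre-flight script. This sketch assumes the common HuggingFace cache default (`~/.cache/huggingface`, overridable via `HF_HOME`); your system may use a different location:

```python
# Illustrative pre-flight check for the model-loading issues above:
# reports free disk space and whether the HuggingFace cache directory
# is writable. The cache path is the common default, not guaranteed.
import os
import shutil

cache_dir = os.environ.get(
    "HF_HOME", os.path.expanduser("~/.cache/huggingface"))
free_gb = shutil.disk_usage(os.path.expanduser("~")).free / 1e9

print(f"cache dir:  {cache_dir}")
print(f"free space: {free_gb:.1f} GB")
print(f"writable:   {os.access(os.path.dirname(cache_dir), os.W_OK)}")
```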
If you encounter issues:
- Check the logs in Claude Desktop
- Review the server logs
- Ensure all prerequisites are installed correctly
- Contact support if issues persist