
Open Data Model Context Protocol

[Banner image: Swiss landscape with a red SBB train]

Connect Open Data to LLMs in minutes!


See it in action

[Demo video]

We enable 2 things:

  • Open Data Access: Access many public datasets right from your LLM application (starting with Claude, with more to come).
  • Publishing: Get community support and a distribution network for your Open Data. Get everyone to use it!

How do we do that?

  • Access: Set up our MCP servers in your LLM application in two clicks via our CLI tool (starting with Claude; see the Roadmap for next steps).
  • Publish: Use provided templates and guidelines to quickly contribute and publish on Open Data MCP. Make your data easily discoverable!

Usage

Access: Access Open Data using the Open Data MCP CLI Tool

Prerequisites

If you want to use Open Data MCP with the Claude Desktop client, you need to install the Claude Desktop app.

You will also need uv to easily run our CLI and MCP servers.

macOS
# install uv through Homebrew: the install shell script installs it locally
# to your user, which makes it unavailable in the Claude Desktop app context.
brew install uv
Windows
# (UNTESTED)
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

Open Data MCP - CLI Tool

Overview
# show available commands
uvx odmcp 

# show available providers
uvx odmcp list

# show info about a provider
uvx odmcp info $PROVIDER_NAME

# setup a provider's MCP server on your Claude Desktop app
uvx odmcp setup $PROVIDER_NAME

# remove a provider's MCP server from your Claude Desktop app
uvx odmcp remove $PROVIDER_NAME
Example

Quickstart for the Switzerland SBB (train company) provider:

# make sure the Claude Desktop app is installed
uvx odmcp setup ch_sbb

Restart Claude and you should see a new hammer icon at the bottom right of the chat.

You can now ask Claude questions about SBB train network disruptions, and it will answer based on data collected from data.sbb.ch.

Publish: Contribute by building and publishing public datasets

Prerequisites

  1. Install the uv Package Manager

    # macOS
    brew install uv
    
    # Windows (PowerShell)
    powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
    
    # Linux/WSL
    curl -LsSf https://astral.sh/uv/install.sh | sh
  2. Clone & Setup Repository

    # Clone the repository
    git clone https://github.com/OpenDataMCP/OpenDataMCP.git
    cd OpenDataMCP
    
    # Create and activate virtual environment
    uv venv
    source .venv/bin/activate  # Unix/macOS
    # or
    .venv\Scripts\activate     # Windows
    
    # Install dependencies
    uv sync
  3. Install Pre-commit Hooks

    # Install pre-commit hooks for code quality
    pre-commit install

Publishing Instructions

  1. Create a New Provider Module

    • Each data source lives in its own Python module: create one in src/odmcp/providers/.
    • Use a descriptive name following the pattern {country_code}_{organization}.py (e.g., ch_sbb.py).
    • Start with our template file as your base.
  2. Implement Required Components

    • Define your Tools & Resources following the template structure
    • Each Tool or Resource should have:
      • Clear description of its purpose
      • Well-defined input/output schemas using Pydantic models
      • Proper error handling
      • Documentation strings
  3. Tool vs Resource

    • Choose Tool implementation if your data needs:
      • Active querying or computation
      • Parameter-based filtering
      • Complex transformations
    • Choose Resource implementation if your data is:
      • Static or rarely changing
      • Small enough to be loaded into memory
      • Simple file-based content
      • Reference documentation or lookup tables
    • Reference the MCP documentation for guidance
  4. Testing

    • Add tests in the tests/ directory
    • Follow existing test patterns (see other provider tests)
    • Required test coverage:
      • Basic functionality
      • Edge cases
      • Error handling
  5. Validation

    • Test your MCP server using our experimental client: uv run src/odmcp/providers/client.py
    • Verify all endpoints respond correctly
    • Ensure error messages are helpful
    • Check performance with typical query loads
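The required components above can be sketched as follows. This is an illustrative, hypothetical provider, not the actual template: the names (`DisruptionQuery`, `Disruption`, `fetch_disruptions`) and the sample data are invented for the example.

```python
from pydantic import BaseModel, Field


class DisruptionQuery(BaseModel):
    """Input schema for a hypothetical rail-disruption tool."""

    limit: int = Field(default=5, ge=1, le=100, description="Maximum records to return")


class Disruption(BaseModel):
    """Output schema: one disruption record."""

    title: str
    description: str


def fetch_disruptions(params: DisruptionQuery) -> list[Disruption]:
    """Return validated disruption records.

    A real provider would call the upstream open-data API here and
    raise a descriptive error on failure; this sketch returns static
    sample data so the shape of the code is visible.
    """
    sample = [{"title": "Line closed", "description": "Track maintenance"}]
    return [Disruption(**row) for row in sample[: params.limit]]
```

The Pydantic models give you the well-defined input/output schemas and the docstrings document each component's purpose, matching the template structure described above.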

For other examples, check our existing providers in the src/odmcp/providers/ directory.
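A minimal provider test might look like the sketch below (assumes pytest and pydantic; the `Query` schema is a hypothetical stand-in for your provider's input model):

```python
import pytest
from pydantic import BaseModel, Field, ValidationError


class Query(BaseModel):
    """Hypothetical input schema under test."""

    limit: int = Field(default=5, ge=1, le=100)


def test_default_limit():
    # basic functionality: defaults apply when no input is given
    assert Query().limit == 5


def test_rejects_out_of_range_limit():
    # error handling: invalid input is rejected at the schema boundary
    with pytest.raises(ValidationError):
        Query(limit=0)
```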

Contributing

We have an ambitious roadmap and we want this project to scale with the community. The ultimate goal is to make the millions of publicly available datasets accessible to all LLM applications.

For that we need your help!

Discord

We want to build a helpful community around the challenge of bringing open data to LLMs. Join us on Discord to start chatting: https://discord.gg/QPFFZWKW

Our Core Guidelines

Because of our target scale, we want to keep things simple and pragmatic at first, and tackle issues with the community as they come along.

  1. Simplicity and Maintainability

    • Minimize abstractions to keep codebase simple and scalable
    • Focus on clear, straightforward implementations
    • Avoid unnecessary complexity
  2. Standardization / Templates

    • Follow provided templates and guidelines consistently
    • Maintain uniform structure across providers
    • Use common patterns for similar functionality
  3. Dependencies

    • Keep external dependencies to a minimum
    • Prioritize single repository/package setup
    • Carefully evaluate necessity of new dependencies
  4. Code Quality

    • Format code using ruff
    • Maintain comprehensive test coverage with pytest
    • Follow consistent code style
  5. Type Safety

    • Use Python type hints throughout
    • Leverage Pydantic models for API request/response validation
    • Ensure type safety in data handling
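For instance, validating an upstream API response at the boundary with type hints and Pydantic models (a generic sketch; the schema and field names are illustrative, not from a real provider):

```python
from pydantic import BaseModel


class ApiRecord(BaseModel):
    """One record from a hypothetical upstream open-data API."""

    id: int
    name: str


class ApiResponse(BaseModel):
    """Typed envelope for the upstream response."""

    status: str
    records: list[ApiRecord]


# Constructing the model validates the payload's types at the boundary;
# a malformed payload raises pydantic.ValidationError instead of
# propagating bad data into the rest of the provider.
raw = {"status": "ok", "records": [{"id": 1, "name": "Zürich HB"}]}
resp = ApiResponse(**raw)
```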

Tactical Topics (our current priorities)

  • Initialize repository with guidelines, testing framework, and contribution workflow
  • Implement CI/CD pipeline with automated PyPI releases
  • Develop provider template and first reference implementation
  • Integrate additional open datasets (actively seeking contributors)
  • Establish clear guidelines for choosing between Resources and Tools
  • Develop scalable repository architecture for long-term growth
  • Expand MCP SDK parameter support (authentication, rate limiting, etc.)
  • Implement additional MCP protocol features (prompts, resource templates)
  • Add support for alternative transport protocols beyond stdio (SSE)
  • Deploy hosted MCP servers for improved accessibility

Roadmap

Let’s build the open source infrastructure that will allow all LLMs to access all Open Data together!

Access:

  • Make Open Data available to all LLM applications (beyond Claude)
  • Make Open Data sources searchable in a scalable way
  • Make Open Data available through MCP remotely (SSE) with publicly sponsored infrastructure

Publish:

  • Build the many Open Data MCP servers to make all the Open Data truly accessible (we need you!).
  • On our side, we are starting to build MCP servers for Switzerland's ~12k open datasets!
  • Make it even easier to build Open Data MCP servers

We are very early, and the lack of available datasets is currently the bottleneck. Help us out! Create your Open Data MCP server and get users to use it from their LLM applications. Let's connect LLMs to the millions of open datasets from governments, public entities, companies, and NGOs!

As Anthropic's MCP evolves we will adapt and upgrade Open Data MCP.

Limitations

  • All data served by Open Data MCP servers must be Open.
  • Please abide by the licenses of the data providers.
  • Our license must be cited in commercial applications.

License

This project is licensed under the MIT License - see the LICENSE file for details
