cobolt

MCP.Pizza Chef: platinum-hill

Cobolt is a cross-platform desktop client application designed for chatting with locally hosted large language models (LLMs). It supports the Model Context Protocol (MCP), enabling seamless integration with various LLMs and real-time context sharing. Cobolt runs on macOS and Windows, providing a user-friendly interface for interacting with local AI models securely and efficiently.

Use This MCP client To

  • Chat with locally hosted large language models on your desktop
  • Integrate MCP-enabled LLMs for real-time context sharing
  • Run AI conversations without cloud dependencies
  • Test and develop LLM-based workflows locally
  • Use a unified client for multiple local LLMs
  • Experiment with MCP protocol features in a desktop environment

README

Cobolt

Cobolt Logo


πŸ“₯ Download Latest Release

macOS Download Windows Download

This is an early release that is expected to be unstable and to change significantly over time. For other platforms and previous versions, visit our Releases page.

🎯 Overview

Cobolt is a cross-platform desktop application that enables you to get answers, and perform actions, on the data that matters to you. Cobolt only stores data on your device and uses locally running AI models. Cobolt can also remember important details about you and use them to give you personalized responses. And yes, your memories are stored on your device too. You can connect to your favourite tools and data sources using the Model Context Protocol (MCP).

Feel like every query to a big tech AI is an automatic, non-consensual donation to their 'Make Our AI Smarter' fund, with zero transparency on how your 'donation' is used on some distant server farm? πŸ’ΈπŸ€·

We believe that the AI assistants of the future will run on your device, and will not send your data, or queries to be used by tech companies for profit. Small language models are closing the gap with their larger counterparts, and our devices are becoming more powerful. Cobolt is our effort to bring us closer to that future.

Cobolt enables you to get answers based on your data, with a model of your choosing.

Key Differentiators

  • Local Models: Ensures that your data does not leave your device. We are powered by Ollama, which enables you to use the open source model of your choosing.
  • Model Context Protocol Integration: Connect to the data sources and tools that matter most to you using MCP. This gives your model access to relevant tools and data, so it can provide more useful, context-aware responses.
  • Native Memory Support: Cobolt remembers the most important things about you, and uses this to give you more relevant responses.

How to?

How to change the model?

By default, we use llama3.1:8b for inference and nomic-embed-text for embedding.

You can use any Ollama model that supports tool calls, as listed here. To download a new model for inference, install it from Ollama:

ollama ls # to view models
ollama pull qwen3:8b # to download qwen3:8b

The downloaded model can be selected from the settings section of the app.

Note: If you want additional customization, you can set the models used for tool calls, inference, and embedding individually:

On macOS: Edit ~/Library/Application Support/cobolt/config.json

On Windows: Edit %APPDATA%\cobolt\config.json

After editing, restart Cobolt for changes to take effect.
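
For example, a customized config.json might look like the sketch below. The key names shown (chatModel, toolsModel, textEmbeddingModel) are illustrative assumptions rather than confirmed field names, so keep the keys that are already present in the config.json Cobolt generates on first run and only change their values.

{
  "chatModel": "qwen3:8b",
  "toolsModel": "llama3.1:8b",
  "textEmbeddingModel": "nomic-embed-text"
}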

How to add new integrations?

You can find the most useful MCP-backed integrations here. Add new MCP servers through the application's integrations section; the application will direct you to a JSON file where you can add your MCP server. We use the same format as Claude Desktop to make it easier for you to add new servers.

Some integrations that we recommend for new users are available at sample-mcp-server.json.
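
For illustration, an entry in the Claude Desktop format looks roughly like the following. The server name, package, and path here are placeholders; adapt them to the integration you actually want to add, or copy an entry from sample-mcp-server.json.

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/your/files"]
    }
  }
}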

Restart the application, or reload the integrations after you have added the required servers.

🀝 Contributing

Contributions are welcome! Whether it's reporting a bug, suggesting a feature, or submitting a pull request, your help is appreciated.

Please read our Contributing Guidelines for details on how to set up your development environment and contribute to Cobolt.


πŸ“„ License

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.

Acknowledgements

Cobolt builds upon several amazing open-source projects and technologies:

  • Ollama - The powerful framework for running large language models locally
  • Model Context Protocol - The protocol specification by Anthropic for model context management
  • Mem0 - The memory management system that inspired our implementation
  • Electron - The framework that powers our cross-platform desktop application

We're grateful to all the contributors and maintainers of these projects for their incredible work.


Built with ❀️ by the Cobolt team

cobolt FAQ

How do I install Cobolt on my computer?
Download the latest release from the GitHub releases page and follow the installation instructions for macOS or Windows.
Can Cobolt connect to cloud-based LLMs?
Cobolt is designed for locally hosted models run through Ollama. MCP is used to connect tools and data sources, not to route chats to cloud model providers such as OpenAI, Claude, or Gemini.
Does Cobolt support multiple LLMs simultaneously?
You can download multiple Ollama models and switch between them from the settings section, and separate models can be configured for tool calls, inference, and embedding in config.json.
Is Cobolt open source?
Yes, Cobolt is open source under the Apache 2.0 license, with source code available on GitHub.
What platforms does Cobolt support?
Cobolt supports macOS and Windows platforms.
How does Cobolt handle security for local LLMs?
Cobolt runs entirely on your machine: your data, models, and memories stay on your device and are not sent to external servers.
Can I extend Cobolt with custom MCP servers or tools?
Yes, Cobolt supports MCP, allowing integration with custom servers and tools that follow the protocol.
How do I update Cobolt to the latest version?
Check the GitHub releases page regularly and download the newest version to update.