This is an early release that is expected to be unstable and to change significantly over time. For other platforms and previous versions, visit our Releases page.
Cobolt is a cross-platform desktop application that lets you get answers and perform actions on the data that matters to you. Cobolt only stores data on your device, and uses locally running AI models. Cobolt can also remember important details about you and use them to give you personalized responses. And yes, your memories are stored on your device. You can connect to your favourite tools and data sources using the Model Context Protocol (MCP).
Feel like every query to a big tech AI is an automatic, non-consensual donation to their 'Make Our AI Smarter' fund, with zero transparency on how your 'donation' is used on some distant server farm? 💸🤷
We believe that the AI assistants of the future will run on your device, and will not send your data or queries to tech companies to be used for profit. Small language models are closing the gap with their larger counterparts, and our devices are becoming more powerful. Cobolt is our effort to bring us closer to that future.
Cobolt enables you to get answers based on your data, with a model of your choosing.
- Local Models: Your data never leaves your device. Cobolt is powered by Ollama, which lets you use the open-source model of your choosing.
- Model Context Protocol Integration: Connect to the data sources and tools that matter most to you using MCP. This gives your model access to relevant tools and data, so it can provide more useful, context-aware responses.
- Native Memory Support: Cobolt remembers the most important things about you and uses them to give you more relevant responses.
By default we use llama3.1:8b for inference, and nomic-embed-text for embedding.
You can use any Ollama model that supports tool calls listed here. To download a new model for inference, install it from Ollama:
ollama ls # to view models
ollama pull qwen3:8b # to download qwen3:8b
The downloaded model can be selected from the Settings section in the app.
Note: If you want additional customization, you can update the models used for tool calls, inference, or embedding individually:
- On macOS, edit: `~/Library/Application Support/cobolt/config.json`
- On Windows, edit: `%APPDATA%\cobolt\config.json`
After editing, restart Cobolt for changes to take effect.
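The exact structure of config.json may vary between releases, but the model settings are plain JSON key-value pairs. As a rough, illustrative sketch only (the key names below are hypothetical, so match them against the fields already present in your generated config.json):

```json
{
  "chatModel": "qwen3:8b",
  "toolsModel": "llama3.1:8b",
  "embeddingModel": "nomic-embed-text"
}
```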
You can find the most useful MCP-backed integrations here. Add new MCP servers by adding new integrations through the application. The application will direct you to a JSON file where you can add your MCP server. We use the same format as Claude Desktop to make it easier for you to add new servers.
Some integrations that we recommend for new users are available at sample-mcp-server.json.
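Because the file follows the same structure as Claude Desktop's configuration, a minimal entry looks roughly like the sketch below (the server name, command, and arguments are placeholders for whatever MCP server you want to add):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    }
  }
}
```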
Restart the application, or reload the integrations after you have added the required servers.
Contributions are welcome! Whether it's reporting a bug, suggesting a feature, or submitting a pull request, your help is appreciated.
Please read our Contributing Guidelines for details on how to set up your development environment and contribute to Cobolt.
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
Cobolt builds upon several amazing open-source projects and technologies:
- Ollama - The powerful framework for running large language models locally
- Model Context Protocol - The protocol specification by Anthropic for model context management
- Mem0 - The memory management system that inspired our implementation
- Electron - The framework that powers our cross-platform desktop application
We're grateful to all the contributors and maintainers of these projects for their incredible work.