perplexity-search

MCP.Pizza Chef: arjunkmrm

Perplexity-search is a Model Context Protocol (MCP) server that integrates Perplexity's web search using the sonar and sonar-pro models. It exposes a search tool that AI assistants can call to run real-time web queries with an optional recency filter, returning the result content along with source citations. The server requires a Perplexity API key and brings live, citation-backed web search into AI workflows.

Use This MCP Server To

  • Perform real-time web searches from AI assistants
  • Filter search results by recency (day, week, month, hour)
  • Retrieve web search content with source citations
  • Integrate Perplexity web search into AI-enhanced workflows
  • Enable chatbots to answer queries using live web data
  • Combine web search results with other MCP data sources

README

Perplexity Search MCP


A simple Model Context Protocol (MCP) server for Perplexity web search using the sonar or sonar-pro models.

Features

  • Provides a search tool for AI assistants to perform web searches
  • Uses Perplexity's chat completions API with the sonar/sonar-pro models; a request sketch follows this list
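
As a rough illustration of what such a request looks like, here is a minimal sketch assuming Perplexity's OpenAI-compatible chat completions endpoint and Node 18+ for global fetch. The helper name is hypothetical, the exact request this server builds may differ, and field names should be checked against Perplexity's current API docs.

```typescript
// Sketch: query Perplexity's chat completions API with a sonar model.
// Assumes PERPLEXITY_API_KEY is set in the environment.
async function perplexitySearch(query: string, recency?: string) {
  const res = await fetch("https://api.perplexity.ai/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.PERPLEXITY_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "sonar", // or "sonar-pro"
      messages: [{ role: "user", content: query }],
      // Only include the recency filter when the caller provides one.
      ...(recency ? { search_recency_filter: recency } : {}),
    }),
  });
  if (!res.ok) throw new Error(`Perplexity API error: ${res.status}`);
  const data = await res.json();
  return {
    content: data.choices?.[0]?.message?.content ?? "",
    citations: data.citations ?? [],
  };
}
```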

Tool: search

The server provides a search tool with the following input parameters (an example invocation follows the list):

  • query (required): The search query to perform
  • search_recency_filter (optional): Filter search results by recency (options: month, week, day, hour). If not specified, no time filtering is applied.
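
For illustration, the tool's input can be described as follows. The interface name and sample values are hypothetical; the parameter names come from the list above.

```typescript
// Input shape of the search tool, per the parameters listed above.
interface SearchArgs {
  query: string; // required: the search query to perform
  search_recency_filter?: "month" | "week" | "day" | "hour"; // optional time filter
}

// Example arguments an MCP client might send when calling the tool.
const exampleArgs: SearchArgs = {
  query: "latest stable Node.js release",
  search_recency_filter: "week",
};
```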

Configuration

Environment Variables

  • PERPLEXITY_API_KEY: Your Perplexity API key (required); a startup sketch follows below
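
A minimal sketch of how a server like this typically reads and validates the key at startup, assuming a Node.js runtime (the exact startup code may differ):

```typescript
// Fail fast if the required API key is missing from the environment.
const apiKey = process.env.PERPLEXITY_API_KEY;
if (!apiKey) {
  throw new Error("PERPLEXITY_API_KEY environment variable is required");
}
```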

Response Format

The response from the search tool includes (an example shape follows the list):

  • content: The search results content
  • citations: Array of citations for the information
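
For illustration, the response can be thought of as having this shape. The type name and example values are hypothetical; the field names come from the list above, and the citations are typically source URLs.

```typescript
// Illustrative shape of a search tool response.
interface SearchResult {
  content: string;     // the search results content
  citations: string[]; // citations for the information, typically source URLs
}

const exampleResult: SearchResult = {
  content: "Summary of what the web sources say about the query ...",
  citations: ["https://example.com/source-article"],
};
```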

License

MIT

perplexity-search FAQ

How do I configure the perplexity-search MCP server?
Set the PERPLEXITY_API_KEY environment variable to your Perplexity API key before running the server.

What search parameters does the search tool accept?
The search tool requires a 'query' string and optionally accepts a 'search_recency_filter' to limit results by time (month, week, day, hour).

What does the search tool response include?
The response contains 'content' with the search results and an array of 'citations' referencing the sources.

Can I use this server with different LLM providers?
Yes. MCP is client-agnostic, so the server can be used by any MCP-compatible assistant regardless of whether it is backed by OpenAI, Anthropic Claude, Google Gemini, or another LLM provider; the search itself always runs through Perplexity.

Is there any rate limiting or usage quota?
Rate limits depend on your Perplexity API plan; monitor your usage accordingly.

What models does this server use for search?
It uses Perplexity's chat completions API with the sonar or sonar-pro models for web search.

Is the server open source and customizable?
Yes, it is MIT licensed and can be extended or modified as needed.

How does the recency filter affect search results?
It restricts results to the specified timeframe (month, week, day, or hour), improving relevance for time-sensitive queries.