├── .gitignore
├── .python-version
├── CHANGELOG.md
├── Dockerfile
├── LICENSE
├── README.md
├── pyproject.toml
├── smithery.yaml
└── src
└── perplexity_mcp
├── __init__.py
└── server.py
/.gitignore:
--------------------------------------------------------------------------------
1 | # Python-generated files
2 | __pycache__/
3 | *.py[oc]
4 | build/
5 | dist/
6 | wheels/
7 | *.egg-info
8 |
9 | # Virtual environments
10 | .venv
11 |
--------------------------------------------------------------------------------
/.python-version:
--------------------------------------------------------------------------------
1 | 3.11
2 |
--------------------------------------------------------------------------------
/CHANGELOG.md:
--------------------------------------------------------------------------------
1 | # Changelog
2 |
3 | All notable changes to this project will be documented in this file.
4 |
5 | The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
6 | and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
7 |
8 | ## [0.1.7] - 2024-03-27
9 |
10 | ### Added
11 | - Support for citations in Perplexity API responses
12 | - Added "search_context_size": "low" to the Perplexity API request; medium/high context sizes cost more without a noticeable improvement in search result quality
13 | - Added this Changelog
14 |
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
1 | # Generated by https://smithery.ai. See: https://smithery.ai/docs/config#dockerfile
2 | # Use a Python image with uv pre-installed
3 | FROM ghcr.io/astral-sh/uv:python3.11-bookworm-slim AS uv
4 |
5 | # Install the project into /app
6 | WORKDIR /app
7 |
8 | # Enable bytecode compilation
9 | ENV UV_COMPILE_BYTECODE=1
10 |
11 | # Copy the pyproject.toml to install dependencies
12 | COPY pyproject.toml /app/pyproject.toml
13 |
14 | # Install the project's dependencies using uv
15 | RUN --mount=type=cache,target=/root/.cache/uv uv pip install -r /app/pyproject.toml --no-dev --no-editable
16 |
17 | # Then, add the rest of the project source code and install it
18 | ADD src /app/src
19 |
20 | # Environment variables
21 | ENV PERPLEXITY_API_KEY=YOUR_API_KEY_HERE
22 |
23 | ENTRYPOINT ["uv", "run", "perplexity-mcp"]
24 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2024 Jason Allen
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # perplexity-mcp MCP server
2 |
3 | [](https://smithery.ai/server/perplexity-mcp)
4 |
5 | A Model Context Protocol (MCP) server that provides web search functionality using [Perplexity AI's](https://www.perplexity.ai/) API. Works with the [Anthropic](https://www.anthropic.com/news/model-context-protocol) Claude desktop client.
6 |
7 | ## Example
8 |
9 | Lets you use prompts like, "Search the web to find out what's new at Anthropic in the past week."
10 |
11 | ## Glama Scores
12 |
13 |
14 |
15 | ## Components
16 |
17 | ### Prompts
18 |
19 | The server provides a single prompt:
20 |
21 | - perplexity_search_web: Search the web using Perplexity AI
22 | - Required "query" argument for the search query
23 | - Optional "recency" argument to filter results by time period:
24 | - 'day': last 24 hours
25 | - 'week': last 7 days
26 | - 'month': last 30 days (default)
27 | - 'year': last 365 days
28 | - Uses Perplexity's API to perform web searches
29 |
30 | ### Tools
31 |
32 | The server implements one tool:
33 |
34 | - perplexity_search_web: Search the web using Perplexity AI
35 | - Takes "query" as a required string argument
36 | - Optional "recency" parameter to filter results (day/week/month/year)
37 | - Returns search results from Perplexity's API
38 |
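For reference, the tool's argument contract (mirroring the `inputSchema` declared in `src/perplexity_mcp/server.py`) can be sketched as a small validator. `validate_args` is a hypothetical helper for illustration, not part of the server:

```python
# Sketch of the perplexity_search_web argument contract:
# "query" is a required string; "recency" is optional and must be
# one of the four enum values, defaulting to "month".
RECENCY_VALUES = {"day", "week", "month", "year"}

def validate_args(arguments: dict) -> dict:
    """Return normalized tool arguments or raise ValueError."""
    if "query" not in arguments or not isinstance(arguments["query"], str):
        raise ValueError("'query' is required and must be a string")
    recency = arguments.get("recency", "month")
    if recency not in RECENCY_VALUES:
        raise ValueError(f"'recency' must be one of {sorted(RECENCY_VALUES)}")
    return {"query": arguments["query"], "recency": recency}
```

In practice the MCP client performs this validation against the published schema; the sketch just makes the defaulting behavior explicit.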
39 | ## Installation
40 |
41 | ### Installing via Smithery
42 |
43 | To install Perplexity MCP for Claude Desktop automatically via [Smithery](https://smithery.ai/server/perplexity-mcp):
44 |
45 | ```bash
46 | npx -y @smithery/cli install perplexity-mcp --client claude
47 | ```
48 |
49 | ### Requires [uv](https://github.com/astral-sh/uv) (a fast Python package and project manager)
50 |
51 | If uv isn't installed:
52 |
53 | ```bash
54 | # Using Homebrew on macOS
55 | brew install uv
56 | ```
57 |
58 | or
59 |
60 | ```bash
61 | # On macOS and Linux.
62 | curl -LsSf https://astral.sh/uv/install.sh | sh
63 |
64 | # On Windows.
65 | powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
66 | ```
67 |
68 | ### Environment Variables
69 |
70 | The following environment variable is required in your claude_desktop_config.json. You can obtain an API key from [Perplexity](https://perplexity.ai).
71 |
72 | - `PERPLEXITY_API_KEY`: Your Perplexity AI API key
73 |
74 | Optional environment variables:
75 |
76 | - `PERPLEXITY_MODEL`: The Perplexity model to use (defaults to "sonar" if not specified)
77 |
78 | Available models:
79 |
80 | - `sonar-deep-research`: 128k context - Enhanced research capabilities
81 | - `sonar-reasoning-pro`: 128k context - Advanced reasoning with professional focus
82 | - `sonar-reasoning`: 128k context - Enhanced reasoning capabilities
83 | - `sonar-pro`: 200k context - Professional grade model
84 | - `sonar`: 128k context - Default model
85 | - `r1-1776`: 128k context - Alternative architecture
86 |
87 | An updated list of models is available [here](https://docs.perplexity.ai/guides/model-cards).
88 |
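To see how the model selection ties into the request, here is a minimal sketch of how the server builds its Perplexity payload (mirroring `call_perplexity` in `src/perplexity_mcp/server.py`, trimmed to the model and recency fields):

```python
import os

def build_payload(query: str, recency: str = "month") -> dict:
    """Build a trimmed-down Perplexity chat-completions payload.

    The model comes from the PERPLEXITY_MODEL environment variable,
    falling back to "sonar", exactly as the server does.
    """
    model = os.getenv("PERPLEXITY_MODEL", "sonar")
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "Be precise and concise."},
            {"role": "user", "content": query},
        ],
        # Limits results to the requested time window (day/week/month/year).
        "search_recency_filter": recency,
    }
```

Setting `PERPLEXITY_MODEL=sonar-pro` in your client config therefore changes the `"model"` field of every request without any code changes.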
89 | ### Cursor & Claude Desktop Installation
90 |
91 | Add this tool as an MCP server by editing the Cursor/Claude config file.
92 |
93 | ```json
94 | "perplexity-mcp": {
95 | "env": {
96 | "PERPLEXITY_API_KEY": "XXXXXXXXXXXXXXXXXXXX",
97 | "PERPLEXITY_MODEL": "sonar"
98 | },
99 | "command": "uvx",
100 | "args": [
101 | "perplexity-mcp"
102 | ]
103 | }
104 | ```
105 |
106 | #### Cursor
107 | - On macOS: `/Users/your-username/.cursor/mcp.json`
108 | - On Windows: `C:\Users\your-username\.cursor\mcp.json`
109 |
110 | If everything is working correctly, you should now be able to call the tool from Cursor.
111 |
112 |
113 | #### Claude Desktop
114 | - On macOS: `~/Library/Application\ Support/Claude/claude_desktop_config.json`
115 | - On Windows: `%APPDATA%/Claude/claude_desktop_config.json`
116 |
117 | To verify the server is working, open the Claude client and use a prompt like "search the web for news about OpenAI in the past week". You should see an alert box asking you to confirm tool usage; click "Allow for this chat".
118 |
119 |
120 |
--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
1 | [build-system]
2 | requires = ["hatchling"]
3 | build-backend = "hatchling.build"
4 |
5 | [project]
6 | name = "perplexity-mcp"
7 | dynamic = ["version"]
8 | authors = [
9 | { name = "Jason Allen" },
10 | ]
11 | description = "MCP integration for Perplexity AI web searches"
12 | readme = "README.md"
13 | requires-python = ">=3.11"
14 | dependencies = [
15 | "aiohttp>=3.8.0",
16 | "pydantic>=2.0.0",
17 | "mcp>=1.0.2",
18 | ]
19 |
20 | [project.scripts]
21 | perplexity-mcp = "perplexity_mcp:main"
22 |
23 | [tool.hatch.build.targets.wheel]
24 | packages = ["src/perplexity_mcp"]
25 |
26 | [tool.hatch.metadata]
27 | allow-direct-references = true
28 |
29 | [tool.hatch.version]
30 | path = "src/perplexity_mcp/__init__.py"
--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------
1 | # Smithery configuration file: https://smithery.ai/docs/config#smitheryyaml
2 |
3 | startCommand:
4 | type: stdio
5 | configSchema:
6 | # JSON Schema defining the configuration options for the MCP.
7 | type: object
8 | required:
9 | - perplexityApiKey
10 | properties:
11 | perplexityApiKey:
12 | type: string
13 |       description: Your Perplexity AI API key.
14 | commandFunction:
15 | # A function that produces the CLI command to start the MCP on stdio.
16 | |-
17 | config => ({ command: 'uv', args: ['run', 'perplexity-mcp'], env: { PERPLEXITY_API_KEY: config.perplexityApiKey } })
18 |
--------------------------------------------------------------------------------
/src/perplexity_mcp/__init__.py:
--------------------------------------------------------------------------------
1 | """Perplexity MCP package."""
2 |
3 | __version__ = "0.1.7"
4 |
5 | from . import server
6 | import asyncio
7 |
8 |
9 | def main():
10 | """Main entry point for the package."""
11 | asyncio.run(server.main_async())
12 |
13 |
14 | # Optionally expose other important items at package level
15 | __all__ = ["main", "server"]
16 |
--------------------------------------------------------------------------------
/src/perplexity_mcp/server.py:
--------------------------------------------------------------------------------
1 | import asyncio
2 | import aiohttp
3 | import sys
4 | import logging
5 | import json
6 | from datetime import datetime
7 | import os
8 |
9 | from mcp.server.models import InitializationOptions
10 | import mcp.types as types
11 | from mcp.server import NotificationOptions, Server
12 | from pydantic import AnyUrl
13 | import mcp.server.stdio
14 | from perplexity_mcp import __version__
15 |
16 | server = Server("perplexity-mcp")
17 |
18 |
19 | @server.list_prompts()
20 | async def handle_list_prompts() -> list[types.Prompt]:
21 | """
22 | List available prompts.
23 | Each prompt can have optional arguments to customize its behavior.
24 | """
25 | return [
26 | types.Prompt(
27 | name="perplexity_search_web",
28 | description="Search the web using Perplexity AI and filter results by recency",
29 | arguments=[
30 | types.PromptArgument(
31 | name="query",
32 | description="The search query to find information about",
33 | required=True,
34 | ),
35 | types.PromptArgument(
36 | name="recency",
37 | description="Filter results by how recent they are. Options: 'day' (last 24h), 'week' (last 7 days), 'month' (last 30 days), 'year' (last 365 days). Defaults to 'month'.",
38 | required=False,
39 | ),
40 | ],
41 | )
42 | ]
43 |
44 |
45 | @server.get_prompt()
46 | async def handle_get_prompt(
47 | name: str, arguments: dict[str, str] | None
48 | ) -> types.GetPromptResult:
49 | """
50 | Generate a prompt by combining arguments with server state.
51 | """
52 | if name != "perplexity_search_web":
53 | raise ValueError(f"Unknown prompt: {name}")
54 |
55 | query = (arguments or {}).get("query", "")
56 | recency = (arguments or {}).get("recency", "month")
57 | return types.GetPromptResult(
58 | description=f"Search the web for information about: {query}",
59 | messages=[
60 | types.PromptMessage(
61 | role="user",
62 | content=types.TextContent(
63 | type="text",
64 | text=f"Find recent information about: {query}",
65 | ),
66 | ),
67 | types.PromptMessage(
68 | role="user",
69 | content=types.TextContent(
70 | type="text",
71 | text=f"Only include results from the last {recency}",
72 | ),
73 | ),
74 | ],
75 | )
76 |
77 |
78 | @server.list_tools()
79 | async def list_tools() -> list[types.Tool]:
80 | return [
81 | types.Tool(
82 | name="perplexity_search_web",
83 | description="Search the web using Perplexity AI with recency filtering",
84 | inputSchema={
85 | "type": "object",
86 | "properties": {
87 | "query": {"type": "string"},
88 | "recency": {
89 | "type": "string",
90 | "enum": ["day", "week", "month", "year"],
91 | "default": "month",
92 | },
93 | },
94 | "required": ["query"],
95 | },
96 | )
97 | ]
98 |
99 |
100 | @server.call_tool()
101 | async def call_tool(
102 | name: str, arguments: dict
103 | ) -> list[types.TextContent | types.ImageContent | types.EmbeddedResource]:
104 | if name == "perplexity_search_web":
105 | query = arguments["query"]
106 | recency = arguments.get("recency", "month")
107 | result = await call_perplexity(query, recency)
108 | return [types.TextContent(type="text", text=str(result))]
109 | raise ValueError(f"Tool not found: {name}")
110 |
111 |
112 | async def call_perplexity(query: str, recency: str) -> str:
113 |
114 | url = "https://api.perplexity.ai/chat/completions"
115 |
116 | # Get the model from environment variable or use "sonar" as default
117 | model = os.getenv("PERPLEXITY_MODEL", "sonar")
118 |
119 | payload = {
120 | "model": model,
121 | "messages": [
122 | {"role": "system", "content": "Be precise and concise."},
123 | {"role": "user", "content": query},
124 | ],
125 |         "max_tokens": 512,
126 | "temperature": 0.2,
127 | "top_p": 0.9,
128 | "return_images": False,
129 | "return_related_questions": False,
130 | "search_recency_filter": recency,
131 | "top_k": 0,
132 | "stream": False,
133 | "presence_penalty": 0,
134 | "frequency_penalty": 1,
135 | "return_citations": True,
136 | "search_context_size": "low",
137 | }
138 |
139 | headers = {
140 | "Authorization": f"Bearer {os.getenv('PERPLEXITY_API_KEY')}",
141 | "Content-Type": "application/json",
142 | }
143 |
144 | async with aiohttp.ClientSession() as session:
145 | async with session.post(url, json=payload, headers=headers) as response:
146 | response.raise_for_status()
147 | data = await response.json()
148 | content = data["choices"][0]["message"]["content"]
149 |
150 | # Format response with citations if available
151 | if "citations" in data:
152 | citations = data["citations"]
153 | formatted_citations = "\n\nCitations:\n" + "\n".join(f"[{i+1}] {url}" for i, url in enumerate(citations))
154 | return content + formatted_citations
155 |
156 | return content
157 |
158 |
159 | async def main_async():
160 | API_KEY = os.getenv("PERPLEXITY_API_KEY")
161 | if not API_KEY:
162 | raise ValueError("PERPLEXITY_API_KEY environment variable is required")
163 |
164 | async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
165 | await server.run(
166 | read_stream,
167 | write_stream,
168 | InitializationOptions(
169 | server_name="perplexity-mcp",
170 | server_version=__version__,
171 | capabilities=server.get_capabilities(
172 | notification_options=NotificationOptions(),
173 | experimental_capabilities={},
174 | ),
175 | ),
176 | )
177 |
178 |
179 | def main():
180 | """CLI entry point for perplexity-mcp"""
181 | logging.basicConfig(level=logging.INFO)
182 |
183 | API_KEY = os.getenv("PERPLEXITY_API_KEY")
184 | if not API_KEY:
185 | print(
186 | "Error: PERPLEXITY_API_KEY environment variable is required",
187 | file=sys.stderr,
188 | )
189 | sys.exit(1)
190 |
191 | # Log which model is being used (helpful for debug)
192 | model = os.getenv("PERPLEXITY_MODEL", "sonar")
193 | logging.info(f"Using Perplexity AI model: {model}")
194 |
195 | # List available models
196 | available_models = {
197 | "sonar-deep-research": "128k context - Enhanced research capabilities",
198 | "sonar-reasoning-pro": "128k context - Advanced reasoning with professional focus",
199 | "sonar-reasoning": "128k context - Enhanced reasoning capabilities",
200 | "sonar-pro": "200k context - Professional grade model",
201 | "sonar": "128k context - Default model",
202 | "r1-1776": "128k context - Alternative architecture"
203 | }
204 |
205 | logging.info("Available Perplexity models (set with PERPLEXITY_MODEL environment variable):")
206 | for model_name, description in available_models.items():
207 | marker = "→" if model_name == model else " "
208 | logging.info(f" {marker} {model_name}: {description}")
209 |
210 | asyncio.run(main_async())
211 |
212 |
213 | if __name__ == "__main__":
214 | main()
215 |
--------------------------------------------------------------------------------