├── Dockerfile
├── LICENSE
├── README.md
├── glama.json
├── pyproject.toml
├── requirements.txt
├── src
│   └── chronulus_mcp
│       ├── __init__.py
│       ├── __main__.py
│       ├── _assets
│       │   ├── html
│       │   │   └── binary_predictor_analysis.html
│       │   └── react
│       │       ├── BetaPlot.jsx
│       │       └── Scorecard.jsx
│       ├── agent
│       │   ├── __init__.py
│       │   ├── _types.py
│       │   ├── forecaster.py
│       │   └── predictor.py
│       ├── assets.py
│       ├── io.py
│       ├── server.py
│       ├── session.py
│       └── stats
│           ├── __init__.py
│           └── odds.py
└── uv.lock
/Dockerfile:
--------------------------------------------------------------------------------
1 | # Use a Python image with uv pre-installed
2 | FROM ghcr.io/astral-sh/uv:python3.12-bookworm-slim AS uv
3 |
4 | # Install the project into `/app`
5 | WORKDIR /app
6 |
7 | # Enable bytecode compilation
8 | ENV UV_COMPILE_BYTECODE=1
9 |
10 | # Copy from the cache instead of linking since it's a mounted volume
11 | ENV UV_LINK_MODE=copy
12 |
13 | # Install the project's dependencies using the lockfile and settings
14 | RUN --mount=type=cache,target=/root/.cache/uv \
15 | --mount=type=bind,source=uv.lock,target=uv.lock \
16 | --mount=type=bind,source=pyproject.toml,target=pyproject.toml \
17 | uv sync --frozen --no-install-project --no-dev --no-editable
18 |
19 | # Then, add the rest of the project source code and install it
20 | # Installing separately from its dependencies allows optimal layer caching
21 | ADD . /app
22 | RUN --mount=type=cache,target=/root/.cache/uv uv sync --frozen --no-dev --no-editable
23 |
24 | FROM python:3.12-slim-bookworm
25 |
26 | WORKDIR /app
27 |
28 | COPY --from=uv /app/.venv /app/.venv
29 |
30 | # Place executables in the environment at the front of the path
31 | ENV PATH="/app/.venv/bin:$PATH"
32 |
33 | # Run the MCP server over stdio
34 | ENTRYPOINT ["chronulus-mcp"]
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2024 Chronulus AI Inc.
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # MCP Server for Chronulus
2 |
3 | Chat with Chronulus AI Forecasting & Prediction Agents in Claude
4 |
5 |
6 |
7 |
8 |
9 |
10 | ### Quickstart: Claude for Desktop
11 |
12 | #### Install
13 |
14 | Claude for Desktop is currently available on macOS and Windows.
15 |
16 | Install Claude for Desktop [here](https://claude.ai/download)
17 |
18 | #### Configuration
19 |
20 | Follow the general instructions [here](https://modelcontextprotocol.io/quickstart/user) to configure the Claude desktop client.
21 |
22 | You can find your Claude config at one of the following locations:
23 |
24 | - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
25 | - Windows: `%APPDATA%\Claude\claude_desktop_config.json`
26 |
27 | Then choose one of the following methods that best suits your needs and add it to your `claude_desktop_config.json`
28 |
29 |
30 |
31 |
32 | ### Using pip
33 |
34 | (Option 1) Install release from PyPI
35 |
36 | ```bash
37 | pip install chronulus-mcp
38 | ```
39 |
40 |
41 | (Option 2) Install from GitHub
42 |
43 | ```bash
44 | git clone https://github.com/ChronulusAI/chronulus-mcp.git
45 | cd chronulus-mcp
46 | pip install .
47 | ```
48 |
49 |
50 |
51 | ```json
52 | {
53 | "mcpServers": {
54 | "chronulus-agents": {
55 | "command": "python",
56 | "args": ["-m", "chronulus_mcp"],
57 | "env": {
58 | "CHRONULUS_API_KEY": ""
59 | }
60 | }
61 | }
62 | }
63 | ```
64 |
65 | Note, if you get an error like "MCP chronulus-agents: spawn python ENOENT",
66 | then you most likely need to provide the absolute path to `python`.
67 | For example `/Library/Frameworks/Python.framework/Versions/3.11/bin/python3` instead of just `python`
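If you are unsure of the absolute path, one quick way to print it (a minimal sketch for macOS/Linux; on Windows, `where python` plays the same role):

```bash
# Print the absolute path of the Python interpreter on your PATH
command -v python3 || command -v python
```

Copy the printed path into the `command` field of your `claude_desktop_config.json`.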
68 |
69 |
70 |
71 |
72 |
73 | ### Using docker
74 |
75 | Here we build a Docker image called `chronulus-mcp` that we can reuse in our Claude config.
76 |
77 | ```bash
78 | git clone https://github.com/ChronulusAI/chronulus-mcp.git
79 | cd chronulus-mcp
80 | docker build . -t 'chronulus-mcp'
81 | ```
82 |
83 | In your Claude config, be sure that the final argument matches the name you gave the Docker image in the build command.
84 |
85 | ```json
86 | {
87 | "mcpServers": {
88 | "chronulus-agents": {
89 | "command": "docker",
90 | "args": ["run", "-i", "--rm", "-e", "CHRONULUS_API_KEY", "chronulus-mcp"],
91 | "env": {
92 | "CHRONULUS_API_KEY": ""
93 | }
94 | }
95 | }
96 | }
97 | ```
98 |
99 |
100 |
101 |
102 | ### Using uvx
103 |
104 | `uvx` will pull the latest version of `chronulus-mcp` from the PyPI registry, install it, and then run it.
105 |
106 |
107 | ```json
108 | {
109 | "mcpServers": {
110 | "chronulus-agents": {
111 | "command": "uvx",
112 | "args": ["chronulus-mcp"],
113 | "env": {
114 | "CHRONULUS_API_KEY": ""
115 | }
116 | }
117 | }
118 | }
119 | ```
120 |
121 | Note, if you get an error like "MCP chronulus-agents: spawn uvx ENOENT", then you most likely need to either:
122 | 1. [install uv](https://docs.astral.sh/uv/getting-started/installation/) or
123 | 2. Provide the absolute path to `uvx`. For example `/Users/username/.local/bin/uvx` instead of just `uvx`
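A quick way to print that absolute path on macOS/Linux (this sketch prints a hint instead if `uvx` is not installed):

```bash
# Print the absolute path to uvx, or a hint if it is not on PATH
command -v uvx || echo "uvx not found: install uv first"
```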
124 |
125 |
126 |
127 | #### Additional Servers (Filesystem, Fetch, etc)
128 |
129 | In our demo, we use third-party servers like [fetch](https://github.com/modelcontextprotocol/servers/tree/main/src/fetch) and [filesystem](https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem).
130 |
131 | For details on installing and configuring third-party servers, please refer to the documentation provided by the server maintainers.
132 |
133 | Below is an example of how to configure filesystem and fetch alongside Chronulus in your `claude_desktop_config.json`:
134 |
135 | ```json
136 | {
137 | "mcpServers": {
138 | "chronulus-agents": {
139 | "command": "uvx",
140 | "args": ["chronulus-mcp"],
141 | "env": {
142 | "CHRONULUS_API_KEY": ""
143 | }
144 | },
145 | "filesystem": {
146 | "command": "npx",
147 | "args": [
148 | "-y",
149 | "@modelcontextprotocol/server-filesystem",
150 | "/path/to/AIWorkspace"
151 | ]
152 | },
153 | "fetch": {
154 | "command": "uvx",
155 | "args": ["mcp-server-fetch"]
156 | }
157 | }
158 | }
159 | ```
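After hand-editing the config, it is easy to break the JSON with a stray comma. A quick sanity check before restarting Claude (the path shown is the macOS default from above; substitute the Windows path as needed):

```bash
# Validate that the config file is well-formed JSON
python3 -m json.tool "$HOME/Library/Application Support/Claude/claude_desktop_config.json" > /dev/null && echo "config OK"
```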
160 |
161 |
162 | #### Claude Preferences
163 |
164 | To streamline your experience using Claude across multiple sets of tools, it is best to add your preferences under Claude Settings.
165 |
166 | You can update your Claude preferences in a couple of ways:
167 |
168 | * From Claude Desktop: `Settings -> General -> Claude Settings -> Profile (tab)`
169 | * From [claude.ai/settings](https://claude.ai/settings): `Profile (tab)`
170 |
171 | Preferences are shared across both Claude for Desktop and Claude.ai (the web interface), so your instructions need to work across both experiences.
172 |
173 | Below are the preferences we used to achieve the results shown in our demos:
174 |
175 | ```
176 | ## Tools-Dependent Protocols
177 | The following instructions apply only when tools/MCP Servers are accessible.
178 |
179 | ### Filesystem - Tool Instructions
180 | - Do not use 'read_file' or 'read_multiple_files' on binary files (e.g., images, pdfs, docx).
181 | - When working with binary files (e.g., images, pdfs, docx) use 'get_info' instead of 'read_*' tools to inspect a file.
182 |
183 | ### Chronulus Agents - Tool Instructions
184 | - When using Chronulus, prefer to use input field types like TextFromFile, PdfFromFile, and ImageFromFile over scanning the files directly.
185 | - When plotting forecasts from Chronulus, always include the Chronulus-provided forecast explanation below the plot and label it as Chronulus Explanation.
186 | ```
--------------------------------------------------------------------------------
/glama.json:
--------------------------------------------------------------------------------
1 | {
2 | "$schema": "https://glama.ai/mcp/schemas/server.json",
3 | "maintainers": [
4 | "theoldfather"
5 | ]
6 | }
--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
1 | [project]
2 | name = "chronulus-mcp"
3 | version = "0.0.3"
4 | description = "An MCP Server for Chronulus AI Forecasting and Prediction Agents"
5 | readme = "README.md"
6 | authors = [
7 | { name = "Chronulus AI", email = "jeremy@chronulus.com" },
8 | ]
9 | requires-python = ">=3.10"
10 | keywords = ["forecasting", "prediction", "timeseries", "mcp", "llm", "agents"]
11 | classifiers = [
12 | "Programming Language :: Python :: 3",
13 | "Programming Language :: Python :: 3.10",
14 | "Programming Language :: Python :: 3.11",
15 | "Programming Language :: Python :: 3.12",
16 | "License :: OSI Approved :: MIT License",
17 | ]
18 | dependencies = [
19 | "mcp[cli]>=1.3.0",
20 | "chronulus>=0.0.11",
21 | "chronulus-core>=0.0.19",
22 | "pandas",
23 | "requests",
24 | ]
25 |
26 | [dependency-groups]
27 | dev = [
28 | "mcp[cli]>=1.3.0",
29 | "twine>=6.1.0",
30 | ]
31 |
32 | [tool.setuptools]
33 | package-dir = { "" = "src" }
34 | packages = { find = { where = ["src"] } }
35 | include-package-data = false
36 |
37 | [tool.setuptools.package-data]
38 | "chronulus_mcp" = ["_assets/react/*.jsx", "_assets/html/*.html"]
39 |
40 | [tool.ruff]
41 | line-length = 150
42 | target-version = "py310"
43 | select = [
44 | "E", # pycodestyle errors
45 | "W", # pycodestyle warnings
46 | "F", # pyflakes
47 | "I", # isort
48 | "B", # flake8-bugbear
49 | ]
50 | ignore = []
51 |
52 | [project.scripts]
53 | chronulus-mcp = "chronulus_mcp:main"
54 |
55 | [build-system]
56 | requires = ["hatchling"]
57 | build-backend = "hatchling.build"
58 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | mcp[cli]==1.3.0
2 | chronulus>=0.0.11
3 | chronulus-core>=0.0.19
4 | requests
5 | docutils
6 | pandas
--------------------------------------------------------------------------------
/src/chronulus_mcp/__init__.py:
--------------------------------------------------------------------------------
1 | import argparse
2 |
3 | from mcp.server.fastmcp import FastMCP
4 |
5 | from .assets import get_react_component
6 | from chronulus_mcp.agent.forecaster import create_forecasting_agent_and_get_forecast, reuse_forecasting_agent_and_get_forecast, rescale_forecast
7 | from chronulus_mcp.agent.predictor import create_prediction_agent_and_get_predictions, reuse_prediction_agent_and_get_prediction
8 | from .session import create_chronulus_session, get_risk_assessment_scorecard
9 | from .io import save_forecast, save_prediction_analysis_html
10 |
11 | SERVER_DESCRIPTION_V1 = "Chronulus MCP provides access to the Chronulus AI platform of forecasting and prediction agents."
12 |
13 | SERVER_DESCRIPTION_V2 = """
14 | chronulus-agents server provides access to the Chronulus AI platform of forecasting and prediction agents.
15 |
16 | - Sessions capture an overall use case that is described by a situation and task.
17 | - Agents are created for a given session and are reusable across multiple different forecasting inputs.
18 | - Input features can include text or files including images, text, or pdf (provide the path to the file)
19 | - The total size of all inputs cannot exceed 10MB, so plan accordingly when choosing inputs
20 |
21 | For example, in a retail forecasting workflow,
22 | - The situation might include information about the business, location, demographics of customers, and motivation for forecasting
23 | - The task would include specifics about what to forecast like the demand, share of foot traffic, probability of the item going out of stock, etc.
24 | - The agent could be used for multiple different types of items with a single data model. For example, a data model with brand and price features could
25 | be used to predict over multiple items with their own values for brand and price.
26 | """
27 |
28 | mcp = FastMCP("chronulus-agents", instructions=SERVER_DESCRIPTION_V2)
29 |
30 |
31 | ##############################################################################
32 | # SESSION
33 | ##############################################################################
34 |
35 | CREATE_SESSION_DESCRIPTION = """
36 | A tool that creates a new Chronulus Session and returns a session_id
37 |
38 | When to use this tool:
39 | - Use this tool when a user has requested a forecast or prediction for a new use case
40 | - Before calling this tool make sure you have enough information to write a well-defined situation and task. You might
41 | need to ask clarifying questions in order to get this from the user.
42 | - The same session_id can be reused as long as the situation and task remain the same
43 | - If the user wants to forecast a different use case, create a new session and use that instead
44 |
45 | How to use this tool:
46 | - To create a session, you need to provide a situation and task that describe the forecasting use case
47 | - If the user has not provided enough detail for you to decompose the use case into a
48 | situation (broad or background context) and task (specific requirements for the forecast),
49 | ask them to elaborate since more detail will result in a better / more accurate forecast.
50 | - Once created, this will generate a unique session_id that can be used when calling other tools about this use case.
51 | """
52 |
53 | # session tools
54 | mcp.add_tool(create_chronulus_session, description=CREATE_SESSION_DESCRIPTION)
55 |
56 |
57 | FILE_TYPE_INSTRUCTIONS = """
58 | - Remember to pass all relevant information to Chronulus including text and images provided by the user.
59 | - If a user gives you files about a thing you are forecasting or predicting, you should pass these as inputs to the
60 | agent using one of the following types:
61 | - ImageFromFile
62 | - List[ImageFromFile]
63 | - TextFromFile
64 | - List[TextFromFile]
65 | - PdfFromFile
66 | - List[PdfFromFile]
67 | - If you have a large amount of text (over 500 words) to pass to the agent, you should use the Text or List[Text] field types
68 | """.strip()
69 |
70 |
71 | ##############################################################################
72 | # FORECASTING AGENT
73 | ##############################################################################
74 |
75 | CREATE_AGENT_AND_GET_FORECAST_DESCRIPTION = f"""
76 | This tool creates a NormalizedForecaster agent with your session and input data model and then provides a forecast input
77 | data to the agent and returns the prediction data and text explanation from the agent.
78 |
79 | When to use this tool:
80 | - Use this tool to request a forecast from Chronulus
81 | - This tool is specifically made to forecast values between 0 and 1 and does not require historical data
82 | - The prediction can be thought of as seasonal weights, probabilities, or shares of something as in the decimal representation of a percent
83 |
84 | How to use this tool:
85 | - First, make sure you have a session_id for the forecasting or prediction use case.
86 | - Next, think about the features / characteristics most suitable for producing the requested forecast and then
87 | create an input_data_model that corresponds to the input_data you will provide for the thing being forecasted.
88 | {FILE_TYPE_INSTRUCTIONS}
89 | - Finally, add information about the forecasting horizon and time scale requested by the user
90 | - Assume the dates and datetimes in the prediction results are already converted to the appropriate local timezone if location is a factor in the use case. So do not try to convert from UTC to local time when plotting.
91 | - When plotting the predictions, use a Rechart time series with the appropriate axes labeled and with the prediction explanation displayed as a caption below the plot
92 | """
93 |
94 | REUSE_AGENT_AND_GET_FORECAST_DESCRIPTION = f"""
95 | This tool provides a forecast input to a previously created Chronulus NormalizedForecaster agent and returns the
96 | prediction data and text explanation from the agent.
97 |
98 | When to use this tool:
99 | - Use this tool to request a forecast from a Chronulus agent that you have already created and when your input data model is unchanged
100 | - This tool is specifically made to forecast values between 0 and 1 and does not require historical data
101 | - The prediction can be thought of as seasonal weights, probabilities, or shares of something as in the decimal representation of a percent
102 |
103 | How to use this tool:
104 | - First, make sure you have an agent_id for the agent. The agent is already attached to the correct session. So you do not need to provide a session_id.
105 | - Next, reference the input data model that you previously used with the agent and create new inputs for the item being forecast
106 | that align with the previously specified input data model
107 | {FILE_TYPE_INSTRUCTIONS}
108 | - Finally, add information about the forecasting horizon and time scale requested by the user
109 | - Assume the dates and datetimes in the prediction results are already converted to the appropriate local timezone if location is a factor in the use case. So do not try to convert from UTC to local time when plotting.
110 | - When plotting the predictions, use a Rechart time series with the appropriate axes labeled and with the prediction explanation displayed as a caption below the plot
111 | """
112 |
113 | RESCALE_PREDICTIONS_DESCRIPTION = """
114 | A tool that rescales the prediction data (values between 0 and 1) from the NormalizedForecaster agent to the scale required for a use case
115 |
116 | When to use this tool:
117 | - Use this tool when there is enough information from the user or use cases to determine a reasonable min and max for the forecast predictions
118 | - Do not attempt to rescale or denormalize the predictions on your own without using this tool.
119 | - Also, if the best min and max for the use case are 0 and 1, then no rescaling is needed since that is already the scale of the predictions.
120 | - If a user requests to convert from probabilities to a unit in levels, be sure to caveat your use of this tool by noting that
121 | probabilities do not always scale uniformly to levels. Rescaling can be used as a rough first-pass estimate. But for best results,
122 | it would be better to start a new Chronulus forecasting use case predicting in levels from the start.
123 |
124 | How to use this tool:
125 | - To use this tool, provide the prediction_id from the normalized prediction and the min and max as floats
126 | - If the user is also changing units, consider if the units will be inverted and set the inverse scale to True if needed.
127 | - When plotting the rescaled predictions, use a Rechart time series plot with the appropriate axes labeled and include the chronulus
128 | prediction explanation as a caption below the plot.
129 | - If you would like to add additional notes about the scaled series, put these below the original prediction explanation.
130 | """
131 |
132 | SAVE_FORECAST_DESCRIPTION = """
133 | A tool that saves a Chronulus forecast from NormalizedForecaster to separate CSV and TXT files
134 |
135 | When to use this tool:
136 | - Use this tool when you need to save both the forecast data and its explanation to files
137 | - The forecast data will be saved as a CSV file for data analysis
138 | - The forecast explanation will be saved as a TXT file for reference
139 | - Both files will be saved in the same directory specified by output_path
140 | - This tool can also be used to directly save rescaled predictions without first calling the rescaling tool
141 |
142 | How to use this tool:
143 | - Provide the prediction_id from a previous forecast
144 | - Specify the output_path where both files should be saved
145 | - Provide csv_name for the forecast data file (must end in .csv)
146 | - Provide txt_name for the explanation file (must end in .txt)
147 | - Optionally provide y_min and y_max to rescale the predictions (defaults to 0)
148 | - Set invert_scale to True if the target units run in the opposite direction
149 | - The tool will provide status updates through the MCP context
150 | """
151 |
152 | # forecasting agent tools
153 | mcp.add_tool(create_forecasting_agent_and_get_forecast, description=CREATE_AGENT_AND_GET_FORECAST_DESCRIPTION)
154 | mcp.add_tool(reuse_forecasting_agent_and_get_forecast, description=REUSE_AGENT_AND_GET_FORECAST_DESCRIPTION)
155 | mcp.add_tool(rescale_forecast, description=RESCALE_PREDICTIONS_DESCRIPTION)
156 | mcp.add_tool(save_forecast, description=SAVE_FORECAST_DESCRIPTION)
157 |
158 |
159 | ##############################################################################
160 | # Prediction Agent
161 | ##############################################################################
162 |
163 | CREATE_AGENT_AND_GET_PREDICTION_DESCRIPTION = f"""
164 | This tool creates a BinaryPredictor agent with your session and input data model and then provides prediction input
165 | data to the agent and returns the consensus prediction from a panel of experts along with their individual estimates
166 | and text explanations. The agent also returns the alpha and beta parameters for a Beta distribution that allows you to
167 | estimate the confidence interval of its consensus probability estimate.
168 |
169 | When to use this tool:
170 | - Use this tool to request a probability estimate from Chronulus in situations where there is a binary outcome
171 | - This tool is specifically made to estimate the probability of an event occurring and not occurring and does not
172 | require historical data
173 |
174 | How to use this tool:
175 | - First, make sure you have a session_id for the prediction use case.
176 | - Next, think about the features / characteristics most suitable for producing the requested prediction and then
177 | create an input_data_model that corresponds to the input_data you will provide for the thing or event being predicted.
178 | {FILE_TYPE_INSTRUCTIONS}
179 | - Finally, provide the number of experts to consult. The minimum and default number is 2, but users may request up to
180 | 30 opinions in situations where reproducibility and risk sensitivity is of the utmost importance. In most cases, 2 to 5
181 | experts is sufficient.
182 | """
183 |
184 | REUSE_AGENT_AND_GET_PREDICTION_DESCRIPTION = f"""
185 | This tool provides prediction input data to a previously created Chronulus BinaryPredictor agent and returns the
186 | consensus prediction from a panel of experts along with their individual estimates and text explanations. The agent
187 | also returns the alpha and beta parameters for a Beta distribution that allows you to estimate the confidence interval
188 | of its consensus probability estimate.
189 |
190 | When to use this tool:
191 | - Use this tool to request a prediction from a Chronulus prediction agent that you have already created and when your
192 | input data model is unchanged
193 | - Use this tool to request a probability estimate from an existing prediction agent in a situation when there is a binary outcome
194 | - This tool is specifically made to estimate the probability of an event occurring and not occurring and does not
195 | require historical data
196 |
206 | How to use this tool:
207 | - First, make sure you have an agent_id for the prediction agent. The agent is already attached to the correct session.
208 | So you do not need to provide a session_id.
209 | - Next, reference the input data model that you previously used with the agent and create new input data for the item
210 | being predicted that aligns with the previously specified input data model
211 | {FILE_TYPE_INSTRUCTIONS}
212 | - Finally, provide the number of experts to consult. The minimum and default number is 2, but users may request up to
213 | 30 opinions in situations where reproducibility and risk sensitivity is of the utmost importance. In most cases, 2 to 5
214 | experts is sufficient.
215 | """
216 |
217 | SAVE_ANALYSIS_HTML_DESCRIPTION = """
218 | A tool that saves an analysis of a BinaryPredictor prediction to HTML.
219 |
220 | The analysis includes a plot of the theoretical and empirical beta distributions estimated by Chronulus and also
221 | lists the opinions provided by each expert.
222 |
223 | When to use this tool:
224 | - Use this tool when you need to save the BinaryPredictor estimates for the user
225 |
226 | How to use this tool:
227 | - Provide the request_id from a previous prediction response
228 | - Specify the output_path where the html should be saved
229 | - Provide html_name for the file (must end in .html)
230 | - The tool will provide status updates through the MCP context
231 | """
232 |
233 | # prediction agent
234 | mcp.add_tool(create_prediction_agent_and_get_predictions, description=CREATE_AGENT_AND_GET_PREDICTION_DESCRIPTION)
235 | mcp.add_tool(reuse_prediction_agent_and_get_prediction, description=REUSE_AGENT_AND_GET_PREDICTION_DESCRIPTION)
236 | mcp.add_tool(save_prediction_analysis_html, description=SAVE_ANALYSIS_HTML_DESCRIPTION)
237 |
238 | ##############################################################################
239 | # Extras
240 | ##############################################################################
241 |
242 | GET_RISK_ASSESSMENT_SCORECARD_DESCRIPTION = """
243 | A tool that retrieves the risk assessment scorecard for the Chronulus Session in Markdown format
244 |
245 | When to use this tool:
246 | - Use this tool when the user asks about the risk level or safety concerns of a forecasting use case
247 | - You may also use this tool to provide justification to a user if you would like to warn them of the implications of
248 | what they are asking you to forecast or predict.
249 |
250 | How to use this tool:
251 | - Make sure you have a session_id for the forecasting or prediction use case
252 | - When displaying the scorecard markdown for the user, you should use an MDX-style React component
253 | """
254 |
255 | RESOURCE_GET_RISK_ASSESSMENT_SCORECARD_DESCRIPTION = """
256 | A resource that retrieves the risk assessment scorecard for the Chronulus Session in Markdown or JSON format
257 |
258 | When to use this resource:
259 | - Use this resource when the user asks about the risk level or safety concerns of a forecasting use case
260 | - You may also use this resource to provide justification to a user if you would like to warn them of the implications of
261 | what they are asking you to forecast or predict.
262 |
263 | How to use this resource:
264 | - Make sure you have a session_id for the forecasting or prediction use case
265 | - To display the scorecard use the provided react resource at 'chronulus-react://Scorecard.jsx'
266 | """
267 |
268 | # extra
269 | mcp.add_tool(get_risk_assessment_scorecard, description=GET_RISK_ASSESSMENT_SCORECARD_DESCRIPTION)
270 |
271 |
272 | @mcp.resource(
273 | uri="chronulus-react://Scorecard.jsx",
274 | name="Scorecard React Template",
275 | mime_type="text/javascript",
276 | )
277 | def get_scorecard_react_template() -> str:
278 |     """Get Scorecard.jsx"""
279 | return get_react_component("Scorecard.jsx")
280 |
281 |
282 | @mcp.resource(
283 | uri="chronulus-react://BetaPlot.jsx",
284 | name="Beta Plot",
285 | mime_type="text/javascript",
286 | )
287 | def get_beta_plot_react_template() -> str:
288 | """Get BetaPlot.jsx"""
289 | return get_react_component("BetaPlot.jsx")
290 |
291 |
292 |
293 |
294 | def main():
295 |     """Chronulus AI: A platform for forecasting and prediction. Predict anything."""
296 | parser = argparse.ArgumentParser(description=SERVER_DESCRIPTION_V1)
297 | parser.parse_args()
298 | mcp.run(transport="stdio")
299 |
300 |
301 | if __name__ == "__main__":
302 | main()
--------------------------------------------------------------------------------
/src/chronulus_mcp/__main__.py:
--------------------------------------------------------------------------------
1 | # __main__.py
2 |
3 | from chronulus_mcp import main
4 |
5 | main()
--------------------------------------------------------------------------------
/src/chronulus_mcp/_assets/html/binary_predictor_analysis.html:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 | [TITLE_OF_ANALYSIS]
7 |
8 |
9 |
211 |
212 |
213 |
Below are the detailed analyses from the Chronulus experts. Each expert evaluated the question from both a positive case (probability the event would occur) and a negative case (probability the event would not occur).
321 |
322 | [EXPERT_OPINIONS]
323 |
324 |
325 |
326 |
331 |
332 |
333 |
860 |
861 |
--------------------------------------------------------------------------------
/src/chronulus_mcp/_assets/react/BetaPlot.jsx:
--------------------------------------------------------------------------------
1 | import React, { useState, useEffect } from 'react';
2 | import { LineChart, Line, BarChart, Bar, XAxis, YAxis, CartesianGrid, Tooltip, Legend, ResponsiveContainer } from 'recharts';
3 |
4 | const BetaDistribution = () => {
5 |
6 | const alphaDefault = 4 // Replace 4 with the estimated alpha parameter from the BinaryPredictor agent
7 | const betaDefault = 5 // Replace 5 with the estimated beta parameter from the BinaryPredictor agent
8 |
9 | // Using the specified parameters with state for adjustments
10 | const [alpha, setAlpha] = useState(alphaDefault);
11 | const [beta, setBeta] = useState(betaDefault);
12 | const [sampleSize, setSampleSize] = useState(1000);
13 | const [samples, setSamples] = useState([]);
14 | const [histogramData, setHistogramData] = useState([]);
15 |
16 | // Function to generate beta distribution PDF values
17 | const generateBetaDistribution = (a, b) => {
18 | const points = [];
19 | // B(a,b) = gamma(a) * gamma(b) / gamma(a+b)
20 | // We'll use an approximation for this demo
21 | const betaFunction = (a, b) => {
22 | // This is a simple approximation
23 | return Math.exp((a-0.5)*Math.log(a) + (b-0.5)*Math.log(b) - (a+b-0.5)*Math.log(a+b) - 0.5*Math.log(2*Math.PI));
24 | };
25 |
26 | const betaCoeff = 1 / betaFunction(a, b);
27 |
28 | for (let x = 0.01; x <= 0.99; x += 0.01) {
29 | const density = betaCoeff * Math.pow(x, a-1) * Math.pow(1-x, b-1);
30 | points.push({
31 | x: x.toFixed(2),
32 | pdf: density
33 | });
34 | }
35 | return points;
36 | };
37 |
38 | // Function to sample from beta distribution using rejection sampling
39 | const generateBetaSamples = (a, b, numSamples) => {
40 | // This is a simplified approach for demonstration
41 | // In real applications, you'd use a proper beta sampling algorithm
42 |
43 | // Generate samples using an approximation method
44 | const samples = [];
45 |
46 | for (let i = 0; i < numSamples; i++) {
47 | // Use the fact that if X ~ Gamma(a) and Y ~ Gamma(b), then X/(X+Y) ~ Beta(a,b)
48 | // We'll use a simple approximation of Gamma sampling
49 | let x = 0;
50 | for (let j = 0; j < Math.floor(a); j++) {
51 | x -= Math.log(Math.random());
52 | }
53 | // Handle the fractional part
54 | if (a % 1 > 0) {
55 | x -= Math.log(Math.random()) * (a % 1);
56 | }
57 |
58 | let y = 0;
59 | for (let j = 0; j < Math.floor(b); j++) {
60 | y -= Math.log(Math.random());
61 | }
62 | // Handle the fractional part
63 | if (b % 1 > 0) {
64 | y -= Math.log(Math.random()) * (b % 1);
65 | }
66 |
67 | const betaSample = x / (x + y);
68 | samples.push(betaSample);
69 | }
70 |
71 | return samples;
72 | };
73 |
74 | // Create histogram bins
75 | const createHistogram = (samples, bins = 20) => {
76 | const min = 0;
77 | const max = 1;
78 | const binWidth = (max - min) / bins;
79 |
80 | // Initialize bins
81 | const histogram = Array(bins).fill(0).map((_, i) => ({
82 | binStart: (min + i * binWidth).toFixed(2),
83 | binEnd: (min + (i + 1) * binWidth).toFixed(2),
84 | count: 0
85 | }));
86 |
87 | // Count samples in each bin
88 | samples.forEach(sample => {
89 | if (sample >= min && sample < max) {
90 | const binIndex = Math.min(Math.floor((sample - min) / binWidth), bins - 1);
91 | histogram[binIndex].count += 1;
92 | }
93 | });
94 |
95 | // Convert counts to density for comparison with PDF
96 | const totalSamples = samples.length;
97 | histogram.forEach(bin => {
98 | bin.density = totalSamples > 0 ? bin.count / (totalSamples * binWidth) : 0;
99 | });
100 |
101 | return histogram;
102 | };
103 |
104 | // Calculate sample statistics
105 | const calculateSampleStats = (samples) => {
106 | if (samples.length === 0) return { mean: 0, stdDev: 0 };
107 |
108 | const mean = samples.reduce((sum, val) => sum + val, 0) / samples.length;
109 |
110 | const sumSquaredDiff = samples.reduce((sum, val) => {
111 | const diff = val - mean;
112 | return sum + diff * diff;
113 | }, 0);
114 |
115 | const variance = sumSquaredDiff / samples.length;
116 | const stdDev = Math.sqrt(variance);
117 |
118 | return { mean, stdDev };
119 | };
120 |
121 | // Generate new samples when parameters change
122 | useEffect(() => {
123 | const newSamples = generateBetaSamples(alpha, beta, sampleSize);
124 | setSamples(newSamples);
125 | setHistogramData(createHistogram(newSamples));
126 | }, [sampleSize, alpha, beta]);
127 |
128 | // Generate the theoretical PDF
129 | const theoreticalData = generateBetaDistribution(alpha, beta);
130 |
131 | // Handle parameter changes
132 | const handleAlphaChange = (e) => {
133 | const newAlpha = parseFloat(e.target.value);
134 | if (!isNaN(newAlpha) && newAlpha > 0) {
135 | setAlpha(newAlpha);
136 | }
137 | };
138 |
139 | const handleBetaChange = (e) => {
140 | const newBeta = parseFloat(e.target.value);
141 | if (!isNaN(newBeta) && newBeta > 0) {
142 | setBeta(newBeta);
143 | }
144 | };
145 |
146 | const handleSampleSizeChange = (e) => {
147 | const newSize = parseInt(e.target.value);
148 | if (!isNaN(newSize) && newSize > 0 && newSize <= 10000) {
149 | setSampleSize(newSize);
150 | }
151 | };
152 |
153 | // Regenerate samples with same size (for random variation)
154 | const handleRegenerateSamples = () => {
155 | const newSamples = generateBetaSamples(alpha, beta, sampleSize);
156 | setSamples(newSamples);
157 | setHistogramData(createHistogram(newSamples));
158 | };
159 |
160 | // Calculate mean and variance for this beta distribution
161 | const mean = alpha / (alpha + beta);
162 | const variance = (alpha * beta) / ((alpha + beta)**2 * (alpha + beta + 1));
163 | const stdDev = Math.sqrt(variance);
164 |   const mode = alpha > 1 && beta > 1 ? (alpha - 1)/(alpha + beta - 2) : (alpha === 1 && beta === 1 ? "Not unique" : (alpha <= 1 && beta >= 1 ? 0 : (alpha >= 1 && beta <= 1 ? 1 : "Not unique")));
165 |
166 | // Calculate sample statistics
167 | const sampleStats = calculateSampleStats(samples);
168 |
169 | return (
170 |
--------------------------------------------------------------------------------
/src/chronulus_mcp/_assets/react/Scorecard.jsx:
--------------------------------------------------------------------------------
57 | );
58 | };
59 |
60 |
61 | const Page = () => {
62 |
63 | const scorecardJson = {} // replace with the json-formatted scorecard from chronulus
64 |
65 | return (
66 |
67 | );
68 | };
69 |
70 | export default Page;
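The Gamma-ratio sampler in `BetaPlot.jsx` above is easier to sanity-check outside the browser. Below is a hedged Python sketch of the same approximation (integer-shape Gamma built as a sum of exponentials, reproducing the component's rough fractional-part heuristic as-is); for integer alpha and beta the Gamma construction is exact, so the sample mean should land near alpha / (alpha + beta):

```python
import math
import random

def gamma_approx(shape: float) -> float:
    # Integer-shape Gamma(shape, 1) as a sum of -log(U) exponentials,
    # with the same rough fractional-part heuristic used in BetaPlot.jsx.
    s = sum(-math.log(random.random()) for _ in range(int(shape)))
    if shape % 1 > 0:
        s += -math.log(random.random()) * (shape % 1)
    return s

def beta_sample(a: float, b: float) -> float:
    # If X ~ Gamma(a) and Y ~ Gamma(b), then X / (X + Y) ~ Beta(a, b).
    x, y = gamma_approx(a), gamma_approx(b)
    return x / (x + y)

random.seed(0)
samples = [beta_sample(4, 5) for _ in range(20000)]
print(sum(samples) / len(samples))  # should be close to 4 / 9 ≈ 0.444
```

For non-integer shapes a proper generator (e.g. Marsaglia-Tsang) would be preferable; the heuristic here only mirrors what the JSX demo does.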
--------------------------------------------------------------------------------
/src/chronulus_mcp/agent/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ChronulusAI/chronulus-mcp/29114c185f0068f9d1bef16af3949e75dede00da/src/chronulus_mcp/agent/__init__.py
--------------------------------------------------------------------------------
/src/chronulus_mcp/agent/_types.py:
--------------------------------------------------------------------------------
1 | from typing import Type, Annotated, List, Dict, Literal, Optional, Union
2 |
3 |
4 | from pydantic import Field, BaseModel, create_model
5 | from chronulus_core.types.attribute import ImageFromFile, TextFromFile, Text, PdfFromFile
6 |
7 | class InputField(BaseModel):
8 | name: str = Field(description="Field name. Should be a valid python variable name.")
9 | description: str = Field(description="A description of the value you will pass in the field.")
10 | type: Literal[
11 | 'str', 'Text', 'List[Text]', 'TextFromFile', 'List[TextFromFile]', 'PdfFromFile', 'List[PdfFromFile]', 'ImageFromFile', 'List[ImageFromFile]'
12 | ] = Field(
13 | default='str',
14 | description="""The type of the field.
15 |     ImageFromFile takes a single named argument, 'file_path', which should be the absolute path to the image to be included, so provide this input as JSON, e.g. {'file_path': '/path/to/image'}.
16 | """
17 | )
18 |
19 |
20 | class DataRow(BaseModel):
21 | dt: str = Field(description="The value of the date or datetime field")
22 | y_hat: float = Field(description="The value of the y_hat field")
23 |
24 |
25 |
26 | def generate_model_from_fields(model_name: str, fields: List[InputField]) -> Type[BaseModel]:
27 | """
28 | Generate a new Pydantic BaseModel from a list of InputField objects.
29 |
30 | Args:
31 | model_name: The name for the generated model class
32 | fields: List of InputField objects defining the model's fields
33 |
34 | Returns:
35 | A new Pydantic BaseModel class with the specified fields
36 | """
37 |     literal_type_mapping = {
38 |         'str': str, 'Text': Text, 'List[Text]': List[Text],
39 |         'ImageFromFile': ImageFromFile,
40 |         'List[ImageFromFile]': List[ImageFromFile],
41 |         'TextFromFile': TextFromFile,
42 |         'List[TextFromFile]': List[TextFromFile],
43 |         'PdfFromFile': PdfFromFile,
44 |         'List[PdfFromFile]': List[PdfFromFile]
45 |     }
46 |
47 | field_definitions = {
48 | field.name: (
49 | Optional[literal_type_mapping.get(field.type, str)],
50 | Field(description=field.description)
51 | )
52 | for field in fields
53 | }
54 |
55 | DynamicModel = create_model(
56 | model_name,
57 | __base__=BaseModel, # Explicitly set BaseModel as the base class
58 | **field_definitions
59 | )
60 |
61 | DynamicModel.__annotations__ = {
62 | field.name: str for field in fields
63 | }
64 |
65 | return DynamicModel
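To see how `generate_model_from_fields` turns `InputField` metadata into a Pydantic class, here is a minimal, self-contained sketch of the same `create_model` pattern, restricted to plain `str` fields (the Chronulus attribute types are swapped out so the example runs without `chronulus_core`; `build_model` and `Item` are illustrative names, not part of the package):

```python
from typing import List, Optional, Type
from pydantic import BaseModel, Field, create_model

class InputField(BaseModel):
    name: str = Field(description="Field name. Should be a valid python variable name.")
    description: str = Field(description="A description of the value you will pass in the field.")

def build_model(model_name: str, fields: List[InputField]) -> Type[BaseModel]:
    # Same pattern as generate_model_from_fields: map each InputField to a
    # (type, FieldInfo) tuple and hand the dict to create_model as kwargs.
    field_definitions = {
        f.name: (Optional[str], Field(default=None, description=f.description))
        for f in fields
    }
    return create_model(model_name, __base__=BaseModel, **field_definitions)

Item = build_model("Item", [
    InputField(name="brand", description="the brand of the product to forecast"),
    InputField(name="product", description="the product to forecast"),
])
print(Item(brand="Acme", product="Widget").model_dump())  # {'brand': 'Acme', 'product': 'Widget'}
```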
--------------------------------------------------------------------------------
/src/chronulus_mcp/agent/forecaster.py:
--------------------------------------------------------------------------------
1 | import json
2 | from datetime import datetime
3 | from typing import Annotated, List, Dict, Union
4 |
5 | from mcp.server.fastmcp import Context
6 | from pydantic import Field
7 |
8 | from ._types import InputField, DataRow, generate_model_from_fields
9 |
10 | from chronulus import Session
11 | from chronulus.estimator import NormalizedForecaster
12 | from chronulus.prediction import RescaledForecast
13 |
14 |
15 |
16 | async def create_forecasting_agent_and_get_forecast(
17 | session_id: Annotated[str, Field(description="The session_id for the forecasting or prediction use case")],
18 | input_data_model: Annotated[List[InputField], Field(
19 | description="""Metadata on the fields you will include in the input_data."""
20 | )],
21 | input_data: Annotated[Dict[str, Union[str, dict, List[dict]]], Field(description="The forecast inputs that you will pass to the chronulus agent to make the prediction. The keys of the dict should correspond to the InputField name you provided in input_fields.")],
22 | forecast_start_dt_str: Annotated[str, Field(description="The datetime str in '%Y-%m-%d %H:%M:%S' format of the first value in the forecast horizon.")],
23 | ctx: Context,
24 |     time_scale: Annotated[str, Field(description="The time scale of the forecast horizon. Valid time scales are 'hours', 'days', and 'weeks'.", default="days")],
25 | horizon_len: Annotated[int, Field(description="The integer length of the forecast horizon. Eg., 60 if a 60 day forecast was requested.", default=60)],
26 | ) -> Union[str, Dict[str, Union[dict, str]]]:
27 | """Queues and retrieves a forecast from Chronulus with a predefined session_id
28 |
29 |     This tool creates a NormalizedForecaster agent, provides a forecast input to the agent, and returns the prediction data and
30 |     text explanation from the agent.
31 |
32 | Args:
33 | session_id (str): The session_id for the forecasting or prediction use case.
34 | input_data_model (List[InputField]): Metadata on the fields you will include in the input_data. Eg., for a field named "brand", add a description like "the brand of the product to forecast"
35 | input_data (Dict[str, Union[str, dict, List[dict]]]): The forecast inputs that you will pass to the chronulus agent to make the prediction. The keys of the dict should correspond to the InputField name you provided in input_fields.
36 |         forecast_start_dt_str (str): The datetime str in '%Y-%m-%d %H:%M:%S' format of the first value in the forecast horizon.
37 |         ctx (Context): Context object providing access to MCP capabilities.
38 |         time_scale (str): The time scale of the forecast horizon. Valid time scales are 'hours', 'days', and 'weeks'.
39 | horizon_len (int): The integer length of the forecast horizon. Eg., 60 if a 60 day forecast was requested.
40 |
41 | Returns:
42 | Union[str, Dict[str, Union[dict, str]]]: a dictionary with prediction data, a text explanation of the predictions, estimator_id, and the prediction id.
43 | """
44 |
45 |
46 | try:
47 | chronulus_session = Session.load_from_saved_session(session_id=session_id, verbose=False)
48 | except Exception as e:
49 | error_message = f"Failed to retrieve session with session_id: {session_id}\n\n{e}"
50 |         _ = await ctx.error(message=error_message)
51 | return error_message
52 |
53 | try:
54 | InputItem = generate_model_from_fields("InputItem", input_data_model)
55 | except Exception as e:
56 | error_message = f"Failed to create InputItem model with input data model: {json.dumps(input_data_model, indent=2)}\n\n{e}"
57 | _ = await ctx.error(message=error_message)
58 | return error_message
59 |
60 | try:
61 | item = InputItem(**input_data)
62 | except Exception as e:
63 | error_message = f"Failed to validate the input_data with the generated InputItem model. \n\n{e}"
64 | _ = await ctx.error(message=error_message)
65 | return error_message
66 |
67 | try:
68 | nf_agent = NormalizedForecaster(
69 | session=chronulus_session,
70 | input_type=InputItem,
71 | verbose=False,
72 | )
73 | except Exception as e:
74 | return f"""Error at nf_agent: {str(e)}
75 |
76 | input_fields = {input_data_model}
77 |
78 | input_data = {json.dumps(input_data, indent=2)}
79 |
80 | input_type = {str(type(InputItem))}
81 | """
82 |
83 | try:
84 | forecast_start_dt = datetime.fromisoformat(forecast_start_dt_str)
85 | horizon_params = {
86 | 'start_dt': forecast_start_dt,
87 | time_scale: horizon_len
88 | }
89 | req = nf_agent.queue(item, **horizon_params)
90 | except Exception as e:
91 | return f"""Error at nf_agent: {str(e)}"""
92 |
93 | try:
94 | predictions = nf_agent.get_predictions(req.request_id)
95 | prediction = predictions[0]
96 | return {
97 | "agent_id": nf_agent.estimator_id,
98 | "prediction_id": prediction.id,
99 | 'data': prediction.to_json(orient='rows'),
100 | 'explanation': prediction.text}
101 |
102 | except Exception as e:
103 | return f"""Error on prediction: {str(e)}"""
104 |
105 |
106 | async def reuse_forecasting_agent_and_get_forecast(
107 | agent_id: Annotated[str, Field(description="The agent_id for the forecasting or prediction use case and previously defined input_data_model")],
108 | input_data: Annotated[Dict[str, Union[str, dict, List[dict]]], Field(
109 | description="The forecast inputs that you will pass to the chronulus agent to make the prediction. The keys of the dict should correspond to the InputField name you provided in input_fields.")],
110 | forecast_start_dt_str: Annotated[str, Field(
111 | description="The datetime str in '%Y-%m-%d %H:%M:%S' format of the first value in the forecast horizon.")],
112 |         time_scale: Annotated[str, Field(
113 |             description="The time scale of the forecast horizon. Valid time scales are 'hours', 'days', and 'weeks'.",
114 | default="days")],
115 | horizon_len: Annotated[int, Field(
116 | description="The integer length of the forecast horizon. Eg., 60 if a 60 day forecast was requested.",
117 | default=60)],
118 | ) -> Union[str, Dict[str, Union[dict, str]]]:
119 | """Queues and retrieves a forecast from Chronulus with a previously created agent_id
120 |
121 |     This tool provides a forecast input to a previously created Chronulus NormalizedForecaster agent and returns the
122 | prediction data and text explanation from the agent.
123 |
124 | Args:
125 | agent_id (str): The agent_id for the forecasting or prediction use case and previously defined input_data_model
126 | input_data (Dict[str, Union[str, dict, List[dict]]]): The forecast inputs that you will pass to the chronulus agent to make the prediction. The keys of the dict should correspond to the InputField name you provided in input_fields.
127 |         forecast_start_dt_str (str): The datetime str in '%Y-%m-%d %H:%M:%S' format of the first value in the forecast horizon.
128 |         time_scale (str): The time scale of the forecast horizon. Valid time scales are 'hours', 'days', and 'weeks'.
129 | horizon_len (int): The integer length of the forecast horizon. Eg., 60 if a 60 day forecast was requested.
130 |
131 | Returns:
132 | Union[str, Dict[str, Union[dict, str]]]: a dictionary with prediction data, a text explanation of the predictions, agent_id, and the prediction id.
133 | """
134 |
135 | nf_agent = NormalizedForecaster.load_from_saved_estimator(estimator_id=agent_id, verbose=False)
136 | item = nf_agent.input_type(**input_data)
137 |
138 | try:
139 | forecast_start_dt = datetime.fromisoformat(forecast_start_dt_str)
140 | horizon_params = {
141 | 'start_dt': forecast_start_dt,
142 | time_scale: horizon_len
143 | }
144 | req = nf_agent.queue(item, **horizon_params)
145 | except Exception as e:
146 | return f"""Error at nf_agent: {str(e)}"""
147 |
148 | try:
149 | predictions = nf_agent.get_predictions(req.request_id)
150 | prediction = predictions[0]
151 | return {
152 | "agent_id": nf_agent.estimator_id,
153 | "prediction_id": prediction.id,
154 | 'data': prediction.to_json(orient='rows'),
155 | 'explanation': prediction.text}
156 |
157 | except Exception as e:
158 | return f"""Error on prediction: {str(e)}"""
159 |
160 |
161 | async def rescale_forecast(
162 | prediction_id: Annotated[str, Field(description="The prediction_id from a prediction result")],
163 | y_min: Annotated[float, Field(description="The expected smallest value for the use case. E.g., for product sales, 0 would be the least possible value for sales.")],
164 |         y_max: Annotated[float, Field(description="The expected largest value for the use case. E.g., for product sales, the largest possible value would be given by the user or determined from the history of sales for the product in question or a similar product.")],
165 | invert_scale: Annotated[bool, Field(description="Set this flag to true if the scale of the new units will run in the opposite direction from the inputs.", default=False)],
166 | ) -> List[dict]:
167 | """Rescales prediction data from the NormalizedForecaster agent
168 |
169 | Args:
170 | prediction_id (str) : The prediction_id for the prediction you would like to rescale as returned by the forecasting agent
171 | y_min (float) : The expected smallest value for the use case. E.g., for product sales, 0 would be the least possible value for sales.
172 |         y_max (float) : The expected largest value for the use case. E.g., for product sales, the largest possible value would be given by the user or determined from the history of sales for the product in question or a similar product.
173 | invert_scale (bool): Set this flag to true if the scale of the new units will run in the opposite direction from the inputs.
174 |
175 | Returns:
176 | List[dict] : The prediction data rescaled to suit the use case
177 | """
178 |
179 | normalized_forecast = NormalizedForecaster.get_prediction_static(prediction_id)
180 | rescaled_forecast = RescaledForecast.from_forecast(
181 | forecast=normalized_forecast,
182 | y_min=y_min,
183 | y_max=y_max,
184 | invert_scale=invert_scale
185 | )
186 |
187 | return [DataRow(dt=row.get('date',row.get('datetime')), y_hat=row.get('y_hat')).model_dump() for row in rescaled_forecast.to_json(orient='rows')]
188 |
189 |
190 |
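Both tools above build the forecast horizon as a dict whose key is the `time_scale` string and then splat it into `queue(...)` as keyword arguments, so one code path serves 'hours', 'days', and 'weeks'. A small sketch of that pattern, with `queue_stub` as a hypothetical stand-in for `NormalizedForecaster.queue`:

```python
from datetime import datetime

def queue_stub(start_dt: datetime, hours: int = 0, days: int = 0, weeks: int = 0) -> dict:
    # Hypothetical stand-in for NormalizedForecaster.queue: just echoes the horizon.
    return {"start_dt": start_dt, "hours": hours, "days": days, "weeks": weeks}

time_scale = "days"   # one of 'hours', 'days', 'weeks'
horizon_len = 60
forecast_start_dt = datetime.fromisoformat("2025-01-01 00:00:00")

# The dict key is chosen at runtime, so the same code handles all three scales.
horizon_params = {"start_dt": forecast_start_dt, time_scale: horizon_len}
print(queue_stub(**horizon_params))  # days=60, hours and weeks stay 0
```

Note that `datetime.fromisoformat` accepts the '%Y-%m-%d %H:%M:%S' strings the tools document, which is why no explicit `strptime` call is needed.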
--------------------------------------------------------------------------------
/src/chronulus_mcp/agent/predictor.py:
--------------------------------------------------------------------------------
1 | import json
2 | from typing import Annotated, List, Dict, Union
3 |
4 | from mcp.server.fastmcp import Context
5 | from pydantic import Field
6 |
7 | from ._types import InputField, generate_model_from_fields
8 |
9 | from chronulus import Session
10 | from chronulus.estimator import BinaryPredictor
11 |
12 |
13 | async def create_prediction_agent_and_get_predictions(
14 | session_id: Annotated[str, Field(description="The session_id for the forecasting or prediction use case")],
15 | input_data_model: Annotated[List[InputField], Field(
16 | description="""Metadata on the fields you will include in the input_data."""
17 | )],
18 |         input_data: Annotated[Dict[str, Union[str, dict, List[dict]]], Field(description="The prediction inputs that you will pass to the chronulus agent to make the prediction. The keys of the dict should correspond to the InputField name you provided in input_fields.")],
19 | ctx: Context,
20 | num_experts: Annotated[int, Field(description="The number of experts to consult when forming consensus")],
21 | ) -> Union[str, Dict[str, Union[dict, str]]]:
22 | """Queues and retrieves a binary event prediction from Chronulus with a predefined session_id
23 |
24 |     This tool creates a BinaryPredictor agent, provides a prediction input to the agent, and returns the prediction data and
25 |     text explanations from each of the experts consulted by the agent.
26 |
27 | Args:
28 | session_id (str): The session_id for the forecasting or prediction use case.
29 | input_data_model (List[InputField]): Metadata on the fields you will include in the input_data. Eg., for a field named "brand", add a description like "the brand of the product to forecast"
30 | input_data (Dict[str, Union[str, dict, List[dict]]]): The prediction inputs that you will pass to the chronulus agent to make the prediction. The keys of the dict should correspond to the InputField name you provided in input_fields.
31 | ctx (Context): Context object providing access to MCP capabilities.
32 | num_experts (int): The number of experts to consult when forming consensus.
33 |
34 | Returns:
35 | Union[str, Dict[str, Union[dict, str]]]: a dictionary with prediction data, a text explanation of the predictions, agent_id, and probability estimate.
36 | """
37 |
38 |
39 | try:
40 | chronulus_session = Session.load_from_saved_session(session_id=session_id, verbose=False)
41 | except Exception as e:
42 | error_message = f"Failed to retrieve session with session_id: {session_id}\n\n{e}"
43 |         _ = await ctx.error(message=error_message)
44 | return error_message
45 |
46 | try:
47 | InputItem = generate_model_from_fields("InputItem", input_data_model)
48 | except Exception as e:
49 | error_message = f"Failed to create InputItem model with input data model: {json.dumps(input_data_model, indent=2)}\n\n{e}"
50 | _ = await ctx.error(message=error_message)
51 | return error_message
52 |
53 | try:
54 | item = InputItem(**input_data)
55 | except Exception as e:
56 | error_message = f"Failed to validate the input_data with the generated InputItem model. \n\n{e}"
57 | _ = await ctx.error(message=error_message)
58 | return error_message
59 |
60 | try:
61 | agent = BinaryPredictor(
62 | session=chronulus_session,
63 | input_type=InputItem,
64 | verbose=False,
65 | )
66 | except Exception as e:
67 |         return f"""Error at agent: {str(e)}
68 |
69 | input_fields = {input_data_model}
70 |
71 | input_data = {json.dumps(input_data, indent=2)}
72 |
73 | input_type = {str(type(InputItem))}
74 | """
75 |
76 | try:
77 |
78 | req = agent.queue(item, num_experts=num_experts, note_length=(5,10))
79 | except Exception as e:
80 |         return f"""Error at agent: {str(e)}"""
81 |
82 | try:
83 | prediction_set = agent.get_request_predictions(req.request_id)
84 | return {
85 | "agent_id": agent.estimator_id,
86 | "request_id": req.request_id,
87 | "beta_params": prediction_set.beta_params,
88 | 'expert_opinions': [p.text for p in prediction_set],
89 | 'probability': prediction_set.prob_a}
90 |
91 | except Exception as e:
92 | return f"""Error on prediction: {str(e)}"""
93 |
94 |
95 | async def reuse_prediction_agent_and_get_prediction(
96 | agent_id: Annotated[str, Field(description="The agent_id for the forecasting or prediction use case and previously defined input_data_model")],
97 | input_data: Annotated[Dict[str, Union[str, dict, List[dict]]], Field(
98 | description="The forecast inputs that you will pass to the chronulus agent to make the prediction. The keys of the dict should correspond to the InputField name you provided in input_fields.")],
99 | num_experts: Annotated[int, Field(description="The number of experts to consult when forming consensus")],
100 |
101 | ) -> Union[str, Dict[str, Union[dict, str]]]:
102 | """Queues and retrieves a binary event prediction from Chronulus with a previously created agent_id
103 |
104 |     This tool provides a prediction input to a previously created Chronulus BinaryPredictor agent and returns the
105 | prediction data and text explanations from each of the experts consulted by the agent.
106 |
107 | Args:
108 | agent_id (str): The agent_id for the forecasting or prediction use case and previously defined input_data_model
109 | input_data (Dict[str, Union[str, dict, List[dict]]]): The forecast inputs that you will pass to the chronulus agent to make the prediction. The keys of the dict should correspond to the InputField name you provided in input_fields.
110 | num_experts (int): The number of experts to consult when forming consensus.
111 |
112 | Returns:
113 | Union[str, Dict[str, Union[dict, str]]]: a dictionary with prediction data, a text explanation of the predictions, agent_id, and probability estimate.
114 | """
115 |
116 | agent = BinaryPredictor.load_from_saved_estimator(estimator_id=agent_id, verbose=False)
117 | item = agent.input_type(**input_data)
118 |
119 | try:
120 | req = agent.queue(item, num_experts=num_experts, note_length=(5,10))
121 | except Exception as e:
122 |         return f"""Error at agent: {str(e)}"""
123 |
124 | try:
125 | prediction_set = agent.get_request_predictions(req.request_id)
126 | return {
127 | "agent_id": agent.estimator_id,
128 | "request_id": req.request_id,
129 | "beta_params": prediction_set.beta_params,
130 | 'expert_opinions': [p.text for p in prediction_set],
131 | 'probability': prediction_set.prob_a}
132 |
133 | except Exception as e:
134 | return f"""Error on prediction: {str(e)}"""
135 |
136 |
137 |
138 |
139 |
--------------------------------------------------------------------------------
/src/chronulus_mcp/assets.py:
--------------------------------------------------------------------------------
1 | from importlib import resources
2 |
3 | #
4 |
5 |
6 |
7 | def get_react_component(filename: str) -> str:
8 | """
9 | Get the code for a react template.
10 |
11 | Returns
12 | -------
13 | str
14 | React template source code
15 | """
16 | # Get the package directory
17 | for file in resources.files("chronulus_mcp._assets.react").iterdir():
18 | if file.is_file() and file.name == filename:
19 | contents = file.read_text()
20 | return contents
21 |
22 | raise FileNotFoundError(filename)
23 |
24 |
25 |
26 | def get_html_template(filename: str) -> str:
27 | """
28 | Get the code for a html template.
29 |
30 | Returns
31 | -------
32 | str
33 | Html template source code
34 | """
35 | # Get the package directory
36 | for file in resources.files("chronulus_mcp._assets.html").iterdir():
37 | if file.is_file() and file.name == filename:
38 | contents = file.read_text()
39 | return contents
40 |
41 | raise FileNotFoundError(filename)
42 |
43 |
44 |
45 |
46 |
47 |
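Both helpers above scan a package's resource directory with `importlib.resources.files(...).iterdir()`. The same traversal works against any importable package; the sketch below (with the illustrative name `read_asset`) reads a file out of the standard-library `json` package just to exercise the pattern:

```python
from importlib import resources

def read_asset(package: str, filename: str) -> str:
    # Same traversal as get_react_component / get_html_template:
    # iterate the package's resource entries and return the first name match.
    for entry in resources.files(package).iterdir():
        if entry.is_file() and entry.name == filename:
            return entry.read_text()
    raise FileNotFoundError(filename)

source = read_asset("json", "__init__.py")  # any importable package works
print(len(source) > 0)
```

This approach (Python 3.9+) works whether the package is installed as loose files or inside a zip, which is why it is preferred over joining paths with `__file__`.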
--------------------------------------------------------------------------------
/src/chronulus_mcp/io.py:
--------------------------------------------------------------------------------
1 | import os
2 | from typing import Annotated, Optional
3 | import math
4 |
5 | from mcp.server.fastmcp import Context
6 | from pydantic import Field
7 | from datetime import datetime
8 |
9 | from chronulus.estimator import NormalizedForecaster
10 | from chronulus.estimator import BinaryPredictor
11 | from chronulus.prediction import RescaledForecast
12 |
13 | from chronulus_mcp.assets import get_html_template
14 |
15 |
16 | async def save_forecast(
17 | prediction_id: Annotated[str, Field(description="The prediction_id from a prediction result")],
18 |         output_path: Annotated[str, Field(description="The directory where the CSV and TXT files should be saved.")],
19 |         csv_name: Annotated[str, Field(description="The name of the CSV file to be saved. Should end in .csv")],
20 | txt_name: Annotated[str, Field(description="The name of the TXT file to be saved. Should end in .txt")],
21 | ctx: Context,
22 | y_min: Annotated[float, Field(default=0.0, description="The expected smallest value for the use case. E.g., for product sales, 0 would be the least possible value for sales.")],
23 |         y_max: Annotated[float, Field(default=1.0, description="The expected largest value for the use case. E.g., for product sales, the largest possible value would be given by the user or determined from the history of sales for the product in question or a similar product.")],
24 | invert_scale: Annotated[bool, Field(default=False, description="Set this flag to true if the scale of the new units will run in the opposite direction from the inputs.")],
25 | ) -> str:
26 | """Saves the forecast from a NormalizedForecaster agent to CSV and the explanation to TXT
27 |
28 | Args:
29 | prediction_id (str): The prediction_id for the prediction you would like to rescale as returned by the forecasting agent
30 |         output_path (str): The directory where the CSV and TXT files should be saved.
31 | csv_name (str): The name of the CSV file to be saved. Should end in .csv
32 | txt_name (str): The name of the TXT file to be saved. Should end in .txt
33 | ctx (Context): Context object providing access to MCP capabilities.
34 | y_min (float): The expected smallest value for the use case. E.g., for product sales, 0 would be the least possible value for sales.
35 |         y_max (float): The expected largest value for the use case. E.g., for product sales, the largest possible value would be given by the user or determined from the history of sales for the product in question or a similar product.
36 | invert_scale (bool): Set this flag to true if the scale of the new units will run in the opposite direction from the inputs.
37 |
38 |
39 | Returns:
40 | str: A message confirming the file was saved and its location
41 | """
42 | # Get normalized forecast and rescale it
43 | _ = await ctx.info(f"Fetching prediction data for {prediction_id}")
44 | normalized_forecast = NormalizedForecaster.get_prediction_static(prediction_id, verbose=False)
45 | rescaled_forecast = RescaledForecast.from_forecast(
46 | forecast=normalized_forecast,
47 | y_min=y_min,
48 | y_max=y_max,
49 | invert_scale=invert_scale
50 | )
51 |
52 | # Convert to pandas using built-in method
53 | df = rescaled_forecast.to_pandas()
54 |
55 | # Save to CSV
56 | df.to_csv(os.path.join(output_path, csv_name), index_label="ds")
57 |
58 | with open(os.path.join(output_path, txt_name), "w") as f:
59 | f.write(normalized_forecast.text)
60 |
61 | return f"Forecast saved successfully to {output_path}"
62 |
63 |
64 |
65 | async def save_prediction_analysis_html(
66 | request_id: Annotated[str, Field(description="The request_id from the BinaryPredictor result")],
67 |         output_path: Annotated[str, Field(description="The directory where the HTML file should be saved.")],
68 |         html_name: Annotated[str, Field(description="The name of the HTML file to be saved. Should end in .html")],
69 | title: Annotated[str, Field(description="Title of analysis")],
70 | plot_label: Annotated[str, Field(description="Label for the Beta plot")],
71 | chronulus_prediction_summary: Annotated[str, Field(description="A summary paragraph distilling prediction results and expert opinions provided by Chronulus")],
72 | dist_shape: Annotated[str, Field(description="A one line description of the shape of the distribution of predictions")],
73 | dist_shape_interpretation: Annotated[str, Field(description="2-3 sentences interpreting the shape of the distribution of predictions in layman's terms")],
74 | #ctx: Context,
75 | ) -> str:
76 | """Saves the analysis from a BinaryPredictor prediction to an HTML file
77 |
78 | Args:
79 | request_id (str): The request_id from the BinaryPredictor result
80 |         output_path (str): The directory where the HTML file should be saved.
81 | html_name (str): The name of the HTML file to be saved. Should end in .html
82 | title (str): Title of analysis
83 | plot_label (str): Label for the Beta plot
84 | chronulus_prediction_summary (str) : A summary paragraph distilling prediction results and expert opinions provided by Chronulus
85 | dist_shape (str) : A one line description of the shape of the distribution of predictions
86 |         dist_shape_interpretation (str) : 2-3 sentences interpreting the shape of the distribution of predictions in layman's terms
87 |
88 | Returns:
89 | str: A message confirming the file was saved and its location
90 | """
91 |     # Fetch the prediction set and fill in the HTML template
92 | #_ = await ctx.info(f"Fetching prediction data for request_id: {request_id}")
93 |
94 | html = get_html_template("binary_predictor_analysis.html")
95 |
96 | prediction_set = BinaryPredictor.get_request_predictions_static(request_id, verbose=False)
97 |
98 | mean = prediction_set.prob_a
99 | a, b = prediction_set.beta_params.alpha, prediction_set.beta_params.beta
100 | variance = (a*b) / (((a+b)**2)*(a+b+1))
101 | stdev = math.sqrt(variance)
102 | divergent = a <= 1 or b <= 1
103 |     mode = (a - 1) / (a + b - 2) if not divergent else None
104 |     mode_txt = f"{mode: 16.4f}" if not divergent else 'Diverges'
105 |
106 | html = html.replace("[TITLE_OF_ANALYSIS]", title)
107 | html = html.replace("[PLOT_LABEL]", plot_label)
108 | html = html.replace("[CHRONULUS_PREDICTION_SUMMARY]", chronulus_prediction_summary)
109 | html = html.replace("[DIST_SHAPE_DESCRIPTION]", dist_shape)
110 | html = html.replace("[DIST_SHAPE_INTERPRETATION]", dist_shape_interpretation)
111 | html = html.replace("[ALPHA]", f"{a: 16.16f}")
112 | html = html.replace("[BETA]", f"{b: 16.16f}")
113 | html = html.replace("[MEAN]", f"{mean: 16.4f}")
114 | html = html.replace("[VARIANCE]", f"{variance: 16.4f}")
115 | html = html.replace("[STDEV]", f"{stdev: 16.4f}")
116 | html = html.replace("[MODE]", mode_txt)
117 |
118 | date = datetime.today().strftime("%B %d, %Y")
119 | html = html.replace("[DATE]", date)
120 |
121 | expert_opinion_list = []
122 | for i, p in enumerate(prediction_set):
123 | pos_text = p.opinion_set.positive.text
124 | neg_text = p.opinion_set.negative.text
125 |             pos = f"""
126 |             <div>
127 |             <h3>Expert {i+1} - Positive Case</h3>
128 |             <p>{pos_text}</p>
129 |             </div>
130 |             """
131 |             neg = f"""
132 |             <div>
133 |             <h3>Expert {i+1} - Negative Case</h3>
134 |             <p>{neg_text}</p>
135 |             </div>
136 |             """
137 | expert_opinion_list.append(pos)
138 | expert_opinion_list.append(neg)
139 |
140 | expert_opinions = "\n\n".join(expert_opinion_list)
141 |
142 | html = html.replace("[EXPERT_OPINIONS]", expert_opinions)
143 |
144 |
145 | with open(os.path.join(output_path, html_name), "w") as f:
146 | f.write(html)
147 |
148 |     return f"BinaryPredictor analysis saved successfully to {os.path.join(output_path, html_name)}"
--------------------------------------------------------------------------------
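The template-filling code above relies on the standard summary statistics of a Beta(α, β) distribution. For reference, those identities can be collected into a standalone helper (a minimal sketch; `beta_summary` is a hypothetical name, not part of this package):

```python
import math

def beta_summary(a: float, b: float) -> dict:
    """Summary statistics of a Beta(a, b) distribution.

    The mode is undefined (the density diverges at 0 and/or 1)
    when either shape parameter is <= 1, so it is reported as None.
    """
    mean = a / (a + b)
    variance = (a * b) / (((a + b) ** 2) * (a + b + 1))
    stdev = math.sqrt(variance)
    mode = (a - 1) / (a + b - 2) if a > 1 and b > 1 else None
    return {"mean": mean, "variance": variance, "stdev": stdev, "mode": mode}
```

For example, Beta(2, 2) is symmetric about 0.5, so its mean and mode are both 0.5 and its variance is 4 / (16 · 5) = 0.05, matching the formulas used in the analysis code.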
/src/chronulus_mcp/session.py:
--------------------------------------------------------------------------------
1 | from typing import Annotated
2 | import json
3 | from chronulus import Session
4 | from mcp.server.fastmcp import Context
  5 | from pydantic import Field
6 |
7 |
8 | async def create_chronulus_session(
9 | name: Annotated[str, Field(description="A short descriptive name for the use case defined in the session.")],
10 | situation: Annotated[str, Field(description="The broader context for the use case")],
11 | task: Annotated[str, Field(description="Specific details on the forecasting or prediction task.")],
12 | ctx: Context
13 | ) -> str:
14 | """Creates a new Chronulus Session
15 |
 16 |     A Chronulus Session allows you to use Chronulus Agents. To create a session, you need to provide a situation
 17 |     and task. Once created, this will generate a unique session ID that can be used when calling the agents.
18 |
19 | Args:
20 | name (str): A short descriptive name for the use case defined in the session.
21 | situation (str): The broader context for the use case.
22 | task (str): The specific prediction task.
23 |
24 |
25 | Returns:
26 | str: The session ID.
27 | """
28 |
29 | try:
30 | chronulus_session = Session(
31 | name=name,
32 | situation=situation,
33 | task=task,
34 | verbose=False,
35 | )
36 |
37 | except Exception as e:
38 | error_message = f"Failed to create chronulus session with the following error: \n\n{e}"
39 | _ = await ctx.error(message=error_message)
40 | return error_message
41 |
42 | return chronulus_session.session_id
43 |
44 |
45 | async def get_risk_assessment_scorecard(
46 | session_id: Annotated[str, Field(description="The session_id for the forecasting or prediction use case")],
47 | as_json: Annotated[bool, Field(description="If true, returns the scorecard in JSON format, otherwise returns a markdown formatted scorecard")]
48 | ) -> str:
49 | """Get the risk assessment scorecard for the Session
50 |
51 | Args:
52 | session_id (str): The session_id for the forecasting or prediction use case.
53 | as_json (bool): If true, returns the scorecard in JSON format, otherwise returns a markdown formatted scorecard
54 |
55 | Returns:
56 | str: a risk assessment scorecard in the specified format.
57 | """
58 |
59 | chronulus_session = Session.load_from_saved_session(session_id=session_id, verbose=False)
60 | scorecard_md = chronulus_session.risk_scorecard(width='100%')
61 | if as_json:
62 | content = json.dumps(chronulus_session.scorecard.model_dump())
63 | else:
64 | content = scorecard_md
65 | return content
66 |
67 |
68 |
--------------------------------------------------------------------------------
/src/chronulus_mcp/stats/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ChronulusAI/chronulus-mcp/29114c185f0068f9d1bef16af3949e75dede00da/src/chronulus_mcp/stats/__init__.py
--------------------------------------------------------------------------------
/src/chronulus_mcp/stats/odds.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ChronulusAI/chronulus-mcp/29114c185f0068f9d1bef16af3949e75dede00da/src/chronulus_mcp/stats/odds.py
--------------------------------------------------------------------------------
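The contents of `stats/odds.py` are only referenced by URL above, so its actual API is not shown here. For orientation, the usual probability/odds conversions behind a binary prediction can be sketched as follows (the helper names are assumptions, not the module's real interface):

```python
def probability_to_odds(p: float) -> float:
    """Convert a probability p in (0, 1) to odds in favor, p / (1 - p)."""
    if not 0.0 < p < 1.0:
        raise ValueError("probability must be strictly between 0 and 1")
    return p / (1.0 - p)

def odds_to_probability(odds: float) -> float:
    """Convert odds in favor back to a probability, odds / (1 + odds)."""
    if odds < 0.0:
        raise ValueError("odds must be non-negative")
    return odds / (1.0 + odds)
```

For example, a predicted probability of 0.8 corresponds to odds of 4-to-1 in favor, and the two conversions are inverses of each other.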
/src/server.py:
--------------------------------------------------------------------------------
1 | from chronulus_mcp import main, mcp
2 |
3 | if __name__ == "__main__":
4 | main()
--------------------------------------------------------------------------------