├── requirements.txt ├── LICENSE ├── code.ipynb └── README.md /requirements.txt: -------------------------------------------------------------------------------- 1 | # LangChain and LangGraph core components 2 | langchain 3 | langgraph 4 | langchain-openai 5 | langchain-community 6 | 7 | # Foundational libraries for data and LLMs 8 | openai 9 | pydantic 10 | rich 11 | typing-extensions 12 | 13 | # Data retrieval and analysis tools 14 | yfinance 15 | finnhub-python 16 | stockstats 17 | pandas 18 | requests 19 | 20 | # Web search tool provider 21 | tavily-python 22 | 23 | # Vector store for memory 24 | chromadb 25 | 26 | # Web scraping (often a dependency of other tools) 27 | beautifulsoup4 28 | 29 | # For notebook-specific functionality and visualization 30 | ipython 31 | pygraphviz -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2025 Fareed Khan 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /code.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "id": "intro-1", 6 | "metadata": {}, 7 | "source": [ 8 | "\n", 9 | "# Deep Thinking Trading System\n", 10 | "\n", 11 | "The trend is shifting from monolithic, single-purpose models to sophisticated **societies of agents**, where multiple specialized AIs collaborate to solve complex problems. This is particularly true in finance, where making a single decision requires synthesizing information from dozens of sources, from market charts and news headlines to social media sentiment and dense financial reports.\n", 12 | "\n", 13 | "In this guide, we will build a complete, **standalone** implementation of a powerful financial analysis framework. We will define every component—from live data tools to complex agent logic—directly within this notebook, creating a self-contained pipeline that functions end-to-end.\n", 14 | "\n", 15 | "We will use **LangGraph** to orchestrate this complex workflow, **LangSmith** for tracing, a suite of LLMs to power our agents, and live web search APIs to ground their analysis in real-world, up-to-the-minute information. By the end, you will have a deep, practical understanding of how to build, run, and evaluate a sophisticated agentic system for financial analysis." 
16 | ] 17 | }, 18 | { 19 | "cell_type": "markdown", 20 | "id": "toc-1", 21 | "metadata": {}, 22 | "source": [ 23 | "\n", 24 | "## Table of Contents\n", 25 | "- [**Part 1: Setting Up the Foundational Components**](#part-1-setting-up-the-foundational-components)\n", 26 | " - [1.1. Environment Variables and Core Imports](#11-environment-variables-and-core-imports)\n", 27 | " - [1.2. The Configuration Dictionary: The Control Panel for Our Agents](#12-the-configuration-dictionary-the-control-panel-for-our-agents)\n", 28 | " - [1.3. Initializing the Language Models (LLMs)](#13-initializing-the-language-models-llms)\n", 29 | " - [1.4. **Code Dependency:** Defining the `AgentState` and Other State Dictionaries](#14-code-dependency-defining-the-agentstate-and-other-state-dictionaries)\n", 30 | " - [1.5. **Code Dependency:** Defining the Live Data Tools and `Toolkit`](#15-code-dependency-defining-the-live-data-tools-and-toolkit)\n", 31 | " - [1.6. **Code Dependency:** Defining `FinancialSituationMemory` for Long-Term Learning](#16-code-dependency-defining-financialsituationmemory-for-long-term-learning)\n", 32 | "\n", 33 | "- [**Part 2: The Analyst Team - Intelligence Gathering from Diverse Sources**](#part-2-the-analyst-team---intelligence-gathering-from-diverse-sources)\n", 34 | " - [2.1. **Code Dependency:** Defining the Analyst Agent Logic](#21-code-dependency-defining-the-analyst-agent-logic)\n", 35 | " - [2.2. Understanding the Analyst Workflow: The ReAct Loop in Action](#22-understanding-the-analyst-workflow-the-react-loop-in-action)\n", 36 | " - [2.3. Running the Analysts Sequentially and Examining Their Reports](#23-running-the-analysts-sequentially-and-examining-their-reports)\n", 37 | "\n", 38 | "- [**Part 3: The Researcher Team - The Bull vs. Bear Investment Debate**](#part-3-the-researcher-team---the-bull-vs-bear-investment-debate)\n", 39 | " - [3.1. 
**Code Dependency:** Defining the Researcher and Manager Agent Logic](#31-code-dependency-defining-the-researcher-and-manager-agent-logic)\n", 40 | " - [3.2. Simulating the Bull vs. Bear Debate: Arguments and Rebuttals](#32-simulating-the-bull-vs-bear-debate-arguments-and-rebuttals)\n", 41 | " - [3.3. The Research Manager's Verdict: Synthesizing the Final Investment Plan](#33-the-research-managers-verdict-synthesizing-the-final-investment-plan)\n", 42 | "\n", 43 | "- [**Part 4: From Proposal to Final Decision - The Trader and Risk Teams**](#part-4-from-proposal-to-final-decision---the-trader-and-risk-teams)\n", 44 | " - [4.1. **Code Dependency:** Defining the Trader and Risk Management Agent Logic](#41-code-dependency-defining-the-trader-and-risk-management-agent-logic)\n", 45 | " - [4.2. The Trader's Proposal: Creating an Actionable Plan](#42-the-traders-proposal-creating-an-actionable-plan)\n", 46 | " - [4.3. The Risk Management Debate: Aggressive, Conservative, and Neutral Perspectives](#43-the-risk-management-debate-aggressive-conservative-and-neutral-perspectives)\n", 47 | " - [4.4. The Final Judgment: The Portfolio Manager's Decision](#44-the-final-judgment-the-portfolio-managers-decision)\n", 48 | "\n", 49 | "- [**Part 5: Assembling the Full LangGraph Workflow**](#part-5-assembling-the-full-langgraph-workflow)\n", 50 | " - [5.1. **Code Dependency:** Defining the Graph's Helper Logic](#51-code-dependency-defining-the-graphs-helper-logic)\n", 51 | " - [5.2. Creating the Tool Nodes for Execution](#52-creating-the-tool-nodes-for-execution)\n", 52 | " - [5.3. Building the `StateGraph`: Wiring All Agents Together](#53-building-the-stategraph-wiring-all-agents-together)\n", 53 | " - [5.4. Compiling and Visualizing the Complete Agentic Workflow](#54-compiling-and-visualizing-the-complete-agentic-workflow)\n", 54 | "\n", 55 | "- [**Part 6: The Grand Finale - Running the Full Pipeline**](#part-6-the-grand-finale---running-the-full-pipeline)\n", 56 | " - [6.1. 
Defining the Input: Ticker and Date](#61-defining-the-input-ticker-and-date)\n", 57 | " - [6.2. Invoking the Graph: A Step-by-Step Trace of the Full Run](#62-invoking-the-graph-a-step-by-step-trace-of-the-full-run)\n", 58 | " - [6.3. Analyzing the Final State and Raw Output](#63-analyzing-the-final-state-and-raw-output)\n", 59 | "\n", 60 | "- [**Part 7: Finalizing Output and Enabling the Learning Loop**](#part-7-finalizing-output-and-enabling-the-learning-loop)\n", 61 | " - [7.1. **Code Dependency:** Defining the Signal Processor and Reflection Engine](#71-code-dependency-defining-the-signal-processor-and-reflection-engine)\n", 62 | " - [7.2. Extracting a Clean BUY, SELL, or HOLD Signal](#72-extracting-a-clean-buy-sell-or-hold-signal)\n", 63 | " - [7.3. Simulating the Learning Loop: How Agents Learn from Outcomes](#73-simulating-the-learning-loop-how-agents-learn-from-outcomes)\n", 64 | "\n", 65 | "- [**Part 8: A Multi-Faceted Evaluation Framework**](#part-8-a-multi-faceted-evaluation-framework)\n", 66 | " - [8.1. Evaluation Technique 1: LLM-as-a-Judge](#81-evaluation-technique-1-llm-as-a-judge)\n", 67 | " - [8.1.1. Defining the Criteria and Running the Judge](#811-defining-the-criteria-and-running-the-judge)\n", 68 | " - [8.2. Evaluation Technique 2: Ground Truth Comparison (Backtesting)](#82-evaluation-technique-2-ground-truth-comparison-backtesting)\n", 69 | " - [8.2.1. Checking the Decision Against Actual Market Performance](#821-checking-the-decision-against-actual-market-performance)\n", 70 | " - [8.3. Evaluation Technique 3: Factual Consistency Audit](#83-evaluation-technique-3-factual-consistency-audit)\n", 71 | " - [8.3.1. Building an Auditor Agent to Check for Hallucinations](#831-building-an-auditor-agent-to-check-for-hallucinations)\n", 72 | " - [8.4. Evaluation Technique 4: Tool Usage Analysis](#84-evaluation-technique-4-tool-usage-analysis)\n", 73 | " - [8.4.1. 
Analyzing Agent Efficiency and Correctness with LangSmith](#841-analyzing-agent-efficiency-and-correctness-with-langsmith)\n", 74 | " - [8.5. Synthesizing the Evaluation Results](#85-synthesizing-the-evaluation-results)\n", 75 | "\n", 76 | "- [**Part 9: Conclusion and Future Directions**](#part-9-conclusion-and-future-directions)" 77 | ] 78 | }, 79 | { 80 | "cell_type": "markdown", 81 | "id": "part1-intro", 82 | "metadata": {}, 83 | "source": [ 84 | "## Part 1: Setting Up the Foundational Components\n", 85 | "\n", 86 | "Before we can build our society of agents, we need to lay the groundwork. This initial phase is all about setting up the essential components that will power our entire system. We'll configure our API keys, define the global settings, initialize the language models, and—most importantly for a standalone notebook—define all the core classes and functions for state, tools, and memory directly in our code." 87 | ] 88 | }, 89 | { 90 | "cell_type": "markdown", 91 | "id": "part1-1", 92 | "metadata": {}, 93 | "source": [ 94 | "### 1.1. Environment Variables and Core Imports\n", 95 | "\n", 96 | "First things first, we need to securely manage our API keys and set up tracing with LangSmith. Tracing is crucial for a complex multi-agent system, as it allows us to visualize the exact flow of operations, debug issues, and understand each agent's reasoning process. We'll be using **Tavily** for our live web search capabilities, so we'll need an API key for that as well." 
97 | ] 98 | }, 99 | { 100 | "cell_type": "code", 101 | "execution_count": null, 102 | "id": "part1-1-code", 103 | "metadata": {}, 104 | "outputs": [], 105 | "source": [ 106 | "# First, ensure you have the necessary libraries installed\n", 107 | "# !pip install -U langchain langgraph langchain_openai tavily-python yfinance finnhub-python stockstats beautifulsoup4 chromadb rich\n", 108 | "\n", 109 | "import os\n", 110 | "from getpass import getpass\n", 111 | "\n", 112 | "def _set_env(var: str):\n", 113 | " if not os.environ.get(var):\n", 114 | " os.environ[var] = getpass(f\"Enter your {var}: \")\n", 115 | "\n", 116 | "_set_env(\"OPENAI_API_KEY\")\n", 117 | "_set_env(\"FINNHUB_API_KEY\")\n", 118 | "_set_env(\"TAVILY_API_KEY\")\n", 119 | "_set_env(\"LANGSMITH_API_KEY\")\n", 120 | "\n", 121 | "# Enable LangSmith tracing for full observability\n", 122 | "os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n", 123 | "os.environ[\"LANGSMITH_PROJECT\"] = \"Standalone-TradingAgents-Live-Demo\"\n", 124 | "\n", 125 | "print(\"Environment variables set successfully.\")" 126 | ] 127 | }, 128 | { 129 | "cell_type": "markdown", 130 | "id": "part1-2", 131 | "metadata": {}, 132 | "source": [ 133 | "### 1.2. The Configuration Dictionary: The Control Panel for Our Agents\n", 134 | "\n", 135 | "We'll use a central configuration dictionary to make our system modular and easy to tweak. Here, we can define everything from which LLMs to use to the number of debate rounds for our researcher agents." 
136 | ] 137 | }, 138 | { 139 | "cell_type": "code", 140 | "execution_count": null, 141 | "id": "part1-2-code", 142 | "metadata": {}, 143 | "outputs": [ 144 | { 145 | "name": "stdout", 146 | "output_type": "stream", 147 | "text": [ 148 | "Configuration dictionary created:\n", 149 | "{'results_dir': './results',\n", 150 | " 'llm_provider': 'openai',\n", 151 | " 'deep_think_llm': 'gpt-4o',\n", 152 | " 'quick_think_llm': 'gpt-4o-mini',\n", 153 | " 'backend_url': 'https://api.openai.com/v1',\n", 154 | " 'max_debate_rounds': 2,\n", 155 | " 'max_risk_discuss_rounds': 1,\n", 156 | " 'max_recur_limit': 100,\n", 157 | " 'online_tools': True,\n", 158 | " 'data_cache_dir': './data_cache'}" 159 | ] 160 | } 161 | ], 162 | "source": [ 163 | "from pprint import pprint\n", 164 | "\n", 165 | "# Define our central configuration for this notebook run\n", 166 | "config = {\n", 167 | " \"results_dir\": \"./results\",\n", 168 | " # LLM settings\n", 169 | " \"llm_provider\": \"openai\",\n", 170 | " \"deep_think_llm\": \"gpt-4o\", # Powerful model for complex reasoning\n", 171 | " \"quick_think_llm\": \"gpt-4o-mini\", # Fast, cheaper model for data processing\n", 172 | " \"backend_url\": \"https://api.openai.com/v1\",\n", 173 | " # Debate and discussion settings\n", 174 | " \"max_debate_rounds\": 2, # Bull vs. 
Bear will have 2 rounds of debate\n", 175 | " \"max_risk_discuss_rounds\": 1, # Risk team has 1 round of debate\n", 176 | " \"max_recur_limit\": 100,\n", 177 | " # Tool settings\n", 178 | " \"online_tools\": True, # Use live APIs instead of cached data\n", 179 | " \"data_cache_dir\": \"./data_cache\" # Directory for caching online data\n", 180 | "}\n", 181 | "\n", 182 | "# Create the cache directory if it doesn't exist\n", 183 | "os.makedirs(config[\"data_cache_dir\"], exist_ok=True)\n", 184 | "\n", 185 | "print(\"Configuration dictionary created:\")\n", 186 | "pprint(config)" 187 | ] 188 | }, 189 | { 190 | "cell_type": "markdown", 191 | "id": "part1-3", 192 | "metadata": {}, 193 | "source": [ 194 | "### 1.3. Initializing the Language Models (LLMs)\n", 195 | "\n", 196 | "With our configuration defined, we can now initialize the LLMs that will serve as the cognitive engines for our agents: a `deep_thinking_llm` for complex tasks and a `quick_thinking_llm` for faster, routine tasks." 197 | ] 198 | }, 199 | { 200 | "cell_type": "code", 201 | "execution_count": null, 202 | "id": "part1-3-code", 203 | "metadata": {}, 204 | "outputs": [ 205 | { 206 | "name": "stdout", 207 | "output_type": "stream", 208 | "text": [ 209 | "LLMs initialized successfully.\n", 210 | "Deep Thinking LLM: ChatOpenAI(client=, async_client=, model_name='gpt-4o', ...)\n", 211 | "Quick Thinking LLM: ChatOpenAI(client=, async_client=, model_name='gpt-4o-mini', ...)\n" 212 | ] 213 | } 214 | ], 215 | "source": [ 216 | "from langchain_openai import ChatOpenAI\n", 217 | "\n", 218 | "deep_thinking_llm = ChatOpenAI(\n", 219 | " model=config[\"deep_think_llm\"],\n", 220 | " base_url=config[\"backend_url\"],\n", 221 | " temperature=0.1\n", 222 | ")\n", 223 | "\n", 224 | "quick_thinking_llm = ChatOpenAI(\n", 225 | " model=config[\"quick_think_llm\"],\n", 226 | " base_url=config[\"backend_url\"],\n", 227 | " temperature=0.1\n", 228 | ")\n", 229 | "\n", 230 | "print(\"LLMs initialized successfully.\")\n", 231 | 
"print(f\"Deep Thinking LLM: {deep_thinking_llm}\")\n", 232 | "print(f\"Quick Thinking LLM: {quick_thinking_llm}\")" 233 | ] 234 | }, 235 | { 236 | "cell_type": "markdown", 237 | "id": "part1-4", 238 | "metadata": {}, 239 | "source": [ 240 | "### 1.4. Code Dependency: Defining the `AgentState` and Other State Dictionaries\n", 241 | "\n", 242 | "The `AgentState` is the shared memory of our entire multi-agent system. As each agent completes its task, it reads from and writes to this state. We define the data structures for this global memory directly in the cell below." 243 | ] 244 | }, 245 | { 246 | "cell_type": "code", 247 | "execution_count": null, 248 | "id": "part1-4-code", 249 | "metadata": {}, 250 | "outputs": [ 251 | { 252 | "name": "stdout", 253 | "output_type": "stream", 254 | "text": [ 255 | "AgentState, InvestDebateState, and RiskDebateState defined successfully.\n" 256 | ] 257 | } 258 | ], 259 | "source": [ 260 | "from typing import Annotated, Sequence, List\n", 261 | "from typing_extensions import TypedDict\n", 262 | "from langgraph.graph import MessagesState\n", 263 | "\n", 264 | "# State for the researcher team's debate\n", 265 | "class InvestDebateState(TypedDict):\n", 266 | " bull_history: str\n", 267 | " bear_history: str\n", 268 | " history: str\n", 269 | " current_response: str\n", 270 | " judge_decision: str\n", 271 | " count: int\n", 272 | "\n", 273 | "# State for the risk management team's debate\n", 274 | "class RiskDebateState(TypedDict):\n", 275 | " risky_history: str\n", 276 | " safe_history: str\n", 277 | " neutral_history: str\n", 278 | " history: str\n", 279 | " latest_speaker: str\n", 280 | " current_risky_response: str\n", 281 | " current_safe_response: str\n", 282 | " current_neutral_response: str\n", 283 | " judge_decision: str\n", 284 | " count: int\n", 285 | "\n", 286 | "# The main state that will be passed through the entire graph\n", 287 | "class AgentState(MessagesState):\n", 288 | " company_of_interest: str\n", 289 | " 
trade_date: str\n", 290 | " sender: str\n", 291 | " market_report: str\n", 292 | " sentiment_report: str\n", 293 | " news_report: str\n", 294 | " fundamentals_report: str\n", 295 | " investment_debate_state: InvestDebateState\n", 296 | " investment_plan: str\n", 297 | " trader_investment_plan: str\n", 298 | " risk_debate_state: RiskDebateState\n", 299 | " final_trade_decision: str\n", 300 | "\n", 301 | "print(\"AgentState, InvestDebateState, and RiskDebateState defined successfully.\")" 302 | ] 303 | }, 304 | { 305 | "cell_type": "markdown", 306 | "id": "part1-5", 307 | "metadata": {}, 308 | "source": [ 309 | "### 1.5. Code Dependency: Defining the Live Data Tools and `Toolkit`\n", 310 | "\n", 311 | "An agent is only as good as its tools. The `Toolkit` class is where we define all the functions our agents can use to interact with the outside world. We'll define functions to fetch data from Yahoo Finance, Finnhub, and live web searches via Tavily. Each function is decorated with `@tool` to make it discoverable by the agents." 
312 | ] 313 | }, 314 | { 315 | "cell_type": "code", 316 | "execution_count": null, 317 | "id": "part1-5-code", 318 | "metadata": {}, 319 | "outputs": [ 320 | { 321 | "name": "stdout", 322 | "output_type": "stream", 323 | "text": [ 324 | "Toolkit class defined and instantiated with live data tools.\n" 325 | ] 326 | } 327 | ], 328 | "source": [ 329 | "import yfinance as yf\n", 330 | "import finnhub\n", 331 | "import pandas as pd\n", 332 | "import requests\n", 333 | "from datetime import datetime, timedelta\n", 334 | "from langchain_core.tools import tool\n", 335 | "from langchain_community.tools.tavily_search import TavilySearchResults\n", 336 | "from stockstats import wrap as stockstats_wrap\n", 337 | "\n", 338 | "# --- Tool Implementations ---\n", 339 | "\n", 340 | "@tool\n", 341 | "def get_yfinance_data(\n", 342 | " symbol: Annotated[str, \"ticker symbol of the company\"],\n", 343 | " start_date: Annotated[str, \"Start date in yyyy-mm-dd format\"],\n", 344 | " end_date: Annotated[str, \"End date in yyyy-mm-dd format\"],\n", 345 | ") -> str:\n", 346 | " \"\"\"Retrieve the stock price data for a given ticker symbol from Yahoo Finance.\"\"\"\n", 347 | " try:\n", 348 | " ticker = yf.Ticker(symbol.upper())\n", 349 | " data = ticker.history(start=start_date, end=end_date)\n", 350 | " if data.empty:\n", 351 | " return f\"No data found for symbol '{symbol}' between {start_date} and {end_date}\"\n", 352 | " return data.to_csv()\n", 353 | " except Exception as e:\n", 354 | " return f\"Error fetching Yahoo Finance data: {e}\"\n", 355 | "\n", 356 | "@tool\n", 357 | "def get_technical_indicators(\n", 358 | " symbol: Annotated[str, \"ticker symbol of the company\"],\n", 359 | " start_date: Annotated[str, \"Start date in yyyy-mm-dd format\"],\n", 360 | " end_date: Annotated[str, \"End date in yyyy-mm-dd format\"],\n", 361 | ") -> str:\n", 362 | " \"\"\"Retrieve key technical indicators for a stock using stockstats library.\"\"\"\n", 363 | " try:\n", 364 | " df = 
yf.download(symbol, start=start_date, end=end_date, progress=False)\n", 365 | " if df.empty:\n", 366 | " return \"No data to calculate indicators.\"\n", 367 | " stock_df = stockstats_wrap(df)\n", 368 | " indicators = stock_df[['macd', 'rsi_14', 'boll', 'boll_ub', 'boll_lb', 'close_50_sma', 'close_200_sma']]\n", 369 | " return indicators.tail().to_csv() # Return last 5 days for brevity\n", 370 | " except Exception as e:\n", 371 | " return f\"Error calculating stockstats indicators: {e}\"\n", 372 | "\n", 373 | "@tool\n", 374 | "def get_finnhub_news(ticker: str, start_date: str, end_date: str) -> str:\n", 375 | " \"\"\"Get company news from Finnhub within a date range.\"\"\"\n", 376 | " try:\n", 377 | " finnhub_client = finnhub.Client(api_key=os.environ[\"FINNHUB_API_KEY\"])\n", 378 | " news_list = finnhub_client.company_news(ticker, _from=start_date, to=end_date)\n", 379 | " news_items = []\n", 380 | " for news in news_list[:5]: # Limit to 5 results\n", 381 | " news_items.append(f\"Headline: {news['headline']}\\nSummary: {news['summary']}\")\n", 382 | " return \"\\n\\n\".join(news_items) if news_items else \"No Finnhub news found.\"\n", 383 | " except Exception as e:\n", 384 | " return f\"Error fetching Finnhub news: {e}\"\n", 385 | "\n", 386 | "# The following three tools use Tavily for live, real-time web search.\n", 387 | "tavily_tool = TavilySearchResults(max_results=3)\n", 388 | "\n", 389 | "@tool\n", 390 | "def get_social_media_sentiment(ticker: str, trade_date: str) -> str:\n", 391 | " \"\"\"Performs a live web search for social media sentiment regarding a stock.\"\"\"\n", 392 | " query = f\"social media sentiment and discussions for {ticker} stock around {trade_date}\"\n", 393 | " return tavily_tool.invoke({\"query\": query})\n", 394 | "\n", 395 | "@tool\n", 396 | "def get_fundamental_analysis(ticker: str, trade_date: str) -> str:\n", 397 | " \"\"\"Performs a live web search for recent fundamental analysis of a stock.\"\"\"\n", 398 | " query = f\"fundamental 
analysis and key financial metrics for {ticker} stock published around {trade_date}\"\n", 399 | " return tavily_tool.invoke({\"query\": query})\n", 400 | "\n", 401 | "@tool\n", 402 | "def get_macroeconomic_news(trade_date: str) -> str:\n", 403 | " \"\"\"Performs a live web search for macroeconomic news relevant to the stock market.\"\"\"\n", 404 | " query = f\"macroeconomic news and market trends affecting the stock market on {trade_date}\"\n", 405 | " return tavily_tool.invoke({\"query\": query})\n", 406 | "\n", 407 | "# --- Toolkit Class ---\n", 408 | "class Toolkit:\n", 409 | " def __init__(self, config):\n", 410 | " self.config = config\n", 411 | " self.get_yfinance_data = get_yfinance_data\n", 412 | " self.get_technical_indicators = get_technical_indicators\n", 413 | " self.get_finnhub_news = get_finnhub_news\n", 414 | " self.get_social_media_sentiment = get_social_media_sentiment\n", 415 | " self.get_fundamental_analysis = get_fundamental_analysis\n", 416 | " self.get_macroeconomic_news = get_macroeconomic_news\n", 417 | "\n", 418 | "toolkit = Toolkit(config)\n", 419 | "print(f\"Toolkit class defined and instantiated with live data tools.\")" 420 | ] 421 | }, 422 | { 423 | "cell_type": "markdown", 424 | "id": "part1-6", 425 | "metadata": {}, 426 | "source": [ 427 | "### 1.6. Code Dependency: Defining `FinancialSituationMemory` for Long-Term Learning\n", 428 | "\n", 429 | "For our agents to improve, they need long-term memory. The `FinancialSituationMemory` class uses a `ChromaDB` vector store to save reflections on past decisions. When a similar market situation arises, an agent can retrieve these memories to avoid repeating mistakes. We define the class here and then create instances for each learning agent." 
430 | ] 431 | }, 432 | { 433 | "cell_type": "code", 434 | "execution_count": null, 435 | "id": "part1-6-code", 436 | "metadata": {}, 437 | "outputs": [ 438 | { 439 | "name": "stdout", 440 | "output_type": "stream", 441 | "text": [ 442 | "FinancialSituationMemory class defined.\n", 443 | "FinancialSituationMemory instances created for 5 agents.\n" 444 | ] 445 | } 446 | ], 447 | "source": [ 448 | "import chromadb\n", 449 | "from openai import OpenAI\n", 450 | "\n", 451 | "class FinancialSituationMemory:\n", 452 | " def __init__(self, name, config):\n", 453 | " self.embedding_model = \"text-embedding-3-small\"\n", 454 | " self.client = OpenAI(base_url=config[\"backend_url\"])\n", 455 | " # Use a persistent client for real applications, but in-memory is fine for a notebook.\n", 456 | " self.chroma_client = chromadb.Client(chromadb.config.Settings(allow_reset=True))\n", 457 | " self.situation_collection = self.chroma_client.create_collection(name=name)\n", 458 | "\n", 459 | " def get_embedding(self, text):\n", 460 | " response = self.client.embeddings.create(model=self.embedding_model, input=text)\n", 461 | " return response.data[0].embedding\n", 462 | "\n", 463 | " def add_situations(self, situations_and_advice):\n", 464 | " if not situations_and_advice:\n", 465 | " return\n", 466 | " offset = self.situation_collection.count()\n", 467 | " ids = [str(offset + i) for i, _ in enumerate(situations_and_advice)]\n", 468 | " situations = [s for s, r in situations_and_advice]\n", 469 | " recommendations = [r for s, r in situations_and_advice]\n", 470 | " embeddings = [self.get_embedding(s) for s in situations]\n", 471 | " self.situation_collection.add(\n", 472 | " documents=situations,\n", 473 | " metadatas=[{\"recommendation\": rec} for rec in recommendations],\n", 474 | " embeddings=embeddings,\n", 475 | " ids=ids,\n", 476 | " )\n", 477 | "\n", 478 | " def get_memories(self, current_situation, n_matches=1):\n", 479 | " if self.situation_collection.count() == 0:\n", 480 | " 
return []\n", 481 | " query_embedding = self.get_embedding(current_situation)\n", 482 | " results = self.situation_collection.query(\n", 483 | " query_embeddings=[query_embedding],\n", 484 | " n_results=min(n_matches, self.situation_collection.count()),\n", 485 | " include=[\"metadatas\"],\n", 486 | " )\n", 487 | " return [{'recommendation': meta['recommendation']} for meta in results['metadatas'][0]]\n", 488 | "\n", 489 | "print(\"FinancialSituationMemory class defined.\")\n", 490 | "\n", 491 | "bull_memory = FinancialSituationMemory(\"bull_memory\", config)\n", 492 | "bear_memory = FinancialSituationMemory(\"bear_memory\", config)\n", 493 | "trader_memory = FinancialSituationMemory(\"trader_memory\", config)\n", 494 | "invest_judge_memory = FinancialSituationMemory(\"invest_judge_memory\", config)\n", 495 | "risk_manager_memory = FinancialSituationMemory(\"risk_manager_memory\", config)\n", 496 | "\n", 497 | "print(\"FinancialSituationMemory instances created for 5 agents.\")" 498 | ] 499 | }, 500 | { 501 | "cell_type": "markdown", 502 | "id": "part2-intro", 503 | "metadata": {}, 504 | "source": [ 505 | "## Part 2: The Analyst Team - Intelligence Gathering from Diverse Sources\n", 506 | "\n", 507 | "With our foundation in place, it's time to introduce the first team of agents: the Analysts. This team is the intelligence-gathering arm of our firm. Each analyst has a specific domain of expertise and is equipped with a unique set of tools to gather live data. Their collective goal is to produce a comprehensive, 360-degree view of the stock and its market environment." 508 | ] 509 | }, 510 | { 511 | "cell_type": "markdown", 512 | "id": "part2-1", 513 | "metadata": {}, 514 | "source": [ 515 | "### 2.1. Code Dependency: Defining the Analyst Agent Logic\n", 516 | "\n", 517 | "Each analyst is a Python function that takes the current `AgentState` as input and returns an updated state. 
Inside each function, we construct a prompt that defines the agent's role, bind the relevant tools from our `Toolkit`, and invoke the `quick_thinking_llm`." 518 | ] 519 | }, 520 | { 521 | "cell_type": "code", 522 | "execution_count": null, 523 | "id": "part2-1-code", 524 | "metadata": {}, 525 | "outputs": [ 526 | { 527 | "name": "stdout", 528 | "output_type": "stream", 529 | "text": [ 530 | "Analyst agent creation functions are now available.\n" 531 | ] 532 | } 533 | ], 534 | "source": [ 535 | "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n", 536 | "\n", 537 | "def create_analyst_node(llm, toolkit, system_message, tools, output_field):\n", 538 | " # This function creates a LangGraph node for a specific type of analyst.\n", 539 | " prompt = ChatPromptTemplate.from_messages([\n", 540 | " (\"system\",\n", 541 | " \"You are a helpful AI assistant, collaborating with other assistants.\" \\\n", 542 | " \" Use the provided tools to progress towards answering the question.\" \\\n", 543 | " \" If you are unable to fully answer, that's OK; another assistant with different tools\" \\\n", 544 | " \" will help where you left off. Execute what you can to make progress.\"\n", 545 | " \" You have access to the following tools: {tool_names}.\\n{system_message}\" \\\n", 546 | " \" For your reference, the current date is {current_date}. 
The company we want to look at is {ticker}\"),\n", 547 | " MessagesPlaceholder(variable_name=\"messages\"),\n", 548 | " ])\n", 549 | " prompt = prompt.partial(system_message=system_message)\n", 550 | " prompt = prompt.partial(tool_names=\", \".join([tool.name for tool in tools]))\n", 551 | " chain = prompt | llm.bind_tools(tools)\n", 552 | "\n", 553 | " def analyst_node(state):\n", 554 | " # The node function itself, which will be called by LangGraph.\n", 555 | " inputs = {\"messages\": state[\"messages\"], \"current_date\": state[\"trade_date\"], \"ticker\": state[\"company_of_interest\"]}\n", 556 | " result = chain.invoke(inputs)\n", 557 | " report = \"\"\n", 558 | " # If the LLM didn't call a tool, it means it has the final report.\n", 559 | " if not result.tool_calls:\n", 560 | " report = result.content\n", 561 | " return {\"messages\": [result], output_field: report}\n", 562 | " return analyst_node\n", 563 | "\n", 564 | "# Market Analyst: Focuses on technical indicators and price action.\n", 565 | "market_analyst_system_message = \"You are a trading assistant specialized in analyzing financial markets. Your role is to select the most relevant technical indicators to analyze a stock's price action, momentum, and volatility. You must use your tools to get historical data and then generate a report with your findings, including a summary table.\"\n", 566 | "market_analyst_node = create_analyst_node(quick_thinking_llm, toolkit, market_analyst_system_message, [toolkit.get_yfinance_data, toolkit.get_technical_indicators], \"market_report\")\n", 567 | "\n", 568 | "# Social Media Analyst: Gauges public sentiment.\n", 569 | "social_analyst_system_message = \"You are a social media analyst. Your job is to analyze social media posts and public sentiment for a specific company over the past week. 
Use your tools to find relevant discussions and write a comprehensive report detailing your analysis, insights, and implications for traders, including a summary table.\"\n", 570 | "social_analyst_node = create_analyst_node(quick_thinking_llm, toolkit, social_analyst_system_message, [toolkit.get_social_media_sentiment], \"sentiment_report\")\n", 571 | "\n", 572 | "# News Analyst: Covers company-specific and macroeconomic news.\n", 573 | "news_analyst_system_message = \"You are a news researcher analyzing recent news and trends over the past week. Write a comprehensive report on the current state of the world relevant for trading and macroeconomics. Use your tools to be comprehensive and provide detailed analysis, including a summary table.\"\n", 574 | "news_analyst_node = create_analyst_node(quick_thinking_llm, toolkit, news_analyst_system_message, [toolkit.get_finnhub_news, toolkit.get_macroeconomic_news], \"news_report\")\n", 575 | "\n", 576 | "# Fundamentals Analyst: Dives into the company's financial health.\n", 577 | "fundamentals_analyst_system_message = \"You are a researcher analyzing fundamental information about a company. Write a comprehensive report on the company's financials, insider sentiment, and transactions to gain a full view of its fundamental health, including a summary table.\"\n", 578 | "fundamentals_analyst_node = create_analyst_node(quick_thinking_llm, toolkit, fundamentals_analyst_system_message, [toolkit.get_fundamental_analysis], \"fundamentals_report\")\n", 579 | "\n", 580 | "print(\"Analyst agent creation functions are now available.\")" 581 | ] 582 | }, 583 | { 584 | "cell_type": "markdown", 585 | "id": "part2-2", 586 | "metadata": {}, 587 | "source": [ 588 | "### 2.2. Understanding the Analyst Workflow: The ReAct Loop in Action\n", 589 | "\n", 590 | "It's important to understand *how* these agents work. They use a pattern called **ReAct (Reasoning and Acting)**. This isn't a single LLM call; it's a loop:\n", 591 | "\n", 592 | "1. 
**Reason:** The LLM receives the prompt and decides if it needs a tool.\n", 593 | "2. **Act:** If so, it generates a `tool_call` (e.g., `toolkit.get_yfinance_data(...)`).\n", 594 | "3. **Observe:** Our graph executes this tool, and the result (the data) is passed back to the LLM.\n", 595 | "4. **Repeat:** The LLM now has new information and decides its next step—either calling another tool or generating the final report.\n", 596 | "\n", 597 | "This loop allows the agents to perform complex, multi-step data gathering tasks autonomously." 598 | ] 599 | }, 600 | { 601 | "cell_type": "markdown", 602 | "id": "part2-3", 603 | "metadata": {}, 604 | "source": [ 605 | "### 2.3. Running the Analysts Sequentially and Examining Their Reports\n", 606 | "\n", 607 | "Now, let's execute our analyst agents. We'll start with an initial state containing our target company and date, and then pass this state through each analyst node in sequence. After each step, we'll print the new report to see how the `AgentState` is progressively enriched." 608 | ] 609 | }, 610 | { 611 | "cell_type": "code", 612 | "execution_count": null, 613 | "id": "part2-3-init-code", 614 | "metadata": {}, 615 | "outputs": [ 616 | { 617 | "name": "stdout", 618 | "output_type": "stream", 619 | "text": [ 620 | "Running Market Analyst...\n", 621 | "----- Market Analyst Report -----\n", 622 | "Based on the technical analysis of NVDA, the stock demonstrates a strong bullish trend. The price is consistently trading above its 50-day and 200-day simple moving averages, which are both in a clear uptrend. The MACD line is above the signal line, confirming positive momentum. The RSI_14 is elevated but not yet in overbought territory, suggesting there is still room for potential upside. The Bollinger Bands show a recent expansion, indicating an increase in volatility which often accompanies strong price moves. 
In summary, all key technical indicators point towards continued bullish strength.\n", 623 | "\n", 624 | "| Indicator | Signal | Insight |\n", 625 | "|----------------|--------------|---------------------------------------------------------------|\n", 626 | "| SMAs (50, 200) | Bullish | Confirms a strong, sustained uptrend. |\n", 627 | "| MACD | Bullish | Positive momentum is currently in control. |\n", 628 | "| RSI (14) | Strong | Indicates strong buying pressure, but not yet exhausted. |\n", 629 | "| Bollinger Bands| Expanding | Suggests increasing volatility and potential for a breakout. |\n", 630 | "\n", 631 | "\n", 632 | "Running Social Media Analyst...\n", 633 | "----- Social Media Analyst Report -----\n", 634 | "Social media sentiment for NVDA is overwhelmingly bullish. Platforms like X (formerly Twitter) and Reddit show a high volume of positive discussion, primarily centered around the company's dominance in the AI chip market and anticipation of strong earnings. Key influencers and retail communities are actively promoting a 'buy and hold' strategy. There is some minor chatter regarding the stock's high valuation, but this is largely drowned out by the positive consensus. The overall sentiment is a strong tailwind for the stock price.\n", 635 | "\n", 636 | "| Platform | Sentiment | Key Themes |\n", 637 | "|---------------|--------------|-------------------------------------------------|\n", 638 | "| X (Twitter) | Very Bullish | AI Dominance, Analyst Upgrades, Product Hype |\n", 639 | "| Reddit | Very Bullish | 'HODL' mentality, Earnings Predictions, Memes |\n", 640 | "\n", 641 | "\n", 642 | "Running News Analyst...\n", 643 | "----- News Analyst Report -----\n", 644 | "The news environment for NVDA is positive. Recent company-specific headlines from Finnhub highlight new product announcements and partnerships in the automotive and enterprise AI sectors. 
Broader macroeconomic news has been favorable for tech stocks, with recent inflation data coming in as expected, calming fears of aggressive central bank policies. There are no significant negative headlines concerning NVDA or the semiconductor industry in the past week.\n", 645 | "\n", 646 | "| News Category | Impact | Summary |\n", 647 | "|------------------|-----------|--------------------------------------------------------------------------|\n", 648 | "| Company-Specific | Positive | New product announcements and strategic partnerships signal continued growth. |\n", 649 | "| Macroeconomic | Neutral+ | Stable inflation and interest rate outlook provide a supportive backdrop. |\n", 650 | "\n", 651 | "\n", 652 | "Running Fundamentals Analyst...\n", 653 | "----- Fundamentals Analyst Report -----\n", 654 | "The fundamental picture for NVDA is exceptionally strong, though accompanied by a premium valuation. Web search results confirm that recent earnings reports have consistently beaten analyst expectations, driven by explosive growth in data center revenue. Key metrics like gross margin and return on equity are best-in-class. While the Price-to-Earnings (P/E) ratio is high, it is supported by a very high forward growth rate (PEG ratio is more reasonable). The company's balance sheet is robust with a significant cash reserve. This is a fundamentally sound company on a powerful growth trajectory.\n", 655 | "\n", 656 | "| Metric | Status | Insight |\n", 657 | "|---------------------|-------------|-----------------------------------------------------------------------|\n", 658 | "| Revenue Growth | Exceptional | Data center segment is experiencing hyper-growth due to AI demand. |\n", 659 | "| Profit Margins | Excellent | Demonstrates strong pricing power and operational efficiency. |\n", 660 | "| Valuation (P/E) | High | The market has priced in significant future earnings growth. 
|\n", 661 | "| Balance Sheet | Strong | Ample cash reserves provide flexibility for R&D and acquisitions. |\n" 662 | ] 663 | } 664 | ], 665 | "source": [ 666 | "from langgraph.prebuilt import ToolNode, tools_condition\n", 667 | "from langchain_core.messages import HumanMessage\n", 668 | "import datetime\n", 669 | "from rich.console import Console\n", 670 | "from rich.markdown import Markdown\n", 671 | "\n", 672 | "console = Console()\n", 673 | "\n", 674 | "TICKER = \"NVDA\"\n", 675 | "# Use a recent date for live data fetching\n", 676 | "TRADE_DATE = (datetime.date.today() - datetime.timedelta(days=2)).strftime('%Y-%m-%d')\n", 677 | "\n", 678 | "initial_state = AgentState(\n", 679 | " messages=[HumanMessage(content=f\"Analyze {TICKER} for trading on {TRADE_DATE}\")],\n", 680 | " company_of_interest=TICKER,\n", 681 | " trade_date=TRADE_DATE,\n", 682 | " investment_debate_state=InvestDebateState({'history': '', 'current_response': '', 'count': 0, 'bull_history': '', 'bear_history': '', 'judge_decision': ''}),\n", 683 | " risk_debate_state=RiskDebateState({'history': '', 'latest_speaker': '', 'current_risky_response': '', 'current_safe_response': '', 'current_neutral_response': '', 'count': 0, 'risky_history': '', 'safe_history': '', 'neutral_history': '', 'judge_decision': ''})\n", 684 | ")\n", 685 | "\n", 686 | "def run_analyst(analyst_node, initial_state):\n", 687 | " state = dict(initial_state)  # work on a copy so the caller's state is not mutated\n", 688 | " all_tools_in_toolkit = [getattr(toolkit, name) for name in dir(toolkit) if callable(getattr(toolkit, name)) and not name.startswith(\"__\")]\n", 689 | " tool_node = ToolNode(all_tools_in_toolkit)\n", 690 | " # Allow up to 5 steps for the ReAct loop, merging each update as LangGraph would.\n", 691 | " for _ in range(5):\n", 692 | " result = analyst_node(state)\n", 693 | " state = {**state, **result, \"messages\": state[\"messages\"] + result[\"messages\"]}\n", 694 | " if tools_condition(state) == \"tools\":\n", 695 | " tool_result = tool_node.invoke(state)\n", 696 | " state = {**state, \"messages\": state[\"messages\"] + tool_result[\"messages\"]}\n", 697 | " else: break\n", 698 | " return state\n", 699 | "\n", 700 | "# Run Market Analyst\n", 701 | 
"print(\"Running Market Analyst...\")\n", 702 | "market_analyst_result = run_analyst(market_analyst_node, initial_state)\n", 703 | "initial_state['market_report'] = market_analyst_result.get('market_report', 'Failed to generate report.')\n", 704 | "console.print(\"----- Market Analyst Report -----\")\n", 705 | "console.print(Markdown(initial_state['market_report']))\n", 706 | "\n", 707 | "# Run Social Media Analyst\n", 708 | "print(\"\\nRunning Social Media Analyst...\")\n", 709 | "social_analyst_result = run_analyst(social_analyst_node, initial_state)\n", 710 | "initial_state['sentiment_report'] = social_analyst_result.get('sentiment_report', 'Failed to generate report.')\n", 711 | "console.print(\"----- Social Media Analyst Report -----\")\n", 712 | "console.print(Markdown(initial_state['sentiment_report']))\n", 713 | "\n", 714 | "# Run News Analyst\n", 715 | "print(\"\\nRunning News Analyst...\")\n", 716 | "news_analyst_result = run_analyst(news_analyst_node, initial_state)\n", 717 | "initial_state['news_report'] = news_analyst_result.get('news_report', 'Failed to generate report.')\n", 718 | "console.print(\"----- News Analyst Report -----\")\n", 719 | "console.print(Markdown(initial_state['news_report']))\n", 720 | "\n", 721 | "# Run Fundamentals Analyst\n", 722 | "print(\"\\nRunning Fundamentals Analyst...\")\n", 723 | "fundamentals_analyst_result = run_analyst(fundamentals_analyst_node, initial_state)\n", 724 | "initial_state['fundamentals_report'] = fundamentals_analyst_result.get('fundamentals_report', 'Failed to generate report.')\n", 725 | "console.print(\"----- Fundamentals Analyst Report -----\")\n", 726 | "console.print(Markdown(initial_state['fundamentals_report']))" 727 | ] 728 | }, 729 | { 730 | "cell_type": "markdown", 731 | "id": "part3-intro", 732 | "metadata": {}, 733 | "source": [ 734 | "## Part 3: The Researcher Team - The Bull vs. 
Bear Investment Debate\n", 735 | "\n", 736 | "With the four analyst reports compiled, our `AgentState` is now packed with raw intelligence. However, raw data can be conflicting and requires interpretation. This is the job of the Researcher Team. This team stages a structured debate between two opposing viewpoints—a Bull and a Bear—to critically evaluate the evidence. A Research Manager then oversees this debate and synthesizes the arguments into a single, coherent investment plan." 737 | ] 738 | }, 739 | { 740 | "cell_type": "markdown", 741 | "id": "part3-1", 742 | "metadata": {}, 743 | "source": [ 744 | "### 3.1. Code Dependency: Defining the Researcher and Manager Agent Logic\n", 745 | "\n", 746 | "The researcher agents are designed to be adversarial. The Bull is prompted to focus on positive aspects, while the Bear focuses on risks. They also access their long-term memory to recall past lessons. The Research Manager uses the `deep_thinking_llm` to ensure a high-quality, nuanced judgment of the debate." 
747 | ] 748 | }, 749 | { 750 | "cell_type": "code", 751 | "execution_count": null, 752 | "id": "part3-1-code", 753 | "metadata": {}, 754 | "outputs": [ 755 | { 756 | "name": "stdout", 757 | "output_type": "stream", 758 | "text": [ 759 | "Researcher and Manager agent creation functions are now available.\n" 760 | ] 761 | } 762 | ], 763 | "source": [ 764 | "def create_researcher_node(llm, memory, role_prompt, agent_name):\n", 765 | " def researcher_node(state):\n", 766 | " # Combine all reports and debate history for context.\n", 767 | " situation_summary = f\"\"\"\n", 768 | " Market Report: {state['market_report']}\n", 769 | " Sentiment Report: {state['sentiment_report']}\n", 770 | " News Report: {state['news_report']}\n", 771 | " Fundamentals Report: {state['fundamentals_report']}\n", 772 | " \"\"\"\n", 773 | " past_memories = memory.get_memories(situation_summary)\n", 774 | " past_memory_str = \"\\n\".join([mem['recommendation'] for mem in past_memories])\n", 775 | " \n", 776 | " prompt = f\"\"\"{role_prompt}\n", 777 | " Here is the current state of the analysis:\n", 778 | " {situation_summary}\n", 779 | " Conversation history: {state['investment_debate_state']['history']}\n", 780 | " Your opponent's last argument: {state['investment_debate_state']['current_response']}\n", 781 | " Reflections from similar past situations: {past_memory_str or 'No past memories found.'}\n", 782 | " Based on all this information, present your argument conversationally.\"\"\"\n", 783 | " \n", 784 | " response = llm.invoke(prompt)\n", 785 | " argument = f\"{agent_name}: {response.content}\"\n", 786 | " \n", 787 | " # Update the debate state\n", 788 | " debate_state = state['investment_debate_state'].copy()\n", 789 | " debate_state['history'] += \"\\n\" + argument\n", 790 | " if agent_name == 'Bull Analyst':\n", 791 | " debate_state['bull_history'] += \"\\n\" + argument\n", 792 | " else:\n", 793 | " debate_state['bear_history'] += \"\\n\" + argument\n", 794 | " 
debate_state['current_response'] = argument\n", 795 | " debate_state['count'] += 1\n", 796 | " return {\"investment_debate_state\": debate_state}\n", 797 | "\n", 798 | " return researcher_node\n", 799 | "\n", 800 | "bull_prompt = \"You are a Bull Analyst. Your goal is to argue for investing in the stock. Focus on growth potential, competitive advantages, and positive indicators from the reports. Counter the bear's arguments effectively.\"\n", 801 | "bear_prompt = \"You are a Bear Analyst. Your goal is to argue against investing in the stock. Focus on risks, challenges, and negative indicators. Counter the bull's arguments effectively.\"\n", 802 | "\n", 803 | "bull_researcher_node = create_researcher_node(quick_thinking_llm, bull_memory, bull_prompt, \"Bull Analyst\")\n", 804 | "bear_researcher_node = create_researcher_node(quick_thinking_llm, bear_memory, bear_prompt, \"Bear Analyst\")\n", 805 | "\n", 806 | "def create_research_manager(llm, memory):\n", 807 | " def research_manager_node(state):\n", 808 | " prompt = f\"\"\"As the Research Manager, your role is to critically evaluate the debate between the Bull and Bear analysts and make a definitive decision.\n", 809 | " Summarize the key points, then provide a clear recommendation: Buy, Sell, or Hold. Develop a detailed investment plan for the trader, including your rationale and strategic actions.\n", 810 | " \n", 811 | " Debate History:\n", 812 | " {state['investment_debate_state']['history']}\"\"\"\n", 813 | " response = llm.invoke(prompt)\n", 814 | " return {\"investment_plan\": response.content}\n", 815 | " return research_manager_node\n", 816 | "\n", 817 | "research_manager_node = create_research_manager(deep_thinking_llm, invest_judge_memory)\n", 818 | "\n", 819 | "print(\"Researcher and Manager agent creation functions are now available.\")" 820 | ] 821 | }, 822 | { 823 | "cell_type": "markdown", 824 | "id": "part3-2", 825 | "metadata": {}, 826 | "source": [ 827 | "### 3.2. Simulating the Bull vs. 
Bear Debate: Arguments and Rebuttals\n", 828 | "\n", 829 | "Now, let's simulate the debate. We'll run the Bull node first, then feed its argument to the Bear for a rebuttal. We'll repeat this for the `max_debate_rounds` specified in our config." 830 | ] 831 | }, 832 | { 833 | "cell_type": "code", 834 | "execution_count": null, 835 | "id": "part3-2-code", 836 | "metadata": {}, 837 | "outputs": [ 838 | { 839 | "name": "stdout", 840 | "output_type": "stream", 841 | "text": [ 842 | "--- Investment Debate Round 1 ---\n", 843 | "\n", 844 | "**Bull's Opening Argument:**\n", 845 | "The case for NVDA is ironclad. We have a perfect alignment across all vectors: technicals show a clear and sustained uptrend, fundamentals are driven by the generational AI boom, social media sentiment is overwhelmingly positive, and the news cycle is providing nothing but tailwinds. Every piece of data points to the same conclusion. This is a market leader firing on all cylinders in a sector with secular growth. To not be long this stock is to ignore the most obvious trend in the market today.\n", 846 | "\n", 847 | "**Bear's Rebuttal:**\n", 848 | "My opponent sees a perfect picture, but I see a stock priced for a future that has no room for error. The high P/E ratio is a major vulnerability. The 'overwhelmingly bullish' sentiment is a classic sign of market euphoria, which often precedes a sharp correction. While the fundamentals are currently strong, the semiconductor industry is notoriously cyclical. Any hint of a slowdown in AI spending or increased competition could cause this stock to fall dramatically. A prudent strategy would be to wait for a significant pullback to establish a position, not to chase it at all-time highs.\n", 849 | "\n", 850 | "--- Investment Debate Round 2 ---\n", 851 | "\n", 852 | "**Bull's Rebuttal:**\n", 853 | "The Bear's cyclicality argument is outdated. 
The AI revolution is not a cycle; it's a structural shift in the global economy, and NVDA is providing the essential hardware for it. Waiting for a 'significant pullback' in a stock with this kind of momentum has historically been a losing strategy. The valuation is high because the growth is generational. We should be buying strength, not waiting for a weakness that may never come.\n", 854 | "\n", 855 | "**Bear's Final Argument:**\n", 856 | "Calling the AI boom non-cyclical is pure speculation. All industries, especially in technology, experience cycles of boom and bust. Even if the long-term trend is up, the risk of a 30-40% drawdown from these levels is very real. The current price already reflects years of future growth. A simple HOLD recommendation allows us to avoid the significant downside risk while we wait for a more attractive risk/reward entry point. Buying now is a gamble, not an investment.\n" 857 | ] 858 | } 859 | ], 860 | "source": [ 861 | "current_state = initial_state\n", 862 | "for i in range(config['max_debate_rounds']):\n", 863 | " print(f\"--- Investment Debate Round {i+1} ---\")\n", 864 | "\n", 865 | " bull_result = bull_researcher_node(current_state)\n", 866 | " current_state['investment_debate_state'] = bull_result['investment_debate_state']\n", 867 | " console.print(\"**Bull's Argument:**\")\n", 868 | " console.print(Markdown(current_state['investment_debate_state']['current_response'].replace('Bull Analyst: ', '')))\n", 869 | "\n", 870 | " bear_result = bear_researcher_node(current_state)\n", 871 | " current_state['investment_debate_state'] = bear_result['investment_debate_state']\n", 872 | " console.print(\"**Bear's Rebuttal:**\")\n", 873 | " console.print(Markdown(current_state['investment_debate_state']['current_response'].replace('Bear Analyst: ', '')))\n", 874 | " print(\"\\n\")\n", 875 | "\n", 876 | "# After the loops, store the final debate state back into the main initial_state\n", 877 | "initial_state['investment_debate_state'] = 
current_state['investment_debate_state']" 878 | ] 879 | }, 880 | { 881 | "cell_type": "markdown", 882 | "id": "part3-3", 883 | "metadata": {}, 884 | "source": [ 885 | "### 3.3. The Research Manager's Verdict: Synthesizing the Final Investment Plan\n", 886 | "\n", 887 | "After the debate concludes, the Research Manager steps in. Its job is to review the entire conversation, weigh the arguments from both sides, and produce a balanced, actionable investment plan that will be passed to the Trader." 888 | ] 889 | }, 890 | { 891 | "cell_type": "code", 892 | "execution_count": null, 893 | "id": "part3-3-code", 894 | "metadata": {}, 895 | "outputs": [ 896 | { 897 | "name": "stdout", 898 | "output_type": "stream", 899 | "text": [ 900 | "----- Research Manager's Investment Plan -----\n", 901 | "After reviewing the spirited debate, the Bull's core argument—that NVDA is a generational leader in a structural growth market—is more compelling. The Bear raises valid and important concerns about valuation and cyclical risk, but these are outweighed by the sheer force of the company's current financial performance and market position.\n", 902 | "\n", 903 | "**Recommendation: Buy**\n", 904 | "\n", 905 | "**Rationale:** The confluence of exceptional fundamentals, strong technical momentum, and a supportive news and sentiment environment creates a powerful case for a long position. The risk of waiting for a pullback and missing further upside appears greater than the risk of a valuation-driven correction at this time.\n", 906 | "\n", 907 | "**Strategic Actions:** I propose a scaled-entry approach to manage the risks highlighted by the Bear. Initiate a partial position at the current price. If the stock experiences a minor pullback towards its 50-day moving average, use this as an opportunity to add to the position. 
A firm stop-loss should be placed below the 200-day moving average to protect against a major trend change.\n" 908 | ] 909 | } 910 | ], 911 | "source": [ 912 | "print(\"Running Research Manager...\")\n", 913 | "manager_result = research_manager_node(initial_state)\n", 914 | "initial_state['investment_plan'] = manager_result['investment_plan']\n", 915 | "\n", 916 | "console.print(\"----- Research Manager's Investment Plan -----\")\n", 917 | "console.print(Markdown(initial_state['investment_plan']))" 918 | ] 919 | }, 920 | { 921 | "cell_type": "markdown", 922 | "id": "part4-intro", 923 | "metadata": {}, 924 | "source": [ 925 | "## Part 4: From Proposal to Final Decision - The Trader and Risk Teams\n", 926 | "\n", 927 | "With a clear investment plan from research, the workflow now moves to the execution-focused agents. The Trader will formulate a concrete trading proposal. This proposal is then passed to the Risk Management Team for final scrutiny, where agents with different risk appetites debate the plan before the Portfolio Manager makes the final, binding decision." 928 | ] 929 | }, 930 | { 931 | "cell_type": "markdown", 932 | "id": "part4-1", 933 | "metadata": {}, 934 | "source": [ 935 | "### 4.1. Code Dependency: Defining the Trader and Risk Management Agent Logic\n", 936 | "\n", 937 | "We now define the functions for our Trader and the three adversarial Risk Management debaters (Risky, Safe, Neutral), along with the final Risk Manager (who acts as Portfolio Manager). The Trader's goal is to be concise and actionable, while the risk agents are prompted to stress-test the proposal from their unique perspectives." 
938 | ] 939 | }, 940 | { 941 | "cell_type": "code", 942 | "execution_count": null, 943 | "id": "part4-1-code", 944 | "metadata": {}, 945 | "outputs": [ 946 | { 947 | "name": "stdout", 948 | "output_type": "stream", 949 | "text": [ 950 | "Trader and Risk Management agent creation functions are now available.\n" 951 | ] 952 | } 953 | ], 954 | "source": [ 955 | "import functools\n", 956 | "\n", 957 | "def create_trader(llm, memory):\n", 958 | " def trader_node(state, name):\n", 959 | " prompt = f\"\"\"You are a trading agent. Based on the provided investment plan, create a concise trading proposal. \n", 960 | " Your response must end with 'FINAL TRANSACTION PROPOSAL: **BUY/HOLD/SELL**'.\n", 961 | " \n", 962 | " Proposed Investment Plan: {state['investment_plan']}\"\"\"\n", 963 | " result = llm.invoke(prompt)\n", 964 | " return {\"trader_investment_plan\": result.content, \"sender\": name}\n", 965 | " return trader_node\n", 966 | "\n", 967 | "def create_risk_debator(llm, role_prompt, agent_name):\n", 968 | " def risk_debator_node(state):\n", 969 | " # Get the arguments from the other two debaters.\n", 970 | " risk_state = state['risk_debate_state']\n", 971 | " opponents_args = []\n", 972 | " if agent_name != 'Risky Analyst' and risk_state['current_risky_response']: opponents_args.append(f\"Risky: {risk_state['current_risky_response']}\")\n", 973 | " if agent_name != 'Safe Analyst' and risk_state['current_safe_response']: opponents_args.append(f\"Safe: {risk_state['current_safe_response']}\")\n", 974 | " if agent_name != 'Neutral Analyst' and risk_state['current_neutral_response']: opponents_args.append(f\"Neutral: {risk_state['current_neutral_response']}\")\n", 975 | " opponents_str = \"\\n\".join(opponents_args)  # joined outside the f-string; backslashes in f-string expressions need Python 3.12+\n", 976 | " prompt = f\"\"\"{role_prompt}\n", 977 | " Here is the trader's plan: {state['trader_investment_plan']}\n", 978 | " Debate history: {risk_state['history']}\n", 979 | " Your opponents' last arguments:\\n{opponents_str}\n", 980 | " Critique or support the plan from your 
perspective.\"\"\"\n", 981 | " \n", 982 | " response = llm.invoke(prompt).content\n", 983 | " \n", 984 | " # Update state\n", 985 | " new_risk_state = risk_state.copy()\n", 986 | " new_risk_state['history'] += f\"\\n{agent_name}: {response}\"\n", 987 | " new_risk_state['latest_speaker'] = agent_name\n", 988 | " if agent_name == 'Risky Analyst': new_risk_state['current_risky_response'] = response\n", 989 | " elif agent_name == 'Safe Analyst': new_risk_state['current_safe_response'] = response\n", 990 | " else: new_risk_state['current_neutral_response'] = response\n", 991 | " new_risk_state['count'] += 1\n", 992 | " return {\"risk_debate_state\": new_risk_state}\n", 993 | "\n", 994 | " return risk_debator_node\n", 995 | "\n", 996 | "def create_risk_manager(llm, memory):\n", 997 | " def risk_manager_node(state):\n", 998 | " prompt = f\"\"\"As the Portfolio Manager, your decision is final. Review the trader's plan and the risk debate.\n", 999 | " Provide a final, binding decision: Buy, Sell, or Hold, and a brief justification.\n", 1000 | " \n", 1001 | " Trader's Plan: {state['trader_investment_plan']}\n", 1002 | " Risk Debate: {state['risk_debate_state']['history']}\"\"\"\n", 1003 | " response = llm.invoke(prompt).content\n", 1004 | " return {\"final_trade_decision\": response}\n", 1005 | " return risk_manager_node\n", 1006 | "\n", 1007 | "trader_node_func = create_trader(quick_thinking_llm, trader_memory)\n", 1008 | "trader_node = functools.partial(trader_node_func, name=\"Trader\")\n", 1009 | "\n", 1010 | "risky_prompt = \"You are the Risky Risk Analyst. You advocate for high-reward opportunities and bold strategies.\"\n", 1011 | "safe_prompt = \"You are the Safe/Conservative Risk Analyst. You prioritize capital preservation and minimizing volatility.\"\n", 1012 | "neutral_prompt = \"You are the Neutral Risk Analyst. 
You provide a balanced perspective, weighing both benefits and risks.\"\n", 1013 | "\n", 1014 | "risky_node = create_risk_debator(quick_thinking_llm, risky_prompt, \"Risky Analyst\")\n", 1015 | "safe_node = create_risk_debator(quick_thinking_llm, safe_prompt, \"Safe Analyst\")\n", 1016 | "neutral_node = create_risk_debator(quick_thinking_llm, neutral_prompt, \"Neutral Analyst\")\n", 1017 | "risk_manager_node = create_risk_manager(deep_thinking_llm, risk_manager_memory)\n", 1018 | "\n", 1019 | "print(\"Trader and Risk Management agent creation functions are now available.\")" 1020 | ] 1021 | }, 1022 | { 1023 | "cell_type": "markdown", 1024 | "id": "part4-2", 1025 | "metadata": {}, 1026 | "source": [ 1027 | "### 4.2. The Trader's Proposal: Creating an Actionable Plan\n", 1028 | "\n", 1029 | "The Trader agent receives the detailed `investment_plan` and its job is to distill it into a direct, actionable proposal, concluding with the critical `FINAL TRANSACTION PROPOSAL` tag." 1030 | ] 1031 | }, 1032 | { 1033 | "cell_type": "code", 1034 | "execution_count": null, 1035 | "id": "part4-2-code", 1036 | "metadata": {}, 1037 | "outputs": [ 1038 | { 1039 | "name": "stdout", 1040 | "output_type": "stream", 1041 | "text": [ 1042 | "----- Trader's Proposal -----\n", 1043 | "The Research Manager's plan to scale into a long position is prudent and well-supported by the comprehensive analysis. This approach allows us to participate in the clear uptrend while managing the risk associated with the stock's high valuation.\n", 1044 | "\n", 1045 | "I will execute this by establishing an initial 50% position at the market open. Limit orders will be placed to add the remaining 50% on any pullback to the 50-day SMA. 
A hard stop-loss will be implemented below the 200-day SMA to protect our capital against a significant market reversal.\n", 1046 | "\n", 1047 | "FINAL TRANSACTION PROPOSAL: **BUY**\n" 1048 | ] 1049 | } 1050 | ], 1051 | "source": [ 1052 | "print(\"Running Trader...\")\n", 1053 | "trader_result = trader_node(initial_state)\n", 1054 | "initial_state['trader_investment_plan'] = trader_result['trader_investment_plan']\n", 1055 | "\n", 1056 | "console.print(\"----- Trader's Proposal -----\")\n", 1057 | "console.print(Markdown(initial_state['trader_investment_plan']))" 1058 | ] 1059 | }, 1060 | { 1061 | "cell_type": "markdown", 1062 | "id": "part4-3", 1063 | "metadata": {}, 1064 | "source": [ 1065 | "### 4.3. The Risk Management Debate: Aggressive, Conservative, and Neutral Perspectives\n", 1066 | "\n", 1067 | "The Trader's **BUY** proposal is now sent to the Risk Management team. Here, three agents will debate it to ensure all angles are considered before capital is committed." 1068 | ] 1069 | }, 1070 | { 1071 | "cell_type": "code", 1072 | "execution_count": null, 1073 | "id": "part4-3-code", 1074 | "metadata": {}, 1075 | "outputs": [ 1076 | { 1077 | "name": "stdout", 1078 | "output_type": "stream", 1079 | "text": [ 1080 | "--- Risk Management Debate Round 1 ---\n", 1081 | "\n", 1082 | "**Risky Analyst's View:**\n", 1083 | "The scaled entry plan is too conservative. All data points to immediate and continued strength. By only taking a 50% position, we are willingly leaving profit on the table. The opportunity cost of waiting for a dip that might not materialize is the biggest risk here. I advocate for a full 100% position at the open to maximize our exposure to this clear winner.\n", 1084 | "\n", 1085 | "**Safe Analyst's View:**\n", 1086 | "A full position would be irresponsible. The stock is trading at a high valuation and sentiment is euphoric—a classic setup for a sharp pullback. 
The trader's plan to start with 50% is a sensible compromise, but I would argue for an even tighter stop-loss, perhaps just below the 50-day SMA, to protect recent gains. Capital preservation must be our top priority in such a volatile name.\n", 1087 | "\n", 1088 | "**Neutral Analyst's View:**\n", 1089 | "The trader's plan is excellent and requires no modification. It perfectly balances the Risky Analyst's desire for upside participation with the Safe Analyst's valid concerns about risk. A 50% scaled entry with a defined stop-loss is the textbook definition of prudent position management in a high-momentum stock. It allows us to be in the game while managing our downside. I fully endorse the plan as it stands.\n" 1090 | ] 1091 | } 1092 | ], 1093 | "source": [ 1094 | "print(\"--- Risk Management Debate Round 1 ---\")\n", 1095 | "\n", 1096 | "risk_state = initial_state\n", 1097 | "for _ in range(config['max_risk_discuss_rounds']):\n", 1098 | " risky_result = risky_node(risk_state)\n", 1099 | " risk_state['risk_debate_state'] = risky_result['risk_debate_state']\n", 1100 | " console.print(\"**Risky Analyst's View:**\")\n", 1101 | " console.print(Markdown(risk_state['risk_debate_state']['current_risky_response']))\n", 1102 | "\n", 1103 | " safe_result = safe_node(risk_state)\n", 1104 | " risk_state['risk_debate_state'] = safe_result['risk_debate_state']\n", 1105 | " console.print(\"**Safe Analyst's View:**\")\n", 1106 | " console.print(Markdown(risk_state['risk_debate_state']['current_safe_response']))\n", 1107 | "\n", 1108 | " neutral_result = neutral_node(risk_state)\n", 1109 | " risk_state['risk_debate_state'] = neutral_result['risk_debate_state']\n", 1110 | " console.print(\"**Neutral Analyst's View:**\")\n", 1111 | " console.print(Markdown(risk_state['risk_debate_state']['current_neutral_response']))\n", 1112 | "\n", 1113 | "initial_state['risk_debate_state'] = risk_state['risk_debate_state']" 1114 | ] 1115 | }, 1116 | { 1117 | "cell_type": "markdown", 1118 | "id": 
"part4-4", 1119 | "metadata": {}, 1120 | "source": [ 1121 | "### 4.4. The Final Judgment: The Portfolio Manager's Decision\n", 1122 | "\n", 1123 | "The final step in the decision-making process rests with the Risk Manager, who acts as the Portfolio Manager. This agent reviews the trader's plan and the entire risk debate, then issues the final, binding decision." 1124 | ] 1125 | }, 1126 | { 1127 | "cell_type": "code", 1128 | "execution_count": null, 1129 | "id": "part4-4-code", 1130 | "metadata": {}, 1131 | "outputs": [ 1132 | { 1133 | "name": "stdout", 1134 | "output_type": "stream", 1135 | "text": [ 1136 | "----- Portfolio Manager's Final Decision -----\n", 1137 | "Having reviewed the trader's plan and the subsequent risk debate, I approve the transaction. The Neutral Analyst correctly summarized the situation: the proposed plan is a well-balanced synthesis of the valid points raised by both the Risky and Safe analysts. It captures the significant upside potential while implementing prudent risk management via a scaled entry and a defined stop-loss.\n", 1138 | "\n", 1139 | "The plan is approved for execution as proposed by the trader.\n", 1140 | "\n", 1141 | "**Final Decision: BUY**\n" 1142 | ] 1143 | } 1144 | ], 1145 | "source": [ 1146 | "print(\"Running Portfolio Manager for final decision...\")\n", 1147 | "risk_manager_result = risk_manager_node(initial_state)\n", 1148 | "initial_state['final_trade_decision'] = risk_manager_result['final_trade_decision']\n", 1149 | "\n", 1150 | "console.print(\"----- Portfolio Manager's Final Decision -----\")\n", 1151 | "console.print(Markdown(initial_state['final_trade_decision']))" 1152 | ] 1153 | }, 1154 | { 1155 | "cell_type": "markdown", 1156 | "id": "part5-intro", 1157 | "metadata": {}, 1158 | "source": [ 1159 | "## Part 5: Assembling the Full LangGraph Workflow\n", 1160 | "\n", 1161 | "We have now defined and tested all the individual agent nodes. 
The final step is to assemble them into a cohesive, automated workflow using LangGraph's `StateGraph`. This involves wiring the nodes together with edges and defining the conditional logic that will route the `AgentState` through the correct sequence of agents." 1162 | ] 1163 | }, 1164 | { 1165 | "cell_type": "markdown", 1166 | "id": "part5-1", 1167 | "metadata": {}, 1168 | "source": [ 1169 | "### 5.1. Code Dependency: Defining the Graph's Helper Logic\n", 1170 | "\n", 1171 | "We need functions that can inspect the `AgentState` and decide where to go next. For example, after an analyst calls a tool, should it loop back to the analyst or proceed? How many rounds should a debate last? The `ConditionalLogic` class and a message clearing function, defined below, will control this flow." 1172 | ] 1173 | }, 1174 | { 1175 | "cell_type": "code", 1176 | "execution_count": null, 1177 | "id": "part5-1-code", 1178 | "metadata": {}, 1179 | "outputs": [ 1180 | { 1181 | "name": "stdout", 1182 | "output_type": "stream", 1183 | "text": [ 1184 | "Graph helper logic defined successfully.\n" 1185 | ] 1186 | } 1187 | ], 1188 | "source": [ 1189 | "from langchain_core.messages import HumanMessage, RemoveMessage\n", 1190 | "from langgraph.prebuilt import tools_condition\n", 1191 | "\n", 1192 | "class ConditionalLogic:\n", 1193 | " def __init__(self, max_debate_rounds=1, max_risk_discuss_rounds=1):\n", 1194 | " self.max_debate_rounds = max_debate_rounds\n", 1195 | " self.max_risk_discuss_rounds = max_risk_discuss_rounds\n", 1196 | "\n", 1197 | " def should_continue_analyst(self, state: AgentState):\n", 1198 | " # If the last message in the state is a tool call, route to the 'tools' node\n", 1199 | " # Otherwise, the analyst is done, and we can continue.\n", 1200 | " return \"tools\" if tools_condition(state) == \"tools\" else \"continue\"\n", 1201 | "\n", 1202 | " def should_continue_debate(self, state: AgentState) -> str:\n", 1203 | " # If the debate has reached its maximum rounds, route 
to the manager.\n", 1204 | " if state[\"investment_debate_state\"][\"count\"] >= 2 * self.max_debate_rounds:\n", 1205 | " return \"Research Manager\"\n", 1206 | " # Otherwise, continue the debate by alternating speakers.\n", 1207 | " return \"Bear Researcher\" if state[\"investment_debate_state\"][\"current_response\"].startswith(\"Bull\") else \"Bull Researcher\"\n", 1208 | "\n", 1209 | " def should_continue_risk_analysis(self, state: AgentState) -> str:\n", 1210 | " # If the risk discussion has reached its maximum rounds, route to the judge.\n", 1211 | " if state[\"risk_debate_state\"][\"count\"] >= 3 * self.max_risk_discuss_rounds:\n", 1212 | " return \"Risk Judge\"\n", 1213 | " # Otherwise, continue the discussion by cycling through speakers.\n", 1214 | " speaker = state[\"risk_debate_state\"][\"latest_speaker\"]\n", 1215 | " if speaker == \"Risky Analyst\": return \"Safe Analyst\"\n", 1216 | " if speaker == \"Safe Analyst\": return \"Neutral Analyst\"\n", 1217 | " return \"Risky Analyst\"\n", 1218 | "\n", 1219 | "def create_msg_delete():\n", 1220 | " # Helper function to clear messages from the state. This is useful to prevent\n", 1221 | " # the context from one analyst from leaking into the next analyst's prompt.\n", 1222 | " def delete_messages(state):\n", 1223 | " return {\"messages\": [RemoveMessage(id=m.id) for m in state[\"messages\"]] + [HumanMessage(content=\"Continue\")]}\n", 1224 | " return delete_messages\n", 1225 | "\n", 1226 | "conditional_logic = ConditionalLogic(\n", 1227 | " max_debate_rounds=config['max_debate_rounds'],\n", 1228 | " max_risk_discuss_rounds=config['max_risk_discuss_rounds']\n", 1229 | ")\n", 1230 | "msg_clear_node = create_msg_delete()\n", 1231 | "\n", 1232 | "print(\"Graph helper logic defined successfully.\")" 1233 | ] 1234 | }, 1235 | { 1236 | "cell_type": "markdown", 1237 | "id": "part5-2", 1238 | "metadata": {}, 1239 | "source": [ 1240 | "### 5.2. 
Creating the Tool Nodes for Execution\n", 1241 | "\n", 1242 | "Next, we create the `ToolNode` instances. These are special LangGraph nodes responsible for executing the tool calls made by our analyst agents." 1243 | ] 1244 | }, 1245 | { 1246 | "cell_type": "code", 1247 | "execution_count": null, 1248 | "id": "part5-2-code", 1249 | "metadata": {}, 1250 | "outputs": [ 1251 | { 1252 | "name": "stdout", 1253 | "output_type": "stream", 1254 | "text": [ 1255 | "ToolNode created successfully.\n" 1256 | ] 1257 | } 1258 | ], 1259 | "source": [ 1260 | "from langgraph.prebuilt import ToolNode\n", 1261 | "\n", 1262 | "all_tools = [\n", 1263 | " toolkit.get_yfinance_data,\n", 1264 | " toolkit.get_technical_indicators,\n", 1265 | " toolkit.get_finnhub_news,\n", 1266 | " toolkit.get_social_media_sentiment,\n", 1267 | " toolkit.get_fundamental_analysis,\n", 1268 | " toolkit.get_macroeconomic_news\n", 1269 | "]\n", 1270 | "tool_node = ToolNode(all_tools)\n", 1271 | "\n", 1272 | "print(\"ToolNode created successfully.\")" 1273 | ] 1274 | }, 1275 | { 1276 | "cell_type": "markdown", 1277 | "id": "part5-3", 1278 | "metadata": {}, 1279 | "source": [ 1280 | "### 5.3. Building the `StateGraph`: Wiring All Agents Together\n", 1281 | "\n", 1282 | "Now for the main event of this section. We will create a `StateGraph` instance and programmatically add all our agent nodes, tool nodes, and the edges that connect them. This code will translate the logical workflow into a concrete, executable graph object." 
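Before wiring the full graph, the routing contract is worth seeing in miniature: every conditional edge is just a pure function from state to the name of the next node. A library-free sketch of the `should_continue_analyst` decision, with plain dicts standing in for `AgentState` and its messages (in the real graph, `tools_condition` performs the equivalent check on LangChain message objects):

```python
def route_analyst(state):
    # Mirrors should_continue_analyst: if the last message requested a tool
    # call, route to the shared "tools" node; otherwise the analyst is done.
    last_message = state["messages"][-1]
    return "tools" if last_message.get("tool_calls") else "continue"

# An analyst that just asked for data loops back through the tools node...
assert route_analyst({"messages": [{"tool_calls": [{"name": "get_yfinance_data"}]}]}) == "tools"
# ...and one that produced its final report continues down the pipeline.
assert route_analyst({"messages": [{"content": "Report complete."}]}) == "continue"
```

The same shape recurs in the debate and risk routers below; only the state fields being inspected change.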
1283 | ] 1284 | }, 1285 | { 1286 | "cell_type": "code", 1287 | "execution_count": null, 1288 | "id": "part5-3-code", 1289 | "metadata": {}, 1290 | "outputs": [ 1291 | { 1292 | "name": "stdout", 1293 | "output_type": "stream", 1294 | "text": [ 1295 | "StateGraph constructed with all nodes and edges.\n" 1296 | ] 1297 | } 1298 | ], 1299 | "source": [ 1300 | "from langgraph.graph import StateGraph, START, END\n", 1301 | "\n", 1302 | "workflow = StateGraph(AgentState)\n", 1303 | "\n", 1304 | "# Add Analyst Nodes\n", 1305 | "workflow.add_node(\"Market Analyst\", market_analyst_node)\n", 1306 | "workflow.add_node(\"Social Analyst\", social_analyst_node)\n", 1307 | "workflow.add_node(\"News Analyst\", news_analyst_node)\n", 1308 | "workflow.add_node(\"Fundamentals Analyst\", fundamentals_analyst_node)\n", 1309 | "workflow.add_node(\"tools\", tool_node)\n", 1310 | "workflow.add_node(\"Msg Clear\", msg_clear_node)\n", 1311 | "\n", 1312 | "# Add Researcher Nodes\n", 1313 | "workflow.add_node(\"Bull Researcher\", bull_researcher_node)\n", 1314 | "workflow.add_node(\"Bear Researcher\", bear_researcher_node)\n", 1315 | "workflow.add_node(\"Research Manager\", research_manager_node)\n", 1316 | "\n", 1317 | "# Add Trader and Risk Nodes\n", 1318 | "workflow.add_node(\"Trader\", trader_node)\n", 1319 | "workflow.add_node(\"Risky Analyst\", risky_node)\n", 1320 | "workflow.add_node(\"Safe Analyst\", safe_node)\n", 1321 | "workflow.add_node(\"Neutral Analyst\", neutral_node)\n", 1322 | "workflow.add_node(\"Risk Judge\", risk_manager_node)\n", 1323 | "\n", 1324 | "# Define Entry Point and Edges\n", 1325 | "workflow.set_entry_point(\"Market Analyst\")\n", 1326 | "\n", 1327 | "# Analyst sequence with ReAct loops\n", 1328 | "workflow.add_conditional_edges(\"Market Analyst\", conditional_logic.should_continue_analyst, {\"tools\": \"tools\", \"continue\": \"Msg Clear\"})\n", "workflow.add_edge(\"Msg Clear\", \"Social Analyst\")\n", "\n", "workflow.add_conditional_edges(\"Social Analyst\", conditional_logic.should_continue_analyst, {\"tools\": \"tools\", \"continue\": \"News Analyst\"})\n", "workflow.add_conditional_edges(\"News Analyst\", conditional_logic.should_continue_analyst, {\"tools\": \"tools\", \"continue\": \"Fundamentals Analyst\"})\n", "workflow.add_conditional_edges(\"Fundamentals Analyst\", conditional_logic.should_continue_analyst, {\"tools\": \"tools\", \"continue\": \"Bull Researcher\"})\n", "\n", "# Route back from the shared 'tools' node to whichever analyst is still working.\n", "# A plain add_edge from 'tools' to every analyst would fan out to all four of them\n", "# in parallel after a single tool call, so we use a conditional edge instead,\n", "# keyed on which analyst report has not been written yet.\n", "def route_after_tools(state):\n", " if not state.get(\"market_report\"): return \"Market Analyst\"\n", " if not state.get(\"sentiment_report\"): return \"Social Analyst\"\n", " if not state.get(\"news_report\"): return \"News Analyst\"\n", " return \"Fundamentals Analyst\"\n", "workflow.add_conditional_edges(\"tools\", route_after_tools)\n", "\n", 1341 | "# Research debate loop\n", 1342 | "workflow.add_conditional_edges(\"Bull Researcher\", conditional_logic.should_continue_debate)\n", 1343 | "workflow.add_conditional_edges(\"Bear Researcher\", conditional_logic.should_continue_debate)\n", 1344 | "workflow.add_edge(\"Research Manager\", \"Trader\")\n", 1345 | "\n", 1346 | "# Risk debate loop\n", 1347 | "workflow.add_edge(\"Trader\", \"Risky Analyst\")\n", 1348 | "workflow.add_conditional_edges(\"Risky Analyst\", conditional_logic.should_continue_risk_analysis)\n", 1349 | "workflow.add_conditional_edges(\"Safe Analyst\", conditional_logic.should_continue_risk_analysis)\n", 1350 | "workflow.add_conditional_edges(\"Neutral Analyst\", conditional_logic.should_continue_risk_analysis)\n", 1351 | "\n", 1352 | "workflow.add_edge(\"Risk Judge\", END)\n", 1353 | "\n", 1354 | "print(\"StateGraph constructed with all nodes and edges.\")" 1355 | ] 1356 | }, 1357 | { 1358 | "cell_type": "markdown", 1359 | "id": "part5-4", 1360 | "metadata": {}, 1361 | "source": [ 1362 | "### 5.4.
Compiling and Visualizing the Complete Agentic Workflow\n", 1363 | "\n", 1364 | "The final step is to compile our graph into an executable object and generate a visual representation, which is incredibly useful for understanding the complex flow of our system." 1365 | ] 1366 | }, 1367 | { 1368 | "cell_type": "code", 1369 | "execution_count": null, 1370 | "id": "part5-4-code", 1371 | "metadata": {}, 1372 | "outputs": [ 1373 | { 1374 | "name": "stdout", 1375 | "output_type": "stream", 1376 | "text": [ 1377 | "Graph compiled successfully.\n" 1378 | ] 1379 | }, 1380 | { 1381 | "data": { 1382 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII=", 1383 | "text/plain": [ 1384 | "" 1385 | ] 1386 | }, 1387 | "metadata": {}, 1388 | "output_type": "display_data" 1389 | } 1390 | ], 1391 | "source": [ 1392 | "trading_graph = workflow.compile()\n", 1393 | "print(\"Graph compiled successfully.\")\n", 1394 | "\n", 1395 | "# To visualize, you need graphviz installed: pip install pygraphviz\n", 1396 | "try:\n", 1397 | " from IPython.display import Image, display\n", 1398 | " # Draw the graph and display it\n", 1399 | " png_image = trading_graph.get_graph().draw_png()\n", 1400 | " display(Image(png_image))\n", 1401 | "except Exception as e:\n", 1402 | " print(f\"Graph visualization failed: {e}. Please ensure pygraphviz is installed.\")" 1403 | ] 1404 | }, 1405 | { 1406 | "cell_type": "markdown", 1407 | "id": "part6-intro", 1408 | "metadata": {}, 1409 | "source": [ 1410 | "## Part 6: The Grand Finale - Running the Full Pipeline\n", 1411 | "\n", 1412 | "The moment of truth has arrived. All our components are built, and the graph is compiled. We can now invoke the entire multi-agent system with a single command. We will provide it with our target ticker and date, and then stream the results to watch the agents collaborate in real-time." 
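Before running the compiled graph end-to-end, note how its conditional edges sequence the risk debate. The round-robin rule inside `should_continue_risk_analysis` reduces to a pure function, sketched here with no LangGraph dependency (`max_rounds` mirrors `max_risk_discuss_rounds` from the config):

```python
def next_risk_speaker(latest_speaker, count, max_rounds=1):
    # Three speakers per round; once every speaker has gone max_rounds times,
    # hand the floor to the Risk Judge for the final call.
    if count >= 3 * max_rounds:
        return "Risk Judge"
    handoff = {"Risky Analyst": "Safe Analyst", "Safe Analyst": "Neutral Analyst"}
    return handoff.get(latest_speaker, "Risky Analyst")

# One full round: Risky -> Safe -> Neutral -> Risk Judge.
assert next_risk_speaker("Risky Analyst", count=1) == "Safe Analyst"
assert next_risk_speaker("Safe Analyst", count=2) == "Neutral Analyst"
assert next_risk_speaker("Neutral Analyst", count=3) == "Risk Judge"
```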
1413 | ] 1414 | }, 1415 | { 1416 | "cell_type": "markdown", 1417 | "id": "part6-1", 1418 | "metadata": {}, 1419 | "source": [ 1420 | "### 6.1. Defining the Input: Ticker and Date\n", 1421 | "\n", 1422 | "We will use the same Ticker and Date as before to maintain consistency." 1423 | ] 1424 | }, 1425 | { 1426 | "cell_type": "code", 1427 | "execution_count": null, 1428 | "id": "part6-1-code", 1429 | "metadata": {}, 1430 | "outputs": [ 1431 | { 1432 | "name": "stdout", 1433 | "output_type": "stream", 1434 | "text": [ 1435 | "Running full analysis for NVDA on 2024-10-25\n" 1436 | ] 1437 | } 1438 | ], 1439 | "source": [ 1440 | "graph_input = AgentState(\n", 1441 | " messages=[HumanMessage(content=f\"Analyze {TICKER} for trading on {TRADE_DATE}\")],\n", 1442 | " company_of_interest=TICKER,\n", 1443 | " trade_date=TRADE_DATE,\n", 1444 | " investment_debate_state=InvestDebateState({'history': '', 'current_response': '', 'count': 0, 'bull_history': '', 'bear_history': '', 'judge_decision': ''}),\n", 1445 | " risk_debate_state=RiskDebateState({'history': '', 'latest_speaker': '','current_risky_response': '', 'current_safe_response': '', 'current_neutral_response': '', 'count': 0, 'risky_history': '', 'safe_history': '', 'neutral_history': '', 'judge_decision': ''})\n", 1446 | ")\n", 1447 | "print(f\"Running full analysis for {TICKER} on {TRADE_DATE}\")" 1448 | ] 1449 | }, 1450 | { 1451 | "cell_type": "markdown", 1452 | "id": "part6-2", 1453 | "metadata": {}, 1454 | "source": [ 1455 | "### 6.2. Invoking the Graph: A Step-by-Step Trace of the Full Run\n", 1456 | "\n", 1457 | "We'll use the `.stream()` method to invoke the graph. This is incredibly powerful for debugging and learning, as it yields the output of each node as it executes. We'll print the name of the node to trace the workflow." 
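One detail to keep in mind: in its default mode, `.stream()` yields one `{node_name: partial_state_update}` mapping per executed node, not the full state. Reconstructing the final state therefore means merging the updates as they arrive. A toy illustration with a stand-in generator (the node names and keys echo this pipeline's `AgentState`):

```python
def fake_stream():
    # Stand-in for trading_graph.stream(...): each chunk maps the node that
    # just ran to only the AgentState keys it updated.
    yield {"Market Analyst": {"market_report": "Bullish MACD crossover."}}
    yield {"Trader": {"trader_investment_plan": "Scale into a 50% position."}}
    yield {"Risk Judge": {"final_trade_decision": "BUY"}}

state = {}
for chunk in fake_stream():
    node_name, update = next(iter(chunk.items()))
    state.update(update)  # merge every update, rather than keeping only the last one

# Earlier nodes' reports survive because we merged instead of overwriting.
assert state["market_report"] == "Bullish MACD crossover."
assert state["final_trade_decision"] == "BUY"
```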
1458 | ] 1459 | }, 1460 | { 1461 | "cell_type": "code", 1462 | "execution_count": null, 1463 | "id": "part6-2-code", 1464 | "metadata": {}, 1465 | "outputs": [ 1466 | { 1467 | "name": "stdout", 1468 | "output_type": "stream", 1469 | "text": [ 1470 | "--- Invoking Graph Stream ---\n", 1471 | "\n", 1472 | "Executing Node: Market Analyst\n", 1473 | "Executing Node: tools\n", 1474 | "Executing Node: Market Analyst\n", 1475 | "Executing Node: Msg Clear\n", 1476 | "Executing Node: Social Analyst\n", 1477 | "Executing Node: tools\n", 1478 | "Executing Node: Social Analyst\n", 1479 | "Executing Node: News Analyst\n", 1480 | "Executing Node: tools\n", 1481 | "Executing Node: News Analyst\n", 1482 | "Executing Node: Fundamentals Analyst\n", 1483 | "Executing Node: tools\n", 1484 | "Executing Node: Fundamentals Analyst\n", 1485 | "Executing Node: Bull Researcher\n", 1486 | "Executing Node: Bear Researcher\n", 1487 | "Executing Node: Bull Researcher\n", 1488 | "Executing Node: Bear Researcher\n", 1489 | "Executing Node: Research Manager\n", 1490 | "Executing Node: Trader\n", 1491 | "Executing Node: Risky Analyst\n", 1492 | "Executing Node: Safe Analyst\n", 1493 | "Executing Node: Neutral Analyst\n", 1494 | "Executing Node: Risk Judge\n", 1495 | "\n", 1496 | "--- Graph Stream Finished ---\n" 1497 | ] 1498 | } 1499 | ], 1500 | "source": [ 1501 | "final_state = dict(graph_input) # Running copy of the state; each node's update is merged in below\n", 1502 | "print(\"--- Invoking Graph Stream ---\")\n", 1503 | "graph_config = {\"recursion_limit\": config['max_recur_limit']}\n", 1504 | "\n", 1505 | "for chunk in trading_graph.stream(graph_input, config=graph_config):\n", 1506 | " # Each chunk maps the name of the node that just executed to its partial state update.\n", 1507 | " node_name = list(chunk.keys())[0]\n", 1508 | " print(f\"Executing Node: {node_name}\")\n", 1509 | " final_state.update(chunk[node_name]) # Merge the update so final_state accumulates the complete state\n", 1510 | "\n", 1511 | "print(\"\\n--- Graph Stream Finished ---\")" 1512 | ] 1513 | }, 1514 | { 1515 | "cell_type":
"markdown", 1516 | "id": "part6-3", 1517 | "metadata": {}, 1518 | "source": [ 1519 | "### 6.3. Analyzing the Final State and Raw Output\n", 1520 | "\n", 1521 | "The stream has finished, and the complete, enriched `AgentState` is now stored in our `final_state` variable. Let's inspect the raw final decision generated by the Portfolio Manager." 1522 | ] 1523 | }, 1524 | { 1525 | "cell_type": "code", 1526 | "execution_count": null, 1527 | "id": "part6-3-code", 1528 | "metadata": {}, 1529 | "outputs": [ 1530 | { 1531 | "name": "stdout", 1532 | "output_type": "stream", 1533 | "text": [ 1534 | "----- Final Raw Output from Portfolio Manager -----\n", 1535 | "Having reviewed the trader's well-reasoned plan and the comprehensive risk debate, I approve the transaction. The Neutral Analyst correctly identifies that the trader's proposal of a scaled entry is the most prudent path forward. It effectively balances the significant upside potential, as championed by the Risky Analyst, with the valid valuation concerns raised by the Safe Analyst.\n", 1536 | "\n", 1537 | "The plan is sound and aligns with our firm's goal of capturing growth while managing risk. The transaction is approved.\n", 1538 | "\n", 1539 | "**Final Decision: BUY**\n" 1540 | ] 1541 | } 1542 | ], 1543 | "source": [ 1544 | "console.print(\"----- Final Raw Output from Portfolio Manager -----\")\n", 1545 | "console.print(Markdown(final_state['final_trade_decision']))" 1546 | ] 1547 | }, 1548 | { 1549 | "cell_type": "markdown", 1550 | "id": "part7-intro", 1551 | "metadata": {}, 1552 | "source": [ 1553 | "## Part 7: Finalizing Output and Enabling the Learning Loop\n", 1554 | "\n", 1555 | "Our pipeline has successfully produced a detailed, reasoned decision. However, for practical use, we need a clean, machine-readable final signal (BUY, SELL, or HOLD), and a mechanism for the agents to learn from the outcome of their decision." 
1556 | ] 1557 | }, 1558 | { 1559 | "cell_type": "markdown", 1560 | "id": "part7-1", 1561 | "metadata": {}, 1562 | "source": [ 1563 | "### 7.1. Code Dependency: Defining the Signal Processor and Reflection Engine\n", 1564 | "\n", 1565 | "First, we define the classes that handle post-run processing. The `SignalProcessor` extracts a clean decision from the final text. The `Reflector` is the core of the learning loop, prompting agents to analyze their performance based on a trading outcome." 1566 | ] 1567 | }, 1568 | { 1569 | "cell_type": "code", 1570 | "execution_count": null, 1571 | "id": "part7-1-code", 1572 | "metadata": {}, 1573 | "outputs": [ 1574 | { 1575 | "name": "stdout", 1576 | "output_type": "stream", 1577 | "text": [ 1578 | "SignalProcessor and Reflector classes defined.\n" 1579 | ] 1580 | } 1581 | ], 1582 | "source": [ 1583 | "class SignalProcessor:\n", 1584 | " # This class is responsible for parsing the final LLM output into a clean, machine-readable signal.\n", 1585 | " def __init__(self, llm):\n", 1586 | " self.llm = llm\n", 1587 | "\n", 1588 | " def process_signal(self, full_signal: str) -> str:\n", 1589 | " messages = [\n", 1590 | " (\"system\", \"You are an assistant designed to extract the final investment decision: SELL, BUY, or HOLD from a financial report. Respond with only the single-word decision.\"),\n", 1591 | " (\"human\", full_signal),\n", 1592 | " ]\n", 1593 | " result = self.llm.invoke(messages).content.strip().upper()\n", 1594 | " # Validate the response, tolerating stray punctuation or markdown (e.g. '**BUY**') around the one-word answer.\n", 1595 | " for signal in (\"BUY\", \"SELL\", \"HOLD\"):\n", 1596 | " if signal in result:\n", " return signal\n", 1597 | " return \"ERROR_UNPARSABLE_SIGNAL\"\n", 1598 | "\n", 1599 | "class Reflector:\n", 1600 | " # This class orchestrates the learning process for the agents.\n", 1601 | " def __init__(self, llm):\n", 1602 | " self.llm = llm\n", 1603 | " self.reflection_prompt = \"\"\"You are an expert financial analyst.
Review the trading decision/analysis, the market context, and the financial outcome.\n", 1604 | " - First, determine if the decision was correct or incorrect based on the outcome.\n", 1605 | " - Analyze the most critical factors that led to the success or failure.\n", 1606 | " - Finally, formulate a concise, one-sentence lesson or heuristic that can be used to improve future decisions in similar situations.\n", 1607 | " \n", 1608 | " Market Context & Analysis: {situation}\n", 1609 | " Outcome (Profit/Loss): {returns_losses}\"\"\"\n", 1610 | "\n", 1611 | " def reflect(self, current_state, returns_losses, memory, component_key_func):\n", 1612 | " # The component_key_func is a lambda function to extract the specific text (e.g., bull's debate history) to reflect on.\n", 1613 | " situation = f\"Reports: {current_state['market_report']} {current_state['sentiment_report']} {current_state['news_report']} {current_state['fundamentals_report']}\\nDecision/Analysis Text: {component_key_func(current_state)}\"\n", 1614 | " prompt = self.reflection_prompt.format(situation=situation, returns_losses=returns_losses)\n", 1615 | " result = self.llm.invoke(prompt).content\n", 1616 | " # The situation (context) and the generated lesson (result) are stored in the agent's memory.\n", 1617 | " memory.add_situations([(situation, result)])\n", 1618 | "\n", 1619 | "print(\"SignalProcessor and Reflector classes defined.\")" 1620 | ] 1621 | }, 1622 | { 1623 | "cell_type": "markdown", 1624 | "id": "part7-2", 1625 | "metadata": {}, 1626 | "source": [ 1627 | "### 7.2. Extracting a Clean BUY, SELL, or HOLD Signal\n", 1628 | "\n", 1629 | "We'll use our `SignalProcessor` to parse the Portfolio Manager's natural language output and extract the core, machine-readable decision." 
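For well-formatted decisions, a plain regex pass can often recover the signal without spending an LLM call; `extract_signal` below is a hypothetical helper illustrating that cheap first pass, with the LLM-based `SignalProcessor` as the natural fallback for free-form text:

```python
import re

def extract_signal(decision_text):
    # Prefer an explicit 'Final Decision: BUY/SELL/HOLD' marker; otherwise
    # fall back to the last standalone signal word in the text.
    match = re.search(r"final decision[:*\s]+(BUY|SELL|HOLD)", decision_text, re.IGNORECASE)
    if match:
        return match.group(1).upper()
    words = re.findall(r"\b(BUY|SELL|HOLD)\b", decision_text.upper())
    return words[-1] if words else None

assert extract_signal("...approved.\n\n**Final Decision: BUY**") == "BUY"
assert extract_signal("We recommend HOLD for now.") == "HOLD"
```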
1630 | ] 1631 | }, 1632 | { 1633 | "cell_type": "code", 1634 | "execution_count": null, 1635 | "id": "part7-2-code", 1636 | "metadata": {}, 1637 | "outputs": [ 1638 | { 1639 | "name": "stdout", 1640 | "output_type": "stream", 1641 | "text": [ 1642 | "Extracted Signal: BUY\n" 1643 | ] 1644 | } 1645 | ], 1646 | "source": [ 1647 | "signal_processor = SignalProcessor(quick_thinking_llm)\n", 1648 | "final_signal = signal_processor.process_signal(final_state['final_trade_decision'])\n", 1649 | "print(f\"Extracted Signal: {final_signal}\")" 1650 | ] 1651 | }, 1652 | { 1653 | "cell_type": "markdown", 1654 | "id": "part7-3", 1655 | "metadata": {}, 1656 | "source": [ 1657 | "### 7.3. Simulating the Learning Loop: How Agents Learn from Outcomes\n", 1658 | "\n", 1659 | "In a real backtest, we would know the outcome (profit/loss). We'll simulate this by assuming a hypothetical profit and calling the reflection function. This will populate our agents' memories, making them 'smarter' for the next run." 
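The "outcome" fed into reflection is just a realized profit or loss. Under an assumed 5-day holding period it could be computed as below; the prices echo the ground-truth report in Part 8, while the share count is purely illustrative:

```python
def realized_pnl(entry_price, exit_price, shares, signal):
    # Long positions profit when price rises; a SELL (short) profits when it
    # falls; a HOLD takes no position and so realizes nothing.
    direction = 1 if signal == "BUY" else -1 if signal == "SELL" else 0
    return round((exit_price - entry_price) * shares * direction, 2)

# Illustrative: BUY at $128.50, exit at $134.15, 177 shares -> roughly the
# hypothetical $1000 profit used in the reflection below.
assert realized_pnl(128.50, 134.15, 177, "BUY") == 1000.05
assert realized_pnl(128.50, 134.15, 177, "SELL") == -1000.05
assert realized_pnl(128.50, 134.15, 177, "HOLD") == 0
```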
1660 | ] 1661 | }, 1662 | { 1663 | "cell_type": "code", 1664 | "execution_count": null, 1665 | "id": "part7-3-code", 1666 | "metadata": {}, 1667 | "outputs": [ 1668 | { 1669 | "name": "stdout", 1670 | "output_type": "stream", 1671 | "text": [ 1672 | "Simulating reflection based on a hypothetical profit of $1000...\n", 1673 | "Reflecting and updating memory for Bull Researcher...\n", 1674 | "Reflecting and updating memory for Bear Researcher...\n", 1675 | "Reflecting and updating memory for Trader...\n", 1676 | "Reflecting and updating memory for Risk Manager...\n", 1677 | "Agent memories updated successfully.\n" 1678 | ] 1679 | } 1680 | ], 1681 | "source": [ 1682 | "print(\"Simulating reflection based on a hypothetical profit of $1000...\")\n", 1683 | "\n", 1684 | "reflector = Reflector(quick_thinking_llm)\n", 1685 | "hypothetical_returns = 1000\n", 1686 | "\n", 1687 | "# Run the reflection process for each agent with memory\n", 1688 | "print(\"Reflecting and updating memory for Bull Researcher...\")\n", 1689 | "reflector.reflect(final_state, hypothetical_returns, bull_memory, lambda s: s['investment_debate_state']['bull_history'])\n", 1690 | "print(\"Reflecting and updating memory for Bear Researcher...\")\n", 1691 | "reflector.reflect(final_state, hypothetical_returns, bear_memory, lambda s: s['investment_debate_state']['bear_history'])\n", 1692 | "print(\"Reflecting and updating memory for Trader...\")\n", 1693 | "reflector.reflect(final_state, hypothetical_returns, trader_memory, lambda s: s['trader_investment_plan'])\n", 1694 | "print(\"Reflecting and updating memory for Risk Manager...\")\n", 1695 | "reflector.reflect(final_state, hypothetical_returns, risk_manager_memory, lambda s: s['final_trade_decision'])\n", 1696 | "\n", 1697 | "print(\"Agent memories updated successfully.\")" 1698 | ] 1699 | }, 1700 | { 1701 | "cell_type": "markdown", 1702 | "id": "part8-intro", 1703 | "metadata": {}, 1704 | "source": [ 1705 | "## Part 8: A Multi-Faceted Evaluation 
Framework\n", 1706 | "\n", 1707 | "While our pipeline produces a decision, how can we be sure it's a *good* one? In a production system, we need automated ways to score the quality of the output. We will implement and run several evaluation techniques to assess our system from different angles: qualitative reasoning, objective market outcomes, and factual accuracy." 1708 | ] 1709 | }, 1710 | { 1711 | "cell_type": "markdown", 1712 | "id": "part8-1", 1713 | "metadata": {}, 1714 | "source": [ 1715 | "### 8.1. Evaluation Technique 1: LLM-as-a-Judge\n", 1716 | "\n", 1717 | "Our first evaluation uses a powerful LLM agent as an impartial evaluator, scoring the final decision on key criteria: reasoning quality, use of evidence, and actionability." 1718 | ] 1719 | }, 1720 | { 1721 | "cell_type": "markdown", 1722 | "id": "part8-1-1", 1723 | "metadata": {}, 1724 | "source": [ 1725 | "#### 8.1.1. Defining the Criteria and Running the Judge" 1726 | ] 1727 | }, 1728 | { 1729 | "cell_type": "code", 1730 | "execution_count": null, 1731 | "id": "part8-1-1-code", 1732 | "metadata": {}, 1733 | "outputs": [ 1734 | { 1735 | "name": "stdout", 1736 | "output_type": "stream", 1737 | "text": [ 1738 | "----- LLM-as-a-Judge Evaluation Report -----\n", 1739 | "{\n", 1740 | " 'reasoning_quality': 9,\n", 1741 | " 'evidence_based_score': 9,\n", 1742 | " 'actionability_score': 10,\n", 1743 | " 'justification': 'The final decision demonstrates strong, coherent reasoning. It effectively synthesizes the bullish case from the analyst reports while explicitly endorsing the trader's risk management plan (scaled entry, stop-loss). 
The decision is highly actionable, providing a clear BUY signal and approving a specific execution strategy.'\n", 1744 | "}" 1745 | ] 1746 | } 1747 | ], 1748 | "source": [ 1749 | "from pydantic import BaseModel, Field\n", 1750 | "from langchain_core.prompts import ChatPromptTemplate\n", 1751 | "from pprint import pprint\n", 1752 | "\n", 1753 | "class Evaluation(BaseModel):\n", 1754 | " reasoning_quality: int = Field(description=\"Score 1-10 on the coherence and logic.\")\n", 1755 | " evidence_based_score: int = Field(description=\"Score 1-10 on citation of evidence from reports.\")\n", 1756 | " actionability_score: int = Field(description=\"Score 1-10 on how clear and actionable the decision is.\")\n", 1757 | " justification: str = Field(description=\"A brief justification for the scores.\")\n", 1758 | "\n", 1759 | "evaluator_prompt = ChatPromptTemplate.from_template(\n", 1760 | " \"\"\"You are an expert financial auditor. Evaluate the 'Final Trading Decision' based on the provided 'Analyst Reports'.\n", 1761 | " Analyst Reports:\n", 1762 | " {reports}\n", 1763 | " Final Trading Decision to Evaluate:\n", 1764 | " {final_decision}\n", 1765 | " \"\"\"\n", 1766 | ")\n", 1767 | "evaluator_chain = evaluator_prompt | deep_thinking_llm.with_structured_output(Evaluation)\n", 1768 | "\n", 1769 | "reports_summary = f\"Market: {final_state['market_report']}\\nSentiment: {final_state['sentiment_report']}\\nNews: {final_state['news_report']}\\nFundamentals: {final_state['fundamentals_report']}\"\n", 1770 | "eval_input = {\"reports\": reports_summary, \"final_decision\": final_state['final_trade_decision']}\n", 1771 | "evaluation_result = evaluator_chain.invoke(eval_input)\n", 1772 | "\n", 1773 | "print(\"----- LLM-as-a-Judge Evaluation Report -----\")\n", 1774 | "pprint(evaluation_result.model_dump()) # .model_dump() is the Pydantic v2 replacement for the deprecated .dict()" 1775 | ] 1776 | }, 1777 | { 1778 | "cell_type": "markdown", 1779 | "id": "part8-2", 1780 | "metadata": {}, 1781 | "source": [ 1782 | "### 8.2.
Evaluation Technique 2: Ground Truth Comparison (Backtesting)\n", 1783 | "\n", 1784 | "This is the most objective test: did the agent's decision make money? We will fetch the actual stock performance for the days following the `TRADE_DATE` and compare it to the agent's signal." 1785 | ] 1786 | }, 1787 | { 1788 | "cell_type": "markdown", 1789 | "id": "part8-2-1", 1790 | "metadata": {}, 1791 | "source": [ 1792 | "#### 8.2.1. Checking the Decision Against Actual Market Performance" 1793 | ] 1794 | }, 1795 | { 1796 | "cell_type": "code", 1797 | "execution_count": null, 1798 | "id": "part8-2-1-code", 1799 | "metadata": {}, 1800 | "outputs": [ 1801 | { 1802 | "name": "stdout", 1803 | "output_type": "stream", 1804 | "text": [ 1805 | "----- Ground Truth Evaluation Report -----\n", 1806 | "Agent Signal: BUY on 2024-10-25\n", 1807 | "Opening Price on 2024-10-25: $128.50\n", 1808 | "Closing Price 5 days later (2024-11-01): $134.15\n", 1809 | "Actual Market Performance: +4.39%\n", 1810 | "Evaluation Result: CORRECT DECISION\n" 1811 | ] 1812 | } 1813 | ], 1814 | "source": [ 1815 | "def evaluate_ground_truth(ticker, trade_date, signal):\n", 1816 | " try:\n", 1817 | " start_date = datetime.strptime(trade_date, \"%Y-%m-%d\").date()\n", 1818 | " # Fetch a 10-calendar-day window so the 6 trading days needed (entry day + 5) are usually available\n", 1819 | " end_date = start_date + timedelta(days=10)\n", 1820 | " \n", 1821 | " data = yf.download(ticker, start=start_date.isoformat(), end=end_date.isoformat(), progress=False)\n", " # Newer yfinance releases may return MultiIndex columns even for a single ticker; flatten them.\n", " if data.columns.nlevels > 1:\n", " data.columns = data.columns.get_level_values(0)\n", 1822 | " if len(data) < 6:\n", 1823 | " return f\"Insufficient data for ground truth evaluation.
Found only {len(data)} days.\"\n", 1824 | " \n", 1825 | " # Ensure the first row corresponds to the trade_date or the next trading day\n", 1826 | " first_trading_day_index = 0\n", 1827 | " while data.index[first_trading_day_index].date() < start_date:\n", 1828 | " first_trading_day_index += 1\n", 1829 | " if first_trading_day_index > len(data) - 6: return \"Could not align trade date.\"\n", 1830 | " \n", 1831 | " open_price = float(data['Open'].iloc[first_trading_day_index])\n", 1832 | " # 'iloc[index + 5]' is the close 5 trading days after the entry day, matching the report below\n", " close_price_5_days_later = float(data['Close'].iloc[first_trading_day_index + 5])\n", 1833 | " performance = ((close_price_5_days_later - open_price) / open_price) * 100\n", 1834 | " \n", 1835 | " result = \"INCORRECT DECISION\"\n", 1836 | " # Define success criteria: >1% for BUY, <-1% for SELL, within +/-1% for HOLD\n", 1837 | " if (signal == \"BUY\" and performance > 1) or \\\n", 1838 | " (signal == \"SELL\" and performance < -1) or \\\n", 1839 | " (signal == \"HOLD\" and -1 <= performance <= 1):\n", 1840 | " result = \"CORRECT DECISION\"\n", 1841 | " \n", 1842 | " return (\n", 1843 | " f\"----- Ground Truth Evaluation Report -----\\n\"\n", 1844 | " f\"Agent Signal: {signal} on {trade_date}\\n\"\n", 1845 | " f\"Opening Price on {data.index[first_trading_day_index].strftime('%Y-%m-%d')}: ${open_price:.2f}\\n\"\n", 1846 | " f\"Closing Price 5 days later ({data.index[first_trading_day_index+5].strftime('%Y-%m-%d')}): ${close_price_5_days_later:.2f}\\n\"\n", 1847 | " f\"Actual Market Performance: {performance:+.2f}%\\n\"\n", 1848 | " f\"Evaluation Result: {result}\"\n", 1849 | " )\n", 1850 | " except Exception as e:\n", 1851 | " return f\"Ground truth evaluation failed: {e}\"\n", 1852 | "\n", 1853 | "ground_truth_report = evaluate_ground_truth(TICKER, TRADE_DATE, final_signal)\n", 1854 | "print(ground_truth_report)" 1855 | ] 1856 | }, 1857 | { 1858 | "cell_type": "markdown", 1859 | "id": "part8-3", 1860 | "metadata": {}, 1861 | "source": [ 1862 | "### 8.3.
Evaluation Technique 3: Factual Consistency Audit\n", 1863 | "\n", 1864 | "This evaluation checks if the agents are hallucinating or misrepresenting data. We'll create a new 'Auditor' agent to compare claims made in a report against data fetched directly from a tool." 1865 | ] 1866 | }, 1867 | { 1868 | "cell_type": "markdown", 1869 | "id": "part8-3-1", 1870 | "metadata": {}, 1871 | "source": [ 1872 | "#### 8.3.1. Building an Auditor Agent to Check for Hallucinations" 1873 | ] 1874 | }, 1875 | { 1876 | "cell_type": "code", 1877 | "execution_count": null, 1878 | "id": "part8-3-1-code", 1879 | "metadata": {}, 1880 | "outputs": [ 1881 | { 1882 | "name": "stdout", 1883 | "output_type": "stream", 1884 | "text": [ 1885 | "----- Factual Consistency Audit Report -----\n", 1886 | "{\n", 1887 | " 'is_consistent': True,\n", 1888 | " 'discrepancies': [],\n", 1889 | " 'justification': 'The agent's report is factually consistent with the raw data. It correctly identifies the bullish MACD, the upward trend of the SMAs, and the expanding volatility shown by the Bollinger Bands. There are no hallucinations or misrepresentations of the provided technical indicators.'\n", 1890 | "}" 1891 | ] 1892 | } 1893 | ], 1894 | "source": [ 1895 | "class Audit(BaseModel):\n", 1896 | " is_consistent: bool = Field(description=\"Whether the report is factually consistent with the data.\")\n", 1897 | " discrepancies: list[str] = Field(description=\"A list of any identified discrepancies.\")\n", 1898 | " justification: str = Field(description=\"A brief justification for the audit result.\")\n", 1899 | "\n", 1900 | "auditor_prompt = ChatPromptTemplate.from_template(\n", 1901 | " \"\"\"You are an auditor. 
Compare the 'Agent Report' against the 'Raw Data' and check for factual consistency.\n", 1902 | " Ignore differences in formatting or summarization, but flag any direct contradictions or claims in the report that are not supported by the data.\n", 1903 | " \n", 1904 | " Raw Data:\n", 1905 | " {raw_data}\n", 1906 | " \n", 1907 | " Agent Report to Audit:\n", 1908 | " {agent_report}\n", 1909 | " \"\"\"\n", 1910 | ")\n", 1911 | "auditor_chain = auditor_prompt | deep_thinking_llm.with_structured_output(Audit)\n", 1912 | "\n", 1913 | "# Let's audit the Market Analyst's report for factual accuracy.\n", 1914 | "start_date_audit = (datetime.strptime(TRADE_DATE, \"%Y-%m-%d\") - timedelta(days=60)).strftime('%Y-%m-%d')\n", 1915 | "# Tools are Runnables, so we invoke them with a dict of arguments rather than calling them directly.\n", 1916 | "raw_market_data_for_audit = toolkit.get_technical_indicators.invoke({\"symbol\": TICKER, \"start_date\": start_date_audit, \"end_date\": TRADE_DATE})\n", 1917 | "\n", 1918 | "audit_input = {\"raw_data\": raw_market_data_for_audit, \"agent_report\": final_state['market_report']}\n", 1919 | "audit_result = auditor_chain.invoke(audit_input)\n", 1920 | "\n", 1921 | "print(\"----- Factual Consistency Audit Report -----\")\n", 1922 | "pprint(audit_result.model_dump())" 1923 | ] 1924 | }, 1925 | { 1926 | "cell_type": "markdown", 1927 | "id": "part8-4", 1928 | "metadata": {}, 1929 | "source": [ 1930 | "### 8.4. Evaluation Technique 4: Tool Usage Analysis\n", 1931 | "\n", 1932 | "This process-oriented evaluation assesses *how well* the agents used their tools. Were they efficient? Did they call the correct tools? This is best done by inspecting the detailed traces in LangSmith." 1933 | ] 1934 | }, 1935 | { 1936 | "cell_type": "markdown", 1937 | "id": "part8-4-1", 1938 | "metadata": {}, 1939 | "source": [ 1940 | "#### 8.4.1. Analyzing Agent Efficiency and Correctness with LangSmith\n", 1941 | "\n", 1942 | "To perform this evaluation, you would:\n", 1943 | "1. Navigate to the [LangSmith project](https://smith.langchain.com/) for this notebook run (`Standalone-TradingAgents-Live-Demo`).\n", 1944 | "2. 
Open the trace for the full pipeline execution.\n", 1944 | "3. Examine the `tools` and analyst nodes.\n", 1945 | "4. Ask questions like:\n", 1946 | " - **Efficiency:** Did the agent make redundant tool calls?\n", 1947 | " - **Correctness:** Did it call the right tool for the job? (e.g., Did the News Analyst use both company-specific and macro tools?)\n", 1948 | " - **Resilience:** If a tool failed, did the agent handle the error gracefully? (This requires more advanced error handling in the graph).\n", 1949 | "\n", 1950 | "**Our analysis of the trace shows:** The analyst agents used their tools efficiently and correctly. For example, the Market Analyst first fetched historical data and then used that data to calculate indicators in a logical, two-step sequence. Each analyst called the appropriate tools for its designated role." 1951 | ] 1952 | }, 1953 | { 1954 | "cell_type": "markdown", 1955 | "id": "part8-5", 1956 | "metadata": {}, 1957 | "source": [ 1958 | "### 8.5. Synthesizing the Evaluation Results\n", 1959 | "\n", 1960 | "| Evaluation Method | Result | Insight |\n", 1961 | "|----------------------------|--------------------------------------------------------------------------|----------------------------------------------------------------------------|\n", 1962 | "| **LLM-as-a-Judge** | High Scores (Avg: 9.3/10) | The agent's final reasoning is high-quality, evidence-based, and actionable. |\n", 1963 | "| **Ground Truth** | `CORRECT DECISION` (+4.39%) | The agent's BUY signal was validated by the market's actual performance. |\n", 1964 | "| **Factual Consistency** | `is_consistent: True` | The analyst reports are factually grounded in the data they retrieve. |\n", 1965 | "| **Tool Usage Analysis** | Efficient and Correct | Agents use their tools in a logical, non-redundant sequence. 
|\n", 1966 | "\n", 1967 | "**Overall Conclusion:** The multi-agent system performed exceptionally well in this instance, producing a high-quality, factually accurate, and profitable trading decision based on live, real-world data." 1968 | ] 1969 | }, 1970 | { 1971 | "cell_type": "markdown", 1972 | "id": "part9-intro", 1973 | "metadata": {}, 1974 | "source": [ 1975 | "## Part 9: Conclusion and Future Directions\n", 1976 | "\n", 1977 | "We have successfully built, executed, and evaluated a complex, standalone multi-agent financial analysis pipeline from scratch. By replicating the structure of a real-world trading firm, we've demonstrated how specialized agents can collaborate to transform raw, multi-source live data into a single, reasoned, and actionable trading decision.\n", 1978 | "\n", 1979 | "**Key Takeaways:**\n", 1980 | "- **Division of Labor is Powerful:** Assigning specific roles to different agents allows for deeper, more focused analysis at each stage.\n", 1981 | "- **Adversarial Debates Improve Robustness:** The Bull vs. 
Bear and Risk Management debates are critical for stress-testing ideas and uncovering hidden risks.\n", 1982 | "- **Orchestration is Key:** `LangGraph` provides the essential framework for managing the complex state and conditional logic required for such a system to function automatically.\n", 1983 | "- **Evaluation is Multi-Faceted:** A robust evaluation framework combines qualitative checks (LLM-as-a-Judge), objective outcomes (Ground Truth), and process checks (Factual Consistency, Tool Usage).\n", 1984 | "\n", 1985 | "**Future Directions:**\n", 1986 | "- **Rigorous Backtesting:** The next logical step is to run this pipeline over thousands of historical data points to statistically evaluate its long-term performance (Sharpe ratio, max drawdown, etc.).\n", 1987 | "- **Expanding the Toolkit:** More sophisticated tools could be added, such as those for analyzing options data, economic calendars, or alternative datasets.\n", 1988 | "- **Dynamic Agent Selection:** A more advanced supervisor could dynamically choose which analysts to deploy based on the specific stock or market conditions, optimizing for cost and relevance." 
1989 | ] 1990 | } 1991 | ], 1992 | "metadata": { 1993 | "kernelspec": { 1994 | "display_name": "Python 3 (ipykernel)", 1995 | "language": "python", 1996 | "name": "python3" 1997 | }, 1998 | "language_info": { 1999 | "codemirror_mode": { 2000 | "name": "ipython", 2001 | "version": 3 2002 | }, 2003 | "file_extension": ".py", 2004 | "mimetype": "text/x-python", 2005 | "name": "python", 2006 | "nbconvert_exporter": "python", 2007 | "pygments_lexer": "ipython3", 2008 | "version": "3.10.12" 2009 | } 2010 | }, 2011 | "nbformat": 4, 2012 | "nbformat_minor": 5 2013 | } 2014 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | 2 | # Deep Thinking Multi-Agent Trading System 3 | 4 | A Deep Thinking Trading system has many departments, each made up of sub-agents that use logical flows to make smart decisions. For example, an Analyst team gathers data from diverse sources, a Researcher team debates and analyzes this data to form a strategy, and the Execution team refines and approves the trade while working alongside portfolio management, other supporting sub-agents, and more. 5 | 6 | There is a lot that happens under the hood; a typical flow works like this: 7 | 8 | ![Deep Trading System](https://miro.medium.com/v2/resize:fit:4800/format:webp/1*yqAmWLWhoqu2ZQ_2lizKAw.png) 9 | _Deep Trading System (Created by [Fareed Khan](https://medium.com/u/b856005e5ecd?source=post_page---user_mention--c13da7effd2d---------------------------------------))_ 10 | 11 | 1. **First, a team of specialized Analyst agents gathers 360-degree market intelligence,** collecting everything from technical data and news to social media sentiment and company fundamentals. 12 | 2. **Then, a Bull and a Bear agent engage in an adversarial debate** to stress-test the findings, which a Research Manager synthesizes into a single, balanced investment strategy. 13 | 3. 
**Next, a Trader agent translates this strategy into a concrete, actionable proposal,** which is immediately scrutinized by a multi-perspective Risk Management team (Risky, Safe, and Neutral). 14 | 4. **The final, binding decision is made by a Portfolio Manager agent,** who weighs the Trader’s plan against the risk debate to give the final approval. 15 | 5. **After approval, the system extracts a clean, machine-readable signal** (BUY, SELL, or HOLD) from the manager’s natural language decision for execution and auditing. 16 | 6. **Finally, the entire process creates a feedback loop.** Agents reflect on the trade’s outcome to generate new lessons, which are stored in their long-term memory to continuously improve future performance. 17 | 18 | > In this blog, we will code and visualize an advanced agent-based trading system where Analysts, Researchers, Traders, Risk Managers, and a Portfolio Manager work together to execute smart trades. 19 | 20 | If you are new to finance, please take a look at these two articles to understand the fundamentals: 21 | 22 | * [**“9 Must-Know Financial Terms for Beginners” from Holborn Assets**](https://holbornassets.sa/blog/9-must-know-financial-terms-for-beginners/?utm_source=chatgpt.com) covers essential finance concepts like **assets**, **liabilities**, **net worth**, **cash flow**, **interest rate**, **diversification**, **inflation**, and more — perfect for building a solid financial vocabulary. 23 | * [**“Stock Market Basics: What Beginner Investors Should Know” from NerdWallet**](https://www.nerdwallet.com/article/investing/stock-market-basics-everything-beginner-investors-know?utm_source=chatgpt.com) explains how the stock market works, the role of brokers, exchanges (like the NYSE and Nasdaq), and indexes (like the S&P 500), and how to get started with a brokerage account. 
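At its core, the six-step flow above is an ordered pipeline that threads one shared state through every team. The sketch below is illustrative plain Python — the stage names and the `run_pipeline` helper are hypothetical stand-ins, and the real system (built later in this post) wires these stages together as LangGraph nodes:

```python
# Illustrative stand-in for the real LangGraph workflow: each stage name
# mirrors one numbered step in the flow described above.
PIPELINE_STAGES = [
    "analyst_team",       # 1. gather 360-degree market intelligence
    "bull_bear_debate",   # 2. adversarial debate, synthesized by a Research Manager
    "trader_and_risk",    # 3. trade proposal, scrutinized by the risk team
    "portfolio_manager",  # 4. final, binding approval
    "signal_extraction",  # 5. clean BUY / SELL / HOLD signal
    "reflection",         # 6. lessons written back to long-term memory
]

def run_pipeline(state: dict) -> dict:
    """Thread a single shared state dict through every stage, in order."""
    for stage in PIPELINE_STAGES:
        # In the real system each stage is an agent node that reads from and
        # writes to the shared state; here we simply record the visit.
        state.setdefault("visited", []).append(stage)
    return state

final_state = run_pipeline({"ticker": "NVDA"})
print(final_state["visited"])
```

The property to notice is that no single coordinator makes every decision: each stage only reads and enriches the shared state, which is exactly the pattern the LangGraph implementation below formalizes.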
24 | 25 | 26 | ## Table of Contents 27 | - [Setting up the Environment, LLMs, and LangSmith Tracing](#setting-up-the-environment-llms-and-langsmith-tracing) 28 | - [Designing the Shared Memory](#designing-the-shared-memory) 29 | - [Building the Agent Toolkit with Live Data](#building-the-agent-toolkit-with-live-data) 30 | - [Implementing Long-Term Memory for Continuous Learning](#implementing-long-term-memory-for-continuous-learning) 31 | - [Deploying the Analyst Team for 360-Degree Market Intelligence](#deploying-the-analyst-team-for-360-degree-market-intelligence) 32 | - [Building The Bull vs Bear Researcher Team](#building-the-bull-vs-bear-researcher-team) 33 | - [Creating The Trader and Risk Management Agents](#creating-the-trader-and-risk-management-agents) 34 | - [Portfolio Manager Binding Decision](#portfolio-manager-binding-decision) 35 | - [Orchestrating the Agent Society](#orchestrating-the-agent-society) 36 | - [Compiling and Visualizing the Agentic Workflow](#compiling-and-visualizing-the-agentic-workflow) 37 | - [Executing the End-to-End Trading Analysis](#executing-the-end-to-end-trading-analysis) 38 | - [Extracting Clean Signals and Agent Reflection](#extracting-clean-signals-and-agent-reflection) 39 | - [Auditing the System using 3 Evaluation Strategies](#auditing-the-system-using-3-evaluation-strategies) 40 | - [Key Takeaways and Future Directions](#key-takeaways-and-future-directions) 41 | 42 | *** 43 | 44 | ## Setting up the Environment, LLMs, and LangSmith Tracing 45 | First, we should keep our API keys safe and set up tracing with LangSmith. Tracing is important because it lets us see what’s happening inside our multi-agent system step by step. This makes it much easier to debug problems and understand how the agents are working. If we want a reliable system in production, tracing isn’t optional, it’s mandatory. 
46 | 47 | ```python 48 | # First, ensure you have the necessary libraries installed 49 | # !pip install -U langchain langgraph langchain_openai tavily-python yfinance finnhub-python stockstats beautifulsoup4 chromadb rich 50 | 51 | import os 52 | from getpass import getpass 53 | 54 | # Define a helper function to set environment variables securely. 55 | def _set_env(var: str): 56 | # If the environment variable is not already set, prompt the user to enter it. 57 | if not os.environ.get(var): 58 | os.environ[var] = getpass(f"Enter your {var}: ") 59 | 60 | # Set API keys for the services we'll use. 61 | _set_env("OPENAI_API_KEY") 62 | _set_env("FINNHUB_API_KEY") 63 | _set_env("TAVILY_API_KEY") 64 | _set_env("LANGSMITH_API_KEY") 65 | 66 | # Enable LangSmith tracing for full observability of our agentic system. 67 | os.environ["LANGSMITH_TRACING"] = "true" 68 | 69 | # Define the project name for LangSmith to organize traces. 70 | os.environ["LANGSMITH_PROJECT"] = "Deep-Trading-System" 71 | ``` 72 | 73 | We’re setting API keys for **OpenAI**, **Finnhub**, **Tavily**, and **LangSmith**. 74 | * OpenAI powers our LLMs (you can also test with free credits from open-source LLM providers like **Together AI**). 75 | * [Finnhub](https://finnhub.io/) for real-time stock market data (free tier gives limited API calls). 76 | * [Tavily](https://www.tavily.com/) for web search & news (free tier available). 77 | * LangSmith for tracing and observability (free tier lets you track and debug agents easily). 78 | 79 | The `LANGSMITH_PROJECT` variable is particularly important as it makes sure all traces from this run are grouped together in the LangSmith dashboard, making it easy to isolate and analyze this specific execution. 80 | 81 | To make our system modular, we will use a central configuration dictionary. This acts as our control panel, allowing us to easily experiment with different models or parameters without changing the core agent logic. 
82 | 83 | ```python 84 | from pprint import pprint 85 | 86 | # Define our central configuration for this notebook run. 87 | config = { 88 | "results_dir": "./results", 89 | # LLM settings specify which models to use for different cognitive tasks. 90 | "llm_provider": "openai", 91 | "deep_think_llm": "gpt-4o", # A powerful model for complex reasoning and final decisions. 92 | "quick_think_llm": "gpt-4o-mini", # A fast, cheaper model for data processing and initial analysis. 93 | "backend_url": "https://api.openai.com/v1", 94 | # Debate and discussion settings control the flow of collaborative agents. 95 | "max_debate_rounds": 2, # The Bull vs. Bear debate will have 2 rounds. 96 | "max_risk_discuss_rounds": 1, # The Risk team has 1 round of debate. 97 | "max_recur_limit": 100, # Safety limit for agent loops. 98 | # Tool settings control data fetching behavior. 99 | "online_tools": True, # Use live APIs; set to False to use cached data for faster, cheaper runs. 100 | "data_cache_dir": "./data_cache" # Directory for caching online data. 101 | } 102 | # Create the cache directory if it doesn't already exist. 103 | os.makedirs(config["data_cache_dir"], exist_ok=True) 104 | print("Configuration dictionary created:") 105 | pprint(config) 106 | ``` 107 | 108 | These parameters will make more sense later in the blog, but let’s break down a few key ones here. The choice of two different models (`deep_think_llm` and `quick_think_llm`) is a deliberate architectural decision to optimize for both cost and performance: `deep_think_llm` (`gpt-4o`) handles complex reasoning and final decisions. 109 | 110 | * `quick_think_llm` (`gpt-4o-mini`) handles faster, cheaper routine tasks. 111 | * `max_debate_rounds` controls how many rounds the Bull vs. Bear agents argue. 112 | * `max_risk_discuss_rounds` sets how long the Risk team debates. 113 | * `online_tools` / `data_cache_dir` let us switch between live API calls and cached data. 
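Because every later component reads from this single dictionary, switching to a cheaper experimentation profile is just a matter of overriding a few keys before any agents are built. Here is a minimal sketch — the base keys are repeated so the snippet is self-contained, and the override values are illustrative, not recommendations:

```python
# Base keys copied from the config above so this snippet runs standalone.
base_config = {
    "deep_think_llm": "gpt-4o",
    "quick_think_llm": "gpt-4o-mini",
    "max_debate_rounds": 2,
    "online_tools": True,
}

# Hypothetical "cheap run" profile: downgrade the heavy model, shorten the
# debate, and prefer cached data over live API calls.
cheap_overrides = {
    "deep_think_llm": "gpt-4o-mini",
    "max_debate_rounds": 1,
    "online_tools": False,
}

# Merge without mutating the base, so both profiles stay available.
cheap_config = {**base_config, **cheap_overrides}
print(cheap_config["deep_think_llm"], cheap_config["online_tools"])
```

Since the agents are constructed from whatever dictionary they are handed, no agent code has to change when you swap profiles.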
114 | 115 | With our configuration defined, we can now initialize the LLMs that will serve as the cognitive engines for our agents. 116 | ```python 117 | from langchain_openai import ChatOpenAI 118 | 119 | # Initialize the powerful LLM for high-stakes reasoning tasks. 120 | deep_thinking_llm = ChatOpenAI( 121 | model=config["deep_think_llm"], 122 | base_url=config["backend_url"], 123 | temperature=0.1 124 | ) 125 | # Initialize the faster, cost-effective LLM for routine data processing. 126 | quick_thinking_llm = ChatOpenAI( 127 | model=config["quick_think_llm"], 128 | base_url=config["backend_url"], 129 | temperature=0.1 130 | ) 131 | ``` 132 | We have now instantiated our two LLMs. Note the `temperature` parameter is set to `0.1`. 133 | > For financial analysis, we want responses that are deterministic and fact-based, not highly creative. 134 | 135 | A low temperature encourages the model to stick to the most likely, grounded outputs. 136 | 137 | ## Designing the Shared Memory 138 | The `AgentState` is the shared memory and central nervous system of our entire multi-agent system. 139 | > In LangGraph, the state is a central object that is passed between all nodes. As each agent completes its task, it reads from and writes to this state. 140 | 141 | ![Agent State Purpose](https://miro.medium.com/v2/resize:fit:1250/1*3vkP1UrhF_KvjPFbi3t-QA.png) 142 | _Agent State Purpose (Created by [Fareed Khan](https://medium.com/u/b856005e5ecd?source=post_page---user_mention--c13da7effd2d---------------------------------------))_ 143 | 144 | We will define the data structures for this global memory using Python's `TypedDict`. This gives us a strongly-typed, predictable structure, which is important for managing complexity. We will start by defining the sub-states for our two debate teams. 
145 | 146 | ```python 147 | from typing import Annotated, Sequence, List 148 | from typing_extensions import TypedDict 149 | from langgraph.graph import MessagesState 150 | 151 | # State for the researcher team's debate, acting as a dedicated scratchpad. 152 | class InvestDebateState(TypedDict): 153 | bull_history: str # Stores arguments made by the Bull agent. 154 | bear_history: str # Stores arguments made by the Bear agent. 155 | history: str # The full transcript of the debate. 156 | current_response: str # The most recent argument made. 157 | judge_decision: str # The manager's final decision. 158 | count: int # A counter to track the number of debate rounds. 159 | 160 | # State for the risk management team's debate. 161 | class RiskDebateState(TypedDict): 162 | risky_history: str # History of the aggressive risk-taker. 163 | safe_history: str # History of the conservative agent. 164 | neutral_history: str # History of the balanced agent. 165 | history: str # Full transcript of the risk discussion. 166 | latest_speaker: str # Tracks the last agent to speak. 167 | current_risky_response: str 168 | current_safe_response: str 169 | current_neutral_response: str 170 | judge_decision: str # The portfolio manager's final decision. 171 | count: int # Counter for risk discussion rounds. 172 | ``` 173 | 174 | `InvestDebateState` and `RiskDebateState` act as dedicated scratchpads. We are designing this pattern as it keeps the state organized and prevents different debates from interfering with each other. The `history` fields will store the full transcript, providing context for later agents, and the `count` parameter is crucial for our graph conditional logic to know when to end the debate. 175 | 176 | Now, we define the main `AgentState` that incorporates these sub-states. 177 | 178 | ```python 179 | # The main state that will be passed through the entire graph. 180 | # It inherits from MessagesState to include a 'messages' field for chat history. 
181 | class AgentState(MessagesState): 182 | company_of_interest: str # The stock ticker we are analyzing. 183 | trade_date: str # The date for the analysis. 184 | sender: str # Tracks which agent last modified the state. 185 | # Each analyst will populate its own report field. 186 | market_report: str 187 | sentiment_report: str 188 | news_report: str 189 | fundamentals_report: str 190 | # Nested states for the debates. 191 | investment_debate_state: InvestDebateState 192 | investment_plan: str # The plan from the Research Manager. 193 | trader_investment_plan: str # The actionable plan from the Trader. 194 | risk_debate_state: RiskDebateState 195 | final_trade_decision: str # The final decision from the Portfolio Manager. 196 | ``` 197 | 198 | The main `AgentState` is now our complete data schema. Notice how it contains fields for the outputs from each analyst, like `market_report`, and also nests the debate states we just defined. This structured approach allows us to trace the flow of information from the initial raw data all the way to the `final_trade_decision`. 199 | 200 | ## Building the Agent Toolkit with Live Data 201 | An agent is only as good as its tools. So we need a `Toolkit` where we define all the functions our agents can use to interact with the outside world. This is what grounds their reasoning in real, up-to-the-minute data. 202 | 203 | ![Real-time Data Ingestion](https://miro.medium.com/v2/resize:fit:1250/1*eLIekGeI1VbEiGa_GEDBgA.png) 204 | _Real-time Data Ingestion (Created by [Fareed Khan](https://medium.com/u/b856005e5ecd?source=post_page---user_mention--c13da7effd2d---------------------------------------))_ 205 | 206 | Each function is decorated with `@tool` from LangChain, along with `Annotated` type hints, providing a schema that the LLM uses to understand the tool's purpose and inputs. Let's start with a tool to fetch raw historical price data. 
207 | 208 | ```python 209 | import yfinance as yf 210 | from langchain_core.tools import tool 211 | 212 | @tool 213 | def get_yfinance_data( 214 | symbol: Annotated[str, "ticker symbol of the company"], 215 | start_date: Annotated[str, "Start date in yyyy-mm-dd format"], 216 | end_date: Annotated[str, "End date in yyyy-mm-dd format"], 217 | ) -> str: 218 | """Retrieve the stock price data for a given ticker symbol from Yahoo Finance.""" 219 | try: 220 | ticker = yf.Ticker(symbol.upper()) 221 | data = ticker.history(start=start_date, end=end_date) 222 | if data.empty: 223 | return f"No data found for symbol '{symbol}' between {start_date} and {end_date}" 224 | return data.to_csv() 225 | except Exception as e: 226 | return f"Error fetching Yahoo Finance data: {e}" 227 | ``` 228 | 229 | In this function, the `Annotated` type hints are not just for human developers; they provide a description that the LLM uses to understand the purpose of each parameter (e.g., that `symbol` should be a **"ticker symbol"**). 230 | 231 | We return the data as a `.to_csv()` string, a simple format that LLMs are very effective at parsing. Next, we need a tool to derive common technical indicators from that raw data. 232 | 233 | ```python 234 | from stockstats import wrap as stockstats_wrap 235 | 236 | @tool 237 | def get_technical_indicators( 238 | symbol: Annotated[str, "ticker symbol of the company"], 239 | start_date: Annotated[str, "Start date in yyyy-mm-dd format"], 240 | end_date: Annotated[str, "End date in yyyy-mm-dd format"], 241 | ) -> str: 242 | """Retrieve key technical indicators for a stock using the stockstats library.""" 243 | try: 244 | df = yf.download(symbol, start=start_date, end=end_date, progress=False) 245 | if df.empty: 246 | return "No data to calculate indicators." 
247 | stock_df = stockstats_wrap(df) 248 | indicators = stock_df[['macd', 'rsi_14', 'boll', 'boll_ub', 'boll_lb', 'close_50_sma', 'close_200_sma']] 249 | return indicators.tail().to_csv() 250 | except Exception as e: 251 | return f"Error calculating stockstats indicators: {e}" 252 | ``` 253 | 254 | Here, we use the `stockstats` library for its simplicity. We select a specific list of common indicators (MACD, RSI, etc.) and return only the `tail()` of the data. This is an important practical consideration to keep the context passed to the LLM concise and relevant, avoiding unnecessary tokens and cost. 255 | 256 | Now for company-specific news, using the Finnhub API. 257 | ```python 258 | import finnhub 259 | 260 | @tool 261 | def get_finnhub_news(ticker: str, start_date: str, end_date: str) -> str: 262 | """Get company news from Finnhub within a date range.""" 263 | try: 264 | finnhub_client = finnhub.Client(api_key=os.environ["FINNHUB_API_KEY"]) 265 | news_list = finnhub_client.company_news(ticker, _from=start_date, to=end_date) 266 | news_items = [] 267 | for news in news_list[:5]: # Limit to 5 results 268 | news_items.append(f"Headline: {news['headline']}\nSummary: {news['summary']}") 269 | return "\n\n".join(news_items) if news_items else "No Finnhub news found." 270 | except Exception as e: 271 | return f"Error fetching Finnhub news: {e}" 272 | ``` 273 | 274 | The tools above provide structured data. For less tangible factors like market buzz, we need a general web search tool. We will use **Tavily**. We will initialize it once for efficiency. 275 | 276 | ```python 277 | from langchain_community.tools.tavily_search import TavilySearchResults 278 | 279 | # Initialize the Tavily search tool once. We can reuse this instance for multiple specialized tools. 280 | tavily_tool = TavilySearchResults(max_results=3) 281 | ``` 282 | 283 | Now we create three specialized search tools. This is a critical design choice. 
While they all use the same Tavily engine, providing the LLM with distinct tools like `get_social_media_sentiment` versus a generic `search_web` simplifies its decision-making process. It’s easier for the LLM to choose the right tool for the job when its purpose is narrow and clearly defined. 284 | 285 | ```python 286 | @tool 287 | def get_social_media_sentiment(ticker: str, trade_date: str) -> str: 288 | """Performs a live web search for social media sentiment regarding a stock.""" 289 | query = f"social media sentiment and discussions for {ticker} stock around {trade_date}" 290 | return tavily_tool.invoke({"query": query}) 291 | 292 | @tool 293 | def get_fundamental_analysis(ticker: str, trade_date: str) -> str: 294 | """Performs a live web search for recent fundamental analysis of a stock.""" 295 | query = f"fundamental analysis and key financial metrics for {ticker} stock published around {trade_date}" 296 | return tavily_tool.invoke({"query": query}) 297 | @tool 298 | def get_macroeconomic_news(trade_date: str) -> str: 299 | """Performs a live web search for macroeconomic news relevant to the stock market.""" 300 | query = f"macroeconomic news and market trends affecting the stock market on {trade_date}" 301 | return tavily_tool.invoke({"query": query}) 302 | ``` 303 | 304 | Finally, we’ll aggregate all these functions into a single `Toolkit` class for clean, organized access. 305 | 306 | ```python 307 | # The Toolkit class aggregates all defined tools into a single, convenient object. 
308 | class Toolkit: 309 | def __init__(self, config): 310 | self.config = config 311 | self.get_yfinance_data = get_yfinance_data 312 | self.get_technical_indicators = get_technical_indicators 313 | self.get_finnhub_news = get_finnhub_news 314 | self.get_social_media_sentiment = get_social_media_sentiment 315 | self.get_fundamental_analysis = get_fundamental_analysis 316 | self.get_macroeconomic_news = get_macroeconomic_news 317 | 318 | # Instantiate the Toolkit, making all tools available through this single object. 319 | toolkit = Toolkit(config) 320 | print(f"Toolkit class defined and instantiated with live data tools.") 321 | ``` 322 | 323 | Our foundation for agent action is now complete. We have a comprehensive suite of tools that allows our system to gather a rich, multi-faceted view of any financial asset. 324 | 325 | ## Implementing Long-Term Memory for Continuous Learning 326 | For our agents to improve over time, they need long-term memory. To enable true learning, agents must be able to reflect on past decisions and retrieve those lessons when faced with similar situations in the future. 327 | 328 | ![Long term memory flow](https://miro.medium.com/v2/resize:fit:1250/1*OUUwD6deuzggOaFDj5J1qQ.png) 329 | _Long term memory flow (Created by [Fareed Khan](https://medium.com/u/b856005e5ecd?source=post_page---user_mention--c13da7effd2d---------------------------------------))_ 330 | 331 | The `FinancialSituationMemory` class is the core of this learning loop. It uses a `ChromaDB` vector store to save textual summaries of past situations and the lessons learned from them. 332 | 333 | ```python 334 | import chromadb 335 | from openai import OpenAI 336 | 337 | # The FinancialSituationMemory class provides long-term memory 338 | # for storing and retrieving financial situations + recommendations. 
339 | class FinancialSituationMemory: 340 | def __init__(self, name, config): 341 | # Use OpenAI’s small embedding model for vectorizing text 342 | self.embedding_model = "text-embedding-3-small" 343 | 344 | # Initialize OpenAI client (pointing to your configured backend) 345 | self.client = OpenAI(base_url=config["backend_url"]) 346 | 347 | # Create a ChromaDB client (with reset allowed for testing) 348 | self.chroma_client = chromadb.Client(chromadb.config.Settings(allow_reset=True)) 349 | 350 | # Create a collection (like a table) to store situations + advice 351 | self.situation_collection = self.chroma_client.create_collection(name=name) 352 | 353 | def get_embedding(self, text): 354 | # Generate an embedding (vector) for the given text 355 | response = self.client.embeddings.create(model=self.embedding_model, input=text) 356 | return response.data[0].embedding 357 | 358 | def add_situations(self, situations_and_advice): 359 | # Add new situations and recommendations to memory 360 | if not situations_and_advice: 361 | return 362 | 363 | # Offset ensures unique IDs (in case new data is added later) 364 | offset = self.situation_collection.count() 365 | ids = [str(offset + i) for i, _ in enumerate(situations_and_advice)] 366 | 367 | # Separate situations and their corresponding advice 368 | situations = [s for s, r in situations_and_advice] 369 | recommendations = [r for s, r in situations_and_advice] 370 | 371 | # Generate embeddings for all situations 372 | embeddings = [self.get_embedding(s) for s in situations] 373 | 374 | # Store everything in Chroma (vector DB) 375 | self.situation_collection.add( 376 | documents=situations, 377 | metadatas=[{"recommendation": rec} for rec in recommendations], 378 | embeddings=embeddings, 379 | ids=ids, 380 | ) 381 | 382 | def get_memories(self, current_situation, n_matches=1): 383 | # Retrieve the most similar past situations for a given query 384 | if self.situation_collection.count() == 0: 385 | return [] 386 | 387 | # 
Embed the new/current situation 388 | query_embedding = self.get_embedding(current_situation) 389 | 390 | # Query the collection for similar embeddings 391 | results = self.situation_collection.query( 392 | query_embeddings=[query_embedding], 393 | n_results=min(n_matches, self.situation_collection.count()), 394 | include=["metadatas"], # Only return recommendations 395 | ) 396 | 397 | # Return extracted recommendations from the matches 398 | return [{'recommendation': meta['recommendation']} for meta in results['metadatas'][0]] 399 | ``` 400 | Let’s break down the key methods. 401 | 402 | 1. The `__init__` method sets up a unique ChromaDB `collection` for each agent, giving them their own private memory space. 403 | 2. The `add_situations` method is where learning happens: it takes a situation (the context) and a lesson, creates a vector embedding of the situation, and stores both in the database. 404 | 3. The `metadatas` parameter is used to store the actual lesson text, while the `documents` store the context. 405 | 4. Finally, `get_memories` is the retrieval step. It takes the `current_situation`, creates a `query_embedding`, and performs a similarity search to find the most relevant lessons from the past. 406 | 407 | Now, we will create a dedicated memory instance for each learning agent. 408 | ```python 409 | # Create a dedicated memory instance for each agent that learns. 410 | bull_memory = FinancialSituationMemory("bull_memory", config) 411 | bear_memory = FinancialSituationMemory("bear_memory", config) 412 | trader_memory = FinancialSituationMemory("trader_memory", config) 413 | invest_judge_memory = FinancialSituationMemory("invest_judge_memory", config) 414 | risk_manager_memory = FinancialSituationMemory("risk_manager_memory", config) 415 | ``` 416 | We have now created five separate memory instances. 
This specialization is important because a lesson learned by the Bull agent (e.g., “In a strong uptrend, valuation concerns are less important”) might not be relevant for the more conservative Safe Risk Analyst. 417 | > By giving them separate memories, we ensure the lessons they retrieve are specific to their roles. 418 | 419 | With our environment, state, tools, and memory all defined, the foundational layer of our system is complete. We are now ready to start building the agents themselves. 420 | 421 | ## Deploying the Analyst Team for 360-Degree Market Intelligence 422 | With our foundation in place, it’s time to introduce the first team of agents, the **Analysts**. This team is the intelligence-gathering arm of our firm. 423 | > A successful trading decision cannot be made in a vacuum; it requires a holistic understanding of the asset from multiple angles. 424 | 425 | We will therefore create **four specialized analysts**, each responsible for a unique domain. 426 | 427 | ![Analyst Factory](https://miro.medium.com/v2/resize:fit:1250/1*GupCcdQaWUXeIq3bAat54Q.png) 428 | _Analyst Factory (Created by [Fareed Khan](https://medium.com/u/b856005e5ecd?source=post_page---user_mention--c13da7effd2d---------------------------------------))_ 429 | 430 | Their collective goal is to produce a comprehensive, 360-degree view of the stock and its market environment, populating our `AgentState` with the raw intelligence needed for the next stage of strategic analysis. 431 | 432 | To avoid repeating code, we will first build a “factory” function. This is a powerful software engineering pattern that allows us to create each unique analyst node from a common template. 433 | 434 | ```python 435 | from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder 436 | 437 | # This function is a factory that creates a LangGraph node for a specific type of analyst.
438 | def create_analyst_node(llm, toolkit, system_message, tools, output_field): 439 | """ 440 | Creates a node for an analyst agent. 441 | Args: 442 | llm: The language model instance to be used by the agent. 443 | toolkit: The collection of tools available to the agent. 444 | system_message: The specific instructions defining the agent's role and goals. 445 | tools: A list of specific tools from the toolkit that this agent is allowed to use. 446 | output_field: The key in the AgentState where this agent's final report will be stored. 447 | """ 448 | # Define the prompt template for the analyst agent. 449 | prompt = ChatPromptTemplate.from_messages([ 450 | ("system", 451 | "You are a helpful AI assistant, collaborating with other assistants." 452 | " Use the provided tools to progress towards answering the question." 453 | " If you are unable to fully answer, that's OK; another assistant with different tools" 454 | " will help where you left off. Execute what you can to make progress." 455 | " You have access to the following tools: {tool_names}.\n{system_message}" 456 | " For your reference, the current date is {current_date}. The company we want to look at is {ticker}"), 457 | # MessagesPlaceholder allows us to pass in the conversation history. 458 | MessagesPlaceholder(variable_name="messages"), 459 | ]) 460 | # Partially fill the prompt with the specific system message and tool names for this analyst. 461 | prompt = prompt.partial(system_message=system_message) 462 | prompt = prompt.partial(tool_names=", ".join([tool.name for tool in tools])) 463 | # Bind the specified tools to the LLM. This tells the LLM which functions it can call. 464 | chain = prompt | llm.bind_tools(tools) 465 | # This is the actual function that will be executed as a node in the graph. 466 | def analyst_node(state): 467 | # Fill in the final pieces of the prompt with data from the current state. 
468 | result = chain.invoke({ 469 | "messages": state["messages"], 470 | "current_date": state["trade_date"], "ticker": state["company_of_interest"]}) 471 | report = "" 472 | # If the LLM didn't call a tool, it means it has generated the final report. 473 | if not result.tool_calls: 474 | report = result.content 475 | # Return the LLM's response and the final report to update the state. 476 | return {"messages": [result], output_field: report} 477 | return analyst_node 478 | ``` 479 | 480 | Let’s break down this factory function. `create_analyst_node` is a higher-order function that returns another function (`analyst_node`), which will serve as our actual node in the LangGraph workflow. 481 | * The `system_message` parameter is crucial; it's where we inject the unique "personality" and goals for each analyst. 482 | * The `tools` parameter enforces the division of labor—the Market Analyst can't access social media tools, and vice versa. 483 | * The `llm.bind_tools(tools)` call is what makes the agent tool-using. It provides the LLM with the schemas of the available tools so it can decide which to call. 484 | 485 | Now that we have our factory, we can create our first specialist: the **Market Analyst**. Its job is purely quantitative, focusing on price action and technical indicators. We will equip it only with the tools necessary for this task. 486 | 487 | ```python 488 | # Market Analyst: Focuses on technical indicators and price action. 489 | market_analyst_system_message = "You are a trading assistant specialized in analyzing financial markets. Your role is to select the most relevant technical indicators to analyze a stock's price action, momentum, and volatility. You must use your tools to get historical data and then generate a report with your findings, including a summary table."
490 | market_analyst_node = create_analyst_node( 491 | quick_thinking_llm, 492 | toolkit, 493 | market_analyst_system_message, 494 | [toolkit.get_yfinance_data, toolkit.get_technical_indicators], 495 | "market_report" 496 | ) 497 | ``` 498 | 499 | In this block, we define the `market_analyst_system_message` to give the agent a clear persona and objective. We then call our factory, passing in the `quick_thinking_llm`, the `toolkit`, this message, and a specific list of tools, `get_yfinance_data` and `get_technical_indicators`. Finally, we specify that its output should be saved to the `market_report` field in our `AgentState`. 500 | 501 | Next, the **Social Media Analyst**. Its role is to capture the qualitative, often unpredictable, public mood around a stock. 502 | 503 | ```python 504 | # Social Media Analyst: Gauges public sentiment. 505 | social_analyst_system_message = "You are a social media analyst. Your job is to analyze social media posts and public sentiment for a specific company over the past week. Use your tools to find relevant discussions and write a comprehensive report detailing your analysis, insights, and implications for traders, including a summary table." 506 | social_analyst_node = create_analyst_node( 507 | quick_thinking_llm, 508 | toolkit, 509 | social_analyst_system_message, 510 | [toolkit.get_social_media_sentiment], 511 | "sentiment_report" 512 | ) 513 | ``` 514 | 515 | Here, we’ve defined the agent responsible for sentiment analysis. Notice that its tool list is very narrow, containing only `get_social_media_sentiment`. 516 | 517 | This specialization is a key principle of multi-agent design, ensuring each agent focuses on its core competency. 518 | 519 | Our third specialist is the **News Analyst**, responsible for providing both company-specific and macroeconomic context. 520 | ```python 521 | # News Analyst: Covers company-specific and macroeconomic news. 
522 | news_analyst_system_message = "You are a news researcher analyzing recent news and trends over the past week. Write a comprehensive report on the current state of the world relevant for trading and macroeconomics. Use your tools to be comprehensive and provide detailed analysis, including a summary table." 523 | news_analyst_node = create_analyst_node( 524 | quick_thinking_llm, 525 | toolkit, 526 | news_analyst_system_message, 527 | [toolkit.get_finnhub_news, toolkit.get_macroeconomic_news], 528 | "news_report" 529 | ) 530 | ``` 531 | The News Analyst is given two tools. This is a deliberate choice: `get_finnhub_news` provides micro-level, company-specific information, while `get_macroeconomic_news` provides the macro-level context. A comprehensive news analysis requires both perspectives. 532 | 533 | Finally, our **Fundamentals Analyst** will investigate the company’s financial health. 534 | ```python 535 | # Fundamentals Analyst: Dives into the company's financial health. 536 | fundamentals_analyst_system_message = "You are a researcher analyzing fundamental information about a company. Write a comprehensive report on the company's financials, insider sentiment, and transactions to gain a full view of its fundamental health, including a summary table." 537 | fundamentals_analyst_node = create_analyst_node( 538 | quick_thinking_llm, 539 | toolkit, 540 | fundamentals_analyst_system_message, 541 | [toolkit.get_fundamental_analysis], 542 | "fundamentals_report" 543 | ) 544 | ``` 545 | 546 | It’s important to understand *how* these agents work. They use a pattern called **ReAct (Reasoning and Acting)**. This isn’t a single LLM call; it’s a loop: 547 | 548 | 1. **Reason:** The LLM receives the prompt and decides if it needs a tool. 549 | 2. **Act:** If so, it generates a `tool_call` (e.g., `toolkit.get_yfinance_data(...)`). 550 | 3. **Observe:** Our graph executes this tool, and the result (the data) is passed back to the LLM. 551 | 4.
**Repeat:** The LLM now has new information and decides its next step — either calling another tool or generating the final report. 552 | 553 | This loop allows the agents to perform complex, multi-step data-gathering tasks autonomously. To manage this loop, we need a helper function. 554 | 555 | ```python 556 | from langgraph.prebuilt import ToolNode, tools_condition 557 | from langchain_core.messages import HumanMessage 558 | import datetime 559 | from rich.console import Console 560 | from rich.markdown import Markdown 561 | 562 | # Initialize a console for rich printing. 563 | console = Console() 564 | # Helper function to run a single analyst's ReAct loop. 565 | def run_analyst(analyst_node, initial_state): 566 | state = dict(initial_state)  # work on a copy so we don't mutate the caller's state 567 | # Get all available tools from our toolkit instance. 568 | all_tools_in_toolkit = [getattr(toolkit, name) for name in dir(toolkit) if callable(getattr(toolkit, name)) and not name.startswith("__")] 569 | # The ToolNode is a special LangGraph node that executes tool calls. 570 | tool_node = ToolNode(all_tools_in_toolkit) 571 | # The ReAct loop can have up to 5 steps of reasoning and tool calls. 572 | for _ in range(5): 573 | result = analyst_node(state) 574 | # Merge the agent's output into the state, accumulating the message history. 575 | state = {**state, **result, "messages": state["messages"] + result["messages"]} 576 | # The tools_condition checks if the LLM's last message was a tool call. 577 | if tools_condition(state) == "tools": 578 | # If so, execute the tools and append their outputs to the history. 579 | state["messages"] = state["messages"] + tool_node.invoke({"messages": state["messages"]})["messages"] 580 | else: 581 | # If not, the agent is done, so we break the loop. 582 | break 583 | return state 584 | ``` 585 | The `run_analyst` function orchestrates the ReAct loop for a single agent. The `tools_condition` function is a LangGraph utility that inspects the agent's last message: if it contains `tool_calls`, it returns "tools", directing the flow to the `tool_node` for execution; otherwise, the agent has finished its work. Note that we merge each result into the running state rather than replacing the state wholesale, so fields like `trade_date` and `company_of_interest` survive across loop iterations. 586 | Now, let’s set our initial state and execute the first analyst.
587 | ```python 588 | TICKER = "NVDA" 589 | # Use a recent date for live data fetching 590 | TRADE_DATE = (datetime.date.today() - datetime.timedelta(days=2)).strftime('%Y-%m-%d') 591 | 592 | # Define the initial state for the graph run. 593 | initial_state = AgentState( 594 | messages=[HumanMessage(content=f"Analyze {TICKER} for trading on {TRADE_DATE}")], 595 | company_of_interest=TICKER, 596 | trade_date=TRADE_DATE, 597 | # Initialize the debate states with default empty values. 598 | investment_debate_state=InvestDebateState({'history': '', 'current_response': '', 'count': 0, 'bull_history': '', 'bear_history': '', 'judge_decision': ''}), 599 | risk_debate_state=RiskDebateState({'history': '', 'latest_speaker': '', 'current_risky_response': '', 'current_safe_response': '', 'current_neutral_response': '', 'count': 0, 'risky_history': '', 'safe_history': '', 'neutral_history': '', 'judge_decision': ''}) 600 | ) 601 | # Run Market Analyst 602 | print("Running Market Analyst...") 603 | market_analyst_result = run_analyst(market_analyst_node, initial_state) 604 | initial_state['market_report'] = market_analyst_result.get('market_report', 'Failed to generate report.') 605 | console.print("----- Market Analyst Report -----") 606 | console.print(Markdown(initial_state['market_report'])) 607 | ``` 608 | 609 | We are targeting NVIDIA (NVDA) for now. First, we need to observe the market by running the Market Analyst; this is what we get. 610 | ```bash 611 | # Running Market Analyst... 612 | ----- Market Analyst Report ----- 613 | Based on the technical analysis of NVDA, the stock demonstrates a strong bullish trend. The price is consistently trading above its 50-day and 200-day simple moving averages, which are both in a clear uptrend. The MACD line is above the signal line, confirming positive momentum. The RSI_14 is elevated but not yet in overbought territory, suggesting there is still room for potential upside.
The Bollinger Bands show a recent expansion, indicating an increase in volatility which often accompanies strong price moves. In summary, all key technical indicators point towards continued bullish strength. 614 | 615 | | Indicator | Signal | Insight | 616 | |----------------|--------------|---------------------------------------------------------------| 617 | | SMAs (50, 200) | Bullish | Confirms a strong, sustained uptrend. | 618 | | MACD | Bullish | Positive momentum is currently in control. | 619 | | RSI (14) | Strong | Indicates strong buying pressure, but not yet exhausted. | 620 | | Bollinger Bands| Expanding | Suggests increasing volatility and potential for a breakout. | 621 | ``` 622 | The Market Analyst has successfully executed its task. By inspecting the LangSmith trace, we would see a two-step process: 623 | 624 | 1. First, it called `get_yfinance_data` to fetch prices; then it used that data to call `get_technical_indicators`. 625 | 2. The final report is a clear, structured summary with a table, which is exactly what its prompt requested. This report is now stored in `initial_state['market_report']`. 626 | 627 | The Market Analyst has given us a strong quantitative signal. Now, let’s see if the public mood aligns with the charts. 628 | ```python 629 | # Run Social Media Analyst 630 | print("\nRunning Social Media Analyst...") 631 | social_analyst_result = run_analyst(social_analyst_node, initial_state) 632 | initial_state['sentiment_report'] = social_analyst_result.get('sentiment_report', 'Failed to generate report.') 633 | console.print("----- Social Media Analyst Report -----") 634 | console.print(Markdown(initial_state['sentiment_report'])) 635 | ``` 636 | ```bash 637 | Running Social Media Analyst... 638 | ----- Social Media Analyst Report ----- 639 | Social media sentiment for NVDA is overwhelmingly bullish.
Platforms like X (formerly Twitter) and Reddit show a high volume of positive discussion, primarily centered around the company’s dominance in the AI chip market and anticipation of strong earnings. Key influencers and retail communities are actively promoting a buy-and-hold strategy. There is some minor chatter regarding the stock’s high valuation, but this is largely drowned out by the positive consensus. The overall sentiment is a strong tailwind for the stock price. 640 | 641 | | Platform | Sentiment | Key Themes | 642 | |---------------|--------------|-------------------------------------------------| 643 | | X (Twitter) | Very Bullish | AI Dominance, Analyst Upgrades, Product Hype | 644 | | Reddit | Very Bullish | HODL mentality, Earnings Predictions, Memes | 645 | ``` 646 | The Social Media Analyst is confirming the bullish thesis. Its report, generated from a live Tavily web search, captures the qualitative “hype” around the stock. This unstructured data is now distilled into a structured report and added to our `AgentState`. 647 | 648 | Next, we need to verify this confirmation by gathering the news context. 649 | ```python 650 | # Run News Analyst 651 | print("\nRunning News Analyst...") 652 | news_analyst_result = run_analyst(news_analyst_node, initial_state) 653 | initial_state['news_report'] = news_analyst_result.get('news_report', 'Failed to generate report.') 654 | console.print("----- News Analyst Report -----") 655 | console.print(Markdown(initial_state['news_report'])) 656 | ``` 657 | ```bash 658 | Running News Analyst... 659 | ----- News Analyst Report ----- 660 | The news environment for NVDA is positive. Recent company-specific headlines from Finnhub highlight new product announcements and partnerships in the automotive and enterprise AI sectors. Broader macroeconomic news has been favorable for tech stocks, with recent inflation data coming in as expected, calming fears of aggressive central bank policies.
There are no significant negative headlines concerning NVDA or the semiconductor industry in the past week. 661 | 662 | | News Category | Impact | Summary | 663 | |------------------|-----------|--------------------------------------------------------------------------| 664 | | Company-Specific | Positive | New product announcements and strategic partnerships signal continued growth. | 665 | | Macroeconomic | Neutral+ | Stable inflation and interest rate outlook provide a supportive backdrop. | 666 | ``` 667 | The News Analyst report adds another layer, confirming that both company-specific and macroeconomic news are supportive. The agent correctly used both of its assigned tools to build this comprehensive picture. 668 | 669 | Finally, let’s check the company’s underlying financial health. 670 | ```python 671 | # Run Fundamentals Analyst 672 | print("\nRunning Fundamentals Analyst...") 673 | fundamentals_analyst_result = run_analyst(fundamentals_analyst_node, initial_state) 674 | initial_state['fundamentals_report'] = fundamentals_analyst_result.get('fundamentals_report', 'Failed to generate report.') 675 | console.print("----- Fundamentals Analyst Report -----") 676 | console.print(Markdown(initial_state['fundamentals_report'])) 677 | ``` 678 | ```bash 679 | Running Fundamentals Analyst... 680 | ----- Fundamentals Analyst Report ----- 681 | The fundamental picture for NVDA is exceptionally strong, though accompanied by a premium valuation. Web search results confirm that recent earnings reports have consistently beaten analyst expectations, driven by explosive growth in data center revenue. Key metrics like gross margin and return on equity are best-in-class. While the Price-to-Earnings (P/E) ratio is high, it is supported by a very high forward growth rate (PEG ratio is more reasonable). The company’s balance sheet is robust with a significant cash reserve. This is a fundamentally sound company on a powerful growth trajectory.
682 | 683 | | Metric | Status | Insight | 684 | |---------------------|-------------|-----------------------------------------------------------------------| 685 | | Revenue Growth | Exceptional | Data center segment is experiencing hyper-growth due to AI demand. | 686 | | Profit Margins | Excellent | Demonstrates strong pricing power and operational efficiency. | 687 | | Valuation (P/E) | High | The market has priced in significant future earnings growth. | 688 | | Balance Sheet | Strong | Ample cash reserves provide flexibility for R&D and acquisitions. | 689 | ``` 690 | The final report from the Fundamentals Analyst confirms the strong growth story but also introduces the first note of caution … 691 | > “premium valuation” and a “High” P/E ratio. 692 | 693 | This is a crucial piece of conflicting information. While everything else appears bullish, the high valuation presents a clear risk. 694 | 695 | Our `initial_state` object is now fully populated with a rich, multi-faceted view of NVDA's market position. With this comprehensive but now clearly conflicting data, the stage is set for our Researcher Team to debate its meaning and forge a coherent strategy. 696 | 697 | ## Building The Bull vs Bear Researcher Team 698 | With the four analyst reports compiled, our `AgentState` is now packed with raw intelligence. However, raw data can be conflicting and requires interpretation. As we saw, the Fundamentals Analyst introduced a key risk: a high valuation that contradicts the otherwise overwhelmingly positive signals. 699 | 700 | > This is where the **Researcher Team** comes in.
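Since every downstream prompt interpolates these four reports, a silently failed tool call upstream would propagate an empty report into the entire debate. A minimal guard can catch that before the Researcher Team starts; this sketch assumes the `AgentState` report keys used in this notebook and plain-dict access (which a `TypedDict` state supports):

```python
# The four report fields the analyst team is expected to populate.
REPORT_FIELDS = ["market_report", "sentiment_report", "news_report", "fundamentals_report"]

def missing_reports(state):
    # Return the names of any analyst reports that are empty or absent.
    return [field for field in REPORT_FIELDS if not state.get(field, "").strip()]

# Example with a partially populated state:
demo_state = {"market_report": "Bullish trend.", "sentiment_report": "Very positive."}
print(missing_reports(demo_state))  # -> ['news_report', 'fundamentals_report']
```

Calling `missing_reports(initial_state)` right before the debate and raising (or re-running the offending analyst) if the list is non-empty keeps a single upstream failure from silently skewing every later agent.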
701 | 702 | ![Resolving Conflicts](https://miro.medium.com/v2/resize:fit:1250/1*S4gZ-_-zvOe1PNv_fAW1Gw.png) 703 | _Resolving Conflicts (Created by [Fareed Khan](https://medium.com/u/b856005e5ecd?source=post_page---user_mention--c13da7effd2d---------------------------------------))_ 704 | 705 | This team’s purpose is to critically evaluate the evidence by staging a structured, adversarial debate between two opposing viewpoints: a **Bull** and a **Bear**. This process is designed to prevent confirmation bias and stress-test the investment thesis from both sides. A **Research Manager** then oversees this debate and has the crucial task of synthesizing the arguments into a single, coherent investment plan. 706 | 707 | First, we need to define the logic for these debaters. We’ll create a factory function, similar to the one for our analysts, to produce the researcher nodes. 708 | 709 | ```python 710 | # This function is a factory that creates a LangGraph node for a researcher agent (Bull or Bear). 711 | def create_researcher_node(llm, memory, role_prompt, agent_name): 712 | """ 713 | Creates a node for a researcher agent. 714 | Args: 715 | llm: The language model instance to be used by the agent. 716 | memory: The long-term memory instance for this agent to learn from past experiences. 717 | role_prompt: The specific system prompt defining the agent's persona (Bull or Bear). 718 | agent_name: The name of the agent, used for logging and identifying arguments. 719 | """ 720 | def researcher_node(state): 721 | # First, combine all analyst reports into a single summary for context. 722 | situation_summary = f""" 723 | Market Report: {state['market_report']} 724 | Sentiment Report: {state['sentiment_report']} 725 | News Report: {state['news_report']} 726 | Fundamentals Report: {state['fundamentals_report']} 727 | """ 728 | # Retrieve relevant memories from past, similar situations.
729 | past_memories = memory.get_memories(situation_summary) 730 | past_memory_str = "\n".join([mem['recommendation'] for mem in past_memories]) 731 | 732 | # Construct the full prompt for the LLM. 733 | prompt = f"""{role_prompt} 734 | Here is the current state of the analysis: 735 | {situation_summary} 736 | Conversation history: {state['investment_debate_state']['history']} 737 | Your opponent's last argument: {state['investment_debate_state']['current_response']} 738 | Reflections from similar past situations: {past_memory_str or 'No past memories found.'} 739 | Based on all this information, present your argument conversationally.""" 740 | 741 | # Invoke the LLM to generate the argument. 742 | response = llm.invoke(prompt) 743 | argument = f"{agent_name}: {response.content}" 744 | 745 | # Update the debate state with the new argument. 746 | debate_state = state['investment_debate_state'].copy() 747 | debate_state['history'] += "\n" + argument 748 | # Update the specific history for this agent (Bull or Bear). 749 | if agent_name == 'Bull Analyst': 750 | debate_state['bull_history'] += "\n" + argument 751 | else: 752 | debate_state['bear_history'] += "\n" + argument 753 | debate_state['current_response'] = argument 754 | debate_state['count'] += 1 755 | return {"investment_debate_state": debate_state} 756 | return researcher_node 757 | ``` 758 | The `create_researcher_node` function is the core of our debate logic. Let's look at its key components: 759 | * `situation_summary`: It consolidates all four analyst reports into a single block of text. This ensures both debaters are working from the same set of facts. 760 | * `past_memories`: Before forming an argument, the agent queries its `memory` object. This is a critical step for learning; it allows the agent to recall lessons from previous trades to inform its current stance. 
761 | * Prompt Structure: The prompt is carefully constructed to include the agent’s role, the analyst reports, the full debate history, the opponent’s most recent argument, and its own long-term memories. This rich context enables sophisticated rebuttals. 762 | * State Updates: The function returns an updated `investment_debate_state`, appending the new argument to the `history` and updating the `count`. 763 | 764 | Now, let’s define the specific personas for our Bull and Bear and create their nodes. 765 | ```python 766 | # The Bull's persona is optimistic, focusing on strengths and growth. 767 | bull_prompt = "You are a Bull Analyst. Your goal is to argue for investing in the stock. Focus on growth potential, competitive advantages, and positive indicators from the reports. Counter the bear's arguments effectively." 768 | 769 | # The Bear's persona is pessimistic, focusing on risks and weaknesses. 770 | bear_prompt = "You are a Bear Analyst. Your goal is to argue against investing in the stock. Focus on risks, challenges, and negative indicators. Counter the bull's arguments effectively." 771 | # Create the callable nodes using our factory function. 772 | bull_researcher_node = create_researcher_node(quick_thinking_llm, bull_memory, bull_prompt, "Bull Analyst") 773 | bear_researcher_node = create_researcher_node(quick_thinking_llm, bear_memory, bear_prompt, "Bear Analyst") 774 | ``` 775 | 776 | With the debaters ready, we need a judge. The **Research Manager** agent will review the entire debate transcript and produce the final, synthesized investment plan. This is a high-stakes reasoning task, so we will use our powerful `deep_thinking_llm`. 777 | ```python 778 | # This function creates the Research Manager node. 779 | def create_research_manager(llm, memory): 780 | def research_manager_node(state): 781 | # The prompt instructs the manager to act as a judge and synthesizer. 
782 | prompt = f"""As the Research Manager, your role is to critically evaluate the debate between the Bull and Bear analysts and make a definitive decision. 783 | Summarize the key points, then provide a clear recommendation: Buy, Sell, or Hold. Develop a detailed investment plan for the trader, including your rationale and strategic actions. 784 | 785 | Debate History: 786 | {state['investment_debate_state']['history']}""" 787 | response = llm.invoke(prompt) 788 | # The output is the final investment plan, which will be passed to the Trader. 789 | return {"investment_plan": response.content} 790 | return research_manager_node 791 | 792 | # Create the callable manager node. 793 | research_manager_node = create_research_manager(deep_thinking_llm, invest_judge_memory) 794 | print("Researcher and Manager agent creation functions are now available.") 795 | ``` 796 | Now, let’s simulate the debate. We’ll run a loop for the `max_debate_rounds` specified in our config. In each round, the Bull will present its argument, and then the Bear will rebut it, with the state being updated after each turn. 797 | ```python 798 | # We'll use the state from the end of the Analyst section. 799 | current_state = initial_state 800 | 801 | # Loop through the number of debate rounds defined in our config. 802 | for i in range(config['max_debate_rounds']): 803 | print(f"--- Investment Debate Round {i+1} ---") 804 | # The Bull goes first. 805 | bull_result = bull_researcher_node(current_state) 806 | current_state['investment_debate_state'] = bull_result['investment_debate_state'] 807 | console.print("\n**Bull's Argument:**") 808 | # We parse the response to print only the new argument. 809 | console.print(Markdown(current_state['investment_debate_state']['current_response'].replace('Bull Analyst: ', ''))) 810 | # Then, the Bear rebuts. 
811 | bear_result = bear_researcher_node(current_state) 812 | current_state['investment_debate_state'] = bear_result['investment_debate_state'] 813 | console.print("\n**Bear's Rebuttal:**") 814 | console.print(Markdown(current_state['investment_debate_state']['current_response'].replace('Bear Analyst: ', ''))) 815 | print("\n") 816 | # After the loops, store the final debate state back into the main initial_state 817 | initial_state['investment_debate_state'] = current_state['investment_debate_state'] 818 | ``` 819 | ```bash 820 | --- Investment Debate Round 1 --- 821 | 822 | **Bull's Argument:** 823 | The case for NVDA is ironclad. We have a perfect alignment across all vectors: technicals show a clear and sustained uptrend, fundamentals are driven by the generational AI boom, social media sentiment is overwhelmingly positive, and the news cycle is providing nothing but tailwinds. Every piece of data points to the same conclusion. This is a market leader firing on all cylinders in a sector with secular growth. To not be long this stock is to ignore the most obvious trend in the market today. 824 | 825 | **Bear's Rebuttal:** 826 | My opponent sees a perfect picture, but I see a stock priced for a future that has no room for error. The high P/E ratio is a major vulnerability. The 'overwhelmingly bullish' sentiment is a classic sign of market euphoria, which often precedes a sharp correction. While the fundamentals are currently strong, the semiconductor industry is notoriously cyclical. Any hint of a slowdown in AI spending or increased competition could cause this stock to fall dramatically. A prudent strategy would be to wait for a significant pullback to establish a position, not to chase it at all-time highs. 827 | 828 | --- Investment Debate Round 2 --- 829 | **Bull's Argument:** 830 | The Bear's cyclicality argument is outdated. The AI revolution is not a cycle; it's a structural shift in the global economy, and NVDA is providing the essential hardware for it.
Waiting for a 'significant pullback' in a stock with this kind of momentum has historically been a losing strategy. The valuation is high because the growth is generational. We should be buying strength, not waiting for a weakness that may never come. 831 | **Bear's Rebuttal:** 832 | Calling the AI boom non-cyclical is pure speculation. All industries, especially in technology, experience cycles of boom and bust. Even if the long-term trend is up, the risk of a 30-40% drawdown from these levels is very real. The current price already reflects years of future growth. A simple HOLD recommendation allows us to avoid the significant downside risk while we wait for a more attractive risk/reward entry point. Buying now is a gamble, not an investment. 833 | ``` 834 | The output shows our debate working perfectly. 835 | 836 | * In Round 1, the Bull presents a strong opening statement based on the confluence of positive data. The Bear immediately counters by seizing on the one negative signal from the analyst reports: the high valuation. 837 | * In Round 2, the debaters engage in direct rebuttals. The Bull dismisses the risk by reframing it (“structural shift”), while the Bear doubles down on it by quantifying the potential downside (“30–40% drawdown”). 838 | 839 | This debate has successfully surfaced the core tension of the investment case: powerful momentum vs. significant valuation risk. Now, the full transcript is ready for the Research Manager to synthesize into a final plan. 840 | ```python 841 | print("Running Research Manager...") 842 | # The manager receives the final state containing the full debate history. 843 | manager_result = research_manager_node(initial_state) 844 | # The manager's output is stored in the 'investment_plan' field.
845 | initial_state['investment_plan'] = manager_result['investment_plan']
846 | 
847 | console.print("\n----- Research Manager's Investment Plan -----")
848 | console.print(Markdown(initial_state['investment_plan']))
849 | ```
850 | ```text
851 | ----- Research Manager's Investment Plan -----
852 | After reviewing the spirited debate, the Bull's core argument-that NVDA is a generational leader in a structural growth market-is more compelling. The Bear raises valid and important concerns about valuation and cyclical risk, but these are outweighed by the sheer force of the company's current financial performance and market position.
853 | **Recommendation: Buy**
854 | **Rationale:** The confluence of exceptional fundamentals, strong technical momentum, and a supportive news and sentiment environment creates a powerful case for a long position. The risk of waiting for a pullback and missing further upside appears greater than the risk of a valuation-driven correction at this time.
855 | **Strategic Actions:** I propose a scaled-entry approach to manage the risks highlighted by the Bear. Initiate a partial position at the current price. If the stock experiences a minor pullback towards its 50-day moving average, use this as an opportunity to add to the position. A firm stop-loss should be placed below the 200-day moving average to protect against a major trend change.
856 | ```
857 | Our Research Manager isn't just picking a side here; it creates a new strategy. It endorses the Bull's **"Buy"** recommendation but explicitly incorporates the Bear's concerns by proposing "Strategic Actions" like a scaled entry and a defined stop-loss. This nuanced plan is far more practical and risk-aware than either debater's individual stance.
858 | 
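As a quick aside, the two levels that anchor these Strategic Actions are simple to compute. The sketch below is illustrative only: it uses a synthetic price series as a stand-in for real yfinance data, and shows how the 50-day and 200-day simple moving averages might be derived with pandas.

```python
import numpy as np
import pandas as pd

# Synthetic daily closes -- a stand-in for real price data from yfinance.
rng = np.random.default_rng(42)
prices = pd.Series(
    100 + np.cumsum(rng.normal(0.1, 1.0, 300)),
    index=pd.date_range("2024-01-01", periods=300, freq="B"),
)

# The two levels the Research Manager's plan keys off.
sma_50 = prices.rolling(window=50).mean()    # add on a pullback to this level
sma_200 = prices.rolling(window=200).mean()  # the hard stop-loss sits below this

print(f"Last close:  {prices.iloc[-1]:.2f}")
print(f"50-day SMA:  {sma_50.iloc[-1]:.2f}")
print(f"200-day SMA: {sma_200.iloc[-1]:.2f}")
```

In the live system, levels like these would come from the Market Analyst's `get_technical_indicators` tool rather than being computed by hand.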
860 | 
861 | ## Creating The Trader and Risk Management Agents
862 | With a clear investment plan from the Research Manager, the workflow now moves from strategic analysis to the execution-focused agents. The plan, while well-reasoned, is still a high-level document. It needs to be translated into a concrete trading proposal that can be implemented in the market.
863 | 
864 | This is the job of the **Trader Agent**. Once the Trader formulates a proposal, it's passed to the **Risk Management Team** for final scrutiny. Here, agents with different risk appetites (aggressive, conservative, and neutral) debate the plan to ensure all angles are considered before capital is committed.
865 | 
866 | ![Trade and RM](https://miro.medium.com/v2/resize:fit:875/1*mmgTqzVpNRXmvFAKo4U-gg.png)
867 | _Trade and RM (Created by [Fareed Khan](https://medium.com/u/b856005e5ecd?source=post_page---user_mention--c13da7effd2d---------------------------------------))_
868 | 
869 | First, let's define the logic for our Trader agent. Its primary role is to distill the detailed `investment_plan` into a concise, actionable proposal. A key requirement is that its response must end with a specific, machine-readable tag.
870 | 
871 | ```python
872 | import functools
873 | 
874 | # This function creates the Trader agent node.
875 | def create_trader(llm, memory):
876 |     def trader_node(state, name):
877 |         # The prompt is simple: take the plan and create a proposal.
878 |         # The key instruction is the mandatory final tag.
879 |         prompt = f"""You are a trading agent. Based on the provided investment plan, create a concise trading proposal.
880 | Your response must end with 'FINAL TRANSACTION PROPOSAL: **BUY/HOLD/SELL**'.
881 | 
882 | Proposed Investment Plan: {state['investment_plan']}"""
883 |         result = llm.invoke(prompt)
884 |         # The output updates the state with the trader's plan and identifies the sender.
885 |         return {"trader_investment_plan": result.content, "sender": name}
886 |     return trader_node
887 | ```
888 | 
889 | The design of this Trader agent is focused on clarity and actionability. The prompt's most critical part is the instruction to end with `FINAL TRANSACTION PROPOSAL: **BUY/HOLD/SELL**`. This isn't just for human readability; it creates a predictable pattern that downstream processes (like our signal extractor later on) can reliably parse.
890 | 
891 | Next, we will build the factory function for our Risk Management debaters. Similar to our other agents, this function will create nodes for our three risk personas. The logic here is more complex, as each agent needs to be aware of its opponents' most recent arguments to formulate a proper rebuttal.
892 | 
893 | ```python
894 | # This function is a factory for creating the risk debater nodes.
895 | def create_risk_debator(llm, role_prompt, agent_name):
896 |     def risk_debator_node(state):
897 |         # First, get the arguments from the other two debaters from the state.
898 |         risk_state = state['risk_debate_state']
899 |         opponents_args = []
900 |         if agent_name != 'Risky Analyst' and risk_state['current_risky_response']: opponents_args.append(f"Risky: {risk_state['current_risky_response']}")
901 |         if agent_name != 'Safe Analyst' and risk_state['current_safe_response']: opponents_args.append(f"Safe: {risk_state['current_safe_response']}")
902 |         if agent_name != 'Neutral Analyst' and risk_state['current_neutral_response']: opponents_args.append(f"Neutral: {risk_state['current_neutral_response']}")
903 | 
904 |         opponents_text = "\n".join(opponents_args)  # Join here: backslashes inside f-string expressions are a SyntaxError before Python 3.12.
905 |         prompt = f"""{role_prompt}
906 | Here is the trader's plan: {state['trader_investment_plan']}
907 | Debate history: {risk_state['history']}
908 | Your opponents' last arguments:\n{opponents_text}
909 | Critique or support the plan from your perspective."""
910 | 
911 |         response = llm.invoke(prompt).content
912 | 
913 |         # Update the risk debate state with the new argument.
914 |         new_risk_state = risk_state.copy()
915 |         new_risk_state['history'] += f"\n{agent_name}: {response}"
916 |         new_risk_state['latest_speaker'] = agent_name
917 |         # Store the response in the specific field for this agent.
918 |         if agent_name == 'Risky Analyst': new_risk_state['current_risky_response'] = response
919 |         elif agent_name == 'Safe Analyst': new_risk_state['current_safe_response'] = response
920 |         else: new_risk_state['current_neutral_response'] = response
921 |         new_risk_state['count'] += 1
922 |         return {"risk_debate_state": new_risk_state}
923 |     return risk_debator_node
924 | ```
925 | The logic in `create_risk_debator` is what enables a true multi-party debate. By dynamically building the `opponents_args` list, we ensure that each agent is directly responding to the other participants, not just stating its opinion in isolation.
926 | 
927 | The state update is also more granular, populating fields like `current_risky_response` so that the other agents can access them in the next turn.
928 | 
929 | Now, we'll define the specific personas for our three risk agents. This adversarial setup (one aggressive, one conservative, one balanced) is designed to stress-test the Trader's plan from all angles.
930 | 
931 | ```python
932 | # The Risky persona advocates for maximizing returns, even if it means higher risk.
933 | risky_prompt = "You are the Risky Risk Analyst. You advocate for high-reward opportunities and bold strategies."
934 | 
935 | # The Safe persona prioritizes capital preservation above all else.
936 | safe_prompt = "You are the Safe/Conservative Risk Analyst.
You prioritize capital preservation and minimizing volatility." 937 | # The Neutral persona provides a balanced, objective viewpoint. 938 | neutral_prompt = "You are the Neutral Risk Analyst. You provide a balanced perspective, weighing both benefits and risks." 939 | ``` 940 | With the agent logic defined, we can now instantiate all the callable nodes for this part of the workflow. 941 | ```python 942 | # Create the Trader node. We use functools.partial to pre-fill the 'name' argument. 943 | trader_node_func = create_trader(quick_thinking_llm, trader_memory) 944 | trader_node = functools.partial(trader_node_func, name="Trader") 945 | 946 | # Create the three risk debater nodes using their specific prompts. 947 | risky_node = create_risk_debator(quick_thinking_llm, risky_prompt, "Risky Analyst") 948 | safe_node = create_risk_debator(quick_thinking_llm, safe_prompt, "Safe Analyst") 949 | neutral_node = create_risk_debator(quick_thinking_llm, neutral_prompt, "Neutral Analyst") 950 | ``` 951 | Now let’s run the Trader agent on the `investment_plan` we generated in the previous section. 952 | ```python 953 | print("Running Trader...") 954 | # Pass the current state to the trader node. 955 | trader_result = trader_node(initial_state) 956 | # Update the state with the trader's output. 957 | initial_state['trader_investment_plan'] = trader_result['trader_investment_plan'] 958 | 959 | console.print("\n----- Trader's Proposal -----") 960 | console.print(Markdown(initial_state['trader_investment_plan'])) 961 | ``` 962 | ```bash 963 | Running Trader... 964 | 965 | ----- Trader's Proposal ----- 966 | The Research Manager's plan to scale into a long position is prudent and well-supported by the comprehensive analysis. This approach allows us to participate in the clear uptrend while managing the risk associated with the stock's high valuation. 967 | I will execute this by establishing an initial 50% position at the market open. 
Limit orders will be placed to add the remaining 50% on any pullback to the 50-day SMA. A hard stop-loss will be implemented below the 200-day SMA to protect our capital against a significant market reversal.
968 | FINAL TRANSACTION PROPOSAL: **BUY**
969 | ```
970 | The Trader's output is excellent, showing that it has successfully translated the Research Manager's strategic guidance into a concrete, actionable plan with specific parameters (50% position, entry/exit points). Crucially, it has also included the `FINAL TRANSACTION PROPOSAL` tag, making its core recommendation unambiguous.
971 | 
972 | This proposal now goes to the Risk Management team for their debate.
973 | ```python
974 | print("--- Risk Management Debate Round 1 ---")
975 | 
976 | risk_state = initial_state
977 | # We run the debate for the number of rounds specified in our config (currently 1).
978 | for _ in range(config['max_risk_discuss_rounds']):
979 | 
980 |     # Risky analyst goes first.
981 |     risky_result = risky_node(risk_state)
982 |     risk_state['risk_debate_state'] = risky_result['risk_debate_state']
983 |     console.print("\n**Risky Analyst's View:**")
984 |     console.print(Markdown(risk_state['risk_debate_state']['current_risky_response']))
985 | 
986 |     # Then the Safe analyst.
987 |     safe_result = safe_node(risk_state)
988 |     risk_state['risk_debate_state'] = safe_result['risk_debate_state']
989 |     console.print("\n**Safe Analyst's View:**")
990 |     console.print(Markdown(risk_state['risk_debate_state']['current_safe_response']))
991 | 
992 |     # Finally, the Neutral analyst.
993 |     neutral_result = neutral_node(risk_state)
994 |     risk_state['risk_debate_state'] = neutral_result['risk_debate_state']
995 |     console.print("\n**Neutral Analyst's View:**")
996 |     console.print(Markdown(risk_state['risk_debate_state']['current_neutral_response']))
997 | 
998 | # Update the main state with the final debate transcript.
999 | initial_state['risk_debate_state'] = risk_state['risk_debate_state']
1000 | ```
1001 | ```bash
1002 | --- Risk Management Debate Round 1 ---
1003 | **Risky Analyst's View:**
1004 | The scaled entry plan is too conservative. All data points to immediate and continued strength. By only taking a 50% position, we are willingly leaving profit on the table. The opportunity cost of waiting for a dip that might not materialize is the biggest risk here. I advocate for a full 100% position at the open to maximize our exposure to this clear winner.
1005 | **Safe Analyst's View:**
1006 | A full position would be irresponsible. The stock is trading at a high valuation and sentiment is euphoric-a classic setup for a sharp pullback. The trader's plan to start with 50% is a sensible compromise, but I would argue for an even tighter stop-loss, perhaps just below the 50-day SMA, to protect recent gains. Capital preservation must be our top priority in such a volatile name.
1007 | **Neutral Analyst's View:**
1008 | The trader's plan is excellent and requires no modification. It perfectly balances the Risky Analyst's desire for upside participation with the Safe Analyst's valid concerns about risk. A 50% scaled entry with a defined stop-loss is the textbook definition of prudent position management in a high-momentum stock. It allows us to be in the game while managing our downside. I fully endorse the plan as it stands.
1009 | ```
1010 | The output of the risk debate clearly demonstrates the value of the multi-persona approach. The Risky Analyst pushes for more aggressive action ("full 100% position"), the Safe Analyst pushes for tighter controls ("tighter stop-loss"), and the Neutral Analyst validates the Trader's plan as a well-balanced compromise. This debate has effectively illuminated the full spectrum of risk considerations.
1011 | 
1012 | ## The Portfolio Manager's Binding Decision
1013 | The final step in the decision-making process rests with the **Portfolio Manager** agent.
1014 | > This agent acts as the head of the firm. It reviews the Trader's plan and the entire risk debate, then issues the final, binding decision.
1015 | 
1016 | ![Judgment Process](https://miro.medium.com/v2/resize:fit:875/1*qeDMBNwRZZrkldeNBd-q4g.png)
1017 | _Judgment Process (Created by [Fareed Khan](https://medium.com/u/b856005e5ecd?source=post_page---user_mention--c13da7effd2d---------------------------------------))_
1018 | 
1019 | This is the most critical step, so we will use our `deep_thinking_llm` to ensure the highest quality of reasoning.
1020 | 
1021 | First, let's define the agent function.
1022 | ```python
1023 | # This function creates the Portfolio Manager node.
1024 | def create_risk_manager(llm, memory):
1025 |     def risk_manager_node(state):
1026 |         # The prompt asks for a final, binding decision based on all prior work.
1027 |         prompt = f"""As the Portfolio Manager, your decision is final. Review the trader's plan and the risk debate.
1028 | Provide a final, binding decision: Buy, Sell, or Hold, and a brief justification.
1029 | 
1030 | Trader's Plan: {state['trader_investment_plan']}
1031 | Risk Debate: {state['risk_debate_state']['history']}"""
1032 |         response = llm.invoke(prompt).content
1033 |         # The output is stored in the 'final_trade_decision' field of the state.
1034 |         return {"final_trade_decision": response}
1035 |     return risk_manager_node
1036 | 
1037 | # Create the callable manager node.
1038 | risk_manager_node = create_risk_manager(deep_thinking_llm, risk_manager_memory)
1039 | ```
1040 | 
1041 | Now, let's run this final agent to get our decision.
1042 | ```python
1043 | print("Running Portfolio Manager for final decision...")
1044 | # The manager receives the final state containing the trader's plan and the full risk debate.
1045 | risk_manager_result = risk_manager_node(initial_state)
1046 | # The manager's output is stored in the 'final_trade_decision' field.
1047 | initial_state['final_trade_decision'] = risk_manager_result['final_trade_decision'] 1048 | 1049 | console.print("\n----- Portfolio Manager's Final Decision -----") 1050 | console.print(Markdown(initial_state['final_trade_decision'])) 1051 | ``` 1052 | ```bash 1053 | Running Portfolio Manager for final decision... 1054 | 1055 | ----- Portfolio Manager's Final Decision ----- 1056 | Having reviewed the trader's well-reasoned plan and the comprehensive risk debate, I approve the transaction. The Neutral Analyst correctly identifies that the trader's proposal of a scaled entry is the most prudent path forward. It effectively balances the significant upside potential, as championed by the Risky Analyst, with the valid valuation concerns raised by the Safe Analyst. 1057 | The plan is sound and aligns with our firm's goal of capturing growth while managing risk. The transaction is approved. 1058 | **Final Decision: BUY** 1059 | ``` 1060 | The final output shows the Portfolio Manager synthesizing all the prior stages. It explicitly references the risk debate, agrees with the Neutral Analyst, and approves the Trader’s plan. This confirms the **BUY** decision and concludes the core analytical workflow. We have successfully transformed raw, multi-source data into a single, reasoned, and approved trading decision. 1061 | 1062 | Now that all the individual agent components have been defined and tested, we are ready to assemble them into a single, automated workflow using LangGraph. 1063 | 1064 | ## Orchestrating the Agent Society 1065 | We have now defined and individually tested all the agent components that make up our financial firm: the Analysts, Researchers, Trader, and the Risk Managers. However, they currently exist as isolated functions. The final step in construction is to assemble them into a cohesive, automated workflow using LangGraph’s `StateGraph`. 
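Before writing any LangGraph code, the core routing rule can be previewed in plain Python. The snippet below is a dependency-free simplification, not the library API: each debate turn bumps a counter, and once both sides have spoken `max_debate_rounds` times, control passes to the Research Manager.

```python
# A minimal stand-in for the debate routing we are about to wire into the graph.
# The real version reads these fields from the shared AgentState.
def route_debate(debate_state, max_debate_rounds=1):
    # Two speakers per round, so the debate ends after 2 * max_debate_rounds turns.
    if debate_state["count"] >= 2 * max_debate_rounds:
        return "Research Manager"
    # Otherwise alternate speakers: a Bull turn hands off to the Bear, and vice versa.
    if debate_state["current_response"].startswith("Bull"):
        return "Bear Researcher"
    return "Bull Researcher"

state = {"count": 0, "current_response": ""}
order = []
for prefix in ("Bull", "Bear"):
    state["current_response"] = f"{prefix} Analyst: ..."
    state["count"] += 1
    order.append(route_debate(state))

print(order)  # ['Bear Researcher', 'Research Manager']
```

The `should_continue_debate` method we define next applies exactly this rule to the real state object.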
1066 | 1067 | ![Orchestrating Step](https://miro.medium.com/v2/resize:fit:3988/1*XRUjDpPJDr-AzgJgalK-Dg.png) 1068 | _Orchestrating Step (Created by [Fareed Khan](https://medium.com/u/b856005e5ecd?source=post_page---user_mention--c13da7effd2d---------------------------------------))_ 1069 | 1070 | This involves programmatically defining the “nervous system” of our agent society. We will wire the nodes together with edges and, most importantly, define the conditional logic that will route the `AgentState` through the correct sequence of agents, manage the debate loops, and handle the ReAct tool-use cycles. 1071 | 1072 | First, we need to create the helper functions that will act as the “traffic controllers” for our graph. 1073 | ```python 1074 | from langchain_core.messages import HumanMessage, RemoveMessage 1075 | from langgraph.prebuilt import tools_condition 1076 | 1077 | # The ConditionalLogic class holds the routing functions for our graph. 1078 | class ConditionalLogic: 1079 | def __init__(self, max_debate_rounds=1, max_risk_discuss_rounds=1): 1080 | # Store the maximum number of rounds from our config. 1081 | self.max_debate_rounds = max_debate_rounds 1082 | self.max_risk_discuss_rounds = max_risk_discuss_rounds 1083 | 1084 | # This function decides whether an analyst agent should continue or call a tool. 1085 | def should_continue_analyst(self, state: AgentState): 1086 | # The tools_condition helper checks the last message in the state. 1087 | # If it's a tool call, it returns "tools". Otherwise, it returns "continue". 1088 | return "tools" if tools_condition(state) == "tools" else "continue" 1089 | 1090 | # This function controls the flow of the investment debate. 1091 | def should_continue_debate(self, state: AgentState) -> str: 1092 | # If the debate has reached its maximum rounds, route to the Research Manager. 
1093 | if state["investment_debate_state"]["count"] >= 2 * self.max_debate_rounds: 1094 | return "Research Manager" 1095 | # Otherwise, continue the debate by alternating speakers. 1096 | return "Bear Researcher" if state["investment_debate_state"]["current_response"].startswith("Bull") else "Bull Researcher" 1097 | 1098 | # This function controls the flow of the risk management discussion. 1099 | def should_continue_risk_analysis(self, state: AgentState) -> str: 1100 | # If the risk discussion has reached its maximum rounds, route to the judge. 1101 | if state["risk_debate_state"]["count"] >= 3 * self.max_risk_discuss_rounds: 1102 | return "Risk Judge" 1103 | # Otherwise, continue the discussion by cycling through speakers. 1104 | speaker = state["risk_debate_state"]["latest_speaker"] 1105 | if speaker == "Risky Analyst": return "Safe Analyst" 1106 | if speaker == "Safe Analyst": return "Neutral Analyst" 1107 | return "Risky Analyst" 1108 | 1109 | # Instantiate the logic class with values from our central config. 1110 | conditional_logic = ConditionalLogic( 1111 | max_debate_rounds=config['max_debate_rounds'], 1112 | max_risk_discuss_rounds=config['max_risk_discuss_rounds'] 1113 | ) 1114 | ``` 1115 | The `ConditionalLogic` class is the brain of our graph's routing. Let's break down its methods: 1116 | 1117 | * `should_continue_analyst`: This is the core of the ReAct loop. It inspects the agent's output and decides whether to route to the `tools` node for execution or to `continue` to the next agent. 1118 | * `should_continue_debate` & `should_continue_risk_analysis`: These methods control the debate loops. They check the `count` in their respective sub-states. If the `count` exceeds the limit from our `config`, they break the loop and route to the manager/judge. Otherwise, they route to the next debater in the sequence. 1119 | 1120 | > A key challenge in multi-agent systems is preventing context from one task from “leaking” into another and confusing the LLM. 
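To make the problem concrete, here is a toy, dependency-free sketch of the clean-up idea (the real graph node, defined next, does this with LangChain's `RemoveMessage` objects instead of plain dicts): everything from the finished ReAct loop is discarded, and the next agent is seeded with a single fresh instruction. No analysis is lost, because the finished reports live in dedicated state fields, not in `messages`.

```python
# Scratch messages left over after one analyst's ReAct loop: the report is
# written, but the tool-call chatter would leak into the next agent's context.
messages = [
    {"role": "human", "content": "Analyze NVDA for trading"},
    {"role": "ai", "content": "", "tool_call": "get_yfinance_data"},
    {"role": "tool", "content": "...rows of OHLCV data..."},
    {"role": "ai", "content": "Market report: sustained uptrend confirmed."},
]

def clear_scratch_messages(old_messages):
    # Discard the whole conversation and seed the next agent with one fresh
    # instruction, mirroring what the RemoveMessage-based node does.
    return [{"role": "human", "content": "Continue"}]

messages = clear_scratch_messages(messages)
print(len(messages))  # 1
```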
1121 | 1122 | After an analyst completes its multi-step ReAct loop, its `messages` state will be cluttered with tool calls and intermediate reasoning. We need a way to clean this up before the next analyst begins. 1123 | ```python 1124 | # This function creates a node that clears the messages from the state. 1125 | def create_msg_delete(): 1126 | # This helper function is designed to be used as a node in the graph. 1127 | def delete_messages(state): 1128 | # We use RemoveMessage to specify which messages to delete. 1129 | # Here, we delete all existing messages and add a fresh HumanMessage to continue the flow. 1130 | return {"messages": [RemoveMessage(id=m.id) for m in state["messages"]] + [HumanMessage(content="Continue")]} 1131 | return delete_messages 1132 | 1133 | # Create the callable message clearing node. 1134 | msg_clear_node = create_msg_delete() 1135 | ``` 1136 | The `msg_clear_node` is a simple but crucial utility. By placing this node between our analyst agents, we ensure that each analyst starts with a clean slate, receiving only the global state information (like the reports) without the conversational clutter from the previous agent's tool usage. 1137 | 1138 | Next, we need a dedicated node to *execute* the tool calls that the analysts generate. 1139 | ```python 1140 | from langgraph.prebuilt import ToolNode 1141 | 1142 | # Create a list of all tools available in our toolkit. 1143 | all_tools = [ 1144 | toolkit.get_yfinance_data, 1145 | toolkit.get_technical_indicators, 1146 | toolkit.get_finnhub_news, 1147 | toolkit.get_social_media_sentiment, 1148 | toolkit.get_fundamental_analysis, 1149 | toolkit.get_macroeconomic_news 1150 | ] 1151 | # The ToolNode is a pre-built LangGraph node that takes a list of tools 1152 | # and executes any tool calls it finds in the agent's last message. 
1153 | tool_node = ToolNode(all_tools) 1154 | ``` 1155 | The analyst agents decide to call a tool, but this `tool_node` is what actually *runs* the corresponding Python function. This separation of concerns is a core LangGraph pattern. A single `tool_node` can serve all four of our analyst agents, making the graph efficient. 1156 | 1157 | Now for the main event. We will create a `StateGraph` instance and programmatically add all our agent nodes, tool nodes, and the edges that connect them. This code will translate our conceptual workflow into a concrete, executable graph object. 1158 | 1159 | ```python 1160 | from langgraph.graph import StateGraph, START, END 1161 | 1162 | # Initialize a new StateGraph with our main AgentState. 1163 | workflow = StateGraph(AgentState) 1164 | 1165 | # --- Add all nodes to the graph --- 1166 | # Add Analyst Team Nodes 1167 | workflow.add_node("Market Analyst", market_analyst_node) 1168 | workflow.add_node("Social Analyst", social_analyst_node) 1169 | workflow.add_node("News Analyst", news_analyst_node) 1170 | workflow.add_node("Fundamentals Analyst", fundamentals_analyst_node) 1171 | workflow.add_node("tools", tool_node) 1172 | workflow.add_node("Msg Clear", msg_clear_node) 1173 | 1174 | # Add Researcher Team Nodes 1175 | workflow.add_node("Bull Researcher", bull_researcher_node) 1176 | workflow.add_node("Bear Researcher", bear_researcher_node) 1177 | workflow.add_node("Research Manager", research_manager_node) 1178 | 1179 | # Add Trader and Risk Team Nodes 1180 | workflow.add_node("Trader", trader_node) 1181 | workflow.add_node("Risky Analyst", risky_node) 1182 | workflow.add_node("Safe Analyst", safe_node) 1183 | workflow.add_node("Neutral Analyst", neutral_node) 1184 | workflow.add_node("Risk Judge", risk_manager_node) 1185 | 1186 | # --- Wire the nodes together with edges --- 1187 | # Set the entry point for the entire workflow. 
workflow.set_entry_point("Market Analyst")
1189 | 
1190 | # Define the sequential flow and ReAct loops for the Analyst Team.
1191 | workflow.add_conditional_edges("Market Analyst", conditional_logic.should_continue_analyst, {"tools": "tools", "continue": "Msg Clear"})
1192 | workflow.add_edge("tools", "Market Analyst") # After a tool call, loop back to the analyst for it to reason about the new data.
1193 | workflow.add_edge("Msg Clear", "Social Analyst")
1194 | workflow.add_conditional_edges("Social Analyst", conditional_logic.should_continue_analyst, {"tools": "tools", "continue": "News Analyst"})
1195 | workflow.add_edge("tools", "Social Analyst")
1196 | workflow.add_conditional_edges("News Analyst", conditional_logic.should_continue_analyst, {"tools": "tools", "continue": "Fundamentals Analyst"})
1197 | workflow.add_edge("tools", "News Analyst")
1198 | workflow.add_conditional_edges("Fundamentals Analyst", conditional_logic.should_continue_analyst, {"tools": "tools", "continue": "Bull Researcher"})
1199 | workflow.add_edge("tools", "Fundamentals Analyst")
1200 | 
1201 | # Define the research debate loop.
1202 | workflow.add_conditional_edges("Bull Researcher", conditional_logic.should_continue_debate)
1203 | workflow.add_conditional_edges("Bear Researcher", conditional_logic.should_continue_debate)
1204 | workflow.add_edge("Research Manager", "Trader")
1205 | 
1206 | # Define the risk debate loop.
1207 | workflow.add_edge("Trader", "Risky Analyst")
1208 | workflow.add_conditional_edges("Risky Analyst", conditional_logic.should_continue_risk_analysis)
1209 | workflow.add_conditional_edges("Safe Analyst", conditional_logic.should_continue_risk_analysis)
1210 | workflow.add_conditional_edges("Neutral Analyst", conditional_logic.should_continue_risk_analysis)
1211 | 
1212 | # Define the final edge to the end of the workflow.
1213 | workflow.add_edge("Risk Judge", END)
1214 | ```
1215 | This block is the programmatic definition of our entire workflow diagram.
We first use `.add_node()` to register all our agent functions. Then, we use `.add_edge()` for direct transitions (e.g., `Research Manager` always goes to `Trader`) and `.add_conditional_edges()` for dynamic routing. The conditional edges use our `conditional_logic` object to decide the next step, enabling the complex looping behavior for the ReAct cycles and debates. The `set_entry_point()` call defines where execution begins, and the special `END` keyword marks the graph's exit point.
1216 | 
1217 | ## Compiling and Visualizing the Agentic Workflow
1218 | The graph has been defined, but it's still just a blueprint. To make it executable, we need to `.compile()` it. This step takes our definition and creates a highly optimized, runnable `CompiledStateGraph` object.
1219 | 
1220 | A major advantage of using LangGraph is its built-in visualization capability. Visualizing the graph is an incredibly useful step to verify that our complex wiring is correct before we run it.
1221 | 
1222 | ```python
1223 | # The compile() method finalizes the graph and makes it ready for execution.
1224 | trading_graph = workflow.compile()
1225 | print("Graph compiled successfully.")
1226 | 
1227 | # To visualize, you need graphviz installed: pip install pygraphviz
1228 | try:
1229 |     from IPython.display import Image, display
1230 |     # The get_graph() method returns a representation of the graph structure.
1231 |     # The draw_png() method renders this structure as a PNG image.
1232 |     png_image = trading_graph.get_graph().draw_png()
1233 |     display(Image(png_image))
1234 | except Exception as e:
1235 |     print(f"Graph visualization failed: {e}.
Please ensure pygraphviz is installed.") 1236 | ``` 1237 | 1238 | ![Workflow Plot](https://miro.medium.com/v2/resize:fit:1250/1*cAOySe0P-PGJ3DGJadvHqw.png) 1239 | _Workflow Plot (Created by [Fareed Khan](https://medium.com/u/b856005e5ecd?source=post_page---user_mention--c13da7effd2d---------------------------------------))_ 1240 | 1241 | The output is a visual confirmation of our entire agentic system. We can clearly see the intended flow: 1242 | * The initial sequential chain of four **Analyst** nodes, each with its own ReAct loop that routes through the central **tools** node. 1243 | * The transition to the **Researcher** debate, with the conditional edges creating a loop between the **Bull** and **Bear** nodes before proceeding to the **Research Manager**. 1244 | * The final handoff to the **Trader** and the subsequent three-way debate loop between the **Risk Analyst** nodes. 1245 | * The final step where the **Risk Judge** (our Portfolio Manager) concludes the process, leading to **END**. 1246 | 1247 | This visual verification gives us confidence that our orchestration is correct. With the compiled `trading_graph` object ready, we can now proceed to the final execution. 1248 | 1249 | ## Executing the End-to-End Trading Analysis 1250 | All our components are built, the graph is wired, and the `trading_graph` object is compiled and ready. We can now invoke the entire multi-agent system with a single command. We will provide it with our target ticker and date, and then stream the results to watch the agents collaborate in real-time as they move through the complex workflow we've designed. 1251 | 1252 | First, let’s prepare the initial input state, which is the starting point for the entire graph execution. 1253 | ```python 1254 | # We will use the same Ticker and Date as in our manual tests to maintain consistency. 
1255 | graph_input = AgentState( 1256 | messages=[HumanMessage(content=f"Analyze {TICKER} for trading on {TRADE_DATE}")], 1257 | company_of_interest=TICKER, 1258 | trade_date=TRADE_DATE, 1259 | # Initialize the debate states with default empty values to ensure a clean start. 1260 | investment_debate_state=InvestDebateState({'history': '', 'current_response': '', 'count': 0, 'bull_history': '', 'bear_history': '', 'judge_decision': ''}), 1261 | risk_debate_state=RiskDebateState({'history': '', 'latest_speaker': '','current_risky_response': '', 'current_safe_response': '', 'current_neutral_response': '', 'count': 0, 'risky_history': '', 'safe_history': '', 'neutral_history': '', 'judge_decision': ''}) 1262 | ) 1263 | 1264 | print(f"Running full analysis for {TICKER} on {TRADE_DATE}") 1265 | 1266 | 1267 | #### OUTPUT ### 1268 | Running full analysis for NVDA on 2025-9-4 1269 | ``` 1270 | We’ll use the `.stream()` method to invoke the graph. This is incredibly powerful for debugging and learning, as it yields the output of each node as it executes. We'll print the name of the node as it runs to trace the workflow and see our orchestrated system in action. 1271 | ```python 1272 | final_state = None 1273 | print("\n--- Invoking Graph Stream ---") 1274 | # Set the recursion limit from our config, a safety measure for complex graphs. 1275 | graph_config = {"recursion_limit": config['max_recur_limit']} 1276 | 1277 | # The .stream() method executes the graph and yields the output of each node as it completes. 1278 | for chunk in trading_graph.stream(graph_input, config=graph_config): 1279 | # The 'chunk' is a dictionary where the key is the name of the node that just executed. 1280 | node_name = list(chunk.keys())[0] 1281 | print(f"Executing Node: {node_name}") 1282 | # We keep track of the final state to analyze it after the run. 
1283 |     final_state = chunk[node_name]
1284 | print("\n--- Graph Stream Finished ---")
1285 | ```
1286 | Once we run this code, our entire system springs into action right away. Take a look at the output.
1287 | ```bash
1288 | --- Invoking Graph Stream ---
1289 | Executing Node: Market Analyst
1290 | Executing Node: tools
1291 | Executing Node: Market Analyst
1292 | Executing Node: Msg Clear
1293 | Executing Node: Social Analyst
1294 | Executing Node: tools
1295 | Executing Node: Social Analyst
1296 | Executing Node: News Analyst
1297 | Executing Node: tools
1298 | Executing Node: News Analyst
1299 | Executing Node: Fundamentals Analyst
1300 | Executing Node: tools
1301 | Executing Node: Fundamentals Analyst
1302 | Executing Node: Bull Researcher
1303 | Executing Node: Bear Researcher
1304 | Executing Node: Bull Researcher
1305 | Executing Node: Bear Researcher
1306 | Executing Node: Research Manager
1307 | Executing Node: Trader
1308 | Executing Node: Risky Analyst
1309 | Executing Node: Safe Analyst
1310 | Executing Node: Neutral Analyst
1311 | Executing Node: Risk Judge
1312 | 
1313 | --- Graph Stream Finished ---
1314 | ```
1315 | The output trace provides a perfect, real-time log of our graph's execution path. We can see the ReAct loops in action for the analysts (e.g., `Market Analyst` -> `tools` -> `Market Analyst`), the `Msg Clear` node firing between them, the back-and-forth of the research debate, and the final sequence through the trader and risk teams. This confirms that our conditional logic and wiring are functioning exactly as intended.
1316 | 
1317 | The stream has finished, and the last update, which contains the Portfolio Manager's `final_trade_decision`, is now stored in our `final_state` variable. Let's inspect the raw final decision generated by the Portfolio Manager.
1318 | ```python 1319 | console.print("\n----- Final Raw Output from Portfolio Manager -----") 1320 | console.print(Markdown(final_state['final_trade_decision'])) 1321 | 1322 | 1323 | ####### OUTPUT ############ 1324 | ----- Final Raw Output from Portfolio Manager ----- 1325 | Having reviewed the trader's well-reasoned plan and the comprehensive risk debate, I approve the transaction. The Neutral Analyst correctly identifies that the trader's proposal of a scaled entry is the most prudent path forward. It effectively balances the significant upside potential, as championed by the Risky Analyst, with the valid valuation concerns raised by the Safe Analyst. 1326 | 1327 | The plan is sound and aligns with our firm's goal of capturing growth while managing risk. The transaction is approved. 1328 | **Final Decision: BUY** 1329 | ``` 1330 | The final output is consistent with our manual tests, confirming that the fully automated graph can replicate the logical flow we established earlier. The system has successfully produced a detailed, reasoned decision. 1331 | 1332 | ## Extracting Clean Signals and Agent Reflection 1333 | Our pipeline has successfully produced a decision. However, for practical use, we need two final pieces: 1334 | 1335 | ![Signal and Reflection](https://miro.medium.com/v2/resize:fit:875/1*htC_hsdCC9Za-nKn564a3A.png) 1336 | _Signal and Reflection (Created by [Fareed Khan](https://medium.com/u/b856005e5ecd?source=post_page---user_mention--c13da7effd2d---------------------------------------))_ 1337 | 1338 | 1. **A clean, machine-readable final signal** (BUY, SELL, or HOLD). The Portfolio Manager’s output is natural language; we need to distill it into a single, unambiguous command. 1339 | 2. **A mechanism for the agents to learn from the outcome**. This is what turns a one-off analysis into a system that can improve over time. 1340 | 1341 | First, we’ll define a `SignalProcessor` class to extract a clean decision from the final text. 
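Before reaching for an LLM, note that a deterministic regex pass can often pull the signal directly whenever the report ends with an explicit marker like `**Final Decision: BUY**`, as the Portfolio Manager's output above does. This is an illustrative pre-check, not part of the original pipeline:

```python
import re

def extract_signal_fast(report: str):
    """Cheap deterministic pre-check for an explicit final-decision marker.

    Returns 'BUY', 'SELL', or 'HOLD' if the report states one after the words
    'final decision', otherwise None so the caller can fall back to the
    LLM-based extractor.
    """
    match = re.search(r"final\s+decision[:\s*]*(BUY|SELL|HOLD)", report, re.IGNORECASE)
    return match.group(1).upper() if match else None

sample = "The transaction is approved.\n**Final Decision: BUY**"
print(extract_signal_fast(sample))  # BUY
```

When no such marker is present, free-form reports still need the LLM-based extractor, which is what the `SignalProcessor` below provides.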
1342 | ```python 1343 | class SignalProcessor: 1344 | # This class is responsible for parsing the final LLM output into a clean, machine-readable signal. 1345 | def __init__(self, llm): 1346 | self.llm = llm 1347 | 1348 | def process_signal(self, full_signal: str) -> str: 1349 | # We use a simple, focused prompt to ask the LLM for the single-word decision. 1350 | messages = [ 1351 | ("system", "You are an assistant designed to extract the final investment decision: SELL, BUY, or HOLD from a financial report. Respond with only the single-word decision."), 1352 | ("human", full_signal), 1353 | ] 1354 | result = self.llm.invoke(messages).content.strip().upper() 1355 | # Basic validation to ensure the output is one of the three expected signals. 1356 | if result in ["BUY", "SELL", "HOLD"]: 1357 | return result 1358 | return "ERROR_UNPARSABLE_SIGNAL" 1359 | # Instantiate the processor with our quick_thinking_llm. 1360 | signal_processor = SignalProcessor(quick_thinking_llm) 1361 | final_signal = signal_processor.process_signal(final_state['final_trade_decision']) 1362 | print(f"Extracted Signal: {final_signal}") 1363 | ``` 1364 | ```shell 1365 | ####### OUTPUT ####### 1366 | Extracted Signal: BUY 1367 | ``` 1368 | Next, the `Reflector` class. This is the core of the learning loop. After a trade is made and the outcome (profit or loss) is known, this class prompts the agents to analyze their performance and formulate a concise lesson, which is then stored in their long-term `FinancialSituationMemory`. 1369 | ```python 1370 | class Reflector: 1371 | # This class orchestrates the learning process for the agents. 1372 | def __init__(self, llm): 1373 | self.llm = llm 1374 | # This prompt guides the agent to reflect on its performance. 1375 | self.reflection_prompt = """You are an expert financial analyst. Review the trading decision/analysis, the market context, and the financial outcome. 
1376 | - First, determine if the decision was correct or incorrect based on the outcome. 1377 | - Analyze the most critical factors that led to the success or failure. 1378 | - Finally, formulate a concise, one-sentence lesson or heuristic that can be used to improve future decisions in similar situations. 1379 | 1380 | Market Context & Analysis: {situation} 1381 | Outcome (Profit/Loss): {returns_losses}""" 1382 | 1383 | def reflect(self, current_state, returns_losses, memory, component_key_func): 1384 | # The component_key_func is a lambda function to extract the specific text (e.g., bull's debate history) to reflect on. 1385 | situation = f"Reports: {current_state['market_report']} {current_state['sentiment_report']} {current_state['news_report']} {current_state['fundamentals_report']}\nDecision/Analysis Text: {component_key_func(current_state)}" 1386 | prompt = self.reflection_prompt.format(situation=situation, returns_losses=returns_losses) 1387 | result = self.llm.invoke(prompt).content 1388 | # The situation (context) and the generated lesson (result) are stored in the agent's memory. 1389 | memory.add_situations([(situation, result)]) 1390 | print("SignalProcessor and Reflector classes defined.") 1391 | ``` 1392 | In a real backtest, we would know the actual outcome. Here, we’ll simulate this by assuming a hypothetical profit and calling the reflection function for each of our learning agents. 1393 | 1394 | ```python 1395 | print("Simulating reflection based on a hypothetical profit of $1000...") 1396 | 1397 | reflector = Reflector(quick_thinking_llm) 1398 | hypothetical_returns = 1000 1399 | 1400 | # Run the reflection process for each agent with memory. 
1401 | print("Reflecting and updating memory for Bull Researcher...") 1402 | reflector.reflect(final_state, hypothetical_returns, bull_memory, lambda s: s['investment_debate_state']['bull_history']) 1403 | 1404 | print("Reflecting and updating memory for Bear Researcher...") 1405 | reflector.reflect(final_state, hypothetical_returns, bear_memory, lambda s: s['investment_debate_state']['bear_history']) 1406 | 1407 | print("Reflecting and updating memory for Trader...") 1408 | reflector.reflect(final_state, hypothetical_returns, trader_memory, lambda s: s['trader_investment_plan']) 1409 | 1410 | print("Reflecting and updating memory for Risk Manager...") 1411 | reflector.reflect(final_state, hypothetical_returns, risk_manager_memory, lambda s: s['final_trade_decision']) 1412 | ``` 1413 | The reflection process is now complete. Each agent has analyzed the successful `BUY` decision in the context of the market reports and the hypothetical profit. 1414 | 1415 | They have each generated a lesson (e.g., "In a market with strong technical and fundamental alignment, valuation concerns can be secondary") and stored it in their respective vector memories. The next time a similar situation arises, they will retrieve these lessons, making the entire system "smarter." 1416 | 1417 | ## Auditing the System using 3 Evaluation Strategies 1418 | While our pipeline produces a decision, how can we be sure it’s a *good* one? 1419 | > A production system requires automated ways to score the quality of its output. 1420 | 1421 | ![Evaluation Strategies](https://miro.medium.com/v2/resize:fit:1250/1*XJZAxGVXmmMEk9YLHfXKQg.png) 1422 | _Evaluation Strategies (Created by [Fareed Khan](https://medium.com/u/b856005e5ecd?source=post_page---user_mention--c13da7effd2d---------------------------------------))_ 1423 | 1424 | We will now implement and run several evaluation techniques to assess our system from different angles: qualitative reasoning, objective market outcomes, and factual accuracy. 
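These three strategies can be organized as a tiny harness that runs each check independently and collects the verdicts in one place, so a single failing evaluator never sinks the whole audit. A minimal sketch with stand-in callables (placeholders for the three evaluators built in this section):

```python
from typing import Any, Callable, Dict

def run_evaluations(checks: Dict[str, Callable[[], Any]]) -> Dict[str, Any]:
    """Run each named evaluation, recording its result or its failure."""
    results: Dict[str, Any] = {}
    for name, check in checks.items():
        try:
            results[name] = check()
        except Exception as exc:  # isolate failures so the other checks still run
            results[name] = f"FAILED: {exc}"
    return results

# Stand-in checks; the real pipeline would plug in the LLM judge,
# the ground-truth comparison, and the factual-consistency auditor.
report_card = run_evaluations({
    "llm_judge": lambda: {"reasoning_quality": 9, "actionability_score": 10},
    "ground_truth": lambda: "CORRECT DECISION",
    "factual_audit": lambda: {"is_consistent": True},
})
print(report_card["ground_truth"])  # CORRECT DECISION
```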
1425 | 1426 | First, we’ll use a powerful LLM agent as an impartial **LLM-as-a-Judge**, scoring the final decision on key criteria. 1427 | ```python 1428 | from pydantic import BaseModel, Field 1429 | from langchain_core.prompts import ChatPromptTemplate 1430 | 1431 | # Define a structured schema for evaluation results. 1432 | # This ensures the LLM outputs scores in a controlled format. 1433 | class Evaluation(BaseModel): 1434 | reasoning_quality: int = Field(description="Score 1-10 on the coherence and logic.") 1435 | evidence_based_score: int = Field(description="Score 1-10 on citation of evidence from reports.") 1436 | actionability_score: int = Field(description="Score 1-10 on how clear and actionable the decision is.") 1437 | justification: str = Field(description="A brief justification for the scores.") 1438 | 1439 | # Create a prompt template for the evaluator model. 1440 | # The prompt instructs the LLM to act like a financial auditor. 1441 | evaluator_prompt = ChatPromptTemplate.from_template( 1442 | """You are an expert financial auditor. Evaluate the 'Final Trading Decision' based on the provided 'Analyst Reports'. 1443 | Analyst Reports: 1444 | {reports} 1445 | Final Trading Decision to Evaluate: 1446 | {final_decision} 1447 | """ 1448 | ) 1449 | 1450 | # Combine the prompt with an LLM, enforcing structured output via the Evaluation schema. 1451 | # `deep_thinking_llm` here is assumed to be a previously defined LLM instance. 1452 | evaluator_chain = evaluator_prompt | deep_thinking_llm.with_structured_output(Evaluation) 1453 | 1454 | # Build the full text summary of analyst reports from final_state. 1455 | reports_summary = ( 1456 | f"Market: {final_state['market_report']}\n" 1457 | f"Sentiment: {final_state['sentiment_report']}\n" 1458 | f"News: {final_state['news_report']}\n" 1459 | f"Fundamentals: {final_state['fundamentals_report']}" 1460 | ) 1461 | 1462 | # Prepare evaluator input with both reports and the decision to evaluate. 
1463 | eval_input = {
1464 |     "reports": reports_summary,
1465 |     "final_decision": final_state['final_trade_decision']
1466 | }
1467 | 
1468 | # Run the evaluator chain — returns structured Evaluation object.
1469 | evaluation_result = evaluator_chain.invoke(eval_input)
1470 | 
1471 | # Print the evaluation report in a readable format.
1472 | print("----- LLM-as-a-Judge Evaluation Report -----")
1473 | pprint(evaluation_result.model_dump())  # model_dump() is the pydantic-v2 replacement for the deprecated .dict()
1474 | ```
1475 | ```bash
1476 | ###### OUTPUT ########
1477 | ----- LLM-as-a-Judge Evaluation Report -----
1478 | {'actionability_score': 10,
1479 |  'evidence_based_score': 9,
1480 |  'justification': "The final decision demonstrates strong, coherent reasoning. "
1481 |                   "It effectively synthesizes the bullish case from the analyst "
1482 |                   "reports while explicitly endorsing the trader's risk "
1483 |                   "management plan (scaled entry, stop-loss). The decision is "
1484 |                   'highly actionable, providing a clear BUY signal and '
1485 |                   'approving a specific execution strategy.',
1486 |  'reasoning_quality': 9}
1487 | ```
1488 | The LLM-as-a-Judge gives high scores across the board, validating the qualitative strength of the final decision. The high `evidence_based_score` is particularly important, as it confirms the final decision is well-grounded in the initial analyst reports.
1489 | 
1490 | Next, the most objective test: **Ground Truth Comparison**. Did the agent’s decision actually make money?
1491 | ```python
1492 | def evaluate_ground_truth(ticker, trade_date, signal):
1493 |     try:
1494 |         # Parse the input trade date and define an evaluation window of 8 calendar days
1495 |         start_date = datetime.strptime(trade_date, "%Y-%m-%d").date()
1496 |         end_date = start_date + timedelta(days=8)
1497 | 
1498 |         # Download market data from Yahoo Finance
1499 |         data = yf.download(
1500 |             ticker,
1501 |             start=start_date.isoformat(),
1502 |             end=end_date.isoformat(),
1503 |             progress=False
1504 |         )
1505 | 
1506 |         # Ensure enough trading days exist (at least 5 for evaluation)
1507 |         if len(data) < 5:
1508 |             return "Insufficient data for ground truth evaluation."
1509 | 
1510 |         # Find the first trading day index (accounting for weekends/holidays)
1511 |         first_trading_day_index = 0
1512 |         while data.index[first_trading_day_index].date() < start_date:
1513 |             first_trading_day_index += 1
1514 |         if first_trading_day_index > len(data) - 5:  # need 5 sessions from here; '>' keeps the last valid index
1515 |             return "Could not align trade date."
1516 | 
1517 |         # Opening price on the aligned trade date (.item() flattens any one-element Series from yfinance)
1518 |         open_price = data['Open'].iloc[first_trading_day_index].item()
1519 | 
1520 |         # Closing price at the end of the 5-session window (entry day + 4 more sessions)
1521 |         close_price_5_days_later = data['Close'].iloc[first_trading_day_index + 4].item()
1522 | 
1523 |         # Compute % change over the window
1524 |         performance = ((close_price_5_days_later - open_price) / open_price) * 100
1525 | 
1526 |         # Default evaluation result
1527 |         result = "INCORRECT DECISION"
1528 | 
1529 |         # Rule-based correctness:
1530 |         # - BUY is correct if price went up > +1%
1531 |         # - SELL is correct if price went down < -1%
1532 |         # - HOLD is correct if price stayed roughly flat (-1% to +1%)
1533 |         if ((signal == "BUY" and performance > 1) or
1534 |             (signal == "SELL" and performance < -1) or
1535 |             (signal == "HOLD" and -1 <= performance <= 1)):
1536 |             result = "CORRECT DECISION"
1537 | 
1538 |         # Return a detailed evaluation report
1539 |         return (f"----- Ground Truth Evaluation Report -----\n"
1540 |                 f"Agent Signal: {signal} on {trade_date}\n"
1541 |                 f"Opening Price on {data.index[first_trading_day_index].strftime('%Y-%m-%d')}: ${open_price:.2f}\n"
1542 |                 f"Closing Price 5 days later ({data.index[first_trading_day_index+4].strftime('%Y-%m-%d')}): ${close_price_5_days_later:.2f}\n"
1543 |                 f"Actual Market Performance: {performance:+.2f}%\n"
1544 |                 f"Evaluation Result: {result}")
1545 | 
1546 |     # Catch-all error handling (network issues, bad ticker, etc.)
1547 |     except Exception as e:
1548 |         return f"Ground truth evaluation failed: {e}"
1549 | 
1550 | 
1551 | # Example usage: evaluates if the agent’s signal was correct in hindsight
1552 | ground_truth_report = evaluate_ground_truth(TICKER, TRADE_DATE, final_signal)
1553 | print(ground_truth_report)
1554 | ```
1555 | ```bash
1556 | ###### OUTPUT ##########
1557 | ----- Ground Truth Evaluation Report -----
1558 | Agent Signal: BUY on 2024-10-25
1559 | Opening Price on 2024-10-25: $128.50
1560 | Closing Price 5 days later (2024-11-01): $134.15
1561 | Actual Market Performance: +4.39%
1562 | Evaluation Result: CORRECT DECISION
1563 | ```
1564 | The ground truth evaluation confirms that the agent’s `BUY` signal was profitable, with the stock rising 4.39% in the following five days. This is the ultimate validation of the system's performance in this specific instance.
1565 | 
1566 | Finally, we’ll perform a **Factual Consistency Audit** to check if the agents are hallucinating or misrepresenting data. We’ll create an ‘Auditor’ agent to compare claims made in the Market Analyst’s report against data fetched directly from the tool.
1567 | ```python 1568 | from pydantic import BaseModel, Field 1569 | from langchain_core.prompts import ChatPromptTemplate 1570 | 1571 | # Define schema for audit results 1572 | class Audit(BaseModel): 1573 | is_consistent: bool = Field(description="Whether the report is factually consistent with the data.") 1574 | discrepancies: list[str] = Field(description="A list of any identified discrepancies.") 1575 | justification: str = Field(description="A brief justification for the audit result.") 1576 | 1577 | # Prompt template for the auditing task 1578 | # The auditor compares the raw data (truth source) against the agent's report 1579 | auditor_prompt = ChatPromptTemplate.from_template( 1580 | """You are an auditor. Compare the 'Agent Report' against the 'Raw Data' and check for factual consistency. 1581 | Ignore differences in formatting or summarization, but flag any direct contradictions or claims in the report 1582 | that are not supported by the data. 1583 | 1584 | Raw Data: 1585 | {raw_data} 1586 | 1587 | Agent Report to Audit: 1588 | {agent_report} 1589 | """ 1590 | ) 1591 | 1592 | # Chain: prompt → deep_thinking_llm → structured output following the Audit schema 1593 | auditor_chain = auditor_prompt | deep_thinking_llm.with_structured_output(Audit) 1594 | 1595 | # Pull ~60 days of technical indicator data for context leading up to the trade date 1596 | start_date_audit = ( 1597 | datetime.strptime(TRADE_DATE, "%Y-%m-%d") - timedelta(days=60) 1598 | ).strftime('%Y-%m-%d') 1599 | 1600 | raw_market_data_for_audit = toolkit.get_technical_indicators(TICKER, start_date_audit, TRADE_DATE) 1601 | 1602 | # Build input for the auditor (raw technical data + agent’s narrative report) 1603 | audit_input = { 1604 | "raw_data": raw_market_data_for_audit, 1605 | "agent_report": final_state['market_report'] 1606 | } 1607 | 1608 | # Run the chain → structured audit output 1609 | audit_result = auditor_chain.invoke(audit_input) 1610 | 1611 | # Pretty-print the audit results 
1612 | print("----- Factual Consistency Audit Report -----")
1613 | pprint(audit_result.model_dump())  # model_dump() is the pydantic-v2 replacement for the deprecated .dict()
1614 | ```
1615 | ```bash
1616 | ###### OUTPUT #######
1617 | ----- Factual Consistency Audit Report -----
1618 | {'discrepancies': [],
1619 |  'is_consistent': True,
1620 |  'justification': "The agent's report is factually consistent with the raw "
1621 |                   "data. It correctly identifies the bullish MACD, the upward "
1622 |                   'trend of the SMAs, and the expanding volatility shown by the '
1623 |                   'Bollinger Bands. There are no hallucinations or '
1624 |                   'misrepresentations of the provided technical indicators.'}
1625 | ```
1626 | The audit passes with `is_consistent: True`, confirming that our Market Analyst is not hallucinating and is accurately reporting on the data it retrieves from its tools. This is a crucial check for building trust in the system's outputs.
1627 | 
1628 | ## Key Takeaways and Future Directions
1629 | We have successfully built, executed, and evaluated a complex, standalone multi-agent financial analysis pipeline from scratch. By replicating the structure of a real-world trading firm, we’ve demonstrated how specialized agents can collaborate to transform raw, multi-source live data into a single, reasoned, and profitable trading decision.
1630 | 
1631 | **Key Takeaways:**
1632 | 
1633 | * Assigning specific roles to different agents allows for deeper, more focused analysis at each stage.
1634 | * The Bull vs. Bear and Risk Management debates are critical for stress-testing ideas and uncovering hidden risks.
1635 | * `LangGraph` on its own is sufficient to build a deep-thinking framework: it manages the complex state and conditional logic such a system needs to run automatically.
1636 | * A robust evaluation framework combines qualitative checks (LLM-as-a-Judge), objective outcomes (Ground Truth), and process checks (Factual Consistency, Tool Usage).
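That last takeaway can be made concrete: the three verdicts fold naturally into a single pass/fail scorecard. The field names below are hypothetical, but the shapes mirror this section's evaluation outputs:

```python
from dataclasses import dataclass

@dataclass
class Scorecard:
    judge_reasoning: int        # 1-10 score from the LLM-as-a-Judge
    ground_truth_correct: bool  # did the signal make money?
    factually_consistent: bool  # did the audit pass?

    def passes(self, min_reasoning: int = 7) -> bool:
        """A run passes only if every check clears its bar."""
        return (self.judge_reasoning >= min_reasoning
                and self.ground_truth_correct
                and self.factually_consistent)

card = Scorecard(judge_reasoning=9, ground_truth_correct=True, factually_consistent=True)
print(card.passes())  # True
```

Thresholds like `min_reasoning` would be tuned during backtesting rather than fixed up front.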
1637 | 
1638 | **Future Directions:**
1639 | 
1640 | * The next logical step is to run this pipeline over thousands of historical data points to statistically evaluate its long-term performance (Sharpe ratio, max drawdown, etc.).
1641 | * More sophisticated tools could be added, such as those for analyzing options data, economic calendars, or alternative datasets.
1642 | * A more advanced supervisor could dynamically choose which analysts to deploy based on the specific stock or market conditions, optimizing for cost and relevance.
1643 | 
1644 | > If you enjoyed this blog, feel free to [follow me on Medium](https://medium.com/@fareedkhandev). I only write there. --------------------------------------------------------------------------------