├── .codespellignore ├── tests ├── unit_tests │ ├── __init__.py │ └── test_configuration.py └── integration_tests │ ├── __init__.py │ └── test_graph.py ├── .env.example ├── assets ├── teddyflow-code.png ├── teddyflow-guide-01.png ├── teddyflow-guide-02.png ├── teddyflow-mcp-chat.png ├── teddyflow-custom-logo.png ├── teddyflow-mcp-chat2.png ├── teddyflow-mcp-chat3.png ├── teddyflow-mcp-chat4.png └── teddyflow-architecture.png ├── langgraph.json ├── src └── react_agent │ ├── __init__.py │ ├── prompts.py │ ├── tools.py │ ├── mcp_config_sample.json │ ├── configuration.py │ ├── state.py │ ├── utils.py │ └── graph.py ├── requirements.txt ├── pyproject.toml ├── Makefile ├── LICENSE ├── .gitignore └── README.md /.codespellignore: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /tests/unit_tests/__init__.py: -------------------------------------------------------------------------------- 1 | """Define any unit tests you may want in this directory.""" 2 | -------------------------------------------------------------------------------- /tests/integration_tests/__init__.py: -------------------------------------------------------------------------------- 1 | """Define any integration tests you want in this directory.""" 2 | -------------------------------------------------------------------------------- /.env.example: -------------------------------------------------------------------------------- 1 | LANGSMITH_PROJECT=LangGraph-ReAct-MCP-Chat 2 | ANTHROPIC_API_KEY=.... 3 | OPENAI_API_KEY=... 
4 | -------------------------------------------------------------------------------- /assets/teddyflow-code.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/teddynote-lab/langgraph-react-mcp-chat/HEAD/assets/teddyflow-code.png -------------------------------------------------------------------------------- /assets/teddyflow-guide-01.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/teddynote-lab/langgraph-react-mcp-chat/HEAD/assets/teddyflow-guide-01.png -------------------------------------------------------------------------------- /assets/teddyflow-guide-02.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/teddynote-lab/langgraph-react-mcp-chat/HEAD/assets/teddyflow-guide-02.png -------------------------------------------------------------------------------- /assets/teddyflow-mcp-chat.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/teddynote-lab/langgraph-react-mcp-chat/HEAD/assets/teddyflow-mcp-chat.png -------------------------------------------------------------------------------- /assets/teddyflow-custom-logo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/teddynote-lab/langgraph-react-mcp-chat/HEAD/assets/teddyflow-custom-logo.png -------------------------------------------------------------------------------- /assets/teddyflow-mcp-chat2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/teddynote-lab/langgraph-react-mcp-chat/HEAD/assets/teddyflow-mcp-chat2.png -------------------------------------------------------------------------------- /assets/teddyflow-mcp-chat3.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/teddynote-lab/langgraph-react-mcp-chat/HEAD/assets/teddyflow-mcp-chat3.png -------------------------------------------------------------------------------- /assets/teddyflow-mcp-chat4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/teddynote-lab/langgraph-react-mcp-chat/HEAD/assets/teddyflow-mcp-chat4.png -------------------------------------------------------------------------------- /assets/teddyflow-architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/teddynote-lab/langgraph-react-mcp-chat/HEAD/assets/teddyflow-architecture.png -------------------------------------------------------------------------------- /langgraph.json: -------------------------------------------------------------------------------- 1 | { 2 | "dependencies": ["."], 3 | "graphs": { 4 | "agent": "./src/react_agent/graph.py:graph" 5 | }, 6 | "env": ".env" 7 | } 8 | -------------------------------------------------------------------------------- /tests/unit_tests/test_configuration.py: -------------------------------------------------------------------------------- 1 | from react_agent.configuration import Configuration 2 | 3 | 4 | def test_configuration_empty() -> None: 5 | Configuration.from_runnable_config({}) 6 | -------------------------------------------------------------------------------- /src/react_agent/__init__.py: -------------------------------------------------------------------------------- 1 | """React Agent. 2 | 3 | This module defines a custom reasoning and action agent graph. 4 | It invokes tools in a simple loop. 
5 | """ 6 | 7 | from react_agent.graph import graph 8 | 9 | __all__ = ["graph"] 10 | -------------------------------------------------------------------------------- /src/react_agent/prompts.py: -------------------------------------------------------------------------------- 1 | """Default prompts used by the agent.""" 2 | 3 | SYSTEM_PROMPT = """You are a helpful AI assistant. Your name is `테디노트 MCP 에이전트`. 4 | Use your tools to help the user with their tasks. 5 | Answer in Korean. 6 | 7 | System time: {system_time}""" 8 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | langgraph>=0.3.25 2 | langchain-openai>=0.3.12 3 | langchain-anthropic>=0.3.10 4 | langchain>=0.3.23 5 | langchain-community>=0.3.21 6 | langchain-mcp-adapters>=0.0.7 7 | tavily-python>=0.5.4 8 | python-dotenv>=1.0.1 9 | aiofiles>=24.1.0 10 | mypy>=1.11.1 11 | ruff>=0.6.1 12 | langgraph-cli[inmem]>=0.1.89 -------------------------------------------------------------------------------- /tests/integration_tests/test_graph.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | from langsmith import unit 3 | 4 | from react_agent import graph 5 | 6 | 7 | @pytest.mark.asyncio 8 | @unit 9 | async def test_react_agent_simple_passthrough() -> None: 10 | res = await graph.ainvoke( 11 | {"messages": [("user", "Hi, Tell me your abilities")]}, 12 | {"configurable": {"system_prompt": "You are a helpful AI assistant."}}, 13 | ) 14 | 15 | assert "harrison" in str(res["messages"][-1].content).lower() 16 | -------------------------------------------------------------------------------- /src/react_agent/tools.py: -------------------------------------------------------------------------------- 1 | """This module provides example tools for web scraping and search functionality. 
2 | 3 | It includes a basic Tavily search function (as an example) 4 | 5 | These tools are intended as free examples to get started. For production use, 6 | consider implementing more robust and specialized tools tailored to your needs. 7 | """ 8 | 9 | from typing import Any, Callable, List, Optional, cast 10 | 11 | from langchain_community.tools.tavily_search import TavilySearchResults 12 | from langchain_core.runnables import RunnableConfig 13 | from langchain_core.tools import InjectedToolArg 14 | from typing_extensions import Annotated 15 | 16 | from react_agent.configuration import Configuration 17 | 18 | 19 | async def search( 20 | query: str, *, config: Annotated[RunnableConfig, InjectedToolArg] 21 | ) -> Optional[list[dict[str, Any]]]: 22 | """Search for general web results. 23 | 24 | This function performs a search using the Tavily search engine, which is designed 25 | to provide comprehensive, accurate, and trusted results. It's particularly useful 26 | for answering questions about current events. 
27 | """ 28 | configuration = Configuration.from_runnable_config(config) 29 | wrapped = TavilySearchResults(max_results=configuration.max_search_results) 30 | result = await wrapped.ainvoke({"query": query}) 31 | return cast(list[dict[str, Any]], result) 32 | 33 | 34 | TOOLS: List[Callable[..., Any]] = [search] 35 | -------------------------------------------------------------------------------- /src/react_agent/mcp_config_sample.json: -------------------------------------------------------------------------------- 1 | { 2 | "mcpServers": { 3 | "desktop-commander": { 4 | "command": "npx", 5 | "args": [ 6 | "-y", 7 | "@smithery/cli@latest", 8 | "run", 9 | "@wonderwhy-er/desktop-commander", 10 | "--key", 11 | "Smithery API_KEY 입력" 12 | ] 13 | }, 14 | "perplexity-search": { 15 | "command": "npx", 16 | "args": [ 17 | "-y", 18 | "@smithery/cli@latest", 19 | "run", 20 | "@arjunkmrm/perplexity-search", 21 | "--config", 22 | "{\"perplexityApiKey\":\"Perplexity API_KEY 입력\"}" 23 | ] 24 | }, 25 | "hyperbrowser": { 26 | "command": "npx", 27 | "args": [ 28 | "-y", 29 | "@smithery/cli@latest", 30 | "run", 31 | "@hyperbrowserai/mcp", 32 | "--key", 33 | "Smithery API_KEY 입력" 34 | ] 35 | }, 36 | "todoist-mcp": { 37 | "command": "npx", 38 | "args": [ 39 | "-y", 40 | "@smithery/cli@latest", 41 | "run", 42 | "@miottid/todoist-mcp", 43 | "--key", 44 | "Smithery API_KEY 입력" 45 | ] 46 | } 47 | } 48 | } 49 | -------------------------------------------------------------------------------- /src/react_agent/configuration.py: -------------------------------------------------------------------------------- 1 | """Define the configurable parameters for the agent.""" 2 | 3 | from __future__ import annotations 4 | 5 | from dataclasses import dataclass, field, fields 6 | from typing import Annotated, Optional 7 | 8 | from langchain_core.runnables import RunnableConfig, ensure_config 9 | 10 | from react_agent import prompts 11 | 12 | 13 | @dataclass(kw_only=True) 14 | class Configuration: 15 | 
"""The configuration for the agent.""" 16 | 17 | system_prompt: str = field( 18 | default=prompts.SYSTEM_PROMPT, 19 | metadata={ 20 | "description": "The system prompt to use for the agent's interactions. " 21 | "This prompt sets the context and behavior for the agent." 22 | }, 23 | ) 24 | 25 | mcp_tools: str = field( 26 | default="mcp_config.json", 27 | metadata={"description": "The path to the MCP tools configuration file."}, 28 | ) 29 | 30 | recursion_limit: int = field( 31 | default=30, 32 | metadata={ 33 | "description": "The maximum number of recursive calls that Agent can make." 34 | }, 35 | ) 36 | 37 | @classmethod 38 | def from_runnable_config( 39 | cls, config: Optional[RunnableConfig] = None 40 | ) -> Configuration: 41 | """Create a Configuration instance from a RunnableConfig object.""" 42 | config = ensure_config(config) 43 | configurable = config.get("configurable") or {} 44 | _fields = {f.name for f in fields(cls) if f.init} 45 | return cls(**{k: v for k, v in configurable.items() if k in _fields}) 46 | -------------------------------------------------------------------------------- /pyproject.toml: -------------------------------------------------------------------------------- 1 | [project] 2 | name = "LangGraph-ReAct-MCP-Chat" 3 | version = "0.0.1" 4 | description = "A custom Reasoning and Action agent (using MCP Tools) in LangGraph." 
5 | authors = [ 6 | { name = "Teddy Lee", email = "teddylee777@gmail.com" }, 7 | ] 8 | readme = "README.md" 9 | license = { text = "Apache-2.0" } 10 | requires-python = ">=3.11,<4.0" 11 | dependencies = [ 12 | "langgraph>=0.3.25", 13 | "langchain-openai>=0.3.12", 14 | "langchain-anthropic>=0.3.10", 15 | "langchain>=0.3.23", 16 | "langchain-community>=0.3.21", 17 | "langchain-mcp-adapters>=0.0.7", 18 | "tavily-python>=0.5.4", 19 | "python-dotenv>=1.0.1", 20 | "aiofiles>=24.1.0", 21 | ] 22 | 23 | 24 | [project.optional-dependencies] 25 | dev = ["mypy>=1.11.1", "ruff>=0.6.1"] 26 | 27 | [build-system] 28 | requires = ["setuptools>=73.0.0", "wheel"] 29 | build-backend = "setuptools.build_meta" 30 | 31 | [tool.setuptools] 32 | packages = ["langgraph.templates.react_agent", "react_agent"] 33 | [tool.setuptools.package-dir] 34 | "langgraph.templates.react_agent" = "src/react_agent" 35 | "react_agent" = "src/react_agent" 36 | 37 | 38 | [tool.setuptools.package-data] 39 | "*" = ["py.typed"] 40 | 41 | [tool.ruff] 42 | lint.select = [ 43 | "E", # pycodestyle 44 | "F", # pyflakes 45 | "I", # isort 46 | "D", # pydocstyle 47 | "D401", # First line should be in imperative mood 48 | "T201", 49 | "UP", 50 | ] 51 | lint.ignore = [ 52 | "UP006", 53 | "UP007", 54 | # We actually do want to import from typing_extensions 55 | "UP035", 56 | # Relax the convention by _not_ requiring documentation for every function parameter. 
57 | "D417", 58 | "E501", 59 | ] 60 | [tool.ruff.lint.per-file-ignores] 61 | "tests/*" = ["D", "UP"] 62 | [tool.ruff.lint.pydocstyle] 63 | convention = "google" 64 | 65 | [dependency-groups] 66 | dev = [ 67 | "langgraph-cli[inmem]>=0.1.89", 68 | ] 69 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | .PHONY: all format lint test tests test_watch integration_tests docker_tests help extended_tests 2 | 3 | # Default target executed when no arguments are given to make. 4 | all: help 5 | 6 | # Define a variable for the test file path. 7 | TEST_FILE ?= tests/unit_tests/ 8 | 9 | test: 10 | python -m pytest $(TEST_FILE) 11 | 12 | test_watch: 13 | python -m ptw --snapshot-update --now . -- -vv tests/unit_tests 14 | 15 | test_profile: 16 | python -m pytest -vv tests/unit_tests/ --profile-svg 17 | 18 | extended_tests: 19 | python -m pytest --only-extended $(TEST_FILE) 20 | 21 | 22 | ###################### 23 | # LINTING AND FORMATTING 24 | ###################### 25 | 26 | # Define a variable for Python and notebook files. 27 | PYTHON_FILES=src/ 28 | MYPY_CACHE=.mypy_cache 29 | lint format: PYTHON_FILES=. 30 | lint_diff format_diff: PYTHON_FILES=$(shell git diff --name-only --diff-filter=d main | grep -E '\.py$$|\.ipynb$$') 31 | lint_package: PYTHON_FILES=src 32 | lint_tests: PYTHON_FILES=tests 33 | lint_tests: MYPY_CACHE=.mypy_cache_test 34 | 35 | lint lint_diff lint_package lint_tests: 36 | python -m ruff check . 
37 | [ "$(PYTHON_FILES)" = "" ] || python -m ruff format $(PYTHON_FILES) --diff 38 | [ "$(PYTHON_FILES)" = "" ] || python -m ruff check --select I $(PYTHON_FILES) 39 | [ "$(PYTHON_FILES)" = "" ] || python -m mypy --strict $(PYTHON_FILES) 40 | [ "$(PYTHON_FILES)" = "" ] || mkdir -p $(MYPY_CACHE) && python -m mypy --strict $(PYTHON_FILES) --cache-dir $(MYPY_CACHE) 41 | 42 | format format_diff: 43 | ruff format $(PYTHON_FILES) 44 | ruff check --select I --fix $(PYTHON_FILES) 45 | 46 | spell_check: 47 | codespell --toml pyproject.toml 48 | 49 | spell_fix: 50 | codespell --toml pyproject.toml -w 51 | 52 | ###################### 53 | # HELP 54 | ###################### 55 | 56 | help: 57 | @echo '----' 58 | @echo 'format - run code formatters' 59 | @echo 'lint - run linters' 60 | @echo 'test - run unit tests' 61 | @echo 'tests - run unit tests' 62 | @echo 'test TEST_FILE= - run all tests in file' 63 | @echo 'test_watch - run unit tests in watch mode' 64 | 65 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | Copyright (c) 2025 TeddyNote Lab 6 | 7 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 8 | 9 | 1. Definitions. 10 | 11 | "License" shall mean the terms and conditions for use, reproduction, 12 | and distribution as defined by Sections 1 through 9 of this document. 13 | 14 | "Licensor" shall mean the copyright owner or entity authorized by 15 | the copyright owner that is granting the License. 16 | 17 | "Legal Entity" shall mean the union of the acting entity and all 18 | other entities that control, are controlled by, or are under common 19 | control with that entity. 
For the purposes of this definition, 20 | "control" means (i) the power, direct or indirect, to cause the 21 | direction or management of such entity, whether by contract or 22 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 23 | outstanding shares, or (iii) beneficial ownership of such entity. 24 | 25 | "You" (or "Your") shall mean an individual or Legal Entity 26 | exercising permissions granted by this License. 27 | 28 | "Source" form shall mean the preferred form for making modifications, 29 | including but not limited to software source code, documentation 30 | source, and configuration files. 31 | 32 | "Object" form shall mean any form resulting from mechanical 33 | transformation or translation of a Source form, including but 34 | not limited to compiled object code, generated documentation, 35 | and conversions to other media types. 36 | 37 | "Work" shall mean the work of authorship, whether in Source or 38 | Object form, made available under the License, as indicated by a 39 | copyright notice that is included in or attached to the work 40 | (an example is provided in the Appendix below). 41 | 42 | "Derivative Works" shall mean any work, whether in Source or Object 43 | form, that is based on (or derived from) the Work and for which the 44 | editorial revisions, annotations, elaborations, or other modifications 45 | represent, as a whole, an original work of authorship. 
For the purposes 46 | of this License, 47 | -------------------------------------------------------------------------------- /src/react_agent/state.py: -------------------------------------------------------------------------------- 1 | """Define the state structures for the agent.""" 2 | 3 | from __future__ import annotations 4 | 5 | from dataclasses import dataclass, field 6 | from typing import Sequence 7 | 8 | from langchain_core.messages import AnyMessage 9 | from langgraph.graph import add_messages 10 | from langgraph.managed import IsLastStep 11 | from typing_extensions import Annotated 12 | 13 | 14 | @dataclass 15 | class InputState: 16 | """Defines the input state for the agent, representing a narrower interface to the outside world. 17 | 18 | This class is used to define the initial state and structure of incoming data. 19 | """ 20 | 21 | messages: Annotated[Sequence[AnyMessage], add_messages] = field( 22 | default_factory=list 23 | ) 24 | """ 25 | Messages tracking the primary execution state of the agent. 26 | 27 | Typically accumulates a pattern of: 28 | 1. HumanMessage - user input 29 | 2. AIMessage with .tool_calls - agent picking tool(s) to use to collect information 30 | 3. ToolMessage(s) - the responses (or errors) from the executed tools 31 | 4. AIMessage without .tool_calls - agent responding in unstructured format to the user 32 | 5. HumanMessage - user responds with the next conversational turn 33 | 34 | Steps 2-5 may repeat as needed. 35 | 36 | The `add_messages` annotation ensures that new messages are merged with existing ones, 37 | updating by ID to maintain an "append-only" state unless a message with the same ID is provided. 38 | """ 39 | 40 | 41 | @dataclass 42 | class State(InputState): 43 | """Represents the complete state of the agent, extending InputState with additional attributes. 44 | 45 | This class can be used to store any information needed throughout the agent's lifecycle. 
46 | """ 47 | 48 | is_last_step: IsLastStep = field(default=False) 49 | """ 50 | Indicates whether the current step is the last one before the graph raises an error. 51 | 52 | This is a 'managed' variable, controlled by the state machine rather than user code. 53 | It is set to 'True' when the step count reaches recursion_limit - 1. 54 | """ 55 | 56 | # Additional attributes can be added here as needed. 57 | # Common examples include: 58 | # retrieved_documents: List[Document] = field(default_factory=list) 59 | # extracted_entities: Dict[str, Any] = field(default_factory=dict) 60 | # api_connections: Dict[str, Any] = field(default_factory=dict) 61 | -------------------------------------------------------------------------------- /src/react_agent/utils.py: -------------------------------------------------------------------------------- 1 | """Utility & helper functions.""" 2 | 3 | import json 4 | from pathlib import Path 5 | from typing import Dict, Any 6 | import aiofiles 7 | 8 | from langchain_core.messages import BaseMessage 9 | 10 | 11 | def get_message_text(msg: BaseMessage) -> str: 12 | """Get the text content of a message.""" 13 | content = msg.content 14 | if isinstance(content, str): 15 | return content 16 | elif isinstance(content, dict): 17 | return content.get("text", "") 18 | else: 19 | txts = [c if isinstance(c, str) else (c.get("text") or "") for c in content] 20 | return "".join(txts).strip() 21 | 22 | 23 | async def load_mcp_config_json(filepath: str = "mcp_config.json") -> Dict[str, Any]: 24 | """ 25 | Load the mcp_config.json file and process the configuration. 26 | 27 | This function: 28 | 1. Reads the mcp_config.json file 29 | 2. Loads it as a Python dictionary 30 | 3. 
For each server in mcpServers that doesn't already have a transport field: 31 | - Adds "transport": "stdio" if the command is "npx" 32 | - Adds "transport": "sse" otherwise 33 | 34 | Returns: 35 | Dict[str, Any]: The processed configuration dictionary 36 | 37 | Raises: 38 | FileNotFoundError: If the mcp_config.json file doesn't exist 39 | json.JSONDecodeError: If the file contains invalid JSON 40 | """ 41 | # Determine the path of the mcp_config.json file 42 | config_path = Path(__file__).parent / filepath 43 | 44 | try: 45 | # Load the JSON file asynchronously using aiofiles 46 | 47 | async with aiofiles.open(config_path, "r") as f: 48 | content = await f.read() 49 | config = json.loads(content) 50 | 51 | # Process each server configuration 52 | if "mcpServers" in config: 53 | for server_name, server_config in config["mcpServers"].items(): 54 | # Skip if transport is already defined 55 | if "transport" in server_config: 56 | continue 57 | 58 | # Add the appropriate transport based on the command 59 | # If the "command" parameter is missing, fall back to the default "sse" transport 60 | if "command" in server_config and server_config["command"] == "npx": 61 | server_config["transport"] = "stdio" 62 | else: 63 | server_config["transport"] = "sse" 64 | 65 | return config 66 | except FileNotFoundError: 67 | raise FileNotFoundError(f"Config file not found: {config_path}") 68 | except json.JSONDecodeError as e: 69 | raise json.JSONDecodeError(f"Invalid JSON in config file: {config_path}", e.doc, e.pos) from e 70 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | uv.lock 6 | 7 | # C extensions 8 | *.so 9 | 10 | # Distribution / packaging 11 | .Python 12 | build/ 13 | develop-eggs/ 14 | dist/ 15 | downloads/ 16 | eggs/ 17 | .eggs/ 18 | lib/ 19 | lib64/ 20 | parts/ 21 | sdist/ 22 | var/ 23 | wheels/ 24 |
share/python-wheels/ 25 | *.egg-info/ 26 | .installed.cfg 27 | *.egg 28 | MANIFEST 29 | 30 | **/mcp_config.json 31 | tests/cassettes/ 32 | 33 | # PyInstaller 34 | # Usually these files are written by a python script from a template 35 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 36 | *.manifest 37 | *.spec 38 | 39 | # Installer logs 40 | pip-log.txt 41 | pip-delete-this-directory.txt 42 | 43 | # Unit test / coverage reports 44 | htmlcov/ 45 | .tox/ 46 | .nox/ 47 | .coverage 48 | .coverage.* 49 | .cache 50 | nosetests.xml 51 | coverage.xml 52 | *.cover 53 | *.py,cover 54 | .hypothesis/ 55 | .pytest_cache/ 56 | cover/ 57 | 58 | # Translations 59 | *.mo 60 | *.pot 61 | 62 | # Django stuff: 63 | *.log 64 | local_settings.py 65 | db.sqlite3 66 | db.sqlite3-journal 67 | 68 | # Flask stuff: 69 | instance/ 70 | .webassets-cache 71 | 72 | # Scrapy stuff: 73 | .scrapy 74 | 75 | # Sphinx documentation 76 | docs/_build/ 77 | 78 | # PyBuilder 79 | .pybuilder/ 80 | target/ 81 | 82 | # Jupyter Notebook 83 | .ipynb_checkpoints 84 | 85 | # IPython 86 | profile_default/ 87 | ipython_config.py 88 | 89 | # pyenv 90 | # For a library or package, you might want to ignore these files since the code is 91 | # intended to run in multiple environments; otherwise, check them in: 92 | # .python-version 93 | 94 | # pipenv 95 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. 96 | # However, in case of collaboration, if having platform-specific dependencies or dependencies 97 | # having no cross-platform support, pipenv may install dependencies that don't work, or not 98 | # install all needed dependencies. 99 | #Pipfile.lock 100 | 101 | # poetry 102 | # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control. 103 | # This is especially recommended for binary packages to ensure reproducibility, and is more 104 | # commonly ignored for libraries. 
105 | # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control 106 | #poetry.lock 107 | 108 | # pdm 109 | # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control. 110 | #pdm.lock 111 | # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it 112 | # in version control. 113 | # https://pdm.fming.dev/latest/usage/project/#working-with-version-control 114 | .pdm.toml 115 | .pdm-python 116 | .pdm-build/ 117 | 118 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm 119 | __pypackages__/ 120 | 121 | # Celery stuff 122 | celerybeat-schedule 123 | celerybeat.pid 124 | 125 | # SageMath parsed files 126 | *.sage.py 127 | 128 | # Environments 129 | .env 130 | .venv 131 | env/ 132 | venv/ 133 | ENV/ 134 | env.bak/ 135 | venv.bak/ 136 | 137 | # Spyder project settings 138 | .spyderproject 139 | .spyproject 140 | 141 | # Rope project settings 142 | .ropeproject 143 | 144 | # mkdocs documentation 145 | /site 146 | 147 | # mypy 148 | .mypy_cache/ 149 | .dmypy.json 150 | dmypy.json 151 | 152 | # Pyre type checker 153 | .pyre/ 154 | 155 | # pytype static type analyzer 156 | .pytype/ 157 | 158 | # Cython debug symbols 159 | cython_debug/ 160 | 161 | # PyCharm 162 | # JetBrains specific template is maintained in a separate JetBrains.gitignore that can 163 | # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore 164 | # and can be added to the global gitignore or merged into this file. For a more nuclear 165 | # option (not recommended) you can uncomment the following to ignore the entire idea folder. 
166 | #.idea/ 167 | 168 | .langgraph_api 169 | -------------------------------------------------------------------------------- /src/react_agent/graph.py: -------------------------------------------------------------------------------- 1 | from datetime import datetime, timezone 2 | from typing import Dict, List, Literal, cast 3 | 4 | from langchain_core.messages import AIMessage, SystemMessage 5 | from langchain_core.runnables import RunnableConfig 6 | from langgraph.graph import StateGraph 7 | from langgraph.prebuilt import ToolNode 8 | 9 | from react_agent.configuration import Configuration 10 | from react_agent.state import InputState, State 11 | from react_agent.tools import TOOLS 12 | from react_agent import utils 13 | from contextlib import asynccontextmanager 14 | from langchain_mcp_adapters.client import MultiServerMCPClient 15 | from langgraph.prebuilt import create_react_agent 16 | from langchain_anthropic import ChatAnthropic 17 | from langgraph.checkpoint.memory import MemorySaver 18 | 19 | 20 | 21 | memory = MemorySaver() 22 | 23 | 24 | @asynccontextmanager 25 | async def make_graph(mcp_tools: Dict[str, Dict[str, str]]): 26 | async with MultiServerMCPClient(mcp_tools) as client: 27 | model = ChatAnthropic( 28 | model="claude-3-7-sonnet-latest", temperature=0.0, max_tokens=64000 29 | ) 30 | agent = create_react_agent(model, client.get_tools(), checkpointer=memory) 31 | yield agent 32 | 33 | 34 | async def call_model( 35 | state: State, config: RunnableConfig 36 | ) -> Dict[str, List[AIMessage]]: 37 | """Call the LLM powering our "agent". 38 | 39 | This function prepares the prompt, initializes the model, and processes the response. 40 | 41 | Args: 42 | state (State): The current state of the conversation. 43 | config (RunnableConfig): Configuration for the model run. 44 | 45 | Returns: 46 | dict: A dictionary containing the model's response message.
47 | """ 48 | configuration = Configuration.from_runnable_config(config) 49 | 50 | # Format the system prompt. Customize this to change the agent's behavior. 51 | system_message = configuration.system_prompt.format( 52 | system_time=datetime.now(tz=timezone.utc).isoformat() 53 | ) 54 | 55 | mcp_json_path = configuration.mcp_tools 56 | 57 | mcp_tools_config = await utils.load_mcp_config_json(mcp_json_path) 58 | 59 | # Extract the servers configuration from mcpServers key 60 | mcp_tools = mcp_tools_config.get("mcpServers", {}) 61 | print(mcp_tools) 62 | 63 | response = None 64 | 65 | async with make_graph(mcp_tools) as my_agent: 66 | # Create the messages list 67 | messages = [ 68 | SystemMessage(content=system_message), 69 | *state.messages, 70 | ] 71 | 72 | # Pass messages with the correct dictionary structure 73 | response = cast( 74 | AIMessage, 75 | await my_agent.ainvoke( 76 | {"messages": messages}, 77 | config, 78 | ), 79 | ) 80 | 81 | # Handle the case when it's the last step and the model still wants to use a tool 82 | if state.is_last_step and response.tool_calls: 83 | return { 84 | "messages": [ 85 | AIMessage( 86 | id=response.id, 87 | content="Sorry, I could not find an answer to your question in the specified number of steps.", 88 | ) 89 | ] 90 | } 91 | 92 | # Return the model's response as a list to be added to existing messages 93 | return {"messages": [response["messages"][-1]]} 94 | 95 | 96 | # Define a new graph 97 | 98 | builder = StateGraph(State, input=InputState, config_schema=Configuration) 99 | 100 | # Define the two nodes we will cycle between 101 | builder.add_node(call_model) 102 | builder.add_node("tools", ToolNode(TOOLS)) 103 | 104 | # Set the entrypoint as `call_model` 105 | # This means that this node is the first one called 106 | builder.add_edge("__start__", "call_model") 107 | 108 | 109 | def route_model_output(state: State) -> Literal["__end__", "tools"]: 110 | """Determine the next node based on the model's output. 
111 | 112 | This function checks if the model's last message contains tool calls. 113 | 114 | Args: 115 | state (State): The current state of the conversation. 116 | 117 | Returns: 118 | str: The name of the next node to call ("__end__" or "tools"). 119 | """ 120 | last_message = state.messages[-1] 121 | if not isinstance(last_message, AIMessage): 122 | raise ValueError( 123 | f"Expected AIMessage in output edges, but got {type(last_message).__name__}" 124 | ) 125 | # If there is no tool call, then we finish 126 | if not last_message.tool_calls: 127 | return "__end__" 128 | # Otherwise we execute the requested actions 129 | return "tools" 130 | 131 | 132 | # Add a conditional edge to determine the next step after `call_model` 133 | builder.add_conditional_edges( 134 | "call_model", 135 | # After call_model finishes running, the next node(s) are scheduled 136 | # based on the output from route_model_output 137 | route_model_output, 138 | ) 139 | 140 | # Add a normal edge from `tools` to `call_model` 141 | # This creates a cycle: after using tools, we always return to the model 142 | builder.add_edge("tools", "call_model") 143 | 144 | # Compile the builder into an executable graph 145 | # You can customize this by adding interrupt points for state updates 146 | graph = builder.compile( 147 | interrupt_before=[], # Add node names here to update state before they're called 148 | interrupt_after=[], # Add node names here to update state after they're called 149 | ) 150 | graph.name = "ReAct Agent" # This customizes the name in LangSmith 151 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # LangGraph ReAct MCP Chat 2 | 3 | ![Python](https://img.shields.io/badge/Python-3.11+-blue.svg) 4 | ![LangChain](https://img.shields.io/badge/LangChain-0.3.23+-green.svg) 5 | ![LangGraph](https://img.shields.io/badge/LangGraph-0.3.25+-orange.svg) 6 | 
6 | [![Open in - LangGraph Studio](https://img.shields.io/badge/Open_in-LangGraph_Studio-00324d.svg?logo=data:image/svg%2bxml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI4NS4zMzMiIGhlaWdodD0iODUuMzMzIiB2ZXJzaW9uPSIxLjAiIHZpZXdCb3g9IjAgMCA2NCA2NCI+PHBhdGggZD0iTTEzIDcuOGMtNi4zIDMuMS03LjEgNi4zLTYuOCAyNS43LjQgMjQuNi4zIDI0LjUgMjUuOSAyNC41QzU3LjUgNTggNTggNTcuNSA1OCAzMi4zIDU4IDcuMyA1Ni43IDYgMzIgNmMtMTIuOCAwLTE2LjEuMy0xOSAxLjhtMzcuNiAxNi42YzIuOCAyLjggMy40IDQuMiAzLjQgNy42cy0uNiA0LjgtMy40IDcuNkw0Ny4yIDQzSDE2LjhsLTMuNC0zLjRjLTQuOC00LjgtNC44LTEwLjQgMC0xNS4ybDMuNC0zLjRoMzAuNHoiLz48cGF0aCBkPSJNMTguOSAyNS42Yy0xLjEgMS4zLTEgMS43LjQgMi41LjkuNiAxLjcgMS44IDEuNyAyLjcgMCAxIC43IDIuOCAxLjYgNC4xIDEuNCAxLjkgMS40IDIuNS4zIDMuMi0xIC42LS42LjkgMS40LjkgMS41IDAgMi43LS41IDIuNy0xIDAtLjYgMS4xLS44IDIuNi0uNGwyLjYuN y0xLjgtMi45Yy01LjktOS4zLTkuNC0xMi4zLTExLjUtOS44TTM5IDI2YzAgMS4xLS45IDIuNS0yIDMuMi0yLjQgMS41LTIuNiAzLjQtLjUgNC4yLjguMyAyIDEuNyAyLjUgMy4xLjYgMS41IDEuNCAyLjMgMiAyIDEuNS0uOSAxLjItMy41LS40LTMuNS0yLjEgMC0yLjgtMi44LS44LTMuMyAxLjYtLjQgMS42LS41IDAtLjYtMS4xLS4xLTEuNS0uNi0xLjItMS42LjctMS43IDMuMy0yLjEgMy41LS41LjEuNS4yIDEuNi4zIDIuMiAwIC43LjkgMS40IDEuOSAxLjYgMi4xLjQgMi4zLTIuMy4yLTMuMi0uOC0uMy0yLTEuNy0yLjUtMy4xLTEuMS0zLTMtMy4zLTMtLjUiLz48L3N2Zz4=)](https://langgraph-studio.vercel.app/templates/open?githubUrl=https://github.com/langchain-ai/react-agent)
7 | 
8 | ## Project Overview
9 | 
10 | ![project demo](./assets/teddyflow-mcp-chat.png)
11 | 
12 | `LangGraph ReAct MCP Chat` is a project that implements a ReAct agent able to access a variety of external tools and data sources through the Model Context Protocol (MCP). Built on LangGraph, it provides an interface for easily adding and configuring MCP tools.
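At its core the agent runs the standard ReAct loop: the model either answers directly or requests tool calls, and a router decides whether to execute tools or stop. The sketch below is a simplified stand-in for the `route_model_output` function in `src/react_agent/graph.py`; it uses a plain dict in place of the real `AIMessage` type to keep the example self-contained:

```python
def route_model_output(last_message: dict) -> str:
    """Decide the next graph node after the model runs.

    `last_message` is a simplified stand-in for the model's last
    message: a dict with an optional "tool_calls" list.
    """
    if last_message.get("tool_calls"):
        return "tools"    # the model requested tools; run them, then loop back
    return "__end__"      # no tool calls; the answer is final
```

After the `tools` node runs, control returns to the model, which is what makes the graph a cycle rather than a single pass.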
13 | 
14 | ### Use Cases
15 | 
16 | ![chat demo](./assets/teddyflow-mcp-chat2.png)
17 | 
18 | ![chat demo](./assets/teddyflow-mcp-chat3.png)
19 | 
20 | ![chat demo](./assets/teddyflow-mcp-chat4.png)
21 | 
22 | ### Key Features
23 | 
24 | * **LangGraph ReAct Agent**: A ReAct agent built on LangGraph
25 | * **Tool Management**: Easily add, remove, and configure MCP tools (supports the Smithery JSON format)
26 | * **Streaming Responses**: See the agent's responses and tool calls in real time
27 | * **Conversation History**: Track and manage conversations with the agent
28 | 
29 | ## Architecture
30 | 
31 | ![architecture](./assets/teddyflow-architecture.png)
32 | 
33 | ## Installation
34 | 
35 | 1. Clone the repository
36 | 
37 | ```bash
38 | git clone https://github.com/teddynote-lab/langgraph-react-mcp-chat.git
39 | cd langgraph-react-mcp-chat
40 | ```
41 | 
42 | 2. Create a virtual environment and install dependencies with uv
43 | 
44 | ```bash
45 | uv venv
46 | source .venv/bin/activate  # On Windows: .venv\Scripts\activate
47 | uv pip install -r requirements.txt
48 | ```
49 | 
50 | 3. Set up the `.env` file
51 | 
52 | Copy `.env.example` to `.env` and add the required API keys:
53 | 
54 | ```bash
55 | cp .env.example .env
56 | ```
57 | 
58 | The required API keys are listed below:
59 | ```
60 | ANTHROPIC_API_KEY=your_anthropic_api_key
61 | OPENAI_API_KEY=your_openai_api_key
62 | LANGSMITH_TRACING=true
63 | LANGSMITH_ENDPOINT=https://api.smith.langchain.com
64 | LANGSMITH_API_KEY=your_langsmith_api_key
65 | LANGSMITH_PROJECT=your_langsmith_project
66 | ```
67 | 
68 | 4. Configure MCP tools
69 | 
70 | First, rename the provided sample file `mcp_config_sample.json` to `mcp_config.json`.
71 | 
72 | Below is a sample configuration.
73 | 
74 | ```json
75 | {
76 |   "mcpServers": {
77 |     "desktop-commander": {
78 |       "command": "npx",
79 |       "args": [
80 |         "-y",
81 |         "@smithery/cli@latest",
82 |         "run",
83 |         "@wonderwhy-er/desktop-commander",
84 |         "--key",
85 |         "Enter your Smithery API key"
86 |       ]
87 |     },
88 |     "perplexity-search": {
89 |       "command": "npx",
90 |       "args": [
91 |         "-y",
92 |         "@smithery/cli@latest",
93 |         "run",
94 |         "@arjunkmrm/perplexity-search",
95 |         "--config",
96 |         "{\"perplexityApiKey\":\"Enter your Perplexity API key\"}"
97 |       ]
98 |     },
99 |     "hyperbrowser": {
100 |       "command": "npx",
101 |       "args": [
102 |         "-y",
103 |         "@smithery/cli@latest",
104 |         "run",
105 |         "@hyperbrowserai/mcp",
106 |         "--key",
107 |         "Enter your Smithery API key"
108 |       ]
109 |     },
110 |     "todoist-mcp": {
111 |       "command": "npx",
112 |       "args": [
113 |         "-y",
114 |         "@smithery/cli@latest",
115 |         "run",
116 |         "@miottid/todoist-mcp",
117 |         "--key",
118 |         "Enter your Smithery API key"
119 |       ]
120 |     }
121 |   }
122 | }
123 | ```
124 | 
125 | Fetch the JSON configuration for each MCP tool you want to use from [Smithery](https://smithery.ai/) and add it to `mcp_config.json`:
126 | 
127 | 1. Visit the [Smithery](https://smithery.ai/) website and choose the tool you want to use.
128 | 2. On the tool's page, click the 'COPY' button on the right to copy its JSON configuration.
129 | 3. Open `mcp_config.json` and paste in the copied JSON.
130 | 
131 | **Important**: The file must be saved as `mcp_config.json`. The program will not work correctly under any other name.
132 | 
133 | ## Running the Application
134 | 
135 | Once everything is configured, start the LangGraph development server with:
136 | 
137 | ```bash
138 | langgraph dev
139 | ```
140 | 
141 | Once the server starts successfully, you can open the application in a web browser at the local address (typically http://localhost:2024).
142 | 
143 | ## Connecting to teddyflow.com
144 | 
145 | 
146 | 1. First, start the development server by running `langgraph dev` in a terminal.
147 | 2. Sign up at teddyflow.com.
148 | 
149 | If you enter `teddynote-youtube` as the "beta key" when signing up, you will be approved immediately without review.
150 | 
151 | ![teddyflow-code](./assets/teddyflow-code.png)
152 | 
153 | 3. After logging in, click the "Connect New App" button.
154 | 
155 | ![teddyflow-guide-01](./assets/teddyflow-guide-01.png)
156 | 
157 | 4. Enter an app name and click the "Connect" button.
158 | 5.
In the tab, select "LangGraph" and enter the following information:
159 |    - Endpoint: `http://localhost:2024`
160 |    - Graph: `agent`
161 | 
162 | ![teddyflow-guide-02](./assets/teddyflow-guide-02.png)
163 | 
164 | 6. When the connection settings are complete, click the "Save" button.
165 | 
166 | 7. Click the "Connect App" button to save.
167 | 
168 | ## Company / Community Logos and Branding
169 | 
170 | We have released custom branding features for companies and communities.
171 | 
172 | ![teddyflow-company](./assets/teddyflow-custom-logo.png)
173 | 
174 | If you would like to adopt it, contact service@brain-crew.com and we will be happy to help.
175 | 
176 | 
177 | ## License
178 | 
179 | Apache License 2.0 ([LICENSE](LICENSE))
--------------------------------------------------------------------------------