├── .gitignore ├── 01 LangChain Hello World ├── README.md └── main.py ├── 02 LangGraph Hello World ├── README.md └── main.py ├── 03 Graph Chain Hello World ├── README.md └── main.py ├── 04 State Graph ├── README.md └── main.py ├── 05 Agents with Tool ├── README.md └── main.py ├── 06 Agent Choose Tool ├── README.md ├── main.py └── old │ ├── 01.py │ ├── 02.py │ └── 03.py ├── 07 ollama RAG ├── README.md ├── main.py ├── old_ver │ ├── 01_ollama_embed.py │ ├── 02_embed_ask.py │ └── 03_chunk.py └── testdata │ └── product.md ├── 08 SubGraph ├── README.md └── main.py ├── CONTRIBUTING.md ├── LICENSE └── README.md /.gitignore: -------------------------------------------------------------------------------- 1 | 06 llama_index/credentials.ini 2 | 3 | 4 | 5 | # Byte-compiled / optimized / DLL files 6 | __pycache__/ 7 | *.py[cod] 8 | *$py.class 9 | 10 | # C extensions 11 | *.so 12 | 13 | # Distribution / packaging 14 | .Python 15 | build/ 16 | develop-eggs/ 17 | dist/ 18 | downloads/ 19 | eggs/ 20 | .eggs/ 21 | lib/ 22 | lib64/ 23 | parts/ 24 | sdist/ 25 | var/ 26 | wheels/ 27 | share/python-wheels/ 28 | *.egg-info/ 29 | .installed.cfg 30 | *.egg 31 | MANIFEST 32 | 33 | # PyInstaller 34 | # Usually these files are written by a python script from a template 35 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 36 | *.manifest 37 | *.spec 38 | 39 | # Installer logs 40 | pip-log.txt 41 | pip-delete-this-directory.txt 42 | 43 | # Unit test / coverage reports 44 | htmlcov/ 45 | .tox/ 46 | .nox/ 47 | .coverage 48 | .coverage.* 49 | .cache 50 | nosetests.xml 51 | coverage.xml 52 | *.cover 53 | *.py,cover 54 | .hypothesis/ 55 | .pytest_cache/ 56 | cover/ 57 | 58 | # Translations 59 | *.mo 60 | *.pot 61 | 62 | # Django stuff: 63 | *.log 64 | local_settings.py 65 | db.sqlite3 66 | db.sqlite3-journal 67 | 68 | # Flask stuff: 69 | instance/ 70 | .webassets-cache 71 | 72 | # Scrapy stuff: 73 | .scrapy 74 | 75 | # Sphinx documentation 76 | docs/_build/ 77 | 78 | # PyBuilder 79 | .pybuilder/ 80 | target/ 81 | 82 | # Jupyter Notebook 83 | .ipynb_checkpoints 84 | 85 | # IPython 86 | profile_default/ 87 | ipython_config.py 88 | 89 | # pyenv 90 | # For a library or package, you might want to ignore these files since the code is 91 | # intended to run in multiple environments; otherwise, check them in: 92 | # .python-version 93 | 94 | # pipenv 95 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. 96 | # However, in case of collaboration, if having platform-specific dependencies or dependencies 97 | # having no cross-platform support, pipenv may install dependencies that don't work, or not 98 | # install all needed dependencies. 99 | #Pipfile.lock 100 | 101 | # poetry 102 | # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control. 103 | # This is especially recommended for binary packages to ensure reproducibility, and is more 104 | # commonly ignored for libraries. 105 | # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control 106 | #poetry.lock 107 | 108 | # pdm 109 | # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control. 110 | #pdm.lock 111 | # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it 112 | # in version control. 113 | # https://pdm.fming.dev/#use-with-ide 114 | .pdm.toml 115 | 116 | # PEP 582; used by e.g. 
github.com/David-OConnor/pyflow and github.com/pdm-project/pdm 117 | __pypackages__/ 118 | 119 | # Celery stuff 120 | celerybeat-schedule 121 | celerybeat.pid 122 | 123 | # SageMath parsed files 124 | *.sage.py 125 | 126 | # Environments 127 | .env 128 | .venv 129 | env/ 130 | venv/ 131 | ENV/ 132 | env.bak/ 133 | venv.bak/ 134 | 135 | # Spyder project settings 136 | .spyderproject 137 | .spyproject 138 | 139 | # Rope project settings 140 | .ropeproject 141 | 142 | # mkdocs documentation 143 | /site 144 | 145 | # mypy 146 | .mypy_cache/ 147 | .dmypy.json 148 | dmypy.json 149 | 150 | # Pyre type checker 151 | .pyre/ 152 | 153 | # pytype static type analyzer 154 | .pytype/ 155 | 156 | # Cython debug symbols 157 | cython_debug/ 158 | 159 | # PyCharm 160 | # JetBrains specific template is maintained in a separate JetBrains.gitignore that can 161 | # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore 162 | # and can be added to the global gitignore or merged into this file. For a more nuclear 163 | # option (not recommended) you can uncomment the following to ignore the entire idea folder. 164 | #.idea/ 165 | /.vs 166 | -------------------------------------------------------------------------------- /01 LangChain Hello World/README.md: -------------------------------------------------------------------------------- 1 | # Project Overview 2 | 3 | This project demonstrates the use of the `langchain` and `langchain-community` libraries to interact with a locally hosted language model using the `ChatOllama` class. The script formats a prompt, sends it to the model, and processes the response. 4 | 5 | ## Environment Setup 6 | 7 | ### Prerequisites 8 | 9 | - Python 3.11 10 | - `langchain-community` and `langchain-core` libraries 11 | - `ollama` server 12 | 13 | ### Installing Dependencies 14 | 15 | Install the necessary Python packages: 16 | ```sh 17 | pip install langchain-community langchain-core "tenacity<8.4.0" 18 | ``` 19 | 20 | ### Setting Up the Ollama Server 21 | 22 | Follow these steps to set up and run the `ollama` server: 23 | 24 | 1. Open a terminal and start the `ollama` server: 25 | ```sh 26 | ollama serve 27 | ``` 28 | 29 | 2. Open another terminal and run the Python script: 30 | ```sh 31 | python main.py 32 | ``` 33 | 34 | ## Core Explanation 35 | 36 | ### Specifying the Local Language Model 37 | 38 | Specify the local language model to be used (`phi3` in this case): 39 | 40 | ```python 41 | local_llm = "phi3" 42 | ``` 43 | 44 | ### Initializing the ChatOllama Model 45 | 46 | Initialize the `ChatOllama` model with desired parameters: 47 | 48 | ```python 49 | llm = ChatOllama(model=local_llm, format="json", temperature=0) 50 | ``` 51 | 52 | ### Defining the Prompt Template 53 | 54 | Define the prompt template to be used: 55 | 56 | ```python 57 | template = "Question: {question}\nAnswer: Let's think step by step." 58 | 59 | prompt = PromptTemplate.from_template(template) 60 | ``` 61 | 62 | ### Formatting the Prompt 63 | 64 | Format the prompt with the input variable: 65 | 66 | ```python 67 | formatted_prompt = prompt.format(question="Tell me about you") 68 | ``` 69 | 70 | ### Creating the LLM Chain 71 | 72 | Create the LLM chain to process the prompt and parse the output. Note that the chain already starts with `prompt`, so invoking it with the pre-formatted string passes that whole string back in as the `{question}` value; invoking with `{"question": "Tell me about you"}` instead would avoid formatting the prompt twice: 73 | 74 | ```python 75 | llm_chain = prompt | llm | StrOutputParser() 76 | generation = llm_chain.invoke(formatted_prompt) 77 | ``` 78 | 79 | ## Running the Script 80 | 81 | 1. 
Ensure the `ollama` server is running as described in the [Setting Up the Ollama Server](#setting-up-the-ollama-server) section. 82 | 2. Run the Python script in another terminal: 83 | ```sh 84 | python main.py 85 | ``` 86 | -------------------------------------------------------------------------------- /01 LangChain Hello World/main.py: -------------------------------------------------------------------------------- 1 | from langchain_community.chat_models import ChatOllama 2 | from langchain_core.prompts import PromptTemplate 3 | from langchain_core.output_parsers import StrOutputParser 4 | 5 | # Specify the local language model 6 | local_llm = "phi3" 7 | 8 | # Initialize the ChatOllama model with desired parameters 9 | llm = ChatOllama(model=local_llm, format="json", temperature=0) 10 | 11 | # Define the prompt template 12 | template = "Question: {question}\nAnswer: Let's think step by step." 13 | prompt = PromptTemplate.from_template(template) 14 | 15 | # Format the prompt with the input variable 16 | formatted_prompt = prompt.format(question="Tell me about you") 17 | 18 | # Define the LangChain pipeline (prompt -> llm -> output parser) 19 | llm_chain = prompt | llm | StrOutputParser() 20 | generation = llm_chain.invoke(formatted_prompt) 21 | 22 | print(generation) 23 | -------------------------------------------------------------------------------- /02 LangGraph Hello World/README.md: -------------------------------------------------------------------------------- 1 | # 02 LangGraph Hello World 2 | 3 | This project demonstrates the use of the `langgraph` library to create a simple workflow graph with two nodes. Each node represents a function that processes input data and passes it to the next node. 4 | 5 | This tutorial is based on [Learn LangGraph - The Easy Way](https://www.youtube.com/watch?v=R8KB-Zcynxc). 6 | 7 | ## Environment Setup 8 | 9 | ### Prerequisites 10 | 11 | - Python 3.11 12 | - `langgraph` library 13 | 14 | ### Installing Dependencies 15 | 16 | Install the necessary Python package: 17 | ```sh 18 | pip install langgraph 19 | ``` 20 | 21 | ## Core Explanation 22 | 23 | ### Defining Functions 24 | 25 | Define two simple nodes to be used in the workflow: 26 | 27 | ```python 28 | def function_1(input_1): 29 | return input_1 + " Hello " 30 | 31 | def function_2(input_2): 32 | return input_2 + "World!" 
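# (Illustrative note, not part of the original example.) When these two nodes are
# chained below, node_1's return value becomes node_2's input, so
# function_2(function_1("langgraph: ")) returns "langgraph:  Hello World!"
# (the double space comes from concatenating the trailing space with " Hello ").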
33 | ``` 34 | 35 | ### Creating the Workflow Graph 36 | 37 | Create a `Graph` instance and add the defined functions as nodes: 38 | 39 | ```python 40 | # Define a LangGraph graph 41 | workflow = Graph() 42 | 43 | workflow.add_node("node_1", function_1) 44 | workflow.add_node("node_2", function_2) 45 | ``` 46 | 47 | ### Connecting the Nodes 48 | 49 | Connect the nodes to define the workflow: 50 | 51 | ```python 52 | workflow.add_edge('node_1', 'node_2') 53 | ``` 54 | 55 | ### Setting Entry and Finish Points 56 | 57 | Set the entry and finish points of the workflow: 58 | 59 | ```python 60 | workflow.set_entry_point("node_1") 61 | workflow.set_finish_point("node_2") 62 | ``` 63 | 64 | ### Compiling and Invoking the Workflow 65 | 66 | Compile the workflow and invoke it: 67 | 68 | ```python 69 | app = workflow.compile() 70 | 71 | output = app.invoke("langgraph: ") 72 | 73 | print(output) 74 | ``` 75 | 76 | ## Running the Script 77 | 78 | Run the Python script: 79 | ```sh 80 | python main.py 81 | ``` -------------------------------------------------------------------------------- /02 LangGraph Hello World/main.py: -------------------------------------------------------------------------------- 1 | from langgraph.graph import Graph 2 | 3 | 4 | def function_1(input_1): 5 | return input_1 + " Hello " 6 | 7 | def function_2(input_2): 8 | return input_2 + "World!" 9 | 10 | 11 | # Define a LangGraph graph 12 | workflow = Graph() 13 | 14 | workflow.add_node("node_1", function_1) 15 | workflow.add_node("node_2", function_2) 16 | 17 | workflow.add_edge('node_1', 'node_2') 18 | 19 | workflow.set_entry_point("node_1") 20 | workflow.set_finish_point("node_2") 21 | 22 | app = workflow.compile() 23 | 24 | print(app.invoke("langgraph: ")) -------------------------------------------------------------------------------- /03 Graph Chain Hello World/README.md: -------------------------------------------------------------------------------- 1 | # 03 Graph Chain Hello World 2 | 3 | This project demonstrates the use of the `langgraph` and `langchain-community` libraries to create a workflow graph that interacts with a local language model to generate and process content. 4 | 5 | This tutorial is based on [Learn LangGraph - The Easy Way](https://www.youtube.com/watch?v=R8KB-Zcynxc). 6 | 7 | ## Core Explanation 8 | 9 | 10 | ### Defining Functions 11 | 12 | Because the Agent node asks the model to reply in JSON, the Tool node can parse that output directly; a condensed sketch of the Agent node follows the run instructions below. 13 | 14 | ```python 15 | def Tool(input): 16 | print("Tool Stage input:" + input) 17 | # Parse the JSON input 18 | data = json.loads(input) 19 | # Extract the "content" and "filename" parts 20 | content = data.get("content", "") 21 | filename = data.get("filename", "output.md") 22 | # Write the content to the specified filename 23 | with open(filename, 'w') as file: 24 | file.write(content) 25 | return input 26 | ``` 27 | 28 | ## Running the Script 29 | 30 | 1. Ensure all dependencies are installed (see the 01 LangChain Hello World README). 31 | 2. If required, ensure the `ollama` server is running (see "Setting Up the Ollama Server" in the 01 LangChain Hello World README). 32 | 3. Run the Python script: 33 | ```sh 34 | python main.py 35 | ```
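For reference, the JSON that `Tool` parses comes from the `Agent` node defined in `main.py` below; condensed, it looks like this:

```python
def Agent(question):
    # Ask the model to answer as JSON with "filename" and "content" keys
    template = """
    Question: {question} Let's think step by step.
    your output format is filename:"" and content:""
    make sure your output is right json
    """
    prompt = PromptTemplate.from_template(template)
    llm_chain = prompt | llm | StrOutputParser()
    return llm_chain.invoke(prompt.format(question=question))
```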
-------------------------------------------------------------------------------- /03 Graph Chain Hello World/main.py: -------------------------------------------------------------------------------- 1 | from langgraph.graph import Graph 2 | 3 | from langchain_community.chat_models import ChatOllama 4 | from langchain_core.prompts import PromptTemplate 5 | from langchain_core.output_parsers import StrOutputParser 6 | 7 | import json 8 | 9 | 10 | # Specify the local language model 11 | local_llm = "phi3" 12 | 13 | # Initialize the ChatOllama model with desired parameters 14 | llm = ChatOllama(model=local_llm, format="json", temperature=0) 15 | 16 | 17 | def Agent(question): 18 | # Define the prompt template 19 | template = """ 20 | Question: {question} Let's think step by step. 21 | your output format is filename:"" and content:"" 22 | make sure your output is right json 23 | """ 24 | 25 | prompt = PromptTemplate.from_template(template) 26 | 27 | # Format the prompt with the input variable 28 | formatted_prompt = prompt.format(question=question) 29 | 30 | llm_chain = prompt | llm | StrOutputParser() 31 | generation = llm_chain.invoke(formatted_prompt) 32 | 33 | return generation 34 | 35 | 36 | 37 | def Tool(input): 38 | print("Tool Stage input:" + input) 39 | # Parse the JSON input 40 | data = json.loads(input) 41 | # Extract the "content" and "filename" parts 42 | content = data.get("content", "") 43 | filename = data.get("filename", "output.md") 44 | # Write the content to the specified filename 45 | with open(filename, 'w') as file: 46 | file.write(content) 47 | return input 48 | 49 | # Define a LangGraph graph 50 | workflow = Graph() 51 | 52 | workflow.add_node("agent", Agent) 53 | workflow.add_node("tool", Tool) 54 | 55 | workflow.add_edge('agent', 'tool') 56 | 57 | workflow.set_entry_point("agent") 58 | workflow.set_finish_point("tool") 59 | 60 | app = workflow.compile() 61 | 62 | app.invoke("write an article, content is startup.md ") 63 | -------------------------------------------------------------------------------- /04 State Graph/README.md: -------------------------------------------------------------------------------- 1 | # State Machine in LangGraph 2 | 3 | This project demonstrates the use of the `langgraph` library to create a state machine for a simple lottery case. The workflow consists of nodes representing different states of the lottery process, where the player keeps buying tickets until they win. 4 | 5 | This tutorial draws on: 6 | * [Learn LangGraph - The Easy Way](https://www.youtube.com/watch?v=R8KB-Zcynxc). 
7 | * [langgraph_code_assistant.ipynb](https://github.com/langchain-ai/langgraph/blob/main/examples/code_assistant/langgraph_code_assistant.ipynb) 8 | 9 | ## Core Explanation 10 | 11 | * We need a shared State object (the global state of the machine) to store information such as the chat history 12 | * We need a conditional edge so the state machine can switch states 13 | 14 | ### Defining Node and Edge Functions 15 | 16 | ```mermaid 17 | graph TD; 18 | A((Buy lottery)) --> C{Check prize}; 19 | C -- Yes --> D[Go home]; 20 | C -- No --> A; 21 | 22 | ``` 23 | 24 | Define a `TypedDict` for the lottery state and functions representing different actions in the lottery process: 25 | 26 | ```python 27 | # for state 28 | class LotteryState(TypedDict): 29 | input: str 30 | winnings: Union[str, None] 31 | missed: Union[str, None] 32 | 33 | 34 | # for node 35 | def BuyLottery(state: LotteryState): 36 | random_number = random.randint(0, 2) 37 | print("buy number: " + str(random_number)) 38 | state['input'] = random_number 39 | return state 40 | 41 | # for node 42 | def Checking(state: LotteryState): 43 | prize_number = random.randint(0, 2) 44 | print("prize number: " + str(prize_number)) 45 | if state['input'] == prize_number: 46 | state['winnings'] = "win" 47 | return state 48 | else: 49 | state['missed'] = "missed" 50 | return state 51 | 52 | # for conditional edges 53 | def checking_result(state: LotteryState) -> Literal["win", "missed"]: 54 | if state.get("winnings") == "win": 55 | print("You win! Go home.") 56 | return "win" 57 | else: 58 | print("You missed. Buy again.") 59 | return "missed" 60 | ``` 61 | 62 | ### Creating the State Machine Workflow 63 | 64 | Create a `StateGraph` instance and add the defined functions as nodes: 65 | 66 | ```python 67 | # Define a LangGraph state machine 68 | workflow = StateGraph(LotteryState) 69 | 70 | # Add nodes to the workflow 71 | workflow.add_node("buy", BuyLottery) 72 | workflow.add_node("check", Checking) 73 | 74 | # Set the entry point of the workflow 75 | workflow.set_entry_point("buy") 76 | 77 | # Define edges between nodes 78 | workflow.add_edge("buy", "check") 79 | 80 | # Add conditional edges 81 | workflow.add_conditional_edges( 82 | "check", 83 | checking_result, 84 | { 85 | "missed": "buy", 86 | "win": END, 87 | }, 88 | ) 89 | ``` 90 | 91 | ### Compiling and Invoking the Workflow 92 | 93 | Compile the workflow into a runnable app and stream the lottery process, printing the state after each step: 94 | 95 | ```python 96 | # Compile the workflow into a runnable app 97 | app = workflow.compile() 98 | 99 | # Start the lottery process and print each step 100 | for s in app.stream({"input": "", "winnings": None, "missed": None}): 101 | print(s) 102 | 103 | ``` 104 | 105 | ## Running the Script 106 | 107 | 1. Ensure `langgraph` is installed (see the Installing Dependencies section of the 02 LangGraph Hello World README). 108 | 2. Run the Python script: 109 | ```sh 110 | python main.py 111 | ```
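Since the bought number and the prize number are both random, every run differs; an illustrative run might print lines such as:

```
buy number: 1
prize number: 0
You missed. Buy again.
buy number: 2
prize number: 2
You win! Go home.
```

(Interleaved with these lines, `app.stream` also yields one dict per executed node, mapping the node name to the updated `LotteryState`, which the loop prints as well.)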
112 | -------------------------------------------------------------------------------- /04 State Graph/main.py: -------------------------------------------------------------------------------- 1 | from langgraph.graph import StateGraph, END 2 | from typing import TypedDict, Literal, Union 3 | import random 4 | 5 | 6 | class LotteryState(TypedDict): 7 | input: str 8 | winnings: Union[str, None] 9 | missed: Union[str, None] 10 | 11 | # for node 12 | def BuyLottery(state: LotteryState): 13 | random_number = random.randint(0, 2) 14 | print("buy number: " + str(random_number)) 15 | state['input'] = random_number 16 | return state 17 | 18 | # for node 19 | def Checking(state: LotteryState): 20 | prize_number = random.randint(0, 2) 21 | print("prize number: " + str(prize_number)) 22 | if state['input'] == prize_number: 23 | state['winnings'] = "win" 24 | return state 25 | else: 26 | state['missed'] = "missed" 27 | return state 28 | 29 | # for conditional edges 30 | def checking_result(state: LotteryState) -> Literal["win", "missed"]: 31 | if state.get("winnings") == "win": 32 | print("You win! Go home.") 33 | return "win" 34 | else: 35 | print("You missed. Buy again.") 36 | return "missed" 37 | 38 | 39 | # Define a LangGraph state machine 40 | workflow = StateGraph(LotteryState) 41 | 42 | # Add nodes to the workflow 43 | workflow.add_node("buy", BuyLottery) 44 | workflow.add_node("check", Checking) 45 | 46 | # Set the entry point of the workflow 47 | workflow.set_entry_point("buy") 48 | 49 | # Define edges between nodes 50 | workflow.add_edge("buy", "check") 51 | 52 | # Add conditional edges 53 | workflow.add_conditional_edges( 54 | "check", 55 | checking_result, 56 | { 57 | "missed": "buy", 58 | "win": END, 59 | }, 60 | ) 61 | 62 | # Compile the workflow into a runnable app 63 | app = workflow.compile() 64 | 65 | # Start the lottery process 66 | for s in app.stream({"input": "", "winnings": None, "missed": None}): 67 | print(s) 68 | -------------------------------------------------------------------------------- /05 Agents with Tool/README.md: -------------------------------------------------------------------------------- 1 | # Agents with Tool 2 | 3 | This tutorial demonstrates how to create a simple TRPG agent using LangChain and LangGraph. The agent simulates a DM and a Player, facilitating a fast-play DnD game. You can extend this code to add more complex interactions, additional game mechanics, or integrate with other tools and APIs for a richer gaming experience. 4 | 5 | 6 | ### Core Explanation 7 | 8 | * We subclass AgentBase to define agents for the different roles 9 | * We use a conditional edge to decide when the dice-rolling tool is used 10 | 11 | ```mermaid 12 | graph TD; 13 | A((DungeonMaster)) --> B{need roll?}; 14 | B -- Yes --> C[Roll Dice]; 15 | B -- No --> D((Player)); 16 | D --> A; 17 | C --> A; 18 | 19 | 20 | ``` 21 | 22 | #### Base Class for Agent 23 | 24 | Create an abstract base class for agent tasks. 
25 | 26 | ```python 27 | class AgentBase(ABC): 28 | def __init__(self, state: TRPGState): 29 | self.state = state 30 | 31 | @abstractmethod 32 | def get_prompt_template(self) -> str: 33 | pass 34 | 35 | def execute(self) -> TRPGState: 36 | self.state["history"] = clip_history(self.state["history"]) 37 | template = self.get_prompt_template() 38 | prompt = PromptTemplate.from_template(template) 39 | llm_chain = prompt | llm | StrOutputParser() 40 | generation = llm_chain.invoke({"history": self.state["history"], "roll_number": self.state["roll_number"]}) 41 | data = json.loads(generation) 42 | self.state["need_roll"] = data.get("need_roll", "") 43 | self.state["roll_number"] = -1 44 | self.state["history"] += "\n" + generation 45 | self.state["history"] = clip_history(self.state["history"]) 46 | return self.state 47 | ``` 48 | 49 | #### Roles 50 | 51 | Define agents for DM and Player. 52 | 53 | ```python 54 | class DM(AgentBase): 55 | def get_prompt_template(self) -> str: 56 | return """ 57 | {history} 58 | As DnD DM, describe the current scenario for the player. (in short, we do fast play) 59 | sometimes roll dice, sometimes not. 60 | player roll {roll_number}, if > 0 you need explain what the roll affect result, start from your roll {roll_number} blablabla 61 | Output the JSON in the format: {{"scenario": "your action description", "need_roll": True/False}} 62 | """ 63 | 64 | class Player(AgentBase): 65 | def get_prompt_template(self) -> str: 66 | return """ 67 | Here is the scenario: {history} 68 | As a Player, I want to perform an action. (in short, we do fast play) 69 | Output the JSON in the format: {{"action": "I want xxxx"}} 70 | """ 71 | ``` 72 | 73 | #### Tool 74 | 75 | A function to simulate rolling a 20-sided dice. 76 | 77 | ```python 78 | def RollDice(state: TRPGState) -> TRPGState: 79 | random_number = random.randint(1, 20) 80 | state["history"] += "\n" + "roll result:" + str(random_number) 81 | state["history"] = clip_history(state["history"]) 82 | state["need_roll"] = False 83 | state["roll_number"] = random_number 84 | return state 85 | ``` 86 | 87 | #### Conditional Edge to use Tool Function 88 | 89 | for conditional edge 90 | 91 | ```python 92 | def check_need_roll(state: TRPGState) -> Literal["roll", "not roll"]: 93 | if state.get("need_roll") == True: 94 | return "roll" 95 | else: 96 | return "not roll" 97 | ``` 98 | 99 | 100 | ```python 101 | workflow.add_conditional_edges( 102 | "dm", 103 | check_need_roll, 104 | { 105 | "not roll": "player", 106 | "roll": "RollDice" 107 | } 108 | ) 109 | ``` 110 | -------------------------------------------------------------------------------- /05 Agents with Tool/main.py: -------------------------------------------------------------------------------- 1 | from langgraph.graph import StateGraph, END 2 | from typing import TypedDict, Literal 3 | import random 4 | import json 5 | from langchain_community.chat_models import ChatOllama 6 | from langchain_core.prompts import PromptTemplate 7 | from langchain_core.output_parsers import StrOutputParser 8 | from abc import ABC, abstractmethod 9 | 10 | # Specify the local language model 11 | local_llm = "mistral" 12 | llm = ChatOllama(model=local_llm, format="json", temperature=0) 13 | 14 | # Clip the history to the last 8000 characters 15 | def clip_history(history: str, max_chars: int = 8000) -> str: 16 | if len(history) > max_chars: 17 | return history[-max_chars:] 18 | return history 19 | 20 | # Define the state for our workflow 21 | class TRPGState(TypedDict): 22 | history: str 23 | 
need_roll: bool 24 | roll_number: int 25 | 26 | # Define the base class for tasks 27 | class AgentBase(ABC): 28 | def __init__(self, state: TRPGState): 29 | self.state = state 30 | 31 | @abstractmethod 32 | def get_prompt_template(self) -> str: 33 | pass 34 | 35 | def execute(self) -> TRPGState: 36 | # Clip the history to the last 8000 characters 37 | self.state["history"] = clip_history(self.state["history"]) 38 | 39 | # Define the prompt template 40 | template = self.get_prompt_template() 41 | prompt = PromptTemplate.from_template(template) 42 | llm_chain = prompt | llm | StrOutputParser() 43 | generation = llm_chain.invoke({"history": self.state["history"], "roll_number": self.state["roll_number"]}) 44 | 45 | data = json.loads(generation) 46 | self.state["need_roll"] = data.get("need_roll", "") 47 | self.state["roll_number"] = -1 48 | 49 | 50 | self.state["history"] += "\n" + generation 51 | self.state["history"] = clip_history(self.state["history"]) 52 | 53 | return self.state 54 | 55 | # Define agents 56 | class DM(AgentBase): 57 | def get_prompt_template(self) -> str: 58 | return """ 59 | {history} 60 | As DnD DM, describe the current scenario for the player. (in short, we do fast play) 61 | sometimes roll dice, sometimes not. 62 | player roll {roll_number}, if > 0 you need explain what the roll affect result, start from your roll {roll_number} blablabla 63 | Output the JSON in the format: {{"scenario": "your action description", "need_roll": True/False}} 64 | """ 65 | 66 | class Player(AgentBase): 67 | def get_prompt_template(self) -> str: 68 | return """ 69 | Here is the scenario: {history} 70 | As a Player, I want to perform an action. (in short, we do fast play) 71 | Output the JSON in the format: {{"action": "I want xxxx"}} 72 | """ 73 | 74 | 75 | # Define tool 76 | def RollDice(state: TRPGState) -> TRPGState: 77 | random_number = random.randint(1, 20) 78 | state["history"] += "\n" + "roll result:" + str(random_number) 79 | state["history"] = clip_history(state["history"]) 80 | state["need_roll"] = False 81 | state["roll_number"] = random_number 82 | return state 83 | 84 | # for conditional edges 85 | def check_need_roll(state: TRPGState) -> Literal["roll", "not roll"]: 86 | if state.get("need_roll") == True: 87 | return "roll" 88 | else: 89 | return "not roll" 90 | 91 | # Define the state machine 92 | workflow = StateGraph(TRPGState) 93 | 94 | # Initialize tasks for DM and Player 95 | def dm_task(state: TRPGState) -> TRPGState: 96 | return DM(state).execute() 97 | 98 | def player_task(state: TRPGState) -> TRPGState: 99 | return Player(state).execute() 100 | 101 | workflow.add_node("dm", dm_task) 102 | workflow.add_node("player", player_task) 103 | workflow.add_node("RollDice", RollDice) 104 | 105 | workflow.set_entry_point("dm") 106 | 107 | # Define edges between nodes 108 | 109 | workflow.add_conditional_edges( 110 | "dm", 111 | check_need_roll, 112 | { 113 | "not roll": "player", 114 | "roll": "RollDice" 115 | } 116 | ) 117 | 118 | workflow.add_edge("player", "dm") 119 | workflow.add_edge("RollDice", "dm") 120 | 121 | 122 | # Compile the workflow into a runnable app 123 | app = workflow.compile() 124 | 125 | # Initialize the state 126 | initial_state = TRPGState( 127 | history="A monster appears in front of you.", 128 | need_roll=False, 129 | roll_number=-1 130 | ) 131 | 132 | for s in app.stream(initial_state): 133 | # Print the current state 134 | print("for s in app.stream(initial_state):") 135 | print(s) 136 | 
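# Each item yielded by app.stream() maps the node that just ran ("dm", "player",
# or "RollDice") to the updated TRPGState, so the console shows the DM/Player
# exchange unfolding one step at a time.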
-------------------------------------------------------------------------------- /06 Agent Choose Tool/README.md: -------------------------------------------------------------------------------- 1 | # Agents Choose Tool 2 | 3 | In this tutorial, tools are functions that can be dynamically used within the state machine. Tools are defined using the `@tool` decorator, which also registers them with the system, making them available for use by agents within the workflow. 4 | 5 | Further, a ChatAgent decides whether a tool is needed; if so, a ToolAgent chooses the tool and its arguments, and the ToolExecutor runs it. 6 | 7 | ```mermaid 8 | graph TD; 9 | A((ChatAgent)) --> B{Use tool?}; 10 | B -- Yes --> C((ToolAgent)) --> D[ToolExecutor]; 11 | B -- No --> E[END]; 12 | ``` 13 | 14 | ## Core Explanation 15 | 16 | ### Defining and Registering Tools 17 | 18 | To define a tool, simply create a function and decorate it with the `@tool` decorator. Ensure that you include detailed parameter descriptions in the docstring for clarity. 19 | 20 | Here's an example: 21 | 22 | ```python 23 | from typing import List 24 | 25 | @tool 26 | def add(a: int, b: int) -> int: 27 | """ 28 | :function: add 29 | :param int a: First number to add 30 | :param int b: Second number to add 31 | :return: Sum of a and b 32 | """ 33 | return a + b 34 | 35 | 36 | @tool 37 | def filewrite(name: str, content: str) -> None: 38 | """ 39 | :function: filewrite 40 | :param str name: Name of the file 41 | :param str content: Content to write to the file 42 | :return: None 43 | """ 44 | 45 | # 46 | # implementation 47 | # 48 | 49 | print(f"Writing to {name}: {content}") 50 | 51 | ``` 52 | -------------------------------------------------------------------------------- /06 Agent Choose Tool/main.py: -------------------------------------------------------------------------------- 1 | from langgraph.graph import StateGraph, END 2 | from typing import TypedDict, Literal, Callable, Dict, List, Any 3 | import inspect 4 | import json 5 | import random 6 | from langchain_community.chat_models import ChatOllama 7 | from langchain_core.prompts import PromptTemplate 8 | from langchain_core.output_parsers import StrOutputParser 9 | from abc import ABC, abstractmethod 10 | 11 | # Tool registry to hold information about tools 12 | tool_registry: Dict[str, Callable] = {} 13 | tool_info_registry: List[Dict[str, Any]] = [] 14 | 15 | # Decorator to register tools 16 | def tool(func: Callable) -> Callable: 17 | signature = inspect.signature(func) 18 | docstring = func.__doc__ or "" 19 | params = [ 20 | {"name": param.name, "type": param.annotation} 21 | for param in signature.parameters.values() 22 | ] 23 | tool_info = { 24 | "name": func.__name__, 25 | "description": docstring, 26 | "parameters": params 27 | } 28 | tool_registry[func.__name__] = func 29 | tool_info_registry.append(tool_info) 30 | return func 31 | 32 | # Define the tools with detailed parameter descriptions in the docstrings 33 | @tool 34 | def mul(a: int, b: int) -> int: 35 | """ 36 | :function: mul 37 | :param int a: First number to multiply 38 | :param int b: Second number to multiply 39 | :return: a * b 40 | """ 41 | return a * b 42 | 43 | @tool 44 | def ls() -> List[str]: 45 | """ 46 | :function: ls 47 | :return: List of filenames in the current directory 48 | """ 49 | # Fake implementation 50 | return ["file1.txt", "file2.txt", "file3.txt"] 51 | 52 | @tool 53 | def filewrite(name: str, content: str) -> None: 54 | """ 55 | :function: filewrite 56 | :param str name: Name of the file 57 | :param str 
content: Content to write to the file 58 | :return: None 59 | """ 60 | # Fake implementation 61 | print(f"Writing to {name}: {content}") 62 | 63 | # Clip the history to the last 8000 characters 64 | def clip_history(history: str, max_chars: int = 8000) -> str: 65 | if len(history) > max_chars: 66 | return history[-max_chars:] 67 | return history 68 | 69 | # Define the state for our workflow 70 | class ToolState(TypedDict): 71 | history: str 72 | use_tool: bool 73 | tool_exec: str 74 | tools_list: str 75 | 76 | # Define the base class for tasks 77 | class AgentBase(ABC): 78 | def __init__(self, state: ToolState): 79 | self.state = state 80 | 81 | @abstractmethod 82 | def get_prompt_template(self) -> str: 83 | pass 84 | 85 | def execute(self) -> ToolState: 86 | # Clip the history to the last 8000 characters 87 | self.state["history"] = clip_history(self.state["history"]) 88 | 89 | # Define the prompt template 90 | template = self.get_prompt_template() 91 | prompt = PromptTemplate.from_template(template) 92 | llm = ChatOllama(model="mistral", format="json", temperature=0) 93 | llm_chain = prompt | llm | StrOutputParser() 94 | generation = llm_chain.invoke({ 95 | "history": self.state["history"], 96 | "use_tool": self.state["use_tool"], 97 | "tools_list": self.state["tools_list"] 98 | }) 99 | data = json.loads(generation) 100 | self.state["use_tool"] = data.get("use_tool", False) 101 | self.state["tool_exec"] = generation 102 | 103 | self.state["history"] += "\n" + generation 104 | self.state["history"] = clip_history(self.state["history"]) 105 | 106 | return self.state 107 | 108 | # Define agents 109 | class ChatAgent(AgentBase): 110 | def get_prompt_template(self) -> str: 111 | return """ 112 | Available tools: {tools_list} 113 | Question: {history} 114 | As ChatAgent, decide if we need to use a tool or not. 115 | If we don't need a tool, just reply; otherwise, let the ToolAgent handle it. 
116 | Output the JSON in the format: {{"scenario": "your reply", "use_tool": True/False}} 117 | """ 118 | 119 | class ToolAgent(AgentBase): 120 | def get_prompt_template(self) -> str: 121 | return """ 122 | History: {history} 123 | Available tools: {tools_list} 124 | Based on the history, choose the appropriate tool and arguments in the format: 125 | {{"function": "", "args": [,, ...]}} 126 | """ 127 | 128 | def ToolExecutor(state: ToolState) -> ToolState: 129 | if not state["tool_exec"]: 130 | raise ValueError("No tool_exec data available to execute.") 131 | 132 | choice = json.loads(state["tool_exec"]) 133 | tool_name = choice["function"] 134 | args = choice["args"] 135 | 136 | if tool_name not in tool_registry: 137 | raise ValueError(f"Tool {tool_name} not found in registry.") 138 | 139 | result = tool_registry[tool_name](*args) 140 | state["history"] += f"\nExecuted {tool_name} with result: {result}" 141 | state["history"] = clip_history(state["history"]) 142 | state["use_tool"] = False 143 | state["tool_exec"] = "" 144 | return state 145 | 146 | # For conditional edges 147 | def check_use_tool(state: ToolState) -> Literal["use tool", "not use tool"]: 148 | if state.get("use_tool") == True: 149 | return "use tool" 150 | else: 151 | return "not use tool" 152 | 153 | # Define the state machine 154 | workflow = StateGraph(ToolState) 155 | 156 | # Initialize tasks for ChatAgent and ToolAgent 157 | def chat_agent(state: ToolState) -> ToolState: 158 | return ChatAgent(state).execute() 159 | 160 | def tool_agent(state: ToolState) -> ToolState: 161 | return ToolAgent(state).execute() 162 | 163 | workflow.add_node("chat_agent", chat_agent) 164 | workflow.add_node("tool_agent", tool_agent) 165 | workflow.add_node("tool", ToolExecutor) 166 | 167 | workflow.set_entry_point("chat_agent") 168 | 169 | # Define edges between nodes 170 | workflow.add_conditional_edges( 171 | "chat_agent", 172 | check_use_tool, 173 | { 174 | "use tool": "tool_agent", 175 | "not use tool": END, 176 | } 177 | ) 178 | 179 | workflow.add_edge('tool_agent', 'tool') 180 | workflow.add_edge('tool', END) 181 | 182 | # Generate the tools list 183 | tools_list = json.dumps([ 184 | { 185 | "name": tool["name"], 186 | "description": tool["description"] 187 | } 188 | for tool in tool_info_registry 189 | ]) 190 | 191 | # Compile the workflow into a runnable app 192 | app = workflow.compile() 193 | 194 | def question(history: str) -> None: 195 | initial_state = ToolState( 196 | history=history, 197 | use_tool=False, 198 | tool_exec="", 199 | tools_list=tools_list 200 | ) 201 | 202 | for state in app.stream(initial_state): 203 | print(state) 204 | 205 | # Example usage 206 | if __name__ == "__main__": 207 | question("help me ls files in current folder") 208 | question("who are you") 209 | -------------------------------------------------------------------------------- /06 Agent Choose Tool/old/01.py: -------------------------------------------------------------------------------- 1 | import inspect 2 | import json 3 | from typing import Callable, Dict, List, Any, TypedDict 4 | 5 | # Tool registry to hold information about tools 6 | tool_registry: List[Dict[str, Any]] = [] 7 | 8 | # Decorator to register tools 9 | def register_tool(func: Callable) -> Callable: 10 | signature = inspect.signature(func) 11 | docstring = func.__doc__ or "" 12 | params = [ 13 | {"name": param.name, "type": str(param.annotation)} 14 | for param in signature.parameters.values() 15 | ] 16 | tool_info = { 17 | "name": func.__name__, 18 | "description": 
docstring, 19 | "parameters": params 20 | } 21 | tool_registry.append(tool_info) 22 | return func 23 | 24 | # Define the tools with detailed parameter descriptions in the docstrings 25 | @register_tool 26 | def add(a: int, b: int) -> int: 27 | """ 28 | :function: add 29 | :param int a: First number to add 30 | :param int b: Second number to add 31 | :return: Sum of a and b 32 | """ 33 | return a + b 34 | 35 | @register_tool 36 | def ls() -> List[str]: 37 | """ 38 | :function: ls 39 | :return: List of filenames in the current directory 40 | """ 41 | # Fake implementation 42 | return ["file1.txt", "file2.txt", "file3.txt"] 43 | 44 | @register_tool 45 | def filewrite(name: str, content: str) -> None: 46 | """ 47 | :function: filewrite 48 | :param str name: Name of the file 49 | :param str content: Content to write to the file 50 | :return: None 51 | """ 52 | # Fake implementation 53 | print(f"Writing to {name}: {content}") 54 | 55 | # Define the state for our workflow 56 | class ToolState(TypedDict): 57 | history: str 58 | 59 | # Clip the history to the last 8000 characters 60 | def clip_history(history: str, max_chars: int = 8000) -> str: 61 | if len(history) > max_chars: 62 | return history[-max_chars:] 63 | return history 64 | 65 | class AgentBase: 66 | def __init__(self, state: ToolState): 67 | self.state = state 68 | 69 | def execute(self) -> ToolState: 70 | self.state["history"] = clip_history(self.state["history"]) 71 | choice = self.fake_llm_output(self.state["history"]) 72 | tool_name = choice["use_tool"] 73 | args = choice["args"] 74 | result = globals()[tool_name](*args) 75 | self.state["history"] += f"\nQuestion: {self.state['history']}\nResponse: {result}\n" 76 | return self.state 77 | 78 | def fake_llm_output(self, history: str) -> Dict[str, Any]: 79 | # Generate the detailed tools list 80 | tools_list = [ 81 | { 82 | "name": tool["name"], 83 | "description": tool["description"] 84 | } 85 | for tool in tool_registry 86 | ] 87 | 88 | # Print the detailed tools list 89 | print(f'Tools list: {json.dumps(tools_list, indent=2)}') 90 | 91 | # Simulate LLM output (for example purposes) 92 | return json.loads('{"use_tool": "add", "args": [3, 6]}') 93 | 94 | class Robot(AgentBase): 95 | pass 96 | 97 | # Initialize the state 98 | initial_state = ToolState( 99 | history="help me add 3 and 6" 100 | ) 101 | 102 | # Create a robot instance and handle the question 103 | robot = Robot(initial_state) 104 | final_state = robot.execute() 105 | 106 | # Print the final state history 107 | print(final_state["history"]) 108 | -------------------------------------------------------------------------------- /06 Agent Choose Tool/old/02.py: -------------------------------------------------------------------------------- 1 | import inspect 2 | import json 3 | from typing import Callable, Dict, List, Any, TypedDict 4 | from langchain_community.chat_models import ChatOllama 5 | from langchain_core.prompts import PromptTemplate 6 | from langchain_core.output_parsers import StrOutputParser 7 | 8 | # Tool registry to hold information about tools 9 | tool_registry: List[Dict[str, Any]] = [] 10 | 11 | # Decorator to register tools 12 | def register_tool(func: Callable) -> Callable: 13 | signature = inspect.signature(func) 14 | docstring = func.__doc__ or "" 15 | params = [ 16 | {"name": param.name, "type": param.annotation} 17 | for param in signature.parameters.values() 18 | ] 19 | tool_info = { 20 | "name": func.__name__, 21 | "description": docstring, 22 | "parameters": params 23 | } 24 | 
tool_registry.append(tool_info) 25 | return func 26 | 27 | # Define the tools with detailed parameter descriptions in the docstrings 28 | @register_tool 29 | def add(a: int, b: int) -> int: 30 | """ 31 | :function: add 32 | :param int a: First number to add 33 | :param int b: Second number to add 34 | :return: Sum of a and b 35 | """ 36 | return a + b 37 | 38 | @register_tool 39 | def ls() -> List[str]: 40 | """ 41 | :function: ls 42 | :return: List of filenames in the current directory 43 | """ 44 | # Fake implementation 45 | return ["file1.txt", "file2.txt", "file3.txt"] 46 | 47 | @register_tool 48 | def filewrite(name: str, content: str) -> None: 49 | """ 50 | :function: filewrite 51 | :param str name: Name of the file 52 | :param str content: Content to write to the file 53 | :return: None 54 | """ 55 | # Fake implementation 56 | print(f"Writing to {name}: {content}") 57 | 58 | # Define the state for our workflow 59 | class ToolState(TypedDict): 60 | history: str 61 | 62 | # Clip the history to the last 8000 characters 63 | def clip_history(history: str, max_chars: int = 8000) -> str: 64 | if len(history) > max_chars: 65 | return history[-max_chars:] 66 | return history 67 | 68 | class AgentBase: 69 | def __init__(self, state: ToolState): 70 | self.state = state 71 | 72 | def execute(self) -> ToolState: 73 | self.state["history"] = clip_history(self.state["history"]) 74 | choice = self.llm_output(self.state["history"]) 75 | tool_name = choice["use_tool"] 76 | args = self.convert_args(tool_name, choice["args"]) 77 | result = globals()[tool_name](*args) 78 | self.state["history"] += f"\nQuestion: {self.state['history']}\nResponse: {result}\n" 79 | return self.state 80 | 81 | def llm_output(self, history: str) -> Dict[str, Any]: 82 | # Generate the detailed tools list 83 | tools_list = [ 84 | { 85 | "name": tool["name"], 86 | "description": tool["description"] 87 | } 88 | for tool in tool_registry 89 | ] 90 | 91 | # Create the prompt for LLM 92 | template = """ 93 | History: {history} 94 | Available tools: {tools_list} 95 | Based on the history, choose the appropriate tool and arguments in the format: 96 | {{"use_tool": "", "args": [,, ...]}} 97 | """ 98 | prompt = PromptTemplate.from_template(template) 99 | 100 | # Format the prompt with the input variables 101 | formatted_prompt = prompt.format(history=history, tools_list=json.dumps(tools_list)) 102 | 103 | # Initialize the ChatOllama model with desired parameters 104 | local_llm = "phi3" 105 | llm = ChatOllama(model=local_llm, format="json", temperature=0) 106 | llm_chain = prompt | llm | StrOutputParser() 107 | 108 | # Invoke the LLM with the formatted prompt 109 | generation = llm_chain.invoke({"history": history, "tools_list": json.dumps(tools_list)}) 110 | 111 | return json.loads(generation) 112 | 113 | def convert_args(self, tool_name: str, args: List[str]) -> List[Any]: 114 | tool_info = next(tool for tool in tool_registry if tool["name"] == tool_name) 115 | converted_args = [] 116 | for arg, param in zip(args, tool_info["parameters"]): 117 | param_type = param["type"] 118 | converted_arg = param_type(arg) 119 | converted_args.append(converted_arg) 120 | return converted_args 121 | 122 | class Robot(AgentBase): 123 | pass 124 | 125 | # Initialize the state 126 | initial_state = ToolState( 127 | history="help me add 3 and 6" 128 | ) 129 | 130 | # Create a robot instance and handle the question 131 | robot = Robot(initial_state) 132 | final_state = robot.execute() 133 | 134 | # Print the final state history 135 | print(f"Final 
state history: {final_state['history']}") 136 | 137 | # Initialize the state for the second question 138 | initial_state = ToolState( 139 | history="help me see what files we have now" 140 | ) 141 | 142 | # Create a robot instance and handle the question 143 | robot = Robot(initial_state) 144 | final_state = robot.execute() 145 | 146 | # Print the final state history 147 | print(f"Final state history: {final_state['history']}") 148 | -------------------------------------------------------------------------------- /06 Agent Choose Tool/old/03.py: -------------------------------------------------------------------------------- 1 | from langgraph.graph import StateGraph, END 2 | from typing import TypedDict, Literal 3 | from typing import Callable, Dict, List, Any, TypedDict 4 | import inspect 5 | import random 6 | import json 7 | from langchain_community.chat_models import ChatOllama 8 | from langchain_core.prompts import PromptTemplate 9 | from langchain_core.output_parsers import StrOutputParser 10 | from abc import ABC, abstractmethod 11 | 12 | # Tool registry to hold information about tools 13 | tool_registry: List[Dict[str, Any]] = [] 14 | 15 | # Decorator to register tools 16 | def register_tool(func: Callable) -> Callable: 17 | signature = inspect.signature(func) 18 | docstring = func.__doc__ or "" 19 | params = [ 20 | {"name": param.name, "type": param.annotation} 21 | for param in signature.parameters.values() 22 | ] 23 | tool_info = { 24 | "name": func.__name__, 25 | "description": docstring, 26 | "parameters": params 27 | } 28 | tool_registry.append(tool_info) 29 | return func 30 | 31 | # Define the tools with detailed parameter descriptions in the docstrings 32 | @register_tool 33 | def add(a: int, b: int) -> int: 34 | """ 35 | :function: add 36 | :param int a: First number to add 37 | :param int b: Second number to add 38 | :return: Sum of a and b 39 | """ 40 | return a + b 41 | 42 | @register_tool 43 | def ls() -> List[str]: 44 | """ 45 | :function: ls 46 | :return: List of filenames in the current directory 47 | """ 48 | # Fake implementation 49 | return ["file1.txt", "file2.txt", "file3.txt"] 50 | 51 | @register_tool 52 | def filewrite(name: str, content: str) -> None: 53 | """ 54 | :function: filewrite 55 | :param str name: Name of the file 56 | :param str content: Content to write to the file 57 | :return: None 58 | """ 59 | # Fake implementation 60 | print(f"Writing to {name}: {content}") 61 | 62 | 63 | # Specify the local language model 64 | local_llm = "mistral" 65 | llm = ChatOllama(model=local_llm, format="json", temperature=0) 66 | 67 | # Clip the history to the last 8000 characters 68 | def clip_history(history: str, max_chars: int = 8000) -> str: 69 | if len(history) > max_chars: 70 | return history[-max_chars:] 71 | return history 72 | 73 | # Define the state for our workflow 74 | class ToolState(TypedDict): 75 | history: str 76 | use_tool: bool 77 | tool_exec: str 78 | 79 | 80 | # Define the base class for tasks 81 | class AgentBase(ABC): 82 | def __init__(self, state: ToolState): 83 | self.state = state 84 | 85 | @abstractmethod 86 | def get_prompt_template(self) -> str: 87 | pass 88 | 89 | def execute(self) -> ToolState: 90 | # Clip the history to the last 8000 characters 91 | self.state["history"] = clip_history(self.state["history"]) 92 | 93 | # Define the prompt template 94 | template = self.get_prompt_template() 95 | prompt = PromptTemplate.from_template(template) 96 | llm_chain = prompt | llm | StrOutputParser() 97 | generation = 
llm_chain.invoke({"history": self.state["history"], "use_tool": self.state["use_tool"]}) 98 | data = json.loads(generation) 99 | self.state["use_tool"] = data.get("use_tool", "") 100 | self.state["tool_exec"] = data 101 | 102 | self.state["history"] += "\n" + generation 103 | self.state["history"] = clip_history(self.state["history"]) 104 | 105 | return self.state 106 | 107 | # Define agents 108 | class ChatAgent(AgentBase): 109 | def get_prompt_template(self) -> str: 110 | return """ 111 | {history} 112 | As ChatAgent, decide we need use tool/py or not 113 | if we don't need tool, just reply, otherwirse, let tool agent to handle 114 | Output the JSON in the format: {{"scenario": "your reply", "use_tool": True/False}} 115 | """ 116 | 117 | class ToolAgent(AgentBase): 118 | def get_prompt_template(self) -> str: 119 | return """ 120 | History: {history} 121 | Available tools: {tools_list} 122 | Based on the history, choose the appropriate tool and arguments in the format: 123 | {{"function": "", "args": [,, ...]}} 124 | """ 125 | 126 | def ToolExecute(state: ToolState) -> ToolState: 127 | choice = self.llm_output(self.state["tool_exec"]) 128 | tool_name = choice["use_tool"] 129 | args = self.convert_args(tool_name, choice["args"]) 130 | result = globals()[tool_name](*args) 131 | 132 | # for conditional edges 133 | def check_use_tool(state: ToolState) -> Literal["use tool", "not use tool"]: 134 | if state.get("use_tool") == True: 135 | return "use tool" 136 | else: 137 | return "not use tool" 138 | 139 | # Define the state machine 140 | workflow = StateGraph(ToolState) 141 | 142 | # Initialize tasks for DM and Player 143 | def chat_agent(state: ToolState) -> ToolState: 144 | return ChatAgent(state).execute() 145 | 146 | def tool_agent(state: ToolState) -> ToolState: 147 | return ToolAgent(state).execute() 148 | 149 | workflow.add_node("chat_agent", chat_agent) 150 | workflow.add_node("tool_agent", tool_agent) 151 | workflow.add_node("tool", ToolExecute) 152 | 153 | workflow.set_entry_point("chat_agent") 154 | 155 | # Define edges between nodes 156 | 157 | workflow.add_conditional_edges( 158 | "chat_agent", 159 | check_use_tool, 160 | { 161 | "use tool": "tool_agent", 162 | "not use tool": END, 163 | } 164 | ) 165 | 166 | workflow.add_edge('tool_agent', 'tool') 167 | 168 | 169 | 170 | # Compile the workflow into a runnable app 171 | app = workflow.compile() 172 | 173 | # Initialize the state 174 | initial_state = ToolState( 175 | history="help me ls files in current folder", 176 | use_tool=False, 177 | ) 178 | 179 | for s in app.stream(initial_state): 180 | # Print the current state 181 | print("for s in app.stream(initial_state):") 182 | print(s) 183 | -------------------------------------------------------------------------------- /07 ollama RAG/README.md: -------------------------------------------------------------------------------- 1 | # RAG by Ollama 2 | 3 | ## Overview 4 | 5 | This code demo showcases a Retrieval-Augmented Generation (RAG) pipeline using Ollama with langchain components. The RAG approach enhances the generation capabilities of language models by integrating document retrieval, ensuring more accurate and contextually relevant responses. 6 | 7 | ## Key Components 8 | 9 | ``` 10 | pip install langchain langchain-community ollama chromadb 11 | ``` 12 | 13 | 1. **OllamaEmbeddings**: Used to generate embeddings for document chunks and queries. 14 | 2. **RecursiveCharacterTextSplitter**: Splits large documents into smaller, manageable chunks. 15 | 3. 
**ChromaDB**: A vector database used to store and retrieve embedded document chunks. 16 | 4. **ChatOllama**: A local language model for generating responses. 17 | 5. **LangChain Prompts and Pipelines**: Used to structure the prompts and generate responses. 18 | 19 | ## Workflow 20 | 21 | 1. **Load and Split Document**: 22 | - The content of a markdown file is loaded and split into smaller chunks using `RecursiveCharacterTextSplitter`. This ensures that each chunk is within a specified size, making it manageable for embedding and retrieval. 23 | 24 | 2. **Embed Document Chunks**: 25 | - Each chunk is embedded using `OllamaEmbeddings` and stored in a ChromaDB collection. This step transforms textual data into high-dimensional vectors, facilitating efficient retrieval based on semantic similarity. 26 | 27 | 3. **Initialize Language Model**: 28 | - The `ChatOllama` model is initialized with specific parameters to handle the generation of responses. 29 | 30 | 4. **Define Prompt Template**: 31 | - A prompt template is created to structure the input for the language model, ensuring the responses are contextually relevant. 32 | 33 | 5. **Answer Questions**: 34 | - A function `answer_question` is defined to handle queries. It embeds the query, retrieves the most relevant document chunks from ChromaDB, and combines them. The combined text is then fed into the language model pipeline to generate a response. 35 | 36 | ## Explanation of Key Concepts 37 | 38 | ### Embedding by Chunks 39 | 40 | Embedding by chunks involves breaking down a large document into smaller pieces and generating embeddings for each piece individually. This approach is beneficial for several reasons: 41 | - **Manageability**: Smaller chunks are easier to process and handle compared to a large document. 42 | - **Efficient Retrieval**: Embedding smaller chunks allows for more precise retrieval of relevant information based on specific queries. 43 | - **Scalability**: This method scales well with large documents, ensuring that the embedding and retrieval process remains efficient. 44 | 45 | ### Retrieval-Augmented Generation (RAG) 46 | 47 | RAG combines retrieval and generation to improve the accuracy and relevance of responses: 48 | - **Retrieval**: Uses embeddings to find the most relevant document chunks from a database. 49 | - **Generation**: Uses a language model to generate a response based on the retrieved information, ensuring that the response is grounded in actual data. 50 | 51 | By embedding document chunks and using a language model to generate responses, this pipeline demonstrates a powerful approach to handling complex queries with accurate and contextually appropriate answers. 52 | 53 | ## Requirements 54 | 55 | To run this code, you'll need the following libraries: 56 | - `langchain_community` 57 | - `langchain_core` 58 | - `chromadb` 59 | 60 | Ensure you have these libraries installed and properly configured in your environment. 61 | 62 | ## Conclusion 63 | 64 | This code demo illustrates a practical implementation of a Retrieval-Augmented Generation pipeline, leveraging the capabilities of LangChain components and ChromaDB to deliver accurate and contextually relevant responses to user queries. 
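As an optional extension (not part of this demo's code), the in-memory Chroma client can be swapped for a persistent one so that chunks are not re-embedded on every run; a minimal sketch, assuming chromadb >= 0.4 and an arbitrary `./chroma_db` path:

```python
import chromadb

# Keep embeddings on disk between runs (the path is an assumption; adjust as needed)
client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection(name="docs")
```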
65 | -------------------------------------------------------------------------------- /07 ollama RAG/main.py: -------------------------------------------------------------------------------- 1 | from langchain_community.embeddings import OllamaEmbeddings 2 | from langchain_community.chat_models import ChatOllama 3 | from langchain_core.prompts import PromptTemplate 4 | from langchain_core.output_parsers import StrOutputParser 5 | from langchain.text_splitter import RecursiveCharacterTextSplitter 6 | import chromadb 7 | 8 | # Initialize the embedding model 9 | embed_model = OllamaEmbeddings(model="mistral") 10 | 11 | # Path to the markdown file 12 | file_path = "./testdata/product.md" 13 | 14 | # Function to read the content of the file 15 | def load_file_content(file_path): 16 | with open(file_path, 'r', encoding='utf-8') as file: 17 | return file.read() 18 | 19 | # Read the content from the file 20 | file_content = load_file_content(file_path) 21 | 22 | # Initialize the text splitter 23 | text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100) 24 | 25 | # Split the document into chunks 26 | chunks = text_splitter.split_text(file_content) 27 | 28 | # Initialize ChromaDB client and create a collection 29 | client = chromadb.Client() 30 | collection = client.create_collection(name="docs") 31 | 32 | # Embed each chunk and store it in the collection 33 | for i, chunk in enumerate(chunks): 34 | chunk_embedding = embed_model.embed_documents([chunk]) 35 | collection.add( 36 | ids=[str(i)], 37 | embeddings=chunk_embedding, 38 | documents=[chunk] 39 | ) 40 | 41 | # Specify the local language model 42 | local_llm = "phi3" 43 | 44 | # Initialize the ChatOllama model with desired parameters 45 | llm = ChatOllama(model=local_llm, format="json", temperature=0) 46 | 47 | # Define the prompt template 48 | template = "Question: {question}\nAnswer: Let's think step by step." 49 | prompt = PromptTemplate.from_template(template) 50 | 51 | # Define the langchain pipeline 52 | llm_chain = prompt | llm | StrOutputParser() 53 | 54 | # Function to answer questions using RAG 55 | def answer_question(question): 56 | # Generate an embedding for the question 57 | query_embedding = embed_model.embed_query(question) 58 | 59 | # Retrieve the most relevant document chunks 60 | results = collection.query( 61 | query_embeddings=[query_embedding], 62 | n_results=3 # Retrieve top 3 relevant chunks 63 | ) 64 | 65 | relevant_docs = results['documents'][0] 66 | combined_docs = " ".join(relevant_docs) 67 | print(f"Most relevant documents combined: {combined_docs}") 68 | 69 | # Format the prompt with the input variable 70 | formatted_prompt = prompt.format(question=f"Using this data: {combined_docs}. Respond to this prompt: {question}") 71 | 72 | # Generate a response using the langchain pipeline 73 | generation = llm_chain.invoke(formatted_prompt) 74 | return generation 75 | 76 | # Example usage 77 | question = "What does the product look like?" 
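# answer_question embeds the question, pulls the top-3 matching chunks from the
# "docs" collection, folds them into the prompt, and lets the phi3 model answer.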
78 | answer = answer_question(question) 79 | print(f"Answer: {answer}") 80 | -------------------------------------------------------------------------------- /07 ollama RAG/old_ver/01_ollama_embed.py: -------------------------------------------------------------------------------- 1 | from langchain_community.embeddings import OllamaEmbeddings 2 | import chromadb 3 | import requests 4 | import json 5 | 6 | # Initialize the embedding model 7 | embed_model = OllamaEmbeddings(model="mistral") 8 | 9 | # Documents to be embedded 10 | documents = [ 11 | "Llamas are members of the camelid family meaning they're pretty closely related to vicuñas and camels", 12 | "Llamas were first domesticated and used as pack animals 4,000 to 5,000 years ago in the Peruvian highlands", 13 | "Llamas can grow as much as 6 feet tall though the average llama between 5 feet 6 inches and 5 feet 9 inches tall", 14 | "Llamas weigh between 280 and 450 pounds and can carry 25 to 30 percent of their body weight", 15 | "Llamas are vegetarians and have very efficient digestive systems", 16 | "Llamas live to be about 20 years old, though some only live for 15 years and others live to be 30 years old", 17 | ] 18 | 19 | # Generate embeddings for the documents 20 | embeddings = embed_model.embed_documents(documents) 21 | 22 | # Initialize ChromaDB client and create a collection 23 | client = chromadb.Client() 24 | collection = client.create_collection(name="docs") 25 | 26 | # Store each document and its embedding in the collection 27 | for i, embedding in enumerate(embeddings): 28 | collection.add( 29 | ids=[str(i)], 30 | embeddings=[embedding], 31 | documents=[documents[i]] 32 | ) 33 | 34 | # Query the database with a prompt 35 | query = "What animals are llamas related to?" 36 | query_embedding = embed_model.embed_query(query) 37 | 38 | # Retrieve the most relevant document 39 | results = collection.query( 40 | query_embeddings=[query_embedding], 41 | n_results=1 42 | ) 43 | 44 | most_relevant_doc = results['documents'][0][0] 45 | print(f"Most relevant document: {most_relevant_doc}") 46 | 47 | # Generate a response using the prompt and the retrieved document 48 | url = "http://localhost:11434/api/generate" 49 | headers = {"Content-Type": "application/json"} 50 | data = { 51 | "model": "mistral", 52 | "prompt": f"Using this data: {most_relevant_doc}. 
Respond to this prompt: {query}", 53 | "stream": False 54 | } 55 | 56 | response = requests.post(url, headers=headers, data=json.dumps(data)) 57 | if response.status_code == 200: 58 | response_json = response.json() 59 | print(f"Response: {response_json['response']}") 60 | else: 61 | print(f"Error: {response.status_code} - {response.text}") 62 | -------------------------------------------------------------------------------- /07 ollama RAG/old_ver/02_embed_ask.py: -------------------------------------------------------------------------------- 1 | from langchain_community.embeddings import OllamaEmbeddings 2 | from langchain_community.chat_models import ChatOllama 3 | from langchain_core.prompts import PromptTemplate 4 | from langchain_core.output_parsers import StrOutputParser 5 | import chromadb 6 | 7 | # Initialize the embedding model 8 | embed_model = OllamaEmbeddings(model="mistral") 9 | 10 | # Path to the markdown file 11 | file_path = "./data/product.md" 12 | 13 | # Function to read the content of the file 14 | def load_file_content(file_path): 15 | with open(file_path, 'r', encoding='utf-8') as file: 16 | return file.read() 17 | 18 | # Read the content from the file 19 | file_content = load_file_content(file_path) 20 | 21 | # Embed the content of the file 22 | document_embeddings = embed_model.embed_documents([file_content]) 23 | 24 | # Initialize ChromaDB client and create a collection 25 | client = chromadb.Client() 26 | collection = client.create_collection(name="docs") 27 | 28 | # Store the document and its embedding in the collection 29 | collection.add( 30 | ids=["0"], 31 | embeddings=document_embeddings, 32 | documents=[file_content] 33 | ) 34 | 35 | # Specify the local language model 36 | local_llm = "phi3" 37 | 38 | # Initialize the ChatOllama model with desired parameters 39 | llm = ChatOllama(model=local_llm, format="json", temperature=0) 40 | 41 | # Define the prompt template 42 | template = "Question: {question}\nAnswer: Let's think step by step." 43 | prompt = PromptTemplate.from_template(template) 44 | 45 | # Define the langchain pipeline 46 | llm_chain = prompt | llm | StrOutputParser() 47 | 48 | # Function to answer questions using RAG 49 | def answer_question(question): 50 | # Generate an embedding for the question 51 | query_embedding = embed_model.embed_query(question) 52 | 53 | # Retrieve the most relevant document 54 | results = collection.query( 55 | query_embeddings=[query_embedding], 56 | n_results=1 57 | ) 58 | 59 | most_relevant_doc = results['documents'][0][0] 60 | print(f"Most relevant document: {most_relevant_doc}") 61 | 62 | # Format the prompt with the input variable 63 | formatted_prompt = prompt.format(question=f"Using this data: {most_relevant_doc}. Respond to this prompt: {question}") 64 | 65 | # Generate a response using the langchain pipeline 66 | generation = llm_chain.invoke(formatted_prompt) 67 | return generation 68 | 69 | # Example usage 70 | question = "What features does the product have?" 
71 | answer = answer_question(question) 72 | print(f"Answer: {answer}") 73 | -------------------------------------------------------------------------------- /07 ollama RAG/old_ver/03_chunk.py: -------------------------------------------------------------------------------- 1 | from langchain_community.embeddings import OllamaEmbeddings 2 | from langchain_community.chat_models import ChatOllama 3 | from langchain_core.prompts import PromptTemplate 4 | from langchain_core.output_parsers import StrOutputParser 5 | from langchain.text_splitter import RecursiveCharacterTextSplitter 6 | import chromadb 7 | 8 | # Initialize the embedding model 9 | embed_model = OllamaEmbeddings(model="mistral") 10 | 11 | # Path to the markdown file 12 | file_path = "./data/product.md" 13 | 14 | # Function to read the content of the file 15 | def load_file_content(file_path): 16 | with open(file_path, 'r', encoding='utf-8') as file: 17 | return file.read() 18 | 19 | # Read the content from the file 20 | file_content = load_file_content(file_path) 21 | 22 | # Initialize the text splitter 23 | text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100) 24 | 25 | # Split the document into chunks 26 | chunks = text_splitter.split_text(file_content) 27 | 28 | # Initialize ChromaDB client and create a collection 29 | client = chromadb.Client() 30 | collection = client.create_collection(name="docs") 31 | 32 | # Embed each chunk and store it in the collection 33 | for i, chunk in enumerate(chunks): 34 | chunk_embedding = embed_model.embed_documents([chunk]) 35 | collection.add( 36 | ids=[str(i)], 37 | embeddings=chunk_embedding, 38 | documents=[chunk] 39 | ) 40 | 41 | # Specify the local language model 42 | local_llm = "phi3" 43 | 44 | # Initialize the ChatOllama model with desired parameters 45 | llm = ChatOllama(model=local_llm, format="json", temperature=0) 46 | 47 | # Define the prompt template 48 | template = "Question: {question}\nAnswer: Let's think step by step." 49 | prompt = PromptTemplate.from_template(template) 50 | 51 | # Define the langchain pipeline 52 | llm_chain = prompt | llm | StrOutputParser() 53 | 54 | # Function to answer questions using RAG 55 | def answer_question(question): 56 | # Generate an embedding for the question 57 | query_embedding = embed_model.embed_query(question) 58 | 59 | # Retrieve the most relevant document chunks 60 | results = collection.query( 61 | query_embeddings=[query_embedding], 62 | n_results=3 # Retrieve top 3 relevant chunks 63 | ) 64 | 65 | relevant_docs = [result[0] for result in results['documents']] 66 | combined_docs = " ".join(relevant_docs) 67 | print(f"Most relevant documents combined: {combined_docs}") 68 | 69 | # Format the prompt with the input variable 70 | formatted_prompt = prompt.format(question=f"Using this data: {combined_docs}. Respond to this prompt: {question}") 71 | 72 | # Generate a response using the langchain pipeline 73 | generation = llm_chain.invoke(formatted_prompt) 74 | return generation 75 | 76 | # Example usage 77 | question = "What does the product look liked?" 78 | answer = answer_question(question) 79 | print(f"Answer: {answer}") 80 | -------------------------------------------------------------------------------- /07 ollama RAG/testdata/product.md: -------------------------------------------------------------------------------- 1 | # Elden Ring: A Comprehensive Guide 2 | 3 | **Elden Ring** is an action RPG developed by FromSoftware and published by Bandai Namco Entertainment. 
Directed by Hidetaka Miyazaki, in collaboration with George R. R. Martin, the game blends the intricate lore and challenging gameplay FromSoftware is renowned for with a vast, open-world experience. 4 | 5 | --- 6 | 7 | ## Story and Setting 8 | 9 | **Elden Ring** is set in the mystical world of the Lands Between, where the Elden Ring, a powerful artifact, has been shattered. Players assume the role of a Tarnished, one of many exiles summoned back to the Lands Between to restore the Elden Ring and become the Elden Lord. The narrative is rich and complex, with each region, character, and item contributing to the overarching lore. 10 | 11 | --- 12 | 13 | ## Gameplay Mechanics 14 | 15 | **Combat and Exploration:** 16 | - **Combat**: The combat system in Elden Ring builds on FromSoftware's previous titles, emphasizing timing, strategy, and resource management. Players can choose from a variety of weapons, magic, and abilities to tailor their combat style. 17 | - **Exploration**: The open-world design encourages exploration. Players can traverse the Lands Between on foot or horseback, discovering hidden dungeons, powerful enemies, and valuable items. 18 | 19 | **Character Customization:** 20 | - **Classes and Attributes**: Players can select from various classes at the start, each with unique attributes and abilities. As they progress, they can level up and allocate points to different attributes, shaping their character to fit their preferred playstyle. 21 | - **Equipment**: A wide array of weapons, armor, and magical items are available. Equipment can be upgraded and modified, enhancing their effectiveness and allowing for deeper customization. 22 | 23 | **Special Features:** 24 | - **Spirit Ashes and Summons**: Players can summon spirits to assist in combat, each with unique abilities and roles. 25 | - **Mounted Combat**: Fighting on horseback adds a new layer of strategy. 26 | - **Multiplayer**: Both cooperative and competitive multiplayer modes are available, allowing players to interact, fight, or assist each other. 27 | 28 | --- 29 | 30 | ## World and Environment 31 | 32 | The Lands Between is divided into several distinct regions, each with its unique landscape, enemies, and challenges. Key locations include: 33 | 34 | - **Limgrave**: A starting region with lush forests and ancient ruins. 35 | - **Liurnia of the Lakes**: A vast, watery expanse home to the Raya Lucaria Academy. 36 | - **Caelid**: A desolate, scarred land filled with dangerous creatures. 37 | 38 | The world is designed to reward exploration, with hidden secrets, powerful items, and lore fragments scattered throughout. 39 | 40 | --- 41 | 42 | ## Key Locations and Progression 43 | 44 | **Tutorial and Early Areas:** 45 | - **The Chapel of Anticipation**: The starting point for all players, where they learn the basics of gameplay. 46 | - **Limgrave**: The first major region, offering a variety of enemies and quests to help players acclimate to the game's mechanics. 47 | 48 | **Major Regions:** 49 | - **Stormveil Castle**: A massive fortress filled with tough enemies and valuable loot. 50 | - **Raya Lucaria Academy**: A school of sorcery with its own unique challenges and rewards. 51 | - **Caelid**: A harsh, dangerous region that tests the player's combat skills and resourcefulness. 52 | 53 | --- 54 | 55 | ## Combat and Strategy 56 | 57 | **Combat Mechanics:** 58 | - **Basics**: Understanding the basics of combat, including dodging, parrying, and blocking, is crucial for survival. 
59 | - **Advanced Techniques**: Mastering advanced techniques, such as backstabbing and using magic effectively, can turn the tide in difficult battles. 60 | 61 | **Boss Fights:** 62 | - **Strategies**: Each boss in Elden Ring has unique mechanics and requires specific strategies to defeat. Learning these patterns and adapting your approach is key to success. 63 | - **Spirit Summons**: Utilizing spirit summons can provide valuable assistance in tough fights, drawing aggro or dealing damage. 64 | 65 | --- 66 | 67 | ## Multiplayer and Online Features 68 | 69 | **Co-op and PvP:** 70 | - **Co-op**: Players can summon others to help with tough areas or bosses, making the game more manageable. 71 | - **PvP**: Competitive multiplayer allows players to invade others' worlds, adding an element of risk and excitement. 72 | 73 | **Asynchronous Features:** 74 | - **Messages**: Players can leave messages for others, providing hints or warnings. 75 | - **Bloodstains**: Viewing bloodstains shows how other players died, offering clues to potential dangers. 76 | 77 | --- 78 | 79 | ## Character Builds and Playstyles 80 | 81 | **Builds:** 82 | - **Strength Builds**: Focus on heavy weapons and high damage output. 83 | - **Dexterity Builds**: Emphasize speed and precision, using lighter weapons. 84 | - **Magic Builds**: Utilize powerful sorceries and incantations to deal damage from a distance. 85 | 86 | **Customization:** 87 | - **Attributes**: Players can allocate points to various attributes, such as strength, dexterity, intelligence, and faith, to suit their preferred playstyle. 88 | - **Equipment**: Choosing the right weapons, armor, and talismans is crucial for maximizing effectiveness in combat. 89 | 90 | --- 91 | 92 | ## In-Game Economy and Resources 93 | 94 | **Currency and Upgrades:** 95 | - **Runes**: The primary currency, used for leveling up and purchasing items. 96 | - **Smithing Stones**: Essential for upgrading weapons and armor. 97 | 98 | **Crafting:** 99 | - **Materials**: Gathering materials from the environment to craft consumables and upgrades. 100 | - **Recipes**: Learning new recipes to expand crafting options. 101 | 102 | --- 103 | 104 | ## Lore and World-Building 105 | 106 | **Mythos:** 107 | - **The Elden Ring**: The central artifact around which the game's story revolves. 108 | - **The Lands Between**: A rich, detailed world filled with history and myth. 109 | 110 | **Characters:** 111 | - **NPCs**: Interacting with NPCs provides quests, lore, and valuable items. 112 | - **Bosses**: Each boss has its own backstory and significance within the world. 113 | 114 | --- 115 | 116 | ## Technical and Artistic Aspects 117 | 118 | **Graphics and Art Style:** 119 | - **Visuals**: Stunning graphics that blend gothic and high fantasy elements. 120 | - **Art Direction**: A unique art style that enhances the game's atmosphere and storytelling. 121 | 122 | **Audio:** 123 | - **Soundtrack**: Composed by Yuka Kitamura and others, the music enhances the mood and intensity of the game. 124 | - **Voice Acting**: High-quality voice acting brings characters and lore to life. 125 | 126 | --- 127 | 128 | ## Development and Release 129 | 130 | **History:** 131 | - **Announcement**: First revealed in 2019, developed by FromSoftware and directed by Hidetaka Miyazaki, with world-building by George R. R. Martin. 132 | - **Release Date**: Released on February 25, 2022, for PlayStation 4, PlayStation 5, Xbox One, Xbox Series X/S, and Microsoft Windows. 
133 | 134 | **Reception:** 135 | - **Critical Acclaim**: Praised for its depth, challenge, and open-world design, quickly becoming a bestseller and winning numerous awards. 136 | - **Community**: A vibrant community that shares strategies, builds, and lore theories. 137 | 138 | --- 139 | 140 | # Conclusion 141 | 142 | Elden Ring stands as a monumental achievement in the action RPG genre. Its vast open world, intricate lore, and challenging gameplay offer an unparalleled experience for fans and newcomers alike. Whether you're delving into the lore, mastering combat mechanics, or exploring every nook and cranny of the Lands Between, Elden Ring provides a rich, rewarding adventure that sets a new standard for the genre. 143 | 144 | --- 145 | 146 | This comprehensive guide only scratches the surface of what Elden Ring has to offer. For more detailed walkthroughs, strategies, and tips, you can refer to the extensive resources available on platforms like [Fextralife](https://eldenring.wiki.fextralife.com/), [IGN](https://www.ign.com/wikis/elden-ring), and [TheGamer](https://www.thegamer.com/elden-ring-complete-guide-walkthrough/). 147 | -------------------------------------------------------------------------------- /08 SubGraph/README.md: -------------------------------------------------------------------------------- 1 | # LangGraph Subgraphs: Nested Structure 2 | 3 | ## Overview 4 | 5 | This example demonstrates a nested subgraph approach in LangGraph, where the main workflow is structured as a hierarchy of subgraphs. This approach promotes modularity and reusability by encapsulating specific functionalities into distinct subgraphs. 6 | 7 | The overall architecture can be visualized as follows: 8 | 9 | ```mermaid 10 | stateDiagram 11 | [*] --> Root 12 | 13 | state Root { 14 | [*] --> Lottery 15 | Lottery --> Buy 16 | } 17 | 18 | 19 | state Lottery { 20 | [*] --> buy 21 | buy --> check 22 | check --> buy 23 | check --> win 24 | win --> [*] 25 | } 26 | state Buy { 27 | [*] --> buyReward 28 | buyReward --> [*] 29 | } 30 | ``` 31 | 32 | Key Concepts: 33 | 34 | * **Everything is a Subgraph:** The main flow is a *root* subgraph, and all other logic is contained within subgraphs. This helps maintain separation of concerns. 35 | * **Subgraph Registry:** A central `subgraph_registry` stores all compiled subgraphs, allowing for dynamic retrieval. 36 | * **Root Subgraph Orchestrates:** The root subgraph manages the overall flow of the application by calling other subgraphs. 37 | 38 | ## Code Structure 39 | 40 | ### 1. Subgraph Registry 41 | 42 | ```python 43 | subgraph_registry: Dict[str, Any] = {} 44 | ``` 45 | 46 | This dictionary acts as a central hub for storing all compiled subgraphs, making them easily accessible by name. 47 | 48 | ### 2. Individual Subgraphs 49 | 50 | Each subgraph defines a specific part of the overall workflow (e.g., `lottery`, `buy`). They have their own state definition and workflow logic. These can be seen in the diagram above. 51 | 52 | ### 3. Subgraph Usage 53 | 54 | The root subgraph acts as the orchestrator, invoking the other subgraphs according to the application logic. 
55 | 56 | Each subgraph is added to the root graph as a node: 57 | ```python 58 | root_builder.add_node("lottery", route_to_lottery) 59 | ``` 60 | 61 | and the node's wrapper function invokes it: 62 | 63 | ```python 64 | def route_to_lottery(state: RootState): 65 | subgraph = subgraph_registry["lottery"] 66 | 67 | # Pass the try_times from RootState into LotteryState 68 | lottery_state = {"input": "0", "winnings": None, "missed": None, "try_times": state.get('try_times', 0)} 69 | response = subgraph.invoke(lottery_state) 70 | 71 | ``` 72 | This is how we trigger a subgraph, in this case, `lottery`, and pass relevant state values. 73 | 74 | ## Usage Pattern 75 | 76 | 1. **Invoke the Subgraph:** The main workflow invokes the subgraph as the first step. 77 | 2. **Subgraph Orchestrates:** The subgraph, in turn, invokes other subgraphs as required, passing the necessary data. 78 | 3. **Data Passing:** Data is passed from the parent subgraph to a child subgraph by calling `subgraph.invoke(start_state_for_child_graph)`. 79 | 80 | This pattern facilitates better organization and management of complex workflows. Each part of the application is encapsulated within its subgraph, making it easier to test and maintain. 81 | -------------------------------------------------------------------------------- /08 SubGraph/main.py: -------------------------------------------------------------------------------- 1 | from langgraph.graph import StateGraph, END, START 2 | from typing import TypedDict, Literal, Union, Dict, Callable, Any 3 | import random 4 | 5 | # ========================== 6 | # Subgraph Registry 7 | # ========================== 8 | subgraph_registry: Dict[str, Any] = {} 9 | 10 | # ========================== 11 | # Subgraph: Lottery 12 | # ========================== 13 | class LotteryState(TypedDict): 14 | input: str 15 | winnings: Union[str, None] 16 | missed: Union[str, None] 17 | try_times: int 18 | 19 | def BuyLottery(state: LotteryState): 20 | random_number = random.randint(0, 3) 21 | print("buy number: " + str(random_number)) 22 | state['input'] = random_number 23 | state['try_times'] += 1 24 | return state 25 | 26 | def Checking(state: LotteryState): 27 | prize_number = random.randint(0, 2) 28 | print("prize number: " + str(prize_number)) 29 | if state['input'] == prize_number: 30 | state['winnings'] = "win" 31 | return state 32 | else: 33 | state['missed'] = "missed" 34 | return state 35 | 36 | def checking_result(state: LotteryState) -> Literal["win", "missed"]: 37 | if state.get("winnings") == "win": 38 | print("You win!") 39 | return "win" 40 | else: 41 | print("You missed. 
Buy again.") 42 | return "missed" 43 | 44 | lottery_builder = StateGraph(LotteryState) 45 | lottery_builder.add_node("buy", BuyLottery) 46 | lottery_builder.add_node("check", Checking) 47 | lottery_builder.set_entry_point("buy") 48 | lottery_builder.add_edge("buy", "check") 49 | lottery_builder.add_conditional_edges( 50 | "check", 51 | checking_result, 52 | { 53 | "missed": "buy", 54 | "win": END, 55 | }, 56 | ) 57 | lottery_graph = lottery_builder.compile() 58 | subgraph_registry["lottery"] = lottery_graph 59 | 60 | 61 | # ========================== 62 | # Subgraph: Buy 63 | # ========================== 64 | class BuyState(TypedDict): 65 | buy_item: str 66 | try_times: int 67 | 68 | def buy_reward(state: BuyState): 69 | if state.get("try_times", 0) > 2: 70 | print("Buy ice cream") 71 | return {"buy_item": "ice cream", "try_times": state.get("try_times")} 72 | else: 73 | print("Buy cookie") 74 | return {"buy_item": "cookie", "try_times": state.get("try_times")} 75 | 76 | 77 | buy_builder = StateGraph(BuyState) 78 | buy_builder.add_node("buy", buy_reward) 79 | buy_builder.set_entry_point("buy") 80 | buy_graph = buy_builder.compile() 81 | 82 | subgraph_registry["buy"] = buy_graph 83 | 84 | # ========================== 85 | # Subgraph: Root 86 | # ========================== 87 | class RootState(TypedDict): 88 | buy_item: Union[str, None] 89 | lottery_result: Union[str, None] 90 | try_times: int 91 | 92 | def route_to_lottery(state: RootState): 93 | subgraph = subgraph_registry["lottery"] 94 | 95 | # Pass the try_times from RootState into LotteryState 96 | lottery_state = {"input": "0", "winnings": None, "missed": None, "try_times": state.get('try_times', 0)} 97 | response = subgraph.invoke(lottery_state) 98 | 99 | # Print try times before buy 100 | print("Lottery try times:", response.get("try_times")) 101 | 102 | return {"buy_item": state["buy_item"], "lottery_result": response.get("winnings"), "try_times": response.get("try_times")} 103 | 104 | 105 | def route_to_buy(state: RootState): 106 | subgraph = subgraph_registry["buy"] 107 | # Pass try_times to buy subgraph 108 | response = subgraph.invoke({"buy_item": state['buy_item'], "try_times": state.get("try_times")}) 109 | return {"buy_item": response.get("buy_item"), "lottery_result": state.get("lottery_result"), "try_times": response.get("try_times")} 110 | 111 | root_builder = StateGraph(RootState) 112 | root_builder.add_node("lottery", route_to_lottery) 113 | root_builder.add_node("buy", route_to_buy) 114 | root_builder.set_entry_point("lottery") 115 | root_builder.add_edge("lottery", "buy") 116 | root_graph = root_builder.compile() 117 | 118 | subgraph_registry["root"] = root_graph 119 | 120 | # ========================== 121 | # Main Graph: Top Level 122 | # ========================== 123 | class MainGraphState(TypedDict): 124 | input: Union[str, None] 125 | 126 | 127 | def invoke_subgraph(state: MainGraphState): 128 | subgraph = subgraph_registry["root"] 129 | response = subgraph.invoke( 130 | {"buy_item": None, "try_times": 0} 131 | ) 132 | return { 133 | "input": None 134 | } 135 | 136 | 137 | main_graph = StateGraph(MainGraphState) 138 | main_graph.add_node("subgraph", invoke_subgraph) 139 | main_graph.set_entry_point("subgraph") 140 | 141 | main_graph = main_graph.compile() 142 | 143 | 144 | # ========================== 145 | # Run 146 | # ========================== 147 | for s in main_graph.stream( 148 | { 149 | "input": None, 150 | } 151 | ): 152 | print(s) 
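Because every compiled graph lives in `subgraph_registry`, any part of the workflow can also be exercised on its own, which is handy when testing a single piece in isolation. A minimal, illustrative example (not part of the flow above):

```python
# Run the lottery subgraph by itself, outside the root graph
lottery = subgraph_registry["lottery"]
final_state = lottery.invoke({"input": "0", "winnings": None, "missed": None, "try_times": 0})
print("standalone lottery:", final_state.get("winnings"), "after", final_state.get("try_times"), "tries")
```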
-------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing to LangGraph-learn 2 | 3 | Thank you for considering contributing to LangGraph-learn! We welcome contributions of all kinds, including new examples, bug fixes, documentation improvements, and more. 4 | 5 | ## How to Contribute 6 | 7 | ### 1. Fork the Repository 8 | 9 | 1. Navigate to the [LangGraph-learn repository](https://github.com/LangGraph-GUI/LangGraph-learn). 10 | 2. Click the "Fork" button in the upper right corner of the repository page. This will create a copy of the repository in your GitHub account. 11 | 12 | ### 2. Clone Your Fork 13 | 14 | Clone your forked repository to your local machine: 15 | 16 | ```bash 17 | git clone https://github.com/your-username/LangGraph-learn.git 18 | ``` 19 | 20 | ### 3. Create a New Branch 21 | 22 | Create a new branch for your feature or fix: 23 | 24 | ```bash 25 | git checkout -b your-branch-name 26 | ``` 27 | 28 | ### 4. Make Your Changes 29 | 30 | Make the necessary changes to the codebase. Ensure that your code follows the project's coding standards and includes appropriate documentation and tests. 31 | 32 | ### 5. Commit Your Changes 33 | 34 | Commit your changes with a clear and concise commit message: 35 | 36 | ```bash 37 | git add . 38 | git commit -m "Description of your changes" 39 | ``` 40 | 41 | ### 6. Push Your Changes 42 | 43 | Push your changes to your forked repository: 44 | 45 | ```bash 46 | git push origin your-branch-name 47 | ``` 48 | 49 | ### 7. Create a Pull Request 50 | 51 | 1. Navigate to the original [LangGraph-learn repository](https://github.com/LangGraph-GUI/LangGraph-learn). 52 | 2. Click on the "Pull requests" tab and then click the "New pull request" button. 53 | 3. Select your fork and branch as the source and the original repository and branch as the destination. 54 | 4. Provide a clear title and description for your pull request. 55 | 5. Click the "Create pull request" button to submit your pull request. 56 | 57 | ### 8. Review and Merge 58 | 59 | Your pull request will be reviewed by the project maintainers. They may request changes or ask for additional information. Once your pull request is approved, it will be merged into the main repository. 60 | 61 | ## Additional Resources 62 | 63 | - [GitHub Documentation](https://docs.github.com/en) 64 | - [Fork a Repo](https://docs.github.com/en/github/getting-started-with-github/fork-a-repo) 65 | - [Creating a Pull Request](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request) 66 | 67 | Thank you for your contribution! Your support is greatly appreciated. 
68 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2024 HomunMage 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # LangGraph-learn 2 | 3 | *The project is for understanding [LangGraph-GUI-backend](https://github.com/LangGraph-GUI/LangGraph-GUI-backend) .* 4 | 5 | Welcome to LangGraph-learn, a community-driven project aimed at mastering LangGraph and other AI-related topics. This repository provides a variety of hands-on examples and resources to help you learn how to create and manage language model workflows using LangGraph and related tools. 6 | 7 | ## Table of Contents 8 | 9 | - [Introduction](#introduction) 10 | - [Getting Started](#getting-started) 11 | - [Community Guidelines](#community-guidelines) 12 | - [Contributing](#contributing) 13 | - [License](#license) 14 | 15 | ## Introduction 16 | 17 | LangGraph-learn is designed to foster a collaborative learning environment for individuals interested in AI and machine learning. Our goal is to provide practical examples and tutorials that demonstrate how to use LangGraph, LangChain, and other related tools to build efficient and reusable workflows involving language models. 18 | 19 | ## Getting Started 20 | 21 | To get started with LangGraph-learn, you will need to set up your environment. Here are some general steps: 22 | 23 | 1. **Install Python 3.11**: Ensure you have Python 3.11 installed on your machine. 24 | 2. **Install Required Packages**: Depending on the examples you are working with, you may need the following Python packages: 25 | - `langgraph` 26 | - `langchain-community` 27 | - `langchain-core` 28 | - `httpx` 29 | - `tenacity<8.4.0` 30 | 31 | You can install these packages using pip: 32 | ```bash 33 | pip install langchain langchain-community langchain-core ollama chromadb langgraph 34 | ``` 35 | Please bear in mind, this is generic and will be updated in all sub folders as needed. 36 | 37 | 3. **Set Up Local Language Models**: Some examples may require local language models such as ChatOllama. Follow the instructions in the relevant example directories to set up these models. 
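For example, with the Ollama server running (`ollama serve`) and a model pulled locally (for instance `ollama pull phi3`), a model can be wired into LangChain in a couple of lines; `phi3` here is just the model most examples in this repository default to:

```python
from langchain_community.chat_models import ChatOllama

# Assumes an Ollama server is listening on its default port and the model has been pulled locally
llm = ChatOllama(model="phi3", temperature=0)
print(llm.invoke("Reply with one short sentence.").content)
```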
38 | 39 | ## Community Guidelines 40 | 41 | Our community thrives on collaboration and mutual respect. Here are a few guidelines to help maintain a positive and productive environment: 42 | 43 | - **Be Respectful**: Treat everyone with respect and kindness. Discriminatory or offensive behavior will not be tolerated. 44 | - **Collaborate**: Share your knowledge, help others, and work together on projects. 45 | - **Learn and Teach**: Engage with the content, ask questions, and offer guidance to others when possible. 46 | - **Contribute**: We welcome contributions! Whether it's a new example, bug fix, or documentation improvement, your input is valuable. 47 | 48 | ## Contributing 49 | 50 | We welcome contributions to LangGraph-learn! If you have an example, improvement, or any other contribution, please follow these steps: 51 | 52 | 1. Fork the repository. 53 | 2. Create a new branch for your feature or fix. 54 | 3. Make your changes and commit them with clear and concise messages. 55 | 4. Open a pull request with a description of your changes. 56 | 57 | For more detailed information, please refer to our [Contributing Guidelines](CONTRIBUTING.md). 58 | 59 | ## License 60 | 61 | This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details. 62 | --------------------------------------------------------------------------------