├── .env.example
├── README.md
├── langgraph.json
├── pyproject.toml
└── src
    └── chatbot
        ├── __init__.py
        ├── configuration.py
        ├── graph.py
        ├── prompts.py
        ├── reply_gai.ipynb
        └── utils.py

/.env.example:
--------------------------------------------------------------------------------
1 | ARCADE_API_KEY=
2 | ARCADE_USER_ID=
3 | ANTHROPIC_API_KEY=
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # reply gAI
2 | 
3 | Reply gAI is an AI clone for any X profile. It automatically collects a user's Tweets, stores them in long-term memory, and uses Retrieval-Augmented Generation (RAG) to generate responses that match their unique writing style and viewpoints.
4 | 
5 | ![reply_gai](https://github.com/user-attachments/assets/91e5bf27-04c0-4584-817f-16e43296cd34)
6 | 
7 | ## 🚀 Quickstart
8 | 
9 | One option for accessing Twitter/X data is the [Arcade API](https://docs.arcade-ai.com/integrations/toolkits/x) toolkit.
10 | 
11 | Set API keys for the LLM of choice (Anthropic API) along with the Arcade API:
12 | ```
13 | export ANTHROPIC_API_KEY=
14 | export ARCADE_API_KEY=
15 | export ARCADE_USER_ID=
16 | ```
17 | 
18 | Clone the repository and launch the assistant [with the LangGraph server](https://langchain-ai.github.io/langgraph/cloud/reference/cli/#dev):
19 | ```bash
20 | curl -LsSf https://astral.sh/uv/install.sh | sh
21 | git clone https://github.com/langchain-ai/reply_gAI.git
22 | cd reply_gAI
23 | uvx --refresh --from "langgraph-cli[inmem]" --with-editable . 
--python 3.11 langgraph dev
24 | ```
25 | 
26 | You should see the following output and Studio will open in your browser:
27 | 
28 | - 🚀 API: http://127.0.0.1:2024
29 | - 🎨 Studio UI: https://smith.langchain.com/studio/?baseUrl=http://127.0.0.1:2024
30 | - 📚 API Docs: http://127.0.0.1:2024/docs
31 | 
32 | In the `configuration` tab, add the Twitter/X handle of any user:
33 | 
34 | ![Screenshot 2024-12-11 at 1 30 51 PM](https://github.com/user-attachments/assets/10cc592b-9b1d-4132-87e1-db3e65257fc9)
35 | 
36 | Then, just interact with a chatbot persona for that user:
37 | 
38 | ![Screenshot 2024-12-11 at 1 30 30 PM](https://github.com/user-attachments/assets/6bbfbd5a-40a2-46c5-b329-c66e1c1952d8)
39 | 
40 | ## How it works
41 | 
42 | Reply gAI uses LangGraph to create a workflow that mimics a Twitter user's writing style:
43 | 
44 | 1. **Tweet Collection**
45 |    - Uses the [Arcade API X Toolkit](https://docs.arcade-ai.com/integrations/toolkits/x) to fetch Tweets over the past 7 days from a specified Twitter user
46 |    - Tweets are stored in the LangGraph Server's [memory store](https://langchain-ai.github.io/langgraph/concepts/persistence/#memory-store)
47 |    - The system automatically refreshes tweets if they're older than the configured age limit
48 | 
49 | 2. **Conversation Flow**
50 |    - The workflow is managed by a state graph with two main nodes:
51 |      - `get_tweets`: Fetches and stores recent tweets
52 |      - `chat`: Generates responses using Claude 3.5 Sonnet
53 | 
54 | 3. 
**Response Generation**
55 |    - Uses RAG to condition responses on the user's Tweets stored in memory
56 |    - Currently, it loads all of the user's stored tweets into the prompt, but semantic search over the LangGraph Server's memory store [is also supported](https://langchain-ai.github.io/langgraph/concepts/persistence/#semantic-search)
57 |    - The LLM analyzes the collected tweets to understand the user's writing style
58 |    - It generates contextually appropriate responses that match the personality and tone of the target Twitter user
59 | 
60 | The system automatically determines whether to fetch new tweets or use existing ones based on their age, ensuring responses are generated from recent and relevant data.
61 | 
62 | ## Long-term memory
63 | 
64 | In the quickstart, we use a [locally running LangGraph server](https://langchain-ai.github.io/langgraph/tutorials/langgraph-platform/local-server/#create-a-env-file).
65 | 
66 | This uses the `langgraph dev` command, which [launches the server in development mode](https://langchain-ai.github.io/langgraph/cloud/reference/cli/#dev).
67 | 
68 | Tweets are saved to the [LangGraph store](https://langchain-ai.github.io/langgraph/concepts/persistence/#memory-store). In development mode, the store is persisted to the `.langgraph_api/` folder in this directory; a deployed server uses Postgres for persistence.
69 | 
70 | You can visualize the Tweets saved for each user in the Store directly with LangGraph Studio:
71 | 
72 | ![Screenshot 2024-12-11 at 1 31 09 PM](https://github.com/user-attachments/assets/41a06245-0659-4309-b7e5-e78a2f108c2b)
73 | 
74 | ## Deployment
75 | 
76 | If you want to launch the server in a mode suitable for production, consider [LangGraph Cloud](https://langchain-ai.github.io/langgraph/cloud/quick_start/#langgraph-cloud-quick-start):
77 | 
78 | * Add `LANGSMITH_API_KEY` to your `.env` file.
79 | * Ensure [Docker](https://docs.docker.com/engine/install/) is running on your machine.
80 | * [Run with `langgraph up`](https://langchain-ai.github.io/langgraph/cloud/reference/cli/#up)
81 | 
82 | ```bash
83 | uvx --refresh --from "langgraph-cli[inmem]" --with-editable . --python 3.11 langgraph up
84 | ```
85 | 
86 | See [Module 6](https://github.com/langchain-ai/langchain-academy/tree/main/module-6) of LangChain Academy for a detailed walkthrough of deployment options with LangGraph.
87 | 
--------------------------------------------------------------------------------
/langgraph.json:
--------------------------------------------------------------------------------
1 | {
2 |   "dockerfile_lines": [],
3 |   "graphs": {
4 |     "reply_gai": "./src/chatbot/graph.py:graph"
5 |   },
6 |   "python_version": "3.11",
7 |   "env": "./.env",
8 |   "dependencies": [
9 |     "."
10 |   ]
11 | }
--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
1 | [project]
2 | name = "reply-gAI"
3 | version = "0.0.1"
4 | description = "Chat persona based on a Twitter user."
5 | authors = [
6 |     { name = "Lance Martin" }
7 | ]
8 | readme = "README.md"
9 | license = { text = "MIT" }
10 | requires-python = ">=3.9"
11 | dependencies = [
12 |     "langgraph>=0.2.55",
13 |     "langchain-community>=0.3.9",
14 |     "langchain-anthropic>=0.3.0",
15 |     "arcade_x>=0.1.5",
16 | ]
17 | 
18 | [project.optional-dependencies]
19 | dev = ["mypy>=1.11.1", "ruff>=0.6.1"]
20 | 
21 | [build-system]
22 | requires = ["setuptools>=73.0.0", "wheel"]
23 | build-backend = "setuptools.build_meta"
24 | 
25 | [tool.setuptools]
26 | packages = ["chatbot"]
27 | 
28 | [tool.setuptools.package-dir]
29 | "chatbot" = "src/chatbot"
30 | 
31 | [tool.setuptools.package-data]
32 | "*" = ["py.typed"]
33 | 
34 | [tool.ruff]
35 | lint.select = [
36 |     "E",    # pycodestyle
37 |     "F",    # pyflakes
38 |     "I",    # isort
39 |     "D",    # pydocstyle
40 |     "D401", # First line should be in imperative mood
41 |     "T201",
42 |     "UP",
43 | ]
44 | lint.ignore = [
45 |     "UP006",
46 |     "UP007",
47 |     "UP035",
48 |     "D417",
49 |     "E501",
50 | ]
51 | 
52 | [tool.ruff.lint.per-file-ignores]
53 | "tests/*" = ["D", "UP"]
54 | 
55 | [tool.ruff.lint.pydocstyle]
56 | convention = "google"
--------------------------------------------------------------------------------
/src/chatbot/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/langchain-ai/reply_gAI/130f56f106f6b0d59f6b118713c8ba00ef808155/src/chatbot/__init__.py
--------------------------------------------------------------------------------
/src/chatbot/configuration.py:
--------------------------------------------------------------------------------
1 | import os
2 | from dataclasses import dataclass, fields
3 | from typing import Any, Optional
4 | 
5 | from langchain_core.runnables import RunnableConfig
6 | 
7 | 
8 | @dataclass(kw_only=True)
9 | class Configuration:
10 |     """The configurable fields for the chatbot."""
11 |     username: str = "elonmusk"
12 |     max_tweet_age_seconds: int = 86400  # 24 hours (24 * 60 * 60 seconds)
13 | 
14 |     @classmethod
15 |     def from_runnable_config(
16 |         cls, config: Optional[RunnableConfig] = None
17 |     ) -> "Configuration":
18 |         """Create a Configuration instance from a RunnableConfig."""
19 |         configurable = (
20 |             config["configurable"] if config and "configurable" in config else {}
21 |         )
22 |         values: dict[str, Any] = {
23 |             f.name: os.environ.get(f.name.upper(), configurable.get(f.name))
24 |             for f in fields(cls)
25 |             if f.init
26 |         }
27 |         return cls(**{k: v for k, v in values.items() if v})
--------------------------------------------------------------------------------
/src/chatbot/graph.py:
--------------------------------------------------------------------------------
1 | import uuid
2 | import os
3 | from datetime import datetime, timezone
4 | from langchain_core.runnables import RunnableConfig
5 | from langchain_core.messages import SystemMessage
6 | from langchain_anthropic import ChatAnthropic
7 | from langgraph.graph import MessagesState
8 | from langgraph.store.base import BaseStore
9 | from langgraph.graph import END, StateGraph
10 | from arcadepy import Arcade
11 | 
12 | import chatbot.configuration as configuration
13 | from chatbot.prompts import CHAT_INSTRUCTIONS
14 | from chatbot.utils import get_all_tweets
15 | 
16 | def get_tweets(state: MessagesState, config: RunnableConfig, store: BaseStore) -> None:
17 |     """Fetch and store recent tweets for a specified Twitter user.
18 | 
19 |     This function authenticates with the Arcade API, retrieves recent tweets for a given
20 |     username, and stores them in the provided BaseStore instance. Each tweet is stored
21 |     with its text content and URL.
22 | 
23 |     Args:
24 |         state (MessagesState): Current conversation state (unused but required by graph)
25 |         config (RunnableConfig): Configuration object containing settings like username
26 |         store (BaseStore): Storage interface for saving retrieved tweets
27 | 
28 |     Returns:
29 |         None: Tweets are written to the store rather than returned
30 | 
31 |     Note:
32 |         - Requires ARCADE_USER_ID environment variable to be set
33 |         - Fetches up to 100 most recent tweets from the last 7 days
34 |         - Stores tweets using (username, "tweets") as namespace
35 |     """
36 | 
37 |     # Get the configuration
38 |     configurable = configuration.Configuration.from_runnable_config(config)
39 | 
40 |     client = Arcade()
41 |     USER_ID = os.environ["ARCADE_USER_ID"]
42 |     TOOL_NAME = "X.SearchRecentTweetsByUsername"
43 | 
44 |     auth_response = client.tools.authorize(
45 |         tool_name=TOOL_NAME,
46 |         user_id=USER_ID,
47 |     )
48 | 
49 |     if auth_response.status != "completed":
50 |         print(f"Click this link to authorize: {auth_response.authorization_url}")
51 | 
52 |         # Wait for the authorization to complete
53 |         client.auth.wait_for_completion(auth_response)
54 | 
55 |     # Search for recent tweets (last 7 days) on X (Twitter)
56 |     username = configurable.username
57 | 
58 |     # Get all the tweets
59 |     tweets = get_all_tweets(client, username, USER_ID, TOOL_NAME)
60 | 
61 |     # Load the tweets into memory
62 |     namespace_for_memory = (username, "tweets")
63 |     for tweet in tweets:
64 |         memory_id = tweet.get('id', uuid.uuid4())
65 |         text = tweet.get('text', "Tweet empty")
66 |         url = tweet.get('tweet_url', "URL not found")
67 |         store.put(namespace_for_memory, memory_id, {"text": text, "url": url})
68 | 
69 | def chat(state: MessagesState, config: RunnableConfig, store: BaseStore) -> dict:
70 |     """Generate a chat response in the style of a specific Twitter user.
71 | 
72 |     This function retrieves tweets from the store for a given username, formats them,
73 |     and uses them as context for Claude to generate a response that mimics the user's
74 |     writing style and personality.
75 | 
76 |     Args:
77 |         state (MessagesState): Current conversation state containing message history
78 |         config (RunnableConfig): Configuration object containing settings like username
79 |         store (BaseStore): Storage interface for accessing saved tweets
80 | 
81 |     Returns:
82 |         dict: Contains the generated message in the 'messages' key
83 |     """
84 | 
85 |     # Get the configuration
86 |     configurable = configuration.Configuration.from_runnable_config(config)
87 |     username = configurable.username
88 | 
89 |     # Get the tweets
90 |     namespace_for_memory = (username, "tweets")
91 | 
92 |     # Get all the tweets
93 |     memories = []
94 |     while mems := store.search(namespace_for_memory, limit=200, offset=len(memories)):
95 |         memories.extend(mems)
96 | 
97 |     # Format the tweets
98 |     formatted_output = ""
99 |     for memory in memories:
100 |         tweet = memory.value
101 |         formatted_output += f"@{username}: {tweet['text']}\n"
102 |         formatted_output += "-" * 80 + "\n"
103 | 
104 |     # Generate a response
105 |     claude_3_5_sonnet = ChatAnthropic(model="claude-3-5-sonnet-20240620", temperature=0.75)
106 |     chat_instructions_formatted = CHAT_INSTRUCTIONS.format(username=username, tweets=formatted_output)
107 |     msg = claude_3_5_sonnet.invoke([SystemMessage(content=chat_instructions_formatted)] + state['messages'])
108 |     return {"messages": [msg]}
109 | 
110 | def route_to_tweet_loader(state: MessagesState, config: RunnableConfig, store: BaseStore) -> str:
111 |     """Route the workflow based on tweet availability and age.
112 | 
113 |     This function determines whether to fetch new tweets or proceed to chat by checking:
114 |     1. If tweets exist for the user in the store
115 |     2. If existing tweets are too old (beyond max_tweet_age_seconds)
116 | 
117 |     Args:
118 |         state (MessagesState): Current conversation state
119 |         config (RunnableConfig): Configuration containing username and tweet age settings
120 |         store (BaseStore): Storage interface for accessing saved tweets
121 | 
122 |     Returns:
123 |         str: Either "get_tweets" to fetch new tweets or "chat" to proceed with conversation
124 |     """
125 | 
126 |     # Get the configuration
127 |     configurable = configuration.Configuration.from_runnable_config(config)
128 |     username = configurable.username
129 | 
130 |     # If we have Tweets from the user, go to chat
131 |     namespace_for_memory = (username, "tweets")
132 |     memories = store.search(namespace_for_memory, limit=200)
133 | 
134 |     # If we have tweets, check if they're too old
135 |     if memories:
136 |         # Get most recent tweet timestamp
137 |         most_recent = max(mem.created_at for mem in memories)
138 | 
139 |         # Calculate time difference
140 |         now = datetime.now(timezone.utc)
141 |         time_delta = now - most_recent
142 | 
143 |         # If tweets are too old, get new ones
144 |         if time_delta.total_seconds() > configurable.max_tweet_age_seconds:
145 |             return "get_tweets"
146 |         return "chat"
147 |     # If no tweets for the user, get them
148 |     else:
149 |         return "get_tweets"
150 | 
151 | # Create the graph + all nodes
152 | builder = StateGraph(MessagesState, config_schema=configuration.Configuration)
153 | builder.add_node("chat", chat)
154 | builder.add_node("get_tweets", get_tweets)
155 | builder.set_conditional_entry_point(route_to_tweet_loader, ["chat", "get_tweets"])
156 | builder.add_edge("get_tweets", "chat")
157 | builder.add_edge("chat", END)
158 | 
159 | # Compile the graph
160 | graph = builder.compile()
--------------------------------------------------------------------------------
/src/chatbot/prompts.py:
--------------------------------------------------------------------------------
1 | CHAT_INSTRUCTIONS = """Hey! You're chatting as @{username} on Twitter.
2 | 
3 | I've pulled some of their recent tweets to help you get their vibe:
4 | 
5 | {tweets}
6 | 
7 | Quick tips for keeping it real:
8 | - Jump right into answers naturally, no need for formal intros
9 | - Chat like you're talking to a friend - keep it casual and fun
10 | - Use their usual phrases and way of talking
11 | - Share opinions and hot takes in their style
12 | - If it's a new topic, just flow with it naturally using their voice
13 | 
14 | Just vibe like @{username} would - keep it real and conversational!
15 | 
16 | Here's what they're asking:"""
--------------------------------------------------------------------------------
/src/chatbot/reply_gai.ipynb:
--------------------------------------------------------------------------------
1 | {
2 |  "cells": [
3 |   {
4 |    "cell_type": "code",
5 |    "execution_count": 11,
6 |    "metadata": {},
7 |    "outputs": [],
8 |    "source": [
9 |     "%%capture --no-stderr\n",
10 |     "%pip install --quiet -U langgraph langchain_community langchain_core arcade_x"
11 |    ]
12 |   },
13 |   {
14 |    "cell_type": "code",
15 |    "execution_count": 29,
16 |    "metadata": {},
17 |    "outputs": [],
18 |    "source": [
19 |     "import uuid\n",
20 |     "import os\n",
21 |     "from datetime import datetime, timezone\n",
22 |     "from langchain_core.runnables import RunnableConfig\n",
23 |     "from langchain_core.messages import SystemMessage\n",
24 |     "from langchain_anthropic import ChatAnthropic\n",
25 |     "from langgraph.graph import MessagesState\n",
26 |     "from langgraph.store.base import BaseStore\n",
27 |     "from langgraph.graph import END, StateGraph\n",
28 |     "from arcadepy import Arcade\n",
29 |     "\n",
30 |     "import chatbot.configuration as configuration\n",
31 |     "from chatbot.prompts import CHAT_INSTRUCTIONS\n",
32 |     "from chatbot.utils import get_all_tweets\n",
33 |     "\n",
34 |     "def get_tweets(state: MessagesState, config: RunnableConfig, store: BaseStore) -> dict:\n",
35 |     "    \"\"\"Fetch and store recent tweets for a specified Twitter user.\n",
36 |     "    \n",
37 |     "    This function 
authenticates with the Arcade API, retrieves recent tweets for a given\n", 38 | " username, and stores them in the provided BaseStore instance. Each tweet is stored\n", 39 | " with its text content and URL.\n", 40 | " \n", 41 | " Args:\n", 42 | " state (MessagesState): Current conversation state (unused but required by graph)\n", 43 | " config (RunnableConfig): Configuration object containing settings like username\n", 44 | " store (BaseStore): Storage interface for saving retrieved tweets\n", 45 | " \n", 46 | " Returns:\n", 47 | " dict: Empty dictionary (function stores tweets but doesn't return them)\n", 48 | " \n", 49 | " Note:\n", 50 | " - Requires ARCADE_USER_ID environment variable to be set\n", 51 | " - Fetches up to 100 most recent tweets from the last 7 days\n", 52 | " - Stores tweets using (username, \"tweets\") as namespace\n", 53 | " \"\"\"\n", 54 | "\n", 55 | " # Get the configuration\n", 56 | " configurable = configuration.Configuration.from_runnable_config(config)\n", 57 | "\n", 58 | " client = Arcade() \n", 59 | " USER_ID = os.environ[\"ARCADE_USER_ID\"]\n", 60 | " TOOL_NAME = \"X.SearchRecentTweetsByUsername\"\n", 61 | "\n", 62 | " auth_response = client.tools.authorize(\n", 63 | " tool_name=TOOL_NAME,\n", 64 | " user_id=USER_ID,\n", 65 | " )\n", 66 | "\n", 67 | " if auth_response.status != \"completed\":\n", 68 | " print(f\"Click this link to authorize: {auth_response.authorization_url}\")\n", 69 | "\n", 70 | " # Wait for the authorization to complete\n", 71 | " client.auth.wait_for_completion(auth_response)\n", 72 | "\n", 73 | " # Search for recent tweets (last 7 days) on X (Twitter)\n", 74 | " username = configurable.username\n", 75 | "\n", 76 | " # Get all the tweets\n", 77 | " tweets = get_all_tweets(client, username, USER_ID, TOOL_NAME)\n", 78 | "\n", 79 | " # Load the tweets into memory\n", 80 | " namespace_for_memory = (username, \"tweets\")\n", 81 | " for tweet in tweets:\n", 82 | " memory_id = tweet.get('id',uuid.uuid4())\n", 83 | " text 
= tweet.get('text',\"Tweet empty\")\n", 84 | " url = tweet.get('tweet_url',\"URL not found\")\n", 85 | " store.put(namespace_for_memory, memory_id, {\"text\": text,\"url\": url})\n", 86 | "\n", 87 | "def chat(state: MessagesState, config: RunnableConfig, store: BaseStore) -> dict:\n", 88 | " \"\"\"Generate a chat response in the style of a specific Twitter user.\n", 89 | " \n", 90 | " This function retrieves tweets from the store for a given username, formats them,\n", 91 | " and uses them as context for Claude to generate a response that mimics the user's\n", 92 | " writing style and personality.\n", 93 | "\n", 94 | " Args:\n", 95 | " state (MessagesState): Current conversation state containing message history\n", 96 | " config (RunnableConfig): Configuration object containing settings like username\n", 97 | " store (BaseStore): Storage interface for accessing saved tweets\n", 98 | "\n", 99 | " Returns:\n", 100 | " dict: Contains the generated message in the 'messages' key\n", 101 | " \"\"\"\n", 102 | "\n", 103 | " # Get the configuration\n", 104 | " configurable = configuration.Configuration.from_runnable_config(config)\n", 105 | " username = configurable.username\n", 106 | " \n", 107 | " # Get the tweets\n", 108 | " namespace_for_memory = (username, \"tweets\")\n", 109 | "\n", 110 | " # Get all the tweets\n", 111 | " memories = []\n", 112 | " while mems := store.search(namespace_for_memory, limit=200, offset=len(memories)):\n", 113 | " memories.extend(mems)\n", 114 | " \n", 115 | " # Format the tweets\n", 116 | " formatted_output = \"\"\n", 117 | " for memory in memories:\n", 118 | " tweet = memory.value\n", 119 | " formatted_output += f\"@{username}: {tweet['text']}\\n\"\n", 120 | " formatted_output += \"-\" * 80 + \"\\n\"\n", 121 | "\n", 122 | " # Generate a response\n", 123 | " claude_3_5_sonnet = ChatAnthropic(model=\"claude-3-5-sonnet-20240620\", temperature=0.75) \n", 124 | " chat_instructions_formatted = 
CHAT_INSTRUCTIONS.format(username=username,tweets=formatted_output)\n", 125 | " msg = claude_3_5_sonnet.invoke([SystemMessage(content=chat_instructions_formatted)]+state['messages'])\n", 126 | " return {\"messages\": [msg]} \n", 127 | "\n", 128 | "def route_to_tweet_loader(state: MessagesState, config: RunnableConfig, store: BaseStore) -> dict:\n", 129 | " \"\"\"Route the workflow based on tweet availability and age.\n", 130 | " \n", 131 | " This function determines whether to fetch new tweets or proceed to chat by checking:\n", 132 | " 1. If tweets exist for the user in the store\n", 133 | " 2. If existing tweets are too old (beyond max_tweet_age_seconds)\n", 134 | " \n", 135 | " Args:\n", 136 | " state (MessagesState): Current conversation state\n", 137 | " config (RunnableConfig): Configuration containing username and tweet age settings\n", 138 | " store (BaseStore): Storage interface for accessing saved tweets\n", 139 | " \n", 140 | " Returns:\n", 141 | " str: Either \"get_tweets\" to fetch new tweets or \"chat\" to proceed with conversation\n", 142 | " \"\"\"\n", 143 | "\n", 144 | " # Get the configuration\n", 145 | " configurable = configuration.Configuration.from_runnable_config(config)\n", 146 | " username = configurable.username\n", 147 | " \n", 148 | " # If we have Tweets from the user, go to chat\n", 149 | " namespace_for_memory = (username, \"tweets\")\n", 150 | " memories = store.search(namespace_for_memory, limit=200)\n", 151 | "\n", 152 | " # If we have tweets, check if they're too old\n", 153 | " if memories: \n", 154 | " # Get most recent tweet timestamp\n", 155 | " most_recent = max(mem.created_at for mem in memories)\n", 156 | " \n", 157 | " # Calculate time difference\n", 158 | " now = datetime.now(timezone.utc)\n", 159 | " time_delta = now - most_recent\n", 160 | " \n", 161 | " # If tweets are too old, get new ones\n", 162 | " if time_delta.total_seconds() > configurable.max_tweet_age_seconds:\n", 163 | " return \"get_tweets\"\n", 164 | " 
return \"chat\"\n", 165 | " # If no tweets for the user, get them \n", 166 | " else:\n", 167 | " return \"get_tweets\"\n", 168 | "\n", 169 | "# Create the graph + all nodes\n", 170 | "builder = StateGraph(MessagesState, config_schema=configuration.Configuration)\n", 171 | "builder.add_node(\"chat\",chat)\n", 172 | "builder.add_node(\"get_tweets\",get_tweets)\n", 173 | "builder.set_conditional_entry_point(route_to_tweet_loader, [\"chat\", \"get_tweets\"])\n", 174 | "builder.add_edge(\"get_tweets\", \"chat\")\n", 175 | "builder.add_edge(\"chat\", END)\n", 176 | "\n", 177 | "# Store\n", 178 | "from langgraph.store.memory import InMemoryStore\n", 179 | "in_memory_store = InMemoryStore()\n", 180 | "\n", 181 | "# Compile the graph\n", 182 | "graph = builder.compile(store=in_memory_store)" 183 | ] 184 | }, 185 | { 186 | "cell_type": "code", 187 | "execution_count": 30, 188 | "metadata": {}, 189 | "outputs": [ 190 | { 191 | "data": { 192 | "text/plain": [ 193 | "{'messages': [HumanMessage(content='What are some of your favorite applications of LLMs?', additional_kwargs={}, response_metadata={}, id='ca0873c2-fe1e-4d67-a106-46458c8c0820'),\n", 194 | " AIMessage(content=\"Oh man, I love this question! LLMs are just mind-blowing in terms of their potential applications. One of my absolute favorites, which I've been geeking out about recently, is using LLMs as reading companions for books. \\n\\nImagine you're diving into a complex non-fiction book or a dense sci-fi novel, and you've got this AI sidekick that's read the whole thing, understands the context, and can chat with you about it in real-time. You could ask questions, get summaries, or even have generated discussions about themes and ideas. It's like NotebookLM on steroids!\\n\\nI actually tweeted about this recently - I think if Amazon built something like this into Kindle, it would be absolutely game-changing. 
They've got all the content right there, after all.\\n\\nBeyond that, I'm really excited about LLMs in coding assistants. The potential for accelerating software development is huge. And of course, there's the whole world of creative applications - writing, brainstorming, even aiding in scientific research and hypothesis generation.\\n\\nBut honestly, we're just scratching the surface. The most mind-blowing applications of LLMs are probably things we haven't even thought of yet. It's like we've discovered fire and we're still figuring out all the ways we can use it. Exciting times!\", additional_kwargs={}, response_metadata={'id': 'msg_01Xpv4yY4jLc3EEBpgk52SfR', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 1128, 'output_tokens': 299}}, id='run-c6abe3f1-7cf4-4f55-a6fc-270a6186693a-0', usage_metadata={'input_tokens': 1128, 'output_tokens': 299, 'total_tokens': 1427, 'input_token_details': {}})]}" 195 | ] 196 | }, 197 | "execution_count": 30, 198 | "metadata": {}, 199 | "output_type": "execute_result" 200 | } 201 | ], 202 | "source": [ 203 | "from langchain_core.messages import HumanMessage\n", 204 | "config = {\"configurable\": {\"username\": \"karpathy\"}}\n", 205 | "graph.invoke({\"messages\": [HumanMessage(content=\"What are some of your favorite applications of LLMs?\")]}, config=config)" 206 | ] 207 | }, 208 | { 209 | "cell_type": "code", 210 | "execution_count": 24, 211 | "metadata": {}, 212 | "outputs": [ 213 | { 214 | "name": "stdout", 215 | "output_type": "stream", 216 | "text": [ 217 | "14\n" 218 | ] 219 | } 220 | ], 221 | "source": [] 222 | }, 223 | { 224 | "cell_type": "code", 225 | "execution_count": 22, 226 | "metadata": {}, 227 | "outputs": [], 228 | "source": [] 229 | }, 230 | { 231 | "cell_type": "code", 232 | "execution_count": null, 233 | "metadata": {}, 234 | "outputs": [], 235 | "source": [] 236 | } 237 | ], 238 | "metadata": { 239 | "kernelspec": { 240 | "display_name": 
"reply_gai_env",
241 |    "language": "python",
242 |    "name": "python3"
243 |   },
244 |   "language_info": {
245 |    "codemirror_mode": {
246 |     "name": "ipython",
247 |     "version": 3
248 |    },
249 |    "file_extension": ".py",
250 |    "mimetype": "text/x-python",
251 |    "name": "python",
252 |    "nbconvert_exporter": "python",
253 |    "pygments_lexer": "ipython3",
254 |    "version": "3.11.6"
255 |   }
256 |  },
257 |  "nbformat": 4,
258 |  "nbformat_minor": 2
259 | }
260 | 
--------------------------------------------------------------------------------
/src/chatbot/utils.py:
--------------------------------------------------------------------------------
1 | def get_all_tweets(client, username: str, user_id: str, tool_name: str = "X.SearchRecentTweetsByUsername") -> list:
2 |     """
3 |     Fetch all available tweets for a given username using pagination.
4 | 
5 |     Args:
6 |         client: Arcade client instance
7 |         username: Twitter username to fetch tweets for
8 |         user_id: Arcade user ID
9 |         tool_name: Name of the Arcade tool to use
10 | 
11 |     Returns:
12 |         list: All collected tweets
13 |     """
14 |     all_tweets = []
15 |     next_token = None
16 | 
17 |     while True:
18 |         # Prepare inputs (include next_token if we have one)
19 |         inputs = {"username": username, "max_results": 100}
20 |         if next_token:
21 |             inputs["next_token"] = next_token
22 | 
23 |         # Execute the request
24 |         response = client.tools.execute(
25 |             tool_name=tool_name,
26 |             inputs=inputs,
27 |             user_id=user_id,
28 |         )
29 | 
30 |         # Get tweets from the response (empty list if no data returned)
31 |         new_tweets = response.output.value.get('data', [])
32 |         all_tweets.extend(new_tweets)
33 | 
34 |         # Get next token if available
35 |         next_token = response.output.value.get("meta", {}).get("next_token", None)
36 | 
37 |         # If no next token, we've reached the end
38 |         if not next_token:
39 |             break
40 | 
41 |     return all_tweets
--------------------------------------------------------------------------------