├── .github
│   ├── ISSUE_TEMPLATE
│   │   ├── bug-report.md
│   │   └── feature-request.md
│   ├── PULL_REQUEST_TEMPLATE.md
│   └── workflows
│       └── run-workflow-tests.yml
├── .gitignore
├── LICENSE
├── README.md
├── __init__.py
├── agent_components.py
├── demo.gif
├── examples
│   └── babyagi.xircuits
├── pyproject.toml
├── requirements.txt
└── tests
    └── babyagi_test.xircuits
/.github/ISSUE_TEMPLATE/bug-report.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Bug Report
3 | about: Share your findings to help us squash those bugs
4 | title: ''
5 | labels: ''
6 | assignees: ''
7 |
8 | ---
9 |
10 | **What kind of bug is it?**
11 | - [ ] Xircuits Component Library Code
12 | - [ ] Workflow Example
13 | - [ ] Documentation
14 | - [ ] Not Sure
15 |
16 | **Xircuits Version**
17 | Run `pip show xircuits` to get the version, or mention you've used a specific .whl from a branch.
18 |
19 | **Describe the bug**
20 | A clear and concise description of what the bug is.
21 |
22 | **To Reproduce**
23 | Steps to reproduce the behavior:
24 | 1. Go to '...'
25 | 2. Click on '....'
26 | 3. Scroll down to '....'
27 | 4. See error
28 |
29 | **Expected behavior**
30 | A clear and concise description of what you expected to happen.
31 |
32 | **Screenshots**
33 | If applicable, add screenshots to help explain your problem.
34 |
35 | **Tested on?**
36 |
37 | - [ ] Windows
38 | - [ ] Linux Ubuntu
39 | - [ ] CentOS
40 | - [ ] Mac
41 | - [ ] Others (State here -> xxx )
42 |
43 | **Additional context**
44 | Add any other context about the problem here.
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/feature-request.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Feature Request
3 | about: Suggest an idea for this component library
4 | title: ''
5 | labels: ''
6 | assignees: ''
7 |
8 | ---
9 |
10 | **Xircuits Version**
11 | Run `pip show xircuits` to get the version, or mention you've used a specific .whl from a branch.
12 |
13 | **What kind of feature is it?**
14 | - [ ] Xircuits Component Library Code
15 | - [ ] Workflow Example
16 | - [ ] Documentation
17 | - [ ] Not Sure
18 |
19 | **Is your feature request related to a problem? Please describe.**
20 |
21 | A clear and concise description of what the problem is. Ex. When I use X feature / when I do Y it does Z.
22 |
23 | **Describe the solution you'd like**
24 |
25 | A clear and concise description of what you want to happen.
26 |
27 | **Describe alternatives you've considered**
28 | A clear and concise description of any alternative solutions or features you've considered.
29 |
30 | **Additional context**
31 | Add any other context or screenshots about the feature request here.
--------------------------------------------------------------------------------
/.github/PULL_REQUEST_TEMPLATE.md:
--------------------------------------------------------------------------------
1 | Welcome to Xircuits! Thank you for making a pull request. Please ensure that your pull request follows the template.
2 |
3 | # Description
4 |
5 | Please include a summary of the change, along with relevant motivation and context. You may also describe the code changes. List any dependencies that are required for this change.
6 |
7 | ## References
8 |
9 | If applicable, note issue numbers this pull request addresses. You can also note any other pull requests that address this issue and how this pull request is different.
10 |
11 | ## Pull Request Type
12 |
13 | - [ ] Xircuits Component Library Code
14 | - [ ] Workflow Example
15 | - [ ] Documentation
16 | - [ ] Others (Please Specify)
17 |
18 | ## Type of Change
19 |
20 | - [ ] New feature (non-breaking change which adds functionality)
21 | - [ ] Bug fix (non-breaking change which fixes an issue)
22 | - [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
23 | - [ ] This change requires a documentation update
24 |
25 | # Tests
26 |
27 | Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration.
28 |
29 | **1. Test A**
30 |
31 | 1. First step
32 | 2. Second step
33 | 3. ...
34 |
35 |
36 | ## Tested on?
37 |
38 | - [ ] Windows
39 | - [ ] Linux Ubuntu
40 | - [ ] CentOS
41 | - [ ] Mac
42 | - [ ] Others (State here -> xxx )
43 |
44 | # Notes
45 |
46 | Add if any.
47 |
--------------------------------------------------------------------------------
/.github/workflows/run-workflow-tests.yml:
--------------------------------------------------------------------------------
1 | name: Run Xircuits Workflows Test
2 |
3 | on:
4 |
5 | workflow_dispatch:
6 |
7 | jobs:
8 | build-and-run:
9 | runs-on: ubuntu-latest
10 | strategy:
11 | fail-fast: false
12 | matrix:
13 | python-version: ["3.11"]
14 | env:
15 | TEST_XIRCUITS: |
16 | tests/babyagi_test.xircuits
17 |
18 | steps:
19 | - name: Checkout Repository
20 | uses: actions/checkout@v3
21 |
22 | - name: Set up Python ${{ matrix.python-version }}
23 | uses: actions/setup-python@v4
24 | with:
25 | python-version: ${{ matrix.python-version }}
26 |
27 | - name: Create virtual environment
28 | run: |
29 | python -m venv venv
30 | echo "${{ github.workspace }}/venv/bin" >> $GITHUB_PATH
31 |
32 | - name: Install xircuits in virtual environment
33 | run: pip install xircuits
34 |
35 | - name: Set Environment Variables
36 | run: |
37 | LIBRARY_NAME=$(echo "${GITHUB_REPOSITORY##*/}" | sed 's/-/_/g')
38 | echo "LIBRARY_NAME=$LIBRARY_NAME" >> $GITHUB_ENV
39 | COMPONENT_LIBRARY_PATH="xai_components/${LIBRARY_NAME}"
40 | echo "COMPONENT_LIBRARY_PATH=$COMPONENT_LIBRARY_PATH" >> $GITHUB_ENV
41 | if [ "${{ github.event_name }}" == "pull_request" ]; then
42 | echo "BRANCH_NAME=${{ github.head_ref }}" >> $GITHUB_ENV
43 | else
44 | echo "BRANCH_NAME=${GITHUB_REF#refs/heads/}" >> $GITHUB_ENV
45 | fi
46 |
47 | - name: Init Xircuits
48 | run: xircuits init
49 |
50 | - name: Install Xircuits Library
51 | run: xircuits install openai
52 |
53 | - name: Clone Repository
54 | run: |
55 | rm -rf ${{ env.COMPONENT_LIBRARY_PATH }}
56 | if [ "${{ github.event_name }}" == "pull_request" ]; then
57 | REPO_URL="${{ github.event.pull_request.head.repo.clone_url }}"
58 | else
59 | REPO_URL="https://github.com/${{ github.repository }}"
60 | fi
61 | git clone -b ${{ env.BRANCH_NAME }} $REPO_URL ${{ env.COMPONENT_LIBRARY_PATH }}
62 |
63 | - name: Install Component Library
64 | run: |
65 | if [ -f "${{ env.COMPONENT_LIBRARY_PATH }}/requirements.txt" ]; then
66 | echo "requirements.txt found, installing dependencies..."
67 | pip install -r ${{ env.COMPONENT_LIBRARY_PATH }}/requirements.txt
68 | else
69 | echo "requirements.txt not found."
70 | fi
71 |
72 | - name: Test .xircuits Workflows
73 | env:
74 | OPENAI_API_KEY: ${{ secrets.API_KEY }}
75 | run: |
76 | export PYTHONPATH="${GITHUB_WORKSPACE}:${PYTHONPATH}"
77 | LOG_FILE="${GITHUB_WORKSPACE}/workflow_logs.txt"
78 | TEST_FILES=$(echo "$TEST_XIRCUITS" | tr '\n' ' ')
79 |
80 | echo "Repository: $LIBRARY_NAME" > $LOG_FILE
81 | echo "Branch: $BRANCH_NAME" >> $LOG_FILE
82 | echo -e "Testing Files:\n$TEST_FILES" >> $LOG_FILE
83 |
84 | IFS=' ' read -r -a FILE_ARRAY <<< "$TEST_FILES"
85 | FAIL=0
86 |
87 | if [ ${#FILE_ARRAY[@]} -eq 0 ]; then
88 | echo "No .xircuits files specified for testing." | tee -a $LOG_FILE
89 | else
90 | for file in "${FILE_ARRAY[@]}"; do
91 | FULL_PATH="${COMPONENT_LIBRARY_PATH}/${file}"
92 | if [ -f "$FULL_PATH" ]; then
93 | WORKFLOW_LOG_FILE="${FULL_PATH%.*}_workflow_log.txt"
94 | echo -e "\n\nProcessing $FULL_PATH..." | tee -a $LOG_FILE
95 |
96 | xircuits compile "$FULL_PATH" "${FULL_PATH%.*}.py" 2>&1 | tee -a $LOG_FILE
97 | python "${FULL_PATH%.*}.py" 2>&1 | tee -a $WORKFLOW_LOG_FILE
98 | echo "$FULL_PATH processed successfully" | tee -a $LOG_FILE
99 |
100 | cat "$WORKFLOW_LOG_FILE" | tee -a $LOG_FILE
101 | else
102 | echo "Specified file $FULL_PATH does not exist" | tee -a $LOG_FILE
103 | FAIL=1
104 | fi
105 | done
106 | fi
107 |
108 | if [ $FAIL -ne 0 ]; then
109 | echo "One or more workflows failed or did not finish as expected." | tee -a $LOG_FILE
110 | exit 1
111 | else
112 | echo "All workflows processed successfully." | tee -a $LOG_FILE
113 | fi
114 |
115 | - name: Upload log file
116 | if: always()
117 | uses: actions/upload-artifact@v4
118 | with:
119 | name: ${{ env.LIBRARY_NAME }}-validation-workflow-${{ matrix.python-version }}
120 | path: ${{ github.workspace }}/workflow_logs.txt
121 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # C extensions
7 | *.so
8 |
9 | # Distribution / packaging
10 | .Python
11 | build/
12 | develop-eggs/
13 | dist/
14 | downloads/
15 | eggs/
16 | .eggs/
17 | lib/
18 | lib64/
19 | parts/
20 | sdist/
21 | var/
22 | wheels/
23 | pip-wheel-metadata/
24 | share/python-wheels/
25 | *.egg-info/
26 | .installed.cfg
27 | *.egg
28 | MANIFEST
29 |
30 | # PyInstaller
31 | # Usually these files are written by a python script from a template
32 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
33 | *.manifest
34 | *.spec
35 |
36 | # Installer logs
37 | pip-log.txt
38 | pip-delete-this-directory.txt
39 |
40 | # Unit test / coverage reports
41 | htmlcov/
42 | .tox/
43 | .nox/
44 | .coverage
45 | .coverage.*
46 | .cache
47 | nosetests.xml
48 | coverage.xml
49 | *.cover
50 | *.py,cover
51 | .hypothesis/
52 | .pytest_cache/
53 |
54 | # Translations
55 | *.mo
56 | *.pot
57 |
58 | # Django stuff:
59 | *.log
60 | local_settings.py
61 | db.sqlite3
62 | db.sqlite3-journal
63 |
64 | # Flask stuff:
65 | instance/
66 | .webassets-cache
67 |
68 | # Scrapy stuff:
69 | .scrapy
70 |
71 | # Sphinx documentation
72 | docs/_build/
73 |
74 | # PyBuilder
75 | target/
76 |
77 | # Jupyter Notebook
78 | .ipynb_checkpoints
79 |
80 | # IPython
81 | profile_default/
82 | ipython_config.py
83 |
84 | # pyenv
85 | .python-version
86 |
87 | # pipenv
88 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
89 | # However, in case of collaboration, if having platform-specific dependencies or dependencies
90 | # having no cross-platform support, pipenv may install dependencies that don't work, or not
91 | # install all needed dependencies.
92 | #Pipfile.lock
93 |
94 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow
95 | __pypackages__/
96 |
97 | # Celery stuff
98 | celerybeat-schedule
99 | celerybeat.pid
100 |
101 | # SageMath parsed files
102 | *.sage.py
103 |
104 | # Environments
105 | .env
106 | .venv
107 | env/
108 | venv/
109 | ENV/
110 | env.bak/
111 | venv.bak/
112 |
113 | # Spyder project settings
114 | .spyderproject
115 | .spyproject
116 |
117 | # Rope project settings
118 | .ropeproject
119 |
120 | # mkdocs documentation
121 | /site
122 |
123 | # mypy
124 | .mypy_cache/
125 | .dmypy.json
126 | dmypy.json
127 |
128 | # Pyre type checker
129 | .pyre/
130 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2023 Xpress AI
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 | Component Libraries •
3 | Project Templates
4 |
5 | Docs •
6 | Install •
7 | Tutorials •
8 | Developer Guides •
9 | Contribute •
10 | Blog •
11 | Discord
12 |
13 |
14 |
15 |
16 |
17 |
18 | Xircuits Component Library for GPT Agent Toolkit – Empower your workflows with intelligent task management and execution tools.
19 |
20 | ---
21 | ## Xircuits Component Library for GPT Agent Toolkit
22 |
23 | Effortlessly integrate GPT-powered agents into Xircuits workflows. This library enables dynamic task creation, prioritization, execution, and critique, alongside tools for interaction, memory management, and contextual understanding.
24 |
25 | ## Table of Contents
26 |
27 | - [Preview](#preview)
28 | - [Prerequisites](#prerequisites)
29 | - [Main Xircuits Components](#main-xircuits-components)
30 | - [Try the Examples](#try-the-examples)
31 | - [Installation](#installation)
32 |
33 | ## Preview
34 |
35 | ### The Example:
36 |
37 |
38 |
39 | ### The Result:
40 |
41 |
42 |
43 | ## Prerequisites
44 |
45 | Before you begin, you will need the following:
46 |
47 | 1. Python 3.9+.
48 | 2. Xircuits.
49 | 3. An OpenAI API key (see the note below).
50 |
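The component library loads environment variables with `python-dotenv`, so one convenient way to supply the key is a `.env` file in the directory you run your workflow from (a minimal sketch; exporting `OPENAI_API_KEY` in your shell works just as well):

```
# .env -- read by load_dotenv() in agent_components.py
OPENAI_API_KEY=<your OpenAI API key>
```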
51 |
52 |
53 | ## Main Xircuits Components
54 |
55 | ### TaskCreatorAgent Component:
56 | Creates new tasks dynamically based on objectives, previous results, and the list of incomplete tasks.
57 |
58 |
59 |
60 | ### ToolRunner Component:
61 | Executes tools specified within tasks and stores the results in memory for future reference.
62 |
63 |
64 |
65 | ### TaskPrioritizerAgent Component:
66 | Reorders and prioritizes tasks to align with the overall objective efficiently.
67 |
68 | ### TaskExecutorAgent Component:
69 | Executes tasks using specified tools, memory, and context to achieve desired outcomes.
70 |
71 | ### TaskCriticAgent Component:
72 | Reviews and critiques executed actions to ensure accuracy and alignment with the task objective.
73 |
74 | ### CreateTaskList Component:
75 | Initializes a task list with a default or user-defined initial task.
76 |
77 | ### ScratchPadTool Component:
78 | Provides a scratch pad for storing and summarizing intermediate thoughts or insights.
79 |
80 | ### PromptUserTool Component:
81 | Prompts the user for input and captures their responses for use in workflows.
82 |
83 | ### BrowserTool Component:
84 | Automates browser interactions for tasks like navigation and data extraction.
85 |
86 | ### SqliteTool Component:
87 | Executes SQL queries on an SQLite database and returns the results for further processing.
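
All of the tool components above follow the convention defined in `agent_components.py`: the executor agent invokes a tool by writing `TOOL: TOOL_NAME` on a single line, followed by the tool's arguments (code-oriented tools such as python-exec and sqlite expect theirs inside a markdown code block). For reference, a model response that calls the prompt-user tool would look roughly like this:

```
TOOL: prompt-user
Hello, would you like to play a game?
```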
88 |
89 | ## Try the Examples
90 |
91 | We have provided an example workflow to help you get started with the GPT Agent Toolkit component library. Give it a try and see how you can create custom GPT Agent Toolkit components for your applications.
92 |
93 | ### BabyAGI Example
94 | Explore the babyagi.xircuits workflow. This example demonstrates an iterative approach to task management, utilizing AI to execute, create, and prioritize tasks dynamically in a loop.
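
If you prefer the command line, the CI workflow in `.github/workflows/run-workflow-tests.yml` compiles and runs workflows non-interactively; a rough equivalent for this example (assuming Xircuits is already set up and the library is installed under `xai_components/xai_gpt_agent_toolkit`) would be:

```
xircuits compile xai_components/xai_gpt_agent_toolkit/examples/babyagi.xircuits babyagi.py
python babyagi.py
```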
95 |
96 | ## Installation
97 | To use this component library, ensure that you have an existing [Xircuits setup](https://xircuits.io/docs/main/Installation). You can then install the GPT Agent Toolkit library using the [component library interface](https://xircuits.io/docs/component-library/installation#installation-using-the-xircuits-library-interface), or through the CLI using:
98 |
99 | ```
100 | xircuits install gpt-agent-toolkit
101 | ```
102 | You can also do it manually by cloning and installing it:
103 | ```
104 | # base Xircuits directory
105 | git clone https://github.com/XpressAI/xai-gpt-agent-toolkit xai_components/xai_gpt_agent_toolkit
106 | pip install -r xai_components/xai_gpt_agent_toolkit/requirements.txt
107 | ```
--------------------------------------------------------------------------------
/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/XpressAI/xai-gpt-agent-toolkit/3a4f8e220c1d3c26306be86fb0476d7cc70f7ad9/__init__.py
--------------------------------------------------------------------------------
/agent_components.py:
--------------------------------------------------------------------------------
1 | import sys
2 | import abc
3 | from collections import deque
4 | import re
5 | from typing import NamedTuple
6 |
7 | import json
8 | from dotenv import load_dotenv
9 | import numpy as np
10 | import os
11 | import openai
12 | from openai import OpenAI
13 | import requests
14 | import sqlite3
15 | import subprocess
16 | import time
17 | from xai_components.base import InArg, OutArg, InCompArg, Component, xai_component
18 |
19 | load_dotenv()
20 |
21 | DEFAULT_EXECUTOR_PROMPT = """
22 | You are an AI who performs one task based on the following objective: {objective}.
23 | Take into account these previously completed tasks: {context}
24 | *Your thoughts*: {scratch_pad}
25 | *Your task*: {task}
26 | *Your tools*: {tools}
27 | You can use a tool by writing TOOL: TOOL_NAME on a single line, followed by the arguments of the tool (if any). For example, to use the python-exec tool, write
28 | TOOL: python-exec
29 | ```
30 | print('Hello world!')
31 | ```
32 | Response:
33 | """
34 |
35 | DEFAULT_CRITIC_PROMPT = """
36 | You are an AI who checks that the action about to be performed is correct given the information you have, and improves it if necessary.
37 | If it is the optimal solution you should respond with the action as-is.
38 |
39 | The task should help achieve the following objective: {objective}.
40 | Take into account these previously completed tasks: {context}
41 | The task: {task}
42 | The action: {action}
43 | Response:
44 | """
45 |
46 | DEFAULT_TASK_CREATOR_PROMPT = """
47 | You are a task creation AI that uses the result of an execution agent
48 | to create new tasks with the following objective: {objective},
49 | The last completed task has the result: {result}.
50 | This result was based on this task description: {task_name}.
51 | These are incomplete tasks: {task_list}.
52 | Based on the result, create new tasks to be completed by the AI system that do not overlap with incomplete tasks.
53 | Return the tasks as an array.
54 | """
55 |
56 | DEFAULT_TASK_PRIORITIZER_PROMPT = """
57 | You are a task prioritization AI tasked with cleaning the formatting of and reprioritizing the following tasks:
58 | {task_names}.
59 | Consider the ultimate objective of your team: {objective}. Do not remove any tasks.
60 | Return the result as a numbered list, like:
61 | #. First task
62 | #. Second task
63 | Start the task list with number {next_task_id}.
64 | """
65 |
66 |
67 | class Memory(abc.ABC):
68 | def query(self, query: str, n: int) -> list:
69 | pass
70 |
71 | def add(self, id: str, text: str, metadata: dict) -> None:
72 | pass
73 |
74 |
75 | def run_tool(tool_code: str, tools: list) -> str:
76 | if tool_code is None:
77 | return ""
78 |
79 | ret = ""
80 |
81 | for tool in tools:
82 | if tool_code.startswith(tool["name"]):
83 | if tool["instance"]:
84 | ret += tool["instance"].run_tool(tool_code)
85 | #tool["instance"] = None
86 |
87 | return ret
88 |
89 |
90 | def llm_call(model: str, prompt: str, temperature: float = 0.5, max_tokens: int = 500):
91 | #print("**** LLM_CALL ****")
92 | #print(prompt)
93 |
94 | while True:
95 | try:
96 | if model == 'gpt-3.5-turbo' or model == 'gpt-4o-mini':
97 | client = OpenAI()
98 | messages = [{"role": "system", "content": prompt}]
99 | response = client.chat.completions.create(
100 | model=model,
101 | messages=messages,
102 | temperature=temperature,
103 | max_tokens=max_tokens,
104 | n=1,
105 | stop=["OUTPUT", ],
106 | )
107 | return response.choices[0].message.content.strip()
108 | elif model.startswith("rwkv"):
109 | # Use proxy.
110 | if not openai.proxies: raise Exception("No proxy set")
111 | messages = [{"role": "system", "content": prompt}]
112 | response = OpenAI().chat.completions.create(
113 | model=model,
114 | messages=messages,
115 | temperature=temperature,
116 | max_tokens=max_tokens,
117 | n=1,
118 | stop=["OUTPUT", ],
119 | )
120 | return response.choices[0].message.content.strip()
121 | elif model.startswith("llama"):
122 | # Spawn a subprocess to run llama.cpp
123 | cmd = ["llama/main", "-p", prompt]
124 | result = subprocess.run(cmd, shell=True, stderr=subprocess.DEVNULL, stdout=subprocess.PIPE, text=True)
125 | return result.stdout.strip()
126 | else:
127 | raise Exception(f"Unknown model {model}")
128 | except openai.RateLimitError:
129 | print("Rate limit error, sleeping for 10 seconds...")
130 | time.sleep(10)
131 | except openai.APIError:
132 | print("Service unavailable error, sleeping for 10 seconds...")
133 | time.sleep(10)
134 | else:
135 | break
136 |
137 |
138 | def get_sorted_context(memory: Memory, query: str, n: int):
139 | results = memory.query(query, n)
140 | sorted_results = sorted(
141 | results,
142 | key=lambda x: x.similarity if getattr(x, 'similarity', None) else x.score,
143 | reverse=True
144 | )
145 | return [(str(item.attributes['task']) + ":" + str(item.attributes['result'])) for item in sorted_results]
146 |
147 |
148 | def extract_task_number(task_id, task_list):
149 | if isinstance(task_id, int):
150 | return task_id
151 |
152 | matches = re.findall(r'\d+', task_id)
153 | if matches:
154 | return int(matches[0])
155 | else:
156 | # fallback if we match nothing.
157 | return len(task_list) + 1
158 |
159 |
160 | @xai_component
161 | class TaskCreatorAgent(Component):
162 | """Creates new tasks based on given model, prompt, and objectives.
163 |
164 | #### inPorts:
165 | - objective: Objective for task creation.
166 | - prompt: Prompt string for the AI model.
167 | - model: AI model used for task creation.
168 | - result: Result of the previous tasks.
169 | - task: Current task information.
170 | - task_list: List of all tasks.
171 |
172 | #### outPorts:
173 | - new_tasks: list of newly created tasks.
174 | """
175 |
176 | objective: InCompArg[str]
177 | prompt: InArg[str]
178 | model: InArg[str]
179 | result: InArg[str]
180 | task: InArg[dict]
181 | task_list: InArg[str]
182 | new_tasks: OutArg[list]
183 |
184 | def execute(self, ctx) -> None:
185 | text = self.prompt.value if self.prompt.value is not None else DEFAULT_TASK_CREATOR_PROMPT
186 |
187 | prompt = text.format(**{
188 | "objective": self.objective.value,
189 | "result": self.result.value,
190 | "task": self.task.value,
191 | "task_name": self.task.value["task_name"],
192 | "task_list": self.task_list.value
193 | })
194 |
195 | response = llm_call(self.model.value, prompt)
196 | new_tasks = response.split('\n')
197 | print("New tasks: ", new_tasks)
198 |
199 | task_id = self.task.value["task_id"]
200 | task_id_counter = extract_task_number(task_id, self.task_list.value)
201 | ret = []
202 | for task_name in new_tasks:
203 | task_id_counter += 1
204 | ret.append({"task_id": task_id_counter, "task_name": task_name})
205 |
206 | self.new_tasks.value = ret
207 |
208 | @xai_component
209 | class TaskPrioritizerAgent(Component):
210 | """Prioritizes tasks based on given model, prompt, and objectives.
211 |
212 | #### inPorts:
213 | - objective: Objective for task prioritization.
214 | - prompt: Prompt string for the AI model.
215 | - model: AI model used for task prioritization.
216 | - task_list: List of all tasks.
217 |
218 | #### outPorts:
219 | - prioritized_tasks: Prioritized list of tasks.
220 | """
221 |
222 | objective: InCompArg[str]
223 | prompt: InArg[str]
224 | model: InArg[str]
225 | task_list: InArg[list]
226 | prioritized_tasks: OutArg[deque]
227 |
228 | def execute(self, ctx) -> None:
229 | text = self.prompt.value if self.prompt.value is not None else DEFAULT_TASK_PRIORITIZER_PROMPT
230 | prompt = text.format(**{
231 | "objective": self.objective.value,
232 | "task_list": self.task_list.value,
233 | "task_names": [t["task_name"] for t in self.task_list.value],
234 | "next_task_id": max([int(t["task_id"]) for t in self.task_list.value]) + 1
235 | })
236 | response = llm_call(self.model.value, prompt)
237 | new_tasks = response.split('\n')
238 | task_list = deque()
239 | for task_string in new_tasks:
240 | task_parts = task_string.strip().split(".", 1)
241 | if len(task_parts) == 2:
242 | task_id = task_parts[0].strip()
243 | task_name = task_parts[1].strip()
244 | task_list.append({"task_id": task_id, "task_name": task_name})
245 |
246 | print(f"New tasks: {new_tasks}")
247 | self.prioritized_tasks.value = task_list
248 |
249 |
250 | @xai_component
251 | class TaskExecutorAgent(Component):
252 | """Executes tasks based on given model, prompt, tools, and memory.
253 |
254 | #### inPorts:
255 | - objective: Objective for task execution.
256 | - prompt: Prompt string for the AI model.
257 | - model: AI model used for task execution.
258 | - tasks: Queue of tasks to be executed.
259 | - tools: List of tools available for task execution.
260 | - memory: Memory context for task execution.
261 |
262 | #### outPorts:
263 | - action: Executed action.
264 | - task: Task information.
265 | """
266 |
267 | objective: InCompArg[str]
268 | prompt: InArg[str]
269 | model: InArg[str]
270 | tasks: InArg[deque]
271 | tools: InArg[list]
272 | memory: InArg[any]
273 | action: OutArg[str]
274 | task: OutArg[dict]
275 |
276 | def execute(self, ctx) -> None:
277 | text = self.prompt.value if self.prompt.value is not None else DEFAULT_EXECUTOR_PROMPT
278 |
279 | task = self.tasks.value.popleft()
280 | print(f"Next Task: {task}")
281 | context = get_sorted_context(self.memory.value, query=self.objective.value, n=5)
282 |
283 | print("\n*******RELEVANT CONTEXT******\n")
284 | print(context)
285 |
286 | scratch_pad = ""
287 |
288 | for tool in self.tools.value:
289 | if tool['name'] == 'scratch-pad':
290 | file_name = tool['instance'].file_name.value
291 | with open(file_name, "r") as f:
292 | scratch_pad += f.read()
293 |
294 | print("\n*******SCRATCH PAD******\n")
295 | print(scratch_pad)
296 |
297 | prompt = text.format(**{
298 | "scratch_pad": scratch_pad,
299 | "objective": self.objective.value,
300 | "context": context,
301 | "task": task,
302 | "tools": [tool['spec'] for tool in self.tools.value]
303 | })
304 | result = llm_call(self.model.value, prompt, 0.7, 2000)
305 |
306 | print(f"Result:\n{result}")
307 |
308 | self.action.value = result
309 | self.task.value = task
310 |
311 |
312 | @xai_component
313 | class TaskCriticAgent(Component):
314 | """Critiques an executed task's action using an AI model.
315 |
316 | #### inPorts:
317 | - prompt: The base string that the AI model uses to critique the task action.
318 | - objective: The overall objective that should guide task critique.
319 | - model: The AI model that generates the critique.
320 | - memory: The current context memory, used for retrieving relevant information for task critique.
321 | - tools: The list of tools available for task critique.
322 | - action: The executed action that is to be critiqued.
323 | - task: The current task information.
324 |
325 | #### outPorts:
326 | - updated_action: The updated action after the model's critique.
327 | """
328 |
329 | prompt: InArg[str]
330 | objective: InArg[str]
331 | model: InArg[str]
332 | memory: InArg[any]
333 | tools: InArg[list]
334 | action: InArg[str]
335 | task: InArg[dict]
336 | updated_action: OutArg[str]
337 |
338 | def execute(self, ctx) -> None:
339 | text = self.prompt.value if self.prompt.value is not None else DEFAULT_CRITIC_PROMPT
340 |
341 | print(f"Task: {self.task.value}")
342 |
343 | context = get_sorted_context(self.memory.value, query=self.objective.value, n=5)
344 | print("Context: ", context)
345 |
346 | prompt = text.format(**{
347 | "objective": self.objective.value,
348 | "context": context,
349 | "action": self.action.value,
350 | "task": self.task.value
351 | })
352 | new_action = llm_call(self.model.value, prompt, 0.7, 2000)
353 |
354 | print(f"New action: {new_action}")
355 |
356 | # If the model responds without a new TOOL prompt use the original.
357 | if "TOOL" not in new_action:
358 | new_action = self.action.value
359 |
360 | self.updated_action.value = new_action
361 |
362 | @xai_component
363 | class ToolRunner(Component):
364 | """Executes a tool based on the given action.
365 |
366 | #### inPorts:
367 | - action: The action that determines which tool should be executed.
368 | - memory: The current context memory, used for updating the result of tool execution.
369 | - task: The current task information.
370 | - tools: The list of tools available for execution.
371 |
372 | #### outPorts:
373 | - result: The result after running the tool.
374 | """
375 |
376 | action: InArg[str]
377 | memory: InCompArg[Memory]
378 | task: InArg[dict]
379 | tools: InArg[list]
380 | result: OutArg[str]
381 |
382 | def execute(self, ctx) -> None:
383 | tools = self.action.value.split("TOOL: ")
384 | result = self.action.value + "\n"
385 | for tool in tools:
386 | result += run_tool(tool, self.tools.value.copy())
387 |
388 | task = self.task.value
389 | self.memory.value.add(
390 | f"result_{task['task_id']}",
391 | result,
392 | {
393 | "task_id": task['task_id'],
394 | "task": task['task_name'],
395 | "result": result
396 | }
397 | )
398 |
399 | self.result.value = result
400 |
401 |
402 | @xai_component
403 | class CreateTaskList(Component):
404 | """Component that creates a task list based on `initial_task`.
405 | If no initial task is provided, the first task will be "Develop a task list".
406 |
407 | #### inPorts:
408 | - initial_task: The first task to be added to the task list.
409 |
410 | #### outPorts:
411 | - task_list: The created task list with the initial task.
412 | """
413 | initial_task: InArg[str]
414 | task_list: OutArg[deque]
415 |
416 | def execute(self, ctx) -> None:
417 | task_list = deque([])
418 |
419 | # Add the first task
420 | first_task = {
421 | "task_id": 1,
422 | "task_name": self.initial_task.value if self.initial_task.value else "Develop a task list"
423 | }
424 |
425 | task_list.append(first_task)
426 | self.task_list.value = task_list
427 |
428 |
429 | TOOL_SPEC_SQLITE = """
430 | Perform SQL queries against an sqlite database.
431 | Use by writing the SQL within markdown code blocks.
432 | Example: TOOL: sqlite
433 | ```
434 | CREATE TABLE IF NOT EXISTS points (x int, y int);
435 | INSERT INTO points (x, y) VALUES (783, 848);
436 | SELECT * FROM points;
437 | ```
438 | sqlite OUTPUT:
439 | [(783, 848)]
440 | """
441 |
442 | @xai_component
443 | class SqliteTool(Component):
444 | """Component that performs SQL queries against an SQLite database.
445 |
446 | #### inPorts:
447 | - path: The path to the SQLite database.
448 |
449 | #### outPorts:
450 | - tool_spec: The specification of the SQLite tool, including its capabilities and requirements.
451 | """
452 |
453 | path: InArg[str]
454 | tool_spec: OutArg[dict]
455 |
456 | def execute(self, ctx) -> None:
457 | if not 'tools' in ctx:
458 | ctx['tools'] = {}
459 | spec = {
460 | 'name': 'sqlite',
461 | 'spec': TOOL_SPEC_SQLITE,
462 | 'instance': self
463 | }
464 |
465 | self.tool_spec.value = spec
466 |
467 | def run_tool(self, tool_code) -> str:
468 | print(f"Running tool sqlite")
469 | lines = tool_code.splitlines()
470 | code = []
471 | include = False
472 | any = False
473 | for line in lines[1:]:
474 | if "```" in line and include == False:
475 | include = True
476 | any = True
477 | continue
478 | elif "```" in line and include == True:
479 | include = False
480 | continue
481 | elif "TOOL: sql" in line:
482 | continue
483 | elif "OUTPUT" in line:
484 | break
485 | elif include:
486 | code.append(line + "\n")
487 | if not any:
488 | for line in lines[1:]:
489 | code.append(line + "\n")
490 | conn = sqlite3.connect(self.path.value)
491 |
492 | all_queries = "\n".join(code)
493 | queries = all_queries.split(";")
494 | res = ""
495 | try:
496 | for query in queries:
497 | res += str(conn.execute(query).fetchall())
498 | res += "\n"
499 | conn.commit()
500 | except Exception as e:
501 | res += str(e)
502 |
503 | return res
504 |
505 |
506 | TOOL_SPEC_BROWSER = """
507 | Shows the user which step to perform in a browser and outputs the resulting HTML. Use by writing the commands within markdown code blocks. Do not assume that elements are on the page; use the tool to discover the correct selectors. Perform only the action related to the task. You cannot define variables with the browser tool; only write_file(filename, selector) is available.
508 |
509 | Example: TOOL: browser
510 | ```
511 | goto("http://google.com")
512 | fill('[title="search"]', 'my search query')
513 | click('input[value="Google Search"]')
514 | ```
515 | browser OUTPUT:
516 |
517 | """
518 |
519 | @xai_component
520 | class BrowserTool(Component):
521 | """A component that implements a browser tool.
522 | Uses the Playwright library to interact with the browser.
523 | Capable of saving screenshots and writing to files directly from the browser context.
524 |
525 | #### inPorts:
526 | - cdp_address: The address to the Chrome DevTools Protocol (CDP), allowing interaction with a Chrome instance.
527 |
528 | #### outPorts:
529 | - tool_spec: The specification of the browser tool.
530 | """
531 |
532 | cdp_address: InArg[str]
533 | tool_spec: OutArg[dict]
534 |
535 | def execute(self, ctx) -> None:
536 | if not 'tools' in ctx:
537 | ctx['tools'] = {}
538 | spec = {
539 | 'name': 'browser',
540 | 'spec': TOOL_SPEC_BROWSER,
541 | 'instance': self
542 | }
543 |
544 | self.chrome = None
545 | self.playwright = None
546 | self.page = None
547 | self.tool_spec.value = spec
548 |
549 | def run_tool(self, tool_code) -> str:
550 | print(f"Running tool browser")
551 | lines = tool_code.splitlines()
552 | code = []
553 | include = False
554 | any = False
555 | for line in lines[1:]:
556 | if "```" in line and include == False:
557 | include = True
558 | any = True
559 | continue
560 | elif "```" in line and include == True:
561 | include = False
562 | continue
563 | elif "TOOL: browser" in line:
564 | continue
565 | elif "OUTPUT" in line:
566 | break
567 | elif include:
568 | code.append(line + "\n")
569 | if not any:
570 | for line in lines[1:]:
571 | code.append(line + "\n")
572 |
573 | res = ""
574 | try:
575 | import playwright
576 | from playwright.sync_api import sync_playwright
577 |
578 | if not self.chrome:
579 | self.playwright = sync_playwright().__enter__()
580 | self.chrome = self.playwright.chromium.connect_over_cdp(self.cdp_address.value)
581 |
582 | if not self.page:
583 | if len(self.chrome.contexts) > 0:
584 | self.page = self.chrome.contexts[0].new_page()
585 | self.page.set_default_timeout(3000)
586 | else:
587 | self.page = self.chrome.new_context().new_page()
588 | self.page.set_default_timeout(3000)
589 |
590 | self.page.save_screenshot = self.page.screenshot
591 | def write_file(file, selector):
592 | with open(file, "w") as f:
593 | f.write(self.page.inner_text(selector))
594 |
595 | self.page.write_file = write_file
596 | self.page.save_text = write_file
597 | self.page.save_to_file = write_file
598 | for action in code:
599 | if not action.startswith("#"):
600 | eval("self.page." + action)
601 |
602 | res += self.page.content()[:4000]
603 | #browser.close()
604 | except Exception as e:
605 | res += str(e)
606 |
607 | print("*** PAGE CONTENT ***")
608 | print(res)
609 |
610 | return res
611 |
612 | TOOL_SPEC_NLP = """
613 | The NLP tool provides methods to summarize, extract, classify, apply NER to, or translate information on the current page.
614 | To use it, write one of the words above followed by any arguments and finally a CSS selector.
615 | TOOL: NLP, summarize div[id="foo"]
616 | NLP OUTPUT:
617 | Summary appears here.
618 | """
619 |
620 | @xai_component
621 | class NlpTool(Component):
622 | """Natural Language Processing (NLP) tool. Perform NLP operations within the browser context.
623 | Enables the extraction of webpage content and further NLP analysis via a language model,
624 | in this case, gpt-3.5-turbo.
625 |
626 | #### inPorts:
627 | - cdp_address: The address to the Chrome DevTools Protocol (CDP).
628 |
629 | #### outPorts:
630 | - tool_spec: The specification of the NLP tool.
631 | """
632 | cdp_address: InArg[str]
633 | tool_spec: OutArg[dict]
634 |
635 | def execute(self, ctx) -> None:
636 | if not 'tools' in ctx:
637 | ctx['tools'] = {}
638 | spec = {
639 | 'name': 'NLP',
640 | 'spec': TOOL_SPEC_NLP,
641 | 'instance': self
642 | }
643 |
644 | self.chrome = None
645 | self.playwright = None
646 | self.page = None
647 | self.tool_spec.value = spec
648 |
649 | def run_tool(self, tool_code) -> str:
650 | print("Running tool NLP")
651 | lines = tool_code.splitlines()
652 | code = []
653 | include = False
654 | any = False
655 | for line in lines[1:]:
656 | if "```" in line and include == False:
657 | include = True
658 | any = True
659 | continue
660 | elif "```" in line and include == True:
661 | include = False
662 | continue
663 | elif "TOOL: NLP" in line:
664 | continue
665 | elif "OUTPUT" in line:
666 | break
667 | elif include:
668 | code.append(line + "\n")
669 | if not any:
670 | for line in lines[1:]:
671 | code.append(line + "\n")
672 |
673 | res = ""
674 | try:
675 | import playwright
676 | from playwright.sync_api import sync_playwright
677 |
678 | if not self.chrome:
679 | self.playwright = sync_playwright().__enter__()
680 | self.chrome = self.playwright.chromium.connect_over_cdp(self.cdp_address.value)
681 |
682 | if not self.page:
683 | if len(self.chrome.contexts) > 0:
684 | self.page = self.chrome.contexts[0].pages[0]
685 | self.page.set_default_timeout(3000)
686 | else:
687 | self.page = self.chrome.new_context().new_page()
688 | self.page.set_default_timeout(3000)
689 |
690 |
691 | for action in code:
692 | if not action.startswith("#"):
693 | content = self.page.inner_text(action.split(" ")[-1])
694 | prompt = action + "\n" + action.split(" ")[-1] + " is: \n---\n" + content
695 | res += action + "OUTPUT:\n"
696 | res += llm_call("gpt-3.5-turbo", prompt, 0.0, 100)
697 | res += "\n"
698 |
699 | except Exception as e:
700 | res += str(e)
701 |
702 | print("*** PAGE CONTENT ***")
703 | print(res)
704 |
705 | return res
706 |
707 | TOOL_SPEC_PYTHON = """
708 | Execute python code in a virtual environment.
709 | Use by writing the code within markdown code blocks.
710 | Automate the browser with playwright.
711 | The environment has the following pip libraries installed: {packages}
712 | Example: TOOL: python-exec
713 | ```
714 | import pyautogui
715 | pyautogui.PAUSE = 1.0 # Minimum recommended
716 | print(pyautogui.position())
717 | ```
718 | python-exec OUTPUT:
719 | STDOUT:Point(x=783, y=848)
720 | STDERR:
721 | """
722 |
723 | @xai_component
724 | class ExecutePythonTool(Component):
725 | """
726 | Executes Python code and pip operations that are supplied as a string.
727 | It extracts the Python code and pip commands, runs them, and returns their output or errors.
728 |
729 | #### inPorts:
730 | - file_name: The path of the file to write the Python script to before executing it.
731 |
732 | #### outPorts:
733 | - tool_spec: The specification of the Python tool, including its capabilities and requirements.
734 | """
735 |
736 | file_name: InArg[str]
737 | tool_spec: OutArg[dict]
738 |
739 | def execute(self, ctx) -> None:
740 | spec = {
741 | 'name': 'python-exec',
742 | 'spec': TOOL_SPEC_PYTHON,
743 | 'instance': self
744 | }
745 |
746 | self.tool_spec.value = spec
747 |
748 | def run_tool(self, tool_code) -> str:
749 | print(f"Running tool python-exec")
750 |
751 | lines = tool_code.splitlines()
752 | code = []
753 | pip_operations = []
754 | include = False
755 | any = False
756 | for line in lines[1:]:
757 | if "```" in line and include == False:
758 | include = True
759 | any = True
760 | continue
761 | elif "```" in line and include == True:
762 | include = False
763 | continue
764 | elif "!pip" in line:
765 | pip_operations.append(line.replace("!pip", "pip"))
766 | continue
767 | elif include:
768 | code.append(line + "\n")
769 | if not any:
770 | for line in lines[1:]:
771 | code.append(line + "\n")
772 |
773 | print(f"Will run pip operations: {pip_operations}")
774 | tool_code = '\n'.join(code)
775 | print(f"Will run the code: {tool_code}")
776 |
777 |
778 | try:
779 | for pip_operation in pip_operations:
780 | result = subprocess.run(pip_operation, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, shell=True, cwd=os.getcwd())
781 | print(f"pip operation {pip_operation} returned: {result}")
782 | with open(self.file_name.value, "w") as f:
783 | f.writelines(code)
784 | result = subprocess.run(["python", self.file_name.value], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, cwd=os.getcwd())
785 | output = "python-exec OUTPUT:\nSTDOUT: \n" + result.stdout + "\n" + "STDERR:" + "\n" + result.stderr
786 | except Exception as e:
787 | print(f"Exception running tool python-exec: {e}")
788 | output = str(e)
789 | print(f"Done running tool python-exec: Returned {output}")
790 |
791 | return output
792 |
793 |
794 | TOOL_SPEC_PROMPT_USER = """
795 | Prompt the user for input with this tool.
796 | Example: TOOL: prompt-user
797 | Hello would you like to play a game?
798 |
799 | prompt-user OUTPUT:
800 | Yes I would.
801 | """
802 |
803 |
804 | @xai_component
805 | class PromptUserTool(Component):
806 | """A component that enables interaction with a user by prompting for inputs.
807 | Prints a prompt message to the user and waits for input.
808 | The user's response is then returned by the function.
809 | **Note**: If you use this component, run the compiled script from a terminal.
810 |
811 | #### outPorts:
812 | - tool_spec: The specification of the PromptUser tool.
813 | """
814 |
815 | tool_spec: OutArg[dict]
816 |
817 | def execute(self, ctx) -> None:
818 | spec = {
819 | 'name': 'prompt-user',
820 | 'spec': TOOL_SPEC_PROMPT_USER,
821 | 'instance': self
822 | }
823 |
824 | self.tool_spec.value = spec
825 |
826 | def run_tool(self, tool_code) -> str:
827 | print("PROMPTING USER:")
828 | print(f"{tool_code}")
829 | res = input(">")
830 | return res
831 |
832 |
833 |
834 | TOOL_SPEC_SCRATCH_PAD = """
835 | Your internal monologue. Written to yourself in second-person. Write out any notes that should help you with the progress of your task.
836 | Example: TOOL: scratch-pad
837 | Thoughts go here.
838 | """
839 |
840 |
841 | @xai_component
842 | class ScratchPadTool(Component):
843 | """A component that creates and manages a 'scratch pad' for storing and summarizing information within the xai framework.
844 | The component is initialized with a file name to use as the scratch pad. During execution,
845 | it writes to this file and provides a method `run_tool` that updates the contents of the file and
846 | generates a summary of the current contents using the gpt-3.5-turbo language model.
847 |
848 | #### inPorts:
849 | - file_name: The name of the file that will be used as the scratch pad.
850 |
851 | #### outPorts:
852 | - tool_spec: The specification of the ScratchPad tool.
853 | """
854 |
855 | file_name: InArg[str]
856 | tool_spec: OutArg[dict]
857 |
858 | def execute(self, ctx) -> None:
859 | spec = {
860 | 'name': 'scratch-pad',
861 | 'spec': TOOL_SPEC_SCRATCH_PAD,
862 | 'instance': self
863 | }
864 |
865 | with open(self.file_name.value, "w") as f:
866 | f.write("")
867 |
868 | self.tool_spec.value = spec
869 |
870 | def run_tool(self, tool_code) -> str:
871 | current_scratch = ""
872 | with open(self.file_name.value, "r") as f:
873 | current_scratch = f.read().strip()
874 |
875 | summary = None
876 | if len(current_scratch) > 0:
877 | summary = llm_call(
878 | "gpt-3.5-turbo",
879 | f"Summarize the following text with bullet points using a second person perspective. " +
880 | f"Keep only the salient points.\n---\n{current_scratch}",
881 | 0.0,
882 | 1000
883 | )
884 |
885 | with open(self.file_name.value, "w") as f:
886 | if summary:
887 | f.write(summary)
888 | f.write("\n")
889 | f.write(tool_code[len('scratch-pad'):])
890 |
891 | return ""
892 |
893 |
894 | class VectoMemoryImpl(Memory):
895 | def __init__(self, vs):
896 | self.vs = vs
897 |
898 | def query(self, query: str, n: int) -> list:
899 | return self.vs.lookup(query, 'TEXT', n).results
900 | def add(self, id: str, text: str, metadata: dict) -> None:
901 | from vecto import vecto_toolbelt
902 |
903 | vecto_toolbelt.ingest_text(self.vs, [text], [metadata])
904 |
905 |
906 | def get_ada_embedding(text):
907 | s = text.replace("\n", " ")
908 | return openai.embeddings.create(input=[s], model="text-embedding-ada-002").data[0].embedding
909 |
910 |
911 | class PineconeMemoryImpl(Memory):
912 | def __init__(self, index, namespace):
913 | self.index = index
914 | self.namespace = namespace
915 |
916 | def query(self, query: str, n: int) -> list:
917 | return self.index.query(get_ada_embedding(query), top_k=n, include_metadata=True, namespace=self.namespace)
918 |
919 | def add(self, vector_id: str, text: str, metadata: dict) -> None:
920 | self.index.upsert([(vector_id, get_ada_embedding(text), metadata)], namespace=self.namespace)
921 |
922 |
923 | class NumpyQueryResult(NamedTuple):
924 | id: str
925 | similarity: float
926 | attributes: dict
927 |
928 |
929 | class NumpyMemoryImpl(Memory):
930 | def __init__(self, vectors=None, ids=None, metadata=None):
931 | self.vectors = vectors
932 | self.ids = ids
933 | self.metadata = metadata
934 |
935 | def query(self, query: str, n: int) -> list:
936 | if self.vectors is None:
937 | return []
938 | if isinstance(self.vectors, list) and len(self.vectors) > 1:
939 | self.vectors = np.vstack(self.vectors)
940 |
941 | top_k = min(self.vectors.shape[0], n)
942 | query_vector = get_ada_embedding(query)
943 | similarities = self.vectors @ query_vector
944 | indices = np.argpartition(similarities, -top_k)[-top_k:]
945 | return [
946 | NumpyQueryResult(
947 | self.ids[i],
948 | similarities[i],
949 | self.metadata[i]
950 | )
951 | for i in indices
952 | ]
953 |
954 | def add(self, vector_id: str, text: str, metadata: dict) -> None:
955 | if isinstance(self.vectors, list) and len(self.vectors) > 1:
956 | self.vectors = np.vstack(self.vectors)
957 |
958 | if self.vectors is None:
959 | self.vectors = np.array(get_ada_embedding(text)).reshape((1, -1))
960 | self.ids = [vector_id]
961 | self.metadata = [metadata]
962 | else:
963 | self.ids.append(vector_id)
964 | self.vectors = np.vstack([self.vectors, np.array(get_ada_embedding(text))])
965 | self.metadata.append(metadata)
966 |
967 |
968 | @xai_component
969 | class NumpyMemory(Component):
970 | memory: OutArg[Memory]
971 |
972 | def execute(self, ctx) -> None:
973 | self.memory.value = NumpyMemoryImpl()
974 |
975 |
976 | @xai_component
977 | class VectoMemory(Component):
978 | api_key: InArg[str]
979 | vector_space: InCompArg[str]
980 | initialize: InCompArg[bool]
981 | memory: OutArg[Memory]
982 |
983 | def execute(self, ctx) -> None:
984 | from vecto import Vecto
985 |
986 | api_key = os.getenv("VECTO_API_KEY") if self.api_key.value is None else self.api_key.value
987 |
988 | headers = {'Authorization': 'Bearer ' + api_key}
989 | response = requests.get("https://api.vecto.ai/api/v0/account/space", headers=headers)
990 | if response.status_code != 200:
991 | raise Exception(f"Failed to get vector space list: {response.text}")
992 | for space in response.json():
993 | if space['name'] == self.vector_space.value:
994 | vs = Vecto(api_key, space['id'])
995 | if self.initialize.value:
996 | vs.delete_vector_space_entries()
997 | self.memory.value = VectoMemoryImpl(vs)
998 | break
999 | if not self.memory.value:
1000 | raise Exception(f"Could not find vector space with name {self.vector_space.value}")
1001 |
1002 |
1003 | @xai_component
1004 | class PineconeMemory(Component):
1005 | api_key: InArg[str]
1006 | environment: InArg[str]
1007 | index_name: InCompArg[str]
1008 | namespace: InCompArg[str]
1009 | initialize: InCompArg[bool]
1010 | memory: OutArg[Memory]
1011 |
1012 | def execute(self, ctx) -> None:
1013 | import pinecone
1014 |
1015 | api_key = os.getenv("PINECONE_API_KEY") if self.api_key.value is None else self.api_key.value
1016 | environment = os.getenv("PINECONE_ENVIRONMENT") if self.environment.value is None else self.environment.value
1017 |
1018 | pinecone.init(api_key=api_key, environment=environment)
1019 |
1020 | if self.index_name.value not in pinecone.list_indexes():
1021 | pinecone.create_index(self.index_name.value, dimension=1536, metric='cosine', pod_type='p1')
1022 |
1023 | index = pinecone.Index(self.index_name.value)
1024 |
1025 | if self.initialize.value:
1026 | index.delete(delete_all=True, namespace=self.namespace.value)
1027 |
1028 | self.memory.value = PineconeMemoryImpl(index, self.namespace.value)
1029 |
1030 |
1031 | @xai_component
1032 | class Toolbelt(Component):
1033 | """A component that aggregates various GPT Agent tool specifications into a unified toolbelt.
1034 | You can add more tools if needed by simply adding them as additional `InArg`s. Be sure to reload the node afterwards.
1035 |
1036 | #### inPorts:
1037 | - tool1, tool2, tool3, tool4, tool5: Specifications for up to five tools to be included in the toolbelt.
1038 |
1039 | #### outPorts:
1040 | - toolbelt_spec: The unified specification for the toolbelt, including the specifications for all included tools.
1041 | """
1042 |
1043 | tool1: InArg[dict]
1044 | tool2: InArg[dict]
1045 | tool3: InArg[dict]
1046 | tool4: InArg[dict]
1047 | tool5: InArg[dict]
1048 | toolbelt_spec: OutArg[list]
1049 |
1050 | def execute(self, ctx) -> None:
1051 | spec = []
1052 |
1053 | if self.tool1.value:
1054 | spec.append(self.tool1.value)
1055 | if self.tool2.value:
1056 | spec.append(self.tool2.value)
1057 | if self.tool3.value:
1058 | spec.append(self.tool3.value)
1059 | if self.tool4.value:
1060 | spec.append(self.tool4.value)
1061 | if self.tool5.value:
1062 | spec.append(self.tool5.value)
1063 |
1064 | self.toolbelt_spec.value = spec
1065 |
1066 |
1067 | @xai_component
1068 | class Sleep(Component):
1069 | """A simple component that pauses execution for a specified number of seconds.
1070 |
1071 | #### inPorts:
1072 | - seconds: The number of seconds to pause execution.
1073 | """
1074 |
1075 | seconds: InArg[int]
1076 |
1077 | def execute(self, ctx) -> None:
1078 | time.sleep(self.seconds.value)
1079 |
1080 | @xai_component
1081 | class ReadFile(Component):
1082 | """A component that reads the contents of a file.
1083 |
1084 | #### inPorts:
1085 | - file_name: The name of the file to read.
1086 |
1087 | #### outPorts:
1088 | - content: The contents of the file as a string.
1089 | """
1090 |
1091 | file_name: InCompArg[str]
1092 | content: OutArg[str]
1093 |
1094 | def execute(self, ctx) -> None:
1095 | s = ""
1096 | with open(self.file_name.value, "r") as f:
1097 | s = f.read()
1098 | self.content.value = s
1099 |
1100 |
1101 | class OutputAgentStatus(Component):
1102 | """A component that generates a status output for an agent.
1103 |
1104 | #### inPorts:
1105 | - task_list: A list of tasks.
1106 | - text: The text input.
1107 | - results: The result of task execution.
1108 |
1109 | #### outPorts:
1110 | - content: The JSON-encoded status of the agent.
1111 | """
1112 |
1113 | task_list: InCompArg[deque]
1114 | text: InArg[str]
1115 | results: InArg[str]
1116 | content: OutArg[str]
1117 |
1118 | def execute(self, ctx) -> None:
1119 | out = {
1120 | 'task_list': list(self.task_list.value),
1121 | 'result': self.results.value,
1122 | 'text': self.text.value
1123 | }
1124 | self.content.value = json.dumps(out)
1125 |
1126 |
1127 | @xai_component
1128 | class Confirm(Component):
1129 | """Pauses the python process and asks the user to enter Y/N to continue.
1130 | **Note**: If you use this component, run the compiled script from a terminal.
1131 | """
1132 |
1133 | prompt: InArg[str]
1134 | decision: OutArg[bool]
1135 |
1136 | def execute(self, ctx) -> None:
1137 | prompt = self.prompt.value if self.prompt.value else 'Continue?'
1138 | response = input(prompt + '(y/N)')
1139 | if response == 'y' or response == 'Y':
1140 | self.decision.value = True
1141 | else:
1142 | self.decision.value = False
1143 |
1144 |
1145 |
1146 | @xai_component
1147 | class TestCounter(Component):
1148 |
1149 | def execute(self, ctx) -> None:
1150 |
1151 | if ctx.get("count") is None:
1152 | ctx["count"] = 1
1153 | else:
1154 | count = ctx.get("count")
1155 | count += 1
1156 | print(count)
1157 | ctx["count"] = count
1158 | if count == 3:
1159 | sys.exit(0)
1160 |
1161 |
--------------------------------------------------------------------------------
/demo.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/XpressAI/xai-gpt-agent-toolkit/3a4f8e220c1d3c26306be86fb0476d7cc70f7ad9/demo.gif
--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
1 | [project]
2 | name = "xai-gpt-agent-toolkit"
3 | version = "0.1.0"
4 | description = "Xircuits component library for GPT-agent."
5 | authors = [{ name = "XpressAI", email = "eduardo@xpress.ai" }]
6 | license = "MIT"
7 | readme = "README.md"
8 | repository = "https://github.com/XpressAI/xai-gpt-agent-toolkit"
9 | keywords = ["xircuits", "GPT", "agent toolkit", "AI workflows", "automation"]
10 |
11 | dependencies = [
12 | "python-dotenv==1.0.1",
13 | "requests",
14 | "numpy",
15 | "playwright==1.48.0",
16 | "openai==1.79.0"
17 | ]
18 |
19 | # Xircuits-specific configurations
20 | [tool.xircuits]
21 | default_example_path = "examples/babyagi.xircuits"
22 | requirements_path = "requirements.txt"
23 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | numpy==1.26.4
2 | openai==1.79.0
3 | playwright==1.48.0
4 | python-dotenv==1.0.1
5 | requests==2.31.0
6 |
--------------------------------------------------------------------------------