├── .gitignore
├── LICENSE
├── README.md
├── blueprints
│   ├── multi_provider
│   │   ├── README.md
│   │   ├── anthropic
│   │   │   ├── __init__.py
│   │   │   ├── agent.py
│   │   │   └── utils.py
│   │   ├── bedrock
│   │   │   ├── __init__.py
│   │   │   ├── agent.py
│   │   │   └── utils.py
│   │   ├── core
│   │   │   ├── __init__.py
│   │   │   └── base.py
│   │   ├── gemini
│   │   │   ├── __init__.py
│   │   │   ├── agent.py
│   │   │   └── utils.py
│   │   ├── openai
│   │   │   ├── __init__.py
│   │   │   ├── agent.py
│   │   │   └── utils.py
│   │   ├── requirements.txt
│   │   └── tests
│   │       ├── README.md
│   │       ├── __init__.py
│   │       ├── test_anthropic.py
│   │       ├── test_bedrock.py
│   │       ├── test_gemini.py
│   │       └── test_openai.py
│   └── single_provider
│       ├── anthropic
│       │   ├── README.md
│       │   ├── agent.py
│       │   └── test.py
│       ├── bedrock
│       │   ├── README.md
│       │   ├── __init__.py
│       │   ├── agent.py
│       │   ├── test.py
│       │   └── utils.py
│       ├── gemini
│       │   ├── README.md
│       │   ├── agent.py
│       │   ├── requirements.txt
│       │   └── test.py
│       └── openai
│           ├── README.md
│           ├── agent.py
│           └── test.py
├── examples
│   ├── agentic-rag
│   │   ├── audio-retriever
│   │   │   ├── agent.py
│   │   │   ├── requirements.txt
│   │   │   └── table.py
│   │   ├── customer-reviews
│   │   │   ├── agent.py
│   │   │   ├── requirements.txt
│   │   │   └── table.py
│   │   ├── financial-pdf-reports
│   │   │   ├── agent.py
│   │   │   ├── requirements.txt
│   │   │   └── table.py
│   │   ├── image-analyzer
│   │   │   ├── agent.py
│   │   │   ├── requirements.txt
│   │   │   └── table.py
│   │   ├── memory-retriever
│   │   │   ├── agent.py
│   │   │   └── requirements.txt
│   │   ├── slack-agent
│   │   │   ├── .env-template
│   │   │   ├── README.md
│   │   │   ├── agent.py
│   │   │   ├── requirements.txt
│   │   │   └── tools.py
│   │   └── video-summarizer
│   │       ├── agent.py
│   │       ├── requirements.txt
│   │       └── table.py
│   ├── getting-started
│   │   └── pixelagent_basics_tutorial.py
│   ├── memory
│   │   ├── README.md
│   │   ├── anthropic
│   │   │   ├── basic.py
│   │   │   ├── requirements.txt
│   │   │   └── semantic-memory-search.py
│   │   └── openai
│   │       ├── basic.py
│   │       ├── requirements.txt
│   │       └── semantic-memory.py
│   ├── planning
│   │   ├── README.md
│   │   ├── anthropic
│   │   │   ├── react.py
│   │   │   └── requirements.txt
│   │   └── openai
│   │       ├── react.py
│   │       └── requirements.txt
│   ├── reflection
│   │   ├── anthropic
│   │   │   ├── reflection.py
│   │   │   └── requirements.txt
│   │   ├── openai
│   │   │   ├── reflection.py
│   │   │   └── requirements.txt
│   │   └── reflection.md
│   ├── tool-calling
│   │   ├── anthropic
│   │   │   ├── finance.py
│   │   │   └── requirements.txt
│   │   ├── bedrock
│   │   │   ├── finance.py
│   │   │   └── requirements.txt
│   │   └── openai
│   │       ├── finance.py
│   │       └── requirements.txt
│   └── vision
│       ├── image-analyzer.py
│       └── requirements.txt
├── pixelagent
│   ├── __init__.py
│   ├── anthropic
│   │   ├── __init__.py
│   │   ├── agent.py
│   │   └── utils.py
│   ├── bedrock
│   │   ├── README.md
│   │   ├── __init__.py
│   │   ├── agent.py
│   │   └── utils.py
│   ├── core
│   │   ├── __init__.py
│   │   └── base.py
│   ├── gemini
│   │   ├── __init__.py
│   │   ├── agent.py
│   │   └── utils.py
│   └── openai
│       ├── __init__.py
│       ├── agent.py
│       └── utils.py
├── poetry.lock
├── poetry.toml
├── pyproject.toml
├── scripts
│   └── release.sh
└── tests
    ├── README.md
    ├── conftest.py
    ├── pytest.ini
    ├── test.py
    ├── test_anthropic.py
    ├── test_bedrock.py
    ├── test_gemini.py
    └── test_openai.py
/.gitignore:
--------------------------------------------------------------------------------
1 | .venv
2 | .vscode
3 | dist
4 | **/__pycache__
5 | *.egg-info
6 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
30 |
31 | ---
32 | # Pixelagent: An Agent Engineering Blueprint
33 |
34 | We see agents as the intersection of an LLM, storage, and orchestration. [Pixeltable](https://github.com/pixeltable/pixeltable) unifies this interface into a single declarative framework, making it the de facto choice for engineers building custom agentic applications with build-your-own functionality for memory, tool-calling, and more.
35 |
36 |
37 | ## Build your own agent framework:
38 |
39 | - **Data Orchestration and Storage**: Built on Pixeltable's data infrastructure
40 | - **Native Multimodal**: Built-in support for text, images, audio, and video
41 | - **Declarative Model**: A type-safe Python framework
42 | - **Model-agnostic**: Extensible to multiple providers
43 | - **Observability**: Complete traceability with automatic logging of messages, tool calls, and performance metrics
44 | - **Agentic Extensions**: Add reasoning, reflection, memory, knowledge, and team workflows
45 |
46 | ## Connect blueprints to Cursor, Windsurf, Cline:
47 |
48 | - **[Anthropic](https://github.com/pixeltable/pixelagent/blob/main/blueprints/single_provider/anthropic/README.md)**
49 | - **[OpenAI](https://github.com/pixeltable/pixelagent/blob/main/blueprints/single_provider/openai/README.md)**
50 | - **[AWS Bedrock](https://github.com/pixeltable/pixelagent/blob/main/blueprints/single_provider/bedrock/README.md)**
51 | - **[Multi-provider](https://github.com/pixeltable/pixelagent/tree/main/blueprints/multi_provider/README.md)**
52 |
53 | ## Plug-and-Play Extensions
54 |
55 | - **[Tools](examples/tool-calling)**: Add custom Python functions as tools
56 | - **[Memory](examples/memory)**: Implement long-term memory systems with semantic search capabilities
57 | - **[Reflection](examples/reflection)**: Add self-improvement loops (see the sketch below)
58 | - **[Reasoning](examples/planning)**: Add planning loops
59 | - **[Multimodal Agentic RAG](examples/agentic-rag)**: Multimodal agentic retrieval
60 |
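For a sense of how these extensions compose from the same primitives, here is a minimal reflection loop. This is a sketch, not the code in `examples/reflection`: the agent names and prompts are illustrative, and it assumes only the `Agent(name, system_prompt)` constructor and `chat()` method from the Quick Start below.

```python
from pixelagent.openai import Agent

# Minimal reflection loop sketch: draft, critique, revise.
# Agent names and prompts are illustrative, not from examples/reflection.
drafter = Agent(name="drafter", system_prompt="You write concise, accurate answers.")
critic = Agent(name="critic", system_prompt="You critique answers and list concrete improvements.")

question = "Explain what a vector database is."
draft = drafter.chat(question)
critique = critic.chat(f"Critique this answer to '{question}':\n{draft}")
final = drafter.chat(f"Revise your previous answer using this critique:\n{critique}")
print(final)
```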
61 | ## Usage
62 |
63 | Transform your agent blueprint into a distributable package on PyPI, extending the build-your-own philosophy to deployment and sharing.
64 |
65 | ### Installation
66 |
67 | ```bash
68 | pip install pixelagent
69 | # Install provider-specific dependencies
70 | pip install anthropic # For Claude models
71 | pip install openai # For GPT models
72 | ```
73 |
74 | ### Quick Start
75 |
76 | ```python
77 | from pixelagent.anthropic import Agent # Or from pixelagent.openai import Agent
78 |
79 | # Create a simple agent
80 | agent = Agent(
81 | name="my_assistant",
82 | system_prompt="You are a helpful assistant."
83 | )
84 |
85 | # Chat with your agent
86 | response = agent.chat("Hello, who are you?")
87 | print(response)
88 | ```
89 |
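The agents are multimodal as well; `chat()` accepts an optional image, as shown in the Bedrock blueprint README. A minimal sketch:

```python
from PIL import Image
from pixelagent.anthropic import Agent

# Pass a PIL image alongside the message; the agent pipeline
# base64-encodes it into the provider's message format.
vision_agent = Agent(name="vision_assistant", system_prompt="Describe images precisely.")
img = Image.open("path/to/image.jpg")
print(vision_agent.chat("What's in this image?", image=img))
```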
90 | ### Adding Tools
91 |
92 | ```python
93 | import pixeltable as pxt
94 | from pixelagent.anthropic import Agent
95 | import yfinance as yf
96 |
97 | # Define a tool as a UDF
98 | @pxt.udf
99 | def stock_price(ticker: str) -> dict:
100 | """Get stock information for a ticker symbol"""
101 | stock = yf.Ticker(ticker)
102 | return stock.info
103 |
104 | # Create agent with tool
105 | agent = Agent(
106 | name="financial_assistant",
107 | system_prompt="You are a financial analyst assistant.",
108 | tools=pxt.tools(stock_price)
109 | )
110 |
111 | # Use tool calling
112 | result = agent.tool_call("What's the current price of NVDA?")
113 | print(result)
114 | ```
115 |
116 | ### State Management
117 |
118 | ```python
119 | import pixeltable as pxt
120 | from pixelagent.anthropic import Agent
121 | # Agent memory is automatically persisted in tables
122 | memory = pxt.get_table("my_assistant.memory")
123 | conversations = memory.collect()
124 |
125 | # Access tool call history
126 | tools_log = pxt.get_table("financial_assistant.tools")
127 | tool_history = tools_log.collect()
128 |
129 | # Customizable memory window: control how many past messages stay in context
130 | conversational_agent = Agent(
131 | name="conversation_agent",
132 | system_prompt="Focus on remembering the conversation",
133 | n_latest_messages=14
134 | )
135 | ```
136 |
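Because memory is an ordinary Pixeltable table, you can layer semantic search over it. A sketch, assuming Pixeltable's sentence-transformers embedding index (see `examples/memory` for working versions; the exact `add_embedding_index` signature may vary across Pixeltable releases):

```python
import pixeltable as pxt
from pixeltable.functions.huggingface import sentence_transformer

memory = pxt.get_table("my_assistant.memory")

# Index message content for similarity search
memory.add_embedding_index(
    "content",
    string_embed=sentence_transformer.using(model_id="intfloat/e5-large-v2"),
)

# Retrieve the three most relevant past messages
sim = memory.content.similarity("what stocks did we discuss?")
relevant = memory.order_by(sim, asc=False).limit(3).select(memory.content).collect()
```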
137 | ### Custom Agentic Strategies
138 | ```python
139 |
140 | # ReAct pattern for step-by-step reasoning and planning
141 | import re
142 | from datetime import datetime
143 | from pixelagent.openai import Agent
144 | import pixeltable as pxt
145 |
146 | # Define a tool
147 | @pxt.udf
148 | def stock_info(ticker: str) -> dict:
149 | """Get stock information for analysis"""
150 | import yfinance as yf
151 | stock = yf.Ticker(ticker)
152 | return stock.info
153 |
154 | # ReAct system prompt with structured reasoning pattern
155 | REACT_PROMPT = """
156 | Today is {date}
157 |
158 | IMPORTANT: You have {max_steps} maximum steps. You are on step {step}.
159 |
160 | Follow this EXACT step-by-step reasoning and action pattern:
161 |
162 | 1. THOUGHT: Think about what information you need to answer the question.
163 | 2. ACTION: Either use a tool OR write "FINAL" if you're ready to give your final answer.
164 |
165 | Available tools:
166 | {tools}
167 |
168 | Always structure your response with these exact headings:
169 |
170 | THOUGHT: [your reasoning]
171 | ACTION: [tool_name] OR simply write "FINAL"
172 | """
173 |
174 | # Helper function to extract sections from responses
175 | def extract_section(text, section_name):
176 | pattern = rf'{section_name}:?\s*(.*?)(?=\n\s*(?:THOUGHT|ACTION):|$)'
177 | match = re.search(pattern, text, re.DOTALL | re.IGNORECASE)
178 | return match.group(1).strip() if match else ""
179 |
180 | # Execute ReAct planning loop
181 | def run_react_loop(question, max_steps=5):
182 | step = 1
183 | while step <= max_steps:
184 | # Dynamic system prompt with current step
185 | react_system_prompt = REACT_PROMPT.format(
186 | date=datetime.now().strftime("%Y-%m-%d"),
187 | tools=["stock_info"],
188 | step=step,
189 | max_steps=max_steps,
190 | )
191 |
192 | # Agent with updated system prompt
193 | agent = Agent(
194 | name="financial_planner",
195 | system_prompt=react_system_prompt,
196 | reset=False, # Maintain memory between steps
197 | )
198 |
199 | # Get agent's response for current step
200 | response = agent.chat(question)
201 |
202 | # Extract action to determine next step
203 | action = extract_section(response, "ACTION")
204 |
205 | # Check if agent is ready for final answer
206 | if "FINAL" in action.upper():
207 | break
208 |
209 | # Call tool if needed
210 | if "stock_info" in action.lower():
211 | tool_agent = Agent(
212 | name="financial_planner",
213 | tools=pxt.tools(stock_info)
214 | )
215 | tool_agent.tool_call(question)
216 |
217 | step += 1
218 |
219 | # Generate final recommendation
220 | return Agent(name="financial_planner").chat(question)
221 |
222 | # Run the planning loop
223 | recommendation = run_react_loop("Create an investment recommendation for AAPL")
224 | ```
225 |
226 | Check out our [tutorials](examples/) for more examples, including reflection loops, planning patterns, and multi-provider implementations.
227 |
228 | ## Tutorials and Examples
229 |
230 | - **Basics**: Check out [Getting Started](examples/getting-started/pixelagent_basics_tutorial.py) for a step-by-step introduction to core concepts
231 | - **Advanced Patterns**: Explore [Reflection](examples/reflection/anthropic/reflection.py) and [Planning](examples/planning/anthropic/react.py) for more complex agent architectures
232 | - **Specialized Directories**: Browse our example directories for deeper implementations of specific techniques
233 |
234 |
235 | Ready to start building? Dive into the blueprints, tweak them to your needs, and let Pixeltable handle the AI data infrastructure while you focus on innovation!
236 |
--------------------------------------------------------------------------------
/blueprints/multi_provider/anthropic/__init__.py:
--------------------------------------------------------------------------------
1 | """
2 | Anthropic agent module for multi-provider blueprints.
3 |
4 | This module provides an Agent class for interacting with Anthropic's Claude models.
5 | """
6 |
7 | from .agent import Agent
8 | from .utils import create_messages
9 |
10 | __all__ = ["Agent", "create_messages"]
11 |
--------------------------------------------------------------------------------
/blueprints/multi_provider/anthropic/agent.py:
--------------------------------------------------------------------------------
1 | from typing import Optional
2 |
3 | import pixeltable as pxt
4 | import pixeltable.functions as pxtf
5 |
6 | from ..core.base import BaseAgent
7 |
8 | from .utils import create_messages
9 |
10 | try:
11 | from pixeltable.functions.anthropic import invoke_tools, messages
12 | except ImportError:
13 | raise ImportError("anthropic not found; run `pip install anthropic`")
14 |
15 |
16 | class Agent(BaseAgent):
17 | """
18 | Anthropic-specific implementation of the BaseAgent.
19 |
20 | This agent uses Anthropic's Claude API for generating responses and handling tools.
21 | It inherits common functionality from BaseAgent including:
22 | - Table setup and management
23 | - Memory persistence
24 | - Base chat and tool call implementations
25 |
26 | The agent supports both limited and unlimited conversation history through
27 | the n_latest_messages parameter.
28 | """
29 |
30 | def __init__(
31 | self,
32 | name: str,
33 | system_prompt: str,
34 | model: str = "claude-3-5-sonnet-latest",
35 | n_latest_messages: Optional[int] = 10,
36 | tools: Optional[pxt.tools] = None,
37 | reset: bool = False,
38 | chat_kwargs: Optional[dict] = None,
39 | tool_kwargs: Optional[dict] = None,
40 | ):
41 | # Initialize the base agent with all common parameters
42 | super().__init__(
43 | name=name,
44 | system_prompt=system_prompt,
45 | model=model,
46 | n_latest_messages=n_latest_messages, # None for unlimited history
47 | tools=tools,
48 | reset=reset,
49 | chat_kwargs=chat_kwargs,
50 | tool_kwargs=tool_kwargs,
51 | )
52 |
53 | def _setup_chat_pipeline(self):
54 | """
55 | Configure the chat completion pipeline using Pixeltable's computed columns.
56 | This method implements the abstract method from BaseAgent.
57 |
58 | The pipeline consists of 4 steps:
59 | 1. Retrieve recent messages from memory
60 | 2. Format messages for Claude
61 | 3. Get completion from Anthropic
62 | 4. Extract the response text
63 |
64 | Note: The pipeline automatically handles memory limits based on n_latest_messages.
65 | When set to None, it maintains unlimited conversation history.
66 | """
67 |
68 | # Step 1: Define a query to get recent messages
69 | @pxt.query
70 | def get_recent_memory(current_timestamp: pxt.Timestamp) -> list[dict]:
71 | """
72 | Get recent messages from memory, respecting n_latest_messages limit if set.
73 | Messages are ordered by timestamp (newest first).
74 | Returns all messages if n_latest_messages is None.
75 | """
76 | query = (
77 | self.memory.where(self.memory.timestamp < current_timestamp)
78 | .order_by(self.memory.timestamp, asc=False)
79 | .select(role=self.memory.role, content=self.memory.content)
80 | )
81 | if self.n_latest_messages is not None:
82 | query = query.limit(self.n_latest_messages)
83 | return query
84 |
85 | # Step 2: Add computed columns to process the conversation
86 | # First, get the conversation history
87 | self.agent.add_computed_column(
88 | memory_context=get_recent_memory(self.agent.timestamp), if_exists="ignore"
89 | )
90 |
91 | # Format messages for Claude (simpler than OpenAI as system prompt is passed separately)
92 | self.agent.add_computed_column(
93 | messages=create_messages(
94 | self.agent.memory_context,
95 | self.agent.user_message,
96 | self.agent.image,
97 | ),
98 | if_exists="ignore",
99 | )
100 |
101 | # Get Claude's API response (note: system prompt passed directly to messages())
102 | self.agent.add_computed_column(
103 | response=messages(
104 | messages=self.agent.messages,
105 | model=self.model,
106 | system=self.system_prompt, # Claude handles system prompt differently
107 | **self.chat_kwargs,
108 | ),
109 | if_exists="ignore",
110 | )
111 |
112 | # Extract the final response text from Claude's specific response format
113 | self.agent.add_computed_column(
114 | agent_response=self.agent.response.content[0].text, if_exists="ignore"
115 | )
116 |
117 | def _setup_tools_pipeline(self):
118 | """
119 | Configure the tool execution pipeline using Pixeltable's computed columns.
120 | This method implements the abstract method from BaseAgent.
121 |
122 | The pipeline has 4 stages:
123 | 1. Get initial response from Claude with potential tool calls
124 | 2. Execute any requested tools
125 | 3. Format tool results for follow-up
126 | 4. Get final response incorporating tool outputs
127 |
128 | Note: Claude's tool calling format differs slightly from OpenAI's,
129 | but the overall flow remains the same thanks to BaseAgent abstraction.
130 | """
131 | # Stage 1: Get initial response with potential tool calls
132 | self.tools_table.add_computed_column(
133 | initial_response=messages(
134 | model=self.model,
135 | system=self.system_prompt, # Include system prompt for consistent behavior
136 | messages=[{"role": "user", "content": self.tools_table.tool_prompt}],
137 | tools=self.tools, # Pass available tools to Claude
138 | **self.tool_kwargs,
139 | ),
140 | if_exists="ignore",
141 | )
142 |
143 | # Stage 2: Execute any tools that Claude requested
144 | self.tools_table.add_computed_column(
145 | tool_output=invoke_tools(self.tools, self.tools_table.initial_response),
146 | if_exists="ignore",
147 | )
148 |
149 | # Stage 3: Format tool results for follow-up
150 | self.tools_table.add_computed_column(
151 | tool_response_prompt=pxtf.string.format(
152 | "{0}: {1}", self.tools_table.tool_prompt, self.tools_table.tool_output
153 | ),
154 | if_exists="ignore",
155 | )
156 |
157 | # Stage 4: Get final response incorporating tool results
158 | self.tools_table.add_computed_column(
159 | final_response=messages(
160 | model=self.model,
161 | system=self.system_prompt,
162 | messages=[
163 | {"role": "user", "content": self.tools_table.tool_response_prompt}
164 | ],
165 | **self.tool_kwargs,
166 | ),
167 | if_exists="ignore",
168 | )
169 |
170 | # Extract the final response text from Claude's format
171 | self.tools_table.add_computed_column(
172 | tool_answer=self.tools_table.final_response.content[0].text,
173 | if_exists="ignore",
174 | )
175 |
--------------------------------------------------------------------------------
/blueprints/multi_provider/anthropic/utils.py:
--------------------------------------------------------------------------------
1 | import base64
2 | import io
3 | from typing import Optional
4 |
5 | import PIL
6 | import pixeltable as pxt
7 |
8 |
9 | @pxt.udf
10 | def create_messages(
11 | memory_context: list[dict],
12 | current_message: str,
13 | image: Optional[PIL.Image.Image] = None,
14 | ) -> list[dict]:
15 | """
16 | Create a formatted message list for Anthropic Claude models.
17 |
18 | This function formats the conversation history and current message
19 | into the structure expected by Anthropic Claude models.
20 |
21 | Args:
22 | memory_context: List of previous messages from memory
23 | current_message: The current user message
24 | image: Optional image to include with the message
25 |
26 | Returns:
27 | List of formatted messages for Anthropic Claude
28 | """
29 | # Create a copy to avoid modifying the original
30 | messages = memory_context.copy()
31 |
32 | # For text-only messages
33 | if not image:
34 | messages.append({"role": "user", "content": current_message})
35 | return messages
36 |
37 | # Convert image to base64
38 | bytes_arr = io.BytesIO()
39 | image.save(bytes_arr, format="JPEG")
40 | b64_bytes = base64.b64encode(bytes_arr.getvalue())
41 | b64_encoded_image = b64_bytes.decode("utf-8")
42 |
43 | # Create content blocks with text and image
44 | content_blocks = [
45 | {"type": "text", "text": current_message},
46 | {
47 | "type": "image",
48 | "source": {
49 | "type": "base64",
50 | "media_type": "image/jpeg",
51 | "data": b64_encoded_image,
52 | },
53 | },
54 | ]
55 |
56 | messages.append({"role": "user", "content": content_blocks})
57 |
58 | return messages
59 |
--------------------------------------------------------------------------------
/blueprints/multi_provider/bedrock/__init__.py:
--------------------------------------------------------------------------------
1 | from .agent import Agent
2 |
3 | __all__ = ["Agent"]
4 |
--------------------------------------------------------------------------------
/blueprints/multi_provider/bedrock/agent.py:
--------------------------------------------------------------------------------
1 | from typing import Optional
2 |
3 | import pixeltable as pxt
4 | import pixeltable.functions as pxtf
5 |
6 | from ..core.base import BaseAgent
7 |
8 | from .utils import create_messages
9 |
10 | try:
11 | from pixeltable.functions.bedrock import converse, invoke_tools
12 | except ImportError:
13 | raise ImportError("boto3 not found; run `pip install boto3`")
14 |
15 |
16 | class Agent(BaseAgent):
17 | """
18 | AWS Bedrock-specific implementation of the BaseAgent.
19 |
20 | This agent uses AWS Bedrock's Claude API for generating responses and handling tools.
21 | It inherits common functionality from BaseAgent including:
22 | - Table setup and management
23 | - Memory persistence
24 | - Base chat and tool call implementations
25 |
26 | The agent supports both limited and unlimited conversation history through
27 | the n_latest_messages parameter for regular chat, while tool calls use only
28 | the current message without conversation history.
29 | """
30 |
31 | def __init__(
32 | self,
33 | name: str,
34 | system_prompt: str,
35 | model: str = "amazon.nova-pro-v1:0",
36 | n_latest_messages: Optional[int] = 10,
37 | tools: Optional[pxt.tools] = None,
38 | reset: bool = False,
39 | chat_kwargs: Optional[dict] = None,
40 | tool_kwargs: Optional[dict] = None,
41 | ):
42 | # Initialize the base agent with all common parameters
43 | super().__init__(
44 | name=name,
45 | system_prompt=system_prompt,
46 | model=model,
47 | n_latest_messages=n_latest_messages, # None for unlimited history
48 | tools=tools,
49 | reset=reset,
50 | chat_kwargs=chat_kwargs,
51 | tool_kwargs=tool_kwargs,
52 | )
53 |
54 | def _setup_chat_pipeline(self):
55 | """
56 | Configure the chat completion pipeline using Pixeltable's computed columns.
57 | This method implements the abstract method from BaseAgent.
58 |
59 | The pipeline consists of 4 steps:
60 | 1. Retrieve recent messages from memory
61 | 2. Format messages for Claude
62 | 3. Get completion from Anthropic
63 | 4. Extract the response text
64 |
65 | Note: The pipeline automatically handles memory limits based on n_latest_messages.
66 | When set to None, it maintains unlimited conversation history.
67 | """
68 |
69 | # Step 1: Define a query to get recent messages
70 | @pxt.query
71 | def get_recent_memory(current_timestamp: pxt.Timestamp) -> list[dict]:
72 | """
73 | Get recent messages from memory, respecting n_latest_messages limit if set.
74 | Messages are ordered by timestamp (newest first).
75 | Returns all messages if n_latest_messages is None.
76 | """
77 | query = (
78 | self.memory.where(self.memory.timestamp < current_timestamp)
79 | .order_by(self.memory.timestamp, asc=False)
80 | .select(role=self.memory.role, content=self.memory.content)
81 | )
82 | if self.n_latest_messages is not None:
83 | query = query.limit(self.n_latest_messages)
84 | return query
85 |
86 | # Step 2: Add computed columns to process the conversation
87 | # First, get the conversation history
88 | self.agent.add_computed_column(
89 | memory_context=get_recent_memory(self.agent.timestamp), if_exists="ignore"
90 | )
91 |
92 | # Format messages for Claude (simpler than OpenAI as system prompt is passed separately)
93 | self.agent.add_computed_column(
94 | messages=create_messages(
95 | self.agent.memory_context,
96 | self.agent.user_message,
97 | self.agent.image,
98 | ),
99 | if_exists="ignore",
100 | )
101 |
102 | # Get Bedrock Claude's API response
103 | self.agent.add_computed_column(
104 | response=converse(
105 | messages=self.agent.messages,
106 | model_id=self.model,
107 | system=[{"text": self.system_prompt}],
108 | **self.chat_kwargs,
109 | ),
110 | if_exists="ignore",
111 | )
112 |
113 | # Extract the final response text from Bedrock Claude's specific response format
114 | self.agent.add_computed_column(
115 | agent_response=self.agent.response.output.message.content[0].text,
116 | if_exists="ignore"
117 | )
118 |
119 | def _setup_tools_pipeline(self):
120 | """
121 | Configure the tool execution pipeline using Pixeltable's computed columns.
122 | This method implements the abstract method from BaseAgent.
123 |
124 | The pipeline has 4 stages:
125 | 1. Get initial response from Bedrock Claude with potential tool calls
126 | 2. Execute any requested tools
127 | 3. Format tool results for follow-up
128 | 4. Get final response incorporating tool outputs
129 |
130 | Note: For tool calls, we only use the current message without conversation history
131 | to ensure tool execution is based solely on the current request.
132 | """
133 | # Stage 1: Get initial response with potential tool calls
134 | # Note: We only use the current tool prompt without memory context
135 | self.tools_table.add_computed_column(
136 | initial_response=converse(
137 | model_id=self.model,
138 | system=[{"text": self.system_prompt}],
139 | messages=[{"role": "user", "content": [{"text": self.tools_table.tool_prompt}]}],
140 | tool_config=self.tools, # Pass available tools to Bedrock Claude
141 | **self.tool_kwargs,
142 | ),
143 | if_exists="ignore",
144 | )
145 |
146 | # Stage 2: Execute any tools that Bedrock Claude requested
147 | self.tools_table.add_computed_column(
148 | tool_output=invoke_tools(self.tools, self.tools_table.initial_response),
149 | if_exists="ignore",
150 | )
151 |
152 | # Stage 3: Format tool results for follow-up
153 | self.tools_table.add_computed_column(
154 | tool_response_prompt=pxtf.string.format(
155 | "{0}: {1}", self.tools_table.tool_prompt, self.tools_table.tool_output
156 | ),
157 | if_exists="ignore",
158 | )
159 |
160 | # Stage 4: Get final response incorporating tool results
161 | # Again, we only use the current tool response without memory context
162 | self.tools_table.add_computed_column(
163 | final_response=converse(
164 | model_id=self.model,
165 | system=[{"text": self.system_prompt}],
166 | messages=[
167 | {"role": "user", "content": [{"text": self.tools_table.tool_response_prompt}]}
168 | ],
169 | **self.tool_kwargs,
170 | ),
171 | if_exists="ignore",
172 | )
173 |
174 | # Extract the final response text from Bedrock Claude's format
175 | self.tools_table.add_computed_column(
176 | tool_answer=self.tools_table.final_response.output.message.content[0].text,
177 | if_exists="ignore",
178 | )
179 |
--------------------------------------------------------------------------------
/blueprints/multi_provider/bedrock/utils.py:
--------------------------------------------------------------------------------
1 | import base64
2 | import io
3 | from typing import Optional
4 |
5 | import PIL
6 | import pixeltable as pxt
7 |
8 |
9 | @pxt.udf
10 | def create_messages(
11 | memory_context: list[dict],
12 | current_message: str,
13 | image: Optional[PIL.Image.Image] = None,
14 | ) -> list[dict]:
15 | """
16 | Create a formatted message list for Bedrock Claude models.
17 |
18 | This function formats the conversation history and current message
19 | into the structure expected by Bedrock Claude models.
20 |
21 | Args:
22 | memory_context: List of previous messages from memory
23 | current_message: The current user message
24 | image: Optional image to include with the message
25 |
26 | Returns:
27 | List of formatted messages for Bedrock Claude
28 | """
29 | # Create a copy to avoid modifying the original and format for Bedrock
30 | messages = []
31 |
32 | # Get messages in oldest-first order
33 | reversed_memory = list(reversed(memory_context))
34 |
35 | # Ensure the conversation starts with a user message
36 | # If the first message is from the assistant, skip it
37 | start_idx = 0
38 | if reversed_memory and reversed_memory[0]["role"] == "assistant":
39 | start_idx = 1
40 |
41 | # Format previous messages for Bedrock
42 | for msg in reversed_memory[start_idx:]:
43 | # Convert string content to the required list format
44 | if isinstance(msg["content"], str):
45 | messages.append({
46 | "role": msg["role"],
47 | "content": [{"text": msg["content"]}]
48 | })
49 | else:
50 | # If it's already in the correct format, keep it as is
51 | messages.append(msg)
52 |
53 | # For text-only messages
54 | if not image:
55 | messages.append({"role": "user", "content": [{"text": current_message}]})
56 | return messages
57 |
58 | # Convert image to base64
59 | bytes_arr = io.BytesIO()
60 | image.save(bytes_arr, format="JPEG")
61 | b64_bytes = base64.b64encode(bytes_arr.getvalue())
62 | b64_encoded_image = b64_bytes.decode("utf-8")
63 |
64 | # Create content blocks with text and image
65 | content_blocks = [
66 | {"type": "text", "text": current_message},
67 | {
68 | "type": "image",
69 | "source": {
70 | "type": "base64",
71 | "media_type": "image/jpeg",
72 | "data": b64_encoded_image,
73 | },
74 | },
75 | ]
76 |
77 | messages.append({"role": "user", "content": content_blocks})
78 |
79 | return messages
80 |
--------------------------------------------------------------------------------
/blueprints/multi_provider/core/__init__.py:
--------------------------------------------------------------------------------
1 | from .base import BaseAgent
2 |
3 | __all__ = ["BaseAgent"]
4 |
--------------------------------------------------------------------------------
/blueprints/multi_provider/gemini/__init__.py:
--------------------------------------------------------------------------------
1 | """
2 | Google Gemini agent module for multi-provider blueprints.
3 |
4 | This module provides an Agent class for interacting with Google Gemini models.
5 | """
6 |
7 | from .agent import Agent
8 | from .utils import create_content
9 |
10 | __all__ = ["Agent", "create_content"]
11 |
--------------------------------------------------------------------------------
/blueprints/multi_provider/gemini/agent.py:
--------------------------------------------------------------------------------
1 | from typing import Optional
2 |
3 | import pixeltable as pxt
4 | import pixeltable.functions as pxtf
5 |
6 | from ..core.base import BaseAgent
7 |
8 | from .utils import create_content
9 |
10 | try:
11 | from pixeltable.functions.gemini import generate_content, invoke_tools
12 | except ImportError:
13 | raise ImportError("google.genai not found; run `pip install google-genai`")
14 |
15 |
16 | class Agent(BaseAgent):
17 | """
18 | Google Gemini-specific implementation of the BaseAgent.
19 |
20 | This agent uses Google Gemini's generate_content API for generating responses and handling tools.
21 | It inherits common functionality from BaseAgent including:
22 | - Table setup and management
23 | - Memory persistence
24 | - Base chat and tool call implementations
25 |
26 | The agent supports both limited and unlimited conversation history through
27 | the n_latest_messages parameter.
28 | """
29 |
30 | def __init__(
31 | self,
32 | name: str,
33 | system_prompt: str,
34 | model: str = "gemini-2.0-flash",
35 | n_latest_messages: Optional[int] = 10,
36 | tools: Optional[pxt.tools] = None,
37 | reset: bool = False,
38 | chat_kwargs: Optional[dict] = None,
39 | tool_kwargs: Optional[dict] = None,
40 | ):
41 | # Initialize the base agent with all common parameters
42 | super().__init__(
43 | name=name,
44 | system_prompt=system_prompt,
45 | model=model,
46 | n_latest_messages=n_latest_messages, # None for unlimited history
47 | tools=tools,
48 | reset=reset,
49 | chat_kwargs=chat_kwargs,
50 | tool_kwargs=tool_kwargs,
51 | )
52 |
53 | def _setup_chat_pipeline(self):
54 | """
55 | Configure the chat completion pipeline using Pixeltable's computed columns.
56 | This method implements the abstract method from BaseAgent.
57 |
58 | The pipeline consists of 4 steps:
59 | 1. Retrieve recent messages from memory
60 | 2. Format content for Gemini
61 | 3. Get completion from Google Gemini
62 | 4. Extract the response text
63 |
64 | Note: The pipeline automatically handles memory limits based on n_latest_messages.
65 | When set to None, it maintains unlimited conversation history.
66 | """
67 |
68 | # Step 1: Define a query to get recent messages
69 | @pxt.query
70 | def get_recent_memory(current_timestamp: pxt.Timestamp) -> list[dict]:
71 | """
72 | Get recent messages from memory, respecting n_latest_messages limit if set.
73 | Messages are ordered by timestamp (newest first).
74 | Returns all messages if n_latest_messages is None.
75 | """
76 | query = (
77 | self.memory.where(self.memory.timestamp < current_timestamp)
78 | .order_by(self.memory.timestamp, asc=False)
79 | .select(role=self.memory.role, content=self.memory.content)
80 | )
81 | if self.n_latest_messages is not None:
82 | query = query.limit(self.n_latest_messages)
83 | return query
84 |
85 | # Step 2: Add computed columns to process the conversation
86 | # First, get the conversation history
87 | self.agent.add_computed_column(
88 | memory_context=get_recent_memory(self.agent.timestamp), if_exists="ignore"
89 | )
90 |
91 | # Format content for Gemini (text-based format with conversation context)
92 | self.agent.add_computed_column(
93 | prompt=create_content(
94 | self.agent.system_prompt,
95 | self.agent.memory_context,
96 | self.agent.user_message,
97 | self.agent.image,
98 | ),
99 | if_exists="ignore",
100 | )
101 |
102 | # Get Gemini's API response (note: contents parameter instead of messages)
103 | self.agent.add_computed_column(
104 | response=generate_content(
105 | contents=self.agent.prompt,
106 | model=self.model,
107 | **self.chat_kwargs,
108 | ),
109 | if_exists="ignore",
110 | )
111 |
112 | # Extract the final response text from Gemini's specific response format
113 | self.agent.add_computed_column(
114 | agent_response=self.agent.response['candidates'][0]['content']['parts'][0]['text'],
115 | if_exists="ignore"
116 | )
117 |
118 | def _setup_tools_pipeline(self):
119 | """
120 | Configure the tool execution pipeline using Pixeltable's computed columns.
121 | This method implements the abstract method from BaseAgent.
122 |
123 | The pipeline has 4 stages:
124 | 1. Get initial response from Gemini with potential tool calls
125 | 2. Execute any requested tools
126 | 3. Format tool results for follow-up
127 | 4. Get final response incorporating tool outputs
128 |
129 | Note: Gemini's tool calling format uses the same structure as other providers
130 | thanks to BaseAgent abstraction.
131 | """
132 | # Stage 1: Get initial response with potential tool calls
133 | self.tools_table.add_computed_column(
134 | initial_response=generate_content(
135 | contents=self.tools_table.tool_prompt,
136 | model=self.model,
137 | tools=self.tools, # Pass available tools to Gemini
138 | **self.tool_kwargs,
139 | ),
140 | if_exists="ignore",
141 | )
142 |
143 | # Stage 2: Execute any tools that Gemini requested
144 | self.tools_table.add_computed_column(
145 | tool_output=invoke_tools(self.tools, self.tools_table.initial_response),
146 | if_exists="ignore",
147 | )
148 |
149 | # Stage 3: Format tool results for follow-up
150 | self.tools_table.add_computed_column(
151 | tool_response_prompt=pxtf.string.format(
152 | "{0}: {1}", self.tools_table.tool_prompt, self.tools_table.tool_output
153 | ),
154 | if_exists="ignore",
155 | )
156 |
157 | # Stage 4: Get final response incorporating tool results
158 | self.tools_table.add_computed_column(
159 | final_response=generate_content(
160 | contents=self.tools_table.tool_response_prompt,
161 | model=self.model,
162 | **self.tool_kwargs,
163 | ),
164 | if_exists="ignore",
165 | )
166 |
167 | # Extract the final response text from Gemini's format
168 | self.tools_table.add_computed_column(
169 | tool_answer=self.tools_table.final_response['candidates'][0]['content']['parts'][0]['text'],
170 | if_exists="ignore",
171 | )
172 |
--------------------------------------------------------------------------------
/blueprints/multi_provider/gemini/utils.py:
--------------------------------------------------------------------------------
1 | from typing import Optional
2 |
3 | import PIL
4 | import pixeltable as pxt
5 |
6 |
7 | @pxt.udf
8 | def create_content(
9 | system_prompt: str,
10 | memory_context: list[dict],
11 | current_message: str,
12 | image: Optional[PIL.Image.Image] = None,
13 | ) -> str:
14 | # Note: `image` is accepted for interface parity with the other providers but is not embedded in the text prompt.
15 | # Build the conversation context as a text string
16 | context = f"System: {system_prompt}\n\n"
17 |
18 | # Add memory context
19 | for msg in memory_context:
20 | context += f"{msg['role'].title()}: {msg['content']}\n"
21 |
22 | # Add current message
23 | context += f"User: {current_message}\n"
24 | context += "Assistant: "
25 |
26 | return context
27 |
--------------------------------------------------------------------------------
/blueprints/multi_provider/openai/__init__.py:
--------------------------------------------------------------------------------
1 | from .agent import Agent
2 |
3 | __all__ = ["Agent"]
4 |
--------------------------------------------------------------------------------
/blueprints/multi_provider/openai/agent.py:
--------------------------------------------------------------------------------
1 | from typing import Optional
2 |
3 | import pixeltable as pxt
4 | import pixeltable.functions as pxtf
5 |
6 | from ..core.base import BaseAgent
7 |
8 | from .utils import create_messages
9 |
10 | try:
11 | from pixeltable.functions.openai import chat_completions, invoke_tools
12 | except ImportError:
13 | raise ImportError("openai not found; run `pip install openai`")
14 |
15 |
16 | class Agent(BaseAgent):
17 | """
18 | OpenAI-specific implementation of the BaseAgent.
19 |
20 | This agent uses OpenAI's chat completion API for generating responses and handling tools.
21 | It inherits common functionality from BaseAgent including:
22 | - Table setup and management
23 | - Memory persistence
24 | - Base chat and tool call implementations
25 | """
26 |
27 | def __init__(
28 | self,
29 | name: str,
30 | system_prompt: str,
31 | model: str = "gpt-4o-mini",
32 | n_latest_messages: Optional[int] = 10,
33 | tools: Optional[pxt.tools] = None,
34 | reset: bool = False,
35 | chat_kwargs: Optional[dict] = None,
36 | tool_kwargs: Optional[dict] = None,
37 | ):
38 | # Initialize the base agent with all common parameters
39 | super().__init__(
40 | name=name,
41 | system_prompt=system_prompt,
42 | model=model,
43 | n_latest_messages=n_latest_messages, # None for unlimited history
44 | tools=tools,
45 | reset=reset,
46 | chat_kwargs=chat_kwargs,
47 | tool_kwargs=tool_kwargs,
48 | )
49 |
50 | def _setup_chat_pipeline(self):
51 | """
52 | Configure the chat completion pipeline using Pixeltable's computed columns.
53 | This method implements the abstract method from BaseAgent.
54 |
55 | The pipeline consists of 4 steps:
56 | 1. Retrieve recent messages from memory
57 | 2. Format messages with system prompt
58 | 3. Get completion from OpenAI
59 | 4. Extract the response text
60 | """
61 |
62 | # Step 1: Define a query to get recent messages
63 | @pxt.query
64 | def get_recent_memory(current_timestamp: pxt.Timestamp) -> list[dict]:
65 | """
66 | Get recent messages from memory, respecting n_latest_messages limit if set.
67 | Messages are ordered by timestamp (newest first).
68 | """
69 | query = (
70 | self.memory.where(self.memory.timestamp < current_timestamp)
71 | .order_by(self.memory.timestamp, asc=False)
72 | .select(role=self.memory.role, content=self.memory.content)
73 | )
74 | if self.n_latest_messages is not None:
75 | query = query.limit(self.n_latest_messages)
76 | return query
77 |
78 | # Step 2: Add computed columns to process the conversation
79 | # First, get the conversation history
80 | self.agent.add_computed_column(
81 | memory_context=get_recent_memory(self.agent.timestamp),
82 | if_exists="ignore",
83 | )
84 |
85 | # Format messages for OpenAI with system prompt
86 | self.agent.add_computed_column(
87 | prompt=create_messages(
88 | self.agent.system_prompt,
89 | self.agent.memory_context,
90 | self.agent.user_message,
91 | self.agent.image,
92 | ),
93 | if_exists="ignore",
94 | )
95 |
96 | # Get OpenAI's API response
97 | self.agent.add_computed_column(
98 | response=chat_completions(
99 | messages=self.agent.prompt, model=self.model, **self.chat_kwargs
100 | ),
101 | if_exists="ignore",
102 | )
103 |
104 | # Extract the final response text
105 | self.agent.add_computed_column(
106 | agent_response=self.agent.response.choices[0].message.content,
107 | if_exists="ignore",
108 | )
109 |
110 | def _setup_tools_pipeline(self):
111 | """
112 | Configure the tool execution pipeline using Pixeltable's computed columns.
113 | This method implements the abstract method from BaseAgent.
114 |
115 | The pipeline has 4 stages:
116 | 1. Get initial response from OpenAI with potential tool calls
117 | 2. Execute any requested tools
118 | 3. Format tool results for follow-up
119 | 4. Get final response incorporating tool outputs
120 | """
121 | # Stage 1: Get initial response with potential tool calls
122 | self.tools_table.add_computed_column(
123 | initial_response=chat_completions(
124 | model=self.model,
125 | messages=[{"role": "user", "content": self.tools_table.tool_prompt}],
126 | tools=self.tools, # Pass available tools to OpenAI
127 | **self.tool_kwargs,
128 | ),
129 | if_exists="ignore",
130 | )
131 |
132 | # Stage 2: Execute any tools that OpenAI requested
133 | self.tools_table.add_computed_column(
134 | tool_output=invoke_tools(self.tools, self.tools_table.initial_response),
135 | if_exists="ignore",
136 | )
137 |
138 | # Stage 3: Format tool results for follow-up
139 | self.tools_table.add_computed_column(
140 | tool_response_prompt=pxtf.string.format(
141 | "{0}: {1}", self.tools_table.tool_prompt, self.tools_table.tool_output
142 | ),
143 | if_exists="ignore",
144 | )
145 |
146 | # Stage 4: Get final response incorporating tool results
147 | self.tools_table.add_computed_column(
148 | final_response=chat_completions(
149 | model=self.model,
150 | messages=[
151 | {"role": "user", "content": self.tools_table.tool_response_prompt},
152 | ],
153 | **self.tool_kwargs,
154 | ),
155 | if_exists="ignore",
156 | )
157 |
158 | # Extract the final response text
159 | self.tools_table.add_computed_column(
160 | tool_answer=self.tools_table.final_response.choices[0].message.content,
161 | if_exists="ignore",
162 | )
163 |
--------------------------------------------------------------------------------
/blueprints/multi_provider/openai/utils.py:
--------------------------------------------------------------------------------
1 | import base64
2 | import io
3 | from typing import Optional
4 |
5 | import PIL
6 | import pixeltable as pxt
7 |
8 |
9 | @pxt.udf
10 | def create_messages(
11 | system_prompt: str,
12 | memory_context: list[dict],
13 | current_message: str,
14 | image: Optional[PIL.Image.Image] = None,
15 | ) -> list[dict]:
16 |
17 | messages = [{"role": "system", "content": system_prompt}]
18 | messages.extend(memory_context.copy())
19 |
20 | if not image:
21 | messages.append({"role": "user", "content": current_message})
22 | return messages
23 |
24 | # Encode Image
25 | bytes_arr = io.BytesIO()
26 | image.save(bytes_arr, format="jpeg")
27 | b64_bytes = base64.b64encode(bytes_arr.getvalue())
28 | b64_encoded_image = b64_bytes.decode("utf-8")
29 |
30 | # Create content blocks with text and image
31 | content_blocks = [
32 | {"type": "text", "text": current_message},
33 | {
34 | "type": "image_url",
35 | "image_url": {"url": f"data:image/jpeg;base64,{b64_encoded_image}"},
36 | },
37 | ]
38 |
39 | messages.append({"role": "user", "content": content_blocks})
40 | return messages
41 |
--------------------------------------------------------------------------------
/blueprints/multi_provider/requirements.txt:
--------------------------------------------------------------------------------
1 | pixeltable
2 | openai
3 | anthropic
4 | boto3
5 | google-genai
6 |
--------------------------------------------------------------------------------
/blueprints/multi_provider/tests/README.md:
--------------------------------------------------------------------------------
1 | # Multi-Provider Agent Tests
2 |
3 | This directory contains test scripts for the multi-provider agent blueprints. Each test demonstrates how to create and use agents powered by different LLM providers.
4 |
5 | ## Available Tests
6 |
7 | - `test_anthropic.py`: Tests the Anthropic Claude agent with chat and tool calling
8 | - `test_openai.py`: Tests the OpenAI GPT agent with chat and tool calling
9 | - `test_bedrock.py`: Tests the AWS Bedrock agent with chat and tool calling
10 | - `test_gemini.py`: Tests the Google Gemini agent with chat and tool calling (pytest-based)
11 | ## Running the Tests
12 |
13 | ### Running Individual Tests
14 |
15 | To run a specific test, navigate to the `blueprints/multi_provider` directory and run:
16 |
17 | ```bash
18 | python -m tests.test_anthropic
19 | ```
20 |
21 | Or for other providers:
22 |
23 | ```bash
24 | python -m tests.test_openai
25 | python -m tests.test_bedrock
26 | ```
27 |
28 | ### Running All Tests
29 |
30 | To run all tests in sequence, use the provided runner script:
31 |
32 | ```bash
33 | python -m tests.run_all_tests
34 | ```
35 |
36 | You can also specify which tests to run:
37 |
38 | ```bash
39 | python -m tests.run_all_tests test_anthropic test_openai
40 | ```
41 |
42 | ## Test Features
43 |
44 | Each test demonstrates:
45 |
46 | 1. **Conversational Memory**: The agent remembers previous interactions within the conversation
47 | 2. **Tool Calling**: The agent can use tools to retrieve external information
48 | 3. **Memory Persistence**: The agent maintains memory across different interaction types, as sketched below
49 |
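Condensed, the flow each test exercises looks like this (a sketch distilled from `test_anthropic.py`; the other providers' agents expose the same interface):

```python
import pixeltable as pxt
from blueprints.multi_provider.anthropic.agent import Agent

@pxt.udf
def weather(city: str) -> str:
    """Returns the weather in a given city."""
    return f"The weather in {city} is sunny."

agent = Agent(
    name="demo_agent",
    system_prompt="You are a helpful assistant that can answer questions and use tools.",
    tools=pxt.tools(weather),
    reset=True,
)

agent.chat("Hello, my name is Alice.")            # 1. conversational memory
agent.tool_call("What's the weather in Boston?")  # 2. tool calling
agent.chat("Do you remember my name?")            # 3. memory persists across call types
```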
50 | ## Provider-Specific Notes
51 |
52 | ### Anthropic Claude
53 |
54 | - Uses the `claude-3-5-sonnet-latest` model by default
55 | - Demonstrates a weather tool
56 |
57 | ### OpenAI
58 |
59 | - Uses the `gpt-4o-mini` model by default
60 | - Demonstrates a weather tool
61 |
62 | ### AWS Bedrock
63 |
64 | - Uses the `amazon.nova-pro-v1:0` model by default
65 | - Demonstrates a stock price tool
66 | - Requires AWS credentials to be configured
67 |
68 | ## Import Pattern
69 |
70 | The tests demonstrate a simple import pattern:
71 |
72 | ```python
73 | # Import the agent classes with aliases
74 | from blueprints.multi_provider.anthropic.agent import Agent as AnthropicAgent
75 | from blueprints.multi_provider.openai.agent import Agent as OpenAIAgent
76 | from blueprints.multi_provider.bedrock.agent import Agent as BedrockAgent
77 | ```
78 |
79 | This pattern makes it clear which provider's agent is being used while maintaining a consistent interface.
80 |
81 | ### For Application Code
82 |
83 | When using these agents in your application code, you can use the same import pattern:
84 |
85 | ```python
86 | from blueprints.multi_provider.anthropic.agent import Agent as AnthropicAgent
87 | from blueprints.multi_provider.openai.agent import Agent as OpenAIAgent
88 | from blueprints.multi_provider.bedrock.agent import Agent as BedrockAgent
89 | ```
90 |
91 | The alias makes it clear which provider you're using, while the modules themselves live inside the `blueprints.multi_provider` package, so their names don't conflict with the underlying `anthropic` and `openai` Python packages.
92 |
--------------------------------------------------------------------------------
/blueprints/multi_provider/tests/__init__.py:
--------------------------------------------------------------------------------
1 | """
2 | Test package for multi_provider blueprints.
3 |
4 | This package contains test scripts for the multi-provider agent blueprints,
5 | demonstrating chat and tool calling functionality for different LLM providers.
6 | """
7 |
--------------------------------------------------------------------------------
/blueprints/multi_provider/tests/test_anthropic.py:
--------------------------------------------------------------------------------
1 | import sys
2 | from pathlib import Path
3 |
4 | root_dir = Path(__file__).resolve().parents[3]
5 | sys.path.insert(0, str(root_dir))
6 |
7 | from blueprints.multi_provider.anthropic.agent import Agent as AnthropicAgent
8 |
9 |
10 | import pixeltable as pxt
11 |
12 | @pxt.udf
13 | def weather(city: str) -> str:
14 | """
15 | Returns the weather in a given city.
16 | """
17 | return f"The weather in {city} is sunny with a high of 75°F."
18 |
19 |
20 | def main():
21 | # Create an Anthropic agent with memory
22 | agent = AnthropicAgent(
23 | name="anthropic_test",
24 | system_prompt="You are a helpful assistant that can answer questions and use tools.",
25 | model="claude-3-5-sonnet-latest", # Use Claude 3.5 Sonnet
26 | n_latest_messages=10, # Keep last 10 messages in context
27 | tools=pxt.tools(weather), # Register the weather tool
28 | reset=True, # Reset the agent's memory for testing
29 | )
30 |
31 | print("\n=== Testing Conversational Memory ===\n")
32 |
33 | # First conversation turn
34 | user_message = "Hello, my name is Alice."
35 | print(f"User: {user_message}")
36 | response = agent.chat(user_message)
37 | print(f"Agent: {response}\n")
38 |
39 | # Second conversation turn - the agent should remember the user's name
40 | user_message = "What's my name?"
41 | print(f"User: {user_message}")
42 | response = agent.chat(user_message)
43 | print(f"Agent: {response}\n")
44 |
45 | print("\n=== Testing Tool Calling ===\n")
46 |
47 | # Tool call
48 | user_message = "What's the weather in San Francisco?"
49 | print(f"User: {user_message}")
50 | response = agent.tool_call(user_message)
51 | print(f"Agent: {response}\n")
52 |
53 | # Another tool call
54 | user_message = "How about the weather in New York?"
55 | print(f"User: {user_message}")
56 | response = agent.tool_call(user_message)
57 | print(f"Agent: {response}\n")
58 |
59 | print("\n=== Testing Memory After Tool Calls ===\n")
60 |
61 | # Regular chat after tool calls - should still remember the user's name
62 | user_message = "Do you still remember my name?"
63 | print(f"User: {user_message}")
64 | response = agent.chat(user_message)
65 | print(f"Agent: {response}\n")
66 |
67 |
68 | if __name__ == "__main__":
69 | main()
70 |
--------------------------------------------------------------------------------
/blueprints/multi_provider/tests/test_bedrock.py:
--------------------------------------------------------------------------------
1 | import sys
2 | from pathlib import Path
3 |
4 | root_dir = Path(__file__).resolve().parents[3]
5 | sys.path.insert(0, str(root_dir))
6 |
7 | from blueprints.multi_provider.bedrock.agent import Agent as BedrockAgent
8 |
9 |
10 | import pixeltable as pxt
11 |
12 | @pxt.udf
13 | def stock_price(ticker: str) -> float:
14 | """Get the current stock price for a given ticker symbol."""
15 | # This is a mock implementation for testing
16 | prices = {
17 | "AAPL": 175.34,
18 | "MSFT": 325.89,
19 | "GOOGL": 142.56,
20 | "AMZN": 178.23,
21 | "NVDA": 131.17,
22 | }
23 | return prices.get(ticker.upper(), 0.0)
24 |
25 |
26 | def main():
27 | # Create a Bedrock agent with memory
28 | agent = BedrockAgent(
29 | name="bedrock_test",
30 | system_prompt="You are a helpful assistant that can answer questions and use tools.",
31 | model="amazon.nova-pro-v1:0", # Use the Amazon Nova Pro model
32 | n_latest_messages=10, # Keep last 10 messages in context
33 | tools=pxt.tools(stock_price), # Register the stock_price tool
34 | reset=True, # Reset the agent's memory for testing
35 | )
36 |
37 | print("\n=== Testing Conversational Memory ===\n")
38 |
39 | # First conversation turn
40 | user_message = "Hello, my name is Charlie."
41 | print(f"User: {user_message}")
42 | response = agent.chat(user_message)
43 | print(f"Agent: {response}\n")
44 |
45 | # Second conversation turn - the agent should remember the user's name
46 | user_message = "What's my name?"
47 | print(f"User: {user_message}")
48 | response = agent.chat(user_message)
49 | print(f"Agent: {response}\n")
50 |
51 | print("\n=== Testing Tool Calling ===\n")
52 |
53 | # Tool call
54 | user_message = "What is the stock price of NVDA today?"
55 | print(f"User: {user_message}")
56 | response = agent.tool_call(user_message)
57 | print(f"Agent: {response}\n")
58 |
59 | # Another tool call
60 | user_message = "What about AAPL?"
61 | print(f"User: {user_message}")
62 | response = agent.tool_call(user_message)
63 | print(f"Agent: {response}\n")
64 |
65 | print("\n=== Testing Memory After Tool Calls ===\n")
66 |
67 | # Regular chat after tool calls - should still remember the user's name
68 | user_message = "Do you still remember my name?"
69 | print(f"User: {user_message}")
70 | response = agent.chat(user_message)
71 | print(f"Agent: {response}\n")
72 |
73 |
74 | if __name__ == "__main__":
75 | main()
76 |
--------------------------------------------------------------------------------
/blueprints/multi_provider/tests/test_gemini.py:
--------------------------------------------------------------------------------
1 | import os
2 | import pytest
3 |
4 | import pixeltable as pxt
5 | from ..gemini import Agent
6 |
7 |
8 | @pxt.udf
9 | def weather(city: str) -> str:
10 | """
11 | Returns the weather in a given city.
12 | """
13 | return f"The weather in {city} is sunny."
14 |
15 |
16 | @pytest.mark.skipif(
17 | not os.getenv("GEMINI_API_KEY"), reason="GEMINI_API_KEY not set"
18 | )
19 | def test_gemini_agent_chat():
20 | """Test basic chat functionality with Gemini agent."""
21 | agent = Agent(
22 | name="test_gemini_agent",
23 | system_prompt="You're a helpful assistant.",
24 | reset=True,
25 | )
26 |
27 | # Test basic chat
28 | response = agent.chat("Hi, how are you?")
29 | assert isinstance(response, str)
30 | assert len(response) > 0
31 |
32 | # Test memory functionality
33 | response2 = agent.chat("What was my last question?")
34 | assert isinstance(response2, str)
35 | assert len(response2) > 0
36 |
37 |
38 | @pytest.mark.skipif(
39 | not os.getenv("GEMINI_API_KEY"), reason="GEMINI_API_KEY not set"
40 | )
41 | def test_gemini_agent_tools():
42 | """Test tool calling functionality with Gemini agent."""
43 | agent = Agent(
44 | name="test_gemini_tools",
45 | system_prompt="You're my assistant.",
46 | tools=pxt.tools(weather),
47 | reset=True,
48 | )
49 |
50 | # Test tool call
51 | response = agent.tool_call("Get weather in San Francisco")
52 | assert isinstance(response, str)
53 | assert "San Francisco" in response or "sunny" in response
54 |
55 |
56 | @pytest.mark.skipif(
57 | not os.getenv("GEMINI_API_KEY"), reason="GEMINI_API_KEY not set"
58 | )
59 | def test_gemini_agent_unlimited_memory():
60 | """Test unlimited memory functionality."""
61 | agent = Agent(
62 | name="test_gemini_unlimited",
63 | system_prompt="You're my assistant.",
64 | n_latest_messages=None, # Unlimited memory
65 | reset=True,
66 | )
67 |
68 | # Test basic functionality with unlimited memory
69 | response = agent.chat("Remember this number: 42")
70 | assert isinstance(response, str)
71 | assert len(response) > 0
72 |
--------------------------------------------------------------------------------
/blueprints/multi_provider/tests/test_openai.py:
--------------------------------------------------------------------------------
1 | import sys
2 | from pathlib import Path
3 |
4 | root_dir = Path(__file__).resolve().parents[3]
5 | sys.path.insert(0, str(root_dir))
6 |
7 | from blueprints.multi_provider.openai.agent import Agent as OpenAIAgent
8 |
9 |
10 | import pixeltable as pxt
11 |
12 | @pxt.udf
13 | def weather(city: str) -> str:
14 | """
15 | Returns the weather in a given city.
16 | """
17 | return f"The weather in {city} is sunny with a high of 75°F."
18 |
19 |
20 | def main():
21 | # Create an OpenAI agent with memory
22 | agent = OpenAIAgent(
23 | name="openai_test",
24 | system_prompt="You are a helpful assistant that can answer questions and use tools.",
25 | model="gpt-4o-mini", # Use GPT-4o Mini
26 | n_latest_messages=10, # Keep last 10 messages in context
27 | tools=pxt.tools(weather), # Register the weather tool
28 | reset=True, # Reset the agent's memory for testing
29 | )
30 |
31 | print("\n=== Testing Conversational Memory ===\n")
32 |
33 | # First conversation turn
34 | user_message = "Hello, my name is Bob."
35 | print(f"User: {user_message}")
36 | response = agent.chat(user_message)
37 | print(f"Agent: {response}\n")
38 |
39 | # Second conversation turn - the agent should remember the user's name
40 | user_message = "What's my name?"
41 | print(f"User: {user_message}")
42 | response = agent.chat(user_message)
43 | print(f"Agent: {response}\n")
44 |
45 | print("\n=== Testing Tool Calling ===\n")
46 |
47 | # Tool call
48 | user_message = "What's the weather in Seattle?"
49 | print(f"User: {user_message}")
50 | response = agent.tool_call(user_message)
51 | print(f"Agent: {response}\n")
52 |
53 | # Another tool call
54 | user_message = "How about the weather in Chicago?"
55 | print(f"User: {user_message}")
56 | response = agent.tool_call(user_message)
57 | print(f"Agent: {response}\n")
58 |
59 | print("\n=== Testing Memory After Tool Calls ===\n")
60 |
61 | # Regular chat after tool calls - should still remember the user's name
62 | user_message = "Do you still remember my name?"
63 | print(f"User: {user_message}")
64 | response = agent.chat(user_message)
65 | print(f"Agent: {response}\n")
66 |
67 |
68 | if __name__ == "__main__":
69 | main()
70 |
--------------------------------------------------------------------------------
/blueprints/single_provider/anthropic/test.py:
--------------------------------------------------------------------------------
1 | import pixeltable as pxt
2 | from agent import Agent
3 |
4 |
5 | @pxt.udf
6 | def weather(city: str) -> str:
7 | """
8 | Returns the weather in a given city.
9 | """
10 | return f"The weather in {city} is sunny."
11 |
12 |
13 | # Create an agent
14 | agent = Agent(
15 | name="anthropic_agent",
16 | system_prompt="You’re my assistant.",
17 | tools=pxt.tools(weather),
18 | reset=True,
19 | )
20 |
21 | # Persistent chat and memory
22 | print(agent.chat("Hi, how are you?"))
23 | print(agent.chat("What was my last question?"))
24 |
25 | # Tool call
26 | print(agent.tool_call("Get weather in San Francisco"))
27 |
--------------------------------------------------------------------------------
/blueprints/single_provider/bedrock/README.md:
--------------------------------------------------------------------------------
1 | # AWS Bedrock Agent Blueprint
2 |
3 | This blueprint demonstrates how to create a conversational AI agent powered by AWS Bedrock models using Pixeltable for persistent memory, storage, orchestration, and tool execution.
4 |
5 | ## Features
6 |
7 | - **Persistent Memory**: Maintains conversation history in a structured database
8 | - **Unlimited Memory Support**: Option to maintain complete conversation history without limits
9 | - **Tool Execution**: Supports function calling with AWS Bedrock models
10 | - **Image Support**: Can process and respond to images
11 | - **Configurable Context Window**: Control how many previous messages are included in the context
12 | - **Modular Architecture**: Built on a flexible BaseAgent design for maintainability and extensibility
13 |
14 | ## Prerequisites
15 |
16 | - Python 3.8+
17 | - Pixeltable (`pip install pixeltable`)
18 | - AWS Boto3 (`pip install boto3`)
19 | - AWS credentials configured (via AWS CLI, environment variables, or IAM role)
20 |
21 | ## AWS Credentials Setup
22 |
23 | Before using this blueprint, ensure you have:
24 |
25 | 1. An AWS account with access to AWS Bedrock
26 | 2. Proper IAM permissions to use Bedrock models
27 | 3. AWS credentials configured using one of these methods:
28 | - AWS CLI: Run `aws configure`
29 | - Environment variables: Set `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_REGION`
30 | - IAM role (if running on AWS infrastructure)
31 |
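32 | You can sanity-check your credentials and model access from Python before running the agent. A minimal sketch using boto3 (the region is an assumption; adjust to yours):
33 |
34 | ```python
35 | import boto3
36 |
37 | # Picks up whichever credential source is configured above
38 | # (CLI profile, environment variables, or an IAM role)
39 | client = boto3.client("bedrock", region_name="us-east-1")
40 | models = client.list_foundation_models()["modelSummaries"]
41 | print([m["modelId"] for m in models[:5]])
42 | ```
43 |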
32 | ## Quick Start
33 |
34 | ```python
35 | from blueprints.single_provider.bedrock.agent import Agent
36 |
37 | # Create a simple chat agent
38 | agent = Agent(
39 | name="my_bedrock_agent",
40 | system_prompt="You are a helpful assistant.",
41 | model="amazon.nova-pro-v1:0", # You can use other Bedrock models like "anthropic.claude-3-sonnet-20240229-v1:0"
42 | n_latest_messages=10, # Number of recent messages to include in context (set to None for unlimited)
43 | reset=True # Start with a fresh conversation history
44 | )
45 |
46 | # Chat with the agent
47 | response = agent.chat("Hello, who are you?")
48 | print(response)
49 |
50 | # Chat with an image
51 | from PIL import Image
52 | img = Image.open("path/to/image.jpg")
53 | response = agent.chat("What's in this image?", image=img)
54 | print(response)
55 | ```
56 |
57 | ## Tool Execution Example
58 |
59 | ```python
60 | import pixeltable as pxt
61 |
62 | # Define the tool as a Pixeltable UDF; the type hints and docstring
63 | # describe the tool's parameters to the model
64 | @pxt.udf
65 | def get_weather(location: str) -> str:
66 |     """Get the current weather for a location, e.g. 'San Francisco, CA'."""
67 |     # In a real application, this would call a weather API
68 |     return f"It's sunny and 72°F in {location}"
69 |
70 | # Create agent with the tool registered
71 | agent = Agent(
72 |     name="weather_assistant",
73 |     system_prompt="You are a helpful weather assistant.",
74 |     model="amazon.nova-pro-v1:0",
75 |     tools=pxt.tools(get_weather),
76 |     reset=True
77 | )
78 |
79 | # Use tool calling
80 | response = agent.tool_call("What's the weather like in Seattle?")
81 | print(response)
82 | ```
104 |
105 | ## Available Bedrock Models
106 |
107 | This blueprint works with various AWS Bedrock models, including:
108 |
109 | - `amazon.nova-pro-v1:0` (default)
110 | - `anthropic.claude-3-sonnet-20240229-v1:0`
111 | - `anthropic.claude-3-opus-20240229-v1:0`
112 | - `anthropic.claude-3-haiku-20240307-v1:0`
113 | - `meta.llama3-70b-instruct-v1:0`
114 | - `meta.llama3-8b-instruct-v1:0`
115 |
116 | Note that different models have different capabilities and pricing. Refer to the [AWS Bedrock documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html) for more details.
117 |
118 | ## Advanced Configuration
119 |
120 | You can pass additional parameters to the Bedrock API using the `chat_kwargs` and `tool_kwargs` parameters:
121 |
122 | ```python
123 | agent = Agent(
124 | name="advanced_agent",
125 | system_prompt="You are a helpful assistant.",
126 | model="amazon.nova-pro-v1:0",
127 | chat_kwargs={
128 | "temperature": 0.7,
129 | "max_tokens": 1000
130 | },
131 | tool_kwargs={
132 | "temperature": 0.2 # Lower temperature for more deterministic tool calls
133 | }
134 | )
135 | ```
136 |
137 | ## Unlimited Memory Support
138 |
139 | You can create an agent with unlimited conversation history by setting `n_latest_messages=None`:
140 |
141 | ```python
142 | # Create an agent with unlimited memory
143 | agent = Agent(
144 | name="memory_agent",
145 | system_prompt="You are a helpful assistant with perfect memory.",
146 | model="amazon.nova-pro-v1:0",
147 | n_latest_messages=None, # No limit on conversation history
148 | reset=True
149 | )
150 |
151 | # The agent will remember all previous interactions
152 | response1 = agent.chat("My name is Alice.")
153 | response2 = agent.chat("What's my name?") # Agent will remember
154 | response3 = agent.chat("Do you remember my name?") # Agent will still remember
155 | ```
156 |
157 | This is particularly useful for applications that require long-term memory or context awareness.
158 |
159 | ## Architecture
160 |
161 | This blueprint uses a modular architecture with:
162 |
163 | 1. **BaseAgent**: An abstract base class that handles common functionality like:
164 | - Table setup and management
165 | - Memory persistence
166 | - Core chat and tool call implementations
167 |
168 | 2. **Agent**: The Bedrock-specific implementation that inherits from BaseAgent and implements:
169 | - Bedrock-specific message formatting
170 | - Bedrock API integration
171 | - Tool calling for Bedrock models
172 |
173 | This architecture makes the code more maintainable and extensible.
174 |
175 | ## How It Works
176 |
177 | The agent uses Pixeltable to create and manage three tables:
178 |
179 | 1. **memory**: Stores all conversation history with timestamps
180 | 2. **agent**: Manages chat interactions and responses
181 | 3. **tools**: (Optional) Handles tool execution and responses
182 |
183 | When you send a message to the agent, it:
184 |
185 | 1. Stores your message in the memory table
186 | 2. Triggers a pipeline that retrieves recent conversation history
187 | 3. Formats the messages for the Bedrock API (with proper content structure)
188 | 4. Gets a response from the Bedrock model
189 | 5. Stores the response in memory
190 | 6. Returns the response to you
191 |
192 | Tool execution follows a similar pattern but uses a specialized pipeline:
193 | 1. The user's prompt is sent to Bedrock with available tools
194 | 2. Bedrock decides which tools to call and with what parameters
195 | 3. The tools are executed and their results are returned
196 | 4. The results are sent back to Bedrock for a final response
197 | 5. The response is stored in memory and returned to the user
198 |
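199 | Because memory is a regular Pixeltable table, you can inspect it directly. For example, a minimal sketch assuming the `my_bedrock_agent` created in the Quick Start (table paths follow the `<agent_name>.memory` convention used throughout these examples):
200 |
201 | ```python
202 | import pixeltable as pxt
203 |
204 | # Conversation history is stored under "<agent_name>.memory"
205 | memory = pxt.get_table("my_bedrock_agent.memory")
206 |
207 | # Print every stored message in chronological order
208 | print(
209 |     memory.select(memory.role, memory.content)
210 |     .order_by(memory.timestamp)
211 |     .collect()
212 | )
213 | ```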
--------------------------------------------------------------------------------
/blueprints/single_provider/bedrock/__init__.py:
--------------------------------------------------------------------------------
1 | from .agent import Agent
2 |
3 | __all__ = ["Agent"]
4 |
--------------------------------------------------------------------------------
/blueprints/single_provider/bedrock/test.py:
--------------------------------------------------------------------------------
1 | """
2 | Test script for the AWS Bedrock Agent Blueprint.
3 |
4 | This script demonstrates how to create and use a Bedrock-powered agent
5 | with both basic chat functionality and tool execution.
6 |
7 | Prerequisites:
8 | - AWS credentials configured (via AWS CLI, environment variables, or IAM role)
9 | - Access to AWS Bedrock models
10 | """
11 |
12 | import pixeltable as pxt
13 | from blueprints.single_provider.bedrock.agent import Agent
14 |
15 |
16 | @pxt.udf
17 | def stock_price(ticker: str) -> float:
18 | """Get the current stock price for a given ticker symbol."""
19 | # This is a mock implementation for testing
20 | prices = {
21 | "AAPL": 175.34,
22 | "MSFT": 325.89,
23 | "GOOGL": 142.56,
24 | "AMZN": 178.23,
25 | "NVDA": 131.17,
26 | }
27 | return prices.get(ticker.upper(), 0.0)
28 |
29 |
30 | def main():
31 | # Create a Bedrock agent with memory
32 | agent = Agent(
33 | name="bedrock_test",
34 | system_prompt="You are a helpful assistant that can answer questions and use tools.",
35 | model="amazon.nova-pro-v1:0", # Use the Amazon Nova Pro model
36 | n_latest_messages=None, # Unlimited memory to ensure all messages are included
37 | tools=pxt.tools(stock_price), # Register the stock_price tool
38 | reset=True, # Reset the agent's memory for testing
39 | )
40 |
41 | print("\n=== Testing Conversational Memory ===\n")
42 |
43 | # First conversation turn
44 | user_message = "Hello, my name is Alice."
45 | print(f"User: {user_message}")
46 | response = agent.chat(user_message)
47 | print(f"Agent: {response}\n")
48 |
49 | # Second conversation turn - the agent should remember the user's name
50 | user_message = "What's my name?"
51 | print(f"User: {user_message}")
52 | response = agent.chat(user_message)
53 | print(f"Agent: {response}\n")
54 |
55 | print("\n=== Testing Tool Calling (No Memory) ===\n")
56 |
57 | # Tool call - should not use memory from previous conversation
58 | user_message = "What is the stock price of NVDA today?"
59 | print(f"User: {user_message}")
60 | response = agent.tool_call(user_message)
61 | print(f"Agent: {response}\n")
62 |
63 | # Another tool call - should not remember previous tool call
64 | user_message = "What about AAPL?"
65 | print(f"User: {user_message}")
66 | response = agent.tool_call(user_message)
67 | print(f"Agent: {response}\n")
68 |
69 | print("\n=== Testing Memory After Tool Calls ===\n")
70 |
71 | # Regular chat after tool calls - should still remember the user's name
72 | user_message = "Do you still remember my name?"
73 | print(f"User: {user_message}")
74 | response = agent.chat(user_message)
75 | print(f"Agent: {response}\n")
76 |
77 | # Check if the memory contains all the messages
78 | print("\n=== Checking Memory Contents ===\n")
79 | memory_contents = agent.memory.select(
80 | agent.memory.role,
81 | agent.memory.content
82 | ).order_by(agent.memory.timestamp, asc=True).collect()
83 |
84 | print("Memory contains the following messages:")
85 |     for i, row in enumerate(memory_contents):
86 |         role = row["role"]
87 |         content = row["content"]
88 |         print(f"{i+1}. {role}: {content[:50]}...")
89 |
90 |
91 | if __name__ == "__main__":
92 | main()
93 |
--------------------------------------------------------------------------------
/blueprints/single_provider/bedrock/utils.py:
--------------------------------------------------------------------------------
1 | import base64
2 | import io
3 | from typing import Optional
4 |
5 | import PIL
6 | import pixeltable as pxt
7 |
8 |
9 | @pxt.udf
10 | def create_messages(
11 | memory_context: list[dict],
12 | current_message: str,
13 | image: Optional[PIL.Image.Image] = None,
14 | ) -> list[dict]:
15 | """
16 | Create a formatted message list for Bedrock Claude models.
17 |
18 | This function formats the conversation history and current message
19 | into the structure expected by Bedrock Claude models.
20 |
21 | Args:
22 | memory_context: List of previous messages from memory
23 | current_message: The current user message
24 | image: Optional image to include with the message
25 |
26 | Returns:
27 | List of formatted messages for Bedrock Claude
28 | """
29 | # Create a copy to avoid modifying the original and format for Bedrock
30 | messages = []
31 |
32 | # Get messages in oldest-first order
33 | reversed_memory = list(reversed(memory_context))
34 |
35 | # Ensure the conversation starts with a user message
36 | # If the first message is from the assistant, skip it
37 | start_idx = 0
38 | if reversed_memory and reversed_memory[0]["role"] == "assistant":
39 | start_idx = 1
40 |
41 | # Format previous messages for Bedrock
42 | for msg in reversed_memory[start_idx:]:
43 | # Convert string content to the required list format
44 | if isinstance(msg["content"], str):
45 | messages.append({
46 | "role": msg["role"],
47 | "content": [{"text": msg["content"]}]
48 | })
49 | else:
50 | # If it's already in the correct format, keep it as is
51 | messages.append(msg)
52 |
53 | # For text-only messages
54 | if not image:
55 | messages.append({"role": "user", "content": [{"text": current_message}]})
56 | return messages
57 |
58 | # Convert image to base64
59 | bytes_arr = io.BytesIO()
60 | image.save(bytes_arr, format="JPEG")
61 | b64_bytes = base64.b64encode(bytes_arr.getvalue())
62 | b64_encoded_image = b64_bytes.decode("utf-8")
63 |
64 | # Create content blocks with text and image
65 | content_blocks = [
66 | {"type": "text", "text": current_message},
67 | {
68 | "type": "image",
69 | "source": {
70 | "type": "base64",
71 | "media_type": "image/jpeg",
72 | "data": b64_encoded_image,
73 | },
74 | },
75 | ]
76 |
77 | messages.append({"role": "user", "content": content_blocks})
78 |
79 | return messages
80 |
--------------------------------------------------------------------------------
/blueprints/single_provider/gemini/requirements.txt:
--------------------------------------------------------------------------------
1 | pixeltable
2 | google-genai
--------------------------------------------------------------------------------
/blueprints/single_provider/gemini/test.py:
--------------------------------------------------------------------------------
1 | from dotenv import load_dotenv
2 | import pixeltable as pxt
3 | from agent import Agent
4 |
5 | # Load environment variables from .env file
6 | load_dotenv()
7 |
8 | @pxt.udf
9 | def weather(city: str) -> str:
10 | """
11 | Returns the weather in a given city.
12 | """
13 | return f"The weather in {city} is sunny."
14 |
15 |
16 | # Create an agent
17 | agent = Agent(
18 | name="gemini_agent",
19 | system_prompt="You're my assistant.",
20 | tools=pxt.tools(weather),
21 | reset=True,
22 | )
23 |
24 | # Persistent chat and memory
25 | print("=== FIRST CHAT ===")
26 | response1 = agent.chat("Hi, how are you?")
27 | print(f"Response: {response1}")
28 |
29 | # Check what's in memory after first chat
30 | print("\n=== MEMORY CONTENTS AFTER FIRST CHAT ===")
31 | memory_contents = agent.memory.select(agent.memory.role, agent.memory.content, agent.memory.timestamp).order_by(agent.memory.timestamp).collect()
32 | for row in memory_contents:
33 | print(f"Role: {row['role']}, Content: {row['content']}, Timestamp: {row['timestamp']}")
34 |
35 | print("\n=== SECOND CHAT ===")
36 | response2 = agent.chat("What was my last question?")
37 | print(f"Response: {response2}")
38 |
39 | # Check memory context that should be retrieved
40 | print("\n=== MEMORY CONTENTS AFTER SECOND CHAT ===")
41 | memory_contents = agent.memory.select(agent.memory.role, agent.memory.content, agent.memory.timestamp).order_by(agent.memory.timestamp).collect()
42 | for row in memory_contents:
43 | print(f"Role: {row['role']}, Content: {row['content']}, Timestamp: {row['timestamp']}")
44 |
45 | # Check what was actually passed to the agent table
46 | print("\n=== AGENT TABLE CONTENTS ===")
47 | agent_contents = agent.agent.select(agent.agent.user_message, agent.agent.memory_context, agent.agent.prompt, agent.agent.agent_response).collect()
48 | for i, row in enumerate(agent_contents):
49 | print(f"Request {i+1}:")
50 | print(f" User Message: {row['user_message']}")
51 | print(f" Memory Context: {row['memory_context']}")
52 | print(f" Prompt: {row['prompt'][:200]}...") # First 200 chars
53 | print(f" Response: {row['agent_response']}")
54 | print()
55 |
56 | print("\n=== TOOL CALL ===")
57 | tool_response = agent.tool_call("Get weather in San Francisco")
58 | print(f"Tool Response: {tool_response}")
59 |
--------------------------------------------------------------------------------
/blueprints/single_provider/openai/test.py:
--------------------------------------------------------------------------------
1 | import pixeltable as pxt
2 | from agent import Agent
3 |
4 |
5 | @pxt.udf
6 | def weather(city: str) -> str:
7 | """
8 | Returns the weather in a given city.
9 | """
10 | return f"The weather in {city} is sunny."
11 |
12 |
13 | # Create an agent
14 | agent = Agent(
15 | name="openai_agent",
16 | system_prompt="You’re my assistant.",
17 | tools=pxt.tools(weather),
18 | reset=True,
19 | )
20 |
21 | # Persistent chat and memory
22 | print(agent.chat("Hi, how are you?"))
23 | print(agent.chat("What was my last question?"))
24 |
25 | # Tool call
26 | print(agent.tool_call("Get weather in San Francisco"))
27 |
--------------------------------------------------------------------------------
/examples/agentic-rag/audio-retriever/agent.py:
--------------------------------------------------------------------------------
1 | import pixeltable as pxt
2 | from pixelagent.openai import Agent
3 |
4 | # Connect to your tables and views
5 | audio_t = pxt.get_table("audio_search.audio")
6 | sentences_view = pxt.get_table("audio_search.audio_sentence_chunks")
7 |
8 | # Add audio files to the knowledge base
9 | audio_t.insert([{
10 | "audio_file": "https://raw.githubusercontent.com/pixeltable/pixeltable/main/docs/resources/10-minute%20tour%20of%20Pixeltable.mp3"
11 | }])
12 |
13 | # Perform search
14 | @pxt.query
15 | def audio_search(query_text: str) -> str:
16 | min_similarity = 0.8
17 | sim = sentences_view.text.similarity(query_text)
18 | return (
19 | sentences_view.where(sim >= min_similarity)
20 | .order_by(sim, asc=False)
21 | .select(sentences_view.text, sim=sim)
22 | .limit(5)
23 | )
24 |
25 | agent = Agent(
26 | name = "audio_search.agent",
27 | system_prompt = "Use your tool to search the audio database.",
28 | tools = pxt.tools(audio_search)
29 | )
30 |
31 | print(agent.tool_call("search for Pixeltable best practices"))
--------------------------------------------------------------------------------
/examples/agentic-rag/audio-retriever/requirements.txt:
--------------------------------------------------------------------------------
1 | pixelagent
2 | tiktoken
3 | openai
4 | openai-whisper
5 | spacy
6 | sentence-transformers
--------------------------------------------------------------------------------
/examples/agentic-rag/audio-retriever/table.py:
--------------------------------------------------------------------------------
1 |
2 | # prerequisite: run `python -m spacy download en_core_web_sm` first
3 |
4 | import pixeltable as pxt
5 | from pixeltable.functions import whisper
6 | from pixeltable.functions.huggingface import sentence_transformer
7 | from pixeltable.iterators.string import StringSplitter
8 | import spacy
9 |
10 | # Initialize spaCy
11 | nlp = spacy.load("en_core_web_sm")
12 |
13 | # Initialize app structure
14 | pxt.drop_dir("audio_search", if_not_exists="ignore", force=True)
15 | pxt.create_dir("audio_search")
16 |
17 | # Create audio table
18 | audio_t = pxt.create_table(
19 | "audio_search.audio",
20 | {"audio_file": pxt.Audio}
21 | )
22 |
23 | # Add transcription workflow
24 | audio_t.add_computed_column(
25 | transcription=whisper.transcribe(
26 | audio=audio_t.audio_file,
27 | model="base.en"
28 | )
29 | )
30 |
31 | # Create sentence-level view
32 | sentences_view = pxt.create_view(
33 | "audio_search.audio_sentence_chunks",
34 | audio_t,
35 | iterator=StringSplitter.create(
36 | text=audio_t.transcription.text,
37 | separators="sentence"
38 | )
39 | )
40 |
41 | # Configure embedding model
42 | embed_model = sentence_transformer.using(
43 | model_id="intfloat/e5-large-v2"
44 | )
45 |
46 | # Add search capability
47 | sentences_view.add_embedding_index(
48 | column="text",
49 | string_embed=embed_model
50 | )
51 |
--------------------------------------------------------------------------------
/examples/agentic-rag/customer-reviews/agent.py:
--------------------------------------------------------------------------------
1 | import pixeltable as pxt
2 | from pixelagent.openai import Agent
3 |
4 | # Connect to your table
5 | feedback_t = pxt.get_table("customer_feedback.reviews")
6 |
7 | # Sample customer feedback texts
8 | feedback_texts = [
9 | "This app is incredibly user-friendly and fast!",
10 | "I encountered a bug when saving my profile.",
11 | "The new update improved my productivity significantly.",
12 | ]
13 |
14 | # Add feedback texts to the database
15 | feedback_t.insert({"review": text} for text in feedback_texts)
16 |
17 | @pxt.query
18 | def find_feedback(query: str) -> dict:
19 | sim = feedback_t.review.similarity(query)
20 | return (
21 | feedback_t.order_by(sim, asc=False)
22 | .select(feedback_t.review)
23 | .limit(5)
24 | )
25 |
26 | tools = pxt.tools(find_feedback)
27 | agent = Agent(
28 | name = "feedback_agent",
29 | system_prompt = "Use your tool to search the customer feedback reviews.",
30 | tools = tools,
31 | reset=True
32 | )
33 |
34 | print(agent.tool_call("Find reviews about app performance"))
--------------------------------------------------------------------------------
/examples/agentic-rag/customer-reviews/requirements.txt:
--------------------------------------------------------------------------------
1 | pixelagent
2 | openai
--------------------------------------------------------------------------------
/examples/agentic-rag/customer-reviews/table.py:
--------------------------------------------------------------------------------
1 | import pixeltable as pxt
2 | from pixeltable.functions.openai import embeddings
3 |
4 | # Initialize app structure
5 | pxt.drop_dir("customer_feedback", force=True)
6 | pxt.create_dir("customer_feedback")
7 |
8 | # Create reviews table
9 | t = pxt.create_table("customer_feedback.reviews", {"review": pxt.String})
10 |
11 | # Add search capability
12 | t.add_embedding_index(column="review", embedding=embeddings.using(model = "text-embedding-3-small"))
13 |
--------------------------------------------------------------------------------
/examples/agentic-rag/financial-pdf-reports/agent.py:
--------------------------------------------------------------------------------
1 | import pixeltable as pxt
2 | from pixelagent.openai import Agent
3 |
4 | # Connect to your tables
5 | documents_t = pxt.get_table("pdf_search.documents")
6 | documents_chunks = pxt.get_table("pdf_search.document_chunks")
7 |
8 | # Sample document URLs
9 | DOCUMENT_URL = (
10 | "https://github.com/pixeltable/pixeltable/raw/release/docs/resources/rag-demo/"
11 | )
12 |
13 | document_urls = [
14 | DOCUMENT_URL + doc for doc in [
15 | "Argus-Market-Digest-June-2024.pdf",
16 | "Company-Research-Alphabet.pdf",
17 | # "Zacks-Nvidia-Report.pdf",
18 | ]
19 | ]
20 |
21 | # Add documents to database
22 | documents_t.insert({"pdf": url} for url in document_urls)
23 |
24 | # Search documents
25 | @pxt.query
26 | def find_documents(query: str) -> dict:
27 | sim = documents_chunks.text.similarity(query)
28 | return (
29 | documents_chunks.order_by(sim, asc=False)
30 | .select(
31 | documents_chunks.text,
32 | similarity=sim
33 | )
34 | .limit(5)
35 | )
36 |
37 | tools = pxt.tools(find_documents)
38 | agent = Agent(
39 | name = "pdf_search.agent",
40 | system_prompt = "Use your tool to search the PDF database.",
41 | tools = tools
42 | )
43 |
44 | print(agent.tool_call("Use your tool to search for Alphabet earnings."))
--------------------------------------------------------------------------------
/examples/agentic-rag/financial-pdf-reports/requirements.txt:
--------------------------------------------------------------------------------
1 | pixelagent
2 | openai
3 | tiktoken
4 | sentence-transformers
--------------------------------------------------------------------------------
/examples/agentic-rag/financial-pdf-reports/table.py:
--------------------------------------------------------------------------------
1 | import pixeltable as pxt
2 | from pixeltable.iterators import DocumentSplitter
3 | from pixeltable.functions.huggingface import sentence_transformer
4 |
5 | # Initialize app structure
6 | pxt.drop_dir("pdf_search", if_not_exists="ignore", force=True)
7 | pxt.create_dir("pdf_search")
8 |
9 | # Create documents table
10 | documents_t = pxt.create_table(
11 | "pdf_search.documents",
12 | {"pdf": pxt.Document}
13 | )
14 |
15 | # Create chunked view for efficient processing
16 | documents_chunks = pxt.create_view(
17 | "pdf_search.document_chunks",
18 | documents_t,
19 | iterator=DocumentSplitter.create(
20 | document=documents_t.pdf,
21 | separators="token_limit",
22 | limit=300 # Tokens per chunk
23 | )
24 | )
25 |
26 | # Configure embedding model
27 | embed_model = sentence_transformer.using(
28 | model_id="intfloat/e5-large-v2"
29 | )
30 |
31 | # Add search capability
32 | documents_chunks.add_embedding_index(
33 | column="text",
34 | string_embed=embed_model
35 | )
36 |
--------------------------------------------------------------------------------
/examples/agentic-rag/image-analyzer/agent.py:
--------------------------------------------------------------------------------
1 | import pixeltable as pxt
2 | from pixelagent.openai import Agent
3 |
4 | # Connect to your table
5 | img_t = pxt.get_table("image_search.images")
6 |
7 | # Sample image URLs
8 | IMAGE_URL = (
9 | "https://raw.github.com/pixeltable/pixeltable/release/docs/resources/images/"
10 | )
11 |
12 | image_urls = [
13 | IMAGE_URL + doc for doc in [
14 | "000000000030.jpg",
15 | "000000000034.jpg",
16 | "000000000042.jpg",
17 | ]
18 | ]
19 |
20 | # Add images to the database
21 | img_t.insert({"image": url} for url in image_urls)
22 |
23 | @pxt.query
24 | def find_images(query: str) -> dict:
25 | sim = img_t.image_description.similarity(query)
26 | return (
27 | img_t.order_by(sim, asc=False)
28 | .select(img_t.image_description)
29 | .limit(5)
30 | )
31 |
32 | tools = pxt.tools(find_images)
33 | agent = Agent(
34 | name = "image_search.agent",
35 | system_prompt = "Use your tool to search the image index.",
36 | tools = tools,
37 | reset=True
38 | )
39 |
40 | print(agent.tool_call("Describe the image that contains flowers"))
--------------------------------------------------------------------------------
/examples/agentic-rag/image-analyzer/requirements.txt:
--------------------------------------------------------------------------------
1 | pixelagent
2 | openai
3 | sentence-transformers
--------------------------------------------------------------------------------
/examples/agentic-rag/image-analyzer/table.py:
--------------------------------------------------------------------------------
1 | import pixeltable as pxt
2 | from pixeltable.functions.openai import vision
3 | from pixeltable.functions.huggingface import sentence_transformer
4 |
5 | # Initialize app structure
6 | pxt.drop_dir("image_search", force=True)
7 | pxt.create_dir("image_search")
8 |
9 | # Create images table
10 | img_t = pxt.create_table(
11 | "image_search.images",
12 | {"image": pxt.Image}
13 | )
14 |
15 | # Add OpenAI Vision analysis
16 | img_t.add_computed_column(
17 | image_description=vision(
18 | prompt="Describe the image. Be specific on the colors you see.",
19 | image=img_t.image,
20 | model="gpt-4o-mini",
21 | )
22 | )
23 |
24 | # Configure embedding model
25 | embed_model = sentence_transformer.using(
26 | model_id="intfloat/e5-large-v2"
27 | )
28 |
29 | # Add search capability
30 | img_t.add_embedding_index(
31 | column="image_description",
32 | string_embed=embed_model
33 | )
34 |
--------------------------------------------------------------------------------
/examples/agentic-rag/memory-retriever/agent.py:
--------------------------------------------------------------------------------
1 | # pip install sentence-transformers
2 |
3 | import pixeltable as pxt
4 | import pixeltable.functions as pxtf
5 | from pixeltable.functions.openai import embeddings
6 | from pixelagent.openai import Agent
7 |
8 | embed_model = embeddings.using(model="text-embedding-3-small")
9 |
10 | # First, create the agent
11 | agent = Agent(
12 | name="conversational_agent",
13 | system_prompt="You’re my assistant.",
14 | reset=True,
15 | n_latest_messages=10
16 | )
17 |
18 | # Get the agent's memory table and add an embedding index to its content.
19 | # This table contains the conversation history.
20 | memory = pxt.get_table("conversational_agent.memory")
21 |
22 | memory.add_computed_column(
23 | user_content=pxtf.string.format(
24 | "{0}: {1}", memory.role, memory.content
25 | ),
26 | if_exists="ignore",
27 | )
28 |
29 | memory.add_embedding_index(
30 | column="user_content",
31 | idx_name="user_content_idx",
32 | embedding=embed_model,
33 | if_exists="ignore",
34 | )
35 |
36 | # We can search for context from previous conversations that semantically match the query
37 | @pxt.query
38 | def search_memory(query: str) -> list[dict]:
39 | sim = memory.user_content.similarity(query, idx="user_content_idx")
40 | res = (
41 | memory.order_by(sim, asc=False)
42 | .select(memory.user_content, sim=sim)
43 | .limit(5)
44 | )
45 | return res
46 |
47 | memory_agent = Agent(
48 | name="semantic_memory_agent",
49 | system_prompt="Fetch context from previous conversations.",
50 | tools = pxt.tools(search_memory)
51 | )
52 |
53 | # Load some data into memory
54 | print(agent.chat("Can you recommend some activities in Ireland?"))
55 |
56 | print("--------------------------------------------\n")
57 | print(agent.chat("Can you recommend some activities in Paris?"))
58 |
59 | print("--------------------------------------------\n")
60 | print(memory_agent.tool_call("search and summarize memories about ireland"))
61 |
62 |
--------------------------------------------------------------------------------
/examples/agentic-rag/memory-retriever/requirements.txt:
--------------------------------------------------------------------------------
1 | pixelagent
2 | tiktoken
3 | openai
--------------------------------------------------------------------------------
/examples/agentic-rag/slack-agent/.env-template:
--------------------------------------------------------------------------------
1 | SLACK_USER_TOKEN=
2 | SLACK_BOT_TOKEN=
--------------------------------------------------------------------------------
/examples/agentic-rag/slack-agent/README.md:
--------------------------------------------------------------------------------
1 | # Slack Agent Example
2 |
3 | An agentic RAG agent that reads, searches, and summarizes Slack channel messages.
4 |
5 | ## 1. Create a Slack App
6 |
7 | 1. Go to [Slack API: Your Apps](https://api.slack.com/apps) and click **Create New App**.
8 | 2. Choose **From scratch** and give your app a name and workspace.
9 |
10 | ## 2. Set Required OAuth Scopes
11 |
12 | Under **OAuth & Permissions**, add the following **Bot Token Scopes**:
13 | - `app_mentions:read`
14 | - `chat:write`
15 | - `channels:history`
16 | - `channels:read`
17 | - `groups:history`
18 | - `im:history`
19 | - `im:read`
20 | - `im:write`
21 | - `mpim:history`
22 | - `users:read`
23 |
24 | (You may need fewer/more depending on your use case.)
25 |
26 | ## 2a. (Optional) User Token Scopes
27 |
28 | If your agent needs to search workspace content or act on behalf of users, add the following **User Token Scope**:
29 |
30 | - `search:read` — Search a workspace’s content
31 |
32 | User Token Scopes allow your app to access user data and perform actions as the user who authorizes them. You will need to implement the OAuth flow to obtain a **User OAuth Token** if your agent requires these capabilities. See Slack's [OAuth documentation](https://api.slack.com/authentication/oauth-v2) for details.
33 |
34 | ## 3. Install the App and Get Tokens
35 |
36 | 1. Go to **OAuth & Permissions** and click **Install App to Workspace**.
37 | 2. Copy the **Bot User OAuth Token** (starts with `xoxb-`).
38 |
39 | ## 4. Set Environment Variables
40 |
41 | Create a `.env` file in this directory with the following:
42 |
43 | ```
44 | SLACK_BOT_TOKEN=xoxb-... # Required for all Slack API operations (bot token)
45 | SLACK_USER_TOKEN=xoxp-... # Required only for search functionality (user token with search:read scope)
46 | ```
47 |
48 | - `SLACK_BOT_TOKEN` is required for basic channel and message operations. You get this from the "Bot User OAuth Token" after installing your app.
49 | - `SLACK_USER_TOKEN` is only required if you want to use the search functionality (e.g., searching messages in channels). This must be a User OAuth Token (starts with `xoxp-`) with the `search:read` user scope. You must implement the OAuth flow to obtain this token on behalf of a user. See Slack's [OAuth documentation](https://api.slack.com/authentication/oauth-v2) for details.
50 |
51 | Or export these variables in your shell before running.
52 |
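53 | In Python, the example code loads these with `python-dotenv` (already listed in `requirements.txt`):
54 |
55 | ```python
56 | import os
57 |
58 | from dotenv import load_dotenv
59 |
60 | load_dotenv()  # reads the .env file in the current directory
61 | bot_token = os.environ.get("SLACK_BOT_TOKEN")
62 | ```
63 |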
53 | ## 5. Run the Agent
54 |
55 | Install dependencies (if needed):
56 | ```
57 | pip install -r requirements.txt
58 | ```
59 |
60 | Then run:
61 | ```
62 | python agent.py
63 | ```
64 |
65 | ## Troubleshooting
66 | - Make sure your app is invited to the channels you want it to listen to.
67 | - Reinstall the app if you change scopes.
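68 | - To quickly verify your bot token, call Slack's `auth_test` endpoint (a minimal sketch using `slack_bolt`, which is already in `requirements.txt`):
69 |
70 | ```python
71 | import os
72 |
73 | from slack_bolt import App
74 |
75 | app = App(token=os.environ["SLACK_BOT_TOKEN"])
76 | print(app.client.auth_test())  # raises an error if the token is invalid
77 | ```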
69 |
--------------------------------------------------------------------------------
/examples/agentic-rag/slack-agent/agent.py:
--------------------------------------------------------------------------------
1 | from pixelagent.openai import Agent
2 | import pixeltable as pxt
3 |
4 | from tools import search_channel_messages
5 |
6 | agent = Agent(
7 | name = "slackbot",
8 | model = "o3-2025-04-16",
9 | system_prompt = """
10 | You are a helpful assistant that can answer questions and provide information about our slack channel
11 | """,
12 | tools = pxt.tools(search_channel_messages),
13 | reset=True
14 | )
15 |
16 | print(agent.tool_call("summarize topics in the general channel about pixeltable"))
--------------------------------------------------------------------------------
/examples/agentic-rag/slack-agent/requirements.txt:
--------------------------------------------------------------------------------
1 | pixelagent
2 | openai
3 | slack_bolt
4 | python-dotenv
--------------------------------------------------------------------------------
/examples/agentic-rag/slack-agent/tools.py:
--------------------------------------------------------------------------------
1 | import os
2 | from slack_bolt import App
3 | import pixeltable as pxt
4 |
5 | from dotenv import load_dotenv
6 |
7 | load_dotenv()
8 |
9 | # Initialize Slack app with bot token
10 | app = App(token=os.environ.get("SLACK_BOT_TOKEN"))
11 |
12 | def get_recent_messages(channel_id, limit=5):
13 | """
14 | Fetch the most recent messages from a specified Slack channel.
15 |
16 | Args:
17 | channel_id (str): The ID of the channel to fetch messages from
18 | limit (int): Number of messages to retrieve (default: 5)
19 |
20 | Returns:
21 | list: List of message dictionaries, most recent first
22 | """
23 | try:
24 | # Attempt to join the channel first
25 | app.client.conversations_join(channel=channel_id)
26 | print(f"Joined channel with ID: {channel_id}")
27 | except Exception as e:
28 | print(f"Error joining channel: {e}")
29 |
30 | try:
31 | response = app.client.conversations_history(
32 | channel=channel_id,
33 | limit=limit
34 | )
35 | print(response)
36 | return [{"text": msg["text"], "user": msg["user"], "ts": msg["ts"]} for msg in response["messages"]]
37 | except Exception as e:
38 | print(f"Error fetching messages: {e}")
39 | return []
40 |
41 | def list_channels():
42 | """
43 | Fetch a list of all public channels in the Slack workspace.
44 |
45 | Returns:
46 | dict: Dictionary with channel names as keys and IDs as values
47 | """
48 | try:
49 | response = app.client.conversations_list(
50 | types="public_channel"
51 | )
52 | return {channel["name"]: channel["id"] for channel in response["channels"]}
53 | except Exception as e:
54 | print(f"Error fetching channels: {e}")
55 | return {}
56 |
57 | def get_username(user_id):
58 | """
59 | Fetch the username for a given user ID.
60 |
61 | Args:
62 | user_id (str): The ID of the user to look up
63 |
64 | Returns:
65 | str: The username if found, otherwise 'Unknown User'
66 | """
67 | try:
68 | response = app.client.users_info(user=user_id)
69 | if response['ok']:
70 | return response['user']['real_name']
71 | return 'Unknown User'
72 | except Exception as e:
73 | print(f"Error fetching username: {e}")
74 | return 'Unknown User'
75 |
76 | def format_conversation(messages):
77 | """
78 | Format a list of messages into a conversation string.
79 |
80 | Args:
81 | messages (list): List of message dictionaries
82 |
83 | Returns:
84 | str: Formatted conversation string with timestamp, real name, and text
85 | """
86 | conversation = ""
87 | for msg in messages:
88 | username = get_username(msg['user'])
89 | conversation += f"{msg['ts']}: {username}: {msg['text']}\n"
90 | return conversation
91 |
92 | def get_channel_messages(channel_name):
93 | channels = list_channels()
94 | channel_id = channels.get(channel_name, None)
95 | if channel_id:
96 | messages = get_recent_messages(channel_id)
97 | conversation = format_conversation(messages)
98 | return conversation
99 | else:
100 | return f"Channel '{channel_name}' not found"
101 |
102 | @pxt.udf
103 | def search_channel_messages(channel_name: str, keyword: str) -> str:
104 | """
105 | Search for messages containing a specific keyword in a Slack channel.
106 |
107 | Args:
108 | channel_name (str): Name of the channel to search in
109 |         keyword (str): Keyword or phrase to search for (at most 10 matches are returned)
111 |
112 | Returns:
113 | str: Formatted string of matching messages, or error message if channel not found
114 | """
115 | channels = list_channels()
116 | channel_id = channels.get(channel_name, None)
117 | if channel_id:
118 | try:
119 | # Join the channel if not already joined
120 | app.client.conversations_join(channel=channel_id)
121 | print(f"Joined channel with ID: {channel_id}")
122 | except Exception as e:
123 | print(f"Error joining channel: {e}")
124 |
125 | try:
126 | # Use Slack's search.messages API to find messages with the keyword
127 | query = f"{keyword} in:#{channel_name}"
128 | response = app.client.search_messages(token=os.environ.get("SLACK_USER_TOKEN"), query=query, count=10)
129 | print(response)
130 | if response['ok'] and 'messages' in response and 'matches' in response['messages']:
131 | matches = response['messages']['matches'][:10]
132 | return format_conversation([{
133 | 'text': match['text'],
134 | 'user': match['user'],
135 | 'ts': match['ts']
136 | } for match in matches])
137 | else:
138 | return f"No messages found containing '{keyword}' in channel '{channel_name}'"
139 | except Exception as e:
140 | print(f"Error searching messages: {e}")
141 |             if 'not_allowed_token_type' in str(e):
142 |                 return f"Error searching for '{keyword}' in channel '{channel_name}': search requires a user token (SLACK_USER_TOKEN) with the search:read scope"
143 | return f"Error searching for '{keyword}' in channel '{channel_name}': {str(e)}"
144 | else:
145 | return f"Channel '{channel_name}' not found"
146 |
--------------------------------------------------------------------------------
/examples/agentic-rag/video-summarizer/agent.py:
--------------------------------------------------------------------------------
1 | from datetime import datetime
2 |
3 | import pixeltable as pxt
4 | from pixelagent.openai import Agent
5 |
6 | # Constants
7 | directory = 'video_index'
8 | table_name = f'{directory}.video'
9 |
10 | # Connect to your tables and views
11 | video_index = pxt.get_table(table_name)
12 | transcription_chunks = pxt.get_table(f'{directory}.video_sentence_chunks')
13 |
14 | # Insert videos to the knowledge base
15 | videos = [
16 | 'https://github.com/pixeltable/pixeltable/raw/release/docs/resources/audio-transcription-demo/'
17 | f'Lex-Fridman-Podcast-430-Excerpt-{n}.mp4'
18 | for n in range(3)
19 | ]
20 |
21 | video_index.insert({'video': video, 'uploaded_at': datetime.now()} for video in videos[:2])
22 |
23 | @pxt.query
24 | def search_transcription(query: str) -> dict:
25 | sim = transcription_chunks.text.similarity(query)
26 | return (
27 | transcription_chunks.order_by(sim, asc=False)
28 | .select(transcription_chunks.text)
29 | .limit(10)
30 | )
31 |
32 | tools = pxt.tools(search_transcription)
33 | agent = Agent(
34 | name = "video_agent",
35 | system_prompt = "Use your tool to search the video transcription.",
36 | tools = tools,
37 | reset=True
38 | )
39 |
40 | print(agent.tool_call("Search the transcription for happiness. Summarize your findings"))
41 |
--------------------------------------------------------------------------------
/examples/agentic-rag/video-summarizer/requirements.txt:
--------------------------------------------------------------------------------
1 | pixelagent
2 | openai
3 | tiktoken
4 | openai-whisper
5 | spacy
6 | sentence-transformers
--------------------------------------------------------------------------------
/examples/agentic-rag/video-summarizer/table.py:
--------------------------------------------------------------------------------
1 |
2 | # prerequisite: run `python -m spacy download en_core_web_sm` first
3 |
4 | import pixeltable as pxt
5 | from pixeltable.functions import openai
6 | from pixeltable.functions.huggingface import sentence_transformer
7 | from pixeltable.functions.video import extract_audio
8 | from pixeltable.iterators import AudioSplitter, FrameIterator
9 | from pixeltable.iterators.string import StringSplitter
10 | from pixeltable.functions.openai import vision
11 |
12 | # Define the embedding model once for reuse
13 | EMBED_MODEL = sentence_transformer.using(model_id='intfloat/e5-large-v2')
14 |
15 | # Set up directory and table name
16 | directory = 'video_index'
17 | table_name = f'{directory}.video'
18 |
19 | # Create video table
20 | pxt.create_dir(directory, if_exists='replace_force')
21 |
22 | video_index = pxt.create_table(
23 | table_name,
24 | {'video': pxt.Video, 'uploaded_at': pxt.Timestamp}
25 | )
26 |
27 | video_index.add_computed_column(
28 | audio_extract=extract_audio(video_index.video, format='mp3')
29 | )
30 |
31 | # Create view for frames
32 | frames_view = pxt.create_view(
33 | f'{directory}.video_frames',
34 | video_index,
35 | iterator=FrameIterator.create(
36 | video=video_index.video,
37 | fps=1
38 | )
39 | )
40 |
41 | # Create a column for image description using OpenAI gpt-4o-mini
42 | frames_view.add_computed_column(
43 | image_description=vision(
44 | prompt="Provide quick caption for the image.",
45 | image=frames_view.frame,
46 | model="gpt-4o-mini"
47 | )
48 | )
49 |
50 | # Create embedding index for image description
51 | frames_view.add_embedding_index('image_description', string_embed=EMBED_MODEL)
52 |
53 | # Create view for audio chunks
54 | chunks_view = pxt.create_view(
55 | f'{directory}.video_chunks',
56 | video_index,
57 | iterator=AudioSplitter.create(
58 | audio=video_index.audio_extract,
59 | chunk_duration_sec=30.0,
60 | overlap_sec=2.0,
61 | min_chunk_duration_sec=5.0
62 | )
63 | )
64 |
65 | # Audio-to-text for chunks
66 | chunks_view.add_computed_column(
67 | transcription=openai.transcriptions(
68 | audio=chunks_view.audio_chunk, model='whisper-1'
69 | )
70 | )
71 |
72 | # Create view that chunks text into sentences
73 | transcription_chunks = pxt.create_view(
74 | f'{directory}.video_sentence_chunks',
75 | chunks_view,
76 | iterator=StringSplitter.create(text=chunks_view.transcription.text, separators='sentence'),
77 | )
78 |
79 | # Create embedding index for audio
80 | transcription_chunks.add_embedding_index('text', string_embed=EMBED_MODEL)
81 |
--------------------------------------------------------------------------------
/examples/getting-started/pixelagent_basics_tutorial.py:
--------------------------------------------------------------------------------
1 | # ============================================================================
2 | # Pixelagent Basics Tutorial
3 | # ============================================================================
4 | # This tutorial demonstrates the fundamental capabilities of the Pixelagent
5 | # framework, including agent creation, memory persistence, and tool integration.
6 |
7 | # pip install pixelagent openai
8 |
9 | import pixeltable as pxt
10 |
11 | from pixelagent.openai import Agent
12 |
13 | # ============================================================================
14 | # SECTION 1: BASIC AGENT CREATION
15 | # ============================================================================
16 | # Create a simple conversational agent with a system prompt
17 |
18 | # Initialize the agent with a name and personality
19 | agent = Agent(
20 | name="openai_agent", # Unique identifier for this agent
21 | system_prompt="You're my assistant.", # Defines agent personality and capabilities
22 | reset=True, # Start with fresh conversation history
23 | )
24 |
25 | # ============================================================================
26 | # SECTION 2: CONVERSATIONAL MEMORY
27 | # ============================================================================
28 | # Demonstrate how agents maintain conversation context and history
29 |
30 | # Basic conversation with persistent memory
31 | print("Initial greeting:")
32 | print(agent.chat("Hi, how are you?"))
33 |
34 | # Agent remembers previous messages in the conversation
35 | print("\nMemory test:")
36 | print(agent.chat("What was my last question?"))
37 |
38 | # ============================================================================
39 | # SECTION 3: ACCESSING AGENT MEMORY
40 | # ============================================================================
41 | # How to retrieve and analyze the agent's conversation history
42 |
43 | # Access the agent's memory table using the name
44 | memory = pxt.get_table("openai_agent.memory")
45 |
46 | # Retrieve all conversation history
47 | print("\nFull conversation memory:")
48 | print(memory.collect())
49 |
50 | # ============================================================================
51 | # SECTION 4: AGENT WITH TOOLS
52 | # ============================================================================
53 | # Extend agent capabilities with custom Python functions as tools
54 |
55 |
56 | # Define a simple weather tool as a user-defined function (UDF)
57 | @pxt.udf
58 | def weather(city: str) -> str:
59 | """
60 | Get the current weather for a specified city.
61 |
62 | Args:
63 | city (str): The name of the city to check weather for
64 |
65 | Returns:
66 | str: Weather description for the requested city
67 | """
68 | return f"The weather in {city} is sunny."
69 |
70 |
71 | # Add a tool to our existing agent with custom system prompt instructions
72 | agent = Agent(
73 | name="openai_agent",
74 | system_prompt="Use your tools to answer the users questions.",
75 | tools=pxt.tools(weather),
76 | )
77 |
78 | # ============================================================================
79 | # SECTION 5: TOOL CALLING
80 | # ============================================================================
81 | # Demonstrate how agents can use tools to access external functionality
82 |
83 | # Execute a tool call with a specific query
84 | print("\nTool call test:")
85 | print(agent.tool_call("Get weather in San Francisco"))
86 |
87 | # ============================================================================
88 | # SECTION 6: TOOL CALL OBSERVABILITY
89 | # ============================================================================
90 | # How to monitor and analyze the agent's tool usage
91 |
92 | # Access the agent's tool call history
93 | tools = pxt.get_table("openai_agent.tools")
94 |
95 | # Retrieve all tool usage records
96 | print("\nTool call history:")
97 | print(tools.collect())
98 |
--------------------------------------------------------------------------------
/examples/memory/README.md:
--------------------------------------------------------------------------------
1 | # Agent Memory Examples
2 |
3 | Pixelagent's memory system demonstrates the power of our data-first approach to agent development. By leveraging Pixeltable's AI data infrastructure, you get a production-ready memory system with zero setup - just focus on building your agent's logic.
4 |
5 | ## Why Use Pixelagent for Memory?
6 |
7 | - **Data-First**: Memory is automatically persisted and queryable through Pixeltable
8 | - **Engineering Freedom**: Simple interface that doesn't constrain your implementation
9 | - **Simplified Workflow**: Automated handling of:
10 | - Memory persistence and retrieval
11 | - Semantic search capabilities
12 | - State management
13 | - Timestamp tracking
14 |
15 | ## Basic Memory Access
16 |
17 | The simplest way to access agent memory is through the Pixeltable table interface. See [basic.py](openai/basic.py):
18 |
19 | ```bash
20 | pip install -r examples/memory/openai/requirements.txt
21 | ```
22 |
23 | ```python
24 | import pixeltable as pxt
25 | from pixelagent.openai import Agent
26 |
27 | # Create an agent
28 | agent = Agent(name="openai_agent", system_prompt="You're my assistant.", reset=True)
29 |
30 | # Chat with the agent
31 | agent.chat("Hi, how are you?")
32 | agent.chat("What was my last question?")
33 |
34 | # Access memory through Pixeltable
35 | memory = pxt.get_table("openai_agent.memory")
36 | print(memory.collect()) # Shows all conversations
37 | ```
38 |
39 | The memory table contains columns like:
40 | - `timestamp`: When the message was sent/received
41 | - `role`: Who sent the message (user/assistant)
42 | - `content`: The actual message content
43 |
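44 | Because this is an ordinary Pixeltable table, you can filter and order it like any other. For example, a minimal sketch reusing the `memory` handle from above:
45 |
46 | ```python
47 | # Fetch only the user's messages, most recent first
48 | user_msgs = (
49 |     memory.where(memory.role == "user")
50 |     .order_by(memory.timestamp, asc=False)
51 |     .select(memory.content)
52 |     .collect()
53 | )
54 | print(user_msgs)
55 | ```
56 |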
44 | ## Semantic Memory Search
45 |
46 | For more advanced use cases, you can add semantic search capabilities to find relevant past conversations. See [semantic-memory.py](openai/semantic-memory.py):
47 |
48 | ```python
49 | from pixelagent.openai import Agent
50 | import pixeltable as pxt
51 | import pixeltable.functions as pxtf
52 | from pixeltable.functions.huggingface import sentence_transformer
53 |
54 | # Setup embedding model
55 | embed_model = sentence_transformer.using(model_id="intfloat/e5-large-v2")
56 |
57 | # Create agent
58 | agent = Agent(name="semantic_bot", system_prompt="You're my assistant.", reset=False)
59 |
60 | # Get memory table and add embedding index
61 | memory = pxt.get_table("semantic_bot.memory")
62 | memory.add_computed_column(
63 | user_content=pxtf.string.format("{0}: {1}: {2}",
64 | memory.timestamp, memory.role, memory.content),
65 | if_exists="ignore"
66 | )
67 | memory.add_embedding_index(
68 | column="user_content",
69 | idx_name="user_content_idx",
70 | string_embed=embed_model,
71 | if_exists="ignore"
72 | )
73 |
74 | # Function to search memory semantically
75 | def semantic_search(query: str) -> str:
76 | sim = memory.user_content.similarity(query, idx="user_content_idx")
77 | res = (memory.order_by(sim, asc=False)
78 | .select(memory.user_content, sim=sim)
79 | .limit(5)
80 | .collect())
81 | return "\n".join(f"Previous conversations: {user_content}"
82 | for user_content in res["user_content"])
83 | ```
84 |
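85 | The retrieved context can then be folded back into a chat turn. For example, an illustrative use of the `semantic_search` helper and `agent` defined above:
86 |
87 | ```python
88 | context = semantic_search("travel plans")
89 | response = agent.chat(
90 |     f"Here is context from earlier conversations:\n{context}\n\n"
91 |     "Summarize what we discussed about travel."
92 | )
93 | print(response)
94 | ```
95 |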
85 | ## Key Features
86 |
87 | 1. **Persistent Storage**: All conversations are automatically stored in Pixeltable
88 | 2. **Easy Access**: Simple interface to query historical conversations
89 | 3. **Semantic Search**: Optional semantic search capability using embeddings
90 | 4. **Timestamp-based**: All memories are timestamped for chronological access
91 | 5. **Flexible Querying**: Full SQL-like query capabilities through Pixeltable
92 |
93 | ## Installation
94 |
95 | 1. Create and activate a virtual environment:
96 | ```bash
97 | # Windows
98 | python -m venv .venv
99 | .venv\Scripts\activate
100 |
101 | # Linux/MacOS
102 | python -m venv .venv
103 | source .venv/bin/activate
104 | ```
105 |
106 | 2. Install basic requirements:
107 | ```bash
108 | # For OpenAI example
109 | pip install pixeltable pixelagent openai
110 |
111 | # For Anthropic example
112 | pip install pixeltable pixelagent anthropic
113 | ```
114 |
115 | 3. For semantic memory (optional):
116 | ```bash
117 | pip install sentence-transformers spacy
118 | python -m spacy download en_core_web_sm
119 | ```
120 |
121 | ## Requirements
122 |
123 | ### Basic Memory Example
124 | - pixeltable
125 | - pixelagent
126 | - openai (for OpenAI example) or anthropic (for Anthropic example)
127 |
128 | ### Semantic Memory Example
129 | Additional requirements for semantic search:
130 | - sentence-transformers
131 | - spacy
132 |
--------------------------------------------------------------------------------
/examples/memory/anthropic/basic.py:
--------------------------------------------------------------------------------
1 | import pixeltable as pxt
2 |
3 | from pixelagent.anthropic import Agent
4 |
5 | # Create an agent
6 | agent = Agent(
7 | name="anthropic_agent", system_prompt="You’re my assistant.", reset=True
8 | )
9 |
10 | # Persistent chat and memory
11 | print(agent.chat("Hi, how are you?"))
12 | print(agent.chat("What was my last question?"))
13 |
14 | # Easily access agent memory
15 | memory = pxt.get_table("anthropic_agent.memory")
16 | print(memory.collect())
17 |
--------------------------------------------------------------------------------
/examples/memory/anthropic/requirements.txt:
--------------------------------------------------------------------------------
1 | pixelagent
2 | tiktoken
3 | anthropic
4 | spacy
5 | sentence-transformers
--------------------------------------------------------------------------------
/examples/memory/anthropic/semantic-memory-search.py:
--------------------------------------------------------------------------------
1 | import pixeltable as pxt
2 | from pixeltable.functions.huggingface import sentence_transformer
3 |
4 | from pixelagent.anthropic import Agent
5 |
6 | embed_model = sentence_transformer.using(model_id="all-mpnet-base-v2")
7 |
8 | # First, create the agent
9 | agent = Agent(
10 | name="semantic_bot", system_prompt="You’re my assistant.", reset=True
11 | )
12 |
13 | # Add some memory
14 | agent.chat("Hello my name is joe")
15 |
16 | # Get the agent's memory table and add an embedding index to the content
17 | memory = pxt.get_table("semantic_bot.memory")
18 |
19 | memory.add_embedding_index(
20 | column="content",
21 | idx_name="content_idx",
22 | string_embed=embed_model,
23 | if_exists="replace",
24 | )
25 |
26 |
27 | def semantic_search(query: str) -> list[dict]:
28 | sim = memory.content.similarity(query, idx="content_idx")
29 | res = (
30 | memory.order_by(sim, asc=False)
31 | .select(memory.content, sim=sim)
32 | .limit(5)
33 | .collect()
34 | )
35 | result_str = "\n".join(
36 | f"Previous conversations: {content}" for content in res["content"]
37 | )
38 | return result_str
39 |
40 |
41 | # Load some data into memory
42 | agent.chat("I like football")
43 |
44 | # Test the semantic search
45 | query = "I like denver broncos"
46 | context_from_previous_conversations = semantic_search(query)
47 | print(context_from_previous_conversations)
48 |
--------------------------------------------------------------------------------
/examples/memory/openai/basic.py:
--------------------------------------------------------------------------------
1 | import pixeltable as pxt
2 |
3 | from pixelagent.openai import Agent
4 |
5 | # Create an agent
6 | agent = Agent(
7 | name="openai_agent", system_prompt="You’re my assistant.", reset=True
8 | )
9 |
10 | # Persistent chat and memory
11 | print(agent.chat("Hi, how are you?"))
12 | print(agent.chat("What was my last question?"))
13 |
14 | # Easily access agent memory
15 | memory = pxt.get_table("openai_agent.memory")
16 | print(memory.collect())
17 |
--------------------------------------------------------------------------------
/examples/memory/openai/requirements.txt:
--------------------------------------------------------------------------------
1 | pixelagent
2 | tiktoken
3 | openai
4 | spacy
5 | sentence-transformers
--------------------------------------------------------------------------------
/examples/memory/openai/semantic-memory.py:
--------------------------------------------------------------------------------
1 | # pip install sentence-transformers
2 |
3 | import pixeltable as pxt
4 | import pixeltable.functions as pxtf
5 | from pixeltable.functions.huggingface import sentence_transformer
6 |
7 | from pixelagent.openai import Agent
8 |
9 | embed_model = sentence_transformer.using(model_id="intfloat/e5-large-v2")
10 |
11 | # First, create the agent
12 | agent = Agent(
13 |     name="semantic_bot", system_prompt="You're my assistant.", reset=False
14 | )
15 |
16 | # Get the agent's memory table and add an embedding index on the content
17 | memory = pxt.get_table("semantic_bot.memory")
18 |
19 | memory.add_computed_column(
20 | user_content=pxtf.string.format(
21 | "{0}: {1}: {2}", memory.timestamp, memory.role, memory.content
22 | ),
23 | if_exists="ignore",
24 | )
25 |
26 | memory.add_embedding_index(
27 | column="user_content",
28 | idx_name="user_content_idx",
29 | string_embed=embed_model,
30 | if_exists="ignore",
31 | )
32 |
33 |
34 | def semantic_search(query: str) -> str:
35 | sim = memory.user_content.similarity(query, idx="user_content_idx")
36 | res = (
37 | memory.order_by(sim, asc=False)
38 | .select(memory.user_content, sim=sim)
39 | .limit(5)
40 | .collect()
41 | )
42 | result_str = "\n".join(
43 | f"Previous conversations: {user_content}"
44 | for user_content in res["user_content"]
45 | )
46 | return result_str
47 |
48 |
49 | # Load some data into memory
50 | agent.chat("What are your favorite travel destinations?")
51 | agent.chat("Can you recommend some activities in Paris?")
52 |
53 | # Test the semantic search
54 | query = "vacation ideas"
55 | context_from_previous_conversations = semantic_search(query)
56 | print(context_from_previous_conversations)
57 |
--------------------------------------------------------------------------------
/examples/planning/README.md:
--------------------------------------------------------------------------------
1 | # ReAct Pattern Tutorial
2 |
3 | This tutorial demonstrates how to use PixelAgent's ReAct pattern to create agents that can reason about and execute complex multi-step tasks. We'll show how to build agents that effectively combine reasoning and action.
4 |
5 | ## What is ReAct?
6 |
7 | ReAct (Reason+Act) is a pattern where agents alternate between:
8 | - **Reasoning**: Thinking about what to do next and why
9 | - **Acting**: Executing specific actions through tools
10 | - **Observing**: Processing the results of actions
11 |
12 | This creates a powerful loop that helps agents break down complex tasks into manageable steps and handle errors gracefully.
13 |
14 | ## Key Benefits
15 |
16 | - **Structured Problem Solving**: Forces agents to think before they act
17 | - **Better Error Handling**: Agents can reason about failures and try alternative approaches
18 | - **Transparency**: Makes agent decision-making process visible and debuggable
19 | - **Flexibility**: Works with any combination of tools and tasks
20 |
21 | ## How Pixeltable and Pixelagent Make ReAct Easy
22 |
23 | Pixeltable and Pixelagent provide powerful infrastructure that makes implementing ReAct patterns seamless:
24 |
25 | 1. **Automatic Memory Management**:
26 | - Persistent storage of agent conversations and tool outputs
27 | - Flexible memory windows (rolling or infinite)
28 | - No need to manually track conversation history
29 |
30 | 2. **Dynamic System Prompts**:
31 | - Easy injection of current state into prompts
32 | - Built-in support for step tracking and max steps
33 | - Automatic formatting of available tools
34 |
35 | 3. **Data Orchestration**:
36 | - Automatic storage and retrieval of tool outputs
37 | - Built-in support for parallel tool execution
38 | - Seamless handling of multimodal data
39 |
40 | 4. **Clean Implementation**:
41 | - No boilerplate code needed for memory or state management
42 | - Simple agent initialization with sensible defaults
43 | - Built-in support for common agent patterns
44 |
45 | The framework handles all the complex data management and orchestration, letting you focus on designing your agent's reasoning flow. Check out the examples in this directory to see how easy it is to implement ReAct patterns with Pixelagent.
46 |
47 | ## How It Works
48 |
49 | 1. The agent receives a task
50 | 2. **Reason**: Agent thinks about what information or actions are needed
51 | 3. **Act**: Agent uses appropriate tools to gather info or make changes
52 | 4. **Observe**: Agent processes results and updates its understanding
53 | 5. Repeat until the task is complete
54 |
55 | The ReAct pattern is particularly useful for tasks that require:
56 | - Multiple steps to complete
57 | - Dynamic decision making based on intermediate results
58 | - Careful error handling and recovery
59 | - Complex reasoning about tool usage
60 |
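To make the loop concrete before diving into the provider-specific scripts, here is a minimal, self-contained sketch of the pattern. The `llm` function and `TOOLS` table below are stubs invented for illustration; the runnable versions in `openai/react.py` and `anthropic/react.py` replace them with a Pixelagent `Agent` and `@pxt.udf` tools.

```python
from typing import Callable

# Hypothetical stand-ins: a real implementation would call an LLM provider
# and registered tools instead of these stubs.
TOOLS: dict[str, Callable[[str], str]] = {
    "stock_info": lambda q: "(stub) fundamentals for the requested ticker",
}

def llm(prompt: str) -> str:
    # Stub model call; a real model would reason over the prompt.
    return "THOUGHT: I have enough information.\nACTION: FINAL"

def react(question: str, max_steps: int = 5) -> str:
    observations: list[str] = []
    for step in range(1, max_steps + 1):
        # Reason: the model sees the question plus everything observed so far
        reply = llm(f"Step {step}/{max_steps}: {question}\n" + "\n".join(observations))
        action = reply.split("ACTION:")[-1].strip()
        if "FINAL" in action.upper():
            break  # The agent has decided it can answer
        # Act: run whichever tool the action names, then Observe the result
        for name, tool in TOOLS.items():
            if name in action.lower():
                observations.append(f"{name}: {tool(question)}")
    return llm("Give a final answer using:\n" + "\n".join(observations))

print(react("Assess AAPL."))
```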
61 | ## Installation
62 |
63 | Install Pixelagent and its dependencies:
64 |
65 | ```bash
66 | pip install pixelagent yfinance openai anthropic
67 | ```
68 |
69 | ## Getting Started
70 |
71 | You can find complete examples in both the `openai/` and `anthropic/` directories. Here's how to get started:
72 |
73 | ### OpenAI Example
74 |
75 | 1. Set your OpenAI API key:
76 | ```bash
77 | export OPENAI_API_KEY=your_api_key
78 | ```
79 |
80 | 2. Run the example:
81 | ```bash
82 | python openai/react.py
83 | ```
84 |
85 | This will run a financial analysis agent that demonstrates the ReAct pattern by analyzing stock information step-by-step.
86 |
87 | ### Anthropic Example
88 |
89 | 1. Set your Anthropic API key:
90 | ```bash
91 | export ANTHROPIC_API_KEY=your_api_key
92 | ```
93 |
94 | 2. Run the example:
95 | ```bash
96 | python anthropic/react.py
97 | ```
98 |
99 | Both examples show how to:
100 | - Set up a ReAct agent with the appropriate system prompt
101 | - Define and register tools using `@pxt.udf`
102 | - Use Pixeltable's automatic memory management
103 | - Handle the ReAct loop with step tracking
104 |
105 | The examples use different underlying models (GPT-4 vs Claude) but follow the same pattern, demonstrating how Pixelagent makes it easy to swap between different LLM providers while maintaining the same ReAct structure.
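For instance, switching providers is a one-line import change while the agent construction stays the same:

```python
# Swap the import to change providers; the Agent interface is identical.
from pixelagent.openai import Agent
# from pixelagent.anthropic import Agent

agent = Agent(name="financial_analyst_react", system_prompt="...", reset=False)
```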
106 |
--------------------------------------------------------------------------------
/examples/planning/anthropic/react.py:
--------------------------------------------------------------------------------
1 | # ============================================================================
2 | # Stock Advisor Agent Tutorial
3 | # ============================================================================
4 | # This tutorial demonstrates how to build a financial analyst agent that uses
5 | # ReAct (Reasoning + Action) methodology to provide investment recommendations
6 | # The agent follows a structured thought process and can access external data
7 |
8 | import re
9 | from datetime import datetime
10 |
11 | # Import necessary libraries
12 | import pixeltable as pxt # Database for AI agent memory
13 | import yfinance as yf # Financial data API
14 |
15 | from pixelagent.anthropic import Agent # Agent framework
16 |
17 | # ============================================================================
18 | # SECTION 1: DEFINE TOOLS
19 | # ============================================================================
20 | # The agent needs access to external tools to gather financial information
21 | # Here we create a UDF (User-Defined Function) that fetches stock information
22 |
23 |
24 | @pxt.udf
25 | def stock_info(ticker: str) -> dict:
26 | """
27 | Retrieve comprehensive stock information for a given ticker symbol.
28 |
29 | Args:
30 | ticker (str): Stock ticker symbol (e.g., 'AAPL' for Apple)
31 |
32 | Returns:
33 | dict: Dictionary containing stock information and metrics
34 | """
35 | stock = yf.Ticker(ticker)
36 | return stock.info
37 |
38 |
39 | # List of tools available to the agent
40 | react_tools = ["stock_info"]
41 |
42 | # ============================================================================
43 | # SECTION 2: DEFINE THE REACT SYSTEM PROMPT
44 | # ============================================================================
45 | # The ReAct methodology requires a specific prompt structure to guide the agent
46 | # through alternating reasoning and action steps
47 |
48 | REACT_SYSTEM_PROMPT = """
49 | Today is {date}
50 |
51 | IMPORTANT: You have {max_steps} maximum steps. You are on step {step}.
52 |
53 | Follow this EXACT step-by-step reasoning and action pattern:
54 |
55 | 1. THOUGHT: Think about what information you need to answer the user's question.
56 | 2. ACTION: Either use a tool OR write "FINAL" if you're ready to give your final answer.
57 |
58 | Available tools:
59 | {tools}
60 |
61 |
62 | Always structure your response with these exact headings:
63 |
64 | THOUGHT: [your reasoning]
65 | ACTION: [tool_name] OR simply write "FINAL"
66 |
67 | Your memory will automatically update with the tool calling results. Use those results to inform your next action.
68 | """
69 |
70 | # ============================================================================
71 | # SECTION 3: HELPER FUNCTIONS
72 | # ============================================================================
73 | # Utility function to parse agent responses and extract specific sections
74 |
75 |
76 | def extract_section(text, section_name):
77 | """
78 | Extract a specific section from the agent's response text.
79 |
80 | Args:
81 | text (str): The full text response from the agent
82 | section_name (str): The section to extract (e.g., 'THOUGHT', 'ACTION')
83 |
84 | Returns:
85 | str: The extracted section content or empty string if not found
86 | """
87 | pattern = rf"{section_name}:?\s*(.*?)(?=\n\s*(?:THOUGHT|ACTION):|$)"
88 | match = re.search(pattern, text, re.DOTALL | re.IGNORECASE)
89 | return match.group(1).strip() if match else ""
90 |
91 |
92 | # ============================================================================
93 | # SECTION 4: MAIN EXECUTION LOOP
94 | # ============================================================================
95 | # Initialize variables for the ReAct loop
96 | step = 1
97 | max_steps = 10 # Maximum steps before forced termination
98 | question = "Create a detailed investment recommendation for Apple Inc. (AAPL) based on current market conditions."
99 |
100 | # Clear previous agent history before starting
101 | pxt.drop_dir("financial_analyst_react", force=True)
102 |
103 | # Start the ReAct loop - the agent will alternate between thinking and acting
104 | print("Starting Financial Analyst Agent with ReAct methodology...")
105 | while step <= max_steps:
106 |
107 | print(f"Step {step}:\n")
108 |
109 | # Generate the dynamic system prompt with current step information
110 | react_system_prompt = REACT_SYSTEM_PROMPT.format(
111 | date=datetime.now().strftime("%Y-%m-%d"),
112 | tools="\n".join(react_tools),
113 | step=step,
114 | max_steps=max_steps,
115 | )
116 |
117 | print("React System Prompt:\n")
118 | print(react_system_prompt)
119 |
120 | # Initialize the ReAct agent with persistent memory (reset=False)
121 | agent = Agent(
122 | name="financial_analyst_react",
123 | system_prompt=react_system_prompt, # Dynamic React System Prompt
124 | reset=False, # Maintains persistent memory across steps
125 |         # n_latest_messages=10,  # Optional: rolling memory window of the N latest messages
126 | )
127 |
128 | # Get the agent's response for the current step
129 | response = agent.chat(question)
130 | print("Response:")
131 | print(response)
132 |
133 | # Extract the ACTION section to determine next steps
134 | action = extract_section(response, "ACTION")
135 |
136 | # Check if the agent is ready to finalize its answer
137 | if "FINAL" in action.upper():
138 | print("Agent has decided to finalize answer.")
139 | break
140 |
141 | # Determine which tool to call based on the agent's action
142 | call_tool = [tool for tool in react_tools if tool.lower() in action.lower()]
143 |
144 | if call_tool:
145 | print(f"Agent has decided to use tool: {call_tool[0]}")
146 |
147 | # Create a tool-specific agent to handle the tool call
148 | tool_agent = Agent(
149 | name="financial_analyst_react",
150 | system_prompt="Use your tools to answer the user's question.",
151 | tools=pxt.tools(stock_info), # Register the stock_info tool
152 | )
153 |
154 | # Execute the tool call and get results
155 | tool_call_result = tool_agent.tool_call(question)
156 | print("Tool Call Result:\n")
157 | print(tool_call_result)
158 |
159 | # Increment the step counter
160 | step += 1
161 |
162 | # Safety check to prevent infinite loops
163 | if step > max_steps:
164 | print("Reached maximum steps. Forcing termination.")
165 | break
166 |
167 | # ============================================================================
168 | # SECTION 5: GENERATE FINAL ANSWER
169 | # ============================================================================
170 | # Create a final summary agent that accesses the conversation history
171 | print("Generating final investment recommendation...")
172 |
173 | summary_agent = Agent(
174 | name="financial_analyst_react", # Same agent name to access history
175 | system_prompt="Answer the user's question. Use your previous chat history to answer the question.",
176 | )
177 |
178 | # Generate the final comprehensive answer
179 | final_answer = summary_agent.chat(question)
180 | print("Final Investment Recommendation:")
181 | print(final_answer)
182 |
--------------------------------------------------------------------------------
/examples/planning/anthropic/requirements.txt:
--------------------------------------------------------------------------------
1 | pixelagent
2 | anthropic
3 | yfinance
--------------------------------------------------------------------------------
/examples/planning/openai/react.py:
--------------------------------------------------------------------------------
1 | # ============================================================================
2 | # Stock Advisor Agent Tutorial
3 | # ============================================================================
4 | # This tutorial demonstrates how to build a financial analyst agent that uses
5 | # ReAct (Reasoning + Action) methodology to provide investment recommendations
6 | # The agent follows a structured thought process and can access external data
7 |
8 | import re
9 | from datetime import datetime
10 |
11 | # Import necessary libraries
12 | import pixeltable as pxt # Database for AI agent memory
13 | import yfinance as yf # Financial data API
14 |
15 | from pixelagent.openai import Agent # Agent framework
16 |
17 | # ============================================================================
18 | # SECTION 1: DEFINE TOOLS
19 | # ============================================================================
20 | # The agent needs access to external tools to gather financial information
21 | # Here we create a UDF (User-Defined Function) that fetches stock information
22 |
23 |
24 | @pxt.udf
25 | def stock_info(ticker: str) -> dict:
26 | """
27 | Retrieve comprehensive stock information for a given ticker symbol.
28 |
29 | Args:
30 | ticker (str): Stock ticker symbol (e.g., 'AAPL' for Apple)
31 |
32 | Returns:
33 | dict: Dictionary containing stock information and metrics
34 | """
35 | stock = yf.Ticker(ticker)
36 | return stock.info
37 |
38 |
39 | # List of tools available to the agent
40 | react_tools = ["stock_info"]
41 |
42 | # ============================================================================
43 | # SECTION 2: DEFINE THE REACT SYSTEM PROMPT
44 | # ============================================================================
45 | # The ReAct methodology requires a specific prompt structure to guide the agent
46 | # through alternating reasoning and action steps
47 |
48 | REACT_SYSTEM_PROMPT = """
49 | Today is {date}
50 |
51 | IMPORTANT: You have {max_steps} maximum steps. You are on step {step}.
52 |
53 | Follow this EXACT step-by-step reasoning and action pattern:
54 |
55 | 1. THOUGHT: Think about what information you need to answer the user's question.
56 | 2. ACTION: Either use a tool OR write "FINAL" if you're ready to give your final answer.
57 |
58 | Available tools:
59 | {tools}
60 |
61 |
62 | Always structure your response with these exact headings:
63 |
64 | THOUGHT: [your reasoning]
65 | ACTION: [tool_name] OR simply write "FINAL"
66 |
67 | Your memory will automatically update with the tool calling results. Use those results to inform your next action.
68 | """
69 |
70 | # ============================================================================
71 | # SECTION 3: HELPER FUNCTIONS
72 | # ============================================================================
73 | # Utility function to parse agent responses and extract specific sections
74 |
75 |
76 | def extract_section(text, section_name):
77 | """
78 | Extract a specific section from the agent's response text.
79 |
80 | Args:
81 | text (str): The full text response from the agent
82 | section_name (str): The section to extract (e.g., 'THOUGHT', 'ACTION')
83 |
84 | Returns:
85 | str: The extracted section content or empty string if not found
86 | """
87 | pattern = rf"{section_name}:?\s*(.*?)(?=\n\s*(?:THOUGHT|ACTION):|$)"
88 | match = re.search(pattern, text, re.DOTALL | re.IGNORECASE)
89 | return match.group(1).strip() if match else ""
90 |
91 |
92 | # ============================================================================
93 | # SECTION 4: MAIN EXECUTION LOOP
94 | # ============================================================================
95 | # Initialize variables for the ReAct loop
96 | step = 1
97 | max_steps = 10 # Maximum steps before forced termination
98 | question = "Create a detailed investment recommendation for Apple Inc. (AAPL) based on current market conditions."
99 |
100 | # Clear previous agent history before starting
101 | pxt.drop_dir("financial_analyst_react", force=True)
102 |
103 | # Start the ReAct loop - the agent will alternate between thinking and acting
104 | print("Starting Financial Analyst Agent with ReAct methodology...")
105 | while step <= max_steps:
106 |
107 | print(f"Step {step}:\n")
108 |
109 | # Generate the dynamic system prompt with current step information
110 | react_system_prompt = REACT_SYSTEM_PROMPT.format(
111 | date=datetime.now().strftime("%Y-%m-%d"),
112 | tools="\n".join(react_tools),
113 | step=step,
114 | max_steps=max_steps,
115 | )
116 |
117 | print("React System Prompt:\n")
118 | print(react_system_prompt)
119 |
120 | # Initialize the ReAct agent with persistent memory (reset=False)
121 | agent = Agent(
122 | name="financial_analyst_react",
123 | system_prompt=react_system_prompt, # Dynamic React System Prompt
124 | reset=False, # Maintains persistent memory across steps
125 |         # n_latest_messages=10,  # Optional: rolling memory window of the N latest messages
126 | )
127 |
128 | # Get the agent's response for the current step
129 | response = agent.chat(question)
130 | print("Response:")
131 | print(response)
132 |
133 | # Extract the ACTION section to determine next steps
134 | action = extract_section(response, "ACTION")
135 |
136 | # Check if the agent is ready to finalize its answer
137 | if "FINAL" in action.upper():
138 | print("Agent has decided to finalize answer.")
139 | break
140 |
141 | # Determine which tool to call based on the agent's action
142 | call_tool = [tool for tool in react_tools if tool.lower() in action.lower()]
143 |
144 | if call_tool:
145 | print(f"Agent has decided to use tool: {call_tool[0]}")
146 |
147 | # Create a tool-specific agent to handle the tool call
148 | tool_agent = Agent(
149 | name="financial_analyst_react",
150 | system_prompt="Use your tools to answer the user's question.",
151 | tools=pxt.tools(stock_info), # Register the stock_info tool
152 | )
153 |
154 | # Execute the tool call and get results
155 | tool_call_result = tool_agent.tool_call(question)
156 | print("Tool Call Result:\n")
157 | print(tool_call_result)
158 |
159 | # Increment the step counter
160 | step += 1
161 |
162 | # Safety check to prevent infinite loops
163 | if step > max_steps:
164 | print("Reached maximum steps. Forcing termination.")
165 | break
166 |
167 | # ============================================================================
168 | # SECTION 5: GENERATE FINAL ANSWER
169 | # ============================================================================
170 | # Create a final summary agent that accesses the conversation history
171 | print("Generating final investment recommendation...")
172 |
173 | summary_agent = Agent(
174 | name="financial_analyst_react", # Same agent name to access history
175 | system_prompt="Answer the user's question. Use your previous chat history to answer the question.",
176 | )
177 |
178 | # Generate the final comprehensive answer
179 | final_answer = summary_agent.chat(question)
180 | print("Final Investment Recommendation:")
181 | print(final_answer)
182 |
--------------------------------------------------------------------------------
/examples/planning/openai/requirements.txt:
--------------------------------------------------------------------------------
1 | pixelagent
2 | openai
3 | yfinance
--------------------------------------------------------------------------------
/examples/reflection/anthropic/reflection.py:
--------------------------------------------------------------------------------
1 | # ============================================================================
2 | # Agent Reflection Loop Tutorial
3 | # ============================================================================
4 | # This tutorial demonstrates how to implement a self-improving agent using
5 | # a reflection loop pattern. The agent generates responses, reflects on them,
6 | # and iteratively improves based on its own critique.
7 |
8 | import pixeltable as pxt
9 | import yfinance as yf
10 |
11 | from pixelagent.anthropic import Agent
12 |
13 | # ============================================================================
14 | # SECTION 1: REFLECTION LOOP CORE FUNCTION
15 | # ============================================================================
16 | # This function implements the full reflection cycle for agent improvement
17 |
18 |
19 | def reflection_loop(
20 | main_agent: Agent,
21 | reflection_agent: Agent,
22 | user_msg: str,
23 | max_iterations: int = 3,
24 | verbose: int = 0,
25 | is_tool_call: bool = False,
26 | ) -> str:
27 | """
28 | Run a complete reflection loop for a user message or tool call.
29 |
30 | The reflection loop follows these steps:
31 | 1. Generate initial response
32 | 2. Critique the response
33 | 3. Improve based on critique
34 | 4. Repeat until satisfied or max iterations reached
35 |
36 | Args:
37 | main_agent (Agent): The main PixelAgent instance that generates responses
38 | reflection_agent (Agent): The reflection PixelAgent instance that critiques
39 | user_msg (str): The user message or query to process
40 | max_iterations (int): Maximum number of reflection-improvement cycles
41 | verbose (int): Verbosity level (0=quiet, 1=show all steps)
42 | is_tool_call (bool): Whether to use tool_call instead of chat
43 |
44 | Returns:
45 | str: The final refined response after reflection
46 | """
47 | # Step 1: Initial response generation
48 | if is_tool_call:
49 | response = main_agent.tool_call(user_msg)
50 | original_result = response # Save original tool call result for context
51 | else:
52 | response = main_agent.chat(user_msg)
53 |
54 | # Print initial response if verbose mode is enabled
55 | if verbose > 0:
56 | print(
57 | f"\n\nINITIAL {'TOOL CALL' if is_tool_call else 'GENERATION'}\n\n", response
58 | )
59 |
60 | # Step 2-4: Reflection and improvement iterations
61 | for i in range(max_iterations):
62 | if verbose > 0:
63 | print(f"\n\nITERATION {i+1}/{max_iterations}\n\n")
64 |
65 | # Generate critique of the current response
66 | critique = reflection_agent.chat(
67 | f"Please critique the following response:\n\n{response}"
68 | )
69 |
70 | if verbose > 0:
71 | print("\n\nREFLECTION\n\n", critique)
72 |
73 |         # Check if the reflection agent is satisfied (it outputs the <OK> sentinel when done)
74 |         if "<OK>" in critique:
75 | if verbose > 0:
76 | print(f"\n\nResponse is satisfactory. Stopping reflection loop.\n\n")
77 | break
78 |
79 | # Refine the response based on the critique
80 | if is_tool_call:
81 | # For tool calls, include the original result for context
82 | prompt = (
83 | f"The following was the result of a tool call: {original_result}\n\n"
84 | f"Please improve this result based on this critique:\n\n{critique}"
85 | )
86 | else:
87 | # For regular chat, just ask for improvements based on critique
88 | prompt = f"Please improve your previous response based on this critique:\n\n{critique}"
89 |
90 | # Generate improved response
91 | response = main_agent.chat(prompt)
92 |
93 | if verbose > 0:
94 | print(
95 | f"\n\nREFINED {'TOOL CALL RESULT' if is_tool_call else 'RESPONSE'}\n\n",
96 | response,
97 | )
98 |
99 | # Return the final refined response
100 | return response
101 |
102 |
103 | # ============================================================================
104 | # SECTION 2: TOOL DEFINITION
105 | # ============================================================================
106 | # Define a financial data tool that our agent can use
107 |
108 |
109 | @pxt.udf
110 | def stock_info(ticker: str) -> dict:
111 | """
112 | Retrieve stock information for a given ticker symbol.
113 |
114 | Args:
115 | ticker (str): Stock ticker symbol (e.g., 'AAPL' for Apple)
116 |
117 | Returns:
118 | dict: Dictionary containing stock information and metrics
119 | """
120 | stock = yf.Ticker(ticker)
121 | return stock.info
122 |
123 |
124 | # ============================================================================
125 | # SECTION 3: AGENT SETUP
126 | # ============================================================================
127 | # Configure the main agent and reflection agent with appropriate prompts
128 |
129 | # Register the stock_info tool
130 | tools = pxt.tools(stock_info)
131 |
132 | # Main agent that generates responses and uses tools
133 | main_agent = Agent(
134 | name="main_bot",
135 | system_prompt="You're a helpful financial assistant.",
136 | tools=tools,
137 | reset=True, # Start with a fresh conversation history
138 | )
139 |
140 | # Reflection agent that critiques responses
141 | reflection_agent = Agent(
142 | name="reflection_bot",
143 | system_prompt="""
144 | You are tasked with generating critique and recommendations for responses.
145 | If the response has something wrong or something to be improved, output a list of recommendations
146 | and critiques. If the response is good and there's nothing to change, output this: <OK>
147 | """,
148 | reset=True, # Start with a fresh conversation history
149 | )
150 |
151 | # ============================================================================
152 | # SECTION 4: EXAMPLE USAGE
153 | # ============================================================================
154 | # Demonstrate how to use the reflection loop with a financial query
155 |
156 | if __name__ == "__main__":
157 | # Example query that uses the stock_info tool
158 | tool_query = "Get NVIDIA and Apple stock price and explain what the results mean"
159 |
160 | # Run the reflection loop
161 | refined_tool_result = reflection_loop(
162 | main_agent=main_agent,
163 | reflection_agent=reflection_agent,
164 | user_msg=tool_query,
165 | verbose=1, # Show all steps
166 | is_tool_call=True, # Use tool_call instead of chat
167 | )
168 |
169 | # Output the final result
170 | print("\nFINAL TOOL RESULT:", refined_tool_result)
171 |
--------------------------------------------------------------------------------
/examples/reflection/anthropic/requirements.txt:
--------------------------------------------------------------------------------
1 | pixelagent
2 | anthropic
3 | yfinance
--------------------------------------------------------------------------------
/examples/reflection/openai/reflection.py:
--------------------------------------------------------------------------------
1 | # ============================================================================
2 | # Agent Reflection Loop Tutorial
3 | # ============================================================================
4 | # This tutorial demonstrates how to implement a self-improving agent using
5 | # a reflection loop pattern. The agent generates responses, reflects on them,
6 | # and iteratively improves based on its own critique.
7 |
8 | import pixeltable as pxt
9 | import yfinance as yf
10 |
11 | from pixelagent.openai import Agent
12 |
13 | # ============================================================================
14 | # SECTION 1: REFLECTION LOOP CORE FUNCTION
15 | # ============================================================================
16 | # This function implements the full reflection cycle for agent improvement
17 |
18 |
19 | def reflection_loop(
20 | main_agent: Agent,
21 | reflection_agent: Agent,
22 | user_msg: str,
23 | max_iterations: int = 3,
24 | verbose: int = 0,
25 | is_tool_call: bool = False,
26 | ) -> str:
27 | """
28 | Run a complete reflection loop for a user message or tool call.
29 |
30 | The reflection loop follows these steps:
31 | 1. Generate initial response
32 | 2. Critique the response
33 | 3. Improve based on critique
34 | 4. Repeat until satisfied or max iterations reached
35 |
36 | Args:
37 | main_agent (Agent): The main PixelAgent instance that generates responses
38 | reflection_agent (Agent): The reflection PixelAgent instance that critiques
39 | user_msg (str): The user message or query to process
40 | max_iterations (int): Maximum number of reflection-improvement cycles
41 | verbose (int): Verbosity level (0=quiet, 1=show all steps)
42 | is_tool_call (bool): Whether to use tool_call instead of chat
43 |
44 | Returns:
45 | str: The final refined response after reflection
46 | """
47 | # Step 1: Initial response generation
48 | if is_tool_call:
49 | response = main_agent.tool_call(user_msg)
50 | original_result = response # Save original tool call result for context
51 | else:
52 | response = main_agent.chat(user_msg)
53 |
54 | # Print initial response if verbose mode is enabled
55 | if verbose > 0:
56 | print(
57 | f"\n\nINITIAL {'TOOL CALL' if is_tool_call else 'GENERATION'}\n\n", response
58 | )
59 |
60 | # Step 2-4: Reflection and improvement iterations
61 | for i in range(max_iterations):
62 | if verbose > 0:
63 | print(f"\n\nITERATION {i+1}/{max_iterations}\n\n")
64 |
65 | # Generate critique of the current response
66 | critique = reflection_agent.chat(
67 | f"Please critique the following response:\n\n{response}"
68 | )
69 |
70 | if verbose > 0:
71 | print("\n\nREFLECTION\n\n", critique)
72 |
73 |         # Check if the reflection agent is satisfied (it outputs the <OK> sentinel when done)
74 |         if "<OK>" in critique:
75 | if verbose > 0:
76 | print(f"\n\nResponse is satisfactory. Stopping reflection loop.\n\n")
77 | break
78 |
79 | # Refine the response based on the critique
80 | if is_tool_call:
81 | # For tool calls, include the original result for context
82 | prompt = (
83 | f"The following was the result of a tool call: {original_result}\n\n"
84 | f"Please improve this result based on this critique:\n\n{critique}"
85 | )
86 | else:
87 | # For regular chat, just ask for improvements based on critique
88 | prompt = f"Please improve your previous response based on this critique:\n\n{critique}"
89 |
90 | # Generate improved response
91 | response = main_agent.chat(prompt)
92 |
93 | if verbose > 0:
94 | print(
95 | f"\n\nREFINED {'TOOL CALL RESULT' if is_tool_call else 'RESPONSE'}\n\n",
96 | response,
97 | )
98 |
99 | # Return the final refined response
100 | return response
101 |
102 |
103 | # ============================================================================
104 | # SECTION 2: TOOL DEFINITION
105 | # ============================================================================
106 | # Define a financial data tool that our agent can use
107 |
108 |
109 | @pxt.udf
110 | def stock_info(ticker: str) -> dict:
111 | """
112 | Retrieve stock information for a given ticker symbol.
113 |
114 | Args:
115 | ticker (str): Stock ticker symbol (e.g., 'AAPL' for Apple)
116 |
117 | Returns:
118 | dict: Dictionary containing stock information and metrics
119 | """
120 | stock = yf.Ticker(ticker)
121 | return stock.info
122 |
123 |
124 | # ============================================================================
125 | # SECTION 3: AGENT SETUP
126 | # ============================================================================
127 | # Configure the main agent and reflection agent with appropriate prompts
128 |
129 | # Register the stock_info tool
130 | tools = pxt.tools(stock_info)
131 |
132 | # Main agent that generates responses and uses tools
133 | main_agent = Agent(
134 | name="main_bot",
135 | system_prompt="You're a helpful financial assistant.",
136 | tools=tools,
137 | reset=True, # Start with a fresh conversation history
138 | )
139 |
140 | # Reflection agent that critiques responses
141 | reflection_agent = Agent(
142 | name="reflection_bot",
143 | system_prompt="""
144 | You are tasked with generating critique and recommendations for responses.
145 | If the response has something wrong or something to be improved, output a list of recommendations
146 | and critiques. If the response is good and there's nothing to change, output this: <OK>
147 | """,
148 | reset=True, # Start with a fresh conversation history
149 | )
150 |
151 | # ============================================================================
152 | # SECTION 4: EXAMPLE USAGE
153 | # ============================================================================
154 | # Demonstrate how to use the reflection loop with a financial query
155 |
156 | if __name__ == "__main__":
157 | # Example query that uses the stock_info tool
158 | tool_query = "Get NVIDIA and Apple stock price and explain what the results mean"
159 |
160 | # Run the reflection loop
161 | refined_tool_result = reflection_loop(
162 | main_agent=main_agent,
163 | reflection_agent=reflection_agent,
164 | user_msg=tool_query,
165 | verbose=1, # Show all steps
166 | is_tool_call=True, # Use tool_call instead of chat
167 | )
168 |
169 | # Output the final result
170 | print("\nFINAL TOOL RESULT:", refined_tool_result)
171 |
--------------------------------------------------------------------------------
/examples/reflection/openai/requirements.txt:
--------------------------------------------------------------------------------
1 | pixelagent
2 | openai
3 | yfinance
--------------------------------------------------------------------------------
/examples/reflection/reflection.md:
--------------------------------------------------------------------------------
1 | # Self-Reflection Pattern Tutorial
2 |
3 | This tutorial shows you how to use PixelAgent's self-reflection pattern to create agents that can critique and improve their own responses. We'll build a financial analysis agent that gets better through self-reflection.
4 |
5 | ## What is Self-Reflection?
6 |
7 | Self-reflection is a pattern where one agent critiques another agent's output to improve it. The pattern uses:
8 | - A main agent that generates responses
9 | - A reflection agent that critiques and suggests improvements
10 | - Automatic tool handling and memory management
11 |
12 | ## Basic Example
13 |
14 | Here's a simple example of the reflection pattern:
15 |
16 | ```bash
17 | pip install pixelagent openai yfinance
18 | ```
19 |
20 | ```python
21 | from pixelagent.openai import Agent
22 | import pixeltable as pxt
23 | import yfinance as yf
24 |
25 | # First, define a financial analysis tool
26 | @pxt.udf
27 | def stock_info(ticker: str) -> dict:
28 | """Get stock information from Yahoo Finance"""
29 | stock = yf.Ticker(ticker)
30 | return stock.info
31 |
32 | # Create the main agent with the tool
33 | tools = pxt.tools(stock_info)
34 | main_agent = Agent(
35 | name="finance_bot",
36 | system_prompt="You're a financial analyst. Provide clear insights about stocks.",
37 | tools=tools, # Tools are automatically handled - no manual tool calling needed
38 | reset=True
39 | )
40 |
41 | # Create the reflection agent
42 | reflection_agent = Agent(
43 | name="critic",
44 | system_prompt="""
45 | Review financial analyses for:
46 | 1. Data accuracy and completeness
47 | 2. Clear explanations of metrics
48 | 3. Risk assessment
49 | If improvements needed, list recommendations.
50 |     If analysis is good, output: <OK>
51 | """,
52 | reset=True
53 | )
54 |
55 | # Run the reflection loop
56 | def reflection_loop(
57 | main_agent: Agent,
58 | reflection_agent: Agent,
59 | user_msg: str,
60 | max_iterations: int = 3,
61 | verbose: int = 0,
62 | is_tool_call: bool = False
63 | ) -> str:
64 | """
65 | Run a complete reflection loop for a user message or tool call.
66 |
67 | The reflection loop follows these steps:
68 | 1. Generate initial response
69 | 2. Critique the response
70 | 3. Improve based on critique
71 | 4. Repeat until satisfied or max iterations reached
72 |
73 | Args:
74 | main_agent (Agent): The main PixelAgent instance that generates responses
75 | reflection_agent (Agent): The reflection PixelAgent instance that critiques
76 | user_msg (str): The user message or query to process
77 | max_iterations (int): Maximum number of reflection-improvement cycles
78 | verbose (int): Verbosity level (0=quiet, 1=show all steps)
79 | is_tool_call (bool): Whether to use tool_call instead of chat
80 |
81 | Returns:
82 | str: The final refined response after reflection
83 | """
84 | # Step 1: Initial response generation
85 | if is_tool_call:
86 | response = main_agent.tool_call(user_msg)
87 | original_result = response # Save original tool call result for context
88 | else:
89 | response = main_agent.chat(user_msg)
90 |
91 | # Print initial response if verbose mode is enabled
92 | if verbose > 0:
93 | print(f"\n\nINITIAL {'TOOL CALL' if is_tool_call else 'GENERATION'}\n\n", response)
94 |
95 | # Step 2-4: Reflection and improvement iterations
96 | for i in range(max_iterations):
97 | if verbose > 0:
98 | print(f"\n\nITERATION {i+1}/{max_iterations}\n\n")
99 |
100 | # Generate critique of the current response
101 | critique = reflection_agent.chat(f"Please critique the following response:\n\n{response}")
102 |
103 | if verbose > 0:
104 | print("\n\nREFLECTION\n\n", critique)
105 |
106 |         # Check if the reflection agent is satisfied (it outputs the <OK> sentinel when done)
107 |         if "<OK>" in critique:
108 | if verbose > 0:
109 | print(f"\n\nResponse is satisfactory. Stopping reflection loop.\n\n")
110 | break
111 |
112 | # Refine the response based on the critique
113 | if is_tool_call:
114 | # For tool calls, include the original result for context
115 | prompt = f"The following was the result of a tool call: {original_result}\n\n" \
116 | f"Please improve this result based on this critique:\n\n{critique}"
117 | else:
118 | # For regular chat, just ask for improvements based on critique
119 | prompt = f"Please improve your previous response based on this critique:\n\n{critique}"
120 |
121 | # Generate improved response
122 | response = main_agent.chat(prompt)
123 |
124 | if verbose > 0:
125 | print(f"\n\nREFINED {'TOOL CALL RESULT' if is_tool_call else 'RESPONSE'}\n\n", response)
126 |
127 | # Return the final refined response
128 | return response
129 |
130 | analysis = reflection_loop(
131 | main_agent,
132 | reflection_agent,
133 | "Analyze NVDA stock performance",
134 | max_iterations=2,
135 | is_tool_call=True # Automatically handles tool calling
136 | )
137 |
138 | print(analysis)
139 | ```
140 |
141 | ## How the Pattern Works
142 |
143 | The reflection loop follows these steps:
144 |
145 | 1. **Initial Analysis**: The main agent uses `stock_info` to get data and generate analysis
146 | 2. **Review**: The reflection agent checks the analysis quality
147 | 3. **Refinement**: If needed, the main agent improves based on feedback
148 | 4. **Repeat**: Until quality meets standards or max iterations reached
149 |
150 | Key features that make this work:
151 | - Tool calls are handled automatically - no need to manage request/response
152 | - Conversation history is saved automatically
153 | - Original data context is preserved during refinement
154 |
155 | ## Step-by-Step Breakdown
156 |
157 | Let's look at what happens in each iteration:
158 |
159 | ```python
160 | # 1. Initial tool call and analysis
161 | response = main_agent.tool_call("Analyze NVDA stock")
162 | # Gets stock data and generates initial analysis
163 |
164 | # 2. Reflection agent review
165 | critique = reflection_agent.chat(f"Review this analysis:\n{response}")
166 | # Might suggest: "Add P/E ratio context, discuss industry trends"
167 |
168 | # 3. Main agent refinement
169 | improved = main_agent.chat(
170 | f"Improve analysis based on: {critique}"
171 | )
172 | # Enhances analysis with suggested improvements
173 | ```
174 |
175 | ## Common Use Cases
176 |
177 | The reflection pattern works well for:
178 | - Stock analysis reports
179 | - Financial recommendations
180 | - Market trend analysis
181 | - Risk assessments
182 |
183 | ## Tips for Better Results
184 |
185 | 1. **Clear Criteria**: Give your reflection agent specific review points
186 | 2. **Preserve Context**: The pattern automatically keeps original data available
187 | 3. **Iteration Balance**: 2-3 iterations usually give best results
188 | 4. **Tool Integration**: Use PixelAgent's built-in tool handling
189 |
190 | ## Example: Enhanced Financial Analysis
191 |
192 | Here's a more detailed example showing how to build a comprehensive stock analyzer:
193 |
194 | ```python
195 | # Define additional financial tools
196 | @pxt.udf
197 | def market_news(ticker: str) -> list:
198 | """Get recent news about the stock"""
199 | stock = yf.Ticker(ticker)
200 | return stock.news
201 |
202 | tools = pxt.tools([stock_info, market_news])
203 |
204 | main_agent = Agent(
205 | name="finance_expert",
206 | system_prompt="""
207 | You are a thorough financial analyst.
208 | - Use stock_info for fundamental data
209 | - Use market_news for recent developments
210 | - Combine quantitative and qualitative analysis
211 | """,
212 | tools=tools,
213 | reset=True
214 | )
215 |
216 | reflection_agent = Agent(
217 | name="finance_critic",
218 | system_prompt="""
219 | Evaluate financial analysis for:
220 | 1. Data accuracy and completeness
221 | 2. Market context and trends
222 | 3. Risk assessment
223 | 4. Clear actionable insights
224 |     Output <OK> if analysis meets all criteria.
225 | """,
226 | reset=True
227 | )
228 |
229 | # Run analysis with automatic tool handling
230 | analysis = reflection_loop(
231 | main_agent,
232 | reflection_agent,
233 | "Should investors consider NVDA stock now?",
234 | max_iterations=2,
235 | is_tool_call=True
236 | )
237 | ```
238 |
239 | ## Next Steps
240 |
241 | 1. Try modifying the example code with different tools
242 | 2. Experiment with different reflection criteria
243 | 3. Adjust max_iterations to find the right balance
--------------------------------------------------------------------------------
/examples/tool-calling/anthropic/finance.py:
--------------------------------------------------------------------------------
1 | import random
2 |
3 | import pixeltable as pxt
4 | import yfinance as yf
5 |
6 | from pixelagent.anthropic import Agent
7 |
8 |
9 | # Define a simple tool with a description
10 | @pxt.udf
11 | def stock_price(ticker: str) -> dict:
12 |     """Retrieve current stock data (including the price) for a given ticker symbol."""
13 | stock = yf.Ticker(ticker)
14 | return stock.info
15 |
16 |
17 | # Define a new tool to get a random trading action
18 | @pxt.udf
19 | def analyst_recommendation(ticker: str) -> str:
20 | """Randomly select a trading action: buy, sell, or hold."""
21 | return random.choice(["buy", "sell", "hold"])
22 |
23 |
24 | # Create an agent with tools
25 | agent = Agent(
26 | name="financial_analyst",
27 | system_prompt="You are a CFA working at a top-tier investment bank.",
28 | tools=pxt.tools(stock_price, analyst_recommendation),
29 | reset=True,
30 | )
31 |
32 | # Test chat and tool_call functionality
33 | print("--------------")
34 | print(agent.chat("Hi, how are you?"))
35 | print("--------------")
36 | print(agent.tool_call("Get NVIDIA and Apple stock price"))
37 | print("--------------")
38 | print(agent.chat("What was my last question?"))
39 | print("--------------")
40 | print(agent.chat("What's the recommendation for NVIDIA?"))
41 |
42 | agent_memory = pxt.get_table("financial_analyst.memory")
43 | print(agent_memory.select(agent_memory.content).collect())
44 |
--------------------------------------------------------------------------------
/examples/tool-calling/anthropic/requirements.txt:
--------------------------------------------------------------------------------
1 | pixelagent
2 | yfinance
3 | anthropic
4 |
--------------------------------------------------------------------------------
/examples/tool-calling/bedrock/finance.py:
--------------------------------------------------------------------------------
1 | import random
2 |
3 | import pixeltable as pxt
4 | import yfinance as yf
5 |
6 | from pixelagent.bedrock import Agent
7 |
8 |
9 | # Define a simple tool with a description
10 | @pxt.udf
11 | def stock_price(ticker: str) -> dict:
12 |     """Retrieve current stock data (including the price) for a given ticker symbol."""
13 | stock = yf.Ticker(ticker)
14 | return stock.info
15 |
16 |
17 | # Define a new tool to get a random trading action
18 | @pxt.udf
19 | def analyst_recommendation(ticker: str) -> str:
20 | """Randomly select a trading action: buy, sell, or hold."""
21 | return random.choice(["buy", "sell", "hold"])
22 |
23 |
24 | # Create an agent with tools
25 | agent = Agent(
26 | name="financial_analyst",
27 | system_prompt="You are a CFA working at a top-tier investment bank.",
28 | tools=pxt.tools(stock_price, analyst_recommendation),
29 | reset=True,
30 | )
31 |
32 | # Test chat and tool_call functionality
33 | print("--------------")
34 | print(agent.chat("Hi, how are you?"))
35 | print("--------------")
36 | print(agent.tool_call("Get NVIDIA and Apple stock price"))
37 | print("--------------")
38 | print(agent.chat("What was my last question?"))
39 | print("--------------")
40 | print(agent.chat("What's the recommendation for NVIDIA?"))
41 |
42 | agent_memory = pxt.get_table("financial_analyst.memory")
43 | print(agent_memory.select(agent_memory.content).collect())
44 |
--------------------------------------------------------------------------------
/examples/tool-calling/bedrock/requirements.txt:
--------------------------------------------------------------------------------
1 | pixelagent
2 | yfinance
3 | boto3
4 |
--------------------------------------------------------------------------------
/examples/tool-calling/openai/finance.py:
--------------------------------------------------------------------------------
1 | import random
2 |
3 | import pixeltable as pxt
4 | import yfinance as yf
5 |
6 | from pixelagent.openai import Agent
7 |
8 |
9 | # Define a simple tool with a description
10 | @pxt.udf
11 | def stock_price(ticker: str) -> dict:
12 |     """Retrieve current stock data (including the price) for a given ticker symbol."""
13 | stock = yf.Ticker(ticker)
14 | return stock.info
15 |
16 |
17 | # Define a new tool to get a random trading action
18 | @pxt.udf
19 | def analyst_recommendation(ticker: str) -> str:
20 | """Randomly select a trading action: buy, sell, or hold."""
21 | return random.choice(["buy", "sell", "hold"])
22 |
23 |
24 | # Create an agent with tools
25 | agent = Agent(
26 | name="financial_analyst",
27 | system_prompt="You are a CFA working at a top-tier investment bank.",
28 | tools=pxt.tools(stock_price, analyst_recommendation),
29 | reset=True,
30 | )
31 |
32 | # Test chat and tool_call functionality
33 | print("--------------")
34 | print(agent.chat("Hi, how are you?"))
35 | print("--------------")
36 | print(agent.tool_call("Get NVIDIA and Apple stock price"))
37 | print("--------------")
38 | print(agent.chat("What was my last question?"))
39 | print("--------------")
40 | print(agent.chat("What's the recommendation for NVIDIA?"))
41 |
42 | agent_memory = pxt.get_table("financial_analyst.memory")
43 | print(agent_memory.select(agent_memory.content).collect())
44 |
--------------------------------------------------------------------------------
/examples/tool-calling/openai/requirements.txt:
--------------------------------------------------------------------------------
1 | pixelagent
2 | yfinance
3 | openai
4 |
--------------------------------------------------------------------------------
/examples/vision/image-analyzer.py:
--------------------------------------------------------------------------------
1 | from pixelagent.anthropic import Agent
2 |
3 | vision_agent = Agent(
4 | name="anthropic_vision_agent",
5 | system_prompt="Answer questions about the image",
6 | reset=True,
7 | chat_kwargs={"temperature": 0.0},
8 | )
9 |
10 | url = "https://raw.githubusercontent.com/pixeltable/pixeltable/main/docs/resources/port-townsend-map.jpeg"
11 | print(vision_agent.chat("What is in the image?", image=url))
12 |
--------------------------------------------------------------------------------
/examples/vision/requirements.txt:
--------------------------------------------------------------------------------
1 | pixelagent
2 | anthropic
3 |
--------------------------------------------------------------------------------
/pixelagent/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/pixeltable/pixelagent/c1d6f09d5b572a44b812a0935a16adc4cc5d8951/pixelagent/__init__.py
--------------------------------------------------------------------------------
/pixelagent/anthropic/__init__.py:
--------------------------------------------------------------------------------
1 | from .agent import Agent
2 |
3 | __all__ = ["Agent"]
4 |
--------------------------------------------------------------------------------
/pixelagent/anthropic/agent.py:
--------------------------------------------------------------------------------
1 | from typing import Optional
2 |
3 | import pixeltable as pxt
4 | import pixeltable.functions as pxtf
5 |
6 | from pixelagent.core.base import BaseAgent
7 |
8 | from .utils import create_messages
9 |
10 | try:
11 | from pixeltable.functions.anthropic import invoke_tools, messages
12 | except ImportError:
13 | raise ImportError("anthropic not found; run `pip install anthropic`")
14 |
15 |
16 | class Agent(BaseAgent):
17 | """
18 | Anthropic-specific implementation of the BaseAgent.
19 |
20 | This agent uses Anthropic's Claude API for generating responses and handling tools.
21 | It inherits common functionality from BaseAgent including:
22 | - Table setup and management
23 | - Memory persistence
24 | - Base chat and tool call implementations
25 |
26 | The agent supports both limited and unlimited conversation history through
27 | the n_latest_messages parameter.
28 | """
29 |
30 | def __init__(
31 | self,
32 | name: str,
33 | system_prompt: str,
34 | model: str = "claude-3-5-sonnet-latest",
35 | n_latest_messages: Optional[int] = 10,
36 | tools: Optional[pxt.tools] = None,
37 | reset: bool = False,
38 | chat_kwargs: Optional[dict] = None,
39 | tool_kwargs: Optional[dict] = None,
40 | ):
41 | # Initialize the base agent with all common parameters
42 | super().__init__(
43 | name=name,
44 | system_prompt=system_prompt,
45 | model=model,
46 | n_latest_messages=n_latest_messages, # None for unlimited history
47 | tools=tools,
48 | reset=reset,
49 | chat_kwargs=chat_kwargs,
50 | tool_kwargs=tool_kwargs,
51 | )
52 |
53 | def _setup_chat_pipeline(self):
54 | """
55 | Configure the chat completion pipeline using Pixeltable's computed columns.
56 | This method implements the abstract method from BaseAgent.
57 |
58 | The pipeline consists of 4 steps:
59 | 1. Retrieve recent messages from memory
60 | 2. Format messages for Claude
61 | 3. Get completion from Anthropic
62 | 4. Extract the response text
63 |
64 | Note: The pipeline automatically handles memory limits based on n_latest_messages.
65 | When set to None, it maintains unlimited conversation history.
66 | """
67 |
68 | # Step 1: Define a query to get recent messages
69 | @pxt.query
70 | def get_recent_memory(current_timestamp: pxt.Timestamp) -> list[dict]:
71 | """
72 | Get recent messages from memory, respecting n_latest_messages limit if set.
73 | Messages are ordered by timestamp (newest first).
74 | Returns all messages if n_latest_messages is None.
75 | """
76 | query = (
77 | self.memory.where(self.memory.timestamp < current_timestamp)
78 | .order_by(self.memory.timestamp, asc=False)
79 | .select(role=self.memory.role, content=self.memory.content)
80 | )
81 | if self.n_latest_messages is not None:
82 | query = query.limit(self.n_latest_messages)
83 | return query
84 |
85 | # Step 2: Add computed columns to process the conversation
86 | # First, get the conversation history
87 | self.agent.add_computed_column(
88 | memory_context=get_recent_memory(self.agent.timestamp), if_exists="ignore"
89 | )
90 |
91 | # Format messages for Claude (simpler than OpenAI as system prompt is passed separately)
92 | self.agent.add_computed_column(
93 | messages=create_messages(
94 | self.agent.memory_context,
95 | self.agent.user_message,
96 | self.agent.image,
97 | ),
98 | if_exists="ignore",
99 | )
100 |
101 | # Get Claude's API response (note: system prompt passed directly to messages())
102 | self.agent.add_computed_column(
103 | response=messages(
104 | messages=self.agent.messages,
105 | model=self.model,
106 | system=self.system_prompt, # Claude handles system prompt differently
107 | **self.chat_kwargs,
108 | ),
109 | if_exists="ignore",
110 | )
111 |
112 | # Extract the final response text from Claude's specific response format
113 | self.agent.add_computed_column(
114 | agent_response=self.agent.response.content[0].text, if_exists="ignore"
115 | )
116 |
117 | def _setup_tools_pipeline(self):
118 | """
119 | Configure the tool execution pipeline using Pixeltable's computed columns.
120 | This method implements the abstract method from BaseAgent.
121 |
122 | The pipeline has 4 stages:
123 | 1. Get initial response from Claude with potential tool calls
124 | 2. Execute any requested tools
125 | 3. Format tool results for follow-up
126 | 4. Get final response incorporating tool outputs
127 |
128 | Note: Claude's tool calling format differs slightly from OpenAI's,
129 | but the overall flow remains the same thanks to BaseAgent abstraction.
130 | """
131 | # Stage 1: Get initial response with potential tool calls
132 | self.tools_table.add_computed_column(
133 | initial_response=messages(
134 | model=self.model,
135 | system=self.system_prompt, # Include system prompt for consistent behavior
136 | messages=[{"role": "user", "content": self.tools_table.tool_prompt}],
137 | tools=self.tools, # Pass available tools to Claude
138 | **self.tool_kwargs,
139 | ),
140 | if_exists="ignore",
141 | )
142 |
143 | # Stage 2: Execute any tools that Claude requested
144 | self.tools_table.add_computed_column(
145 | tool_output=invoke_tools(self.tools, self.tools_table.initial_response),
146 | if_exists="ignore",
147 | )
148 |
149 | # Stage 3: Format tool results for follow-up
150 | self.tools_table.add_computed_column(
151 | tool_response_prompt=pxtf.string.format(
152 | "{0}: {1}", self.tools_table.tool_prompt, self.tools_table.tool_output
153 | ),
154 | if_exists="ignore",
155 | )
156 |
157 | # Stage 4: Get final response incorporating tool results
158 | self.tools_table.add_computed_column(
159 | final_response=messages(
160 | model=self.model,
161 | system=self.system_prompt,
162 | messages=[
163 | {"role": "user", "content": self.tools_table.tool_response_prompt}
164 | ],
165 | **self.tool_kwargs,
166 | ),
167 | if_exists="ignore",
168 | )
169 |
170 | # Extract the final response text from Claude's format
171 | self.tools_table.add_computed_column(
172 | tool_answer=self.tools_table.final_response.content[0].text,
173 | if_exists="ignore",
174 | )
175 |
--------------------------------------------------------------------------------
/pixelagent/anthropic/utils.py:
--------------------------------------------------------------------------------
1 | import base64
2 | import io
3 | from typing import Optional
4 |
5 | import PIL.Image
6 | import pixeltable as pxt
7 |
8 |
9 | @pxt.udf
10 | def create_messages(
11 | memory_context: list[dict],
12 | current_message: str,
13 | image: Optional[PIL.Image.Image] = None,
14 | ) -> list[dict]:
15 |     """Format memory and the current message (plus optional image) for Claude."""
16 |     # Memory arrives newest-first from the pipeline; reverse to chronological order
17 |     messages = list(reversed(memory_context))
18 |
19 | # For text-only messages
20 | if not image:
21 | messages.append({"role": "user", "content": current_message})
22 | return messages
23 |
24 | # Convert image to base64
25 | bytes_arr = io.BytesIO()
26 | image.save(bytes_arr, format="JPEG")
27 | b64_bytes = base64.b64encode(bytes_arr.getvalue())
28 | b64_encoded_image = b64_bytes.decode("utf-8")
29 |
30 | # Create content blocks with text and image
31 | content_blocks = [
32 | {"type": "text", "text": current_message},
33 | {
34 | "type": "image",
35 | "source": {
36 | "type": "base64",
37 | "media_type": "image/jpeg",
38 | "data": b64_encoded_image,
39 | },
40 | },
41 | ]
42 |
43 | messages.append({"role": "user", "content": content_blocks})
44 |
45 | return messages
46 |
--------------------------------------------------------------------------------
/pixelagent/bedrock/README.md:
--------------------------------------------------------------------------------
1 | # AWS Bedrock Agent for Pixelagent
2 |
3 | This module implements an AWS Bedrock agent that inherits from the core BaseAgent class. The agent supports:
4 |
5 | 1. Conversational memory for regular chat interactions
6 | 2. Tool calling without conversational memory
7 | 3. Integration with AWS Bedrock models (e.g., Amazon Nova and Anthropic Claude)
8 |
9 | ## Implementation Details
10 |
11 | The Bedrock agent implementation consists of three main files:
12 |
13 | - `agent.py`: Contains the main Agent class that inherits from BaseAgent
14 | - `utils.py`: Contains utility functions for formatting messages for Bedrock
15 | - `test.py`: Contains a test script to verify the agent's functionality
16 |
17 | ### Important Implementation Notes
18 |
19 | 1. The BaseAgent's tool_call method uses separate timestamps for the user message and the assistant's response to ensure proper ordering in the memory table.
20 | 2. The create_messages function in utils.py ensures that the conversation always starts with a user message, as Bedrock models require.
21 | 3. The agent supports unlimited memory (`n_latest_messages=None`) to maintain the entire conversation history (see the sketch below).
22 |
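23 | For example, a minimal sketch of enabling unlimited memory; the agent name and prompt here are illustrative:
24 |
25 | ```python
26 | from pixelagent.bedrock import Agent
27 |
28 | # n_latest_messages=None keeps the entire conversation history in context
29 | agent = Agent(
30 |     name="long_memory_agent",
31 |     system_prompt="You are a helpful assistant.",
32 |     n_latest_messages=None,
33 | )
34 | ```
35 |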
23 | ### Agent Class
24 |
25 | The Agent class implements two abstract methods from BaseAgent, as sketched below:
26 |
27 | 1. `_setup_chat_pipeline`: Configures the chat completion pipeline with conversational memory
28 | 2. `_setup_tools_pipeline`: Configures the tool execution pipeline without conversational memory
29 |
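30 | For orientation, a skeletal sketch of this contract (the class name is illustrative; the full implementation lives in `agent.py`):
31 |
32 | ```python
33 | from pixelagent.core import BaseAgent
34 |
35 |
36 | class MyAgent(BaseAgent):
37 |     def _setup_chat_pipeline(self):
38 |         # computed columns: memory retrieval -> message formatting
39 |         # -> model completion -> response extraction
40 |         ...
41 |
42 |     def _setup_tools_pipeline(self):
43 |         # computed columns: initial tool-call response -> tool invocation
44 |         # -> follow-up prompt -> final answer extraction
45 |         ...
46 | ```
47 |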
30 | ### Message Formatting
31 |
32 | The `create_messages` function in `utils.py` formats the conversation history and current message for Bedrock's Converse API. It ensures (see the sketch after the list):
33 |
34 | 1. Messages are in the correct format with content as a list of objects
35 | 2. The conversation always starts with a user message
36 | 3. Images are properly encoded if included
37 |
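38 | As a sketch of the expected shapes (the message contents here are hypothetical), given memory retrieved newest-first:
39 |
40 | ```python
41 | memory_context = [
42 |     {"role": "assistant", "content": "Hi Alice! How can I help?"},
43 |     {"role": "user", "content": "Hello, I'm Alice."},
44 | ]
45 | # create_messages(memory_context, "What's my name?") returns the turns in
46 | # chronological order, each content wrapped as a list of text blocks:
47 | expected = [
48 |     {"role": "user", "content": [{"text": "Hello, I'm Alice."}]},
49 |     {"role": "assistant", "content": [{"text": "Hi Alice! How can I help?"}]},
50 |     {"role": "user", "content": [{"text": "What's my name?"}]},
51 | ]
52 | ```
53 |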
38 | ## Usage
39 |
40 | ```python
41 | from pixelagent.bedrock import Agent
42 |
43 | # Create a Bedrock agent
44 | agent = Agent(
45 | name="my_agent",
46 | system_prompt="You are a helpful assistant.",
47 | model="amazon.nova-pro-v1:0", # Or any other Bedrock model
48 | n_latest_messages=10, # Number of recent messages to include in context
49 | tools=None, # Optional tools configuration
50 | reset=False, # Whether to reset existing agent data
51 | )
52 |
53 | # Chat with the agent
54 | response = agent.chat("Hello, my name is Alice.")
55 | print(response)
56 |
57 | # Use tools if configured
58 | if agent.tools:
59 | tool_response = agent.tool_call("What is the stock price of AAPL?")
60 | print(tool_response)
61 | ```
62 |
63 | ## Testing
64 |
65 | Run the test script to verify the agent's functionality:
66 |
67 | ```bash
68 | python -m pixelagent.bedrock.test
69 | ```
70 |
71 | The test script verifies:
72 |
73 | 1. Conversational memory for regular chat
74 | 2. Tool calling without conversational memory
75 | 3. Memory persistence across different interactions
76 |
--------------------------------------------------------------------------------
/pixelagent/bedrock/__init__.py:
--------------------------------------------------------------------------------
1 | from .agent import Agent
2 |
3 | __all__ = ["Agent"]
4 |
--------------------------------------------------------------------------------
/pixelagent/bedrock/agent.py:
--------------------------------------------------------------------------------
1 | from typing import Optional
2 |
3 | import pixeltable as pxt
4 | import pixeltable.functions as pxtf
5 |
6 | from pixelagent.core.base import BaseAgent
7 |
8 | from .utils import create_messages
9 |
10 | try:
11 | from pixeltable.functions.bedrock import converse, invoke_tools
12 | except ImportError:
13 | raise ImportError("boto3 not found; run `pip install boto3`")
14 |
15 |
16 | class Agent(BaseAgent):
17 | """
18 | AWS Bedrock-specific implementation of the BaseAgent.
19 |
20 |     This agent uses AWS Bedrock's Converse API for generating responses and handling tools.
21 | It inherits common functionality from BaseAgent including:
22 | - Table setup and management
23 | - Memory persistence
24 | - Base chat and tool call implementations
25 |
26 | The agent supports both limited and unlimited conversation history through
27 | the n_latest_messages parameter for regular chat, while tool calls use only
28 | the current message without conversation history.
29 | """
30 |
31 | def __init__(
32 | self,
33 | name: str,
34 | system_prompt: str,
35 | model: str = "amazon.nova-pro-v1:0",
36 | n_latest_messages: Optional[int] = 10,
37 | tools: Optional[pxt.tools] = None,
38 | reset: bool = False,
39 | chat_kwargs: Optional[dict] = None,
40 | tool_kwargs: Optional[dict] = None,
41 | ):
42 | # Initialize the base agent with all common parameters
43 | super().__init__(
44 | name=name,
45 | system_prompt=system_prompt,
46 | model=model,
47 | n_latest_messages=n_latest_messages, # None for unlimited history
48 | tools=tools,
49 | reset=reset,
50 | chat_kwargs=chat_kwargs,
51 | tool_kwargs=tool_kwargs,
52 | )
53 |
54 | def _setup_chat_pipeline(self):
55 | """
56 | Configure the chat completion pipeline using Pixeltable's computed columns.
57 | This method implements the abstract method from BaseAgent.
58 |
59 | The pipeline consists of 4 steps:
60 | 1. Retrieve recent messages from memory
61 | 2. Format messages for Bedrock Claude
62 | 3. Get completion from Bedrock
63 | 4. Extract the response text
64 |
65 | Note: The pipeline automatically handles memory limits based on n_latest_messages.
66 | When set to None, it maintains unlimited conversation history.
67 | """
68 |
69 | # Step 1: Define a query to get recent messages
70 | @pxt.query
71 | def get_recent_memory(current_timestamp: pxt.Timestamp) -> list[dict]:
72 | """
73 | Get recent messages from memory, respecting n_latest_messages limit if set.
74 | Messages are ordered by timestamp (newest first).
75 | Returns all messages if n_latest_messages is None.
76 | """
77 | query = (
78 | self.memory.where(self.memory.timestamp < current_timestamp)
79 | .order_by(self.memory.timestamp, asc=False)
80 | .select(role=self.memory.role, content=self.memory.content)
81 | )
82 | if self.n_latest_messages is not None:
83 | query = query.limit(self.n_latest_messages)
84 | return query
85 |
86 | # Step 2: Add computed columns to process the conversation
87 | # First, get the conversation history
88 | self.agent.add_computed_column(
89 | memory_context=get_recent_memory(self.agent.timestamp), if_exists="ignore"
90 | )
91 |
92 | # Format messages for Bedrock Claude
93 | self.agent.add_computed_column(
94 | messages=create_messages(
95 | self.agent.memory_context,
96 | self.agent.user_message,
97 | self.agent.image,
98 | ),
99 | if_exists="ignore",
100 | )
101 |
102 | # Get Bedrock Claude's API response
103 | self.agent.add_computed_column(
104 | response=converse(
105 | messages=self.agent.messages,
106 | model_id=self.model,
107 | system=[{"text": self.system_prompt}],
108 | **self.chat_kwargs,
109 | ),
110 | if_exists="ignore",
111 | )
112 |
113 | # Extract the final response text from Bedrock Claude's specific response format
114 | self.agent.add_computed_column(
115 | agent_response=self.agent.response.output.message.content[0].text,
116 |             if_exists="ignore",
117 | )
118 |
119 | def _setup_tools_pipeline(self):
120 | """
121 | Configure the tool execution pipeline using Pixeltable's computed columns.
122 | This method implements the abstract method from BaseAgent.
123 |
124 | The pipeline has 4 stages:
125 | 1. Get initial response from Bedrock Claude with potential tool calls
126 | 2. Execute any requested tools
127 | 3. Format tool results for follow-up
128 | 4. Get final response incorporating tool outputs
129 |
130 | Note: For tool calls, we only use the current message without conversation history
131 | to ensure tool execution is based solely on the current request.
132 | """
133 | # Stage 1: Get initial response with potential tool calls
134 | # Note: We only use the current tool prompt without memory context
135 | self.tools_table.add_computed_column(
136 | initial_response=converse(
137 | model_id=self.model,
138 | system=[{"text": self.system_prompt}],
139 | messages=[{"role": "user", "content": [{"text": self.tools_table.tool_prompt}]}],
140 | tool_config=self.tools, # Pass available tools to Bedrock Claude
141 | **self.tool_kwargs,
142 | ),
143 | if_exists="ignore",
144 | )
145 |
146 | # Stage 2: Execute any tools that Bedrock Claude requested
147 | self.tools_table.add_computed_column(
148 | tool_output=invoke_tools(self.tools, self.tools_table.initial_response),
149 | if_exists="ignore",
150 | )
151 |
152 | # Stage 3: Format tool results for follow-up
153 | self.tools_table.add_computed_column(
154 | tool_response_prompt=pxtf.string.format(
155 | "{0}: {1}", self.tools_table.tool_prompt, self.tools_table.tool_output
156 | ),
157 | if_exists="ignore",
158 | )
159 |
160 | # Stage 4: Get final response incorporating tool results
161 | # Again, we only use the current tool response without memory context
162 | self.tools_table.add_computed_column(
163 | final_response=converse(
164 | model_id=self.model,
165 | system=[{"text": self.system_prompt}],
166 | messages=[
167 | {"role": "user", "content": [{"text": self.tools_table.tool_response_prompt}]}
168 | ],
169 | **self.tool_kwargs,
170 | ),
171 | if_exists="ignore",
172 | )
173 |
174 | # Extract the final response text from Bedrock Claude's format
175 | self.tools_table.add_computed_column(
176 | tool_answer=self.tools_table.final_response.output.message.content[0].text,
177 | if_exists="ignore",
178 | )
179 |
--------------------------------------------------------------------------------
/pixelagent/bedrock/utils.py:
--------------------------------------------------------------------------------
1 | import base64
2 | import io
3 | from typing import Optional
4 |
5 | import PIL
6 | import pixeltable as pxt
7 |
8 |
9 | @pxt.udf
10 | def create_messages(
11 | memory_context: list[dict],
12 | current_message: str,
13 | image: Optional[PIL.Image.Image] = None,
14 | ) -> list[dict]:
15 | """
16 | Create a formatted message list for Bedrock Claude models.
17 |
18 | This function formats the conversation history and current message
19 | into the structure expected by Bedrock Claude models.
20 |
21 | Args:
22 | memory_context: List of previous messages from memory
23 | current_message: The current user message
24 | image: Optional image to include with the message
25 |
26 | Returns:
27 | List of formatted messages for Bedrock Claude
28 | """
29 | # Create a copy to avoid modifying the original and format for Bedrock
30 | messages = []
31 |
32 | # Get messages in oldest-first order
33 | reversed_memory = list(reversed(memory_context))
34 |
35 | # Ensure the conversation starts with a user message
36 | # If the first message is from the assistant, skip it
37 | start_idx = 0
38 | if reversed_memory and reversed_memory[0]["role"] == "assistant":
39 | start_idx = 1
40 |
41 | # Format previous messages for Bedrock
42 | for msg in reversed_memory[start_idx:]:
43 | # Convert string content to the required list format
44 | if isinstance(msg["content"], str):
45 | messages.append({
46 | "role": msg["role"],
47 | "content": [{"text": msg["content"]}]
48 | })
49 | else:
50 | # If it's already in the correct format, keep it as is
51 | messages.append(msg)
52 |
53 | # For text-only messages
54 | if not image:
55 | messages.append({"role": "user", "content": [{"text": current_message}]})
56 | return messages
57 |
58 | # Convert image to base64
59 | bytes_arr = io.BytesIO()
60 | image.save(bytes_arr, format="JPEG")
61 | b64_bytes = base64.b64encode(bytes_arr.getvalue())
62 | b64_encoded_image = b64_bytes.decode("utf-8")
63 |
64 | # Create content blocks with text and image
65 | content_blocks = [
66 | {"type": "text", "text": current_message},
67 | {
68 | "type": "image",
69 | "source": {
70 | "type": "base64",
71 | "media_type": "image/jpeg",
72 | "data": b64_encoded_image,
73 | },
74 | },
75 | ]
76 |
77 | messages.append({"role": "user", "content": content_blocks})
78 |
79 | return messages
80 |
--------------------------------------------------------------------------------
/pixelagent/core/__init__.py:
--------------------------------------------------------------------------------
1 | from .base import BaseAgent
2 |
3 | __all__ = ["BaseAgent"]
4 |
--------------------------------------------------------------------------------
/pixelagent/gemini/__init__.py:
--------------------------------------------------------------------------------
1 | from .agent import Agent
2 |
3 | __all__ = ["Agent"]
4 |
--------------------------------------------------------------------------------
/pixelagent/gemini/agent.py:
--------------------------------------------------------------------------------
1 | from typing import Optional
2 |
3 | import pixeltable as pxt
4 | import pixeltable.functions as pxtf
5 |
6 | from pixelagent.core.base import BaseAgent
7 |
8 | from .utils import create_content
9 |
10 | try:
11 | from pixeltable.functions.gemini import generate_content, invoke_tools
12 | except ImportError:
13 | raise ImportError("google.genai not found; run `pip install google-genai`")
14 |
15 |
16 | class Agent(BaseAgent):
17 | """
18 | Google Gemini-specific implementation of the BaseAgent.
19 |
20 | This agent uses Google Gemini's generate_content API for generating responses and handling tools.
21 | It inherits common functionality from BaseAgent including:
22 | - Table setup and management
23 | - Memory persistence
24 | - Base chat and tool call implementations
25 | """
26 |
27 | def __init__(
28 | self,
29 | name: str,
30 | system_prompt: str,
31 | model: str = "gemini-2.0-flash",
32 | n_latest_messages: Optional[int] = 10,
33 | tools: Optional[pxt.tools] = None,
34 | reset: bool = False,
35 | chat_kwargs: Optional[dict] = None,
36 | tool_kwargs: Optional[dict] = None,
37 | ):
38 | # Initialize the base agent with all common parameters
39 | super().__init__(
40 | name=name,
41 | system_prompt=system_prompt,
42 | model=model,
43 | n_latest_messages=n_latest_messages, # None for unlimited history
44 | tools=tools,
45 | reset=reset,
46 | chat_kwargs=chat_kwargs,
47 | tool_kwargs=tool_kwargs,
48 | )
49 |
50 | def _setup_chat_pipeline(self):
51 | """
52 | Configure the chat completion pipeline using Pixeltable's computed columns.
53 | This method implements the abstract method from BaseAgent.
54 |
55 | The pipeline consists of 4 steps:
56 | 1. Retrieve recent messages from memory
57 | 2. Format content with system prompt for Gemini
58 | 3. Get completion from Gemini
59 | 4. Extract the response text
60 | """
61 |
62 | # Step 1: Define a query to get recent messages
63 | @pxt.query
64 | def get_recent_memory(current_timestamp: pxt.Timestamp) -> list[dict]:
65 | """
66 | Get recent messages from memory, respecting n_latest_messages limit if set.
67 | Messages are ordered by timestamp (newest first).
68 | """
69 | query = (
70 | self.memory.where(self.memory.timestamp < current_timestamp)
71 | .order_by(self.memory.timestamp, asc=False)
72 | .select(role=self.memory.role, content=self.memory.content)
73 | )
74 | if self.n_latest_messages is not None:
75 | query = query.limit(self.n_latest_messages)
76 | return query
77 |
78 | # Step 2: Add computed columns to process the conversation
79 | # First, get the conversation history
80 | self.agent.add_computed_column(
81 | memory_context=get_recent_memory(self.agent.timestamp),
82 | if_exists="ignore",
83 | )
84 |
85 | # Format content for Gemini with system prompt
86 | self.agent.add_computed_column(
87 | prompt=create_content(
88 | self.agent.system_prompt,
89 | self.agent.memory_context,
90 | self.agent.user_message,
91 | self.agent.image,
92 | ),
93 | if_exists="ignore",
94 | )
95 |
96 | # Get Gemini's API response
97 | self.agent.add_computed_column(
98 | response=generate_content(
99 | contents=self.agent.prompt, model=self.model, **self.chat_kwargs
100 | ),
101 | if_exists="ignore",
102 | )
103 |
104 | # Extract the final response text
105 | self.agent.add_computed_column(
106 | agent_response=self.agent.response['candidates'][0]['content']['parts'][0]['text'],
107 | if_exists="ignore",
108 | )
109 |
110 | def _setup_tools_pipeline(self):
111 | """
112 | Configure the tool execution pipeline using Pixeltable's computed columns.
113 | This method implements the abstract method from BaseAgent.
114 |
115 | The pipeline has 4 stages:
116 | 1. Get initial response from Gemini with potential tool calls
117 | 2. Execute any requested tools
118 | 3. Format tool results for follow-up
119 | 4. Get final response incorporating tool outputs
120 | """
121 | # Stage 1: Get initial response with potential tool calls
122 | self.tools_table.add_computed_column(
123 | initial_response=generate_content(
124 | contents=self.tools_table.tool_prompt,
125 | model=self.model,
126 | tools=self.tools, # Pass available tools to Gemini
127 | **self.tool_kwargs,
128 | ),
129 | if_exists="ignore",
130 | )
131 |
132 | # Stage 2: Execute any tools that Gemini requested
133 | self.tools_table.add_computed_column(
134 | tool_output=invoke_tools(self.tools, self.tools_table.initial_response),
135 | if_exists="ignore",
136 | )
137 |
138 | # Stage 3: Format tool results for follow-up
139 | self.tools_table.add_computed_column(
140 | tool_response_prompt=pxtf.string.format(
141 | "{0}: {1}", self.tools_table.tool_prompt, self.tools_table.tool_output
142 | ),
143 | if_exists="ignore",
144 | )
145 |
146 | # Stage 4: Get final response incorporating tool results
147 | self.tools_table.add_computed_column(
148 | final_response=generate_content(
149 | contents=self.tools_table.tool_response_prompt,
150 | model=self.model,
151 | **self.tool_kwargs,
152 | ),
153 | if_exists="ignore",
154 | )
155 |
156 | # Extract the final response text
157 | self.tools_table.add_computed_column(
158 | tool_answer=self.tools_table.final_response['candidates'][0]['content']['parts'][0]['text'],
159 | if_exists="ignore",
160 | )
161 |
--------------------------------------------------------------------------------
/pixelagent/gemini/utils.py:
--------------------------------------------------------------------------------
1 | from typing import Optional
2 |
3 | import PIL
4 | import pixeltable as pxt
5 |
6 |
7 | @pxt.udf
8 | def create_content(
9 |     system_prompt: str,
10 |     memory_context: list[dict],
11 |     current_message: str,
12 |     # Accepted so the agent pipeline can pass its image column;
13 |     # the image is not embedded in this text-only prompt.
14 |     image: Optional[PIL.Image.Image] = None,
15 | ) -> str:
16 | # Build the conversation context as a text string
17 | context = f"System: {system_prompt}\n\n"
18 |
19 | # Add memory context
20 |     for msg in reversed(memory_context):  # memory arrives newest-first
21 | context += f"{msg['role'].title()}: {msg['content']}\n"
22 |
23 | # Add current message
24 | context += f"User: {current_message}\n"
25 | context += "Assistant: "
26 |
27 | return context
28 |
--------------------------------------------------------------------------------
/pixelagent/openai/__init__.py:
--------------------------------------------------------------------------------
1 | from .agent import Agent
2 |
3 | __all__ = ["Agent"]
4 |
--------------------------------------------------------------------------------
/pixelagent/openai/agent.py:
--------------------------------------------------------------------------------
1 | from typing import Optional
2 |
3 | import pixeltable as pxt
4 | import pixeltable.functions as pxtf
5 |
6 | from pixelagent.core.base import BaseAgent
7 |
8 | from .utils import create_messages
9 |
10 | try:
11 | from pixeltable.functions.openai import chat_completions, invoke_tools
12 | except ImportError:
13 | raise ImportError("openai not found; run `pip install openai`")
14 |
15 |
16 | class Agent(BaseAgent):
17 | """
18 | OpenAI-specific implementation of the BaseAgent.
19 |
20 | This agent uses OpenAI's chat completion API for generating responses and handling tools.
21 | It inherits common functionality from BaseAgent including:
22 | - Table setup and management
23 | - Memory persistence
24 | - Base chat and tool call implementations
25 | """
26 |
27 | def __init__(
28 | self,
29 | name: str,
30 | system_prompt: str,
31 | model: str = "gpt-4o-mini",
32 | n_latest_messages: Optional[int] = 10,
33 | tools: Optional[pxt.tools] = None,
34 | reset: bool = False,
35 | chat_kwargs: Optional[dict] = None,
36 | tool_kwargs: Optional[dict] = None,
37 | ):
38 | # Initialize the base agent with all common parameters
39 | super().__init__(
40 | name=name,
41 | system_prompt=system_prompt,
42 | model=model,
43 | n_latest_messages=n_latest_messages, # None for unlimited history
44 | tools=tools,
45 | reset=reset,
46 | chat_kwargs=chat_kwargs,
47 | tool_kwargs=tool_kwargs,
48 | )
49 |
50 | def _setup_chat_pipeline(self):
51 | """
52 | Configure the chat completion pipeline using Pixeltable's computed columns.
53 | This method implements the abstract method from BaseAgent.
54 |
55 | The pipeline consists of 4 steps:
56 | 1. Retrieve recent messages from memory
57 | 2. Format messages with system prompt
58 | 3. Get completion from OpenAI
59 | 4. Extract the response text
60 | """
61 |
62 | # Step 1: Define a query to get recent messages
63 | @pxt.query
64 | def get_recent_memory(current_timestamp: pxt.Timestamp) -> list[dict]:
65 | """
66 | Get recent messages from memory, respecting n_latest_messages limit if set.
67 | Messages are ordered by timestamp (newest first).
68 | """
69 | query = (
70 | self.memory.where(self.memory.timestamp < current_timestamp)
71 | .order_by(self.memory.timestamp, asc=False)
72 | .select(role=self.memory.role, content=self.memory.content)
73 | )
74 | if self.n_latest_messages is not None:
75 | query = query.limit(self.n_latest_messages)
76 | return query
77 |
78 | # Step 2: Add computed columns to process the conversation
79 | # First, get the conversation history
80 | self.agent.add_computed_column(
81 | memory_context=get_recent_memory(self.agent.timestamp),
82 | if_exists="ignore",
83 | )
84 |
85 | # Format messages for OpenAI with system prompt
86 | self.agent.add_computed_column(
87 | prompt=create_messages(
88 | self.agent.system_prompt,
89 | self.agent.memory_context,
90 | self.agent.user_message,
91 | self.agent.image,
92 | ),
93 | if_exists="ignore",
94 | )
95 |
96 | # Get OpenAI's API response
97 | self.agent.add_computed_column(
98 | response=chat_completions(
99 | messages=self.agent.prompt, model=self.model, **self.chat_kwargs
100 | ),
101 | if_exists="ignore",
102 | )
103 |
104 | # Extract the final response text
105 | self.agent.add_computed_column(
106 | agent_response=self.agent.response.choices[0].message.content,
107 | if_exists="ignore",
108 | )
109 |
110 | def _setup_tools_pipeline(self):
111 | """
112 | Configure the tool execution pipeline using Pixeltable's computed columns.
113 | This method implements the abstract method from BaseAgent.
114 |
115 | The pipeline has 4 stages:
116 | 1. Get initial response from OpenAI with potential tool calls
117 | 2. Execute any requested tools
118 | 3. Format tool results for follow-up
119 | 4. Get final response incorporating tool outputs
120 | """
121 | # Stage 1: Get initial response with potential tool calls
122 | self.tools_table.add_computed_column(
123 | initial_response=chat_completions(
124 | model=self.model,
125 | messages=[{"role": "user", "content": self.tools_table.tool_prompt}],
126 | tools=self.tools, # Pass available tools to OpenAI
127 | **self.tool_kwargs,
128 | ),
129 | if_exists="ignore",
130 | )
131 |
132 | # Stage 2: Execute any tools that OpenAI requested
133 | self.tools_table.add_computed_column(
134 | tool_output=invoke_tools(self.tools, self.tools_table.initial_response),
135 | if_exists="ignore",
136 | )
137 |
138 | # Stage 3: Format tool results for follow-up
139 | self.tools_table.add_computed_column(
140 | tool_response_prompt=pxtf.string.format(
141 | "{0}: {1}", self.tools_table.tool_prompt, self.tools_table.tool_output
142 | ),
143 | if_exists="ignore",
144 | )
145 |
146 | # Stage 4: Get final response incorporating tool results
147 | self.tools_table.add_computed_column(
148 | final_response=chat_completions(
149 | model=self.model,
150 | messages=[
151 | {"role": "user", "content": self.tools_table.tool_response_prompt},
152 | ],
153 | **self.tool_kwargs,
154 | ),
155 | if_exists="ignore",
156 | )
157 |
158 | # Extract the final response text
159 | self.tools_table.add_computed_column(
160 | tool_answer=self.tools_table.final_response.choices[0].message.content,
161 | if_exists="ignore",
162 | )
163 |
--------------------------------------------------------------------------------
/pixelagent/openai/utils.py:
--------------------------------------------------------------------------------
1 | import base64
2 | import io
3 | from typing import Optional
4 |
5 | import PIL
6 | import pixeltable as pxt
7 |
8 |
9 | @pxt.udf
10 | def create_messages(
11 | system_prompt: str,
12 | memory_context: list[dict],
13 | current_message: str,
14 | image: Optional[PIL.Image.Image] = None,
15 | ) -> list[dict]:
16 |     """Format the system prompt, memory, and current message for OpenAI chat."""
17 |     messages = [{"role": "system", "content": system_prompt}]
18 |     messages.extend(reversed(memory_context))  # memory arrives newest-first
19 |
20 | if not image:
21 | messages.append({"role": "user", "content": current_message})
22 | return messages
23 |
24 | # Encode Image
25 | bytes_arr = io.BytesIO()
26 |     image.save(bytes_arr, format="JPEG")
27 | b64_bytes = base64.b64encode(bytes_arr.getvalue())
28 | b64_encoded_image = b64_bytes.decode("utf-8")
29 |
30 | # Create content blocks with text and image
31 | content_blocks = [
32 | {"type": "text", "text": current_message},
33 | {
34 | "type": "image_url",
35 | "image_url": {"url": f"data:image/jpeg;base64,{b64_encoded_image}"},
36 | },
37 | ]
38 |
39 | messages.append({"role": "user", "content": content_blocks})
40 | return messages
41 |
--------------------------------------------------------------------------------
/poetry.toml:
--------------------------------------------------------------------------------
1 | [virtualenvs]
2 | create = true
3 | in-project = true
4 |
--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
1 | [build-system]
2 | requires = ["poetry-core>=1.0.0"]
3 | build-backend = "poetry.core.masonry.api"
4 |
5 | [project]
6 | name = "pixelagent"
7 | version = "0.1.5"
8 | description = "A modular AI agent framework supporting OpenAI, Anthropic, Gemini, and AWS Bedrock models"
9 | license = "Apache-2.0"
10 | readme = "README.md"
11 | requires-python = ">=3.9,<4.0"
12 | authors = [{ name = "Pixeltable, Inc.", email = "contact@pixeltable.com" }]
13 | keywords = [
14 | "data-science", "machine-learning", "database", "ai", "computer-vision",
15 | "chatbot", "ml", "artificial-intelligence", "feature-engineering", "multimodal",
16 | "mlops", "feature-store", "vector-database", "llm", "genai"
17 | ]
18 | classifiers = [
19 | "Intended Audience :: Developers",
20 | "Intended Audience :: Science/Research",
21 | "License :: OSI Approved :: Apache Software License",
22 | "Operating System :: MacOS",
23 | "Operating System :: Microsoft :: Windows",
24 | "Operating System :: POSIX :: Linux",
25 | "Programming Language :: Python :: 3.9",
26 | "Programming Language :: Python :: 3.10",
27 | "Programming Language :: Python :: 3.11",
28 | "Programming Language :: Python :: 3.12",
29 | "Programming Language :: Python :: 3.13",
30 | "Topic :: Database",
31 | "Topic :: Scientific/Engineering :: Artificial Intelligence",
32 | "Topic :: Software Development :: Libraries :: Python Modules",
33 | ]
34 | dependencies = [
35 | "pixeltable (>=0.3.15)"
36 | ]
37 |
38 | [project.urls]
39 | homepage = "https://pixeltable.com/"
40 | repository = "https://github.com/pixeltable/pixelagent"
41 | documentation = "https://docs.pixeltable.com/docs/libraries/pixelagent"
42 |
43 | [tool.poetry]
44 | requires-poetry = ">=2.0"
45 |
46 | [tool.poetry.group.dev]
47 | optional = true
48 |
49 | [tool.poetry.group.dev.dependencies]
50 | # pytest-related
51 | pytest = "^7.2.1"
52 | pytest-xdist = { extras = ["psutil"], version = "^3.3.1" }
53 | pytest-rerunfailures = "^14.0.0"
54 | coverage = "^7.4"
55 | # type checking
56 | mypy = "^1.13.0"
57 | # linting
58 | ruff = "^0.9.7"
59 | # mkdocs-related
60 | mkdocs = "^1.6.0"
61 | mkdocs-material = "^9.5.23"
62 | mkdocstrings-python = "^1.10.2"
63 | black = ">=24.4" # Needed for mkdocs
64 |
--------------------------------------------------------------------------------
/scripts/release.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash -e
2 |
3 | # Pixelagent release script
4 |
5 | SCRIPT_DIR="$(dirname "$0")"
6 | PROJECT_ROOT="$(readlink -f "$SCRIPT_DIR/..")"
7 |
8 | echo -e "\n============= Running pixelagent release script.\n"
9 |
10 | echo "Project root: $PROJECT_ROOT"
11 | cd "$PROJECT_ROOT"
12 |
13 | if ! command -v gh &> /dev/null; then
14 |     echo "You must have the GitHub CLI (gh) installed to run this script. See: https://cli.github.com/"
15 | exit 1
16 | fi
17 |
18 | if [ "$(git remote get-url home)" != 'https://github.com/pixeltable/pixelagent' ]; then
19 | echo "Unexpected home repo: $(git remote get-url home)"
20 | exit 1
21 | fi
22 |
23 | git diff # Needed to ensure correct behavior of subsequent `git diff-index` call
24 |
25 | if [[ -n "$(git diff-index HEAD)" || -n "$(git diff-files)" || -n "$(git ls-files --exclude-standard --others)" ]]; then
26 | echo "The release script must be run from a clean git repo."
27 | exit 1
28 | fi
29 |
30 | if [ -z "$PYPI_API_KEY" ]; then
31 | echo "PYPI_API_KEY not defined in environment; searching in ~/.pixeltable/config.toml."
32 | PYPI_API_KEY=$(
33 | python -c "import toml,sys; y = toml.load(sys.stdin); print(y['pypi']['api_key'])" < ~/.pixeltable/config.toml
34 | )
35 | fi
36 |
37 | git fetch home
38 | git checkout home/main
39 |
40 | echo -n "Enter version number for release: "
41 | read VERSION
42 |
43 | echo ""
44 | echo "This will publish version v$VERSION to PyPI. Enter to confirm; Ctrl-C to abort."
45 | read
46 |
47 | git tag "v$VERSION"
48 | git push home "v$VERSION"
49 |
50 | echo "v$VERSION tag created and pushed to home repo."
51 | echo "Enter to proceed; Ctrl-C to abort."
52 | read
53 |
54 | git tag -d release
55 | git push --delete home release
56 | git tag release
57 | git push home release
58 |
59 | poetry build
60 | poetry publish --username __token__ --password "$PYPI_API_KEY"
61 |
62 | echo "Creating release on github ..."
63 | gh release create "v$VERSION" --generate-notes --repo pixeltable/pixelagent
64 | open "https://github.com/pixeltable/pixelagent/releases/tag/v$VERSION"
65 |
--------------------------------------------------------------------------------
/tests/README.md:
--------------------------------------------------------------------------------
1 | # PixelAgent Tests
2 |
3 | This directory contains tests for the PixelAgent library, focusing on testing the functionality of different model providers.
4 |
5 | ## Test Structure
6 |
7 | The tests are organized by model provider:
8 |
9 | - `test_anthropic.py`: Tests for the Anthropic provider
10 | - `test_bedrock.py`: Tests for the AWS Bedrock provider
11 | - `test_gemini.py`: Tests for the Google Gemini provider
12 | - `test_openai.py`: Tests for the OpenAI provider
12 |
13 | Each provider has two standardized tests:
14 | 1. Basic chat functionality test
15 | 2. Tool calling functionality test
16 |
17 | ## Common Test Components
18 |
19 | - `conftest.py`: Contains common fixtures used across all test files
20 | - `pytest.ini`: Configuration for pytest
21 |
22 | ## Running Tests
23 |
24 | ### Running All Tests
25 |
26 | ```bash
27 | pytest
28 | ```
29 |
30 | ### Running Tests for a Specific Provider
31 |
32 | ```bash
33 | # Run only Anthropic tests
34 | pytest -m anthropic
35 |
36 | # Run only Bedrock tests
37 | pytest -m bedrock
38 |
39 | # Run only OpenAI tests
40 | pytest -m openai
41 | ```
42 |
43 | ### Running Tests by Functionality
44 |
45 | ```bash
46 | # Run only chat functionality tests
47 | pytest -m chat
48 |
49 | # Run only tool calling tests
50 | pytest -m tool_calling
51 | ```
52 |
53 | ### Running a Specific Test File
54 |
55 | ```bash
56 | pytest tests/test_anthropic.py
57 | ```
58 |
59 | ### Running with Verbose Output
60 |
61 | ```bash
62 | pytest -v
63 | ```
64 |
65 | ## Test Markers
66 |
67 | The following markers are available:
68 |
69 | - `anthropic`: Tests for Anthropic provider
70 | - `bedrock`: Tests for Bedrock provider
71 | - `openai`: Tests for OpenAI provider
72 | - `chat`: Tests for chat functionality
73 | - `tool_calling`: Tests for tool calling functionality
74 |
75 | ## Mock Components
76 |
77 | The tests use mock components to avoid external dependencies:
78 |
79 | - `mock_stock_price`: A fixture that returns a fixed integer value (5)
80 | - `mock_stock_price_dict`: A fixture that returns a dictionary with stock information
81 |
82 | These mocks are used to test tool calling functionality without making actual API calls.
83 |
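84 | For instance, a hypothetical test (names are illustrative) showing how these fixtures plug into `pxt.tools`:
85 |
86 | ```python
87 | import pixeltable as pxt
88 |
89 | from pixelagent.openai import Agent
90 |
91 |
92 | def test_tool_calling_with_mock(mock_stock_price_dict):
93 |     # The fixture supplies a UDF returning a fixed dict, so no market API is hit
94 |     agent = Agent(
95 |         name="mock_demo_agent",
96 |         system_prompt="You are a financial analyst assistant.",
97 |         tools=pxt.tools(mock_stock_price_dict),
98 |     )
99 |     result = agent.tool_call("What's the current price of NVDA?")
100 |     assert "5" in result
101 | ```
102 |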
--------------------------------------------------------------------------------
/tests/conftest.py:
--------------------------------------------------------------------------------
1 | """
2 | Common pytest fixtures and configurations for pixelagent tests.
3 | """
4 | import pytest
5 | import pixeltable as pxt
6 |
7 |
8 | # Define UDFs at module level to avoid nested function errors
9 | @pxt.udf
10 | def stock_price_int(ticker: str) -> int:
11 | """Get stock information for a ticker symbol (returns integer)"""
12 | return 5
13 |
14 |
15 | @pxt.udf
16 | def stock_price_dict(ticker: str) -> dict:
17 | """Get stock information for a ticker symbol (returns dictionary)"""
18 | return {"regularMarketPrice": 5, "shortName": "NVIDIA Corporation"}
19 |
20 |
21 | @pytest.fixture
22 | def mock_stock_price():
23 | """Fixture to provide a mock stock price tool that returns a fixed value.
24 |
25 | This avoids external API dependencies during testing.
26 | """
27 | return stock_price_int
28 |
29 |
30 | @pytest.fixture
31 | def mock_stock_price_dict():
32 | """Fixture to provide a mock stock price tool that returns a dictionary.
33 |
34 |     This is useful for testing OpenAI, which expects a more complex return value.
35 | """
36 | return stock_price_dict
37 |
--------------------------------------------------------------------------------
/tests/pytest.ini:
--------------------------------------------------------------------------------
1 | [pytest]
2 | # Test discovery patterns
3 | testpaths = tests
4 | python_files = test_*.py
5 | python_classes = Test*
6 | python_functions = test_*
7 |
8 | # Display options
9 | addopts = -v
10 |
11 | # Markers
12 | markers =
13 | anthropic: tests for Anthropic provider
14 | bedrock: tests for Bedrock provider
15 | openai: tests for OpenAI provider
16 | chat: tests for chat functionality
17 | tool_calling: tests for tool calling functionality
18 |
19 | # Log settings
20 | log_cli = True
21 | log_cli_level = INFO
22 | log_cli_format = %(asctime)s [%(levelname)8s] %(message)s (%(filename)s:%(lineno)s)
23 | log_cli_date_format = %Y-%m-%d %H:%M:%S
24 |
--------------------------------------------------------------------------------
/tests/test.py:
--------------------------------------------------------------------------------
1 | from typing import List
2 |
3 | import pixeltable as pxt
4 |
5 | from pixelagent.openai import Agent
6 |
7 |
8 | @pxt.udf
9 | def stock_price(ticker: str) -> List[float]:
10 | """Get the current stock price for a given ticker symbol."""
11 | # This is a mock implementation for testing
12 | prices = {
13 | "AAPL": 175.34,
14 | "MSFT": 325.89,
15 | "GOOGL": 142.56,
16 | "AMZN": 178.23,
17 | "NVDA": 131.17,
18 | }
19 | return [prices.get(ticker.upper(), 0.0)]
20 |
21 |
22 | def main():
23 |     # Create an OpenAI agent with memory
24 | agent = Agent(
25 | name="openai_test",
26 | system_prompt="You are a helpful assistant that can answer questions and use tools.",
27 |         model="gpt-4.1-2025-04-14",  # OpenAI model used for this smoke test
28 | n_latest_messages=None, # Unlimited memory to ensure all messages are included
29 | tools=pxt.tools(stock_price), # Register the stock_price tool
30 | reset=True, # Reset the agent's memory for testing
31 | )
32 |
33 | print("\n=== Testing Conversational Memory ===\n")
34 |
35 | # First conversation turn
36 | user_message = "Hello, my name is Alice."
37 | print(f"User: {user_message}")
38 | response = agent.chat(user_message)
39 | print(f"Agent: {response}\n")
40 |
41 | # Second conversation turn - the agent should remember the user's name
42 | user_message = "What's my name?"
43 | print(f"User: {user_message}")
44 | response = agent.chat(user_message)
45 | print(f"Agent: {response}\n")
46 |
47 | print("\n=== Testing Tool Calling (No Memory) ===\n")
48 |
49 | # Tool call - should not use memory from previous conversation
50 | user_message = "What is the stock price of NVDA today?"
51 | print(f"User: {user_message}")
52 | response = agent.tool_call(user_message)
53 | print(f"Agent: {response}\n")
54 |
55 | # Another tool call - should not remember previous tool call
56 | user_message = "What about stock price of AAPL?"
57 | print(f"User: {user_message}")
58 | response = agent.tool_call(user_message)
59 | print(f"Agent: {response}\n")
60 |
61 | print("\n=== Testing Memory After Tool Calls ===\n")
62 |
63 | # Regular chat after tool calls - should still remember the user's name
64 | user_message = "Do you still remember my name?"
65 | print(f"User: {user_message}")
66 | response = agent.chat(user_message)
67 | print(f"Agent: {response}\n")
68 |
69 | # Check if the memory contains all the messages
70 | print("\n=== Checking Memory Contents ===\n")
71 | memory_contents = agent.memory.select(
72 | agent.memory.role,
73 | agent.memory.content
74 | ).order_by(agent.memory.timestamp, asc=True).collect()
75 |
76 | print("Memory contains the following messages:")
77 | for i in range(len(memory_contents)):
78 | role = memory_contents["role"][i]
79 | content = memory_contents["content"][i]
80 | print(f"{i+1}. {role}: {content[:50]}...")
81 |
82 |
83 | if __name__ == "__main__":
84 | main()
85 |
--------------------------------------------------------------------------------
/tests/test_anthropic.py:
--------------------------------------------------------------------------------
1 | import pytest
2 | import pixeltable as pxt
3 | from pixelagent.anthropic import Agent
4 |
5 |
6 | @pytest.fixture
7 | def anthropic_agent(mock_stock_price):
8 | """Fixture to create an Anthropic agent with tools"""
9 | agent = Agent(
10 | name="financial_assistant",
11 | model="claude-3-7-sonnet-latest",
12 | system_prompt="You are a financial analyst assistant.",
13 | tools=pxt.tools(mock_stock_price)
14 | )
15 | return agent
16 |
17 |
18 | @pytest.mark.anthropic
19 | @pytest.mark.chat
20 | def test_anthropic_basic_chat(anthropic_agent):
21 | """Test basic chat functionality of the Anthropic agent"""
22 | # Initial conversation
23 | res1 = anthropic_agent.chat("when i say high you say low")
24 | assert res1 is not None, "Agent should return a response"
25 |
26 | # Follow-up message
27 | res2 = anthropic_agent.chat("high")
28 | assert "low" in res2.lower(), "Agent should respond with 'low' when prompted with 'high'"
29 |
30 |
31 | @pytest.mark.anthropic
32 | @pytest.mark.tool_calling
33 | def test_anthropic_tool_calling(anthropic_agent):
34 | """Test tool calling functionality of the Anthropic agent"""
35 | result = anthropic_agent.tool_call("What's the current price of NVDA?")
36 | assert result is not None, "Tool call should return a response"
37 | assert '5' in result, "Tool call should include the mock stock price value"
38 |
39 |
40 | if __name__ == "__main__":
41 | pytest.main(["-xvs", __file__])
42 |
--------------------------------------------------------------------------------
/tests/test_bedrock.py:
--------------------------------------------------------------------------------
1 | import pytest
2 | import pixeltable as pxt
3 | from pixelagent.bedrock import Agent
4 |
5 |
6 | @pytest.fixture
7 | def bedrock_agent(mock_stock_price):
8 | """Fixture to create a Bedrock agent with tools"""
9 | agent = Agent(
10 | name="financial_assistant",
11 | model="amazon.nova-pro-v1:0",
12 | system_prompt="You are a financial analyst assistant.",
13 | tools=pxt.tools(mock_stock_price)
14 | )
15 | return agent
16 |
17 |
18 | @pytest.mark.bedrock
19 | @pytest.mark.chat
20 | def test_bedrock_basic_chat(bedrock_agent):
21 | """Test basic chat functionality of the Bedrock agent"""
22 | # Initial conversation
23 | res1 = bedrock_agent.chat("when i say high you say low")
24 | assert res1 is not None, "Agent should return a response"
25 |
26 | # Follow-up message
27 | res2 = bedrock_agent.chat("high")
28 | assert "low" in res2.lower(), "Agent should respond with 'low' when prompted with 'high'"
29 |
30 |
31 | @pytest.mark.bedrock
32 | @pytest.mark.tool_calling
33 | def test_bedrock_tool_calling(bedrock_agent):
34 | """Test tool calling functionality of the Bedrock agent"""
35 | result = bedrock_agent.tool_call("What's the current price of NVDA?")
36 | assert result is not None, "Tool call should return a response"
37 | assert '5' in result, "Tool call should include the mock stock price value"
38 |
39 |
40 | if __name__ == "__main__":
41 | pytest.main(["-xvs", __file__])
42 |
--------------------------------------------------------------------------------
/tests/test_gemini.py:
--------------------------------------------------------------------------------
1 | import os
2 | import pytest
3 |
4 | import pixeltable as pxt
5 | from pixelagent.gemini import Agent
6 |
7 |
8 | @pxt.udf
9 | def weather(city: str) -> str:
10 | """
11 | Returns the weather in a given city.
12 | """
13 | return f"The weather in {city} is sunny."
14 |
15 |
16 | @pytest.mark.skipif(
17 | not os.getenv("GEMINI_API_KEY"), reason="GEMINI_API_KEY not set"
18 | )
19 | def test_gemini_agent_chat():
20 | """Test basic chat functionality with Gemini agent."""
21 | agent = Agent(
22 | name="test_gemini_agent",
23 | system_prompt="You're a helpful assistant.",
24 | reset=True,
25 | )
26 |
27 | # Test basic chat
28 | response = agent.chat("Hi, how are you?")
29 | assert isinstance(response, str)
30 | assert len(response) > 0
31 |
32 | # Test memory functionality
33 | response2 = agent.chat("What was my last question?")
34 | assert isinstance(response2, str)
35 | assert len(response2) > 0
36 |
37 |
38 | @pytest.mark.skipif(
39 | not os.getenv("GEMINI_API_KEY"), reason="GEMINI_API_KEY not set"
40 | )
41 | def test_gemini_agent_tools():
42 | """Test tool calling functionality with Gemini agent."""
43 | agent = Agent(
44 | name="test_gemini_tools",
45 | system_prompt="You're my assistant.",
46 | tools=pxt.tools(weather),
47 | reset=True,
48 | )
49 |
50 | # Test tool call
51 | response = agent.tool_call("Get weather in San Francisco")
52 | assert isinstance(response, str)
53 | assert "San Francisco" in response or "sunny" in response
54 |
55 |
56 | if __name__ == "__main__":
57 | # Simple standalone test
58 | try:
59 | test_gemini_agent_chat()
60 | test_gemini_agent_tools()
61 | print("✅ All Gemini agent tests passed!")
62 | except Exception as e:
63 | print(f"❌ Test failed: {e}")
64 |
--------------------------------------------------------------------------------
/tests/test_openai.py:
--------------------------------------------------------------------------------
1 | import pytest
2 | import pixeltable as pxt
3 | from pixelagent.openai import Agent
4 |
5 |
6 | @pytest.fixture
7 | def openai_agent(mock_stock_price_dict):
8 | """Fixture to create an OpenAI agent with tools"""
9 | agent = Agent(
10 | name="financial_assistant",
11 | model="gpt-4o-mini",
12 | system_prompt="You are a financial analyst assistant.",
13 | tools=pxt.tools(mock_stock_price_dict)
14 | )
15 | return agent
16 |
17 |
18 | @pytest.mark.openai
19 | @pytest.mark.chat
20 | def test_openai_basic_chat(openai_agent):
21 | """Test basic chat functionality of the OpenAI agent"""
22 | # Initial conversation
23 | res1 = openai_agent.chat("when i say high you say low")
24 | assert res1 is not None, "Agent should return a response"
25 |
26 | # Follow-up message
27 | res2 = openai_agent.chat("high")
28 | assert "low" in res2.lower(), "Agent should respond with 'low' when prompted with 'high'"
29 |
30 |
31 | @pytest.mark.openai
32 | @pytest.mark.tool_calling
33 | def test_openai_tool_calling(openai_agent):
34 | """Test tool calling functionality of the OpenAI agent"""
35 | result = openai_agent.tool_call("What's the current price of NVDA?")
36 | assert result is not None, "Tool call should return a response"
37 | assert '5' in result, "Tool call should include the mock stock price value"
38 |
39 |
40 | if __name__ == "__main__":
41 | pytest.main(["-xvs", __file__])
42 |
--------------------------------------------------------------------------------