├── images
│   ├── how.png
│   ├── checks-passed.png
│   ├── hero-light.svg
│   └── hero-dark.svg
├── concepts
│   ├── tools.mdx
│   ├── memory.mdx
│   ├── direct_llm_call.mdx
│   ├── llm_support.mdx
│   ├── mcp_tools.mdx
│   ├── knowledge_base.mdx
│   ├── client.mdx
│   ├── agent.mdx
│   └── task.mdx
├── snippets
│   └── snippet-intro.mdx
├── README.md
├── favicon.svg
├── mint.json
├── examples
│   ├── first_agent.mdx
│   ├── agent_with_tools.mdx
│   └── agent_with_object_response.mdx
├── installation.mdx
├── quickstart.mdx
├── development.mdx
├── introduction.mdx
└── logo
    ├── dark.svg
    └── light.svg

/images/how.png:
--------------------------------------------------------------------------------
 https://raw.githubusercontent.com/melvincarvalho/Docs-1/main/images/how.png
--------------------------------------------------------------------------------
/images/checks-passed.png:
--------------------------------------------------------------------------------
 https://raw.githubusercontent.com/melvincarvalho/Docs-1/main/images/checks-passed.png
--------------------------------------------------------------------------------
/concepts/tools.mdx:
--------------------------------------------------------------------------------
---
title: "Tools"
description: "Connect your agent to real-world systems, gather information, and take actions."
icon: "toolbox"
---
--------------------------------------------------------------------------------
/snippets/snippet-intro.mdx:
--------------------------------------------------------------------------------
One of the core principles of software development is DRY (Don't Repeat
Yourself). This is a principle that applies to documentation as
well. If you find yourself repeating the same content in multiple places, you
should consider creating a custom snippet to keep your content in sync.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Mintlify Starter Kit

Click on `Use this template` to copy the Mintlify starter kit. The starter kit contains examples including:

- Guide pages
- Navigation
- Customizations
- API Reference pages
- Use of popular components

### Development

Install the [Mintlify CLI](https://www.npmjs.com/package/mintlify) to preview the documentation changes locally. To install, use the following command:

```
npm i -g mintlify
```

Run the following command at the root of your documentation (where `mint.json` is located):

```
mintlify dev
```

### Publishing Changes

Install our GitHub App to automatically propagate changes from your repo to your deployment. Changes will be deployed to production automatically after pushing to the default branch. Find the link to install on your dashboard.

#### Troubleshooting

- Mintlify dev isn't running - Run `mintlify install` to re-install dependencies.
- Page loads as a 404 - Make sure you are running in a folder with `mint.json`
--------------------------------------------------------------------------------
/favicon.svg:
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
/concepts/memory.mdx:
--------------------------------------------------------------------------------
---
title: "Memory"
description: "Create a history for your agents"
icon: "memory"
---

## Overview

Memory is crucial for agents to understand the flow of events and their causes. If you want your agent to be aware of the overall application's progress, which can be beneficial in certain situations, you can activate Memory. Additionally, if you want your agent to maintain context continuity even when the application is closed and reopened, persistent memory solves this need.

When you activate Memory, the conversation history is saved in the current directory under your agent's ID. You can continue that history by providing the same agent ID when creating a new agent.

## Enabling Memory

To activate Memory, set the memory parameter in the AgentConfiguration class to True and specify an agent ID. This way, the agent persistently saves its history and gains the opportunity to view events from a broader perspective.

```python
from upsonic import Agent

agent_id = "product_manager_agent" # Setting an agent id

product_manager_agent = Agent(
    "Marketing Manager",

    agent_id_=agent_id, # Setting agent id
    memory=True, # Enabling the memory
)

```
--------------------------------------------------------------------------------
/concepts/direct_llm_call.mdx:
--------------------------------------------------------------------------------
---
title: 'Direct LLM Call'
description: "Don't wait for reasoning on basic jobs."
icon: angles-right
---

## What is a Direct LLM Call?

As the capacity of LLMs increases, the scope of what they can accomplish in a single request also expands significantly. Previously, LLMs were limited to making just a few tool calls, but now they can engage in deep thinking and perform hundreds of tool calls. In the Upsonic framework, we assess this situation as follows:

- Agents are LLMs that are specifically characterized and focused on a particular domain.
- Use agents when they need to know the organization's objective, goals, and rules.

However, apart from these two cases, there are many other cases that can be handled by an LLM directly. For this reason, we added a feature called Direct LLM Call.

## Making a Direct LLM Call

To send a task directly to an LLM without any layers in between, use the Direct object. It works with the same interface as agents, making it easy to switch or design according to your needs.
22 | 23 | ```python 24 | from upsonic import Task, Direct 25 | 26 | task = Task("Create History paper of LLMs") 27 | 28 | 29 | Direct.print_do(task) 30 | # or Direct.do(task) 31 | 32 | ``` 33 | 34 | 35 | -------------------------------------------------------------------------------- /mint.json: -------------------------------------------------------------------------------- 1 | { 2 | "$schema": "https://mintlify.com/schema.json", 3 | "name": "Upsonic", 4 | "logo": { 5 | "dark": "/logo/dark.svg", 6 | "light": "/logo/light.svg" 7 | }, 8 | "favicon": "/favicon.svg", 9 | "colors": { 10 | "primary": "#A6FF00", 11 | "light": "#A6FF00", 12 | "dark": "#1D2812", 13 | "anchors": { 14 | "from": "#1D2812", 15 | "to": "#A6FF00" 16 | } 17 | }, 18 | "modeToggle": { 19 | "default": "dark", 20 | "isHidden": true 21 | }, 22 | "topbarLinks": [ 23 | { 24 | "name": "Community", 25 | "url": "https://discord.gg/dNKGm4dfnR" 26 | } 27 | ], 28 | "topbarCtaButton": { 29 | "type": "github", 30 | "url": "https://github.com/Upsonic/Upsonic" 31 | }, 32 | "tabs": [], 33 | "anchors": [], 34 | "navigation": [ 35 | { 36 | "group": "Get Started", 37 | "pages": [ 38 | "introduction", 39 | "installation", 40 | "quickstart" 41 | ] 42 | }, 43 | { 44 | "group": "Examples", 45 | "pages": [ 46 | "examples/first_agent", 47 | "examples/agent_with_tools", 48 | "examples/agent_with_object_response" 49 | ] 50 | }, 51 | { 52 | "group": "Concepts", 53 | "pages": [ 54 | "concepts/task", 55 | "concepts/agent", 56 | "concepts/knowledge_base", 57 | "concepts/memory", 58 | "concepts/direct_llm_call", 59 | "concepts/llm_support", 60 | "concepts/tools", 61 | "concepts/mcp_tools" 62 | ] 63 | } 64 | ], 65 | "footerSocials": { 66 | "x": "https://x.com/UpsonicAI", 67 | "github": "https://github.com/Upsonic", 68 | "linkedin": "https://www.linkedin.com/company/getupsonic/" 69 | } 70 | } -------------------------------------------------------------------------------- /concepts/llm_support.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: 'LLM Support' 3 | description: 'Use variously llms to handle your agents and tasks.' 4 | icon: microchip-ai 5 | --- 6 | 7 | ## Overview 8 | 9 | The Upsonic framework can use .env variable files or environment variables for LLM support. Once you provide the keys for various services, you can easily select the LLM by using the model parameter within the agents. 10 | 11 | The supported LLMs are: 12 | 13 | * **OpenAI** 14 | 15 | * openai/gpt-4o 16 | 17 | * openai/o3-mini 18 | 19 | * **Azure** 20 | 21 | * azure/gpt-4o 22 | 23 | * **Anthropic** 24 | 25 | * claude/claude-3-5-sonnet 26 | 27 | * **AWS Bedrock** 28 | 29 | * bedrock/claude-3-5-sonnet 30 | 31 | * **DeepSeek** 32 | 33 | * deepseek/deepseek-chat 34 | 35 | ## Setting up .env 36 | 37 | To use these LLMs, an example .env variable file is as follows. You can adjust the variables according to the LLM you want to use and fill in only the ones required. This .env file must be located in your working directory. 
38 | 39 | ``` 40 | 41 | # OpenAI 42 | OPENAI_API_KEY="sk-***" 43 | 44 | # Anthropic 45 | ANTHROPIC_API_KEY="sk-***" 46 | 47 | # DeepSeek 48 | DEEPSEEK_API_KEY="sk-**" 49 | 50 | # AWS Bedrock 51 | AWS_ACCESS_KEY_ID="**" 52 | AWS_SECRET_ACCESS_KEY="***" 53 | AWS_REGION="**-**" 54 | 55 | # Azure 56 | AZURE_OPENAI_ENDPOINT="https://**.com/" 57 | AZURE_OPENAI_API_VERSION="****-**-**" 58 | AZURE_OPENAI_API_KEY="***" 59 | 60 | ``` 61 | 62 | ## Using an specific model in Agent 63 | 64 | Using the model parameter, you can easily select which LLM each agent will use. You can refer to the example below: 65 | 66 | ```python 67 | from upsonic import Agent 68 | 69 | product_manager_agent = Agent( 70 | "Product Manager", 71 | 72 | model="openai/gpt-4o" # Specify the model 73 | ) 74 | 75 | ``` -------------------------------------------------------------------------------- /examples/first_agent.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Your First Agent" 3 | description: "Start with here to start to Upsonic" 4 | icon: "1" 5 | --- 6 | 7 | ## Overview 8 | 9 | In this example, we will create a sample agent, and while creating this agent, we will have the opportunity to analyze and learn Upsonic's fundamental mechanisms. 10 | 11 | Agents are like electronic employees that perform your tasks for you. Properly positioning them allows you to allocate time for more important and high-value tasks instead of spending time on many time-consuming and low-value-added tasks. 12 | 13 | ## Basic Agent 14 | 15 | ```python basic_agent.py 16 | from upsonic import Agent, Task 17 | 18 | task = Task("Do an in-depth analysis of US history") 19 | 20 | agent = Agent("Historian") 21 | 22 | agent.print_do(task) 23 | 24 | ``` 25 | 26 | To run the agent, install dependencies and export your OPENAI\_API\_KEY. 27 | 28 | 29 | 30 | 31 | ```bash Mac 32 | python3 -m venv .venv 33 | source .venv/bin/activate 34 | 35 | ``` 36 | 37 | ```bash Windows 38 | python3 -m venv aienv 39 | aienv/scripts/activate 40 | 41 | ``` 42 | 43 | 44 | 45 | 46 | 47 | ```bash Mac 48 | pip install -U upsonic 49 | ``` 50 | 51 | ```bash Windows 52 | pip install -U upsonic 53 | ``` 54 | 55 | 56 | 57 | 58 | 59 | 60 | ```bash .env 61 | OPENAI_API_KEY=sk-*** 62 | ``` 63 | 64 | 65 | 66 | 67 | ```bash 68 | python basic_agent.py 69 | ``` 70 | 71 | -------------------------------------------------------------------------------- /examples/agent_with_tools.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Agent with Tools" 3 | description: "Connect tools to your agent to interact with real world." 4 | icon: "2" 5 | --- 6 | 7 | ## Overview 8 | 9 | In this example, we will observe how to easily complete a Task that requires the use of tools with Upsonic agents. 10 | 11 | Tools are truly the best method for connecting agents to the real world. 12 | 13 | ## Agent with tools 14 | 15 | This agent will read and fetch the latest OpenAI developments from the internet. 16 | 17 | ```python agent_with_tools.py 18 | from upsonic import Agent, Task 19 | from upsonic.client.tools import Search # Importing Search Tool 20 | 21 | task = Task( 22 | "Find the latest OpenAI developments on the internet.", 23 | tools=[Search] # Adding Search tool to the Task 24 | ) 25 | 26 | agent = Agent("Reporter") 27 | 28 | agent.print_do(task) 29 | 30 | ``` 31 | 32 | 33 | To run the agent, install dependencies and export your OPENAI\_API\_KEY. 
```bash Mac
python3 -m venv .venv
source .venv/bin/activate

```

```bash Windows
python3 -m venv aienv
aienv/scripts/activate

```

```bash Mac
pip install -U upsonic
```

```bash Windows
pip install -U upsonic
```

```bash .env
OPENAI_API_KEY=sk-***
```

```bash
python agent_with_tools.py
```

--------------------------------------------------------------------------------
/installation.mdx:
--------------------------------------------------------------------------------
---
title: "Installation"
description: "Get started with Upsonic - Install, configure, and build your first AI Agent"
icon: wrench
---

**Python Version Requirements**

Upsonic requires `Python >=3.10`. Here's how to check your version:

```bash
python3 --version
```

If you need to update Python, visit [python.org/downloads](https://python.org/downloads)

# Installing Upsonic

Now let's get you set up! 🚀

Install Upsonic with all recommended tools using either method:

```shell Terminal
pip install upsonic
```

If you have an older version of Upsonic installed, you can upgrade it:

```shell Terminal
pip install --upgrade upsonic
```

Skip this step if you're doing a fresh installation.

Check your installed versions:

```shell Terminal
pip freeze | grep upsonic
```

You should see something like:

```markdown Output
upsonic==X.X.X
```

Installation successful! You're ready to create your first Agent and Task.

## Next Steps

Follow our quickstart guide to create your first Upsonic agent and get hands-on experience.

Connect with other developers, get help, and share your Upsonic experiences.

--------------------------------------------------------------------------------
/concepts/mcp_tools.mdx:
--------------------------------------------------------------------------------
---
title: 'MCP Tools'
description: 'An open protocol that enables seamless integration between LLM applications and external data sources and tools.'
icon: plug
---

## What is the Model Context Protocol (MCP)?

The Model Context Protocol (MCP) is an open protocol that enables seamless integration between LLM applications and external data sources and tools.

While using Upsonic, you can directly use tools developed with MCP (MCP servers). This way, you can create various agents and handle your tasks using both official tools maintained by companies and community-developed tools, giving you access to an extensive tool ecosystem.

## Server Lists
There are several MCP server lists where you can browse available servers and learn how to use them. Some of these are:

- https://glama.ai/mcp/servers
- https://www.mcp.run/
- https://smithery.ai/

## Server Types

- **NPX based**: You need to install Node.js to use these tools.
- **UVX**: These are Python-based tools; you don't need to install anything else.
- **Docker**: If you have a Docker installation, you can easily run these tools with strong isolation.

## Connecting an Example MCP Server

Upsonic uses a structure that involves class creation for connecting to MCP servers. This structure allows you to easily create multiple integrations and use them whenever needed.

In this example, we will integrate the HackerNews MCP:

```python
# Hackernews MCP Server
class HackerNewsMCP:
    command = "uvx"
    args = ["mcp-hn"]

task = Task(
    "Get the latest technical news",
    tools=[HackerNewsMCP]
)
```

and the GitHub MCP:

```python
# GitHub MCP Server - Requires Node.js
class GitHubMCP:
    command = "npx"
    args = ["-y", "@modelcontextprotocol/server-github"]
    env = {
        "GITHUB_PERSONAL_ACCESS_TOKEN": ""
    }

task = Task(
    "Check the repositories",
    tools=[GitHubMCP]
)
```

--------------------------------------------------------------------------------
/examples/agent_with_object_response.mdx:
--------------------------------------------------------------------------------
---
title: 'Agent with Object Response'
description: 'Get programmable results from your agent.'
icon: "3"
---

## Overview

In this example, we will fetch the latest OpenAI developments again, but this time we will receive the results as a structured, programmable object instead of plain text.

## Agent with Object Response

This agent will do the same thing as the agent in the previous example, fetching OpenAI developments, but this time it will bring back the results in a format we desire: a programmable and usable output.

```python agent_with_object_response.py
from upsonic import Agent, Task
from upsonic.client.tools import Search

from upsonic import ObjectResponse # Importing Object Response

class ANew(ObjectResponse):
    title: str
    content: str

class News(ObjectResponse):
    news: list[ANew] # We want to get a list of ANew objects

task = Task(
    "Find the latest OpenAI developments on the internet.",
    tools=[Search],
    response_format=News # Specifying the response format that we want
)

agent = Agent("Reporter")

agent.print_do(task)

```

When you want to use the news:

```python agent_with_object_response.py
result = task.response

for each_new in result.news:
    print()
    print("Title:", each_new.title)
    print("Content:", each_new.content)

```

To run the agent, install dependencies and export your OPENAI\_API\_KEY.

```bash Mac
python3 -m venv .venv
source .venv/bin/activate

```

```bash Windows
python3 -m venv aienv
aienv/scripts/activate

```

```bash Mac
pip install -U upsonic
```

```bash Windows
pip install -U upsonic
```

```bash .env
OPENAI_API_KEY=sk-***
```

```bash
python agent_with_object_response.py
```

--------------------------------------------------------------------------------
/quickstart.mdx:
--------------------------------------------------------------------------------
---
title: 'Quickstart'
description: 'Build your first AI agent with Upsonic in under 2 minutes'
icon: rocket
---

## Build your first Upsonic Agent

Let's create a simple agent that is an expert on our company and takes on the role of Product Manager.
Before we proceed, make sure you have `Upsonic` installed.
If you haven't installed it yet, you can do so by following the [installation guide](/installation).

Follow the steps below to get your sonics! 🦔

Upsonic supports multiple LLMs. In this example, we will use OpenAI's GPT-4o model, for which we need to set environment variables.

```python Python
import os

os.environ["OPENAI_API_KEY"] = "sk-***"
```

```bash Mac
export OPENAI_API_KEY=sk-***

```

```powershell Windows
setx OPENAI_API_KEY sk-***
```

The task-centric structure is important for creating a programmatic design in agent systems. In Upsonic you can build a task-oriented structure with a task **description**, **tools**, **context**, **knowledge bases**, and more. In this example we will give the `Search` tool to our task. The agent will use the `Search` tool at run time.

```python
from upsonic import Task, Agent
from upsonic.client.tools import Search

task = Task(
    "Research latest news in Anthropic and OpenAI",
    tools=[Search]
)
```

Upsonic has an **automatic characterization** mechanism. It's important to create agents that fit your purpose. In this example we will generate a Product Manager.

```python
agent = Agent("Product Manager")
```

```python
# Running the task
agent.print_do(task)
```

## Connect Any MCP

The Upsonic framework supports starting and connecting to any MCP server for your purposes. Model Context Protocol servers are varied and are maintained by companies and the community. Upsonic supports all MCP servers.

* [List of mcp servers](https://glama.ai/mcp/servers?attributes=)

At this step you apply the `@client.mcp()` decorator to a class with **command**, **args**, and **env**.

```python
# Hackernews MCP Server
class HackerNewsMCP:
    command = "uvx"
    args = ["mcp-hn"]
    # env = {"": ""}


task1 = Task(
    "Research latest news in Anthropic and OpenAI",
    tools=[HackerNewsMCP]
)
```
--------------------------------------------------------------------------------
/concepts/knowledge_base.mdx:
--------------------------------------------------------------------------------
---
title: "Knowledge Base"
icon: "book"
description: "Give all the required information to the LLM"
---

## What is a Knowledge Base?

The Knowledge Base feature in the Upsonic framework serves as a critical component that supplies essential task-related data to the LLM during its analysis and processing operations. By providing relevant information to either task-performing agents or direct LLM calls, this feature significantly enhances the probability of successful task completion.

A notable advantage of the Knowledge Base feature is its seamless integration with context compression, optimizing the handling of large datasets. The framework implements this functionality through two distinct types of knowledge bases:

* Knowledge bases that directly provide data to be kept and analyzed alongside each prompt.

* Knowledge bases that exist as tools that the LLM can search when needed.
While both types have their own benefits, the second is especially useful in cases requiring RAG (Retrieval-Augmented Generation) and when dealing with large or unstructured data, since the LLM determines the search query by itself and therefore returns more successful results.

Key benefits of using Knowledge:

* Enhance agents for domain-specific situations

* Enhance decisions with real-world data

## Creating a Knowledge Base

To create a knowledge base, it's sufficient to import the KnowledgeBase class directly. Then we create an object of this class and add it to the context list within the Task. KnowledgeBase works with every place where context can be used.

```python
from upsonic import KnowledgeBase

resume_list = KnowledgeBase(
    files=["resume1.pdf", "resume2.pdf"]
)
```

For file-based knowledge sources, make sure to place your files in the current working directory. You can also use a full path.

## Putting a Knowledge Base into a Task

You can easily add the KnowledgeBase object you created to any task's context list. The Agent or LLM will evaluate these contexts while performing operations.

```python
from upsonic import Task

task1 = Task(
    "Summarize the resumes",

    context=[resume_list] # Adding Knowledge Base to the task
)

```

## Supported Formats

* PDF

* PowerPoint

* Word

* Excel

* Images

* Audio

* HTML

* CSV

* JSON

* XML

* ZIP

* STRING

--------------------------------------------------------------------------------
/development.mdx:
--------------------------------------------------------------------------------
---
title: 'Development'
description: 'Preview changes locally to update your docs'
---

**Prerequisite**: Please install Node.js (version 19 or higher) before proceeding.

Follow these steps to install and run Mintlify on your operating system:

**Step 1**: Install Mintlify:

```bash npm
npm i -g mintlify
```

```bash yarn
yarn global add mintlify
```

**Step 2**: Navigate to the docs directory (where the `mint.json` file is located) and execute the following command:

```bash
mintlify dev
```

A local preview of your documentation will be available at `http://localhost:3000`.

### Custom Ports

By default, Mintlify uses port 3000. You can customize the port Mintlify runs on by using the `--port` flag. To run Mintlify on port 3333, for instance, use this command:

```bash
mintlify dev --port 3333
```

If you attempt to run Mintlify on a port that's already in use, it will use the next available port:

```md
Port 3000 is already in use. Trying 3001 instead.
```

## Mintlify Versions

Please note that each CLI release is associated with a specific version of Mintlify.
If your local website doesn't align with the production version, please update the CLI: 51 | 52 | 53 | 54 | ```bash npm 55 | npm i -g mintlify@latest 56 | ``` 57 | 58 | ```bash yarn 59 | yarn global upgrade mintlify 60 | ``` 61 | 62 | 63 | 64 | ## Validating Links 65 | 66 | The CLI can assist with validating reference links made in your documentation. To identify any broken links, use the following command: 67 | 68 | ```bash 69 | mintlify broken-links 70 | ``` 71 | 72 | ## Deployment 73 | 74 | 75 | Unlimited editors available under the [Pro 76 | Plan](https://mintlify.com/pricing) and above. 77 | 78 | 79 | If the deployment is successful, you should see the following: 80 | 81 | 82 | 83 | 84 | 85 | ## Code Formatting 86 | 87 | We suggest using extensions on your IDE to recognize and format MDX. If you're a VSCode user, consider the [MDX VSCode extension](https://marketplace.visualstudio.com/items?itemName=unifiedjs.vscode-mdx) for syntax highlighting, and [Prettier](https://marketplace.visualstudio.com/items?itemName=esbenp.prettier-vscode) for code formatting. 88 | 89 | ## Troubleshooting 90 | 91 | 92 | 93 | 94 | This may be due to an outdated version of node. Try the following: 95 | 1. Remove the currently-installed version of mintlify: `npm remove -g mintlify` 96 | 2. Upgrade to Node v19 or higher. 97 | 3. Reinstall mintlify: `npm install -g mintlify` 98 | 99 | 100 | 101 | 102 | Solution: Go to the root of your device and delete the \~/.mintlify folder. Afterwards, run `mintlify dev` again. 103 | 104 | 105 | 106 | Curious about what changed in the CLI version? [Check out the CLI changelog.](https://www.npmjs.com/package/mintlify?activeTab=versions) 107 | -------------------------------------------------------------------------------- /concepts/client.mdx: -------------------------------------------------------------------------------- 1 | --- 2 | title: "Client" 3 | description: "Scale your Agent infrastructure easily." 4 | icon: "server" 5 | --- 6 | 7 | ## Overview 8 | 9 | The Upsonic framework leverages client-server architecture, offering significant advantages in deployment flexibility and computational capabilities. This architectural design enables the runtime environment to operate independently from the main environment, allowing for enhanced resource utilization and scalable task execution across multiple agents. 10 | 11 | Furthermore, the framework provides comprehensive operational management features, including seamless scaling of Agentic operations, robust error tracking mechanisms, and enhanced security through system isolation. These capabilities ensure reliable and secure execution of complex agent-based tasks while maintaining system integrity and performance optimization. 12 | 13 | ## Creating LocalServer 14 | 15 | The Upsonic framework has an automatic system to easily set up the client-server architecture locally. This way, you can safely and automatically use this architecture even on a small scale and easily complete your tasks. The LocalServer will start securely and shut down safely when agent operations are complete. Debugging and all other processes are managed compatibly within the client system. 16 | 17 | ```python 18 | from upsonic import UpsonicClient, ClientConfig 19 | 20 | # Create Client 21 | client = UpsonicClient("localserver") 22 | ``` 23 | 24 | ## Setting Configurations 25 | 26 | The Upsonic Framework offers comprehensive multi-provider support, enabling integration with various Large Language Model (LLM) providers and their associated configurations. 
Configuration management is streamlined through the ClientConfig class, which allows users to create a centralized configuration object. This object can store multiple API keys, model provider settings, and other essential parameters.

Moreover, the framework provides flexibility in model selection through the DEFAULT\_LLM\_MODEL setting, allowing users to seamlessly switch between different language models based on their specific requirements. This adaptable architecture ensures that users can easily optimize their applications by selecting the most appropriate LLM provider and model configuration for their use case.

The Upsonic framework implements intelligent configuration management by automatically checking environment variables when explicit configurations are not provided. For instance, if an OPENAI\_API\_KEY is present in the environment variables, the framework will automatically detect and utilize it without requiring manual configuration in ClientConfig.

This feature streamlines the setup process and provides additional flexibility in managing sensitive credentials. While the ClientConfig class offers manual configuration options, developers can leverage environment variables for credential management, adhering to security best practices and simplifying deployment across different environments.

```python
my_settings = ClientConfig(
    # OpenAI Config
    OPENAI_API_KEY="sk-*****",

    # Anthropic Config
    ANTHROPIC_API_KEY="sk-*****",

    # Azure OpenAI Config
    AZURE_OPENAI_ENDPOINT="https://",
    AZURE_OPENAI_API_VERSION="2024-06-01",
    AZURE_OPENAI_API_KEY="*****",

    # AWS Config
    AWS_ACCESS_KEY_ID="*****",
    AWS_SECRET_ACCESS_KEY="*****",
    AWS_REGION="us-east-2",

    # DeepSeek Config
    DEEPSEEK_API_KEY="sk-*****",

    # Model Selection
    DEFAULT_LLM_MODEL="openai/gpt-4o",
)


client.config(my_settings)

```
--------------------------------------------------------------------------------
/introduction.mdx:
--------------------------------------------------------------------------------
---
title: Introduction
description: "Build AI agent teams that handle your tasks and beyond."
icon: handshake
---

# What is Upsonic?

**A task-oriented AI agent framework for digital workers and vertical AI agents.**

Upsonic offers a cutting-edge, enterprise-ready framework where you can orchestrate LLM calls, agents, and computer use to complete tasks cost-effectively.

It provides the reliable agents, scalability, and task-oriented structure you need to complete real-world cases.

## How Upsonic Works

![](/images/how.png)

| Component | Description | Key Features |
| ---------------------- | ------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------- |
| Tasks | The job we want to complete | - Have clear objectives<br/>- Use specific tools<br/>- Feed into a larger process<br/>- Produce actionable results |
| Agents | LLMs characterized like a real person | - Actions over tools<br/>- Self-reflection<br/>- Memory<br/>- Context compression |
| Secure Runtime | Isolated environment to run agents | - On-prem<br/>- Cloud<br/>- Customization |
| Model Context Protocol | A tool standard for LLMs, supported by companies and communities | - Wide range of tool support |

## Key Features

Easily complete the tasks you will accomplish and run them in various ways to get results. Focus on the tasks, not the process.

Share your company's URL and objective, then input a job title for the agent. The Upsonic framework will generate a persona and assign tasks accordingly.

Directly integrate with a comprehensive tool pool developed by the community and companies. Achieve stability with official tools.

The most critical components are internally positioned on the server side, allowing you to deploy the server via Docker and perform a lightweight integration with your application on the client side in a stateless manner.

If the task you want to have done is simple and doesn't require sub-tasks, you don't need to spend time with agents. You can directly make an LLM call and get the results instantly.

When working with LLMs, to get more refined results, the responses should be programmatic. In this regard, you can define how you want the response by specifying it as a class and receiving it as an object.

## Next Steps

Follow our installation guide to install Upsonic into your environment.

Follow our quickstart guide to create your first Upsonic agent and get hands-on experience.

--------------------------------------------------------------------------------
/logo/dark.svg:
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
/logo/light.svg:
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
/concepts/agent.mdx:
--------------------------------------------------------------------------------
---
title: 'Agent'
description: 'Generate agents that belong to your company.'
icon: user-ninja
---

## Overview

The Upsonic framework features a sophisticated AgentConfiguration system that optimizes task completion through structured agent setup. This configuration mechanism creates a simulated corporate environment where agents operate with defined roles and responsibilities, ensuring focused task execution. By leveraging job title parameters, the system automatically establishes appropriate decision-making frameworks and performance objectives for each agent, streamlining the workflow process.

AgentConfigurations implement a reusable architecture that enhances system efficiency and flexibility. For instance, in server analysis scenarios, you can deploy a software engineer agent that the framework can dynamically assign to relevant tasks. This modular approach not only promotes code reusability but also optimizes the overall system's performance through intelligent agent allocation.
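As a minimal sketch of this reuse (the job title and task descriptions below are only illustrative), a single characterized agent can be handed several tasks in turn:

```python
from upsonic import Agent, Task

# One characterized agent, reused across related tasks
engineer_agent = Agent("Software Engineer")

tasks = [
    Task("Review the server logs for recurring errors"),
    Task("Summarize the findings in a short report"),
]

for task in tasks:
    engineer_agent.print_do(task)  # the same agent handles each task in turn
```

Because the job-specific details live in the tasks rather than in the agent, the same agent object can be reused wherever its role fits.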
## Creating an Agent

The AgentConfiguration class serves as a fundamental component in ensuring optimal task execution within the system. Given its critical role, developers should allocate sufficient time to properly configure and fine-tune this class. Additionally, it is essential to conduct comprehensive testing across various configuration scenarios to validate performance and behavior under different operational conditions.

```python
from upsonic import Agent

product_manager_agent = Agent(
    "Product Manager",

    company_url="https://upsonic.ai",
    company_objective="Developing an AI Agent Framework",
)
```

## Specify LLM

To specify which LLM the agent will use, it's sufficient to use the model parameter directly. You can check the LLM Support section to see all supported LLMs.

```python
product_manager_agent = Agent(
    "Product Manager",

    model="openai/gpt-4o"
)
```

## Agent Attributes

Agents are equipped with supplementary features designed to enhance their performance and increase success rates during task execution. These configurable capabilities can be dynamically adjusted throughout the task lifecycle, allowing for real-time optimization of the agent's processing capacity. By fine-tuning these features, users can significantly improve the probability of successful task completion while meeting specific operational requirements.

| Attribute | Parameters | Type | Description |
| ------------------------------- | ------------------- | --------- | ------------------------------------------------------------------ |
| job\_title | `job_title` | `str` | The job title of the Agent. |
| company\_url *(Optional)* | `company_url` | `str` | The URL of your company |
| company\_objective *(Optional)* | `company_objective` | `str` | The objective of your company |
| Name *(Optional)* | `name` | `str` | The name of the human the Agent represents |
| Contact *(Optional)* | `contact` | `str` | The contact info of the human the Agent represents |
| Memory *(Optional)* | `memory` | `boolean` | Persistent memory keyed by the agent ID (Default: False) |
| Reflection *(Optional)* | `reflection` | `boolean` | Reflection mode for the agent (Default: False) |
| Compress Context *(Optional)* | `compress_context` | `boolean` | Compress the context to fit the LLM context length (Default: True) |
| model *(Optional)* | `model` | `str` | The LLM model for the Agent (Default: openai/gpt-4o) |

## Act like a Human

During task execution, LLM-based agents may intentionally leave specific fields incomplete, requiring human input due to inherent LLM operational constraints. However, when provided with personal identifiers such as names and contact details, the system can generate fully personalized content. For example, in email composition tasks, the agent can authentically replicate a given user's identity and include appropriate signature information, creating more natural and contextually appropriate communications.

In this demonstration, we will configure an agent to function as Upsonic's marketing manager, showcasing the framework's role-specific capabilities. Subsequently, we will demonstrate practical implementation by assigning an email composition task to this specialized agent.
63 | 64 | ```python Agent 65 | 66 | 67 | # Generating Task and Agent 68 | task = Task(description="Write an outreach mail", tools=[Search]) 69 | 70 | product_manager_agent = Agent( 71 | "Marketing Manager", 72 | company_url="https://upsonic.ai", 73 | company_objective="To build AI Agent framework that helps people get things done", 74 | ) 75 | 76 | 77 | # Run and see the result 78 | product_manager_agent.do(task) 79 | 80 | result = task.response 81 | 82 | print(result) 83 | ``` 84 | 85 | In the output section, you will observe that the agent strategically leaves certain fields incomplete. This design needs human intervention and customization of critical content elements before finalization. 86 | 87 | ```markdown Result 88 | My name is [Your Name], and I’m a [Your Position] at [Your Company]. I wanted to reach out to introduce [Company Name], a trusted partner in delivering innovative, reliable, and customer-focused technology solutions tailored to your business needs. 89 | 90 | ****** MAIL Body 91 | 92 | Thank you for considering this opportunity. I genuinely look forward to the possibility of working together and supporting your organization in achieving success. 93 | 94 | Warm regards, 95 | [Your Full Name] 96 | [Your Job Title] 97 | [Your Company Name] 98 | [Your Contact Information] 99 | 100 | ``` 101 | 102 | 103 | 104 | By inputting personal identifiers, including name and contact details, we can enhance the agent's functionality to generate complete, uninterrupted content. This configuration eliminates the need for manual field completion, enabling the agent to produce fully automated outputs without placeholder spaces. 105 | 106 | ```python New Agent 107 | 108 | task = Task(description="Write an outreach mail", tools=[Search]) 109 | 110 | product_manager_agent = Agent( 111 | "Marketing Manager", 112 | company_url="https://upsonic.ai", 113 | company_objective="To build AI Agent framework that helps people get things done", 114 | 115 | name="Onur ULUSOY", # Now we have name and contact 116 | contact="onur@upsonic.co" 117 | ) 118 | 119 | # Run and see the result 120 | product_manager_agent.do(task) 121 | 122 | result = task.response 123 | 124 | print(result) 125 | ``` 126 | 127 | ```markdown New Result 128 | 129 | My name is Onur Ulusoy, and I am the Marketing Manager at Upsonic.ai. At Upsonic.ai, we are dedicated to helping businesses like yours harness the power of cutting-edge AI technology to address pain points, streamline operations, and achieve sustainable growth. 130 | 131 | ****** MAIL Body 132 | 133 | Thank you for taking the time to consider Upsonic.ai. I’m confident we can help you unlock your organization’s true potential. Looking forward to hearing from you soon! 134 | 135 | Best regards, 136 | Onur Ulusoy 137 | Marketing Manager | Upsonic.ai 138 | onur@upsonic.co 139 | ``` 140 | 141 | 142 | 143 | ## Memory 144 | 145 | Memory management plays a crucial role in maintaining contextual continuity across distributed tasks and timeframes for agent operations. The framework implements a disk-based persistence mechanism that associates memory storage with unique agent identifiers (IDs). To enable persistent memory functionality, developers must explicitly define and consistently maintain agent IDs across all agent definitions within their implementation. 
146 | 147 | ```python 148 | 149 | agent_id = "product_manager_agent" # Setting an agent id 150 | 151 | product_manager_agent = Agent( 152 | "Marketing Manager", 153 | 154 | agent_id_=agent_id, # Setting agent id 155 | memory=True, # Enabling the memory 156 | ) 157 | ``` 158 | 159 | ## Reflection 160 | 161 | During task execution, agents may occasionally generate inaccurate results or misinterpret task objectives, which can significantly impact system stability and output quality, particularly when critical sub-tasks are involved. To address this challenge, the framework implements a sophisticated reflection feature that enables continuous self-monitoring and quality assurance. 162 | 163 | ```python 164 | product_manager_agent = Agent( 165 | "Marketing Manager", 166 | 167 | reflection=True, # Enabling the reflection 168 | ) 169 | ``` 170 | 171 | ## Compress Context 172 | 173 | One of the main limitations of LLMs is the Context Length Limit, which affects how much data the model can process at once when generating outputs. This directly impacts how well-referenced and accurate the results will be. The Upsonic framework handles this limitation by automatically summarizing resources when they exceed the context limit. When your input data is too large, the system compresses it while keeping the important information, allowing the LLM to continue functioning normally. This removes the need to manually calculate and manage context limits, letting you focus on your actual work. 174 | 175 | ```python 176 | product_manager_agent = Agent( 177 | "Marketing Manager", 178 | 179 | compress_context=True, # Enabling the compress_context 180 | ) 181 | ``` -------------------------------------------------------------------------------- /images/hero-light.svg: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 32 | 33 | 34 | 35 | 36 | 37 | 38 | 39 | 40 | 41 | 42 | 43 | 44 | 45 | 46 | 47 | 48 | 49 | 50 | 51 | 52 | 53 | 54 | 55 | 56 | 57 | 58 | 59 | 60 | 61 | 62 | 63 | 64 | 65 | 66 | 67 | 68 | 69 | 70 | 71 | 72 | 73 | 74 | 75 | 76 | 77 | 78 | 79 | 80 | 81 | 82 | 83 | 84 | 85 | 86 | 87 | 88 | 89 | 90 | 91 | 92 | 93 | 94 | 95 | 96 | 97 | 98 | 99 | 100 | 101 | 102 | 103 | 104 | 105 | 106 | 107 | 108 | 109 | 110 | 111 | 112 | 113 | 114 | 115 | 116 | 117 | 118 | 119 | 120 | 121 | 122 | 123 | 124 | 125 | 126 | 127 | 128 | 129 | 130 | 131 | 132 | 133 | 134 | 135 | 136 | 137 | 138 | 139 | 140 | 141 | 142 | 143 | 144 | 145 | 146 | 147 | 148 | 149 | 150 | 151 | 152 | 153 | 154 | 155 | 156 | -------------------------------------------------------------------------------- /images/hero-dark.svg: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 32 | 33 | 34 | 35 | 36 | 37 | 38 | 39 | 40 | 41 | 42 | 43 | 44 | 45 | 46 | 47 | 48 | 49 | 50 | 51 | 52 | 53 | 54 | 55 | 56 | 57 | 58 | 59 | 60 | 61 | 62 | 63 | 64 | 65 | 66 | 67 | 68 | 69 | 70 | 71 | 72 | 73 | 74 | 75 | 76 | 77 | 78 | 79 | 80 | 81 | 82 | 83 | 84 | 85 | 86 | 87 | 88 | 89 | 90 | 91 | 92 | 93 | 94 | 95 | 96 | 97 | 98 | 99 | 100 | 101 | 102 | 103 | 104 | 105 | 106 | 107 | 108 | 109 | 110 | 111 | 112 | 113 | 114 | 115 | 116 | 117 | 118 | 119 | 120 | 121 | 122 | 123 | 124 | 125 | 126 | 127 | 128 | 129 | 130 | 131 | 132 | 133 
--------------------------------------------------------------------------------
/concepts/task.mdx:
--------------------------------------------------------------------------------
---
title: "Task-Centric Structure"
description: "Design your tasks in a crystal clear format."
icon: "bullseye"
sidebarTitle: "Task"
---

## Overview

The Upsonic framework employs the Task class as its fundamental building block for defining and executing tasks, whether through agents or direct LLM calls. Tasks can be configured with various parameters, tools, and contextual elements, enabling the construction of comprehensive systems. Through context management, you can establish connections between tasks, link them to knowledge bases, or associate them with string values.

**You don't have to create tasks as steps. The agent will automatically generate the necessary steps for you.**

Unlike other systems that employ non-task structures, Upsonic addresses two critical limitations commonly found in alternative solutions. First, it eliminates the restriction of binding one agent to a single task, allowing agents to handle multiple tasks efficiently. Second, it significantly improves programmability. When creating tasks that involve dependencies—such as referenced websites, company information, or competitor data—Upsonic enables you to define these elements programmatically rather than embedding them within individual agents. This approach prevents the need to create separate agents for repetitive operations like competitor analysis, resulting in a more scalable and maintainable system architecture.

## Creating a Task

The Task class is an integral component of Upsonic and can be easily imported into your project. Once imported, you can define multiple tasks by assigning them identifiers such as "task1," "task2," "task3," or any other descriptive names of your choice.

```python
from upsonic import Task

task1 = Task("Do an in-depth analysis of US history")
```

## Task Attributes

Tasks within the framework can range from basic to complex, depending on your specific requirements. The framework is designed with simplicity in mind, requiring only one mandatory parameter: the description. All other parameters are optional, providing flexibility in task configuration.

| Attribute | Parameters | Type | Description |
| ---------------------------- | ----------------- | --------------------------------------------------- | --------------------------------------------------- |
| Description | `description` | `str` | A clear and concise statement of what the task is. |
| Response Format *(Optional)* | `response_format` | `Optional[List[Union[BaseModel, ObjectResponse]]]` | Describe the response you expect. |
| Tools *(Optional)* | `tools` | `Optional[List[Union[MCP, Function]]]` | The tools needed to complete the task. |
| Context *(Optional)* | `context` | `Optional[List[Union[Task, KnowledgeBase, str]]]` | Context that helps accomplish the task. |

## Adding Tools to a Task

Tools play a crucial role in agent functionality by bridging the gap between LLMs and real-world applications such as APIs, services, and search engines.
The framework supports two distinct types of tool implementation. The first option allows you to utilize Python functions directly as tools within Upsonic agents. The second approach leverages the Model Context Protocol (MCP), a standardized protocol for LLM tools that supports multiple platforms including Python, Node.js, and Docker. MCP tools are continuously developed and maintained by both companies and the community, with detailed information available in the "Tools" section.

Integrating a tool into your task is straightforward: simply create a list containing the desired tool's class name and assign it to the "tools" parameter in the Task object.

Let's define a class called MyTools that includes a function named is\_page\_available. This function will perform a simple URL validation check, returning True if the specified URL is accessible, making it useful for verifying web resources.

```python
import requests

class MyTools:
    def is_page_available(url: str) -> bool:
        return requests.get(url).status_code == 200

```

This example demonstrates integration with the HackerNews MCP Server, which provides several functions including get\_stories, get\_story\_info, search\_stories, and get\_user\_info. The MCP framework simplifies the process of connecting our agents to external services, as illustrated by this HackerNews integration.

```python
class HackerNewsMCP:
    command = "uvx"
    args = ["mcp-hn"]
```

Once you've configured your custom tools, they can be directly incorporated into the Task object. The agent will then automatically utilize these tools as needed during task execution.

```python
task = Task(
    "Summarize the latest hackernews stories of today",
    tools=[Search, MyTools] # Specify the tools list
)
```

## Putting a Task into Another Task as Context

The framework supports the combination of multiple tasks to handle complex operations, particularly useful in scenarios requiring deep analysis followed by report generation. While individual tasks may be complex, the true power lies in their interconnection. By creating task chains and linking them through shared context, you can build sophisticated workflows that seamlessly pass information between tasks.

```python
task1 = Task("Do an in-depth analysis of the history of chips")

task2 = Task(
    "Prepare a draft report on Europe's position",
    context=[task1] # Add task1 in a list as task context
)
```

The Upsonic framework explains the context to the LLM for your purposes. You don't have to worry about sharing multiple contexts.

## Giving a Knowledge Base as Context

The framework incorporates a robust KnowledgeBase feature designed to extend LLM models beyond their inherent knowledge limitations. While LLM models are constrained by their training data boundaries, real-world applications frequently demand access to external information sources such as PDFs, documents, and spreadsheets. Through seamless integration with the Context system, the KnowledgeBase feature enables you to efficiently incorporate these external data sources into your tasks, enhancing the model's capability to process and utilize supplementary information.

You can see the supported files and options of the Knowledge Base System [**from here**](/knowledge_base).
In this example, we'll use a PDF and a web link to make a knowledge base.

```python
from upsonic import KnowledgeBase

firm_knowledge_base = KnowledgeBase(
    files=["february_technical_tasks.pdf", "https://upsonic.ai"]
)
```

```python
task = Task(
    "Create this month's overall technical report.",
    context=[firm_knowledge_base] # Adding firm_knowledge_base to task
)
```

## Giving a String as Context

Tasks represent specific objectives that need to be accomplished within the system. These tasks often contain variable elements such as URLs, names, individuals, or topics. Traditionally, all these components would need to be incorporated directly into the prompt, resulting in a non-programmatic approach. This conventional method relies heavily on f-strings or format operations, which significantly limits prompt management capabilities and constrains it within the main execution flow.

In the context system we also support **str as context**. This provides the capability to keep your variables outside of the prompt.

Use this when you need to separate your prompt and variables, as in a city list. You don't need to use it if your prompt doesn't have variables.

```python
city = "New York"

task = Task(
    "Find resources in the city",
    context=[city] # Adding city string as context
)

```

When you need to create tasks for different cities, you can use this method:

```python
cities = ["New York", "San Francisco", "San Jose"]
base_description = "Find resources in the city"
```

```python
tasks = []

for city in cities:
    city_task = Task(
        base_description, # Setting description from base_description
        context=[city] # Setting city string as context
    )
    tasks.append(city_task)
```

## Response Format

The Upsonic framework leverages Pydantic BaseModel compatibility for defining expected results, enabling programmatic handling of agent responses. By specifying a response format, the system returns structured objects rather than raw text, allowing for seamless integration and logic implementation within your application.

For instance, when requesting a list of cities, the system returns a structured list of strings containing city names. This approach emphasizes value-oriented development by abstracting away the implementation details, allowing developers to focus on utilizing the data rather than managing its format and parsing.

You can use Pydantic's BaseModel instead of ObjectResponse. We created this wrapper to make it easier to understand.

```python
from upsonic import ObjectResponse

class TravelResponse(ObjectResponse):
    cities: list[str]
```

```python
task = Task(
    "Create a plan to visit cities in Canada",
    response_format=TravelResponse # Specify the response format
)
```

## Running a Task with a Direct LLM Call

After task definition, you have the flexibility to choose between two runtime options: Agent execution or direct LLM calls. While Agents provide powerful capabilities for complex tasks, they may not always be the most efficient choice.
For simple tasks without potential sub-steps, utilizing an Agent can result in unnecessary time and cost overhead.

Direct LLM calls are particularly advantageous for straightforward operations that don't require sub-task generation or characterization. This runtime selection can be strategically implemented within your Agent application logic, allowing you to optimize performance and resource utilization based on task complexity.

```python
from upsonic import Direct

task = Task("Describe planet Earth")

Direct.print_do(task) # Direct and fast way to complete the task
```

Direct LLM Calls Support **Tools**

Don't sweat it—Direct LLM Calls take care of all your tools. They'll work just like they do with agents, no sweat!

## Running a Task on an Agent

Tasks can be executed through characterized LLM agents, a key feature that enables specialized, task-focused LLM implementations tailored to your company's needs. The agent mechanism is governed by "AgentConfiguration" objects, which serve as the foundation for defining and customizing agent characteristics and behaviors.

This characterization system allows for precise control over how agents process and respond to tasks, ensuring alignment with specific business requirements and objectives. Through these configurations, you can create purpose-built agents that maintain consistency and relevance in their operations while adhering to your organization's specific requirements.

You can view the details of agent creation and customization here.

To create an agent, we're going to import the Agent class and make a basic customization.

```python
from upsonic import Agent

agent = Agent(
    "Product Manager",
    company_url="https://upsonic.ai",
    company_objective="To build AI Agent framework that helps people get things done",
)
```

```python
task = Task("Make a deep analysis of the history of chips")
```

We use the agent's "print_do" function to run the task with it.

```python
agent.print_do(task)
```

## Accessing Task Results

Tasks can be executed through two distinct runners: direct LLM calls or agents. These runners serve as execution mechanisms for task processing and result generation, offering flexibility in implementation. The system supports both parallel processing and multi-client operations, enhancing scalability and performance.

To maintain a controlled and organized infrastructure, all results are stored within the Task object itself. This architectural decision provides a centralized approach to result management, enabling better monitoring, access, and control over task outcomes. Such a design creates a robust and manageable infrastructure that can be effectively tailored to your specific requirements.

When you run the task, the results are stored in Task.response. You can access them directly.
```python
task = Task("Make a deep analysis of the history of chips")

## Run the task

# Direct.do(task)
# agent.do(task)

## After running the task

result = task.response

print(result)
```

Hey, just a heads-up: if you set a response\_format, the task response will be an object of your class.

```python
from upsonic import ObjectResponse

class TravelResponse(ObjectResponse):
    cities: list[str]

task = Task(
    "Generate a plan to visit cities in Canada",
    response_format=TravelResponse # Specify the response format
)

## Run the task

# Direct.do(task)
# agent.do(task)

## After running the task

result = task.response

print("Cities")
for i in result.cities:
    print(i)
```
--------------------------------------------------------------------------------