├── .env.example ├── .gitignore ├── CHANGELOG.md ├── Makefile ├── README.md ├── autogpt-api.png ├── autogpt1-2-desktop.png ├── autogpt1-2-mobile.png ├── backend ├── Dockerfile ├── backend_pre_start.py ├── entrypoint.sh ├── gunicorn_conf.py ├── migrations │ ├── 20230508151038_initial │ │ └── migration.sql │ └── migration_lock.toml ├── poetry.lock ├── prisma_partial │ └── types.py ├── pyproject.toml ├── schema.prisma └── src │ ├── app │ ├── __init__.py │ ├── api │ │ ├── __init__.py │ │ ├── helpers │ │ │ ├── __init__.py │ │ │ ├── bots.py │ │ │ ├── responses.py │ │ │ └── security.py │ │ └── v1 │ │ │ ├── __init__.py │ │ │ ├── api.py │ │ │ └── routes │ │ │ ├── __init__.py │ │ │ ├── bots.py │ │ │ └── sessions.py │ ├── auto_gpt │ │ ├── __init__.py │ │ ├── agent.py │ │ ├── api_manager.py │ │ ├── cli.py │ │ ├── install_plugin_deps.py │ │ ├── main.py │ │ ├── memory.py │ │ └── plugins.py │ ├── clients │ │ ├── __init__.py │ │ ├── auth_backend.py │ │ └── base.py │ ├── core │ │ ├── __init__.py │ │ ├── config.py │ │ ├── globals.py │ │ └── init_logging.py │ ├── helpers │ │ ├── __init__.py │ │ ├── jobs.py │ │ ├── readers.py │ │ └── system.py │ ├── main.py │ ├── run.py │ ├── run_worker.py │ ├── schemas │ │ ├── bot.py │ │ └── enums.py │ └── worker │ │ ├── __init__.py │ │ ├── main.py │ │ └── tasks │ │ ├── __init__.py │ │ └── run_auto_gpt.py │ └── tests │ ├── __init__.py │ ├── api │ └── v1 │ │ └── test_sessions.py │ ├── conftest.py │ └── helpers │ ├── __init__.py │ └── auth_backend_mocker.py ├── docker-compose.yml ├── env-backend.env.example ├── env-frontend.env.example ├── env-guidance.txt ├── env-mysql.env.example ├── frontend ├── .dockerignore ├── .eslintrc.js ├── .gitignore ├── .npmrc ├── .prettierrc ├── Dockerfile ├── README.md ├── ecosystem.config.js ├── nuxt.config.ts ├── package-lock.json ├── package.json ├── src │ ├── app.vue │ ├── assets │ │ └── scss │ │ │ ├── _highlight.scss │ │ │ └── main.scss │ ├── components.d.ts │ ├── components │ │ └── LogViewer.vue │ ├── composables │ │ └── apiManager.ts │ ├── interfaces │ │ ├── api.ts │ │ ├── bot.ts │ │ ├── enums.ts │ │ ├── index.ts │ │ └── select.ts │ ├── middleware │ │ └── auth.global.ts │ ├── pages │ │ └── index.vue │ ├── plugins │ │ └── naive-ui.ts │ ├── public │ │ ├── favicon.ico │ │ └── logo.svg │ └── utils │ │ ├── formatters.ts │ │ └── mergers.ts └── tsconfig.json └── how-to-docker.txt /.env.example: -------------------------------------------------------------------------------- 1 | # used for traefik proxy, it's okay to leave it as is 2 | TRAEFIK_TAG=traefik-public 3 | # stack name, used for traefik, it's okay to leave it as is 4 | STACK_NAME=autogpt 5 | # base url for the app, ie: /autogpt 6 | BASE_URL= 7 | # auto gpt version to use 8 | AUTO_GPT_VERSION=0.4.4 9 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | node_modules 2 | *.log* 3 | .nuxt 4 | .nitro 5 | .cache 6 | .output 7 | config.json 8 | .env 9 | env-*.env 10 | dist 11 | 12 | # User-specific stuff 13 | .idea/**/workspace.xml 14 | .idea/**/tasks.xml 15 | .idea/**/usage.statistics.xml 16 | .idea/**/dictionaries 17 | .idea/**/shelf 18 | .vscode 19 | 20 | # .idea should not be in git 21 | .idea/ 22 | workspaces/ 23 | logs/ 24 | /plugins/ 25 | 26 | gpt-tuner/ 27 | db/*.json 28 | autogpt 29 | -------------------------------------------------------------------------------- /CHANGELOG.md: -------------------------------------------------------------------------------- 1 
| ## auto-gpt-ui: 1.3.2
2 | 
3 | * support 16k in 3.5 models
4 | 
5 | ## auto-gpt-ui: 1.3.1
6 | 
7 | * fix CachedMessageHistory update running summary
8 | 
9 | ## auto-gpt-ui: 1.3.0
10 | 
11 | * support Auto-GPT 0.4.4
12 | * add `AUTO_GPT_VERSION` env variable to configure auto-gpt version
13 | 
14 | ## auto-gpt-ui: 1.2.2
15 | 
16 | * support Auto-GPT 0.4.2
17 | 
18 | ## auto-gpt-ui: 1.2.1
19 | 
20 | * optimise mobile view
21 | 
22 | ## auto-gpt-ui: 1.2.0
23 | 
24 | * mobile UI
25 | * UI one goal support
26 | * plugins support
27 | * support 32k engine
28 | 
29 | ## auto-gpt-ui: 1.1.1
30 | 
31 | * fix task completion
32 | 
33 | ## auto-gpt-ui: 1.1.0
34 | 
35 | * update auto-gpt
36 | * clear notify on auto gpt start
37 | * support DENY_COMMANDS and ALLOW_COMMANDS env variables
38 | 
39 | ## auto-gpt-ui: 1.0.1
40 | 
41 | * use main auto-gpt repo
42 | * make adjustments to inherited auto-gpt methods to account for recent changes
43 | 
44 | ## auto-gpt-ui: 1.0.0
45 | 
46 | * initial
47 | 
--------------------------------------------------------------------------------
/Makefile:
--------------------------------------------------------------------------------
1 | all: build deploy ## Build and deploy the stack
2 | 
3 | .PHONY: build
4 | build:
5 | 	docker-compose build
6 | 
7 | .PHONY: deploy
8 | deploy:
9 | 	docker-compose up -d
10 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # AutoGPT GUI by Neuronic AI 🚀
2 | 
3 | Welcome to the official repository of **Neuronic AI's AutoGPT GUI**. This open-source project provides an intuitive and easy-to-use graphical interface for the powerful [AutoGPT Open Source AI Agent](https://github.com/Significant-Gravitas/Auto-GPT). Driven by GPT-3.5 & GPT-4, AutoGPT can chain together LLM "thoughts", enabling the AI to autonomously achieve whatever goal you set.
4 | 
5 | ### Table of Contents
6 | 1. [Demo-Tutorial](#demo-tutorial)
7 | 2. [About the Project](#about-the-project)
8 | 3. [Features](#features)
9 | 4. [Installation](#installation)
10 | 5. [Contributing](#contributing)
11 | 
12 | 
13 | ### Demo-Tutorial
14 | Various multimedia and content in support of using this code.
15 | 
16 | Additionally, we have a publicly available free demo you can try online; however, it is significantly restricted, with no ability to run code or administrative commands, and you need to bring your own OpenAI API key. The demo is hosted on No|App at [https://noapp.ai](https://noapp.ai/apps/app.html?app=9) and requires you to create a login and add an OpenAI API key. If you are not logged in, you will be forwarded to a login page where you can register with an email address or with Google; signing up with an email address rather than Google requires you to verify your email. Once you have an account, go to the menu, select "unlock AI", and add your OpenAI API key; after this, all the AI services on No|App will be available for you to use or demo.
17 | 
18 | #### Installation
19 | Click the image below for a tutorial on installing AutoGPT-ui.
20 | 
21 | [![Demo Video](https://img.youtube.com/vi/7Z7V03psycI/0.jpg)](https://www.youtube.com/watch?v=7Z7V03psycI)
22 | 
23 | 
24 | #### Release 1.0:
25 | 
26 | Click the image below to watch a demo of our GUI.
27 | 
28 | [![Demo Video](https://img.youtube.com/vi/HcbhtEIK2RE/0.jpg)](https://youtu.be/HcbhtEIK2RE)
29 | 
30 | 
31 | #### Release 1.2
32 | This release adds support for plugins, a "Rapid Goal" entry for fast single-goal engagement (while still supporting ai_settings files for complex instruction sets), and support for GPT-3.5 16k; it also cleans up the interface and adds support for mobile.
33 | 
34 | Release 1.2: Desktop Interface
35 | ![Release 1.2 Desktop interface](https://github.com/neuronic-ai/autogpt-ui/blob/main/autogpt1-2-desktop.png)
36 | 
37 | Release 1.2: Mobile Interface
38 | ![Release 1.2 Mobile interface](https://github.com/neuronic-ai/autogpt-ui/blob/main/autogpt1-2-mobile.png)
39 | 
40 | Release 1.2: API Interface
41 | ![Release 1.2 API interface](https://github.com/neuronic-ai/autogpt-ui/blob/main/autogpt-api.png)
42 | 
43 | ## About the Project
44 | 
45 | 🧠 AutoGPT is an experimental open-source AI Agent application showcasing the capabilities of the GPT-3.5 & GPT-4 language models. One of the first examples of GPT-4 running fully autonomously, AutoGPT pushes the boundaries of what is possible with AI.
46 | 
47 | AutoGPT has fantastic potential that was locked behind a command-line interface, which for many users can be intimidating, difficult, inconvenient and/or limiting. We at Neuronic AI decided to develop a no-nonsense, open-source Graphical User Interface (GUI) for AutoGPT to bring this powerful technology closer to everyone and enable users to engage AutoGPT in a meaningful way. We also take the interface to another level by enabling multi-user support and the ability to integrate with existing websites.
48 | 
49 | Because (1) setting up AutoGPT correctly can sometimes be difficult, with various version conflicts and dependency issues, and (2) AutoGPT changes fast, which creates the potential need to refactor or rebase code between releases, we embed a specific version of AutoGPT along with all of its dependencies and appropriately versioned stacks to support normal operation with our GUI. The initial open-source release 1.0 is built on top of AutoGPT stable 0.4.0.
50 | 
51 | ## Features
52 | We have released 1.0 to the public; we are now working on release 1.1 and will be performing extensive testing to ensure that 1.1 is near production-ready software.
53 | 
54 | #### Release 1.0:
55 | - Built on AutoGPT 0.4.0
56 | - Desktop graphical interface for AutoGPT
57 | - User-specified AI settings file
58 | - Resume stopped processes where they left off
59 | - Set fast and smart engine
60 | - Set max tokens for smart and fast engine
61 | - Set image size for Dalle image generation
62 | - Semi-continuous mode with batch authorization interface
63 | - Workspace management with support for single file/directory or batch upload, download and delete
64 | - Workspace file preview for major file types and best-effort syntax highlighting
65 | - Containerized with API backend, worker and frontend along with supporting containers
66 | 
67 | #### Release 1.2:
68 | - Built on AutoGPT 0.4.4
69 | - Can choose which AutoGPT branch to use on build
70 | - Supports everything that Release 1.0 supports except for the resume function
71 | - Support for Rapid Tasks: execute a single task quickly without any hassle
72 | - Plugins: add more functionality to your interface with support for an array of popular plugins
73 | - Updated desktop interface: leveled-up UI/UX
74 | - Mobile interface: take your AI assistant anywhere with a powerful mobile interface
75 | - Support for GPT-3.5 16k
76 | - Max tokens: many more max-token options
77 | - File sizes shown in workspace management
78 | - API controls and documentation via http://x.x.x.x:8160/docs (see the example after this list)
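For instance, once the stack is up you can exercise the backend API directly. A hedged example: the exact route prefix comes from `settings.API_V1_STR`, which is not shown in this excerpt, so the `/api/v1` prefix below is an assumption, and the call assumes `NO_AUTH` mode (no session cookie required):

```bash
# Fetch the current user's bot (assumes NO_AUTH mode and an /api/v1 route prefix)
curl http://localhost:8160/api/v1/bots/
```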
79 | 
80 | 
81 | ## AI Settings File
82 | 
83 | :page_facing_up: To use the interface, we pass instructions to AutoGPT via an AI settings file. These instructions could be as simple as a "Hello, World!" program or complex enough to write a full-stack Java application.
84 | 
85 | The standard format for instructions includes a name, a role, a set of goals (each denoted by a dash (-) at the beginning of the line), and a budget to limit API consumption.
86 | 
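A minimal example in the standard Auto-GPT 0.4.x `ai_settings.yaml` format, which the backend's `AiSettingsSchema` is presumed to mirror; the name, role, goals, and budget below are purely illustrative:

```yaml
ai_name: DemoGPT
ai_role: An AI that writes and saves small example programs.
ai_goals:
  - Write a "Hello, World!" script in Python
  - Save the script to the workspace as hello.py
api_budget: 1.0
```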
87 | ------------------------
88 | 
89 | ## Fast and Smart Engines
90 | 
91 | :steam_locomotive: With our interface, you can choose the language model for your fast and smart engines. AutoGPT uses the fast model for the AI agent's internal processing, and the smart model for the actual tasks the AI agent performs.
92 | 
93 | Note that base GPT-3.5 can only handle up to about 4,000 tokens (release 1.1 will support GPT-3.5 16k), while the GPT-4 maximum depends on which version of GPT-4 you have access to (8k or 32k). You can use GPT-3.5 for both engines, or GPT-4 for both.
94 | 
95 | ## Workspace Management
96 | 
97 | 💼 With the workspace management interface, you can view, download, or delete any files that are generated, along with uploading files for AutoGPT to work with. It offers preview support for most readable file types, best-effort syntax highlighting, and multi-directory support.
98 | 
99 | 
100 | ## Installation
101 | 
102 | 🔧 The build uses the Nuxt framework, and the stack is containerized and intended to run on Docker. The complete build will deploy the following containers:
103 | 
104 | - auto-gpt-ui_api: Backend API-enabled interface
105 | - auto-gpt-ui_worker: Processes all of our jobs
106 | - auto-gpt-ui_frontend: The GUI
107 | - traefik:v2.9: Web server / reverse proxy
108 | - mysql: Necessary for multi-user support
109 | - redis: Used for maintaining job state
110 | 
111 | The app will be exposed on port 8160.
112 | 
113 | (See the video tutorial in the demo and tutorial section.)
114 | 
115 | 1. Install Ubuntu Server (18.04 or newer)
116 | 2. Install build-essential (sudo apt install build-essential)
117 | 3. Install Docker and Docker Compose (see the how-to-docker.txt file)
118 | 4. Clone the repo down to your local machine
119 | 5. Create .env, env-frontend.env, env-backend.env and env-mysql.env files (use the examples and/or env-guidance.txt for help)
120 | 6. Build the app with "sudo make all"
121 | 7. Check that all your containers are up with "docker ps -a" (if a container is down, check its logs for an indication why: "docker logs container_name_or_id")
122 | 8. Connect with your browser on port 8160: if installed on your local machine, "https://localhost:8160"; otherwise use the IP of the machine it's installed on
123 | 
124 | -------------------
125 | 
126 | Environment variables
127 | Copy the example config files, or create new ones using env-guidance.txt:
128 | 
129 | - cp .env.example .env
130 | - cp env-backend.env.example env-backend.env
131 | - cp env-frontend.env.example env-frontend.env
132 | - cp env-mysql.env.example env-mysql.env
133 | 
134 | --------------------
135 | 
136 | ## Contributing
137 | 🤝 This project exists thanks to all the people who contribute. Big shout out to:
138 | 
139 | - [EncryptShawn](https://github.com/EncryptShawn) - Design, Guidance, Funding, DevOps
140 | - [Yolley](https://github.com/Yolley) - Initial Codebase
141 | 
142 | We welcome contributions! Please see `CONTRIBUTING.md` for details on how to contribute.
143 | 
144 | 
--------------------------------------------------------------------------------
/autogpt-api.png:
--------------------------------------------------------------------------------
 https://raw.githubusercontent.com/neuronic-ai/autogpt-ui/a4d358e20f0b817d0e066a74adbc70f43677e9b8/autogpt-api.png
--------------------------------------------------------------------------------
/autogpt1-2-desktop.png:
--------------------------------------------------------------------------------
 https://raw.githubusercontent.com/neuronic-ai/autogpt-ui/a4d358e20f0b817d0e066a74adbc70f43677e9b8/autogpt1-2-desktop.png
--------------------------------------------------------------------------------
/autogpt1-2-mobile.png:
--------------------------------------------------------------------------------
 https://raw.githubusercontent.com/neuronic-ai/autogpt-ui/a4d358e20f0b817d0e066a74adbc70f43677e9b8/autogpt1-2-mobile.png
--------------------------------------------------------------------------------
/backend/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM nikolaik/python-nodejs:latest
2 | 
3 | # Install browsers
4 | RUN apt-get update && apt-get install -y \
5 |     chromium-driver firefox-esr \
6 |     ca-certificates libmagic1 \
7 |     curl jq wget git ffmpeg
8 | 
9 | # Install the required python packages globally
10 | ENV PATH="$PATH:/root/.local/bin"
11 | ENV PYTHONPATH=/src
12 | ENV PLUGINS_DIR=/plugins
13 | 
14 | # Configure Poetry
15 | RUN poetry config virtualenvs.create false
16 | 
17 | # Install Prisma CLI
18 | RUN npm install -g prisma@4.11.0
19 | ENV PRISMA_BINARY_CACHE_DIR=/usr/lib/
20 | 
21 | # Copy using poetry.lock* in case it doesn't exist yet
22 | COPY ./pyproject.toml ./poetry.lock* /
23 | WORKDIR /
24 | 
25 | RUN poetry run pip install --upgrade pip
26 | RUN poetry install --no-root --only main
27 | RUN mkdir ${PLUGINS_DIR}
28 | RUN mkdir /src
29 | RUN curl -L -o ${PLUGINS_DIR}/Auto-GPT-Plugins.zip https://github.com/Significant-Gravitas/Auto-GPT-Plugins/archive/refs/heads/master.zip
30 | RUN curl -L -o ${PLUGINS_DIR}/AutoGPT_Slack.zip https://github.com/adithya77/Auto-GPT-slack-plugin/archive/refs/heads/main.zip
31 | RUN curl -L -o ${PLUGINS_DIR}/AutoGPT_YouTube.zip https://github.com/Yolley/AutoGPT-YouTube/archive/refs/heads/master.zip
32 | 
33 | ARG AUTO_GPT_VERSION
34 | ENV AUTO_GPT_VERSION ${AUTO_GPT_VERSION}
35 | 
36 | RUN curl -L -o auto_gpt.zip https://github.com/Significant-Gravitas/Auto-GPT/archive/refs/tags/v${AUTO_GPT_VERSION}.zip \
37 |     && unzip auto_gpt.zip \
38 |     && cd "Auto-GPT-${AUTO_GPT_VERSION}" \
39 |     && mv autogpt /src/ \
40 |     && pip install -r requirements.txt \
41 |     && python scripts/check_requirements.py requirements.txt \
42 |     && python scripts/install_plugin_deps.py
43 | 
44 | COPY ./src/ /src/
45 | COPY ./migrations /migrations
46 | COPY ./prisma_partial /prisma_partial
47 | COPY ./schema.prisma /schema.prisma
48 | 
49 | COPY entrypoint.sh /entrypoint.sh
50 | COPY gunicorn_conf.py /gunicorn_conf.py
51 | COPY backend_pre_start.py /backend_pre_start.py
52 | 
53 | RUN chmod +x /entrypoint.sh
54 | 
55 | ENTRYPOINT ["/entrypoint.sh"]
56 | 
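The Dockerfile above embeds a specific Auto-GPT release via the `AUTO_GPT_VERSION` build argument; docker-compose presumably forwards it from the `AUTO_GPT_VERSION` entry in `.env`. As a sketch, building the backend image by hand might look like the following, where the image tag is an assumption:

```bash
# Build the backend image against Auto-GPT 0.4.4 (the tag name is illustrative)
docker build --build-arg AUTO_GPT_VERSION=0.4.4 -t autogpt-ui-backend ./backend
```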
--------------------------------------------------------------------------------
/backend/backend_pre_start.py:
--------------------------------------------------------------------------------
1 | import logging
2 | 
3 | from sqlalchemy import create_engine, text
4 | from sqlalchemy.orm import sessionmaker
5 | from tenacity import after_log, before_log, retry, stop_after_attempt, wait_fixed
6 | 
7 | from app.core import settings
8 | 
9 | logging.basicConfig(level=logging.INFO)
10 | logger = logging.getLogger(__name__)
11 | 
12 | max_tries = 60 * 5  # 5 minutes
13 | wait_seconds = 1
14 | 
15 | # Note: SQLAlchemy 2.x (pinned in pyproject.toml) no longer accepts the old convert_unicode argument
16 | engine = create_engine(str(settings.DATABASE_URL).split("?")[0].replace("mysql://", "mysql+pymysql://"))
17 | 
18 | 
19 | @retry(
20 |     stop=stop_after_attempt(max_tries),
21 |     wait=wait_fixed(wait_seconds),
22 |     before=before_log(logger, logging.INFO),
23 |     after=after_log(logger, logging.WARN),
24 | )
25 | def init() -> None:
26 |     # Try to create a session to check if the DB is awake
27 |     try:
28 |         session = sessionmaker(autocommit=False, autoflush=False, bind=engine)()
29 |         session.execute(text("SELECT 1"))  # SQLAlchemy 2.x requires textual SQL to be wrapped in text()
30 |     except Exception as e:
31 |         logger.error(e, exc_info=True)
32 |         raise e
33 | 
34 | 
35 | def main() -> None:
36 |     logger.info("Initializing service")
37 |     init()
38 |     logger.info("Service finished initializing")
39 | 
40 | 
41 | if __name__ == "__main__":
42 |     main()
43 | 
--------------------------------------------------------------------------------
/backend/entrypoint.sh:
--------------------------------------------------------------------------------
1 | #! /usr/bin/env bash
2 | set -e
3 | 
4 | # Generate the Prisma client (migrations themselves run below, in API mode only)
5 | prisma generate
6 | 
7 | if [ "$1" = "api" ] ; then
8 |     prisma migrate deploy
9 |     if [ -f /src/app/main.py ]; then
10 |         DEFAULT_MODULE_NAME=app.main
11 |     elif [ -f /src/main.py ]; then
12 |         DEFAULT_MODULE_NAME=main
13 |     fi
14 |     MODULE_NAME=${MODULE_NAME:-$DEFAULT_MODULE_NAME}
15 |     VARIABLE_NAME=${VARIABLE_NAME:-app}
16 |     export APP_MODULE=${APP_MODULE:-"$MODULE_NAME:$VARIABLE_NAME"}
17 |     # Prefer a gunicorn conf shipped with the source under /src, falling back to the baked-in /gunicorn_conf.py
18 |     if [ -f /src/gunicorn_conf.py ]; then
19 |         DEFAULT_GUNICORN_CONF=/src/gunicorn_conf.py
20 |     elif [ -f /src/app/gunicorn_conf.py ]; then
21 |         DEFAULT_GUNICORN_CONF=/src/app/gunicorn_conf.py
22 |     else
23 |         DEFAULT_GUNICORN_CONF=/gunicorn_conf.py
24 |     fi
25 |     export GUNICORN_CONF=${GUNICORN_CONF:-$DEFAULT_GUNICORN_CONF}
26 |     export WORKER_CLASS=${WORKER_CLASS:-"uvicorn.workers.UvicornWorker"}
27 |     export LOGGER_CLASS=${LOGGER_CLASS:-"app.core.init_logging.Logger"}
28 |     gunicorn -k "$WORKER_CLASS" --logger-class "$LOGGER_CLASS" -c "$GUNICORN_CONF" "$APP_MODULE"
29 | elif [ "$1" = "worker" ] ; then
30 |     arq app.worker.main.WorkerSettings
31 | else
32 |     echo "Wrong run command, make sure to pass an argument to the entrypoint, available arguments: api, worker"
33 |     exit 126
34 | fi
35 | 
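The entrypoint above selects the container's role from its first argument; docker-compose presumably wires the `api` and `worker` commands to the corresponding services. Run by hand, with the image name carried over from the earlier (assumed) build tag, usage would look like:

```bash
docker run --rm autogpt-ui-backend api     # deploy migrations, then serve the API via gunicorn/uvicorn
docker run --rm autogpt-ui-backend worker  # run the arq background job worker
```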
--------------------------------------------------------------------------------
/backend/gunicorn_conf.py:
--------------------------------------------------------------------------------
1 | import json
2 | import multiprocessing
3 | import os
4 | 
5 | workers_per_core_str = os.getenv("WORKERS_PER_CORE", "1")
6 | max_workers_str = os.getenv("MAX_WORKERS")
7 | use_max_workers = None
8 | if max_workers_str:
9 |     use_max_workers = int(max_workers_str)
10 | web_concurrency_str = os.getenv("WEB_CONCURRENCY", None)
11 | 
12 | host = os.getenv("HOST", "0.0.0.0")
13 | port = os.getenv("PORT", "80")
14 | bind_env = os.getenv("BIND", None)
15 | use_loglevel = os.getenv("LOG_LEVEL", "info")
16 | if bind_env:
17 |     use_bind = bind_env
18 | else:
19 |     use_bind = f"{host}:{port}"
20 | 
21 | cores = multiprocessing.cpu_count()
22 | workers_per_core = float(workers_per_core_str)
23 | default_web_concurrency = workers_per_core * cores
24 | if web_concurrency_str:
25 |     web_concurrency = int(web_concurrency_str)
26 |     assert web_concurrency > 0
27 | else:
28 |     web_concurrency = max(int(default_web_concurrency), 2)
29 |     if use_max_workers:
30 |         web_concurrency = min(web_concurrency, use_max_workers)
31 | accesslog_var = os.getenv("ACCESS_LOG", "-")
32 | use_accesslog = accesslog_var or None
33 | errorlog_var = os.getenv("ERROR_LOG", "-")
34 | use_errorlog = errorlog_var or None
35 | graceful_timeout_str = os.getenv("GRACEFUL_TIMEOUT", "120")
36 | timeout_str = os.getenv("TIMEOUT", "120")
37 | keepalive_str = os.getenv("KEEP_ALIVE", "5")
38 | 
39 | # Gunicorn config variables
40 | loglevel = use_loglevel
41 | workers = web_concurrency
42 | bind = use_bind
43 | errorlog = use_errorlog
44 | # worker_tmp_dir = "/dev/shm"
45 | accesslog = use_accesslog
46 | graceful_timeout = int(graceful_timeout_str)
47 | timeout = int(timeout_str)
48 | keepalive = int(keepalive_str)
49 | 
50 | 
51 | # For debugging and testing
52 | log_data = {
53 |     "loglevel": loglevel,
54 |     "workers": workers,
55 |     "bind": bind,
56 |     "graceful_timeout": graceful_timeout,
57 |     "timeout": timeout,
58 |     "keepalive": keepalive,
59 |     "errorlog": errorlog,
60 |     "accesslog": accesslog,
61 |     # Additional, non-gunicorn variables
62 |     "workers_per_core": workers_per_core,
63 |     "use_max_workers": use_max_workers,
64 |     "host": host,
65 |     "port": port,
66 | }
67 | print(json.dumps(log_data))
68 | 
--------------------------------------------------------------------------------
/backend/migrations/20230508151038_initial/migration.sql:
--------------------------------------------------------------------------------
1 | -- CreateTable
2 | CREATE TABLE `user` (
3 |     `id` BIGINT NOT NULL AUTO_INCREMENT,
4 |     `username` VARCHAR(191) NOT NULL,
5 | 
6 |     UNIQUE INDEX `user_username_key`(`username`),
7 |     PRIMARY KEY (`id`)
8 | ) DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
9 | 
10 | -- CreateTable
11 | CREATE TABLE `bot` (
12 |     `id` BIGINT NOT NULL AUTO_INCREMENT,
13 |     `user_id` BIGINT NOT NULL,
14 |     `fast_engine` VARCHAR(191) NOT NULL,
15 |     `smart_engine` VARCHAR(191) NOT NULL,
16 |     `image_size` INTEGER NOT NULL,
17 |     `fast_tokens` INTEGER NOT NULL,
18 |     `smart_tokens` INTEGER NOT NULL,
19 |     `ai_settings` JSON NOT NULL,
20 |     `worker_message_id` VARCHAR(191) NULL,
21 |     `runs_left` INTEGER NOT NULL DEFAULT 0,
22 |     `created_dt` DATETIME(3) NOT NULL DEFAULT CURRENT_TIMESTAMP(3),
23 |     `updated_dt` DATETIME(3) NOT NULL,
24 |     `is_active` BOOLEAN NOT NULL DEFAULT true,
25 |     `is_failed` BOOLEAN NOT NULL DEFAULT false,
26 | 
27 |     UNIQUE INDEX `bot_user_id_key`(`user_id`),
28 |     PRIMARY KEY (`id`)
29 | ) DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
30 | 
31 | -- AddForeignKey
32 | ALTER TABLE `bot` ADD CONSTRAINT `bot_user_id_fkey` FOREIGN KEY (`user_id`) REFERENCES `user`(`id`) ON DELETE CASCADE ON UPDATE CASCADE;
33 | 
--------------------------------------------------------------------------------
/backend/migrations/migration_lock.toml:
--------------------------------------------------------------------------------
1 | # Please do not edit this file manually
2 | # It should be added in your version-control system (i.e. 
Git) 3 | provider = "mysql" -------------------------------------------------------------------------------- /backend/prisma_partial/types.py: -------------------------------------------------------------------------------- 1 | from prisma.models import Bot 2 | 3 | 4 | Bot.create_partial( 5 | "BotSchema", 6 | include={ 7 | "fast_engine", 8 | "smart_engine", 9 | "image_size", 10 | "fast_tokens", 11 | "smart_tokens", 12 | "ai_settings", 13 | "is_active", 14 | "is_failed", 15 | "runs_left", 16 | }, 17 | ) 18 | Bot.create_partial( 19 | "BotInCreateSchema", 20 | include={ 21 | "fast_engine", 22 | "smart_engine", 23 | "image_size", 24 | "fast_tokens", 25 | "smart_tokens", 26 | "ai_settings", 27 | }, 28 | ) 29 | -------------------------------------------------------------------------------- /backend/pyproject.toml: -------------------------------------------------------------------------------- 1 | [tool.poetry] 2 | name = "auto-gpt-ui-backend" 3 | version = "0.1.0" 4 | description = "" 5 | authors = ["Your Name "] 6 | readme = "README.md" 7 | 8 | [tool.poetry.dependencies] 9 | python = ">=3.11,<3.12" 10 | fastapi = {extras = ["uvicorn"], version = "^0.95.1"} 11 | prisma = "^0.8.2" 12 | aiohttp = "^3.8.4" 13 | loguru = "^0.7.0" 14 | furl = "^2.1.3" 15 | pydantic = {extras = ["dotenv"], version = "^1.10.7"} 16 | sqlalchemy = "^2.0.12" 17 | tenacity = "^8.2.2" 18 | uvicorn = "^0.22.0" 19 | pymysql = "^1.0.3" 20 | regex = "^2023.3.23" 21 | more-itertools = "^9.1.0" 22 | humanize = "^4.6.0" 23 | taskiq = "^0.4.1" 24 | taskiq-redis = "^0.3.0" 25 | anyio = "^3.6.2" 26 | pyyaml = "^6.0" 27 | arq = "^0.25.0" 28 | gunicorn = "^20.1.0" 29 | python-multipart = "^0.0.6" 30 | aiostream = "^0.4.5" 31 | python-magic = "^0.4.27" 32 | ijson = "^3.2.0.post0" 33 | nltk = "^3.8.1" 34 | 35 | 36 | [tool.poetry.group.dev.dependencies] 37 | mypy = "^1.2.0" 38 | black = "^23.3.0" 39 | isort = "^5.12.0" 40 | flake8 = "^6.0.0" 41 | ipython = "^8.13.1" 42 | types-tzlocal = "^4.3.0.0" 43 | async-asgi-testclient = "^1.4.11" 44 | fakeredis = "^2.14.1" 45 | pytest = "^7.3.2" 46 | types-redis = "^4.5.5.2" 47 | pytest-asyncio = "^0.21.0" 48 | pytest-mock = "^3.10.0" 49 | 50 | 51 | [tool.poetry.group.local.dependencies] 52 | agpt = {url = "https://github.com/Significant-Gravitas/Auto-GPT/archive/refs/tags/v0.4.4.zip"} 53 | beautifulsoup4 = ">=4.12.2" 54 | colorama = "0.4.6" 55 | distro = "1.8.0" 56 | openai = "0.27.2" 57 | playsound = "1.2.2" 58 | python-dotenv = "1.0.0" 59 | pyyaml = "6.0" 60 | pypdf2 = "^3.0.1" 61 | python-docx = "^0.8.11" 62 | markdown = "^3.4.3" 63 | pylatexenc = "^2.10" 64 | readability-lxml = "0.8.1" 65 | requests = "^2.31.0" 66 | tiktoken = "0.3.3" 67 | gtts = "2.3.1" 68 | docker = "^6.1.3" 69 | duckduckgo-search = "3.0.2" 70 | google-api-python-client = "^2.88.0" 71 | pinecone-client = "2.2.1" 72 | redis = "^4.5.5" 73 | orjson = "3.8.10" 74 | pillow = "^9.5.0" 75 | selenium = "4.1.4" 76 | webdriver-manager = "^3.8.6" 77 | jsonschema = "^4.17.3" 78 | click = "^8.1.3" 79 | charset-normalizer = ">=3.1.0" 80 | spacy = ">=3.0.0,<4.0.0" 81 | openapi-python-client = "0.13.4" 82 | en-core-web-sm = {url = "https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.5.0/en_core_web_sm-3.5.0-py3-none-any.whl"} 83 | auto-gpt-plugin-template = {git = "https://github.com/Significant-Gravitas/Auto-GPT-Plugin-Template", rev = "0.1.0"} 84 | prompt-toolkit = ">=3.0.38" 85 | numpy = "^1.25.0" 86 | inflection = "^0.5.1" 87 | gitpython = "^3.1.31" 88 | abstract-singleton = "^1.0.1" 89 | atproto = "^0.0.17" 90 
| black = "^23.7.0" 91 | bs4 = "^0.0.1" 92 | build = "^0.10.0" 93 | flake8 = "^6.0.0" 94 | isort = "^5.12.0" 95 | newsapi-python = "^0.2.7" 96 | pandas = "^2.0.3" 97 | pylint = "^2.17.4" 98 | pytest = "^7.4.0" 99 | pytest-cov = "^4.1.0" 100 | python-lorem = "^1.3.0.post1" 101 | python-telegram-bot = "^20.4" 102 | requests-mock = "^1.11.0" 103 | setuptools = "^68.0.0" 104 | tweepy = "4.13.0" 105 | twine = "^4.0.2" 106 | validators = "^0.20.0" 107 | wheel = "^0.40.0" 108 | wolframalpha = "5.0.0" 109 | 110 | [tool.isort] 111 | multi_line_output = 3 112 | include_trailing_comma = true 113 | force_grid_wrap = 0 114 | line_length = 120 115 | 116 | [tool.mypy] 117 | plugins = ["pydantic.mypy"] 118 | 119 | [build-system] 120 | requires = ["poetry-core"] 121 | build-backend = "poetry.core.masonry.api" 122 | -------------------------------------------------------------------------------- /backend/schema.prisma: -------------------------------------------------------------------------------- 1 | // database 2 | datasource db { 3 | provider = "mysql" 4 | url = env("DATABASE_URL") 5 | } 6 | 7 | // generator 8 | generator client { 9 | provider = "prisma-client-py" 10 | recursive_type_depth = 5 11 | partial_type_generator = "prisma_partial/types.py" 12 | enable_experimental_decimal = true 13 | } 14 | 15 | model User { 16 | id BigInt @id @default(autoincrement()) 17 | 18 | username String @unique 19 | bot Bot? 20 | 21 | @@map("user") 22 | } 23 | 24 | model Bot { 25 | id BigInt @id @default(autoincrement()) 26 | 27 | user User @relation(fields: [user_id], references: [id], onDelete: Cascade) 28 | user_id BigInt 29 | 30 | fast_engine String 31 | smart_engine String 32 | image_size Int 33 | fast_tokens Int 34 | smart_tokens Int 35 | ai_settings Json 36 | 37 | worker_message_id String? 
38 |   runs_left Int @default(0)
39 | 
40 |   created_dt DateTime @default(now())
41 |   updated_dt DateTime @updatedAt
42 | 
43 |   is_active Boolean @default(true)
44 |   is_failed Boolean @default(false)
45 | 
46 |   @@unique([user_id])
47 |   @@map("bot")
48 | }
49 | 
--------------------------------------------------------------------------------
/backend/src/app/__init__.py:
--------------------------------------------------------------------------------
1 | from contextlib import asynccontextmanager
2 | 
3 | from fastapi import FastAPI
4 | from fastapi.middleware.cors import CORSMiddleware
5 | 
6 | 
7 | def create_app() -> "FastAPI":
8 |     from prisma import Prisma
9 | 
10 |     from app.api.v1 import api
11 |     from app.core import settings
12 | 
13 |     @asynccontextmanager
14 |     async def lifespan(_app: FastAPI):
15 |         prisma = Prisma(auto_register=True)
16 |         await prisma.connect()
17 |         yield
18 |         await prisma.disconnect()
19 | 
20 |     app = FastAPI(
21 |         lifespan=lifespan,
22 |         title=settings.PROJECT_NAME,
23 |         openapi_url=f"{settings.BASE_URL}/openapi.json",
24 |         version=settings.VERSION,
25 |         redoc_url=f"{settings.BASE_URL}/redoc",
26 |         docs_url=f"{settings.BASE_URL}/docs",
27 |     )
28 |     app.add_middleware(
29 |         CORSMiddleware,
30 |         allow_origins=["*"],
31 |         allow_credentials=True,
32 |         allow_methods=["*"],
33 |         allow_headers=["*"],
34 |     )
35 |     app.include_router(api.router, prefix=settings.API_V1_STR)
36 | 
37 |     return app
38 | 
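`create_app()` above is the application factory; the ASGI entrypoint module `/src/app/main.py` is not shown in this excerpt, but `entrypoint.sh` points gunicorn at `app.main:app`, so `main.py` presumably amounts to the following sketch:

```python
# Hypothetical reconstruction of /src/app/main.py (the file itself is not shown here);
# entrypoint.sh targets "app.main:app", so this module must expose an `app` variable.
from app import create_app

app = create_app()
```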
--------------------------------------------------------------------------------
/backend/src/app/api/__init__.py:
--------------------------------------------------------------------------------
 https://raw.githubusercontent.com/neuronic-ai/autogpt-ui/a4d358e20f0b817d0e066a74adbc70f43677e9b8/backend/src/app/api/__init__.py
--------------------------------------------------------------------------------
/backend/src/app/api/helpers/__init__.py:
--------------------------------------------------------------------------------
 https://raw.githubusercontent.com/neuronic-ai/autogpt-ui/a4d358e20f0b817d0e066a74adbc70f43677e9b8/backend/src/app/api/helpers/__init__.py
--------------------------------------------------------------------------------
/backend/src/app/api/helpers/bots.py:
--------------------------------------------------------------------------------
1 | import shutil
2 | from pathlib import Path
3 | 
4 | from fastapi import Depends, HTTPException, status
5 | from prisma.models import Bot, User
6 | 
7 | from app.api.helpers.security import check_user
8 | from app.auto_gpt.api_manager import GPT_CACHE
9 | from app.core import settings
10 | from app.helpers.jobs import abort_job
11 | 
12 | 
13 | async def get_bot(user: User = Depends(check_user)) -> Bot:
14 |     bot = await Bot.prisma().find_unique(where={"user_id": user.id})
15 |     if not bot:
16 |         raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail="Specified bot does not exist")
17 |     return bot
18 | 
19 | 
20 | async def delete_bot(bot: Bot) -> None:
21 |     if bot.worker_message_id:
22 |         await abort_job(bot.worker_message_id)
23 |     await Bot.prisma().delete(where={"id": bot.id})
24 | 
25 | 
26 | async def stop_bot(bot: Bot) -> None:
27 |     if bot.worker_message_id:
28 |         await abort_job(bot.worker_message_id)
29 |     await Bot.prisma().update(data={"is_active": False}, where={"id": bot.id})
30 | 
31 | 
32 | def build_workspace_path(user_id: int, fmt: str | None = None, suffix: str | None = None) -> Path:
33 |     key = f"user_{user_id}"
34 |     if suffix:
35 |         key = f"{key}_{suffix}"
36 |     if fmt:
37 |         return settings.WORKSPACES_DIR / f"{key}.{fmt}"
38 |     return settings.WORKSPACES_DIR / key
39 | 
40 | 
41 | def build_prompt_settings_path(user_id: int) -> Path:
42 |     return build_workspace_path(user_id, fmt="yaml", suffix="prompt")
43 | 
44 | 
45 | def build_settings_path(user_id: int) -> Path:
46 |     return build_workspace_path(user_id, fmt="yaml")
47 | 
48 | 
49 | def build_log_path(user_id: int) -> Path:
50 |     return build_workspace_path(user_id, fmt="log")
51 | 
52 | 
53 | def clear_workspace(user_id: int) -> None:
54 |     shutil.rmtree(str(settings.WORKSPACES_DIR / f"user_{user_id}"), ignore_errors=True)
55 | 
56 | 
57 | def clear_workspace_cache(user_id: int) -> None:
58 |     shutil.rmtree(str(settings.WORKSPACES_DIR / f"user_{user_id}" / GPT_CACHE), ignore_errors=True)
59 | 
--------------------------------------------------------------------------------
/backend/src/app/api/helpers/responses.py:
--------------------------------------------------------------------------------
1 | from itertools import islice
2 | from pathlib import Path
3 | from typing import Awaitable, Callable
4 | 
5 | import anyio
6 | from fastapi.responses import FileResponse, ORJSONResponse
7 | from typing_extensions import ParamSpec
8 | 
9 | from app.core import settings
10 | from app.helpers.readers import any_open, is_gz, tail_as_text
11 | 
12 | P = ParamSpec("P")
13 | 
14 | 
15 | async def build_read_log_response(path: Path) -> ORJSONResponse:
16 |     if not path.exists() and is_gz(path):
17 |         path = path.with_suffix("")
18 |     text = ""
19 |     if path.exists():
20 |         if is_gz(path):
21 |             with any_open(path, "rt") as r:
22 |                 text = "".join([row for row in islice(r, 0, 1000)])
23 |         elif path.suffix == ".log":
24 |             text = await anyio.to_thread.run_sync(tail_as_text, path, settings.TAIL_LOG_COUNT)
25 |         else:
26 |             text = await anyio.to_thread.run_sync(tail_as_text, path, 10000)
27 |     return ORJSONResponse(
28 |         {"text": text},
29 |     )
30 | 
31 | 
32 | async def build_download_log_response(
33 |     func: Callable[P, Awaitable[Path]], *args: P.args, **kwargs: P.kwargs
34 | ) -> FileResponse:
35 |     path = await func(*args, **kwargs)
36 |     if not path.exists():
37 |         path.touch()
38 |     return FileResponse(path, filename=path.name)
39 | 
--------------------------------------------------------------------------------
/backend/src/app/api/helpers/security.py:
--------------------------------------------------------------------------------
1 | from abc import ABC, abstractmethod
2 | from typing import NoReturn, Optional
3 | 
4 | from fastapi import HTTPException, Request, status
5 | from prisma.models import User
6 | 
7 | from app.clients import AuthBackendClient
8 | from app.core.config import settings
9 | 
10 | BAD_CREDENTIALS_MSG = "Could not validate credentials"
11 | NO_PRIVILEGES_MSG = "The user doesn't have enough privileges"
12 | 
13 | 
14 | class AuthDependency(ABC):
15 |     @abstractmethod
16 |     async def __call__(self, *args, **kwargs) -> User:
17 |         pass
18 | 
19 |     @classmethod
20 |     def cast(cls, some_a: "AuthDependency"):
21 |         """Cast a base AuthDependency instance into this subclass."""
22 |         some_a.__class__ = cls
23 |         return some_a
24 | 
25 |     def raise_401(self, detail: str = BAD_CREDENTIALS_MSG) -> NoReturn:
26 |         raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail=detail)
27 | 
28 |     def raise_403(self, detail: str = NO_PRIVILEGES_MSG) -> NoReturn:
29 |         raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail=detail)
30 | 
31 |     async def validate_user(self, user: User) -> User:
32 |         return user
33 | 
34 | 
35 | class NoAuthDependency(AuthDependency):
36 |     async def __call__(self, request: Request) -> Optional[User]:
37 |         user = 
await User.prisma().find_first(where={"id": 1}) 38 | if not user: 39 | user = await User.prisma().upsert( 40 | where={"id": 1}, 41 | data={ 42 | "create": {"id": 1, "username": "admin"}, 43 | "update": {"id": 1, "username": "admin"}, 44 | }, 45 | ) 46 | return await self.validate_user(user) 47 | 48 | 49 | class SessionAuthDependency(AuthDependency): 50 | async def __call__(self, request: Request) -> Optional[User]: 51 | client = AuthBackendClient( 52 | settings.SESSION_API_URL, 53 | user_path=settings.SESSION_API_USER_PATH, 54 | session_path=settings.SESSION_API_SESSION_PATH, 55 | session_cookie=settings.SESSION_COOKIE, 56 | ) 57 | session_id = request.cookies.get(settings.SESSION_COOKIE) 58 | if not session_id: 59 | return self.raise_401() 60 | j = await client.get_session(session_id) 61 | if "user" not in j: 62 | return self.raise_401() 63 | user = await User.prisma().find_unique(where={"username": j["user"]["username"]}) 64 | if not user: 65 | await User.prisma().upsert( 66 | where={"username": j["user"]["username"]}, 67 | data={"create": {"username": j["user"]["username"]}, "update": {"username": j["user"]["username"]}}, 68 | ) 69 | return await self.validate_user(user) 70 | 71 | 72 | auth_dependency = NoAuthDependency if settings.NO_AUTH else SessionAuthDependency 73 | check_user = auth_dependency() 74 | -------------------------------------------------------------------------------- /backend/src/app/api/v1/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/neuronic-ai/autogpt-ui/a4d358e20f0b817d0e066a74adbc70f43677e9b8/backend/src/app/api/v1/__init__.py -------------------------------------------------------------------------------- /backend/src/app/api/v1/api.py: -------------------------------------------------------------------------------- 1 | from fastapi import APIRouter 2 | 3 | from app.api.v1.routes import sessions, bots 4 | 5 | router = APIRouter() 6 | 7 | router.include_router(bots.router, prefix="/bots", tags=["bots"]) 8 | router.include_router(sessions.router, include_in_schema=False, prefix="/sessions", tags=["sessions"]) 9 | -------------------------------------------------------------------------------- /backend/src/app/api/v1/routes/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/neuronic-ai/autogpt-ui/a4d358e20f0b817d0e066a74adbc70f43677e9b8/backend/src/app/api/v1/routes/__init__.py -------------------------------------------------------------------------------- /backend/src/app/api/v1/routes/bots.py: -------------------------------------------------------------------------------- 1 | import magic 2 | import os 3 | import shutil 4 | from pathlib import Path 5 | from tempfile import TemporaryFile, mktemp 6 | from zipfile import ZIP_DEFLATED, ZipFile 7 | 8 | import anyio 9 | import yaml 10 | from fastapi import APIRouter, BackgroundTasks, Depends, HTTPException, status, UploadFile 11 | from fastapi.responses import FileResponse, ORJSONResponse 12 | from humanize import filesize 13 | from prisma import Json 14 | from prisma.models import Bot, User 15 | from pydantic import ValidationError 16 | 17 | from app.api.helpers import security, bots, responses 18 | from app.auto_gpt.api_manager import GPT_CACHE 19 | from app.core import globals, settings 20 | from app.helpers.system import remove_by_path 21 | from app.schemas.bot import AiSettingsSchema, BotInCreateSchema, BotSchema, WorkspaceFileSchema 22 | from 
app.schemas.enums import YesCount 23 | 24 | 25 | CHUNK_SIZE = 1024 * 1024 26 | router = APIRouter(dependencies=[Depends(security.check_user)]) 27 | 28 | 29 | @router.post("/", response_model=BotSchema) 30 | async def save_bot(*, bot_in: BotInCreateSchema, user: User = Depends(security.check_user)): 31 | existing_bot = await Bot.prisma().find_unique(where={"user_id": user.id}) 32 | if existing_bot: 33 | if existing_bot.is_active: 34 | raise HTTPException(status_code=400, detail=f"Bot already exists, wait for it to finish or cancel") 35 | await Bot.prisma().delete(where={"id": existing_bot.id}) 36 | bot = await Bot.prisma().create( 37 | data={ 38 | "fast_engine": bot_in.fast_engine, 39 | "smart_engine": bot_in.smart_engine, 40 | "fast_tokens": bot_in.fast_tokens, 41 | "smart_tokens": bot_in.smart_tokens, 42 | "image_size": bot_in.image_size, 43 | "ai_settings": Json(bot_in.ai_settings.dict()), 44 | "user_id": user.id, 45 | "runs_left": 1, 46 | } 47 | ) 48 | bots.clear_workspace_cache(user.id) 49 | bots.build_log_path(user.id).unlink(missing_ok=True) 50 | job = await globals.arq_redis.enqueue_job("run_auto_gpt", bot_id=bot.id) 51 | await Bot.prisma().update(data={"worker_message_id": job.job_id}, where={"id": bot.id}) 52 | return bot.dict() 53 | 54 | 55 | @router.get("/", response_model=BotSchema) 56 | async def get_bot(bot: Bot = Depends(bots.get_bot)): 57 | return bot.dict() 58 | 59 | 60 | @router.delete("/", status_code=status.HTTP_204_NO_CONTENT) 61 | async def delete_bot(bot: Bot = Depends(bots.get_bot)): 62 | await bots.delete_bot(bot) 63 | 64 | 65 | @router.get("/enabled-plugins", response_class=ORJSONResponse, response_model=list[str]) 66 | async def get_enabled_plugins(): 67 | return settings.ALLOWLISTED_PLUGINS 68 | 69 | 70 | @router.post("/parse-settings", response_class=ORJSONResponse, response_model=AiSettingsSchema) 71 | async def parse_ai_settings(*, file: UploadFile): 72 | real_file_size = 0 73 | settings_content = b"" 74 | while content := await file.read(CHUNK_SIZE): 75 | real_file_size += len(content) 76 | if real_file_size > settings.MAX_WORKSPACE_FILE_SIZE: 77 | raise HTTPException( 78 | status_code=status.HTTP_400_BAD_REQUEST, 79 | detail=f"Workspace file size can not be larger than " 80 | f"{filesize.naturalsize(settings.MAX_WORKSPACE_FILE_SIZE, binary=True)}", 81 | ) 82 | settings_content += content 83 | try: 84 | return AiSettingsSchema(**yaml.safe_load(settings_content)) 85 | except (ValueError, yaml.YAMLError) as e: 86 | if isinstance(e, ValidationError): 87 | raise HTTPException(status_code=400, detail=f"Invalid AI settings format: {str(e)}") 88 | raise HTTPException(status_code=400, detail="Invalid AI settings format") 89 | 90 | 91 | @router.get("/log", response_class=ORJSONResponse) 92 | async def get_bot_log(*, bot: Bot = Depends(bots.get_bot)): 93 | return await responses.build_read_log_response(bots.build_log_path(bot.user_id)) 94 | 95 | 96 | @router.get("/continue", status_code=status.HTTP_204_NO_CONTENT) 97 | async def continue_bot(count: YesCount, bot: Bot = Depends(bots.get_bot)): 98 | if bot.runs_left: 99 | raise HTTPException(status_code=400, detail="Bot is already running") 100 | await Bot.prisma().update( 101 | data={"runs_left": count.value}, 102 | where={"id": bot.id}, 103 | ) 104 | job = await globals.arq_redis.enqueue_job("run_auto_gpt", bot_id=bot.id) 105 | await Bot.prisma().update(data={"worker_message_id": job.job_id}, where={"id": bot.id}) 106 | 107 | 108 | @router.get("/stop", status_code=status.HTTP_204_NO_CONTENT) 109 | async def 
stop_bot(bot: Bot = Depends(bots.get_bot)): 110 | await bots.stop_bot(bot) 111 | 112 | 113 | @router.post("/workspace", status_code=status.HTTP_204_NO_CONTENT) 114 | async def upload_to_workspace(*, file: UploadFile, path: str | None = None, user: User = Depends(security.check_user)): 115 | workspace_path = bots.build_workspace_path(user_id=user.id) 116 | if "/" in file.filename: 117 | raise HTTPException(status_code=400, detail="Name should be a path to a file, not a directory") 118 | if not workspace_path.exists(): 119 | workspace_path.mkdir() 120 | workspace_path = bots.build_workspace_path(user_id=user.id) 121 | if path: 122 | sub_path = workspace_path / path 123 | if not sub_path.is_relative_to(workspace_path) or not sub_path.exists() or not sub_path.is_dir(): 124 | raise HTTPException(status_code=400, detail="Invalid path") 125 | else: 126 | sub_path = workspace_path 127 | with TemporaryFile() as temp_file: 128 | real_file_size = 0 129 | while content := await file.read(CHUNK_SIZE): 130 | real_file_size += len(content) 131 | if real_file_size > settings.MAX_WORKSPACE_FILE_SIZE: 132 | raise HTTPException( 133 | status_code=status.HTTP_400_BAD_REQUEST, 134 | detail=f"Workspace file size can not be larger than " 135 | f"{filesize.naturalsize(settings.MAX_WORKSPACE_FILE_SIZE, binary=True)}", 136 | ) 137 | temp_file.write(content) 138 | temp_file.seek(0) 139 | async with (await anyio.open_file(sub_path / file.filename, "wb")) as w: 140 | await w.write(temp_file.read()) 141 | 142 | 143 | @router.get("/workspace", response_class=ORJSONResponse, response_model=list[WorkspaceFileSchema]) 144 | async def list_workspace_files(*, path: str | None = None, user: User = Depends(security.check_user)): 145 | workspace_path = bots.build_workspace_path(user_id=user.id) 146 | if path: 147 | sub_path = workspace_path / path 148 | if not sub_path.is_relative_to(workspace_path) or not sub_path.exists() or not sub_path.is_dir(): 149 | raise HTTPException(status_code=400, detail="Invalid path") 150 | else: 151 | sub_path = workspace_path 152 | files = [] 153 | for f in sub_path.glob("*"): 154 | if GPT_CACHE in f.parts: 155 | continue 156 | is_dir = f.is_dir() 157 | st_size = f.stat().st_size 158 | if st_size: 159 | size = filesize.naturalsize(st_size) 160 | else: 161 | size = None 162 | files.append( 163 | WorkspaceFileSchema( 164 | name=str(f.name), 165 | path=str(f.relative_to(workspace_path)), 166 | is_dir=is_dir, 167 | mime_type=magic.from_file(f, mime=True) if not is_dir else None, 168 | size=size, 169 | ) 170 | ) 171 | return files 172 | 173 | 174 | @router.delete("/workspace", status_code=status.HTTP_204_NO_CONTENT) 175 | async def delete_workspace_file(*, name: str, user: User = Depends(security.check_user)): 176 | workspace_path = bots.build_workspace_path(user_id=user.id) 177 | file_path = workspace_path / name 178 | if not file_path.is_relative_to(workspace_path) or not file_path.exists(): 179 | raise HTTPException(status_code=400, detail="Invalid file") 180 | if file_path.is_dir(): 181 | shutil.rmtree(str(file_path), ignore_errors=True) 182 | else: 183 | file_path.unlink(missing_ok=True) 184 | 185 | 186 | @router.get("/workspace/get", response_class=FileResponse) 187 | async def get_workspace_file( 188 | *, name: str | None = None, user: User = Depends(security.check_user), background_tasks: BackgroundTasks 189 | ): 190 | workspace_path = bots.build_workspace_path(user_id=user.id) 191 | if name: 192 | if GPT_CACHE in name: 193 | raise HTTPException(status_code=400, detail=f"Can't list a file from 
`{GPT_CACHE} directory`") 194 | path = workspace_path / name 195 | if not path.exists() or not path.is_relative_to(workspace_path): 196 | raise HTTPException(status_code=400, detail="File does not exist in Workspace") 197 | if path.is_dir(): 198 | workspace_path = path 199 | else: 200 | return FileResponse(path, filename=path.name) 201 | zip_path = Path(mktemp(".zip")) 202 | background_tasks.add_task(remove_by_path, zip_path) 203 | with ZipFile(zip_path, "w", compression=ZIP_DEFLATED) as zip_file: 204 | for path in filter(lambda f: not os.path.isdir(f) and GPT_CACHE not in f.parts, workspace_path.rglob("*")): 205 | zip_file.write(path, str(path.relative_to(workspace_path))) 206 | return FileResponse(zip_path, filename="workspace.zip") 207 | 208 | 209 | @router.get("/workspace/clear", status_code=status.HTTP_204_NO_CONTENT) 210 | async def clear_workspace(*, user: User = Depends(security.check_user)): 211 | bots.clear_workspace(user.id) 212 | -------------------------------------------------------------------------------- /backend/src/app/api/v1/routes/sessions.py: -------------------------------------------------------------------------------- 1 | from fastapi import APIRouter, Depends 2 | from fastapi.responses import ORJSONResponse 3 | from prisma.models import User 4 | from starlette.status import HTTP_204_NO_CONTENT 5 | 6 | from app.api.helpers import security 7 | 8 | 9 | router = APIRouter() 10 | 11 | 12 | @router.get("/check", status_code=HTTP_204_NO_CONTENT, response_class=ORJSONResponse) 13 | async def check_session(*, _: User = Depends(security.check_user)): 14 | pass 15 | -------------------------------------------------------------------------------- /backend/src/app/auto_gpt/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/neuronic-ai/autogpt-ui/a4d358e20f0b817d0e066a74adbc70f43677e9b8/backend/src/app/auto_gpt/__init__.py -------------------------------------------------------------------------------- /backend/src/app/auto_gpt/agent.py: -------------------------------------------------------------------------------- 1 | import json 2 | from colorama import Fore, Style 3 | 4 | from autogpt.agent.agent import Agent 5 | from autogpt.app import execute_command, extract_command 6 | from autogpt.json_utils.utilities import extract_json_from_response, validate_json 7 | from autogpt.llm.chat import chat_with_ai 8 | from autogpt.llm.utils import count_string_tokens 9 | from autogpt.log_cycle.log_cycle import ( 10 | FULL_MESSAGE_HISTORY_FILE_NAME, 11 | NEXT_ACTION_FILE_NAME, 12 | ) 13 | from autogpt.logs import logger, print_assistant_thoughts, remove_ansi_escape 14 | from autogpt.speech import say_text 15 | from autogpt.spinner import Spinner 16 | 17 | 18 | class AgentStandalone(Agent): 19 | def execute_command(self, command_name: str, arguments: dict, user_input: str): 20 | # Execute command 21 | if command_name is not None and command_name.lower().startswith("error"): 22 | result = f"Could not execute command: {arguments}" 23 | elif command_name == "human_feedback": 24 | result = f"Human feedback: {user_input}" 25 | else: 26 | for plugin in self.config.plugins: 27 | if not plugin.can_handle_pre_command(): 28 | continue 29 | command_name, arguments = plugin.pre_command(command_name, arguments) 30 | command_result = execute_command( 31 | command_name=command_name, 32 | arguments=arguments, 33 | agent=self, 34 | ) 35 | result = f"Command {command_name} returned: " f"{command_result}" 36 | 37 | result_tlength = 
count_string_tokens(str(command_result), self.config.smart_llm) 38 | memory_tlength = count_string_tokens(str(self.history.summary_message()), self.config.smart_llm) 39 | if result_tlength + memory_tlength + 600 > self.smart_token_limit: 40 | result = f"Failure: command {command_name} returned too much output. \ 41 | Do not execute this command again with the same arguments." 42 | 43 | for plugin in self.config.plugins: 44 | if not plugin.can_handle_post_command(): 45 | continue 46 | result = plugin.post_command(command_name, result) 47 | if self.next_action_count > 0: 48 | self.next_action_count -= 1 49 | 50 | # Check if there's a result from the command append it to the message 51 | # history 52 | if result is not None: 53 | self.history.add("system", result, "action_result") 54 | logger.typewriter_log("SYSTEM: ", Fore.YELLOW, result) 55 | else: 56 | self.history.add("system", "Unable to execute command", "action_result") 57 | logger.typewriter_log("SYSTEM: ", Fore.YELLOW, "Unable to execute command") 58 | 59 | def process_next_interaction(self, last_assistant_reply: str | None): 60 | self.cycle_count = 0 61 | command_name = None 62 | arguments = None 63 | 64 | self.cycle_count += 1 65 | self.log_cycle_handler.log_count_within_cycle = 0 66 | self.log_cycle_handler.log_cycle( 67 | self.ai_config.ai_name, 68 | self.created_at, 69 | self.cycle_count, 70 | [m.raw() for m in self.history], 71 | FULL_MESSAGE_HISTORY_FILE_NAME, 72 | ) 73 | 74 | # Send message to AI, get response 75 | if not last_assistant_reply: 76 | # Send message to AI, get response 77 | with Spinner("Thinking... ", plain_output=self.config.plain_output): 78 | assistant_reply = chat_with_ai( 79 | self.config, 80 | self, 81 | self.system_prompt, 82 | self.triggering_prompt, 83 | self.smart_token_limit, 84 | self.config.smart_llm, 85 | ) 86 | else: 87 | assistant_reply = last_assistant_reply 88 | 89 | try: 90 | assistant_reply_json = extract_json_from_response(assistant_reply.content) 91 | validate_json(assistant_reply_json, self.config) 92 | except json.JSONDecodeError as e: 93 | logger.error(f"Exception while validating assistant reply JSON: {e}") 94 | assistant_reply_json = {} 95 | 96 | for plugin in self.config.plugins: 97 | if not plugin.can_handle_post_planning(): 98 | continue 99 | assistant_reply_json = plugin.post_planning(assistant_reply_json) 100 | 101 | # Print Assistant thoughts 102 | if assistant_reply_json != {}: 103 | # Get command name and arguments 104 | try: 105 | if not last_assistant_reply: 106 | print_assistant_thoughts(self.ai_name, assistant_reply_json, self.config) 107 | command_name, arguments = extract_command(assistant_reply_json, assistant_reply, self.config) 108 | if self.config.speak_mode: 109 | say_text(f"I want to execute {command_name}", self.config) 110 | 111 | arguments = self._resolve_pathlike_command_args(arguments) 112 | 113 | except Exception as e: 114 | logger.error("Error: \n", str(e)) 115 | self.log_cycle_handler.log_cycle( 116 | self.ai_config.ai_name, 117 | self.created_at, 118 | self.cycle_count, 119 | assistant_reply_json, 120 | NEXT_ACTION_FILE_NAME, 121 | ) 122 | 123 | if not last_assistant_reply: 124 | # First log new-line so user can differentiate sections better in console 125 | logger.typewriter_log("\n") 126 | logger.typewriter_log( 127 | "NEXT ACTION: ", 128 | Fore.CYAN, 129 | f"COMMAND = {Fore.CYAN}{remove_ansi_escape(command_name)}{Style.RESET_ALL} " 130 | f"ARGUMENTS = {Fore.CYAN}{arguments}{Style.RESET_ALL}", 131 | ) 132 | 133 | logger.info( 134 | f"Enter 
'{self.config.authorise_key}' to authorise command, "
135 |             f"'{self.config.authorise_key} -N' to run N continuous commands, "
136 |             f"'{self.config.exit_key}' to exit program, or enter feedback for "
137 |             f"{self.ai_name}..."
138 |         )
139 |     else:
140 |         # if command_name is not None:
141 |         self.execute_command(command_name, arguments, "")
142 |         self.process_next_interaction(None)
143 | 
--------------------------------------------------------------------------------
/backend/src/app/auto_gpt/api_manager.py:
--------------------------------------------------------------------------------
1 | from __future__ import annotations
2 | 
3 | from pathlib import Path
4 | 
5 | import orjson
6 | 
7 | from autogpt.config import Config
8 | from autogpt.llm.api_manager import ApiManager
9 | 
10 | 
11 | SAVE_OPTIONS = orjson.OPT_SERIALIZE_NUMPY | orjson.OPT_SERIALIZE_DATACLASS | orjson.OPT_NON_STR_KEYS
12 | GPT_CACHE = ".gpt_cache"
13 | 
14 | 
15 | class CachedApiManager(ApiManager):
16 |     def load_config(self, config: Config):
17 |         self.config = config
18 | 
19 |     @classmethod
20 |     def cast(cls, some_a: ApiManager):
21 |         """Cast a plain ApiManager instance into a CachedApiManager."""
22 |         assert isinstance(some_a, ApiManager)
23 |         some_a.__class__ = cls
24 |         assert isinstance(some_a, CachedApiManager)
25 |         return some_a
26 | 
27 |     def build_dict(self) -> dict[str, float]:
28 |         return dict(
29 |             total_prompt_tokens=self.total_prompt_tokens,
30 |             total_completion_tokens=self.total_completion_tokens,
31 |             total_cost=self.total_cost,
32 |             total_budget=self.total_budget,
33 |         )
34 | 
35 |     def restore(self):
36 |         workspace_path = Path(self.config.workspace_path)
37 |         filename = workspace_path / GPT_CACHE / f"{self.config.memory_index}-budget.json"
38 |         if filename.exists():
39 |             with filename.open("rb") as f:
40 |                 d = orjson.loads(f.read())
41 |             self.total_prompt_tokens = d.get("total_prompt_tokens", 0)
42 |             self.total_completion_tokens = d.get("total_completion_tokens", 0)
43 |             self.total_cost = d.get("total_cost", 0)
44 |             self.total_budget = d.get("total_budget", 0)
45 |         else:
46 |             filename.parent.mkdir(exist_ok=True, parents=True)
47 |             self.flush()
48 | 
49 |     def flush(self):
50 |         with open(Path(self.config.workspace_path) / GPT_CACHE / f"{self.config.memory_index}-budget.json", "wb") as f:
51 |             f.write(orjson.dumps(self.build_dict(), option=SAVE_OPTIONS))
52 | 
53 |     def update_cost(self, prompt_tokens, completion_tokens, model):
54 |         """
55 |         Update the total cost, prompt tokens, and completion tokens.
56 | 
57 |         Args:
58 |             prompt_tokens (int): The number of tokens used in the prompt.
59 |             completion_tokens (int): The number of tokens used in the completion.
60 |             model (str): The model used for the API call.
61 |         """
62 |         super().update_cost(prompt_tokens, completion_tokens, model)
63 |         self.flush()
64 | 
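CachedApiManager persists Auto-GPT's budget and token counters to `<workspace>/.gpt_cache/<memory_index>-budget.json` so they survive worker restarts. The actual wiring lives in `app/auto_gpt/main.py`, which is not shown in this excerpt; a hedged sketch of how the class is meant to be used, based only on the methods defined above:

```python
# Sketch only: the real call site (app/auto_gpt/main.py) is not shown in this excerpt.
from autogpt.llm.api_manager import ApiManager

from app.auto_gpt.api_manager import CachedApiManager


def enable_budget_cache(config) -> CachedApiManager:
    """Re-type the ApiManager singleton so budget/token totals persist to the workspace."""
    manager = CachedApiManager.cast(ApiManager())  # ApiManager is a singleton in Auto-GPT
    manager.load_config(config)  # config: an autogpt Config with workspace_path/memory_index set
    manager.restore()  # load cached totals, or create the cache file on first run
    return manager
```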
61 | """ 62 | super().update_cost(prompt_tokens, completion_tokens, model) 63 | self.flush() 64 | -------------------------------------------------------------------------------- /backend/src/app/auto_gpt/cli.py: -------------------------------------------------------------------------------- 1 | """Main script for the autogpt package.""" 2 | from typing import Optional 3 | 4 | import click 5 | 6 | 7 | @click.group(invoke_without_command=True) 8 | @click.option("-c", "--continuous", is_flag=True, help="Enable Continuous Mode") 9 | @click.option( 10 | "--skip-reprompt", 11 | "-y", 12 | is_flag=True, 13 | help="Skips the re-prompting messages at the beginning of the script", 14 | ) 15 | @click.option( 16 | "--ai-settings", 17 | "-C", 18 | help="Specifies which ai_settings.yaml file to use, will also automatically skip the re-prompt.", 19 | ) 20 | @click.option( 21 | "--prompt-settings", 22 | "-P", 23 | help="Specifies which prompt_settings.yaml file to use.", 24 | ) 25 | @click.option( 26 | "-l", 27 | "--continuous-limit", 28 | type=int, 29 | help="Defines the number of times to run in continuous mode", 30 | ) 31 | @click.option("--speak", is_flag=True, help="Enable Speak Mode") 32 | @click.option("--debug", is_flag=True, help="Enable Debug Mode") 33 | @click.option("--gpt3only", is_flag=True, help="Enable GPT3.5 Only Mode") 34 | @click.option("--gpt4only", is_flag=True, help="Enable GPT4 Only Mode") 35 | @click.option( 36 | "--use-memory", 37 | "-m", 38 | "memory_type", 39 | type=str, 40 | help="Defines which Memory backend to use", 41 | ) 42 | @click.option( 43 | "-b", 44 | "--browser-name", 45 | help="Specifies which web-browser to use when using selenium to scrape the web.", 46 | ) 47 | @click.option( 48 | "--allow-downloads", 49 | is_flag=True, 50 | help="Dangerous: Allows Auto-GPT to download files natively.", 51 | ) 52 | @click.option( 53 | "--skip-news", 54 | is_flag=True, 55 | help="Specifies whether to suppress the output of latest news on startup.", 56 | ) 57 | @click.option( 58 | # TODO: this is a hidden option for now, necessary for integration testing. 59 | # We should make this public once we're ready to roll out agent specific workspaces. 
60 | "--workspace-directory", 61 | "-w", 62 | type=click.Path(), 63 | hidden=True, 64 | ) 65 | @click.option( 66 | "--install-plugin-deps", 67 | is_flag=True, 68 | help="Installs external dependencies for 3rd party plugins.", 69 | ) 70 | @click.option( 71 | "--ai-name", 72 | type=str, 73 | help="AI name override", 74 | ) 75 | @click.option( 76 | "--ai-role", 77 | type=str, 78 | help="AI role override", 79 | ) 80 | @click.option( 81 | "--ai-goal", 82 | type=str, 83 | multiple=True, 84 | help="AI goal override; may be used multiple times to pass multiple goals", 85 | ) 86 | @click.option( 87 | "--max-cache-size", 88 | type=int, 89 | help="Max size for cache objects.", 90 | ) 91 | @click.pass_context 92 | def main( 93 | ctx: click.Context, 94 | continuous: bool, 95 | continuous_limit: int, 96 | ai_settings: str, 97 | prompt_settings: str, 98 | skip_reprompt: bool, 99 | speak: bool, 100 | debug: bool, 101 | gpt3only: bool, 102 | gpt4only: bool, 103 | memory_type: str, 104 | browser_name: str, 105 | allow_downloads: bool, 106 | skip_news: bool, 107 | workspace_directory: str, 108 | install_plugin_deps: bool, 109 | ai_name: Optional[str], 110 | ai_role: Optional[str], 111 | ai_goal: tuple[str], 112 | max_cache_size: int, 113 | ) -> None: 114 | """ 115 | Welcome to AutoGPT an experimental open-source application showcasing the capabilities of the GPT-4 pushing the boundaries of AI. 116 | 117 | Start an Auto-GPT assistant. 118 | """ 119 | # Put imports inside function to avoid importing everything when starting the CLI 120 | from app.auto_gpt.main import run_auto_gpt 121 | 122 | if ctx.invoked_subcommand is None: 123 | run_auto_gpt( 124 | continuous, 125 | continuous_limit, 126 | ai_settings, 127 | prompt_settings, 128 | skip_reprompt, 129 | speak, 130 | debug, 131 | gpt3only, 132 | gpt4only, 133 | memory_type, 134 | browser_name, 135 | allow_downloads, 136 | skip_news, 137 | workspace_directory, 138 | install_plugin_deps, 139 | max_cache_size, 140 | ai_name, 141 | ai_role, 142 | ai_goal, 143 | ) 144 | 145 | 146 | if __name__ == "__main__": 147 | main() 148 | -------------------------------------------------------------------------------- /backend/src/app/auto_gpt/install_plugin_deps.py: -------------------------------------------------------------------------------- 1 | import os 2 | import subprocess 3 | import sys 4 | import zipfile 5 | from glob import glob 6 | from pathlib import Path 7 | 8 | from autogpt.logs import logger 9 | 10 | 11 | def install_plugin_dependencies(): 12 | """ 13 | Installs dependencies for all plugins in the plugins dir. 
14 | 15 | Args: 16 | None 17 | 18 | Returns: 19 | None 20 | """ 21 | plugins_dir = Path(os.getenv("PLUGINS_DIR", "plugins")) 22 | 23 | logger.debug(f"Checking for dependencies in zipped plugins...") 24 | 25 | # Install zip-based plugins 26 | for plugin_archive in plugins_dir.glob("*.zip"): 27 | logger.debug(f"Checking for requirements in '{plugin_archive}'...") 28 | with zipfile.ZipFile(str(plugin_archive), "r") as zfile: 29 | if not zfile.namelist(): 30 | continue 31 | 32 | # Assume the first entry in the list will be (in) the lowest common dir 33 | first_entry = zfile.namelist()[0] 34 | basedir = first_entry.rsplit("/", 1)[0] if "/" in first_entry else "" 35 | logger.debug(f"Looking for requirements.txt in '{basedir}'") 36 | 37 | basereqs = os.path.join(basedir, "requirements.txt") 38 | try: 39 | extracted = zfile.extract(basereqs, path=plugins_dir) 40 | except KeyError as e: 41 | logger.debug(e.args[0]) 42 | continue 43 | 44 | logger.debug(f"Installing dependencies from '{basereqs}'...") 45 | subprocess.check_call([sys.executable, "-m", "pip", "install", "-r", extracted]) 46 | os.remove(extracted) 47 | os.rmdir(os.path.join(plugins_dir, basedir)) 48 | 49 | logger.debug(f"Checking for dependencies in other plugin folders...") 50 | 51 | # Install directory-based plugins 52 | for requirements_file in glob(f"{plugins_dir}/*/requirements.txt"): 53 | logger.debug(f"Installing dependencies from '{requirements_file}'...") 54 | subprocess.check_call( 55 | [sys.executable, "-m", "pip", "install", "-r", requirements_file], 56 | stdout=subprocess.DEVNULL, 57 | ) 58 | 59 | logger.debug("Finished installing plugin dependencies") 60 | 61 | 62 | if __name__ == "__main__": 63 | install_plugin_dependencies() 64 | -------------------------------------------------------------------------------- /backend/src/app/auto_gpt/main.py: -------------------------------------------------------------------------------- 1 | """The application entry point. 
Can be invoked by a CLI or any other front end application."""
2 | import logging
3 | import sys
4 | from pathlib import Path
5 | from typing import Optional
6 |
7 | from colorama import Fore, Style
8 |
9 | from autogpt.agent import AgentManager
10 | from autogpt.config.ai_config import AIConfig
11 | from autogpt.llm.api_manager import ApiManager
12 | from autogpt.setup import prompt_user
13 | from autogpt.utils import clean_input
14 | from autogpt.config.config import Config, ConfigBuilder, check_openai_api_key
15 | from autogpt.configurator import create_config
16 | from autogpt.logs import logger
17 | from autogpt.memory.vector import get_memory
18 | from autogpt.models.command_registry import CommandRegistry
19 | from autogpt.prompts.prompt import DEFAULT_TRIGGERING_PROMPT
20 | from autogpt.utils import (
21 | get_current_git_branch,
22 | get_latest_bulletin,
23 | get_legal_warning,
24 | markdown_to_ansi_style,
25 | )
26 | from autogpt.workspace import Workspace
27 |
28 | from app.auto_gpt.api_manager import CachedApiManager
29 | from app.auto_gpt.agent import AgentStandalone
30 | from app.auto_gpt.install_plugin_deps import install_plugin_dependencies
31 | from app.auto_gpt.memory import CachedAgents, CachedMessageHistory
32 | from app.auto_gpt.plugins import scan_plugins
33 |
34 |
35 | COMMAND_CATEGORIES = [
36 | "autogpt.commands.execute_code",
37 | "autogpt.commands.file_operations",
38 | "autogpt.commands.web_search",
39 | "autogpt.commands.web_selenium",
40 | "autogpt.app",
41 | "autogpt.commands.task_statuses",
42 | ]
43 |
44 |
45 | def construct_main_ai_config(
46 | config: Config,
47 | name: Optional[str] = None,
48 | role: Optional[str] = None,
49 | goals: tuple[str] = tuple(),
50 | skip_print: bool = False,
51 | ) -> AIConfig:
52 | """Construct the main AIConfig, applying any name/role/goal overrides
53 |
54 | Returns:
55 | AIConfig: The AI configuration the agent will run with
56 | """
57 | ai_config = AIConfig.load(config.ai_settings_file)
58 |
59 | # Apply overrides
60 | if name:
61 | ai_config.ai_name = name
62 | if role:
63 | ai_config.ai_role = role
64 | if goals:
65 | ai_config.ai_goals = list(goals)
66 |
67 | if not skip_print:
68 | if (
69 | all([name, role, goals])
70 | or config.skip_reprompt
71 | and all([ai_config.ai_name, ai_config.ai_role, ai_config.ai_goals])
72 | ):
73 | logger.typewriter_log("Name :", Fore.GREEN, ai_config.ai_name)
74 | logger.typewriter_log("Role :", Fore.GREEN, ai_config.ai_role)
75 | logger.typewriter_log("Goals:", Fore.GREEN, f"{ai_config.ai_goals}")
76 | logger.typewriter_log(
77 | "API Budget:",
78 | Fore.GREEN,
79 | "infinite" if ai_config.api_budget <= 0 else f"${ai_config.api_budget}",
80 | )
81 | elif all([ai_config.ai_name, ai_config.ai_role, ai_config.ai_goals]):
82 | logger.typewriter_log(
83 | "Welcome back! ",
84 | Fore.GREEN,
85 | f"Would you like me to return to being {ai_config.ai_name}?",
86 | speak_text=True,
87 | )
88 | should_continue = clean_input(
89 | config,
90 | f"""Continue with the last settings? 
91 | Name: {ai_config.ai_name}
92 | Role: {ai_config.ai_role}
93 | Goals: {ai_config.ai_goals}
94 | API Budget: {"infinite" if ai_config.api_budget <= 0 else f"${ai_config.api_budget}"}
95 | Continue ({config.authorise_key}/{config.exit_key}): """,
96 | )
97 | if should_continue.lower() == config.exit_key:
98 | ai_config = AIConfig()
99 |
100 | if any([not ai_config.ai_name, not ai_config.ai_role, not ai_config.ai_goals]):
101 | ai_config = prompt_user(config)
102 | ai_config.save(config.ai_settings_file)
103 |
104 | if config.restrict_to_workspace:
105 | logger.typewriter_log(
106 | "NOTE: All files/directories created by this agent can be found inside its workspace at:",
107 | Fore.YELLOW,
108 | f"{config.workspace_path}",
109 | )
110 | # set the total api budget
111 | api_manager = ApiManager()
112 | CachedApiManager.cast(api_manager)
113 | api_manager.set_total_budget(ai_config.api_budget)
114 | api_manager.load_config(config)
115 | api_manager.restore()
116 |
117 | if not skip_print:
118 | # Agent Created, print message
119 | logger.typewriter_log(
120 | ai_config.ai_name,
121 | Fore.LIGHTBLUE_EX,
122 | "has been created with the following details:",
123 | speak_text=True,
124 | )
125 |
126 | # Print the ai_config details
127 | # Name
128 | logger.typewriter_log("Name:", Fore.GREEN, ai_config.ai_name, speak_text=False)
129 | # Role
130 | logger.typewriter_log("Role:", Fore.GREEN, ai_config.ai_role, speak_text=False)
131 | # Goals
132 | logger.typewriter_log("Goals:", Fore.GREEN, "", speak_text=False)
133 | for goal in ai_config.ai_goals:
134 | logger.typewriter_log("-", Fore.GREEN, goal, speak_text=False)
135 |
136 | return ai_config
137 |
138 |
139 | def run_auto_gpt(
140 | continuous: bool,
141 | continuous_limit: int,
142 | ai_settings: str,
143 | prompt_settings: str,
144 | skip_reprompt: bool,
145 | speak: bool,
146 | debug: bool,
147 | gpt3only: bool,
148 | gpt4only: bool,
149 | memory_type: str,
150 | browser_name: str,
151 | allow_downloads: bool,
152 | skip_news: bool,
153 | workspace_directory: str | Path,
154 | install_plugin_deps: bool,
155 | max_cache_size: int,
156 | ai_name: Optional[str] = None,
157 | ai_role: Optional[str] = None,
158 | ai_goals: tuple[str] = tuple(),
159 | ):
160 | # Configure logging before we do anything else.
161 | logger.set_level(logging.DEBUG if debug else logging.INFO)
162 |
163 | config = ConfigBuilder.build_config_from_env()
164 | # HACK: attach the config to the logger so it is reachable everywhere the logger is,
165 | # without having to pass it around or import it directly.
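# A minimal sketch of what the attribute below buys us (assuming Auto-GPT's
# module-level logger object):
#   from autogpt.logs import logger
#   workspace = logger.config.workspace_path  # any module that can see the
#   # logger can now see the live Config without a circular import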
166 | logger.config = config 167 | 168 | # TODO: fill in llm values here 169 | check_openai_api_key(config) 170 | 171 | create_config( 172 | config, 173 | continuous, 174 | continuous_limit, 175 | "", 176 | prompt_settings, 177 | False, 178 | speak, 179 | debug, 180 | gpt3only, 181 | gpt4only, 182 | memory_type, 183 | browser_name, 184 | allow_downloads, 185 | skip_news, 186 | ) 187 | config.skip_reprompt = skip_reprompt 188 | config.ai_settings_file = ai_settings 189 | 190 | if config.continuous_mode: 191 | for line in get_legal_warning().split("\n"): 192 | logger.warn(markdown_to_ansi_style(line), "LEGAL:", Fore.RED) 193 | 194 | if not config.skip_news: 195 | motd, is_new_motd = get_latest_bulletin() 196 | if motd: 197 | motd = markdown_to_ansi_style(motd) 198 | for motd_line in motd.split("\n"): 199 | logger.info(motd_line, "NEWS:", Fore.GREEN) 200 | if is_new_motd and not config.chat_messages_enabled: 201 | input( 202 | Fore.MAGENTA 203 | + Style.BRIGHT 204 | + "NEWS: Bulletin was updated! Press Enter to continue..." 205 | + Style.RESET_ALL 206 | ) 207 | 208 | git_branch = get_current_git_branch() 209 | if git_branch and git_branch != "stable": 210 | logger.typewriter_log( 211 | "WARNING: ", 212 | Fore.RED, 213 | f"You are running on `{git_branch}` branch " "- this is not a supported branch.", 214 | ) 215 | if sys.version_info < (3, 10): 216 | logger.typewriter_log( 217 | "WARNING: ", 218 | Fore.RED, 219 | "You are running on an older version of Python. " 220 | "Some people have observed problems with certain " 221 | "parts of Auto-GPT with this version. " 222 | "Please consider upgrading to Python 3.10 or higher.", 223 | ) 224 | 225 | if install_plugin_deps: 226 | install_plugin_dependencies() 227 | 228 | if workspace_directory is None: 229 | workspace_directory = Path(__file__).parent / "auto_gpt_workspace" 230 | else: 231 | workspace_directory = Path(workspace_directory) 232 | workspace_directory = Workspace.make_workspace(workspace_directory) 233 | config.workspace_path = str(workspace_directory) 234 | 235 | # HACK: doing this here to collect some globals that depend on the workspace. 
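# e.g. with a per-user workspace this points config.file_logger_path at a log
# file inside <workspace_directory> (the exact file name depends on the pinned
# Auto-GPT version), so file-operation journals land next to the agent's outputs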
236 | Workspace.build_file_logger_path(config, workspace_directory) 237 | 238 | config.plugins = scan_plugins(config, config.debug_mode) 239 | # Create a CommandRegistry instance and scan default folder 240 | command_registry = CommandRegistry() 241 | 242 | logger.debug(f"The following command categories are disabled: {config.disabled_command_categories}") 243 | enabled_command_categories = [x for x in COMMAND_CATEGORIES if x not in config.disabled_command_categories] 244 | 245 | logger.debug(f"The following command categories are enabled: {enabled_command_categories}") 246 | 247 | for command_category in enabled_command_categories: 248 | command_registry.import_commands(command_category) 249 | 250 | # Unregister commands that are incompatible with the current config 251 | incompatible_commands = [] 252 | for command in command_registry.commands.values(): 253 | if callable(command.enabled) and not command.enabled(config): 254 | command.enabled = False 255 | incompatible_commands.append(command) 256 | 257 | for command in incompatible_commands: 258 | command_registry.unregister(command) 259 | logger.debug( 260 | f"Unregistering incompatible command: {command.name}, " 261 | f"reason - {command.disabled_reason or 'Disabled by current config.'}" 262 | ) 263 | 264 | message_history = CachedMessageHistory.load_from_file(None, config) 265 | while message_history.filename.stat().st_size >= max_cache_size: 266 | logger.typewriter_log("Truncating cache...") 267 | message_history.messages[:] = message_history.messages[(len(message_history.messages) // 4) or 1 :] 268 | message_history.flush() 269 | AgentManager(config).agents = CachedAgents(config) 270 | 271 | try: 272 | last_assistant_reply = next(filter(lambda x: x.role == "assistant", reversed(message_history.messages))) 273 | except StopIteration: 274 | last_assistant_reply = None 275 | 276 | ai_config = construct_main_ai_config( 277 | config, name=ai_name, role=ai_role, goals=ai_goals, skip_print=bool(last_assistant_reply) 278 | ) 279 | ai_config.command_registry = command_registry 280 | ai_name = ai_config.ai_name 281 | 282 | # add chat plugins capable of report to logger 283 | if config.chat_messages_enabled: 284 | for plugin in config.plugins: 285 | if hasattr(plugin, "can_handle_report") and plugin.can_handle_report(): 286 | logger.info(f"Loaded plugin into logger: {plugin.__class__.__name__}") 287 | logger.chat_plugins.append(plugin) 288 | 289 | # Initialize memory and make sure it is empty. 
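# (clear() drops whatever a previous run persisted under the same memory index)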
290 | # this is particularly important for indexing and referencing pinecone memory 291 | memory = get_memory(config) 292 | memory.clear() 293 | if not last_assistant_reply: 294 | logger.typewriter_log("Using memory of type:", Fore.GREEN, f"{memory.__class__.__name__}") 295 | logger.typewriter_log("Using Browser:", Fore.GREEN, config.selenium_web_browser) 296 | system_prompt = ai_config.construct_full_prompt(config) 297 | if config.debug_mode: 298 | logger.typewriter_log("Prompt:", Fore.GREEN, system_prompt) 299 | 300 | agent = AgentStandalone( 301 | ai_name=ai_name, 302 | memory=memory, 303 | next_action_count=0, 304 | command_registry=command_registry, 305 | ai_config=ai_config, 306 | config=config, 307 | system_prompt=system_prompt, 308 | triggering_prompt=DEFAULT_TRIGGERING_PROMPT, 309 | workspace_directory=workspace_directory, 310 | ) 311 | message_history.agent = agent 312 | agent.history = message_history 313 | agent.process_next_interaction(last_assistant_reply) 314 | -------------------------------------------------------------------------------- /backend/src/app/auto_gpt/memory.py: -------------------------------------------------------------------------------- 1 | from __future__ import annotations 2 | 3 | import dataclasses 4 | from pathlib import Path 5 | from typing import Any 6 | 7 | import orjson 8 | 9 | from autogpt.agent import Agent 10 | from autogpt.config import Config 11 | from autogpt.llm.base import Message 12 | from autogpt.memory.message_history import MessageHistory 13 | 14 | 15 | EMBED_DIM = 1536 16 | SAVE_OPTIONS = orjson.OPT_SERIALIZE_NUMPY | orjson.OPT_SERIALIZE_DATACLASS | orjson.OPT_NON_STR_KEYS 17 | GPT_CACHE = ".gpt_cache" 18 | 19 | 20 | @dataclasses.dataclass 21 | class CachedMessageHistory(MessageHistory): 22 | filename: Path = Path("") 23 | 24 | @classmethod 25 | def load_from_file(cls, agent: Agent | None, config: Config) -> "CachedMessageHistory": 26 | workspace_path = Path(config.workspace_path) 27 | self = cls(agent, filename=workspace_path / GPT_CACHE / f"{config.memory_index}-history.json") 28 | 29 | if self.filename.exists(): 30 | with self.filename.open("rb") as f: 31 | data = orjson.loads(f.read()) 32 | self.messages = [Message(**mes) for mes in data.get("messages", [])] 33 | self.summary = data.get("summary", self.summary) 34 | self.last_trimmed_index = data.get("last_trimmed_index", self.last_trimmed_index) 35 | else: 36 | self.filename.parent.mkdir(exist_ok=True, parents=True) 37 | with self.filename.open("w+b") as f: 38 | f.write(b"{}") 39 | return self 40 | 41 | def build_dict(self): 42 | return { 43 | "messages": self.messages, 44 | "summary": self.summary, 45 | "last_trimmed_index": self.last_trimmed_index, 46 | } 47 | 48 | def flush(self) -> None: 49 | with open(self.filename, "wb") as f: 50 | f.write(orjson.dumps(self.build_dict(), option=SAVE_OPTIONS)) 51 | 52 | def append(self, message: Message) -> None: 53 | super().append(message) 54 | self.flush() 55 | 56 | 57 | class CachedAgents: 58 | """A class that stores the memory in a local file""" 59 | 60 | def __init__(self, cfg) -> None: 61 | """Initialize a class instance 62 | 63 | Args: 64 | cfg: Config object 65 | 66 | Returns: 67 | None 68 | """ 69 | workspace_path = Path(cfg.workspace_path) 70 | self.filename = workspace_path / GPT_CACHE / f"{cfg.memory_index}-agents.json" 71 | 72 | self.data = {} 73 | 74 | if self.filename.exists(): 75 | with self.filename.open("rb") as f: 76 | self.data = {int(k): v for k, v in orjson.loads(f.read()).items()} 77 | else: 78 | 
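# no cache yet for this memory index: create the .gpt_cache dir and seed an empty JSON file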
self.filename.parent.mkdir(exist_ok=True, parents=True)
79 | with self.filename.open("w+b") as f:
80 | f.write(b"{}")
81 |
82 | def __setitem__(self, key, value):
83 | self.add(key, value)
84 |
85 | def __getitem__(self, key):
86 | return self.data[key]
87 |
88 | def flush(self) -> None:
89 | with open(self.filename, "wb") as f:
90 | f.write(orjson.dumps(self.data, option=SAVE_OPTIONS))
91 |
92 | def add(self, key: int, agent: tuple) -> None:
93 | self.data[key] = agent
94 | self.flush()
95 |
96 | def get(self, key: int, default: Any = None) -> tuple | None:
97 | return self.data.get(key, default)
98 |
99 | def delete(self, key: int) -> None:
100 | del self.data[key]
101 |
102 | def items(self):
103 | return self.data.items()
104 | -------------------------------------------------------------------------------- /backend/src/app/auto_gpt/plugins.py: --------------------------------------------------------------------------------
1 | """Handles loading of plugins."""
2 |
3 | import inspect
4 | import os
5 | import sys
6 | from pathlib import Path
7 | from typing import List
8 | from zipimport import zipimporter
9 |
10 | from auto_gpt_plugin_template import AutoGPTPluginTemplate
11 |
12 | from autogpt.config.config import Config
13 | from autogpt.logs import logger
14 | from autogpt.models.base_open_ai_plugin import BaseOpenAIPlugin
15 | from autogpt.plugins import (
16 | inspect_zip_for_modules,
17 | fetch_openai_plugins_manifest_and_spec,
18 | initialize_openai_plugins,
19 | )
20 |
21 |
22 | def scan_plugins(config: Config, debug: bool = False) -> List[AutoGPTPluginTemplate]:
23 | """Scan the plugins directory for plugins and load them.
24 |
25 | Args:
26 | config (Config): Config instance including plugins config
27 | debug (bool, optional): Enable debug logging. Defaults to False.
28 |
29 | Returns:
30 | List[AutoGPTPluginTemplate]: List of loaded plugin instances.
31 | """
32 | loaded_plugins = []
33 | # Generic plugins
34 | plugins_path = Path(config.plugins_dir)
35 |
36 | plugins_config = config.plugins_config
37 | # Directory-based plugins
38 | for plugin_path in [f.path for f in os.scandir(config.plugins_dir) if f.is_dir()]:
39 | # Avoid going into __pycache__ or other hidden directories
40 | if plugin_path.startswith("__"):
41 | continue
42 |
43 | plugin_module_path = plugin_path.split(os.path.sep)
44 | plugin_module_name = plugin_module_path[-1]
45 | qualified_module_name = ".".join(plugin_module_path)
46 |
47 | __import__(qualified_module_name)
48 | plugin = sys.modules[qualified_module_name]
49 |
50 | if not plugins_config.is_enabled(plugin_module_name):
51 | logger.warn(
52 | f"Plugin folder {plugin_module_name} found but not configured. "
53 | f"If this is a legitimate plugin, please add it to plugins_config.yaml (key: {plugin_module_name})."
54 | )
55 | continue
56 |
57 | for _, class_obj in inspect.getmembers(plugin):
58 | if hasattr(class_obj, "_abc_impl") and AutoGPTPluginTemplate in class_obj.__bases__:
59 | loaded_plugins.append(class_obj())
60 |
61 | # Zip-based plugins
62 | for plugin in plugins_path.glob("*.zip"):
63 | if moduleList := inspect_zip_for_modules(str(plugin), debug):
64 | for module in moduleList:
65 | plugin = Path(plugin)
66 | module = Path(module)
67 | logger.debug(f"Zipped Plugin: {plugin}, Module: {module}")
68 | zipped_package = zipimporter(str(plugin))
69 | zipped_module = zipped_package.load_module(str(module.parent))
70 |
71 | for key in dir(zipped_module):
72 | if key.startswith("__"):
73 | continue
74 |
75 | a_module = getattr(zipped_module, key)
76 | if not inspect.isclass(a_module):
77 | continue
78 |
79 | if issubclass(a_module, AutoGPTPluginTemplate) and a_module.__name__ != "AutoGPTPluginTemplate":
80 | plugin_name = a_module.__name__
81 | plugin_configured = plugins_config.get(plugin_name) is not None
82 | plugin_enabled = plugins_config.is_enabled(plugin_name)
83 |
84 | if plugin_configured and plugin_enabled:
85 | logger.debug(f"Loading plugin {plugin_name}. Enabled in plugins_config.yaml.")
86 | loaded_plugins.append(a_module())
87 | elif plugin_configured and not plugin_enabled:
88 | logger.debug(f"Not loading plugin {plugin_name}. Disabled in plugins_config.yaml.")
89 | # elif not plugin_configured:
90 | # logger.warn(
91 | # f"Not loading plugin {plugin_name}. Key '{plugin_name}' was not found in plugins_config.yaml. "
92 | # f"Zipped plugins should use the class name ({plugin_name}) as the key."
93 | # )
94 | else:
95 | if a_module.__name__ != "AutoGPTPluginTemplate":
96 | logger.debug(f"Skipping '{key}' because it doesn't subclass AutoGPTPluginTemplate.")
97 |
98 | # OpenAI plugins
99 | if config.plugins_openai:
100 | manifests_specs = fetch_openai_plugins_manifest_and_spec(config)
101 | if manifests_specs.keys():
102 | manifests_specs_clients = initialize_openai_plugins(manifests_specs, config, debug)
103 | for url, openai_plugin_meta in manifests_specs_clients.items():
104 | if not plugins_config.is_enabled(url):
105 | logger.warn(f"OpenAI Plugin {url} found but not configured")
106 | continue
107 |
108 | plugin = BaseOpenAIPlugin(openai_plugin_meta)
109 | loaded_plugins.append(plugin)
110 |
111 | # if loaded_plugins:
112 | # logger.info(f"\nPlugins found: {len(loaded_plugins)}\n" "--------------------")
113 | # for plugin in loaded_plugins:
114 | # logger.info(f"{plugin._name}: {plugin._version} - {plugin._description}")
115 | return loaded_plugins
116 | -------------------------------------------------------------------------------- /backend/src/app/clients/__init__.py: --------------------------------------------------------------------------------
1 | from .auth_backend import AuthBackendClient, RequestType
2 | -------------------------------------------------------------------------------- /backend/src/app/clients/auth_backend.py: --------------------------------------------------------------------------------
1 | from enum import Enum
2 | from typing import Optional
3 |
4 | from .base import BaseClient
5 |
6 |
7 | class RequestType(str, Enum):
8 | email = "email"
9 | openai = "openai"
10 | user = "user:"
11 | userdata = "userdata:"
12 |
13 |
14 | class AuthBackendClient(BaseClient):
15 | default_url = ""
16 |
17 | def __init__(
18 | self,
19 | base_url: str | None = None,
20 | api_key: str | None = None,
21 | session_path: str = "/session",
22 | user_path: 
str = "/user", 23 | session_cookie: str = "sessionId", 24 | ): 25 | super().__init__(base_url, api_key) 26 | self.session_path = session_path 27 | self.user_path = user_path 28 | self.session_cookie = session_cookie 29 | 30 | async def get_session(self, session_id: Optional[str] = None) -> dict[str, dict[str, str]]: 31 | cookies = {} 32 | if session_id: 33 | cookies = {self.session_cookie: session_id} 34 | return await self._fetch( 35 | self.session_path, 36 | cookies=cookies, 37 | ) 38 | 39 | async def get_user( 40 | self, request_type: RequestType, session_id: Optional[str] = None, username: Optional[str] = None 41 | ) -> dict[str, str]: 42 | headers = {"requesttype": request_type.value} 43 | cookies = {} 44 | params = {} 45 | if session_id: 46 | cookies = {self.session_cookie: session_id} 47 | if username: 48 | params["user"] = username 49 | return await self._fetch( 50 | self.user_path, 51 | headers=headers, 52 | params=params, 53 | cookies=cookies, 54 | ) 55 | -------------------------------------------------------------------------------- /backend/src/app/clients/base.py: -------------------------------------------------------------------------------- 1 | from abc import ABC, abstractmethod 2 | from typing import Any 3 | 4 | import aiohttp 5 | from furl import furl 6 | 7 | 8 | class BaseClient(ABC): 9 | def __init__(self, base_url: str | None = None, api_key: str | None = None, *args: Any, **kwargs: Any): 10 | self.furl = furl(base_url or self.default_url) 11 | self.api_key = api_key 12 | 13 | @property 14 | @abstractmethod 15 | def default_url(self) -> str: 16 | ... 17 | 18 | @property 19 | def api_key_header(self) -> str: 20 | return "" 21 | 22 | @property 23 | def default_rate_limit(self) -> int | None: 24 | return None 25 | 26 | @property 27 | def default_rate_period(self) -> int: 28 | return 1 29 | 30 | async def _fetch( 31 | self, 32 | path: str, 33 | method: str = "get", 34 | raise_for_status: bool = False, 35 | **params: Any, 36 | ) -> dict[str, Any] | list[dict[str, Any]]: 37 | headers = params.pop("headers", {}) 38 | if self.api_key and self.api_key_header: 39 | headers[self.api_key_header] = self.api_key 40 | async with aiohttp.ClientSession() as session: 41 | r = await session.request(method, str(self.furl / path), headers=headers, **params) 42 | if raise_for_status: 43 | r.raise_for_status() 44 | return await r.json(content_type=None) 45 | -------------------------------------------------------------------------------- /backend/src/app/core/__init__.py: -------------------------------------------------------------------------------- 1 | from .config import settings 2 | 3 | __all__ = [ 4 | "settings", 5 | ] 6 | -------------------------------------------------------------------------------- /backend/src/app/core/config.py: -------------------------------------------------------------------------------- 1 | import sys 2 | from pathlib import Path 3 | from typing import Any, Type 4 | 5 | from pydantic import AnyUrl, AnyHttpUrl, BaseSettings, Field, validator, RedisDsn 6 | 7 | 8 | AVAILABLE_PLUGINS = { 9 | "AutoGPTApiTools", 10 | "AutoGPTEmailPlugin", 11 | "AutoGPTTwitter", 12 | "AutoGPTSceneXPlugin", 13 | "PlannerPlugin", 14 | "AutoGPT_Slack", 15 | # "AutoGPTWebInteraction", 16 | "AutoGPT_YouTube", 17 | } 18 | 19 | 20 | class MysqlDsn(AnyUrl): 21 | allowed_schemes = {"mysql"} 22 | user_required = True 23 | 24 | 25 | class Settings(BaseSettings): 26 | BASE_URL: str = "" 27 | API_V1_STR: str | None = None 28 | PROJECT_NAME: str = "Auto-gpt-UI" 29 | VERSION: str = "1.0.0" 30 | 
31 | TESTING: bool = "pytest" in sys.modules
32 |
33 | NO_AUTH: bool = False
34 |
35 | SESSION_COOKIE: str = "sessionId"
36 | SESSION_API_URL: AnyHttpUrl = "http://localhost"
37 | SESSION_API_USER_PATH: str = "/user"
38 | SESSION_API_SESSION_PATH: str = "/session"
39 |
40 | WORKSPACES_DIR: Path = Field(
41 | "/workspaces", description="Path to workspaces directory, default value is used in docker environment"
42 | )
43 | PLUGINS_DIR: Path = Field("/plugins", description="Path to plugins directory used in Auto-GPT")
44 |
45 | @validator("WORKSPACES_DIR")
46 | def validate_workspace(cls, v: Path) -> Path:
47 | return v.resolve()
48 |
49 | PYTHON_BINARY: str = Field("python", description="Path to python binary if run outside Docker")
50 |
51 | TAIL_LOG_COUNT: int = Field(5000, description="Number of trailing log rows to show in the UI")
52 | MAX_WORKSPACE_FILE_SIZE: int = Field(
53 | 5 * 1024 * 1024, description="Max size for a workspace file to upload, 5 MiB by default"
54 | )
55 | MAX_CACHE_SIZE: int = Field(
56 | 5 * 1024 * 1024, description="Max size for a cache file before it gets truncated, 5 MiB by default"
57 | )
58 |
59 | OPENAI_LOCAL_KEY: str = Field("", description="Used only if Auth is disabled")
60 |
61 | EXECUTE_LOCAL_COMMANDS: bool = Field(False, description="Allow shell commands execution", auto_gpt=True)
62 |
63 | @validator("API_V1_STR", pre=True)
64 | def set_api_v1_str(cls, v: str | None, values: dict[str, Any]) -> str:
65 | if isinstance(v, str):
66 | return v
67 | return f"{values.get('BASE_URL') or ''}/api/v1"
68 |
69 | DISABLED_COMMAND_CATEGORIES: list = Field(
70 | default_factory=lambda: ["autogpt.commands.execute_code"],
71 | description="Disable specific Auto-GPT command categories",
72 | auto_gpt=True,
73 | )
74 | DENY_COMMANDS: list[str] = Field(default_factory=list, description="Disable specific shell commands", auto_gpt=True)
75 | ALLOW_COMMANDS: list[str] = Field(
76 | default_factory=list, description="Allow only these shell commands", auto_gpt=True
77 | )
78 | ALLOWLISTED_PLUGINS: list[str] = Field(
79 | default_factory=list,
80 | description=f"Allow only these plugins, one of: {', '.join(AVAILABLE_PLUGINS)}",
81 | auto_gpt=True,
82 | )
83 |
84 | DATABASE_URL: MysqlDsn = Field(
85 | ...,
86 | description="Database URL, ie: mysql://user:password@localhost:3306/auto_gpt_ui",
87 | )
88 |
89 | # @validator("DATABASE_URL")
90 | # def assemble_db_connection(cls, v: MysqlDsn, values: dict[str, Any]) -> str:
91 | # if not values.get("TESTING"):
92 | # return v
93 | # return v.path
94 |
95 | REDIS_URL: RedisDsn = "redis://redis:6379/0"
96 |
97 | EMAIL_ADDRESS: str | None = Field(None, description="For Email plugin", auto_gpt=True)
98 | EMAIL_PASSWORD: str | None = Field(None, auto_gpt=True)
99 | EMAIL_SMTP_HOST: str | None = Field(None, auto_gpt=True)
100 | EMAIL_SMTP_PORT: int = Field(587, auto_gpt=True)
101 | EMAIL_IMAP_SERVER: str | None = Field(None, auto_gpt=True)
102 | EMAIL_MARK_AS_SEEN: bool = Field(False, auto_gpt=True)
103 | EMAIL_SIGNATURE: str = Field("This was sent by Auto-GPT", auto_gpt=True)
104 | EMAIL_DRAFT_MODE_WITH_FOLDER: str = Field("[Gmail]/Drafts", auto_gpt=True)
105 |
106 | TW_CONSUMER_KEY: str | None = Field(None, description="For Twitter plugin", auto_gpt=True)
107 | TW_CONSUMER_SECRET: str | None = Field(None, auto_gpt=True)
108 | TW_ACCESS_TOKEN: str | None = Field(None, auto_gpt=True)
109 | TW_ACCESS_TOKEN_SECRET: str | None = Field(None, auto_gpt=True)
110 | TW_CLIENT_ID: str | None = Field(None, auto_gpt=True)
111 | TW_CLIENT_ID_SECRET: str | 
None = Field(None, auto_gpt=True) 112 | 113 | SCENEX_API_KEY: str | None = Field(None, description="For SceneX plugin", auto_gpt=True) 114 | 115 | SLACK_BOT_TOKEN: str | None = Field(None, description="For Slack plugin", auto_gpt=True) 116 | SLACK_CHANNEL: str | None = Field(None, auto_gpt=True) 117 | 118 | YOUTUBE_API_KEY: str | None = Field(None, description="For Youtube plugin", auto_gpt=True) 119 | 120 | class Config: 121 | env_file = "env-backend.local.env" 122 | 123 | @classmethod 124 | def parse_env_var(cls, field_name: str, raw_val: str): 125 | if field_name in ( 126 | "DISABLED_COMMAND_CATEGORIES", 127 | "DENY_COMMANDS", 128 | "ALLOW_COMMANDS", 129 | "ALLOWLISTED_PLUGINS", 130 | ): 131 | return list(map(str.strip, raw_val.split(","))) 132 | return cls.json_loads(raw_val) 133 | 134 | 135 | settings = Settings() 136 | 137 | 138 | def build_example_env( 139 | config_class: Type[BaseSettings] = Settings, 140 | file_name: str = "./env-backend.env.example", 141 | skip_=( 142 | "BASE_URL", 143 | "API_V1_STR", 144 | "PROJECT_NAME", 145 | "VERSION", 146 | "NO_AUTH", 147 | "SMTP_USE_TLS", 148 | "SMTP_START_TLS", 149 | ), 150 | ): 151 | skip_ = set(skip_) 152 | with open(file_name, "w") as f: 153 | for field_name, field in config_class.__fields__.items(): 154 | if field_name in skip_: 155 | continue 156 | default_value = field.default if field.default is not None else "" 157 | if field.default_factory: 158 | default_value = field.default_factory() 159 | if isinstance(default_value, bool): 160 | default_value = int(default_value) 161 | if isinstance(default_value, list): 162 | default_value = ",".join(default_value) 163 | description = field.field_info.description 164 | if description: 165 | commentary = "\n# ".join(description.split("\n")) 166 | f.write(f"# {commentary}\n{field_name}={default_value}\n") 167 | else: 168 | f.write(f"{field_name}={default_value}\n") 169 | -------------------------------------------------------------------------------- /backend/src/app/core/globals.py: -------------------------------------------------------------------------------- 1 | from arq.connections import ArqRedis 2 | 3 | from app.core import settings 4 | 5 | __all__ = [ 6 | "arq_redis", 7 | ] 8 | 9 | arq_redis = ArqRedis.from_url(settings.REDIS_URL) 10 | -------------------------------------------------------------------------------- /backend/src/app/core/init_logging.py: -------------------------------------------------------------------------------- 1 | import logging 2 | 3 | from loguru import logger 4 | from gunicorn import glogging # type: ignore 5 | 6 | 7 | class Logger(glogging.Logger): 8 | """Implements and overrides the gunicorn logging interface. 9 | This class inherits from the standard gunicorn logger and overrides it by 10 | replacing the handlers with `InterceptHandler` in order to route the 11 | gunicorn logs to loguru. 12 | """ 13 | 14 | def __init__(self, cfg): 15 | super().__init__(cfg) 16 | logging.getLogger("gunicorn.error").handlers = [InterceptHandler()] 17 | logging.getLogger("gunicorn.access").handlers = [InterceptHandler()] 18 | 19 | 20 | class InterceptHandler(logging.Handler): 21 | """Handler for intercepting records and outputting to loguru.""" 22 | 23 | def emit(self, record: logging.LogRecord): 24 | """Intercepts log messages. 25 | Intercepts log records sent to the handler, adds additional context to 26 | the records, and outputs the record to the default loguru logger. 
27 | Args: 28 | record: The log record 29 | """ 30 | level: int | str = "" 31 | try: 32 | level = logger.level(record.levelname).name 33 | except ValueError: 34 | level = str(record.levelno) 35 | 36 | frame, depth = logging.currentframe(), 2 37 | while frame.f_code.co_filename == logging.__file__: 38 | if frame.f_back: 39 | frame = frame.f_back 40 | depth += 1 41 | 42 | logger.opt(depth=depth, exception=record.exc_info).log(level, record.getMessage()) 43 | 44 | 45 | def init_logging(level: int = logging.INFO) -> None: 46 | handler = logging.StreamHandler() 47 | for _, _logger in logging.root.manager.loggerDict.items(): 48 | if not isinstance(_logger, logging.PlaceHolder): 49 | if _logger.handlers: 50 | for _handler in _logger.handlers: 51 | _logger.removeHandler(_handler) 52 | 53 | logging.basicConfig(handlers=[InterceptHandler()], level=0, force=True) 54 | logger.configure( 55 | handlers=[ 56 | { 57 | "sink": handler, 58 | "format": "[{time:YYYY-MM-DD HH:mm:ss,SSS}] [{process}] [{level}] {message}", 59 | "level": level, 60 | # "backtrace": False, 61 | # "diagnose": False, 62 | } 63 | ], 64 | ) 65 | -------------------------------------------------------------------------------- /backend/src/app/helpers/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/neuronic-ai/autogpt-ui/a4d358e20f0b817d0e066a74adbc70f43677e9b8/backend/src/app/helpers/__init__.py -------------------------------------------------------------------------------- /backend/src/app/helpers/jobs.py: -------------------------------------------------------------------------------- 1 | from arq.jobs import Job 2 | from loguru import logger 3 | 4 | from app.core import globals 5 | 6 | 7 | async def abort_job(job_id: str) -> None: 8 | try: 9 | await Job(job_id, globals.arq_redis).abort(timeout=0, poll_delay=0.01) 10 | except Exception as e: 11 | logger.warning(f"An error occurred while aborting job: {e}") 12 | -------------------------------------------------------------------------------- /backend/src/app/helpers/readers.py: -------------------------------------------------------------------------------- 1 | import gzip 2 | import os 3 | from pathlib import Path 4 | from typing import IO, Generator, List, Optional, Union 5 | 6 | 7 | def is_gz(path: Union[str, Path]) -> bool: 8 | if isinstance(path, str): 9 | path = Path(path) 10 | return path.suffix == ".gz" 11 | 12 | 13 | def any_open(path: Union[str, Path], mode: Optional[str] = None) -> Union[gzip.GzipFile, IO]: 14 | if is_gz(path): 15 | return gzip.open(path, mode or "rb") 16 | else: 17 | return open(path, mode or "r") 18 | 19 | 20 | def tail(f: Union[str, Path, IO, gzip.GzipFile], lines: int = 1, _buffer: int = 4098) -> Generator[str, None, None]: 21 | """Tail a file and get X lines from the end""" 22 | if isinstance(f, (str, Path)): 23 | with any_open(f, "rb") as r: 24 | yield from tail(r, lines, _buffer) 25 | else: 26 | f.seek(0, os.SEEK_END) 27 | block_end_byte = f.tell() 28 | lines_to_go = lines 29 | block_number = -1 30 | blocks = [] 31 | while lines_to_go > 0 and block_end_byte > 0: 32 | if block_end_byte - _buffer > 0: 33 | f.seek(block_number * _buffer, os.SEEK_END) 34 | blocks.append(f.read(_buffer)) 35 | else: 36 | f.seek(0, 0) 37 | blocks.append(f.read(block_end_byte)) 38 | lines_found = blocks[-1].count(b"\n") 39 | lines_to_go -= lines_found 40 | block_end_byte -= _buffer 41 | block_number -= 1 42 | all_read_text = b"".join(reversed(blocks)) 43 | for row in 
all_read_text.splitlines()[-lines:]:
44 | yield row.decode()
45 |
46 |
47 | def tail_as_text(f: Union[str, Path, IO, gzip.GzipFile], lines: int = 1, _buffer: int = 4098) -> str:
48 | return "\n".join(tail(f, lines, _buffer))
49 |
50 |
51 | def tail_as_list(f: Union[str, Path, IO, gzip.GzipFile], lines: int = 1, _buffer: int = 4098) -> List[str]:
52 | return list(tail(f, lines, _buffer))
53 | -------------------------------------------------------------------------------- /backend/src/app/helpers/system.py: --------------------------------------------------------------------------------
1 | import asyncio
2 | from pathlib import Path
3 | from typing import Optional, Tuple
4 |
5 |
6 | async def run_command(cmd: str) -> Tuple[Optional[int], str, str]:
7 | proc = await asyncio.create_subprocess_shell(cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE)
8 | stdout, stderr = await proc.communicate()
9 | return proc.returncode, stdout.decode(), stderr.decode()
10 |
11 |
12 | async def remove_by_path(path: Path) -> None:
13 | path.unlink(missing_ok=True)
14 | -------------------------------------------------------------------------------- /backend/src/app/main.py: --------------------------------------------------------------------------------
1 | from app import create_app
2 |
3 | app = create_app()
4 | -------------------------------------------------------------------------------- /backend/src/app/run.py: --------------------------------------------------------------------------------
1 | import uvicorn
2 |
3 | from app import create_app
4 | from app.core import init_logging
5 |
6 | if __name__ == "__main__":
7 | init_logging.init_logging()
8 | uvicorn.run(create_app(), host="0.0.0.0", port=8040)
9 | -------------------------------------------------------------------------------- /backend/src/app/run_worker.py: --------------------------------------------------------------------------------
1 | from arq import run_worker
2 |
3 | from app.worker.main import WorkerSettings
4 |
5 | if __name__ == "__main__":
6 | run_worker(WorkerSettings)  # type: ignore
7 | -------------------------------------------------------------------------------- /backend/src/app/schemas/bot.py: --------------------------------------------------------------------------------
1 | from typing import Any
2 |
3 | from pydantic import BaseModel, Field
4 | from prisma.partials import BotInCreateSchema as _BotInCreateSchema, BotSchema as _BotSchema
5 |
6 | from app.schemas.enums import ImageSize, EngineTokens, LLMEngine
7 |
8 |
9 | class AiSettingsSchema(BaseModel):
10 | ai_goals: list[str] = Field(..., max_items=20, max_length=2000)
11 | ai_name: str = Field(..., max_length=30)
12 | ai_role: str = Field(..., max_length=1200)
13 | api_budget: float = Field(0.0, ge=0)
14 |
15 |
16 | class BotSchema(_BotSchema):
17 | ai_settings: dict[str, Any]
18 |
19 |
20 | class BotInCreateSchema(_BotInCreateSchema):
21 | fast_engine: LLMEngine = LLMEngine.GPT_3_5_TURBO
22 | smart_engine: LLMEngine = LLMEngine.GPT_4
23 | fast_tokens: EngineTokens = EngineTokens.t4000
24 | smart_tokens: EngineTokens = EngineTokens.t4000
25 | image_size: ImageSize = ImageSize.s512
26 | ai_settings: AiSettingsSchema
27 |
28 |
29 | class WorkspaceFileSchema(BaseModel):
30 | name: str
31 | path: str
32 | is_dir: bool
33 | mime_type: str | None = None
34 | size: str | None = None
35 | -------------------------------------------------------------------------------- /backend/src/app/schemas/enums.py: 
--------------------------------------------------------------------------------
1 | from enum import Enum, IntEnum
2 |
3 |
4 | class LLMEngine(str, Enum):
5 | GPT_3_5_TURBO = "gpt-3.5-turbo"
6 | GPT_3_5_TURBO_16K = "gpt-3.5-turbo-16k"
7 | GPT_4 = "gpt-4"
8 | GPT_4_32K = "gpt-4-32k"
9 |
10 |
11 | class ImageSize(IntEnum):
12 | s512 = 512
13 | s1024 = 1024
14 |
15 |
16 | class EngineTokens(IntEnum):
17 | t1000 = 1000
18 | t2000 = 2000
19 | t3000 = 3000
20 | t4000 = 4000
21 | t8000 = 8000
22 | t16000 = 16000
23 | t32000 = 32000
24 |
25 |
26 | class YesCount(IntEnum):
27 | c1 = 1
28 | c5 = 5
29 | c10 = 10
30 | c20 = 20
31 | c50 = 50
32 | c100 = 100
33 | c200 = 200
34 | -------------------------------------------------------------------------------- /backend/src/app/worker/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/neuronic-ai/autogpt-ui/a4d358e20f0b817d0e066a74adbc70f43677e9b8/backend/src/app/worker/__init__.py -------------------------------------------------------------------------------- /backend/src/app/worker/main.py: --------------------------------------------------------------------------------
1 | import logging
2 |
3 | from arq import func
4 | from arq.connections import RedisSettings
5 | from loguru import logger
6 | from prisma import Prisma
7 |
8 | from app.core import settings, init_logging
9 | from .tasks import run_auto_gpt
10 |
11 |
12 | prisma = Prisma(auto_register=True)
13 |
14 |
15 | async def startup(ctx) -> None:
16 | await prisma.connect()
17 | init_logging.init_logging(level=logging.INFO)
18 | logger.info("Worker is ready")
19 |
20 |
21 | async def shutdown(ctx) -> None:
22 | await prisma.disconnect()
23 |
24 |
25 | # WorkerSettings defines the settings to use when creating the worker;
26 | # it's used by the arq cli
27 | class WorkerSettings:
28 | functions = [
29 | func(run_auto_gpt.run, name="run_auto_gpt", timeout=60 * 60 * 24),  # type: ignore
30 | ]
31 | on_startup = startup
32 | on_shutdown = shutdown
33 | redis_settings = RedisSettings.from_dsn(settings.REDIS_URL)
34 | allow_abort_jobs = True
35 | -------------------------------------------------------------------------------- /backend/src/app/worker/tasks/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/neuronic-ai/autogpt-ui/a4d358e20f0b817d0e066a74adbc70f43677e9b8/backend/src/app/worker/tasks/__init__.py -------------------------------------------------------------------------------- /backend/src/app/worker/tasks/run_auto_gpt.py: --------------------------------------------------------------------------------
1 | import asyncio
2 | import os
3 |
4 | import anyio
5 | import yaml
6 | from asyncio import exceptions, streams
7 | from loguru import logger
8 | from prisma.models import Bot, User
9 |
10 | from app.api.helpers.bots import build_prompt_settings_path, build_workspace_path, build_settings_path, build_log_path
11 | from app.auto_gpt import cli
12 | from app.clients import AuthBackendClient, RequestType
13 | from app.core import globals, settings
14 |
15 |
16 | PROMPT_SETTINGS = dict(
17 | constraints=[
18 | "~4000 word limit for short term memory. 
Your short term memory is short, "
19 | "so immediately save important information to files.",
20 | "If you are unsure how you previously did something or want to recall past events, "
21 | "thinking about similar events will help you remember.",
22 | "No user assistance",
23 | "Exclusively use the commands listed below e.g. command_name",
24 | ],
25 | resources=[
26 | "Internet access for searches and information gathering.",
27 | "Long Term memory management.",
28 | "GPT-3.5 powered Agents for delegation of simple tasks.",
29 | "File output.",
30 | ],
31 | performance_evaluations=[
32 | "Continuously review and analyze your actions to ensure you are performing to the best of your abilities.",
33 | "Constructively self-criticize your big-picture behavior constantly.",
34 | "Reflect on past decisions and strategies to refine your approach.",
35 | "Every command has a cost, so be smart and efficient. Aim to complete tasks in the least number of steps.",
36 | "Write all code to a file.",
37 | ],
38 | )
39 |
40 |
41 | class ExtendedStreamReader(streams.StreamReader):
42 | @classmethod
43 | def cast(cls, some_a: streams.StreamReader):
44 | """Re-class a plain StreamReader instance as an ExtendedStreamReader in place."""
45 | assert isinstance(some_a, streams.StreamReader)
46 | some_a.__class__ = cls
47 | assert isinstance(some_a, ExtendedStreamReader)
48 | return some_a
49 |
50 | async def readline(self):
51 | """Read a chunk of data from the stream until a newline (b'\n') or carriage return (b'\r') is found.
52 |
53 | On success, return chunk that ends with newline. If only partial
54 | line can be read due to EOF, return incomplete line without
55 | terminating newline. When EOF was reached while no bytes read, empty
56 | bytes object is returned.
57 |
58 | If limit is reached, ValueError will be raised. In that case, if
59 | newline was found, complete line including newline will be removed
60 | from internal buffer. Else, internal buffer will be cleared. Limit is
61 | compared against part of the line without newline.
62 |
63 | If stream was paused, this function will automatically resume it if
64 | needed.
65 | """
66 | sep = [b"\n", b"\r"]
67 | seplen = 1  # each separator is a single byte; len(sep) would be the list length, not the separator length
68 | try:
69 | line = await self.readuntil(sep)
70 | except exceptions.IncompleteReadError as e:
71 | return e.partial
72 | except exceptions.LimitOverrunError as e:
73 | if self._buffer.startswith(tuple(sep), e.consumed):  # startswith takes a tuple, not a list
74 | del self._buffer[: e.consumed + seplen]
75 | else:
76 | self._buffer.clear()
77 | self._maybe_resume_transport()
78 | raise ValueError(e.args[0])
79 | return line
80 |
81 | async def readuntil(self, separator: bytes | list[bytes] = b"\n"):
82 | """Read data from the stream until ``separator`` is found.
83 | On success, the data and separator will be removed from the
84 | internal buffer (consumed). Returned data will include the
85 | separator at the end.
86 | Configured stream limit is used to check result. Limit sets the
87 | maximal length of data that can be returned, not counting the
88 | separator.
89 | If an EOF occurs and the complete separator is still not found,
90 | an IncompleteReadError exception will be raised, and the internal
91 | buffer will be reset. The IncompleteReadError.partial attribute
92 | may contain the separator partially.
93 | If the data cannot be read because of over limit, a
94 | LimitOverrunError exception will be raised, and the data
95 | will be left in the internal buffer, so it can be read again.
96 | The ``separator`` may also be an iterable of separators. 
In this
97 | case the return value will be the shortest possible that has any
98 | separator as the suffix. For the purposes of LimitOverrunError,
99 | the shortest possible separator is considered to be the one that
100 | matched.
101 | """
102 | if isinstance(separator, bytes):
103 | separator = [separator]
104 | else:
105 | # Make sure the shortest match wins, and support arbitrary iterables
106 | separator = sorted(separator, key=len)
107 | if not separator:
108 | raise ValueError("Separator should contain at least one element")
109 | min_seplen = len(separator[0])
110 | max_seplen = len(separator[-1])
111 | if min_seplen == 0:
112 | raise ValueError("Separator should be at least one-byte string")
113 |
114 | if self._exception is not None:
115 | raise self._exception
116 |
117 | # Consume whole buffer except last bytes, which length is
118 | # one less than max_seplen. Let's check corner cases with
119 | # separator[-1]='SEPARATOR':
120 | # * we have received almost complete separator (without last
121 | # byte). i.e. buffer='some textSEPARATO'. In this case we
122 | # can safely consume len(separator) - 1 bytes.
123 | # * last byte of buffer is first byte of separator, i.e.
124 | # buffer='abcdefghijklmnopqrS'. We may safely consume
125 | # everything except that last byte, but this requires
126 | # analyzing the bytes of the buffer that match a partial separator.
127 | # This is slow and/or requires an FSM. For this case our
128 | # implementation is not optimal, since it requires rescanning
129 | # data that is known not to belong to the separator. In the
130 | # real world, the separator will not be long enough to cause
131 | # performance problems. Even when reading MIME-encoded
132 | # messages :)
133 |
134 | # `offset` is the number of bytes from the beginning of the buffer
135 | # where there is no occurrence of any `separator`.
136 | offset = 0
137 |
138 | # Loop until we find a `separator` in the buffer, exceed the buffer size,
139 | # or an EOF has happened.
140 | while True:
141 | buflen = len(self._buffer)
142 |
143 | # Check if we now have enough data in the buffer for shortest
144 | # separator to fit.
145 | if buflen - offset >= min_seplen:
146 | match_start = None
147 | match_end = None
148 | for sep in separator:
149 | isep = self._buffer.find(sep, offset)
150 |
151 | if isep != -1:
152 | # `separator` is in the buffer. `match_start` and
153 | # `match_end` will be used later to retrieve the
154 | # data.
155 | end = isep + len(sep)
156 | if match_end is None or end < match_end:
157 | match_end = end
158 | match_start = isep
159 | if match_end is not None:
160 | break
161 |
162 | # see upper comment for explanation.
163 | offset = max(0, buflen + 1 - max_seplen)
164 | if offset > self._limit:
165 | raise exceptions.LimitOverrunError("Separator is not found, and chunk exceed the limit", offset)
166 |
167 | # Complete message (with full separator) may be present in buffer
168 | # even when EOF flag is set. This may happen when the last chunk
169 | # adds data which makes separator be found. That's why we check for
170 | # EOF *after* inspecting the buffer.
171 | if self._eof:
172 | chunk = bytes(self._buffer)
173 | self._buffer.clear()
174 | raise exceptions.IncompleteReadError(chunk, None)
175 |
176 | # _wait_for_data() will resume reading if stream was paused. 
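# Worked example of the multi-separator behaviour (illustrative values):
# with separator=[b"\n", b"\r"] and buffer b"10%\r20%\n", the first call
# returns b"10%\r" -- the earliest match in the buffer wins, which is what
# lets readline() above treat carriage-return spinner redraws as line ends.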
177 | await self._wait_for_data("readuntil") 178 | 179 | if match_start > self._limit: 180 | raise exceptions.LimitOverrunError("Separator is found, but chunk is longer than limit", match_start) 181 | 182 | chunk = self._buffer[:match_end] 183 | del self._buffer[:match_end] 184 | self._maybe_resume_transport() 185 | return bytes(chunk) 186 | 187 | 188 | CMD = ( 189 | "{binary} " 190 | "{cli_path} " 191 | "-w {workspace_folder} " 192 | "-C {settings_path} " 193 | "-P {prompt_settings} " 194 | "--max-cache-size={max_cache_size} " 195 | "--skip-news " 196 | "--skip-reprompt" 197 | ) 198 | 199 | 200 | def build_command(bot: Bot): 201 | ai_settings_path = build_settings_path(bot.user_id) 202 | with open(ai_settings_path, "w") as w: 203 | yaml.dump(bot.ai_settings, w) 204 | prompt_settings_path = build_prompt_settings_path(bot.user_id) 205 | with open(prompt_settings_path, "w") as w: 206 | yaml.dump(PROMPT_SETTINGS, w) 207 | cmd = CMD.format( 208 | binary=settings.PYTHON_BINARY, 209 | cli_path=cli.__file__, 210 | workspace_folder=build_workspace_path(bot.user_id), 211 | settings_path=ai_settings_path, 212 | prompt_settings=prompt_settings_path, 213 | max_cache_size=settings.MAX_CACHE_SIZE, 214 | ) 215 | return cmd 216 | 217 | 218 | async def run(ctx, bot_id: int): 219 | bot = await Bot.prisma().find_unique(where={"id": bot_id}) 220 | 221 | user = await User.prisma().find_unique(where={"id": bot.user_id}) 222 | auth_client = AuthBackendClient( 223 | settings.SESSION_API_URL, 224 | user_path=settings.SESSION_API_USER_PATH, 225 | session_path=settings.SESSION_API_SESSION_PATH, 226 | session_cookie=settings.SESSION_COOKIE, 227 | ) 228 | if settings.NO_AUTH: 229 | openai_key = settings.OPENAI_LOCAL_KEY 230 | else: 231 | openai_key = (await auth_client.get_user(RequestType.userdata, username=user.username))["openai"] 232 | env = { 233 | "OPENAI_API_KEY": openai_key, 234 | "FAST_TOKEN_LIMIT": str(bot.fast_tokens), 235 | "SMART_TOKEN_LIMIT": str(bot.smart_tokens), 236 | "FAST_LLM_MODEL": bot.fast_engine, 237 | "SMART_LLM_MODEL": bot.smart_engine, 238 | "IMAGE_SIZE": str(bot.image_size), 239 | "USE_WEB_BROWSER": "firefox", 240 | "PYTHONPATH": os.environ.get("PYTHONPATH"), 241 | "PATH": os.environ.get("PATH"), 242 | "WDM_PROGRESS_BAR": "0", 243 | # TODO: they broke configuration in 0.4.4, update when new version is released 244 | "PLUGINS_CONFIG_FILE": str(settings.WORKSPACES_DIR / "plugins.yaml"), 245 | } 246 | for k, f in settings.__fields__.items(): 247 | if not f.field_info.extra.get("auto_gpt"): 248 | continue 249 | value = getattr(settings, k) 250 | if value is None: 251 | value = "" 252 | elif isinstance(value, list): 253 | value = ",".join(value) 254 | else: 255 | value = str(value) 256 | env[k] = value 257 | proc = await asyncio.create_subprocess_shell( 258 | build_command(bot), 259 | stdout=asyncio.subprocess.PIPE, 260 | stderr=asyncio.subprocess.STDOUT, 261 | env=env, 262 | ) 263 | log_path = build_log_path(bot.user_id) 264 | ExtendedStreamReader.cast(proc.stdout) 265 | is_carriage = False 266 | buf: bytes | None = None 267 | prev_buf: bytes | None = None 268 | prev_prev_buf: bytes | None = None 269 | async with (await anyio.open_file(log_path, "a+")) as w: 270 | while proc.returncode is None: 271 | prev_prev_buf = prev_buf 272 | prev_buf = buf 273 | buf = await proc.stdout.readline() 274 | if not buf: 275 | break 276 | if b"\r" in buf: 277 | if not is_carriage: 278 | is_carriage = True 279 | await w.write(buf.decode()) 280 | else: 281 | continue 282 | else: 283 | is_carriage = False 284 | 
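# normal line: always written; the carriage-return branch above keeps only the
# first chunk of each \r-run so spinner redraws don't flood the log file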
await w.write(buf.decode()) 285 | await w.flush() 286 | await proc.wait() 287 | if proc.returncode != 0: 288 | logger.warning(f"Bot {bot.id} exited with a non-zero return code: {proc.returncode}") 289 | await Bot.prisma().update( 290 | data={"is_failed": True, "is_active": False, "runs_left": 0, "worker_message_id": None}, 291 | where={"id": bot.id}, 292 | ) 293 | if proc.stderr: 294 | with open(log_path, "a+") as w: 295 | w.write((await proc.stderr.read()).decode()) 296 | return None 297 | if prev_prev_buf and b"Shutting down..." in prev_prev_buf: 298 | logger.info(f"Bot {bot.id} finished") 299 | await Bot.prisma().update( 300 | data={"is_active": False, "runs_left": 0, "worker_message_id": None}, where={"id": bot.id} 301 | ) 302 | return None 303 | await Bot.prisma().update(data={"runs_left": bot.runs_left - 1, "worker_message_id": None}, where={"id": bot.id}) 304 | if bot.runs_left <= 1: 305 | return None 306 | job = await globals.arq_redis.enqueue_job("run_auto_gpt", bot_id=bot.id) 307 | await Bot.prisma().update(data={"worker_message_id": job.job_id}, where={"id": bot.id}) 308 | -------------------------------------------------------------------------------- /backend/src/tests/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/neuronic-ai/autogpt-ui/a4d358e20f0b817d0e066a74adbc70f43677e9b8/backend/src/tests/__init__.py -------------------------------------------------------------------------------- /backend/src/tests/api/v1/test_sessions.py: -------------------------------------------------------------------------------- 1 | from unittest.mock import patch 2 | 3 | import pytest 4 | from async_asgi_testclient import TestClient 5 | 6 | from app.core import settings 7 | from app.clients import AuthBackendClient 8 | 9 | from tests.helpers.auth_backend_mocker import session_payload 10 | 11 | 12 | pytestmark = pytest.mark.asyncio 13 | 14 | 15 | async def test_check_session_no_auth(async_client: TestClient): 16 | r = await async_client.get(f"{settings.API_V1_STR}/sessions/check") 17 | assert 204 == r.status_code 18 | 19 | 20 | async def test_check_session_auth_success(async_client: TestClient, with_auth): 21 | with patch.object(AuthBackendClient, "get_session", return_value=session_payload()): 22 | r = await async_client.get( 23 | f"{settings.API_V1_STR}/sessions/check", cookies={settings.SESSION_COOKIE: "session"} 24 | ) 25 | assert 204 == r.status_code 26 | 27 | 28 | async def test_check_session_auth_fail(async_client: TestClient, with_auth): 29 | with patch.object(AuthBackendClient, "get_session", return_value={}): 30 | r = await async_client.get( 31 | f"{settings.API_V1_STR}/sessions/check", cookies={settings.SESSION_COOKIE: "session"} 32 | ) 33 | assert 401 == r.status_code 34 | -------------------------------------------------------------------------------- /backend/src/tests/conftest.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | from copy import deepcopy 3 | 4 | import fakeredis.aioredis 5 | import pytest 6 | import pytest_asyncio 7 | from async_asgi_testclient import TestClient as AsyncTestClient 8 | 9 | from app import create_app 10 | 11 | 12 | pytestmark = pytest.mark.asyncio 13 | 14 | 15 | def pytest_configure(config): 16 | """ 17 | Allows plugins and conftest files to perform initial configuration. 18 | This hook is called for every plugin and initial conftest 19 | file after command line options have been parsed.
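Here it is used to monkeypatch the redis clients with fakeredis equivalents before the app touches Redis, so the tests never need a live Redis server.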
20 | """ 21 | import pytest 22 | 23 | mpatch = pytest.MonkeyPatch() 24 | 25 | import redis.asyncio 26 | 27 | mpatch.setattr(redis, "Redis", fakeredis.FakeRedis) 28 | mpatch.setattr(redis.asyncio, "Redis", fakeredis.aioredis.FakeRedis) 29 | 30 | 31 | @pytest.fixture(scope="session", autouse=True) 32 | def event_loop(): 33 | """Create an instance of the default event loop for the whole test session.""" 34 | loop = asyncio.new_event_loop() 35 | yield loop 36 | loop.close() 37 | 38 | 39 | @pytest_asyncio.fixture(scope="module") 40 | async def async_client(): 41 | async with AsyncTestClient(create_app()) as client: 42 | yield client 43 | 44 | 45 | @pytest_asyncio.fixture() 46 | async def async_client_class(request, async_client): 47 | request.cls.async_client = async_client 48 | 49 | 50 | @pytest.fixture 51 | def with_no_auth(): 52 | from app.core import settings 53 | from app.api.helpers.security import check_user, NoAuthDependency, SessionAuthDependency 54 | 55 | old_class = check_user.__class__ 56 | settings.NO_AUTH = True 57 | NoAuthDependency.cast(check_user) 58 | yield 59 | settings.NO_AUTH = False 60 | old_class.cast(check_user) 61 | 62 | 63 | @pytest.fixture 64 | def with_auth(): 65 | from app.core import settings 66 | from app.api.helpers.security import check_user, NoAuthDependency, SessionAuthDependency 67 | 68 | old_class = check_user.__class__ 69 | settings.NO_AUTH = False 70 | SessionAuthDependency.cast(check_user) 71 | yield 72 | settings.NO_AUTH = True 73 | old_class.cast(check_user) 74 | -------------------------------------------------------------------------------- /backend/src/tests/helpers/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/neuronic-ai/autogpt-ui/a4d358e20f0b817d0e066a74adbc70f43677e9b8/backend/src/tests/helpers/__init__.py -------------------------------------------------------------------------------- /backend/src/tests/helpers/auth_backend_mocker.py: -------------------------------------------------------------------------------- 1 | def session_payload(): 2 | return {"user": {"username": "user"}} 3 | 4 | 5 | def user_payload(): 6 | return {"openai": "key"} 7 | -------------------------------------------------------------------------------- /docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3.4' 2 | 3 | services: 4 | proxy: 5 | image: traefik:v2.9 6 | command: 7 | # Enable Docker in Traefik, so that it reads labels from Docker services 8 | - --providers.docker 9 | # Add a constraint to only use services with the label for this stack 10 | # from the env var TRAEFIK_TAG 11 | - --providers.docker.constraints=Label(`traefik.constraint-label-stack`, `${TRAEFIK_TAG?Variable not set}`) 12 | # Do not expose all Docker services, only the ones explicitly exposed 13 | - --providers.docker.exposedbydefault=false 14 | # Enable the access log, with HTTP requests 15 | - --accesslog 16 | # Enable the Traefik log, for configurations and errors 17 | - --log 18 | - --entrypoints.web.address=:80 19 | - --entrypoints.web.forwardedHeaders.insecure=true 20 | restart: on-failure 21 | volumes: 22 | - /var/run/docker.sock:/var/run/docker.sock 23 | ports: 24 | - '8160:80' 25 | worker: 26 | build: 27 | context: backend 28 | args: 29 | AUTO_GPT_VERSION: "${AUTO_GPT_VERSION:-0.4.4}" 30 | command: 31 | - worker 32 | depends_on: 33 | - mysql 34 | restart: on-failure 35 | environment: 36 | - TZ=Etc/UTC 37 | env_file: 38 | - env-backend.env 39 | 
volumes: 40 | - wb-data:/workspaces 41 | api: 42 | build: 43 | context: backend 44 | args: 45 | AUTO_GPT_VERSION: "${AUTO_GPT_VERSION:-0.4.4}" 46 | command: 47 | - api 48 | depends_on: 49 | - mysql 50 | restart: on-failure 51 | environment: 52 | - TZ=Etc/UTC 53 | env_file: 54 | - env-backend.env 55 | volumes: 56 | - wb-data:/workspaces 57 | labels: 58 | - traefik.enable=true 59 | - traefik.constraint-label-stack=${TRAEFIK_TAG?Variable not set} 60 | - traefik.http.routers.${STACK_NAME?Variable not set}-backend-http.rule=PathPrefix(`${BASE_URL:-}/api`) || PathPrefix(`${BASE_URL:-}/docs`) || PathPrefix(`${BASE_URL:-}/redoc`) || PathPrefix(`${BASE_URL:-}/openapi.json`) 61 | - traefik.http.routers.${STACK_NAME?Variable not set}-backend-http.middlewares=base-stripprefix 62 | - traefik.http.middlewares.base-stripprefix.stripprefix.prefixes=${BASE_URL:-} 63 | - traefik.http.services.${STACK_NAME?Variable not set}-backend.loadbalancer.server.port=80 64 | frontend: 65 | build: 66 | context: frontend 67 | depends_on: 68 | - api 69 | restart: on-failure 70 | environment: 71 | - TZ=Etc/UTC 72 | env_file: 73 | - env-frontend.env 74 | labels: 75 | - traefik.enable=true 76 | - traefik.constraint-label-stack=${TRAEFIK_TAG?Variable not set} 77 | - traefik.http.routers.${STACK_NAME?Variable not set}-frontend-http.rule=PathPrefix(`/`) 78 | - traefik.http.services.${STACK_NAME?Variable not set}-frontend.loadbalancer.server.port=3000 79 | mysql: 80 | image: mysql 81 | restart: unless-stopped 82 | environment: 83 | - TZ=Etc/UTC 84 | env_file: 85 | - env-mysql.env 86 | volumes: 87 | - db-data:/var/lib/mysql 88 | redis: 89 | image: redis 90 | restart: unless-stopped 91 | environment: 92 | - TZ=Etc/UTC 93 | volumes: 94 | - redis-data:/data 95 | 96 | volumes: 97 | db-data: 98 | redis-data: 99 | wb-data: 100 | -------------------------------------------------------------------------------- /env-backend.env.example: -------------------------------------------------------------------------------- 1 | TESTING=0 2 | SESSION_COOKIE=sessionId 3 | SESSION_API_URL=http://localhost 4 | SESSION_API_USER_PATH=/user 5 | SESSION_API_SESSION_PATH=/session 6 | # Path to the workspaces directory; the default value is used in the Docker environment 7 | WORKSPACES_DIR=/workspaces 8 | # Path to the plugins directory used in Auto-GPT 9 | PLUGINS_DIR=/plugins 10 | # Path to the python binary if run outside Docker 11 | PYTHON_BINARY=python 12 | # Tail logs for this many rows for the UI 13 | TAIL_LOG_COUNT=5000 14 | # Max size for a workspace file to upload, 5MiB by default 15 | MAX_WORKSPACE_FILE_SIZE=5242880 16 | # Max size for a cache file before it gets truncated, 5MiB by default 17 | MAX_CACHE_SIZE=5242880 18 | # Used only if Auth is disabled 19 | OPENAI_LOCAL_KEY= 20 | # Allow shell command execution 21 | EXECUTE_LOCAL_COMMANDS=0 22 | # Disable specific Auto-GPT command categories 23 | DISABLED_COMMAND_CATEGORIES=autogpt.commands.execute_code 24 | # Disable specific shell commands 25 | DENY_COMMANDS= 26 | # Allow only these shell commands 27 | ALLOW_COMMANDS= 28 | # Allow only these plugins, one of: AutoGPTSceneXPlugin, AutoGPTTwitter, AutoGPTEmailPlugin, AutoGPT_Slack, PlannerPlugin, AutoGPTApiTools, AutoGPT_YouTube 29 | ALLOWLISTED_PLUGINS= 30 | # Database URL, ie: mysql://user:password@localhost:3306/auto_gpt_ui 31 | DATABASE_URL= 32 | REDIS_URL=redis://redis:6379/0 33 | # For Email plugin 34 | EMAIL_ADDRESS= 35 | EMAIL_PASSWORD= 36 | EMAIL_SMTP_HOST= 37 | EMAIL_SMTP_PORT=587 38 | EMAIL_IMAP_SERVER= 39 | EMAIL_MARK_AS_SEEN=0 40 | 
EMAIL_SIGNATURE=This was sent by Auto-GPT 41 | EMAIL_DRAFT_MODE_WITH_FOLDER=[Gmail]/Drafts 42 | # For Twitter plugin 43 | TW_CONSUMER_KEY= 44 | TW_CONSUMER_SECRET= 45 | TW_ACCESS_TOKEN= 46 | TW_ACCESS_TOKEN_SECRET= 47 | TW_CLIENT_ID= 48 | TW_CLIENT_ID_SECRET= 49 | # For SceneX plugin 50 | SCENEX_API_KEY= 51 | # For Slack plugin 52 | SLACK_BOT_TOKEN= 53 | SLACK_CHANNEL= 54 | # For Youtube plugin 55 | YOUTUBE_API_KEY= 56 | -------------------------------------------------------------------------------- /env-frontend.env.example: -------------------------------------------------------------------------------- 1 | # base url for the backend used by the frontend to make server-side calls, ie: http://api 2 | NUXT_API_BASE= 3 | # base url for the app, ie: /autogpt/ 4 | NUXT_APP_BASE_URL= 5 | -------------------------------------------------------------------------------- /env-guidance.txt: -------------------------------------------------------------------------------- 1 | env-guidance.txt 2 | 3 | :::::::::::::: 4 | .env 5 | :::::::::::::: 6 | #Don't change these 7 | TRAEFIK_TAG=traefik-public 8 | STACK_NAME=autogpt 9 | AUTO_GPT_VERSION=0.4.4 (Manually set the version of Auto-GPT to use; currently only tested with 0.4.4) 10 | 11 | :::::::::::::: 12 | env-backend.env 13 | :::::::::::::: 14 | NO_AUTH=1 (This should be 1 for a standalone install; all session config can be removed when NO_AUTH=1) 15 | 16 | WORKSPACES_DIR=/workspaces (Or wherever you want us to save workspace files that you upload or that Auto-GPT creates) 17 | 18 | PYTHON_BINARY=python3 (This can be used to change the Python command/version we use; this should stay python3 for the foreseeable future) 19 | 20 | # Tail logs for this many rows for the UI (The Python log that shows the work being done is cyclical; this sets how many lines it keeps before it starts to overwrite its older content) 21 | TAIL_LOG_COUNT=500 22 | 23 | # Max size for a workspace file to upload, 5MiB by default (at 5 MiB you won't be able to save any more to your workspace unless you set this higher) 24 | MAX_WORKSPACE_FILE_SIZE=5242880 25 | 26 | # Max size for a cache file before it gets truncated, 5MiB by default (the default is probably fine) 27 | MAX_CACHE_SIZE=5242880 28 | 29 | # my openai key (Add your OpenAI key here: https://platform.openai.com/account/api-keys) 30 | #OPENAI_LOCAL_KEY=############################################## 31 | 32 | # Add the execute_code command to Auto-GPT. Don't use in production, private servers only; this enables GPT to do just about anything on your server 33 | ALLOW_CODE_EXECUTION=1 34 | EXECUTE_LOCAL_COMMANDS=1 35 | 36 | # Restrict commands if you want, otherwise omit this line 37 | DENY_COMMANDS=nano,vim,vi,emacs,top,ping,ssh,scp 38 | 39 | 40 | # Database URL, ie: mysql://user:password@mysql:3306/auto_gpt_ui 41 | DATABASE_URL= (See the example above if you are using the MySQL container we bring online with this build; use the user and password you set in the MySQL config, or better yet, set up a dedicated user for the DB) 42 | 43 | # Redis is used to let users stop a job and then start it off where they left off; you can use the local Redis container we add by using the URL below 44 | REDIS_URL=redis://redis:6379/0 45 | 46 | --------------- 47 | #Configure permissions and plugins 48 | #Plugins require API keys or configuration information such as email details 49 | #For full access, do not modify the allow and deny sections (a restricted example follows below).
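#For example, a locked-down install can pair the deny list above with a one-item allowlist (hypothetical choice; pick from the plugin list below): DENY_COMMANDS=nano,vim,vi,emacs,top,ping,ssh,scp and ALLOWLISTED_PLUGINS=AutoGPTEmailPlugin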
50 | ---------------- 51 | # Allow only these plugins (If you have plugins configured): AutoGPTApiTools, AutoGPT_Slack, AutoGPTSceneXPlugin, AutoGPT_YouTube, AutoGPTTwitter, PlannerPlugin, AutoGPTEmailPlugin 52 | ALLOWLISTED_PLUGINS=AutoGPTApiTools, AutoGPT_Slack, AutoGPTSceneXPlugin, AutoGPT_YouTube, AutoGPTTwitter, PlannerPlugin, AutoGPTEmailPlugin 53 | 54 | #This is like you would configure any email client 55 | EMAIL_ADDRESS= 56 | EMAIL_PASSWORD= 57 | EMAIL_SMTP_HOST= 58 | EMAIL_SMTP_PORT=587 59 | EMAIL_IMAP_SERVER= 60 | EMAIL_MARK_AS_SEEN=0 61 | EMAIL_SIGNATURE=This was sent by Auto-GPT 62 | EMAIL_DRAFT_MODE_WITH_FOLDER=[Gmail]/Drafts 63 | 64 | # For Twitter plugin 65 | TW_CONSUMER_KEY= 66 | TW_CONSUMER_SECRET= 67 | TW_ACCESS_TOKEN= 68 | TW_ACCESS_TOKEN_SECRET= 69 | TW_CLIENT_ID= 70 | TW_CLIENT_ID_SECRET= 71 | 72 | # For SceneX plugin 73 | SCENEX_API_KEY= 74 | 75 | # For Slack plugin 76 | SLACK_BOT_TOKEN= 77 | SLACK_CHANNEL= 78 | 79 | # For Youtube plugin 80 | YOUTUBE_API_KEY= 81 | 82 | 83 | :::::::::::::: 84 | env-frontend.env 85 | :::::::::::::: 86 | # base url for the backend used by the frontend to make server-side calls, ie: http://api (For a local single-user install, don't change this) 87 | NUXT_API_BASE=http://api 88 | 89 | :::::::::::::: 90 | env-mysql.env 91 | :::::::::::::: 92 | MYSQL_ROOT_PASSWORD=(Set this to whatever you want) 93 | MYSQL_DATABASE=autogpt (If you change this, make sure you update your mysql connection string) 94 | 95 | 96 | -------------------------------------------------------------------------------- /env-mysql.env.example: -------------------------------------------------------------------------------- 1 | MYSQL_ROOT_PASSWORD= 2 | MYSQL_DATABASE= 3 | -------------------------------------------------------------------------------- /frontend/.dockerignore: -------------------------------------------------------------------------------- 1 | node_modules 2 | *.log* 3 | .nuxt 4 | .nitro 5 | .cache 6 | .output 7 | dist 8 | 9 | # User-specific stuff 10 | .idea/**/workspace.xml 11 | .idea/**/tasks.xml 12 | .idea/**/usage.statistics.xml 13 | .idea/**/dictionaries 14 | .idea/**/shelf 15 | .vscode 16 | 17 | # .idea should not be in git 18 | .idea/ 19 | 20 | db 21 | -------------------------------------------------------------------------------- /frontend/.eslintrc.js: -------------------------------------------------------------------------------- 1 | module.exports = { 2 | root: true, 3 | env: { 4 | browser: true, 5 | node: true, 6 | }, 7 | extends: ['@nuxtjs/eslint-config-typescript', 'prettier', 'plugin:prettier/recommended'], 8 | plugins: ['prettier'], 9 | reportUnusedDisableDirectives: true, 10 | }; 11 | -------------------------------------------------------------------------------- /frontend/.gitignore: -------------------------------------------------------------------------------- 1 | node_modules 2 | *.log* 3 | .nuxt 4 | .nitro 5 | .cache 6 | .output 7 | .env 8 | dist 9 | .DS_Store 10 | -------------------------------------------------------------------------------- /frontend/.npmrc: -------------------------------------------------------------------------------- 1 | shamefully-hoist=true 2 | strict-peer-dependencies=false 3 | -------------------------------------------------------------------------------- /frontend/.prettierrc: -------------------------------------------------------------------------------- 1 | { 2 | "arrowParens": "always", 3 | "trailingComma": "all", 4 | "singleQuote": true, 5 | "printWidth": 120, 6 | "htmlWhitespaceSensitivity": 
"ignore" 7 | } 8 | -------------------------------------------------------------------------------- /frontend/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM node:18 2 | 3 | RUN npm install pm2 -g 4 | 5 | WORKDIR /app 6 | 7 | COPY ./package.json /app/ 8 | COPY ./package-lock.json /app/ 9 | 10 | RUN npm install 11 | 12 | COPY ./ /app/ 13 | 14 | RUN npm run build 15 | 16 | ENV NUXT_HOST=0.0.0.0 17 | ENV NUXT_PORT=3000 18 | 19 | EXPOSE ${NUXT_PORT} 20 | 21 | # ENTRYPOINT ["pm2", "start", "ecosystem.config.js"] 22 | CMD ["pm2-runtime", "start", "ecosystem.config.js"] 23 | -------------------------------------------------------------------------------- /frontend/README.md: -------------------------------------------------------------------------------- 1 | # Nuxt 3 Minimal Starter 2 | 3 | Look at the [Nuxt 3 documentation](https://nuxt.com/docs/getting-started/introduction) to learn more. 4 | 5 | ## Setup 6 | 7 | Make sure to install the dependencies: 8 | 9 | ```bash 10 | # yarn 11 | yarn install 12 | 13 | # npm 14 | npm install 15 | 16 | # pnpm 17 | pnpm install 18 | ``` 19 | 20 | ## Development Server 21 | 22 | Start the development server on `http://localhost:3000` 23 | 24 | ```bash 25 | npm run dev 26 | ``` 27 | 28 | ## Production 29 | 30 | Build the application for production: 31 | 32 | ```bash 33 | npm run build 34 | ``` 35 | 36 | Locally preview production build: 37 | 38 | ```bash 39 | npm run preview 40 | ``` 41 | 42 | Check out the [deployment documentation](https://nuxt.com/docs/getting-started/deployment) for more information. 43 | -------------------------------------------------------------------------------- /frontend/ecosystem.config.js: -------------------------------------------------------------------------------- 1 | module.exports = { 2 | apps: [ 3 | { 4 | name: 'app', 5 | script: '.output/server/index.mjs', 6 | args: '--enable-network-family-autoselection', 7 | }, 8 | ], 9 | }; 10 | -------------------------------------------------------------------------------- /frontend/nuxt.config.ts: -------------------------------------------------------------------------------- 1 | // https://nuxt.com/docs/api/configuration/nuxt-config 2 | import { NaiveUiResolver } from 'unplugin-vue-components/resolvers'; 3 | import Components from 'unplugin-vue-components/vite'; 4 | import nodePolyfills from 'rollup-plugin-polyfill-node'; 5 | 6 | export default defineNuxtConfig({ 7 | srcDir: 'src', 8 | modules: ['@vueuse/nuxt', 'nuxt-viewport'], 9 | css: ['@/assets/scss/main.scss'], 10 | runtimeConfig: { 11 | apiBase: '', 12 | public: { 13 | apiBase: '', 14 | }, 15 | }, 16 | typescript: { 17 | shim: false, 18 | }, 19 | viewport: { 20 | breakpoints: { 21 | xs: 0, 22 | sm: 640, 23 | md: 1024, 24 | lg: 1280, 25 | xl: 1536, 26 | xxl: 1920, 27 | }, 28 | defaultBreakpoints: { 29 | desktop: 'lg', 30 | mobile: 'xs', 31 | tablet: 'md', 32 | }, 33 | fallbackBreakpoint: 'lg', 34 | }, 35 | build: { 36 | transpile: 37 | process.env.NODE_ENV === 'production' 38 | ? 
['naive-ui', 'vueuc', 'ansi_up', '@css-render/vue3-ssr', '@juggle/resize-observer'] 39 | : ['@juggle/resize-observer'], 40 | }, 41 | vite: { 42 | plugins: [ 43 | Components({ 44 | resolvers: [NaiveUiResolver()], 45 | }), 46 | ], 47 | resolve: { 48 | alias: { 49 | stream: 'rollup-plugin-node-polyfills/polyfills/stream', 50 | events: 'rollup-plugin-node-polyfills/polyfills/events', 51 | assert: 'assert', 52 | crypto: 'crypto-browserify', 53 | util: 'util', 54 | http: 'stream-http', 55 | https: 'https-browserify', 56 | url: 'url', 57 | }, 58 | }, 59 | optimizeDeps: { 60 | esbuildOptions: { 61 | define: { 62 | global: 'window', 63 | }, 64 | }, 65 | include: process.env.NODE_ENV === 'development' ? ['naive-ui', 'vueuc', 'date-fns-tz/esm/formatInTimeZone'] : [], 66 | }, 67 | build: { 68 | rollupOptions: { 69 | plugins: [nodePolyfills()], 70 | }, 71 | }, 72 | }, 73 | }); 74 | -------------------------------------------------------------------------------- /frontend/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "private": true, 3 | "scripts": { 4 | "build": "nuxt build", 5 | "dev": "nuxt dev", 6 | "generate": "nuxt generate", 7 | "preview": "nuxt preview", 8 | "postinstall": "nuxt prepare", 9 | "lint": "eslint ." 10 | }, 11 | "devDependencies": { 12 | "@css-render/vue3-ssr": "^0.15.12", 13 | "@nuxtjs/eslint-config-typescript": "^12.0.0", 14 | "@rollup/plugin-inject": "^5.0.3", 15 | "@types/node": "^18.14.0", 16 | "@vicons/carbon": "^0.12.0", 17 | "@vicons/fluent": "^0.12.0", 18 | "@vicons/ionicons5": "^0.12.0", 19 | "@vueuse/core": "^9.13.0", 20 | "@vueuse/nuxt": "^9.13.0", 21 | "eslint": "^8.34.0", 22 | "eslint-config-prettier": "^8.6.0", 23 | "eslint-plugin-prettier": "^4.2.1", 24 | "naive-ui": "^2.34.3", 25 | "nuxt": "^3.4.3", 26 | "nuxt-viewport": "^2.0.4", 27 | "prettier": "2.8.4", 28 | "rollup-plugin-node-polyfills": "^0.2.1", 29 | "rollup-plugin-polyfill-node": "^0.12.0", 30 | "sass": "^1.62.1", 31 | "sass-loader": "^13.2.2", 32 | "typescript": "^4.9.5", 33 | "unplugin-vue-components": "^0.24.0" 34 | }, 35 | "dependencies": { 36 | "@types/lodash": "^4.14.191", 37 | "ansi_up": "^5.2.1", 38 | "date-fns": "^2.29.3", 39 | "highlight.js": "^11.8.0", 40 | "lodash-es": "^4.17.21" 41 | } 42 | } 43 | -------------------------------------------------------------------------------- /frontend/src/app.vue: -------------------------------------------------------------------------------- 1 | 8 | 9 | 18 | -------------------------------------------------------------------------------- /frontend/src/assets/scss/_highlight.scss: -------------------------------------------------------------------------------- 1 | @import '../../../node_modules/highlight.js/styles/github.css'; 2 | -------------------------------------------------------------------------------- /frontend/src/assets/scss/main.scss: -------------------------------------------------------------------------------- 1 | @import 'highlight'; 2 | -------------------------------------------------------------------------------- /frontend/src/components.d.ts: -------------------------------------------------------------------------------- 1 | /* eslint-disable */ 2 | /* prettier-ignore */ 3 | // @ts-nocheck 4 | // Generated by unplugin-vue-components 5 | // Read more: https://github.com/vuejs/core/pull/3399 6 | import '@vue/runtime-core' 7 | 8 | export {} 9 | 10 | declare module '@vue/runtime-core' { 11 | export interface GlobalComponents { 12 | NAvatar: typeof import('naive-ui')['NAvatar'] 13 | 
NButton: typeof import('naive-ui')['NButton'] 14 | NButtonGroup: typeof import('naive-ui')['NButtonGroup'] 15 | NCard: typeof import('naive-ui')['NCard'] 16 | NCollapse: typeof import('naive-ui')['NCollapse'] 17 | NCollapseItem: typeof import('naive-ui')['NCollapseItem'] 18 | NDataTable: typeof import('naive-ui')['NDataTable'] 19 | NDialogProvider: typeof import('naive-ui')['NDialogProvider'] 20 | NForm: typeof import('naive-ui')['NForm'] 21 | NFormInput: typeof import('naive-ui')['NFormInput'] 22 | NFormItem: typeof import('naive-ui')['NFormItem'] 23 | NGi: typeof import('naive-ui')['NGi'] 24 | NGi1: typeof import('naive-ui')['NGi1'] 25 | NGrid: typeof import('naive-ui')['NGrid'] 26 | NIcon: typeof import('naive-ui')['NIcon'] 27 | NImage: typeof import('naive-ui')['NImage'] 28 | NInput: typeof import('naive-ui')['NInput'] 29 | NInputNumber: typeof import('naive-ui')['NInputNumber'] 30 | NLayout: typeof import('naive-ui')['NLayout'] 31 | NLayoutHeader: typeof import('naive-ui')['NLayoutHeader'] 32 | NList: typeof import('naive-ui')['NList'] 33 | NListItem: typeof import('naive-ui')['NListItem'] 34 | NModal: typeof import('naive-ui')['NModal'] 35 | NModel: typeof import('naive-ui')['NModel'] 36 | NPageHeader: typeof import('naive-ui')['NPageHeader'] 37 | NPopconfirm: typeof import('naive-ui')['NPopconfirm'] 38 | NProgress: typeof import('naive-ui')['NProgress'] 39 | NSelect: typeof import('naive-ui')['NSelect'] 40 | NSpace: typeof import('naive-ui')['NSpace'] 41 | NSpin: typeof import('naive-ui')['NSpin'] 42 | NTabPane: typeof import('naive-ui')['NTabPane'] 43 | NTabs: typeof import('naive-ui')['NTabs'] 44 | NTag: typeof import('naive-ui')['NTag'] 45 | NText: typeof import('naive-ui')['NText'] 46 | NThing: typeof import('naive-ui')['NThing'] 47 | NTooltip: typeof import('naive-ui')['NTooltip'] 48 | NUpload: typeof import('naive-ui')['NUpload'] 49 | NUploadDragger: typeof import('naive-ui')['NUploadDragger'] 50 | RouterLink: typeof import('vue-router')['RouterLink'] 51 | RouterView: typeof import('vue-router')['RouterView'] 52 | } 53 | } 54 | -------------------------------------------------------------------------------- /frontend/src/components/LogViewer.vue: -------------------------------------------------------------------------------- 1 | 4 | 5 | 31 | 32 | 43 | -------------------------------------------------------------------------------- /frontend/src/composables/apiManager.ts: -------------------------------------------------------------------------------- 1 | import { FetchError } from 'ofetch'; 2 | import { useNotification } from 'naive-ui'; 3 | import { 4 | ApiError, 5 | AiSettingsSchema, 6 | BotSchema, 7 | LogResponse, 8 | BotInCreateSchema, 9 | WorkspaceFileSchema, 10 | YesCount, 11 | } from '@/interfaces'; 12 | 13 | interface ApiManager { 14 | putBot(v: BotInCreateSchema): Promise<BotSchema>; 15 | getBot(): Promise<BotSchema | null>; 16 | getEnabledPlugins(): Promise<string[]>; 17 | listWorkspaceFiles(path?: string | null): Promise<WorkspaceFileSchema[]>; 18 | deleteWorkspaceFile(name: string): Promise<void>; 19 | uploadWorkspaceFile(file: File | Blob): Promise<void>; 20 | clearWorkspace(): Promise<void>; 21 | parseAiSettings(file: File | Blob): Promise<AiSettingsSchema>; 22 | continueBot(count: YesCount): Promise<void>; 23 | stopBot(): Promise<void>; 24 | getBotLog(): Promise<string>; 25 | } 26 | 27 | export const useApiManager = function () { 28 | const config = useRuntimeConfig(); 29 | const notify = useNotification(); 30 | 31 | // @ts-ignore 32 | function onResponseError({ response }) { 33 | if (response.status && [401, 403].includes(response.status)) { 34 | 
navigateTo(`https://${window.location.hostname}/auth/`, { external: true }); 35 | } 36 | } 37 | 38 | async function stopBot(): Promise<void> { 39 | await useFetch(`${config.public.apiBase}/api/v1/bots/stop`, { 40 | onResponseError, 41 | }); 42 | } 43 | 44 | async function continueBot(count: YesCount): Promise<void> { 45 | await useFetch(`${config.public.apiBase}/api/v1/bots/continue`, { 46 | onResponseError, 47 | params: { count }, 48 | }); 49 | } 50 | 51 | async function deleteWorkspaceFile(name: string): Promise<void> { 52 | await useFetch(`${config.public.apiBase}/api/v1/bots/workspace`, { 53 | method: 'delete', 54 | params: { name }, 55 | onResponseError, 56 | }); 57 | } 58 | 59 | async function parseAiSettings(file: File | Blob): Promise<AiSettingsSchema> { 60 | const formData = new FormData(); 61 | formData.append('file', file); 62 | const { data, error } = await useFetch<AiSettingsSchema, FetchError<ApiError>, string, 'post'>( 63 | `${config.public.apiBase}/api/v1/bots/parse-settings`, 64 | { 65 | method: 'post', 66 | body: formData, 67 | onResponseError, 68 | }, 69 | ); 70 | if (error.value) { 71 | notify.error({ 72 | content: error.value?.data?.detail || 'Unknown error occurred', 73 | }); 74 | } 75 | return data.value as AiSettingsSchema; 76 | } 77 | 78 | async function uploadWorkspaceFile(file: File | Blob): Promise<void> { 79 | const formData = new FormData(); 80 | formData.append('file', file); 81 | const { error } = await useFetch<unknown, FetchError<ApiError>, string, 'post'>( 82 | `${config.public.apiBase}/api/v1/bots/workspace`, 83 | { 84 | method: 'post', 85 | body: formData, 86 | onResponseError, 87 | }, 88 | ); 89 | if (error.value) { 90 | notify.error({ 91 | content: error.value?.data?.detail || 'Unknown error occurred', 92 | }); 93 | } 94 | } 95 | 96 | async function clearWorkspace(): Promise<void> { 97 | await useFetch(`${config.public.apiBase}/api/v1/bots/workspace/clear`, { 98 | onResponseError, 99 | }); 100 | } 101 | 102 | async function listWorkspaceFiles(path?: string | null): Promise<WorkspaceFileSchema[]> { 103 | return ( 104 | ( 105 | await useFetch<WorkspaceFileSchema[]>(`${config.public.apiBase}/api/v1/bots/workspace`, { 106 | params: { path }, 107 | onResponseError, 108 | }) 109 | ).data.value || [] 110 | ); 111 | } 112 | 113 | async function putBot(v: BotInCreateSchema): Promise<BotSchema> { 114 | const notification = notify.info({ 115 | content: `Creating bot...`, 116 | }); 117 | const { data, error } = await useFetch<BotSchema, FetchError<ApiError>, string, 'post'>( 118 | `${config.public.apiBase}/api/v1/bots/`, 119 | { 120 | method: 'post', 121 | body: v, 122 | onResponseError, 123 | }, 124 | ); 125 | if (error.value) { 126 | notify.error({ 127 | content: error.value?.data?.detail || 'Unknown error occurred', 128 | }); 129 | } else if ((data.value as BotSchema).is_failed) { 130 | notification.destroy(); 131 | notify.error({ 132 | content: `Failed to create bot, check logs`, 133 | duration: 3000, 134 | }); 135 | } else { 136 | notification.destroy(); 137 | notify.success({ 138 | content: `Successfully created bot`, 139 | duration: 3000, 140 | }); 141 | } 142 | return data.value as BotSchema; 143 | } 144 | 145 | async function getBot(): Promise<BotSchema | null> { 146 | try { 147 | return ( 148 | await useFetch(`${config.public.apiBase}/api/v1/bots/`, { 149 | onResponseError, 150 | }) 151 | ).data.value as BotSchema; 152 | } catch (err) { 153 | return null; 154 | } 155 | } 156 | 157 | async function getEnabledPlugins(): Promise<string[]> { 158 | try { 159 | return ( 160 | await useFetch(`${config.public.apiBase}/api/v1/bots/enabled-plugins`, { 161 | onResponseError, 162 | }) 163 | ).data.value as string[]; 164 | } catch (err) { 165 | return []; 166 | } 167 | } 168 | 169 | async 
function getBotLog(): Promise<string> { 170 | try { 171 | return ( 172 | ( 173 | await useFetch<LogResponse>(`${config.public.apiBase}/api/v1/bots/log`, { 174 | onResponseError, 175 | }) 176 | ).data.value?.text || '' 177 | ); 178 | } catch (err) { 179 | return ''; 180 | } 181 | } 182 | 183 | return { 184 | putBot, 185 | getBot, 186 | getEnabledPlugins, 187 | stopBot, 188 | continueBot, 189 | getBotLog, 190 | deleteWorkspaceFile, 191 | listWorkspaceFiles, 192 | uploadWorkspaceFile, 193 | clearWorkspace, 194 | parseAiSettings, 195 | } as ApiManager; 196 | }; 197 | -------------------------------------------------------------------------------- /frontend/src/interfaces/api.ts: -------------------------------------------------------------------------------- 1 | export interface ApiError { 2 | detail: string; 3 | } 4 | 5 | export interface LogResponse { 6 | text: string; 7 | } 8 | -------------------------------------------------------------------------------- /frontend/src/interfaces/bot.ts: -------------------------------------------------------------------------------- 1 | import { LLMEngine, ImageSize, GPT35Tokens, GPT4Tokens } from '@/interfaces/enums'; 2 | 3 | export interface AiSettingsSchema { 4 | ai_goals: string[]; 5 | ai_name: string; 6 | ai_role: string; 7 | api_budget?: number | null; 8 | } 9 | 10 | export interface BaseBotSchema { 11 | smart_tokens: GPT4Tokens; 12 | fast_tokens: GPT35Tokens; 13 | smart_engine: LLMEngine; 14 | fast_engine: LLMEngine; 15 | image_size: ImageSize; 16 | ai_settings: AiSettingsSchema | null; 17 | } 18 | 19 | export interface BotInCreateSchema extends BaseBotSchema { 20 | ai_settings: AiSettingsSchema; 21 | } 22 | 23 | export interface BotSchema extends BaseBotSchema { 24 | ai_settings: AiSettingsSchema; 25 | is_active: boolean; 26 | is_failed: boolean; 27 | runs_left: number; 28 | } 29 | 30 | export interface BotFormSchema extends BaseBotSchema { 31 | ai_goal: string; 32 | } 33 | 34 | export interface WorkspaceFileSchema { 35 | name: string; 36 | path: string; 37 | is_dir: boolean; 38 | mime_type: string | null; 39 | size: string | null; 40 | } 41 | 42 | export interface WorkspaceFileEnrichedSchema extends WorkspaceFileSchema { 43 | show: boolean; 44 | } 45 | -------------------------------------------------------------------------------- /frontend/src/interfaces/enums.ts: -------------------------------------------------------------------------------- 1 | export enum LLMEngine { 2 | GPT_3_5_TURBO = 'gpt-3.5-turbo', 3 | GPT_3_5_TURBO_16K = 'gpt-3.5-turbo-16k', 4 | GPT_4 = 'gpt-4', 5 | GPT_4_32K = 'gpt-4-32k', 6 | } 7 | 8 | export enum ImageSize { 9 | s512 = 512, 10 | s1024 = 1024, 11 | } 12 | 13 | export enum GPT35Tokens { 14 | t1000 = 1000, 15 | t2000 = 2000, 16 | t3000 = 3000, 17 | t4000 = 4000, 18 | t16000 = 16000, 19 | } 20 | 21 | export enum GPT4Tokens { 22 | t2000 = 2000, 23 | t4000 = 4000, 24 | t8000 = 8000, 25 | t32000 = 32000, 26 | } 27 | 28 | export enum YesCount { 29 | c1 = 1, 30 | c5 = 5, 31 | c10 = 10, 32 | c20 = 20, 33 | c50 = 50, 34 | c100 = 100, 35 | c200 = 200, 36 | } 37 | 38 | export enum FileTypes { 39 | xlsx = 'xlsx', 40 | csv = 'csv', 41 | text = 'text', 42 | zip = 'zip', 43 | } 44 | -------------------------------------------------------------------------------- /frontend/src/interfaces/index.ts: -------------------------------------------------------------------------------- 1 | export * from '@/interfaces/api'; 2 | export * from '@/interfaces/bot'; 3 | export * from '@/interfaces/enums'; 4 | export * from '@/interfaces/select'; 5 | 
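// This barrel file lets consumers import everything from one path, e.g.: import { BotSchema, LLMEngine } from '@/interfaces';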
-------------------------------------------------------------------------------- /frontend/src/interfaces/select.ts: -------------------------------------------------------------------------------- 1 | import { LLMEngine, ImageSize, GPT35Tokens, GPT4Tokens, YesCount } from '@/interfaces/enums'; 2 | 3 | export const yesCountOptions = Object.values(YesCount) 4 | .filter((value) => typeof value === 'number') 5 | .map((value) => ({ value, label: value })); 6 | export const llmEngineOptions = Object.values(LLMEngine).map((value) => ({ 7 | value, 8 | label: value, 9 | })); 10 | export const imageSizeOptions = Object.values(ImageSize) 11 | .filter((value) => typeof value === 'number') 12 | .map((value) => ({ value, label: `${value}x${value}` })); 13 | export const gpt35TokensOptions = Object.values(GPT35Tokens) 14 | .filter((value) => typeof value === 'number') 15 | .map((value) => ({ value, label: `${value} Tokens` })); 16 | export const gpt4TokensOptions = Object.values(GPT4Tokens) 17 | .filter((value) => typeof value === 'number') 18 | .map((value) => ({ value, label: `${value} Tokens` })); 19 | -------------------------------------------------------------------------------- /frontend/src/middleware/auth.global.ts: -------------------------------------------------------------------------------- 1 | export default defineNuxtRouteMiddleware(async () => { 2 | if (process.client) return; 3 | const config = useRuntimeConfig(); 4 | const nuxtApp = useNuxtApp(); 5 | const headers = useRequestHeaders(['cookie']); 6 | const { error } = await useFetch(`${config.apiBase}/api/v1/sessions/check`, { headers }); 7 | if (error.value?.status && [403, 401].includes(error.value.status)) { 8 | await navigateTo(`http://${nuxtApp.ssrContext?.event.node.req.headers.host}/auth/`, { external: true }); 9 | } 10 | }); 11 | -------------------------------------------------------------------------------- /frontend/src/pages/index.vue: -------------------------------------------------------------------------------- 1 | 404 | 405 | 721 | 722 | 731 | -------------------------------------------------------------------------------- /frontend/src/plugins/naive-ui.ts: -------------------------------------------------------------------------------- 1 | import { setup } from '@css-render/vue3-ssr'; 2 | import { defineNuxtPlugin } from '#app'; 3 | 4 | export default defineNuxtPlugin((nuxtApp) => { 5 | if (process.server) { 6 | const { collect } = setup(nuxtApp.vueApp); 7 | const originalRenderMeta = nuxtApp.ssrContext!.renderMeta; 8 | nuxtApp.ssrContext!.renderMeta = () => { 9 | if (!originalRenderMeta) { 10 | return { 11 | headTags: collect(), 12 | }; 13 | } 14 | const originalMeta = originalRenderMeta(); 15 | if ('then' in originalMeta) { 16 | return originalMeta.then((resolvedOriginalMeta) => { 17 | return { 18 | ...resolvedOriginalMeta, 19 | headTags: resolvedOriginalMeta.headTags + collect(), 20 | }; 21 | }); 22 | } else { 23 | return { 24 | ...originalMeta, 25 | headTags: originalMeta.headTags + collect(), 26 | }; 27 | } 28 | }; 29 | } 30 | }); 31 | -------------------------------------------------------------------------------- /frontend/src/public/favicon.ico: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/neuronic-ai/autogpt-ui/a4d358e20f0b817d0e066a74adbc70f43677e9b8/frontend/src/public/favicon.ico -------------------------------------------------------------------------------- /frontend/src/utils/formatters.ts: 
-------------------------------------------------------------------------------- 1 | import { parseISO, formatISO } from 'date-fns'; 2 | 3 | export const formatLargeNumber = function (val: number | null | string): string { 4 | if (val == null) { 5 | return ''; 6 | } 7 | if (typeof val === 'string') { 8 | return parseFloat(val).toLocaleString('en-US'); 9 | } 10 | return val.toLocaleString('en-US'); 11 | }; 12 | 13 | export const formatDateTime = function (dt: string | Date, toDate?: boolean) { 14 | if (!(dt instanceof Date)) { 15 | dt = parseISO(dt); 16 | } 17 | if (toDate) { 18 | return dt.toLocaleDateString('en-US'); 19 | } else { 20 | return dt.toLocaleString('en-US', { timeZoneName: 'short' }); 21 | } 22 | }; 23 | 24 | export const formatDateISO = function (dt: string | Date, toDate?: boolean) { 25 | if (!(dt instanceof Date)) { 26 | dt = parseISO(dt); 27 | } 28 | if (toDate) { 29 | return formatISO(dt, { representation: 'date' }); 30 | } else { 31 | return formatISO(dt, { representation: 'complete' }); 32 | } 33 | }; 34 | 35 | export const formatDateRange = function (start: string, end: string) { 36 | return parseISO(start).toLocaleDateString('fr-FR') + ' - ' + parseISO(end).toLocaleDateString('fr-FR'); 37 | }; 38 | 39 | export const addUSD = function (v: string | number | null) { 40 | if (v == null || v === '') { 41 | return ''; 42 | } 43 | return `$${v}`; 44 | }; 45 | 46 | export const numberToMoney = function (num: number | null) { 47 | if (num == null) { 48 | return num; 49 | } 50 | return num.toLocaleString('en-US', { minimumFractionDigits: 2 }); 51 | }; 52 | 53 | export const formatPercent = function (v: string | number | null) { 54 | if (v == null || v === '') { 55 | return ''; 56 | } 57 | return `${v}%`; 58 | }; 59 | 60 | export const addPlus = function (v: number | null) { 61 | if (v == null) { 62 | return null; 63 | } 64 | let prefix = ''; 65 | if (v > 0) { 66 | prefix = '+'; 67 | } 68 | return `${prefix}${v}`; 69 | }; 70 | 71 | export const removeMinus = function (v: number | null) { 72 | if (v == null) { 73 | return null; 74 | } 75 | return Math.abs(v); 76 | }; 77 | 78 | export const formatSeconds = function (secs: number | null) { 79 | if (secs == null) { 80 | return ''; 81 | } 82 | 83 | const h = Math.floor(secs / 3600); 84 | const m = Math.floor((secs % 3600) / 60); 85 | const s = Math.floor((secs % 3600) % 60); 86 | 87 | return ('0' + h).slice(-2) + ':' + ('0' + m).slice(-2) + ':' + ('0' + s).slice(-2); 88 | }; 89 | 90 | export const formatMarket = function (v: string) { 91 | switch (v) { 92 | case 'gdax': 93 | return 'coinbase'; 94 | case 'GDAX': 95 | return 'COINBASE'; 96 | default: 97 | return v; 98 | } 99 | }; 100 | -------------------------------------------------------------------------------- /frontend/src/utils/mergers.ts: -------------------------------------------------------------------------------- 1 | import { isObject, isArray } from 'lodash-es'; 2 | 3 | function _mergeKeepShapeArray(dest: Array<any>, source: Array<any>) { 4 | if (source.length !== dest.length) { 5 | return dest; 6 | } 7 | const ret: any[] = []; 8 | dest.forEach((v, i) => { 9 | ret[i] = _mergeKeepShape(v, source[i]); 10 | }); 11 | return ret; 12 | } 13 | 14 | function _mergeKeepShapeObject(dest: { [key: string]: any }, source: { [key: string]: any }) { 15 | const ret: { [key: string]: any } = {}; 16 | Object.keys(dest).forEach((key) => { 17 | const sourceValue = source[key]; 18 | if (typeof sourceValue !== 'undefined') { 19 | ret[key] = _mergeKeepShape(dest[key], sourceValue); 20 | } else { 21 | 
ret[key] = dest[key]; 22 | } 23 | }); 24 | return ret; 25 | } 26 | 27 | function _mergeKeepShape(dest: any, source: any) { 28 | if (isArray(dest)) { 29 | if (!isArray(source)) { 30 | return dest; 31 | } 32 | return _mergeKeepShapeArray(dest, source); 33 | } else if (isObject(dest)) { 34 | if (!isObject(source)) { 35 | return dest; 36 | } 37 | return _mergeKeepShapeObject(dest, source); 38 | } else { 39 | return source; 40 | } 41 | } 42 | 43 | /** 44 | * Immutable merge that retains the shape of the `existingValue` 45 | */ 46 | export const mergeKeepShape = <T>(existingValue: T, extendingValue: T): T => { 47 | return _mergeKeepShape(existingValue, extendingValue); 48 | }; 49 | -------------------------------------------------------------------------------- /frontend/tsconfig.json: -------------------------------------------------------------------------------- 1 | { 2 | // https://nuxt.com/docs/guide/concepts/typescript 3 | "extends": "./.nuxt/tsconfig.json", 4 | "compilerOptions": { 5 | "strict": true, 6 | "allowSyntheticDefaultImports": true, 7 | "paths": { 8 | "~/*": ["src/*"], 9 | "@/*": ["src/*"] 10 | } 11 | } 12 | } 13 | -------------------------------------------------------------------------------- /how-to-docker.txt: -------------------------------------------------------------------------------- 1 | #This AutoGPT GUI is containerized; you should install the latest Docker. Here are instructions for installing the latest Docker and docker-compose (1.26) on your local ubuntu/debian box. 2 | 3 | 4 | ###Remove any existing docker and install the latest 5 | #-------------------------------- 6 | 7 | sudo apt-get remove docker docker-engine docker.io containerd runc 8 | 9 | #---------------------- 10 | 11 | sudo apt-get update 12 | sudo apt-get install \ 13 | apt-transport-https \ 14 | ca-certificates \ 15 | curl \ 16 | gnupg-agent \ 17 | software-properties-common 18 | 19 | # pull down the key 20 | 21 | sudo install -m 0755 -d /etc/apt/keyrings 22 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg 23 | sudo chmod a+r /etc/apt/keyrings/docker.gpg 24 | 25 | #------------- 26 | 27 | echo \ 28 | "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \ 29 | "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \ 30 | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null 31 | 32 | 33 | 34 | sudo apt-get update 35 | sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin 36 | 37 | ----------------------- 38 | 39 | #Install the latest docker-compose 40 | 41 | sudo curl -L "https://github.com/docker/compose/releases/download/1.26.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose 42 | 43 | sudo chmod +x /usr/local/bin/docker-compose 44 | 45 | sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose 46 | 47 | #You have successfully installed Docker and are ready to run any containerized applications on this machine.
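#Optional sanity check before moving on; these standard commands should print version info and run Docker's stock test image:

docker --version && docker-compose --version && sudo docker run hello-world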
48 | 49 | 50 | ####################################### 51 | Important troubleshooting tips for Docker 52 | 53 | (sudo) docker ps -a | Show all containers and their status, look for any that may be down 54 | (sudo) docker restart/start/stop CONTAINER_NAME | Reboot the container or shut it down 55 | (sudo) docker logs CONTAINER_NAME or ID | Use your actual container name or container ID with this command to look at its logs 56 | (sudo) docker exec -it CONTAINER_NAME or ID /bin/bash | Use this to connect to a console inside this container's command line 57 | 58 | If a container is down, look at its logs to try to determine why it is down; if it's failing, it probably doesn't like one or more of your env settings, and it should tell you about that in the logs. 59 | 60 | #Note 61 | You can install mycli to manage the MySQL database locally from the command prompt, or if you have phpMyAdmin you can connect to this machine on port 3306, using root and the password you set in your env files. 62 | --------------------------------------------------------------------------------