├── .DS_Store
├── .TOKEN.txt
├── .gitignore
├── AUX.md
├── README.md
├── __pycache__
│   ├── decomposer.cpython-39.pyc
│   ├── getToken.cpython-39.pyc
│   ├── problemClassifier.cpython-39.pyc
│   └── utils.cpython-39.pyc
├── decomposer.py
├── frontCover.png
├── getToken.py
├── main.py
├── output
│   ├── .DS_Store
│   ├── archive
│   │   ├── .DS_Store
│   │   ├── blogSite
│   │   │   ├── blogList.js
│   │   │   ├── comments.js
│   │   │   ├── manage.py
│   │   │   └── relatedAndSharing.js
│   │   ├── example2
│   │   │   ├── .DS_Store
│   │   │   └── root
│   │   │       ├── main.py
│   │   │       ├── menus.py
│   │   │       ├── powerUpsAndMultiplayer.py
│   │   │       └── soundAndDifficulty.py
│   │   ├── forexTrader
│   │   │   ├── compare_forex_pairs.js
│   │   │   ├── forex_display.js
│   │   │   ├── forex_scraper.py
│   │   │   └── user_authentication.js
│   │   └── pong
│   │       ├── highScoresAndPause.py
│   │       ├── main.py
│   │       ├── menu.py
│   │       └── powerupsAndSound.py
│   └── root
│       └── .DS_Store
├── problemClassifier.py
├── requirements.txt
├── sample.png
├── utils.py
└── workspace
    ├── classifiedProblems.yaml
    └── statement.txt
/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/murchie85/GPT_AUTOMATE/204eff36e2cc7759eee919b639322d7cbfe23c8f/.DS_Store
--------------------------------------------------------------------------------
/.TOKEN.txt:
--------------------------------------------------------------------------------
1 | OPENAI_TOKEN_HERE
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/murchie85/GPT_AUTOMATE/204eff36e2cc7759eee919b639322d7cbfe23c8f/.gitignore
--------------------------------------------------------------------------------
/AUX.md:
--------------------------------------------------------------------------------
1 | # GPT Automator
2 |
3 |
4 |
5 | # Aspiration
6 |
7 |
8 | 1. Problem classifier
9 | 2. High-level code planner: breaks the problem down into the smallest deliverable components, as instructions for the next step.
10 | 3. Code Generator Decomposition Agent: called for each smallest deliverable increment.
11 | If the deliverable is under 4,000 tokens, generate it in one call; otherwise, call the agent with instructions to deliver it piecemeal in 4k-token chunks.
12 | The response must contain code only. At the end of the code block it should include an operation instruction in a markup format we will agree on, something like:
13 | RUN=PYTHON,FILENAME=main.py,SAVE_LOCATION=src
14 | 4. 4EyesCheck Agent: iterates over all the instructions produced by GPT and checks that they can be parsed properly by the automation harness. Also ensures the runtime instructions are correct.
15 | 5. File Creator Automation: Saves files in the correct place.
16 | 6. Runner: Executes code based upon step 3.
17 | 7. Automated Output Capture
18 | 8. GPT Debugger agent: set up in the same way, with an agreed markup that says which file to change or replace.
19 | 9. Repeat: once there are no errors, a human checks the output, provides feedback into the original prompt, and the process repeats.
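The operation-instruction markup in step 3 could be parsed with a small helper like the sketch below. This is a hypothetical illustration only: the `KEY=VALUE,KEY=VALUE` format and the key names (`RUN`, `FILENAME`, `SAVE_LOCATION`) follow the example above, but no fixed spec has been agreed yet.

```python
# Hypothetical sketch: parse a trailing operation-instruction line such as
# RUN=PYTHON,FILENAME=main.py,SAVE_LOCATION=src into a dict the harness can act on.

def parse_run_instruction(line):
    """Split 'KEY=VALUE,KEY=VALUE' markup into a dictionary."""
    fields = {}
    for pair in line.strip().split(','):
        key, _, value = pair.partition('=')
        fields[key.strip()] = value.strip()
    return fields

instruction = parse_run_instruction("RUN=PYTHON,FILENAME=main.py,SAVE_LOCATION=src")
print(instruction)  # {'RUN': 'PYTHON', 'FILENAME': 'main.py', 'SAVE_LOCATION': 'src'}
```

The 4EyesCheck Agent could reuse the same helper to verify that every required key is present before the File Creator and Runner steps consume the instruction.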
20 |
21 |
22 | ## implementation
23 |
24 | Create a class for each agent:
25 |
26 | ProblemClassifier
27 | HighLevelCodePlanner
28 | CodeGeneratorDecompositionAgent
29 | FourEyesCheckAgent
30 | FileCreatorAutomation
31 | Runner
32 | AutomatedOutputCapture
33 | GPTDebuggerAgent
34 | Create an orchestrator class ProjectCreator that initializes and calls these agent classes in the appropriate order.
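The orchestrator idea above could look like the following sketch. The two stub agents stand in for the real classes listed earlier (none of which are implemented yet); only the call ordering is meant to be illustrative.

```python
# Sketch of the proposed ProjectCreator orchestrator with stub agents.
# Every class body here is a placeholder, not the project's actual implementation.

class ProblemClassifier:
    def run(self, statement):
        return f"classified: {statement}"  # stub classification

class HighLevelCodePlanner:
    def run(self, classified):
        return [f"deliverable from {classified}"]  # stub deliverable list

class ProjectCreator:
    """Initialises the agents and calls them in the appropriate order."""
    def __init__(self):
        self.classifier = ProblemClassifier()
        self.planner = HighLevelCodePlanner()
        # The remaining agents (CodeGeneratorDecompositionAgent, FourEyesCheckAgent,
        # FileCreatorAutomation, Runner, AutomatedOutputCapture, GPTDebuggerAgent)
        # would be initialised here in the same way.

    def create(self, statement):
        classified = self.classifier.run(statement)
        deliverables = self.planner.run(classified)
        return deliverables

print(ProjectCreator().create("build pong"))
```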
35 |
36 |
37 |
38 |
39 |
40 |
41 |
42 |
43 | # Job examples
44 |
45 | AI-assisted brainstorming software that helps users generate new ideas and concepts by presenting unconventional connections.
46 |
47 | "Create a simple command-line tool that takes a URL as input, downloads the HTML content, and counts the frequency of each word in the HTML content. Display the top 10 most frequent words and their counts."
48 |
49 |
50 | "Develop a weather app that fetches weather data from an API, such as OpenWeatherMap, and displays the current temperature, humidity, and a brief weather description for a given location. The app should have a simple graphical user interface and be able to update the displayed information when the user enters a new location."
51 |
52 |
53 |
54 |
55 |
56 | # --------LEGACY
57 |
58 |
59 | SET UP MULTIPLE GPT AGENTS WITH DIFFERENT INITIAL SYSTEM PROMPTS INCLUDING:
60 |
61 |
62 | - GPT Problem classifier
63 | - GPT Code Generator
64 | - GPT Code RunTime advisor (produces parsable tokens, e.g. RUN_MAIN.PY, RUN_START.SH)
65 | - GPT Code Directory Checker (stitched prompts to ensure all files and code are in the right place in the folder)
66 | - Automation Harness Runner - Runs the main.py or instructions given by RunTime advisor
67 | - Automation Harness OutPut Capture - captures terminal, log outputs.
68 | - GPT debugger - Reviews logs/errors and enriches description with provided high level solutions.
69 |
70 | -- LOOP REPEATS with high level solutions passed back into problem classifier
71 |
72 |
73 | ## MORE INFO:
74 |
75 | 1. USER inputs `problem statement`
76 | 2. GPT Problem classifier Agent kicks in, categorises it into:
77 | ```shell
78 | Can solve with code || Can Not solve with code.
79 | ```
80 | 3. If Can Not - produce Business plan only
81 | 4. If Can: move to next step
82 |
83 |
84 |
85 | # GPT Automator
86 |
87 | Set up multiple GPT agents with different initial system prompts:
88 |
89 | 1. GPT Problem Classifier
90 | 2. GPT Code Generator
91 | 3. GPT Code RunTime Advisor
92 | 4. GPT Code Directory Checker
93 | 5. Automation Harness Runner
94 | 6. Automation Harness Output Capture
95 | 7. GPT Debugger
96 |
97 | User inputs the problem statement:
98 |
99 | Collect the user's problem statement and pass it to the GPT Problem Classifier agent
100 |
101 | ## Problem classification:
102 |
103 | GPT Problem Classifier agent processes the problem statement and categorizes it into "Can solve with code" or "Cannot solve with code"
104 |
105 | ### Handle "Cannot solve with code" cases:
106 |
107 | If the problem statement falls into the "Cannot solve with code" category, generate a business plan or other relevant output
108 |
109 | ### Handle "Can solve with code" cases:
110 |
111 | 1. Pass the problem statement to the GPT Code Generator agent
112 | 2. Generate the necessary code using the GPT Code Generator agent
113 | 3. Pass the generated code to the GPT Code RunTime Advisor agent, which produces parsable tokens (e.g., RUN_MAIN.PY, RUN_START.SH)
114 | 4. Use the GPT Code Directory Checker agent to ensure all files and code are in the right place in the folder
115 | 5. Execute the main.py or instructions given by the GPT Code RunTime Advisor using the Automation Harness Runner
116 | 6. Capture terminal and log outputs using the Automation Harness Output Capture agent
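The six steps above can be sketched as a single pipeline function. This is a hedged illustration with placeholder stubs for each agent; the token name follows the `RUN_MAIN.PY` example in the text, and none of these functions exist in the codebase.

```python
# Hypothetical sketch of the "can solve with code" pipeline; each step is a stub.

def generate_code(problem):
    return "print('hello')"            # step 2: GPT Code Generator (stub)

def runtime_advisor(code):
    return "RUN_MAIN.PY"               # step 3: produces a parsable runtime token (stub)

def run_pipeline(problem):
    code = generate_code(problem)
    token = runtime_advisor(code)
    # Steps 4-6 (directory check, execution via the Automation Harness Runner,
    # and output capture) would follow here.
    return token

print(run_pipeline("word counter"))  # RUN_MAIN.PY
```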
117 |
118 | ## Debugging and iterative improvement:
119 |
120 | 1. Pass the captured outputs to the GPT Debugger agent, which reviews logs/errors and enriches the description with high-level solutions
121 | 2. Loop and repeat the process with the high-level solutions passed back into the GPT Problem Classifier agent for further refinement
122 |
123 | ## Optimize and maintain the system:
124 |
125 | - Continuously analyze the performance metrics to identify bottlenecks or inefficiencies in the process
126 | - Fine-tune the prompts, testing methodology, or other aspects of the system to improve code quality and system performance
127 | - Regularly monitor the system's performance and address any emerging issues
128 | - Update the system as needed to incorporate new GPT models, API changes, or other relevant updates
129 |
130 | ## Document the system and create user guides:
131 |
132 | - Document the architecture, configuration, and usage of the system for future reference
133 | - Develop user guides or tutorials to help users interact with and utilize the GPT Automator
134 |
135 | ## Integrate the solution with existing systems:
136 |
137 | - Develop APIs or interfaces to connect the GPT Automator with existing code repositories, CI/CD pipelines, or other relevant systems
138 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Automator V 0.1
2 |
3 | 
4 |
5 | Automator V 0.1 is a Python-based project that uses OpenAI's GPT-3.5-turbo model to help decompose software development problems into smaller deliverables and generate code for the specified deliverables.
6 |
7 | *Contact Adam McMurchie to report issues, or post them here.*
8 |
9 |
10 | ## What can I do?
11 |
12 | 
13 |
14 |
15 | - Create me pong in pygame
16 | - Create me a blogging platform
17 | - Create me a website scraper
18 |
19 | # What it does
20 |
21 | 1. Takes your command
22 | 2. Creates all the resources you need in the `output` folder
23 |
24 | **NOTE** For now, you need to clean up your outputs yourself.
25 |
26 | ## Prerequisites
27 |
28 | Before you begin, ensure you have met the following requirements:
29 |
30 | * You have a working Python 3.6+ environment.
31 | * You have an OpenAI API key to access the GPT-3.5-turbo model.
32 |
33 | ## Installation
34 |
35 | To set up the project, follow these steps:
36 |
37 | 1. Clone the repository.
38 |
39 | ```bash
40 | git clone https://github.com/your-username/automator-v0.1.git
41 | cd automator-v0.1
42 | ```
43 |
44 | 2. Install the required Python packages.
45 |
46 | ```bash
47 | pip install -r requirements.txt
48 | ```
49 |
50 |
51 | 3. Add your OpenAI API key.
52 |
53 | ```bash
54 | export OPENAI_API_KEY="your-openai-api-key"
55 | ```
56 |
57 | Alternatively, create a `.TOKEN.txt` file in the root directory of the project and add your OpenAI API key:
58 |
59 | ```bash
60 | echo "your-openai-api-key" > .TOKEN.txt
61 | ```
62 |
63 | ## Usage
64 |
65 | Run the main script to start the application:
66 |
67 | ```bash
68 | python main.py
69 | ```
70 |
71 |
72 | Follow the prompts to describe your problem, and the application will classify the problem, decompose it into smaller deliverables, and generate code for each deliverable.
73 |
74 | The generated files will be in the `output` folder.
75 |
76 | ## Limitations
77 |
78 | This is still a work in progress, so use it at your own discretion. Items to add:
79 |
80 | 1. Recursive debugging
81 | 2. Better sub componentisation
82 | 3. Consistent sub componentisation
83 | 4. Logging
84 | 5. Automated archiving of output folder
85 |
86 |
87 |
88 | ## Future work
89 |
90 |
91 | *Modularize the code*: Separate the code into multiple Python modules based on functionality. For example, create a separate module for interacting with the OpenAI API, another for parsing YAML files, and another for handling user input and main program flow.
92 |
93 | *Use configuration files*: Instead of hardcoding the API key and other settings in the code, use a configuration file (e.g., JSON, YAML, or INI) to store these settings. This will make it easier to manage and update the configuration without modifying the code.
94 |
95 | *Add error handling and validation*: Improve error handling and validation throughout the code. For example, validate user input, check for the existence of required files, and handle exceptions that might be raised when interacting with external services (e.g., OpenAI API).
96 |
97 | *Add logging*: Implement a proper logging system to capture important events, warnings, and errors during the execution of the application. This will make it easier to troubleshoot and understand the program flow.
98 |
99 | *Improve user interface*: Enhance the user interface by using a more user-friendly library like prompt_toolkit or even building a web interface for a better user experience.
100 |
101 | *Add unit tests*: Write unit tests for different modules and functions to ensure the reliability and correctness of the code. This will help you catch bugs and regressions before they reach production.
102 |
103 | *Add documentation*: Add docstrings to functions and classes to provide more information about their purpose and usage. This will make the code easier to understand and maintain.
104 |
105 | *Use environment variables for sensitive information*: Instead of storing the OpenAI API key in a file, use environment variables to store sensitive information like API keys. This will make it easier to manage secrets and improve security.
106 |
107 | *Use a package manager*: Utilize a package manager like pipenv or poetry to manage dependencies and virtual environments, making it easier for others to set up and use the project.
108 |
109 | *Implement a progress indicator*: When generating code or interacting with external services like the OpenAI API, display a progress indicator to give the user feedback on the progress of the operation. This can be a simple spinner or a more advanced progress bar.
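A simple spinner of the kind suggested above could be sketched as follows. This is a hypothetical helper, not part of the current code; the `time.sleep(0.5)` stands in for a slow operation such as an OpenAI API call.

```python
# Minimal spinner sketch: show feedback while a long-running call completes.
import itertools
import sys
import threading
import time

def spinner(stop_event):
    """Cycle through spinner frames on stdout until stop_event is set."""
    for frame in itertools.cycle('|/-\\'):
        if stop_event.is_set():
            break
        sys.stdout.write('\r' + frame)
        sys.stdout.flush()
        time.sleep(0.1)
    sys.stdout.write('\r')

stop = threading.Event()
thread = threading.Thread(target=spinner, args=(stop,))
thread.start()
time.sleep(0.5)   # stand-in for the slow API call
stop.set()        # tell the spinner to finish
thread.join()
```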
110 |
111 |
112 |
113 | ## Contributing
114 |
115 | Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
116 |
117 | ## License
118 |
119 | [MIT](https://choosealicense.com/licenses/mit/)
120 |
--------------------------------------------------------------------------------
/__pycache__/decomposer.cpython-39.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/murchie85/GPT_AUTOMATE/204eff36e2cc7759eee919b639322d7cbfe23c8f/__pycache__/decomposer.cpython-39.pyc
--------------------------------------------------------------------------------
/__pycache__/getToken.cpython-39.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/murchie85/GPT_AUTOMATE/204eff36e2cc7759eee919b639322d7cbfe23c8f/__pycache__/getToken.cpython-39.pyc
--------------------------------------------------------------------------------
/__pycache__/problemClassifier.cpython-39.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/murchie85/GPT_AUTOMATE/204eff36e2cc7759eee919b639322d7cbfe23c8f/__pycache__/problemClassifier.cpython-39.pyc
--------------------------------------------------------------------------------
/__pycache__/utils.cpython-39.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/murchie85/GPT_AUTOMATE/204eff36e2cc7759eee919b639322d7cbfe23c8f/__pycache__/utils.cpython-39.pyc
--------------------------------------------------------------------------------
/decomposer.py:
--------------------------------------------------------------------------------
1 | import openai
2 | import yaml
3 | import re
4 | import pprint
5 | from termcolor import colored, cprint
6 | import os
7 |
8 |
9 | def generate_code(prompt, openaiObject):
10 |
11 |     print('')
12 |     cprint('Sending Request', 'yellow', attrs=['blink'])
13 |     availableResponses = openaiObject.ChatCompletion.create(model="gpt-3.5-turbo", messages=[{"role": "system", "content": prompt}])
14 |     currentResponse = availableResponses['choices'][0]['message']['content'].lstrip()
15 |
16 |     return currentResponse
17 |
18 | def process_deliverable(yaml_content, desired_deliverable, openaiObject):
19 |
20 |     # Find the desired deliverable
21 |     deliverable = None
22 |     for d in yaml_content:
23 |         if d['DELIVERABLE'] == desired_deliverable:
24 |             deliverable = d
25 |             break
26 |
27 |     if deliverable is None:
28 |         raise ValueError(f'DELIVERABLE: {desired_deliverable} not found.')
29 |
30 |     location = 'output/' + deliverable['RUNTIME_INSTRUCTIONS']['LOCATION']
31 |     filename = deliverable['RUNTIME_INSTRUCTIONS']['FILENAME']
32 |
33 |
34 |     print("DESIRED DELIVERABLE IS ")
35 |     print(desired_deliverable)
36 |     print("LOCATION IS ")
37 |     print(location)
38 |     print("FILENAME IS ")
39 |     print(filename)
40 |
41 |     print('')
42 |     cprint('ready to generate code?', 'white', attrs=['bold'])
43 |     input('')
44 |     # Generate and append code in chunks of 400 lines
45 |     code_chunk = None
46 |     chunkCount = 0
47 |     while code_chunk != "":
48 |         prompt = f"You are given a YAML file that describes a project broken into multiple deliverables. Your task is to generate code for the specified deliverable in chunks of 400 lines at a time. DO NOT add any extra words, just provide only the code. If there is no more code to provide, just return ##JOB_COMPLETE## \n\nPlease process DELIVERABLE: {desired_deliverable}. \n\n{yaml_content}\n\nPreviously generated code (if any):\n{code_chunk}\n\nGenerate the next 400 lines:"
49 |         cprint(prompt + '\n**prompt ends**', 'blue')
50 |
51 |         # SENDING REQUEST
52 |         code_chunk = generate_code(prompt, openaiObject)
53 |         chunkCount += 1
54 |         print(code_chunk)
55 |         cprint('Received, saving...', 'white')
56 |
57 |
58 |         # Create location if it does not exist
59 |         os.makedirs(location, exist_ok=True)
60 |
61 |         with open(f"{location}/{filename}", "a") as f:
62 |             f.write(code_chunk)
63 |
64 |         cprint('Saved', 'yellow')
65 |
66 |         # Stop once the model signals completion
67 |         if 'JOB_COMPLETE' in code_chunk:
68 |             cprint('Job Complete', 'white')
69 |             return
70 |         #input("Continue with next chunk? Number: " + str(chunkCount))
71 |         if chunkCount > 5:
72 |             print("Something likely went wrong - chunk at 400 * 5")
73 |             input("Exit or continue?")
74 |
75 |
76 |
77 | def decompose(openaiObject):
78 |     # Usage:
79 |     file_path = 'workspace/classifiedProblems.yaml'
80 |     with open(file_path, 'r') as file:
81 |         data = file.read()
82 |
83 |
84 |     print('printing raw data')
85 |     print(data)
86 |
87 |     print('Printing as dict')
88 |     yaml_content = yaml.safe_load(data)
89 |     pprint.pprint(yaml_content)
90 |
91 |
92 |
93 |
94 |
95 |     # Deliverables are numbered from 1 in the YAML
96 |     for n in range(len(yaml_content)):
97 |         process_deliverable(yaml_content, n + 1, openaiObject)
98 |
--------------------------------------------------------------------------------
/frontCover.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/murchie85/GPT_AUTOMATE/204eff36e2cc7759eee919b639322d7cbfe23c8f/frontCover.png
--------------------------------------------------------------------------------
/getToken.py:
--------------------------------------------------------------------------------
1 | import os
2 | import openai
3 | from termcolor import colored, cprint
4 |
5 | def getOpenAPIKey():
6 |
7 |     # ----------------- CALL OPEN AI
8 |
9 |     if os.environ.get("OPENAI_API_KEY"):
10 |         cprint('Loading Key from Environment variable', 'yellow')
11 |         openai.api_key = os.environ["OPENAI_API_KEY"]
12 |     else:
13 |         cprint('No key in env vars, attempting to load from file', 'yellow')
14 |         # Read the key from the token file and strip any trailing newline
15 |         with open('.TOKEN.txt', 'r') as keyFile:
16 |             openai.api_key = keyFile.read().strip()
17 |
18 |     print('')
19 |     cprint('Loaded', 'white')
20 |     print('')
21 |
22 |     return openai
--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
1 | import os
2 | from termcolor import colored, cprint
3 | from art import *
4 | import problemClassifier
5 | from decomposer import *
6 | from getToken import *
7 |
8 | class agent_coordinator():
9 |     def __init__(self):
10 |         self.state = 'not started'
11 |         self.classifier = problemClassifier.ProblemClassifier()
12 |         self.deliverableStatement = None
13 |         self.no_deliverables = 0
14 |         self.currentDeliv = 0
15 |         self.deliverablesArray = []
16 |         self.openaiObject = None
17 |
18 |     def mainLoop(self):
19 |
20 |         self.openaiObject = getOpenAPIKey()
21 |
22 |         if self.state == 'not started':
23 |             self.classifier.get_requirements(self.openaiObject)
24 |             if self.classifier.problem_state == 'solvable_with_code':
25 |
26 |                 # IMPORT METRICS
27 |                 self.problemStatement = self.classifier.problemStatement
28 |                 self.deliverableStatement = self.classifier.currentResponse
29 |                 self.no_deliverables = self.classifier.no_deliverables
30 |                 self.state = 'decompose'
31 |
32 |                 print('Deliverables : ' + str(self.no_deliverables))
33 |                 cprint('Complete! press any key to proceed to the next step', 'white')
34 |                 input('')
35 |             else:
36 |                 self.state = 'complete'
37 |                 cprint('Complete! Problem is not solvable with code unfortunately', 'red')
38 |                 input('')
39 |         if self.state == 'decompose':
40 |             decompose(self.openaiObject)
41 |
42 | os.system('clear')
43 | welcomeMessage = text2art("Welcome")
44 | amuMessage = text2art("Automator V 0.1")
45 | print(welcomeMessage)
46 | print(amuMessage)
47 | coordinator = agent_coordinator()
48 | coordinator.mainLoop()
49 |
--------------------------------------------------------------------------------
/output/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/murchie85/GPT_AUTOMATE/204eff36e2cc7759eee919b639322d7cbfe23c8f/output/.DS_Store
--------------------------------------------------------------------------------
/output/archive/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/murchie85/GPT_AUTOMATE/204eff36e2cc7759eee919b639322d7cbfe23c8f/output/archive/.DS_Store
--------------------------------------------------------------------------------
/output/archive/blogSite/blogList.js:
--------------------------------------------------------------------------------
1 | import React, { Component } from 'react';
2 | import axios from 'axios';
3 | import BlogCard from './blogCard';
4 |
5 | class BlogList extends Component {
6 |   state = {
7 |     blogList: [],
8 |     loading: true,
9 |     error: null,
10 |     currentPage: 1,
11 |     blogsPerPage: 10,
12 |     keyword: '',
13 |     searchFilter: false,
14 |   };
15 |
16 |   componentDidMount() {
17 |     this.fetchBlogList();
18 |   }
19 |
20 |   fetchBlogList = () => {
21 |     axios
22 |       .get('/api/blog/')
23 |       .then((response) => {
24 |         const blogList = response.data;
25 |         this.setState({ blogList, loading: false });
26 |       })
27 |       .catch((error) => {
28 |         this.setState({ error, loading: false });
29 |       });
30 |   };
31 |
32 |   handlePageChange = (pageNumber) => {
33 |     this.setState({ currentPage: pageNumber });
34 |   };
35 |
36 |   handleKeywordChange = (event) => {
37 |     const keyword = event.target.value;
38 |     this.setState({ keyword });
39 |   };
40 |
41 |   handleSearchFilterChange = () => {
42 |     const { searchFilter } = this.state;
43 |     this.setState({ searchFilter: !searchFilter });
44 |   };
45 |
46 |   render() {
47 |     const { blogList, loading, error, currentPage, blogsPerPage, keyword, searchFilter } = this.state;
48 |
49 |     const indexOfLastBlog = currentPage * blogsPerPage;
50 |     const indexOfFirstBlog = indexOfLastBlog - blogsPerPage;
51 |     const currentBlogs = blogList.slice(indexOfFirstBlog, indexOfLastBlog);
52 |
53 |     const filteredBlogs = currentBlogs.filter((blog) =>
54 |       blog.title.toLowerCase().includes(keyword.toLowerCase())
55 |     );
56 |
57 |     const totalBlogs = searchFilter ? filteredBlogs.length : blogList.length;
58 |     const totalPages = Math.ceil(totalBlogs / blogsPerPage);
59 |
60 |     const renderBlogCards = loading ? (
61 |