├── temp ├── .gitkeep └── .gitignore ├── local-llm-q ├── .gitignore ├── go.sh ├── go.bat ├── test.bat ├── create_env.bat ├── dir_hf_cache.bat ├── create_env.sh ├── requirements.txt ├── install.sh ├── download-llm.sh ├── download-llm.bat ├── util_time.py ├── install.bat ├── test.py ├── requirements-all.txt ├── config.py ├── README.md └── llm-chat-q.py ├── local-llm ├── .gitignore ├── go.sh ├── create_env.bat ├── go.bat ├── dir_hf_cache.bat ├── create_env.sh ├── install.bat ├── install.sh ├── requirements.txt ├── main--py-specialist-llm.py ├── main--general-coding-llm.py └── README.md ├── test.sh ├── go.sh ├── go_web.sh ├── gpt-workflow-csharp ├── gpt-workflow-csharp-lib │ ├── .gitkeep │ └── .gitignore ├── gpt-workflow-csharp-cli │ ├── .gitignore │ ├── go.sh │ ├── Config.cs │ ├── Visitor │ │ ├── VisitedStringTracker.cs │ │ ├── IDotModelVisitor.cs │ │ ├── ConsoleDumpDotModelVisitor.cs │ │ └── BaseDotModelVisitor.cs │ ├── gpt-workflow-csharp-cli.csproj │ ├── Client │ │ ├── HttpResponseMessageExtensions.cs │ │ └── GptWorkflowClient.cs │ ├── test.sh │ ├── Builder │ │ └── DotBuilder.cs │ ├── Program.cs │ └── Parser │ │ └── DotParser.cs ├── README.md └── .vscode │ ├── launch.json │ └── tasks.json ├── training └── training-data-generator │ ├── config.py │ ├── test.sh │ ├── test.py │ ├── go.sh │ ├── command.py │ ├── test__workflow_trainingset_creator.py │ ├── core.py │ ├── go.py │ ├── README.md │ ├── service_chat.py │ ├── workflow_trainingset_creator.py │ └── json_fixer.py ├── test.py ├── images ├── dot_graph_1.png ├── dot_graph_2.png ├── dot_graph_3.png ├── dot_graph_4.png ├── dot_graph_5.png ├── dot_graph_6.png ├── dot_graph_7.png ├── dot_graph_8.png ├── how_it_works-DOT-describer.png ├── how_it_works-DOT-describer.simplified.png ├── how_it_works-DOT-generation-from-natural-language.png └── how_it_works-DOT-generation-from-natural-language.simplified.png ├── config.py ├── util_file.py ├── config_web.py ├── command.py ├── dot ├── example-output │ ├── dot_graph_1.dot │ ├── dot_graph_12.dot │ ├── dot_graph_2.dot │ ├── dot_graph_3.dot │ ├── dot_graph_9.dot │ ├── dot_graph_4.dot │ ├── dot_graph_6.dot │ ├── dot_graph_10.dot │ ├── dot_graph_5.dot │ ├── dot_graph_11.dot │ ├── dot_graph_8.dot │ └── dot_graph_7.dot ├── how_it_works-DOT-describer.simplified.dot ├── how_it_works-DOT-describer.dot ├── how_it_works-DOT-generation-from-natural-language.simplified.dot └── how_it_works-DOT-generation-from-natural-language.dot ├── main_cli.py ├── LICENSE ├── .vscode ├── launch.json └── tasks.json ├── core.py ├── service_chat.py ├── service_dot_parser.py ├── json_fixer.py ├── prompts_dot_graph_creator.py ├── .gitignore ├── test__prompts_dot_graph_creator.py ├── main_web_service.py └── README.md /temp/.gitkeep: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /temp/.gitignore: -------------------------------------------------------------------------------- 1 | **/* 2 | -------------------------------------------------------------------------------- /local-llm-q/.gitignore: -------------------------------------------------------------------------------- 1 | env 2 | -------------------------------------------------------------------------------- /local-llm/.gitignore: -------------------------------------------------------------------------------- 1 | env 2 | -------------------------------------------------------------------------------- /test.sh: 
-------------------------------------------------------------------------------- 1 | python -W "ignore" test.py 2 | -------------------------------------------------------------------------------- /go.sh: -------------------------------------------------------------------------------- 1 | python -W "ignore" main_cli.py 2 | -------------------------------------------------------------------------------- /local-llm-q/go.sh: -------------------------------------------------------------------------------- 1 | python3 llm-chat-q.py 2 | -------------------------------------------------------------------------------- /go_web.sh: -------------------------------------------------------------------------------- 1 | python -W "ignore" main_web_service.py 2 | -------------------------------------------------------------------------------- /gpt-workflow-csharp/gpt-workflow-csharp-lib/.gitkeep: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /local-llm-q/go.bat: -------------------------------------------------------------------------------- 1 | env\Scripts\python.exe llm-chat-q.py 2 | -------------------------------------------------------------------------------- /local-llm-q/test.bat: -------------------------------------------------------------------------------- 1 | env\Scripts\python.exe test.py 2 | -------------------------------------------------------------------------------- /training/training-data-generator/config.py: -------------------------------------------------------------------------------- 1 | is_debug = False 2 | -------------------------------------------------------------------------------- /training/training-data-generator/test.sh: -------------------------------------------------------------------------------- 1 | python test.py 2 | -------------------------------------------------------------------------------- /local-llm/go.sh: -------------------------------------------------------------------------------- 1 | ./env/bin/python main--general-coding-llm.py 7B 2 | -------------------------------------------------------------------------------- /gpt-workflow-csharp/gpt-workflow-csharp-cli/.gitignore: -------------------------------------------------------------------------------- 1 | bin 2 | obj 3 | -------------------------------------------------------------------------------- /gpt-workflow-csharp/gpt-workflow-csharp-cli/go.sh: -------------------------------------------------------------------------------- 1 | dotnet run $1 "$2" 2 | -------------------------------------------------------------------------------- /gpt-workflow-csharp/gpt-workflow-csharp-lib/.gitignore: -------------------------------------------------------------------------------- 1 | bin 2 | obj 3 | -------------------------------------------------------------------------------- /local-llm-q/create_env.bat: -------------------------------------------------------------------------------- 1 | py -m venv env 2 | 3 | env\Scripts\activate 4 | -------------------------------------------------------------------------------- /local-llm/create_env.bat: -------------------------------------------------------------------------------- 1 | py -m venv env 2 | 3 | env\Scripts\activate 4 | -------------------------------------------------------------------------------- /local-llm/go.bat: -------------------------------------------------------------------------------- 1 | env\Scripts\python.exe 
main--general-coding-llm.py 7B 2 | -------------------------------------------------------------------------------- /local-llm-q/dir_hf_cache.bat: -------------------------------------------------------------------------------- 1 | dir C:\Users\Sean.Ryan\.cache\huggingface\hub %1 %2 %3 2 | -------------------------------------------------------------------------------- /local-llm/dir_hf_cache.bat: -------------------------------------------------------------------------------- 1 | dir C:\Users\Sean.Ryan\.cache\huggingface\hub %1 %2 %3 2 | -------------------------------------------------------------------------------- /test.py: -------------------------------------------------------------------------------- 1 | import test__prompts_dot_graph_creator 2 | 3 | test__prompts_dot_graph_creator.test() 4 | -------------------------------------------------------------------------------- /images/dot_graph_1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mrseanryan/gpt-workflow/HEAD/images/dot_graph_1.png -------------------------------------------------------------------------------- /images/dot_graph_2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mrseanryan/gpt-workflow/HEAD/images/dot_graph_2.png -------------------------------------------------------------------------------- /images/dot_graph_3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mrseanryan/gpt-workflow/HEAD/images/dot_graph_3.png -------------------------------------------------------------------------------- /images/dot_graph_4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mrseanryan/gpt-workflow/HEAD/images/dot_graph_4.png -------------------------------------------------------------------------------- /images/dot_graph_5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mrseanryan/gpt-workflow/HEAD/images/dot_graph_5.png -------------------------------------------------------------------------------- /images/dot_graph_6.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mrseanryan/gpt-workflow/HEAD/images/dot_graph_6.png -------------------------------------------------------------------------------- /images/dot_graph_7.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mrseanryan/gpt-workflow/HEAD/images/dot_graph_7.png -------------------------------------------------------------------------------- /images/dot_graph_8.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mrseanryan/gpt-workflow/HEAD/images/dot_graph_8.png -------------------------------------------------------------------------------- /local-llm-q/create_env.sh: -------------------------------------------------------------------------------- 1 | python3 -m venv env 2 | 3 | chmod +x ./env/bin/* 4 | source ./env/bin/activate 5 | -------------------------------------------------------------------------------- /local-llm/create_env.sh: -------------------------------------------------------------------------------- 1 | python3 -m venv env 2 | 3 | chmod +x ./env/bin/* 4 | source ./env/bin/activate 5 |
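6 | # Note: activation only takes effect in the shell that sources it, so running this script normally 7 | # will not leave the env active. After running it, activate from your own shell, e.g.: 8 | #   source ./env/bin/activate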
-------------------------------------------------------------------------------- /local-llm-q/requirements.txt: -------------------------------------------------------------------------------- 1 | #ctransformers == 0.2.24 2 | #ctransformers[cuda]==0.2.24 3 | gradio==3.47.1 4 | -------------------------------------------------------------------------------- /local-llm/install.bat: -------------------------------------------------------------------------------- 1 | env\Scripts\pip install git+https://github.com/huggingface/transformers.git@main accelerate 2 | -------------------------------------------------------------------------------- /local-llm/install.sh: -------------------------------------------------------------------------------- 1 | ./env/bin/pip install git+https://github.com/huggingface/transformers.git@main accelerate 2 | -------------------------------------------------------------------------------- /config.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | is_debug = False 4 | 5 | END_LINE = os.linesep 6 | 7 | PATH_TO_PNG_OUTDIR = f".{os.sep}temp" 8 | -------------------------------------------------------------------------------- /images/how_it_works-DOT-describer.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mrseanryan/gpt-workflow/HEAD/images/how_it_works-DOT-describer.png -------------------------------------------------------------------------------- /training/training-data-generator/test.py: -------------------------------------------------------------------------------- 1 | import test__workflow_trainingset_creator 2 | 3 | test__workflow_trainingset_creator.test() 4 | -------------------------------------------------------------------------------- /util_file.py: -------------------------------------------------------------------------------- 1 | def write_text_to_file(text, filepath): 2 | with open(filepath, "w", encoding='utf-8') as f: 3 | f.write(text) 4 | -------------------------------------------------------------------------------- /config_web.py: -------------------------------------------------------------------------------- 1 | HOSTNAME = "localhost" 2 | PORT = 8083 3 | WEB_SERVER_THREADS = 100 4 | 5 | # saves money! 
6 | discard_previous_messages = True 7 | -------------------------------------------------------------------------------- /training/training-data-generator/go.sh: -------------------------------------------------------------------------------- 1 | OUT_FILE=/c/sean/data/gpt-workflow-trainingset.csv 2 | 3 | python go.py $OUT_FILE 4 | 5 | ls -al $OUT_FILE 6 | -------------------------------------------------------------------------------- /images/how_it_works-DOT-describer.simplified.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mrseanryan/gpt-workflow/HEAD/images/how_it_works-DOT-describer.simplified.png -------------------------------------------------------------------------------- /images/how_it_works-DOT-generation-from-natural-language.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mrseanryan/gpt-workflow/HEAD/images/how_it_works-DOT-generation-from-natural-language.png -------------------------------------------------------------------------------- /images/how_it_works-DOT-generation-from-natural-language.simplified.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mrseanryan/gpt-workflow/HEAD/images/how_it_works-DOT-generation-from-natural-language.simplified.png -------------------------------------------------------------------------------- /local-llm-q/install.sh: -------------------------------------------------------------------------------- 1 | pip install -r requirements.txt 2 | 3 | # no-binary to force a local build, in case of dependency issues 4 | pip install ctransformers==0.2.24 --no-binary ctransformers --force 5 | -------------------------------------------------------------------------------- /gpt-workflow-csharp/gpt-workflow-csharp-cli/Config.cs: -------------------------------------------------------------------------------- 1 | // TODO read from config file 2 | static class Config 3 | { 4 | public static string Host => "localhost"; 5 | public static int Port => 8083; 6 | } 7 | -------------------------------------------------------------------------------- /command.py: -------------------------------------------------------------------------------- 1 | class Command: 2 | def __init__(self, name, expert_template, description) -> None: 3 | self.name = name 4 | self.expert_template = expert_template 5 | self.description = description 6 | -------------------------------------------------------------------------------- /training/training-data-generator/command.py: -------------------------------------------------------------------------------- 1 | class Command: 2 | def __init__(self, name, expert_template, description) -> None: 3 | self.name = name 4 | self.expert_template = expert_template 5 | self.description = description 6 | -------------------------------------------------------------------------------- /local-llm-q/download-llm.sh: -------------------------------------------------------------------------------- 1 | pip3 install "huggingface-hub>=0.17.1" 2 | 3 | # huggingface-cli download TheBloke/CodeLlama-13B-GGUF codellama-13b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False 4 | 5 | huggingface-cli download TheBloke/CodeLlama-13B-GGUF --local-dir .
--local-dir-use-symlinks False --include=*Q4_K*gguf* 6 | -------------------------------------------------------------------------------- /local-llm-q/download-llm.bat: -------------------------------------------------------------------------------- 1 | pip3 install "huggingface-hub>=0.17.1" 2 | 3 | REM huggingface-cli download TheBloke/CodeLlama-13B-GGUF codellama-13b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False 4 | 5 | huggingface-cli download TheBloke/CodeLlama-13B-GGUF --local-dir . --local-dir-use-symlinks False --include=*q4_K_M*gguf* 6 | -------------------------------------------------------------------------------- /gpt-workflow-csharp/gpt-workflow-csharp-cli/Visitor/VisitedStringTracker.cs: -------------------------------------------------------------------------------- 1 | namespace Visitor; 2 | 3 | class VisitedStringTracker 4 | { 5 | readonly HashSet<string> visited = new(); 6 | 7 | public bool WasVisited(string name) 8 | { 9 | if (visited.Contains(name)) 10 | return true; 11 | visited.Add(name); 12 | return false; 13 | } 14 | } 15 | -------------------------------------------------------------------------------- /local-llm-q/util_time.py: -------------------------------------------------------------------------------- 1 | from datetime import timedelta 2 | import time 3 | 4 | def start_timer(): 5 | return time.time() 6 | 7 | def end_timer(start): 8 | end = time.time() 9 | seconds_elapsed = end - start 10 | return seconds_elapsed 11 | 12 | def describe_elapsed_seconds(seconds_elapsed): 13 | if seconds_elapsed is None: 14 | return "(unknown)" 15 | return f"{timedelta(seconds=seconds_elapsed)}s" 16 | -------------------------------------------------------------------------------- /local-llm-q/install.bat: -------------------------------------------------------------------------------- 1 | pip install -r requirements.txt 2 | 3 | REM The non-CUDA ctransformers?
4 | REM no-binary to force a local build, in case of dependency issues 5 | REM pip install "ctransformers>=0.2.24" --no-binary ctransformers --force 6 | 7 | REM Try ctransformers[cuda] instead of ctransformers and then bump up the GPU_LAYERS 8 | pip uninstall ctransformers 9 | pip install "ctransformers[cuda]>=0.2.24" --no-binary ctransformers --force 10 | -------------------------------------------------------------------------------- /gpt-workflow-csharp/gpt-workflow-csharp-cli/gpt-workflow-csharp-cli.csproj: -------------------------------------------------------------------------------- 1 | <Project Sdk="Microsoft.NET.Sdk"> 2 | 3 |   <PropertyGroup> 4 |     <OutputType>Exe</OutputType> 5 |     <TargetFramework>net6.0</TargetFramework> 6 |     <RootNamespace>gpt_workflow_csharp_cli</RootNamespace> 7 |     <ImplicitUsings>enable</ImplicitUsings> 8 |     <Nullable>enable</Nullable> 9 |   </PropertyGroup> 10 | 11 |   <ItemGroup> 12 |     <PackageReference Include="Graphviz.DotLanguage" /> 13 |   </ItemGroup> 14 | 15 | </Project> 16 | -------------------------------------------------------------------------------- /gpt-workflow-csharp/gpt-workflow-csharp-cli/Client/HttpResponseMessageExtensions.cs: -------------------------------------------------------------------------------- 1 | using System.Net; 2 | 3 | namespace Client; 4 | 5 | static class HttpResponseMessageExtensions 6 | { 7 | internal static void WriteRequestToConsole(this HttpResponseMessage response) 8 | { 9 | if (response is null) 10 | { 11 | return; 12 | } 13 | 14 | var request = response.RequestMessage; 15 | Console.Write($"{request?.Method} "); 16 | Console.Write($"{request?.RequestUri} "); 17 | Console.WriteLine($"HTTP/{request?.Version}"); 18 | } 19 | } 20 | -------------------------------------------------------------------------------- /local-llm/requirements.txt: -------------------------------------------------------------------------------- 1 | accelerate==0.23.0 2 | certifi==2023.7.22 3 | charset-normalizer==3.3.0 4 | colorama==0.4.6 5 | filelock==3.12.4 6 | fsspec==2023.9.2 7 | huggingface-hub==0.17.3 8 | idna==3.4 9 | Jinja2==3.1.2 10 | MarkupSafe==2.1.3 11 | mpmath==1.3.0 12 | networkx==3.1 13 | numpy==1.26.0 14 | packaging==23.2 15 | psutil==5.9.5 16 | PyYAML==6.0.1 17 | regex==2023.10.3 18 | requests==2.31.0 19 | safetensors==0.4.0 20 | sympy==1.12 21 | tokenizers==0.14.1 22 | torch==2.1.0 23 | tqdm==4.66.1 24 | transformers @ git+https://github.com/huggingface/transformers.git@eb734e51479be7d16d41d0f60c565cac3e57367a 25 | typing_extensions==4.8.0 26 | urllib3==2.0.6 27 | -------------------------------------------------------------------------------- /gpt-workflow-csharp/README.md: -------------------------------------------------------------------------------- 1 | # gpt-workflow C# 2 | 3 | C# client for gpt-workflow 4 | 5 | ## Capabilities 6 | 7 | - `Take natural language description from C# -> DOT from LLM -> write it to C# system`: take the DOT output of gpt-workflow and then allow C# code to visit it (Visitor pattern), in order to write back to some other C# system. 8 | - `Take DOT from C# and retrieve natural language description from LLM`: allow a C# system to write its workflow to gpt-workflow, receiving back a natural language description. 9 | 10 | ## Dependencies 11 | 12 | ### DotLang (DOT AST for C#) 13 | 14 | - https://github.com/abock/dotlang 15 | - https://www.nuget.org/packages/Graphviz.DotLanguage/ 16 | -------------------------------------------------------------------------------- /gpt-workflow-csharp/gpt-workflow-csharp-cli/test.sh: -------------------------------------------------------------------------------- 1 | echo "Test USAGE" 2 | ./go.sh 3 | 4 | echo . 5 | echo "Test parsing DOT" 6 | ./go.sh parse ../../dot/example-output/dot_graph_1.dot 7 | 8 | echo .
9 | echo "Test parsing DOT" 10 | ./go.sh parse ../../dot/example-output/dot_graph_11.dot 11 | 12 | echo . 13 | echo "Test parsing DOT" 14 | ./go.sh parse ../../dot/example-output/dot_graph_12.dot 15 | 16 | echo . 17 | echo "Test building DOT" 18 | ./go.sh create-example-dot 19 | 20 | echo . 21 | echo "Test building DOT and sending it to web server to get Description" 22 | ./go.sh send-example-dot-to-describe 23 | 24 | echo . 25 | echo "Test send Natural Language to web server to get DOT and parse that" 26 | ./go.sh generate-dot-and-parse "Create a flow that makes a series of decisions about whether to recommend a job interview candidate." 27 | -------------------------------------------------------------------------------- /gpt-workflow-csharp/gpt-workflow-csharp-cli/Visitor/IDotModelVisitor.cs: -------------------------------------------------------------------------------- 1 | namespace Visitor; 2 | 3 | public interface IDotModelVisitor 4 | { 5 | void VisitNode(Node node); 6 | void VisitEdge(Edge edge); 7 | void VisitNodeLabel(Node node, string label); 8 | } 9 | 10 | // Matches the whitelist in the prompt sent to LLM 11 | public enum NodeKind 12 | { 13 | Start, 14 | Decision, 15 | End, 16 | While, 17 | ReadItemsFromStorage, 18 | WriteItemsToStorage, 19 | CreateListEnumerator, 20 | HasListEnumeratorMoreItems, 21 | GenNextItemFromEnumerator, 22 | Variable, 23 | Parameter, 24 | CallFlow, 25 | Other, 26 | Comment 27 | } 28 | 29 | public record Node(NodeKind Kind, string Identifier); 30 | 31 | public record Edge(Node Start, Node End, string Label = ""); 32 | -------------------------------------------------------------------------------- /dot/example-output/dot_graph_1.dot: -------------------------------------------------------------------------------- 1 | digraph G { 2 | 3 | // start 4 | start [shape=ellipse, label="Start"]; 5 | start -> decision_credit_score; 6 | 7 | // decision_credit_score 8 | decision_credit_score [shape=Mdiamond, label="Credit Score >= 700?"]; 9 | decision_credit_score -> decision_income; 10 | 11 | // decision_income 12 | decision_income [shape=Mdiamond, label="Income >= $50,000?"]; 13 | decision_income -> decision_down_payment; 14 | 15 | // decision_down_payment 16 | decision_down_payment [shape=Mdiamond, label="Down Payment >= 20%?"]; 17 | decision_down_payment -> end_approved; 18 | decision_down_payment -> end_rejected; 19 | 20 | // end_approved 21 | end_approved [shape=rectangle, label="Approved"]; 22 | 23 | // end_rejected 24 | end_rejected [shape=rectangle, label="Rejected"]; 25 | 26 | } 27 | -------------------------------------------------------------------------------- /training/training-data-generator/test__workflow_trainingset_creator.py: -------------------------------------------------------------------------------- 1 | from workflow_trainingset_creator import EXPERT_COMMANDS 2 | import core 3 | 4 | def test(): 5 | command_messages = core.create_command_messages(EXPERT_COMMANDS) 6 | 7 | tests = [ 8 | { 9 | "name": "Generate training data for 10 workflows", 10 | "prompts": [ 11 | "Create training data with 10 workflows" 12 | ] 13 | } 14 | ] 15 | 16 | for test in tests: 17 | previous_messages = [] 18 | print(f"[[[TEST {test['name']}]]]") 19 | for user_prompt in test['prompts']: 20 | print("---") 21 | print(f">> {user_prompt}") 22 | # should route to the right 'expert' chain! 
23 | rsp = core.execute_prompt(user_prompt, previous_messages, command_messages) 24 | print(rsp) 25 | -------------------------------------------------------------------------------- /gpt-workflow-csharp/gpt-workflow-csharp-cli/Visitor/ConsoleDumpDotModelVisitor.cs: -------------------------------------------------------------------------------- 1 | namespace Visitor; 2 | 3 | // Default example implementation of IDotModelVisitor 4 | public class ConsoleDumpDotModelVisitor : BaseDotModelVisitor 5 | { 6 | protected override void VisitEdgeImplementation(Edge edge) 7 | { 8 | Console.WriteLine($"{ToString(edge.Start)} --> {ToString(edge.End)}"); 9 | } 10 | 11 | protected override void VisitNodeImplementation(Node node) 12 | { 13 | if (node.Kind == NodeKind.Comment) 14 | return; 15 | 16 | Console.WriteLine(ToString(node)); 17 | } 18 | 19 | protected override void SetLabelForNode(Node node, string label) 20 | { 21 | Console.WriteLine($"// node '{ToString(node)}' has label '{label}'"); 22 | } 23 | 24 | string ToString(Node node) => $"{node.Kind}[{node.Identifier}]"; 25 | } 26 | -------------------------------------------------------------------------------- /dot/example-output/dot_graph_12.dot: -------------------------------------------------------------------------------- 1 | digraph G { 2 | // start 3 | start [shape=ellipse, label="Start"]; 4 | start -> decision_credit_score; 5 | 6 | // decision_credit_score 7 | decision_credit_score [shape=diamond, label="Credit Score > 700?"]; 8 | decision_credit_score -> decision_income; 9 | 10 | // decision_income 11 | decision_income [shape=diamond, label="Income > $50,000?"]; 12 | decision_income -> decision_down_payment; 13 | 14 | // decision_down_payment 15 | decision_down_payment [shape=diamond, label="Down Payment > 20%?"]; 16 | decision_down_payment -> end_approved; 17 | decision_down_payment -> end_rejected; 18 | 19 | // end_approved 20 | end_approved [shape=ellipse, label="Approved"]; 21 | end_approved -> end; 22 | 23 | end_rejected [shape=ellipse, label="Rejected"]; 24 | end_rejected -> end; 25 | 26 | // end 27 | end [shape=ellipse, label="End"]; 28 | 29 | } 30 | -------------------------------------------------------------------------------- /gpt-workflow-csharp/gpt-workflow-csharp-cli/Visitor/BaseDotModelVisitor.cs: -------------------------------------------------------------------------------- 1 | namespace Visitor; 2 | 3 | public abstract class BaseDotModelVisitor : IDotModelVisitor 4 | { 5 | VisitedStringTracker visited = new(); 6 | 7 | public void VisitEdge(Edge edge) 8 | { 9 | if (!visited.WasVisited(edge.ToString())) 10 | VisitEdgeImplementation(edge); 11 | } 12 | 13 | protected abstract void VisitEdgeImplementation(Edge edge); 14 | 15 | public void VisitNode(Node node) 16 | { 17 | if (!visited.WasVisited(node.ToString())) 18 | VisitNodeImplementation(node); 19 | } 20 | 21 | protected abstract void VisitNodeImplementation(Node node); 22 | 23 | public void VisitNodeLabel(Node node, string label) 24 | { 25 | SetLabelForNode(node, label); 26 | } 27 | 28 | protected abstract void SetLabelForNode(Node node, string label); 29 | } 30 | -------------------------------------------------------------------------------- /local-llm/main--py-specialist-llm.py: -------------------------------------------------------------------------------- 1 | # ref = https://huggingface.co/codellama/CodeLlama-13b-Python-hf 2 | # 3 | # The 13B *Python specialist* version in the Hugging Face Transformers format. 
4 | # This model is designed for general code synthesis and understanding. 5 | 6 | # Use a pipeline as a high-level helper 7 | from transformers import pipeline 8 | 9 | pipe = pipeline("text-generation", model="codellama/CodeLlama-13b-Python-hf") 10 | 11 | prompt = """ 12 | create a DOT graph to decide a mortgage loan. if credit score is greater than 700 then check years employed. else reject. 13 | 14 | if years employed is greater than 3 then approve. 15 | else reject. 16 | 17 | name the DOT nodes with a prefix decision_ or end_ or other_. 18 | 19 | In the labels, refer to the available properties: applicant.credit_score, applicant.years_employed, applicant.other 20 | 21 | DOT: 22 | """ 23 | 24 | print(prompt) 25 | 26 | response = pipe(prompt) 27 | 28 | print(f"RSP>> {response}") 29 | -------------------------------------------------------------------------------- /main_cli.py: -------------------------------------------------------------------------------- 1 | from prompts_dot_graph_creator import EXPERT_COMMANDS 2 | import core 3 | 4 | command_messages = core.create_command_messages(EXPERT_COMMANDS) 5 | previous_messages = [] 6 | 7 | def output_capabilities(): 8 | print("Hello, I am gpt-workflow, an AI assistant. Here are my capabilities:") 9 | for command in EXPERT_COMMANDS: 10 | print(f" - {command.description.replace('Good for answering questions about', '')}") 11 | print("") 12 | 13 | output_capabilities() 14 | 15 | initial_prompt = "What is the name of your workflow? >>" 16 | user_prompt = input(initial_prompt) 17 | 18 | prompt_id = 1 19 | input_prompt = "(To exit, just press ENTER) >" 20 | while user_prompt is not None and len(user_prompt) > 0: 21 | print("=== === === ===") 22 | print(f">> {user_prompt}") 23 | # should route to the right 'expert'! 24 | rsp = core.execute_prompt(user_prompt, previous_messages, command_messages, prompt_id) 25 | print("=== RESPONSE ===") 26 | print(rsp) 27 | user_prompt = input(input_prompt) 28 | prompt_id += 1 29 | -------------------------------------------------------------------------------- /training/training-data-generator/core.py: -------------------------------------------------------------------------------- 1 | import service_chat 2 | 3 | # DEV note: NOT using langchain. 4 | # Tried using langchain but its validation gets in the way of more complex prompts. 5 | # And it seems simpler to code directly, not via a complicated framework.
6 | 7 | def create_command_messages(expert_commands): 8 | messages = [] 9 | for command in expert_commands: 10 | messages.append({'role':'system', 'content': command.expert_template }) 11 | 12 | return messages 13 | 14 | def execute_prompt(user_prompt, previous_messages, command_messages): 15 | # TODO: Route to the right 'expert' chain 16 | # Falls back to the default chain, which means sending the plain user prompt to the LLM 17 | 18 | user_message = {'role':'user', 'content': user_prompt } 19 | 20 | messages = command_messages + previous_messages + [user_message] 21 | 22 | rsp = service_chat.send_prompt_messages(messages) 23 | 24 | previous_messages.append(user_message) 25 | previous_messages.append({'role':'assistant', 'content': rsp }) 26 | return rsp 27 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2023 Sean Ryan 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
22 | -------------------------------------------------------------------------------- /dot/example-output/dot_graph_2.dot: -------------------------------------------------------------------------------- 1 | digraph G { 2 | 3 | // start 4 | start [shape=ellipse, label="Start"]; 5 | start -> decision_credit_score; 6 | 7 | // decision_credit_score 8 | decision_credit_score [shape=Mdiamond, label="Good credit score?"]; 9 | decision_credit_score -> decision_income; 10 | 11 | // decision_income 12 | decision_income [shape=Mdiamond, label="Income >= $100,000?"]; 13 | decision_income -> decision_employment; 14 | 15 | // decision_employment 16 | decision_employment [shape=Mdiamond, label="Employed >= 3 years?"]; 17 | decision_employment -> decision_down_payment; 18 | 19 | // decision_down_payment 20 | decision_down_payment [shape=Mdiamond, label="Down payment >= 10%?"]; 21 | decision_down_payment -> decision_criminal_record; 22 | 23 | // decision_criminal_record 24 | decision_criminal_record [shape=Mdiamond, label="No criminal record in last 5 years?"]; 25 | decision_criminal_record -> end_approval; 26 | decision_criminal_record -> end_rejection; 27 | 28 | // end_approval 29 | end_approval [shape=ellipse, label="Approved"]; 30 | 31 | // end_rejection 32 | end_rejection [shape=ellipse, label="Rejected"]; 33 | 34 | } 35 | -------------------------------------------------------------------------------- /gpt-workflow-csharp/.vscode/launch.json: -------------------------------------------------------------------------------- 1 | { 2 | "version": "0.2.0", 3 | "configurations": [ 4 | { 5 | // Use IntelliSense to find out which attributes exist for C# debugging 6 | // Use hover for the description of the existing attributes 7 | // For further information visit https://github.com/dotnet/vscode-csharp/blob/main/debugger-launchjson.md 8 | "name": ".NET Core Launch (console)", 9 | "type": "coreclr", 10 | "request": "launch", 11 | "preLaunchTask": "build", 12 | // If you have changed target frameworks, make sure to update the program path. 
13 | "program": "${workspaceFolder}/gpt-workflow-csharp-cli/bin/Debug/net6.0/gpt-workflow-csharp-cli.dll", 14 | "args": [], 15 | "cwd": "${workspaceFolder}/gpt-workflow-csharp-cli", 16 | // For more information about the 'console' field, see https://aka.ms/VSCode-CS-LaunchJson-Console 17 | "console": "internalConsole", 18 | "stopAtEntry": false 19 | }, 20 | { 21 | "name": ".NET Core Attach", 22 | "type": "coreclr", 23 | "request": "attach" 24 | } 25 | ] 26 | } -------------------------------------------------------------------------------- /dot/example-output/dot_graph_3.dot: -------------------------------------------------------------------------------- 1 | digraph G { 2 | 3 | // start 4 | start [shape=ellipse, label="Start"]; 5 | start -> decision_experience; 6 | 7 | // decision_experience 8 | decision_experience [shape=Mdiamond, label="Has relevant experience?"]; 9 | decision_experience -> decision_education; 10 | 11 | // decision_education 12 | decision_education [shape=Mdiamond, label="Has required education?"]; 13 | decision_education -> decision_skills; 14 | 15 | // decision_skills 16 | decision_skills [shape=Mdiamond, label="Has necessary skills?"]; 17 | decision_skills -> decision_references; 18 | 19 | // decision_references 20 | decision_references [shape=Mdiamond, label="Has positive references?"]; 21 | decision_references -> decision_availability; 22 | 23 | // decision_availability 24 | decision_availability [shape=Mdiamond, label="Available for interview?"]; 25 | decision_availability -> end_recommend; 26 | decision_availability -> end_not_recommend; 27 | 28 | // end_recommend 29 | end_recommend [shape=ellipse, label="Recommend for interview"]; 30 | 31 | // end_not_recommend 32 | end_not_recommend [shape=ellipse, label="Do not recommend for interview"]; 33 | 34 | } 35 | -------------------------------------------------------------------------------- /.vscode/launch.json: -------------------------------------------------------------------------------- 1 | { 2 | "version": "0.2.0", 3 | "configurations": [ 4 | { 5 | // Use IntelliSense to find out which attributes exist for C# debugging 6 | // Use hover for the description of the existing attributes 7 | // For further information visit https://github.com/dotnet/vscode-csharp/blob/main/debugger-launchjson.md 8 | "name": ".NET Core Launch (console)", 9 | "type": "coreclr", 10 | "request": "launch", 11 | "preLaunchTask": "build", 12 | // If you have changed target frameworks, make sure to update the program path. 13 | "program": "${workspaceFolder}/gpt-workflow-csharp/gpt-workflow-csharp-cli/bin/Debug/net7.0/gpt-workflow-csharp-cli.dll", 14 | "args": [], 15 | "cwd": "${workspaceFolder}/gpt-workflow-csharp/gpt-workflow-csharp-cli", 16 | // For more information about the 'console' field, see https://aka.ms/VSCode-CS-LaunchJson-Console 17 | "console": "internalConsole", 18 | "stopAtEntry": false 19 | }, 20 | { 21 | "name": ".NET Core Attach", 22 | "type": "coreclr", 23 | "request": "attach" 24 | } 25 | ] 26 | } -------------------------------------------------------------------------------- /local-llm-q/test.py: -------------------------------------------------------------------------------- 1 | from ctransformers import AutoModelForCausalLM 2 | 3 | import config 4 | import util_time 5 | 6 | def print_config(): 7 | print(f"GPU layers used: {config.GPU_LAYERS}") 8 | 9 | print_config() 10 | 11 | #prompt = "AI is going to" 12 | prompt = """ 13 | create a DOT graph to decide a mortgage loan. 
14 | if credit score is greater than 700 then check years employed. else reject. 15 | if years employed is greater than 3 then approve. else reject. 16 | 17 | digraph G { 18 | """ 19 | print(f">> {prompt}") 20 | 21 | start = util_time.start_timer() 22 | 23 | # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. 24 | llm = AutoModelForCausalLM.from_pretrained( 25 | config.HF_PROJECT, 26 | model_file=config.ACTIVE_MODEL_FILE, 27 | model_type=config.MODEL_TYPE, 28 | gpu_layers=config.GPU_LAYERS, 29 | max_new_tokens = config.MAX_NEW_TOKENS, 30 | repetition_penalty = config.REPETITION_PENALTY, 31 | temperature = config.TEMPERATURE 32 | ) 33 | 34 | print(llm(prompt)) 35 | 36 | print_config() 37 | 38 | time_elapsed = util_time.end_timer(start) 39 | print(f"Time taken: {util_time.describe_elapsed_seconds(time_elapsed)}") 40 | -------------------------------------------------------------------------------- /core.py: -------------------------------------------------------------------------------- 1 | import service_dot_parser 2 | import service_chat 3 | 4 | # DEV note: NOT using langchain. 5 | # Tried using langchain but its validation gets in the way of more complex prompts. 6 | # And it seems simpler to code directly, not via a complicated framework. 7 | 8 | def create_command_messages(expert_commands): 9 | messages = [] 10 | for command in expert_commands: 11 | messages.append({'role':'system', 'content': command.expert_template }) 12 | 13 | return messages 14 | 15 | def process_response(rsp, prompt_id): 16 | if service_dot_parser.is_dot_response(rsp): 17 | return service_dot_parser.parse_dot_and_return_human(rsp, prompt_id) 18 | return rsp 19 | 20 | def execute_prompt(user_prompt, previous_messages, command_messages, prompt_id): 21 | # TODO: Route to the right 'expert' chain 22 | # Falls back to the default chain, which means sending the plain user prompt to the LLM 23 | 24 | user_message = {'role':'user', 'content': user_prompt } 25 | 26 | messages = command_messages + previous_messages + [user_message] 27 | 28 | rsp = service_chat.send_prompt_messages(messages) 29 | 30 | previous_messages.append(user_message) 31 | previous_messages.append({'role':'assistant', 'content': rsp }) 32 | return process_response(rsp, prompt_id) 33 | -------------------------------------------------------------------------------- /dot/how_it_works-DOT-describer.simplified.dot: -------------------------------------------------------------------------------- 1 | digraph G { 2 | graph [ 3 | label = "Describing/Explaining a Workflow via DOT format [simplified]" 4 | labelloc = t 5 | 6 | //dpi = 200 7 | ranksep=0.65 8 | nodesep=0.40 9 | rankdir=TB 10 | 11 | len=0 12 | ] 13 | 14 | subgraph cluster_0 { 15 | label = "User"; 16 | 17 | User_In 18 | User_Out 19 | } 20 | 21 | subgraph cluster_1 { 22 | color=blue 23 | label = "gpt-workflow"; 24 | 25 | core 26 | rsp_parser 27 | system_prompts 28 | } 29 | 30 | subgraph cluster_2 { 31 | label = "LLM"; 32 | 33 | LLM_In 34 | LLM_Out 35 | } 36 | 37 | subgraph cluster_3 { 38 | label = "Application"; 39 | 40 | Application_In 41 | Application_Out 42 | } 43 | 44 | User_In -> Application_In [label="Describe this workflow"] 45 | 46 | Application_In -> core [label="Workflow (Graphviz DOT)"] 47 | system_prompts -> core [xlabel="[system prompts]"] 48 | 49 | core -> LLM_In [label="req[Graphviz DOT][system prompts]"] 50 | 51 | LLM_In -> LLM_Out 52 | 53 | LLM_Out -> rsp_parser [label="rsp[summary]"] 54 | 55 | rsp_parser ->
Application_Out 56 | Application_Out -> User_Out [label="[summary]"] 57 | } 58 | -------------------------------------------------------------------------------- /training/training-data-generator/go.py: -------------------------------------------------------------------------------- 1 | import os 2 | import sys 3 | 4 | from workflow_trainingset_creator import EXPERT_COMMANDS 5 | import core 6 | 7 | if len(sys.argv) != 2: 8 | print(f"USAGE: {sys.argv[0]} <output csv file>") 9 | 10 | csv_file_path = sys.argv[1] 11 | 12 | command_messages = core.create_command_messages(EXPERT_COMMANDS) 13 | 14 | # TODO xxx how to make more random? up temperature could risk the format... 15 | user_prompt = "Create training data with 10 workflows" 16 | 17 | previous_messages = [] 18 | print("---") 19 | print(f">> {user_prompt}") 20 | rsp = core.execute_prompt(user_prompt, previous_messages, command_messages) 21 | print(rsp) 22 | 23 | LLM_LINE_SEP = '\n' 24 | 25 | def remove_header_line(rsp): 26 | lines = rsp.split(LLM_LINE_SEP) 27 | return LLM_LINE_SEP.join(lines[1:]) 28 | 29 | def write_or_append_out(rsp, csv_file_path): 30 | file_exists = os.path.isfile(csv_file_path) 31 | file_mode = 'a' if file_exists else 'w' 32 | file_mode_description = 'Appending' if file_exists else 'Writing' 33 | print(f" {file_mode_description} to {csv_file_path}") 34 | if file_exists: 35 | rsp = remove_header_line(rsp) 36 | with open(csv_file_path, file_mode, encoding='utf-8') as f: 37 | f.write(rsp + LLM_LINE_SEP) 38 | 39 | write_or_append_out(rsp, csv_file_path) 40 | -------------------------------------------------------------------------------- /local-llm-q/requirements-all.txt: -------------------------------------------------------------------------------- 1 | aiofiles==23.2.1 2 | altair==5.1.2 3 | annotated-types==0.6.0 4 | anyio==3.7.1 5 | attrs==23.1.0 6 | certifi==2023.7.22 7 | charset-normalizer==3.3.0 8 | click==8.1.7 9 | colorama==0.4.6 10 | contourpy==1.1.1 11 | ctransformers==0.2.27 12 | cycler==0.12.1 13 | fastapi==0.103.2 14 | ffmpy==0.3.1 15 | filelock==3.12.4 16 | fonttools==4.43.1 17 | fsspec==2023.9.2 18 | gradio==3.47.1 19 | gradio_client==0.6.0 20 | h11==0.14.0 21 | httpcore==0.18.0 22 | httpx==0.25.0 23 | huggingface-hub==0.18.0 24 | idna==3.4 25 | importlib-resources==6.1.0 26 | Jinja2==3.1.2 27 | jsonschema==4.19.1 28 | jsonschema-specifications==2023.7.1 29 | kiwisolver==1.4.5 30 | MarkupSafe==2.1.3 31 | matplotlib==3.8.0 32 | numpy==1.26.0 33 | nvidia-cublas-cu12==12.2.5.6 34 | nvidia-cuda-runtime-cu12==12.2.140 35 | orjson==3.9.8 36 | packaging==23.2 37 | pandas==2.1.1 38 | Pillow==10.0.1 39 | py-cpuinfo==9.0.0 40 | pydantic==2.4.2 41 | pydantic_core==2.10.1 42 | pydub==0.25.1 43 | pyparsing==3.1.1 44 | python-dateutil==2.8.2 45 | python-multipart==0.0.6 46 | pytz==2023.3.post1 47 | PyYAML==6.0.1 48 | referencing==0.30.2 49 | requests==2.31.0 50 | rpds-py==0.10.6 51 | semantic-version==2.10.0 52 | six==1.16.0 53 | sniffio==1.3.0 54 | starlette==0.27.0 55 | toolz==0.12.0 56 | tqdm==4.66.1 57 | typing_extensions==4.8.0 58 | tzdata==2023.3 59 | urllib3==2.0.6 60 | uvicorn==0.23.2 61 | websockets==11.0.3 62 | -------------------------------------------------------------------------------- /gpt-workflow-csharp/.vscode/tasks.json: -------------------------------------------------------------------------------- 1 | { 2 | "version": "2.0.0", 3 | "tasks": [ 4 | { 5 | "label": "build", 6 | "command": "dotnet", 7 | "type": "process", 8 | "args": [ 9 | "build", 10 |
"${workspaceFolder}/gpt-workflow-csharp-cli/gpt-workflow-csharp-cli.csproj", 11 | "/property:GenerateFullPaths=true", 12 | "/consoleloggerparameters:NoSummary" 13 | ], 14 | "problemMatcher": "$msCompile" 15 | }, 16 | { 17 | "label": "publish", 18 | "command": "dotnet", 19 | "type": "process", 20 | "args": [ 21 | "publish", 22 | "${workspaceFolder}/gpt-workflow-csharp-cli/gpt-workflow-csharp-cli.csproj", 23 | "/property:GenerateFullPaths=true", 24 | "/consoleloggerparameters:NoSummary" 25 | ], 26 | "problemMatcher": "$msCompile" 27 | }, 28 | { 29 | "label": "watch", 30 | "command": "dotnet", 31 | "type": "process", 32 | "args": [ 33 | "watch", 34 | "run", 35 | "--project", 36 | "${workspaceFolder}/gpt-workflow-csharp-cli/gpt-workflow-csharp-cli.csproj" 37 | ], 38 | "problemMatcher": "$msCompile" 39 | } 40 | ] 41 | } -------------------------------------------------------------------------------- /dot/example-output/dot_graph_9.dot: -------------------------------------------------------------------------------- 1 | digraph G { 2 | 3 | // start 4 | start -> read_items_from_storage_list; 5 | 6 | // read_items_from_storage_list 7 | read_items_from_storage_list -> create_list_enumerator_list; 8 | 9 | // create_list_enumerator_list 10 | create_list_enumerator_list -> has_list_enumerator_more_items_list; 11 | 12 | // has_list_enumerator_more_items_list 13 | has_list_enumerator_more_items_list -> decision_has_more_items_list; 14 | 15 | // decision_has_more_items_list 16 | decision_has_more_items_list -> get_next_item_from_enumerator_list [label="Yes"]; 17 | decision_has_more_items_list -> end [label="No"]; 18 | 19 | // get_next_item_from_enumerator_list 20 | get_next_item_from_enumerator_list -> perform_action_on_item_list; 21 | perform_action_on_item_list -> has_list_enumerator_more_items_list; 22 | 23 | // end 24 | end [shape=Msquare]; 25 | 26 | read_items_from_storage_list [shape=Mdiamond, label="Read items from storage (list)"]; 27 | create_list_enumerator_list [shape=Mdiamond, label="Create list enumerator (list)"]; 28 | has_list_enumerator_more_items_list [shape=Mdiamond, label="Has list enumerator more items?"]; 29 | decision_has_more_items_list [shape=Mdiamond, label="Has more items?"]; 30 | get_next_item_from_enumerator_list [shape=Mdiamond, label="Get next item from enumerator (list)"]; 31 | perform_action_on_item_list [shape=Mdiamond, label="Perform action on item (list)"]; 32 | } 33 | -------------------------------------------------------------------------------- /dot/example-output/dot_graph_4.dot: -------------------------------------------------------------------------------- 1 | digraph G { 2 | 3 | // start 4 | start [shape=ellipse, label="Start"]; 5 | start -> decision_has_feathers; 6 | 7 | // decision_has_feathers 8 | decision_has_feathers [shape=Mdiamond, label="Has feathers?"]; 9 | decision_has_feathers -> decision_can_fly; 10 | decision_has_feathers -> decision_has_fins; 11 | 12 | // decision_can_fly 13 | decision_can_fly [shape=Mdiamond, label="Can fly?"]; 14 | decision_can_fly -> end_Bird; 15 | decision_can_fly -> decision_lays_eggs; 16 | 17 | // decision_has_fins 18 | decision_has_fins [shape=Mdiamond, label="Has fins?"]; 19 | decision_has_fins -> end_Fish; 20 | decision_has_fins -> decision_has_legs; 21 | 22 | // decision_lays_eggs 23 | decision_lays_eggs [shape=Mdiamond, label="Lays eggs?"]; 24 | decision_lays_eggs -> end_Reptile; 25 | decision_lays_eggs -> decision_has_legs; 26 | 27 | // decision_has_legs 28 | decision_has_legs [shape=Mdiamond, label="Has legs?"]; 29 | 
decision_has_legs -> end_Mammal; 30 | decision_has_legs -> end_Amphibian; 31 | 32 | // end_Bird 33 | end_Bird [shape=ellipse, label="Bird"]; 34 | 35 | // end_Fish 36 | end_Fish [shape=ellipse, label="Fish"]; 37 | 38 | // end_Reptile 39 | end_Reptile [shape=ellipse, label="Reptile"]; 40 | 41 | // end_Mammal 42 | end_Mammal [shape=ellipse, label="Mammal"]; 43 | 44 | // end_Amphibian 45 | end_Amphibian [shape=ellipse, label="Amphibian"]; 46 | 47 | } 48 | -------------------------------------------------------------------------------- /.vscode/tasks.json: -------------------------------------------------------------------------------- 1 | { 2 | "version": "2.0.0", 3 | "tasks": [ 4 | { 5 | "label": "build", 6 | "command": "dotnet", 7 | "type": "process", 8 | "args": [ 9 | "build", 10 | "${workspaceFolder}/gpt-workflow-csharp/gpt-workflow-csharp-cli/gpt-workflow-csharp-cli.csproj", 11 | "/property:GenerateFullPaths=true", 12 | "/consoleloggerparameters:NoSummary" 13 | ], 14 | "problemMatcher": "$msCompile" 15 | }, 16 | { 17 | "label": "publish", 18 | "command": "dotnet", 19 | "type": "process", 20 | "args": [ 21 | "publish", 22 | "${workspaceFolder}/gpt-workflow-csharp/gpt-workflow-csharp-cli/gpt-workflow-csharp-cli.csproj", 23 | "/property:GenerateFullPaths=true", 24 | "/consoleloggerparameters:NoSummary" 25 | ], 26 | "problemMatcher": "$msCompile" 27 | }, 28 | { 29 | "label": "watch", 30 | "command": "dotnet", 31 | "type": "process", 32 | "args": [ 33 | "watch", 34 | "run", 35 | "--project", 36 | "${workspaceFolder}/gpt-workflow-csharp/gpt-workflow-csharp-cli/gpt-workflow-csharp-cli.csproj" 37 | ], 38 | "problemMatcher": "$msCompile" 39 | } 40 | ] 41 | } -------------------------------------------------------------------------------- /dot/how_it_works-DOT-describer.dot: -------------------------------------------------------------------------------- 1 | digraph G { 2 | graph [ 3 | label = "Describing/Explaining a Workflow via DOT format" 4 | labelloc = t 5 | 6 | //dpi = 200 7 | ranksep=0.65 8 | nodesep=0.40 9 | rankdir=TB 10 | 11 | len=0 12 | ] 13 | 14 | subgraph cluster_0 { 15 | label = "User"; 16 | 17 | User_In 18 | User_Out 19 | } 20 | 21 | subgraph cluster_1 { 22 | color=blue 23 | label = "gpt-workflow"; 24 | 25 | core 26 | rsp_parser 27 | 28 | dot_describer_prompt -> system_prompts 29 | } 30 | 31 | subgraph cluster_2 { 32 | label = "LLM"; 33 | 34 | LLM_In 35 | LLM_Out 36 | } 37 | 38 | subgraph cluster_3 { 39 | label = "Application"; 40 | 41 | Application_In 42 | Application_Out 43 | DOT_Serializer 44 | 45 | Workflow -> DOT_Serializer[xlabel=""] 46 | } 47 | 48 | User_In -> Application_In [label="Describe this workflow"] 49 | 50 | system_prompts -> core [xlabel="[system prompts]"] 51 | 52 | Application_In -> DOT_Serializer [label="Serialize workflow to DOT"] 53 | 54 | DOT_Serializer -> core [label="Workflow (Graphviz DOT)"] 55 | 56 | core -> LLM_In [label="req[Graphviz DOT][system prompts]"] 57 | 58 | LLM_In -> LLM_Out 59 | 60 | LLM_Out -> rsp_parser [label="rsp[summary]"] 61 | 62 | rsp_parser -> Application_Out 63 | Application_Out -> User_Out [label="[summary]"] 64 | } 65 | -------------------------------------------------------------------------------- /gpt-workflow-csharp/gpt-workflow-csharp-cli/Client/GptWorkflowClient.cs: -------------------------------------------------------------------------------- 1 | using System.Net; 2 | using System.Web; 3 | 4 | namespace Client; 5 | 6 | public class GptWorkflowClient 7 | { 8 | readonly int port; 9 | readonly string hostname; 10 | 11 | public 
GptWorkflowClient(string hostname, int port) 12 | { 13 | this.port = port; 14 | this.hostname = hostname; 15 | } 16 | 17 | HttpClient CreateClient() => new HttpClient () { BaseAddress = new Uri($"http://{hostname}:{port}") }; 18 | 19 | async Task<string> GetResponse(string request) 20 | { 21 | using(var webClient = CreateClient()) 22 | { 23 | var rsp = await webClient.GetAsync(request); 24 | rsp.EnsureSuccessStatusCode() 25 | .WriteRequestToConsole(); 26 | 27 | return await rsp.Content.ReadAsStringAsync(); 28 | } 29 | } 30 | 31 | string Encode(string text) 32 | => HttpUtility.UrlPathEncode(text); 33 | 34 | public async Task<string> DescribeDot(string dot) 35 | => await GetResponse($"describe-dot?p={Encode(dot)}"); 36 | 37 | public async Task<(string description, string dot)> GenerateDot(string description) 38 | { 39 | var rsp = await GetResponse($"generate-dot?p={Encode(description)}"); 40 | 41 | var rspParts = rsp.Split("======"); 42 | var generatedDescription = rspParts[0]; 43 | var dot = rspParts[1]; 44 | return (generatedDescription, dot); 45 | } 46 | } 47 | -------------------------------------------------------------------------------- /dot/how_it_works-DOT-generation-from-natural-language.simplified.dot: -------------------------------------------------------------------------------- 1 | digraph G { 2 | graph [ 3 | label = "Workflow Generation via DOT format (simplified)" 4 | labelloc = t 5 | 6 | //dpi = 200 7 | ranksep=0.65 8 | nodesep=0.40 9 | rankdir=TB 10 | 11 | len=0 12 | ] 13 | 14 | subgraph cluster_0 { 15 | label = "User"; 16 | 17 | User_In 18 | User_Out 19 | } 20 | 21 | subgraph cluster_1 { 22 | label = "Application"; 23 | 24 | Application_In 25 | Application_Out 26 | Workflow_Updater 27 | } 28 | 29 | subgraph cluster_2 { 30 | color=blue 31 | label = "gpt-workflow"; 32 | 33 | core 34 | service_dot_parser 35 | system_prompts 36 | } 37 | 38 | subgraph cluster_3 { 39 | label = "LLM"; 40 | 41 | LLM_In 42 | LLM_Out 43 | } 44 | 45 | User_In -> Application_In[label="natural language description of a workflow"] 46 | 47 | system_prompts -> core [xlabel="system prompts"] 48 | 49 | Application_In -> core [label="natural language description of a workflow"] 50 | 51 | core -> LLM_In [xlabel="req[Description][format: Graphviz DOT]"] 52 | 53 | LLM_Out -> service_dot_parser [taillabel="rsp[Workflow in DOT format][summary]"] 54 | 55 | LLM_In -> LLM_Out 56 | 57 | service_dot_parser -> Workflow_Updater [label=""] 58 | service_dot_parser -> Application_Out [label="[summary]"] 59 | 60 | Application_Out -> User_Out [label="[summary]"] 61 | } 62 | -------------------------------------------------------------------------------- /service_chat.py: -------------------------------------------------------------------------------- 1 | import openai 2 | 3 | import config 4 | 5 | def get_completion(prompt, model="gpt-3.5-turbo", temperature = 0, messages = None): 6 | if messages is None: 7 | messages = [{"role": "user", "content": prompt}] 8 | response = openai.ChatCompletion.create( 9 | model=model, 10 | messages=messages, 11 | # Temperature is the degree of randomness of the model's output 12 | # 0 would be same each time.
0.7 or 1 would be different each time, and less likely words can be used: 13 | temperature=temperature, 14 | ) 15 | return response.choices[0].message["content"] 16 | 17 | def send_prompt(prompt, show_input = True, show_output = True, temperature = 0): 18 | if show_input: 19 | print("=== INPUT ===") 20 | print(prompt) 21 | 22 | response = get_completion(prompt, temperature=temperature) 23 | 24 | if show_output: 25 | print("=== RESPONSE ===") 26 | print(response) 27 | 28 | return response 29 | 30 | def send_prompt_messages(messages, temperature = 0): 31 | last_message = messages[-1:] 32 | if config.is_debug: 33 | print("=== LAST MESSAGE ===") 34 | print(last_message) 35 | rsp = get_completion(prompt=None, temperature=temperature, messages=messages) 36 | if config.is_debug: 37 | print("=== RESPONSE ===") 38 | print(rsp) 39 | return rsp 40 | 41 | def next_prompt(prompt): 42 | if config.is_debug: 43 | return send_prompt(prompt, temperature=config.TEMPERATURE) 44 | return send_prompt(prompt, False, False, temperature=config.TEMPERATURE) 45 | -------------------------------------------------------------------------------- /training/training-data-generator/README.md: -------------------------------------------------------------------------------- 1 | # Generating training data to train a gpt-workflow LLM 2 | 3 | ## Approach 4 | 5 | Create a small test dataset in CSV format, via Chat-GPT 3.5 Turbo (why: power + speed + low effort) 6 | 7 | - Summary, Description, DOT 8 | 9 | Ideally, about 20 million rows? 10 | Initially less - say 10,000 rows. 11 | 12 | ## Example Output 13 | 14 | ``` 15 | ./test.sh 16 | ``` 17 | 18 | ``` 19 | [[[TEST Generate training data for 10 workflows]]] 20 | --- 21 | >> Create training data with 10 workflows 22 | workflow-name,summary,description,request,DOT 23 | EmployeeOnboarding,Manage the onboarding process for new employees,This workflow manages the onboarding process for new employees, including tasks such as paperwork completion, equipment setup, and orientation.,Create a workflow to manage the onboarding process for new employees,"digraph EmployeeOnboarding { 24 | node [shape=box]; 25 | Start -> PaperworkCompletion; 26 | PaperworkCompletion -> EquipmentSetup; 27 | EquipmentSetup -> Orientation; 28 | Orientation -> End; 29 | }" 30 | ExpenseApproval,Streamline the expense approval process,This workflow streamlines the process of approving employee expenses, ensuring timely and accurate reimbursement.,Design a workflow to streamline the expense approval process,"digraph ExpenseApproval { 31 | node [shape=box]; 32 | Start -> SubmitExpenseReport; 33 | SubmitExpenseReport -> ReviewExpense; 34 | ReviewExpense -> ApproveExpense; 35 | ApproveExpense -> ReimburseExpense; 36 | ReimburseExpense -> End; 37 | }" 38 | ``` 39 | 40 | ## Reference 41 | 42 | [gpt-workflow via Chat-GPT](../../README.md) 43 | -------------------------------------------------------------------------------- /training/training-data-generator/service_chat.py: -------------------------------------------------------------------------------- 1 | import openai 2 | 3 | import config 4 | 5 | def get_completion(prompt, model="gpt-3.5-turbo", temperature = 0, messages = None): 6 | if messages is None: 7 | messages = [{"role": "user", "content": prompt}] 8 | response = openai.ChatCompletion.create( 9 | model=model, 10 | messages=messages, 11 | # Temperature is the degree of randomness of the model's output 12 | # 0 would be same each time.
-------------------------------------------------------------------------------- /service_dot_parser.py: -------------------------------------------------------------------------------- 1 | import os 2 | import pydot 3 | 4 | import config 5 | import util_file 6 | 7 | DOT_GRAPH_START = "digraph G" 8 | DOT_SECTION_DELIMITER = "```" 9 | 10 | def generate_output_file_path(prompt_id, extension): 11 | return os.path.join(config.PATH_TO_PNG_OUTDIR, f"dot_graph_{prompt_id}.{extension}") 12 | 13 | def generate_png_from_dot(dot_string, prompt_id): 14 | graphs = pydot.graph_from_dot_data(dot_string) 15 | path_to_png = generate_output_file_path(prompt_id, "png") 16 | print(f"Writing png to '{path_to_png}'") 17 | graphs[0].write_png(path_to_png) 18 | 19 | def write_dot_to_file(dot_string, prompt_id): 20 | path_to_dot = generate_output_file_path(prompt_id, "dot") 21 | print(f"Writing dot to '{path_to_dot}'") 22 | util_file.write_text_to_file(dot_string, path_to_dot) 23 | 24 | def is_dot_response(rsp): 25 | return DOT_GRAPH_START in rsp 26 | 27 | def parse_dot_and_return_human(rsp, prompt_id): 28 | parts = rsp.split(DOT_GRAPH_START) 29 | human_output = parts[0].replace(DOT_SECTION_DELIMITER, "").strip() 30 | dot_string = DOT_GRAPH_START + parts[1] 31 | if DOT_SECTION_DELIMITER in dot_string: 32 | parts_after_dot = dot_string.split(DOT_SECTION_DELIMITER) 33 | dot_string = parts_after_dot[0] 34 | human_output += config.END_LINE + config.END_LINE.join(parts_after_dot[1:]) 35 | print(" == BEGIN DOT ==") 36 | print(dot_string) 37 | print(" == END DOT ==") 38 | write_dot_to_file(dot_string, prompt_id) 39 | generate_png_from_dot(dot_string, prompt_id) 40 | 41 | return { 42 | "human_output": human_output, 43 | "dot": dot_string 44 | } 45 | -------------------------------------------------------------------------------- /dot/example-output/dot_graph_6.dot: -------------------------------------------------------------------------------- 1 | digraph G { 2 | 3 | // start 4 | start -> read_items_from_storage_list; 5 | 6 | // read_items_from_storage_list 7 | read_items_from_storage_list -> create_list_enumerator_list; 8 | 9 | // create_list_enumerator_list 10 | create_list_enumerator_list -> has_list_enumerator_more_items_list; 11 | 12 | // has_list_enumerator_more_items_list 13 | has_list_enumerator_more_items_list -> get_next_item_from_enumerator_list [label="Yes"]; 14 | has_list_enumerator_more_items_list -> write_items_to_storage_list [label="No"]; 15 | 16 | //
get_next_item_from_enumerator_list 17 | get_next_item_from_enumerator_list -> variable_item; 18 | 19 | // variable_item 20 | variable_item -> call_flow_add_item; 21 | 22 | // call_flow_add_item 23 | call_flow_add_item -> write_items_to_storage_list; 24 | 25 | // write_items_to_storage_list 26 | write_items_to_storage_list -> has_list_enumerator_more_items_list; 27 | 28 | // end 29 | has_list_enumerator_more_items_list -> end_list; 30 | 31 | start [shape=Mdiamond, label="Start"]; 32 | read_items_from_storage_list [shape=box, label="Read items from storage (list)"]; 33 | create_list_enumerator_list [shape=box, label="Create list enumerator (list)"]; 34 | has_list_enumerator_more_items_list [shape=diamond, label="Has list enumerator more items?"]; 35 | get_next_item_from_enumerator_list [shape=box, label="Get next item from enumerator (list)"]; 36 | variable_item [shape=box, label="Variable: item"]; 37 | call_flow_add_item [shape=box, label="Call flow: add item"]; 38 | write_items_to_storage_list [shape=box, label="Write items to storage (list)"]; 39 | end_list [shape=Msquare, label="End"]; 40 | 41 | } 42 | -------------------------------------------------------------------------------- /dot/example-output/dot_graph_10.dot: -------------------------------------------------------------------------------- 1 | digraph G { 2 | 3 | // start 4 | start -> read_items_from_storage_Orders; 5 | 6 | // read_items_from_storage_Orders 7 | read_items_from_storage_Orders -> create_list_enumerator_Orders; 8 | 9 | // create_list_enumerator_Orders 10 | create_list_enumerator_Orders -> has_list_enumerator_more_items_Orders; 11 | 12 | // has_list_enumerator_more_items_Orders 13 | has_list_enumerator_more_items_Orders -> get_next_item_from_enumerator_Orders [label="Yes"]; 14 | has_list_enumerator_more_items_Orders -> write_items_to_storage_TotalAmount [label="No"]; 15 | 16 | // get_next_item_from_enumerator_Orders 17 | get_next_item_from_enumerator_Orders -> variable_Order; 18 | 19 | // variable_Order 20 | variable_Order -> calculate_total_amount; 21 | 22 | // calculate_total_amount 23 | calculate_total_amount -> variable_TotalAmount; 24 | 25 | // variable_TotalAmount 26 | variable_TotalAmount -> has_list_enumerator_more_items_Orders; 27 | 28 | // write_items_to_storage_TotalAmount 29 | write_items_to_storage_TotalAmount -> end; 30 | 31 | read_items_from_storage_Orders [shape=Mdiamond, label="Read Orders from storage"]; 32 | create_list_enumerator_Orders [shape=Mdiamond, label="Create Orders enumerator"]; 33 | has_list_enumerator_more_items_Orders [shape=Mdiamond, label="Are there more Orders?"]; 34 | get_next_item_from_enumerator_Orders [shape=Mdiamond, label="Get next Order"]; 35 | variable_Order [shape=Mdiamond, label="Order"]; 36 | calculate_total_amount [shape=Mdiamond, label="Calculate total amount"]; 37 | variable_TotalAmount [shape=Mdiamond, label="Total Amount"]; 38 | write_items_to_storage_TotalAmount [shape=Mdiamond, label="Write Total Amount to storage"]; 39 | end [shape=Msquare, label="End"]; 40 | } 41 | -------------------------------------------------------------------------------- /dot/example-output/dot_graph_5.dot: -------------------------------------------------------------------------------- 1 | digraph G { 2 | 3 | // start 4 | start -> read_items_from_storage_JobApplications; 5 | 6 | // read_items_from_storage_JobApplications 7 | read_items_from_storage_JobApplications -> while_has_list_enumerator_more_items_JobApplications; 8 | 9 | // while_has_list_enumerator_more_items_JobApplications 10 | 
while_has_list_enumerator_more_items_JobApplications -> get_next_item_from_enumerator_JobApplications; 11 | while_has_list_enumerator_more_items_JobApplications -> end_JobApplications; 12 | 13 | // get_next_item_from_enumerator_JobApplications 14 | get_next_item_from_enumerator_JobApplications -> call_flow_CheckApplication; 15 | 16 | // call_flow_CheckApplication 17 | call_flow_CheckApplication -> decision_ShouldProceedToInterview; 18 | 19 | // decision_ShouldProceedToInterview 20 | decision_ShouldProceedToInterview -> end_ProceedToInterview; 21 | decision_ShouldProceedToInterview -> end_DoNotProceedToInterview; 22 | 23 | // end_JobApplications 24 | end_JobApplications [shape=Msquare, label="End"]; 25 | 26 | // end_ProceedToInterview 27 | end_ProceedToInterview [shape=Msquare, label="Proceed to Interview"]; 28 | 29 | // end_DoNotProceedToInterview 30 | end_DoNotProceedToInterview [shape=Msquare, label="Do Not Proceed to Interview"]; 31 | 32 | read_items_from_storage_JobApplications [shape=Mdiamond, label="Read Job Applications from Storage"]; 33 | while_has_list_enumerator_more_items_JobApplications [shape=Mdiamond, label="While there are more Job Applications"]; 34 | get_next_item_from_enumerator_JobApplications [shape=Mdiamond, label="Get Next Job Application"]; 35 | call_flow_CheckApplication [shape=Mdiamond, label="Call Check Application Flow"]; 36 | decision_ShouldProceedToInterview [shape=Mdiamond, label="Should Proceed to Interview?"]; 37 | 38 | } 39 | -------------------------------------------------------------------------------- /local-llm-q/config.py: -------------------------------------------------------------------------------- 1 | HF_PROJECT = "TheBloke/CodeLlama-13B-GGUF" 2 | 3 | #this one works 4 | MODEL_FILE__CODELLAMA_13B__Q3_K_M = "codellama-13b.q3_K_M.gguf" 5 | MODEL_FILE__CODELLAMA_13B__Q3_K_M__MAX_TOKENS = 512 # max 6 | 7 | MODEL_FILE__CODELLAMA_13_B__Q5_K_M = "codellama-13b.Q5_K_M.gguf" 8 | MODEL_FILE__CODELLAMA_13B__Q5_K_M__MAX_TOKENS = 512 # max 9 | 10 | #ACTIVE_MODEL_FILE = MODEL_FILE__CODELLAMA_13B__Q3_K_M 11 | ACTIVE_MODEL_FILE = MODEL_FILE__CODELLAMA_13_B__Q5_K_M 12 | MAX_NEW_TOKENS = MODEL_FILE__CODELLAMA_13B__Q5_K_M__MAX_TOKENS #4096, 1096 13 | 14 | MODEL_TYPE = "llama" 15 | 16 | GPU_LAYERS = 31 # 0 means 'no GPU' - if GPU try 50 or less - then probably need ctransformers[cuda] instead of ctransformers 17 | # 18 | # If too high - can actually be slower! - see https://www.reddit.com/r/LocalLLaMA/comments/14kt3hz/nvidia_user_make_sure_you_dont_offload_too_many/ 19 | # - depends on graphics card + the model 20 | # 21 | # Times for 24GB [only 8GB dedicated which seems to be the limit!] NVIDIA card with model MODEL_FILE__CODELLAMA_13B__Q3_K_M: 22 | # - note in Task manager, the Shared GPU memory should stay low, else is slower. 23 | # 0 = CPU only = 1m 34s 24 | # 50, 45, 44, 43, 42 was too high for that card and model 25 | # 41 = best time - 49.76s 26 | # 40 = 50s+ 27 | # 37 = 53s 28 | # 30 = 1m 38s 29 | # 25 = 1m 7s 30 | # 20 = 1m 22s 31 | # 10 = 1m 58s 32 | # 33 | # model MODEL_FILE__CODELLAMA_13_B__Q5_K_M 34 | # GPU layers, DOT prompt, quality good! 35 | # 31 (max this h/w) = 21s ! (once) 36 | # 28 = 20m 17s (!) 
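# To reproduce these timings for a given GPU_LAYERS value, a minimal sketch (an
# illustration only - it assumes this config module plus the util_time helper that
# llm-chat-q.py uses):
#
#   from ctransformers import AutoModelForCausalLM
#   import config, util_time
#
#   llm = AutoModelForCausalLM.from_pretrained(
#       config.HF_PROJECT,
#       model_file=config.ACTIVE_MODEL_FILE,
#       model_type=config.MODEL_TYPE,
#       gpu_layers=config.GPU_LAYERS,
#       max_new_tokens=config.MAX_NEW_TOKENS)
#
#   start = util_time.start_timer()
#   llm("create a DOT graph to decide a mortgage loan.")
#   print(util_time.describe_elapsed_seconds(util_time.end_timer(start)))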
37 | 38 | # getting GPU to work was tricky - ran this on the command line: 39 | # pip install "ctransformers[cuda]>=0.2.24" 40 | 41 | REPETITION_PENALTY = 1 # 1.13 - range is 1 (no penalty, default) to 'infinity' 42 | 43 | # TEMPERATURE = 0.5 44 | TEMPERATURE = 0.5 # range is normally 0 (consistent) to 1 (more 'creative') 45 | -------------------------------------------------------------------------------- /local-llm/main--general-coding-llm.py: -------------------------------------------------------------------------------- 1 | # ref = https://huggingface.co/codellama/CodeLlama-13b-hf 2 | # 3 | # The base 7B or 13B version in the Hugging Face Transformers format. 4 | # This model is designed for general code synthesis and understanding 5 | 6 | import sys 7 | from transformers import AutoTokenizer 8 | import transformers 9 | import torch 10 | 11 | if len(sys.argv) != 2: 12 | print(f"USAGE: {sys.argv[0]} <7B|13B>") 13 | sys.exit(1) 14 | model_size = sys.argv[1] 15 | 16 | model = None 17 | if model_size == "13B": 18 | model = "codellama/CodeLlama-13b-hf" 19 | elif model_size == "7B": 20 | model = "codellama/CodeLlama-7b-hf" 21 | else: 22 | raise ValueError("Model size must be 7B or 13B") 23 | 24 | tokenizer = AutoTokenizer.from_pretrained(model) 25 | pipeline = transformers.pipeline( 26 | "text-generation", 27 | model=model, 28 | # torch_dtype=torch.float16 would be the usual choice on a GPU: 29 | torch_dtype=torch.float32, # float32 works on CPU 30 | device_map="auto", 31 | ) 32 | 33 | prompt = """ 34 | create a DOT graph to decide a mortgage loan. if credit score is greater than 700 then check years employed. else reject. 35 | 36 | if years employed is greater than 3 then approve. 37 | else reject. 38 | 39 | name the DOT nodes with a prefix decision_ or end_ or other_. 40 | 41 | In the labels, refer to the available properties: applicant.credit_score, applicant.years_employed, applicant.other 42 | 43 | DOT: 44 | """ 45 | 46 | print(prompt) 47 | 48 | sequences = pipeline( 49 | prompt, 50 | #'import socket\n\ndef ping_exponential_backoff(host: str):', 51 | do_sample=True, 52 | top_k=10, 53 | temperature=0.1, 54 | top_p=0.95, 55 | num_return_sequences=1, 56 | eos_token_id=tokenizer.eos_token_id, 57 | max_length=200, 58 | ) 59 | for seq in sequences: 60 | print(f"Result: {seq['generated_text']}") 61 | -------------------------------------------------------------------------------- /dot/how_it_works-DOT-generation-from-natural-language.dot: -------------------------------------------------------------------------------- 1 | digraph G { 2 | graph [ 3 | label = "Workflow Generation via DOT format" 4 | labelloc = t 5 | 6 | //dpi = 200 7 | ranksep=0.65 8 | nodesep=0.40 9 | rankdir=TB 10 | 11 | len=0 12 | ] 13 | 14 | subgraph cluster_0 { 15 | label = "User"; 16 | 17 | User_In 18 | User_Out 19 | } 20 | 21 | subgraph cluster_1 { 22 | label = "Application"; 23 | 24 | Application_In 25 | Application_Out 26 | DOT_Parser 27 | Workflow 28 | } 29 | 30 | subgraph cluster_2 { 31 | color=blue 32 | label = "gpt-workflow"; 33 | 34 | core 35 | service_dot_parser 36 | 37 | dot_describer_prompt -> system_prompts 38 | dot_creator_prompt -> system_prompts 39 | } 40 | 41 | subgraph cluster_3 { 42 | label = "LLM"; 43 | 44 | LLM_In 45 | LLM_Out 46 | } 47 | 48 | subgraph cluster_4 { 49 | label = "Disk"; 50 | 51 | Image_File 52 | DOT_File 53 | } 54 | 55 | User_In -> Application_In[label="natural language description of a workflow"] 56 | 57 | system_prompts -> core [xlabel="system prompts"] 58 | 59 | Application_In -> core [label="natural language description of a workflow"] 60 |
61 | core -> LLM_In [xlabel="req[Description][format: Graphviz DOT]"] 62 | 63 | LLM_Out -> service_dot_parser [taillabel="rsp[Workflow in DOT format][summary]"] 64 | 65 | LLM_In -> LLM_Out 66 | 67 | service_dot_parser -> DOT_Parser [label="[Workflow in DOT format]"] 68 | service_dot_parser -> Application_Out [label="[summary]"] 69 | DOT_Parser -> Workflow [label=""] 70 | 71 | Application_Out -> User_Out [label="[summary]"] 72 | service_dot_parser -> Image_File [label="[DOT]"] 73 | service_dot_parser -> DOT_File [label="[DOT]"] 74 | } 75 | -------------------------------------------------------------------------------- /training/training-data-generator/workflow_trainingset_creator.py: -------------------------------------------------------------------------------- 1 | from command import Command 2 | 3 | # note: Instead of this steps-in-single-prompt approach, we could have an inter-dependent chain, to collect info about the app, THEN try to generate. 4 | # BUT the step-by-step approach works really well, at least with Chat-GPT3.5 Turbo. 5 | 6 | create_dot_workflows__expert_template = """You are Workflow Training-set Creator, a bot that knows how to create training data to train another LLM to answer questions about creating workflows in DOT format. 7 | You are great at answering requests to create more training examples about creating, altering and describing a workflow in DOT format. 8 | 9 | When you don't know the answer to a question, do not answer. 10 | 11 | You are an AI assistant to generate training examples about creating or describing a workflow in DOT format, using natural language input. 12 | 13 | The output MUST be in CSV only, based on the following example: 14 | ``` 15 | <workflow-name>,<summary>,<description>,<request>,<DOT> 16 | <workflow-name>,<summary>,<description>,<request>,<DOT> 17 | <workflow-name>,<summary>,<description>,<request>,<DOT> 18 | ``` 19 | 20 | where: 21 | - <workflow-name> is the name of a workflow. The workflow should follow a typical process to solve a business problem. 22 | - <summary> is a short summary of the overall purpose of the workflow. 23 | - <description> is a longer description of the workflow and its process. 24 | - <request> is an example of a high-level request from a user to create this workflow. 25 | - <DOT> is the workflow in graphviz DOT format 26 | 27 | IMPORTANT: only output valid CSV, with valid graphviz DOT format.
28 | """ 29 | 30 | # Each expert is a prompt that knows how to handle one type of user input 31 | EXPERT_COMMANDS = [ 32 | Command('create_dot_workflows', create_dot_workflows__expert_template, "Good for answering questions about creating training data about workflows in DOT format.") 33 | ] 34 | -------------------------------------------------------------------------------- /dot/example-output/dot_graph_11.dot: -------------------------------------------------------------------------------- 1 | digraph G { 2 | 3 | // start 4 | start [shape=ellipse, label="Start"]; 5 | 6 | // decision_budget 7 | start -> decision_budget; 8 | decision_budget [shape=diamond, label="Budget?"]; 9 | 10 | // decision_budget_yes 11 | decision_budget -> decision_budget_yes [label="Yes"]; 12 | decision_budget_yes [shape=diamond, label="High budget?"]; 13 | 14 | // decision_budget_no 15 | decision_budget -> decision_budget_no [label="No"]; 16 | decision_budget_no [shape=diamond, label="Low budget?"]; 17 | 18 | // decision_budget_yes -> end_high_budget 19 | decision_budget_yes -> end_high_budget [label="Yes"]; 20 | end_high_budget [shape=box, label="Choose high-end brand"]; 21 | 22 | // decision_budget_yes -> end_mid_budget 23 | decision_budget_yes -> end_mid_budget [label="No"]; 24 | end_mid_budget [shape=box, label="Choose mid-range brand"]; 25 | 26 | // decision_budget_no -> end_low_budget 27 | decision_budget_no -> end_low_budget [label="Yes"]; 28 | end_low_budget [shape=box, label="Choose low-cost brand"]; 29 | 30 | // decision_budget_no -> decision_features 31 | decision_budget_no -> decision_features [label="No"]; 32 | decision_features [shape=diamond, label="Specific features?"]; 33 | 34 | // decision_features -> end_specific_features 35 | decision_features -> end_specific_features [label="Yes"]; 36 | end_specific_features [shape=box, label="Choose brand with specific features"]; 37 | 38 | // decision_features -> end_general_features 39 | decision_features -> end_general_features [label="No"]; 40 | end_general_features [shape=box, label="Choose brand with general features"]; 41 | 42 | // end 43 | end_high_budget [shape=ellipse, label="End"]; 44 | end_mid_budget [shape=ellipse, label="End"]; 45 | end_low_budget [shape=ellipse, label="End"]; 46 | end_specific_features [shape=ellipse, label="End"]; 47 | end_general_features [shape=ellipse, label="End"]; 48 | } -------------------------------------------------------------------------------- /dot/example-output/dot_graph_8.dot: -------------------------------------------------------------------------------- 1 | digraph G { 2 | 3 | // start 4 | start -> read_items_from_storage_list; 5 | 6 | // read_items_from_storage_list 7 | read_items_from_storage_list -> create_list_enumerator_list; 8 | 9 | // create_list_enumerator_list 10 | create_list_enumerator_list -> has_list_enumerator_more_items_list; 11 | 12 | // has_list_enumerator_more_items_list 13 | has_list_enumerator_more_items_list -> get_next_item_from_enumerator_list [label="Yes"]; 14 | has_list_enumerator_more_items_list -> end_list [label="No"]; 15 | 16 | // get_next_item_from_enumerator_list 17 | get_next_item_from_enumerator_list -> variable_item; 18 | 19 | // variable_item 20 | variable_item -> call_flow_check_boolean; 21 | 22 | // call_flow_check_boolean 23 | call_flow_check_boolean -> decision_boolean_result; 24 | 25 | // decision_boolean_result 26 | decision_boolean_result -> add_item_to_list [label="True"]; 27 | decision_boolean_result -> write_items_to_storage_list [label="False"]; 28 | 29 | // 
add_item_to_list 30 | add_item_to_list -> write_items_to_storage_list; 31 | 32 | // write_items_to_storage_list 33 | write_items_to_storage_list -> has_list_enumerator_more_items_list; 34 | 35 | // end 36 | end_list [shape=Msquare, label="End"]; 37 | 38 | start [shape=Mdiamond, label="Start"]; 39 | read_items_from_storage_list [shape=box, label="Read items from storage (list)"]; 40 | create_list_enumerator_list [shape=box, label="Create list enumerator (list)"]; 41 | has_list_enumerator_more_items_list [shape=diamond, label="Has list enumerator more items?"]; 42 | get_next_item_from_enumerator_list [shape=box, label="Get next item from enumerator (list)"]; 43 | variable_item [shape=box, label="Variable: item"]; 44 | call_flow_check_boolean [shape=box, label="Call flow: check boolean"]; 45 | decision_boolean_result [shape=diamond, label="Boolean result"]; 46 | add_item_to_list [shape=box, label="Add item to list"]; 47 | write_items_to_storage_list [shape=box, label="Write items to storage (list)"]; 48 | end_list [shape=Msquare, label="End"]; 49 | 50 | } 51 | -------------------------------------------------------------------------------- /json_fixer.py: -------------------------------------------------------------------------------- 1 | import json 2 | 3 | import config 4 | 5 | def try_add_missing_quotes(rsp): 6 | # rsp almost valid, but no quotes on the keys! 7 | #example: 8 | # { 9 | # bot_name: "Document Creator Bot", 10 | # command_name: "Create Document", 11 | # message_to_user: "What type of document would you like to create? Text, Image, or Spreadsheet?", 12 | # document_type: None 13 | # } 14 | new_rsp = "" 15 | split_by_space = rsp.split(" ") 16 | for x in split_by_space: 17 | if x.endswith(':'): 18 | x = f'"{x}":' 19 | new_rsp += x + " " 20 | return new_rsp 21 | 22 | def force_to_json(rsp, property_name): 23 | try: 24 | if config.is_debug: 25 | print("RSP: ") 26 | print(rsp) 27 | print("") 28 | json.loads(rsp, strict=False) 29 | return rsp 30 | except Exception as e1: 31 | if config.is_debug: 32 | print(e1) 33 | print("RSP: ") 34 | print(rsp) 35 | try: 36 | rsp = rsp.replace("Output:", "") 37 | if config.is_debug: 38 | print(rsp) 39 | json.loads(rsp, strict=False) 40 | return rsp 41 | except Exception as e1: 42 | if config.is_debug: 43 | print(e1) 44 | print("RSP: ") 45 | print(rsp) 46 | try: 47 | rsp = try_add_missing_quotes(rsp) 48 | if config.is_debug: 49 | print(rsp) 50 | json.loads(rsp, strict=False) 51 | return rsp 52 | except Exception: 53 | if not ":" in rsp: # not already an attempt at JSON 54 | dict = { 55 | property_name: rsp 56 | } 57 | return json.dumps(dict) 58 | else: # an attempt at JSON, but not valid JSON! 59 | dict = { 60 | property_name: "Cannot answer this question", 61 | "error": "Invalid JSON", 62 | "original": rsp 63 | } 64 | return json.dumps(dict) 65 | -------------------------------------------------------------------------------- /training/training-data-generator/json_fixer.py: -------------------------------------------------------------------------------- 1 | import json 2 | 3 | import config 4 | 5 | def try_add_missing_quotes(rsp): 6 | # rsp almost valid, but no quotes on the keys! 7 | #example: 8 | # { 9 | # bot_name: "Document Creator Bot", 10 | # command_name: "Create Document", 11 | # message_to_user: "What type of document would you like to create? 
Text, Image, or Spreadsheet?", 12 | # document_type: None 13 | # } 14 | new_rsp = "" 15 | split_by_space = rsp.split(" ") 16 | for x in split_by_space: 17 | if x.endswith(':'): 18 | x = f'"{x}":' 19 | new_rsp += x + " " 20 | return new_rsp 21 | 22 | def force_to_json(rsp, property_name): 23 | try: 24 | if config.is_debug: 25 | print("RSP: ") 26 | print(rsp) 27 | print("") 28 | json.loads(rsp, strict=False) 29 | return rsp 30 | except Exception as e1: 31 | if config.is_debug: 32 | print(e1) 33 | print("RSP: ") 34 | print(rsp) 35 | try: 36 | rsp = rsp.replace("Output:", "") 37 | if config.is_debug: 38 | print(rsp) 39 | json.loads(rsp, strict=False) 40 | return rsp 41 | except Exception as e1: 42 | if config.is_debug: 43 | print(e1) 44 | print("RSP: ") 45 | print(rsp) 46 | try: 47 | rsp = try_add_missing_quotes(rsp) 48 | if config.is_debug: 49 | print(rsp) 50 | json.loads(rsp, strict=False) 51 | return rsp 52 | except Exception: 53 | if not ":" in rsp: # not already an attempt at JSON 54 | dict = { 55 | property_name: rsp 56 | } 57 | return json.dumps(dict) 58 | else: # an attempt at JSON, but not valid JSON! 59 | dict = { 60 | property_name: "Cannot answer this question", 61 | "error": "Invalid JSON", 62 | "original": rsp 63 | } 64 | return json.dumps(dict) 65 | -------------------------------------------------------------------------------- /local-llm-q/README.md: -------------------------------------------------------------------------------- 1 | # Local LLM - Quantized for less RAM 2 | 3 | ## References 4 | 5 | https://huggingface.co/TheBloke/CodeLlama-13B-GGUF 6 | 7 | https://www.youtube.com/watch?v=rZz5AORu8zE 8 | 9 | ## Usage on Windows - Quick start (for Unix, use the .sh scripts) 10 | 11 | Open a command prompt. 12 | 13 | Create a Python environment, to be able to install dependencies without interfering with other Python projects on the same machine: 14 | 15 | ``` 16 | create_env.bat 17 | 18 | where python 19 | 20 | env\Scripts\python --version 21 | ``` 22 | - output: should output the location and version of Python, at env\Scripts\python.exe. 23 | 24 | Install and run the LLM. 25 | 26 | - warning: this downloads the `Code Llama 13B Quantized` LLM which is about 10 GB (multiple files). 27 | 28 | - the files are downloaded to this folder: 29 | 30 | `dir %USERPROFILE%\.cache\huggingface\hub` 31 | 32 | ``` 33 | install.bat 34 | go.bat 35 | ``` 36 | 37 | - output: prompt sent to LLM, and its response: 38 | 39 | ``` 40 | ``` //`begin delimiter 41 | digraph { 42 | rankdir=LR 43 | node [shape=box] 44 | start_ [label="start"] 45 | decision_credit_score [label="credit score > 700"] 46 | decision_years_employed [label="years employed > 3"] 47 | end_approved [label="approved"] 48 | end_rejected [label=" 49 | ``` //` end delimiter 50 | ``` 51 | 52 | When done, deactivate the Python environment: 53 | 54 | ``` 55 | env\Scripts\deactivate 56 | 57 | where python 58 | ``` 59 | 60 | - output: should output the usual location of Python 61 | 62 | ## Results 63 | 64 | ### Model: codellama-13b.Q5_K_M.gguf 65 | 66 | - REPETITION_PENALTY = 1 (default) 67 | - TEMPERATURE = 0.5 68 | 69 | Prompt: 70 | 71 | ``` 72 | create a DOT graph to decide a mortgage loan. 73 | if credit score is greater than 700 then check years employed. else reject. 74 | if years employed is greater than 3 then approve. else reject.
75 | 76 | digraph G { 77 | ``` 78 | 79 | Response: 80 | 81 | ``` 82 | rankdir=LR; 83 | node [shape=record]; 84 | credit_score [label="{credit score: 700}"]; 85 | years_employed [label="{years employed: 5}"]; 86 | credit_score -> years_employed [label="700 > 700"]; 87 | years_employed -> approve [label="5 > 3"]; 88 | years_employed -> reject [label="5 <= 3"]; 89 | approve [label="approve"]; 90 | reject [label="reject"]; 91 | } 92 | ``` 93 | -------------------------------------------------------------------------------- /gpt-workflow-csharp/gpt-workflow-csharp-cli/Builder/DotBuilder.cs: -------------------------------------------------------------------------------- 1 | using System.Text; 2 | using Visitor; 3 | 4 | namespace Builder; 5 | 6 | public class DotBuilder 7 | { 8 | string identifier = "G"; // the identifier of the overall graph 9 | string label = ""; // the label of the overall graph 10 | int next_id = 1; 11 | readonly List<Edge> edges = new(); 12 | readonly List<Node> nodes = new(); 13 | 14 | string NextId(NodeKind kind) => $"{kind}_{next_id++}"; 15 | 16 | public Node AddNode(NodeKind kind, string identifier = "") 17 | { 18 | if (string.IsNullOrEmpty(identifier)) 19 | identifier = NextId(kind); 20 | var node = new Node(kind, identifier); 21 | nodes.Add(node); 22 | return node; 23 | } 24 | 25 | public Edge AddEdge(Node one, Node two, string label = "") 26 | { 27 | var edge = new Edge(one, two, label); 28 | edges.Add(edge); 29 | return edge; 30 | } 31 | 32 | public void SetId(string identifier) 33 | { 34 | this.identifier = identifier.Replace(" ", "_").Replace("-", "_"); 35 | } 36 | 37 | public void SetLabel(string label) 38 | { 39 | this.label = label; 40 | } 41 | 42 | public string Build() 43 | { 44 | var sb = new StringBuilder(); 45 | sb.AppendLine($"digraph {identifier} {{"); 46 | 47 | if (!string.IsNullOrEmpty(label)) 48 | sb.AppendLine($" label=\"{label}\""); 49 | 50 | foreach(var node in nodes) 51 | sb.AppendLine($" {GetNodeAsDot(node)}[shape={GetShape(node)}, label=\"{node.Identifier}\" ];"); 52 | 53 | foreach(var edge in edges) 54 | { 55 | var edgeLabel = ""; 56 | if (!string.IsNullOrEmpty(edge.Label)) 57 | { 58 | edgeLabel = $" [label=\"{edge.Label}\"]"; 59 | } 60 | sb.AppendLine($" {GetNodeAsDot(edge.Start)} -> {GetNodeAsDot(edge.End)}{edgeLabel};"); 61 | } 62 | 63 | sb.AppendLine("}"); 64 | 65 | return sb.ToString(); 66 | } 67 | 68 | string GetNodeAsDot(Node node) => $"{node.Kind}_{node.Identifier.Replace(" ", "_")}"; 69 | 70 | string GetShape(Node node) 71 | { 72 | switch(node.Kind) 73 | { 74 | case NodeKind.Decision: 75 | return "Mdiamond"; 76 | case NodeKind.End: 77 | return "rectangle"; 78 | case NodeKind.Start: 79 | return "ellipse"; 80 | default: 81 | return "Msquare"; 82 | } 83 | } 84 | } 85 | -------------------------------------------------------------------------------- /dot/example-output/dot_graph_7.dot: -------------------------------------------------------------------------------- 1 | digraph G { 2 | 3 | // start 4 | start -> read_items_from_storage_list1; 5 | 6 | // read_items_from_storage_list1 7 | read_items_from_storage_list1 -> create_list_enumerator_list1; 8 | 9 | // create_list_enumerator_list1 10 | create_list_enumerator_list1 -> has_list_enumerator_more_items_list1; 11 | 12 | // has_list_enumerator_more_items_list1 13 | has_list_enumerator_more_items_list1 -> get_next_item_from_enumerator_list1 [label="Yes"]; 14 | has_list_enumerator_more_items_list1 -> read_items_from_storage_list2 [label="No"]; 15 | 16 | // get_next_item_from_enumerator_list1 17 |
get_next_item_from_enumerator_list1 -> variable_item; 18 | 19 | // variable_item 20 | variable_item -> call_flow_add_item; 21 | 22 | // call_flow_add_item 23 | call_flow_add_item -> write_items_to_storage_list; 24 | 25 | // read_items_from_storage_list2 26 | read_items_from_storage_list2 -> create_list_enumerator_list2; 27 | 28 | // create_list_enumerator_list2 29 | create_list_enumerator_list2 -> has_list_enumerator_more_items_list2; 30 | 31 | // has_list_enumerator_more_items_list2 32 | has_list_enumerator_more_items_list2 -> get_next_item_from_enumerator_list2 [label="Yes"]; 33 | has_list_enumerator_more_items_list2 -> write_items_to_storage_list [label="No"]; 34 | 35 | // get_next_item_from_enumerator_list2 36 | get_next_item_from_enumerator_list2 -> variable_item; 37 | 38 | // write_items_to_storage_list 39 | write_items_to_storage_list -> has_list_enumerator_more_items_list1; 40 | 41 | // end 42 | has_list_enumerator_more_items_list1 -> end_list; 43 | 44 | start [shape=Mdiamond, label="Start"]; 45 | read_items_from_storage_list1 [shape=box, label="Read items from storage (list1)"]; 46 | create_list_enumerator_list1 [shape=box, label="Create list enumerator (list1)"]; 47 | has_list_enumerator_more_items_list1 [shape=diamond, label="Has list enumerator more items?"]; 48 | get_next_item_from_enumerator_list1 [shape=box, label="Get next item from enumerator (list1)"]; 49 | variable_item [shape=box, label="Variable: item"]; 50 | call_flow_add_item [shape=box, label="Call flow: add item"]; 51 | read_items_from_storage_list2 [shape=box, label="Read items from storage (list2)"]; 52 | create_list_enumerator_list2 [shape=box, label="Create list enumerator (list2)"]; 53 | has_list_enumerator_more_items_list2 [shape=diamond, label="Has list enumerator more items?"]; 54 | get_next_item_from_enumerator_list2 [shape=box, label="Get next item from enumerator (list2)"]; 55 | write_items_to_storage_list [shape=box, label="Write items to storage (list1)"]; 56 | end_list [shape=Msquare, label="End"]; 57 | 58 | } 59 | -------------------------------------------------------------------------------- /prompts_dot_graph_creator.py: -------------------------------------------------------------------------------- 1 | from command import Command 2 | 3 | # note: Instead of this steps-in-single-prompt approach, we could have an inter-dependent chain, to collect info about the app, THEN try to generate. 4 | # BUT the step-by-step approach works really well, at least with Chat-GPT3.5 Turbo. 5 | 6 | create_dot_flowchart__expert_template = """You are Workflow Creator Bot, a bot that knows how to create a simple DOT format flow chart. 7 | You are great at answering questions about creating and altering a flow chart. 8 | 9 | When you don't know the answer to a question, do not answer. 10 | 11 | You are an AI assistant to assist an application developer with the creation of the flow chart via natural language input. 
12 | 13 | The output MUST be in DOT format as used by the graphviz tool only, based on the following example: 14 | ``` 15 | digraph G { 16 | 17 | // decision_has_feathers 18 | start -> decision_has_feathers; 19 | decision_has_feathers -> decision_can_fly 20 | decision_has_feathers -> decision_has_fins 21 | 22 | // decision_can_fly 23 | decision_can_fly -> end_Hawk 24 | decision_can_fly -> end_Penguin 25 | 26 | // decision_has_fins 27 | decision_has_fins -> end_Dolphin 28 | decision_has_fins -> end_Bear 29 | 30 | decision_has_feathers [shape=Mdiamond, label="Has feathers?"]; 31 | decision_can_fly [shape=Mdiamond, label="Can fly?"]; 32 | decision_has_fins [shape=Mdiamond, label="Has fins?"]; 33 | } 34 | ``` 35 | 36 | IMPORTANT: Nodes of the flow digraph MUST be named to match this whitelist: 37 | - start, decision_, end_, while_, read_items_from_storage_, write_items_to_storage_, create_list_enumerator_, has_list_enumerator_more_items_, get_next_item_from_enumerator_, variable_, parameter_, call_flow_, other_ 38 | 39 | IMPORTANT: Only output valid DOT format as used by the graphviz tool. 40 | """ 41 | 42 | describe_dot_flowchart__expert_template = """ 43 | If the user asks for a description or summary of the DOT format flow chart then: 44 | - provide an explanation of the flow, and an overall summary of the flow 45 | - use clear natural language, in a concise, friendly tone. 46 | """ 47 | 48 | def getExpertCommandToCreateDot(): 49 | return Command('create_dot_workflow', create_dot_flowchart__expert_template, "Good for answering questions about creating a workflow in DOT notation") 50 | 51 | def getExpertCommandToDescribeDot(): 52 | return Command('describe_dot_workflow', describe_dot_flowchart__expert_template, "Good for describing a workflow given in DOT notation, summarizing its activity and its general purpose") 53 | 54 | # Each expert is a prompt that knows how to handle one type of user input 55 | EXPERT_COMMANDS = [ 56 | getExpertCommandToDescribeDot(), 57 | # Placing this last, so that its IMPORTANT message about the whitelist is not ignored (LLMs tend to ignore content in the middle) 58 | # An approach like LangChain's MULTI_PROMPT_ROUTER_TEMPLATE would avoid this problem. 59 | getExpertCommandToCreateDot(), 60 | ] 61 | 62 | def GetPromptToDescribeWorkflow(dotText): 63 | return f"""Describe this workflow: ```{dotText}```""" 64 | -------------------------------------------------------------------------------- /local-llm/README.md: -------------------------------------------------------------------------------- 1 | # Local LLM README 2 | 3 | Self-Hosting an LLM (`Code Llama 13B`), rather than using an OpenAI service. 4 | 5 | ref - https://huggingface.co/codellama/CodeLlama-13b-hf 6 | 7 | ## Usage on Windows - Quick start (for Unix, use the .sh scripts) 8 | 9 | Open a command prompt. 10 | 11 | Create a Python environment, to be able to install dependencies without interfering with other Python projects on the same machine: 12 | 13 | ``` 14 | create_env.bat 15 | 16 | where python 17 | 18 | env\Scripts\python --version 19 | ``` 20 | - output: should output the location and version of Python, at env\Scripts\python.exe. 21 | 22 | Install and run the LLM. 23 | 24 | - warning: this downloads the `Code Llama 13B` LLM which is about 26 GB (multiple files).
25 | 26 | - the files are downloaded to this folder: 27 | 28 | `dir %USERPROFILE%\.cache\huggingface\hub` 29 | 30 | ``` 31 | install.bat 32 | go.bat 33 | ``` 34 | 35 | - output: prompt sent to LLM, and its response: 36 | 37 | ``` 38 | ``` //`begin delimiter 39 | digraph { 40 | rankdir=LR 41 | node [shape=box] 42 | start_ [label="start"] 43 | decision_credit_score [label="credit score > 700"] 44 | decision_years_employed [label="years employed > 3"] 45 | end_approved [label="approved"] 46 | end_rejected [label=" 47 | ``` //` end delimiter 48 | ``` 49 | 50 | When done, deactivate the Python environment: 51 | 52 | ``` 53 | env\Scripts\deactivate 54 | 55 | where python 56 | ``` 57 | 58 | - output: should output the usual location of Python 59 | 60 | ## Usage on Windows - DETAILED 61 | 62 | Ref = https://github.com/mrseanryan/gpt-dm/issues/6 63 | 64 | To use via transformers (locally) 65 | 66 | - First, use a pip environment: 67 | 68 | On Windows, Command prompt: 69 | ``` 70 | cd my-project 71 | py -m venv env 72 | 73 | env\Scripts\activate 74 | 75 | where python 76 | 77 | .\env\Scripts\python.exe 78 | 79 | # when done: 80 | env\Scripts\deactivate 81 | ``` 82 | 83 | ref = https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/#:~:text=To%20create%20a%20virtual%20environment,virtualenv%20in%20the%20below%20commands.&text=The%20second%20argument%20is%20the,project%20and%20call%20it%20env%20. 84 | 85 | Then install special version of transformers for this model: 86 | 87 | ``` 88 | env\Scripts\pip install git+https://github.com/huggingface/transformers.git@main accelerate 89 | 90 | env\Scripts\pip freeze > requirements.txt 91 | ``` 92 | 93 | Next time around, with the environment activated, can install via: 94 | 95 | ``` 96 | env\Scripts\pip install -r requirements.txt 97 | ``` 98 | 99 | Using the LLM: 100 | ``` 101 | # Use a pipeline as a high-level helper 102 | from transformers import pipeline 103 | 104 | pipe = pipeline("text-generation", model="codellama/CodeLlama-13b-Python-hf") 105 | 106 | rsp = pipe("Generate the first 10 numbers of fibonacci in Python") 107 | ``` 108 | 109 | OR: 110 | 111 | ``` 112 | # Load model directly 113 | from transformers import AutoTokenizer, AutoModelForCausalLM 114 | 115 | tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-13b-Python-hf") 116 | model = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-13b-Python-hf") 117 | ``` 118 | -------------------------------------------------------------------------------- /local-llm-q/llm-chat-q.py: -------------------------------------------------------------------------------- 1 | import gradio as gr 2 | import time 3 | from ctransformers import AutoModelForCausalLM 4 | 5 | import config 6 | import util_time 7 | 8 | # ref https://huggingface.co/TheBloke/CodeLlama-13B-GGUF 9 | # ref https://www.youtube.com/watch?v=rZz5AORu8zE 10 | def load_llm(): 11 | llm = AutoModelForCausalLM.from_pretrained( 12 | config.HF_PROJECT, 13 | model_file=config.ACTIVE_MODEL_FILE, 14 | model_type=config.MODEL_TYPE, 15 | gpu_layers=config.GPU_LAYERS, 16 | max_new_tokens = config.MAX_NEW_TOKENS, 17 | repetition_penalty = config.REPETITION_PENALTY, 18 | temperature = config.TEMPERATURE 19 | ) 20 | return llm 21 | 22 | print("Loading LLM...") 23 | llm = load_llm() 24 | print("[done]") 25 | 26 | def print_config(): 27 | print(f"GPU layers used: {config.GPU_LAYERS}") 28 | 29 | print_config() 30 | 31 | def llm_function(message, chat_history): 32 | global llm 33 | 34 | print(f">> {message}") 35 
| start = util_time.start_timer() 36 | 37 | response = llm(message) 38 | print(response) 39 | 40 | time_elapsed = util_time.end_timer(start) 41 | print(f"Time taken: {util_time.describe_elapsed_seconds(time_elapsed)}") 42 | 43 | return response 44 | 45 | title = "CodeLlama 13B GGUF (quantized) Demo" 46 | 47 | examples = [ 48 | # best DOT prompt with the quantized model MODEL_FILE__CODELLAMA_13B__Q3_K_M 49 | """ 50 | create a DOT graph to decide a mortgage loan. 51 | if credit score is greater than 700 then check years employed. else reject. 52 | if years employed is greater than 3 then approve. else reject. 53 | 54 | digraph G { 55 | """, 56 | # This prompt works better, just with the separate lines! 57 | """ 58 | create a DOT graph to decide a mortgage loan. 59 | if credit score is greater than 700 then check years employed. else reject. 60 | if years employed is greater than 3 then approve. else reject. 61 | 62 | DOT: 63 | """, 64 | """ 65 | create a DOT graph to decide a mortgage loan. if credit score is greater than 700 then check years employed. else reject. 66 | 67 | DOT: 68 | """, 69 | """ 70 | create a DOT graph to decide a mortgage loan. if credit score is greater than 700 then check years employed. else reject. 71 | 72 | if years employed is greater than 3 then approve. 73 | else reject. 74 | 75 | name the DOT nodes with a prefix decision_ or end_ or other_. 76 | 77 | In the labels, refer to the available properties: applicant.credit_score, applicant.years_employed, applicant.other 78 | 79 | DOT: 80 | """, 81 | """ 82 | What is the overal purpose of this DOT graph: 83 | 84 | ``` 85 | digraph { 86 | rankdir=LR; 87 | node [shape = box]; 88 | start [label="start"]; 89 | end [label="end"]; 90 | start -> a [label="credit score > 700"]; 91 | start -> b [label="credit score < 700"]; 92 | a -> c [label="years employed > 3"]; 93 | a -> d [label="years employed < 3"]; 94 | b -> e [label="reject"]; 95 | c -> f [label="approve"]; 96 | d -> g [label="reject"]; 97 | f -> end; 98 | g -> end; 99 | } 100 | ``` 101 | """, 102 | 'Write a python code to connect with a SQL database and list down all the tables.', 103 | 'Write the python code to train a linear regression model using scikit learn.', 104 | 'Write the code to implement a binary tree implementation in C language.', 105 | 'What are the benefits of the python programming language?', 106 | 'AI is going to' 107 | ] 108 | 109 | print(f"Hosting local LLM - you do NOT need to be logged in to HF") 110 | 111 | gr.ChatInterface( 112 | fn = llm_function, 113 | title=title, 114 | examples = examples 115 | ).launch() 116 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | build/ 12 | develop-eggs/ 13 | dist/ 14 | downloads/ 15 | eggs/ 16 | .eggs/ 17 | lib/ 18 | lib64/ 19 | parts/ 20 | sdist/ 21 | var/ 22 | wheels/ 23 | share/python-wheels/ 24 | *.egg-info/ 25 | .installed.cfg 26 | *.egg 27 | MANIFEST 28 | 29 | # PyInstaller 30 | # Usually these files are written by a python script from a template 31 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 
32 | *.manifest 33 | *.spec 34 | 35 | # Installer logs 36 | pip-log.txt 37 | pip-delete-this-directory.txt 38 | 39 | # Unit test / coverage reports 40 | htmlcov/ 41 | .tox/ 42 | .nox/ 43 | .coverage 44 | .coverage.* 45 | .cache 46 | nosetests.xml 47 | coverage.xml 48 | *.cover 49 | *.py,cover 50 | .hypothesis/ 51 | .pytest_cache/ 52 | cover/ 53 | 54 | # Translations 55 | *.mo 56 | *.pot 57 | 58 | # Django stuff: 59 | *.log 60 | local_settings.py 61 | db.sqlite3 62 | db.sqlite3-journal 63 | 64 | # Flask stuff: 65 | instance/ 66 | .webassets-cache 67 | 68 | # Scrapy stuff: 69 | .scrapy 70 | 71 | # Sphinx documentation 72 | docs/_build/ 73 | 74 | # PyBuilder 75 | .pybuilder/ 76 | target/ 77 | 78 | # Jupyter Notebook 79 | .ipynb_checkpoints 80 | 81 | # IPython 82 | profile_default/ 83 | ipython_config.py 84 | 85 | # pyenv 86 | # For a library or package, you might want to ignore these files since the code is 87 | # intended to run in multiple environments; otherwise, check them in: 88 | # .python-version 89 | 90 | # pipenv 91 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. 92 | # However, in case of collaboration, if having platform-specific dependencies or dependencies 93 | # having no cross-platform support, pipenv may install dependencies that don't work, or not 94 | # install all needed dependencies. 95 | #Pipfile.lock 96 | 97 | # poetry 98 | # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control. 99 | # This is especially recommended for binary packages to ensure reproducibility, and is more 100 | # commonly ignored for libraries. 101 | # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control 102 | #poetry.lock 103 | 104 | # pdm 105 | # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control. 106 | #pdm.lock 107 | # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it 108 | # in version control. 109 | # https://pdm.fming.dev/#use-with-ide 110 | .pdm.toml 111 | 112 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm 113 | __pypackages__/ 114 | 115 | # Celery stuff 116 | celerybeat-schedule 117 | celerybeat.pid 118 | 119 | # SageMath parsed files 120 | *.sage.py 121 | 122 | # Environments 123 | .env 124 | .venv 125 | env/ 126 | venv/ 127 | ENV/ 128 | env.bak/ 129 | venv.bak/ 130 | 131 | # Spyder project settings 132 | .spyderproject 133 | .spyproject 134 | 135 | # Rope project settings 136 | .ropeproject 137 | 138 | # mkdocs documentation 139 | /site 140 | 141 | # mypy 142 | .mypy_cache/ 143 | .dmypy.json 144 | dmypy.json 145 | 146 | # Pyre type checker 147 | .pyre/ 148 | 149 | # pytype static type analyzer 150 | .pytype/ 151 | 152 | # Cython debug symbols 153 | cython_debug/ 154 | 155 | # PyCharm 156 | # JetBrains specific template is maintained in a separate JetBrains.gitignore that can 157 | # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore 158 | # and can be added to the global gitignore or merged into this file. For a more nuclear 159 | # option (not recommended) you can uncomment the following to ignore the entire idea folder. 
160 | #.idea/ 161 | -------------------------------------------------------------------------------- /gpt-workflow-csharp/gpt-workflow-csharp-cli/Program.cs: -------------------------------------------------------------------------------- 1 | 2 | using System.Diagnostics; 3 | using System.Text; 4 | using Builder; 5 | using Visitor; 6 | 7 | args = args.Where(a => !string.IsNullOrEmpty(a)).ToArray(); 8 | 9 | switch (args.Length) 10 | { 11 | case 1: 12 | { 13 | var cmd = args[0]; 14 | if (cmd == "create-example-dot") 15 | { 16 | var dot = CreateExampleDot(); 17 | Console.WriteLine(dot); 18 | return 0; 19 | } 20 | else if (cmd == "send-example-dot-to-describe") 21 | { 22 | var dot = CreateExampleDot(); 23 | Console.WriteLine(dot); 24 | 25 | var gptWorkflowClient = new Client.GptWorkflowClient(Config.Host, Config.Port); 26 | var rsp = await gptWorkflowClient.DescribeDot(dot); 27 | Console.WriteLine($"-- Generated Description --"); 28 | Console.WriteLine(rsp); 29 | return 0; 30 | } 31 | else ShowUsage(); 32 | return 6661; 33 | } 34 | case 2: 35 | { 36 | var cmd = args[0]; 37 | if (cmd == "parse") 38 | { 39 | ParseAndDumpDotFile(args[1]); 40 | return 0; 41 | } 42 | else if (cmd == "generate-dot-and-parse") 43 | { 44 | var description = args[1]; 45 | var gptWorkflowClient = new Client.GptWorkflowClient(Config.Host, Config.Port); 46 | var rsp = await gptWorkflowClient.GenerateDot(description); 47 | Console.WriteLine($"-- Generated Description --"); 48 | Console.WriteLine(rsp.description); 49 | Console.WriteLine($"-- Generated DOT --"); 50 | Console.WriteLine(rsp.dot); 51 | Console.WriteLine($"-- Parsed DOT --"); 52 | ParseAndDumpDot(rsp.dot); 53 | return 0; 54 | } 55 | else ShowUsage(); 56 | return 6662; 57 | } 58 | default: 59 | { 60 | ShowUsage(); 61 | return 666; 62 | } 63 | } 64 | 65 | string CreateExampleDot() 66 | { 67 | var builder = new DotBuilder(); 68 | builder.SetId("Decide Job Applicant"); 69 | builder.SetLabel("Decide Job Applicant"); // intentionally NOT much detail, else helping AI too much 70 | var start = builder.AddNode(NodeKind.Start); 71 | var decideExperience = builder.AddNode(NodeKind.Decision, "Experience"); 72 | var decideEducation = builder.AddNode(NodeKind.Decision, "Education"); 73 | var decideSkills = builder.AddNode(NodeKind.Decision, "Skills"); 74 | 75 | var end_recommend = builder.AddNode(NodeKind.End, "Recommend"); 76 | var end_not_recommend = builder.AddNode(NodeKind.End, "Not_Recommend"); 77 | 78 | builder.AddEdge(start, decideExperience); 79 | builder.AddEdge(decideExperience, decideEducation, "true"); 80 | builder.AddEdge(decideExperience, end_not_recommend, "false"); 81 | 82 | builder.AddEdge(decideEducation, decideSkills, "true"); 83 | builder.AddEdge(decideEducation, end_not_recommend, "false"); 84 | 85 | builder.AddEdge(decideSkills, end_recommend, "true"); 86 | builder.AddEdge(decideSkills, end_not_recommend, "false"); 87 | 88 | var dot = builder.Build(); 89 | return dot; 90 | } 91 | 92 | void ShowUsage() 93 | { 94 | var processName = Process.GetCurrentProcess().ProcessName; 95 | Console.WriteLine($"USAGE: {processName} parse <path-to-dot-file>"); 96 | Console.WriteLine($"USAGE: {processName} create-example-dot"); 97 | Console.WriteLine($"USAGE: {processName} generate-dot-and-parse <description>"); 98 | Console.WriteLine($"USAGE: {processName} send-example-dot-to-describe"); 99 | } 100 | 101 | void ParseAndDumpDotFile(string pathToDotFile) 102 | { 103 | Console.WriteLine($"Parsing DOT file {pathToDotFile}"); 104 | 105 | var dot = File.ReadAllText(pathToDotFile, Encoding.UTF8); 106 |
ParseAndDumpDot(dot); 107 | } 108 | 109 | void ParseAndDumpDot(string dot) 110 | { 111 | var parser = new Parser.DotParser(); 112 | parser.Parse(dot, new ConsoleDumpDotModelVisitor()); 113 | } 114 | -------------------------------------------------------------------------------- /gpt-workflow-csharp/gpt-workflow-csharp-cli/Parser/DotParser.cs: -------------------------------------------------------------------------------- 1 | namespace Parser; 2 | 3 | using Visitor; 4 | using DotLang.CodeAnalysis.Syntax; 5 | using System; 6 | 7 | public class DotParser 8 | { 9 | public void Parse(string dot, IDotModelVisitor visitor) 10 | { 11 | dot = RemoveComments(dot); 12 | var syntaxTree = new Parser(dot).Parse(); 13 | syntaxTree.Accept(new DotVisitor(visitor)); 14 | } 15 | 16 | string RemoveComments(string dot) 17 | { 18 | // Unfortunately, comments in the DOT mess up the parsing - seems bug in Nuget package. 19 | var lines = dot.Replace("\r", "").Split("\n"); 20 | return string.Join("\n", lines.Where(l => !IsComment(l))); 21 | } 22 | 23 | bool IsComment(string line) => line.Trim().StartsWith("//") || line.Trim().StartsWith("#"); 24 | } 25 | 26 | // ref: https://github.com/abock/dotlang/blob/master/src/DotLang/CodeAnalysis/Syntax/SyntaxVisitor.cs 27 | class DotVisitor : SyntaxVisitor 28 | { 29 | const string UNKOWN = "unknown"; 30 | readonly IDotModelVisitor visitor; 31 | 32 | // The AST visitor seems to visit same nodes multiple times. This visitor suppresses the duplicate visits. 33 | readonly VisitedStringTracker visited = new VisitedStringTracker(); 34 | 35 | public DotVisitor(IDotModelVisitor visitor) 36 | { 37 | this.visitor = visitor; 38 | } 39 | 40 | // Assumption: only Nodes will have labels 41 | // TODO: support Edges with labels, if needed 42 | 43 | public override bool VisitNodeStatementSyntax(NodeStatementSyntax nodeStatement, VisitKind visitKind) 44 | { 45 | var identifier = IdentifierFrom(nodeStatement.Identifier.IdentifierToken); 46 | var kind = NodeKindFrom(nodeStatement.Identifier.IdentifierToken.ToString()); 47 | 48 | var node = new Node(kind, identifier); 49 | if (!visited.WasVisited(node.ToString())) 50 | { 51 | visitor.VisitNode(node); 52 | } 53 | 54 | var labelAttribute = nodeStatement.Attributes?.FirstOrDefault(a => a.NameToken.StringValue?.Trim() == "label"); 55 | if (labelAttribute != null) 56 | { 57 | var label = labelAttribute.ValueToken.StringValue; 58 | if (!string.IsNullOrEmpty(label)) 59 | visitor.VisitNodeLabel(node, label); 60 | } 61 | return true; 62 | } 63 | 64 | NodeKind NodeKindFrom(string identifier) 65 | { 66 | if (string.IsNullOrEmpty(identifier)) 67 | return NodeKind.Other; 68 | identifier = identifier.Trim(); 69 | if (IsComment(identifier)) { 70 | // handle comment like this: (seems to be bug in the parser - else the label gets lost!) 
71 | // // decision_down_payment 72 | // decision_down_payment [shape=diamond, label="Down Payment > 20%?"]; 73 | var commentedKind = NodeKindFrom(DeComment(identifier)); 74 | if (commentedKind != NodeKind.Other) 75 | return commentedKind; 76 | 77 | return NodeKind.Comment; 78 | } 79 | 80 | var parts = identifier.Split("_") 81 | .Select(p => p.Trim()); 82 | var kind = parts.First().ToLower(); 83 | 84 | switch (kind) 85 | { 86 | case "decision": 87 | return NodeKind.Decision; 88 | case "end": 89 | return NodeKind.End; 90 | case "start": 91 | return NodeKind.Start; 92 | default: 93 | return NodeKind.Other; 94 | } 95 | } 96 | 97 | bool IsComment(string identifier) => identifier.Trim().StartsWith("//") || identifier.Trim().StartsWith("#"); 98 | 99 | string DeComment(string identifier) 100 | { 101 | identifier = identifier.Trim(); 102 | if (identifier.StartsWith("//")) 103 | return identifier.Substring(2); 104 | if (identifier.StartsWith("#")) 105 | return identifier.Substring(1); 106 | return identifier; 107 | } 108 | 109 | public override bool VisitEdgeStatementSyntax(EdgeStatementSyntax edgeStatement, VisitKind visitKind) 110 | { 111 | // bug in parser? 112 | // Can get an edgeStatement like this: - messed up due to comments in DOT. For now, can filter out comment lines. 113 | /* 114 | // decision_features -> end_general_features 115 | decision_features -> end_general_features [label="No"]; 116 | */ 117 | 118 | var leftId = IdentifierFrom(edgeStatement.Left); 119 | var rightId = IdentifierFrom(edgeStatement.Right); 120 | 121 | var left = new Node(NodeKindFrom(ToStringOrUnknown(edgeStatement.Left)), leftId); 122 | var right = new Node(NodeKindFrom(ToStringOrUnknown(edgeStatement.Right)), rightId); 123 | 124 | // TODO try to parse label attribute 125 | 126 | var edge = new Edge(left, right); 127 | if (!visited.WasVisited(edge.ToString())) 128 | visitor.VisitEdge(edge); 129 | return true; 130 | } 131 | 132 | string ToStringOrUnknown(IEdgeVertexStatementSyntax edge) => edge.ToString()?.Replace(";", "") ?? UNKOWN; 133 | 134 | string IdentifierFrom(IEdgeVertexStatementSyntax edgeVertex) => IdentifierFrom(ToStringOrUnknown(edgeVertex)); 135 | string IdentifierFrom(SyntaxToken identifierToken) => IdentifierFrom(identifierToken?.StringValue ?? 
UNKOWN); 136 | string IdentifierFrom(string identifier) 137 | { 138 | if (string.IsNullOrEmpty(identifier)) 139 | return UNKOWN; 140 | 141 | var parts = identifier.Split("["); 142 | parts = parts[0].Split("_") 143 | .Select(p => p.Trim()) 144 | .ToArray(); 145 | if (parts.Count() == 1) 146 | return parts.First(); 147 | 148 | // decision_down_payment -> DownPayment 149 | return string.Join("", 150 | parts.Skip(1) 151 | .Select(p => p[0].ToString().ToUpper() + string.Join("", p.Skip(1))) 152 | ) 153 | .Replace(";", ""); 154 | } 155 | } 156 | -------------------------------------------------------------------------------- /test__prompts_dot_graph_creator.py: -------------------------------------------------------------------------------- 1 | from prompts_dot_graph_creator import EXPERT_COMMANDS, GetPromptToDescribeWorkflow 2 | import core 3 | 4 | def test(): 5 | command_messages = core.create_command_messages(EXPERT_COMMANDS) 6 | 7 | decisions_tests_first = { 8 | "name": "Simple workflow to model a tree of decisions", 9 | "prompts": [ 10 | "Create a flow that makes a series of decisions about whether to approve a mortgage application" 11 | ] 12 | } 13 | 14 | decisions_tests = [ 15 | decisions_tests_first, 16 | { 17 | "name": "Simple workflow to model a tree of decisions", 18 | "prompts": [ 19 | "Create a flow that makes a series of decisions about whether to approve a mortgage application. The criteria are: good credit score, income at least 100000 USD, employed at least 3 years, can make a down payment of at least 10%, has no criminal record in last 5 years.", 20 | "Create a flow that makes a series of decisions about whether to recommend a job interview candidate.", 21 | "Create a flow that makes a series of decisions about an animal, to decide what kind of animal is it", 22 | ] 23 | } 24 | ] 25 | 26 | list_advanced_test = { 27 | "name": "Workflow that iterates conditionally over items in a list", 28 | "prompts": [ 29 | "Create a flow that iterates over Job Applications in a list. For each Job Application, call another flow that checks if the application should proceed to interview stage", 30 | ] 31 | } 32 | 33 | list_tests = [ 34 | list_advanced_test, 35 | { 36 | "name": "Simple workflow adding an item to a list", 37 | "prompts": [ 38 | "Create a flow that takes a list and adds an item of the same type", 39 | "Create a flow that takes two lists and concatenates them", 40 | "Create a flow that takes a list and an object. Call another flow to get a boolean result. If the boolean is true, then add the item to the list." 41 | ] 42 | }, 43 | { 44 | "name": "Workflow that iterates over items in a list", 45 | "prompts": [ 46 | "Create a flow that iterates over items in a list, performing an action on each item" 47 | ] 48 | } 49 | ] 50 | 51 | storage_first_test = { 52 | "name": "Workflow reads orders from storage and writes back the total", 53 | "prompts": [ 54 | "Create a Workflow that reads a list of Orders for a Customer from storage, and then iterates over the orders, calculating the total amount. 
Write the total back to storage.", 55 | ] 56 | } 57 | 58 | storage_tests = [ 59 | storage_first_test 60 | ] 61 | 62 | describe_tests = [ 63 | { 64 | "name": "Describe the given workflow", 65 | "prompts": [ 66 | GetPromptToDescribeWorkflow(""" 67 | digraph G { 68 | 69 | // start 70 | start [shape=ellipse, label="Start"]; 71 | 72 | // decision_credit_score 73 | start -> decision_credit_score; 74 | decision_credit_score [shape=Mdiamond, label="Credit Score > 700?"]; 75 | 76 | // decision_income 77 | decision_credit_score -> decision_income; 78 | decision_income [shape=Mdiamond, label="Income > $50,000?"]; 79 | 80 | // decision_employment 81 | decision_income -> decision_employment; 82 | decision_employment [shape=Mdiamond, label="Employment > 2 years?"]; 83 | 84 | // decision_down_payment 85 | decision_employment -> decision_down_payment; 86 | decision_down_payment [shape=Mdiamond, label="Down Payment > 20%?"]; 87 | 88 | // approve 89 | decision_down_payment -> approve; 90 | approve [shape=box, label="Approve"]; 91 | 92 | // reject 93 | decision_credit_score -> reject; 94 | reject [shape=box, label="Reject"]; 95 | 96 | decision_income -> reject; 97 | decision_employment -> reject; 98 | decision_down_payment -> reject; 99 | } 100 | """), 101 | GetPromptToDescribeWorkflow(""" 102 | digraph G { 103 | 104 | // start 105 | start [shape=ellipse, label="Start"]; 106 | 107 | // decision_has_feathers 108 | start -> decision_has_feathers; 109 | decision_has_feathers [shape=Mdiamond, label="Has feathers?"]; 110 | 111 | // decision_can_fly 112 | decision_has_feathers -> decision_can_fly; 113 | decision_can_fly [shape=Mdiamond, label="Can fly?"]; 114 | 115 | // decision_has_fins 116 | decision_has_feathers -> decision_has_fins; 117 | decision_has_fins [shape=Mdiamond, label="Has fins?"]; 118 | 119 | // Hawk 120 | decision_can_fly -> Hawk; 121 | Hawk [shape=box, label="Hawk"]; 122 | 123 | // Penguin 124 | decision_can_fly -> Penguin; 125 | Penguin [shape=box, label="Penguin"]; 126 | 127 | // Dolphin 128 | decision_has_fins -> Dolphin; 129 | Dolphin [shape=box, label="Dolphin"]; 130 | 131 | // Bear 132 | decision_has_fins -> Bear; 133 | Bear [shape=box, label="Bear"]; 134 | } 135 | """) 136 | ] 137 | } 138 | ] 139 | 140 | irrelevant_tests = [ 141 | { 142 | "name": "Irrelevant prompts", 143 | "prompts": [ 144 | # other prompts that should NOT be handled by the Commands: 145 | "what is 2 + 5 divided by 10 ?", 146 | "Who won the battle of Agincourt, and why was it fought?", 147 | "What is my favourite color?", 148 | ] 149 | } 150 | ] 151 | 152 | tests = decisions_tests + list_tests + storage_tests + describe_tests + irrelevant_tests 153 | 154 | # for debugging: 155 | # tests = [decisions_tests_first] # xxx 156 | # tests = [list_advanced_test] # xxx 157 | # tests = [storage_first_test] # xxx 158 | 159 | prompt_id = 1 160 | for test in tests: 161 | previous_messages = [] 162 | print(f"[[[TEST {test['name']}]]]") 163 | for user_prompt in test['prompts']: 164 | print("---") 165 | print(f">> {user_prompt}") 166 | # should route to the right 'expert' chain!
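# (execute_prompt is passed every expert's command messages plus the chat history,
# and is expected to return the response of whichever expert matches the prompt)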
167 | rsp = core.execute_prompt(user_prompt, previous_messages, command_messages, prompt_id) 168 | print(rsp) 169 | prompt_id += 1 170 | -------------------------------------------------------------------------------- /main_web_service.py: -------------------------------------------------------------------------------- 1 | from http.server import BaseHTTPRequestHandler, HTTPServer 2 | import traceback 3 | from urllib.parse import urlparse, parse_qs 4 | import urllib 5 | import threading, socket 6 | 7 | from prompts_dot_graph_creator import EXPERT_COMMANDS, GetPromptToDescribeWorkflow, getExpertCommandToCreateDot, getExpertCommandToDescribeDot 8 | import core 9 | import config 10 | import config_web 11 | 12 | # Python web server with cookie-based session 13 | # based on https://davidgorski.ca/posts/sessions/ 14 | 15 | # TODO add sessions like gpt-rpg 16 | previous_messages = [] 17 | 18 | prompt_id = 1 19 | 20 | class MyServer(BaseHTTPRequestHandler): 21 | def do_GET(self): 22 | routes = { 23 | "/generate-dot": self.bot_generate_dot, 24 | "/describe-dot": self.bot_describe_dot, 25 | } 26 | try: 27 | response = 200 28 | path = self.parse_path() 29 | print(f"req path: '{path}'") 30 | content = routes[path]() 31 | except Exception as error: 32 | print("!! error: ", error) 33 | traceback.print_exc() # print stack trace 34 | response = 404 35 | content = '{ "error": "Oops! Not Found" }' 36 | 37 | self.send_response(response) 38 | self.send_header('Content-type','text/plain') 39 | self.end_headers() 40 | 41 | self.write(content) 42 | return 43 | 44 | def bot_describe_dot(self): 45 | global prompt_id, previous_messages 46 | user_prompt = self.parse_query_param("p") 47 | print(f" {user_prompt}") 48 | user_prompt_wrapped = GetPromptToDescribeWorkflow(user_prompt) 49 | command_messages = core.create_command_messages([getExpertCommandToDescribeDot()]) 50 | if config_web.discard_previous_messages: 51 | previous_messages = [] 52 | rsp = core.execute_prompt(user_prompt_wrapped, previous_messages, command_messages, prompt_id) 53 | prompt_id += 1 54 | return rsp 55 | 56 | def bot_generate_dot(self): 57 | global prompt_id, previous_messages 58 | DELIMITER = "======" 59 | EMPTY_DOT = "digraph G{}" 60 | user_prompt = self.parse_query_param("p") 61 | print(f" {user_prompt}") 62 | command_messages = core.create_command_messages([getExpertCommandToCreateDot()]) 63 | if config_web.discard_previous_messages: 64 | previous_messages = [] 65 | rsp = core.execute_prompt(user_prompt, previous_messages, command_messages, prompt_id) 66 | prompt_id += 1 67 | if "human_output" in rsp: 68 | return rsp["human_output"] + f"\n\n{DELIMITER}\n\n" + rsp["dot"] 69 | else: 70 | print("!! error rsp? - cannot parse") 71 | print(rsp) 72 | return f"{rsp}\n\n{DELIMITER}\n\n{EMPTY_DOT}" 73 | 74 | def parse_path(self): 75 | return urlparse(self.path).path 76 | 77 | def parse_query_params(self): 78 | return parse_qs(urlparse(self.path).query) 79 | 80 | def parse_query_param(self, param): 81 | params = self.parse_query_params() 82 | if param in params: 83 | value_array = params[param] 84 | if value_array is None: 85 | return "" 86 | return value_array[0] 87 | return "" 88 | 89 | def write(self, content): 90 | self.wfile.write(bytes(content, "utf-8")) 91 | 92 | def start_single_threaded(): 93 | # Single threaded so can debug!
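# (plain HTTPServer handles one request at a time, so breakpoints and
# stack traces are not interleaved across worker threads)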
94 | webServer = HTTPServer((config_web.HOSTNAME, config_web.PORT), MyServer) 95 | print("Server started http://%s:%s" % (config_web.HOSTNAME, config_web.PORT)) 96 | try: 97 | webServer.serve_forever() 98 | except KeyboardInterrupt: 99 | pass 100 | webServer.server_close() 101 | print("Server stopped.") 102 | 103 | def quotify(text): 104 | return urllib.parse.quote(text) 105 | 106 | def start_multi_threaded(): 107 | # Multi-threaded server, else performance is terrible 108 | # ref https://stackoverflow.com/questions/46210672/python-2-7-streaming-http-server-supporting-multiple-connections-on-one-port 109 | # 110 | # Create ONE socket. 111 | addr = (config_web.HOSTNAME, config_web.PORT) 112 | sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) 113 | sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) 114 | sock.bind(addr) 115 | sock.listen(5) 116 | 117 | # Launch many listener threads. 118 | class Thread(threading.Thread): 119 | def __init__(self, i): 120 | threading.Thread.__init__(self) 121 | self.i = i 122 | self.daemon = True 123 | self.start() 124 | def run(self): 125 | httpd = HTTPServer(addr, MyServer, False) 126 | 127 | # Prevent the HTTP server from re-binding every handler. 128 | # https://stackoverflow.com/questions/46210672/ 129 | httpd.socket = sock 130 | httpd.server_bind = self.server_close = lambda self: None 131 | 132 | try: 133 | httpd.serve_forever() 134 | except KeyboardInterrupt: 135 | pass 136 | httpd.server_close() 137 | 138 | escaped_create_dot = quotify("Create a flow that makes a series of decisions about whether to approve a mortgage application") 139 | escaped_dot = quotify('digraph G { Start_Start_1[shape=ellipse, label="Start_1" ]; Decision_Experience[shape=Mdiamond, label="Experience" ]; Decision_Education[shape=Mdiamond, label="Education" ]; Decision_Skills[shape=Mdiamond, label="Skills" ]; End_Recommend[shape=rectangle, label="Recommend" ]; End_Not_Recommend[shape=rectangle, label="Not_Recommend" ]; Start_Start_1 -> Decision_Experience; Decision_Experience -> Decision_Education [label="true"]; Decision_Experience -> End_Not_Recommend [label="false"]; Decision_Education -> Decision_Skills [label="true"]; Decision_Education -> End_Not_Recommend [label="false"]; Decision_Skills -> End_Recommend [label="true"]; Decision_Skills -> End_Not_Recommend [label="false"];}') 140 | 141 | print(f"Server started at http://{config_web.HOSTNAME}:{config_web.PORT} - {config_web.WEB_SERVER_THREADS} threads") 142 | print("Please set the 'p' query parameter to be the user's prompt.") 143 | print(f"- generate DOT example: http://{config_web.HOSTNAME}:{config_web.PORT}/generate-dot?p={escaped_create_dot}") 144 | print(f"- describe DOT example: http://{config_web.HOSTNAME}:{config_web.PORT}/describe-dot?p={escaped_dot}") 145 | print("[press ENTER to stop]") 146 | [Thread(i) for i in range(config_web.WEB_SERVER_THREADS)] 147 | input("Press ENTER to kill server\n") 148 | 149 | print("Server stopped.") 150 | 151 | if __name__ == "__main__": 152 | if config.is_debug: 153 | start_single_threaded() 154 | else: 155 | start_multi_threaded() 156 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # gpt-workflow 2 | Generate workflows (for flowcharts or low code) in DOT format, from natural language, via an LLM. 3 | 4 | Also perform the inverse: describe a given workflow (given in DOT format) in natural language.
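For example, another application can request a workflow over HTTP and render the returned DOT script as an image. The sketch below is a minimal, hypothetical client (not part of this repo): it assumes the web server from `main_web_service.py` is already running on `localhost:8083` (as in the Usage section below) and that `pydot` is installed (see Set up); the output filename is arbitrary.

```
# Hypothetical client sketch: ask the web service for a workflow, render it with pydot.
# Assumes the gpt-workflow web server is running on localhost:8083.
import urllib.parse
import urllib.request

import pydot

prompt = "Create a flow that makes a series of decisions about whether to approve a mortgage application"
url = "http://localhost:8083/generate-dot?p=" + urllib.parse.quote(prompt)

with urllib.request.urlopen(url) as rsp:
    body = rsp.read().decode("utf-8")

# The /generate-dot response is the human-readable summary and the DOT script,
# separated by a "======" delimiter (see bot_generate_dot in main_web_service.py).
human_output, dot_script = body.split("======", 1)

graphs = pydot.graph_from_dot_data(dot_script.strip())
graphs[0].write_png("dot_graph.png")  # output path is arbitrary
print(human_output.strip())
```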
5 | 6 | ## Approach: generate DOT notation as a simple format to represent a workflow 7 | 8 | The DOT graph format (as used by tools like graphviz) is a simple way to represent a flow chart. A flow chart is a good approximation of a workflow. 9 | 10 | Because DOT graphs are a common public format, large LLMs such as OpenAI's gpt-3.5-turbo have included them in their training corpus. So, such LLMs are already capable of both generating DOT files and summarizing them back to natural language. We can use such LLMs both to generate workflows from natural language and to do the inverse (generate a summary), using DOT as an intermediate format. 11 | 12 | The DOT script generated by the LLM can be further processed, for example by generating a flow chart image (as in the client sketch above) or by populating some kind of workflow system inside an application. 13 | 14 | ### Approach: Generating a workflow 15 | 16 | ![images/how_it_works-DOT-generation-from-natural-language.simplified.png](images/how_it_works-DOT-generation-from-natural-language.simplified.png) 17 | 18 | #### Generating a workflow: Detailed View 19 | 20 | ![images/how_it_works-DOT-generation-from-natural-language.png](images/how_it_works-DOT-generation-from-natural-language.png) 21 | 22 | ### Approach: Describing (explaining) a workflow 23 | 24 | ![images/how_it_works-DOT-describer.simplified.png](images/how_it_works-DOT-describer.simplified.png) 25 | 26 | #### Describing (explaining) a workflow: Detailed View 27 | 28 | ![images/how_it_works-DOT-describer.png](images/how_it_works-DOT-describer.png) 29 | 30 | ## Example generated flow charts 31 | 32 | | ![images/dot_graph_1.png](images/dot_graph_1.png)| ![images/dot_graph_2.png](images/dot_graph_2.png)| 33 | |---|---| 34 | | Workflow to decide on a mortgage application | Workflow to decide on a job interview candidate| 35 | 36 | | ![images/dot_graph_3.png](images/dot_graph_3.png)| ![images/dot_graph_4.png](images/dot_graph_4.png)| 37 | |---|---| 38 | | Workflow to decide what kind of animal this is | Workflow to add an item to a list| 39 | 40 | | ![images/dot_graph_5.png](images/dot_graph_5.png)| ![images/dot_graph_6.png](images/dot_graph_6.png)| 41 | |---|---| 42 | | Workflow to combine two lists | Workflow to conditionally add an item to a list| 43 | 44 | | ![images/dot_graph_7.png](images/dot_graph_7.png)| ![images/dot_graph_8.png](images/dot_graph_8.png)| 45 | |---|---| 46 | | Iterate over Job Applications in a list. For each Job Application, call another flow that checks if the application should proceed to interview stage. | The step-by-step process of reading the Orders, iterating over them, calculating the total amount, and writing it back to storage.
| 47 | 48 | ## Example Execution - generating flows in DOT format from natural language input 49 | 50 | ``` 51 | >> Create a flow that makes a series of decisions about whether to approve a mortgage application 52 | Writing png to '.\temp\dot_graph_1.png' 53 | digraph G { 54 | 55 | // start 56 | start [shape=ellipse, label="Start"]; 57 | 58 | // decision_credit_score 59 | start -> decision_credit_score; 60 | decision_credit_score [shape=Mdiamond, label="Credit Score > 700?"]; 61 | 62 | // decision_income 63 | decision_credit_score -> decision_income; 64 | decision_income [shape=Mdiamond, label="Income > $50,000?"]; 65 | 66 | // decision_employment 67 | decision_income -> decision_employment; 68 | decision_employment [shape=Mdiamond, label="Employment > 2 years?"]; 69 | 70 | // decision_down_payment 71 | decision_employment -> decision_down_payment; 72 | decision_down_payment [shape=Mdiamond, label="Down Payment > 20%?"]; 73 | 74 | // approve 75 | decision_down_payment -> approve; 76 | approve [shape=box, label="Approve"]; 77 | 78 | // reject 79 | decision_credit_score -> reject; 80 | reject [shape=box, label="Reject"]; 81 | 82 | decision_income -> reject; 83 | decision_employment -> reject; 84 | decision_down_payment -> reject; 85 | } 86 | ``` 87 | 88 | ``` 89 | >> Create a flow that makes a series of decisions about whether to recommend a job interview candidate. 90 | Writing png to '.\temp\dot_graph_2.png' 91 | digraph G { 92 | 93 | // start 94 | start [shape=ellipse, label="Start"]; 95 | 96 | // decision_experience 97 | start -> decision_experience; 98 | decision_experience [shape=Mdiamond, label="Has relevant experience?"]; 99 | 100 | // decision_education 101 | decision_experience -> decision_education; 102 | decision_education [shape=Mdiamond, label="Has required education?"]; 103 | 104 | // decision_skills 105 | decision_education -> decision_skills; 106 | decision_skills [shape=Mdiamond, label="Has necessary skills?"]; 107 | 108 | // decision_references 109 | decision_skills -> decision_references; 110 | decision_references [shape=Mdiamond, label="Has positive references?"]; 111 | 112 | // recommend 113 | decision_references -> recommend; 114 | recommend [shape=box, label="Recommend for interview"]; 115 | 116 | // reject 117 | decision_experience -> reject; 118 | reject [shape=box, label="Reject"]; 119 | 120 | decision_education -> reject; 121 | decision_skills -> reject; 122 | decision_references -> reject; 123 | } 124 | ``` 125 | 126 | ``` 127 | >> Create a flow that makes a series of decisions about an animal, to decide what kind of animal is it 128 | Writing png to '.\temp\dot_graph_3.png' 129 | digraph G { 130 | 131 | // start 132 | start [shape=ellipse, label="Start"]; 133 | 134 | // decision_has_feathers 135 | start -> decision_has_feathers; 136 | decision_has_feathers [shape=Mdiamond, label="Has feathers?"]; 137 | 138 | // decision_can_fly 139 | decision_has_feathers -> decision_can_fly; 140 | decision_can_fly [shape=Mdiamond, label="Can fly?"]; 141 | 142 | // decision_has_fins 143 | decision_has_feathers -> decision_has_fins; 144 | decision_has_fins [shape=Mdiamond, label="Has fins?"]; 145 | 146 | // Hawk 147 | decision_can_fly -> Hawk; 148 | Hawk [shape=box, label="Hawk"]; 149 | 150 | // Penguin 151 | decision_can_fly -> Penguin; 152 | Penguin [shape=box, label="Penguin"]; 153 | 154 | // Dolphin 155 | decision_has_fins -> Dolphin; 156 | Dolphin [shape=box, label="Dolphin"]; 157 | 158 | // Bear 159 | decision_has_fins -> Bear; 160 | Bear [shape=box, label="Bear"]; 161 | } 
162 | ``` 163 | 164 | ``` 165 | >> Create a flow that takes a list and an object. Call another flow to get a boolean result. If the boolean is true, then add the item to the list. 166 | Writing png to '.\temp\dot_graph_6.png' 167 | digraph G { 168 | 169 | // start 170 | start [shape=ellipse, label="Start"]; 171 | 172 | // call_flow 173 | call_flow [shape=box, label="Call Flow"]; 174 | 175 | // decision_boolean 176 | decision_boolean [shape=diamond, label="Boolean Result?"]; 177 | 178 | // add_item 179 | add_item [shape=box, label="Add Item"]; 180 | 181 | // end 182 | end [shape=ellipse, label="End"]; 183 | 184 | // start -> call_flow 185 | start -> call_flow; 186 | 187 | // call_flow -> decision_boolean 188 | call_flow -> decision_boolean; 189 | 190 | // decision_boolean -> add_item [label="true"]; 191 | decision_boolean -> add_item [label="true"]; 192 | 193 | // decision_boolean -> end [label="false"]; 194 | decision_boolean -> end [label="false"]; 195 | 196 | // add_item -> end 197 | add_item -> end; 198 | 199 | call_flow [shape=box, label="Call Flow"]; 200 | } 201 | ``` 202 | 203 | ``` 204 | >> Create a flow that iterates over Job Applications in a list. For each Job Application, call another flow that checks if the application should proceed to interview stage 205 | 206 | digraph G { 207 | 208 | // start 209 | start [shape=ellipse, label="Start"]; 210 | 211 | // initialize 212 | start -> initialize; 213 | initialize [shape=box, label="Initialize list and counter"]; 214 | 215 | // decision_has_next 216 | initialize -> decision_has_next; 217 | decision_has_next [shape=Mdiamond, label="Has next job application?"]; 218 | 219 | // action_call_flow 220 | decision_has_next -> action_call_flow; 221 | action_call_flow [shape=box, label="Call flow to check application"]; 222 | 223 | // increment_counter 224 | action_call_flow -> increment_counter; 225 | increment_counter [shape=box, label="Increment counter"]; 226 | 227 | // decision_has_next 228 | increment_counter -> decision_has_next; 229 | 230 | // end 231 | decision_has_next -> end; 232 | end [shape=ellipse, label="End"]; 233 | 234 | initialize -> decision_has_next; 235 | } 236 | ``` 237 | 238 | ## Example Execution - Describing a given workflow 239 | 240 | ``` 241 | >> Describe this workflow: 242 | digraph G { 243 | 244 | // start 245 | start [shape=ellipse, label="Start"]; 246 | 247 | // decision_credit_score 248 | start -> decision_credit_score; 249 | decision_credit_score [shape=Mdiamond, label="Credit Score > 700?"]; 250 | 251 | // decision_income 252 | decision_credit_score -> decision_income; 253 | decision_income [shape=Mdiamond, label="Income > $50,000?"]; 254 | 255 | // decision_employment 256 | decision_income -> decision_employment; 257 | decision_employment [shape=Mdiamond, label="Employment > 2 years?"]; 258 | 259 | // decision_down_payment 260 | decision_employment -> decision_down_payment; 261 | decision_down_payment [shape=Mdiamond, label="Down Payment > 20%?"]; 262 | 263 | // approve 264 | decision_down_payment -> approve; 265 | approve [shape=box, label="Approve"]; 266 | 267 | // reject 268 | decision_credit_score -> reject; 269 | reject [shape=box, label="Reject"]; 270 | 271 | decision_income -> reject; 272 | decision_employment -> reject; 273 | decision_down_payment -> reject; 274 | } 275 | 276 | This flow chart represents a decision-making process for approving or rejecting a loan application. 277 | 278 | The process starts at the "Start" node and then moves to the "Credit Score > 700?" decision node.
If the credit score is greater than 700, the flow moves to the "Income > $50,000?" decision node. If the income is greater than $50,000, the flow moves to the "Employment > 2 years?" decision node. If the employment is greater than 2 years, the flow moves to the "Down Payment > 20%?" decision node. If the down payment is greater than 20%, the flow reaches the "Approve" node, indicating that the loan application should be approved. 279 | 280 | However, if at any point in the process a condition is not met, the flow moves to the "Reject" node, indicating that the loan application should be rejected. 281 | 282 | Overall, this flow chart helps guide the decision-making process for loan approval based on various criteria such as credit score, income, employment, and down payment. 283 | ``` 284 | 285 | ``` 286 | >> Describe this workflow: 287 | digraph G { 288 | 289 | // start 290 | start [shape=ellipse, label="Start"]; 291 | 292 | // decision_has_feathers 293 | start -> decision_has_feathers; 294 | decision_has_feathers [shape=Mdiamond, label="Has feathers?"]; 295 | 296 | // decision_can_fly 297 | decision_has_feathers -> decision_can_fly; 298 | decision_can_fly [shape=Mdiamond, label="Can fly?"]; 299 | 300 | // decision_has_fins 301 | decision_has_feathers -> decision_has_fins; 302 | decision_has_fins [shape=Mdiamond, label="Has fins?"]; 303 | 304 | // Hawk 305 | decision_can_fly -> Hawk; 306 | Hawk [shape=box, label="Hawk"]; 307 | 308 | // Penguin 309 | decision_can_fly -> Penguin; 310 | Penguin [shape=box, label="Penguin"]; 311 | 312 | // Dolphin 313 | decision_has_fins -> Dolphin; 314 | Dolphin [shape=box, label="Dolphin"]; 315 | 316 | // Bear 317 | decision_has_fins -> Bear; 318 | Bear [shape=box, label="Bear"]; 319 | } 320 | 321 | This flow chart represents a decision-making process to determine the characteristics of different animals based on whether they have feathers, can fly, or have fins. 322 | 323 | The process starts at the "Start" node and then moves to the "Has feathers?" decision node. If the animal has feathers, the flow moves to the "Can fly?" decision node. If the animal can fly, the flow reaches the "Hawk" node, indicating that the animal is a hawk. However, if the animal cannot fly, the flow reaches the "Penguin" node, indicating that the animal is a penguin. 324 | 325 | If the animal does not have feathers, the flow moves to the "Has fins?" decision node. If the animal has fins, the flow reaches the "Dolphin" node, indicating that the animal is a dolphin. However, if the animal does not have fins, the flow reaches the "Bear" node, indicating that the animal is a bear. 326 | 327 | Overall, this flow chart helps classify animals based on their characteristics, specifically whether they have feathers, can fly, or have fins. 328 | ``` 329 | 330 | ## Example Execution - Handling irrelevant prompts 331 | 332 | ``` 333 | --- 334 | >> what is 2 + 5 divided by 10 ? 335 | I'm sorry, but I can only assist with questions related to creating a flow chart. 336 | --- 337 | >> Who won the battle of Agincourt, and why was it fought? 338 | I'm sorry, but I can only assist with questions related to creating a flow chart. 339 | --- 340 | >> What is my favourite color? 341 | I'm sorry, but I don't have access to personal information. 342 | ``` 343 | 344 | ## Dependencies 345 | 346 | - Requires an LLM - by default, uses OpenAI's ChatGPT.
347 | - Python 3 348 | - [graphviz](https://www.graphviz.org/#download) 349 | 350 | ## Usage 351 | 352 | To use as a CLI (Command Line Interface) REPL (Read-Eval-Print Loop) prompt: 353 | ```go.sh``` 354 | 355 | or to use as a web server: 356 | 357 | ```go_web.sh``` 358 | 359 | For the web server, you need to pass the user prompt as the GET query parameter 'p'. 360 | 361 | Example: 362 | 363 | - http://localhost:8083/generate-dot?p=I%20need%20to%20make%20a%20Car%20Parts%20application 364 | 365 | So, another application can use the web server to send in natural language prompts from the user, and receive a response in the graphviz DOT format. 366 | 367 | The other application can then generate an image or some kind of workflow from the DOT script. 368 | 369 | ## Set up 370 | 371 | ``` 372 | pip3 install --upgrade openai pydot 373 | ``` 374 | 375 | Set an environment variable with your OpenAI key: 376 | 377 | ``` 378 | export OPENAI_API_KEY="xxx" 379 | ``` 380 | 381 | Add that to your shell initialization script (`~/.zprofile` or similar). 382 | 383 | Load it in the current terminal: 384 | 385 | ``` 386 | source ~/.zprofile 387 | ``` 388 | 389 | ## Test 390 | 391 | `test.sh` 392 | 393 | or 394 | 395 | `python test.py` 396 | 397 | ## Training (still WIP) 398 | 399 | See the [Training README](training/training-data-generator/README.md) about training a custom LLM for gpt-workflow. 400 | 401 | ## Related Tools 402 | 403 | [graphviz online editor](https://dreampuf.github.io/GraphvizOnline) --------------------------------------------------------------------------------