├── .env.example
├── .gitignore
├── LICENSE
├── README.md
├── pyproject.toml
└── src
    └── fastask
        ├── __init__.py
        ├── ask.py
        ├── config.py
        ├── history.py
        └── llm.py

/.env.example:
--------------------------------------------------------------------------------
1 | # IF USING AZURE
2 | AZURE_OPENAI_API_KEY=
3 | AZURE_RESOURCE_GROUP=
4 | AZURE_DEPLOYMENT_NAME=
5 | 
6 | # IF USING OPENAI
7 | OPENAI_API_KEY=
8 | 
9 | # IF USING TOGETHERAI
10 | TOGETHERAI_API_KEY=
11 | 
12 | # IF USING GROQ
13 | GROQ_API_KEY=
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | config.ini
2 | /build
3 | *.egg-info
4 | __pycache__
5 | dist
6 | .venv
7 | 
8 | .env*
9 | !.env.example
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 | 
3 | Copyright (c) 2023 Brian Machado
4 | 
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 | 
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 | 
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | 
2 | # FastAsk
3 | 
4 | [Fastask.ai](https://www.fastask.ai/)
5 | 
6 | 
7 | FastAsk is the fastest way to get answers in the command line.
8 | 
9 | By default, requests are routed through a freely hosted LLM endpoint so users can start asking right away.
10 | 
11 | However, it's also easy to clone this repo and run whatever LLM you want; there are prebuilt examples for using your own Groq, TogetherAI, or OpenAI key!
12 | 
13 | ## Installation
14 | 
15 | 
16 | FastAsk can be installed using pip:
17 | 
18 | ```bash
19 | pip install fastask
20 | ```
21 | 
22 | ## Usage
23 | 
24 | FastAsk can be used directly from the command line:
25 | 
26 | ```bash
27 | >>> ask list items in dir by date
28 | 1. 'ls -lt' - List newer first
29 | 2. 'ls -ltr' - List older first
30 | ```
31 | 
32 | More examples:
33 | 
34 | ```bash
35 | >>> ask find ip address
36 | 1. 'ipconfig getifaddr en0' - Get IP for en0
37 | 2. 'ifconfig | grep 'inet '' - List all IPs
38 | 3. 'curl ifconfig.me' - Get public IP
39 | ```
40 | 
41 | ```bash
42 | >>> ask "convert video to audio using ffmpeg"
43 | 1. 'ffmpeg -i video.mp4 audio.mp3' - Convert to MP3
44 | 2. 'ffmpeg -i video.mp4 -vn audio.ogg' - Convert to OGG
45 | 3. 'ffmpeg -i video.mp4 -q:a 0 audio.wav' - Convert to WAV
46 | ```
47 | 
48 | ## Running Commands
49 | 
50 | After FastAsk answers, you can run one of the suggested commands by passing its number back to `ask`:
51 | 
52 | ```bash
53 | >>> ask how to ping google
54 | 1. 'ping google.com' - Default ping Google.
55 | 
56 | >>> ask 1
57 | PING google.com (142.251.41.78): 56 data bytes
58 | 64 bytes from 142.251.41.78: icmp_seq=0 ttl=112 time=7.765 ms
59 | ...
60 | ```
61 | 
62 | 
63 | ## History
64 | FastAsk stores the past 5 questions you asked and their responses, so you can refer to previous commands and outputs easily.
65 | 
66 | Example:
67 | 
68 | ```bash
69 | >>> ask how to find installed packages
70 | 1. 'brew list' - List on macOS with Homebrew.
71 | ```
72 | 
73 | and then a follow-up query can build on it:
74 | 
75 | ```bash
76 | >>> ask in python
77 | 1. 'pip list' - List for pip packages.
78 | 2. 'conda list' - List for conda envs.
79 | ```
80 | 
81 | You can also re-print the last output FastAsk gave with:
82 | 
83 | ```bash
84 | >>> ask history
85 | 1. 'pip list' - List for pip packages.
86 | 2. 'conda list' - List for conda envs.
87 | ```
88 | 
89 | 
90 | If something weird is occurring or you are getting strange errors, try clearing your history.
91 | 
92 | ```bash
93 | >>> ask --clear
94 | FastAsk History Cleared.
95 | ```
96 | 
97 | ## Notes
98 | 
99 | If you come across something that looks like this:
100 | ```bash
101 | >>> ask what is rebase?
102 | zsh: no matches found: rebase?
103 | ```
104 | 
105 | it's because of how args are parsed by zsh/bash itself. If you want to use a question mark or any other of: `-,./;'][-=`, you have to wrap your question in quotes, like:
106 | ```bash
107 | >>> ask "what is rebase?"
108 | ```
109 | 
110 | ## Developing Locally
111 | 
112 | To make edits to fastask:
113 | 
114 | 1. Clone this repo
115 | ```bash
116 | git clone https://github.com/sincethestudy/fastask.git
117 | ```
118 | 
119 | 2. Go to the root folder
120 | ```bash
121 | cd fastask
122 | ```
123 | 
124 | 3. pip install it in that directory with `"editable"` mode on, so that any changes you make automatically update the package on your system without having to reinstall every time.
125 | 
126 | Make sure to uninstall fastask first if you've already installed it before:
127 | ```bash
128 | pip uninstall fastask
129 | ```
130 | 
131 | ```bash
132 | pip install -e .
133 | ```
134 | 
135 | Now you can edit the source code and still use the `ask` CLI command anywhere on your system, running your own custom version of fastask.
136 | 
137 | ## Using a different LLM
138 | 
139 | We really want users to mess around and see how they can improve FastAsk.
140 | 
141 | There is an `llm.py` file which is essentially a playground for you to write your own endpoint function. For example, adding an `OLLAMA_client` would be a good next step; a sketch of one is included at the end of this section.
142 | 
143 | **To use the free FastAsk API server (default):**
144 | 
145 | ```bash
146 | >>> ask --llm=fastask
147 | ```
148 | 
149 | **To use your self-hosted FastAsk API server:**
150 | 
151 | ```bash
152 | >>> ask --llm=fastask-local
153 | ```
154 | 
155 | Note: This requires you to have the FastAsk API running locally on port 8080.
156 | 
157 | **To use another LLM such as Groq:**
158 | 
159 | ```bash
160 | >>> ask --llm=groq
161 | ```
162 | 
163 | Note: This requires you to provide API keys through environment variables. See `.env.example`. You can set them in `.env`, or through your shell startup script (`~/.bashrc`, `~/.zshrc`, etc.).
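164 | 
165 | As a starting point, here is a minimal sketch of what an Ollama client could look like in `llm.py`, following the same pattern as the existing clients. It assumes Ollama is running locally and serving its OpenAI-compatible API at `http://localhost:11434/v1`; the model name `llama3` is just a placeholder for whichever model you have pulled.
166 | 
167 | ```python
168 | from openai import OpenAI
169 | 
170 | class OllamaClient(LLM):
171 |     def __init__(self):
172 |         super().__init__("Ollama")
173 | 
174 |     def create_client(self, messages):
175 |         self.using()
176 |         # Ollama speaks the OpenAI chat-completions protocol, so the OpenAI SDK works as-is.
177 |         client = OpenAI(
178 |             api_key="ollama",  # required by the SDK, but ignored by Ollama
179 |             base_url="http://localhost:11434/v1"
180 |         )
181 |         response = client.chat.completions.create(
182 |             messages=messages,
183 |             model="llama3"  # placeholder: any model you have pulled locally
184 |         )
185 |         return {"response": response.choices[0].message.content}
186 | ```
187 | 
188 | To wire it up, add `"ollama": OllamaClient()` to the client dictionary in `askLLM` and `'ollama'` to the `--llm` choices in `ask.py`, then run `ask --llm=ollama`.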
189 | 
190 | ## License
191 | 
192 | FastAsk is licensed under the MIT License. See the `LICENSE` file for more details.
193 | 
--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
1 | [build-system]
2 | requires = [
3 |     "setuptools",
4 |     "wheel"
5 | ]
6 | build-backend = "setuptools.build_meta"
7 | 
8 | [project]
9 | name = "fastask"
10 | version = "0.4.8"
11 | description = "The fastest way to get answers in the command line"
12 | authors = [
13 |     {name = "Brian Machado", email = "nah@gmail.com"},
14 | ]
15 | readme = "README.md"
16 | dependencies = [
17 |     "requests",
18 |     "openai",
19 |     "anthropic",
20 |     "python-dotenv",
21 |     "pyyaml"
22 | ]
23 | 
24 | [project.scripts]
25 | ask = "fastask.ask:main"
--------------------------------------------------------------------------------
/src/fastask/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sincethestudy/fastask/8748f28ea94a91eac2a31226f2f816e247736875/src/fastask/__init__.py
--------------------------------------------------------------------------------
/src/fastask/ask.py:
--------------------------------------------------------------------------------
1 | import sys
2 | import os
3 | import argparse
4 | import json
5 | 
6 | from .history import History
7 | from .config import Config
8 | from .llm import askLLM
9 | 
10 | history_manager = History()
11 | config_manager = Config()
12 | 
13 | def check_and_run_command(history, question):
14 |     if question.lower() == 'history':
15 |         history = history_manager.get(1)
16 |         if history:
17 |             prev_answer = json.loads(history[-1]["Answer"])
18 |             for i, item in enumerate(prev_answer):
19 |                 print(f"{i+1}. '{item['command']}' - {item['desc']}")
20 |         else:
21 |             print("No history available.")
22 |         print()
23 |         exit()
24 | 
25 |     # Check if the question can be converted to an integer
26 |     try:
27 |         index = int(question)
28 |         history = history_manager.get(5)
29 | 
30 |         if history:
31 |             answer = history[-1]["Answer"]
32 |             answer_json = json.loads(answer)
33 |             if 1 <= index <= len(answer_json):
34 |                 command = answer_json[index-1]["command"]
35 |                 print("\033[94mrunning `" + command + "`...\033[0m")
36 |                 os.system(command)
37 |             else:
38 |                 print("No command at this index in the answer.")
39 |         else:
40 |             print("No history available.")
41 |         exit()
42 |     except ValueError:
43 |         pass  # The question is not an integer, so we treat it as a question
44 | 
45 | 
46 | def main():
47 |     config = config_manager.load()
48 | 
49 |     parser = argparse.ArgumentParser(
50 |         description='This is a command-line tool that answers questions using a hosted or local LLM.',
51 |         formatter_class=argparse.RawTextHelpFormatter
52 |     )
53 | 
54 |     parser.add_argument(
55 |         '--version',
56 |         action='version',
57 |         version='%(prog)s 0.4.8'  # Keep in sync with pyproject.toml
58 |     )
59 | 
60 |     parser.add_argument(
61 |         '--clear',
62 |         action='store_true',
63 |         help='Clear the history of questions and answers.'
64 |     )
65 | 
66 |     parser.add_argument(
67 |         '--llm',
68 |         type=str,
69 |         choices=['fastask', 'fastask-local', 'azure', 'groq', 'openai', 'togetherai', 'anthropic'],
70 |         help='Select the large language model to use. Default is fastask. All other models are intended for developer use and require API keys.'
71 |     )
72 | 
73 |     parser.add_argument(
74 |         '--enable-leaderboard',
75 |         action='store_true',
76 |         default=False,
77 |         help='Enable the leaderboard.'
78 |     )
79 | 
80 |     parser.add_argument(
81 |         '--set-user',
82 |         type=str,
83 |         help='Set the username used for the leaderboard.'
84 |     )
85 | 
86 |     parser.add_argument(
87 |         'question',
88 |         nargs='*',
89 |         help='Enter the question you want to ask.'
90 |     )
91 | 
92 |     args = parser.parse_args()
93 | 
94 |     # If no arguments were passed, print the help message and exit
95 |     if len(sys.argv) == 1:
96 |         parser.print_help(sys.stderr)
97 |         sys.exit(1)
98 | 
99 |     if args.clear:
100 |         history_manager.clear_history()
101 |         print("FastAsk History cleared.")
102 |         exit()
103 | 
104 |     if args.enable_leaderboard:
105 |         config['enable_leaderboard'] = True
106 |         config_manager.save(config)
107 |         print("\033[92mLeaderboard enabled.\033[0m")
108 |         exit()
109 | 
110 |     if args.set_user:
111 |         config['user'] = args.set_user
112 |         config_manager.save(config)
113 |         print("\033[92mUser set to", config['user'], "\033[0m")
114 |         exit()
115 | 
116 |     # use fastask as llm by default if no llm is set
117 |     try:
118 |         llm = config['llm']
119 |     except KeyError:
120 |         config['llm'] = 'fastask'
121 |         config_manager.save(config)
122 | 
123 | 
124 |     if args.llm:
125 |         config['llm'] = args.llm.lower()
126 |         config_manager.save(config)
127 |         print("\033[94mFastAsk LLM set to", config['llm'], "\033[0m")
128 |         exit()
129 | 
130 |     question = ' '.join(args.question)
131 |     history = history_manager.get(5)
132 |     check_and_run_command(history, question)
133 |     askLLM(question, config['llm'])
134 | 
135 | 
136 | if __name__ == '__main__':
137 |     main()
138 | 
--------------------------------------------------------------------------------
/src/fastask/config.py:
--------------------------------------------------------------------------------
1 | import os
2 | import yaml
3 | 
4 | class Config:
5 |     def __init__(self):
6 |         self.config_dir = os.path.expanduser('~/.config/fastask')
7 |         self.config_path = os.path.join(self.config_dir, 'config.yaml')
8 |         self.default_config = {
9 |             'llm': 'fastask',
10 |             'enable_leaderboard': False,
11 |             'user': ''
12 |         }
13 |         if not os.path.exists(self.config_dir):
14 |             os.makedirs(self.config_dir)
15 |         if not os.path.isfile(self.config_path):
16 |             with open(self.config_path, 'w') as file:
17 |                 yaml.dump(self.default_config, file, default_flow_style=False)
18 | 
19 |     def load(self):
20 |         with open(self.config_path, 'r') as file:
21 |             config = yaml.safe_load(file)
22 |         return config
23 | 
24 |     def save(self, config):
25 |         with open(self.config_path, 'w') as file:
26 |             yaml.dump(config, file, default_flow_style=False)
27 | 
28 | 
--------------------------------------------------------------------------------
/src/fastask/history.py:
--------------------------------------------------------------------------------
1 | import tempfile
2 | import os
3 | import json
4 | 
5 | class History:
6 |     def __init__(self):
7 |         self.temp_dir = tempfile.gettempdir()
8 |         self.history_file_path = os.path.join(self.temp_dir, 'ask_history.json')
9 |         self.max_history_length = 5
10 | 
11 |     def add(self, question, answer):
12 |         history_entry = {"Question": question, "Answer": answer}
13 | 
14 |         if not os.path.exists(self.history_file_path):
15 |             with open(self.history_file_path, 'w') as f:
16 |                 json.dump([history_entry], f)
17 |         else:
18 |             try:
19 |                 with open(self.history_file_path, 'r') as f:
20 |                     history = json.load(f)
21 |             except json.JSONDecodeError:
22 |                 history = []
23 | 
24 |             history.append(history_entry)
25 | 
26 |             if len(history) > self.max_history_length:
27 |                 history = history[-self.max_history_length:]  # keep only the newest entries
28 | 
29 |             with open(self.history_file_path, 'w') as f:
30 |                 json.dump(history, f)
31 | 
32 |     def get(self, n):
33 |         if not os.path.exists(self.history_file_path):
34 |             return []
35 | 
36 |         with open(self.history_file_path, 'r') as f:
37 |             try:
38 |                 history = json.load(f)
39 |             except json.JSONDecodeError:
40 |                 return []
41 | 
42 |         return history[-n:]
43 | 
44 |     def clear_history(self):
45 |         if os.path.exists(self.history_file_path):
46 |             os.remove(self.history_file_path)
47 |         with open(self.history_file_path, 'w') as f:
48 |             pass  # Recreate the file as an empty history
--------------------------------------------------------------------------------
/src/fastask/llm.py:
--------------------------------------------------------------------------------
1 | from openai import AzureOpenAI, OpenAI
2 | from anthropic import Anthropic
3 | from dotenv import load_dotenv
4 | import os
5 | import platform
6 | import requests
7 | import json
8 | import re
9 | 
10 | from .history import History
11 | from .config import Config
12 | 
13 | load_dotenv()
14 | 
15 | config_manager = Config()
16 | config = config_manager.load()
17 | 
18 | class LLM:
19 |     def __init__(self, name):
20 |         self.name = name
21 | 
22 |     def using(self):
23 |         print(f"\033[91mFASTASK: Using {self.name}\033[0m")
24 | 
25 | class FastAskClient(LLM):
26 |     def __init__(self):
27 |         super().__init__("FastAsk")
28 | 
29 |     def create_client(self, messages):
30 |         if config['enable_leaderboard']:
31 |             response = requests.post(url="https://fastask.fly.dev/itsfast", json={"messages": messages, "user": config['user'], "log": True}).json()
32 |             return response
33 |         else:
34 |             response = requests.post(url="https://fastask.fly.dev/itsfast", json={"messages": messages}).json()
35 |             return response
36 | 
37 | class FastAskLocalClient(LLM):
38 |     def __init__(self):
39 |         super().__init__("FastAsk Local")
40 | 
41 |     def create_client(self, messages):
42 |         self.using()
43 | 
44 |         if config['enable_leaderboard']:
45 |             response = requests.post(url="http://localhost:8080/itsfast", json={"messages": messages, "user": config['user'], "log": True}).json()
46 |             return response
47 |         else:
48 |             response = requests.post(url="http://localhost:8080/itsfast", json={"messages": messages}).json()
49 |             return response
50 | 
51 | class AzureClient(LLM):
52 |     def __init__(self):
53 |         super().__init__("Azure OpenAI")
54 |         self.api_key = os.environ.get("AZURE_OPENAI_API_KEY")
55 |         self.resource = os.environ.get("AZURE_RESOURCE_GROUP")
56 |         self.deployment_name = os.environ.get("AZURE_DEPLOYMENT_NAME")
57 | 
58 |     def create_client(self, messages):
59 |         self.using()
60 |         client = AzureOpenAI(
61 |             api_key=self.api_key,
62 |             api_version="2023-12-01-preview",
63 |             azure_endpoint=f"https://{self.resource}.openai.azure.com"
64 |         )
65 |         completion_stream = client.chat.completions.create(
66 |             messages=messages,
67 |             model=self.deployment_name,
68 |             stream=False,
69 |             user='fastaskapi'
70 |         )
71 |         return {"response": completion_stream.choices[0].message.content}
72 | 
73 | class GroqClient(LLM):
74 |     def __init__(self):
75 |         super().__init__("Groq")
76 |         self.api_key = os.environ.get("GROQ_API_KEY")
77 | 
78 |     def create_client(self, messages):
79 |         self.using()
80 |         client = OpenAI(
81 |             api_key=self.api_key,
82 |             base_url="https://api.groq.com/openai/v1"
83 |         )
84 |         response = client.chat.completions.create(
85 |             messages=messages,
86 |             model="llama2-70b-4096"
87 |         )
88 |         return {"response": response.choices[0].message.content}
89 | 
90 | class OpenAIClient(LLM):
91 |     def __init__(self):
92 |         super().__init__("OpenAI")
93 |         self.api_key = os.environ.get("OPENAI_API_KEY")
os.environ.get("OPENAI_API_KEY") 94 | 95 | def create_client(self, messages): 96 | self.using() 97 | client = OpenAI( 98 | api_key=self.api_key 99 | ) 100 | response = client.chat.completions.create( 101 | messages=messages, 102 | model="gpt-3.5-turbo-0125" 103 | ) 104 | return {"response": response.choices[0].message.content} 105 | 106 | class TogetherAIClient(LLM): 107 | def __init__(self): 108 | super().__init__("TogetherAI") 109 | self.api_key = os.environ.get("TOGETHERAI_API_KEY") 110 | 111 | def create_client(self, messages): 112 | self.using() 113 | client = OpenAI( 114 | api_key=self.api_key, 115 | base_url="https://api.together.xyz/v1" 116 | ) 117 | response = client.chat.completions.create( 118 | messages=messages, 119 | model="mistralai/Mixtral-8x7B-Instruct-v0.1" 120 | ) 121 | return {"response": response.choices[0].message.content} 122 | 123 | class AnthropicClient(LLM): 124 | def __init__(self): 125 | super().__init__("Anthropic Claude") 126 | self.api_key = os.environ.get("ANTHROPIC_API_KEY") 127 | 128 | def create_client(self, messages): 129 | self.using() 130 | client = Anthropic(api_key=self.api_key) 131 | 132 | # Convert OpenAI message format to Anthropic format 133 | system_prompt = next( 134 | (msg["content"] for msg in messages if msg["role"] == "system"), None 135 | ) 136 | conversation = [] 137 | 138 | for msg in messages: 139 | if msg["role"] == "user": 140 | conversation.append({"role": "user", "content": msg["content"]}) 141 | elif msg["role"] == "assistant": 142 | conversation.append({"role": "assistant", "content": msg["content"]}) 143 | 144 | # Create the message with system prompt if present 145 | response = client.messages.create( 146 | model="claude-3-5-sonnet-20241022", 147 | system=system_prompt, 148 | messages=conversation, 149 | max_tokens=1024, 150 | ) 151 | 152 | return {"response": response.content[0].text} 153 | 154 | def parse_response(response): 155 | pattern = r'\[\s*\{\s*"command"\s*:\s*"(.*?)"\s*,\s*"desc"\s*:\s*"(.*?)"\s*\}\s*(?:,\s*\{\s*"command"\s*:\s*"(.*?)"\s*,\s*"desc"\s*:\s*"(.*?)"\s*\}\s*)*\]' 156 | matches = re.findall(pattern, response, re.DOTALL) 157 | 158 | commands = [] 159 | for match in matches: 160 | for i in range(0, len(match), 2): 161 | if match[i] and match[i+1]: 162 | commands.append({"command": match[i], "desc": match[i+1]}) 163 | 164 | return commands 165 | 166 | 167 | def askLLM(q, client_type): 168 | messages = [] 169 | history_manager = History() 170 | 171 | # System Prompt 172 | messages.append({ 173 | "role": "system", "content": """Lets play a game of knowledge and formatting. We are playing a command line knowledge game. You are a command line utility that answers questions quickly and briefly in JSON format. If there were a few commands you could have given, show them all. Remember that you print to a console, so make it easy to read when possible. The user is on the operating system: """ + platform.system() + """. Bias towards short answers always, each row should fit in one unwrapped line of the terminal, less than 40 characters! Only 3 rows maximum. 
173 |         "role": "system", "content": """Let's play a game of knowledge and formatting. We are playing a command line knowledge game. You are a command line utility that answers questions quickly and briefly in JSON format. If there were a few commands you could have given, show them all. Remember that you print to a console, so make it easy to read when possible. The user is on the operating system: """ + platform.system() + """. Bias towards short answers always, each row should fit in one unwrapped line of the terminal, less than 40 characters! Only 3 rows maximum. Always follow this format:\n[\n{"command": "<command>", "desc": "<description>"},\n]\nIt's extremely important to follow this response structure."""
174 |     })
175 | 
176 |     # Fake Examples
177 |     messages.extend([
178 |         {"role": "user", "content": "how do i convert image size in ffmpeg"},
179 |         {"role": "assistant", "content":"""[
180 | {"command": "ffmpeg -i input.jpg -filter:v scale=-1:1024 output.jpg", "desc": "Resize to height 1024, keep aspect."},
181 | {"command": "ffmpeg -i input.jpg -filter:v scale=1024:1024 output.jpg", "desc": "Resize to exactly 1024x1024."},
182 | {"command": "ffmpeg -i input.jpg -filter:v scale=iw/2:ih/2 output.jpg", "desc": "Halve width and height."}
183 | ]"""},
184 | 
185 |         {"role": "user", "content": "list items in dir by date"},
186 |         {"role": "assistant", "content":"""[
187 | {"command": "ls -lt", "desc": "List items sorted by date (newest first)."},
188 | {"command": "ls -ltr", "desc": "Adding 'r' sorts oldest first."}
189 | ]"""},
190 | 
191 |         {"role": "user", "content": "how do i make a new docker to run a fresh ubuntu to test on"},
192 |         {"role": "assistant", "content":"""[
193 | {"command": "docker run -it ubuntu", "desc": "Runs a new Docker container with Ubuntu."},
194 | {"command": "docker run -it ubuntu bash", "desc": "Also opens a bash shell."}
195 | ]"""},
196 | 
197 |         {"role": "user", "content": "find text in files in linux"},
198 |         {"role": "assistant", "content":"""[
199 | {"command": "grep 'text' *", "desc": "Search in current directory."},
200 | {"command": "grep -r 'text' .", "desc": "Recursive search."},
201 | {"command": "find / -type f -exec grep -l 'text' {} +", "desc": "Find in all files."}
202 | ]"""},
203 | 
204 |         {"role": "user", "content": "how to change file permissions in linux"},
205 |         {"role": "assistant", "content":"""[
206 | {"command": "chmod 755 filename", "desc": "rwx for owner, rx for others."},
207 | {"command": "chmod +x filename", "desc": "Make file executable for all."},
208 | {"command": "chmod u+w,g-w,o=r filename", "desc": "Set specific permissions."}
209 | ]"""}
210 |     ])
211 | 
212 |     # Retrieve and append history
213 |     history = history_manager.get(5)  # Get the last 5 entries
214 |     if history:
215 |         for entry in history:
216 |             messages.extend([
217 |                 {"role": "user", "content": entry["Question"]},
218 |                 {"role": "assistant", "content": entry["Answer"]}
219 |             ])
220 | 
221 |     # Add user query
222 |     messages.append({
223 |         "role": "user", "content": q + "\n\n Please respond with the correct structure for grading."
224 |     })
225 | 
226 |     client = {
227 |         "fastask": FastAskClient(),
228 |         "fastask-local": FastAskLocalClient(),
229 |         "azure": AzureClient(),
230 |         "groq": GroqClient(),
231 |         "openai": OpenAIClient(),
232 |         "togetherai": TogetherAIClient(),
233 |         "anthropic": AnthropicClient(),
234 |     }.get(client_type)
235 | 
236 |     if not client:
237 |         raise ValueError("Invalid client type specified.")
238 | 
239 |     response = client.create_client(messages)
240 | 
241 |     try:
242 |         commands = parse_response(response["response"])
243 |         if not commands:
244 |             raise ValueError("No commands found. Please ensure your query is correct.")
245 |         for i, item in enumerate(commands):
246 |             print(f"{i+1}. '{item['command']}' - {item['desc']}")
247 |         history_manager.add(q, response['response'])
248 |     except Exception:
249 |         print(response['response'])
250 |         print("\033[91mhmm... something went wrong...try again maybe?\033[0m")
251 |         exit()
252 | 
253 |     print()
254 |     print()
255 | 
--------------------------------------------------------------------------------