├── .gitignore ├── CODE_OF_CONDUCT.md ├── LICENSE ├── README.md ├── alfred ├── Ask OpenAI GPT.alfredworkflow └── python │ ├── qa.py │ ├── qal.py │ └── requirements.txt ├── assets ├── ask-gpt-alfred-installation.gif ├── ask-gpt-alfred-qa.gif ├── ask-gpt-alfred-qal.gif ├── ask-gpt-cli-run.gif ├── ask-gpt-raycast-ask-chat.gif ├── ask-gpt-raycast-installation.gif └── ask-gpt-raycast.gif ├── cli ├── README.md ├── go.mod ├── go.sum └── main.go └── raycast ├── README.md └── ask-gpt.py /.gitignore: -------------------------------------------------------------------------------- 1 | .env 2 | ask-gpt 3 | .ask-gpt/ 4 | .DS_Store 5 | openai.toml -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Contributor Covenant Code of Conduct 2 | 3 | ## Our Pledge 4 | 5 | We as members, contributors, and leaders pledge to make participation in our 6 | community a harassment-free experience for everyone, regardless of age, body 7 | size, visible or invisible disability, ethnicity, sex characteristics, gender 8 | identity and expression, level of experience, education, socio-economic status, 9 | nationality, personal appearance, race, religion, or sexual identity 10 | and orientation. 11 | 12 | We pledge to act and interact in ways that contribute to an open, welcoming, 13 | diverse, inclusive, and healthy community. 
14 | 15 | ## Our Standards 16 | 17 | Examples of behavior that contributes to a positive environment for our 18 | community include: 19 | 20 | * Demonstrating empathy and kindness toward other people 21 | * Being respectful of differing opinions, viewpoints, and experiences 22 | * Giving and gracefully accepting constructive feedback 23 | * Accepting responsibility and apologizing to those affected by our mistakes, 24 | and learning from the experience 25 | * Focusing on what is best not just for us as individuals, but for the 26 | overall community 27 | 28 | Examples of unacceptable behavior include: 29 | 30 | * The use of sexualized language or imagery, and sexual attention or 31 | advances of any kind 32 | * Trolling, insulting or derogatory comments, and personal or political attacks 33 | * Public or private harassment 34 | * Publishing others' private information, such as a physical or email 35 | address, without their explicit permission 36 | * Other conduct which could reasonably be considered inappropriate in a 37 | professional setting 38 | 39 | ## Enforcement Responsibilities 40 | 41 | Community leaders are responsible for clarifying and enforcing our standards of 42 | acceptable behavior and will take appropriate and fair corrective action in 43 | response to any behavior that they deem inappropriate, threatening, offensive, 44 | or harmful. 45 | 46 | Community leaders have the right and responsibility to remove, edit, or reject 47 | comments, commits, code, wiki edits, issues, and other contributions that are 48 | not aligned to this Code of Conduct, and will communicate reasons for moderation 49 | decisions when appropriate. 50 | 51 | ## Scope 52 | 53 | This Code of Conduct applies within all community spaces, and also applies when 54 | an individual is officially representing the community in public spaces. 
55 | Examples of representing our community include using an official e-mail address, 56 | posting via an official social media account, or acting as an appointed 57 | representative at an online or offline event. 58 | 59 | ## Enforcement 60 | 61 | Instances of abusive, harassing, or otherwise unacceptable behavior may be 62 | reported to the community leaders responsible for enforcement at 63 | hello@techbranch.net. 64 | All complaints will be reviewed and investigated promptly and fairly. 65 | 66 | All community leaders are obligated to respect the privacy and security of the 67 | reporter of any incident. 68 | 69 | ## Enforcement Guidelines 70 | 71 | Community leaders will follow these Community Impact Guidelines in determining 72 | the consequences for any action they deem in violation of this Code of Conduct: 73 | 74 | ### 1. Correction 75 | 76 | **Community Impact**: Use of inappropriate language or other behavior deemed 77 | unprofessional or unwelcome in the community. 78 | 79 | **Consequence**: A private, written warning from community leaders, providing 80 | clarity around the nature of the violation and an explanation of why the 81 | behavior was inappropriate. A public apology may be requested. 82 | 83 | ### 2. Warning 84 | 85 | **Community Impact**: A violation through a single incident or series 86 | of actions. 87 | 88 | **Consequence**: A warning with consequences for continued behavior. No 89 | interaction with the people involved, including unsolicited interaction with 90 | those enforcing the Code of Conduct, for a specified period of time. This 91 | includes avoiding interactions in community spaces as well as external channels 92 | like social media. Violating these terms may lead to a temporary or 93 | permanent ban. 94 | 95 | ### 3. Temporary Ban 96 | 97 | **Community Impact**: A serious violation of community standards, including 98 | sustained inappropriate behavior. 
99 | 100 | **Consequence**: A temporary ban from any sort of interaction or public 101 | communication with the community for a specified period of time. No public or 102 | private interaction with the people involved, including unsolicited interaction 103 | with those enforcing the Code of Conduct, is allowed during this period. 104 | Violating these terms may lead to a permanent ban. 105 | 106 | ### 4. Permanent Ban 107 | 108 | **Community Impact**: Demonstrating a pattern of violation of community 109 | standards, including sustained inappropriate behavior, harassment of an 110 | individual, or aggression toward or disparagement of classes of individuals. 111 | 112 | **Consequence**: A permanent ban from any sort of public interaction within 113 | the community. 114 | 115 | ## Attribution 116 | 117 | This Code of Conduct is adapted from the [Contributor Covenant][homepage], 118 | version 2.0, available at 119 | https://www.contributor-covenant.org/version/2/0/code_of_conduct.html. 120 | 121 | Community Impact Guidelines were inspired by [Mozilla's code of conduct 122 | enforcement ladder](https://github.com/mozilla/diversity). 123 | 124 | [homepage]: https://www.contributor-covenant.org 125 | 126 | For answers to common questions about this code of conduct, see the FAQ at 127 | https://www.contributor-covenant.org/faq. Translations are available at 128 | https://www.contributor-covenant.org/translations. 
129 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2023 techbranch 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # OpenAI Model Communication 2 | 3 | This project provides a convenient way to communicate with OpenAI models through various interfaces. The following methods are currently supported: 4 | 5 | ## Golang executable 6 | 7 | Run the cli tool and enter your queries directly in the terminal. 8 | 9 | ```bash 10 | ./ask-gpt "Why do trees grow branches?" 
11 | ``` 12 | 13 | ## Alfred workflow 14 | 15 | Use the provided Alfred workflow to quickly access the model and input your queries without leaving your current application. 16 | 17 | Fast answer: 18 | 19 | ``` 20 | qa Why do trees grow branches? 21 | ``` 22 | 23 | More expensive and larger answer: 24 | 25 | ``` 26 | qal Why do trees grow branches? 27 | ``` 28 | 29 | ### Fast backend 30 | 31 |

32 | 33 |

34 | 35 | ### More expensive and capable backends 36 | 37 |

38 | 39 |

40 | 41 | ## Raycast script 42 | 43 | Activate Raycast, type `ask`, then press Tab to start filling the prompt argument. 44 | 45 | ``` 46 | ask Why do trees grow branches? 47 | ``` 48 | 49 | Find out more in the README file located in the `raycast` directory. 50 | 51 |

52 | 53 |

54 | 55 | ## Other tools? 56 | 57 | As you can see, macOS is doubly covered, but what about Windows and Linux? I'll be writing other integrations too, but you're more than welcome to submit PRs with your implementation for tools like Wox or Cerebro! 58 | 59 | # Installation 60 | 61 | - Clone the repository to your local machine. 62 | 63 | ## Alfred workflow: 64 | 65 | - Install `requests` using `python3 -m pip install requests` 66 | - `Cmd+click` the `.alfredworkflow` file and follow the instructions in the config step. You'll have to supply your OpenAI API key. 67 | 68 |
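Under the hood, both `qa` and `qal` boil down to POST requests against OpenAI's completion endpoint, with the workflow's config step values passed in as environment variables. Here's a minimal sketch of that call; `build_completion_payload` and `ask` are illustrative names rather than functions from the workflow, and the fallback defaults are only a guess at sensible values (the workflow itself always supplies these variables):

```python
import json
import os


def build_completion_payload(prompt: str) -> dict:
    # The Alfred workflow exports its config step values ('model', 'maxtokens',
    # 'temperature') as environment variables; qa.py reads them the same way.
    return {
        "model": os.getenv("model", "text-davinci-003"),
        "prompt": prompt,
        "max_tokens": int(os.getenv("maxtokens", "30")),
        "temperature": float(os.getenv("temperature", "0.7")),
    }


def ask(prompt: str, api_key: str) -> str:
    import requests  # third-party: python3 -m pip install requests

    # Single round-trip to the completions endpoint, exactly as qa.py does it
    response = requests.post(
        "https://api.openai.com/v1/completions",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        data=json.dumps(build_completion_payload(prompt)),
    )
    return response.json()["choices"][0]["text"].strip()
```

On top of this, qa.py wraps the answer in Alfred's JSON item format so it can be shown, copied, and large-typed from the results list.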

69 | 70 |

71 | 72 | ## Golang executable 73 | 74 | Build it yourself with the [Go](https://go.dev/doc/install) toolchain: 75 | 76 | ``` 77 | cd cli 78 | go build 79 | ``` 80 | 81 | Create a `.env` file next to the executable and fill it in. More details in the [cli README](cli/README.md). 82 | 83 | Example of a `.env` file: 84 | 85 | ``` 86 | OPENAI_API_KEY=sk-abcdefg 87 | OPENAI_MODEL=text-davinci-003 88 | OPENAI_TEMPERATURE=0.7 89 | OPENAI_MAX_TOKENS=30 90 | ``` 91 | 92 |
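The CLI starts from built-in defaults (model `text-davinci-003`, 30 tokens, temperature 0.7) and lets any `OPENAI_*` variable from the environment or the `.env` file override them. A rough Python sketch of that precedence, where `resolve_config` is an illustrative helper and not part of the Go code:

```python
import os


def resolve_config(env=None) -> dict:
    # Mirrors the override logic in cli/main.go: built-in defaults, overridden
    # by OPENAI_* variables (godotenv merges .env into the process environment).
    env = os.environ if env is None else env
    return {
        "model": env.get("OPENAI_MODEL") or "text-davinci-003",
        "max_tokens": int(env.get("OPENAI_MAX_TOKENS") or 30),
        "temperature": float(env.get("OPENAI_TEMPERATURE") or 0.7),
    }
```

So an empty `.env` still works; each key you add simply replaces one default.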

93 | 94 |

95 | 96 | ## Raycast script 97 | 98 | 1. Add the `ask-gpt.py` script to a directory of your choice, be it an already existing scripts directory or a new one like this `raycast` directory. 99 | > If it's a new directory, you'll have to tell Raycast about it 100 | - In Raycast, go to `Extensions`, then `scripts`, click the `Add` icon, `pick script directory` and point it to the directory you chose. 101 | 2. Create a new file in the same scripting directory and name it `openai.toml`. Put your API key in it like `apikey = "sk-abcde"` 102 | 3. This script references `#!/usr/bin/env python3` for Python, but you might want to repoint it at an installation that works for you. The script needs `requests` and `toml`, though feel free to modify it. 103 | 104 | You should be good to go. Fire up Raycast, type `ask`, then press Tab to start filling the prompt argument. 105 | 106 | More details in the [Raycast README](raycast/README.md). 107 | 108 |
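Step 2 is all the configuration the script needs: it loads `openai.toml` and pulls out `apikey`. A sketch of that lookup, where `load_api_key` is an illustrative name and the `tomllib` fallback is an assumption for newer Pythons (the script itself uses the third-party `toml` package):

```python
import os

try:
    import tomllib  # stdlib since Python 3.11

    def _parse_toml(path):
        with open(path, "rb") as f:
            return tomllib.load(f)
except ImportError:
    import toml  # third-party package the script itself imports

    def _parse_toml(path):
        return toml.load(path)


def load_api_key(path: str = "openai.toml") -> str:
    # Fail loudly, like ask-gpt.py does, if the config file is missing
    if not os.path.exists(path):
        raise FileNotFoundError(f'{path} is missing - create it with: apikey = "sk-abcde"')
    return _parse_toml(path)["apikey"]
```

Because the path is relative, the `openai.toml` file has to live in the directory the script runs from, i.e. next to `ask-gpt.py`.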

109 | 110 |

111 | -------------------------------------------------------------------------------- /alfred/Ask OpenAI GPT.alfredworkflow: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tech-branch/ask-gpt/fbf1a507959d3cc27ce1da183967febf77c67cae/alfred/Ask OpenAI GPT.alfredworkflow -------------------------------------------------------------------------------- /alfred/python/qa.py: -------------------------------------------------------------------------------- 1 | import json 2 | import requests 3 | import sys 4 | import os 5 | 6 | # Get API key 7 | api_key = os.getenv('apikey') 8 | 9 | # Define prompt 10 | prompt = sys.argv[1] 11 | 12 | # Define request headers 13 | headers = { 14 | "Content-Type": "application/json", 15 | "Authorization": f"Bearer {api_key}", 16 | } 17 | 18 | # Define request body 19 | data = { 20 | "model": os.getenv('model'), 21 | "prompt": prompt, 22 | "max_tokens": int(os.getenv('maxtokens')), 23 | "temperature": float(os.getenv('temperature')) 24 | } 25 | 26 | # Send request to GPT-3 completion API 27 | response = requests.post( 28 | "https://api.openai.com/v1/completions", 29 | headers=headers, 30 | data=json.dumps(data) 31 | ) 32 | 33 | clean_response = response.json()["choices"][0]["text"].replace("\n", "") 34 | 35 | # Print response 36 | print(json.dumps({ 37 | "variables": { 38 | "prompt": prompt, 39 | "response": clean_response 40 | }, 41 | "items": [ 42 | { 43 | "uid": "1", 44 | "title": clean_response[:64], 45 | "subtitle": clean_response[64:], 46 | "arg": clean_response, 47 | "text": { 48 | "copy": clean_response, 49 | "largetype": clean_response 50 | } 51 | } 52 | ] 53 | })) 54 | -------------------------------------------------------------------------------- /alfred/python/qal.py: -------------------------------------------------------------------------------- 1 | import json 2 | import requests 3 | import sys 4 | import os 5 | 6 | # Get API key 7 | api_key = 
os.getenv('apikey') 8 | 9 | # Define prompt 10 | prompt = sys.argv[1] 11 | 12 | # Define request headers 13 | headers = { 14 | "Content-Type": "application/json", 15 | "Authorization": f"Bearer {api_key}", 16 | } 17 | 18 | # Define request body 19 | request_bodies = [ 20 | { 21 | "model": os.getenv('model'), 22 | "prompt": prompt, 23 | "max_tokens": int(os.getenv('maxtokens')), 24 | "temperature": float(os.getenv('temperature')) 25 | }, 26 | { 27 | "model": os.getenv('model'), 28 | "prompt": prompt, 29 | "max_tokens": int(os.getenv('maxtokens_expensive')), 30 | "temperature": float(os.getenv('temperature')) 31 | }, 32 | { 33 | "model": os.getenv('model_expensive'), 34 | "prompt": prompt, 35 | "max_tokens": int(os.getenv('maxtokens')), 36 | "temperature": float(os.getenv('temperature')) 37 | }, 38 | { 39 | "model": os.getenv('model_expensive'), 40 | "prompt": prompt, 41 | "max_tokens": int(os.getenv('maxtokens_expensive')), 42 | "temperature": float(os.getenv('temperature')) 43 | }, 44 | ] 45 | responses = [] 46 | # Send request to GPT-3 completion API 47 | for data in request_bodies: 48 | try: 49 | response = requests.post( 50 | "https://api.openai.com/v1/completions", 51 | headers=headers, 52 | data=json.dumps(data) 53 | ) 54 | response_json = response.json() 55 | response_object = { 56 | "model-and-tokens": f"{response_json['model']} @ {response_json['usage']['completion_tokens']} tokens", 57 | "response": response_json["choices"][0]["text"].replace("\n", "") 58 | } 59 | responses.append(response_object) 60 | except: 61 | pass 62 | 63 | items = [] 64 | for idx, response in enumerate(responses): 65 | items.append({ 66 | "uid": str(idx), 67 | "title": response["response"][:64], 68 | "subtitle": response["response"][64:], 69 | "arg": response["response"], 70 | "text": { 71 | "copy": response["response"], 72 | "largetype": response["response"] 73 | } 74 | }) 75 | 76 | # Print response 77 | print(json.dumps({ 78 | "variables": { 79 | "prompt": prompt, 80 | 
"responses": json.dumps(responses, indent=2) 81 | }, 82 | "items": items 83 | })) 84 | 85 | # Why do trees have so many branches? 86 | -------------------------------------------------------------------------------- /alfred/python/requirements.txt: -------------------------------------------------------------------------------- 1 | requests -------------------------------------------------------------------------------- /assets/ask-gpt-alfred-installation.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tech-branch/ask-gpt/fbf1a507959d3cc27ce1da183967febf77c67cae/assets/ask-gpt-alfred-installation.gif -------------------------------------------------------------------------------- /assets/ask-gpt-alfred-qa.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tech-branch/ask-gpt/fbf1a507959d3cc27ce1da183967febf77c67cae/assets/ask-gpt-alfred-qa.gif -------------------------------------------------------------------------------- /assets/ask-gpt-alfred-qal.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tech-branch/ask-gpt/fbf1a507959d3cc27ce1da183967febf77c67cae/assets/ask-gpt-alfred-qal.gif -------------------------------------------------------------------------------- /assets/ask-gpt-cli-run.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tech-branch/ask-gpt/fbf1a507959d3cc27ce1da183967febf77c67cae/assets/ask-gpt-cli-run.gif -------------------------------------------------------------------------------- /assets/ask-gpt-raycast-ask-chat.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tech-branch/ask-gpt/fbf1a507959d3cc27ce1da183967febf77c67cae/assets/ask-gpt-raycast-ask-chat.gif 
-------------------------------------------------------------------------------- /assets/ask-gpt-raycast-installation.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tech-branch/ask-gpt/fbf1a507959d3cc27ce1da183967febf77c67cae/assets/ask-gpt-raycast-installation.gif -------------------------------------------------------------------------------- /assets/ask-gpt-raycast.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tech-branch/ask-gpt/fbf1a507959d3cc27ce1da183967febf77c67cae/assets/ask-gpt-raycast.gif -------------------------------------------------------------------------------- /cli/README.md: -------------------------------------------------------------------------------- 1 | ## Installation 2 | 3 | Building from source: 4 | 5 | - [Install Go](https://go.dev/doc/install) 6 | - Navigate to the `cli` directory 7 | - run `go build` 8 | 9 | ``` 10 | cd cli 11 | go build 12 | ``` 13 | 14 | ## Use 15 | 16 | ```bash 17 | ./ask-gpt "Why do trees grow branches?" 18 | ``` 19 | 20 |

21 | 22 |

23 | 24 | 25 | ## Config 26 | 27 | Required 28 | 29 | - OPENAI_API_KEY 30 | 31 | Optional 32 | 33 | - OPENAI_MAX_TOKENS 34 | - Can vary from `1` to `2048` (newer models allow `4000` for large answers) 35 | - OPENAI_TEMPERATURE 36 | - `0.0` meaning the safest answer, `1.0` most 'diverse', here the default is `0.7` 37 | - OPENAI_MODEL 38 | - [Reference](https://beta.openai.com/docs/models/gpt-3) 39 | - `text-davinci-003` 40 | - `text-curie-001` 41 | - `text-babbage-001` 42 | - `text-ada-001` 43 | 44 | Example of a `.env` file: 45 | 46 | ``` 47 | OPENAI_API_KEY=sk-abcdefg 48 | OPENAI_MODEL=text-davinci-003 49 | OPENAI_TEMPERATURE=0.7 50 | OPENAI_MAX_TOKENS=30 51 | ``` 52 | -------------------------------------------------------------------------------- /cli/go.mod: -------------------------------------------------------------------------------- 1 | module github.com/tech-branch/ask-gpt 2 | 3 | go 1.18 4 | 5 | require ( 6 | github.com/PullRequestInc/go-gpt3 v1.1.11 7 | github.com/joho/godotenv v1.4.0 8 | ) 9 | -------------------------------------------------------------------------------- /cli/go.sum: -------------------------------------------------------------------------------- 1 | github.com/PullRequestInc/go-gpt3 v1.1.11 h1:kZtCbAnUEKfUS50a+0TR2p9rJtz4t57THf5cxN3Ye/o= 2 | github.com/PullRequestInc/go-gpt3 v1.1.11/go.mod h1:F9yzAy070LhkqHS2154/IH0HVj5xq5g83gLTj7xzyfw= 3 | github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= 4 | github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo= 5 | github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= 6 | github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= 7 | github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU= 8 | github.com/joefitzgerald/rainbow-reporter v0.1.0/go.mod h1:481CNgqmVHQZzdIbN52CupLJyoVwB10FQ/IQlF1pdL8= 9 | github.com/joho/godotenv 
v1.3.0/go.mod h1:7hK45KPybAkOC6peb+G5yklZfMxEjkZhHbwpqxOKXbg= 10 | github.com/joho/godotenv v1.4.0 h1:3l4+N6zfMWnkbPEXKng2o2/MR5mSwTrBih4ZEkkz1lg= 11 | github.com/joho/godotenv v1.4.0/go.mod h1:f4LDr5Voq0i2e/R5DDNOoa2zzDfwtkZa6DnEwAbqwq4= 12 | github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= 13 | github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= 14 | github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= 15 | github.com/maxbrunsfeld/counterfeiter/v6 v6.2.3/go.mod h1:1ftk08SazyElaaNvmqAfZWGwJzshjCfBXDLoQtPAMNk= 16 | github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= 17 | github.com/onsi/ginkgo v1.8.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= 18 | github.com/onsi/gomega v1.9.0/go.mod h1:Ho0h+IUsWyvy1OpqCwxlQ/21gkhVunqlU8fDGcoTdcA= 19 | github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= 20 | github.com/sclevine/spec v1.2.0/go.mod h1:W4J29eT/Kzv7/b9IWLB055Z+qvVC9vt0Arko24q7p+U= 21 | github.com/sclevine/spec v1.4.0/go.mod h1:LvpgJaFyvQzRvc1kaDs0bulYwzC70PbiYjC4QnFHkOM= 22 | github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= 23 | github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= 24 | golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= 25 | golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= 26 | golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg= 27 | golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= 28 | golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= 29 | golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod 
h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= 30 | golang.org/x/net v0.0.0-20190628185345-da137c7871d7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= 31 | golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= 32 | golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= 33 | golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= 34 | golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= 35 | golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= 36 | golang.org/x/sys v0.0.0-20190626221950-04f50cda93cb/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= 37 | golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= 38 | golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= 39 | golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= 40 | golang.org/x/tools v0.0.0-20200301222351-066e0c02454c/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= 41 | golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= 42 | golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= 43 | gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= 44 | gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= 45 | gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys= 46 | gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw= 47 | gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= 48 | gopkg.in/yaml.v3 
v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= 49 | -------------------------------------------------------------------------------- /cli/main.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "context" 5 | "fmt" 6 | "log" 7 | "os" 8 | "strconv" 9 | 10 | "github.com/PullRequestInc/go-gpt3" 11 | "github.com/joho/godotenv" 12 | ) 13 | 14 | func main() { 15 | 16 | // load .env file if it exists 17 | godotenv.Load() 18 | 19 | // ------------ 20 | // Parameters 21 | // ------------ 22 | 23 | tokens := 30 24 | temperature := 0.7 25 | engine := "text-davinci-003" 26 | 27 | // ----------- 28 | // Overrides 29 | // ----------- 30 | 31 | maxTokens := os.Getenv("OPENAI_MAX_TOKENS") 32 | if maxTokens != "" { 33 | // convert string maxTokens to int tokens 34 | itokens, err := strconv.Atoi(maxTokens) 35 | if err != nil { 36 | log.Fatalln(err) 37 | } 38 | tokens = itokens 39 | } 40 | 41 | strTemp := os.Getenv("OPENAI_TEMPERATURE") 42 | if strTemp != "" { 43 | // convert string Temperature to float Temperature 44 | fTemp, err := strconv.ParseFloat(strTemp, 32) 45 | if err != nil { 46 | log.Fatalln(err) 47 | } 48 | temperature = fTemp 49 | } 50 | 51 | model := os.Getenv("OPENAI_MODEL") 52 | if model != "" { 53 | engine = model 54 | } 55 | 56 | // ------------ 57 | // API Config 58 | // ------------ 59 | 60 | apiKey := os.Getenv("OPENAI_API_KEY") 61 | if apiKey == "" { 62 | log.Fatalln("Missing API KEY, set OPENAI_API_KEY environment variable or use the .env file") 63 | } 64 | 65 | ctx := context.Background() 66 | client := gpt3.NewClient(apiKey) 67 | 68 | // read prompt from the command line argument 69 | prompt := os.Args[1] 70 | 71 | response, err := client.CompletionWithEngine(ctx, 72 | engine, 73 | gpt3.CompletionRequest{ 74 | Prompt: []string{prompt}, 75 | MaxTokens: gpt3.IntPtr(tokens), 76 | Temperature: gpt3.Float32Ptr(float32(temperature)), 77 | }, 78 | ) 79 | 
if err != nil { 80 | log.Fatalln(err) 81 | } 82 | 83 | output := response.Choices[0].Text 84 | 85 | // -------------------- 86 | // Display the result 87 | // -------------------- 88 | 89 | fmt.Println(output) 90 | } 91 | -------------------------------------------------------------------------------- /raycast/README.md: -------------------------------------------------------------------------------- 1 | ## Script installation 2 | 3 | 1. Add the `ask-gpt.py` script to a directory of your choice, be it an already existing scripts directory or a new one. 4 | > If it's a new directory, you'll have to tell Raycast about it 5 | - In Raycast, go to `Extensions`, then `scripts`, click the `Add` icon, `pick script directory` and point it to the directory you chose. 6 | 2. Create a new file in the scripting directory and name it `openai.toml`. Put your API key in it like `apikey = "sk-abcde"` 7 | 3. This script references `#!/usr/bin/env python3` for Python, but you might want to repoint it at an installation that works for you. 8 | 9 |

10 | 11 |

12 | 13 | You should be good to go. Fire up Raycast, type `ask`, then press Tab to start filling the prompt argument. 14 | 15 |

16 | 17 |

18 | 19 | ## Configuration 20 | 21 | Feel free to modify the script, there's plenty to adjust to your liking. 22 | 23 | The script is commented in a way that should help navigate it pretty easily. 24 | 25 | The most notable things you might want to tweak are the constants near the top of the script; these specify the `model`, `tokens` and `temperature`. I encourage you to read a bit more about these in the OpenAI documentation. 26 | 27 | All outputs are saved to files under `.ask-gpt/outputs/` for your future reference. 28 | -------------------------------------------------------------------------------- /raycast/ask-gpt.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | # Required parameters: 4 | # @raycast.schemaVersion 1 5 | # @raycast.title Ask GPT 6 | # @raycast.mode fullOutput 7 | 8 | # Optional parameters: 9 | # @raycast.icon 🤖 10 | # @raycast.argument1 { "type": "text", "placeholder": "Prompt for the model" } 11 | # @raycast.packageName OpenAI GPT Productivity Toolset 12 | 13 | # Documentation: 14 | # @raycast.description Ask OpenAI GPT models a question 15 | # @raycast.author Tomasz Sobota 16 | # @raycast.authorURL https://techbranch.net 17 | 18 | import json 19 | import requests 20 | import toml 21 | import sys 22 | import os 23 | 24 | # ------------------- 25 | # OpenAI PARAMETERS 26 | # ------------------- 27 | # modify to your preference 28 | 29 | MODEL = "gpt-3.5-turbo" # Most capable model, currently also very cheap 30 | # MODEL = "text-davinci-003" # Most expensive, slow but very capable model 31 | # MODEL = "text-curie-001" # Less expensive, faster and almost as capable model 32 | # MODEL = "text-ada-001" # Least expensive, fastest but least capable model 33 | 34 | # MAX_TOKENS = 20 # Allow only brief answers, might be too general 35 | MAX_TOKENS = 512 # Allow a rather lengthy answer, encourages more context 36 | # MAX_TOKENS = 1024 # A large allowance for tokens,
for long and complex answers 37 | 38 | TEMPERATURE = 0.8 # 0 would mean safest answers, check ranges for the model you use 39 | 40 | # 41 | # ------------------- 42 | 43 | # 44 | # Check if the necessary configs exist 45 | # 46 | 47 | if not os.path.exists('openai.toml'): 48 | raise Exception(""" 49 | \n\nSorry, you have to provide the API key in a openai.toml file.\n 50 | The format should be: \n 51 | 52 | apikey="sk-abcdefg"\n 53 | 54 | Feel free to try again once you have the file configured 55 | """) 56 | 57 | # ---------------------- 58 | # Load the configuration 59 | # 60 | 61 | config = toml.load('openai.toml') 62 | 63 | # ---------------------------- 64 | # Read the script parameters 65 | # 66 | 67 | api_key = config["apikey"] 68 | prompt = sys.argv[1] 69 | 70 | # ----------------------------------- 71 | # Prepare the web request to OpenAI 72 | # 73 | 74 | headers = { 75 | "Content-Type": "application/json", 76 | "Authorization": f"Bearer {api_key}" 77 | } 78 | 79 | data = { 80 | "model": MODEL, 81 | "max_tokens": MAX_TOKENS, 82 | "temperature": TEMPERATURE 83 | } 84 | 85 | if MODEL.startswith('gpt'): 86 | data["messages"] = [{"role": "user", "content": prompt}] 87 | 88 | elif MODEL.startswith('text'): 89 | data["prompt"] = prompt 90 | 91 | else: 92 | raise Exception(f"Unknown model type: {MODEL}, please check the script parameters and try again") 93 | 94 | # -------------------------------- 95 | # Make the web request to OpenAI 96 | # 97 | 98 | # standard text completion endpoint 99 | url = "https://api.openai.com/v1/completions" 100 | 101 | if MODEL.startswith('gpt'): 102 | # change to the ChatGPT completion endpoint 103 | url = "https://api.openai.com/v1/chat/completions" 104 | 105 | try: 106 | response_raw = requests.post( 107 | url, 108 | headers=headers, 109 | data=json.dumps(data) 110 | ) 111 | response = response_raw.json() 112 | except Exception as err: 113 | print(f"Encountered a {type(err)} error trying to communicate with OpenAI, here's the 
traceback") 114 | print(err) 115 | raise err 116 | 117 | # -------------------------------- 118 | # Parse the response from OpenAI 119 | # 120 | 121 | output = "" 122 | 123 | if not (response.get("error") is None): 124 | output = ( 125 | f"Something went wrong with the request, here's the error:\n" 126 | f"Error: {response.get('error')}" 127 | ) 128 | 129 | else: 130 | answer = "" 131 | if MODEL.startswith('gpt'): 132 | # Parse the answer from the ChatGPT response 133 | answer = response['choices'][0]['message']['content'] 134 | elif MODEL.startswith('text'): 135 | # Parse the answer from the standard text model response 136 | answer = response['choices'][0]['text'].replace("\n", "") 137 | else: 138 | # This should never happen, as it's also filtered earlier, but just in case 139 | raise Exception(f"Unknown model type: {MODEL}, please check the parameters and try again") 140 | 141 | completion_tokens = response['usage']['completion_tokens'] 142 | total_tokens = response['usage']['total_tokens'] 143 | 144 | output = ( 145 | f"Received an answer from {MODEL}:\n\n" 146 | f"Prompt: {prompt}\n---\nAnswer: \033[97;40m {answer} \033[0m \n\n" 147 | f"Used {completion_tokens} completion tokens and {total_tokens} in total" 148 | ) 149 | 150 | # Save the final output to a file 151 | # 152 | 153 | try: 154 | filename_txt = f".ask-gpt/outputs/{response['id']}.txt" 155 | filename_json = f".ask-gpt/outputs/{response['id']}.json" 156 | 157 | # make sure directories exist 158 | os.makedirs(os.path.dirname(filename_txt), exist_ok=True) 159 | 160 | # plain text output 161 | text_file = open(filename_txt, "w") 162 | _ = text_file.write(output) 163 | text_file.close() 164 | 165 | # raw json output 166 | json_file = open(filename_json, "w") 167 | _ = json_file.write(json.dumps(response)) 168 | json_file.close() 169 | 170 | except Exception as err: 171 | output = output + f"\nFailed to save the output to a file, here's the error:\n{err}" 172 | 173 | # -------------------------- 174 | # 
Display the final output 175 | # 176 | 177 | print(output) 178 | --------------------------------------------------------------------------------
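As a closing note, the model-type branching in `ask-gpt.py` above (choosing the request body and then the endpoint from the model name) can be collapsed into one pure helper. A sketch for clarity, with `build_request` as an illustrative name rather than a function in the script:

```python
def build_request(model: str, prompt: str, max_tokens: int, temperature: float):
    # Returns (url, body) following the same rules ask-gpt.py applies:
    # chat models get a message list, legacy text models get a bare prompt.
    body = {"model": model, "max_tokens": max_tokens, "temperature": temperature}
    if model.startswith("gpt"):
        body["messages"] = [{"role": "user", "content": prompt}]
        return "https://api.openai.com/v1/chat/completions", body
    if model.startswith("text"):
        body["prompt"] = prompt
        return "https://api.openai.com/v1/completions", body
    raise ValueError(f"Unknown model type: {model}, please check the script parameters")
```

Deriving both choices from the model name in one place keeps the two `startswith` checks in the script from drifting apart.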