├── LICENSE
├── README.md
└── openai
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2023 Janlay Wu
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # openai-cli
2 | A universal CLI for OpenAI, written in Bash.
3 |
4 | # Features
5 | - [x] Scalable architecture allows for continuous support of new APIs.
6 | - [x] Custom API name, version, and all relevant properties.
7 | - [x] Dry-run mode (without actually initiating API calls) to facilitate debugging of APIs and save costs.
8 | - [x] New in v3: Supports any AI services that provide OpenAI-compatible APIs.
9 |
10 | Important changes in version 3:
11 | - The `-v api_version` option from previous versions has been removed. If you use a custom `OPENAI_API_ENDPOINT`, append the API version to it yourself. The built-in API version was dropped because some services, such as DeepSeek, have no version prefix before `chat/completions`.
12 | - `OPENAI_CHAT_MODEL` is no longer supported; use `OPENAI_API_MODEL` instead.
13 | - By default, requests no longer include the optional parameters `temperature` / `max_tokens`. Explicitly add `+temperature` / `+max_tokens` if you need to customize them.
14 |
15 | Available APIs:
16 | - [x] `chat/completions` (default API)
17 | - [x] `models`
18 | - [x] `images/generations`
19 | - [x] `embeddings`
20 | - [x] `moderations`
21 |
22 | The default API `chat/completions` provides:
23 | - [x] Full pipelining to interoperate with other applications
24 | - [x] Prompts read from command-line arguments, a file, or stdin
25 | - [x] Streaming support
26 | - [x] Multiple topics
27 | - [x] Continuous conversations
28 | - [ ] Token usage
29 |
30 | # Installation
31 | - [jq](https://stedolan.github.io/jq/) is required.
32 | - Linux: `sudo apt install jq`
33 | - macOS: `brew install jq`
34 | - Download script and mark it executable:
35 | ```bash
36 | curl -fsSLOJ https://go.janlay.com/openai
37 | chmod +x openai
38 | ```
39 | You may want to add this file to a directory in `$PATH`.
40 |
41 | Also install the manual page, e.g.:
42 | ```bash
43 | pandoc -s -f markdown -t man README.md > /usr/local/man/man1/openai.1
44 | ```
45 |
46 | Further reading: curl's `-OJ` is a killer feature.
47 |
48 |
49 |
50 | Now you can try it out!
51 |
52 | # Tips
53 | ## Getting started
54 | To begin, type `openai -h` to access the help manual.
55 |
56 | ⚠️ If you run `openai` directly, it may appear to hang because it expects prompt content from stdin, which is not yet available. To exit, press Ctrl+C to interrupt the process.
57 |
58 |
59 | **Why are you so serious?**
60 |
61 | What happens when the `openai` command is executed without any arguments?
62 | - The default API `chat/completions` will be used (the default endpoint already carries the `v1` prefix).
63 | - The prompt will be read from stdin.
64 | - The program will wait for input while stdin remains empty.
65 |
66 |
67 | ## Quick Examples
68 | The best way to understand how to use `openai` is to look at various use cases.
69 | - Debug API data for testing purposes
70 | `openai -n foo bar`
71 | - Say hello to OpenAI
72 | `openai Hello`
73 | - Use another model
74 | `openai +model=gpt-3.5-turbo-0301 Hello`
75 | - Disable streaming and allow more variation in the answer
76 | `openai +stream=false +temperature=1.1 Hello`
77 | - Call another available API
78 | `openai -a models`
79 | - Create a topic named `en2fr` with an initial prompt
80 | `openai @en2fr Translate to French`
81 | - Use an existing topic
82 | `openai @en2fr Hello, world!`
83 | - Read the prompt from the clipboard, then send the result to another topic
84 | `pbpaste | openai | openai @en2fr`
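Under the hood, each `+key=value` argument is merged into the JSON request payload with `jq`, which coerces numbers and booleans to native JSON types. A minimal standalone sketch of that merging step (it mirrors the script's `read_prompt`; the `props` variable and the sample overrides are illustrative):

```shell
props='{}'
for word in 'model=gpt-3.5-turbo-0301' 'stream=false' 'temperature=1.1'; do
	key="${word%%=*}" value="${word#*=}" arg=--arg
	# numbers, booleans, arrays, and objects are passed as JSON, not strings
	[[ $value =~ ^[+-]?[0-9.]+$ || $value = true || $value = false || $value == [\[\{]* ]] && arg=--argjson
	props=$(jq -c --arg key "$key" "$arg" value "$value" '.[$key] = $value' <<<"$props")
done
echo "$props"
# → {"model":"gpt-3.5-turbo-0301","stream":false,"temperature":1.1}
```

Note that `stream` and `temperature` end up as a real boolean and number in the payload, while `model` stays a string.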
85 |
86 | ## Providing prompt
87 | There are multiple ways to provide a prompt to `openai`:
88 | - Enclose the prompt in single quotes `'` or double quotes `"`
89 | `openai "Please help me translate '你好' into English"`
90 | - Use any argument that does not begin with a minus sign `-`
91 | `openai Hello, world!`
92 | - Place any arguments after `--`
93 | `openai -n -- What is the purpose of the -- argument in Linux commands`
94 | - Input from stdin
95 | `echo 'Hello, world!' | openai`
96 | - Specify a file path with `-f /path/to/file`
97 | `openai -f question.txt`
98 | - Use `-f-` for input from stdin
99 | `cat question.txt | openai -f-`
100 |
101 | Choose any one you like :-)
102 |
103 | ## OpenAI key
104 | `$OPENAI_API_KEY` must be set to use this tool. Prepare your OpenAI key in your `~/.profile` by adding this line:
105 | ```bash
106 | export OPENAI_API_KEY=sk-****
107 | ```
108 | Or you may want to run with a temporary key for one-time use:
109 | ```bash
110 | OPENAI_API_KEY=sk-**** openai hello
111 | ```
112 |
113 | Environment variables can also be set in `$HOME/.openai/config`.
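The config file is sourced as a shell script at startup, so plain `KEY=value` assignments work. A hypothetical `~/.openai/config` (all values are examples):

```shell
# ~/.openai/config -- sourced by openai at startup
OPENAI_API_KEY=sk-****
OPENAI_API_MODEL=gpt-4o
# Uncomment to use a custom endpoint (include the API version):
# OPENAI_API_ENDPOINT=https://api.openai.com/v1
```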
114 |
115 | ## Working with compatible AI services
116 | You can set the environment variable `OPENAI_COMPATIBLE_PROVIDER` to another service name in uppercase, such as `DEEPSEEK`. Then set the three environment variables `DEEPSEEK_API_ENDPOINT`, `DEEPSEEK_API_KEY`, and `DEEPSEEK_API_MODEL` respectively:
117 | ```bash
118 | export OPENAI_COMPATIBLE_PROVIDER=DEEPSEEK
119 | export DEEPSEEK_API_ENDPOINT=https://api.deepseek.com
120 | export DEEPSEEK_API_KEY=sk-***
121 | export DEEPSEEK_API_MODEL=deepseek-chat
122 | ```
123 |
124 | In general, if you set `OPENAI_COMPATIBLE_PROVIDER` to `FOO`, you must also set `FOO_API_ENDPOINT`, `FOO_API_KEY`, and `FOO_API_MODEL` accordingly.
125 |
126 | Once you have multiple AI providers configured, you can switch between them per invocation with a temporary `OPENAI_COMPATIBLE_PROVIDER`:
127 | ```bash
128 | # Switch to DeepSeek
129 | OPENAI_COMPATIBLE_PROVIDER=DEEPSEEK openai
130 | # Switch to Qwen
131 | OPENAI_COMPATIBLE_PROVIDER=QWEN openai
132 | # Switch to the default provider (OpenAI)
133 | OPENAI_COMPATIBLE_PROVIDER= openai
134 | ```
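If you switch providers often, small shell wrappers keep invocations short. A sketch (the function names are illustrative, not part of the tool):

```shell
# Hypothetical wrappers: each pins OPENAI_COMPATIBLE_PROVIDER for one invocation
deepseek() { OPENAI_COMPATIBLE_PROVIDER=DEEPSEEK openai "$@"; }
qwen()     { OPENAI_COMPATIBLE_PROVIDER=QWEN openai "$@"; }
```

Then `deepseek Hello` behaves like `OPENAI_COMPATIBLE_PROVIDER=DEEPSEEK openai Hello`.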
135 |
136 | ## Testing your API invocations
137 | `openai` offers a [dry-run mode](https://en.wikipedia.org/wiki/Dry_run) that allows you to test command composition without incurring any costs. Give it a try!
138 |
139 | ```bash
140 | openai -n hello, world!
141 |
142 | # This is equivalent:
143 | openai -n 'hello, world!'
144 | ```
145 |
146 |
147 | **Command and output**
148 |
149 | ```
150 | $ openai -n hello, world!
151 | Dry-run mode, no API calls made.
152 |
153 | Request URL:
154 | --------------
155 | https://api.openai.com/v1/chat/completions
156 |
157 | Authorization:
158 | --------------
159 | Bearer sk-cfw****NYre
160 |
161 | Payload:
162 | --------------
163 | {
164 | "model": "gpt-3.5-turbo",
165 | "temperature": 0.5,
166 | "max_tokens": 200,
167 | "stream": true,
168 | "messages": [
169 | {
170 | "role": "user",
171 | "content": "hello, world!"
172 | }
173 | ]
174 | }
175 | ```
176 |
177 | With full pipelining support, you can achieve the same functionality using alternative methods:
178 |
179 | ```bash
180 | echo 'hello, world!' | openai -n
181 | ```
182 |
183 |
184 | **For Bash gurus**
185 |
186 | This is equivalent:
187 | ```bash
188 | echo 'hello, world!' >hello.txt
189 | openai -n -f hello.txt
190 | ```
202 |
203 | Now that you understand the basic usage, try getting a real answer from OpenAI:
204 |
205 | ```bash
206 | openai hello, world!
207 | ```
208 |
209 |
210 | **Command and output**
211 |
212 | ```
213 | $ openai hello, world!
214 | Hello there! How can I assist you today?
215 | ```
216 |
217 |
218 |
219 | ## Topics
220 | A topic starts with an `@` sign, so `openai @translate Hello, world!` calls the topic `translate`.
221 |
222 | To create a new topic, such as `translate`, with an initial prompt (stored internally as the system role):
223 | ```bash
224 | openai @translate 'Translate, no other words: Chinese -> English, Non-Chinese -> Chinese'
225 | ```
226 |
227 | Then you can use the topic:
228 | ```bash
229 | openai @translate 'Hello, world!'
230 | ```
231 | You should get an answer like `你好,世界!`.
232 |
233 | Again, to see what happens, use dry-run mode by adding `-n`. You will see the payload that would be sent:
234 | ```json
235 | {
236 | "model": "gpt-3.5-turbo",
237 | "temperature": 0.5,
238 | "max_tokens": 200,
239 | "stream": true,
240 | "messages": [
241 | {
242 | "role": "system",
243 | "content": "Translate, no other words: Chinese -> English, Non-Chinese -> Chinese"
244 | },
245 | {
246 | "role": "user",
247 | "content": "Hello, world!"
248 | }
249 | ]
250 | }
251 | ```
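Since the payload is plain JSON, `jq` (already a dependency of this tool) can inspect it. A small standalone sketch that recreates the topic payload above and pulls out each message's role (`payload.json` is an illustrative file name):

```shell
cat >payload.json <<'EOF'
{
  "model": "gpt-3.5-turbo",
  "stream": true,
  "messages": [
    {"role": "system", "content": "Translate, no other words: Chinese -> English, Non-Chinese -> Chinese"},
    {"role": "user", "content": "Hello, world!"}
  ]
}
EOF
# List the role of each message in order
jq -r '.messages[].role' payload.json
# → system
# → user
```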
252 |
253 | ## Chatting
254 | All of the use cases above are standalone queries, not conversations. To chat with OpenAI, use `-c`. You can also continue an existing topic's conversation by prepending `@topic`.
255 |
256 | Please note that chat requests will quickly consume tokens, leading to increased costs.
257 |
258 | ## Advanced
259 | To be continued.
260 |
261 | ## Manual
262 | To be continued.
263 |
264 | # LICENSE
265 | This project uses the MIT license. Please see [LICENSE](https://github.com/janlay/openai-cli/blob/master/LICENSE) for more information.
266 |
--------------------------------------------------------------------------------
/openai:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 | #
3 | # OpenAI CLI v3.0
4 | # Created by @janlay
5 | #
6 |
7 | set -eo pipefail
8 |
9 | declare _config_dir="${OPENAI_DATA_DIR:-$XDG_CONFIG_HOME}"
10 | OPENAI_DATA_DIR="${_config_dir:-$HOME/.openai}"
11 |
12 | # Read the config file if present
13 | [ -e "$OPENAI_DATA_DIR"/config ] && . "$OPENAI_DATA_DIR"/config
14 |
15 | # openai-cli accepts various exported environment variables:
16 | # OPENAI_API_KEY : OpenAI's API key
17 | # OPENAI_API_ENDPOINT : Custom API endpoint
18 | # OPENAI_API_MODEL : Which model to use
19 | # OPENAI_DATA_DIR : Directory to store data
20 | OPENAI_API_ENDPOINT="${OPENAI_API_ENDPOINT:-https://api.openai.com/v1}"
21 | OPENAI_API_KEY="${OPENAI_API_KEY:-}"
22 | OPENAI_API_MODEL="${OPENAI_API_MODEL:-gpt-4o}"
23 |
24 | # defaults
25 | readonly _app_name=openai _app_version=3.0
26 | readonly provider="${OPENAI_COMPATIBLE_PROVIDER:-OPENAI}"
27 | readonly default_api_name=chat/completions default_model="$OPENAI_API_MODEL" default_topic=General
28 |
29 | declare -i chat_mode=0 dry_run=0
30 | declare tokens_file="$OPENAI_DATA_DIR/total_tokens" api_name=$default_api_name topic=$default_topic
31 | declare dump_file dumped_file data_file temp_dir rest_args prompt_file prompt
32 |
33 | trap cleanup EXIT
34 | cleanup() {
35 | if [ -d "$temp_dir" ]; then
36 | rm -rf -- "$temp_dir"
37 | fi
38 | }
39 |
40 | get_env() {
41 | local env_key="${provider}_${1}"
42 | echo "${!env_key}"
43 | }
44 |
45 | raise_error() {
46 | [ "$2" = 0 ] || echo -n "$_app_name: " >&2
47 | echo -e "$1" >&2
48 | exit "${2:-1}"
49 | }
50 |
51 | load_conversation() {
52 | [ -f "$data_file" ] && cat "$data_file" || echo '{}'
53 | }
54 |
55 | update_conversation() {
56 | local entry="$2" data
57 | [[ $entry == \{* ]] || entry=$(jq -n --arg content "$entry" '{$content}')
58 | entry=$(jq --arg role "$1" '. += {$role}' <<<"$entry")
59 | data=$(load_conversation)
60 | jq --argjson item "$entry" '.messages += [$item]' <<<"$data" >"$data_file"
61 | }
62 |
63 | save_tokens() {
64 | local data num="$1"
65 | [ -f "$data_file" ] && {
66 | data=$(load_conversation)
67 | jq --argjson tokens "$num" '.total_tokens += $tokens' <<<"$data" >"$data_file"
68 | }
69 |
70 | data=0
71 | [ -f "$tokens_file" ] && data=$(cat "$tokens_file")
72 | echo "$((data + num))" >"$tokens_file"
73 | }
74 |
75 | read_prompt() {
76 | # read prompt from args first
77 | local word accepts_props=1 props='{}' real_prompt
78 | if [ ${#rest_args[@]} -gt 0 ]; then
79 | # read file $prompt_file word by word, and extract words starting with '+'
80 | for word in "${rest_args[@]}"; do
81 | if [ $accepts_props -eq 1 ] && [ "${word:0:1}" = '+' ]; then
82 | word="${word:1}"
83 | # determine value's type for jq
84 | local options=(--arg key "${word%%=*}") value="${word#*=}" arg=--arg
85 | [[ $value =~ ^[+-]?\ ?[0-9.]+$ || $value = true || $value = false || $value == [\[\{]* ]] && arg=--argjson
86 | options+=("$arg" value "$value")
87 | props=$(jq "${options[@]}" '.[$key] = $value' <<<"$props")
88 | else
89 | real_prompt="$real_prompt $word"
90 | accepts_props=0
91 | fi
92 | done
93 | [ -n "$props" ] && echo "$props" >"$temp_dir/props"
94 | fi
95 |
96 | if [ -n "$real_prompt" ]; then
97 | [ -n "$prompt_file" ] && echo "* Prompt file \`$prompt_file' will be ignored as the prompt parameters are provided." >&2
98 | echo -n "${real_prompt:1}" >"$temp_dir/prompt"
99 | elif [ -n "$prompt_file" ]; then
100 | [ -f "$prompt_file" ] || raise_error "File not found: $prompt_file." 3
101 | [[ -s $prompt_file ]] || raise_error "Empty file: $prompt_file." 4
102 | fi
103 | }
104 |
105 | openai_models() {
106 | call_api | jq
107 | }
108 |
109 | openai_moderations() {
110 | local prop_file="$temp_dir/props" payload="{\"model\": \"text-moderation-latest\"}"
111 |
112 | # overwrite default properties with user's
113 | read_prompt
114 | [ -f "$prop_file" ] && payload=$(jq -n --argjson payload "$payload" '$payload | . += input' <"$prop_file")
115 |
116 | # append user's prompt to messages
117 | local payload_file="$temp_dir/payload" input_file="$temp_dir/prompt"
118 | [ -f "$input_file" ] || input_file="${prompt_file:-/dev/stdin}"
119 | jq -Rs -cn --argjson payload "$payload" '$payload | .input = input' "$input_file" >"$payload_file"
120 |
121 | call_api | jq -c '.results[]'
122 | }
123 |
124 | openai_images_generations() {
125 | local prop_file="$temp_dir/props" payload="{\"n\": 1, \"size\": \"1024x1024\"}"
126 |
127 | # overwrite default properties with user's
128 | read_prompt
129 | [ -f "$prop_file" ] && payload=$(jq -n --argjson payload "$payload" '$payload | . += input | . += {response_format: "url"}' <"$prop_file")
130 |
131 | # append user's prompt to messages
132 | local payload_file="$temp_dir/payload" input_file="$temp_dir/prompt"
133 | [ -f "$input_file" ] || input_file="${prompt_file:-/dev/stdin}"
134 | jq -Rs -cn --argjson payload "$payload" '$payload | .prompt = input' "$input_file" >"$payload_file"
135 |
136 | call_api | jq -r '.data[].url'
137 | }
138 |
139 | openai_embeddings() {
140 | local prop_file="$temp_dir/props" payload="{\"model\": \"text-embedding-ada-002\"}"
141 |
142 | # overwrite default properties with user's
143 | read_prompt
144 | [ -f "$prop_file" ] && payload=$(jq -n --argjson payload "$payload" '$payload | . += input' <"$prop_file")
145 |
146 | # append user's prompt to messages
147 | local payload_file="$temp_dir/payload" input_file="$temp_dir/prompt"
148 | [ -f "$input_file" ] || input_file="${prompt_file:-/dev/stdin}"
149 | jq -Rs -cn --argjson payload "$payload" '$payload | .input = input' "$input_file" >"$payload_file"
150 |
151 | call_api | jq -c
152 | }
153 |
154 | openai_chat_completions() {
155 | local streaming=0
156 | if [ -n "$dumped_file" ]; then
157 | # only succeeds when the dumped file is not streamed
158 | jq -er '.choices[0].message.content' <"$dumped_file" 2>/dev/null && return
159 | streaming=1
160 | else
161 | local prop_file="$temp_dir/props" model payload
162 | model=$(get_env API_MODEL)
163 | payload="{\"model\": \"$model\", \"stream\": true}"
164 |
165 | # overwrite default properties with user's
166 | read_prompt
167 | [ -f "$prop_file" ] && {
168 | payload=$(jq -n --argjson payload "$payload" '$payload | . += input | . += {messages: []}' <"$prop_file")
169 | }
170 |
171 | local data
172 | data=$(load_conversation | jq .messages)
173 | [ "$topic" != "$default_topic" ] && {
174 | if [ $chat_mode -eq 1 ]; then
175 | # load all messages for chat mode
176 | payload=$(jq --argjson messages "$data" 'setpath(["messages"]; $messages)' <<<"$payload")
177 | else
178 | # load only first message for non-chat mode
179 | payload=$(jq --argjson messages "$data" 'setpath(["messages"]; [$messages[0]])' <<<"$payload")
180 | fi
181 | }
182 | # append user's prompt to messages
183 | local payload_file="$temp_dir/payload" input_file="$temp_dir/prompt"
184 | [ -f "$input_file" ] || input_file="${prompt_file:-/dev/stdin}"
185 | jq -Rs -cn --argjson payload "$payload" '$payload | .messages += [{role: "user", content: input}]' "$input_file" >"$payload_file"
186 |
187 | streaming=$(jq -e 'if .stream then 1 else 0 end' <"$payload_file")
188 |
189 | # check o1's parameters
190 | jq -e 'select(.model | test("^o\\d")) and (.temperature or .top_p or .presence_penalty or .frequency_penalty or .logprobs or .top_logprobs or .logit_bias)' <"$payload_file" &>/dev/null && raise_error 'One or more unsupported API parameters used for model o1. See https://platform.openai.com/docs/guides/reasoning#limitations for more details.' 5
191 | fi
192 |
193 | local chunk reason text role fn_name
194 | if [ $streaming -eq 1 ]; then
195 | call_api | while read -r chunk; do
196 | [ -z "$chunk" ] && continue
197 | chunk=$(cut -d: -f2- <<<"$chunk" | jq '.choices[0]')
198 | reason=$(jq -r '.finish_reason // empty' <<<"$chunk")
199 | [[ $reason = stop || $reason = function_call ]] && break
200 | [ -n "$reason" ] && raise_error "API error: $reason" 10
201 |
202 | # get role and function info from the first chunk
203 | [ -z "$role" ] && {
204 | role=$(jq -r '.delta.role // empty' <<<"$chunk")
205 | fn_name=$(jq -r '.delta.function_call.name // empty' <<<"$chunk")
206 | }
207 |
208 | # workaround: https://stackoverflow.com/a/15184414
209 | chunk=$(
210 | jq -r '.delta | .function_call.arguments // .content // empty' <<<"$chunk"
211 | printf x
212 | )
213 | # ensure chunk is not empty
214 | [ ${#chunk} -ge 2 ] || continue
215 |
216 | chunk="${chunk:0:${#chunk}-2}"
217 | text="$text$chunk"
218 | echo -n "$chunk"
219 | done
220 | [ "$dry_run" -eq 0 ] && echo
221 | else
222 | text=$(call_api | jq -er '.choices[0].message.content')
223 | echo "$text"
224 | fi
225 |
226 | # append response to topic file for chat mode
227 | if [ "$chat_mode" -eq 1 ]; then
228 | [ -n "$fn_name" ] && text=$(jq -n --arg name "$fn_name" --argjson arguments "${text:-\{\}}" '{function_call: {$name, $arguments}}')
229 |
230 | update_conversation user "$prompt"
231 | update_conversation "$role" "$text"
232 | fi
233 | }
234 |
235 | # shellcheck disable=SC2120
236 | call_api() {
237 | # return dumped file if specified
238 | [ -n "$dumped_file" ] && {
239 | cat "$dumped_file"
240 | return
241 | }
242 |
243 | local url="$(get_env API_ENDPOINT)/$api_name" auth="Bearer $(get_env API_KEY)"
244 |
245 | # dry-run mode
246 | [ "$dry_run" -eq 1 ] && {
247 | echo "Dry-run mode, no API calls made."
248 | echo -e "\nRequest URL:\n--------------\n$url"
249 | echo -en "\nAuthorization:\n--------------\n"
250 | sed -E 's/(sk-.{3}).{41}/\1****/' <<<"$auth"
251 | [ -n "$payload_file" ] && {
252 | echo -e "\nPayload:\n--------------"
253 | jq <"$payload_file"
254 | }
255 | exit 0
256 | } >&2
257 |
258 | local args=("$url" --no-buffer -fsSL -H 'Content-Type: application/json' -H "Authorization: $auth")
259 | [ -n "$payload_file" ] && args+=(-d @"$payload_file")
260 | [ $# -gt 0 ] && args+=("$@")
261 |
262 | [ -n "$dump_file" ] && args+=(-o "$dump_file")
263 | curl "${args[@]}"
264 | [ -z "$dump_file" ] || exit 0
265 | }
266 |
267 | create_topic() {
268 | update_conversation system "${rest_args[*]}"
269 | raise_error "Topic '$topic' created with initial prompt '${rest_args[*]}'" 0
270 | }
271 |
272 | usage() {
273 | raise_error "OpenAI Client v$_app_version
274 |
275 | SYNOPSIS
276 | ABSTRACT
277 | $_app_name [-n] [-a api_name] [-o dump_file] [INPUT...]
278 | $_app_name -i dumped_file
279 |
280 | DEFAULT_API ($default_api_name)
281 | $_app_name [-c] [+property=value...] [@TOPIC] [-f file | prompt ...]
282 | prompt
283 | Prompt string for the request to OpenAI API. This can consist of multiple
284 | arguments, which are considered to be separated by spaces.
285 | -f file
286 | A file to be read as prompt. If file is - or neither this parameter nor a prompt
287 | is specified, read from standard input.
288 | -c
289 | Continues the topic, the default topic is '$default_topic'.
290 | property=value
291 | Overwrites default properties in payload. Prepend a plus sign '+' before property=value.
292 | eg: +model=gpt-3.5-turbo-0301, +stream=false
293 |
294 | TOPICS
295 | Topic starts with an at sign '@'.
296 | To create new topic, use \`$_app_name @new_topic initial prompt'
297 |
298 | OTHER APIS
299 | $_app_name -a models
300 |
301 | GLOBAL OPTIONS
302 | Global options apply to all APIs.
303 | -a name
304 | API name, default is '$default_api_name'.
305 | -n
306 | Dry-run mode, don't call API.
307 | -o filename
308 | Dumps API response to a file and exits.
309 | -i filename
310 | Uses specified dumped file instead of requesting API.
311 | Any request-related arguments and user input are ignored.
312 |
313 | --
314 | Ignores rest of arguments, useful when unquoted prompt consists of '-'.
315 |
316 | -h
317 | Shows this help" 0
318 | }
319 |
320 | parse() {
321 | local opt
322 | while getopts 'v:a:f:i:o:cnh' opt; do
323 | case "$opt" in
324 | c)
325 | chat_mode=1
326 | ;;
327 | a)
328 | api_name="$OPTARG"
329 | ;;
330 | f)
331 | prompt_file="$OPTARG"
332 | [ "$prompt_file" = - ] && prompt_file=
333 | ;;
334 | n)
335 | dry_run=1
336 | ;;
337 | i)
338 | dumped_file="$OPTARG"
339 | ;;
340 | o)
341 | dump_file="$OPTARG"
342 | ;;
343 | h | ?)
344 | usage
345 | ;;
346 | esac
347 | done
348 | shift "$((OPTIND - 1))"
349 |
350 | # extract the leading topic
351 | [[ "$1" =~ ^@ ]] && {
352 | topic="${1#@}"
353 | shift
354 | }
355 |
356 | [ $chat_mode -eq 0 ] || {
357 | [[ -n $topic && $topic != "$default_topic" ]] || raise_error 'Topic is required for chatting.' 2
358 | }
359 |
360 | rest_args=("$@")
361 | }
362 |
363 | check_bin() {
364 | command -v "$1" >/dev/null || raise_error "$1 not found. Use package manager (Homebrew, apt-get etc.) to install it." "${2:-1}"
365 | }
366 |
367 | main() {
368 | # show compatible provider if applicable
369 | if [ "${SUPPRESS_PROVIDER_TIPS:-0}" -eq 0 ] && [ -n "$OPENAI_COMPATIBLE_PROVIDER" ]; then
370 | echo "OpenAI compatible provider: $OPENAI_COMPATIBLE_PROVIDER" >&2
371 | fi
372 |
373 | # check required config
374 | local key
375 | for key in API_ENDPOINT API_KEY API_MODEL; do
376 | [ -z "$(get_env "$key")" ] && raise_error "Missing environment variable: ${provider}_$key." 1
377 | done
378 |
379 | parse "$@"
380 | check_bin jq 10
381 |
382 | mkdir -p "$OPENAI_DATA_DIR"
383 | data_file="$OPENAI_DATA_DIR/$topic.json"
384 | temp_dir=$(mktemp -d)
385 |
386 | if [[ $topic == "$default_topic" || -f "$data_file" ]]; then
387 | local fn="openai_${api_name//\//_}"
388 | [ "$(type -t "$fn")" = function ] || raise_error "API '$api_name' is not available." 12
389 | "$fn"
390 | else
391 | [ ${#rest_args[@]} -gt 0 ] || raise_error "Prompt for new topic is required" 13
392 | create_topic
393 | fi
394 | }
395 |
396 | main "$@"
397 |
--------------------------------------------------------------------------------