├── LICENSE
├── README.md
├── archive
│   └── prolog2gpt-0.1.00
│       ├── pack.pl
│       └── prolog
│           └── prolog2gpt.pl
├── docs
│   ├── Readme.md
│   └── wiki
│       └── Home.md
├── rel
│   └── prolog2gpt-0.1.00.zip
├── src
│   ├── pack.pl
│   └── prolog
│       └── prolog2gpt.pro
└── test
    ├── otter.png
    ├── test001.pro
    └── tune_answer.jsonl

/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 | 
3 | Copyright (c) 2023 RdR
4 | 
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 | 
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 | 
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # prolog2gpt
2 | SWI-Prolog library to interface with the GPT API.
3 | 
4 | # Introduction
5 | Large Language Models (LLMs) like GPT have greatly advanced natural language processing in recent years.
However, they can benefit from interfacing with other types of reasoning modules, including logic engines (see, for example, the discussions in "Faithful Chain-of-Thought Reasoning" 2023 by Lyu et al. or "FOLIO: Natural Language Reasoning with First-Order Logic" 2022 by Han et al.).
6 | 
7 | Currently, there are interface libraries to GPT for Python and NodeJS, but not Prolog. The work in this repo seeks to address that gap by building a library for SWI-Prolog.
8 | 
9 | # Current Status
10 | Pre-alpha: work has only just started.
11 | 
12 | Most of the API is working and you can access GPT with simple Prolog predicates. Have a look at the code documentation (see below) for examples. Also see the unit tests in `test/test001.pro`.
13 | 
14 | # Install
15 | 
16 | First, make sure your GPT API key is set in the environment variable `GPTKEY`. Do this:
17 | 
18 | 1. Create your GPT account at https://platform.openai.com
19 | 2. Create an API key at https://platform.openai.com/account/api-keys and, as instructed, save that key
20 |    somewhere (e.g. in a text file in a secure folder).
21 | 3. Set an environment variable called `GPTKEY` to the key value (don't forget in Linux that if you added the environment variable to your bash startup script, e.g. `~/.bashrc`, then you need to `source` your `~/.bashrc` script to activate the new environment variable).
22 | 
23 | Next, there are two ways to install.
24 | 
25 | The first is as a pack from within Prolog, using it as a library module:
26 | 
27 | ~~~
28 | :- pack_install('prolog2gpt').
29 | :- use_module(library(prolog2gpt)).
30 | 
31 | % Now test that the pack installed and execute a first call to GPT
32 | :- init_gptkey.
33 | :- gpt_completions('text-davinci-003','Say hello',Answer,[]).
34 | 
35 | ~~~
36 | 
37 | Otherwise, you can also just clone the git repository:
38 | 
39 | ~~~
40 | $ git clone https://github.com/RdR1024/prolog2gpt
41 | ~~~
42 | 
43 | Then `cd` into `prolog2gpt/src/prolog`, launch `swipl`, and try the following:
44 | 
45 | ~~~
46 | :- [prolog2gpt].
47 | :- init_gptkey.    % this makes the GPT key available to the GPT API predicates
48 | :- gpt_completions('text-davinci-003','My favourite animal is ',Text,[max_tokens=30]).
49 | 
50 | ~~~
51 | 
52 | # Usage
53 | The Prolog predicates mostly follow the GPT API (see https://platform.openai.com/docs/api-reference). However, there is usually a wrapper that converts the results of the API call into something Prolog-friendly (rather than leaving the results as a complex JSON data structure).
54 | 
55 | For example, `gpt_completions(Model,Prompt,Result,Options)` will return a Prolog list of generated texts (called "completions" in GPT speak). If you want the returned JSON results instead, you can use
56 | `gpt_completions(Model,Prompt,Result,Raw,Options)` with `Raw` set to `true`. In this case, `Result` will be a json term structure. Most of the API predicates have this same "Raw" option.
57 | 
58 | 
59 | # Repository structure
60 | This repository has the following structure:
61 | 
62 | - `docs` contains literature and additional documentation. A special subdirectory called `wiki` contains the source files for the separate github wiki repository (`github.com:RdR1024/prolog2gpt.wiki.git`). Note: I haven't populated this yet.
63 | - `rel` contains the periodic releases of the library. Users should use this directory to download stable copies of the library.
64 | - `src` contains the source code.
65 |     - `prolog` contains the Prolog source code
66 | - `archive` contains old material that we keep for reference
67 | - `test` contains testing files for the source code
68 | 
69 | 
70 | # Documentation
71 | You can read the individual predicate comments in the source files (e.g. `prolog2gpt.pro`), or start the SWI-Prolog documentation server as follows:
72 | 
73 | ~~~
74 | :- doc_server(3030).
75 | :- portray_text(true).
76 | :- ['prolog2gpt.pro'].
77 | :- doc_browser.
78 | ~~~
79 | 
80 | This should launch a web browser with the documentation for the `prolog2gpt.pro` file.
81 | 
82 | # Known Issues
83 | Three of the APIs don't work. Luckily, they are not critical APIs, because there are
84 | workarounds. The problem seems to be with the way the URLs are formulated. They are all
85 | URLs that end in `/{id}/{instruction}`. I'm investigating the issues, but so far no luck.
86 | The problem APIs are:
87 | 
88 | * `files retrieve content` (GET https://api.openai.com/v1/files/{file_id}/content)
89 |   The workaround at the moment is to keep a copy of the file content on your own
90 |   computer. The file that it refers to was, by definition, uploaded by you
91 |   previously.
92 | * `fine-tunes cancel` (POST https://api.openai.com/v1/fine-tunes/{fine_tune_id}/cancel)
93 |   The workaround is to just let the job complete, and then delete it.
94 | * `fine-tunes events` (GET https://api.openai.com/v1/fine-tunes/{fine_tune_id}/events)
95 |   The workaround is to get the JSON data from the plain fine-tunes API and then search
96 |   within it for the events within the structure for `{fine_tune_id}`.
--------------------------------------------------------------------------------
/archive/prolog2gpt-0.1.00/pack.pl:
--------------------------------------------------------------------------------
1 | name(prolog2gpt).
2 | title('Library of Prolog predicates to access the GPT API').
3 | version('0.1.00').
4 | author('Richard de Rozario','richard.derozario@gmail.com').
5 | home('https://github.com/RdR1024/prolog2gpt').
6 | download('https://github.com/RdR1024/prolog2gpt/rel/prolog2gpt-0.1.00.zip').
--------------------------------------------------------------------------------
/archive/prolog2gpt-0.1.00/prolog/prolog2gpt.pl:
--------------------------------------------------------------------------------
1 | :- module(prolog2gpt,[
2 |    init_gptkey/0,
3 |    gpt_models/1, gpt_models/2,
4 |    gpt_models_detail/2,
5 |    gpt_extract_data/4,
6 |    gpt_extract_fields/3,
7 |    gpt_extract_field_pairs/4,
8 |    gpt_completions/4, gpt_completions/5,
9 |    gpt_edits/4, gpt_edits/5,
10 |    gpt_images_create/3, gpt_images_create/4,
11 |    gpt_images_edits/4, gpt_images_edits/5,
12 |    gpt_images_variations/3, gpt_images_variations/4,
13 |    gpt_embeddings/4, gpt_embeddings/5,
14 |    gpt_files/1, gpt_files/2,
15 |    gpt_files_upload/4, gpt_files_upload/5,
16 |    gpt_files_delete/2, gpt_files_delete/3,
17 |    gpt_files_retrieve/2, gpt_files_retrieve/3,
18 |    gpt_files_retrieve_content/2, gpt_files_retrieve_content/3,
19 |    gpt_fine_tunes/1, gpt_fine_tunes/2, gpt_fine_tunes/3, gpt_fine_tunes/4,
20 |    gpt_fine_tunes_detail/2, gpt_fine_tunes_detail/3,
21 |    gpt_fine_tunes_cancel/2, gpt_fine_tunes_cancel/3,
22 |    gpt_fine_tunes_events/2, gpt_fine_tunes_events/3,
23 |    gpt_fine_tunes_delete/2, gpt_fine_tunes_delete/3,
24 |    gpt_moderations/3, gpt_moderations/4
25 | ]).
26 | /** Prolog interface to GPT
27 | 
28 | # Introduction
29 | 
30 | This module provides Prolog predicates to call the GPT API.
31 | 
32 | Large Language Models (LLM) like GPT essentially predict what text comes next, based on
33 | learning the (latent) probabilistic relationships between text tokens. Training these
34 | models on massive samples of text has improved their natural language capabilities
35 | dramatically in recent years.
36 | 
37 | However, such language models can benefit from interaction with other types of modules,
38 | such as logic engines. To aid in developing such interactions, this library aims
39 | to make it easy to interact with GPT directly from Prolog, using predicates that call
40 | the GPT API.
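For example, a minimal first interaction (assuming the `GPTKEY` environment
variable holds your API key, as described under Usage below):

~~~
:- init_gptkey.
:- gpt_completions('text-davinci-003','Say hello',Answer,[]).
~~~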
41 | 
42 | The Prolog predicates are based on the OpenAI API Reference: https://platform.openai.com/docs/api-reference
43 | 
44 | # Usage
45 | 
46 | 1. Create your GPT account at https://platform.openai.com
47 | 2. Create an API key at https://platform.openai.com/account/api-keys and, as instructed, save that key
48 |    somewhere (e.g. in a text file in a secure folder).
49 | 3. Set an environment variable called `GPTKEY` to the key value (don't forget in Linux that if you added the environment variable to your bash startup script, e.g. `~/.bashrc`, then you need to `source` your `~/.bashrc` script to activate the new environment variable).
50 | 4. Use the `prolog2gpt.pl` Prolog module as usual
51 | 
52 | @author Richard de Rozario
53 | @license MIT
54 | */
55 | 
56 | :- use_module(library(http/http_open)).
57 | :- use_module(library(http/http_client)).
58 | :- use_module(library(http/http_ssl_plugin)).
59 | :- use_module(library(http/json)).
60 | :- use_module(library(http/http_json)).
61 | :- use_module(library(http/json_convert)).
62 | 
63 | %% init_gptkey is semidet.
64 | % Get the GPT API Key from the environment variable named `GPTKEY` and create
65 | % a Prolog flag (`gptkey`) with the key value. Note: execute this predicate
66 | % before using any of the others, because the GPT key is needed for authorization
67 | % with each GPT API call.
68 | %
69 | % Example use:
70 | % ~~~
71 | % :- init_gptkey, current_prolog_flag(gptkey,Key), writeln(Key).
72 | % Key = sk-manycharactersoftheactualkeyvalue
73 | % ~~~
74 | %
75 | init_gptkey:-
76 |    getenv('GPTKEY',Key),
77 |    create_prolog_flag(gptkey,Key,[type(atom)]).
78 | 
79 | %% gpt_models(-Models:list) is semidet.
80 | %% gpt_models(-Models:json,+Raw:boolean) is semidet.
81 | % Get a list of the available GPT models
82 | %
83 | % Example use:
84 | % ~~~
85 | % :- gpt_models(Models).
86 | % Models = [babbage,davinci,...]
87 | % ~~~
88 | %
89 | % @arg Models The list of model names, or JSON term with model details
90 | % @arg Raw    If `true` then Models is the raw json result, else Models is a list of model names
91 | %
92 | gpt_models(Models):- gpt_models(Models,false).
93 | gpt_models(Models,Raw):-
94 |    current_prolog_flag(gptkey,Key),
95 |    http_get('https://api.openai.com/v1/models',Ms,
96 |             [authorization(bearer(Key)),application/json]),
97 |    (  Raw=false
98 |    -> gpt_extract_data(data,id,Ms,Models)
99 |    ;  Models=Ms
100 |    ).
101 | 
102 | 
103 | %% gpt_models_detail(+Model:atom, -ModelDetails:json) is semidet.
104 | % Get the details of a particular model
105 | %
106 | % Example use:
107 | % ~~~
108 | % :- gpt_models_detail('text-davinci-003',Details).
109 | % Details = ... % the JSON term
110 | % ~~~
111 | %
112 | % @arg Model   The model name. Note: put names that end with numeric suffixes in
113 | %              single quotes, to avoid the numeric suffix being treated as a number.
114 | %              For example, use `'text-davinci-003'`
115 | % @arg Details The details of the model as a JSON term
116 | %
117 | gpt_models_detail(Model,Details):-
118 |    current_prolog_flag(gptkey,Key),
119 |    atomic_concat('https://api.openai.com/v1/models/',Model,URL),
120 |    http_get(URL,Details,[authorization(bearer(Key)),application/json]).
121 | 
122 | 
123 | %% gpt_extract_data(+Group:atom,+Fieldname:atom,+Data:json,-Result:list) is semidet.
124 | % Extract a list of field data from a GPT json structure. Note: this predicate
125 | % makes some simple assumptions about how GPT API result data is structured.
126 | %
127 | % Example use:
128 | % ~~~
129 | % :- gpt_models(Ms,true), gpt_extract_data(data,id,Ms,Models).
130 | % Models = ['babbage','text-davinci-001',...]
131 | % ~~~
132 | %
133 | % @arg Group     The GPT data group name, e.g. `data`, `choices`, ...
134 | % @arg Fieldname The name of the field whose data we want
135 | % @arg Data      The json data list from the GPT API, that contains one or more field values
136 | % @arg Result    The resulting list of data values
137 | gpt_extract_data(Group,Fieldname,json(Data),Result):-
138 |    member(Group=Fieldlist,Data),
139 |    gpt_extract_fields(Fieldname,Fieldlist,Result).
140 | 
141 | %% gpt_extract_fields(+Fieldname:atom,+Data:json,-Result:list) is semidet.
142 | % Extract a list of field data from a GPT json structure. Note: this predicate
143 | % makes some simple assumptions about how GPT API result data is structured.
144 | %
145 | % Example use:
146 | % ~~~
147 | % :- Data=[json([id='babbage',object='model']),json([id='text-davinci-001',object='model'])], gpt_extract_fields(id,Data,Models).
148 | % Models = ['babbage','text-davinci-001']
149 | % ~~~
150 | %
151 | % @arg Fieldname The name of the field whose data we want
152 | % @arg Data      The list with json data from the GPT API, that contains one or more field values
153 | % @arg Result    The resulting list of data values
154 | gpt_extract_fields(_,[],[]):-!.
155 | gpt_extract_fields(Fieldname,[json(Fields)|Fs],Results):-
156 |    (  member(Fieldname=R,Fields)
157 |    -> Results=[R|Res]
158 |    ;  Results=Res
159 |    ),
160 |    gpt_extract_fields(Fieldname,Fs,Res).
161 | 
162 | %% gpt_extract_field_pairs(+Field1:atom,+Field2:atom,+Data:json,-Result:list) is semidet.
163 | % Extract a list of field pairs from a GPT json structure. Note: this predicate
164 | % makes some simple assumptions about how GPT API result data is structured.
165 | %
166 | % Example use:
167 | % ~~~
168 | % :- Data=[json([id='123',filename=file1]),json([id='345',filename=file2])], gpt_extract_field_pairs(filename,id,Data,FieldPairs).
169 | % FieldPairs = [file1-'123',file2-'345']
170 | % ~~~
171 | %
172 | % @arg Field1 The name of the field that supplies the first element of each pair
173 | % @arg Field2 The name of the field that supplies the second element of each pair
174 | % @arg Result The resulting list of `Field1-Field2` value pairs
175 | gpt_extract_field_pairs(_,_,[],[]):-!.
176 | gpt_extract_field_pairs(Field1,Field2,[json(Fields)|Fs],Results):-
177 |    (  member(Field1=F1,Fields)
178 |    -> (  member(Field2=F2,Fields)
179 |       -> Results = [F1-F2|Res]
180 |       ;  Results = Res
181 |       )
182 |    ;  Results = Res
183 |    ),!,
184 |    gpt_extract_field_pairs(Field1,Field2,Fs,Res).
185 | 
186 | 
187 | %% gpt_completions(+Model:atom, +Prompt:atom, -Result:text, +Options:list) is semidet.
188 | %% gpt_completions(+Model:atom, +Prompt:atom, -Result:term, ?Raw:boolean, +Options:list) is semidet.
189 | % Get a prompted text completion from a GPT model.
190 | %
191 | % Example use:
192 | % ~~~
193 | % :- gpt_completions('text-davinci-003','My favourite animal is ',Result,_,[]).
194 | % Result = ['a dog']
195 | % ~~~
196 | %
197 | % @arg Model   The GPT model name. Note: put names that end with numeric suffixes in
198 | %              single quotes, to avoid the numeric suffix being treated as a number.
199 | %              For example, use `'text-davinci-003'`
200 | % @arg Prompt  The prompt that GPT will complete
201 | % @arg Result  The text result, or json term with the result from GPT
202 | % @arg Raw     If `true` the Result will be the json term, if `false` (default)
203 | %              the Result will be the text completion result
204 | % @arg Options The model completion options as list of json pair values (see below)
205 | %
206 | %
207 | % Options (Note: option descriptions are mostly from the GPT API reference -- see https://platform.openai.com/docs/api-reference for up-to-date and further details):
208 | % * suffix=S
209 | %      A string (S) that is inserted after the completion
210 | % * max_tokens=M
211 | %      The size of output, where `M` is a natural number (incl. 0).
212 | %      GPT-3 can theoretically return up to 4096 tokens, but in practice less than half that.
213 | %      One token is about 4 characters or 0.75 average word length. Defaults to 16.
214 | % * temperature=N
215 | %      Controls "randomness" of output, with `0<=N<=2`. Defaults to 1.
216 | %      Higher temperature means text will be more diverse, but
217 | %      also risks more grammar mistakes and nonsense. Recommended to
218 | %      change either this or `top_p`, but not both.
219 | % * top_p
220 | %      An alternative to sampling with `temperature`, called
221 | %      nucleus sampling, where the model considers the results
222 | %      of the tokens with `top_p` probability mass. So 0.1 means
223 | %      only the tokens comprising the top 10% probability mass are
224 | %      considered. Use this, or `temperature`, but not both.
225 | %      Defaults to 1.
226 | % * n=N
227 | %      The number of completions (e.g. Results) to generate for each
228 | %      prompt. Defaults to 1.
229 | % * stream=TF
230 | %      If `true` then tokens will be sent as data-only
231 | %      `server-sent events` as they become available, with the
232 | %      stream terminated by a `data: [DONE]` message.
233 | %      Defaults to `false`
234 | % * logprobs=N
235 | %      Include the log probabilities on most likely tokens. For
236 | %      example, if `logprobs=5`, the API will return a list of
237 | %      the 5 most likely tokens. The API will always return the
238 | %      log probability of the sampled token, so there may be up
239 | %      to `logprobs+1` elements in the response. Defaults to 0.
240 | % * echo=TF
241 | %      If `true`, echo back the prompt in addition to
242 | %      the completion. Defaults to `false`
243 | % * stop=S
244 | %      (string or list of strings). Up to 4 strings ("sequences")
245 | %      where the API will stop generating further tokens. The
246 | %      returned text will not contain the stop sequences. Defaults
247 | %      to `null`
248 | % * presence_penalty=N
249 | %      Number between -2.0 and 2.0.
Positive values penalize new
250 | %      tokens based on whether they appear in the text so far,
251 | %      increasing the model's likelihood to talk about new topics.
252 | %      Defaults to 0.
253 | % * frequency_penalty=N
254 | %      Number between -2.0 and 2.0. Positive values penalize new
255 | %      tokens based on their existing frequency in the text so far,
256 | %      decreasing the model's likelihood to repeat the same line
257 | %      verbatim. Defaults to 0.
258 | % * best_of=N
259 | %      Generates best_of completions server-side and returns the "best"
260 | %      (the one with the highest log probability per token). Results cannot be streamed.
261 | %
262 | %      When used with `n`, `best_of` controls the number of candidate completions
263 | %      and `n` specifies how many to return -- `best_of` must be greater than `n`.
264 | %
265 | %      Note: Because this parameter generates many completions, it can quickly consume
266 | %      your token quota. Use carefully and ensure that you have reasonable settings for
267 | %      `max_tokens` and `stop`.
268 | % * logit_bias=JSON_TERM
269 | %      Modify the likelihood of specified tokens appearing in the completion.
270 | %
271 | %      Accepts a json object that maps tokens (specified by their token ID in the
272 | %      GPT tokenizer) to an associated bias value from -100 to 100. You can use the
273 | %      OpenAI tokenizer tool (which works for both GPT-2 and GPT-3) to convert text to token IDs.
274 | %      Mathematically, the bias is added to the logits generated by the model prior to sampling.
275 | %      The exact effect will vary per model, but values between -1 and 1 should decrease or
276 | %      increase likelihood of selection; values like -100 or 100 should result in a ban or
277 | %      exclusive selection of the relevant token.
278 | %
279 | %      As an example, you can pass `json(['50256'= -100])` to prevent the `<|endoftext|>` token
280 | %      from being generated.
281 | % * user=S
282 | %      A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
283 | %
284 | gpt_completions(Model,Prompt,Result,Options):-
285 |    gpt_completions(Model,Prompt,Result,false,Options),!.
286 | 
287 | gpt_completions(Model,Prompt,Result,Raw,Options):-
288 |    current_prolog_flag(gptkey,Key),
289 |    atom_json_term(D,json([model=Model,prompt=Prompt|Options]),[]),
290 |    Data = atom(application/json,D),
291 |    http_post('https://api.openai.com/v1/completions',Data,ReturnData,
292 |              [authorization(bearer(Key)),application/json]),
293 |    (  Raw=false
294 |    -> gpt_extract_data(choices,text,ReturnData,Result)
295 |    ;  Result= ReturnData
296 |    ).
297 | 
298 | %% gpt_edits(+Model:atom, +Instruction:atom, -Result:text, +Options:list) is semidet.
299 | %% gpt_edits(+Model:atom, +Instruction:atom, -Result:term, ?Raw:boolean, +Options:list) is semidet.
300 | % Get a new edit for a given model, input and instruction.
301 | % Note: Only for the 'text-davinci-edit-001' or 'code-davinci-edit-001' models.
302 | %
303 | % Example use:
304 | % ~~~
305 | % :- gpt_edits('text-davinci-edit-001','Fix the spelling mistakes',Result,_,
306 | %              [ input='What day of the wek is it?'
307 | %              ]).
308 | % Result = 'What day of the week is it?'
309 | % ~~~
310 | %
311 | % @arg Model       The GPT model name. Note: put names that end with numeric suffixes in
312 | %                  single quotes, to avoid the numeric suffix being treated as a number.
313 | %                  For example, use `'text-davinci-edit-001'`
314 | % @arg Instruction The natural language editing instruction.
315 | % @arg Result      The text result, or json term with the result from GPT
316 | % @arg Raw         If `true` the Result will be the json term, if `false` (default)
317 | %                  the Result will be the (first) text result
318 | % @arg Options     The edit options as list of json pair values (see below)
319 | %
320 | %
321 | % Options (Note: option descriptions are mostly from the GPT API reference -- see https://platform.openai.com/docs/api-reference for up-to-date and further details):
322 | % * input=S
323 | %      An atom of text (S) that the model needs to edit.
Default=''
324 | % * max_tokens=M
325 | %      The size of output, where `M` is a natural number (incl. 0).
326 | %      GPT-3 can theoretically return up to 4096 tokens, but in practice less than half that.
327 | %      One token is about 4 characters or 0.75 average word length. Defaults to 16.
328 | % * n=N
329 | %      The number of completions (e.g. Results) to generate for each
330 | %      prompt. Defaults to 1.
331 | % * temperature=N
332 | %      Controls "randomness" of output, with `0<=N<=2`. Defaults to 1.
333 | %      Higher temperature means text will be more diverse, but
334 | %      also risks more grammar mistakes and nonsense. Recommended to
335 | %      change either this or `top_p`, but not both.
336 | % * top_p
337 | %      An alternative to sampling with `temperature`, called
338 | %      nucleus sampling, where the model considers the results
339 | %      of the tokens with `top_p` probability mass. So 0.1 means
340 | %      only the tokens comprising the top 10% probability mass are
341 | %      considered. Use this, or `temperature`, but not both.
342 | %      Defaults to 1.
343 | gpt_edits(Model,Instruction,Result,Options):-
344 |    gpt_edits(Model,Instruction,Result,false,Options),!.
345 | 
346 | gpt_edits(Model,Instruction,Result,Raw,Options):-
347 |    current_prolog_flag(gptkey,Key),
348 |    atom_json_term(D,json([model=Model,instruction=Instruction|Options]),[]),
349 |    Data = atom(application/json,D),
350 |    http_post('https://api.openai.com/v1/edits',Data,ReturnData,
351 |              [authorization(bearer(Key)),application/json]),
352 |    (  Raw=false
353 |    -> gpt_extract_data(choices,text,ReturnData,Result)
354 |    ;  Result= ReturnData
355 |    ).
356 | 
357 | %% gpt_images_create(+Prompt:atom, -Result:term, +Options:list) is semidet.
358 | %% gpt_images_create(+Prompt:atom, -Result:term, ?Raw:boolean, +Options:list) is semidet.
359 | % Create an image from a text prompt.
360 | %
361 | % Example use:
362 | % ~~~
363 | % :- gpt_images_create('A cute baby sea otter',Result,_,[]).
364 | % Result = ['https://...']   % url of the resulting image
365 | % ~~~
366 | %
367 | % @arg Prompt  The prompt that GPT will use to generate the image
368 | % @arg Result  The text result, or json term with the result from GPT
369 | % @arg Raw     If `true` the Result will be the json term, if `false` (default)
370 | %              the Result will be the (first) url or b64 result
371 | % @arg Options The image options as list of json pair values (see below)
372 | %
373 | %
374 | % Options (Note: option descriptions are mostly from the GPT API reference -- see https://platform.openai.com/docs/api-reference for up-to-date and further details):
375 | % * n=N
376 | %      The number of images to generate. Defaults to 1.
377 | % * size=Z
378 | %      The size of the image. Must be one of `'256x256'`, `'512x512'`, or `'1024x1024'`.
379 | %      Default is `'1024x1024'`
380 | % * response_format=S
381 | %      The format of the generated images. Must be one of `url` or `b64_json`. Default is `url`
382 | % * user=S
383 | %      A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
384 | %
385 | gpt_images_create(Prompt,Result,Options):-
386 |    gpt_images_create(Prompt,Result,false,Options).
387 | gpt_images_create(Prompt,Result,Raw,Options):-
388 |    current_prolog_flag(gptkey,Key),
389 |    atom_json_term(D,json([prompt=Prompt|Options]),[]),
390 |    Data = atom(application/json,D),
391 |    http_post('https://api.openai.com/v1/images/generations',Data,ReturnData,
392 |              [authorization(bearer(Key)),application/json]),
393 |    ( member(response_format=Format,Options) -> true ; Format=url ),
394 |    (  Raw=false
395 |    -> gpt_extract_data(data,Format,ReturnData,Result)
396 |    ;  Result= ReturnData
397 |    ).
398 | 
399 | %% gpt_images_edits(+Prompt:atom, +File:atom, -Result:term, +Options:list) is semidet.
400 | %% gpt_images_edits(+Prompt:atom, +File:atom, -Result:term, ?Raw:boolean, +Options:list) is semidet.
401 | % Modify an image from a text prompt.
402 | %
403 | % Example use:
404 | % ~~~
405 | % :- gpt_images_edits('A cute baby sea otter with a hat','./test/otter.png',Result,_,[]).
406 | % Result = ['https://...']   % url of the resulting image
407 | % ~~~
408 | %
409 | % @arg Prompt  The prompt that GPT will use to edit the image
410 | % @arg File    The path/filename of the image to edit
411 | % @arg Result  The text result, or json term with the result from GPT
412 | % @arg Raw     If `true` the Result will be the json term, if `false` (default)
413 | %              the Result will be the (first) url or b64 result
414 | % @arg Options The edit options as list of pair values (see below)
415 | %
416 | %
417 | % Options (Note: option descriptions are mostly from the GPT API reference -- see https://platform.openai.com/docs/api-reference for up-to-date and further details):
418 | % * n=N
419 | %      The number of images to generate. Defaults to 1.
420 | % * size=Z
421 | %      The size of the image. Must be one of `'256x256'`, `'512x512'`, or `'1024x1024'`.
422 | %      Default is `'1024x1024'`
423 | % * mask
424 | %      An additional image whose fully transparent areas (e.g. where alpha is zero)
425 | %      indicate where the image should be edited. Must be a valid
426 | %      PNG file, less than 4MB, and have the same dimensions as the image.
427 | % * response_format=S
428 | %      The format of the generated images. Must be one of `url` or `b64_json`. Default is `url`
429 | % * user=S
430 | %      A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
431 | %
432 | gpt_images_edits(Prompt,Image,Result,Options):-
433 |    gpt_images_edits(Prompt,Image,Result,false,Options).
434 | gpt_images_edits(Prompt,Image,Result,Raw,Options):-
435 |    current_prolog_flag(gptkey,Key),
436 |    Data = form_data([prompt=Prompt,image=file(Image)|Options]),
437 |    http_post('https://api.openai.com/v1/images/edits',Data,ReturnData,
438 |              [authorization(bearer(Key)),application/json]),
439 |    ( member(response_format=Format,Options) -> true ; Format=url ),
440 |    (  Raw=false
441 |    -> gpt_extract_data(data,Format,ReturnData,Result)
442 |    ;  Result= ReturnData
443 |    ).
444 | 
445 | %% gpt_images_variations(+File:atom, -Result:term, +Options:list) is semidet.
446 | %% gpt_images_variations(+File:atom, -Result:term, ?Raw:boolean, +Options:list) is semidet.
447 | % Produce variation(s) of an image.
448 | %
449 | % Example use:
450 | % ~~~
451 | % :- gpt_images_variations('./test/otter.png',Result,_,[]).
452 | % Result = ['https://...']   % url of the resulting image
453 | % ~~~
454 | %
455 | % @arg Image   The path/filename of the image to vary
456 | % @arg Result  The text result, or json term with the result from GPT
457 | % @arg Raw     If `true` the Result will be the json term, if `false` (default)
458 | %              the Result will be the (first) url or b64 result
459 | % @arg Options The variation options as list of pair values (see below)
460 | %
461 | %
462 | % Options (Note: option descriptions are mostly from the GPT API reference -- see https://platform.openai.com/docs/api-reference for up-to-date and further details):
463 | % * n=N
464 | %      The number of images to generate. Defaults to 1.
465 | % * size=Z
466 | %      The size of the image. Must be one of `'256x256'`, `'512x512'`, or `'1024x1024'`.
467 | %      Default is `'1024x1024'`
468 | % * response_format=S
469 | %      The format of the generated images. Must be one of `url` or `b64_json`. Default is `url`
470 | % * user=S
471 | %      A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
472 | %
473 | gpt_images_variations(Image,Result,Options):-
474 |    gpt_images_variations(Image,Result,false,Options).
475 | 
476 | gpt_images_variations(Image,Result,Raw,Options):-
477 |    current_prolog_flag(gptkey,Key),
478 |    Data = form_data([image=file(Image)|Options]),
479 |    http_post('https://api.openai.com/v1/images/variations',Data,ReturnData,
480 |              [authorization(bearer(Key)),application/json]),
481 |    ( member(response_format=Format,Options) -> true ; Format=url ),
482 |    (  Raw=false
483 |    -> gpt_extract_data(data,Format,ReturnData,Result)
484 |    ;  Result= ReturnData
485 |    ).
486 | 
487 | %% gpt_embeddings(+Model:atom, +Input:text, -Result:list, +Options:list) is semidet.
488 | %% gpt_embeddings(+Model:atom, +Input:text, -Result:list, ?Raw:boolean, +Options:list) is semidet.
489 | % Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.
490 | %
491 | % Example use:
492 | % ~~~
493 | % :- gpt_embeddings('text-embedding-ada-002','The food was delicious',Result,[]).
494 | % Result = [0.0023064255,-0.009327292,...]
495 | % ~~~
496 | %
497 | % @arg Input  The input text: an atom, string, or list of such
498 | % @arg Result List of embedding vectors, or json term (depending on `Raw`)
499 | % @arg Raw    If `true` the Result will be the json term, if `false` (default)
500 | %             the Result will be a simple list of embeddings
501 | % Options (Note: option descriptions are mostly from the GPT API reference -- see https://platform.openai.com/docs/api-reference for up-to-date and further details):
502 | % * user=S
503 | %      A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
504 | %
505 | gpt_embeddings(Model,Input,Result,Options):-
506 |    gpt_embeddings(Model,Input,Result,false,Options),!.
507 | gpt_embeddings(Model,Input,Result,Raw,Options):-
508 |    current_prolog_flag(gptkey,Key),
509 |    atom_json_term(D,json([model=Model,input=Input|Options]),[]),
510 |    Data = atom(application/json,D),
511 |    http_post('https://api.openai.com/v1/embeddings',Data,ReturnData,
512 |              [authorization(bearer(Key)),application/json]),
513 |    (  Raw=false
514 |    -> gpt_extract_data(data,embedding,ReturnData,Result)
515 |    ;  Result= ReturnData
516 |    ).
517 | 
518 | 
519 | %% gpt_files(-Result:list) is semidet.
520 | %% gpt_files(-Result:list,+Raw:boolean) is semidet.
521 | % List all files that belong to the user's organization.
522 | %
523 | % Example use:
524 | % ~~~
525 | % :- gpt_files(Result).
526 | % Result = ['file1.jsonl'-'file-12345','file2.jsonl'-'file-56789']
527 | % ~~~
528 | %
529 | % @arg Result List of Filename-ID pairs, or json term (depending on `Raw`)
530 | % @arg Raw    If `true` the Result will be the json term, if `false` (default)
531 | %             the Result will be a simple list of Filename-ID pairs
532 | gpt_files(Result):-
533 |    gpt_files(Result,false).
534 | gpt_files(Result,Raw):-
535 |    current_prolog_flag(gptkey,Key),
536 |    http_get('https://api.openai.com/v1/files',json(ReturnData),
537 |             [authorization(bearer(Key)),application/json]),
538 |    (  Raw=false
539 |    -> (  member(data=Files,ReturnData),
540 |          gpt_extract_field_pairs(filename,id,Files,Result)
541 |       )
542 |    ;  Result= json(ReturnData)
543 |    ).
544 | 
545 | %% gpt_files_upload(+File:atom,+Purpose:text,-Result:list,+Options:list) is semidet.
546 | %% gpt_files_upload(+File:atom,+Purpose:text,-Result:list,?Raw:boolean,+Options:list) is semidet.
547 | % Upload a JSON Lines file (typically for fine-tuning)
548 | %
549 | % Example use:
550 | % ~~~
551 | % :- gpt_files_upload('./test/tune_answer.jsonl','fine-tune',Result,[]).
552 | % Result = ['file-XjGxS3KTG0uNmNOK362iJua3']
553 | % ~~~
554 | %
555 | % @arg File    Filename to upload
556 | % @arg Purpose Purpose of the file.
Currently only 'fine-tune' 557 | % @arg Result List with the uploaded file's ID, or json term (depending on `Raw`) 558 | % @arg Raw If `true` the Result will be the json term, if `false` (default) 559 | %      the Result will be a list with the file ID 560 | gpt_files_upload(File,Purpose,Result,Options):- 561 |    gpt_files_upload(File,Purpose,Result,false,Options),!. 562 | gpt_files_upload(File,Purpose,Result,Raw,Options):- 563 |    current_prolog_flag(gptkey,Key), 564 |    Data = form_data([file=file(File),purpose=Purpose|Options]), 565 |    http_post('https://api.openai.com/v1/files',Data,json(ReturnData), 566 |             [authorization(bearer(Key)),application/json]), 567 |    ( Raw=false 568 |    -> (member(id=ID,ReturnData),Result=[ID]) 569 |    ;  Result= json(ReturnData) 570 |    ). 571 | 572 | %% gpt_files_delete(+FileID:atom,-Result:list) is semidet. 573 | %% gpt_files_delete(+FileID:atom,-Result:list,+Raw:boolean) is semidet. 574 | % Delete a (user) file from GPT storage 575 | % 576 | % Example use: 577 | % ~~~ 578 | % :- gpt_files_delete('file-XjGxS3KTG0uNmNOK362iJua3',Result), 579 | %    Result = ['file-XjGxS3KTG0uNmNOK362iJua3'] 580 | % ~~~ 581 | % 582 | % @arg FileID File ID of file in GPT storage to delete 583 | % 584 | % @arg Result List with the deleted file's ID, or json term (depending on `Raw`) 585 | % @arg Raw If `true` the Result will be the json term, if `false` (default) 586 | %      the Result will be a list with the file ID 587 | gpt_files_delete(FileID,Result):- 588 |    gpt_files_delete(FileID,Result,false),!. 589 | gpt_files_delete(FileID,Result,Raw):- 590 |    current_prolog_flag(gptkey,Key), 591 |    atomic_concat('https://api.openai.com/v1/files/',FileID,URL), 592 |    http_delete(URL,json(ReturnData), 593 |             [authorization(bearer(Key)),application/json]), 594 |    ( Raw=false 595 |    -> (member(id=ID,ReturnData), Result=[ID]) 596 |    ;  Result= json(ReturnData) 597 |    ).
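% The file-management predicates above compose naturally. A small sketch
% (the helper name is illustrative and not part of the library; it assumes
% `init_gptkey/0` has been run and a file with the given name exists in
% your organization's storage):

```prolog
% Look up a stored file's ID by its name and delete it.
delete_file_by_name(Name):-
    gpt_files(Pairs),            % Pairs = [Filename-ID, ...]
    member(Name-ID,Pairs),
    gpt_files_delete(ID,_Deleted).
```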
598 | 599 | %% gpt_files_retrieve(+FileID:atom,-Result:list) is semidet. 600 | %% gpt_files_retrieve(+FileID:atom,-Result:list,+Raw:boolean) is semidet. 601 | % Retrieve the details of a (user) file 602 | % 603 | % Example use: 604 | % ~~~ 605 | % :- gpt_files_retrieve('file-XjGxS3KTG0uNmNOK362iJua3',Result), 606 | %    Result = ['myfile.jsonl'] 607 | % ~~~ 608 | % 609 | % @arg FileID File ID of file in GPT storage to retrieve 610 | % @arg Result List with file name, or json term (depending on `Raw`) 611 | % @arg Raw If `true` the Result will be the json term, if `false` (default) 612 | %      the Result will be a simple list of file names 613 | gpt_files_retrieve(FileID,Result):- 614 |    gpt_files_retrieve(FileID,Result,false),!. 615 | gpt_files_retrieve(FileID,Result,Raw):- 616 |    current_prolog_flag(gptkey,Key), 617 |    atomic_concat('https://api.openai.com/v1/files/',FileID,URL), 618 |    http_get(URL,json(ReturnData), 619 |             [authorization(bearer(Key)),application/json]), 620 |    ( Raw=false 621 |    -> (member(filename=File,ReturnData), Result=[File]) 622 |    ;  Result= json(ReturnData) 623 |    ). 624 | 625 | %% gpt_files_retrieve_content(+FileID:atom,-Result:list) is semidet. 626 | %% gpt_files_retrieve_content(+FileID:atom,-Result:list,+Raw:boolean) is semidet. 627 | % Retrieve the content of a (user) file 628 | % 629 | % Example use: 630 | % ~~~ 631 | % :- gpt_files_retrieve_content('file-XjGxS3KTG0uNmNOK362iJua3',Result), 632 | %    Result = ['myfile.jsonl'] 633 | % ~~~ 634 | % 635 | % @arg FileID File ID of file in GPT storage to retrieve 636 | % @arg Result List with file name, or json term (depending on `Raw`) 637 | % @arg Raw If `true` the Result will be the json term, if `false` (default) 638 | %      the Result will be a simple list of file names 639 | % TODO: ***** this API doesn't work for some reason ***** 640 | gpt_files_retrieve_content(FileID,Result):- 641 |    gpt_files_retrieve_content(FileID,Result,false),!.
642 | gpt_files_retrieve_content(FileID,Result,Raw):- 643 |    current_prolog_flag(gptkey,Key), 644 |    atomic_list_concat(['https://api.openai.com/v1/files/',FileID,'/content'],URL), 645 |    http_get(URL,ReturnData, [authorization(bearer(Key))]), 646 |    ( Raw=false 647 |    -> (member(filename=File,ReturnData), Result=[File]) 648 |    ;  Result= ReturnData 649 |    ). 650 | 651 | 652 | 653 | %% gpt_fine_tunes(+TrainingFile:text,-Result:list,+Options:list) is semidet. 654 | %% gpt_fine_tunes(+TrainingFile:text,-Result:list,+Raw:boolean,+Options:list) is semidet. 655 | % Create a fine-tune job that trains a model on the examples in the given uploaded TrainingFile. 656 | % 657 | % Example use: 658 | % ~~~ 659 | % :- gpt_fine_tunes('file-XGinujblHPwGLSztz8cPS8XY',Result,[]), 660 | %    Result = 'ft-AF1WoRqd3aJAHsqc9NY7iL8F' 661 | % ~~~ 662 | % 663 | % @arg TrainingFile Atom with the GPT file ID of an uploaded file 664 | % @arg Result Fine-tune job ID, or json term of details (depending on `Raw`) 665 | % @arg Raw If `true` the Result will be the json term, if `false` (default) 666 | %      the Result will be the fine-tune job ID 667 | % Options (Note option descriptions are mostly from the GPT API reference -- see the https://platform.openai.com/docs/api-reference for up-to-date and further details): 668 | % * validation_file=F 669 | %   The ID of an uploaded file that contains validation data. 670 | % 671 | %   If you provide this file, the data is used to generate validation 672 | %   metrics periodically during fine-tuning. These metrics can be viewed 673 | %   in the fine-tuning results file. Your train and validation data should 674 | %   be mutually exclusive. 675 | % 676 | %   Your dataset must be formatted as a JSONL file, where each validation 677 | %   example is a JSON object with the keys "prompt" and "completion". 678 | %   Additionally, you must upload your file with the purpose fine-tune. 679 | % * model=M 680 | %   The name of the base model to fine-tune.
You can select one of 'ada', 681 | % 'babbage', 'curie', 'davinci', or a fine-tuned model created after 682 | % 2022-04-21. To learn more about these models, see the Models documentation. 683 | % Defaults to 'curie'. 684 | % * n_epochs=N 685 | % The number of epochs to train the model for. An epoch refers to one full 686 | % cycle through the training dataset. Defaults to 4. 687 | % * batch_size=N 688 | % The batch size to use for training. The batch size is the number of 689 | % training examples used to train a single forward and backward pass. 690 | % 691 | % By default, the batch size will be dynamically configured to be ~0.2% 692 | % of the number of examples in the training set, capped at 256 - in 693 | % general, we've found that larger batch sizes tend to work better for 694 | % larger datasets. Defaults to `null`. 695 | % * learning_rate_multiplier=N 696 | % The learning rate multiplier to use for training. The fine-tuning 697 | % learning rate is the original learning rate used for pretraining 698 | % multiplied by this value. 699 | % 700 | % By default, the learning rate multiplier is the 0.05, 0.1, or 0.2 701 | % depending on final batch_size (larger learning rates tend to perform 702 | % better with larger batch sizes). We recommend experimenting with 703 | % values in the range 0.02 to 0.2 to see what produces the best results. 704 | % Defaults to `null`. 705 | % * prompt_loss_weight=N 706 | % The weight to use for loss on the prompt tokens. This controls how 707 | % much the model tries to learn to generate the prompt (as compared to 708 | % the completion which always has a weight of 1.0), and can add a 709 | % stabilizing effect to training when completions are short. 710 | % 711 | % If prompts are extremely long (relative to completions), it may make 712 | % sense to reduce this weight so as to avoid over-prioritizing learning 713 | % the prompt. 
Defaults to `0.01` 714 | % * compute_classification_metrics=B 715 | % If set, we calculate classification-specific metrics such as accuracy 716 | % and F-1 score using the validation set at the end of every epoch. 717 | % These metrics can be viewed in the results file. 718 | % 719 | % In order to compute classification metrics, you must provide a 720 | % validation_file. Additionally, you must specify classification_n_classes 721 | % for multiclass classification or classification_positive_class for 722 | % binary classification. Defaults to `false` 723 | % * classification_n_classes=N 724 | % The number of classes in a classification task. This parameter is 725 | % required for multiclass classification. Defaults to `null`. 726 | % * classification_positive_class=S 727 | % The positive class in binary classification. This parameter is needed 728 | % to generate precision, recall, and F1 metrics when doing binary 729 | % classification. Defaults to `null`. 730 | % * classification_betas=List 731 | % If this is provided, we calculate F-beta scores at the specified beta 732 | % values. The F-beta score is a generalization of F-1 score. This is only 733 | % used for binary classification. 734 | % 735 | % With a beta of 1 (i.e. the F-1 score), precision and recall are given 736 | % the same weight. A larger beta score puts more weight on recall and 737 | % less on precision. A smaller beta score puts more weight on precision 738 | % and less on recall. Defaults to `null`. 739 | % * suffix=S 740 | % A string of up to 40 characters that will be added to your fine-tuned 741 | % model name. For example, a suffix of "custom-model-name" would produce 742 | % a model name like `ada:ft-your-org:custom-model-name-2022-02-15-04-21-04`. 743 | % 744 | gpt_fine_tunes(TrainingFile,Result,Options):- 745 | gpt_fine_tunes(TrainingFile,Result,false,Options),!. 
746 | gpt_fine_tunes(TrainingFile,Result,Raw,Options):- 747 |    current_prolog_flag(gptkey,Key), 748 |    atom_json_term(D,json([training_file=TrainingFile|Options]),[]), 749 |    Data = atom(application/json,D), 750 |    http_post('https://api.openai.com/v1/fine-tunes',Data,json(ReturnData), 751 |             [authorization(bearer(Key)),application/json]), 752 |    ( Raw=false 753 |    -> member(id=Result,ReturnData) 754 |    ;  Result= json(ReturnData) 755 |    ). 756 | 757 | %% gpt_fine_tunes(-Result:list) is semidet. 758 | %% gpt_fine_tunes(-Result:list,+Raw:boolean) is semidet. 759 | % Gets a list of fine-tune jobs. 760 | % 761 | % Example use: 762 | % ~~~ 763 | % :- gpt_fine_tunes(Result), 764 | %    Result = ['curie:ft-personal-2022-02-15-04-21-04'-'ft-090asf0asf0',...] 765 | % ~~~ 766 | % 767 | % @arg Result List of ModelName-ID pairs, or json term (depending on `Raw`) 768 | % @arg Raw If `true` the Result will be the json term, if `false` (default) 769 | %      the Result will be a list of ModelName-ID pairs 770 | gpt_fine_tunes(Result):- 771 |    gpt_fine_tunes(Result,false),!. 772 | gpt_fine_tunes(Result,Raw):- 773 |    current_prolog_flag(gptkey,Key), 774 |    http_get('https://api.openai.com/v1/fine-tunes',json(ReturnData), 775 |             [authorization(bearer(Key)),application/json]), 776 |    ( Raw=false 777 |    -> ( member(data=Models,ReturnData), 778 |         gpt_extract_field_pairs(fine_tuned_model,id,Models,Result) 779 |       ) 780 |    ;  Result= json(ReturnData) 781 |    ). 782 | 783 | %% gpt_fine_tunes_detail(+ID:atom,-Result:list) is semidet. 784 | %% gpt_fine_tunes_detail(+ID:atom,-Result:list,+Raw:boolean) is semidet. 785 | % Gets details of a fine-tune job.
786 | % 787 | % Example use: 788 | % ~~~ 789 | % :- gpt_fine_tunes_detail('ft-090asf0asf0',Result), 790 | %    Result = ['curie:ft-personal-2022-02-15-04-21-04'] 791 | % ~~~ 792 | % 793 | % @arg Result List with the fine-tuned model name, or json term (depending on `Raw`) 794 | % @arg Raw If `true` the Result will be the json term, if `false` (default) 795 | %      the Result will be a list with the fine-tuned model name 796 | gpt_fine_tunes_detail(ID,Result):- 797 |    gpt_fine_tunes_detail(ID,Result,false),!. 798 | gpt_fine_tunes_detail(ID,Result,Raw):- 799 |    current_prolog_flag(gptkey,Key), 800 |    atomic_concat('https://api.openai.com/v1/fine-tunes/',ID,URL), 801 |    http_get(URL,json(ReturnData), 802 |             [authorization(bearer(Key)),application/json]), 803 |    ( Raw=false 804 |    -> ( member(fine_tuned_model=TunedModel,ReturnData), 805 |         Result=[TunedModel] 806 |       ) 807 |    ;  Result= json(ReturnData) 808 |    ). 809 | 810 | %% gpt_fine_tunes_cancel(+ID:atom,-Result:list) is semidet. 811 | %% gpt_fine_tunes_cancel(+ID:atom,-Result:list,+Raw:boolean) is semidet. 812 | % Cancel a fine-tune job. 813 | % 814 | % Example use: 815 | % ~~~ 816 | % :- gpt_fine_tunes([_-ID]), gpt_fine_tunes_cancel(ID,Result), 817 | %    Result = ['curie:ft-personal-2022-02-15-04-21-04'] 818 | % ~~~ 819 | % 820 | % @arg ID ID of the fine-tune job 821 | % @arg Result List with the fine-tuned model name, or json term (depending on `Raw`) 822 | % @arg Raw If `true` the Result will be the json term, if `false` (default) 823 | %      the Result will be a list with the fine-tuned model name 824 | % TODO: ***** DOES NOT WORK **** something to do with post without data? 825 | gpt_fine_tunes_cancel(ID,Result):- 826 |    gpt_fine_tunes_cancel(ID,Result,false),!.
827 | gpt_fine_tunes_cancel(ID,Result,Raw):- 828 |    current_prolog_flag(gptkey,Key), 829 |    atomic_list_concat(['https://api.openai.com/v1/fine-tunes/',ID,'/cancel'],URL), 830 |    http_post(URL,[],json(ReturnData), 831 |             [authorization(bearer(Key)),application/json]), 832 |    ( Raw=false 833 |    -> ( member(fine_tuned_model=TunedModel,ReturnData), 834 |         Result=[TunedModel] 835 |       ) 836 |    ;  Result= json(ReturnData) 837 |    ). 838 | 839 | %% gpt_fine_tunes_events(+ID:atom,-Result:list) is semidet. 840 | %% gpt_fine_tunes_events(+ID:atom,-Result:list,+Raw:boolean) is semidet. 841 | % List events of a fine-tune job. 842 | % 843 | % Example use: 844 | % ~~~ 845 | % :- gpt_fine_tunes([_-ID]), gpt_fine_tunes_events(ID,Result), 846 | %    Result = ['curie:ft-personal-2022-02-15-04-21-04'] 847 | % ~~~ 848 | % 849 | % @arg ID ID of the fine-tune job 850 | % @arg Result List with file name, or json term (depending on `Raw`) 851 | % @arg Raw If `true` the Result will be the json term, if `false` (default) 852 | %      the Result will be a simple list of file names 853 | % TODO: ***** DOES NOT WORK **** something to do with post without data? 854 | gpt_fine_tunes_events(ID,Result):- 855 |    gpt_fine_tunes_events(ID,Result,false),!. 856 | gpt_fine_tunes_events(ID,Result,Raw):- 857 |    current_prolog_flag(gptkey,Key), 858 |    atomic_list_concat(['https://api.openai.com/v1/fine-tunes/',ID,'/events'],URL), 859 |    http_get(URL,json(ReturnData), 860 |             [authorization(bearer(Key)),application/json]), 861 |    ( Raw=false 862 |    -> ( member(fine_tuned_model=TunedModel,ReturnData), 863 |         Result=[TunedModel] 864 |       ) 865 |    ;  Result= json(ReturnData) 866 |    ). 867 | 868 | %% gpt_fine_tunes_delete(+ID:atom,-Result:list) is semidet. 869 | %% gpt_fine_tunes_delete(+ID:atom,-Result:list,+Raw:boolean) is semidet.
870 | % Delete a fine-tuned model from GPT storage 871 | % 872 | % Example use: 873 | % ~~~ 874 | % :- gpt_fine_tunes([_-ID]),gpt_fine_tunes_delete(ID,Result), 875 | %    Result = ['ft-XjGxS3KTG0uNmNOK362iJua3'] 876 | % ~~~ 877 | % 878 | % @arg ID ID of the fine-tuned model to delete 879 | % 880 | % @arg Result List with the deleted model's ID, or json term (depending on `Raw`) 881 | % @arg Raw If `true` the Result will be the json term, if `false` (default) 882 | %      the Result will be a list with the deleted model's ID 883 | gpt_fine_tunes_delete(ID,Result):- 884 |    gpt_fine_tunes_delete(ID,Result,false),!. 885 | gpt_fine_tunes_delete(ID,Result,Raw):- 886 |    current_prolog_flag(gptkey,Key), 887 |    atomic_concat('https://api.openai.com/v1/models/',ID,URL), 888 |    http_delete(URL,json(ReturnData), 889 |             [authorization(bearer(Key)),application/json]), 890 |    ( Raw=false 891 |    -> (member(id=ID,ReturnData), Result=[ID]) 892 |    ;  Result= json(ReturnData) 893 |    ). 894 | 895 | 896 | %% gpt_moderations(+Input:text,-Result:list,+Options:list) is semidet. 897 | % Given an input text, outputs whether the model classifies it as violating OpenAI's content policy. 898 | % 899 | % Example use: 900 | % ~~~ 901 | % :- gpt_moderations('I want to kill them',Result,[]), 902 | %    Result = [sexual=false, hate=false, violence=true, 'self-harm'=false, 903 | %              'sexual/minors'=false, 'hate/threatening'=false, 'violence/graphic'=false]. 904 | % ~~~ 905 | % 906 | % @arg Input Text to test for content policy violation 907 | % @arg Result JSON structure with policy scores 908 | gpt_moderations(Input,Result,Options):- 909 |    gpt_moderations(Input,Result,false,Options).
910 | gpt_moderations(Input,Result,Raw,Options):- 911 | current_prolog_flag(gptkey,Key), 912 | atom_json_term(D,json([input=Input|Options]),[]), 913 | Data = atom(application/json,D), 914 | http_post('https://api.openai.com/v1/moderations',Data,ReturnData, 915 | [authorization(bearer(Key)),application/json]), 916 | ( Raw=false 917 | -> ( gpt_extract_data(results,categories,ReturnData,[json(R)]), 918 | maplist(json_pair_boolean,R,Result) 919 | ) 920 | ; Result= ReturnData 921 | ). 922 | 923 | json_pair_boolean(Name='@'(Boolean),Name=Boolean):-!. 924 | json_pair_boolean(Name=Val,Name=Val):-!. 925 | -------------------------------------------------------------------------------- /docs/Readme.md: -------------------------------------------------------------------------------- 1 | # The Docs Folder 2 | 3 | The docs folder contains any relevant literature and extra documentation (e.g. user guides) for the project. 4 | 5 | A special subfolder called "wiki" contains the source files for the github wiki repo (github.com//.wiki.git). For this project, we assume that you have a directory (folder) called `prolog2gpt` and one called `prolog2gpt.wiki`. You would not edit the content in `prolog2gpt.wiki` directly, but instead edit the content in `prolog2gpt/src/docs/wiki/` and then copy with `cp -R prolog2gpt/src/docs/wiki/* prolog2gpt.wiki` (and commit+push `prolog2gpt.wiki`). 6 | 7 | This enables us to work on the documentation as "source" and only push the finished version to the wiki. 8 | -------------------------------------------------------------------------------- /docs/wiki/Home.md: -------------------------------------------------------------------------------- 1 | # Welcome to the prolog2gpt wiki 2 | 3 | prolog2gpt is (will be) a SWI Prolog library to interface to the GPT API. 4 | 5 | ## Introduction 6 | 7 | Large Language Models (LLM) like GPT have greatly advanced natural language processing in recent years. 
However, they can benefit from interfacing with other types of reasoning modules, including logic engines. See for example the discussions in "Faithful Chain-of-Thought Reasoning" [[1]](#1) and "FOLIO: Natural Language Reasoning with First-Order Logic" [[2]](#2). 8 | 9 | Currently, there are interface libraries to GPT for Python and NodeJS, but not Prolog. The work in this repo seeks to address that gap by building a library for SWI Prolog. 10 | 11 | 12 | ## References 13 | [[1]]: Lyu, Q., Havaldar, S., Stein, A., Zhang, L., Rao, D., Wong, E., Apidianaki, M. and Callison-Burch, C., 2023. Faithful Chain-of-Thought Reasoning. arXiv preprint https://arxiv.org/pdf/2301.13379. 14 | 15 | [[2]]: Han, S., Schoelkopf, H., Zhao, Y., Qi, Z., Riddell, M., Benson, L., Sun, L., Zubova, E., Qiao, Y., Burtell, M. and Peng, D., 2022. Folio: Natural language reasoning with first-order logic. https://arxiv.org/pdf/2209.00840. -------------------------------------------------------------------------------- /rel/prolog2gpt-0.1.00.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/RdR1024/prolog2gpt/c19890ceee13e7692bb9e02661d18a3353764947/rel/prolog2gpt-0.1.00.zip -------------------------------------------------------------------------------- /src/pack.pl: -------------------------------------------------------------------------------- 1 | name(prolog2gpt). 2 | title('Library of prolog predicates to access the GPT API'). 3 | version('0.1.00'). 4 | author('Richard de Rozario','richard.derozario@gmail.com'). 5 | home('https://github.com/RdR1024/prolog2gpt'). 6 | download('https://github.com/RdR1024/prolog2gpt/raw/main/rel/prolog2gpt-0.1.00.zip'). 
-------------------------------------------------------------------------------- /src/prolog/prolog2gpt.pro: -------------------------------------------------------------------------------- 1 | :- module(prolog2gpt,[ 2 | init_gptkey/0, 3 | gpt_models/1, gpt_models/2, 4 | gpt_models_detail/2, 5 | gpt_extract_data/4, 6 | gpt_extract_fields/3, 7 | gpt_extract_field_pairs/4, 8 | gpt_completions/4, gpt_completions/5, 9 | gpt_images_create/3, gpt_images_create/4, 10 | gpt_images_edits/4, gpt_images_edits/5, 11 | gpt_images_variations/3, gpt_images_variations/4, 12 | gpt_embeddings/4, gpt_embeddings/5, 13 | gpt_files/1, gpt_files/2, 14 | gpt_files_upload/4, gpt_files_upload/5, 15 | gpt_files_delete/2, gpt_files_delete/3, 16 | gpt_files_retrieve/2, gpt_files_retrieve/3, 17 | gpt_files_retrieve_content/2, gpt_files_retrieve_content/3, 18 | gpt_fine_tunes/1,gpt_fine_tunes/2,gpt_fine_tunes/3, gpt_fine_tunes/4, 19 | gpt_fine_tunes_detail/2, gpt_fine_tunes_detail/3, 20 | gpt_fine_tunes_cancel/2, gpt_fine_tunes_cancel/3, 21 | gpt_fine_tunes_events/2, gpt_fine_tunes_events/3, 22 | gpt_fine_tunes_delete/2, gpt_fine_tunes_delete/3, 23 | gpt_moderations/3, gpt_moderations/4 24 | ]). 25 | /** Prolog interface to GPT 26 | 27 | # Introduction 28 | 29 | This module provides prolog predicates to call the GPT API. 30 | 31 | Large Language Models (LLM) like GPT essentially predict what text comes next, based on 32 | learning the (latent) probabilistic relationships between text tokens. By training the 33 | model on massive samples of text, the natural language capabilities have improved 34 | dramatically in recent years. 35 | 36 | However, such language models can benefit from interaction with other types of modules, 37 | such as logic engines. To aid in developing such interactions, this library aims 38 | to make it easy to interact with GPT directly from Prolog, using predicates that call 39 | the GPT API. 
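A minimal end-to-end session looks like this (a sketch; the model name is
illustrative, and which models you can call depends on your account):

~~~
?- init_gptkey.
?- gpt_models(Models).
?- gpt_completions('gpt-3.5-turbo','Say hello',Answer,[]).
~~~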
40 | 41 | The Prolog predicates are based on the OpenAI API Reference: https://platform.openai.com/docs/api-reference 42 | 43 | # Usage 44 | 45 | 1. Create your GPT account at https://platform.openai.com 46 | 2. Create an API key at https://platform.openai.com/account/api-keys and, as instructed, save that key 47 | somewhere (e.g. in a text file in a secure folder). 48 | 3. Set an environment variable called `GPTKEY` to the key value (don't forget in Linux that if you added the environment variable to your bash startup script, e.g. `~/.bashrc`, then you need to `source` your `~/.bashrc` script to activate the new environment variable). 49 | 4. Use the `prolog2gpt.pro` Prolog module as usual 50 | 51 | @author Richard de Rozario 52 | @license MIT 53 | */ 54 | 55 | :- use_module(library(http/http_open)). 56 | :- use_module(library(http/http_client)). 57 | :- use_module(library(http/http_ssl_plugin)). 58 | :- use_module(library(http/json)). 59 | :- use_module(library(http/http_json)). 60 | :- use_module(library(http/json_convert)). 61 | 62 | %% init_gptkey is semidet. 63 | % Get the GPT API Key from the environment variable named `GPTKEY` and create 64 | % a prolog flag (`gptkey`) with the key value. Note: execute this predicate 65 | % before using any of the others, because the gpt key is needed for authorization 66 | % with each gpt api call. 67 | % 68 | % Example use: 69 | % ~~~ 70 | % :- init_gptkey, current_prolog_flag(gptkey,Key), writeln(Key). 71 | % Key = sk-manycharactersoftheacturalkeyvalue 72 | % ~~~ 73 | % 74 | init_gptkey:- 75 | getenv('GPTKEY',Key), 76 | create_prolog_flag(gptkey,Key,[type(atom)]). 77 | 78 | %% gpt_models(-Models:list) is semidet. 79 | %% gpt_models(-Models:json,+Raw:boolean) is semidet. 80 | % Get a list of the available GPT models 81 | % 82 | % Example use: 83 | % ~~~ 84 | % :- gpt_models(Models). 85 | % Models = [babbage,davinci,...] 
86 | % ~~~ 87 | % 88 | % @arg Models The list of model names, or JSON term with model details 89 | % @arg Raw If `true` then Models is the raw json result, else Models is a list of model names 90 | % 91 | gpt_models(Models):- gpt_models(Models,false). 92 | gpt_models(Models,Raw):- 93 |    current_prolog_flag(gptkey,Key), 94 |    http_get('https://api.openai.com/v1/models',Ms, 95 |             [authorization(bearer(Key)),application/json]), 96 |    ( Raw=false 97 |    -> gpt_extract_data(data,id,Ms,Models) 98 |    ;  Models=Ms 99 |    ). 100 | 101 | 102 | %% gpt_models_detail(+Model:atom, -ModelDetails:json) is semidet. 103 | % Get the details of a particular model 104 | % 105 | % Example use: 106 | % ~~~ 107 | % :- gpt_models_detail('text-davinci-003',Details). 108 | %    Details = ... % the JSON term 109 | % ~~~ 110 | % 111 | % @arg Model The model name. Note: put names that end with numeric suffixes in 112 | %      single quotes, to avoid the numeric being treated as a number. 113 | %      For example, use `'text-davinci-003'` 114 | % @arg Details The details of the model as a JSON term 115 | % 116 | gpt_models_detail(Model,Details):- 117 |    current_prolog_flag(gptkey,Key), 118 |    atomic_concat('https://api.openai.com/v1/models/',Model,URL), 119 |    http_get(URL,Details,[authorization(bearer(Key)),application/json]). 120 | 121 | 122 | %% gpt_extract_data(+Group:atom,+Fieldname:atom,+Data:json,-Result:list) is semidet. 123 | % Extract a list of field data from a gpt json structure. Note: this predicate 124 | % makes some simple assumptions about how GPT API result data is structured. 125 | % 126 | % Example use: 127 | % ~~~ 128 | % :- gpt_models(Ms,true), gpt_extract_data(data,id,Ms,Models). 129 | %    Models = ['babbage','text-davinci-001',...] 130 | % ~~~ 131 | % 132 | % @arg Group The GPT data group name. e.g. `data`, `choices`,...
133 | % @arg Fieldname The name of the field whose data we want 134 | % @arg Data The json data list from the GPT API, that contains one or more field values 135 | % @arg Result The resulting list of data values 136 | gpt_extract_data(Group, Fieldname, json(Data), Result):- 137 |    member(Group=Fieldlist, Data), 138 |    gpt_extract_fields(Fieldname, Fieldlist, Result). 139 | 140 | %% gpt_extract_fields(+Fieldname:atom,+Data:list,-Result:list) is semidet. 141 | % Extract a list of field data from a gpt json structure. Note: this predicate 142 | % makes some simple assumptions about how GPT API result data is structured. 143 | % 144 | % Example use: 145 | % ~~~ 146 | % :- Data=[json([id='babbage',object='model']),json([id='text-davinci-001',object='model'])], gpt_extract_fields(id,Data,Models). 147 | %    Models = ['babbage','text-davinci-001'] 148 | % ~~~ 149 | % 150 | % @arg Fieldname The name of the field whose data we want 151 | % @arg Data The list with json data from the GPT API, that contains one or more field values 152 | % @arg Result The resulting list of data values 153 | gpt_extract_fields(_,[],[]):-!. 154 | gpt_extract_fields(Fieldname,[json(Fields)|Fs],Results):- 155 |    ( member(Fieldname=R,Fields) 156 |    -> Results=[R|Res] 157 |    ;  Results=Res 158 |    ), 159 |    gpt_extract_fields(Fieldname,Fs,Res). 160 | 161 | %% gpt_extract_field_pairs(+Field1:atom,+Field2:atom,+Data:json,-Result:list) is semidet. 162 | % Extract a list of field pairs from a gpt json structure. Note: this predicate 163 | % makes some simple assumptions about how GPT API result data is structured. 164 | % 165 | % Example use: 166 | % ~~~ 167 | % :- Data=[json([id='123',filename=file1]),json([id='345',filename=file2])], gpt_extract_field_pairs(filename,id,Data,FieldPairs).
168 | % FieldPairs = [file1-'123',file2-'345'] 169 | % ~~~ 170 | % 171 | % @arg Fieldname The name of the field whose data we want 172 | % @arg Data The list with json data from the GPT API, that contains one or more field values 173 | % @arg Result The resulting list of data values 174 | gpt_extract_field_pairs(_,_,[],[]):-!. 175 | gpt_extract_field_pairs(Field1,Field2,[json(Fields)|Fs],Results):- 176 | ( member(Field1=F1,Fields) 177 | -> ( member(Field2=F2,Fields) 178 | -> Results = [F1-F2|Res] 179 | ; Results = Res 180 | ) 181 | ; Results = Res 182 | ),!, 183 | gpt_extract_field_pairs(Field1,Field2,Fs,Res). 184 | 185 | 186 | %% gpt_completions(+Model:atom, +Prompt:atom, -Result:text, +Options:list) is semidet. 187 | %% gpt_completions(+Model:atom, +Prompt:atom, -Result:term, ?Raw:boolean,+Options:list) is semidet. 188 | % Get a prompted text completion from a GPT model. 189 | % 190 | % Example use: 191 | % ~~~ 192 | % :- gpt_completions('text-davinci-003','My favourite animal is ',Result,_,[]), 193 | % Result = ['a dog'] 194 | % ~~~ 195 | % 196 | % @arg Model The GPT model name, Note: put names that end with numeric suffixes in 197 | % single quotes, to avoid the numeric being treated as a number. 198 | % For example, use `'text-davinci-003'` 199 | % @arg Prompt The prompt that GPT will complete 200 | % @arg Result The text result, or json term with the result from GPT 201 | % @arg Raw If `true` the Result will be the json term, if `false` (default) 202 | % the Result will be the text completion result 203 | % @arg Options The model completion options as list of json pair values (see below) 204 | % 205 | % 206 | % Options (Note option descriptions are mostly from the GPT API reference -- see the https://platform.openai.com/docs/api-reference for up-to-date and further details): 207 | % * suffix=S 208 | % A string (S) that is inserted after the completion 209 | % * max_tokens=M 210 | % The size of output, where `M` is a natural number (incl. 0). 
211 | % GPT-3 can theoretically return up to 4096 tokens, but in practice less than half that. 212 | % One token is about 4 characters or 0.75 average word length. Defaults to 16. 213 | % * temperature=N 214 | % Controls "randomness" of output, with `0<=N<=2`. Defaults to 1. 215 | % Higher temperature means text will be more diverse, but 216 | % also risks more grammar mistakes and nonsense. Recommended to 217 | % change either this or `top_p`, but not both. 218 | % * top_p 219 | % An alternative to sampling with `temperature`, called 220 | % nucleus sampling, where the model considers the results 221 | % of the tokens with `top_p` probability mass. So 0.1 means 222 | % only the tokens comprising the top 10% probability mass are 223 | % considered. Use this, or `temperature`, but not both. 224 | % Defaults to 1. 225 | % * n=N 226 | % The number of completions (e.g. Results) to generate for each 227 | % prompt. Defaults to 1. 228 | % * stream=TF 229 | % If `true` then tokens will be sent as data-only 230 | % `server-sent events` as they become available, with the 231 | % stream terminated by a `data: [DONE]` message. 232 | % Defaults to `false` 233 | % * logprobs=N 234 | % Include the log probabilities on most likely tokens. For 235 | % example, if `logprobs=5`, the API will return a list of 236 | % the 5 most likely tokens. The API will always return the 237 | % log probability of the sampled token, so there may be up 238 | % to `logprobs+1` elements in the response. Defaults to 0. 239 | % * echo=TF 240 | % If `true`, echo back the prompt in addition to 241 | % the completion. Defaults to `false` 242 | % * stop=S 243 | % (string or list of strings). Up to 4 strings ("sequences") 244 | % where the API will stop generating further tokens. The 245 | % returned text will not contain the stop sequences. Defaults 246 | % to `null` 247 | % * presence_penalty=N 248 | % Number between -2.0 and 2.0. 
Positive values penalize new 249 | % tokens based on whether they appear in the text so far, 250 | % increase the model's likelihood to talk about new topics. 251 | % Defaults to 0. 252 | % * frequency_penalty=N 253 | % Number between -2.0 and 2.0. Positive values penalize new 254 | % tokens based on their existing frequency in the text so far, 255 | % decreasing the model's likelihood to repeat the same line 256 | % verbatim. Defaults to 0. 257 | % * best_of=N 258 | % Generates best_of completions server-side and returns the "best" 259 | % (the one with the highest log probability per token). Results cannot be streamed. 260 | % 261 | % When used with `n`, `best_of` controls the number of candidate completions 262 | % and `n` specifies how many to return – `best_of` must be greater than `n`. 263 | % 264 | % Note: Because this parameter generates many completions, it can quickly consume 265 | % your token quota. Use carefully and ensure that you have reasonable settings for 266 | % `max_tokens` and `stop`. 267 | % * logit_bias=JSON_TERM 268 | % Modify the likelihood of specified tokens appearing in the completion. 269 | % 270 | % Accepts a json object that maps tokens (specified by their token ID in the 271 | % GPT tokenizer) to an associated bias value from -100 to 100. You can use this 272 | % tokenizer tool (which works for both GPT-2 and GPT-3) to convert text to token IDs. 273 | % Mathematically, the bias is added to the logits generated by the model prior to sampling. 274 | % The exact effect will vary per model, but values between -1 and 1 should decrease or 275 | % increase likelihood of selection; values like -100 or 100 should result in a ban or 276 | % exclusive selection of the relevant token. 277 | % 278 | % As an example, you can pass `json('50256': -100)` to prevent the `<|endoftext|>` token 279 | % from being generated. 280 | % * user=S 281 | % A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. 
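% A typical call combining a few of these options might look as follows
% (a sketch; the model name and option values are illustrative):
% ~~~
% :- gpt_completions('gpt-3.5-turbo','Write a haiku about Prolog',Result,
%        [max_tokens=64,temperature=0.7,n=1]).
% ~~~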
282 | % 283 | gpt_completions(Model, Prompt, Result, Options):- 284 | gpt_completions(Model, Prompt, Result, false, Options),!. 285 | 286 | gpt_completions(Model, Prompt, Result, Raw, Options):- 287 | current_prolog_flag(gptkey,Key), 288 | 289 | atom_json_term(D,json([model = Model, messages = [json([role = user, content = Prompt])] | Options]),[]), 290 | Data = atom(application/json, D), 291 | 292 | http_post('https://api.openai.com/v1/chat/completions', Data, ReturnData, 293 | [authorization(bearer(Key)), application/json]), 294 | ( Raw = false 295 | -> ( gpt_extract_data(choices, message, ReturnData, [json(Message)]), 296 | member(content = Result, Message)) 297 | ; Result = ReturnData 298 | ). 299 | 300 | %% gpt_images_create(+Prompt:atom, -Result:term, +Options:list) is semidet. 301 | %% gpt_images_create(+Prompt:atom, -Result:term, ?Raw:boolean,+Options:list) is semidet. 302 | % Create an image from a text prompt. 303 | % 304 | % Example use: 305 | % ~~~ 306 | % :- gpt_images_create('A cute baby sea otter',Result,_,[]), 307 | % Result = ['https://...'] % url of the resulting image 308 | % ~~~ 309 | % 310 | % @arg Prompt The text description of the desired image 311 | % @arg Result The text result, or json term with the result from GPT 312 | % @arg Raw If `true` the Result will be the json term, if `false` (default) 313 | % the Result will be the (first) url or b64 result 314 | % @arg Options The image creation options as list of json pair values (see below) 315 | % 316 | % 317 | % Options (Note option descriptions are mostly from the GPT API reference -- see the https://platform.openai.com/docs/api-reference for up-to-date and further details): 318 | % * n=N 319 | % The number of images to generate. Defaults to 1. 320 | % * size=Z 321 | % The size of the image. Must be one of `'256x256'`, `'512x512'`, or `'1024x1024'`. 322 | % Default is `'1024x1024'` 323 | % * response_format=S 324 | % The format of the generated images. Must be one of `url` or `b64_json`. 
Default is `url` 325 | % * user=S 326 | % A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. 327 | % 328 | gpt_images_create(Prompt,Result,Options):- 329 | gpt_images_create(Prompt,Result,false,Options). 330 | gpt_images_create(Prompt,Result,Raw,Options):- 331 | current_prolog_flag(gptkey,Key), 332 | atom_json_term(D,json([prompt=Prompt|Options]),[]), 333 | Data = atom(application/json,D), 334 | http_post('https://api.openai.com/v1/images/generations',Data,ReturnData, 335 | [authorization(bearer(Key)),application/json]), 336 | ( member(response_format=Format,Options) -> true ; Format=url ), 337 | ( Raw=false 338 | -> gpt_extract_data(data,Format,ReturnData,Result) 339 | ; Result= ReturnData 340 | ). 341 | 342 | %% gpt_images_edits(+Prompt:atom, +File:atom, -Result:term, +Options:list) is semidet. 343 | %% gpt_images_edits(+Prompt:atom, +File:atom, -Result:term, ?Raw:boolean, +Options:list) is semidet. 344 | % Modify an image from a text prompt. 345 | % 346 | % Example use: 347 | % ~~~ 348 | % :- gpt_images_edits('A cute baby sea otter with a hat','./test/otter.png',Result,_,[]), 349 | % Result = ['https://...'] % url of the resulting image 350 | % ~~~ 351 | % 352 | % @arg Prompt The text description of how the image should be edited 353 | % @arg File The path/filename of the image to edit 354 | % @arg Result The text result, or json term with the result from GPT 355 | % @arg Raw If `true` the Result will be the json term, if `false` (default) 356 | % the Result will be the (first) url or b64 result 357 | % @arg Options The edit options as list of pair values (see below) 358 | % 359 | % 360 | % Options (Note option descriptions are mostly from the GPT API reference -- see the https://platform.openai.com/docs/api-reference for up-to-date and further details): 361 | % * n=N 362 | % The number of images to generate. Defaults to 1. 363 | % * size=Z 364 | % The size of the image. Must be one of `'256x256'`, `'512x512'`, or `'1024x1024'`. 
365 | % Default is `'1024x1024'` 366 | % * mask 367 | % An additional image whose fully transparent areas (i.e. where alpha is zero) 368 | % indicate where the image should be edited. Must be a valid 369 | % PNG file, less than 4MB, and have the same dimensions as the image. 370 | % * response_format=S 371 | % The format of the generated images. Must be one of `url` or `b64_json`. Default is `url` 372 | % * user=S 373 | % A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. 374 | % 375 | gpt_images_edits(Prompt,Image,Result,Options):- 376 | gpt_images_edits(Prompt,Image,Result,false,Options). 377 | gpt_images_edits(Prompt,Image,Result,Raw,Options):- 378 | current_prolog_flag(gptkey,Key), 379 | Data = form_data([prompt=Prompt,image=file(Image)|Options]), 380 | http_post('https://api.openai.com/v1/images/edits',Data,ReturnData, 381 | [authorization(bearer(Key)),application/json]), 382 | ( member(response_format=Format,Options) -> true ; Format=url ), 383 | ( Raw=false 384 | -> gpt_extract_data(data,Format,ReturnData,Result) 385 | ; Result= ReturnData 386 | ). 387 | 388 | %% gpt_images_variations(+File:atom, -Result:term,+Options:list) is semidet. 389 | %% gpt_images_variations(+File:atom, -Result:term, ?Raw:boolean,+Options:list) is semidet. 390 | % Produce variation(s) of an image. 
391 | % 392 | % Example use: 393 | % ~~~ 394 | % :- gpt_images_variations('./test/otter.png',Result,_,[]), 395 | % Result = ['https://...'] % url of the resulting image 396 | % ~~~ 397 | % 398 | % @arg Image The path/filename of the image to vary 399 | % @arg Result The text result, or json term with the result from GPT 400 | % @arg Raw If `true` the Result will be the json term, if `false` (default) 401 | % the Result will be the (first) url or b64 result 402 | % @arg Options The variation options as list of pair values (see below) 403 | % 404 | % 405 | % Options (Note option descriptions are mostly from the GPT API reference -- see the https://platform.openai.com/docs/api-reference for up-to-date and further details): 406 | % * n=N 407 | % The number of images to generate. Defaults to 1. 408 | % * size=Z 409 | % The size of the image. Must be one of `'256x256'`, `'512x512'`, or `'1024x1024'`. 410 | % Default is `'1024x1024'` 411 | % * response_format=S 412 | % The format of the generated images. Must be one of `url` or `b64_json`. Default is `url` 413 | % * user=S 414 | % A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. 415 | % 416 | gpt_images_variations(Image,Result,Options):- 417 | gpt_images_variations(Image,Result,false,Options). 418 | 419 | gpt_images_variations(Image,Result,Raw,Options):- 420 | current_prolog_flag(gptkey,Key), 421 | Data = form_data([image=file(Image)|Options]), 422 | http_post('https://api.openai.com/v1/images/variations',Data,ReturnData, 423 | [authorization(bearer(Key)),application/json]), 424 | ( member(response_format=Format,Options) -> true ; Format=url ), 425 | ( Raw=false 426 | -> gpt_extract_data(data,Format,ReturnData,Result) 427 | ; Result= ReturnData 428 | ). 429 | 430 | %% gpt_embeddings(+Model:atom, +Input:text, -Result:list, +Options:list) is semidet. 431 | %% gpt_embeddings(+Model:atom, +Input:text, -Result:list, +Raw:boolean, +Options:list) is semidet. 
432 | % Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms. 433 | % 434 | % Example use: 435 | % ~~~ 436 | % :- gpt_embeddings('text-embedding-ada-002','The food was delicious',Result,[]), 437 | % Result = [0.0023064255,-0.009327292,...] 438 | % ~~~ 439 | % 440 | % @arg Input Atom, string, or list of such 441 | % @arg Result List of numbers (the embedding vector), or json term (depending on `Raw`) 442 | % @arg Raw If `true` the Result will be the json term, if `false` (default) 443 | % the Result will be a simple list of numbers (the embedding vector) 444 | % Options (Note option descriptions are mostly from the GPT API reference -- see the https://platform.openai.com/docs/api-reference for up-to-date and further details): 445 | % * user=S 446 | % A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. 447 | % 448 | gpt_embeddings(Model,Input,Result,Options):- 449 | gpt_embeddings(Model,Input,Result,false,Options),!. 450 | gpt_embeddings(Model,Input,Result,Raw,Options):- 451 | current_prolog_flag(gptkey,Key), 452 | atom_json_term(D,json([model=Model,input=Input|Options]),[]), 453 | Data = atom(application/json,D), 454 | http_post('https://api.openai.com/v1/embeddings',Data,ReturnData, 455 | [authorization(bearer(Key)),application/json]), 456 | ( Raw=false 457 | -> gpt_extract_data(data,embedding,ReturnData,Result) 458 | ; Result= ReturnData 459 | ). 460 | 461 | 462 | %% gpt_files(-Result:list) is semidet. 463 | %% gpt_files(-Result:list,+Raw:boolean) is semidet. 464 | % List all files that belong to the user's organization. 
465 | % 466 | % Example use: 467 | % ~~~ 468 | % :- gpt_files(Result), 469 | % Result = ['file1.jsonl'-'file-12345','file2.jsonl'-'file-56789'] 470 | % ~~~ 471 | % 472 | % @arg Result List of Filename-ID pairs, or json term (depending on `Raw`) 473 | % @arg Raw If `true` the Result will be the json term, if `false` (default) 474 | % the Result will be a list of Filename-ID pairs 475 | gpt_files(Result):- 476 | gpt_files(Result,false). 477 | gpt_files(Result,Raw):- 478 | current_prolog_flag(gptkey,Key), 479 | http_get('https://api.openai.com/v1/files',json(ReturnData), 480 | [authorization(bearer(Key)),application/json]), 481 | ( Raw=false 482 | -> ( member(data=Files,ReturnData), 483 | gpt_extract_field_pairs(filename,id,Files,Result) 484 | ) 485 | ; Result= json(ReturnData) 486 | ). 487 | 488 | %% gpt_files_upload(+File:atom,+Purpose:text,-Result:list,+Options:list) is semidet. 489 | %% gpt_files_upload(+File:atom,+Purpose:text,-Result:list,+Raw:boolean,+Options:list) is semidet. 490 | % Upload a JSON Lines file (typically for fine-tuning) 491 | % 492 | % Example use: 493 | % ~~~ 494 | % :- gpt_files_upload('./test/tune_answer.jsonl','fine-tune',Result,[]), 495 | % Result = ['file-XjGxS3KTG0uNmNOK362iJua3'] 496 | % ~~~ 497 | % 498 | % @arg File Filename to upload 499 | % @arg Purpose Purpose of the file. Currently only 'fine-tune' 500 | % @arg Result List with the uploaded file's ID, or json term (depending on `Raw`) 501 | % @arg Raw If `true` the Result will be the json term, if `false` (default) 502 | % the Result will be a list with the uploaded file's ID 503 | gpt_files_upload(File,Purpose,Result,Options):- 504 | gpt_files_upload(File,Purpose,Result,false,Options),!. 
505 | gpt_files_upload(File,Purpose,Result,Raw,Options):- 506 | current_prolog_flag(gptkey,Key), 507 | Data = form_data([file=file(File),purpose=Purpose|Options]), 508 | http_post('https://api.openai.com/v1/files',Data,json(ReturnData), 509 | [authorization(bearer(Key)),application/json]), 510 | ( Raw=false 511 | -> (member(id=ID,ReturnData),Result=[ID]) 512 | ; Result= json(ReturnData) 513 | ). 514 | 515 | %% gpt_files_delete(+FileID:atom,-Result:list) is semidet. 516 | %% gpt_files_delete(+FileID:atom,-Result:list,+Raw:boolean) is semidet. 517 | % Delete a (user) file from GPT storage 518 | % 519 | % Example use: 520 | % ~~~ 521 | % :- gpt_files_delete('file-XjGxS3KTG0uNmNOK362iJua3',Result), 522 | % Result = ['file-XjGxS3KTG0uNmNOK362iJua3'] 523 | % ~~~ 524 | % 525 | % @arg FileID File ID of file in GPT storage to delete 526 | % 527 | % @arg Result List with the deleted file's ID, or json term (depending on `Raw`) 528 | % @arg Raw If `true` the Result will be the json term, if `false` (default) 529 | % the Result will be a list with the deleted file's ID 530 | gpt_files_delete(FileID,Result):- 531 | gpt_files_delete(FileID,Result,false),!. 532 | gpt_files_delete(FileID,Result,Raw):- 533 | current_prolog_flag(gptkey,Key), 534 | atomic_concat('https://api.openai.com/v1/files/',FileID,URL), 535 | http_delete(URL,json(ReturnData), 536 | [authorization(bearer(Key)),application/json]), 537 | ( Raw=false 538 | -> (member(id=ID,ReturnData), Result=[ID]) 539 | ; Result= json(ReturnData) 540 | ). 541 | 542 | %% gpt_files_retrieve(+FileID:atom,-Result:list) is semidet. 543 | %% gpt_files_retrieve(+FileID:atom,-Result:list,+Raw:boolean) is semidet. 
544 | % Retrieve the details of a (user) file 545 | % 546 | % Example use: 547 | % ~~~ 548 | % :- gpt_files_retrieve('file-XjGxS3KTG0uNmNOK362iJua3',Result), 549 | % Result = ['myfile.jsonl'] 550 | % ~~~ 551 | % 552 | % @arg FileID File ID of file in GPT storage to retrieve 553 | % @arg Result List with file name, or json term (depending on `Raw`) 554 | % @arg Raw If `true` the Result will be the json term, if `false` (default) 555 | % the Result will be a simple list of file names 556 | gpt_files_retrieve(FileID,Result):- 557 | gpt_files_retrieve(FileID,Result,false),!. 558 | gpt_files_retrieve(FileID,Result,Raw):- 559 | current_prolog_flag(gptkey,Key), 560 | atomic_concat('https://api.openai.com/v1/files/',FileID,URL), 561 | http_get(URL,json(ReturnData), 562 | [authorization(bearer(Key)),application/json]), 563 | ( Raw=false 564 | -> (member(filename=File,ReturnData), Result=[File]) 565 | ; Result= json(ReturnData) 566 | ). 567 | 568 | %% gpt_files_retrieve_content(+FileID:atom,-Result:list) is semidet. 569 | %% gpt_files_retrieve_content(+FileID:atom,-Result:list,+Raw:boolean) is semidet. 570 | % Retrieve the contents of a (user) file 571 | % 572 | % Example use: 573 | % ~~~ 574 | % :- gpt_files_retrieve_content('file-XjGxS3KTG0uNmNOK362iJua3',Result), 575 | % Result = ['myfile.jsonl'] 576 | % ~~~ 577 | % 578 | % @arg FileID File ID of file in GPT storage to retrieve 579 | % @arg Result List with file name, or json term (depending on `Raw`) 580 | % @arg Raw If `true` the Result will be the json term, if `false` (default) 581 | % the Result will be a simple list of file names 582 | % TODO: ***** this API doesn't work for some reason ***** 583 | gpt_files_retrieve_content(FileID,Result):- 584 | gpt_files_retrieve_content(FileID,Result,false),!. 
585 | gpt_files_retrieve_content(FileID,Result,Raw):- 586 | current_prolog_flag(gptkey,Key), 587 | atomic_list_concat(['https://api.openai.com/v1/files/',FileID,'/content'],URL), 588 | http_get(URL,ReturnData, [authorization(bearer(Key))]), 589 | ( Raw=false 590 | -> (member(filename=File,ReturnData), Result=[File]) 591 | ; Result= ReturnData 592 | ). 593 | 594 | 595 | 596 | %% gpt_fine_tunes(+TrainingFile:text,-Result:list,+Options:list) is semidet. 597 | %% gpt_fine_tunes(+TrainingFile:text,-Result:list,+Raw:boolean,+Options:list) is semidet. 598 | % Create a fine-tuning job that trains a model on the given TrainingFile. 599 | % 600 | % Example use: 601 | % ~~~ 602 | % :- gpt_fine_tunes('file-XGinujblHPwGLSztz8cPS8XY',Result,[]), 603 | % Result = 'ft-AF1WoRqd3aJAHsqc9NY7iL8F' 604 | % ~~~ 605 | % 606 | % @arg TrainingFile Atom with the GPT file ID of an uploaded file 607 | % @arg Result The fine-tune job ID, or json term of details (depending on `Raw`) 608 | % @arg Raw If `true` the Result will be the json term, if `false` (default) 609 | % the Result will be the fine-tune job ID 610 | % Options (Note option descriptions are mostly from the GPT API reference -- see the https://platform.openai.com/docs/api-reference for up-to-date and further details): 611 | % * validation_file=F 612 | % The ID of an uploaded file that contains validation data. 613 | % 614 | % If you provide this file, the data is used to generate validation 615 | % metrics periodically during fine-tuning. These metrics can be viewed 616 | % in the fine-tuning results file. Your train and validation data should 617 | % be mutually exclusive. 618 | % 619 | % Your dataset must be formatted as a JSONL file, where each validation 620 | % example is a JSON object with the keys "prompt" and "completion". 621 | % Additionally, you must upload your file with the purpose fine-tune. 622 | % * model=M 623 | % The name of the base model to fine-tune. 
You can select one of 'ada', 624 | % 'babbage', 'curie', 'davinci', or a fine-tuned model created after 625 | % 2022-04-21. To learn more about these models, see the Models documentation. 626 | % Defaults to 'curie'. 627 | % * n_epochs=N 628 | % The number of epochs to train the model for. An epoch refers to one full 629 | % cycle through the training dataset. Defaults to 4. 630 | % * batch_size=N 631 | % The batch size to use for training. The batch size is the number of 632 | % training examples used in a single forward and backward pass. 633 | % 634 | % By default, the batch size will be dynamically configured to be ~0.2% 635 | % of the number of examples in the training set, capped at 256 - in 636 | % general, we've found that larger batch sizes tend to work better for 637 | % larger datasets. Defaults to `null`. 638 | % * learning_rate_multiplier=N 639 | % The learning rate multiplier to use for training. The fine-tuning 640 | % learning rate is the original learning rate used for pretraining 641 | % multiplied by this value. 642 | % 643 | % By default, the learning rate multiplier is 0.05, 0.1, or 0.2 644 | % depending on the final batch_size (larger learning rates tend to perform 645 | % better with larger batch sizes). We recommend experimenting with 646 | % values in the range 0.02 to 0.2 to see what produces the best results. 647 | % Defaults to `null`. 648 | % * prompt_loss_weight=N 649 | % The weight to use for loss on the prompt tokens. This controls how 650 | % much the model tries to learn to generate the prompt (as compared to 651 | % the completion which always has a weight of 1.0), and can add a 652 | % stabilizing effect to training when completions are short. 653 | % 654 | % If prompts are extremely long (relative to completions), it may make 655 | % sense to reduce this weight so as to avoid over-prioritizing learning 656 | % the prompt. 
Defaults to `0.01` 657 | % * compute_classification_metrics=B 658 | % If set, we calculate classification-specific metrics such as accuracy 659 | % and F-1 score using the validation set at the end of every epoch. 660 | % These metrics can be viewed in the results file. 661 | % 662 | % In order to compute classification metrics, you must provide a 663 | % validation_file. Additionally, you must specify classification_n_classes 664 | % for multiclass classification or classification_positive_class for 665 | % binary classification. Defaults to `false` 666 | % * classification_n_classes=N 667 | % The number of classes in a classification task. This parameter is 668 | % required for multiclass classification. Defaults to `null`. 669 | % * classification_positive_class=S 670 | % The positive class in binary classification. This parameter is needed 671 | % to generate precision, recall, and F1 metrics when doing binary 672 | % classification. Defaults to `null`. 673 | % * classification_betas=List 674 | % If this is provided, we calculate F-beta scores at the specified beta 675 | % values. The F-beta score is a generalization of F-1 score. This is only 676 | % used for binary classification. 677 | % 678 | % With a beta of 1 (i.e. the F-1 score), precision and recall are given 679 | % the same weight. A larger beta score puts more weight on recall and 680 | % less on precision. A smaller beta score puts more weight on precision 681 | % and less on recall. Defaults to `null`. 682 | % * suffix=S 683 | % A string of up to 40 characters that will be added to your fine-tuned 684 | % model name. For example, a suffix of "custom-model-name" would produce 685 | % a model name like `ada:ft-your-org:custom-model-name-2022-02-15-04-21-04`. 686 | % 687 | gpt_fine_tunes(TrainingFile,Result,Options):- 688 | gpt_fine_tunes(TrainingFile,Result,false,Options),!. 
689 | gpt_fine_tunes(TrainingFile,Result,Raw,Options):- 690 | current_prolog_flag(gptkey,Key), 691 | atom_json_term(D,json([training_file=TrainingFile|Options]),[]), 692 | Data = atom(application/json,D), 693 | http_post('https://api.openai.com/v1/fine-tunes',Data,json(ReturnData), 694 | [authorization(bearer(Key)),application/json]), 695 | ( Raw=false 696 | -> member(id=Result,ReturnData) 697 | ; Result= json(ReturnData) 698 | ). 699 | 700 | %% gpt_fine_tunes(-Result:list) is semidet. 701 | %% gpt_fine_tunes(-Result:list,+Raw:boolean) is semidet. 702 | % Gets a list of fine-tunes jobs. 703 | % 704 | % Example use: 705 | % ~~~ 706 | % :- gpt_fine_tunes(Result), 707 | % Result = ['curie:ft-personal-2022-02-15-04-21-04'-'ft-090asf0asf0',...] 708 | % ~~~ 709 | % 710 | % @arg Result List of ModelName-ID pairs, or json term (depending on `Raw`) 711 | % @arg Raw If `true` the Result will be the json term, if `false` (default) 712 | % the Result will be a list of ModelName-ID pairs 713 | gpt_fine_tunes(Result):- 714 | gpt_fine_tunes(Result,false),!. 715 | gpt_fine_tunes(Result,Raw):- 716 | current_prolog_flag(gptkey,Key), 717 | http_get('https://api.openai.com/v1/fine-tunes',json(ReturnData), 718 | [authorization(bearer(Key)),application/json]), 719 | ( Raw=false 720 | -> ( member(data=Models,ReturnData), 721 | gpt_extract_field_pairs(fine_tuned_model,id,Models,Result) 722 | ) 723 | ; Result= json(ReturnData) 724 | ). 725 | 726 | %% gpt_fine_tunes_detail(+ID:atom,-Result:list) is semidet. 727 | %% gpt_fine_tunes_detail(+ID:atom,-Result:list,+Raw:boolean) is semidet. 728 | % Gets details of a fine-tunes job. 
729 | % 730 | % Example use: 731 | % ~~~ 732 | % :- gpt_fine_tunes_detail('ft-090asf0asf0',Result), 733 | % Result = ['curie:ft-personal-2022-02-15-04-21-04'] 734 | % ~~~ 735 | % 736 | % @arg Result List with the fine-tuned model name, or json term (depending on `Raw`) 737 | % @arg Raw If `true` the Result will be the json term, if `false` (default) 738 | % the Result will be a list with the fine-tuned model name 739 | gpt_fine_tunes_detail(ID,Result):- 740 | gpt_fine_tunes_detail(ID,Result,false),!. 741 | gpt_fine_tunes_detail(ID,Result,Raw):- 742 | current_prolog_flag(gptkey,Key), 743 | atomic_concat('https://api.openai.com/v1/fine-tunes/',ID,URL), 744 | http_get(URL,json(ReturnData), 745 | [authorization(bearer(Key)),application/json]), 746 | ( Raw=false 747 | -> ( member(fine_tuned_model=TunedModel,ReturnData), 748 | Result=[TunedModel] 749 | ) 750 | ; Result= json(ReturnData) 751 | ). 752 | 753 | %% gpt_fine_tunes_cancel(+ID:atom,-Result:list) is semidet. 754 | %% gpt_fine_tunes_cancel(+ID:atom,-Result:list,+Raw:boolean) is semidet. 755 | % Cancel a fine-tunes job. 756 | % 757 | % Example use: 758 | % ~~~ 759 | % :- gpt_fine_tunes([_-ID]), gpt_fine_tunes_cancel(ID,Result), 760 | % Result = ['curie:ft-personal-2022-02-15-04-21-04'] 761 | % ~~~ 762 | % 763 | % @arg ID ID of the fine-tunes job 764 | % @arg Result List with the fine-tuned model name, or json term (depending on `Raw`) 765 | % @arg Raw If `true` the Result will be the json term, if `false` (default) 766 | % the Result will be a list with the fine-tuned model name 767 | % TODO: ***** DOES NOT WORK **** something to do with post without data? 768 | gpt_fine_tunes_cancel(ID,Result):- 769 | gpt_fine_tunes_cancel(ID,Result,false),!. 
770 | gpt_fine_tunes_cancel(ID,Result,Raw):- 771 | current_prolog_flag(gptkey,Key), 772 | atomic_list_concat(['https://api.openai.com/v1/fine-tunes/',ID,'/cancel'],URL), 773 | http_post(URL,[],json(ReturnData), 774 | [authorization(bearer(Key)),application/json]), 775 | ( Raw=false 776 | -> ( member(fine_tuned_model=TunedModel,ReturnData), 777 | Result=[TunedModel] 778 | ) 779 | ; Result= json(ReturnData) 780 | ). 781 | 782 | %% gpt_fine_tunes_events(+ID:atom,-Result:list) is semidet. 783 | %% gpt_fine_tunes_events(+ID:atom,-Result:list,+Raw:boolean) is semidet. 784 | % List events of a fine-tunes job. 785 | % 786 | % Example use: 787 | % ~~~ 788 | % :- gpt_fine_tunes([_-ID]), gpt_fine_tunes_events(ID,Result), 789 | % Result = ['curie:ft-personal-2022-02-15-04-21-04'] 790 | % ~~~ 791 | % 792 | % @arg ID ID of the fine-tunes job 793 | % @arg Result List with the fine-tuned model name, or json term (depending on `Raw`) 794 | % @arg Raw If `true` the Result will be the json term, if `false` (default) 795 | % the Result will be a list with the fine-tuned model name 796 | % TODO: ***** DOES NOT WORK **** something to do with post without data? 797 | gpt_fine_tunes_events(ID,Result):- 798 | gpt_fine_tunes_events(ID,Result,false),!. 799 | gpt_fine_tunes_events(ID,Result,Raw):- 800 | current_prolog_flag(gptkey,Key), 801 | atomic_list_concat(['https://api.openai.com/v1/fine-tunes/',ID,'/events'],URL), 802 | http_get(URL,json(ReturnData), 803 | [authorization(bearer(Key)),application/json]), 804 | ( Raw=false 805 | -> ( member(fine_tuned_model=TunedModel,ReturnData), 806 | Result=[TunedModel] 807 | ) 808 | ; Result= json(ReturnData) 809 | ). 810 | 811 | %% gpt_fine_tunes_delete(+ID:atom,-Result:list) is semidet. 812 | %% gpt_fine_tunes_delete(+ID:atom,-Result:list,+Raw:boolean) is semidet. 
813 | % Delete a fine-tuned model from GPT storage 814 | % 815 | % Example use: 816 | % ~~~ 817 | % :- gpt_fine_tunes([_-ID]),gpt_fine_tunes_delete(ID,Result), 818 | % Result = ['ft-XjGxS3KTG0uNmNOK362iJua3'] 819 | % ~~~ 820 | % 821 | % @arg ID ID of the fine-tuned model to delete 822 | % 823 | % @arg Result List with the deleted model's ID, or json term (depending on `Raw`) 824 | % @arg Raw If `true` the Result will be the json term, if `false` (default) 825 | % the Result will be a list with the deleted model's ID 826 | gpt_fine_tunes_delete(ID,Result):- 827 | gpt_fine_tunes_delete(ID,Result,false),!. 828 | gpt_fine_tunes_delete(ID,Result,Raw):- 829 | current_prolog_flag(gptkey,Key), 830 | atomic_concat('https://api.openai.com/v1/models/',ID,URL), 831 | http_delete(URL,json(ReturnData), 832 | [authorization(bearer(Key)),application/json]), 833 | ( Raw=false 834 | -> (member(id=ID,ReturnData), Result=[ID]) 835 | ; Result= json(ReturnData) 836 | ). 837 | 838 | 839 | %% gpt_moderations(+Input:text,-Result:list,+Options:list) is semidet. 840 | % Given an input text, reports whether the model classifies it as violating OpenAI's content policy. 841 | % 842 | % Example use: 843 | % ~~~ 844 | % :- gpt_moderations('I want to kill them',Result,[]), 845 | % Result = [sexual=false, hate=false, violence=true, 'self-harm'=false, 846 | % 'sexual/minors'=false, 'hate/threatening'=false, 'violence/graphic'=false]. 847 | % ~~~ 848 | % 849 | % @arg Input Text to test for content policy violation 850 | % @arg Result List of Category=Boolean policy classification pairs 851 | gpt_moderations(Input,Result,Options):- 852 | gpt_moderations(Input,Result,false,Options). 
853 | gpt_moderations(Input,Result,Raw,Options):- 854 | current_prolog_flag(gptkey,Key), 855 | atom_json_term(D,json([input=Input|Options]),[]), 856 | Data = atom(application/json,D), 857 | http_post('https://api.openai.com/v1/moderations',Data,ReturnData, 858 | [authorization(bearer(Key)),application/json]), 859 | ( Raw=false 860 | -> ( gpt_extract_data(results,categories,ReturnData,[json(R)]), 861 | maplist(json_pair_boolean,R,Result) 862 | ) 863 | ; Result= ReturnData 864 | ). 865 | 866 | json_pair_boolean(Name='@'(Boolean),Name=Boolean):-!. 867 | json_pair_boolean(Name=Val,Name=Val):-!. 868 | -------------------------------------------------------------------------------- /test/otter.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/RdR1024/prolog2gpt/c19890ceee13e7692bb9e02661d18a3353764947/test/otter.png -------------------------------------------------------------------------------- /test/test001.pro: -------------------------------------------------------------------------------- 1 | % A Prolog test suite. 2 | % Load this file as usual (e.g. ['test001.pro']) and then 3 | % `:- run_tests.` 4 | % 5 | % See the SWI-Prolog documentation for Prolog Unit Tests 6 | 7 | :- begin_tests(prolog2gpt). 8 | :- use_module('../src/prolog/prolog2gpt.pro'). 9 | 10 | % check that the GPT key is obtainable from the environment 11 | % Note: run this test before all others, so that the key is initialized 12 | test(init_key):- 13 | format('~ntest "init_key"~n',[]), 14 | init_gptkey, 15 | current_prolog_flag(gptkey,Key), 16 | format(' Key: ~w~n',[Key]). 17 | 18 | % check the availability of GPT models 19 | test(models,[nondet]):- 20 | format('test "models"~n',[]), 21 | gpt_models(Models), 22 | format(' Models: ~w~n',[Models]). 
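% Hypothetical extra check (a sketch, not part of the original suite): the
% library passes the Options list straight into the JSON request body, so
% the documented option pairs (e.g. temperature, max_tokens) can be supplied
% as =/2 pairs. The specific option values here are illustrative assumptions.
test(completion_options,[nondet]):-
    format('test completion with option pairs~n',[]),
    gpt_completions('gpt-3.5-turbo','Say hello',Text,[temperature=0,max_tokens=16]),
    format('Resulting text: ~w~n',[Text]).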
23 | 24 | % get the details of a named model 25 | test(a_model):- 26 | format('test getting model details~n',[]), 27 | gpt_models_detail('gpt-4',Details), 28 | format('gpt-4 details:~n~w~n',[Details]). 29 | 30 | % basic check of text completion, no response parsing 31 | test(completion01,[nondet]):- 32 | format('test basic completion without response parsing~n',[]), 33 | gpt_completions('gpt-3.5-turbo','My favourite animal is ',Text, true, []), 34 | format('Resulting text: ~w~n',Text). 35 | 36 | % basic check of text completion 37 | test(completion02,[nondet]):- 38 | format('test basic completion~n',[]), 39 | gpt_completions('gpt-3.5-turbo','My favourite animal is ',Text, []), 40 | format('Resulting text: ~w~n',Text). 41 | 42 | % basic check of image generation 43 | test(image_create01,[nondet]):- 44 | format('test image creation~n',[]), 45 | gpt_images_create('A cute baby sea otter',Result,[]), 46 | format('Image url: ~w~n',Result). 47 | 48 | % basic check of image edit 49 | test(image_edit01,[nondet]):- 50 | format('test image edit~n',[]), 51 | gpt_images_edits('A cartoon otter with a hat','./otter.png',Result,[]), 52 | format('Image url: ~w~n',Result). 53 | 54 | % basic check of image variation 55 | test(image_variation01,[nondet]):- 56 | format('test image variation~n',[]), 57 | gpt_images_variations('./otter.png',Result,[]), 58 | format('Image url: ~w~n',Result). 59 | 60 | % basic check of text embeddings 61 | test(embeddings01,[nondet]):- 62 | format('test basic embeddings~n',[]), 63 | gpt_embeddings('text-embedding-ada-002','The food was delicious',Embedding,[]), 64 | format('Resulting embedding: ~w~n',[Embedding]). 
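% Hypothetical extra check (a sketch, not part of the original suite):
% request base64 output from image creation via the documented
% response_format=b64_json option, and report the length of the returned
% base64 atom rather than printing it in full.
test(image_b64,[nondet]):-
    format('test image creation with b64_json~n',[]),
    gpt_images_create('A cute baby sea otter',[B64|_],[response_format=b64_json]),
    atom_length(B64,Len),
    format('Image b64 length: ~w~n',[Len]).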
65 | 66 | % basic check of file upload, list, details, and delete 67 | test(upload01,[nondet]):- 68 | format('test file upload~n',[]), 69 | gpt_files_upload('./tune_answer.jsonl','fine-tune',[ID],[]), 70 | format('File ID: ~w~n',[ID]), 71 | gpt_files_retrieve(ID,R,true), 72 | format('File details: ~w~n',[R]), 73 | gpt_files(List), 74 | format('File list: ~w~n',[List]), 75 | gpt_files_delete(ID,RDel), 76 | format('File deleted: ~w~n',RDel). 77 | 78 | % basic check of moderations 79 | test(moderations,[nondet]):- 80 | format('test moderations~n',[]), 81 | gpt_moderations('I want to kill them',R,[]), 82 | format('Moderation result: ~w~n',[R]). 83 | 84 | :- end_tests(prolog2gpt). -------------------------------------------------------------------------------- /test/tune_answer.jsonl: -------------------------------------------------------------------------------- 1 | {"prompt":"what is the answer?","completion":"In that case, what is the question?"} 2 | {"prompt":"what is the answer?","completion":"42"} --------------------------------------------------------------------------------