├── ShellGPT.psd1 ├── README.md └── ShellGPT.psm1 /ShellGPT.psd1: -------------------------------------------------------------------------------- 1 | @{ 2 | ModuleVersion = '1.3.5' 3 | GUID = 'a038bbff-4e60-4201-944e-b2e6a01ed20c' 4 | RootModule = 'ShellGPT.psm1' 5 | FunctionsToExport = @('Invoke-OpenAICompletion', 'New-OpenAICompletionPrompt', 'Set-OpenAICompletionCharacter', 'New-OpenAICompletionConversation', 'Add-OpenAICompletionMessageToConversation', 'New-OpenAIEdit', 'New-OpenAIImage', 'Get-OpenAIModels', 'Get-OpenAIModelById', 'Get-OpenAIFiles','Get-OpenAIFileById', 'Get-OpenAIFileContent', 'New-OpenAIFile', 'Remove-OpenAIFile', 'Get-OpenAIFineTuneJobs','Get-OpenAIFineTuneJobById','Get-OpenAIFineTuneEvents', 'Remove-OpenAIFineTuneModel', 'Stop-OpenAIFineTuneJob', 'New-OpenAIFineTuneJob', 'New-OpenAIFineTuneTrainingFile', 'Import-OpenAIPromptFromJson', 'Export-OpenAIPromptToJson', 'New-OpenAIEmbedding', 'Convert-PDFtoText', 'Get-ShellGPTHelpMessage', 'Start-ShellGPT', 'Get-OpenAiQuickResponse', 'AzAI', 'OpenAI') 6 | PowerShellVersion = '5.1' 7 | Author = 'Yanik Maurer' 8 | Description = 'Command-line tool that provides an easy-to-use interface for accessing OpenAI''s GPT API using PowerShell. It makes it easy to access the full potential of GPT-3 from the comfort of your command line and within your scripts and automations. GitHub Repo: https://github.com/yamautomate/PowerGPT' 9 | } 10 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # ShellGPT - A PowerShell Module for the APIs of OpenAI. 2 | 3 | The [`ShellGPT`](https://www.powershellgallery.com/packages/ShellGPT/1.3.2) PowerShell Module is a command-line tool that provides an easy-to-use interface for accessing OpenAI's GPT API Endpoints using PowerShell.
With this wrapper, you can generate natural language text, translate text, summarize articles, create images, create fine-tuned models, feed text-files, PDFs and .JSONs from your local device and more. 4 | 5 | The wrapper provides a simple syntax for calling the API and handling the response, making it easy to integrate GPT into your PowerShell scripts. 6 | 7 | This module is made by an individual and not OpenAI. 8 | 9 | ## Endpoint and Model Compatibility 10 | This module supports the following endpoints from OpenAI as seen in the table below. 11 | 12 | | Endpoint | Model | cmdlets | 13 | | ------------- | ------------- |------------- | 14 | | /v1/chat/completions | gpt-4, gpt-4-0314, gpt-4-32k, gpt-4-32k-0314, gpt-3.5-turbo, gpt-3.5-turbo-0301 | `Invoke-OpenAICompletion` | 15 | | /v1/edits | text-davinci-edit-001, code-davinci-edit-001 | `New-OpenAIEdit` | 16 | | /v1/images/generations | DALL-E | `New-OpenAIImage` | 17 | | /v1/embeddings | text-embedding-ada-002 | `New-OpenAIEmbedding` | 18 | | /v1/models | - | `Get-OpenAIModels`, `Get-OpenAIModelById` | 19 | | /v1/files | - | `Get-OpenAIFiles`, `Get-OpenAIFileById`, `Get-OpenAIFileContent`, `New-OpenAIFile`, `Remove-OpenAIFile` | 20 | | /v1/fine-tunes | davinci, curie, babbage, ada | `New-OpenAIFineTuneJob`, `Get-OpenAIFineTuneJobs`, `Get-OpenAIFineTuneJobById`,`Get-OpenAIFineTuneEvents`,`Remove-OpenAIFineTuneModel`,`Stop-OpenAIFineTuneJob` | 21 | 22 | ## Requirements 23 | ShellGPT requires the following: 24 | 25 | - PowerShell 7.3.3 or higher 26 | - An [OpenAI API key](https://help.openai.com/en/articles/4936850-where-do-i-find-my-secret-api-key) 27 | 28 | If you want to include .PDFs in your prompts: 29 | - [iTextSharp.5.5.13.3](https://www.nuget.org/packages/iTextSharp#readme-body-tab) (itextsharp.dll) 30 | - BouncyCastle.1.8.9 (BouncyCastle.Crypto.dll, dependency of itextsharp) 31 | 32 | 33 | 34 | ## Installation 35 | To use the "ShellGPT" module and its functions, you need to install the module from
PowerShell Gallery first, by using `Install-Module`. 36 | 1. Open PowerShell and run `Install-Module`: 37 | ```powershell 38 | Install-Module ShellGPT 39 | ``` 40 | 41 | If you want to include .PDFs in your prompt, you also need to install the corresponding .dll that provides this functionality. This module currently uses "itextsharp.dll". 42 | 43 | 1. Open PowerShell and run `Register-PackageSource` to register NuGet as a Package Source: 44 | ```powershell 45 | Register-PackageSource -provider NuGet -name nugetRepository -location https://www.nuget.org/api/v2 46 | ``` 47 | 2. Then install the .DLL from NuGet: 48 | ```powershell 49 | Install-Package Itextsharp 50 | ``` 51 | ## About using Microsoft Azure OpenAI Service 52 | When you want to use ShellGPT with your own Microsoft Azure OpenAI Service, you need to specify the following parameters: 53 | ```powershell 54 | -UseAzure $NameOfAzureOpenAIService -DeploymentName $NameOfYourDeployedModel 55 | ``` 56 | 57 | ## How to use QuickResponse functions 58 | The module offers the function ```Get-OpenAIQuickResponse``` that allows you to call either OpenAI's Completion API or Microsoft Azure OpenAI, depending on whether you use the parameter ```-useAzure "NameOfAzureResource"```. This function uses environment variables to provide the details needed for authentication. 59 | 60 | The following environment variables need to be set for using the OpenAI API in QuickResponse: 61 | ```powershell 62 | - $env:OAI_APIKey 63 | ``` 64 | The following environment variables need to be set for using the Microsoft Azure OpenAI API in QuickResponse: 65 | ```powershell 66 | - $env:AZ_OAI_APIKey 67 | - $env:AZ_OAI_ResourceName 68 | - $env:AZ_OAI_DeploymentName 69 | ``` 70 | There are also wrapper functions that further abbreviate a call to the API: 71 | ```OpenAI``` is essentially an alias for ```Get-OpenAIQuickResponse``` that directly calls an OpenAI endpoint.
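As a minimal sketch of the OpenAI wrapper described above (the key value is a placeholder you must replace with your own), a one-off query takes only two lines:

```powershell
# Set the OpenAI API key for the current session (placeholder value).
$env:OAI_APIKey = "YOUR_API_KEY"

# Ask a one-off question directly against the OpenAI endpoint.
OpenAI "What is the capital of Switzerland?"
```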
72 | ```AzAI``` is an alias for ```Get-OpenAIQuickResponse``` with ```-useAzure``` so that you can directly call your Microsoft Azure OpenAI API. 73 | 74 | QuickResponse uses the following default values as parameters: 75 | ```powershell 76 | - [string]$model = "gpt-4", 77 | - [string]$stop = "\n", 78 | - [double]$temperature = 0.4, 79 | - [int]$max_tokens = 900, 80 | - [bool]$ShowOutput = $false, 81 | - [bool]$ShowTokenUsage = $false, 82 | - [string]$instructor = "You are a helpful AI. You answer as concisely as possible.", 83 | - [string]$assistantReply = "Hello! I'm a ChatGPT-4 Model. How can I help you?", 84 | ``` 85 | 86 | To call a Microsoft Azure OpenAI API, first define the variables (by default they persist for the current session): 87 | ```powershell 88 | $env:AZ_OAI_APIKey = "Your key from the Azure Resource" 89 | $env:AZ_OAI_ResourceName = "Name of your Azure OpenAI Resource" 90 | $env:AZ_OAI_DeploymentName = "Name of your deployment" 91 | ``` 92 | 93 | Then you can launch your queries: 94 | ```powershell 95 | AzAI "What is the capital of Switzerland?" 96 | The capital of Switzerland is Bern 97 | ``` 98 | 99 | You can also pipe values to these cmdlets: 100 | ```powershell 101 | (Get-Content -Path C:\temp\log.txt) | AzAI -Instructor "You are a GPT Model that helps analyze data.
You respond with a summary of the data you receive" 102 | ``` 103 | 104 | 105 | ## How to start the interactive ChatBot for PowerShell 106 | 107 | You need to define the `$APIKey` first: 108 | ```powershell 109 | $APIKey = "YOUR_API_KEY" 110 | ``` 111 | 112 | Then you can use `Start-ShellGPT` to start the command-line based ChatBot using the default values: 113 | ```powershell 114 | Start-ShellGPT -APIKey $APIKey 115 | ``` 116 | The default values are: 117 | 118 | ``` 119 | $model = "gpt-3.5-turbo" 120 | $stop = "\n" 121 | $temperature = 0.4 122 | $max_tokens = 900 123 | ``` 124 | 125 | 126 | If you want to launch `ShellGPT` with your own parameter values, you can define them and call `Start-ShellGPT` with all params you wish to use: 127 | 128 | ```powershell 129 | $model = "gpt-4" 130 | $stop = "." 131 | $temperature = 0.1 132 | $max_tokens = 200 133 | ``` 134 | 135 | Then you can use `Start-ShellGPT` and pass along the parameters you defined above: 136 | ```powershell 137 | Start-ShellGPT -APIKey $APIKey -temperature $temperature -max_tokens $max_tokens -model $model -stop $stop 138 | ``` 139 | ## How-To use the ChatBot 140 | When launched, you are asked whether you want to continue an existing conversation or start a new one. If you have exported a prompt earlier, you can continue where you left off by importing it. If you start a new conversation, you can pick the character. The Chat character provides the standard ChatBot experience. 141 | 142 | You can now enter your prompt. You can also use the following commands: 143 | - file | 144 | - out | 145 | - export | 146 | - quit | 147 | 148 | ### "file | " command 149 | Using the "file |" command, you can tell the ChatBot to parse a local file and use it within the prompt. That way you can ask it questions about a local pdf, csv, json or txt file.
150 | 151 | In the example below we ask it to summarize the content of the file "test.txt" (it contains some PowerShell code): 152 | 153 | ``` 154 | ShellGPT @ 03/30/2023 20:14:34 | Your query for ChatGPT or commands for ShellGPT: file | test.txt | Summarize this: 155 | ``` 156 | The CompletionAPI responds: 157 | ``` 158 | CompletionAPI @ 03/30/2023 20:16:19 | This code attempts to create a new file with the specified name and write JSON strings to it. If an error occurs, it catches the exception and reports the error message and details. 159 | ``` 160 | 161 | ### "out | " command 162 | Using the "out |" command, you can tell the ChatBot to export the output of your prompt to a local file. For example, you can ask it to remove the try/catch block from our "test.txt" and generate the output, so that you can then store it in "notry.ps1": 163 | 164 | ``` 165 | ShellGPT @ 03/30/2023 20:14:34 | Your query for ChatGPT or commands for ShellGPT: file | C:\users\yanik\test.txt | remove the try/catch block. Do not append any additional text or reasoning | out | C:\users\yanik\notry.ps1 166 | ``` 167 | 168 | ### "export | " command 169 | Using the "export |" command, you can tell the ChatBot to export your current prompt to a local file: 170 | ``` 171 | ShellGPT @ 03/30/2023 20:14:34 | Your query for ChatGPT or commands for ShellGPT: export | 172 | ShellGPT @ 04/06/2023 20:36:52 | Provide the full path to the prompt*.json file that you want to export now and later continue the conversation on: C:\Users\Yanik\MyPrompt.json 173 | ``` 174 | You can then later use that prompt to continue your conversation by importing it during the start of a new conversation. 175 | 176 | ### "newconvo | " command 177 | This command drops the current prompt and starts a new conversation. 178 | ``` 179 | ShellGPT @ 03/30/2023 20:14:34 | Your query for ChatGPT or commands for ShellGPT: newconvo | 180 | ShellGPT @ 04/06/2023 20:40:41 | Do you want to restore an existing conversation?
(enter 'y' or 'yes'): 181 | ``` 182 | 183 | ### "quit | " command 184 | The ChatBot exits. 185 | ``` 186 | ShellGPT @ 03/30/2023 20:14:34 | Your query for ChatGPT or commands for ShellGPT: quit | 187 | ``` 188 | 189 | ## How to construct prompts using the ShellGPT Module 190 | There are several ways to create prompts with the ShellGPT module. 191 | 192 | The easiest and most customizable one is to use the `New-OpenAICompletionPrompt` function. It lets you create a prompt from scratch, or append a query to a prompt. 193 | 194 | Let's create a completely new prompt: 195 | ```powershell 196 | $prompt = New-OpenAICompletionPrompt -query "What is the Capital of France?" -role "user" -instructor "You are a helpful AI." -assistantReply "Bonjour, how can I help you today?" 197 | ``` 198 | 199 | In the above example, we are creating a prompt with a user role, a query of "What is the Capital of France?", a system role with the message "You are a helpful AI.", and an assistant role with the message "Bonjour, how can I help you today?". 200 | 201 | Please note: The functions expect the prompt to be of type [System.Collections.ArrayList]. Why? Because we can add and remove content easily without recreating the array each time. So make sure you declare the variable that holds the prompt as type [System.Collections.ArrayList] when you want to reuse your prompt as an input. 202 | 203 | You can also use the "short-form" by just specifying the query, using the default values (Chat character): 204 | ```powershell 205 | $prompt = New-OpenAICompletionPrompt -query "What is the Capital of France?" 206 | ``` 207 | 208 | ### Using previous messages to craft a prompt 209 | We can also create a prompt with previous messages by passing in an array of messages as the "previousMessages" parameter. Here's an example: 210 | ```powershell 211 | $previousMessages = @( 212 | @{ 213 | role = "system" 214 | content = "You are a helpful AI."
215 | }, 216 | @{ 217 | role = "assistant" 218 | content = "Hello! How can I help you today?" 219 | } 220 | ) 221 | 222 | $prompt = New-OpenAICompletionPrompt -query "What is the Capital of France?" -role "user" -previousMessages $previousMessages 223 | ``` 224 | In this example, we are creating a prompt with a user role and a query of "What is the Capital of France?" along with two previous messages (system and assistant roles) in the conversation. 225 | 226 | 227 | ### Using Default values 228 | If we want to create a simple prompt with just a query and using the default values for the other parameters: 229 | ```powershell 230 | $prompt = New-OpenAICompletionPrompt -query "What is the Capital of France?" 231 | ``` 232 | 233 | ### Add a file to your prompt 234 | You can use the `-filePath` parameter to specify the path to a local file to be included in your prompt. The function then reads the content of the file, strips it of illegal characters and appends it to the prompt together with your query. That way, you can ask questions about your local files. 235 | Currently supported and tested file-types: 236 | - .pdf 237 | - .txt 238 | - .json 239 | - .csv 240 | - .html 241 | 242 | Example: 243 | ```powershell 244 | New-OpenAICompletionPrompt -query "What is this?" -filePath "C:\Users\Yanik\MyFile.pdf" 245 | ``` 246 | 247 | ## How to create a character using prompts 248 | We can use this to create specific training prompts for the model. 249 | Here is an example, where we tell the model to act as a pirate: 250 | 251 | ```powershell 252 | $previousMessages = @( 253 | @{ 254 | role = "system" 255 | content = "You are a helpful AI that was raised as a pirate. You append Awwwwr! to every response you make." 256 | }, 257 | @{ 258 | role = "assistant" 259 | content = "Hello! How can I help you today? Awwwwr!" 260 | }, 261 | @{ 262 | role = "user" 263 | content = "What is the capital of France?"
264 | }, 265 | @{ 266 | role = "assistant" 267 | content = "The capital of France is Paris, Awwwwr!" 268 | } 269 | ) 270 | ``` 271 | We also include an example of an exchange between the user and assistant, so that the model can "learn" what we expect it to do. 272 | 273 | ## How to pass our prompt to the API for completion 274 | Now that we have a prompt ready, we can send it to the Completion API for actual completion. We do this by using the `Invoke-OpenAICompletion` function and passing it the prompt we just generated as an input. 275 | 276 | If we want to use the default values for the model's parameters, we can call the function by specifying only -prompt and -APIKey: 277 | 278 | Then we can use `Invoke-OpenAICompletion` to call the API: 279 | ```powershell 280 | Invoke-OpenAICompletion -prompt $prompt -APIKey $APIKey 281 | ``` 282 | And ultimately we get our output. First, the extracted text in the response from the API and then the new prompt, with the response from the API appended to it. If we assign the output from `Invoke-OpenAICompletion` to a variable, we can use that again as the input for the next API call if we want to have a conversation on a topic with the API. 283 | 284 | ```powershell 285 | ChatGPT: I am an AI language model designed to assist with various tasks and answer questions. Awwwwr! 286 | 287 | Name Value 288 | ---- ----- 289 | role system 290 | content You are a helpful AI that was raised as a pirate. You append Awwwwr! to every response you make. 291 | role assistant 292 | content Hello! How can I help you today? Awwwwr! 293 | role user 294 | content What is the capital of France? 295 | role assistant 296 | content The capital of France is Paris, Awwwwr! 297 | role user 298 | content Who are you? 299 | role assistant 300 | content I am an AI language model designed to assist with various tasks and answer questions. Awwwwr!
301 | ``` 302 | 303 | ## Understanding how the OpenAI API generates completions 304 | Autoregressive models like the ones used by OpenAI are trained to predict the probability distribution of the next token given the preceding tokens. For example, a language model can predict the next word in a sentence given the preceding words. 305 | 306 | The API uses a prompt as a starting point for generating text. A prompt is a piece of text that serves as the input to the OpenAI model. It can be a single sentence or a longer document, and it can include any kind of text that provides context or guidance for the model. The model generates additional text one token at a time based on the probabilities of the next token given the preceding tokens. 307 | 308 | The quality and relevance of the generated text can depend heavily on the quality and specificity of the prompt, as well as the amount and type of training data that the model has been exposed to. 309 | 310 | OpenAI's Model is trained on massive amounts of text data, so it has learned to predict the probability distribution of the next token based on patterns it has observed in that data. 311 | 312 | ## Understanding prompts 313 | Before we construct a prompt, we need to define what that actually is. 314 | 315 | A prompt is a collection of one or more messages between one or more parties. A prompt can be thought of more specifically as a piece of text that serves as the input to the OpenAI model, and it can include not only messages from multiple parties in a conversation, but also any other type of text that provides context or guidance for the model. Prompts can specify the topic, tone, style, or purpose of the text to be generated. 316 | 317 | A prompt looks like this: 318 | ```powershell 319 | Name Value 320 | ---- ----- 321 | content You are a helpful AI. 322 | role system 323 | content How can I help you today? 324 | role assistant 325 | content What is the Capital of Switzerland?
326 | role user 327 | ``` 328 | 329 | Each message in a prompt has a content and a role, where the role specifies the speaker of the message (system, assistant, or user). 330 | As shown above, a message in a prompt can be assigned to three roles: 331 | - `system` 332 | - `assistant` 333 | - `user` 334 | 335 | The roles help the model distinguish between different speakers and understand the context of the conversation. 336 | These roles are essentially just labels for the different types of messages and are not necessarily representative of specific individuals or entities. These roles are not mandatory and can be customized based on your use case. I just happened to have them hardcoded for my use-cases. 337 | 338 | The `content` field holds the individual message from the corresponding role. 339 | 340 | With that, we can construct a chain of messages (a conversation) between an assistant and a user. 341 | 342 | The `system` value defines the general behaviour of the assistant. This is also often referred to as the "Instructor". With it, we can control how the model should behave and act. For example: 343 | - "You are a helpful AI" 344 | - "You are a Pirate, that answers every request with Arrrr!" 345 | - "You are a villain in a James Bond Movie" 346 | 347 | With the prompt, we can generate context for the model. For example, we can use prompts to construct a chat conversation, or use prompts to "train" the model to behave even more as we want it to. 348 | 349 | When using prompts for chat conversations, the prompt contains the whole conversation, so that the model has enough context to have a natural conversation. This allows the model to "remember" what you asked a few questions ago. In contrast, when using prompts for training, the prompt is carefully crafted to show the model how it should behave and respond to certain inputs. This allows the model to learn and generalize from the examples in the prompt.
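The chat use of prompts described above can be sketched in a few lines. This is a minimal sketch, assuming the module is imported and `$APIKey` holds a valid OpenAI key; it reuses the prompt returned by `Invoke-OpenAICompletion` so the model keeps the conversation as context:

```powershell
# Build the initial prompt and send it for completion.
$prompt = New-OpenAICompletionPrompt -query "What is the capital of France?"
$prompt = Invoke-OpenAICompletion -prompt $prompt -APIKey $APIKey

# The returned ArrayList now also contains the assistant's answer, so a
# follow-up question carries the full conversation as context.
$prompt = New-OpenAICompletionPrompt -query "And of Switzerland?" -role "user" -previousMessages $prompt
$prompt = Invoke-OpenAICompletion -prompt $prompt -APIKey $APIKey
```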
350 | 351 | This is used in the `Set-OpenAICompletionCharacter` function, where the function returns a "trained" character prompt we can use. 352 | 353 | A trained character is a prompt that has been specifically designed to 'train' the OpenAI model to respond in a particular way. It typically includes a set of example questions or statements and the corresponding responses that the model should produce. By using a trained character, we can achieve more consistent and accurate responses from the model. 354 | 355 | For example, the prompt for the trained character "SentimentAnalysis" looks like this: 356 | ``` 357 | Name Value 358 | ---- ----- 359 | content You are an API that analyzes text sentiment. You provide your answer in a .JSON format in the following structure: { "sentiment": 0.9 } You only answer with the .JSON object. You do not provide any reasoning why you did it that way. The sentiment is a va… 360 | role system 361 | content {[sentiment, 0.9]} 362 | role assistant 363 | ``` 364 | The Instructor expanded reads: 365 | ``` 366 | You are an API that analyzes text sentiment. 367 | You provide your answer in a .JSON format in the following structure: { "sentiment": 0.9 } 368 | You only answer with the .JSON object. 369 | You do not provide any reasoning why you did it that way. 370 | The sentiment is a value between 0 - 1. 371 | Where 1 is the most positive sentiment and 0 is the most negative. 372 | If you can not extract the sentiment, you specify it with "unknown" in your response. 373 | ``` 374 | 375 | And the first `assistant` message (created by me to show the model how I expect the output to look): 376 | ``` 377 | { 378 | "sentiment": 0.9 379 | } 380 | ``` 381 | 382 | The more examples (messages) are provided in a prompt, the more context the model has and the more predictable its output becomes.
When using a prompt for training, we only need to make sure that we can still fit the user's last question into the prompt before running into the `max_token` limit, whereas in chat mode we should limit training content to what is strictly necessary. 383 | 384 | So, essentially we stitch together an object that represents a conversation between a `system`, the `assistant` and a `user`. Then we add the user's question/message to the conversation prompt and send it to the model for completion. 385 | 386 | 387 | ## Understanding tokens and limits 388 | As stated above, when generating a prompt we need to be wary of its size. Why? 389 | Because the models behind the endpoints we use (gpt-3.5-turbo and others) have a maximum token length. 390 | Tokens can be looked at as pieces of words. When the API processes a prompt, the input is broken down into tokens. Some general rules of thumb for tokens are: 391 | - 1 token ~= 4 chars in English 392 | - 1 token ~= ¾ words 393 | - 100 tokens ~= 75 words 394 | 395 | The completion the API generates for our prompt also consumes tokens. 396 | 397 | As `gpt-3.5-turbo` can process at most 4096 tokens per request, we need to ensure we do not hit that limit. So, we need to make sure that our prompt AND the completion together do not exceed the token limit. 398 | 399 | Tokens are used as the unit for pricing and quotas for the OpenAI API. The specific pricing and quota details can be found on the OpenAI website. For example, for the `gpt-3.5-turbo` model, processing 1k tokens costs $0.002. 400 | 401 | To limit our spending, we can leverage the API parameter `max_tokens`. With it, we can define the maximum number of tokens we want the completion to use. If the prompt and completion require more tokens than defined in `max_tokens`, the API returns an error.
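The rules of thumb above can be turned into a rough pre-flight check before sending a prompt. This is only a sketch using the "1 token ≈ 4 characters" heuristic, not the tokenizer the API actually uses, and the function name `Get-ApproximateTokenCount` is hypothetical (not part of the module):

```powershell
# Rough token estimate based on the ~4 characters per token rule of thumb.
function Get-ApproximateTokenCount {
    param([Parameter(Mandatory=$true)][string]$Text)
    [math]::Ceiling($Text.Length / 4)
}

$query = "What is the capital of Switzerland?"
$estimated = Get-ApproximateTokenCount -Text $query

# Leave room for the completion: the prompt estimate plus max_tokens should
# stay under the model's 4096-token limit for gpt-3.5-turbo.
if (($estimated + 900) -gt 4096) {
    Write-Warning "Prompt may exceed the token limit; shorten it or lower max_tokens."
}
```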
402 | -------------------------------------------------------------------------------- /ShellGPT.psm1: -------------------------------------------------------------------------------- 1 | function Invoke-OpenAICompletion { 2 | <# 3 | .SYNOPSIS 4 | This function sends a prompt to the OpenAI Completion API or Azure OpenAI API to generate a text completion based on the specified settings. 5 | 6 | .DESCRIPTION 7 | The Invoke-OpenAICompletion function sends a prompt to the OpenAI Completion API or Azure OpenAI API to generate a text completion. It takes a prompt as input, which is a list of strings containing the context of the completion. The function requires an API key to access the API and accepts various optional parameters to customize the generated text completion. The response from the API is used to update the prompt, and the updated prompt is returned as output. 8 | 9 | .PARAMETER prompt 10 | The prompt parameter is a mandatory parameter that accepts a list of strings containing the context of the completion. This parameter is used as input for generating the text completion. 11 | 12 | .PARAMETER APIKey 13 | The APIKey parameter is a mandatory parameter that accepts an API key to authenticate the request to the OpenAI Completion API. 14 | 15 | .PARAMETER model 16 | The model parameter is an optional parameter that specifies the model to use for generating the text completion. 17 | 18 | .PARAMETER stop 19 | The stop parameter is an optional parameter that specifies a stop token that the API should use to stop generating the text completion. 20 | 21 | .PARAMETER temperature 22 | The temperature parameter is an optional parameter that controls the randomness of the generated text completion. 23 | 24 | .PARAMETER max_tokens 25 | The max_tokens parameter is an optional parameter that specifies the maximum number of tokens that the API can generate in the text completion. 
26 | 27 | .PARAMETER ShowOutput 28 | The ShowOutput parameter is an optional boolean parameter that specifies whether to display the generated output from the API. 29 | 30 | .PARAMETER ShowTokenUsage 31 | The ShowTokenUsage parameter is an optional boolean parameter that specifies whether to display the token usage details. 32 | 33 | .PARAMETER UseAzure 34 | The UseAzure parameter is an optional string parameter that specifies the name of the Azure OpenAI resource to use; when set together with DeploymentName, the Azure OpenAI API endpoint is used instead of the OpenAI API. 35 | 36 | .EXAMPLE 37 | Invoke-OpenAICompletion -prompt $prompt -APIKey $APIKey 38 | #> 39 | 40 | param( 41 | [Parameter(Mandatory=$true)][ValidateNotNullOrEmpty()] [System.Collections.ArrayList]$prompt, 42 | [Parameter(Mandatory=$true)] [string]$APIKey, 43 | [Parameter(Mandatory=$false)][string]$model = "gpt-3.5-turbo", 44 | [Parameter(Mandatory=$false)][string]$stop = "\n", 45 | [Parameter(Mandatory=$false)][double]$temperature = 0.4, 46 | [Parameter(Mandatory=$false)][int]$max_tokens = 900, 47 | [Parameter(Mandatory=$false)][bool]$ShowOutput = $false, 48 | [Parameter(Mandatory=$false)][bool]$ShowTokenUsage = $false, 49 | [Parameter(Mandatory=$false)][string]$UseAzure, 50 | [Parameter(Mandatory=$false)][string]$DeploymentName 51 | ) 52 | 53 | Write-Verbose ("ShellGPT-Invoke-OpenAICompletion @ "+(Get-Date)+" | Building request...") 54 | 55 | # Select the appropriate URI and headers based on whether Azure API is used 56 | if ($UseAzure -and $DeploymentName) { 57 | $uri = "https://$UseAzure.openai.azure.com/openai/deployments/$DeploymentName/chat/completions?api-version=2024-02-01" 58 | $headers = @{ 59 | "Content-Type" = "application/json" 60 | "api-key" = $APIKey 61 | } 62 | $RequestBody = @{ 63 | model = $model 64 | messages = $prompt 65 | temperature = $temperature 66 | max_tokens = $max_tokens 67 | top_p = 0.95 68 | frequency_penalty = 0 69 | presence_penalty = 0 70 | stop = $stop 71 | } 72 | } 73 | 74 | else { 75 | $uri =
'https://api.openai.com/v1/chat/completions' 76 | $headers = @{ 77 | "Content-Type" = "application/json" 78 | "Authorization" = "Bearer $APIKey" 79 | } 80 | $RequestBody = @{ 81 | messages = $prompt 82 | model = $model 83 | temperature = $temperature 84 | max_tokens = $max_tokens 85 | stop = $stop 86 | } 87 | } 88 | 89 | $RequestBody = $RequestBody | ConvertTo-Json -depth 3 90 | $Requestbody = [System.Text.Encoding]::UTF8.GetBytes($RequestBody) 91 | 92 | $RestMethodParameter=@{ 93 | Method='Post' 94 | Uri = $uri 95 | Body = $RequestBody 96 | Headers = $Headers 97 | } 98 | 99 | Write-Verbose ("ShellGPT-Invoke-OpenAICompletion @ "+(Get-Date)+" | Sending request to URI: "+($uri)) 100 | 101 | try { 102 | #Call the OpenAI completions API 103 | Write-Verbose ("ShellGPT-Invoke-OpenAICompletion @ "+(Get-Date)+" | Sending off API Call using 'Invoke-RestMethod' to this URI: "+($uri)) 104 | $APIresponse = Invoke-RestMethod @RestMethodParameter 105 | 106 | Write-Verbose ("ShellGPT-Invoke-OpenAICompletion @ "+(Get-Date)+" | Received response from API: "+($APIresponse | Out-String)) 107 | 108 | #Extract Textresponse from API response 109 | $convertedResponseForOutput = $APIresponse.choices.message.content 110 | $tokenUsage = $APIresponse.usage 111 | 112 | Write-Verbose ("ShellGPT-Invoke-OpenAICompletion @ "+(Get-Date)+" | Extracted Output: "+($convertedResponseForOutput)) 113 | Write-Verbose ("ShellGPT-Invoke-OpenAICompletion @ "+(Get-Date)+" | TokenUsage for this prompt: "+($TokenUsage.prompt_tokens)+" for completion: "+($TokenUsage.completion_tokens)+" Total tokens used: "+($TokenUsage.total_tokens)) 114 | 115 | #Append text output to prompt for returning it 116 | Write-Verbose ("ShellGPT-Invoke-OpenAICompletion @ "+(Get-Date)+" | Creating new prompt with API response...") 117 | [System.Collections.ArrayList]$prompt = New-OpenAICompletionPrompt -query $convertedResponseForOutput -role "assistant" -previousMessages $prompt -model $model 118 | 119 | Write-Verbose 
("ShellGPT-Invoke-OpenAICompletion @ "+(Get-Date)+" | New Prompt is: "+($prompt | Out-String)) 120 | 121 | If ($ShowTokenUsage -eq $true) 122 | { 123 | Write-Host ("ShellGPT-Invoke-OpenAICompletion @ "+(Get-Date)+" | TokenUsage for this prompt: "+($TokenUsage.prompt_tokens)+" for completion: "+($TokenUsage.completion_tokens)+" Total tokens used: "+($TokenUsage.total_tokens)) -ForegroundColor Yellow 124 | } 125 | 126 | if ($ShowOutput) 127 | { 128 | Write-Host ("ShellGPT @ "+(Get-Date)+" | "+($convertedResponseForOutput)) -ForegroundColor Green 129 | } 130 | 131 | [System.Collections.ArrayList]$promptToReturn = $prompt 132 | } 133 | catch { 134 | $errorDetails = $_.ErrorDetails.Message 135 | 136 | Write-Host ("ShellGPT-Invoke-OpenAICompletion @ "+(Get-Date)+" | Unable to handle error: "+($_.Exception.Message)+". See error details below. Retry the query. If the error persists, consider exporting your current prompt and continuing later.") -ForegroundColor "Red" 137 | Write-Host ("ShellGPT-Invoke-OpenAICompletion @ "+(Get-Date)+" | Error Details: "+($errorDetails)) -ForegroundColor "Red" 138 | 139 | if ($errorDetails.contains("invalid JSON: 'utf-8'")) { 140 | Write-Host ("ShellGPT-Invoke-OpenAICompletion @ "+(Get-Date)+" | Your prompt seems to contain characters that can be misinterpreted in utf-8 encoding.
Remove those characters and try again."+($prompt | Out-String)) -ForegroundColor "Yellow" 141 | } 142 | 143 | $prompt.RemoveAt($prompt.Count-1) 144 | [System.Collections.ArrayList]$promptToReturn = $prompt 145 | 146 | Write-Verbose ("ShellGPT-Invoke-OpenAICompletion @ "+(Get-Date)+" | Returning Input prompt, without the last query due to error and to prevent the prompt from becoming unusable: "+($promptToReturn | Out-String)) 147 | 148 | } 149 | 150 | return [System.Collections.ArrayList]$promptToReturn 151 | } 152 | 153 | function New-OpenAICompletionPrompt { 154 | <# 155 | .SYNOPSIS 156 | Creates a prompt for an OpenAI completion API. 157 | .DESCRIPTION 158 | This PowerShell function generates a prompt to be sent to an OpenAI completion API using the user's query and additional input if applicable. 159 | .PARAMETER query 160 | Specifies the user's query to be used in the prompt. 161 | .PARAMETER role 162 | Specifies the role to be added to the prompt. This parameter is optional, and the default value is "user". 163 | .PARAMETER instructor 164 | Specifies the instruction string to be added to the prompt. This parameter is optional, and the default value is "You are ChatGPT, a helpful AI Assistant." 165 | .PARAMETER assistantReply 166 | Specifies the first, unseen reply by the model. This parameter is optional, and the default value is "Hello! I'm ChatGPT, a GPT Model. How can I assist you today?" 167 | .PARAMETER previousMessages 168 | Specifies an array of previous messages in the conversation. This parameter is optional. 169 | .PARAMETER filePath 170 | Specifies the file path for a file containing additional input. This parameter is optional. 171 | .PARAMETER model 172 | Specifies the name of the OpenAI model to use for completion. This parameter is optional. 173 | .INPUTS 174 | This function does not accept input by pipeline.
175 | .OUTPUTS 176 | The function returns a [System.Collections.ArrayList] prompt as output. 177 | #> 178 | 179 | param ( 180 | [Parameter(Mandatory=$true)] 181 | [string]$query, 182 | [Parameter(Mandatory=$false)] 183 | [ValidateSet("system", "assistant", "user")] 184 | [string]$role = "user", 185 | [Parameter(Mandatory=$false)] 186 | [string]$instructor = "You are ChatGPT, a helpful AI Assistant.", 187 | [Parameter(Mandatory=$false)] 188 | [string]$assistantReply = "Hello! I'm ChatGPT, a GPT Model. How can I assist you today?", 189 | [Parameter(Mandatory=$false)] 190 | [System.Collections.ArrayList]$previousMessages, 191 | [Parameter(Mandatory=$false)] 192 | [string]$filePath, 193 | [Parameter(Mandatory=$false)] 194 | [string]$model 195 | ) 196 | 197 | if ($filePath) 198 | { 199 | Write-Verbose ("ShellGPT-New-OpenAICompletionPrompt @ "+(Get-Date)+" | File path was provided: "+($filepath)) 200 | 201 | 202 | if ($filePath.EndsWith(".pdf")) 203 | { 204 | Write-Verbose ("ShellGPT-New-OpenAICompletionPrompt @ "+(Get-Date)+" | File is PDF. Trying to read content and generate .txt...") 205 | Write-Verbose ("ShellGPT-New-OpenAICompletionPrompt @ "+(Get-Date)+" | File is PDF. 
Reworking filepath to only have forward slashes...") 206 | $filePath = $filePath.Replace("\","/") 207 | 208 | try { 209 | $filePath = Convert-PDFtoText -filePath $filePath -TypeToExport txt 210 | Write-Verbose ("ShellGPT-New-OpenAICompletionPrompt @ "+(Get-Date)+" | PDF Content was read, and .txt created at this path: "+($filepath)) 211 | 212 | } 213 | catch { 214 | Write-Verbose ("ShellGPT-New-OpenAICompletionPrompt @ "+(Get-Date)+" | We ran into trouble reading the PDF content and writing it to a .txt file "+($filepath)) 215 | $errorToReport = $_.Exception.Message 216 | $errorDetails = $_.ErrorDetails.Message 217 | 218 | Write-Host ("ShellGPT-New-OpenAICompletionPrompt @ "+(Get-Date)+" | We ran into trouble reading the PDF content and writing it to a .txt file "+($errorToReport)) 219 | 220 | if ($errorDetails) 221 | { 222 | Write-Host ("ShellGPT-New-OpenAICompletionPrompt @ "+(Get-Date)+" | Details: "+($errorDetails)) 223 | } 224 | } 225 | } 226 | #Remove characters the API cannot interpret: 227 | $query = $query -replace '(?m)^\s+','' 228 | $query = $query -replace '\r','' 229 | $query = $query -replace '●','' 230 | $query = $query -replace '“',"'" 231 | $query = $query -replace '”',"'" 232 | $query = $query -replace 'ä',"ae" 233 | $query = $query -replace 'ö',"oe" 234 | $query = $query -replace 'ü',"ue" 235 | $query = $query -replace 'ß',"ss" 236 | $query = $query -replace '\u00A0', ' ' 237 | 238 | #Round-trip the query through UTF-8 to normalize its encoding 239 | $bytes = [System.Text.Encoding]::UTF8.GetBytes($query) 240 | 241 | 242 | $query = [System.Text.Encoding]::UTF8.GetString($bytes) 243 | 244 | try { 245 | Write-Verbose ("ShellGPT-New-OpenAICompletionPrompt @ "+(Get-Date)+" | Trying to read content of file using UTF-8 encoding...") 246 | $filecontent = Get-Content -Path $filePath -Raw -Encoding utf8 247 | Write-Verbose ("ShellGPT-New-OpenAICompletionPrompt @ "+(Get-Date)+" | File content extracted...") 248 | $query = "$query
$filecontent" 249 | } 250 | catch { 251 | $errorToReport = $_.Exception.Message 252 | $errorDetails = $_.ErrorDetails.Message 253 | $message = "Unable to handle Error: "+$errorToReport+" See Error details below." 254 | 255 | write-host "Error:"$message -ForegroundColor red 256 | 257 | if ($errorDetails) 258 | { 259 | write-host "ErrorDetails:"$errorDetails -ForegroundColor "Red" 260 | } 261 | } 262 | } 263 | 264 | if ($previousMessages) 265 | { 266 | Write-Verbose ("ShellGPT-New-OpenAICompletionPrompt @ "+(Get-Date)+" | Previous Messages are present: "+($previousMessages | Out-String)) 267 | Write-Verbose ("ShellGPT-New-OpenAICompletionPrompt @ "+(Get-Date)+" | Adding new query: "+($query)+" for role: "+($role)+" to previous Messages") 268 | 269 | $previousMessages.Add(@{ 270 | role = $role 271 | content = $query 272 | }) | Out-Null 273 | 274 | Write-Verbose ("ShellGPT-New-OpenAICompletionPrompt @ "+(Get-Date)+" | Added new query to previousmessages") 275 | 276 | [System.Collections.ArrayList]$promptToReturn = [System.Collections.ArrayList]$previousMessages 277 | } 278 | 279 | else 280 | { 281 | [System.Collections.ArrayList]$promptToReturn = @( 282 | @{ 283 | role = "system" 284 | content = $instructor 285 | }, 286 | @{ 287 | role = "assistant" 288 | content = $assistantReply 289 | }, 290 | @{ 291 | role = $role 292 | content = $query 293 | } 294 | ) 295 | } 296 | 297 | return $promptToReturn 298 | } 299 | 300 | function Set-OpenAICompletionCharacter { 301 | <# 302 | .SYNOPSIS 303 | This function, Set-OpenAICompletionCharacter, sets the response mode and prompts for a ChatGPT-3.5 model. 304 | .DESCRIPTION 305 | This function allows the user to set the mode for the response and prompts for a ChatGPT-3.5 model. 306 | The response mode can be one of the following: Chat, SentimentAndTickerAnalysis, SentimentAnalysis, IntentAnalysis, or IntentAndSubjectAnalysis. 
307 | Depending on the selected mode, the function will generate pre-defined (hardcoded) prompts for the user to use. 308 | .PARAMETER mode 309 | This parameter specifies the mode for the response. It is a required parameter and must be one of the following values: Chat, SentimentAndTickerAnalysis, SentimentAnalysis, IntentAnalysis, or IntentAndSubjectAnalysis. 310 | .PARAMETER instructor 311 | This parameter specifies the text prompt for the ChatGPT-3.5 model. It is an optional parameter and defaults to "You are a helpful AI. You answer as concisely as possible." if not specified. 312 | .PARAMETER assistantReply 313 | This parameter specifies the response of the ChatGPT-3.5 model. It is an optional parameter and defaults to "Hello! I'm a ChatGPT-3.5 Model. How can I help you?" if not specified. 314 | .INPUTS 315 | This function does not take any input. 316 | .OUTPUTS 317 | This function returns a list of text prompts and responses in a .JSON format. The structure of the .JSON object varies depending on the selected mode. 318 | .EXAMPLE 319 | PS C:> Set-OpenAICompletionCharacter -mode Chat -instructor "How can I assist you today?" -assistantReply "I can help you with a variety of tasks such as answering questions or providing information on a specific topic." 320 | This example sets the response mode to "Chat" and generates a prompt for the ChatGPT-3.5 model with the instructor text "How can I assist you today?" and the assistant reply "I can help you with a variety of tasks such as answering questions or providing information on a specific topic." 321 | .LINK 322 | #> 323 | param ( 324 | [Parameter(Mandatory=$true)] 325 | [ValidateSet("Chat", "SentimentAndTickerAnalysis", "SentimentAnalysis", "IntentAnalysis","IntentAndSubjectAnalysis")] 326 | $mode, 327 | [Parameter(Mandatory=$false)] 328 | $instructor = "You are a helpful AI. You answer as concisely as possible.", 329 | [Parameter(Mandatory=$false)] 330 | $assistantReply = "Hello! I'm a ChatGPT-3.5 Model. 
How can I help you?" 331 | ) 332 | 333 | switch ($mode) 334 | { 335 | "Chat" { 336 | $instructor = $instructor 337 | $assistantReply = $assistantReply 338 | } 339 | 340 | "SentimentAndTickerAnalysis" { 341 | $assistantReply = @{ 342 | ticker = "BTC" 343 | asset_type = "Cryptocurrency" 344 | sentiment = 0.9 345 | } 346 | $instructor = "You are part of a trading bot API that analyzes tweets. When presented with a text message, you extract either the Cryptocurrency or Stockmarket abbrev for that ticker and you also analyze the text for sentiment. You provide your answer in a .JSON format in the following structure: { 'ticker': 'USDT', 'asset_type': 'Cryptocurrency', 'sentiment': 0.8 } You only answer with the .JSON object. You do not provide any reasoning why you did it that way. The sentiment is a value between 0 - 1. Where 1 is the most positive sentiment and 0 is the most negative. If you can not extract the Ticker, you specify it with 'unknown' in your response. Same for sentiment." 347 | } 348 | 349 | "SentimentAnalysis" { 350 | $assistantReply = @{ 351 | sentiment = 0.9 352 | } 353 | $instructor = 'You are an API that analyzes text sentiment. You provide your answer in a .JSON format in the following structure: { "sentiment": 0.9 } You only answer with the .JSON object. You do not provide any reasoning why you did it that way. The sentiment is a value between 0 - 1. Where 1 is the most positive sentiment and 0 is the most negative. If you can not extract the sentiment, you specify it with "unknown" in your response.' 354 | } 355 | 356 | "IntentAnalysis" { 357 | $assistantReply = @{ 358 | intent = "purchase" 359 | } 360 | $instructor = 'You are an API that analyzes the core intent of the text. You provide your answer in a .JSON format in the following structure: { "intent": descriptive verb for intent } You only answer with the .JSON object. You do not provide any reasoning why you did it that way. 
The intent represents the one intent you extracted during your analysis. If you cannot extract the intent with a probability of 70% or more, you specify it with "unknown" in your response.' 361 | } 362 | 363 | "IntentAndSubjectAnalysis" { 364 | $assistantReply = @{ 365 | intent = "purchase" 366 | topic = "bananas" 367 | } 368 | $instructor = 'You are an API that analyzes the core intent of the text and the subject the intent wants to act upon. You provide your answer in a .JSON format in the following structure: { "intent": "descriptive verb for intent", "subject": "bananas" } You only answer with the .JSON object. You do not provide any reasoning why you did it that way. The intent represents the one intent you extracted during your analysis. The subject is the thing the intent wants to act upon (what does someone want to buy? what information do they want?). If you cannot extract the intent and/or subject with a probability of 70% or more, you specify it with "unknown" in your response.' 369 | } 370 | 371 | default { 372 | throw "Invalid mode parameter. Allowed values are 'Chat', 'SentimentAndTickerAnalysis', 'SentimentAnalysis', 'IntentAnalysis', and 'IntentAndSubjectAnalysis'." 373 | } 374 | } 375 | 376 | [System.Collections.ArrayList]$promptToReturn = @( 377 | @{ 378 | role = "system" 379 | content = $instructor 380 | }, 381 | @{ 382 | role = "assistant" 383 | content = $assistantReply 384 | } 385 | ) 386 | 387 | return [System.Collections.ArrayList]$promptToReturn 388 | } 389 | 390 | function New-OpenAICompletionConversation { 391 | <# 392 | .SYNOPSIS 393 | This function creates a new conversation with the OpenAI Completion API to generate AI-assisted responses to user queries. 394 | .DESCRIPTION 395 | The New-OpenAICompletionConversation function allows users to initiate a new conversation with the OpenAI Completion API. The function takes in user queries, an API key, and various optional parameters that modify the behavior of the API. 
396 | .PARAMETER Character 397 | This parameter is optional and allows users to specify a pre-defined character model for generating responses. Valid options are "Chat", "SentimentAndTickerAnalysis", "SentimentAnalysis", "IntentAnalysis", and "IntentAndSubjectAnalysis". 398 | .PARAMETER query 399 | This parameter is mandatory and specifies the user query for which the OpenAI Completion API will generate responses. 400 | .PARAMETER APIKey 401 | This parameter is mandatory and specifies the API key that the function will use to authenticate with the OpenAI Completion API. 402 | .PARAMETER instructor 403 | This parameter is optional and specifies the role of the AI. By default, it is set to "You are a helpful AI. You answer as concisely as possible." 404 | .PARAMETER assistantReply 405 | This parameter is optional and specifies the initial message that the AI will send to the user. By default, it is set to "Hello! I'm a ChatGPT-3.5 Model. How can I help you?" 406 | .PARAMETER model 407 | This parameter is optional and allows users to specify a different GPT-3.5 model to use for generating responses. By default, it is set to "gpt-3.5-turbo". 408 | .PARAMETER stop 409 | This parameter is optional and specifies a string that the OpenAI Completion API will use to indicate the end of a response. By default, it is set to "\n". 410 | .PARAMETER temperature 411 | This parameter is optional and specifies the "creativity" of the AI-generated responses. Valid values are between 0 and 1, with higher values indicating more creative responses. By default, it is set to 0.4. 412 | .PARAMETER max_tokens 413 | This parameter is optional and specifies the maximum number of tokens (words) that the OpenAI Completion API will use to generate a response. By default, it is set to 900. 414 | .PARAMETER filePath 415 | This parameter is optional and specifies the path to a file that contains previous conversation messages. 
If provided, the function will use these messages to generate more context-aware responses. 416 | .PARAMETER ShowOutput 417 | This parameter is optional and specifies whether or not to display the output of the OpenAI Completion API. 418 | .PARAMETER ShowTokenUsage 419 | This parameter is optional and specifies whether or not to display the number of tokens used by the OpenAI Completion API to generate a response. 420 | .INPUTS 421 | None. 422 | .OUTPUTS 423 | The function returns an updated prompt with the user query and the response from the CompletionAPI. 424 | .EXAMPLE 425 | PS C:> New-OpenAICompletionConversation -query "What is the weather like today?" -APIKey "YOUR_API_KEY" 426 | This example initiates a new conversation with the OpenAI Completion API and generates AI-generated responses to the query "What is the weather like today?". 427 | 428 | #> 429 | param ( 430 | [Parameter(Mandatory=$false)][ValidateSet("Chat", "SentimentAndTickerAnalysis","SentimentAnalysis","IntentAnalysis","IntentAndSubjectAnalysis")][System.Object]$Character, 431 | [Parameter(Mandatory=$true)] [string]$query, 432 | [Parameter(Mandatory=$true)] [string]$APIKey, 433 | [Parameter(Mandatory=$false)] $instructor = "You are a helpful AI. You answer as concisely as possible.", 434 | [Parameter(Mandatory=$false)] $assistantReply = "Hello! I'm a ChatGPT-3.5 Model. 
How can I help you?", 435 | [Parameter(Mandatory=$false)] [string]$model = "gpt-3.5-turbo", 436 | [Parameter(Mandatory=$false)] [string]$stop = "\n", 437 | [Parameter(Mandatory=$false)] [double]$temperature = 0.4, 438 | [Parameter(Mandatory=$false)] [int]$max_tokens = 900, 439 | [Parameter(Mandatory=$false)] [string]$filePath, 440 | [Parameter(Mandatory=$false)] [bool]$ShowOutput = $false, 441 | [Parameter(Mandatory=$false)] [bool]$ShowTokenUsage = $false, 442 | [Parameter(Mandatory=$false)] [string]$UseAzure, 443 | [Parameter(Mandatory=$false)] [string]$DeploymentName 444 | ) 445 | 446 | Write-Verbose ("ShellGPT-New-OpenAICompletionConversation @ "+(Get-Date)+" | Initializing new conversation...") 447 | 448 | if ($Character -eq $null) 449 | { 450 | Write-Verbose ("ShellGPT-New-OpenAICompletionConversation @ "+(Get-Date)+" | Character is not provided.") 451 | 452 | if ($filePath) 453 | { 454 | Write-Verbose ("ShellGPT-New-OpenAICompletionConversation @ "+(Get-Date)+" | FilePath is provided: "+($filePath)) 455 | 456 | [System.Collections.ArrayList]$promptForAPI = New-OpenAICompletionPrompt -query $query -instructor $instructor -role "user" -assistantReply $assistantReply -filePath $filePath -model $model 457 | Write-Verbose ("ShellGPT-New-OpenAICompletionConversation @ "+(Get-Date)+" | Prompt is: "+($promptForAPI | Out-String)) 458 | } 459 | else { 460 | Write-Verbose ("ShellGPT-New-OpenAICompletionConversation @ "+(Get-Date)+" | FilePath is not provided") 461 | 462 | [System.Collections.ArrayList]$promptForAPI = New-OpenAICompletionPrompt -query $query -instructor $instructor -role "user" -assistantReply $assistantReply -model $model 463 | Write-Verbose ("ShellGPT-New-OpenAICompletionConversation @ "+(Get-Date)+" | Prompt is: "+($promptForAPI | Out-String)) 464 | } 465 | } 466 | else 467 | { 468 | Write-Verbose ("ShellGPT-New-OpenAICompletionConversation @ "+(Get-Date)+" | Character is provided: "+$Character) 469 | 470 | 
[System.Collections.ArrayList]$characterPrompt= Set-OpenAICompletionCharacter -mode $Character -instructor $instructor -assistantReply $assistantReply 471 | Write-Verbose ("ShellGPT-New-OpenAICompletionConversation @ "+(Get-Date)+" | Character prompt is: ") 472 | If ($filePath) 473 | { 474 | Write-Verbose ("ShellGPT-New-OpenAICompletionConversation @ "+(Get-Date)+" | FilePath is provided: "+($filePath)) 475 | 476 | [System.Collections.ArrayList]$promptForAPI = New-OpenAICompletionPrompt -query $query -role "user" -previousMessages $characterPrompt -filePath $filePath -model $model 477 | Write-Verbose ("ShellGPT-New-OpenAICompletionConversation @ "+(Get-Date)+" | Prompt is: "+($promptForAPI | Out-String)) 478 | } 479 | 480 | else { 481 | Write-Verbose ("ShellGPT-New-OpenAICompletionConversation @ "+(Get-Date)+" | FilePath is not provided") 482 | 483 | [System.Collections.ArrayList]$promptForAPI = New-OpenAICompletionPrompt -query $query -role "user" -previousMessages $characterPrompt -model $model 484 | Write-Verbose ("ShellGPT-New-OpenAICompletionConversation @ "+(Get-Date)+" | Prompt is: "+($promptForAPI | Out-String)) 485 | } 486 | 487 | } 488 | 489 | Write-Verbose ("ShellGPT-New-OpenAICompletionConversation @ "+(Get-Date)+" | Calling OpenAI Completion API with prompt...") 490 | if ($UseAzure) 491 | { 492 | [System.Collections.ArrayList]$promptToReturn = Invoke-OpenAICompletion -Prompt $promptForAPI -APIKey $APIKey -temperature $temperature -max_tokens $max_tokens -model $model -stop $stop -ShowTokenUsage $ShowTokenUsage -ShowOutput $ShowOutput -UseAzure $UseAzure -DeploymentName $DeploymentName 493 | } 494 | else { 495 | [System.Collections.ArrayList]$promptToReturn = Invoke-OpenAICompletion -Prompt $promptForAPI -APIKey $APIKey -temperature $temperature -max_tokens $max_tokens -model $model -stop $stop -ShowTokenUsage $ShowTokenUsage -ShowOutput $ShowOutput 496 | } 497 | 498 | 499 | return [System.Collections.ArrayList]$promptToReturn 500 | } 501 | 502 | 
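# Illustrative usage sketch (commented out so it is not executed on module import; assumes $APIKey holds a valid OpenAI API key):
#   $conversation = New-OpenAICompletionConversation -Character Chat -query "What is the capital of Switzerland?" -APIKey $APIKey -ShowOutput $true
#   The returned ArrayList contains the system, assistant and user messages plus the model's reply, and can be
#   passed as -previousMessages to Add-OpenAICompletionMessageToConversation to continue the conversation.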
function Add-OpenAICompletionMessageToConversation { 503 | <# 504 | .SYNOPSIS 505 | This function adds a user query to a conversation with previous messages and obtains a response using the OpenAI Completion API. 506 | .DESCRIPTION 507 | The Add-OpenAICompletionMessageToConversation function takes in a user query, an array of previous messages in the conversation, an API key for the OpenAI Completion API, and various optional parameters to generate a response from the API. 508 | The function then returns the response from the API along with the updated previous messages. 509 | .PARAMETER query 510 | The user's query to be added to the conversation. 511 | .PARAMETER previousMessages 512 | An array of previous messages in the conversation. 513 | .PARAMETER APIKey 514 | API key for the OpenAI Completion API. 515 | .PARAMETER model 516 | The model to use from the endpoint. Default value is "gpt-3.5-turbo". 517 | .PARAMETER stop 518 | The stop sequence for the model. Default value is "\n". 519 | .PARAMETER temperature 520 | The temperature value to use for sampling. Default value is 0.4. 521 | .PARAMETER max_tokens 522 | The maximum number of tokens to generate in the response. Default value is 900. 523 | .PARAMETER filePath 524 | The path to a file whose content is appended to the query as additional context. 525 | .PARAMETER ShowOutput 526 | A switch parameter to determine if the output of the OpenAI Completion API should be displayed. Default value is $false. 527 | .PARAMETER ShowTokenUsage 528 | A switch parameter to determine if the number of tokens used in the response should be displayed. Default value is $false. 529 | .INPUTS 530 | None. The function does not accept pipeline input. 531 | .OUTPUTS 532 | The function returns a System.Collections.ArrayList object containing the response from the OpenAI Completion API and the updated previous messages in the conversation. 
533 | .EXAMPLE 534 | $previousMessages = New-OpenAICompletionConversation -query "Hello, how are you?" -APIKey "YOUR_API_KEY" 535 | Add-OpenAICompletionMessageToConversation -query "Alright. What's the capital of Switzerland?" -previousMessages $previousMessages -APIKey "YOUR_API_KEY" 536 | This example adds the user query "Alright. What's the capital of Switzerland?" to a conversation started with "Hello, how are you?". The function then obtains a response using the OpenAI Completion API and displays the output. 537 | #> 538 | 539 | param ( 540 | [Parameter(Mandatory=$true)][string]$query, 541 | [Parameter(Mandatory=$true)][System.Collections.ArrayList]$previousMessages, 542 | [Parameter(Mandatory=$true)][string]$APIKey, 543 | [Parameter(Mandatory=$true)][string]$UseAzure, 544 | [Parameter(Mandatory=$true)][string]$DeploymentName, 545 | [Parameter(Mandatory=$false)][string]$model = "gpt-3.5-turbo", 546 | [Parameter(Mandatory=$false)][string]$stop = "\n", 547 | [Parameter(Mandatory=$false)][double]$temperature = 0.4, 548 | [Parameter(Mandatory=$false)][int]$max_tokens = 900, 549 | [Parameter(Mandatory=$false)][string]$filePath, 550 | [Parameter(Mandatory=$false)][bool]$ShowOutput = $false, 551 | [Parameter(Mandatory=$false)][bool]$ShowTokenUsage = $false 552 | ) 553 | 554 | if ($filePath) 555 | { 556 | Write-Verbose ("ShellGPT-Add-OpenAICompletionMessageToConversation @ "+(Get-Date)+" | FilePath is provided: "+($filePath | Out-String)) 557 | [System.Collections.ArrayList]$prompt = New-OpenAICompletionPrompt -query $query -role "user" -previousMessages $previousMessages -filePath $filePath -model $model 558 | } 559 | else { 560 | Write-Verbose ("ShellGPT-Add-OpenAICompletionMessageToConversation @ "+(Get-Date)+" | FilePath is not provided") 561 | [System.Collections.ArrayList]$prompt = New-OpenAICompletionPrompt -query $query -role "user" -previousMessages $previousMessages -model $model 562 | } 563 | 564 | # Call the 
Invoke-OpenAICompletion function to get the response from the API. 565 | try { 566 | if ($useAzure){ 567 | [System.Collections.ArrayList]$returnPromptFromAPI = Invoke-OpenAICompletion -prompt $prompt -APIKey $APIKey -temperature $temperature -max_tokens $max_tokens -model $model -stop $stop -ShowTokenUsage $ShowTokenUsage -ShowOutput $ShowOutput -UseAzure $useAzure -DeploymentName $DeploymentName 568 | 569 | } 570 | else { 571 | [System.Collections.ArrayList]$returnPromptFromAPI = Invoke-OpenAICompletion -prompt $prompt -APIKey $APIKey -temperature $temperature -max_tokens $max_tokens -model $model -stop $stop -ShowTokenUsage $ShowTokenUsage -ShowOutput $ShowOutput 572 | } 573 | } 574 | catch { 575 | [System.Collections.ArrayList]$returnPromptFromAPI = $prompt 576 | } 577 | 578 | # Return the response from the API and the updated previous messages. 579 | return [System.Collections.ArrayList]$returnPromptFromAPI 580 | 581 | } 582 | function New-OpenAIEdit { 583 | <# 584 | .SYNOPSIS 585 | This PowerShell function connects to the OpenAI Edit API to execute a given instruction on a specified model using a given prompt, with the option to specify temperature for sampling. 586 | .DESCRIPTION 587 | The New-OpenAIEdit function allows users to interact with the OpenAI Edit API, which takes a prompt as input and applies an instruction to it, generating a new output. This function handles the API request by constructing the necessary HTTP request headers and body and parsing the API response. The user must provide a prompt, an API key for authentication, an instruction for the model, and can optionally specify a model and temperature value for sampling. 588 | .PARAMETER query 589 | The prompt to send to the API to act upon. 590 | .PARAMETER APIKey 591 | The API key to authenticate the request. 592 | .PARAMETER instruction 593 | The instruction for the model like "Fix the grammar". 594 | .PARAMETER model 595 | The model to use from the endpoint. 
This parameter is optional and has a default value of "text-davinci-edit-001" if not specified. Valid options for this parameter are "text-davinci-edit-001" and "code-davinci-edit-001". 596 | .PARAMETER temperature 597 | The temperature value to use for sampling. This parameter is optional and has a default value of 0.4 if not specified. 598 | .INPUTS 599 | This function does not take any input from the pipeline. 600 | .OUTPUTS 601 | The function outputs the response text generated by the API in response to the prompt provided by the user. 602 | .EXAMPLE 603 | Example usage: 604 | $APIKey = "myAPIKey1234" 605 | $prompt = "The quick brown fox jumps over the lazy dog." 606 | $instruction = "Fix the grammar." 607 | New-OpenAIEdit -query $prompt -APIKey $APIKey -instruction $instruction 608 | This command sends a prompt to the OpenAI Edit API with the instruction to fix the grammar. The function returns the edited text response generated by the API. 609 | #> 610 | 611 | param ( 612 | [Parameter(Mandatory=$true)] 613 | [string]$query, # The prompt to send to the API to act upon. 614 | [Parameter(Mandatory=$true)] 615 | [string]$APIKey, # The API key to authenticate the request. 616 | [Parameter(Mandatory=$true)] 617 | [string]$instruction, # The instruction for the model like "Fix the grammar" 618 | [Parameter(Mandatory=$false)] 619 | [ValidateSet("text-davinci-edit-001", "code-davinci-edit-001")] 620 | [string]$model = "text-davinci-edit-001", # The model to use from the endpoint. 621 | [Parameter(Mandatory=$false)] 622 | [double]$temperature = 0.4 # The temperature value to use for sampling. 
623 | ) 624 | 625 | #Building Request for API 626 | $headers = @{ 627 | "Content-Type" = "application/json" 628 | "Authorization" = "Bearer $APIKey" 629 | } 630 | 631 | $RequestBody = @{ 632 | model = $model 633 | input = $query 634 | instruction = $instruction 635 | temperature = $temperature 636 | } 637 | 638 | #Convert the whole Body to be JSON, so that API can interpret it 639 | $RequestBody = $RequestBody | ConvertTo-Json 640 | 641 | $RestMethodParameter=@{ 642 | Method='Post' 643 | Uri ='https://api.openai.com/v1/edits' 644 | body=$RequestBody 645 | Headers=$Headers 646 | } 647 | 648 | try { 649 | #Call the OpenAI Edit API 650 | $APIresponse = Invoke-RestMethod @RestMethodParameter 651 | 652 | #Extract Textresponse from API response 653 | $convertedResponseForOutput = $APIresponse.choices.text 654 | 655 | $outputColor = "Green" 656 | } 657 | catch { 658 | # If there was an error, define an error message to be written. 659 | $errorToReport = $_.Exception.Message 660 | $errorDetails = $_.ErrorDetails.Message 661 | $convertedResponseForOutput = "Unable to handle Error: "+$errorToReport+" See Error details below. Retry the query. If the error persists, consider exporting your current prompt and continuing later." 662 | 663 | $outputColor = "Red" 664 | } 665 | 666 | #Output the text response. 667 | write-host "EditAPI:"$convertedResponseForOutput -ForegroundColor $outputColor 668 | 669 | if ($errorDetails) 670 | { 671 | write-host "ErrorDetails:"$errorDetails -ForegroundColor "Red" 672 | $convertedResponseForOutput = "Error. See above for details." 673 | } 674 | 675 | return $convertedResponseForOutput 676 | } 677 | function New-OpenAIImage { 678 | param ( 679 | [Parameter(Mandatory=$true)] 680 | [string]$query, # The prompt to send to the API to act upon. 681 | [Parameter(Mandatory=$true)] 682 | [string]$APIKey, # The API key to authenticate the request. 683 | [Parameter(Mandatory=$false)] 684 | [int]$n = 1, # The number of images to generate. 
685 | [Parameter(Mandatory=$false)] 686 | [ValidateSet("256x256", "512x512", "1024x1024")] 687 | [string]$size = "256x256" # The size of the generated image(s). 688 | ) 689 | 690 | #Building Request for API 691 | $headers = @{ 692 | "Content-Type" = "application/json" 693 | "Authorization" = "Bearer $APIKey" 694 | } 695 | 696 | $RequestBody = @{ 697 | prompt = $query 698 | n = $n 699 | size = $size 700 | } 701 | 702 | #Convert the whole Body to be JSON, so that API can interpret it 703 | $RequestBody = $RequestBody | ConvertTo-Json 704 | 705 | $RestMethodParameter=@{ 706 | Method='Post' 707 | Uri ='https://api.openai.com/v1/images/generations' 708 | body=$RequestBody 709 | Headers=$Headers 710 | } 711 | 712 | try { 713 | #Call the OpenAI Images API 714 | $APIresponse = Invoke-RestMethod @RestMethodParameter 715 | 716 | #Extract image URL(s) from the API response 717 | $convertedResponseForOutput = $APIresponse.data.url 718 | 719 | $outputColor = "Green" 720 | } 721 | catch { 722 | # If there was an error, define an error message to be written. 723 | $errorToReport = $_.Exception.Message 724 | $errorDetails = $_.ErrorDetails.Message 725 | $convertedResponseForOutput = "Unable to handle Error: "+$errorToReport+" See Error details below. Retry the query. If the error persists, consider exporting your current prompt and continuing later." 726 | 727 | $outputColor = "Red" 728 | } 729 | 730 | if ($errorDetails) 731 | { 732 | write-host "ErrorDetails:"$errorDetails -ForegroundColor "Red" 733 | $convertedResponseForOutput = "Error. See above for details." 734 | } 735 | 736 | return $convertedResponseForOutput 737 | } 738 | 739 | function Get-OpenAIModels { 740 | param ( 741 | [Parameter(Mandatory=$true)] 742 | [string]$APIKey # The API key to authenticate the request. 
743 | ) 744 | 745 | $uri = 'https://api.openai.com/v1/models' 746 | 747 | #Building Request for API 748 | $headers = @{ 749 | "Content-Type" = "application/json" 750 | "Authorization" = "Bearer $APIKey" 751 | } 752 | 753 | $RestMethodParameter=@{ 754 | Method='Get' 755 | Uri = $uri 756 | Headers=$Headers 757 | } 758 | 759 | try { 760 | #Call the OpenAI Models API 761 | $APIresponse = Invoke-RestMethod @RestMethodParameter 762 | 763 | $convertedResponseForOutput = $APIresponse.data | Select-Object id, owned_by 764 | 765 | } 766 | catch { 767 | # If there was an error, define an error message to be written. 768 | $errorToReport = $_.Exception.Message 769 | $errorDetails = $_.ErrorDetails.Message 770 | $convertedResponseForOutput = "Unable to handle Error: "+$errorToReport+" See Error details below. Retry the query. If the error persists, consider exporting your current prompt and continuing later." 771 | } 772 | 773 | if ($errorDetails) 774 | { 775 | write-host "ErrorDetails:"$errorDetails -ForegroundColor "Red" 776 | $convertedResponseForOutput = "Error. See above for details." 777 | } 778 | 779 | return $convertedResponseForOutput 780 | } 781 | 782 | function Get-OpenAIModelById { 783 | param ( 784 | [Parameter(Mandatory=$true, ParameterSetName='Retrieve')] 785 | [string]$ModelId, 786 | [Parameter(Mandatory=$true)] 787 | [string]$APIKey # The API key to authenticate the request. 788 | ) 789 | 790 | 791 | $uri = 'https://api.openai.com/v1/models/'+$ModelId 792 | 793 | #Building Request for API 794 | $headers = @{ 795 | "Content-Type" = "application/json" 796 | "Authorization" = "Bearer $APIKey" 797 | } 798 | 799 | $RestMethodParameter=@{ 800 | Method='Get' 801 | Uri = $uri 802 | Headers=$Headers 803 | } 804 | 805 | try { 806 | #Call the OpenAI Models API 807 | $APIresponse = Invoke-RestMethod @RestMethodParameter 808 | 809 | $convertedResponseForOutput = $APIresponse 810 | 811 | } 812 | catch { 813 | # If there was an error, define an error message to be written. 
814 | $errorToReport = $_.Exception.Message 815 | $errorDetails = $_.ErrorDetails.Message 816 | $convertedResponseForOutput = "Unable to handle Error: "+$errorToReport+" See Error details below. Retry the query. If the error persists, consider exporting your current prompt and continuing later." 817 | } 818 | 819 | if ($errorDetails) 820 | { 821 | write-host "ErrorDetails:"$errorDetails -ForegroundColor "Red" 822 | $convertedResponseForOutput = "Error. See above for details." 823 | } 824 | 825 | return $convertedResponseForOutput 826 | } 827 | 828 | function Get-OpenAIFiles { 829 | param ( 830 | [Parameter(Mandatory=$true)] 831 | [string]$APIKey # The API key to authenticate the request. 832 | ) 833 | 834 | #Building Request for API 835 | $ContentType = "application/json" 836 | $uri = 'https://api.openai.com/v1/files' 837 | $method = 'Get' 838 | 839 | $headers = @{ 840 | "Content-Type" = $ContentType 841 | "Authorization" = "Bearer $APIKey" 842 | } 843 | $RestMethodParameter=@{ 844 | Method=$method 845 | Uri =$uri 846 | body=$body 847 | Headers=$Headers 848 | } 849 | 850 | try { 851 | #Call the OpenAI Files API 852 | $APIresponse = Invoke-RestMethod @RestMethodParameter 853 | $convertedResponseForOutput = $APIresponse.data | select-object id, purpose, filename, status, bytes 854 | #Extract file list from API response 855 | 856 | } 857 | catch { 858 | # If there was an error, define an error message to be written. 859 | $errorToReport = $_.Exception.Message 860 | $errorDetails = $_.ErrorDetails.Message 861 | $convertedResponseForOutput = "Unable to handle Error: "+$errorToReport+" See Error details below. Retry the query. If the error persists, consider exporting your current prompt and continuing later." 862 | } 863 | 864 | if ($errorDetails) 865 | { 866 | write-host "ErrorDetails:"$errorDetails -ForegroundColor "Red" 867 | $convertedResponseForOutput = "Error. See above for details."
868 | } 869 | 870 | return $convertedResponseForOutput 871 | } 872 | 873 | function Get-OpenAIFileById { 874 | param ( 875 | [Parameter(Mandatory=$true)] 876 | [string]$FileIdToRetrieve, # The Id of the file to retrieve. 877 | [Parameter(Mandatory=$true)] 878 | [string]$APIKey # The API key to authenticate the request. 879 | ) 880 | 881 | #Building Request for API 882 | $ContentType = "application/json" 883 | $uri = 'https://api.openai.com/v1/files/'+$FileIdToRetrieve 884 | $method = 'Get' 885 | 886 | $headers = @{ 887 | "Content-Type" = $ContentType 888 | "Authorization" = "Bearer $APIKey" 889 | } 890 | 891 | $RestMethodParameter = @{ 892 | Method = $method 893 | Uri = $uri 894 | 895 | Headers = $Headers 896 | } 897 | 898 | try { 899 | #Call the OpenAI Files API 900 | $APIresponse = Invoke-RestMethod @RestMethodParameter 901 | $convertedResponseForOutput = $APIresponse 902 | 903 | 904 | } 905 | catch { 906 | # If there was an error, define an error message to be written. 907 | $errorToReport = $_.Exception.Message 908 | $errorDetails = $_.ErrorDetails.Message 909 | $convertedResponseForOutput = "Unable to handle Error: "+$errorToReport+" See Error details below. Retry the query. If the error persists, consider exporting your current prompt and continuing later." 910 | } 911 | 912 | if ($errorDetails) 913 | { 914 | Write-Host "ErrorDetails:"$errorDetails -ForegroundColor "Red" 915 | $convertedResponseForOutput = "Error. See above for details." 916 | } 917 | 918 | return $convertedResponseForOutput 919 | 920 | } 921 | 922 | function Get-OpenAIFileContent { 923 | param ( 924 | [Parameter(Mandatory=$true)] 925 | [string]$FileIdToRetrieveContent, # The Id of the file whose content to download. 926 | [Parameter(Mandatory=$true)] 927 | [string]$APIKey # The API key to authenticate the request.
928 | ) 929 | 930 | #Building Request for API 931 | $ContentType = "application/json" 932 | $uri = 'https://api.openai.com/v1/files/'+$FileIdToRetrieveContent+'/content' 933 | $method = 'Get' 934 | 935 | $headers = @{ 936 | "Content-Type" = $ContentType 937 | "Authorization" = "Bearer $APIKey" 938 | } 939 | 940 | $RestMethodParameter = @{ 941 | Method = $method 942 | Uri = $uri 943 | 944 | Headers = $Headers 945 | } 946 | 947 | try { 948 | #Call the OpenAI Files API 949 | $APIresponse = Invoke-RestMethod @RestMethodParameter 950 | $convertedResponseForOutput = $APIresponse 951 | 952 | } 953 | catch { 954 | # If there was an error, define an error message to be written. 955 | $errorToReport = $_.Exception.Message 956 | $errorDetails = $_.ErrorDetails.Message 957 | $convertedResponseForOutput = "Unable to handle Error: "+$errorToReport+" See Error details below. Retry the query. If the error persists, consider exporting your current prompt and continuing later." 958 | } 959 | 960 | if ($errorDetails) 961 | { 962 | Write-Host "ErrorDetails:"$errorDetails -ForegroundColor "Red" 963 | $convertedResponseForOutput = "Error. See above for details." 964 | } 965 | 966 | return $convertedResponseForOutput 967 | 968 | } 969 | 970 | function New-OpenAIFile { 971 | param ( 972 | [Parameter(Mandatory=$true)] 973 | [string]$FileToUpload, # The training file to upload. Provide the full path to it. 974 | [Parameter(Mandatory=$true)] 975 | [string]$APIKey, # The API key to authenticate the request. 976 | [Parameter(Mandatory=$false)] 977 | [string]$Purpose = "fine-tune" # The purpose label for the file. The API currently expects the value "fine-tune" only.
978 | 979 | ) 980 | 981 | # Read the file into a byte array 982 | $fileBytes = [System.IO.File]::ReadAllBytes($FileToUpload) 983 | 984 | # Create the multipart/form-data request body 985 | $body = [System.Net.Http.MultipartFormDataContent]::new() 986 | $fileContent = [System.Net.Http.ByteArrayContent]::new($fileBytes) 987 | $body.Add($fileContent, "file", [System.IO.Path]::GetFileName($FileToUpload)) 988 | $body.Add([System.Net.Http.StringContent]::new($Purpose), "purpose") 989 | 990 | 991 | $uri = 'https://api.openai.com/v1/files' 992 | $method = 'Post' 993 | 994 | 995 | $headers = @{ 996 | # The multipart Content-Type header (including the boundary) is derived from the body itself. 997 | "Authorization" = "Bearer $APIKey" 998 | } 999 | 1000 | $RestMethodParameter = @{ 1001 | Method = $method 1002 | Uri = $uri 1003 | Body = $body 1004 | Headers = $Headers 1005 | } 1006 | 1007 | try { 1008 | #Call the OpenAI Files API 1009 | $APIresponse = Invoke-RestMethod @RestMethodParameter 1010 | $convertedResponseForOutput = $APIresponse 1011 | 1012 | } 1013 | catch { 1014 | # If there was an error, define an error message to be written. 1015 | $errorToReport = $_.Exception.Message 1016 | $errorDetails = $_.ErrorDetails.Message 1017 | $convertedResponseForOutput = "Unable to handle Error: "+$errorToReport+" See Error details below. Retry the query. If the error persists, consider exporting your current prompt and continuing later." 1018 | } 1019 | 1020 | if ($errorDetails) 1021 | { 1022 | Write-Host "ErrorDetails:"$errorDetails -ForegroundColor "Red" 1023 | $convertedResponseForOutput = "Error. See above for details." 1024 | } 1025 | 1026 | return $convertedResponseForOutput 1027 | } 1028 | 1029 | function Remove-OpenAIFile { 1030 | param ( 1031 | [Parameter(Mandatory=$true)] 1032 | [string]$FileIdToDelete, # The Id of the file to delete. 1033 | [Parameter(Mandatory=$true)] 1034 | [string]$APIKey # The API key to authenticate the request.
1035 | ) 1036 | 1037 | #Building Request for API 1038 | $ContentType = "application/json" 1039 | $uri = 'https://api.openai.com/v1/files/'+$FileIdToDelete 1040 | $method = 'Delete' 1041 | 1042 | $headers = @{ 1043 | "Content-Type" = $ContentType 1044 | "Authorization" = "Bearer $APIKey" 1045 | } 1046 | 1047 | $RestMethodParameter = @{ 1048 | Method = $method 1049 | Uri = $uri 1050 | 1051 | Headers = $Headers 1052 | } 1053 | 1054 | try { 1055 | #Call the OpenAI Files API 1056 | $APIresponse = Invoke-RestMethod @RestMethodParameter 1057 | $convertedResponseForOutput = $APIresponse 1058 | 1059 | 1060 | } 1061 | catch { 1062 | # If there was an error, define an error message to be written. 1063 | $errorToReport = $_.Exception.Message 1064 | $errorDetails = $_.ErrorDetails.Message 1065 | $convertedResponseForOutput = "Unable to handle Error: "+$errorToReport+" See Error details below. Retry the query. If the error persists, consider exporting your current prompt and continuing later." 1066 | } 1067 | 1068 | if ($errorDetails) 1069 | { 1070 | Write-Host "ErrorDetails:"$errorDetails -ForegroundColor "Red" 1071 | $convertedResponseForOutput = "Error. See above for details." 1072 | } 1073 | 1074 | return $convertedResponseForOutput 1075 | } 1076 | 1077 | function Get-OpenAIFineTuneJobs { 1078 | param ( 1079 | [Parameter(Mandatory=$true)] 1080 | [string]$APIKey # The API key to authenticate the request.
1081 | ) 1082 | 1083 | #Building Request for API 1084 | $uri = 'https://api.openai.com/v1/fine-tunes' 1085 | $method = 'Get' 1086 | 1087 | $headers = @{ 1088 | "Content-Type" = "application/json" 1089 | "Authorization" = "Bearer $APIKey" 1090 | } 1091 | 1092 | $RestMethodParameter = @{ 1093 | Method = $method 1094 | Uri = $uri 1095 | 1096 | Headers = $Headers 1097 | } 1098 | 1099 | try { 1100 | #Call the OpenAI Fine-tunes API 1101 | $APIresponse = Invoke-RestMethod @RestMethodParameter 1102 | $convertedResponseForOutput = $APIresponse.data 1103 | 1104 | 1105 | 1106 | } 1107 | catch { 1108 | # If there was an error, define an error message to be written. 1109 | $errorToReport = $_.Exception.Message 1110 | $errorDetails = $_.ErrorDetails.Message 1111 | $convertedResponseForOutput = "Unable to handle Error: "+$errorToReport+" See Error details below. Retry the query. If the error persists, consider exporting your current prompt and continuing later." 1112 | } 1113 | 1114 | if ($errorDetails) 1115 | { 1116 | Write-Host "ErrorDetails:"$errorDetails -ForegroundColor "Red" 1117 | $convertedResponseForOutput = "Error. See above for details." 1118 | } 1119 | 1120 | return $convertedResponseForOutput 1121 | } 1122 | 1123 | function Get-OpenAIFineTuneJobById { 1124 | param ( 1125 | [Parameter(Mandatory=$true)] 1126 | [string]$FineTuneId, # The Id of the FineTuneJob you want to get details on. 1127 | [Parameter(Mandatory=$true)] 1128 | [string]$APIKey # The API key to authenticate the request.
1129 | ) 1130 | 1131 | #Building Request for API 1132 | $uri = 'https://api.openai.com/v1/fine-tunes/'+$FineTuneId 1133 | $method = 'Get' 1134 | $headers = @{ 1135 | "Content-Type" = "application/json" 1136 | "Authorization" = "Bearer $APIKey" 1137 | } 1138 | 1139 | $RestMethodParameter = @{ 1140 | Method = $method 1141 | Uri = $uri 1142 | 1143 | Headers = $Headers 1144 | } 1145 | 1146 | try { 1147 | #Call the OpenAI Fine-tunes API 1148 | $APIresponse = Invoke-RestMethod @RestMethodParameter 1149 | $convertedResponseForOutput = $APIresponse 1150 | 1151 | 1152 | 1153 | } 1154 | catch { 1155 | # If there was an error, define an error message to be written. 1156 | $errorToReport = $_.Exception.Message 1157 | $errorDetails = $_.ErrorDetails.Message 1158 | $convertedResponseForOutput = "Unable to handle Error: "+$errorToReport+" See Error details below. Retry the query. If the error persists, consider exporting your current prompt and continuing later." 1159 | } 1160 | 1161 | if ($errorDetails) 1162 | { 1163 | Write-Host "ErrorDetails:"$errorDetails -ForegroundColor "Red" 1164 | $convertedResponseForOutput = "Error. See above for details." 1165 | } 1166 | 1167 | return $convertedResponseForOutput 1168 | 1169 | } 1170 | 1171 | function Get-OpenAIFineTuneEvents { 1172 | param ( 1173 | [Parameter(Mandatory=$true)] 1174 | [string]$FineTuneIdToListEvents, # The Id of the FineTuneJob you want to get the events for. 1175 | [Parameter(Mandatory=$true)] 1176 | [string]$APIKey # The API key to authenticate the request.
1177 | ) 1178 | 1179 | #Building Request for API 1180 | 1181 | $uri = 'https://api.openai.com/v1/fine-tunes/'+$FineTuneIdToListEvents+'/events' 1182 | $method = 'Get' 1183 | 1184 | $headers = @{ 1185 | "Content-Type" = "application/json" 1186 | "Authorization" = "Bearer $APIKey" 1187 | } 1188 | 1189 | $RestMethodParameter = @{ 1190 | Method = $method 1191 | Uri = $uri 1192 | 1193 | Headers = $Headers 1194 | } 1195 | 1196 | try { 1197 | #Call the OpenAI Fine-tunes API 1198 | $APIresponse = Invoke-RestMethod @RestMethodParameter 1199 | $convertedResponseForOutput = $APIresponse 1200 | 1201 | } 1202 | catch { 1203 | # If there was an error, define an error message to be written. 1204 | $errorToReport = $_.Exception.Message 1205 | $errorDetails = $_.ErrorDetails.Message 1206 | $convertedResponseForOutput = "Unable to handle Error: "+$errorToReport+" See Error details below. Retry the query. If the error persists, consider exporting your current prompt and continuing later." 1207 | } 1208 | 1209 | if ($errorDetails) 1210 | { 1211 | Write-Host "ErrorDetails:"$errorDetails -ForegroundColor "Red" 1212 | $convertedResponseForOutput = "Error. See above for details." 1213 | } 1214 | 1215 | return $convertedResponseForOutput 1216 | } 1217 | 1218 | function Remove-OpenAIFineTuneModel { 1219 | param ( 1220 | [Parameter(Mandatory=$true)] 1221 | [string]$ModelToDelete, # The name of the model you want to delete. 1222 | [Parameter(Mandatory=$true)] 1223 | [string]$APIKey # The API key to authenticate the request.
1224 | ) 1225 | 1226 | $uri = 'https://api.openai.com/v1/models/'+$ModelToDelete 1227 | $method = 'Delete' 1228 | 1229 | $headers = @{ 1230 | "Content-Type" = "application/json" 1231 | "Authorization" = "Bearer $APIKey" 1232 | } 1233 | 1234 | $RestMethodParameter = @{ 1235 | Method = $method 1236 | Uri = $uri 1237 | 1238 | Headers = $Headers 1239 | } 1240 | 1241 | try { 1242 | #Call the OpenAI Models API 1243 | $APIresponse = Invoke-RestMethod @RestMethodParameter 1244 | $convertedResponseForOutput = $APIresponse 1245 | 1246 | } 1247 | catch { 1248 | # If there was an error, define an error message to be written. 1249 | $errorToReport = $_.Exception.Message 1250 | $errorDetails = $_.ErrorDetails.Message 1251 | $convertedResponseForOutput = "Unable to handle Error: "+$errorToReport+" See Error details below. Retry the query. If the error persists, consider exporting your current prompt and continuing later." 1252 | } 1253 | 1254 | if ($errorDetails) 1255 | { 1256 | Write-Host "ErrorDetails:"$errorDetails -ForegroundColor "Red" 1257 | $convertedResponseForOutput = "Error. See above for details." 1258 | } 1259 | 1260 | return $convertedResponseForOutput 1261 | } 1262 | 1263 | function Stop-OpenAIFineTuneJob { 1264 | param ( 1265 | [Parameter(Mandatory=$true)] 1266 | [string]$FineTuneIdToCancel, # The Id of the FineTuneJob you want to cancel. 1267 | [Parameter(Mandatory=$true)] 1268 | [string]$APIKey # The API key to authenticate the request.
1269 | ) 1270 | 1271 | #Building Request for API 1272 | 1273 | $uri = 'https://api.openai.com/v1/fine-tunes/'+$FineTuneIdToCancel+'/cancel' 1274 | $method = 'Post' 1275 | 1276 | $headers = @{ 1277 | "Content-Type" = "application/json" 1278 | "Authorization" = "Bearer $APIKey" 1279 | } 1280 | 1281 | $RestMethodParameter = @{ 1282 | Method = $method 1283 | Uri = $uri 1284 | 1285 | Headers = $Headers 1286 | } 1287 | 1288 | try { 1289 | #Call the OpenAI Fine-tunes API 1290 | $APIresponse = Invoke-RestMethod @RestMethodParameter 1291 | $convertedResponseForOutput = $APIresponse 1292 | 1293 | } 1294 | catch { 1295 | # If there was an error, define an error message to be written. 1296 | $errorToReport = $_.Exception.Message 1297 | $errorDetails = $_.ErrorDetails.Message 1298 | $convertedResponseForOutput = "Unable to handle Error: "+$errorToReport+" See Error details below. Retry the query. If the error persists, consider exporting your current prompt and continuing later." 1299 | } 1300 | 1301 | if ($errorDetails) 1302 | { 1303 | Write-Host "ErrorDetails:"$errorDetails -ForegroundColor "Red" 1304 | $convertedResponseForOutput = "Error. See above for details." 1305 | } 1306 | 1307 | return $convertedResponseForOutput 1308 | 1309 | } 1310 | 1311 | function New-OpenAIFineTuneJob { 1312 | param ( 1313 | [Parameter(Mandatory=$true)] 1314 | [string]$trainingFileId, # The Id of the training file (.jsonl) you want to create a fine-tuned model from. 1315 | [Parameter(Mandatory=$true)] 1316 | [string]$APIKey, # The API key to authenticate the request. 1317 | [Parameter(Mandatory=$false)] 1318 | [string]$model, # The name of the model you want to create a fine-tuned version of. 1319 | [Parameter(Mandatory=$false)] 1320 | [int]$n_epochs, # The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset.
1321 | [Parameter(Mandatory=$false)] 1322 | [string]$suffix, # A string of up to 40 characters that will be added to your fine-tuned model name 1323 | [Parameter(Mandatory=$false)] 1324 | [string]$validation_fileId, # The ID of an uploaded file that contains validation data. 1325 | [Parameter(Mandatory=$false)] 1326 | [int]$batch_size, # The batch size to use for training. The batch size is the number of training examples used for a single forward and backward pass. 1327 | [Parameter(Mandatory=$false)] 1328 | [double]$learning_rate_multiplier, # The learning rate multiplier to use for training. The fine-tuning learning rate is the original learning rate used for pretraining multiplied by this value. 1329 | [Parameter(Mandatory=$false)] 1330 | [double]$prompt_loss_weight, # The weight to use for loss on the prompt tokens. This controls how much the model tries to learn to generate the prompt (as compared to the completion, which always has a weight of 1.0), and can add a stabilizing effect to training when completions are short. 1331 | [Parameter(Mandatory=$false)] 1332 | [bool]$compute_classification_metrics, # If set, we calculate classification-specific metrics such as accuracy and F-1 score using the validation set at the end of every epoch. 1333 | [Parameter(Mandatory=$false)] 1334 | [int]$classification_n_classes, # The number of classes in a classification task. 1335 | [Parameter(Mandatory=$false)] 1336 | [string]$classification_positive_class, # The positive class in binary classification. 1337 | [Parameter(Mandatory=$false)] 1338 | [array]$classification_betas # If this is provided, we calculate F-beta scores at the specified beta values. The F-beta score is a generalization of the F-1 score. This is only used for binary classification.
1339 | ) 1340 | 1341 | #Building Request for API 1342 | $uri = 'https://api.openai.com/v1/fine-tunes' 1343 | $method = 'Post' 1344 | 1345 | $RequestBody = @{ 1346 | training_file = $trainingFileId 1347 | } 1348 | 1349 | if ($model) 1350 | { 1351 | $RequestBody.Add("model", $model) 1352 | } 1353 | 1354 | if ($n_epochs) 1355 | { 1356 | $RequestBody.Add("n_epochs", $n_epochs) 1357 | } 1358 | 1359 | if ($suffix) 1360 | { 1361 | $RequestBody.Add("suffix", $suffix) 1362 | } 1363 | 1364 | if ($validation_fileId) 1365 | { 1366 | $RequestBody.Add("validation_file", $validation_fileId) 1367 | } 1368 | 1369 | if ($batch_size) 1370 | { 1371 | $RequestBody.Add("batch_size", $batch_size) 1372 | } 1373 | 1374 | if ($learning_rate_multiplier) 1375 | { 1376 | $RequestBody.Add("learning_rate_multiplier", $learning_rate_multiplier) 1377 | } 1378 | 1379 | if ($prompt_loss_weight) 1380 | { 1381 | $RequestBody.Add("prompt_loss_weight", $prompt_loss_weight) 1382 | } 1383 | 1384 | if ($compute_classification_metrics) 1385 | { 1386 | $RequestBody.Add("compute_classification_metrics", $compute_classification_metrics) 1387 | } 1388 | 1389 | if ($classification_n_classes) 1390 | { 1391 | $RequestBody.Add("classification_n_classes", $classification_n_classes) 1392 | } 1393 | 1394 | if ($classification_positive_class) 1395 | { 1396 | $RequestBody.Add("classification_positive_class", $classification_positive_class) 1397 | } 1398 | 1399 | if ($classification_betas) 1400 | { 1401 | $RequestBody.Add("classification_betas", $classification_betas) 1402 | } 1403 | 1404 | $RequestBody = $RequestBody | ConvertTo-Json 1405 | 1406 | $headers = @{ 1407 | "Content-Type" = "application/json" 1408 | "Authorization" = "Bearer $APIKey" 1409 | } 1410 | $RestMethodParameter = @{ 1411 | Method = $method 1412 | Uri = $uri 1413 | Body = $RequestBody 1414 | Headers = $Headers 1415 | } 1416 | 1417 | try { 1418 | #Call the OpenAI Fine-tunes API 1419 | $APIresponse = Invoke-RestMethod @RestMethodParameter 1420 | 
$convertedResponseForOutput = $APIresponse 1421 | 1422 | } 1423 | catch { 1424 | # If there was an error, define an error message to be written. 1425 | $errorToReport = $_.Exception.Message 1426 | $errorDetails = $_.ErrorDetails.Message 1427 | $convertedResponseForOutput = "Unable to handle Error: "+$errorToReport+" See Error details below. Retry the query. If the error persists, consider exporting your current prompt and continuing later." 1428 | } 1429 | 1430 | if ($errorDetails) 1431 | { 1432 | Write-Host "ErrorDetails:"$errorDetails -ForegroundColor "Red" 1433 | $convertedResponseForOutput = "Error. See above for details." 1434 | } 1435 | 1436 | return $convertedResponseForOutput 1437 | } 1438 | 1439 | function New-OpenAIFineTuneTrainingFile { 1440 | param ( 1441 | [Parameter(Mandatory=$true)] 1442 | [string]$prompt, # The prompt to create a training file with. This will be the first prompt. 1443 | [Parameter(Mandatory=$true)] 1444 | [string]$completion, # The completion for the first prompt. 1445 | [Parameter(Mandatory=$true)] 1446 | [string]$Path # The full path to the .jsonl file to be created and stored. 1447 | ) 1448 | 1449 | $objects = @( 1450 | @{ prompt = $prompt ; completion = $completion } 1451 | ) 1452 | $jsonStrings = $objects | ForEach-Object { $_ | ConvertTo-Json -Compress } 1453 | 1454 | if ($Path.EndsWith(".jsonl")) 1455 | { 1456 | $NewName = $Path 1457 | } 1458 | else { 1459 | $NewName = ($Path+".jsonl") 1460 | } 1461 | 1462 | try { 1463 | New-Item -Path $NewName -ItemType File -Force | Out-Null 1464 | $jsonStrings | Out-File -FilePath $NewName -Encoding utf8 -Append -Force 1465 | } 1466 | catch { 1467 | $errorToReport = $_.Exception.Message 1468 | $errorDetails = $_.ErrorDetails.Message 1469 | $message = "Unable to handle Error: "+$errorToReport+" See Error details below."
1470 | 1471 | Write-Host "Error:"$message -ForegroundColor "Red" 1472 | 1473 | if ($errorDetails) 1474 | { 1475 | Write-Host "ErrorDetails:"$errorDetails -ForegroundColor "Red" 1476 | } 1477 | } 1478 | 1479 | } 1480 | 1481 | function Import-OpenAIPromptFromJson { 1482 | param ( 1483 | [Parameter(Mandatory=$true)] 1484 | [string]$Path # The file path to the JSON file containing the prompt. This parameter is mandatory. 1485 | ) 1486 | 1487 | $promptJson = Get-Content -Path $Path -Raw 1488 | $prompt = $promptJson | ConvertFrom-Json 1489 | 1490 | return $prompt 1491 | } 1492 | 1493 | function Export-OpenAIPromptToJson { 1494 | param ( 1495 | [Parameter(Mandatory=$true)] 1496 | [string]$Path, # The file path the prompt is exported to. This parameter is mandatory. 1497 | [Parameter(Mandatory=$true)] 1498 | [System.Object]$prompt # The prompt (System.Object) to export to a .JSON file. 1499 | ) 1500 | 1501 | $prompt | ConvertTo-Json | Out-File -Encoding utf8 -FilePath $Path 1502 | 1503 | 1504 | return $prompt 1505 | } 1506 | 1507 | function New-OpenAIEmbedding { 1508 | param ( 1509 | [Parameter(Mandatory=$true)] 1510 | [string]$APIKey, # The API key to authenticate the request. 1511 | [Parameter(Mandatory=$false)] 1512 | [string]$text, # The input text to create an embedding for. 1513 | [Parameter(Mandatory=$false)] 1514 | [string]$Model = "text-embedding-ada-002" # The model to use for creating the embedding.
1515 | ) 1516 | 1517 | #Building Request for API 1518 | $uri = 'https://api.openai.com/v1/embeddings' 1519 | $method = 'Post' 1520 | 1521 | $headers = @{ 1522 | "Content-Type" = "application/json" 1523 | "Authorization" = "Bearer $APIKey" 1524 | } 1525 | 1526 | $RequestBody = @{ 1527 | input = $text 1528 | model = $Model 1529 | } 1530 | 1531 | #Convert the whole body to JSON, so that the API can interpret it 1532 | $RequestBody = $RequestBody | ConvertTo-Json 1533 | 1534 | $RestMethodParameter = @{ 1535 | Method = $method 1536 | Uri = $uri 1537 | Body = $RequestBody 1538 | Headers = $Headers 1539 | } 1540 | 1541 | try { 1542 | #Call the OpenAI Embeddings API 1543 | $APIresponse = Invoke-RestMethod @RestMethodParameter 1544 | $convertedResponseForOutput = $APIresponse 1545 | 1546 | } 1547 | catch { 1548 | # If there was an error, define an error message to be written. 1549 | $errorToReport = $_.Exception.Message 1550 | $errorDetails = $_.ErrorDetails.Message 1551 | $convertedResponseForOutput = "Unable to handle Error: "+$errorToReport+" See Error details below. Retry the query. If the error persists, consider exporting your current prompt and continuing later." 1552 | } 1553 | 1554 | if ($errorDetails) 1555 | { 1556 | Write-Host "ErrorDetails:"$errorDetails -ForegroundColor "Red" 1557 | $convertedResponseForOutput = "Error. See above for details." 1558 | } 1559 | 1560 | return $convertedResponseForOutput 1561 | 1562 | } 1563 | 1564 | function Convert-PDFtoText { 1565 | <# 1566 | .SYNOPSIS 1567 | Converts a PDF file to a text, CSV, or JSON file. 1568 | .DESCRIPTION 1569 | The Convert-PDFtoText function takes a PDF file and converts it to a text file. You can also choose to export the data in CSV or JSON format. 1570 | If the PDF file is empty, the function returns a message indicating that it either has no text or consists only of pictures or a scan. 1571 | .PARAMETER filePath 1572 | The path to the PDF file you want to convert. This is a mandatory parameter.
1573 | .PARAMETER TypeToExport 1574 | Specifies the file format to export the data to. Valid options are "txt", "csv", "json", or "jsonl". This is a mandatory parameter. 1575 | .INPUTS 1576 | This function does not accept input from the pipeline. 1577 | .OUTPUTS 1578 | The function outputs the path to the converted file in the specified format. 1579 | .EXAMPLE 1580 | Convert-PDFtoText -filePath "C:\Documents\example.pdf" -TypeToExport "txt" 1581 | This command converts the example.pdf file located in the C:\Documents folder to a text file. 1582 | #> 1583 | 1584 | param( 1585 | [Parameter(Mandatory=$true)] 1586 | [string]$filePath, 1587 | [Parameter(Mandatory=$true)] 1588 | [ValidateSet("txt", "csv", "json", "jsonl")] 1589 | [string]$TypeToExport 1590 | ) 1591 | 1592 | Write-Verbose ("ShellGPT-Convert-PDFtoText @ "+(Get-Date)+" | Converting PDF to text: "+($filePath)) 1593 | 1594 | try 1595 | { 1596 | Add-Type -Path "C:\Program Files\PackageManagement\NuGet\Packages\BouncyCastle.1.8.9\lib\BouncyCastle.Crypto.dll" 1597 | Add-Type -Path "C:\Program Files\PackageManagement\NuGet\Packages\iTextSharp.5.5.13.3\lib\itextsharp.dll" 1598 | Write-Verbose ("ShellGPT-Convert-PDFtoText @ "+(Get-Date)+" | Loaded itextsharp.dll") 1599 | } 1600 | 1601 | catch 1602 | { 1603 | 1604 | Write-Host ("ShellGPT-Convert-PDFtoText @ "+(Get-Date)+" | itextsharp.dll not present. 
Make sure you installed it and that it is in the expected folder: C:\Program Files\PackageManagement\NuGet\Packages\iTextSharp.5.5.13.3\lib") -ForegroundColor Red 1605 | Write-Host ("ShellGPT-Convert-PDFtoText @ "+(Get-Date)+" | Unable to handle Error "+($_.Exception.Message)) -ForegroundColor "Red" 1606 | } 1607 | 1608 | try { 1609 | $pdf = New-Object iTextSharp.text.pdf.PdfReader -ArgumentList $filePath 1610 | Write-Verbose ("ShellGPT-Convert-PDFtoText @ "+(Get-Date)+" | PDF was found.") 1611 | 1612 | $text = "" 1613 | for ($page = 1; $page -le $pdf.NumberOfPages; $page++){ 1614 | Write-Verbose ("ShellGPT-Convert-PDFtoText @ "+(Get-Date)+" | Parsing text...") 1615 | $text += ([iTextSharp.text.pdf.parser.PdfTextExtractor]::GetTextFromPage($pdf,$page)) 1616 | } 1617 | $pdf.Close() 1618 | 1619 | 1620 | if ([string]::IsNullOrWhiteSpace($text)) 1621 | { 1622 | Write-Host ("ShellGPT-Convert-PDFtoText @ "+(Get-Date)+" | PDF was found, but it looks like it's empty. Either it really has no text, or it consists only of pictures or a scan. ShellGPT does not have OCR. The prompt will not have any additional content in it.") -ForegroundColor Red 1623 | } 1624 | 1625 | Write-Verbose ("ShellGPT-Convert-PDFtoText @ "+(Get-Date)+" | Done parsing PDF. Preparing export to .txt") 1626 | 1627 | if ($filePath.Contains("\")) 1628 | { 1629 | Write-Verbose ("ShellGPT-Convert-PDFtoText @ "+(Get-Date)+" | Filepath is the whole path. Splitting it up...") 1630 | $filename = [System.IO.Path]::GetFileName($filePath) 1631 | $basenamefile = [System.IO.Path]::GetFileNameWithoutExtension($filePath) 1632 | $Outputfolder = (Split-Path -Path $filePath -Parent) + "\" 1633 | } 1634 | else { 1635 | Write-Verbose ("ShellGPT-Convert-PDFtoText @ "+(Get-Date)+" | Filepath is only a filename. 
Indicates a run in the dir where the script was launched") 1636 | $filename = $filePath 1637 | $basenamefile = [System.IO.Path]::GetFileNameWithoutExtension($filename) 1638 | $Outputfolder = "" 1639 | } 1640 | 1641 | switch ($TypeToExport) { 1642 | "txt" { 1643 | $exportEnding = ".txt" 1644 | } 1645 | "csv" { 1646 | $exportEnding = ".csv" 1647 | 1648 | } 1649 | "json" { 1650 | $exportEnding = ".json" 1651 | 1652 | } 1653 | "jsonl" { 1654 | $exportEnding = ".jsonl" 1655 | } 1656 | default { 1657 | Write-Host ("ShellGPT-Convert-PDFtoText @ "+(Get-Date)+" | Invalid export type.") 1658 | } 1659 | } 1660 | 1661 | Write-Verbose ("ShellGPT-Convert-PDFtoText @ "+(Get-Date)+" | Export type is: "+($exportEnding)) 1662 | 1663 | $OutputPath = $Outputfolder+$basenamefile+$exportEnding 1664 | 1665 | Write-Verbose ("ShellGPT-Convert-PDFtoText @ "+(Get-Date)+" | Outputpath is: "+($OutputPath)) 1666 | 1667 | $text | Out-File $OutputPath -Force 1668 | 1669 | } 1670 | catch { 1671 | Write-Host ("ShellGPT-Convert-PDFtoText @ "+(Get-Date)+" | PDF could not be loaded. Is itextsharp.dll present? Does the PDF exist? 
Is the Path valid?") -ForegroundColor Red 1672 | Write-Host ("ShellGPT-Convert-PDFtoText @ "+(Get-Date)+" | Unable to handle Error "+($_.Exception.Message)) -ForegroundColor "Red" 1673 | } 1674 | 1675 | return $OutputPath 1676 | } 1677 | 1678 | function Get-ShellGPTHelpMessage { 1679 | Write-Host ("ShellGPT @ "+(Get-Date)+" | To include a file for the model to reference in the prompt, use the following notation 'file | pathtofile | instruction' in the query.") -ForegroundColor DarkMagenta 1680 | Write-Host ("ShellGPT @ "+(Get-Date)+" | Supported file types are: .txt, .pdf, .csv, .json") -ForegroundColor DarkMagenta 1681 | Write-Host ("-------------------------------------------------------------------------------------------------------------------------") -ForegroundColor DarkGray 1682 | Write-Host ("ShellGPT @ "+(Get-Date)+" | Example: file | C:\Users\Yanik\test.txt | Summarize this:") -ForegroundColor DarkGray 1683 | Write-Host ("-------------------------------------------------------------------------------------------------------------------------") -ForegroundColor DarkGray 1684 | Write-Host ("ShellGPT @ "+(Get-Date)+" | This will summarize the content in the file 'test.txt'") -ForegroundColor DarkGray 1685 | Write-Host ("ShellGPT @ "+(Get-Date)+" | Example: file | C:\Users\Yanik\test.pdf | Summarize this:") -ForegroundColor DarkGray 1686 | Write-Host ("ShellGPT @ "+(Get-Date)+" | This will create a .txt file with the content of the .PDF, read it and summarize the content in the file 'test.txt'") -ForegroundColor DarkGray 1687 | Write-Host ("-------------------------------------------------------------------------------------------------------------------------") -ForegroundColor DarkGray 1688 | Write-Host ("ShellGPT @ "+(Get-Date)+" | There are a few other commands available. 
") -ForegroundColor DarkMagenta 1689 | Write-Host ("-------------------------------------------------------------------------------------------------------------------------") -ForegroundColor DarkGray 1690 | Write-Host ("ShellGPT @ "+(Get-Date)+" | Start a new conversation: ") -ForegroundColor DarkGray 1691 | Write-Host ("ShellGPT @ "+(Get-Date)+" | newconvo | ") -ForegroundColor DarkGray 1692 | Write-Host ("-------------------------------------------------------------------------------------------------------------------------") -ForegroundColor DarkGray 1693 | Write-Host ("ShellGPT @ "+(Get-Date)+" | Export the current prompt: ") -ForegroundColor DarkGray 1694 | Write-Host ("ShellGPT @ "+(Get-Date)+" | export | ") -ForegroundColor DarkGray 1695 | Write-Host ("-------------------------------------------------------------------------------------------------------------------------") -ForegroundColor DarkGray 1696 | 1697 | Write-Host ("ShellGPT @ "+(Get-Date)+" | Stop ShellGPT: ") -ForegroundColor DarkGray 1698 | Write-Host ("ShellGPT @ "+(Get-Date)+" | quit | ") -ForegroundColor DarkGray 1699 | Write-Host ("-------------------------------------------------------------------------------------------------------------------------") -ForegroundColor DarkGray 1700 | 1701 | Write-Host ("ShellGPT @ "+(Get-Date)+" | Stop ShellGPT and export the prompt: ") -ForegroundColor DarkGray 1702 | Write-Host ("-------------------------------------------------------------------------------------------------------------------------") -ForegroundColor DarkGray 1703 | } 1704 | 1705 | function Start-ShellGPT { 1706 | <# 1707 | .SYNOPSIS 1708 | Start-ShellGPT is a PowerShell function that allows users to communicate with OpenAI's GPT-3.5 model. Users can select a character for the model to assume and provide queries or commands for the model to complete. 
1709 | The function also allows users to continue a previous conversation by providing the path to a JSON file containing the previous conversation. 1710 | .DESCRIPTION 1711 | Start-ShellGPT is a PowerShell function that provides users with a shell interface to communicate with OpenAI's GPT-3.5 model. 1712 | Users can specify a character for the model to assume and provide queries or commands for the model to complete. 1713 | The function supports several parameters that allow users to configure the behavior of the model, including the temperature and maximum number of tokens. 1714 | Additionally, users can continue a previous conversation by providing the path to a JSON file containing the previous conversation. 1715 | .PARAMETER APIKey 1716 | The APIKey parameter is mandatory and specifies the API key to use when communicating with the OpenAI API. 1717 | .PARAMETER model 1718 | The model parameter is optional and specifies the name of the GPT-3.5 model to use. The default value is "gpt-3.5-turbo". 1719 | .PARAMETER stop 1720 | The stop parameter is optional and specifies the string used to stop the model from generating additional output. 1721 | The default value is "\n". 1722 | .PARAMETER temperature 1723 | The temperature parameter is optional and specifies the "temperature" to use when generating text. 1724 | The default value is 0.4. 1725 | .PARAMETER max_tokens 1726 | The max_tokens parameter is optional and specifies the maximum number of tokens to generate. 1727 | The default value is 900. 1728 | .PARAMETER ShowOutput 1729 | The ShowOutput parameter is optional and specifies whether to show the output of the model. 1730 | The default value is $false. 1731 | .PARAMETER ShowTokenUsage 1732 | The ShowTokenUsage parameter is optional and specifies whether to show the token usage of the model. 1733 | The default value is $false. 
1734 | .PARAMETER instructor 1735 | The instructor parameter is optional and specifies the prompt that the model uses to generate text. 1736 | The default value is "You are a helpful AI. You answer as concisely as possible." 1737 | .PARAMETER assistantReply 1738 | The assistantReply parameter is optional and specifies the initial reply of the model when a new conversation is started. 1739 | The default value is "Hello! I'm a ChatGPT-3.5 Model. How can I help you?" 1740 | .INPUTS 1741 | Start-ShellGPT does not take any input by pipeline. 1742 | .OUTPUTS 1743 | Start-ShellGPT does not output anything to the pipeline. 1744 | .EXAMPLE 1745 | Start-ShellGPT -APIKey "my_api_key" 1746 | This example starts a new conversation with the default "gpt-3.5-turbo" model using the specified API key. 1747 | .EXAMPLE 1748 | Start-ShellGPT -APIKey "my_api_key" -model "gpt-4" 1749 | This example starts a new conversation with the "gpt-4" model using the specified API key. 1750 | .EXAMPLE 1751 | Start-ShellGPT -APIKey "your_api_key" -temperature 0.8 -max_tokens 1000 1752 | This example starts a new conversation with the default "gpt-3.5-turbo" model using the specified API key and sets the temperature to 0.8 and the maximum number of tokens to 1000. 1753 | #> 1754 | 1755 | param ( 1756 | [Parameter(Mandatory=$true)][string]$APIKey, 1757 | [Parameter(Mandatory=$false)][string]$UseAzure, 1758 | [Parameter(Mandatory=$false)][string]$DeploymentName, 1759 | [Parameter(Mandatory=$false)][string]$model = "gpt-3.5-turbo", 1760 | [Parameter(Mandatory=$false)][string]$stop = "\n", 1761 | [Parameter(Mandatory=$false)][double]$temperature = 0.4, 1762 | [Parameter(Mandatory=$false)][int]$max_tokens = 900, 1763 | [Parameter(Mandatory=$false)][bool]$ShowOutput = $false, 1764 | [Parameter(Mandatory=$false)][bool]$ShowTokenUsage = $false, 1765 | [Parameter(Mandatory=$false)][string]$instructor = "You are a helpful AI. 
You answer as concisely as possible.", 1766 | [Parameter(Mandatory=$false)] [string]$assistantReply = "Hello! I'm a ChatGPT-3.5 Model. How can I help you?" 1767 | ) 1768 | 1769 | Write-Verbose ("ShellGPT @ "+(Get-Date)+" | Initializing... ") 1770 | Write-Verbose ("ShellGPT @ "+(Get-Date)+" | Used Model is: "+($model)) 1771 | Write-Verbose ("ShellGPT @ "+(Get-Date)+" | Used stop sequence is: "+($stop)) 1772 | Write-Verbose ("ShellGPT @ "+(Get-Date)+" | Used temperature is: "+($temperature)) 1773 | Write-Verbose ("ShellGPT @ "+(Get-Date)+" | Used max_tokens is: "+($max_tokens)) 1774 | 1775 | $continueConversation = $(Write-Host ("ShellGPT @ "+(Get-Date)+" | Do you want to restore an existing conversation? (enter 'y' or 'yes'): ") -ForegroundColor Yellow -NoNewLine; Read-Host) 1776 | 1777 | if ($continueConversation -eq "y" -or $continueConversation -eq "yes") 1778 | { 1779 | Get-ShellGPTHelpMessage 1780 | $importPath = $(Write-Host ("ShellGPT @ "+(Get-Date)+" | Provide the full path to the prompt*.json file you want to continue the conversation on: ") -ForegroundColor Yellow -NoNewLine; Read-Host) 1781 | [System.Collections.ArrayList]$importedPrompt = Import-OpenAIPromptFromJson -Path $importPath 1782 | [System.Collections.ArrayList]$previousMessages = $importedPrompt 1783 | } 1784 | else 1785 | { 1786 | # Display a welcome message and instructions for stopping the conversation. 1787 | Get-ShellGPTHelpMessage 1788 | 1789 | # Initialize the previous messages array. 
1790 | [System.Collections.ArrayList]$previousMessages = @() 1791 | 1792 | $option = Read-Host ("ShellGPT @ "+(Get-Date)+" | Select the Character the Model should assume:`n1: Chat`n2: Ticker and Sentiment Analysis`n3: Sentiment Analysis`n4: Intent Analysis`n5: Intent & Topic Analysis`nShellGPT @ "+(Get-Date)+" | Enter the number of the character you'd like to use") 1793 | 1794 | switch ($option) { 1795 | "1" { 1796 | $Character = "Chat" 1797 | } 1798 | "2" { 1799 | $Character = "SentimentAndTickerAnalysis" 1800 | } 1801 | "3" { 1802 | $Character = "SentimentAnalysis" 1803 | } 1804 | "4" { 1805 | $Character = "IntentAnalysis" 1806 | } 1807 | "5" { 1808 | $Character = "IntentAndSubjectAnalysis" 1809 | } 1810 | 1811 | default { 1812 | $Character = "Chat" 1813 | Write-Host ("ShellGPT @ "+(Get-Date)+" | Invalid option selected. Defaulting to the 'Chat' character.") -ForegroundColor Yellow 1814 | } 1815 | } 1816 | 1817 | Write-Host ("ShellGPT @ "+(Get-Date)+" | Selected Character is: "+($Character)) -ForegroundColor Yellow 1818 | $InitialQuery = Read-Host ("ShellGPT @ "+(Get-Date)+" | Your query for ChatGPT or commands for ShellGPT") 1819 | 1820 | Write-Verbose ("ShellGPT @ "+(Get-Date)+" | InitialQuery is: "+($InitialQuery)) 1821 | 1822 | switch -Regex ($InitialQuery) { 1823 | "^file \|.*" { 1824 | Write-Verbose ("ShellGPT @ "+(Get-Date)+" | InitialQuery is File command") 1825 | 1826 | $filePath = (($InitialQuery.split("|"))[1]).TrimStart(" ") 1827 | $filePath = $filePath.TrimEnd(" ") 1828 | $filePath = $filePath.Replace('"','') 1829 | $FileQuery = (($InitialQuery.split("|"))[2]).TrimStart(" ") 1830 | 1831 | Write-Verbose ("ShellGPT @ "+(Get-Date)+" | Extracted FilePath from Query is: "+($filePath)) 1832 | Write-Verbose ("ShellGPT @ "+(Get-Date)+" | Extracted Query is: "+($FileQuery)) 1833 | Write-Verbose ("ShellGPT @ "+(Get-Date)+" | Starting Conversation...") 1834 | 1835 | if ($UseAzure) 1836 | { 1837 | [System.Collections.ArrayList]$conversationPrompt = New-OpenAICompletionConversation -Character 
$Character -query $FileQuery -instructor $instructor -APIKey $APIKey -temperature $temperature -max_tokens $max_tokens -model $model -stop $stop -filePath $filePath -ShowTokenUsage $ShowTokenUsage -ShowOutput $ShowOutput -assistantReply $assistantReply -UseAzure $UseAzure -DeploymentName $DeploymentName 1838 | } 1839 | else { 1840 | [System.Collections.ArrayList]$conversationPrompt = New-OpenAICompletionConversation -Character $Character -query $FileQuery -instructor $instructor -APIKey $APIKey -temperature $temperature -max_tokens $max_tokens -model $model -stop $stop -filePath $filePath -ShowTokenUsage $ShowTokenUsage -ShowOutput $ShowOutput -assistantReply $assistantReply 1841 | } 1842 | 1843 | Write-Host ("CompletionAPI @ "+(Get-Date)+" | "+($conversationPrompt[($conversationPrompt.count)-1].content)) -ForegroundColor Green 1844 | 1845 | if ($InitialQuery.Contains("| out |")) 1846 | { 1847 | $filePathOut = (($InitialQuery.split("|"))[4]).TrimStart(" ") 1848 | $filePathOut = $filePathOut.TrimEnd(" ") 1849 | Write-Host ("ShellGPT @ "+(Get-Date)+" | Writing output to file: "+($filePathOut)) -ForegroundColor Yellow 1850 | 1851 | try { 1852 | ($conversationPrompt[($conversationPrompt.count)-1].content) | Out-File -Encoding utf8 -FilePath $filePathOut 1853 | Write-Host ("ShellGPT @ "+(Get-Date)+" | Successfully created file with output at: "+($filePathOut)) -ForegroundColor Green 1854 | 1855 | } 1856 | catch { 1857 | Write-Host ("ShellGPT @ "+(Get-Date)+" | Could not write output to file: "+($filePathOut)) -ForegroundColor Red 1858 | } 1859 | } 1860 | } 1861 | "^quit \|.*" { 1862 | Write-Host ("ShellGPT @ "+(Get-Date)+" | ShellGPT is exiting now...") -ForegroundColor Yellow 1863 | Start-Sleep 5 1864 | exit 1865 | } 1866 | "^export \|.*" { 1867 | Write-Host ("ShellGPT @ "+(Get-Date)+" | ShellGPT has nothing to export :(") -ForegroundColor Yellow 1868 | } 1869 | "^\s*$" { 1870 | Write-Host ("ShellGPT @ "+(Get-Date)+" | You have not provided any input. 
Will not send this query to the CompletionAPI") -ForegroundColor Yellow 1871 | [System.Collections.ArrayList]$conversationPrompt = Set-OpenAICompletionCharacter $Character 1872 | } 1873 | default { 1874 | 1875 | if ($InitialQuery.contains("| out |")) 1876 | { 1877 | $filePathOut = (($InitialQuery.split("|"))[2]).TrimStart(" ") 1878 | $filePathOut = $filePathOut.TrimEnd(" ") 1879 | $InitialQuery = (($InitialQuery.split("|"))[0]).TrimStart(" ") 1880 | $InitialQuery = $InitialQuery.TrimEnd(" ") 1881 | 1882 | if ($UseAzure) 1883 | { 1884 | [System.Collections.ArrayList]$conversationPrompt = New-OpenAICompletionConversation -Character $Character -query $InitialQuery -instructor $instructor -APIKey $APIKey -temperature $temperature -max_tokens $max_tokens -model $model -stop $stop -ShowTokenUsage $ShowTokenUsage -ShowOutput $ShowOutput -assistantReply $assistantReply -UseAzure $UseAzure -DeploymentName $DeploymentName 1885 | } 1886 | else 1887 | { 1888 | [System.Collections.ArrayList]$conversationPrompt = New-OpenAICompletionConversation -Character $Character -query $InitialQuery -instructor $instructor -APIKey $APIKey -temperature $temperature -max_tokens $max_tokens -model $model -stop $stop -ShowTokenUsage $ShowTokenUsage -ShowOutput $ShowOutput -assistantReply $assistantReply 1889 | } 1890 | 1891 | Write-Host ("ShellGPT @ "+(Get-Date)+" | Writing output to file: "+($filePathOut)) -ForegroundColor Yellow 1892 | 1893 | try { 1894 | ($conversationPrompt[($conversationPrompt.count)-1].content) | Out-File -Encoding utf8 -FilePath $filePathOut 1895 | Write-Host ("ShellGPT @ "+(Get-Date)+" | Successfully created file with output at: "+($filePathOut)) -ForegroundColor Green 1896 | 1897 | } 1898 | catch { 1899 | Write-Host ("ShellGPT @ "+(Get-Date)+" | Could not write output to file: "+($filePathOut)) -ForegroundColor Red 1900 | } 1901 | } 1902 | else 1903 | { 1904 | if ($UseAzure) 1905 | { 1906 | [System.Collections.ArrayList]$conversationPrompt = 
New-OpenAICompletionConversation -Character $Character -query $InitialQuery -instructor $instructor -APIKey $APIKey -temperature $temperature -max_tokens $max_tokens -model $model -stop $stop -ShowTokenUsage $ShowTokenUsage -ShowOutput $ShowOutput -assistantReply $assistantReply -UseAzure $UseAzure -DeploymentName $DeploymentName 1907 | } 1908 | else { 1909 | [System.Collections.ArrayList]$conversationPrompt = New-OpenAICompletionConversation -Character $Character -query $InitialQuery -instructor $instructor -APIKey $APIKey -temperature $temperature -max_tokens $max_tokens -model $model -stop $stop -ShowTokenUsage $ShowTokenUsage -ShowOutput $ShowOutput -assistantReply $assistantReply 1910 | } 1911 | } 1912 | Write-Host ("CompletionAPI @ "+(Get-Date)+" | "+($conversationPrompt[($conversationPrompt.count)-1].content)) -ForegroundColor Green 1913 | } 1914 | } 1915 | 1916 | [System.Collections.ArrayList]$previousMessages = $conversationPrompt 1917 | } 1918 | 1919 | # Initialize the continue variable. 1920 | $continue = $true 1921 | 1922 | # Loop until the user stops the conversation. 1923 | while ($continue) { 1924 | 1925 | # Prompt the user to enter their query for ChatGPT or commands for ShellGPT. 
1926 | $userQuery = Read-Host ("ShellGPT @ "+(Get-Date)+" | Your query for ChatGPT or commands for ShellGPT") 1927 | 1928 | switch -Regex ($userQuery) { 1929 | "^file \|.*" { 1930 | Write-Verbose ("ShellGPT @ "+(Get-Date)+" | InitialQuery is File command") 1931 | 1932 | $filePath = (($userQuery.split("|"))[1]).TrimStart(" ") 1933 | $filepath = $filePath.TrimEnd(" ") 1934 | $filePath = $filePath.Replace('"','') 1935 | 1936 | $FileQuery = (($userQuery.split("|"))[2]).TrimStart(" ") 1937 | 1938 | Write-Verbose ("ShellGPT @ "+(Get-Date)+" | Extracted FilePath from Query is: "+($filePath)) 1939 | Write-Verbose ("ShellGPT @ "+(Get-Date)+" | Extracted Query is: "+($FileQuery)) 1940 | 1941 | 1942 | if ($UseAzure) 1943 | { 1944 | [System.Collections.ArrayList]$conversationPrompt = New-OpenAICompletionConversation -Character $Character -query $FileQuery -instructor $instructor -APIKey $APIKey -temperature $temperature -max_tokens $max_tokens -model $model -stop $stop -filePath $filePath -ShowTokenUsage $ShowTokenUsage -ShowOutput $ShowOutput -UseAzure $UseAzure -DeploymentName $DeploymentName 1945 | } 1946 | else { 1947 | [System.Collections.ArrayList]$conversationPrompt = New-OpenAICompletionConversation -Character $Character -query $FileQuery -instructor $instructor -APIKey $APIKey -temperature $temperature -max_tokens $max_tokens -model $model -stop $stop -filePath $filePath -ShowTokenUsage $ShowTokenUsage -ShowOutput $ShowOutput 1948 | } 1949 | Write-Host ("CompletionAPI @ "+(Get-Date)+" | "+($conversationPrompt[($conversationPrompt.count)-1].content)) -ForegroundColor Green 1950 | 1951 | if ($userQuery.Contains("| out |")) 1952 | { 1953 | 1954 | $filePathOut = (($UserQuery.split("|"))[4]).TrimStart(" ") 1955 | $filePathOut = $filePathOut.TrimEnd(" ") 1956 | Write-Host ("ShellGPT @ "+(Get-Date)+" | Writing output to file: "+($filePathOut)) -ForegroundColor Yellow 1957 | 1958 | try { 1959 | ($conversationPrompt[($conversationPrompt.count)-1].content) | Out-File -Encoding 
utf8 -FilePath $filePathOut 1960 | Write-Host ("ShellGPT @ "+(Get-Date)+" | Successfully created file with output at: "+($filePathOut)) -ForegroundColor Green 1961 | 1962 | } 1963 | catch { 1964 | Write-Host ("ShellGPT @ "+(Get-Date)+" | Could not write output to file: "+($filePathOut)) -ForegroundColor Red 1965 | } 1966 | 1967 | } 1968 | 1969 | } 1970 | "^newconvo \|.*" { 1971 | Start-ShellGPT -APIKey $APIKey -temperature $temperature -max_tokens $max_tokens -model $model -stop $stop 1972 | } 1973 | "^quit \|.*" { 1974 | $exportBool = $(Write-Host ("ShellGPT @ "+(Get-Date)+" | Do you want to export the current prompt before exiting? Enter 'y' or 'yes': ") -ForegroundColor yellow -NoNewLine; Read-Host) 1975 | if ($exportBool -eq "y" -or $exportBool -eq "yes" -or $exportBool -eq "Y" -or $exportBool -eq "YES") 1976 | { 1977 | $exportPath = $(Write-Host ("ShellGPT @ "+(Get-Date)+" | Provide the full path to the prompt*.json file that you want to export now and later continue the conversation on: ") -ForegroundColor yellow -NoNewLine; Read-Host) 1978 | Export-OpenAIPromptToJson -Path $exportPath -prompt $previousMessages 1979 | Write-Host ("ShellGPT @ "+(Get-Date)+" | ShellGPT exported the prompt to: "+($exportPath)) -ForegroundColor yellow 1980 | } 1981 | Write-Host ("ShellGPT @ "+(Get-Date)+" | ShellGPT is exiting now...") -ForegroundColor yellow 1982 | Start-Sleep 5 1983 | exit 1984 | } 1985 | "^export \|.*" { 1986 | 1987 | $exportPath = $(Write-Host ("ShellGPT @ "+(Get-Date)+" | Provide the full path to the prompt*.json file that you want to export now and later continue the conversation on: ") -ForegroundColor yellow -NoNewLine; Read-Host) 1988 | 1989 | Export-OpenAIPromptToJson -Path $exportPath -prompt $previousMessages 1990 | Write-Host ("ShellGPT @ "+(Get-Date)+" | ShellGPT exported the prompt to: "+($exportPath)) -ForegroundColor yellow 1991 | } 1992 | "^\s*$" { 1993 | Write-Host ("ShellGPT @ "+(Get-Date)+" | You have not provided any input. 
Will not send this query to the CompletionAPI") -ForegroundColor Yellow 1994 | [System.Collections.ArrayList]$conversationPrompt = Set-OpenAICompletionCharacter $Character 1995 | } 1996 | default { 1997 | if ($userQuery.contains("| out |")) 1998 | { 1999 | $filePathOut = (($userQuery.split("|"))[2]).TrimStart(" ") 2000 | $filePathOut = $filePathOut.TrimEnd(" ") 2001 | $userQuery = (($userQuery.split("|"))[0]).TrimStart(" ") 2002 | $userQuery = $userQuery.TrimEnd(" ") 2003 | 2004 | if ($UseAzure){ 2005 | [System.Collections.ArrayList]$conversationPrompt = Add-OpenAICompletionMessageToConversation -query $userQuery -previousMessages $previousMessages -APIKey $APIKey -temperature $temperature -max_tokens $max_tokens -model $model -stop $stop -ShowTokenUsage $ShowTokenUsage -ShowOutput $ShowOutput -UseAzure $UseAzure -DeploymentName $DeploymentName 2006 | } 2007 | else { 2008 | [System.Collections.ArrayList]$conversationPrompt = Add-OpenAICompletionMessageToConversation -query $userQuery -previousMessages $previousMessages -APIKey $APIKey -temperature $temperature -max_tokens $max_tokens -model $model -stop $stop -ShowTokenUsage $ShowTokenUsage -ShowOutput $ShowOutput 2009 | } 2010 | 2011 | Write-Host ("ShellGPT @ "+(Get-Date)+" | Writing output to file: "+($filePathOut)) -ForegroundColor Yellow 2012 | 2013 | try { 2014 | ($conversationPrompt[($conversationPrompt.count)-1].content) | Out-File -Encoding utf8 -FilePath $filePathOut 2015 | Write-Host ("ShellGPT @ "+(Get-Date)+" | Successfully created file with output at: "+($filePathOut)) -ForegroundColor Green 2016 | 2017 | } 2018 | catch { 2019 | Write-Host ("ShellGPT @ "+(Get-Date)+" | Could not write output to file: "+($filePathOut)) -ForegroundColor Red 2020 | } 2021 | 2022 | } 2023 | else 2024 | { 2025 | if ($useAzure) { 2026 | [System.Collections.ArrayList]$conversationPrompt = Add-OpenAICompletionMessageToConversation -query $userQuery -previousMessages $previousMessages -APIKey $APIKey -temperature 
$temperature -max_tokens $max_tokens -model $model -stop $stop -ShowTokenUsage $ShowTokenUsage -ShowOutput $ShowOutput -UseAzure $UseAzure -DeploymentName $DeploymentName 2027 | } 2028 | else { 2029 | [System.Collections.ArrayList]$conversationPrompt = Add-OpenAICompletionMessageToConversation -query $userQuery -previousMessages $previousMessages -APIKey $APIKey -temperature $temperature -max_tokens $max_tokens -model $model -stop $stop -ShowTokenUsage $ShowTokenUsage -ShowOutput $ShowOutput 2030 | } 2031 | } 2032 | 2033 | Write-Host ("CompletionAPI @ "+(Get-Date)+" | "+($conversationPrompt[($conversationPrompt.count)-1].content)) -ForegroundColor Green 2034 | } 2035 | } 2036 | 2037 | [System.Collections.ArrayList]$previousMessages = $conversationPrompt 2038 | } 2039 | } 2040 | function Get-OpenAiQuickResponse { 2041 | 2042 | param ( 2043 | [Parameter(Mandatory=$true, Position = 0, ValueFromPipeline=$true, ValueFromPipelineByPropertyName=$true)] [ValidateNotNullOrEmpty()] [string]$query, 2044 | [Parameter(Mandatory=$false, Position = 1)] [switch]$useAzure, 2045 | [Parameter(Mandatory=$false, Position = 2)] [string]$DeploymentName, 2046 | [Parameter(Mandatory=$false, Position = 3)] [string]$model = "gpt-4", 2047 | [Parameter(Mandatory=$false, Position = 4)] [string]$stop = "\n", 2048 | [Parameter(Mandatory=$false, Position = 5)] [double]$temperature = 0.4, 2049 | [Parameter(Mandatory=$false, Position = 6)] [int]$max_tokens = 900, 2050 | [Parameter(Mandatory=$false, Position = 7)] [bool]$ShowOutput = $false, 2051 | [Parameter(Mandatory=$false, Position = 8)] [bool]$ShowTokenUsage = $false, 2052 | [Parameter(Mandatory=$false, Position = 9)] [string]$instructor = "You are a helpful AI. You answer as concisely as possible.", 2053 | [Parameter(Mandatory=$false, Position = 10)] [string]$assistantReply = "Hello! I'm a ChatGPT-4 Model. 
How can I help you?", 2054 | [Parameter(Mandatory=$false, Position = 11)] [string]$Character = "Chat" 2055 | ) 2056 | 2057 | if ($useAzure) 2058 | { 2059 | if (!($env:AZ_OAI_APIKey)) 2060 | { 2061 | Write-Host "Please define the environment variable AZ_OAI_APIKey with your Microsoft Azure OpenAI API Key" 2062 | throw "Please define the environment variable AZ_OAI_APIKey" 2063 | } 2064 | 2065 | if (!($env:AZ_OAI_ResourceName)) 2066 | { 2067 | Write-Host "Please define the environment variable AZ_OAI_ResourceName with the name of your Microsoft Azure OpenAI Resource" 2068 | throw "Please define the environment variable AZ_OAI_ResourceName" 2069 | } 2070 | if (!($env:AZ_OAI_DeploymentName)) 2071 | { 2072 | Write-Host "Please define the environment variable AZ_OAI_DeploymentName with the name of your Microsoft Azure OpenAI deployment" 2073 | throw "Please define the environment variable AZ_OAI_DeploymentName" 2074 | } 2075 | 2076 | $AzureResourceName = $env:AZ_OAI_ResourceName 2077 | $DeploymentName = $env:AZ_OAI_DeploymentName 2078 | $APIKey = $env:AZ_OAI_APIKey 2079 | } 2080 | else { 2081 | if (!($env:OAI_APIKey)) 2082 | { 2083 | Write-Host "Please define the environment variable OAI_APIKey with your OpenAI API Key" 2084 | throw "Please define the environment variable OAI_APIKey" 2085 | } 2086 | $APIKey = $env:OAI_APIKey 2087 | } 2088 | 2089 | $InitialQuery = $query 2090 | 2091 | switch -Regex ($InitialQuery) { 2092 | "^file \|.*" { 2093 | 2094 | $filePath = (($InitialQuery.split("|"))[1]).TrimStart(" ") 2095 | $filePath = $filePath.TrimEnd(" ") 2096 | $filePath = $filePath.Replace('"','') 2097 | $FileQuery = (($InitialQuery.split("|"))[2]).TrimStart(" ") 2098 | 2099 | Write-Verbose ("ShellGPT @ "+(Get-Date)+" | Extracted FilePath from Query is: "+($filePath)) 2100 | Write-Verbose ("ShellGPT @ "+(Get-Date)+" | Extracted Query is: "+($FileQuery)) 2101 | Write-Verbose ("ShellGPT @ "+(Get-Date)+" | Starting Conversation...") 2102 | 2103 | if ($useAzure){ 2104 | 
[System.Collections.ArrayList]$conversationPrompt = New-OpenAICompletionConversation -Character $Character -query $FileQuery -instructor $instructor -APIKey $APIKey -temperature $temperature -max_tokens $max_tokens -model $model -stop $stop -filePath $filePath -ShowTokenUsage $ShowTokenUsage -ShowOutput $ShowOutput -assistantReply $assistantReply -UseAzure $AzureResourceName -DeploymentName $DeploymentName 2105 | } 2106 | else { 2107 | [System.Collections.ArrayList]$conversationPrompt = New-OpenAICompletionConversation -Character $Character -query $FileQuery -instructor $instructor -APIKey $APIKey -temperature $temperature -max_tokens $max_tokens -model $model -stop $stop -filePath $filePath -ShowTokenUsage $ShowTokenUsage -ShowOutput $ShowOutput -assistantReply $assistantReply 2108 | } 2109 | if ($InitialQuery.Contains("| out |")) 2110 | { 2111 | $filePathOut = (($InitialQuery.split("|"))[4]).TrimStart(" ") 2112 | $filePathOut = $filePathOut.TrimEnd(" ") 2113 | Write-Host ("ShellGPT @ "+(Get-Date)+" | Writing output to file: "+($filePathOut)) -ForegroundColor Yellow 2114 | 2115 | try { 2116 | ($conversationPrompt[($conversationPrompt.count)-1].content) | Out-File -Encoding utf8 -FilePath $filePathOut 2117 | Write-Host ("ShellGPT @ "+(Get-Date)+" | Successfully created file with output at: "+($filePathOut)) -ForegroundColor Green 2118 | 2119 | } 2120 | catch { 2121 | Write-Host ("ShellGPT @ "+(Get-Date)+" | Could not write output to file: "+($filePathOut)) -ForegroundColor Red 2122 | } 2123 | } 2124 | } 2125 | 2126 | "^\s*$" { 2127 | Write-Host ("ShellGPT @ "+(Get-Date)+" | You have not provided any input. 
Will not send this query to the CompletionAPI") -ForegroundColor Yellow 2128 | [System.Collections.ArrayList]$conversationPrompt = Set-OpenAICompletionCharacter $Character 2129 | } 2130 | default { 2131 | 2132 | if ($InitialQuery.contains("| out |")) 2133 | { 2134 | $filePathOut = (($InitialQuery.split("|"))[2]).TrimStart(" ") 2135 | $filePathOut = $filePathOut.TrimEnd(" ") 2136 | $InitialQuery = (($InitialQuery.split("|"))[0]).TrimStart(" ") 2137 | $InitialQuery = $InitialQuery.TrimEnd(" ") 2138 | 2139 | if ($useAzure){ 2140 | [System.Collections.ArrayList]$conversationPrompt = New-OpenAICompletionConversation -Character $Character -query $InitialQuery -instructor $instructor -APIKey $APIKey -temperature $temperature -max_tokens $max_tokens -model $model -stop $stop -ShowTokenUsage $ShowTokenUsage -ShowOutput $ShowOutput -assistantReply $assistantReply -UseAzure $AzureResourceName -DeploymentName $DeploymentName 2141 | } 2142 | else { 2143 | [System.Collections.ArrayList]$conversationPrompt = New-OpenAICompletionConversation -Character $Character -query $InitialQuery -instructor $instructor -APIKey $APIKey -temperature $temperature -max_tokens $max_tokens -model $model -stop $stop -ShowTokenUsage $ShowTokenUsage -ShowOutput $ShowOutput -assistantReply $assistantReply 2144 | } 2145 | 2146 | Write-Host ("ShellGPT @ "+(Get-Date)+" | Writing output to file: "+($filePathOut)) -ForegroundColor Yellow 2147 | 2148 | try { 2149 | ($conversationPrompt[($conversationPrompt.count)-1].content) | Out-File -Encoding utf8 -FilePath $filePathOut 2150 | Write-Host ("ShellGPT @ "+(Get-Date)+" | Successfully created file with output at: "+($filePathOut)) -ForegroundColor Green 2151 | 2152 | } 2153 | catch { 2154 | Write-Host ("ShellGPT @ "+(Get-Date)+" | Could not write output to file: "+($filePathOut)) -ForegroundColor Red 2155 | } 2156 | } 2157 | else 2158 | { 2159 | if ($useAzure){ 2160 | [System.Collections.ArrayList]$conversationPrompt = New-OpenAICompletionConversation 
-Character $Character -query $InitialQuery -instructor $instructor -APIKey $APIKey -temperature $temperature -max_tokens $max_tokens -model $model -stop $stop -ShowTokenUsage $ShowTokenUsage -ShowOutput $ShowOutput -assistantReply $assistantReply -UseAzure $AzureResourceName -DeploymentName $DeploymentName 2161 | } 2162 | else { 2163 | [System.Collections.ArrayList]$conversationPrompt = New-OpenAICompletionConversation -Character $Character -query $InitialQuery -instructor $instructor -APIKey $APIKey -temperature $temperature -max_tokens $max_tokens -model $model -stop $stop -ShowTokenUsage $ShowTokenUsage -ShowOutput $ShowOutput -assistantReply $assistantReply 2164 | } 2165 | } 2166 | } 2167 | } 2168 | 2169 | $APIKey = $null 2170 | return ($conversationPrompt[($conversationPrompt.count)-1].content) 2171 | 2172 | } 2173 | 2174 | function AzAI { 2175 | param( 2176 | [Parameter(Mandatory=$true, ValueFromPipeline=$true)][string]$query, 2177 | [string]$DeploymentName, 2178 | [string]$model = "gpt-4", 2179 | [string]$stop = "\n", 2180 | [double]$temperature = 0.4, 2181 | [int]$max_tokens = 900, 2182 | [bool]$ShowOutput = $false, 2183 | [bool]$ShowTokenUsage = $false, 2184 | [string]$instructor = "You are a helpful AI. You are being talked to through a PowerShell interface. You answer as concisely as possible.", 2185 | [string]$assistantReply = "Hello! I'm a ChatGPT-4 Model. 
How can I help you?", 2186 | [string]$Character = "Chat" 2187 | ) 2188 | 2189 | Get-OpenAiQuickResponse -query $query -useAzure:$true -DeploymentName $DeploymentName -model $model -stop $stop -temperature $temperature -max_tokens $max_tokens -ShowOutput $ShowOutput -ShowTokenUsage $ShowTokenUsage -instructor $instructor -assistantReply $assistantReply -Character $Character 2190 | } 2191 | 2192 | function OpenAI { 2193 | param( 2194 | [Parameter(Mandatory=$true, ValueFromPipeline=$true)][string]$query, 2195 | [string]$model = "gpt-4", 2196 | [string]$stop = "\n", 2197 | [double]$temperature = 0.4, 2198 | [int]$max_tokens = 900, 2199 | [bool]$ShowOutput = $false, 2200 | [bool]$ShowTokenUsage = $false, 2201 | [string]$instructor = "You are a helpful AI. You are being talked to through a PowerShell interface. You answer as concisely as possible.", 2202 | [string]$assistantReply = "Hello! I'm a ChatGPT-4 Model. How can I help you?", 2203 | [string]$Character = "Chat" 2204 | ) 2205 | Get-OpenAiQuickResponse -query $query -model $model -stop $stop -temperature $temperature -max_tokens $max_tokens -ShowOutput $ShowOutput -ShowTokenUsage $ShowTokenUsage -instructor $instructor -assistantReply $assistantReply -Character $Character 2206 | } 2207 | --------------------------------------------------------------------------------
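For readers of the module source above, here is a minimal, illustrative usage sketch of the interactive loop in `Start-ShellGPT`. The API key value and file paths are placeholders; the in-session command syntax shown in the comments mirrors the `switch -Regex` blocks and the `split("|")` index parsing in the functions above, and is a sketch rather than a definitive reference:

```powershell
# Illustrative only: start an interactive ShellGPT session (placeholder API key).
Import-Module ShellGPT
Start-ShellGPT -APIKey "sk-..." -model "gpt-3.5-turbo" -temperature 0.4 -max_tokens 900

# Example in-session commands, as parsed by the switch -Regex handlers:
#   file | C:\docs\report.pdf | Summarize this document
#       feeds a local file into the prompt together with the query
#   file | C:\docs\report.pdf | Summarize | out | C:\docs\summary.txt
#       additionally writes the model's reply to the given output file
#   export |
#       exports the current conversation prompt to a JSON file
#   newconvo |
#       starts a fresh conversation
#   quit |
#       exits, optionally exporting the prompt first
```

For one-off, non-interactive calls, `Get-OpenAiQuickResponse` (and its `OpenAI`/`AzAI` wrappers) accept the same `file | ... | out | ...` query syntax but read the API key from the `OAI_APIKey` (or `AZ_OAI_*`) environment variables instead of a parameter.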