├── .gitattributes ├── .gitignore ├── LICENSE.txt ├── README.md ├── __init__ .py ├── audio_output.py ├── functions ├── __init__.py ├── chat.py ├── openaif.py └── samples.py ├── main.py └── requirements.txt /.gitattributes: -------------------------------------------------------------------------------- 1 | ############################################################################### 2 | # Set default behavior to automatically normalize line endings. 3 | ############################################################################### 4 | * text=auto 5 | 6 | ############################################################################### 7 | # Set default behavior for command prompt diff. 8 | # 9 | # This is needed for earlier builds of msysgit that do not have it on by 10 | # default for csharp files. 11 | # Note: This is only used by the command line 12 | ############################################################################### 13 | #*.cs diff=csharp 14 | 15 | ############################################################################### 16 | # Set the merge driver for project and solution files 17 | # 18 | # Merging from the command prompt will add diff markers to the files if there 19 | # are conflicts (Merging from VS is not affected by the settings below, in VS 20 | # the diff markers are never inserted). Diff markers may cause the following 21 | # file extensions to fail to load in VS. An alternative would be to treat 22 | # these files as binary and thus they will always conflict and require user 23 | # intervention with every merge.
To do so, just uncomment the entries below 24 | ############################################################################### 25 | #*.sln merge=binary 26 | #*.csproj merge=binary 27 | #*.vbproj merge=binary 28 | #*.vcxproj merge=binary 29 | #*.vcproj merge=binary 30 | #*.dbproj merge=binary 31 | #*.fsproj merge=binary 32 | #*.lsproj merge=binary 33 | #*.wixproj merge=binary 34 | #*.modelproj merge=binary 35 | #*.sqlproj merge=binary 36 | #*.wwaproj merge=binary 37 | 38 | ############################################################################### 39 | # behavior for image files 40 | # 41 | # image files are treated as binary by default. 42 | ############################################################################### 43 | #*.jpg binary 44 | #*.png binary 45 | #*.gif binary 46 | 47 | ############################################################################### 48 | # diff behavior for common document formats 49 | # 50 | # Convert binary document formats to text before diffing them. This feature 51 | # is only available from the command line. Turn it on by uncommenting the 52 | # entries below. 
53 | ############################################################################### 54 | #*.doc diff=astextplain 55 | #*.DOC diff=astextplain 56 | #*.docx diff=astextplain 57 | #*.DOCX diff=astextplain 58 | #*.dot diff=astextplain 59 | #*.DOT diff=astextplain 60 | #*.pdf diff=astextplain 61 | #*.PDF diff=astextplain 62 | #*.rtf diff=astextplain 63 | #*.RTF diff=astextplain 64 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | env 2 | .env 3 | *.lock 4 | __pycache__ 5 | __pypackages__ 6 | .venv 7 | *.pyc 8 | .pdm.toml 9 | build/ 10 | dist/ 11 | node_modules/ 12 | .vs/ 13 | -------------------------------------------------------------------------------- /LICENSE.txt: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) [year] [fullname] 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # openai-functions-wrapper 2 | 3 | ## ChatGPT can now execute your functions (including accessing your database) as needed, and even send emails out on your behalf! 4 | 5 | ### TL;DR: On June 13, 2023, OpenAI released GPT-4 (and GPT-3.5 Turbo) version "0613". This release allows the ChatCompletion API to make "function_calls." This means that when provided with a list of functions, ChatCompletion will decide which functions need to be executed to fulfill the user's request. 6 | 7 | 8 | This functionality is still experimental and results can be unpredictable and unreliable, but it is a huge step forward in allowing OpenAI's LLMs to interact with the real world, from news, weather, and stock data, to company proprietary vector databases. This project provides an example of chaining together 5 function calls at ChatCompletion's request without using LangChain. 9 | 10 | Hopefully, this project will help demystify the pipeline for executing multiple requests with OpenAI's ChatCompletion. 11 | 12 | Additionally, I've created function wrappers that allow you to create the format necessary for ChatCompletion to request function execution and parameters. You can add a function to your function list with as little as a single line of code.
Here is an example of the code required to enable ChatCompletion to interact with a Pinecone database: 13 | 14 | ``` 15 | f = function(name="getPineconeData", description="Company data pertaining only to health care plans, company policies, and employee roles.") 16 | f.properties.add(property("prompt",PropertyType.string, "The prompt to be used to query the vector database. This must be in the form of a concise sentence.", True)) 17 | f.properties.add(property("top",PropertyType.integer, "Records to be returned.", True, None, default=5)) 18 | ``` 19 | Obviously, you need to add the code to get the data, but **wiring up function definitions with ChatCompletion is now super easy!** 20 | 21 | For ChatCompletion to work effectively, you need to keep a running history of the session. This is done in the openaif.py file, which illustrates how to append your conversation to "messages" to make the chat pipeline and function information work correctly. This also allows you to inject a "system" message (which alters the chat context; I'm not sure it is always for the best). 22 | 23 | ## Setup 24 | 1) Download this project 25 | 2) Create a Python environment and install the requirements (in requirements.txt) 26 | 3) Create a .env file with the environment variables (see below) 27 | 4) Get any API keys you might need (or adjust the user prompt to say something simple like "what is my dog's name and what time is it?" which doesn't require any external API calls.)
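For reference, here is the JSON shape that those wrapper calls are ultimately meant to produce for the ChatCompletion `functions` parameter. The dictionary below is hand-built for illustration only (it mirrors the Pinecone example above; in the project, `function.to_json()` generates it for you):

```python
import json

# Hand-built illustration of the function-definition JSON that the
# ChatCompletion `functions` parameter expects. The wrapper classes in
# functions/chat.py are intended to produce this shape from the
# function/property calls shown above.
pinecone_fn = {
    "name": "getPineconeData",
    "description": "Company data pertaining only to health care plans, company policies, and employee roles.",
    "parameters": {
        "type": "object",
        "properties": {
            "prompt": {
                "type": "string",
                "description": "The prompt to be used to query the vector database. This must be in the form of a concise sentence.",
            },
            "top": {
                "type": "integer",
                "description": "Records to be returned.",
                "default": 5,
            },
        },
        # both properties were added with required=True above
        "required": ["prompt", "top"],
    },
}

print(json.dumps(pinecone_fn, indent=2))
```

Note that `required` lives inside `parameters` (it is part of the JSON Schema object), not at the top level of the function definition.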
28 | 29 | ## .env File 30 | Create a .env file in the directory with the following (depending on what you'll be using): 31 | ``` 32 | OPENAI_APIKEY={get a key from OpenAI} 33 | 34 | NEWSAPI_ORG_URL=https://newsapi.org/v2/everything 35 | NEWSAPI_KEY={get a free key from newsapi.org} 36 | 37 | WEATHER_URL=http://api.weatherapi.com/v1/current.json 38 | 3DAY_WEATHER_URL=http://api.weatherapi.com/v1/forecast.json 39 | WEATHERAPI_KEY={get a free key from weatherapi.com} 40 | 41 | PINECONE_API_KEY={get a free key from pinecone.io} 42 | PINECONE_ENV={the environment on your index - often something like asia-southeast1-gcp..} 43 | PINECONE_INDEX_NAME={whatever you named your index} 44 | SENTENCE_ENCODER={I used all-MiniLM-L6-v2, which was in the 45 | pinecone demos, but you can use whatever - 46 | needed for encoding sentences for pinecone queries} 47 | 48 | SENDGRID_FROM_EMAIL=xxx@yyy.com 49 | SENDGRID_API_KEY={get a free api key from sendgrid.com - it looks like you do need to add a credit card to activate, however.} 50 | 51 | PYTTSX3_VOICE_ID=english-us 52 | PYTTSX3_SPEED=135 53 | ``` 54 | ## Pinecone: 55 | If you're going to use pinecone, you'll need to create an index on pinecone and add encoded data. You can look at my other project "azure-pinecone-openai-demo" for scripts that will upload azure's "Northwinds" employee dataset, or upload your own data. If you use your own data, remember to update the description of what is stored in your data so ChatCompletion can comprehend when that function_call might be required. 56 | 57 | # Startup 58 | ``` 59 | python main.py 60 | ``` 61 | for Text to Speech output (using pyttsx3) add the --speak flag: 62 | ``` 63 | python main.py --speak 64 | ``` 65 | 66 | ## Usage / Results 67 | 68 | In the example, I show how you can have chatGPT request data from multiple sources. 
Based upon the following prompt: 69 | 70 | **"What is my dog's name, tell me what time is it in PST, what is the weather like in London, and what sightseeing activities would you recommend for London this time of year? Also please give me 5 articles on the US Economy from the last week. Also are hearing aids included in my Northwinds Standard Healthcare Plan? Also email bob@xxxxxxxx.com and tell him I am running late for lunch."** 71 | 72 | ChatGPT determines it needs to perform the following external calls: 73 | 74 | 1) getCurrentUTCDateTime - system function to obtain the UTC date. 75 | 2) getDogName - system function to return a random dog name. :-) 76 | 3) getCurrentWeather - to obtain the weather in London (this is a free API you can sign up for at weatherapi.com) 77 | 4) getNews - to obtain 5 news articles on the US Economy (this is a free API you can sign up for at newsapi.org) 78 | 5) getPineconeData - this illustrates how chatGPT can format a query to an external vector database and return siloed information (you can sign up for a free Pinecone database at Pinecone.io). 79 | 6) sendEmail - this is fun - chatGPT can be a personal assistant for you, sending out emails. 80 | 81 | The result is as follows (ChatCompletion queried 5 functions for information from external data sources, **and then sent an email!**): 82 | 83 | ``` 84 | "message": { 85 | "role": "assistant", 86 | "content": "Your dog's name is Snoopy. 87 | 88 | The current time in PST is 12:52 PM. 89 | 90 | The weather in London is partly cloudy with a temperature of 18.0\u00b0C (64.4\u00b0F). 91 | 92 | For sightseeing activities in London, I would recommend visiting popular attractions 93 | such as the Tower of London, Buckingham Palace, the British Museum, the London Eye, 94 | and taking a boat tour on the River Thames. Additionally, you can explore the vibrant 95 | neighborhoods of Covent Garden, Camden Town, and Notting Hill.
Don't forget to check 96 | out the West End for world-class theater performances. 97 | 98 | Here are 5 articles on the US Economy from the last week:\ 99 | 100 | 1. Title: \"On-Orbit Satellite Servicing, New Crew Capsules and Artificial Gravity: 101 | NASA's Latest Tech Initiative\" 102 | Description: \"A new Blue Origin crewed spacecraft is in the works as part of a NASA collaboration 103 | designed to advance the orbital economy, with the space agency lending its expertise to seven 104 | different commercial partners.\" 105 | 106 | 2. Title: \"Partial de-dollarization is possible, but China won't dethrone the greenback even if 107 | it becomes the world's largest economy\" 108 | Description: \"History suggests that dollar dominance could carry on well into the second half 109 | of the 21st century, even if China's economy outpaces the US.\" 110 | 111 | 3. Title: \"Federal Reserve officials announce pause in US interest-rate hikes\" 112 | Description: \"Even with the pause, Fed officials suggest further increases may come depending on 113 | how close the economy gets to the 2% inflation target. US Federal Reserve officials have announced 114 | a pause in interest-rate hikes, leaving rates at 5% to 5.25% after more than a year of...\" 115 | 116 | 4. Title: \"Forget de-dollarization - Argentina may adopt the greenback as its currency and abandon the peso\" 117 | Description: \"If Argentina adopts the greenback, it would become the biggest economy so far to 118 | dollarize, according to Bloomberg.\" 119 | 120 | 5. 
Title: \"The US economy is entering expansion, not recession, and investors fear they are missing 121 | out, Fundstrat's Tom Lee says\" 122 | Description: \"The US is entering a phase of economic expansion and not a recession, which means 123 | the stock-market rally will become more broad-based, Fundstrat's Tom Lee has said.\" 124 | 125 | Regarding your question about hearing aids, according to the information I have, hearing aids are 126 | included in the Northwinds Standard Healthcare Plan. The plan covers routine hearing aid maintenance, 127 | repair, counseling, evaluations, and hearing aid accessories. However, services provided by 128 | out-of-network providers may not be covered fully. It's always best to check with your healthcare 129 | plan for details and specifics. 130 | 131 | I have sent an email to bob@xxxxxxxx.com informing him that you are running late for lunch." 132 | }, 133 | "finish_reason": "stop" 134 | } 135 | 136 | ``` 137 | 138 | Here is the email chatGPT executed (this literally arrived in an inbox!): 139 | 140 | ``` 141 | From: 142 | Date: Sun, Jun 18, 2023 at 12:52 PM 143 | Subject: Running late for lunch 144 | To: 145 | 146 | 147 | Hi Bob, just wanted to let you know that I am running late for lunch. See you soon! 
148 | ``` 149 | 150 | 151 | 152 | One thing I found of interest was that chatGPT formed a sentence for the vector search: 153 | 154 | ``` 155 | 156 | "choices": [ 157 | { 158 | "index": 0, 159 | "message": { 160 | "role": "assistant", 161 | "content": null, 162 | "function_call": { 163 | "name": "getPineconeData", 164 | "arguments": "{\n \"prompt\": \"Northwinds Standard Healthcare Plan hearing aids\"\n}" 165 | } 166 | }, 167 | "finish_reason": "function_call" 168 | } 169 | ], 170 | "usage": { 171 | "prompt_tokens": 803, 172 | "completion_tokens": 24, 173 | "total_tokens": 827 174 | } 175 | ``` 176 | -------------------------------------------------------------------------------- /__init__ .py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/seanconnolly2000/openai-functions-wrapper/0c245f173ba516c2ad2ce8e9d27e7d4e31100fb3/__init__ .py -------------------------------------------------------------------------------- /audio_output.py: -------------------------------------------------------------------------------- 1 | """ 2 | This module provides functionality related to audio playback. 3 | """ 4 | import os 5 | from typing import Union 6 | from io import BytesIO 7 | import pygame 8 | import pyttsx3 9 | 10 | 11 | def initialize_audio(): 12 | """ 13 | Initialize the pyttsx3 engine. 14 | """ 15 | engine = pyttsx3.init() 16 | 17 | # Get the settings from the .env file 18 | voice_id = os.environ.get("PYTTSX3_VOICE_ID") 19 | speed = os.environ.get("PYTTSX3_SPEED") 20 | 21 | # Set the voice 22 | voices = engine.getProperty('voices') 23 | for v in voices: 24 | if voice_id in v.id: 25 | engine.setProperty('voice', v.id) 26 | break 27 | 28 | # Set the speed 29 | if speed is not None: 30 | engine.setProperty('rate', int(speed)) 31 | 32 | return engine 33 | 34 | 35 | def play_audio(audio: Union[bytes, BytesIO]): 36 | """ 37 | Play audio data using pygame.mixer. 
38 | 39 | Args: 40 | audio (Union[bytes, BytesIO]): The audio data to play. 41 | """ 42 | 43 | if not isinstance(audio, (bytes, BytesIO)): 44 | return 45 | if isinstance(audio, bytes): 46 | audio = BytesIO(audio) 47 | if not pygame.mixer.get_init(): pygame.mixer.init() # the mixer must be initialized before loading audio 48 | pygame.mixer.music.load(audio) 49 | pygame.mixer.music.play() 50 | while pygame.mixer.music.get_busy(): 51 | pygame.time.wait(10) 52 | 53 | 54 | def tts_output(engine, text): 55 | """ 56 | Convert text to speech using pyttsx3. 57 | 58 | Args: 59 | engine: The pyttsx3 engine. 60 | text (str): The text to be converted to speech. 61 | """ 62 | engine.say(text) 63 | engine.runAndWait() 64 | -------------------------------------------------------------------------------- /functions/__init__.py: -------------------------------------------------------------------------------- 1 | from .chat import * 2 | from .samples import * 3 | from .openaif import * -------------------------------------------------------------------------------- /functions/chat.py: -------------------------------------------------------------------------------- 1 | import _collections_abc 2 | from typing import Dict, List 3 | from enum import Enum 4 | 5 | 6 | # PropertyType.name for the name, .value for the value 7 | class PropertyType(str, Enum): 8 | string = 'string' 9 | integer = 'integer' 10 | 11 | class property(): 12 | def __init__(self, name: str, type: PropertyType, description: str, required: bool=False, enum: List=None, default: str=None): 13 | self.name = name 14 | self.type = type 15 | self.description = description 16 | self.required = required 17 | self.enum = enum 18 | self.default = default 19 | 20 | 21 | class properties(dict): 22 | def add(self, item: property): 23 | if isinstance(item, property): 24 | super().__setitem__(item.name, {'type': item.type.value, 'description': item.description, 'required': item.required}) 25 | else: 26 | raise ValueError('Must be type "property."') 27 | 28 | # add enumerators if passed 29 | if item.enum != None and len(item.enum) > 0:
super().__getitem__(item.name)['enum'] = item.enum 31 | 32 | # default of property 33 | if item.default != None: 34 | super().__getitem__(item.name)['default'] = item.default 35 | 36 | 37 | class function(): 38 | def __init__(self, name:str, description:str): 39 | self.name = name 40 | self.description = description 41 | self.properties = properties() #if properties is None else properties 42 | self.properties_wo_required = {} 43 | def to_json(self): 44 | # create "required" list for ChatCompletion 45 | self.required = [k for k, v in self.properties.items() if v['required']] 46 | 47 | # remove the 'required' attribute from properties as it is a list of its own 48 | for k, v in self.properties.items(): 49 | self.properties_wo_required[k] = {m: v[m] for m in v.keys() - {'required'}} 50 | # note: the ChatCompletion schema expects 'required' inside 'parameters' 51 | funct = {'name' : self.name, 52 | 'description' : self.description, 53 | 'parameters' : { 'type': 'object', 54 | 'properties' : self.properties_wo_required, 55 | 'required' : self.required } 56 | } 57 | return funct 58 | 59 | 60 | # Security is always a concern when relying on a third party to tell your code what to execute. 61 | # By moving to a dictionary object from a simple list, we can subsequently "look up" the function 62 | # that chatGPT returns to confirm we should be considering executing it. 63 | class functions(_collections_abc.MutableMapping): 64 | def __init__(self): 65 | super().__init__() 66 | self._dict = dict() 67 | 68 | def __getitem__(self, key): 69 | return self._dict.__getitem__(key) 70 | 71 | def __setitem__(self, key, value): 72 | self._dict.__setitem__(key, value) 73 | self._changed = True 74 | 75 | def __delitem__(self, key): 76 | self._dict.__delitem__(key) 77 | self._changed = True 78 | 79 | def __iter__(self): 80 | return self._dict.__iter__() 81 | 82 | def __len__(self): 83 | return self._dict.__len__() 84 | 85 | # ChatCompletion expects a list of function definitions.
86 | def to_json(self): 87 | l = [] 88 | for item in self._dict: 89 | l.append(self._dict[item].to_json()) 90 | return l 91 | 92 | # You could use a list base if you would like: 93 | # class functions(_collections_abc.MutableSequence): 94 | # def __init__(self): 95 | # self.data = [] 96 | 97 | # def __len__(self): 98 | # return len(self.data) 99 | 100 | # def __repr__(self): 101 | # return repr(self.data) 102 | 103 | # def __delitem__(self, index): 104 | # self.data.__delitem__(index) 105 | 106 | # def __setitem__(self, index, value): 107 | # self.data.__setitem__(index, value) 108 | 109 | # def insert(self, index, value): 110 | # self.data.insert(index, value) 111 | 112 | # def __getitem__(self, index): 113 | # return self.data.__getitem__(index) 114 | 115 | # def append(self, value): 116 | # self.data.append(value) 117 | 118 | # def to_json(self): 119 | # l = [] 120 | # for i in self.data: 121 | # l.append(i.to_json()) 122 | # return l 123 | -------------------------------------------------------------------------------- /functions/openaif.py: -------------------------------------------------------------------------------- 1 | import openai 2 | import json 3 | from typing import List 4 | from functions.chat import functions 5 | 6 | #openai with functions - openaif 7 | # The secret to using functions is to establish a dialog that chatcompletion can refer back to. 8 | # This is done by storing all messages from the user, from the function, and from the assistant (chat) in messages and passing them! 9 | 10 | class openaif(): 11 | # To initialize, you'll need to pass your API key and list of functions. 
You can change your model if you are using gpt4 12 | def __init__(self, api_key: str, functions: functions=[]): 13 | self.api_key = api_key 14 | self.openai = openai 15 | self.model = 'gpt-3.5-turbo-0613' # gpt-4-0613 16 | self.openai.api_key = self.api_key 17 | self.openai.Engine.list()['data'][0] # will throw an error if invalid key 18 | self.temperature = 0 #note: people have noticed chatGPT not including required parameters. Setting temperature to 0 seems to fix that 19 | self.messages = [] 20 | self.maximum_function_content_char_size = 2000 #to prevent token overflow 21 | self.functions = functions 22 | self.infinite_loop_counter = 0 #don't want to burn through too many openai credits :-) 23 | 24 | # This is a preamble adhered to throughout the session since it uses the "system" role: "Act like you are a..." 25 | # Feel free to experiment with the system role. In my experiments, this seems to cause problematic output including sending content through in the same responses that also request a function_call. 
26 | def set_chat_context(self, prompt:str): 27 | self.messages.insert(0, {"role": "system", "content": prompt}) 28 | 29 | def clear_chat_session(self): 30 | self.messages = [] 31 | 32 | def user_request(self, prompt:str)-> str: 33 | self.messages.append({"role": "user", "content": prompt}) 34 | res = self.call_openai() 35 | if res['choices'][0]['message']: 36 | self.messages.append(res['choices'][0]['message']) 37 | while res['choices'][0]['finish_reason'] == 'function_call': 38 | self.infinite_loop_counter += 1 39 | if self.infinite_loop_counter > 100: exit() # you can do whatever you want, but if chatgpt is instructing continuous function calls, something needs to be looked at 40 | function_name = res['choices'][0]['message']['function_call']['name'] 41 | function_args = json.loads(res['choices'][0]['message']['function_call']['arguments']) 42 | # As mentioned, security is always a concern when allowing a 3rd party system to execute functions on your servers!!! 43 | # This call assures that you've at least passed these functions to chatGPT. 44 | # You should also consider scrubbing the parameters to prevent SQL or other injections! 
45 | if function_name in self.functions: 46 | modules = __import__('functions.samples') # returns the top-level 'functions' package; the sample functions are reachable because functions/__init__.py re-exports them 47 | funct = getattr(modules, function_name) 48 | function_response = str(funct(**function_args)) # responses must be strings in order to append to messages 49 | if len(function_response) > self.maximum_function_content_char_size: function_response = function_response[:self.maximum_function_content_char_size] 50 | res = self.function_call(function_name, function_response) 51 | return res['choices'][0]['message']['content'] 52 | 53 | def function_call(self, function:str, function_response:str): 54 | self.messages.append({"role": "function", "name": function, "content": function_response}) 55 | res = self.call_openai() 56 | if res['choices'][0]['message']: 57 | self.messages.append(res['choices'][0]['message']) 58 | return res 59 | 60 | def call_openai(self): 61 | #function_object = json.loads(str(functions)) 62 | if self.functions == []: 63 | res = self.openai.ChatCompletion.create( 64 | model=self.model, 65 | temperature=self.temperature, 66 | messages=self.messages, 67 | ) 68 | else: 69 | res = self.openai.ChatCompletion.create( 70 | model=self.model, 71 | temperature=self.temperature, 72 | messages=self.messages, 73 | functions=self.functions.to_json() 74 | ) 75 | # comment out the following if you don't want to see the communications: 76 | if self.functions != []: 77 | print(self.functions.to_json()) 78 | print(self.messages) 79 | print(res) 80 | return res 81 | 82 | -------------------------------------------------------------------------------- /functions/samples.py: -------------------------------------------------------------------------------- 1 | import os 2 | from typing import List 3 | import requests 4 | import datetime 5 | import random 6 | from .openaif import * 7 | 8 | 9 | def getCurrentUTCDateTime() -> str: 10 | return str(datetime.datetime.utcnow()) 11 | 12 | def getDogName() -> str: 13 | return random.choice(['Fido', 'Spot', 'Rover', 'Woof', 'Snoopy']) 14 | 15 | def
getNews(**kwargs) -> List: 16 | # free sign up at newsapi.org 17 | query_params = kwargs 18 | query_params['apiKey'] = os.environ.get("NEWSAPI_KEY") 19 | url = os.environ.get("NEWSAPI_ORG_URL") 20 | 21 | # fetching data in json format 22 | try: 23 | res = requests.get(url, params=query_params) 24 | data = res.json() 25 | news = [] 26 | if data.get("articles") is not None: 27 | for article in data["articles"]: 28 | news.append({'title': article['title'], 'description': article['description']}) 29 | return news 30 | except Exception: 31 | return None 32 | 33 | 34 | def getCurrentWeather(**kwargs) -> dict: 35 | # free signup at weatherapi.com 36 | query_params = kwargs 37 | query_params['key'] = os.environ.get("WEATHERAPI_KEY") 38 | query_params['aqi'] = 'no' 39 | query_params['alerts'] = 'no' 40 | 41 | url = os.environ.get("WEATHER_URL") 42 | try: 43 | res = requests.get(url, params=query_params) 44 | data = res.json() 45 | weather = {} 46 | if data.get("current") is not None: 47 | weather['current_condition'] = data['current']['condition']['text'] 48 | weather['current_temp_f'] = data['current']['temp_f'] 49 | weather['current_temp_c'] = data['current']['temp_c'] 50 | return weather 51 | except Exception: 52 | return None 53 | 54 | 55 | def getThreeDayForecast(**kwargs) -> str: 56 | # free signup at weatherapi.com 57 | query_params = kwargs 58 | query_params['key'] = os.environ.get("WEATHERAPI_KEY") 59 | query_params['days'] = '3' 60 | query_params['aqi'] = 'yes' 61 | query_params['alerts'] = 'yes' 62 | 63 | url = os.environ.get("3DAY_WEATHER_URL") 64 | try: 65 | res = requests.get(url, params=query_params) 66 | data = res.json() 67 | forecast = "The weather forecast for the next 3 days in {} is:\n".format(kwargs['q']) 68 | if data.get("forecast") is not None: 69 | for day in data['forecast']['forecastday']: 70 | forecast += "{}. {} with a maximum temperature of {} and a minimum temperature of {}.
There is a {} chance of rain.\n".format( 71 | day['date'], 72 | day['day']['condition']['text'], 73 | day['day']['maxtemp_f'], 74 | day['day']['mintemp_f'], 75 | day['day']['daily_chance_of_rain']) 76 | return forecast 77 | except Exception: 78 | return None 79 | 80 | # This is a fun one - allow chatGPT to send text to itself and make requests 81 | # Not sure this would ever happen, but kind of fun to think about... 82 | def askChatGPT(**kwargs)->str: 83 | question = kwargs['question'] if 'question' in kwargs else '' 84 | text = kwargs['text'] if 'text' in kwargs else '' 85 | temperature = kwargs['temperature'] if 'temperature' in kwargs else 0 86 | prompt = "QUESTION:" + question + "\nTEXT:" + text 87 | openai_key = os.environ.get("OPENAI_APIKEY") 88 | oai = openaif(openai_key) 89 | oai.temperature = temperature 90 | return oai.user_request(prompt) 91 | 92 | # If you plan to use SendGrid for sending emails, uncomment this section: 93 | # import os 94 | # from sendgrid import SendGridAPIClient 95 | # from sendgrid.helpers.mail import Mail 96 | # def sendEmail(**kwargs)->str: 97 | # to_email = kwargs['to_email'] if 'to_email' in kwargs else None 98 | # subject = kwargs['subject'] if 'subject' in kwargs else None 99 | # body = kwargs['body'] if 'body' in kwargs else None 100 | 101 | # leave if chat doesn't provide email, subject, or body 102 | # if to_email == None or subject == None or body == None: return 103 | 104 | 105 | # message = Mail(from_email=os.environ.get('SENDGRID_FROM_EMAIL'), to_emails=to_email, subject=subject, html_content=body) 106 | # try: 107 | # sg = SendGridAPIClient(os.environ.get('SENDGRID_API_KEY')) 108 | # response = sg.send(message) 109 | # if response.status_code == 202: 110 | # return "MESSAGE WAS SUCCESSFULLY SENT." 111 | # else: 112 | # return "MESSAGE MAY NOT HAVE BEEN SENT." 113 | #print(response.status_code) 114 | #print(response.body) 115 | #print(response.headers) 116 | # except Exception as e: 117 | #print(e.message) 118 | # return "MESSAGE FAILED TO SEND." 119 | 120 | 121 | # If you plan to use Pinecone, uncomment everything below: 122 | # import pinecone 123 | # from sentence_transformers import SentenceTransformer 124 | # import torch 125 | 126 | # Initialize Pinecone client and the Index, which will be passed to the chat approaches. 127 | # def getPineconeData(**kwargs)->str: 128 | # prompt = kwargs['prompt'] 129 | # top = kwargs['top'] if 'top' in kwargs else 5 130 | 131 | # index=os.environ.get('PINECONE_INDEX_NAME') 132 | # api_key=os.environ.get('PINECONE_API_KEY') 133 | # env=os.environ.get('PINECONE_ENV') 134 | # sentence_encoder = os.environ.get('SENTENCE_ENCODER') # example: all-MiniLM-L6-v2 135 | # pinecone.init( 136 | # api_key=api_key, 137 | # environment=env 138 | # ) 139 | # pinecone_index = pinecone.Index(index) 140 | # device = 'cuda' if torch.cuda.is_available() else 'cpu' 141 | # encoder = SentenceTransformer( sentence_encoder, device=device) 142 | # query = encoder.encode(prompt).tolist() 143 | # matches = pinecone_index.query(query, top_k=top, include_metadata=True) 144 | # content = '' 145 | # for result in matches['matches']: 146 | # content += result['metadata']['content'] 147 | # return content 148 | 149 | 150 | -------------------------------------------------------------------------------- /main.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import os 3 | import json 4 | 5 | from dotenv import load_dotenv 6 | from functions.openaif import openaif 7 | from functions.chat import functions, function, property, PropertyType 8 | from functions.samples import ( 9 | getCurrentWeather, 10 | getThreeDayForecast, 11 | getNews, 12 | getCurrentUTCDateTime, 13 | ) 14 | from audio_output import ( 15 | initialize_audio, 16 | tts_output, 17 | ) # Import
the necessary functions from audio_output.py 18 | 19 | 20 | def main(): 21 | load_dotenv() 22 | # You'll need to create a ".env" file with your credentials in the format: 23 | # OPENAI_APIKEY=sk-xxxxxxx 24 | # OTHER_ITEMS=xxxyyy 25 | parser = argparse.ArgumentParser() 26 | parser.add_argument( 27 | "--speak", action="store_true", help="Enable text-to-speech output" 28 | ) 29 | args = parser.parse_args() 30 | 31 | # Initialize audio if --speak is specified 32 | engine = None 33 | if args.speak: 34 | engine = initialize_audio() 35 | 36 | functions_available_to_chatGPT = functions() 37 | 38 | # Hmmm - Not sure about this one - hopefully others might have thoughts. Let ChatGPT ask itself questions. I'm not sure it can do that unless prompted by a user. 39 | # One thing I've noticed is that it may be beneficial to pass the main oai's "messages" into this so it has context... 40 | f = function( 41 | name="askChatGPT", 42 | description="Use a Large Language Model (LLM) to perform analysis, summarization, or classification of text using ChatGPT.", 43 | ) 44 | f.properties.add( 45 | property( 46 | "temperature", 47 | PropertyType.integer, 48 | "The temperature associated with the request: 0 for factual, up to 2 for very creative.", 49 | True, 50 | ) 51 | ) 52 | f.properties.add( 53 | property( 54 | "question", 55 | PropertyType.string, 56 | "What are you requesting be done with the text?", 57 | True, 58 | ) 59 | ) 60 | f.properties.add( 61 | property("text", PropertyType.string, "The text to be analyzed", True) 62 | ) 63 | functions_available_to_chatGPT[f.name] = f 64 | 65 | # If you've used SQLCLient or OracleClient, this is similar. You create your function, and add parameters. 66 | # Then you add your function to the "functions" dictionary object (a dictionary is used to allow subsequent function lookup) 67 | # Note: "default" on properties is not specified, however, it seems to help chatcompletion. 
68 |     f = function(name="getNews", description="News API function")
69 |     f.properties.add(
70 |         property("q", PropertyType.string, "Query to return news stories", True)
71 |     )
72 |     f.properties.add(
73 |         property(
74 |             "language",
75 |             PropertyType.string,
76 |             "Language of News",
77 |             True,
78 |             ["en", "es"],
79 |             default="en",
80 |         )
81 |     )
82 |     f.properties.add(
83 |         property("pageSize", PropertyType.integer, "Page Size", True, None, default=5)
84 |     )
85 |     f.properties.add(
86 |         property("from", PropertyType.string, "Optional Date of oldest article.", False)
87 |     )
88 |     f.properties.add(
89 |         property("to", PropertyType.string, "Optional Date of newest article.", False)
90 |     )
91 |     functions_available_to_chatGPT[f.name] = f
92 | 
93 |     # Weather API (current weather - can be improved to include forecast)
94 |     f = function(
95 |         name="getCurrentWeather", description="Weather API function for current weather"
96 |     )
97 |     f.properties.add(
98 |         property("q", PropertyType.string, "Name of city to get weather", True)
99 |     )
100 |     functions_available_to_chatGPT[f.name] = f
101 | 
102 |     # Weather API (3-day forecast)
103 |     f = function(
104 |         name="getThreeDayForecast",
105 |         description="Retrieves the weather forecast for the next 3 days for a specific city.",
106 |     )
107 |     f.properties.add(
108 |         property("q", PropertyType.string, "Name of city to get weather", True)
109 |     )
110 |     functions_available_to_chatGPT[f.name] = f
111 | 
112 |     # Pinecone vector database API (contains demo "company HR data" from Northwinds)
113 |     # Uncomment if you are using Pinecone.
114 |     # f = function(name="getPineconeData", description="Company data pertaining only to health care plans, company policies, and employee roles.")
115 |     # f.properties.add(property("prompt", PropertyType.string, "The prompt to be used to query the vector database. This must be in the form of a concise sentence.", True))
116 |     # f.properties.add(property("top", PropertyType.integer, "Records to be returned.", True, None, default=5))
117 |     # functions_available_to_chatGPT[f.name] = f
118 | 
119 |     # Send Email
120 |     # Uncomment if you are using SendGrid.
121 |     # f = function(name="sendEmail", description="Send an email. Must include to_email, subject, and body properties.")
122 |     # f.properties.add(property("to_email", PropertyType.string, "The email recipient address in email format.", True))
123 |     # f.properties.add(property("subject", PropertyType.string, "The subject of the email.", True))
124 |     # f.properties.add(property("body", PropertyType.string, "The body of the email.", True))
125 |     # functions_available_to_chatGPT[f.name] = f
126 | 
127 |     # Returns the current datetime in UTC
128 |     f = function(
129 |         name="getCurrentUTCDateTime", description="Obtain the current UTC datetime."
130 |     )
131 |     functions_available_to_chatGPT[f.name] = f
132 | 
133 |     # Returns a random dog's name
134 |     f = function(name="getDogName", description="Obtain the dog's name")
135 |     functions_available_to_chatGPT[f.name] = f
136 | 
137 |     # Instantiate the LLM wrapper with the functions in list format (.to_json())
138 |     openai_key = os.environ.get("OPENAI_APIKEY")
139 |     oai = openaif(openai_key, functions_available_to_chatGPT)
140 | 
141 |     # Feel free to experiment with the system role below. In my experiments, it seems to cause problematic output, including ignoring the user's
142 |     # instruction to convert to PST, and sending content through in the same responses that also request a function_call.
143 |     # oai.set_chat_context("You are an extremely happy assistant. Only include data from function calls in your responses. If you don't know something, say 'I don't know.'")
144 | 
145 |     # FUN CHALLENGE: make 5 calls: getDogName, getCurrentUTCDateTime (and switch it to Eastern - not always accurate), getCurrentWeather, getThreeDayForecast, and get some news stories.
146 |     # "What is my dog's name, tell me what time it is in EST, what is the weather like in Atlanta, what sightseeing activities would you recommend for Metro Atlanta this time of year, and also please give me 5 articles from Atlanta from the last couple days."
147 |     # Since I am asking it to get a little creative with sightseeing tips for Atlanta, I'm setting the temperature below to 1.
148 |     oai.temperature = 1
149 | 
150 |     # Contributor John was cool enough to add the question input loop.
151 |     while True:
152 |         prompt = input("Enter your question ('cls' to reset chat, 'quit' to quit): ")
153 |         if prompt.lower() == "quit":
154 |             break
155 |         if prompt.lower() == "cls":
156 |             oai.clear_chat_session()
157 |         else:
158 |             res = oai.user_request(prompt)
159 |             # Replace the degree symbols so TTS reads them naturally
160 |             res = res.replace("\u00b0F", " degrees Fahrenheit").replace(
161 |                 "\u00b0C", " degrees Celsius"
162 |             )
163 |             print(res)
164 | 
165 |             # Speak the response if --speak is specified
166 |             if args.speak:
167 |                 tts_output(engine, res)
168 | 
169 | 
170 | if __name__ == "__main__":
171 |     main()
172 | 
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | openai>=0.27.8
2 | diffusers>=0.17.1
3 | transformers>=4.30.2
4 | accelerate>=0.20.3
5 | scipy>=1.10.1
6 | safetensors>=0.3.1
7 | xformers>=0.0.20
8 | python-dotenv>=1.0.0
9 | pinecone-client>=2.2.1
10 | sentence-transformers>=2.2.2
11 | torch>=2.0.1
12 | sendgrid>=6.10.0
13 | pyttsx3>=2.90
14 | pygame>=2.0.2
--------------------------------------------------------------------------------
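Since functions/chat.py is not included above, the exact JSON produced by the `function`/`property` classes and `.to_json()` is not visible here. As a rough, hypothetical sketch only, the `getNews` definition built in main.py presumably serializes to something like the JSON-schema shape the OpenAI function-calling API expects (the helper name and exact layout below are illustrative assumptions, not the repo's actual output):

```python
# Hypothetical sketch: the JSON-schema shape the OpenAI function-calling
# API expects, mirroring the getNews definition built in main.py.
# The repo's function/property classes (not shown in this section)
# presumably serialize to something similar.
def build_getnews_schema() -> dict:
    return {
        "name": "getNews",
        "description": "News API function",
        "parameters": {
            "type": "object",
            "properties": {
                "q": {"type": "string", "description": "Query to return news stories"},
                "language": {
                    "type": "string",
                    "description": "Language of News",
                    "enum": ["en", "es"],
                    "default": "en",
                },
                "pageSize": {"type": "integer", "description": "Page Size", "default": 5},
                "from": {"type": "string", "description": "Optional Date of oldest article."},
                "to": {"type": "string", "description": "Optional Date of newest article."},
            },
            # Mirrors the True/False "required" flags passed to property(...)
            # in main.py: "from" and "to" were added with False, so they are
            # omitted from the required list.
            "required": ["q", "language", "pageSize"],
        },
    }
```

The `required` list follows from the boolean flags passed to `property(...)` in main.py; anything registered with `False` stays optional in the schema.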