├── LICENSE
├── README.md
└── universal_api.py

/LICENSE:
--------------------------------------------------------------------------------
 1 | MIT License
 2 | 
 3 | Copyright (c) 2024 Piaoyang Cui
 4 | 
 5 | Permission is hereby granted, free of charge, to any person obtaining a copy
 6 | of this software and associated documentation files (the "Software"), to deal
 7 | in the Software without restriction, including without limitation the rights
 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 | 
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 | 
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
 1 | # Universal API
 2 | 
 3 | *Don't know how to implement a function? Let Universal API help you on the fly!*
 4 | 
 5 | This is a non-serious library that can implement any function on the fly using LLMs. To use it, simply call a function with a descriptive name and parameters, and it will be defined, implemented, and called at runtime. Generated functions are cached, so you don't pay for things that are already implemented.
 6 | 
 7 | ## Usage
 8 | 
 9 | ```python
10 | from universal_api import UniversalAPI
11 | 
12 | api = UniversalAPI()
13 | # Start calling arbitrary functions
14 | print(api.sort([3, 2, 1]))  # returns [1, 2, 3]
15 | print(api.sort([4, 3, 2, 1]))  # returns [1, 2, 3, 4] using the cached implementation
16 | print(api.sort([1, 2, 3], reverse=True))  # returns [3, 2, 1]
17 | print(api.add(1, 2))  # returns 3
18 | print(api.reverse('hello'))  # returns 'olleh'
19 | api.fizzbuzz(15)  # prints the FizzBuzz sequence up to 15
20 | api.print_picachu()  # prints ASCII art of Pikachu
21 | ```
22 | 
23 | ## Warning
24 | This library will execute unverified code on your local machine. It is **NOT** safe to run in production (or really, in any serious environment).
25 | 
26 | ## Notes
27 | By default this library uses the OpenAI GPT API. You can modify environment variables like `OPENAI_BASE_URL` to point at other LLM endpoints (e.g. Anyscale Endpoints), which lets you run other models such as the LLaMA series. It's also possible to use local LLMs.
28 | 
--------------------------------------------------------------------------------
/universal_api.py:
--------------------------------------------------------------------------------
 1 | import logging
 2 | import os
 3 | from openai import OpenAI
 4 | 
 5 | class UniversalAPI:
 6 |     def __init__(self):
 7 |         self.cached_methods = {}
 8 |         self.openai_client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
 9 | 
10 |     def __getattr__(self, name):
11 |         def method(*args, **kwargs):
12 |             method_signature = (name, len(args), tuple(sorted(kwargs.keys())))
13 |             if method_signature in self.cached_methods:
14 |                 cached_method = self.cached_methods[method_signature]
15 |                 logging.info(f"Using cached method: {name} with arguments {args} and kwargs {kwargs}")
16 |                 return cached_method(*args, **kwargs)
17 | 
18 |             prompt = f"The user is asking for a method called {name} with arguments {args} and kwargs {kwargs}. Return the Python code that implements this method. Only return the Python code, without the ```python prefix and ``` suffix. Do not include example usage."
19 |             messages = [
20 |                 {"role": "user", "content": prompt},
21 |             ]
22 |             logging.info(f"Generating dynamic method called: {name}, with arguments {args} and kwargs {kwargs}")
23 |             response = self.openai_client.chat.completions.create(
24 |                 model='gpt-4-turbo-preview',
25 |                 messages=messages,
26 |                 max_tokens=256,  # Adjust the number of tokens as needed
27 |                 temperature=0,  # Adjust the creativity level
28 |             )
29 |             llm_response = response.choices[0].message.content
30 |             logging.info(f"LLM response: {llm_response}")
31 |             namespace = {}
32 |             exec(llm_response, namespace)  # define the generated function in an isolated namespace
33 |             self.cached_methods[method_signature] = namespace[name]
34 |             return namespace[name](*args, **kwargs)
35 | 
36 |         return method
37 | 
38 | # Illustration
39 | api = UniversalAPI()
40 | print(api.sort([3, 2, 1]))  # returns [1, 2, 3]
41 | print(api.sort([4, 3, 2, 1]))  # returns [1, 2, 3, 4] using the cached implementation
42 | print(api.sort([1, 2, 3], reverse=True))  # returns [3, 2, 1]
43 | print(api.add(1, 2))  # returns 3
44 | print(api.reverse('hello'))  # returns 'olleh'
45 | api.fizzbuzz(15)  # prints the FizzBuzz sequence up to 15
46 | api.print_picachu()  # prints ASCII art of Pikachu
47 | 
--------------------------------------------------------------------------------
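The dispatch-and-cache mechanism in `universal_api.py` (a `__getattr__` hook that keys the cache on method name, positional-argument count, and sorted keyword names) can be exercised without an API key by stubbing out the LLM call. The `MockLLMAPI` class below is a hypothetical sketch for illustration, not part of the library: it substitutes canned source strings for the model's response but otherwise follows the same caching scheme.

```python
# Hypothetical sketch: the same __getattr__ + cache mechanism as UniversalAPI,
# with the LLM round-trip replaced by canned source strings so it runs offline.
class MockLLMAPI:
    # Canned "LLM responses": source code for a few known method names.
    CANNED_SOURCES = {
        "add": "def add(a, b):\n    return a + b\n",
        "reverse": "def reverse(s):\n    return s[::-1]\n",
    }

    def __init__(self):
        self.cached_methods = {}
        self.generation_calls = 0  # counts simulated LLM round-trips

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails, i.e. for
        # "methods" that don't actually exist on the instance.
        def method(*args, **kwargs):
            signature = (name, len(args), tuple(sorted(kwargs.keys())))
            if signature not in self.cached_methods:
                self.generation_calls += 1
                source = self.CANNED_SOURCES[name]
                namespace = {}
                exec(source, namespace)  # define the function in a fresh namespace
                self.cached_methods[signature] = namespace[name]
            return self.cached_methods[signature](*args, **kwargs)
        return method


api = MockLLMAPI()
print(api.add(1, 2))         # 3 (generates an implementation)
print(api.add(5, 7))         # 12 (cache hit: same name and arity)
print(api.reverse("hello"))  # olleh (generates a second implementation)
print(api.generation_calls)  # 2
```

Note that a second call with the same name and arity never triggers generation again, which is the property the README relies on when it says cached functions cost nothing.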