├── .gitignore
├── .env.sample
├── requirements.txt
├── vercel.json
├── flaskGPT.py
└── README.md

/.gitignore:
--------------------------------------------------------------------------------
1 | .vercel
2 | .env
3 | /venv/
--------------------------------------------------------------------------------
/.env.sample:
--------------------------------------------------------------------------------
1 | OPENAI_API_KEY=sk-your-openai-api-key
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | langchain==0.0.188
2 | openai==0.27.7
3 | google-api-python-client==2.88.0
4 | python-dotenv==1.0.0
5 | flask==2.3.2
--------------------------------------------------------------------------------
/vercel.json:
--------------------------------------------------------------------------------
1 | {
2 |   "builds": [{"src": "flaskGPT.py", "use": "@vercel/python"}],
3 |   "routes": [{"src": "/api/(.*)", "dest": "flaskGPT.py"}]
4 | }
--------------------------------------------------------------------------------
/flaskGPT.py:
--------------------------------------------------------------------------------
 1 | # Load environment variables (OPENAI_API_KEY) before importing langchain.
 2 | from dotenv import load_dotenv
 3 | load_dotenv()
 4 | 
 5 | from langchain.chat_models import ChatOpenAI
 6 | from langchain.schema import (
 7 |     AIMessage,
 8 |     HumanMessage,
 9 |     SystemMessage
10 | )
11 | 
12 | chat = ChatOpenAI(
13 |     temperature=1.0,
14 |     max_tokens=100,
15 |     model_name="gpt-3.5-turbo",
16 |     verbose=True
17 | )
18 | 
19 | from flask import Flask, request, jsonify
20 | app = Flask(__name__)
21 | 
22 | @app.route('/api/prompt', methods=['POST'])
23 | def prompt():
24 |     data = request.get_json()
25 |     prompt = data.get('prompt', '')
26 | 
27 |     messages = [
28 |         SystemMessage(content="You are a funny chatbot only capable of answering one question at a time. Remember to have fun!"),
29 |         HumanMessage(content=prompt)
30 |     ]
31 | 
32 |     output = chat(messages)
33 |     return jsonify({'response': output.content})
34 | 
35 | if __name__ == '__main__':
36 |     app.run(port=3000)
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
 1 | # High Level
 2 | This project was created to be thin and enable you to build on top of it.
 3 | It uses Langchain, OpenAI, Flask, and Vercel to KISS and deploy a One-Shot AI server in seconds.
 4 | 
 5 | Doesn't have what you're looking for?
 6 | Go build it!
 7 | 
 8 | ## Deploy on Vercel
 9 | Use this flow to yolo deploy to Vercel.
10 | 
11 | ### Deploy directly from GitHub
12 | 1. Fork the repository.
13 | 2. Go to Vercel and choose this repository to deploy.
14 | 3. Once the deployment completes, open this project's Settings page on Vercel and configure the Environment Variables:
15 | ```
16 | OPENAI_API_KEY=your-api-key
17 | ```
18 | 4. Curl your endpoint and test your server!
19 | ```
20 | curl -X POST https://flaskgpt-your-account.vercel.app/api/prompt -H "Content-Type: application/json" -d "{\"prompt\": \"What is the funniest joke you've ever heard?\"}"
21 | ```
22 | 5. Have fun!
23 | 
24 | ## Deploy locally
25 | Use this flow to run the server locally and test it.
26 | 
27 | ### Terminal #1 - Set up your server
28 | 1. Clone the repository onto your local filesystem.
29 | 2. Configure your .env by setting
30 | ```
31 | OPENAI_API_KEY=your-api-key
32 | ```
33 | 3. Set up your venv and activate it
34 | ```
35 | python3 -m venv venv
36 | source venv/bin/activate
37 | ```
38 | 4. Install dependencies
39 | ```
40 | pip install -r requirements.txt
41 | ```
42 | 5. Start your server
43 | ```
44 | python3 flaskGPT.py
45 | ```
46 | 
47 | ### Terminal #2 - Test your server
48 | 6. Start a new terminal.
49 | 7. In your second terminal, curl your server (it listens on port 3000)
50 | ```
51 | curl -X POST http://localhost:3000/api/prompt -H "Content-Type: application/json" -d "{\"prompt\": \"What is the funniest joke you've ever heard?\"}"
52 | ```
53 | 8. Have fun!

--------------------------------------------------------------------------------
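Beyond curl, the `/api/prompt` endpoint can also be exercised from Python using only the standard library. The sketch below assumes the local server from the "Deploy locally" flow is listening on port 3000; building the request itself needs no running server, so the send step is left commented out.

```python
import json
import urllib.request

# Build the same POST request the README's curl command sends,
# using only the Python standard library.
url = "http://localhost:3000/api/prompt"  # assumes the local server from the README
payload = json.dumps(
    {"prompt": "What is the funniest joke you've ever heard?"}
).encode("utf-8")

req = urllib.request.Request(
    url,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment once the server is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```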