├── requirements.txt
├── utils.py
├── Dockerfile
├── README.md
├── .gitignore
├── .dockerignore
└── main.py
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
fastapi
pydantic
uvicorn[standard]
python-multipart
numpy
Pillow
tensorflow
--------------------------------------------------------------------------------
/utils.py:
--------------------------------------------------------------------------------
import numpy as np

from PIL import Image
from io import BytesIO

def load_image_into_numpy_array(data):
    """Decode raw image bytes (e.g. an uploaded file) into a NumPy array."""
    return np.array(Image.open(BytesIO(data)))
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
FROM python:3.10.3-slim-buster

WORKDIR /workspace

COPY requirements.txt requirements.txt

RUN pip install -r requirements.txt

COPY . .

ENV PYTHONUNBUFFERED=1

ENV HOST=0.0.0.0

EXPOSE 8080

CMD ["python", "main.py"]
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# ML API Template 🚀

Hello everyone! In this repository, I (Kaenova, a mentor in the Bangkit 2023 program) will give you a head start on creating an ML API. Please read every line and comment carefully.

I have included code examples for both text-based and image-based input APIs. To run this server, install all the required libraries listed in `requirements.txt` by executing `pip install -r requirements.txt`, then use `python main.py` to start the server.

## Machine Learning Setup

Please prepare your model in either the `.h5` or SavedModel format, and place it in the same folder as the `main.py` file. You will load your model in the code provided below. There are two options available: one for image-based input and another for text-based input. You need to complete either the `predict_text` or the `predict_image` function in `main.py`, depending on your model's input type.
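As a minimal loading sketch (the file names `my_model.h5` and `my_model_folder` are placeholders taken from the comments in `main.py`; substitute your own):

```python
import tensorflow as tf

# Pick the loader that matches the format you exported:
# a) a Keras model saved as a single .h5 file
model = tf.keras.models.load_model("./my_model.h5")
# b) a TensorFlow SavedModel directory
# model = tf.saved_model.load("./my_model_folder")
```

Note that a Keras `.h5` model exposes `model.predict(...)` directly, while a plain `tf.saved_model.load` returns an object whose signatures you call instead, so check which one you exported.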
## Cloud Computing

You can check the endpoints used for the machine learning models in this API. The available endpoints are `/predict_text` for text-based input and `/predict_image` for image-based input.

For the `/predict_text` endpoint, send a JSON payload with the following structure:
```json
{
    "text": "your text"
}
```

For the `/predict_image` endpoint, send a multipart form with a field named `uploaded_file` containing the image file.

You can view the API documentation by accessing the `/docs` endpoint after running the server. Additionally, a Dockerfile is provided so you can easily modify the project and build a container image. By default, the server runs on port 8080, but you can customize the port by injecting the `PORT` environment variable.
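As a quick client-side sketch of both calls (the `requests` library is not in `requirements.txt`, so install it separately; `cat.jpg` is just a hypothetical example file):

```python
import requests

BASE_URL = "http://localhost:8080"  # adjust if you injected a custom PORT

# /predict_text expects a JSON body with a "text" field
resp = requests.post(f"{BASE_URL}/predict_text", json={"text": "your text"})
print(resp.status_code, resp.text)

# /predict_image expects a multipart form with an "uploaded_file" field
with open("cat.jpg", "rb") as f:  # hypothetical example image
    resp = requests.post(
        f"{BASE_URL}/predict_image",
        files={"uploaded_file": ("cat.jpg", f, "image/jpeg")},
    )
print(resp.status_code, resp.text)
```

You can replay the same calls interactively from the `/docs` page once the server is running.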
## Consultation

If you need any assistance or would like to schedule a consultation with me, feel free to reach out on Discord (kaenova#2859). We can discuss your requirements and arrange a consultation time.

Finally, I encourage you to share your capstone application with me! You can connect with me on the following platforms:

- Instagram: [@kaenovama](https://www.instagram.com/kaenovama)
- Twitter: [@kaenovama](https://twitter.com/kaenovama)
- LinkedIn: [/in/kaenova](https://www.linkedin.com/in/kaenova)

Now you can start writing your code and deploying your model. Happy coding!

---

## Usage

To get started, follow these steps:

1. Clone the repository:
```sh
git clone https://github.com/your-username/your-repository.git
```

2. Install the required libraries:
```sh
pip install -r requirements.txt
```

3. Prepare your machine learning model:
   - If you have an `.h5` model file, place it in the same folder as `main.py`.
   - If you have a SavedModel, place it in a folder named `my_model_folder` in the same directory as `main.py`.

4. Read the `main.py` file carefully.

5. Complete the `predict_text` or `predict_image` function in `main.py`.

6. Run the server (see the sketch after this list for a quick health check):
```sh
python main.py
```
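Once the server is up, a minimal sanity check (assuming the default port 8080, using only the standard library) is to hit the index endpoint:

```python
from urllib.request import urlopen

# main.py serves a plain "Hello world" message at "/" as a health check
with urlopen("http://localhost:8080/") as resp:
    print(resp.status, resp.read().decode())
```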
---

## One More Thing

One more thing I should mention: many people are still asking, **who should deploy the model? Is it CC or ML?**
> Both parties need to collaborate. Cloud Computing (CC) may not be familiar with the contents of your `.h5` file, and Machine Learning (ML) may not be familiar with HTTP POST and GET requests. Therefore, deploying the model is the responsibility of both parties.
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/

venv
--------------------------------------------------------------------------------
/.dockerignore:
--------------------------------------------------------------------------------
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/

venv
--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
# README
# Hello everyone, in here I (Kaenova | Bangkit Mentor ML-20)
# will give you a head start on creating an ML API.
# Please read every line and comment carefully.
#
# I give you a head start on text-based input and image-based input APIs.
# To run this server, don't forget to install all the libraries in
# requirements.txt, simply by "pip install -r requirements.txt",
# and then use "python main.py" to run it.
#
# For ML:
# Please prepare your model either in .h5 or SavedModel format.
# Put your model in the same folder as this main.py file.
# You will load your model down the line in this code.
# There are 2 options I give you: either your model takes image-based input
# or text-based input. You need to finish either "def predict_text" or "def predict_image".
#
# For CC:
# You can check the endpoints that ML is using, either /predict_text or
# /predict_image. For /predict_text you need a JSON {"text": "your text"},
# and for /predict_image you need to send a multipart form with an "uploaded_file"
# field. You can see this API's documentation by running this server and going to /docs.
# I also prepared the Dockerfile so you can easily modify it and create a container image.
# The default port is 8080, but you can inject the PORT environment variable.
#
# If you want to have a consultation with me,
# just chat with me through Discord (kaenova#2859) and arrange the consultation time.
#
# Share your capstone application with me! 🥳
# Instagram @kaenovama
# Twitter @kaenovama
# LinkedIn /in/kaenova

## Start your code here! ##

import os
import uvicorn
import traceback
import tensorflow as tf

from pydantic import BaseModel
from fastapi import FastAPI, Response, UploadFile
from utils import load_image_into_numpy_array

# Initialize Model
# If you have already put your model in the same folder as this main.py,
# you can load your .h5 model or SavedModel below this line

# If you use the .h5 format, uncomment the line below
# model = tf.keras.models.load_model('./my_model.h5')
# If you use the SavedModel format, uncomment the line below
# model = tf.saved_model.load("./my_model_folder")

app = FastAPI()

# This endpoint is for a test (or health check) of this server
@app.get("/")
def index():
    return "Hello world from ML endpoint!"

# If your model needs text input, use this endpoint!
class RequestText(BaseModel):
    text: str

@app.post("/predict_text")
def predict_text(req: RequestText, response: Response):
    try:
        # Here you will get the text sent by the user
        text = req.text
        print("Uploaded text:", text)

        # Step 1: (Optional) Do your text preprocessing

        # Step 2: Prepare your data for your model

        # Step 3: Predict the data
        # result = model.predict(...)

        # Step 4: Change the result to your determined API output

        return "Endpoint not implemented"
    except Exception:
        traceback.print_exc()
        response.status_code = 500
        return "Internal Server Error"
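# A minimal sketch of how Steps 1-4 above could look for a Keras text
# classifier. The tokenizer, maxlen, and label list are hypothetical and
# must match your own training pipeline, so this stays commented out:
#
# sequence = tokenizer.texts_to_sequences([text])
# padded = tf.keras.preprocessing.sequence.pad_sequences(sequence, maxlen=100)
# result = model.predict(padded)
# return {"label": labels[int(result.argmax())], "confidence": float(result.max())}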
# If your model needs image input, use this endpoint!
@app.post("/predict_image")
def predict_image(uploaded_file: UploadFile, response: Response):
    try:
        # Check that the upload is an image
        if uploaded_file.content_type not in ["image/jpeg", "image/png"]:
            response.status_code = 400
            return "File is Not an Image"

        # Here you will get a NumPy array in the "image" variable.
        # You can use it to load and process the image
        # later down the line
        image = load_image_into_numpy_array(uploaded_file.file.read())
        print("Image shape:", image.shape)

        # Step 1: (Optional, but you should have one) Do your image preprocessing

        # Step 2: Prepare your data for your model

        # Step 3: Predict the data
        # result = model.predict(...)

        # Step 4: Change the result to your determined API output

        return "Endpoint not implemented"
    except Exception:
        traceback.print_exc()
        response.status_code = 500
        return "Internal Server Error"
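# A minimal sketch of Steps 1-4 above for a Keras image classifier that
# expects, say, 224x224 RGB input. The target size and label list are
# hypothetical and must match your own training pipeline:
#
# image = tf.image.resize(image, (224, 224))
# image = tf.expand_dims(image / 255.0, axis=0)  # normalize and add a batch dimension
# result = model.predict(image)
# return {"label": labels[int(result.argmax())], "confidence": float(result.max())}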
# Starting the server
# You can check the API documentation easily using /docs after the server is running
if __name__ == "__main__":
    # PORT comes from the environment as a string, so cast it to int for uvicorn
    port = int(os.environ.get("PORT", 8080))
    print(f"Listening to http://0.0.0.0:{port}")
    uvicorn.run(app, host="0.0.0.0", port=port)
--------------------------------------------------------------------------------