├── .devcontainer └── devcontainer.json ├── LICENSE ├── Notebooks ├── Image_captioning_by_VGG16.ipynb ├── Image_captioning_by_inception-v3.ipynb └── image-caption-generator-resnet50.ipynb ├── README.md ├── app └── app.py ├── best_mode_vgg_40.h5 ├── demo ├── 2023-09-15 15-04-14.mp4 ├── 2862c9da088557e1140068ed564f2307c63ba489a54363d44d19161c.jpg ├── 3071676551_a65741e372.jpg ├── 70cf5ab3538f1eb82c96313a7983be80e95988cc9d1dab4b5a5aa51e (1).jpeg ├── 738020db6a97154799_0x0.jpg ├── 78c65793b4613ed5d95686c90cad4c6856e018f3727b139220b38223.jpg ├── 961b377b96dbf7cbe2729e0552d2bd7e720c7bc16bf761fd7b53efeb.jpg ├── Screenshot_89.png ├── d89abf6bfc3f652c5e3ff50294259853abbb348ad95afca77fc5a200.jpg ├── e98c741a23421241f7ce5cb399ead8762ec71f2b74e1e58ad957eae4.jpg ├── ezgif-1-ba79ce21f2.gif ├── generated.gif └── september 9, 2019 200 pm findlay residence.gif ├── requirements.txt ├── tokenizer40.pickle └── tokenizer40_new.pickle /.devcontainer/devcontainer.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "Python 3", 3 | // Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile 4 | "image": "mcr.microsoft.com/devcontainers/python:1-3.11-bullseye", 5 | "customizations": { 6 | "codespaces": { 7 | "openFiles": [ 8 | "README.md", 9 | "app.py" 10 | ] 11 | }, 12 | "vscode": { 13 | "settings": {}, 14 | "extensions": [ 15 | "ms-python.python", 16 | "ms-python.vscode-pylance" 17 | ] 18 | } 19 | }, 20 | "updateContentCommand": "[ -f packages.txt ] && sudo apt update && sudo apt upgrade -y && sudo xargs apt install -y 2 | 3 | 4 | 5 | 6 | 7 | 8 | LinkedIn Profile 9 | 10 |

11 | 12 |

Image Caption Generator

13 | 14 | 15 |

16 | This is a Deep Learning model for generating captions for images. It combines techniques from Computer Vision and Natural Language Processing. Some examples of images from the test dataset, along with the captions generated by the model, are shown below. 17 |

18 | 19 |

20 | Image Captions demo 23 |

24 | 25 | 26 | 27 | 28 |

TABLE OF CONTENTS

29 |
    30 |
  1. Introduction
  2. Dataset
  3. Used Model
  4. Models performance
  5. Predicted Result
  6. Frameworks, Libraries & Languages
  7. Deployment
  8. Demo
  9. Conclusion
  10. Acknowledgement
41 | 42 | 43 |

Introduction

44 | 45 | Deep Learning and Neural Networks have found profound applications in both NLP and Computer Vision. Before the Deep Learning era, statistical and Machine Learning techniques were commonly used for these tasks, especially in NLP. Neural Networks, however, have now proven to be powerful techniques, especially for more complex tasks. With the increase in the size of available datasets and in efficient computational tools, Deep Learning is being thoroughly researched and applied in an increasing number of areas. Image captioning is the process of taking an image and generating a caption that accurately describes the scene. This is a difficult task for neural networks because it requires understanding both natural language and visual content; it sits at the intersection of NLP and Computer Vision. 46 |

47 |

48 |

49 |
50 |

51 | The purpose of this image captioning project is to develop a system that can generate descriptive captions for images. The primary objective is to automate the process of generating accurate and meaningful textual descriptions that capture the visual content and context of the images.

52 |

53 | Image Captions demo 57 | 58 | 59 | 60 |

Dataset

61 | 62 | This project uses the ```Flickr 8K``` dataset for training the model. This can be downloaded from here. It contains 8,000 images, most of them featuring people and animals in a state of action. I also experimented with two larger datasets, [Flickr 30k](https://www.kaggle.com/datasets/adityajn105/flickr30k) and [Microsoft COCO](https://www.kaggle.com/datasets/awsaf49/coco-2017-dataset), but Flickr 8K performed comparatively better among them. Each image is provided with five different captions describing the entities and events depicted in it. Different captions of the same image tend to focus on different aspects of the scene, or use different linguistic constructions, which ensures enough linguistic variety in the descriptions. 63 | 64 |
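For context, here is a minimal sketch of how the five captions per image are typically loaded and wrapped in the `startseq`/`endseq` tags that the decoder relies on. The actual preprocessing lives in the notebooks; the `captions.txt` filename and its `image,caption` line format are assumptions and may differ from the download you use.

```python
from collections import defaultdict

# Map each image to its (up to five) cleaned captions, wrapped in the
# 'startseq'/'endseq' tags that the decoder uses to start and stop generation.
captions = defaultdict(list)
with open("captions.txt") as f:      # assumed filename and "image,caption" format
    next(f)                          # assumed header row; drop this line if absent
    for line in f:
        image_id, caption = line.strip().split(",", 1)
        captions[image_id].append(f"startseq {caption.lower().strip()} endseq")
```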

65 | 66 |

Used Model

67 | 68 |

69 | In this project I experimented with several models: VGG16 with LSTM, ResNet50 with LSTM, and InceptionV3 with LSTM. Weighing everything up, I deployed VGG16 with LSTM, using the VGG16 architecture to obtain the image features. VGG networks, standing for Visual Geometry Group networks, have played a pivotal role in the realm of Computer Vision. Their significance was highlighted by their strong showing in the prestigious ImageNet Challenge, where they demonstrated that deep neural networks (VGG16 and VGG19 are examples) can be trained effectively despite the vanishing gradient problem. 70 | 71 | The hallmark of VGG networks is their simplicity and uniformity. Unlike the original ResNet with its 152 layers, VGG networks feature a straightforward and easily understandable architecture. This simplicity, combined with their effectiveness, makes them a popular choice for various computer vision tasks. 72 | 73 | VGG networks also shine in the world of Transfer Learning. Keras, a widely used deep learning framework, ships pre-trained VGG models with weights learned on the expansive ImageNet dataset, which has made them a go-to choice for researchers and practitioners. Since we only need this network to extract image feature vectors, we remove the final classification layer and use the output of the second-to-last layer instead. 74 |
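As an illustration, the feature extractor can be set up in Keras roughly as follows; this mirrors what `app/app.py` does, where the 4096-dimensional output of VGG16's second-to-last layer serves as the image representation (the image path below is just a placeholder):

```python
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing.image import load_img, img_to_array

# Load VGG16 pre-trained on ImageNet and drop the final softmax layer,
# keeping the 4096-dimensional output of the second-to-last (fc2) layer.
vgg = VGG16()
feature_extractor = Model(inputs=vgg.inputs, outputs=vgg.layers[-2].output)

# Extract a feature vector for a single image (placeholder path).
image = img_to_array(load_img("example.jpg", target_size=(224, 224)))
image = preprocess_input(np.expand_dims(image, axis=0))
features = feature_extractor.predict(image, verbose=0)  # shape: (1, 4096)
```

These feature vectors are then fed, together with the partial caption sequence, into the LSTM-based decoder that generates the caption word by word.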

75 | 76 |
77 | Image 78 |
79 | 80 |

Models performance

81 | 
82 | Models | Accuracy | BLEU-1 | BLEU-2
83 | --- | --- | --- | ---
84 | VGG16 with LSTM | 0.5125 | 0.540437 | 0.316454
85 | ResNet50 with LSTM | 0.5522 | 0.538153 | 0.321559
86 | InceptionV3 with LSTM | 0.5012 | 0.545291 | 0.323035
87 | 
88 |
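For reference, BLEU-1 and BLEU-2 scores of this kind can be computed with NLTK (already listed in `requirements.txt`). This is only a minimal sketch with placeholder captions, not the exact evaluation code from the notebooks:

```python
from nltk.translate.bleu_score import corpus_bleu

# Each element of `references` holds the tokenized ground-truth captions for one
# image; the matching element of `candidates` is the model's generated caption.
references = [[["a", "dog", "jumps", "over", "a", "hurdle"],
               ["the", "dog", "leaps", "over", "an", "obstacle"]]]
candidates = [["a", "dog", "jumps", "over", "the", "hurdle"]]

bleu1 = corpus_bleu(references, candidates, weights=(1.0,))      # unigram precision
bleu2 = corpus_bleu(references, candidates, weights=(0.5, 0.5))  # uni- and bigram
print(f"BLEU-1: {bleu1:.4f}, BLEU-2: {bleu2:.4f}")
```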

Predicted Result

89 | 
90 | Image | Caption
91 | --- | ---
92 | | **Generated Caption:** Surfer rides the waves.
93 | | **Generated Caption:** Woman in green shirt and glasses is climbing large rock.
94 | | **Generated Caption:** Dog jumps over hurdle.
95 | | **Generated Caption:** Man in blue jacket snowboarding.
96 | | **Generated Caption:** Basketball player dribbles the ball.
97 | 
98 | 
99 |

Frameworks, Libraries & Languages

100 | 101 | 109 | 110 |

Deployment

111 | 112 | I built a web application using the Streamlit framework. It is hosted and deployed on two platforms: [HuggingFace Spaces](https://huggingface.co/spaces/MdRiad/Image_caption_generator) and [share.streamlit.io](https://imagecaptiongenerator-cjmnhj4scsrxheqxtsmune.streamlit.app/). The implementation can be found in the ```app``` folder. 113 | 114 | 115 |

Demo

116 | 117 | 118 | This is a demo of my web application. You can try it on [HuggingFace Spaces](https://huggingface.co/spaces/MdRiad/Image_caption_generator) or [share.streamlit.io](https://imagecaptiongenerator-cjmnhj4scsrxheqxtsmune.streamlit.app/). 119 | 120 | 121 | https://github.com/riad5089/Image_Caption_Generator/assets/93583569/270f348f-404a-4823-b014-bed14be0d5b1 122 | 123 |

Conclusion

124 | 125 | This image caption generator application shows promise but currently struggles to produce consistently accurate captions, owing to factors such as limited computational resources, a relatively small dataset, occasional GPU issues, and memory constraints. I am committed to continuing to improve its performance. I have also built a related project using the Gemini Pro Vision LLM; you can check out that [repository](https://github.com/riad5089/Image-Captioning-Web-App-with-Gemini-Pro-Vision), which produces more accurate predictions. If you have any suggestions, feel free to send me a pull request. 126 | 127 |

Acknowledgement

128 |

129 | I referred to many articles and research papers while working on this project. Some of them are listed below: 130 |

131 | 138 | 139 | 140 | 141 | 142 | 143 | 144 | 145 | 146 | 147 | 148 | -------------------------------------------------------------------------------- /app/app.py: -------------------------------------------------------------------------------- 1 | 2 | import streamlit as st 3 | import numpy as np 4 | import pickle 5 | import tensorflow as tf 6 | # from tensorflow import keras 7 | from tensorflow.keras.models import Model 8 | from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input 9 | from tensorflow.keras.preprocessing.image import load_img, img_to_array 10 | from tensorflow.keras.models import load_model 11 | from tensorflow.keras.preprocessing.sequence import pad_sequences 12 | from PIL import Image 13 | from PIL import Image 14 | import requests 15 | from io import BytesIO 16 | 17 | 18 | 19 | 20 | def set_bg_hack_url(): 21 | ''' 22 | A function to unpack an image from url and set as bg. 23 | Returns 24 | ------- 25 | The background. 26 | ''' 27 | st.markdown( 28 | f""" 29 | 39 | """, 40 | unsafe_allow_html=True 41 | ) 42 | 43 | 44 | print(set_bg_hack_url()) 45 | 46 | 47 | # Preprocess the uploaded image 48 | def preprocess_image(uploaded_image): 49 | image = Image.open(uploaded_image) 50 | image = image.resize((224, 224)) # Resize the image to match VGG16 input size 51 | image = np.array(image) 52 | image = preprocess_input(image) 53 | return image 54 | 55 | # Preprocess the image from URL 56 | def preprocess_image_url(image_url): 57 | response = requests.get(image_url) 58 | image = Image.open(BytesIO(response.content)) 59 | image = image.resize((224, 224)) # Resize the image to match VGG16 input size 60 | image = np.array(image) 61 | image = preprocess_input(image) 62 | return image 63 | 64 | # Load MobileNetV2 model 65 | model_1 = load_model("best_mode_vgg_40.h5", compile=False) 66 | 67 | # Load the tokenizer 68 | with open("tokenizer40_new.pickle", 'rb') as handle: 69 | tokenizer = pickle.load(handle) 70 | 71 | def idx_to_word(integer, tokenizer): 72 | for word, index in tokenizer.word_index.items(): 73 | if index == integer: 74 | return word 75 | return None 76 | 77 | # Function to generate captions 78 | def predict_caption(model, image, tokenizer, max_length): 79 | # add start tag for generation process 80 | in_text = 'startseq' 81 | # iterate over the max length of sequence 82 | for i in range(max_length): 83 | # encode input sequence 84 | sequence = tokenizer.texts_to_sequences([in_text])[0] 85 | # pad the sequence 86 | sequence = pad_sequences([sequence], max_length) 87 | # predict next word 88 | yhat = model.predict([image, sequence], verbose=0) 89 | # get index with high probability 90 | yhat = np.argmax(yhat) 91 | # convert index to word 92 | word = idx_to_word(yhat, tokenizer) 93 | # stop if word not found 94 | if word is None: 95 | break 96 | # append word as input for generating the next word 97 | in_text += " " + word 98 | # stop if we reach end tag 99 | if word == 'endseq': 100 | break 101 | return in_text 102 | 103 | def main(): 104 | st.title("Image Caption Generator 📷 ➡️ 📝") 105 | 106 | # Choose an input option: Upload or URL 107 | input_option = st.radio("Select an input option:", ("Upload Image", "Image URL")) 108 | 109 | if input_option == "Upload Image": 110 | # Upload an image 111 | uploaded_image = st.file_uploader("Choose an image...", type=["jpg", "png", "jpeg"]) 112 | 113 | if uploaded_image is not None: 114 | image = Image.open(uploaded_image) 115 | st.image(image, caption="Uploaded Image", use_column_width=True) 116 | 117 | # Generate 
caption button 118 | if st.button("Generate Caption"): 119 | # Preprocess the uploaded image 120 | new_image = preprocess_image(uploaded_image) 121 | 122 | # Generate features for the new image using the pre-trained VGG16 model 123 | vgg_model = VGG16() 124 | vgg_model = Model(inputs=vgg_model.inputs, outputs=vgg_model.layers[-2].output) 125 | new_image_features = vgg_model.predict(np.array([new_image]), verbose=0) 126 | 127 | # Predict caption for the new image 128 | generated_caption = predict_caption(model_1, new_image_features, tokenizer, max_length=35) 129 | generated_caption = generated_caption.replace('startseq', '').replace('endseq', '').strip() 130 | generated_caption = generated_caption.capitalize() 131 | 132 | # Display the generated caption 133 | st.markdown('#### Predicted Captions:') 134 | st.markdown(f"

{generated_caption}.

", 135 | unsafe_allow_html=True) 136 | 137 | elif input_option == "Image URL": 138 | # Input image URL 139 | image_url = st.text_input("Enter the image URL:") 140 | 141 | if image_url: 142 | # Display the image 143 | image = Image.open(BytesIO(requests.get(image_url).content)) 144 | st.image(image, caption="Image", use_column_width=True) 145 | 146 | # Generate caption button 147 | if st.button("Generate Caption"): 148 | # Preprocess the image from URL 149 | new_image = preprocess_image_url(image_url) 150 | 151 | # Generate features for the new image using the pre-trained VGG16 model 152 | vgg_model = VGG16() 153 | vgg_model = Model(inputs=vgg_model.inputs, outputs=vgg_model.layers[-2].output) 154 | new_image_features = vgg_model.predict(np.array([new_image]), verbose=0) 155 | 156 | # Generate caption for the new image 157 | generated_caption = predict_caption(model_1, new_image_features, tokenizer, max_length=35) 158 | generated_caption = generated_caption.replace('startseq', '').replace('endseq', '').strip() 159 | generated_caption = generated_caption.capitalize() 160 | 161 | # Display the generated caption 162 | st.markdown('#### Predicted Captions:') 163 | st.markdown(f"

{generated_caption}.

", 164 | unsafe_allow_html=True) 165 | 166 | if __name__ == "__main__": 167 | main() 168 | 169 | 170 | 171 | 172 | # import streamlit as st 173 | # import numpy as np 174 | # import pickle 175 | # import tensorflow as tf 176 | # from tensorflow import keras 177 | # from keras.models import Model 178 | # from keras.applications.vgg16 import VGG16, preprocess_input 179 | # from keras.preprocessing.image import load_img, img_to_array 180 | # from keras.models import load_model 181 | # from keras.preprocessing.sequence import pad_sequences 182 | # from PIL import Image 183 | # import requests 184 | # from io import BytesIO 185 | # import pyttsx3 186 | # import base64 187 | 188 | 189 | # def set_bg_hack_url(): 190 | # ''' 191 | # A function to unpack an image from url and set as bg. 192 | # Returns 193 | # ------- 194 | # The background. 195 | # ''' 196 | # st.markdown( 197 | # f""" 198 | # 208 | # """, 209 | # unsafe_allow_html=True 210 | # ) 211 | 212 | 213 | # print(set_bg_hack_url()) 214 | # # 215 | # # 216 | # # # Preprocess the uploaded image 217 | # def preprocess_image(uploaded_image): 218 | # image = Image.open(uploaded_image) 219 | # image = image.resize((224, 224)) # Resize the image to match VGG16 input size 220 | # image = np.array(image) 221 | # image = preprocess_input(image) 222 | # return image 223 | 224 | # # Preprocess the image from URL 225 | # def preprocess_image_url(image_url): 226 | # response = requests.get(image_url) 227 | # image = Image.open(BytesIO(response.content)) 228 | # image = image.resize((224, 224)) # Resize the image to match VGG16 input size 229 | # image = np.array(image) 230 | # image = preprocess_input(image) 231 | # return image 232 | 233 | # # Load MobileNetV2 model 234 | # model_1 = load_model("best_mode_vgg_40.h5", compile=False) 235 | 236 | # # Load the tokenizer 237 | # with open("tokenizer40.pickle", 'rb') as handle: 238 | # tokenizer = pickle.load(handle) 239 | 240 | # def idx_to_word(integer, tokenizer): 241 | # for word, index in tokenizer.word_index.items(): 242 | # if index == integer: 243 | # return word 244 | # return None 245 | 246 | # # Function to generate captions 247 | # def predict_caption(model, image, tokenizer, max_length): 248 | # # add start tag for generation process 249 | # in_text = 'startseq' 250 | # # iterate over the max length of sequence 251 | # for i in range(max_length): 252 | # # encode input sequence 253 | # sequence = tokenizer.texts_to_sequences([in_text])[0] 254 | # # pad the sequence 255 | # sequence = pad_sequences([sequence], max_length) 256 | # # predict next word 257 | # yhat = model.predict([image, sequence], verbose=0) 258 | # # get index with high probability 259 | # yhat = np.argmax(yhat) 260 | # # convert index to word 261 | # word = idx_to_word(yhat, tokenizer) 262 | # # stop if word not found 263 | # if word is None: 264 | # break 265 | # # append word as input for generating the next word 266 | # in_text += " " + word 267 | # # stop if we reach end tag 268 | # if word == 'endseq': 269 | # break 270 | # return in_text 271 | 272 | # def generate_audio(caption): 273 | # engine = pyttsx3.init() 274 | # engine.save_to_file(caption, 'caption_audio.mp3') 275 | # engine.runAndWait() 276 | 277 | # def main(): 278 | 279 | 280 | # st.title("Image Caption Generator 📷 ➡️ 📝") 281 | 282 | 283 | # # Choose an input option: Upload or URL 284 | # input_option = st.radio("Select an input option:", ("Upload Image", "Image URL")) 285 | 286 | # if input_option == "Upload Image": 287 | # # Upload an image 288 | # 
uploaded_image = st.file_uploader("Choose an image...", type=["jpg", "png", "jpeg"]) 289 | 290 | # if uploaded_image is not None: 291 | # image = Image.open(uploaded_image) 292 | # st.image(image, caption="Uploaded Image", use_column_width=True) 293 | 294 | # # Generate caption button 295 | # if st.button("Generate Caption"): 296 | # # Preprocess the uploaded image 297 | # new_image = preprocess_image(uploaded_image) 298 | 299 | # # Generate features for the new image using the pre-trained VGG16 model 300 | # vgg_model = VGG16() 301 | # vgg_model = Model(inputs=vgg_model.inputs, outputs=vgg_model.layers[-2].output) 302 | # new_image_features = vgg_model.predict(np.array([new_image]), verbose=0) 303 | 304 | # # Predict caption for the new image 305 | # generated_caption = predict_caption(model_1, new_image_features, tokenizer, max_length=35) 306 | # generated_caption = generated_caption.replace('startseq', '').replace('endseq', '').strip() 307 | # generated_caption = generated_caption.capitalize() 308 | 309 | # # Generate audio from the caption 310 | # generate_audio(generated_caption) 311 | 312 | # # Display the generated caption 313 | # st.markdown('#### Predicted Caption:') 314 | # st.markdown(f"

{generated_caption}.

", 315 | # unsafe_allow_html=True) 316 | 317 | # # Display the audio 318 | # st.audio('caption_audio.mp3') 319 | 320 | 321 | # elif input_option == "Image URL": 322 | # # Input image URL 323 | # image_url = st.text_input("Enter the image URL:") 324 | 325 | # if image_url: 326 | # # Display the image 327 | # image = Image.open(BytesIO(requests.get(image_url).content)) 328 | # st.image(image, caption="Image", use_column_width=True) 329 | 330 | # # Generate caption button 331 | # if st.button("Generate Caption"): 332 | # # Preprocess the image from URL 333 | # new_image = preprocess_image_url(image_url) 334 | 335 | # # Generate features for the new image using the pre-trained VGG16 model 336 | # vgg_model = VGG16() 337 | # vgg_model = Model(inputs=vgg_model.inputs, outputs=vgg_model.layers[-2].output) 338 | # new_image_features = vgg_model.predict(np.array([new_image]), verbose=0) 339 | 340 | # # Generate caption for the new image 341 | # generated_caption = predict_caption(model_1, new_image_features, tokenizer, max_length=35) 342 | # generated_caption = generated_caption.replace('startseq', '').replace('endseq', '').strip() 343 | # generated_caption = generated_caption.capitalize() 344 | 345 | # # Generate audio from the caption 346 | # generate_audio(generated_caption) 347 | 348 | # # Display the generated caption 349 | # st.markdown('#### Predicted Caption:') 350 | # st.markdown(f"

{generated_caption}.

", 351 | # unsafe_allow_html=True) 352 | 353 | # # Display the audio 354 | # st.audio('caption_audio.mp3') 355 | 356 | 357 | # if __name__ == "__main__": 358 | # main() 359 | 360 | -------------------------------------------------------------------------------- /best_mode_vgg_40.h5: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/riad5089/Image_Caption_Generator/ffc6f433965f603d05a793cbcd8495e981c1d350/best_mode_vgg_40.h5 -------------------------------------------------------------------------------- /demo/2023-09-15 15-04-14.mp4: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/riad5089/Image_Caption_Generator/ffc6f433965f603d05a793cbcd8495e981c1d350/demo/2023-09-15 15-04-14.mp4 -------------------------------------------------------------------------------- /demo/2862c9da088557e1140068ed564f2307c63ba489a54363d44d19161c.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/riad5089/Image_Caption_Generator/ffc6f433965f603d05a793cbcd8495e981c1d350/demo/2862c9da088557e1140068ed564f2307c63ba489a54363d44d19161c.jpg -------------------------------------------------------------------------------- /demo/3071676551_a65741e372.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/riad5089/Image_Caption_Generator/ffc6f433965f603d05a793cbcd8495e981c1d350/demo/3071676551_a65741e372.jpg -------------------------------------------------------------------------------- /demo/70cf5ab3538f1eb82c96313a7983be80e95988cc9d1dab4b5a5aa51e (1).jpeg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/riad5089/Image_Caption_Generator/ffc6f433965f603d05a793cbcd8495e981c1d350/demo/70cf5ab3538f1eb82c96313a7983be80e95988cc9d1dab4b5a5aa51e (1).jpeg -------------------------------------------------------------------------------- /demo/738020db6a97154799_0x0.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/riad5089/Image_Caption_Generator/ffc6f433965f603d05a793cbcd8495e981c1d350/demo/738020db6a97154799_0x0.jpg -------------------------------------------------------------------------------- /demo/78c65793b4613ed5d95686c90cad4c6856e018f3727b139220b38223.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/riad5089/Image_Caption_Generator/ffc6f433965f603d05a793cbcd8495e981c1d350/demo/78c65793b4613ed5d95686c90cad4c6856e018f3727b139220b38223.jpg -------------------------------------------------------------------------------- /demo/961b377b96dbf7cbe2729e0552d2bd7e720c7bc16bf761fd7b53efeb.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/riad5089/Image_Caption_Generator/ffc6f433965f603d05a793cbcd8495e981c1d350/demo/961b377b96dbf7cbe2729e0552d2bd7e720c7bc16bf761fd7b53efeb.jpg -------------------------------------------------------------------------------- /demo/Screenshot_89.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/riad5089/Image_Caption_Generator/ffc6f433965f603d05a793cbcd8495e981c1d350/demo/Screenshot_89.png -------------------------------------------------------------------------------- 
/demo/d89abf6bfc3f652c5e3ff50294259853abbb348ad95afca77fc5a200.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/riad5089/Image_Caption_Generator/ffc6f433965f603d05a793cbcd8495e981c1d350/demo/d89abf6bfc3f652c5e3ff50294259853abbb348ad95afca77fc5a200.jpg -------------------------------------------------------------------------------- /demo/e98c741a23421241f7ce5cb399ead8762ec71f2b74e1e58ad957eae4.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/riad5089/Image_Caption_Generator/ffc6f433965f603d05a793cbcd8495e981c1d350/demo/e98c741a23421241f7ce5cb399ead8762ec71f2b74e1e58ad957eae4.jpg -------------------------------------------------------------------------------- /demo/ezgif-1-ba79ce21f2.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/riad5089/Image_Caption_Generator/ffc6f433965f603d05a793cbcd8495e981c1d350/demo/ezgif-1-ba79ce21f2.gif -------------------------------------------------------------------------------- /demo/generated.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/riad5089/Image_Caption_Generator/ffc6f433965f603d05a793cbcd8495e981c1d350/demo/generated.gif -------------------------------------------------------------------------------- /demo/september 9, 2019 200 pm findlay residence.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/riad5089/Image_Caption_Generator/ffc6f433965f603d05a793cbcd8495e981c1d350/demo/september 9, 2019 200 pm findlay residence.gif -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | tensorflow==2.13.0 2 | streamlit==1.12.0 3 | keras==2.13.1 4 | gunicorn==21.2.0 5 | Pillow>=8.0.0 6 | protobuf==3.20.3 7 | 8 | 9 | 10 | 11 | 12 | absl-py==1.4.0 13 | altair==4.2.2 14 | 15 | argon2-cffi==23.1.0 16 | argon2-cffi-bindings==21.2.0 17 | arrow==1.2.3 18 | asttokens==2.2.1 19 | astunparse==1.6.3 20 | 21 | attrs==23.1.0 22 | Babel==2.12.1 23 | 24 | bleach==6.0.0 25 | blinker==1.6.2 26 | cachetools==5.3.1 27 | cffi==1.15.1 28 | charset-normalizer==3.2.0 29 | click==8.1.7 30 | colorama==0.4.6 31 | comm==0.1.4 32 | comtypes==1.2.0 33 | 34 | cycler==0.11.0 35 | debugpy==1.6.7.post1 36 | decorator==5.1.1 37 | defusedxml==0.7.1 38 | entrypoints==0.4 39 | exceptiongroup==1.1.3 40 | executing==1.2.0 41 | fastjsonschema==2.18.0 42 | flatbuffers==23.5.26 43 | 44 | fqdn==1.5.1 45 | gast==0.4.0 46 | gitdb==4.0.10 47 | GitPython==3.1.33 48 | google-auth==2.22.0 49 | google-auth-oauthlib==1.0.0 50 | google-pasta==0.2.0 51 | grpcio==1.57.0 52 | 53 | helper==2.5.0 54 | idna==3.4 55 | ipytablewidgets==0.3.1 56 | 57 | ipywidgets==8.1.0 58 | isoduration==20.11.0 59 | jedi==0.19.0 60 | Jinja2==3.1.2 61 | json5==0.9.14 62 | jsonpointer==2.4 63 | 64 | 65 | kiwisolver==1.4.5 66 | libclang==16.0.6 67 | lz4==4.3.2 68 | Markdown==3.4.4 69 | MarkupSafe==2.1.3 70 | matplotlib-inline==0.1.6 71 | mdurl==0.1.2 72 | mistune==3.0.1 73 | 74 | nest-asyncio==1.5.7 75 | nltk==3.2.4 76 | 77 | notebook_shim==0.2.3 78 | oauthlib==3.2.2 79 | opencv-python==4.8.0.76 80 | opt-einsum==3.3.0 81 | overrides==7.4.0 82 | packaging==23.1 83 | pandocfilters==1.5.0 84 | parso==0.8.3 85 | pickleshare==0.7.5 86 | 87 | platformdirs==3.10.0 88 | 
preprocessing==0.1.13 89 | prometheus-client==0.17.1 90 | prompt-toolkit==3.0.39 91 | 92 | psutil==5.9.5 93 | pure-eval==0.2.2 94 | 95 | pyasn1==0.5.0 96 | pyasn1-modules==0.3.0 97 | pycparser==2.21 98 | pydeck==0.8.1b0 99 | Pygments==2.16.1 100 | Pympler==1.0.1 101 | pyparsing==3.0.9 102 | python-dateutil==2.8.2 103 | python-json-logger==2.0.7 104 | pyttsx3==2.90 105 | pytz==2023.3 106 | 107 | PyYAML==6.0.1 108 | pyzmq==25.1.1 109 | qtconsole==5.4.4 110 | QtPy==2.4.0 111 | requests==2.31.0 112 | requests-oauthlib==1.3.1 113 | rfc3339-validator==0.1.4 114 | rfc3986-validator==0.1.1 115 | rich==13.5.2 116 | 117 | rsa==4.9 118 | semver==3.0.1 119 | six==1.16.0 120 | smmap==5.0.0 121 | sniffio==1.3.0 122 | soupsieve==2.4.1 123 | sphinx-rtd-theme==0.2.4 124 | stack-data==0.6.2 125 | 126 | tensorboard-data-server==0.7.1 127 | 128 | tensorflow-estimator==2.13.0 129 | tensorflow-io-gcs-filesystem==0.31.0 130 | termcolor==2.3.0 131 | terminado==0.17.1 132 | tinycss2==1.2.1 133 | tokenizer==3.4.3 134 | toml==0.10.2 135 | tomli==2.0.1 136 | toolz==0.12.0 137 | 138 | traitlets==5.9.0 139 | traittypes==0.2.1 140 | typing_extensions==4.5.0 141 | tzdata==2023.3 142 | tzlocal==5.0.1 143 | uri-template==1.3.0 144 | urllib3==1.26.16 145 | watchdog==3.0.0 146 | wcwidth==0.2.6 147 | webcolors==1.13 148 | webencodings==0.5.1 149 | widgetsnbextension==4.0.8 150 | wrapt==1.15.0 151 | -------------------------------------------------------------------------------- /tokenizer40.pickle: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/riad5089/Image_Caption_Generator/ffc6f433965f603d05a793cbcd8495e981c1d350/tokenizer40.pickle -------------------------------------------------------------------------------- /tokenizer40_new.pickle: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/riad5089/Image_Caption_Generator/ffc6f433965f603d05a793cbcd8495e981c1d350/tokenizer40_new.pickle --------------------------------------------------------------------------------