├── .gitignore ├── LICENSE ├── README.md ├── app.py ├── architecture.png ├── funcs.py ├── input ├── classes.txt ├── data.pickle └── intents.json ├── main.py ├── model.py └── requirements.txt /.gitignore: -------------------------------------------------------------------------------- 1 | __pycache__ 2 | *.txt -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 
34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. 
Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. 
In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. 
We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # TChatBot-API 2 | A `Flask REST API` service to serve trained `ChatBots` using `TensorFlow Serving` and `Docker containers` 3 | 4 | # Architecture of the Application: 5 | ![architecture](architecture.png) 6 | 7 | # Usage: 8 | - Create a virtual environment 9 | - Activate the environment 10 | - Clone the repository inside the environment: 11 | 12 | git clone https://github.com/deepraj1729/TChatBot-API.git 13 | - Navigate to the directory `TChatBot-API` 14 | - Install the requirements with this command: 15 | 16 | pip install -r requirements.txt 17 | - To run the API from both the server side and the client side: 18 | - Open `2 terminals` from the current location (from inside the `TChatBot-API` folder) 19 | - One for `Server-Side` 20 | - One for `Client-Side` 21 | - Run this command in terminal 1 (`Server-Side`) to run the Flask server 22 | 23 | python app.py 24 | - Run this command in terminal 2 (`Client-Side`) to run the chat session 25 | 26 | python main.py
27 | 28 | - To `end/quit` Chat session in the `client-side` terminal, type `quit` and `enter` 29 | - To `end/quit` Server session in the `server-side` terminal, use `ctrl + c` 30 | 31 | -------------------------------------------------------------------------------- /app.py: -------------------------------------------------------------------------------- 1 | from flask import Flask,request,jsonify 2 | from funcs import getClasses,bagOfWords,load_JSON,ProcessData 3 | from pathlib import Path 4 | import numpy as np 5 | from model import get_prediction 6 | import os 7 | from flask_cors import CORS 8 | 9 | app = Flask(__name__) 10 | CORS(app) 11 | 12 | dir_path = os.path.dirname(os.path.realpath(__file__)) 13 | root = Path(dir_path) 14 | 15 | json_path = root / "input" / "intents.json" 16 | 17 | classes = getClasses() 18 | sorted_classes = sorted(classes) 19 | 20 | json_data = load_JSON(json_path) 21 | words,labels,training,output = ProcessData(json_data,train = False) 22 | 23 | @app.route('/api',methods=['POST']) 24 | def chat_output(): 25 | data = request.get_json(force = True) 26 | 27 | #input text sent via POST request 28 | input_text = data["input"] 29 | 30 | #Prediction np array 31 | tag_pred = get_prediction([bagOfWords(input_text,words).tolist()]) 32 | 33 | #Corresponding tag output 34 | chatOut_index = np.argmax(tag_pred) 35 | 36 | #Equivalent tag for the predicted index 37 | tag = sorted_classes[chatOut_index] 38 | 39 | #Getting the data for the corresponding tag in the JSON 40 | intents = json_data["intents"] 41 | 42 | reply = "" 43 | # confidence = chatOut[chatOut_index] 44 | 45 | #Loop to randomly choose a reply for the corresponding tag 46 | for i in range(len(intents)): 47 | if intents[i]["tag"] == tag: 48 | responses = intents[i]["responses"] 49 | reply = np.random.choice(responses, 1, replace=False)[0] 50 | 51 | result={"reply":reply} 52 | 53 | response = jsonify(result) 54 | # response.headers.add('Access-Control-Allow-Origin', '*') 55 | return
response 56 | 57 | if __name__ == "__main__": 58 | app.run(port=8000,debug=False) -------------------------------------------------------------------------------- /architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/deepraj1729/TChatBot-API/24fb47c0801ee798d251975e8d2374b0b70dff93/architecture.png -------------------------------------------------------------------------------- /funcs.py: -------------------------------------------------------------------------------- 1 | import nltk 2 | from nltk.stem.lancaster import LancasterStemmer 3 | import numpy as np 4 | import random 5 | import json 6 | import pickle 7 | from pathlib import Path 8 | import os 9 | 10 | 11 | dir_path = os.path.dirname(os.path.realpath(__file__)) 12 | root = Path(dir_path) 13 | classes_path = root / "input" / "classes.txt" 14 | pickle_path = root / "input" / "data.pickle" 15 | tokenizer_path = root / "input" / "tokenizer.txt" 16 | 17 | def check_tokenizer(): 18 | try: 19 | with open(tokenizer_path,"r") as f: 20 | found = f.read() 21 | if(found == 'True'): 22 | return 23 | except FileNotFoundError: 24 | try: 25 | data = nltk.data.find('tokenizers/punkt') 26 | print("Found punkt tokenizer at {}".format(data)) 27 | with open(tokenizer_path,"w") as f: 28 | f.write("True") 29 | except LookupError: 30 | print("Downloading tokenizer for processing data for the ChatBot") 31 | nltk.download('punkt') 32 | with open(tokenizer_path,"w") as f: 33 | f.write("True") 34 | 35 | def load_JSON(path): 36 | data = None 37 | try: 38 | with open(path) as f: 39 | data = json.load(f) 40 | print("\nLoaded JSON file: {} successfully".format(path)) 41 | except (FileNotFoundError, json.JSONDecodeError): 42 | print("\nFile not found in the path specified. Dataset not loaded successfully") 43 | exit() 44 | return data 45 | 46 | def getClasses(): 47 | with open(classes_path,"r") as f: 48 | tags = f.readlines() 49 | tags = [x.strip() for x in tags] 50 | return tags 51 | 52 | def ProcessData(data,train =
False): 53 | check_tokenizer() 54 | if(train == False): 55 | try: 56 | with open(pickle_path,"rb") as f: 57 | words,labels,training,output = pickle.load(f) 58 | print("Loaded stemmed data from pickle") 59 | return words,labels,training,output 60 | except: 61 | print("Stemming data from the intents.json file") 62 | stemmer = LancasterStemmer() 63 | words = [] 64 | labels = [] 65 | docs_X = [] 66 | docs_y = [] 67 | 68 | 69 | with open(classes_path,"w") as f: 70 | f.write("") 71 | 72 | for intent in data["intents"]: 73 | for pattern in intent["patterns"]: 74 | wrds = nltk.word_tokenize(pattern) 75 | words.extend(wrds) 76 | docs_X.append(wrds) 77 | docs_y.append(intent["tag"]) 78 | 79 | if intent["tag"] not in labels: 80 | labels.append(intent["tag"]) 81 | with open(classes_path,"a") as f: 82 | f.write(intent["tag"]+"\n") 83 | 84 | 85 | #List of non-redundant words the model has seen 86 | words = [stemmer.stem(w.lower()) for w in words if w != "?"] 87 | words = np.array(words) 88 | words = sorted(np.unique(words)) 89 | 90 | 91 | 92 | #labels (sorted) 93 | labels = sorted(labels) 94 | 95 | 96 | training = [] 97 | output = [] 98 | 99 | out_empty = [0 for _ in range(len(labels))] 100 | 101 | 102 | for x,doc in enumerate(docs_X): 103 | bag = np.array([]) 104 | wrds = [stemmer.stem(w.lower()) for w in doc if w != "?"] 105 | 106 | for w in words: 107 | if w in wrds: 108 | bag = np.append(bag,np.array([1])) 109 | else: 110 | bag = np.append(bag,np.array([0])) 111 | 112 | output_row = out_empty[:] 113 | output_row[labels.index(docs_y[x])] = 1 114 | 115 | training.append(bag) 116 | output.append(np.argmax(output_row)) 117 | 118 | 119 | 120 | #into np arrays 121 | training = np.asarray(training) 122 | output = np.asarray(output) 123 | 124 | 125 | with open(pickle_path,"wb") as f: 126 | print("Stemmed data saved in pickle file...") 127 | pickle.dump((words,labels,training,output),f) 128 | return words,labels,training,output 129 | else: 130 | with open(classes_path,"w") as f: 131 
| f.write("") 132 | print("Stemming data from the intents.json file") 133 | stemmer = LancasterStemmer() 134 | words = [] 135 | labels = [] 136 | docs_X = [] 137 | docs_y = [] 138 | 139 | for intent in data["intents"]: 140 | for pattern in intent["patterns"]: 141 | wrds = nltk.word_tokenize(pattern) 142 | words.extend(wrds) 143 | docs_X.append(wrds) 144 | docs_y.append(intent["tag"]) 145 | 146 | if intent["tag"] not in labels: 147 | labels.append(intent["tag"]) 148 | 149 | with open(classes_path,"a") as f: 150 | f.write(intent["tag"]+"\n") 151 | 152 | 153 | #List of non-redundant words the model has seen 154 | words = [stemmer.stem(w.lower()) for w in words if w != "?"] 155 | words = np.array(words) 156 | words = sorted(np.unique(words)) 157 | 158 | 159 | 160 | #labels (sorted) 161 | labels = sorted(labels) 162 | 163 | 164 | training = [] 165 | output = [] 166 | 167 | out_empty = [0 for _ in range(len(labels))] 168 | 169 | 170 | for x,doc in enumerate(docs_X): 171 | bag = np.array([]) 172 | wrds = [stemmer.stem(w.lower()) for w in doc if w != "?"] 173 | 174 | for w in words: 175 | if w in wrds: 176 | bag = np.append(bag,np.array([1])) 177 | else: 178 | bag = np.append(bag,np.array([0])) 179 | 180 | output_row = out_empty[:] 181 | output_row[labels.index(docs_y[x])] = 1 182 | 183 | training.append(bag) 184 | output.append(np.argmax(output_row)) 185 | 186 | 187 | 188 | #into np arrays 189 | training = np.asarray(training) 190 | output = np.asarray(output) 191 | 192 | 193 | with open(pickle_path,"wb") as f: 194 | print("Stemmed data saved in pickle file...") 195 | pickle.dump((words,labels,training,output),f) 196 | return words,labels,training,output 197 | 198 | 199 | def bagOfWords(s,words): 200 | stemmer = LancasterStemmer() 201 | bag = [0 for _ in range(len(words))] 202 | 203 | s_words = nltk.word_tokenize(s) 204 | s_words = [stemmer.stem(word.lower()) for word in s_words if word != "?"] 205 | s_words = np.array(s_words) 206 | 207 | for se in s_words: 208 | for i,w 
in enumerate(words): 209 | if w == se: 210 | bag[i] = 1 211 | 212 | bag = np.array(bag) 213 | bag.shape = (1,bag.shape[0]) 214 | return bag 215 | -------------------------------------------------------------------------------- /input/classes.txt: -------------------------------------------------------------------------------- 1 | greeting 2 | goodbye 3 | age 4 | name 5 | shop 6 | hours 7 | wish 8 | -------------------------------------------------------------------------------- /input/data.pickle: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/deepraj1729/TChatBot-API/24fb47c0801ee798d251975e8d2374b0b70dff93/input/data.pickle -------------------------------------------------------------------------------- /input/intents.json: -------------------------------------------------------------------------------- 1 | { 2 | "intents": [ 3 | { 4 | "tag": "greeting", 5 | "patterns": [ 6 | "Hi", 7 | "How are you", 8 | "Is anyone there?", 9 | "Hello", 10 | "Good day", 11 | "Whats up", 12 | "Hi there", 13 | "Hey!", 14 | "Hi, Sup?" 15 | ], 16 | "responses": [ 17 | "Hello!", 18 | "I'm doing good", 19 | "Good to see you again!", 20 | "Hi there, how can I help?", 21 | "Can I help You?", 22 | "Yoo! Whatsup?" 23 | ] 24 | }, 25 | { 26 | "tag": "goodbye", 27 | "patterns": [ 28 | "cya", 29 | "See you later", 30 | "Goodbye", 31 | "I am Leaving", 32 | "Have a Good day", 33 | "ba byee" 34 | ], 35 | "responses": [ 36 | "Sad to see you go :(", 37 | "Talk to you later", 38 | "Goodbye!", 39 | "Byeee", 40 | "See ya", 41 | "See ya later", 42 | "will see ya later" 43 | ] 44 | }, 45 | { 46 | "tag": "age", 47 | "patterns": [ 48 | "how old", 49 | "how old is TensorChat", 50 | "what is your age", 51 | "how old are you", 52 | "age?", 53 | "Can I know your age" 54 | ], 55 | "responses": [ 56 | "I am 12 years old!", 57 | "12 years young!", 58 | "Hi there, I am 12 years young", 59 | "Yoo! I am 12 years old", 60 | "My age! It's 12 Yo!" 
61 | ] 62 | }, 63 | { 64 | "tag": "name", 65 | "patterns": [ 66 | "what is your name", 67 | "what should I call you", 68 | "whats your name?" 69 | ], 70 | "responses": [ 71 | "You can call me TensorChatBot, call me TCBot.", 72 | "I'm TensorChatBot call me TCBot.", 73 | "I'm TCBot aka TensorChatBot." 74 | ] 75 | }, 76 | { 77 | "tag": "shop", 78 | "patterns": [ 79 | "Id like to buy something", 80 | "whats on the menu", 81 | "what do you recommend for buying?", 82 | "could i get something to eat", 83 | "What should I buy?" 84 | ], 85 | "responses": [ 86 | "We sell chocolate chip cookies for $2!", 87 | "Cookies are on the menu!", 88 | "Cookies are our favourite item" 89 | ] 90 | }, 91 | { 92 | "tag": "hours", 93 | "patterns": [ 94 | "when are you guys open", 95 | "what are your hours", 96 | "hours of operation", 97 | "When is the shop open?", 98 | "When does the shop open?" 99 | ], 100 | "responses": [ 101 | "We are open 7am-4pm Monday-Friday!" 102 | ] 103 | }, 104 | { 105 | "tag": "wish", 106 | "patterns": [ 107 | "Good Morning", 108 | "Good afternoon", 109 | "Good evening", 110 | "happy birthday" 111 | ], 112 | "responses": [ 113 | "Hey! Good morning", 114 | "Hey! sup?", 115 | "Sup?", 116 | "Thanks! Have a great day" 117 | ] 118 | } 119 | ] 120 | } -------------------------------------------------------------------------------- /main.py: -------------------------------------------------------------------------------- 1 | import requests 2 | import json 3 | import time 4 | 5 | url = "http://localhost:8000/api" 6 | 7 | print("------+------+-----+------+-----+-----+-----+------+------") 8 | print(" TChatBot Session Initializing......") 9 | print("------+------+-----+------+-----+-----+-----+------+------\n") 10 | print("TChatBot Session started at : {} ".format(time.ctime())) 11 | print("Type \'quit\' to end chat session") 12 | print("\n------+------+-----+------+-----+-----+-----+------+------") 13 | time.sleep(1) 14 | print("\nTChatBot (Bot): Hey! 
I'm TChatBot :)") #This will be done in the JS Backend by default 15 | 16 | while True: 17 | 18 | input_data = input("\nUser (Human): ") 19 | 20 | if(str(input_data).lower()=='quit'): 21 | print("------+------+-----+------+-----+-----+-----+------+------") 22 | print(" TChatBot Session Expiring......") 23 | print("------+------+-----+------+-----+-----+-----+------+------") 24 | print("Exited Chat Session....") 25 | print("------+------+-----+------+-----+-----+-----+------+------\n") 26 | exit() 27 | 28 | data = json.dumps({"input":input_data}) 29 | r = requests.post(url, data=data) 30 | out = r.json()["reply"] 31 | print("TChatBot (Bot): {}".format(out)) -------------------------------------------------------------------------------- /model.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import json 3 | import requests 4 | 5 | MODEL_URL='https://tchatbot1.herokuapp.com/v1/models/chat_model:predict' 6 | 7 | def get_prediction(bag_of_words): 8 | 9 | data = json.dumps({ 10 | "signature_name": "serving_default", 11 | "instances": bag_of_words 12 | }) 13 | headers = {"content-type": "application/json"} 14 | 15 | response = requests.post(url=MODEL_URL, data=data, headers=headers) 16 | result = json.loads(response.text) 17 | prediction = np.squeeze(result['predictions'][0]) 18 | return prediction 19 | 20 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | Flask==1.1.1 2 | nltk==3.4.5 3 | numpy==1.17.0 4 | requests==2.24.0 5 | Flask-Cors==3.0.9 --------------------------------------------------------------------------------
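The bag-of-words encoding that `funcs.py` builds (and `app.py` sends to the model via `get_prediction`) can be sketched in a few dependency-free lines. This is a simplified illustration, not the repo's exact pipeline: whitespace splitting stands in for `nltk.word_tokenize`, and plain lower-casing stands in for the `LancasterStemmer`, so the vectors will differ slightly from what `funcs.py` produces.

```python
# Simplified sketch of the bag-of-words encoding used in funcs.py.
# Assumptions (not from the repo): str.split() replaces nltk.word_tokenize
# and lower-casing replaces Lancaster stemming, to keep the example
# dependency-free.

def bag_of_words(sentence, vocabulary):
    # Normalize the input the same way the vocabulary was normalized
    tokens = [t.lower() for t in sentence.split() if t != "?"]
    # 1 if the vocabulary word appears in the sentence, else 0
    return [1 if word in tokens else 0 for word in vocabulary]

# A tiny vocabulary, sorted as in ProcessData()
vocab = sorted(["hi", "hello", "age", "old", "name"])
print(vocab)                                         # ['age', 'hello', 'hi', 'name', 'old']
print(bag_of_words("hello how old are you", vocab))  # [0, 1, 0, 0, 1]
```

The served model then maps this fixed-length 0/1 vector to one probability per intent tag, and `app.py` picks the tag via `np.argmax` before choosing a random response for it from `intents.json`.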