├── Chatbot
│   ├── Speech recognition
│   │   ├── LICENSE
│   │   ├── OpenAI-Whisper-ChatGPT-Notebook.ipynb
│   │   ├── OpenAI-Whisper-ChatGPTAPI-Audio-Output-Notebook.ipynb
│   │   └── readme.md
│   └── Text Generation
│       ├── Deep learning
│       │   ├── Tim_Chatbot.ipynb
│       │   ├── loading model.py
│       │   ├── main.py
│       │   └── readme.md
│       ├── Simplilearn
│       │   ├── README.md
│       │   ├── Readme.md
│       │   ├── chatbot.py
│       │   ├── intents.json
│       │   └── new.py
│       └── pythoncv Telegram channel
│           ├── Python Chatbot Project.py
│           ├── Python_Chatbot_Project_–_Learn_to_build_your_first_chatbot_using.docx
│           └── readme.md
├── Discord Bots
│   ├── CS50.db
│   ├── LICENSE
│   ├── README.md
│   ├── auth.py
│   ├── cogs
│   │   └── check.py
│   ├── readme.md
│   └── requirements.txt
├── Instagram Bots
│   ├── Commenting_bot.ipynb
│   ├── Scraping Instagram with Python.pdf
│   ├── WebscrapingInstagram_completeNotebook.ipynb
│   ├── instagram.py
│   ├── mport instaloader.py
│   └── readme.md
├── Snapp Prize
│   ├── Snapp.py
│   ├── gui.py
│   ├── http_error.py
│   └── readme.md
├── Telegram Bots
│   ├── Automation
│   │   ├── Melanee robot.py
│   │   └── readme.md
│   ├── Bot.py
│   ├── mtrx.py
│   └── readme.md
└── readme.md
/Chatbot/Speech recognition/LICENSE:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 |
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 |
7 | 1. Definitions.
8 |
9 | "License" shall mean the terms and conditions for use, reproduction,
10 | and distribution as defined by Sections 1 through 9 of this document.
11 |
12 | "Licensor" shall mean the copyright owner or entity authorized by
13 | the copyright owner that is granting the License.
14 |
15 | "Legal Entity" shall mean the union of the acting entity and all
16 | other entities that control, are controlled by, or are under common
17 | control with that entity. For the purposes of this definition,
18 | "control" means (i) the power, direct or indirect, to cause the
19 | direction or management of such entity, whether by contract or
20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
21 | outstanding shares, or (iii) beneficial ownership of such entity.
22 |
23 | "You" (or "Your") shall mean an individual or Legal Entity
24 | exercising permissions granted by this License.
25 |
26 | "Source" form shall mean the preferred form for making modifications,
27 | including but not limited to software source code, documentation
28 | source, and configuration files.
29 |
30 | "Object" form shall mean any form resulting from mechanical
31 | transformation or translation of a Source form, including but
32 | not limited to compiled object code, generated documentation,
33 | and conversions to other media types.
34 |
35 | "Work" shall mean the work of authorship, whether in Source or
36 | Object form, made available under the License, as indicated by a
37 | copyright notice that is included in or attached to the work
38 | (an example is provided in the Appendix below).
39 |
40 | "Derivative Works" shall mean any work, whether in Source or Object
41 | form, that is based on (or derived from) the Work and for which the
42 | editorial revisions, annotations, elaborations, or other modifications
43 | represent, as a whole, an original work of authorship. For the purposes
44 | of this License, Derivative Works shall not include works that remain
45 | separable from, or merely link (or bind by name) to the interfaces of,
46 | the Work and Derivative Works thereof.
47 |
48 | "Contribution" shall mean any work of authorship, including
49 | the original version of the Work and any modifications or additions
50 | to that Work or Derivative Works thereof, that is intentionally
51 | submitted to Licensor for inclusion in the Work by the copyright owner
52 | or by an individual or Legal Entity authorized to submit on behalf of
53 | the copyright owner. For the purposes of this definition, "submitted"
54 | means any form of electronic, verbal, or written communication sent
55 | to the Licensor or its representatives, including but not limited to
56 | communication on electronic mailing lists, source code control systems,
57 | and issue tracking systems that are managed by, or on behalf of, the
58 | Licensor for the purpose of discussing and improving the Work, but
59 | excluding communication that is conspicuously marked or otherwise
60 | designated in writing by the copyright owner as "Not a Contribution."
61 |
62 | "Contributor" shall mean Licensor and any individual or Legal Entity
63 | on behalf of whom a Contribution has been received by Licensor and
64 | subsequently incorporated within the Work.
65 |
66 | 2. Grant of Copyright License. Subject to the terms and conditions of
67 | this License, each Contributor hereby grants to You a perpetual,
68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69 | copyright license to reproduce, prepare Derivative Works of,
70 | publicly display, publicly perform, sublicense, and distribute the
71 | Work and such Derivative Works in Source or Object form.
72 |
73 | 3. Grant of Patent License. Subject to the terms and conditions of
74 | this License, each Contributor hereby grants to You a perpetual,
75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76 | (except as stated in this section) patent license to make, have made,
77 | use, offer to sell, sell, import, and otherwise transfer the Work,
78 | where such license applies only to those patent claims licensable
79 | by such Contributor that are necessarily infringed by their
80 | Contribution(s) alone or by combination of their Contribution(s)
81 | with the Work to which such Contribution(s) was submitted. If You
82 | institute patent litigation against any entity (including a
83 | cross-claim or counterclaim in a lawsuit) alleging that the Work
84 | or a Contribution incorporated within the Work constitutes direct
85 | or contributory patent infringement, then any patent licenses
86 | granted to You under this License for that Work shall terminate
87 | as of the date such litigation is filed.
88 |
89 | 4. Redistribution. You may reproduce and distribute copies of the
90 | Work or Derivative Works thereof in any medium, with or without
91 | modifications, and in Source or Object form, provided that You
92 | meet the following conditions:
93 |
94 | (a) You must give any other recipients of the Work or
95 | Derivative Works a copy of this License; and
96 |
97 | (b) You must cause any modified files to carry prominent notices
98 | stating that You changed the files; and
99 |
100 | (c) You must retain, in the Source form of any Derivative Works
101 | that You distribute, all copyright, patent, trademark, and
102 | attribution notices from the Source form of the Work,
103 | excluding those notices that do not pertain to any part of
104 | the Derivative Works; and
105 |
106 | (d) If the Work includes a "NOTICE" text file as part of its
107 | distribution, then any Derivative Works that You distribute must
108 | include a readable copy of the attribution notices contained
109 | within such NOTICE file, excluding those notices that do not
110 | pertain to any part of the Derivative Works, in at least one
111 | of the following places: within a NOTICE text file distributed
112 | as part of the Derivative Works; within the Source form or
113 | documentation, if provided along with the Derivative Works; or,
114 | within a display generated by the Derivative Works, if and
115 | wherever such third-party notices normally appear. The contents
116 | of the NOTICE file are for informational purposes only and
117 | do not modify the License. You may add Your own attribution
118 | notices within Derivative Works that You distribute, alongside
119 | or as an addendum to the NOTICE text from the Work, provided
120 | that such additional attribution notices cannot be construed
121 | as modifying the License.
122 |
123 | You may add Your own copyright statement to Your modifications and
124 | may provide additional or different license terms and conditions
125 | for use, reproduction, or distribution of Your modifications, or
126 | for any such Derivative Works as a whole, provided Your use,
127 | reproduction, and distribution of the Work otherwise complies with
128 | the conditions stated in this License.
129 |
130 | 5. Submission of Contributions. Unless You explicitly state otherwise,
131 | any Contribution intentionally submitted for inclusion in the Work
132 | by You to the Licensor shall be under the terms and conditions of
133 | this License, without any additional terms or conditions.
134 | Notwithstanding the above, nothing herein shall supersede or modify
135 | the terms of any separate license agreement you may have executed
136 | with Licensor regarding such Contributions.
137 |
138 | 6. Trademarks. This License does not grant permission to use the trade
139 | names, trademarks, service marks, or product names of the Licensor,
140 | except as required for reasonable and customary use in describing the
141 | origin of the Work and reproducing the content of the NOTICE file.
142 |
143 | 7. Disclaimer of Warranty. Unless required by applicable law or
144 | agreed to in writing, Licensor provides the Work (and each
145 | Contributor provides its Contributions) on an "AS IS" BASIS,
146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147 | implied, including, without limitation, any warranties or conditions
148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149 | PARTICULAR PURPOSE. You are solely responsible for determining the
150 | appropriateness of using or redistributing the Work and assume any
151 | risks associated with Your exercise of permissions under this License.
152 |
153 | 8. Limitation of Liability. In no event and under no legal theory,
154 | whether in tort (including negligence), contract, or otherwise,
155 | unless required by applicable law (such as deliberate and grossly
156 | negligent acts) or agreed to in writing, shall any Contributor be
157 | liable to You for damages, including any direct, indirect, special,
158 | incidental, or consequential damages of any character arising as a
159 | result of this License or out of the use or inability to use the
160 | Work (including but not limited to damages for loss of goodwill,
161 | work stoppage, computer failure or malfunction, or any and all
162 | other commercial damages or losses), even if such Contributor
163 | has been advised of the possibility of such damages.
164 |
165 | 9. Accepting Warranty or Additional Liability. While redistributing
166 | the Work or Derivative Works thereof, You may choose to offer,
167 | and charge a fee for, acceptance of support, warranty, indemnity,
168 | or other liability obligations and/or rights consistent with this
169 | License. However, in accepting such obligations, You may act only
170 | on Your own behalf and on Your sole responsibility, not on behalf
171 | of any other Contributor, and only if You agree to indemnify,
172 | defend, and hold each Contributor harmless for any liability
173 | incurred by, or claims asserted against, such Contributor by reason
174 | of your accepting any such warranty or additional liability.
175 |
176 | END OF TERMS AND CONDITIONS
177 |
178 | APPENDIX: How to apply the Apache License to your work.
179 |
180 | To apply the Apache License to your work, attach the following
181 | boilerplate notice, with the fields enclosed by brackets "[]"
182 | replaced with your own identifying information. (Don't include
183 | the brackets!) The text should be enclosed in the appropriate
184 | comment syntax for the file format. We also recommend that a
185 | file or class name and description of purpose be included on the
186 | same "printed page" as the copyright notice for easier
187 | identification within third-party archives.
188 |
189 | Copyright [yyyy] [name of copyright owner]
190 |
191 | Licensed under the Apache License, Version 2.0 (the "License");
192 | you may not use this file except in compliance with the License.
193 | You may obtain a copy of the License at
194 |
195 | http://www.apache.org/licenses/LICENSE-2.0
196 |
197 | Unless required by applicable law or agreed to in writing, software
198 | distributed under the License is distributed on an "AS IS" BASIS,
199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200 | See the License for the specific language governing permissions and
201 | limitations under the License.
202 |
--------------------------------------------------------------------------------
/Chatbot/Speech recognition/OpenAI-Whisper-ChatGPT-Notebook.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {
6 | "id": "kvi6cAEHY5sh"
7 | },
8 | "source": [
9 | "# Installation"
10 | ]
11 | },
12 | {
13 | "cell_type": "code",
14 | "execution_count": null,
15 | "metadata": {
16 | "colab": {
17 | "base_uri": "https://localhost:8080/"
18 | },
19 | "id": "ZsJUxc0aRsAf",
20 | "outputId": "29eb71fb-e703-4af4-9004-6621d502fdc9"
21 | },
22 | "outputs": [
23 | {
24 | "name": "stdout",
25 | "output_type": "stream",
26 | "text": [
27 | "\u001b[K |████████████████████████████████| 5.8 MB 15.3 MB/s \n",
28 | "\u001b[K |████████████████████████████████| 7.6 MB 65.1 MB/s \n",
29 | "\u001b[K |████████████████████████████████| 182 kB 76.9 MB/s \n",
30 | "\u001b[?25h Building wheel for whisper (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
31 | "\u001b[K |████████████████████████████████| 11.6 MB 12.6 MB/s \n",
32 | "\u001b[K |████████████████████████████████| 55 kB 4.7 MB/s \n",
33 | "\u001b[K |████████████████████████████████| 2.3 MB 62.8 MB/s \n",
34 | "\u001b[K |████████████████████████████████| 84 kB 3.8 MB/s \n",
35 | "\u001b[K |████████████████████████████████| 84 kB 3.9 MB/s \n",
36 | "\u001b[K |████████████████████████████████| 278 kB 76.2 MB/s \n",
37 | "\u001b[K |████████████████████████████████| 106 kB 71.6 MB/s \n",
38 | "\u001b[K |████████████████████████████████| 213 kB 55.8 MB/s \n",
39 | "\u001b[K |████████████████████████████████| 56 kB 5.6 MB/s \n",
40 | "\u001b[K |████████████████████████████████| 54 kB 3.0 MB/s \n",
41 | "\u001b[K |████████████████████████████████| 64 kB 3.5 MB/s \n",
42 | "\u001b[K |████████████████████████████████| 80 kB 12.0 MB/s \n",
43 | "\u001b[K |████████████████████████████████| 68 kB 8.4 MB/s \n",
44 | "\u001b[K |████████████████████████████████| 68 kB 8.3 MB/s \n",
45 | "\u001b[K |████████████████████████████████| 68 kB 8.6 MB/s \n",
46 | "\u001b[K |████████████████████████████████| 68 kB 8.6 MB/s \n",
47 | "\u001b[K |████████████████████████████████| 50 kB 8.5 MB/s \n",
48 | "\u001b[K |████████████████████████████████| 856 kB 76.9 MB/s \n",
49 | "\u001b[K |████████████████████████████████| 593 kB 72.7 MB/s \n",
50 | "\u001b[K |████████████████████████████████| 4.0 MB 66.1 MB/s \n",
51 | "\u001b[?25h Building wheel for ffmpy (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
52 | " Building wheel for python-multipart (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
53 | " Building wheel for uuid (setup.py) ... \u001b[?25l\u001b[?25hdone\n"
54 | ]
55 | }
56 | ],
57 | "source": [
58 | "!pip install -q git+https://github.com/openai/whisper.git\n",
59 | "!pip install -q gradio\n",
60 | "!pip install -q pyChatGPT"
61 | ]
62 | },
63 | {
64 | "cell_type": "markdown",
65 | "metadata": {
66 | "id": "usUSep_lY7q8"
67 | },
68 | "source": [
69 | "# Imports"
70 | ]
71 | },
72 | {
73 | "cell_type": "code",
74 | "execution_count": null,
75 | "metadata": {
76 | "id": "Kr5faKybKi4p"
77 | },
78 | "outputs": [],
79 | "source": [
80 | "import whisper\n",
81 | "import gradio as gr \n",
82 | "import time\n",
83 | "from pyChatGPT import ChatGPT\n",
84 | "import warnings"
85 | ]
86 | },
87 | {
88 | "cell_type": "markdown",
89 | "metadata": {
90 | "id": "EyW6weauY9gf"
91 | },
92 | "source": [
93 | "# Defining Variables"
94 | ]
95 | },
96 | {
97 | "cell_type": "code",
98 | "execution_count": null,
99 | "metadata": {
100 | "id": "u_6_s2iHboR4"
101 | },
102 | "outputs": [],
103 | "source": [
104 | "warnings.filterwarnings(\"ignore\")"
105 | ]
106 | },
107 | {
108 | "cell_type": "code",
109 | "execution_count": null,
110 | "metadata": {
111 | "id": "HP2ovD3xaDfC"
112 | },
113 | "outputs": [],
114 | "source": [
115 | "secret_token = \"\" # Enter your session token here!"
116 | ]
117 | },
118 | {
119 | "cell_type": "code",
120 | "execution_count": null,
121 | "metadata": {
122 | "colab": {
123 | "base_uri": "https://localhost:8080/"
124 | },
125 | "id": "cMNnv3oHaDt8",
126 | "outputId": "c5ae5c6d-163d-4d28-a483-8f0035760098"
127 | },
128 | "outputs": [
129 | {
130 | "name": "stderr",
131 | "output_type": "stream",
132 | "text": [
133 | "100%|███████████████████████████████████████| 139M/139M [00:06<00:00, 23.7MiB/s]\n"
134 | ]
135 | }
136 | ],
137 | "source": [
138 | "model = whisper.load_model(\"base\")"
139 | ]
140 | },
141 | {
142 | "cell_type": "code",
143 | "execution_count": null,
144 | "metadata": {
145 | "colab": {
146 | "base_uri": "https://localhost:8080/"
147 | },
148 | "id": "e3bjy1E3R-MN",
149 | "outputId": "d396d1df-bdc5-4e62-abf7-3f57be3154de"
150 | },
151 | "outputs": [
152 | {
153 | "data": {
154 | "text/plain": [
155 | "device(type='cuda', index=0)"
156 | ]
157 | },
158 | "execution_count": 6,
159 | "metadata": {},
160 | "output_type": "execute_result"
161 | }
162 | ],
163 | "source": [
164 | "model.device"
165 | ]
166 | },
167 | {
168 | "cell_type": "markdown",
169 | "metadata": {
170 | "id": "14CPZJMxY_Ks"
171 | },
172 | "source": [
173 | "# Transcribe Function"
174 | ]
175 | },
176 | {
177 | "cell_type": "code",
178 | "execution_count": null,
179 | "metadata": {
180 | "id": "JtTvvQQPcOZZ"
181 | },
182 | "outputs": [],
183 | "source": [
184 | "def transcribe(audio):\n",
185 | "\n",
186 | " # load audio and pad/trim it to fit 30 seconds\n",
187 | " audio = whisper.load_audio(audio)\n",
188 | " audio = whisper.pad_or_trim(audio)\n",
189 | "\n",
190 | " # make log-Mel spectrogram and move to the same device as the model\n",
191 | " mel = whisper.log_mel_spectrogram(audio).to(model.device)\n",
192 | "\n",
193 | " # detect the spoken language\n",
194 | " _, probs = model.detect_language(mel)\n",
195 | "\n",
196 | " # decode the audio\n",
197 | " options = whisper.DecodingOptions()\n",
198 | " result = whisper.decode(model, mel, options)\n",
199 | " result_text = result.text\n",
200 | "\n",
201 | " # Pass the generated text to Audio\n",
202 | " chatgpt_api = ChatGPT(secret_token)\n",
203 | " resp = chatgpt_api.send_message(result_text)\n",
204 | " out_result = resp['message']\n",
205 | "\n",
206 | " return [result_text, out_result]"
207 | ]
208 | },
209 | {
210 | "cell_type": "code",
211 | "execution_count": null,
212 | "metadata": {
213 | "id": "8Yn912jvfiz-"
214 | },
215 | "outputs": [],
216 | "source": []
217 | },
218 | {
219 | "cell_type": "markdown",
220 | "metadata": {
221 | "id": "aJaFmE9aZB_8"
222 | },
223 | "source": [
224 | "# Gradio Interface"
225 | ]
226 | },
227 | {
228 | "cell_type": "code",
229 | "execution_count": null,
230 | "metadata": {
231 | "colab": {
232 | "base_uri": "https://localhost:8080/",
233 | "height": 633
234 | },
235 | "id": "deSAVvfJcWBo",
236 | "outputId": "4fb3e1dd-0a3e-4554-ba35-2a8e8543d376"
237 | },
238 | "outputs": [
239 | {
240 | "name": "stdout",
241 | "output_type": "stream",
242 | "text": [
243 | "Hint: Set streaming=True for Audio component to use live streaming.\n",
244 | "Colab notebook detected. To show errors in colab notebook, set `debug=True` in `launch()`\n",
245 | "Note: opening Chrome Inspector may crash demo inside Colab notebooks.\n",
246 | "\n",
247 | "To create a public link, set `share=True` in `launch()`.\n"
248 | ]
249 | },
250 | {
251 | "data": {
252 | "application/javascript": [
253 | "(async (port, path, width, height, cache, element) => {\n",
254 | " if (!google.colab.kernel.accessAllowed && !cache) {\n",
255 | " return;\n",
256 | " }\n",
257 | " element.appendChild(document.createTextNode(''));\n",
258 | " const url = await google.colab.kernel.proxyPort(port, {cache});\n",
259 | "\n",
260 | " const external_link = document.createElement('div');\n",
261 | " external_link.innerHTML = `\n",
262 | "
\n",
267 | " `;\n",
268 | " element.appendChild(external_link);\n",
269 | "\n",
270 | " const iframe = document.createElement('iframe');\n",
271 | " iframe.src = new URL(path, url).toString();\n",
272 | " iframe.height = height;\n",
273 | " iframe.allow = \"autoplay; camera; microphone; clipboard-read; clipboard-write;\"\n",
274 | " iframe.width = width;\n",
275 | " iframe.style.border = 0;\n",
276 | " element.appendChild(iframe);\n",
277 | " })(7860, \"/\", \"100%\", 500, false, window.element)"
278 | ],
279 | "text/plain": [
280 | ""
281 | ]
282 | },
283 | "metadata": {},
284 | "output_type": "display_data"
285 | },
286 | {
287 | "data": {
288 | "text/plain": []
289 | },
290 | "execution_count": 8,
291 | "metadata": {},
292 | "output_type": "execute_result"
293 | }
294 | ],
295 | "source": [
296 | "output_1 = gr.Textbox(label=\"Speech to Text\")\n",
297 | "output_2 = gr.Textbox(label=\"ChatGPT Output\")\n",
298 | "\n",
299 | "\n",
300 | "gr.Interface(\n",
301 | " title = 'OpenAI Whisper and ChatGPT ASR Gradio Web UI', \n",
302 | " fn=transcribe, \n",
303 | " inputs=[\n",
304 | " gr.inputs.Audio(source=\"microphone\", type=\"filepath\")\n",
305 | " ],\n",
306 | "\n",
307 | " outputs=[\n",
308 | " output_1, output_2\n",
309 | " ],\n",
310 | " live=True).launch()"
311 | ]
312 | },
313 | {
314 | "cell_type": "code",
315 | "execution_count": null,
316 | "metadata": {
317 | "id": "y2Zid2MKdPxK"
318 | },
319 | "outputs": [],
320 | "source": []
321 | }
322 | ],
323 | "metadata": {
324 | "accelerator": "GPU",
325 | "colab": {
326 | "collapsed_sections": [
327 | "kvi6cAEHY5sh",
328 | "usUSep_lY7q8",
329 | "14CPZJMxY_Ks",
330 | "aJaFmE9aZB_8"
331 | ],
332 | "provenance": []
333 | },
334 | "gpuClass": "standard",
335 | "kernelspec": {
336 | "display_name": "Python 3 (ipykernel)",
337 | "language": "python",
338 | "name": "python3"
339 | },
340 | "language_info": {
341 | "codemirror_mode": {
342 | "name": "ipython",
343 | "version": 3
344 | },
345 | "file_extension": ".py",
346 | "mimetype": "text/x-python",
347 | "name": "python",
348 | "nbconvert_exporter": "python",
349 | "pygments_lexer": "ipython3",
350 | "version": "3.10.6"
351 | },
352 | "toc": {
353 | "base_numbering": 1,
354 | "nav_menu": {},
355 | "number_sections": true,
356 | "sideBar": true,
357 | "skip_h1_title": false,
358 | "title_cell": "Table of Contents",
359 | "title_sidebar": "Contents",
360 | "toc_cell": false,
361 | "toc_position": {},
362 | "toc_section_display": true,
363 | "toc_window_display": false
364 | },
365 | "varInspector": {
366 | "cols": {
367 | "lenName": 16,
368 | "lenType": 16,
369 | "lenVar": 40
370 | },
371 | "kernels_config": {
372 | "python": {
373 | "delete_cmd_postfix": "",
374 | "delete_cmd_prefix": "del ",
375 | "library": "var_list.py",
376 | "varRefreshCmd": "print(var_dic_list())"
377 | },
378 | "r": {
379 | "delete_cmd_postfix": ") ",
380 | "delete_cmd_prefix": "rm(",
381 | "library": "var_list.r",
382 | "varRefreshCmd": "cat(var_dic_list()) "
383 | }
384 | },
385 | "types_to_exclude": [
386 | "module",
387 | "function",
388 | "builtin_function_or_method",
389 | "instance",
390 | "_Feature"
391 | ],
392 | "window_display": false
393 | }
394 | },
395 | "nbformat": 4,
396 | "nbformat_minor": 1
397 | }
398 |
--------------------------------------------------------------------------------
/Chatbot/Speech recognition/OpenAI-Whisper-ChatGPTAPI-Audio-Output-Notebook.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {
6 | "id": "kvi6cAEHY5sh"
7 | },
8 | "source": [
9 | "# Installation"
10 | ]
11 | },
12 | {
13 | "cell_type": "code",
14 | "execution_count": 1,
15 | "metadata": {
16 | "id": "ZsJUxc0aRsAf",
17 | "colab": {
18 | "base_uri": "https://localhost:8080/"
19 | },
20 | "outputId": "98042512-9122-45b6-cee2-829d8e447d74"
21 | },
22 | "outputs": [
23 | {
24 | "output_type": "stream",
25 | "name": "stdout",
26 | "text": [
27 | " Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
28 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m6.3/6.3 MB\u001b[0m \u001b[31m17.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
29 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m190.3/190.3 KB\u001b[0m \u001b[31m16.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
30 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m7.6/7.6 MB\u001b[0m \u001b[31m20.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
31 | "\u001b[?25h Building wheel for openai-whisper (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
32 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m14.2/14.2 MB\u001b[0m \u001b[31m99.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
33 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m140.7/140.7 KB\u001b[0m \u001b[31m5.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
34 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m107.0/107.0 KB\u001b[0m \u001b[31m13.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
35 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m56.2/56.2 KB\u001b[0m \u001b[31m7.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
36 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m50.5/50.5 KB\u001b[0m \u001b[31m6.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
37 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m56.9/56.9 KB\u001b[0m \u001b[31m8.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
38 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m71.5/71.5 KB\u001b[0m \u001b[31m10.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
39 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m45.7/45.7 KB\u001b[0m \u001b[31m6.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
40 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m84.5/84.5 KB\u001b[0m \u001b[31m11.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
41 | "\u001b[?25h Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
42 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m2.1/2.1 MB\u001b[0m \u001b[31m85.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
43 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m66.4/66.4 KB\u001b[0m \u001b[31m10.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
44 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m69.6/69.6 KB\u001b[0m \u001b[31m9.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
45 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m58.3/58.3 KB\u001b[0m \u001b[31m8.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
46 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m80.6/80.6 KB\u001b[0m \u001b[31m11.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
47 | "\u001b[?25h Building wheel for ffmpy (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
48 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m70.1/70.1 KB\u001b[0m \u001b[31m3.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
49 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m62.8/62.8 KB\u001b[0m \u001b[31m6.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
50 | "\u001b[?25h"
51 | ]
52 | }
53 | ],
54 | "source": [
55 | "!pip install -q git+https://github.com/openai/whisper.git\n",
56 | "!pip install -q gradio\n",
57 | "!pip install -q openai\n",
58 | "!pip install -q gTTS"
59 | ]
60 | },
61 | {
62 | "cell_type": "markdown",
63 | "metadata": {
64 | "id": "usUSep_lY7q8"
65 | },
66 | "source": [
67 | "# Imports"
68 | ]
69 | },
70 | {
71 | "cell_type": "code",
72 | "execution_count": 2,
73 | "metadata": {
74 | "id": "Kr5faKybKi4p"
75 | },
76 | "outputs": [],
77 | "source": [
78 | "import whisper\n",
79 | "import gradio as gr \n",
80 | "import time\n",
81 | "import warnings\n",
82 | "import json\n",
83 | "import openai\n",
84 | "import os\n",
85 | "from gtts import gTTS"
86 | ]
87 | },
88 | {
89 | "cell_type": "markdown",
90 | "metadata": {
91 | "id": "EyW6weauY9gf"
92 | },
93 | "source": [
94 | "# Defining Variables"
95 | ]
96 | },
97 | {
98 | "cell_type": "code",
99 | "execution_count": 3,
100 | "metadata": {
101 | "id": "u_6_s2iHboR4"
102 | },
103 | "outputs": [],
104 | "source": [
105 | "warnings.filterwarnings(\"ignore\")"
106 | ]
107 | },
108 | {
109 | "cell_type": "code",
110 | "execution_count": 4,
111 | "metadata": {
112 | "id": "2zj7g1C381VJ"
113 | },
114 | "outputs": [],
115 | "source": [
116 | "with open('GPT_SECRET_KEY.json') as f:\n",
117 | " data = json.load(f)"
118 | ]
119 | },
120 | {
121 | "cell_type": "code",
122 | "execution_count": 5,
123 | "metadata": {
124 | "id": "mihgqxDv81VJ"
125 | },
126 | "outputs": [],
127 | "source": [
128 | "openai.api_key = data[\"API_KEY\"]"
129 | ]
130 | },
131 | {
132 | "cell_type": "code",
133 | "execution_count": 6,
134 | "metadata": {
135 | "id": "cMNnv3oHaDt8",
136 | "colab": {
137 | "base_uri": "https://localhost:8080/"
138 | },
139 | "outputId": "5d11fe12-c0e8-4373-ccdf-2a08271bcef1"
140 | },
141 | "outputs": [
142 | {
143 | "output_type": "stream",
144 | "name": "stderr",
145 | "text": [
146 | "100%|████████████████████████████████████████| 139M/139M [00:00<00:00, 180MiB/s]\n"
147 | ]
148 | }
149 | ],
150 | "source": [
151 | "model = whisper.load_model(\"base\")"
152 | ]
153 | },
154 | {
155 | "cell_type": "code",
156 | "execution_count": 7,
157 | "metadata": {
158 | "id": "e3bjy1E3R-MN",
159 | "colab": {
160 | "base_uri": "https://localhost:8080/"
161 | },
162 | "outputId": "0ce4bb4d-cefd-4706-c4fc-d5815a4b9664"
163 | },
164 | "outputs": [
165 | {
166 | "output_type": "execute_result",
167 | "data": {
168 | "text/plain": [
169 | "device(type='cuda', index=0)"
170 | ]
171 | },
172 | "metadata": {},
173 | "execution_count": 7
174 | }
175 | ],
176 | "source": [
177 | "model.device"
178 | ]
179 | },
180 | {
181 | "cell_type": "code",
182 | "source": [
183 | "!ffmpeg -f lavfi -i anullsrc=r=44100:cl=mono -t 10 -q:a 9 -acodec libmp3lame Temp.mp3"
184 | ],
185 | "metadata": {
186 | "id": "zlhNLyS3B22D",
187 | "colab": {
188 | "base_uri": "https://localhost:8080/"
189 | },
190 | "outputId": "5798ffb9-769f-4793-e961-7b697c5e59ec"
191 | },
192 | "execution_count": 8,
193 | "outputs": [
194 | {
195 | "output_type": "stream",
196 | "name": "stdout",
197 | "text": [
198 | "ffmpeg version 4.2.7-0ubuntu0.1 Copyright (c) 2000-2022 the FFmpeg developers\n",
199 | " built with gcc 9 (Ubuntu 9.4.0-1ubuntu1~20.04.1)\n",
200 | " configuration: --prefix=/usr --extra-version=0ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared\n",
201 | " libavutil 56. 31.100 / 56. 31.100\n",
202 | " libavcodec 58. 54.100 / 58. 54.100\n",
203 | " libavformat 58. 29.100 / 58. 29.100\n",
204 | " libavdevice 58. 8.100 / 58. 8.100\n",
205 | " libavfilter 7. 57.100 / 7. 57.100\n",
206 | " libavresample 4. 0. 0 / 4. 0. 0\n",
207 | " libswscale 5. 5.100 / 5. 5.100\n",
208 | " libswresample 3. 5.100 / 3. 5.100\n",
209 | " libpostproc 55. 5.100 / 55. 5.100\n",
210 | "Input #0, lavfi, from 'anullsrc=r=44100:cl=mono':\n",
211 | " Duration: N/A, start: 0.000000, bitrate: 352 kb/s\n",
212 | " Stream #0:0: Audio: pcm_u8, 44100 Hz, mono, u8, 352 kb/s\n",
213 | "Stream mapping:\n",
214 | " Stream #0:0 -> #0:0 (pcm_u8 (native) -> mp3 (libmp3lame))\n",
215 | "Press [q] to stop, [?] for help\n",
216 | "Output #0, mp3, to 'Temp.mp3':\n",
217 | " Metadata:\n",
218 | " TSSE : Lavf58.29.100\n",
219 | " Stream #0:0: Audio: mp3 (libmp3lame), 44100 Hz, mono, s16p\n",
220 | " Metadata:\n",
221 | " encoder : Lavc58.54.100 libmp3lame\n",
222 | "size= 39kB time=00:00:10.00 bitrate= 32.1kbits/s speed= 236x \n",
223 | "video:0kB audio:39kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.568409%\n"
224 | ]
225 | }
226 | ]
227 | },
228 | {
229 | "cell_type": "markdown",
230 | "source": [
231 | "# ChatGPT_API_Function"
232 | ],
233 | "metadata": {
234 | "id": "_gNUvwL2BzD4"
235 | }
236 | },
237 | {
238 | "cell_type": "code",
239 | "execution_count": 9,
240 | "metadata": {
241 | "id": "4QXJeLiy81VL"
242 | },
243 | "outputs": [],
244 | "source": [
245 | "def chatgpt_api(input_text):\n",
246 | " messages = [\n",
247 | " {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"}]\n",
248 | " \n",
249 | " if input_text:\n",
250 | " messages.append(\n",
251 | " {\"role\": \"user\", \"content\": input_text},\n",
252 | " )\n",
253 | " chat_completion = openai.ChatCompletion.create(\n",
254 | " model=\"gpt-3.5-turbo\", messages=messages\n",
255 | " )\n",
256 | " \n",
257 | " reply = chat_completion.choices[0].message.content\n",
258 | " return reply"
259 | ]
260 | },
261 | {
262 | "cell_type": "markdown",
263 | "metadata": {
264 | "id": "14CPZJMxY_Ks"
265 | },
266 | "source": [
267 | "# Transcribe Function"
268 | ]
269 | },
270 | {
271 | "cell_type": "code",
272 | "execution_count": 10,
273 | "metadata": {
274 | "id": "JtTvvQQPcOZZ"
275 | },
276 | "outputs": [],
277 | "source": [
278 | "def transcribe(audio):\n",
279 | "\n",
280 | " language = 'en'\n",
281 | "\n",
282 | " audio = whisper.load_audio(audio)\n",
283 | " audio = whisper.pad_or_trim(audio)\n",
284 | "\n",
285 | " mel = whisper.log_mel_spectrogram(audio).to(model.device)\n",
286 | "\n",
287 | " _, probs = model.detect_language(mel)\n",
288 | "\n",
289 | " options = whisper.DecodingOptions()\n",
290 | " result = whisper.decode(model, mel, options)\n",
291 | " result_text = result.text\n",
292 | " \n",
293 | " out_result = chatgpt_api(result_text)\n",
294 | " \n",
295 | " audioobj = gTTS(text = out_result, \n",
296 | " lang = language, \n",
297 | " slow = False)\n",
298 | " \n",
299 | " audioobj.save(\"Temp.mp3\")\n",
300 | "\n",
301 | " return [result_text, out_result, \"Temp.mp3\"]"
302 | ]
303 | },
304 | {
305 | "cell_type": "markdown",
306 | "metadata": {
307 | "id": "aJaFmE9aZB_8"
308 | },
309 | "source": [
310 | "# Gradio Interface"
311 | ]
312 | },
313 | {
314 | "cell_type": "code",
315 | "execution_count": 11,
316 | "metadata": {
317 | "id": "deSAVvfJcWBo",
318 | "scrolled": false,
319 | "colab": {
320 | "base_uri": "https://localhost:8080/",
321 | "height": 616
322 | },
323 | "outputId": "a8a51e43-5dce-4c91-f56f-51541e849fc1"
324 | },
325 | "outputs": [
326 | {
327 | "output_type": "stream",
328 | "name": "stdout",
329 | "text": [
330 | "Colab notebook detected. To show errors in colab notebook, set debug=True in launch()\n",
331 | "Note: opening Chrome Inspector may crash demo inside Colab notebooks.\n",
332 | "\n",
333 | "To create a public link, set `share=True` in `launch()`.\n"
334 | ]
335 | },
336 | {
337 | "output_type": "display_data",
338 | "data": {
339 | "text/plain": [
340 | ""
341 | ],
342 | "application/javascript": [
343 | "(async (port, path, width, height, cache, element) => {\n",
344 | " if (!google.colab.kernel.accessAllowed && !cache) {\n",
345 | " return;\n",
346 | " }\n",
347 | " element.appendChild(document.createTextNode(''));\n",
348 | " const url = await google.colab.kernel.proxyPort(port, {cache});\n",
349 | "\n",
350 | " const external_link = document.createElement('div');\n",
351 | " external_link.innerHTML = `\n",
352 | " \n",
357 | " `;\n",
358 | " element.appendChild(external_link);\n",
359 | "\n",
360 | " const iframe = document.createElement('iframe');\n",
361 | " iframe.src = new URL(path, url).toString();\n",
362 | " iframe.height = height;\n",
363 | " iframe.allow = \"autoplay; camera; microphone; clipboard-read; clipboard-write;\"\n",
364 | " iframe.width = width;\n",
365 | " iframe.style.border = 0;\n",
366 | " element.appendChild(iframe);\n",
367 | " })(7860, \"/\", \"100%\", 500, false, window.element)"
368 | ]
369 | },
370 | "metadata": {}
371 | },
372 | {
373 | "output_type": "execute_result",
374 | "data": {
375 | "text/plain": []
376 | },
377 | "metadata": {},
378 | "execution_count": 11
379 | }
380 | ],
381 | "source": [
382 | "output_1 = gr.Textbox(label=\"Speech to Text\")\n",
383 | "output_2 = gr.Textbox(label=\"ChatGPT Output\")\n",
384 | "output_3 = gr.Audio(\"Temp.mp3\")\n",
385 | "\n",
386 | "gr.Interface(\n",
387 | " title = 'OpenAI Whisper and ChatGPT ASR Gradio Web UI', \n",
388 | " fn=transcribe, \n",
389 | " inputs=[\n",
390 | " gr.inputs.Audio(source=\"microphone\", type=\"filepath\")\n",
391 | " ],\n",
392 | "\n",
393 | " outputs=[\n",
394 | " output_1, output_2, output_3\n",
395 | " ],\n",
396 | " live=True).launch()"
397 | ]
398 | },
399 | {
400 | "cell_type": "code",
401 | "execution_count": null,
402 | "metadata": {
403 | "id": "y2Zid2MKdPxK"
404 | },
405 | "outputs": [],
406 | "source": []
407 | }
408 | ],
409 | "metadata": {
410 | "accelerator": "GPU",
411 | "colab": {
412 | "provenance": []
413 | },
414 | "gpuClass": "standard",
415 | "kernelspec": {
416 | "display_name": "Python 3 (ipykernel)",
417 | "language": "python",
418 | "name": "python3"
419 | },
420 | "language_info": {
421 | "codemirror_mode": {
422 | "name": "ipython",
423 | "version": 3
424 | },
425 | "file_extension": ".py",
426 | "mimetype": "text/x-python",
427 | "name": "python",
428 | "nbconvert_exporter": "python",
429 | "pygments_lexer": "ipython3",
430 | "version": "3.9.15"
431 | }
432 | },
433 | "nbformat": 4,
434 | "nbformat_minor": 0
435 | }
--------------------------------------------------------------------------------
/Chatbot/Speech recognition/readme.md:
--------------------------------------------------------------------------------
1 | ## Reference:
2 |
3 | https://github.com/bhattbhavesh91/voice-assistant-whisper-chatgpt
4 |
5 |
6 |
7 |
8 |
9 |
--------------------------------------------------------------------------------
/Chatbot/Text Generation/Deep learning/Tim_Chatbot.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "colab": {
6 | "provenance": [],
7 | "include_colab_link": true
8 | },
9 | "kernelspec": {
10 | "name": "python3",
11 | "display_name": "Python 3"
12 | },
13 | "language_info": {
14 | "name": "python"
15 | }
16 | },
17 | "cells": [
18 | {
19 | "cell_type": "markdown",
20 | "metadata": {
21 | "id": "view-in-github",
22 | "colab_type": "text"
23 | },
24 | "source": [
25 | "
"
26 | ]
27 | },
28 | {
29 | "cell_type": "code",
30 | "execution_count": null,
31 | "metadata": {
32 | "id": "j5pqdqTirzEe"
33 | },
34 | "outputs": [],
35 | "source": [
36 | "import tensorflow\n",
37 | "# pip install tensorflow==2.13.0rc1"
38 | ]
39 | },
40 | {
41 | "cell_type": "code",
42 | "source": [
43 | "#pip install nltk==3.8.1\n",
44 | "import nltk\n",
45 | "from nltk.stem.lancaster import LancasterStemmer\n",
46 | "stemmer = LancasterStemmer()"
47 | ],
48 | "metadata": {
49 | "id": "jy2UV9eUr2NR"
50 | },
51 | "execution_count": null,
52 | "outputs": []
53 | },
54 | {
55 | "cell_type": "code",
56 | "source": [
57 | "!pip install tflearn==0.5.0"
58 | ],
59 | "metadata": {
60 | "colab": {
61 | "base_uri": "https://localhost:8080/"
62 | },
63 | "id": "nR61OhO6uSQY",
64 | "outputId": "5c99c46b-0b18-44aa-d8bc-b3dc63683211"
65 | },
66 | "execution_count": null,
67 | "outputs": [
68 | {
69 | "output_type": "stream",
70 | "name": "stdout",
71 | "text": [
72 | "Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n",
73 | "Collecting tflearn\n",
74 | " Downloading tflearn-0.5.0.tar.gz (107 kB)\n",
75 | "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m107.3/107.3 kB\u001b[0m \u001b[31m3.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
76 | "\u001b[?25h Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
77 | "Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from tflearn) (1.22.4)\n",
78 | "Requirement already satisfied: six in /usr/local/lib/python3.10/dist-packages (from tflearn) (1.16.0)\n",
79 | "Requirement already satisfied: Pillow in /usr/local/lib/python3.10/dist-packages (from tflearn) (8.4.0)\n",
80 | "Building wheels for collected packages: tflearn\n",
81 | " Building wheel for tflearn (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
82 | " Created wheel for tflearn: filename=tflearn-0.5.0-py3-none-any.whl size=127283 sha256=8df8ea58808a88be3069c65031cbca1027749254bb8fa578ac55eddf7827d4cc\n",
83 | " Stored in directory: /root/.cache/pip/wheels/55/fb/7b/e06204a0ceefa45443930b9a250cb5ebe31def0e4e8245a465\n",
84 | "Successfully built tflearn\n",
85 | "Installing collected packages: tflearn\n",
86 | "Successfully installed tflearn-0.5.0\n"
87 | ]
88 | }
89 | ]
90 | },
91 | {
92 | "cell_type": "code",
93 | "source": [
94 | "import numpy\n",
95 | "import tflearn\n",
96 | "import tensorflow\n",
97 | "import random\n",
98 | "import json\n",
99 | "import pickle"
100 | ],
101 | "metadata": {
102 | "colab": {
103 | "base_uri": "https://localhost:8080/"
104 | },
105 | "id": "7KNLhGaLt5YI",
106 | "outputId": "b2cc8799-23c3-4365-b6db-5de275b8c46a"
107 | },
108 | "execution_count": null,
109 | "outputs": [
110 | {
111 | "output_type": "stream",
112 | "name": "stderr",
113 | "text": [
114 | "WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/tensorflow/python/compat/v2_compat.py:107: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.\n",
115 | "Instructions for updating:\n",
116 | "non-resource variables are not supported in the long term\n"
117 | ]
118 | }
119 | ]
120 | },
121 | {
122 | "cell_type": "code",
123 | "source": [
124 | "with open(\"intents.json\") as file:\n",
125 | " data = json.load(file)"
126 | ],
127 | "metadata": {
128 | "id": "2T5WXg1xuPED"
129 | },
130 | "execution_count": null,
131 | "outputs": []
132 | },
133 | {
134 | "cell_type": "code",
135 | "source": [
136 | "try:\n",
137 | " with open(\"data.pickle\", \"rb\") as f:\n",
138 | " words, labels, training, output = pickle.load(f)\n",
139 | "except:\n",
140 | " words = []\n",
141 | " labels = []\n",
142 | " docs_x = []\n",
143 | " docs_y = []\n"
144 | ],
145 | "metadata": {
146 | "id": "4Bf6ZfJKv0Qq"
147 | },
148 | "execution_count": null,
149 | "outputs": []
150 | },
151 | {
152 | "cell_type": "code",
153 | "source": [
154 | "nltk.download('punkt')"
155 | ],
156 | "metadata": {
157 | "colab": {
158 | "base_uri": "https://localhost:8080/"
159 | },
160 | "id": "XTusZdRjwrNj",
161 | "outputId": "9c48183b-8edf-45ba-b611-b25653ce2974"
162 | },
163 | "execution_count": null,
164 | "outputs": [
165 | {
166 | "output_type": "stream",
167 | "name": "stderr",
168 | "text": [
169 | "[nltk_data] Downloading package punkt to /root/nltk_data...\n",
170 | "[nltk_data] Unzipping tokenizers/punkt.zip.\n"
171 | ]
172 | },
173 | {
174 | "output_type": "execute_result",
175 | "data": {
176 | "text/plain": [
177 | "True"
178 | ]
179 | },
180 | "metadata": {},
181 | "execution_count": 8
182 | }
183 | ]
184 | },
185 | {
186 | "cell_type": "code",
187 | "source": [
188 | "for intent in data[\"intents\"]:\n",
189 | " for pattern in intent[\"patterns\"]:\n",
190 | " wrds = nltk.word_tokenize(pattern)\n",
191 | " words.extend(wrds)\n",
192 | " docs_x.append(wrds)\n",
193 | " docs_y.append(intent[\"tag\"])\n",
194 | "\n",
195 | " if intent[\"tag\"] not in labels:\n",
196 | " labels.append(intent[\"tag\"])\n",
197 | "\n",
198 | " words = [stemmer.stem(w.lower()) for w in words if w != \"?\"]\n",
199 | " words = sorted(list(set(words)))\n",
200 | "\n",
201 | " labels = sorted(labels)\n",
202 | "\n",
203 | " training = []\n",
204 | " output = []\n",
205 | "\n",
206 | " out_empty = [0 for _ in range(len(labels))]\n",
207 | "\n",
208 | " for x, doc in enumerate(docs_x):\n",
209 | " bag = []\n",
210 | "\n",
211 | " wrds = [stemmer.stem(w.lower()) for w in doc]\n",
212 | " for w in words:\n",
213 | " if w in wrds:\n",
214 | " bag.append(1)\n",
215 | " else:\n",
216 | " bag.append(0)\n",
217 | " output_row = out_empty[:]\n",
218 | " output_row[labels.index(docs_y[x])] = 1\n",
219 | "\n",
220 | " training.append(bag)\n",
221 | " output.append(output_row)\n",
222 | "\n",
223 | " training = numpy.array(training)\n",
224 | " output = numpy.array(output)\n",
225 | " with open(\"data.pickle\", \"wb\") as f:\n",
226 | " pickle.dump((words, labels, training, output), f)\n",
227 | ""
228 | ],
229 | "metadata": {
230 | "id": "rzZHEPfbv1yM"
231 | },
232 | "execution_count": null,
233 | "outputs": []
234 | },
235 | {
236 | "cell_type": "code",
237 | "source": [
238 | "try:\n",
239 | " model.load('model.tflearn')\n",
240 | "except:\n",
241 | " tensorflow.compat.v1.reset_default_graph()\n",
242 | "\n",
243 | "\n",
244 | " net = tflearn.input_data(shape=[None, len(training[0])])\n",
245 | " net = tflearn.fully_connected(net, 8)\n",
246 | " net = tflearn.fully_connected(net, 8)\n",
247 | " net = tflearn.fully_connected(net, len(output[0]), activation='softmax')\n",
248 | " net = tflearn.regression(net)\n",
249 | "\n",
250 | " model = tflearn.DNN(net)\n",
251 | "\n",
252 | " model.fit(training, output, n_epoch=100, batch_size=8, show_metric=True)\n",
253 | " model.save(\"model.tflearn\")"
254 | ],
255 | "metadata": {
256 | "colab": {
257 | "base_uri": "https://localhost:8080/"
258 | },
259 | "id": "4NQKTD0U32N7",
260 | "outputId": "815dfc86-6be4-4b20-a9db-120854efda4c"
261 | },
262 | "execution_count": null,
263 | "outputs": [
264 | {
265 | "output_type": "stream",
266 | "name": "stdout",
267 | "text": [
268 | "Training Step: 399 | total loss: \u001b[1m\u001b[32m1.05555\u001b[0m\u001b[0m | time: 0.009s\n",
269 | "| Adam | epoch: 100 | loss: 1.05555 - acc: 0.4007 -- iter: 24/26\n",
270 | "Training Step: 400 | total loss: \u001b[1m\u001b[32m1.06306\u001b[0m\u001b[0m | time: 0.012s\n",
271 | "| Adam | epoch: 100 | loss: 1.06306 - acc: 0.3606 -- iter: 26/26\n",
272 | "--\n"
273 | ]
274 | }
275 | ]
276 | },
277 | {
278 | "cell_type": "markdown",
279 | "source": [
280 | "# New Section"
281 | ],
282 | "metadata": {
283 | "id": "7tTrztxe7CKv"
284 | }
285 | },
286 | {
287 | "cell_type": "code",
288 | "source": [
289 | "def bag_of_words(s, words):\n",
290 | " bag = [0 for _ in range(len(words))]\n",
291 | "\n",
292 | " s_words = nltk.word_tokenize(s)\n",
293 | " s_words = [stemmer.stem(word.lower()) for word in s_words]\n",
294 | "\n",
295 | " for se in s_words:\n",
296 | " for i, w in enumerate(words):\n",
297 | " if w == se:\n",
298 | " bag[i] = 1\n",
299 | "\n",
300 | " return numpy.array(bag)\n",
301 | "\n",
302 | "\n",
303 | "def chat():\n",
304 | " print(\"Start talking with the bot (type quit to stop)!\")\n",
305 | " while True:\n",
306 | " inp = input(\"You: \")\n",
307 | " if inp.lower() == \"quit\":\n",
308 | " break\n",
309 | "\n",
310 | " results = model.predict([bag_of_words(inp, words)])\n",
311 | " results_index = numpy.argmax(results)\n",
312 | " tag = labels[results_index]\n",
313 | "\n",
314 | " for tg in data[\"intents\"]:\n",
315 | " if tg['tag'] == tag:\n",
316 | " responses = tg['responses']\n",
317 | "\n",
318 | " print(random.choice(responses))\n",
319 | "\n",
320 | "chat()"
321 | ],
322 | "metadata": {
323 | "id": "9IIIivrl43hq"
324 | },
325 | "execution_count": null,
326 | "outputs": []
327 | }
328 | ]
329 | }
--------------------------------------------------------------------------------
/Chatbot/Text Generation/Deep learning/loading model.py:
--------------------------------------------------------------------------------
1 | #loading model
2 |
3 |
4 | import nltk
5 | from nltk.stem.lancaster import LancasterStemmer
6 | stemmer = LancasterStemmer()
7 |
8 | import numpy
9 | import tflearn
10 | import tensorflow
11 | import random
12 | import json
13 | import pickle
14 |
15 | with open("intents.json") as file:
16 | data = json.load(file)
17 |
18 | try:
19 | with open("data.pickle", "rb") as f:
20 | words, labels, training, output = pickle.load(f)
21 | except:
22 | words = []
23 | labels = []
24 | docs_x = []
25 | docs_y = []
26 |
27 | for intent in data["intents"]:
28 | for pattern in intent["patterns"]:
29 | wrds = nltk.word_tokenize(pattern)
30 | words.extend(wrds)
31 | docs_x.append(wrds)
32 | docs_y.append(intent["tag"])
33 |
34 | if intent["tag"] not in labels:
35 | labels.append(intent["tag"])
36 |
37 | words = [stemmer.stem(w.lower()) for w in words if w != "?"]
38 | words = sorted(list(set(words)))
39 |
40 | labels = sorted(labels)
41 |
42 | training = []
43 | output = []
44 |
45 | out_empty = [0 for _ in range(len(labels))]
46 |
47 | for x, doc in enumerate(docs_x):
48 | bag = []
49 |
50 | wrds = [stemmer.stem(w.lower()) for w in doc]
51 |
52 | for w in words:
53 | if w in wrds:
54 | bag.append(1)
55 | else:
56 | bag.append(0)
57 |
58 | output_row = out_empty[:]
59 | output_row[labels.index(docs_y[x])] = 1
60 |
61 | training.append(bag)
62 | output.append(output_row)
63 |
64 |
65 | training = numpy.array(training)
66 | output = numpy.array(output)
67 |
68 | with open("data.pickle", "wb") as f:
69 | pickle.dump((words, labels, training, output), f)
70 |
 71 | tensorflow.compat.v1.reset_default_graph()  # compat.v1 call so this also runs on TensorFlow 2.x (matches the notebook version)
72 |
73 | net = tflearn.input_data(shape=[None, len(training[0])])
74 | net = tflearn.fully_connected(net, 8)
75 | net = tflearn.fully_connected(net, 8)
76 | net = tflearn.fully_connected(net, len(output[0]), activation="softmax")
77 | net = tflearn.regression(net)
78 |
79 | model = tflearn.DNN(net)
80 |
81 | try:
82 | model.load("model.tflearn")
83 | except:
84 | model.fit(training, output, n_epoch=1000, batch_size=8, show_metric=True)
85 | model.save("model.tflearn")
--------------------------------------------------------------------------------
/Chatbot/Text Generation/Deep learning/main.py:
--------------------------------------------------------------------------------
1 | def bag_of_words(s, words):  # relies on the imports and globals defined in "loading model.py" (nltk, numpy, random, stemmer, data, words, labels, model)
2 | bag = [0 for _ in range(len(words))]
3 |
4 | s_words = nltk.word_tokenize(s)
5 | s_words = [stemmer.stem(word.lower()) for word in s_words]
6 |
7 | for se in s_words:
8 | for i, w in enumerate(words):
9 | if w == se:
10 | bag[i] = 1
11 |
12 | return numpy.array(bag)
13 |
14 |
15 | def chat():
16 | print("Start talking with the bot (type quit to stop)!")
17 | while True:
18 | inp = input("You: ")
19 | if inp.lower() == "quit":
20 | break
21 |
22 | results = model.predict([bag_of_words(inp, words)])
23 | results_index = numpy.argmax(results)
24 | tag = labels[results_index]
25 |
26 | for tg in data["intents"]:
27 | if tg['tag'] == tag:
28 | responses = tg['responses']
29 |
30 | print(random.choice(responses))
31 |
32 | chat()
--------------------------------------------------------------------------------
/Chatbot/Text Generation/Deep learning/readme.md:
--------------------------------------------------------------------------------
1 | Tim Chatbot
2 |
3 | https://www.youtube.com/watch?v=wypVcNIH6D4&list=PLzMcBGfZo4-ndH9FoC4YWHGXG5RZekt-Q&index=1&t=0s
4 |
5 | https://www.techwithtim.net/tutorials/ai-chatbot
6 |
--------------------------------------------------------------------------------
/Chatbot/Text Generation/Simplilearn/README.md:
--------------------------------------------------------------------------------
1 | # create_chatbot_using_python
2 | This code implements a simple chatbot using TensorFlow, Google's machine learning framework. A neural network is trained to classify user input into predefined intents, and the bot replies with a response appropriate to the detected intent.
3 | The intents.json file contains the training data we provide to the chatbot.
4 |
5 |
6 | References
7 |
8 | https://www.youtube.com/watch?v=t933Gh5fNrc
9 |
--------------------------------------------------------------------------------
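A minimal run-order sketch for the Simplilearn chatbot above (an illustrative addition, not one of the repository files): new.py has to be run once so that words.pkl, classes.pkl and chatbot_model.h5 exist before chatbot.py is started, both scripts need the NLTK punkt and wordnet data, and both hard-code an absolute Windows path to intents.json that should point at your local copy.

    # setup_simplilearn_bot.py -- hypothetical helper, not part of the repo
    import nltk

    # tokenizer and lemmatizer data used by new.py and chatbot.py
    nltk.download("punkt")
    nltk.download("wordnet")

    # Then, from the Simplilearn folder (with the intents.json path fixed in both scripts):
    #   python new.py       # trains the model and writes words.pkl, classes.pkl, chatbot_model.h5
    #   python chatbot.py   # loads those files and starts the console chat loop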
/Chatbot/Text Generation/Simplilearn/Readme.md:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/Chatbot/Text Generation/Simplilearn/chatbot.py:
--------------------------------------------------------------------------------
1 | import random
2 | import json
3 | import pickle
4 | import numpy as np
5 | import nltk
6 |
7 | from nltk.stem import WordNetLemmatizer
8 | from keras.models import load_model
9 |
10 | lemmatizer = WordNetLemmatizer()
 11 | intents = json.loads(open(r'C:\Simplilearn\Python\Python projects\chatbot using python\chatbot\intents.json').read())  # raw string so the backslashes in the Windows path are not treated as escapes
12 |
13 | words = pickle.load(open('words.pkl', 'rb'))
14 | classes = pickle.load(open('classes.pkl', 'rb'))
15 | model = load_model('chatbot_model.h5')
16 |
17 |
18 | def clean_up_sentence(sentence):
19 | sentence_words = nltk.word_tokenize(sentence)
20 | sentence_words = [lemmatizer.lemmatize(word) for word in sentence_words]
21 | return sentence_words
22 |
23 | def bag_of_words (sentence):
24 | sentence_words = clean_up_sentence(sentence)
25 | bag = [0] * len(words)
26 | for w in sentence_words:
27 | for i, word in enumerate(words):
28 | if word == w:
29 | bag[i] = 1
30 | return np.array(bag)
31 |
32 | def predict_class (sentence):
33 | bow = bag_of_words (sentence)
34 | res = model.predict(np.array([bow]))[0]
35 | ERROR_THRESHOLD = 0.25
36 | results = [[i, r] for i, r in enumerate(res) if r > ERROR_THRESHOLD]
37 |
38 | results.sort(key=lambda x: x[1], reverse=True)
39 | return_list = []
40 | for r in results:
41 | return_list.append({'intent': classes [r[0]], 'probability': str(r[1])})
42 | return return_list
43 |
44 | def get_response(intents_list, intents_json):
45 | tag = intents_list[0]['intent']
46 | list_of_intents = intents_json['intents']
47 | for i in list_of_intents:
48 | if i['tag'] == tag:
49 | result = random.choice (i['responses'])
50 | break
51 | return result
52 |
53 | print("GO! Bot is running!")
54 |
55 | while True:
56 | message = input("")
57 | ints = predict_class (message)
58 | res = get_response (ints, intents)
59 | print (res)
60 |
--------------------------------------------------------------------------------
/Chatbot/Text Generation/Simplilearn/intents.json:
--------------------------------------------------------------------------------
1 | {"intents": [
2 | {"tag": "greeting",
3 | "patterns": ["Hi there", "How are you", "Is anyone there?","Hey","Hola", "Hello", "Good day"],
4 | "responses": ["Hello", "Good to see you again", "Hi there, how can I help?"],
5 | "context": [""]
6 | },
7 | {"tag": "goodbye",
8 | "patterns": ["Bye", "See you later", "Goodbye", "Nice chatting to you, bye", "Till next time"],
9 | "responses": ["See you!", "Have a nice day", "Bye! Come back again soon."],
10 | "context": [""]
11 | },
12 | {"tag": "thanks",
13 | "patterns": ["Thanks", "Thank you", "That's helpful", "Awesome, thanks", "Thanks for helping me"],
14 | "responses": ["My pleasure", "You're Welcome"],
15 | "context": [""]
16 | },
17 | {"tag": "query",
18 | "patterns": ["What is Simplilearn?"],
19 | "responses": ["Simplilearn is the popular online Bootcamp & online courses learning platform "],
20 | "context": [""]
21 | }
22 | ]}
--------------------------------------------------------------------------------
/Chatbot/Text Generation/Simplilearn/new.py:
--------------------------------------------------------------------------------
1 | import random
2 | import json
3 | import pickle
4 | import numpy as np
5 | import tensorflow as tf
6 |
7 | import nltk
8 | from nltk.stem import WordNetLemmatizer
9 |
10 | lemmatizer = WordNetLemmatizer()
11 |
12 | intents = json.loads(open(r'C:\Simplilearn\Python\Python projects\chatbot using python\chatbot\intents.json').read())
13 |
14 | words = []
15 | classes = []
16 | documents = []
17 | ignoreLetters = ['?', '!', '.', ',']
18 |
19 | for intent in intents['intents']:
20 | for pattern in intent['patterns']:
21 | wordList = nltk.word_tokenize(pattern)
22 | words.extend(wordList)
23 | documents.append((wordList, intent['tag']))
24 | if intent['tag'] not in classes:
25 | classes.append(intent['tag'])
26 |
27 | words = [lemmatizer.lemmatize(word) for word in words if word not in ignoreLetters]
28 | words = sorted(set(words))
29 |
30 | classes = sorted(set(classes))
31 |
32 | pickle.dump(words, open('words.pkl', 'wb'))
33 | pickle.dump(classes, open('classes.pkl', 'wb'))
34 |
35 | training = []
36 | outputEmpty = [0] * len(classes)
37 |
38 | for document in documents:
39 | bag = []
40 | wordPatterns = document[0]
41 | wordPatterns = [lemmatizer.lemmatize(word.lower()) for word in wordPatterns]
42 | for word in words:
43 | bag.append(1) if word in wordPatterns else bag.append(0)
44 |
45 | outputRow = list(outputEmpty)
46 | outputRow[classes.index(document[1])] = 1
47 | training.append(bag + outputRow)
48 |
49 | random.shuffle(training)
50 | training = np.array(training)
51 |
52 | trainX = training[:, :len(words)]
53 | trainY = training[:, len(words):]
54 |
55 |
56 | model = tf.keras.Sequential()
57 | model.add(tf.keras.layers.Dense(128, input_shape=(len(trainX[0]),), activation = 'relu'))
58 | model.add(tf.keras.layers.Dropout(0.5))
59 | model.add(tf.keras.layers.Dense(64, activation = 'relu'))
60 | model.add(tf.keras.layers.Dropout(0.5))
61 | model.add(tf.keras.layers.Dense(len(trainY[0]), activation='softmax'))
62 |
63 | sgd = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
64 | model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
65 |
66 | hist = model.fit(np.array(trainX), np.array(trainY), epochs=200, batch_size=5, verbose=1)
67 | model.save('chatbot_model.h5')
68 | print('Done')
69 |
70 |
71 |
72 |
--------------------------------------------------------------------------------
/Chatbot/Text Generation/pythoncv Telegram channel/Python Chatbot Project.py:
--------------------------------------------------------------------------------
1 | # telegram id => @pythonCV & @pythonCV_gp
2 |
3 | # 1. Import and load the data file
4 |
5 | # First, create a file named train_chatbot.py.
6 | # We import the necessary packages for our chatbot and initialize the variables we will use in our Python project.
7 | # The data file is in JSON format so we used the json package to parse the JSON file into Python.
8 |
9 | import nltk
10 | from nltk.stem import WordNetLemmatizer
11 | lemmatizer = WordNetLemmatizer()
12 | import json
13 | import pickle
14 |
15 | import numpy as np
16 | from keras.models import Sequential
17 | from keras.layers import Dense, Activation, Dropout
18 | from keras.optimizers import SGD
19 | import random
20 |
21 | words=[]
22 | classes = []
23 | documents = []
24 | ignore_words = ['?', '!']
25 | data_file = open('intents.json').read()
26 | intents = json.loads(data_file)
27 |
28 | # 2. Preprocess data
29 |
30 | for intent in intents['intents']:
31 | for pattern in intent['patterns']:
32 |
33 | #tokenize each word
34 | w = nltk.word_tokenize(pattern)
35 | words.extend(w)
36 | #add documents in the corpus
37 | documents.append((w, intent['tag']))
38 |
39 | # add to our classes list
40 | if intent['tag'] not in classes:
41 | classes.append(intent['tag'])
42 |
43 |
44 | # Now we will lemmatize each word and remove duplicate words from the list.
45 | # Lemmatizing is the process of converting a word into its lemma form and then creating a pickle file to store the Python objects which we will use while predicting.
46 | # lemmatize, lower each word and remove duplicates
47 | words = [lemmatizer.lemmatize(w.lower()) for w in words if w not in ignore_words]
48 | words = sorted(list(set(words)))
49 | # sort classes
50 | classes = sorted(list(set(classes)))
51 | # documents = combination between patterns and intents
52 | print (len(documents), "documents")
53 | # classes = intents
54 | print (len(classes), "classes", classes)
55 | # words = all words, vocabulary
56 | print (len(words), "unique lemmatized words", words)
57 |
58 | pickle.dump(words,open('words.pkl','wb'))
59 | pickle.dump(classes,open('classes.pkl','wb'))
60 |
61 |
62 |
63 | # 3. Create training and testing data
64 | # create our training data
65 | training = []
66 | # create an empty array for our output
67 | output_empty = [0] * len(classes)
68 | # training set, bag of words for each sentence
69 | for doc in documents:
70 | # initialize our bag of words
71 | bag = []
72 | # list of tokenized words for the pattern
73 | pattern_words = doc[0]
74 | # lemmatize each word - create base word, in attempt to represent related words
75 | pattern_words = [lemmatizer.lemmatize(word.lower()) for word in pattern_words]
76 | # create our bag of words array with 1, if word match found in current pattern
77 | for w in words:
78 | bag.append(1) if w in pattern_words else bag.append(0)
79 |
80 | # output is a '0' for each tag and '1' for current tag (for each pattern)
81 | output_row = list(output_empty)
82 | output_row[classes.index(doc[1])] = 1
83 |
84 | training.append([bag, output_row])
85 | # shuffle our features and turn into np.array
86 | random.shuffle(training)
87 | training = np.array(training, dtype=object)  # bag and output_row have different lengths, so keep an object array
88 | # create train and test lists. X - patterns, Y - intents
89 | train_x = list(training[:,0])
90 | train_y = list(training[:,1])
91 | print("Training data created")
92 |
93 |
94 |
95 | # 4. Build the model
96 |
97 | # Create model - 3 layers. First layer 128 neurons, second layer 64 neurons and 3rd output layer contains number of neurons
98 | # equal to number of intents to predict output intent with softmax
99 | model = Sequential()
100 | model.add(Dense(128, input_shape=(len(train_x[0]),), activation='relu'))
101 | model.add(Dropout(0.5))
102 | model.add(Dense(64, activation='relu'))
103 | model.add(Dropout(0.5))
104 | model.add(Dense(len(train_y[0]), activation='softmax'))
105 |
106 | # Compile model. Stochastic gradient descent with Nesterov accelerated gradient gives good results for this model
107 | sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
108 | model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
109 |
110 | #fitting and saving the model
111 | hist = model.fit(np.array(train_x), np.array(train_y), epochs=200, batch_size=5, verbose=1)
112 | model.save('chatbot_model.h5')
113 |
114 | print("model created")
115 |
116 | # 5. Predict the response (Graphical User Interface)
117 |
118 |
119 |
120 | import nltk
121 | from nltk.stem import WordNetLemmatizer
122 | lemmatizer = WordNetLemmatizer()
123 | import pickle
124 | import numpy as np
125 |
126 | from keras.models import load_model
127 | model = load_model('chatbot_model.h5')
128 | import json
129 | import random
130 | intents = json.loads(open('intents.json').read())
131 | words = pickle.load(open('words.pkl','rb'))
132 | classes = pickle.load(open('classes.pkl','rb'))
133 |
134 |
135 | def clean_up_sentence(sentence):
136 | # tokenize the pattern - split words into array
137 | sentence_words = nltk.word_tokenize(sentence)
138 | # stem each word - create short form for word
139 | sentence_words = [lemmatizer.lemmatize(word.lower()) for word in sentence_words]
140 | return sentence_words
141 | # return bag of words array: 0 or 1 for each word in the bag that exists in the sentence
142 |
143 | def bow(sentence, words, show_details=True):
144 | # tokenize the pattern
145 | sentence_words = clean_up_sentence(sentence)
146 | # bag of words - matrix of N words, vocabulary matrix
147 | bag = [0]*len(words)
148 | for s in sentence_words:
149 | for i,w in enumerate(words):
150 | if w == s:
151 | # assign 1 if current word is in the vocabulary position
152 | bag[i] = 1
153 | if show_details:
154 | print ("found in bag: %s" % w)
155 | return(np.array(bag))
156 |
157 | def predict_class(sentence, model):
158 | # filter out predictions below a threshold
159 | p = bow(sentence, words,show_details=False)
160 | res = model.predict(np.array([p]))[0]
161 | ERROR_THRESHOLD = 0.25
162 | results = [[i,r] for i,r in enumerate(res) if r>ERROR_THRESHOLD]
163 | # sort by strength of probability
164 | results.sort(key=lambda x: x[1], reverse=True)
165 | return_list = []
166 | for r in results:
167 | return_list.append({"intent": classes[r[0]], "probability": str(r[1])})
168 | return return_list
169 |
170 |
171 | def getResponse(ints, intents_json):
172 | tag = ints[0]['intent']
173 | list_of_intents = intents_json['intents']
174 | for i in list_of_intents:
175 | if(i['tag']== tag):
176 | result = random.choice(i['responses'])
177 | break
178 | return result
179 |
180 | def chatbot_response(text):
181 | ints = predict_class(text, model)
182 | res = getResponse(ints, intents)
183 | return res
184 |
185 | #Creating GUI with tkinter
186 | import tkinter
187 | from tkinter import *
188 |
189 |
190 | def send():
191 | msg = EntryBox.get("1.0",'end-1c').strip()
192 | EntryBox.delete("0.0",END)
193 |
194 | if msg != '':
195 | ChatLog.config(state=NORMAL)
196 | ChatLog.insert(END, "You: " + msg + '\n\n')
197 | ChatLog.config(foreground="#442265", font=("Verdana", 12 ))
198 |
199 | res = chatbot_response(msg)
200 | ChatLog.insert(END, "Bot: " + res + '\n\n')
201 |
202 | ChatLog.config(state=DISABLED)
203 | ChatLog.yview(END)
204 |
205 | base = Tk()
206 | base.title("Hello")
207 | base.geometry("400x500")
208 | base.resizable(width=FALSE, height=FALSE)
209 |
210 | #Create Chat window
211 | ChatLog = Text(base, bd=0, bg="white", height="8", width="50", font="Arial",)
212 |
213 | ChatLog.config(state=DISABLED)
214 |
215 | #Bind scrollbar to Chat window
216 | scrollbar = Scrollbar(base, command=ChatLog.yview, cursor="heart")
217 | ChatLog['yscrollcommand'] = scrollbar.set
218 |
219 | #Create Button to send message
220 | SendButton = Button(base, font=("Verdana",12,'bold'), text="Send", width="12", height=5,
221 | bd=0, bg="#32de97", activebackground="#3c9d9b",fg='#ffffff',
222 | command= send )
223 |
224 | #Create the box to enter message
225 | EntryBox = Text(base, bd=0, bg="white",width="29", height="5", font="Arial")
226 | #EntryBox.bind("", send)
227 |
228 |
229 | #Place all components on the screen
230 | scrollbar.place(x=376,y=6, height=386)
231 | ChatLog.place(x=6,y=6, height=386, width=370)
232 | EntryBox.place(x=128, y=401, height=90, width=265)
233 | SendButton.place(x=6, y=401, height=90)
234 |
235 | base.mainloop()
236 |
--------------------------------------------------------------------------------
/Chatbot/Text Generation/pythoncv Telegram channel/Python_Chatbot_Project_–_Learn_to_build_your_first_chatbot_using.docx:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Melanee-Melanee/Python-Bots/8a3fd1779f7b79c7079c396b2d395ba78c6d8b29/Chatbot/Text Generation/pythoncv Telegram channel/Python_Chatbot_Project_–_Learn_to_build_your_first_chatbot_using.docx
--------------------------------------------------------------------------------
/Chatbot/Text Generation/pythoncv Telegram channel/readme.md:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/Discord Bots/CS50.db:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Melanee-Melanee/Python-Bots/8a3fd1779f7b79c7079c396b2d395ba78c6d8b29/Discord Bots/CS50.db
--------------------------------------------------------------------------------
/Discord Bots/LICENSE:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 |
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 |
7 | 1. Definitions.
8 |
9 | "License" shall mean the terms and conditions for use, reproduction,
10 | and distribution as defined by Sections 1 through 9 of this document.
11 |
12 | "Licensor" shall mean the copyright owner or entity authorized by
13 | the copyright owner that is granting the License.
14 |
15 | "Legal Entity" shall mean the union of the acting entity and all
16 | other entities that control, are controlled by, or are under common
17 | control with that entity. For the purposes of this definition,
18 | "control" means (i) the power, direct or indirect, to cause the
19 | direction or management of such entity, whether by contract or
20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
21 | outstanding shares, or (iii) beneficial ownership of such entity.
22 |
23 | "You" (or "Your") shall mean an individual or Legal Entity
24 | exercising permissions granted by this License.
25 |
26 | "Source" form shall mean the preferred form for making modifications,
27 | including but not limited to software source code, documentation
28 | source, and configuration files.
29 |
30 | "Object" form shall mean any form resulting from mechanical
31 | transformation or translation of a Source form, including but
32 | not limited to compiled object code, generated documentation,
33 | and conversions to other media types.
34 |
35 | "Work" shall mean the work of authorship, whether in Source or
36 | Object form, made available under the License, as indicated by a
37 | copyright notice that is included in or attached to the work
38 | (an example is provided in the Appendix below).
39 |
40 | "Derivative Works" shall mean any work, whether in Source or Object
41 | form, that is based on (or derived from) the Work and for which the
42 | editorial revisions, annotations, elaborations, or other modifications
43 | represent, as a whole, an original work of authorship. For the purposes
44 | of this License, Derivative Works shall not include works that remain
45 | separable from, or merely link (or bind by name) to the interfaces of,
46 | the Work and Derivative Works thereof.
47 |
48 | "Contribution" shall mean any work of authorship, including
49 | the original version of the Work and any modifications or additions
50 | to that Work or Derivative Works thereof, that is intentionally
51 | submitted to Licensor for inclusion in the Work by the copyright owner
52 | or by an individual or Legal Entity authorized to submit on behalf of
53 | the copyright owner. For the purposes of this definition, "submitted"
54 | means any form of electronic, verbal, or written communication sent
55 | to the Licensor or its representatives, including but not limited to
56 | communication on electronic mailing lists, source code control systems,
57 | and issue tracking systems that are managed by, or on behalf of, the
58 | Licensor for the purpose of discussing and improving the Work, but
59 | excluding communication that is conspicuously marked or otherwise
60 | designated in writing by the copyright owner as "Not a Contribution."
61 |
62 | "Contributor" shall mean Licensor and any individual or Legal Entity
63 | on behalf of whom a Contribution has been received by Licensor and
64 | subsequently incorporated within the Work.
65 |
66 | 2. Grant of Copyright License. Subject to the terms and conditions of
67 | this License, each Contributor hereby grants to You a perpetual,
68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69 | copyright license to reproduce, prepare Derivative Works of,
70 | publicly display, publicly perform, sublicense, and distribute the
71 | Work and such Derivative Works in Source or Object form.
72 |
73 | 3. Grant of Patent License. Subject to the terms and conditions of
74 | this License, each Contributor hereby grants to You a perpetual,
75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76 | (except as stated in this section) patent license to make, have made,
77 | use, offer to sell, sell, import, and otherwise transfer the Work,
78 | where such license applies only to those patent claims licensable
79 | by such Contributor that are necessarily infringed by their
80 | Contribution(s) alone or by combination of their Contribution(s)
81 | with the Work to which such Contribution(s) was submitted. If You
82 | institute patent litigation against any entity (including a
83 | cross-claim or counterclaim in a lawsuit) alleging that the Work
84 | or a Contribution incorporated within the Work constitutes direct
85 | or contributory patent infringement, then any patent licenses
86 | granted to You under this License for that Work shall terminate
87 | as of the date such litigation is filed.
88 |
89 | 4. Redistribution. You may reproduce and distribute copies of the
90 | Work or Derivative Works thereof in any medium, with or without
91 | modifications, and in Source or Object form, provided that You
92 | meet the following conditions:
93 |
94 | (a) You must give any other recipients of the Work or
95 | Derivative Works a copy of this License; and
96 |
97 | (b) You must cause any modified files to carry prominent notices
98 | stating that You changed the files; and
99 |
100 | (c) You must retain, in the Source form of any Derivative Works
101 | that You distribute, all copyright, patent, trademark, and
102 | attribution notices from the Source form of the Work,
103 | excluding those notices that do not pertain to any part of
104 | the Derivative Works; and
105 |
106 | (d) If the Work includes a "NOTICE" text file as part of its
107 | distribution, then any Derivative Works that You distribute must
108 | include a readable copy of the attribution notices contained
109 | within such NOTICE file, excluding those notices that do not
110 | pertain to any part of the Derivative Works, in at least one
111 | of the following places: within a NOTICE text file distributed
112 | as part of the Derivative Works; within the Source form or
113 | documentation, if provided along with the Derivative Works; or,
114 | within a display generated by the Derivative Works, if and
115 | wherever such third-party notices normally appear. The contents
116 | of the NOTICE file are for informational purposes only and
117 | do not modify the License. You may add Your own attribution
118 | notices within Derivative Works that You distribute, alongside
119 | or as an addendum to the NOTICE text from the Work, provided
120 | that such additional attribution notices cannot be construed
121 | as modifying the License.
122 |
123 | You may add Your own copyright statement to Your modifications and
124 | may provide additional or different license terms and conditions
125 | for use, reproduction, or distribution of Your modifications, or
126 | for any such Derivative Works as a whole, provided Your use,
127 | reproduction, and distribution of the Work otherwise complies with
128 | the conditions stated in this License.
129 |
130 | 5. Submission of Contributions. Unless You explicitly state otherwise,
131 | any Contribution intentionally submitted for inclusion in the Work
132 | by You to the Licensor shall be under the terms and conditions of
133 | this License, without any additional terms or conditions.
134 | Notwithstanding the above, nothing herein shall supersede or modify
135 | the terms of any separate license agreement you may have executed
136 | with Licensor regarding such Contributions.
137 |
138 | 6. Trademarks. This License does not grant permission to use the trade
139 | names, trademarks, service marks, or product names of the Licensor,
140 | except as required for reasonable and customary use in describing the
141 | origin of the Work and reproducing the content of the NOTICE file.
142 |
143 | 7. Disclaimer of Warranty. Unless required by applicable law or
144 | agreed to in writing, Licensor provides the Work (and each
145 | Contributor provides its Contributions) on an "AS IS" BASIS,
146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147 | implied, including, without limitation, any warranties or conditions
148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149 | PARTICULAR PURPOSE. You are solely responsible for determining the
150 | appropriateness of using or redistributing the Work and assume any
151 | risks associated with Your exercise of permissions under this License.
152 |
153 | 8. Limitation of Liability. In no event and under no legal theory,
154 | whether in tort (including negligence), contract, or otherwise,
155 | unless required by applicable law (such as deliberate and grossly
156 | negligent acts) or agreed to in writing, shall any Contributor be
157 | liable to You for damages, including any direct, indirect, special,
158 | incidental, or consequential damages of any character arising as a
159 | result of this License or out of the use or inability to use the
160 | Work (including but not limited to damages for loss of goodwill,
161 | work stoppage, computer failure or malfunction, or any and all
162 | other commercial damages or losses), even if such Contributor
163 | has been advised of the possibility of such damages.
164 |
165 | 9. Accepting Warranty or Additional Liability. While redistributing
166 | the Work or Derivative Works thereof, You may choose to offer,
167 | and charge a fee for, acceptance of support, warranty, indemnity,
168 | or other liability obligations and/or rights consistent with this
169 | License. However, in accepting such obligations, You may act only
170 | on Your own behalf and on Your sole responsibility, not on behalf
171 | of any other Contributor, and only if You agree to indemnify,
172 | defend, and hold each Contributor harmless for any liability
173 | incurred by, or claims asserted against, such Contributor by reason
174 | of your accepting any such warranty or additional liability.
175 |
176 | END OF TERMS AND CONDITIONS
177 |
178 | APPENDIX: How to apply the Apache License to your work.
179 |
180 | To apply the Apache License to your work, attach the following
181 | boilerplate notice, with the fields enclosed by brackets "[]"
182 | replaced with your own identifying information. (Don't include
183 | the brackets!) The text should be enclosed in the appropriate
184 | comment syntax for the file format. We also recommend that a
185 | file or class name and description of purpose be included on the
186 | same "printed page" as the copyright notice for easier
187 | identification within third-party archives.
188 |
189 | Copyright [yyyy] [name of copyright owner]
190 |
191 | Licensed under the Apache License, Version 2.0 (the "License");
192 | you may not use this file except in compliance with the License.
193 | You may obtain a copy of the License at
194 |
195 | http://www.apache.org/licenses/LICENSE-2.0
196 |
197 | Unless required by applicable law or agreed to in writing, software
198 | distributed under the License is distributed on an "AS IS" BASIS,
199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200 | See the License for the specific language governing permissions and
201 | limitations under the License.
202 |
--------------------------------------------------------------------------------
/Discord Bots/README.md:
--------------------------------------------------------------------------------
1 | # Authentication Discord Bot
2 | CS50x Iran summer authentication Discord bot: a simple bot script to authenticate users.
3 |
4 | ## Preview
5 |
6 | 
7 |
8 | 
9 | ## Installation
10 |
11 | 1. Download the Project with `git clone https://github.com/YasTechOrg/cs50_auth.git`
12 |
13 | 2. Open a terminal in the project folder
14 |
15 | 3. Install the requirements with
16 |
17 |
18 | ```bash
19 | pip install -r requirements.txt
20 | ```
21 |
22 | ## Config
23 | Open `check.py` in the `cogs` folder and edit the variables below:
24 |
25 | 1. self.add_role
26 | 2. self.remove_role
27 | 3. self.error_channel
28 | 4. self.lobby_channel
29 |
30 | Add your data to the database file (`CS50.db`); a config example is shown at the end of this readme.
31 |
32 | ## Run and setup
33 | After installing all the requirements, follow the steps below:
34 |
35 |
36 | 1. Run `auth.py` and paste your token
37 | 2. Enjoy :)
38 |
39 | ## Contact
40 | Hamidreza Farzin
41 |
42 | Instagram : @hamidrezafarzin.hv
43 |
44 |
45 | LinkedIn : @hamidreza Farzin
46 |
47 |
48 | Discord : H_VICTOR#2999
49 |
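50 | ## Config example
51 |
52 | A minimal sketch of the block you edit in `cogs/check.py` (inside `MyModal.__init__`); the IDs shown are the sample values already in this repo and must be replaced with the role and channel IDs of your own server:
53 |
54 | ```python
55 | # inside MyModal.__init__ in cogs/check.py
56 | self.add_role = 997142096836304896       # role granted after successful authentication
57 | self.remove_role = 997143448484315247    # role removed after successful authentication
58 | self.error_channel = 997858250106089632  # channel that receives error messages
59 | self.lobby_channel = 997144260501569599  # channel where the "Authenticate" button is posted
60 | ```
61 |
62 | The bot also expects `CS50.db` to contain an `account_user` table with the columns `First`, `Last`, `phone_number`, `Login`, `Discord_id_name` and `Discord_id` (see `sqlite3_check` in `check.py`).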
--------------------------------------------------------------------------------
/Discord Bots/auth.py:
--------------------------------------------------------------------------------
1 | import disnake
2 | from disnake.ext import commands
3 | import asyncio
4 | import time
5 | import sqlite3
6 | import sys
7 | import os
8 |
9 | token = input("Enter a token :").strip()
10 | os.system("cls")
11 |
12 | intents = disnake.Intents.all()
13 |
14 | bot = commands.Bot(command_prefix = '.', intents=intents, help_command=None)
15 | #Hamidreza Farzin Hossein araghi daniel soheil sahar halaji amirhossein foroghi
16 | white_list = [440552369864966144, 696999720182087722, 724108042227941427, 898160244235075625, 714364321139785809]
17 |
18 | @bot.event
19 | async def on_ready():
20 | print(f""" Bot ID : {bot.user.name}""")
21 | print("Developer : Hamidreza Farzin (H_VICTOR#2999)")
22 | await bot.change_presence(activity=disnake.Activity(type=disnake.ActivityType.watching, name="CS50X Iran"))
23 |
24 | @bot.command()
25 | @commands.cooldown(4, 2, commands.BucketType.guild)
26 | async def ping(ctx):
27 | if ctx.author.id in white_list:
28 | await ctx.send(f"**Bot ping : {round(bot.latency * 1000)}ms**")
29 | else:
30 | pass
31 |
32 | bot.load_extension("cogs.check")
33 |
34 | try :
35 | bot.run(token, reconnect=True)
36 | except disnake.errors.LoginFailure:
37 |     print("Login failed, please check that the token is correct (app will be terminated)")
38 | time.sleep(3)
39 | sys.exit()
40 |
--------------------------------------------------------------------------------
/Discord Bots/cogs/check.py:
--------------------------------------------------------------------------------
1 | import disnake
2 | from disnake.ext import commands
3 | import asyncio
4 | import sqlite3
5 |
6 | # # create form
7 | class MyModal(disnake.ui.Modal):
8 | def __init__(self) -> None:
9 | self.add_role = 997142096836304896
10 | self.remove_role = 997143448484315247
11 | self.error_channel = 997858250106089632
12 | self.lobby_channel = 997144260501569599
13 | components = [
14 | disnake.ui.TextInput(
15 | label="شماره تلفن خود به انگلیسی",
16 | placeholder = "مثال (09111111111)",
17 | custom_id="phone",
18 | style=disnake.TextInputStyle.short,
19 | max_length=11,
20 | ),
21 |
22 | ]
23 | super().__init__(title="CS50 Discord Authentication", custom_id="create_tag", components=components)
24 |
25 | async def callback(self, inter: disnake.ModalInteraction) -> None:
26 | number_input = inter.text_values.get("phone")
27 |
28 | data = sqlite3_check(number_input)
29 | #print(data)
30 | Check = False
31 |
32 | try :
33 | # check the phone number in the database
34 | if "phone_number" in data:
35 | # check phone number in database equal to the phone number in the form
36 | if data["phone_number"] == number_input :
37 | if data["Login"] == None and data["Discord_id_name"] == None and data["Discord_id"] == None:
38 | # success message
39 | embed=disnake.Embed(color=0x14db4c)
40 | embed.title = "اطلاعات شما تایید شد"
41 | embed.description = f"""سلام , {data['First']} {data['Last']}
42 | اطلاعات شما برسی و تایید شد و تا چند ثانیه دیگر سرور برای شما باز خواهد شد
43 | """
44 | await inter.response.send_message(embed = embed, ephemeral=True)
45 | # add and remove roles
46 | # set your personal role id for add and remove
47 | Add_Role_id = inter.guild.get_role(self.add_role)
48 | Remove_role_id = inter.guild.get_role(self.remove_role)
49 | await asyncio.sleep(3)
50 | await inter.author.add_roles(Add_Role_id)
51 | await inter.author.remove_roles(Remove_role_id)
52 | # connect to database
53 | connect = sqlite3.connect("CS50.db")
54 | db = connect.cursor()
55 | db.execute(f"UPDATE account_user SET Login = 'YES', Discord_id_name = '{inter.author}', Discord_id = '{inter.author.id}' WHERE phone_number = '{number_input}'")
56 | connect.commit()
57 | connect.close()
58 | Check = True
59 |
60 | else:
61 |                     # if the phone number has already been used on the server, this message is sent to the user
62 | Check = True
63 | embed=disnake.Embed(color=0xe00909)
64 | embed.title = " از شماره مورد نظر شمااستفاده شده در صورت نیاز به پشتیبانی اطلاع دهید"
65 | await inter.response.send_message(embed = embed, ephemeral=True)
66 |             # if the phone number does not exist in the database, this message is sent to the user
67 | if not Check:
68 | embed=disnake.Embed(color=0xe00909)
69 | embed.title = "متاسفانه مشخصات شما در لیست موجود نمیباشد لطفا به پشتیبانی اطلاع دهید"
70 | await inter.response.send_message(embed = embed, ephemeral=True)
71 | except Exception as e:
72 | #error Log
73 | # set your personal channel for errors
74 | channel = inter.guild.get_channel(self.error_channel)
75 | await channel.send(e)
76 |     # if there is a problem with the Discord API or no response from Discord, this message is shown
77 | async def on_error(self, error: Exception, inter: disnake.ModalInteraction) -> None:
78 | await inter.response.send_message("مشکلی پیش امده لطفا بعدا تلاش کنید", ephemeral=True)
79 |
80 | class button(disnake.ui.View):
81 | def __init__(self):
82 | super().__init__(timeout=None)
83 |
84 | @disnake.ui.button(label="Authenticate", style=disnake.ButtonStyle.success)
85 | async def authenticate(self, button:disnake.ui.Button, inter: disnake.MessageInteraction):
86 | await inter.response.send_modal(MyModal())
87 |
88 |
89 | class check(commands.Cog, MyModal):
90 | def __init__(self, bot:commands.Bot):
91 | self.bot = bot
92 | MyModal.__init__(self)
93 |
94 | @commands.Cog.listener()
95 | async def on_ready(self):
96 | # lobby channel
97 | # set your personal channel
98 | channel = self.bot.get_channel(self.lobby_channel)
99 | # delete last 10 message from channel
100 | await channel.purge(limit = 10)
101 | embed=disnake.Embed(color=0x14db4c)
102 | embed.title = "احراز هویت شرکت کنندگان"
103 | embed.description = "لطفا برای دسترسی به محتوای سرور روی دکمه زیر کلیک کنید"
104 | await channel.send(embed = embed, view=button())
105 |
106 | def sqlite3_check(number_input):
107 | connect = sqlite3.connect("CS50.db")
108 | db = connect.cursor()
109 | db.execute(f"SELECT * FROM account_user WHERE phone_number = '{number_input}';")
110 | data = db.fetchall()
111 | row = {}
112 | for x in data:
113 | row = {
114 | "First":None, "Last":None, "phone_number":None, "Login":None, "Discord_id_name":None, "Discord_id":None,
115 | }
116 | row["First"] = x[0]
117 | row["Last"] = x[1]
118 | row["phone_number"] = x[2]
119 | row["Login"] = x[3]
120 | row["Discord_id_name"] = x[4]
121 | row["Discord_id"] = x[5]
122 | connect.close()
123 | return row
124 |
125 | def setup(bot:commands.Bot):
126 | bot.add_cog(check(bot))
127 |
--------------------------------------------------------------------------------
/Discord Bots/readme.md:
--------------------------------------------------------------------------------
1 |
2 | https://github.com/YasTechOrg/cs50_auth
3 |
--------------------------------------------------------------------------------
/Discord Bots/requirements.txt:
--------------------------------------------------------------------------------
1 | disnake==2.5.2
--------------------------------------------------------------------------------
/Instagram Bots/Commenting_bot.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "## STEP 1: WebDriver & Module Imports\n",
8 | "\n",
9 | "Chrome Driver : https://chromedriver.chromium.org/\n",
10 | "
\n",
11 | "Firefox Driver : https://github.com/mozilla/geckodriver/releases"
12 | ]
13 | },
14 | {
15 | "cell_type": "code",
16 | "execution_count": 36,
17 | "metadata": {},
18 | "outputs": [],
19 | "source": [
20 | "from selenium import webdriver\n",
21 | "from selenium.webdriver.common.keys import Keys\n",
22 | "from selenium.webdriver.support import expected_conditions as EC\n",
23 | "from selenium.webdriver.common.by import By\n",
24 | "from selenium.webdriver.support.wait import WebDriverWait\n",
25 | "import time \n",
26 | "import random "
27 | ]
28 | },
29 | {
30 | "cell_type": "markdown",
31 | "metadata": {},
32 | "source": [
33 | "## STEP 2: Create & Launch a Webdriver Object"
34 | ]
35 | },
36 | {
37 | "cell_type": "code",
38 | "execution_count": 37,
39 | "metadata": {},
40 | "outputs": [],
41 | "source": [
42 | "driver = webdriver.Chrome(\"C:/Users/goaim/chromedriver.exe\")\n",
43 | "driver.get(\"https://instagram.com\")"
44 | ]
45 | },
46 | {
47 | "cell_type": "markdown",
48 | "metadata": {},
49 | "source": [
50 | "## STEP 3: Log In"
51 | ]
52 | },
53 | {
54 | "cell_type": "code",
55 | "execution_count": 38,
56 | "metadata": {},
57 | "outputs": [],
58 | "source": [
59 | "time.sleep(5)\n",
60 | "username = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, \"input[name='username']\")))\n",
61 | "password = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, \"input[name='password']\")))\n",
62 | "\n",
63 | "username.send_keys(\"my_username\")\n",
64 | "password.send_keys(\"my_password\")\n",
65 | "\n",
66 | "submit = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, \"button[type='submit']\"))).click()"
67 | ]
68 | },
69 | {
70 | "cell_type": "markdown",
71 | "metadata": {},
72 | "source": [
73 | "## STEP 4: Handle Alerts"
74 | ]
75 | },
76 | {
77 | "cell_type": "code",
78 | "execution_count": 39,
79 | "metadata": {},
80 | "outputs": [],
81 | "source": [
82 | "time.sleep(5)\n",
83 | "alert = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, \"//button[contains(text(), 'Not Now')]\"))).click()\n",
84 | "alert = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, \"//button[contains(text(), 'Not Now')]\"))).click()"
85 | ]
86 | },
87 | {
88 | "cell_type": "markdown",
89 | "metadata": {},
90 | "source": [
91 | "## STEP 5: Select Hashtag"
92 | ]
93 | },
94 | {
95 | "cell_type": "code",
96 | "execution_count": 42,
97 | "metadata": {},
98 | "outputs": [],
99 | "source": [
100 | "hashtag = \"fashion\"\n",
101 | "driver.get(\"https://www.instagram.com/explore/tags/\" + hashtag + \"/\")\n",
102 | "time.sleep(5)"
103 | ]
104 | },
105 | {
106 | "cell_type": "markdown",
107 | "metadata": {},
108 | "source": [
109 | "## STEP 6: Scroll Multiple Times"
110 | ]
111 | },
112 | {
113 | "cell_type": "code",
114 | "execution_count": 43,
115 | "metadata": {},
116 | "outputs": [],
117 | "source": [
118 | "n_scrolls = 3\n",
119 | "for i in range(1, n_scrolls):\n",
120 | " driver.execute_script(\"window.scrollTo(0, document.body.scrollHeight);\")\n",
121 | " time.sleep(5)"
122 | ]
123 | },
124 | {
125 | "cell_type": "markdown",
126 | "metadata": {},
127 | "source": [
128 | "## STEP 7: Target Links to Images "
129 | ]
130 | },
131 | {
132 | "cell_type": "code",
133 | "execution_count": 44,
134 | "metadata": {},
135 | "outputs": [
136 | {
137 | "data": {
138 | "text/plain": [
139 | "['https://www.instagram.com/p/CKuomiYJ4yp/',\n",
140 | " 'https://www.instagram.com/p/CKuoXhDAPZc/',\n",
141 | " 'https://www.instagram.com/p/CKuh7osBO-q/']"
142 | ]
143 | },
144 | "execution_count": 44,
145 | "metadata": {},
146 | "output_type": "execute_result"
147 | }
148 | ],
149 | "source": [
150 | "anchors = driver.find_elements_by_tag_name(\"a\")\n",
151 | "anchors = [a.get_attribute(\"href\") for a in anchors]\n",
152 | "anchors = [a for a in anchors if a.startswith(\"https://www.instagram.com/p/\")]\n",
153 | "\n",
154 | "anchors[:3]"
155 | ]
156 | },
157 | {
158 | "cell_type": "markdown",
159 | "metadata": {},
160 | "source": [
161 | "## STEP 8: Loop over photos and comment"
162 | ]
163 | },
164 | {
165 | "cell_type": "code",
166 | "execution_count": 45,
167 | "metadata": {},
168 | "outputs": [
169 | {
170 | "name": "stdout",
171 | "output_type": "stream",
172 | "text": [
173 | "I've waited for 1seconds\n",
174 | "I've waited for 5seconds\n",
175 | "I've waited for 6seconds\n"
176 | ]
177 | }
178 | ],
179 | "source": [
180 | "data = anchors[:3]\n",
181 | "greeting = [\"Hi\", \"Hello\", \"Hey\", \"Heeey\", \"Greetings\"]\n",
182 | "\n",
183 | "for a in data:\n",
184 | " driver.get(a)\n",
185 | " time.sleep(5)\n",
186 | "\n",
187 | " random_idx = random.randint(0, (len(greeting)-1))\n",
188 | " my_comment = greeting[random_idx] + \" I saw your photos - they are beautiful! wanna collab??\"\n",
189 | " \n",
190 | " form = driver.find_element_by_tag_name(\"form\").click()\n",
191 | " text_area = driver.find_element_by_tag_name(\"textarea\")\n",
192 | " text_area.send_keys(my_comment)\n",
193 | " submit = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, \"button[type='submit']\"))).click()\n",
194 | " \n",
195 | " seconds = random.randint(1,10)\n",
196 | " time.sleep(seconds)\n",
197 | " print(\"I've waited for \" + str(seconds) + \"seconds\") \n",
198 | " "
199 | ]
200 | }
201 | ],
202 | "metadata": {
203 | "kernelspec": {
204 | "display_name": "Python 3",
205 | "language": "python",
206 | "name": "python3"
207 | },
208 | "language_info": {
209 | "codemirror_mode": {
210 | "name": "ipython",
211 | "version": 3
212 | },
213 | "file_extension": ".py",
214 | "mimetype": "text/x-python",
215 | "name": "python",
216 | "nbconvert_exporter": "python",
217 | "pygments_lexer": "ipython3",
218 | "version": "3.8.5"
219 | }
220 | },
221 | "nbformat": 4,
222 | "nbformat_minor": 4
223 | }
224 |
--------------------------------------------------------------------------------
/Instagram Bots/Scraping Instagram with Python.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Melanee-Melanee/Python-Bots/8a3fd1779f7b79c7079c396b2d395ba78c6d8b29/Instagram Bots/Scraping Instagram with Python.pdf
--------------------------------------------------------------------------------
/Instagram Bots/WebscrapingInstagram_completeNotebook.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# Web Scraping Instagram with Selenium"
8 | ]
9 | },
10 | {
11 | "cell_type": "code",
12 | "execution_count": 133,
13 | "metadata": {},
14 | "outputs": [],
15 | "source": [
16 | "#imports here\n",
17 | "from selenium import webdriver\n",
18 | "from selenium.webdriver.common.keys import Keys\n",
19 | "from selenium.webdriver.support import expected_conditions as EC\n",
20 | "from selenium.webdriver.common.by import By\n",
21 | "from selenium.webdriver.support.wait import WebDriverWait"
22 | ]
23 | },
24 | {
25 | "cell_type": "markdown",
26 | "metadata": {},
27 | "source": [
28 | "## Download ChromeDriver\n",
29 |     "Now we need to download the latest stable release of ChromeDriver from:\n",
30 |     "\n",
31 | "https://chromedriver.chromium.org/"
32 | ]
33 | },
34 | {
35 | "cell_type": "code",
36 | "execution_count": 134,
37 | "metadata": {},
38 | "outputs": [],
39 | "source": [
40 | "#specify the path to chromedriver.exe (download and save on your computer)\n",
41 | "driver = webdriver.Chrome('C:/Users/goaim/chromedriver.exe')\n",
42 | "\n",
43 | "#open the webpage\n",
44 | "driver.get(\"http://www.instagram.com\")\n",
45 | "\n",
46 | "#target username\n",
47 | "username = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, \"input[name='username']\")))\n",
48 | "password = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, \"input[name='password']\")))\n",
49 | "\n",
50 | "#enter username and password\n",
51 | "username.clear()\n",
52 | "username.send_keys(\"my_username\")\n",
53 | "password.clear()\n",
54 | "password.send_keys(\"my_password\")\n",
55 | "\n",
56 | "#target the login button and click it\n",
57 | "button = WebDriverWait(driver, 2).until(EC.element_to_be_clickable((By.CSS_SELECTOR, \"button[type='submit']\"))).click()\n",
58 | "\n",
59 | "#We are logged in!"
60 | ]
61 | },
62 | {
63 | "cell_type": "code",
64 | "execution_count": 135,
65 | "metadata": {},
66 | "outputs": [],
67 | "source": [
68 |     "#handle NOT NOW\n",
69 | "not_now = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '//button[contains(text(), \"Not Now\")]'))).click()\n",
70 | "not_now2 = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '//button[contains(text(), \"Not Now\")]'))).click()"
71 | ]
72 | },
73 | {
74 | "cell_type": "markdown",
75 | "metadata": {},
76 | "source": [
77 | "## Search keywords"
78 | ]
79 | },
80 | {
81 | "cell_type": "code",
82 | "execution_count": 136,
83 | "metadata": {},
84 | "outputs": [],
85 | "source": [
86 | "import time\n",
87 | "\n",
88 | "#target the search input field\n",
89 | "searchbox = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, \"//input[@placeholder='Search']\")))\n",
90 | "searchbox.clear()\n",
91 | "\n",
92 | "#search for the hashtag cat\n",
93 | "keyword = \"#cat\"\n",
94 | "searchbox.send_keys(keyword)\n",
95 | " \n",
96 | "# Wait for 5 seconds\n",
97 | "time.sleep(5)\n",
98 | "searchbox.send_keys(Keys.ENTER)\n",
99 | "time.sleep(5)\n",
100 | "searchbox.send_keys(Keys.ENTER)\n",
101 | "time.sleep(5)"
102 | ]
103 | },
104 | {
105 | "cell_type": "code",
106 | "execution_count": null,
107 | "metadata": {},
108 | "outputs": [],
109 | "source": []
110 | },
111 | {
112 | "cell_type": "code",
113 | "execution_count": 137,
114 | "metadata": {},
115 | "outputs": [
116 | {
117 | "name": "stdout",
118 | "output_type": "stream",
119 | "text": [
120 | "Number of scraped images: 46\n"
121 | ]
122 | }
123 | ],
124 | "source": [
125 | "#scroll down to scrape more images\n",
126 | "driver.execute_script(\"window.scrollTo(0, 4000);\")\n",
127 | "\n",
128 | "#target all images on the page\n",
129 | "images = driver.find_elements_by_tag_name('img')\n",
130 | "images = [image.get_attribute('src') for image in images]\n",
131 | "images = images[:-2]\n",
132 | "\n",
133 | "print('Number of scraped images: ', len(images))"
134 | ]
135 | },
136 | {
137 | "cell_type": "markdown",
138 | "metadata": {},
139 | "source": [
140 | "## Save images to computer\n",
141 | "\n",
142 | "First we'll create a new folder for our images somewhere on our computer.\n",
143 |     "\n",
144 | "Then, we'll save all the images there."
145 | ]
146 | },
147 | {
148 | "cell_type": "code",
149 | "execution_count": 138,
150 | "metadata": {},
151 | "outputs": [
152 | {
153 | "data": {
154 | "text/plain": [
155 | "'C:\\\\Users\\\\goaim\\\\cats'"
156 | ]
157 | },
158 | "execution_count": 138,
159 | "metadata": {},
160 | "output_type": "execute_result"
161 | }
162 | ],
163 | "source": [
164 | "import os\n",
165 | "import wget\n",
166 | "\n",
167 | "path = os.getcwd()\n",
168 | "path = os.path.join(path, keyword[1:] + \"s\")\n",
169 | "\n",
170 | "#create the directory\n",
171 | "os.mkdir(path)\n",
172 | "\n",
173 | "path"
174 | ]
175 | },
176 | {
177 | "cell_type": "code",
178 | "execution_count": 139,
179 | "metadata": {},
180 | "outputs": [
181 | {
182 | "name": "stdout",
183 | "output_type": "stream",
184 | "text": [
185 | "100% [..............................................................................] 53506 / 53506"
186 | ]
187 | }
188 | ],
189 | "source": [
190 | "#download images\n",
191 | "counter = 0\n",
192 | "for image in images:\n",
193 | " save_as = os.path.join(path, keyword[1:] + str(counter) + '.jpg')\n",
194 | " wget.download(image, save_as)\n",
195 | " counter += 1"
196 | ]
197 | }
198 | ],
199 | "metadata": {
200 | "kernelspec": {
201 | "display_name": "Python 3",
202 | "language": "python",
203 | "name": "python3"
204 | },
205 | "language_info": {
206 | "codemirror_mode": {
207 | "name": "ipython",
208 | "version": 3
209 | },
210 | "file_extension": ".py",
211 | "mimetype": "text/x-python",
212 | "name": "python",
213 | "nbconvert_exporter": "python",
214 | "pygments_lexer": "ipython3",
215 | "version": "3.8.5"
216 | }
217 | },
218 | "nbformat": 4,
219 | "nbformat_minor": 4
220 | }
221 |
--------------------------------------------------------------------------------
/Instagram Bots/instagram.py:
--------------------------------------------------------------------------------
1 | from instabot import Bot
2 | bot = Bot()
3 | bot.login(username="", password="")
4 |
5 | ###### upload a picture #######
6 | bot.upload_photo("yoda.jpg", caption="biscuit eating baby")
7 |
8 | ###### follow someone #######
9 | bot.follow("elonrmuskk")
10 |
11 | ###### send a message #######
12 | bot.send_message("Hello from Dhaval", ['user1','user2'])
13 |
14 | ###### get follower info #######
15 | my_followers = bot.get_user_followers("dhavalsays")
16 | for follower in my_followers:
17 | print(follower)
18 |
19 | bot.unfollow_everyone()
20 |
--------------------------------------------------------------------------------
/Instagram Bots/mport instaloader.py:
--------------------------------------------------------------------------------
1 | import instaloader
2 |
3 | # Create an instance of instaloader
4 | insta = instaloader.Instaloader()
5 |
6 | # Get the post by its shortcode
7 | post = instaloader.Post.from_shortcode(insta.context, 'CqPYQHpODb7')
8 |
9 | # Print the comments
10 | for comment in post.get_comments():
11 | print(comment.text)
--------------------------------------------------------------------------------
/Instagram Bots/readme.md:
--------------------------------------------------------------------------------
1 |
2 | https://www.youtube.com/watch?v=3QU-vJGJKTk
3 |
4 | https://www.instagram.com/p/CsGmbfhLjpw/?utm_source=ig_web_copy_link&igshid=MzRlODBiNWFlZA==
5 |
--------------------------------------------------------------------------------
/Snapp Prize/Snapp.py:
--------------------------------------------------------------------------------
1 | import requests
2 | from errors.http_error import VoucherExceededException, PhoneInvalidException
3 |
4 |
5 | class SnappTaxi:
6 | def __init__(self, user_number=None):
7 | self.user_number = user_number
8 | self.access_token = None
9 | self.session = requests.Session()
10 |
11 |
12 | def update_user_number(self, user_number):
13 | self.user_number = user_number
14 |
15 | def load_token(self):
16 |
17 | try:
18 | """
19 | To check the existence of the session file and whether the token has expired or not
20 | """
21 |
22 | with open(f'sessions/{self.user_number}_token.session', 'r') as f:
23 | self.access_token = f.read().strip()
24 |
25 | if self.access_token:
26 |
27 | self.session.headers.update({
28 | 'authorization': f'Bearer {self.access_token}',
29 | 'User-Agent': 'Mozilla/5.0 (Linux; Android 9; SM-G950F) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Mobile Safari/537.36',
30 | })
31 |
32 | """Checking the token validity."""
33 | if self.checking_token_validity():
34 | print("[+] Token was found and Token is valid")
35 | return True
36 | else:
37 | print("[-] Token is invalid !")
38 | return False
39 |
40 |
41 | except FileNotFoundError:
42 | print("[-] No token file found")
43 | self.access_token = None
44 | return False
45 |
46 |
47 | def send_sms(self):
48 | url = 'https://app.snapp.taxi/api/api-passenger-oauth/v2/otp'
49 | payload = {'cellphone': self.user_number}
50 | response = self.session.post(url, json=payload)
51 | response_json = response.json()
52 |
53 | if 'message' in response_json and response_json['message'] == 'not a valid cellphone':
54 | raise(PhoneInvalidException)
55 |
56 |
57 | """
58 | Checking the token validity.
59 | This function verifies the validity of the created session
60 | to prevent login if the session file has been tampered with.
61 | """
62 |
63 | def checking_token_validity(self):
64 |
65 | url = 'https://app.snapp.taxi/api/api-base/v5/passenger/jek/content?lat=null&long=null'
66 | response = self.session.get(url)
67 |
68 | if response.json()['message'] == 'Unauthorized':
69 | print("[-] The token is invalid")
70 | return False
71 | else:
72 | print("[+] The token is valid")
73 | return True
74 |
75 | def login(self, sms_code):
76 | url = 'https://app.snapp.taxi/api/api-passenger-oauth/v2/auth'
77 | payload = {
78 | 'grant_type': 'sms_v2',
79 | 'client_id': 'ios_sadjfhasd9871231hfso234',
80 | 'client_secret': '23497shjlf982734-=1031nln',
81 | 'cellphone': self.user_number,
82 | 'token': sms_code,
83 | 'referrer': 'pwa',
84 | 'device_id': '3832c88d-6b17-4995-a592-ed8bf4ef9cc7',
85 | 'secure_id': '3832c88d-6b17-4995-a592-ed8bf4ef9cc7',
86 | }
87 | response = self.session.post(url, json=payload)
88 | if response.status_code == 200:
89 | response_json = response.json()
90 |
91 |
92 | with open(f'sessions/{self.user_number}_token.session', 'w') as f:
93 | f.write(response_json['access_token'])
94 |
95 | return True
96 | else:
97 | print(response.text)
98 | print("[-] There's a problem with the login !")
99 | return False
100 |
101 |
102 | def get_reward_id_by_name(self, name):
103 |
104 | url = 'https://snappclub.snapp.ir/api/v1/user/homepage/1'
105 | response = self.session.get(url)
106 | response_json = response.json()
107 | if response.status_code == 200 :
108 | for product in response_json['data']['homepage']['Products']:
109 | if name in product['name']:
110 | return product['id']
111 | else:
112 | print("[-] 404 This coupon was not found :)")
113 | return False
114 | else:
115 | print(response.text)
116 | print("[-] An issue has occurred regarding the receipt of the prizes !")
117 |
118 |
119 | def redeem_prize(self, id):
120 | url = 'https://snappclub.snapp.ir/api/v1/user/voucher/redeem'
121 | params = {'product_id': id}
122 | response = self.session.post(url, json=params)
123 |
124 |
125 | if response.status_code == 200 and response.json()['data']['status'] != 'fail':
126 | print(f'[+] Voucher {id} redeemed successfully.')
127 |
128 | elif response.status_code == 403:
129 | raise(VoucherExceededException)
130 |
131 | else:
132 |             print('[-] Failed to redeem voucher.')
133 |             raise(RuntimeError)
134 |
--------------------------------------------------------------------------------
/Snapp Prize/gui.py:
--------------------------------------------------------------------------------
1 | import tkinter as tk
2 | from tkinter import messagebox
3 | from Snapp import SnappTaxi
4 | from errors.http_error import VoucherExceededException, PhoneInvalidException
5 |
6 |
7 | MY_NUMBER = '0919*******'
8 |
9 | class LoginFrame(tk.Frame):
10 | def __init__(self, master=None, switch_frame_callback=None, snapp=None):
11 |
12 | super().__init__(master)
13 | self.master = master
14 | self.snapp = snapp
15 | self.switch_frame_callback = switch_frame_callback
16 | self.grid()
17 |
18 |
19 | self.phone_label = tk.Label(self, text="Your phone number")
20 | self.phone_label.grid(row=0, column=0)
21 |
22 | self.phone_entry = tk.Entry(self, )
23 | self.phone_entry.insert(0, MY_NUMBER) # Add placeholder text
24 |         self.phone_entry.bind("<FocusIn>", self.clear_placeholder) # Clear placeholder text when the user clicks on the Entry
25 | self.phone_entry.grid(row=0, column=1)
26 |
27 |
28 | self.send_button = tk.Button(self)
29 | self.send_button["text"] = "Send Sms Code"
30 |
31 | # Handle send sms code
32 | #snapp_instance = SnappTaxi()
33 | self.send_button["command"] = self.send_code
34 | self.send_button.grid(row=1, column=0)
35 |
36 | self.code_label = tk.Label(self, text="Code")
37 | self.code_label.grid(row=2, column=0)
38 |
39 | self.code_entry = tk.Entry(self)
40 | self.code_entry.grid(row=2, column=1)
41 |
42 | self.verify_button = tk.Button(self)
43 | self.verify_button["text"] = "Accept Code"
44 | self.verify_button["command"] = self.verify_code
45 | self.verify_button.grid(row=3, column=0)
46 |
47 |
48 |
49 | def clear_placeholder(self, event):
50 | if self.phone_entry.get() == MY_NUMBER:
51 | self.phone_entry.delete(0, tk.END)
52 |
53 |
54 | def send_code(self):
55 |
56 | if self.phone_entry.get() != MY_NUMBER :
57 |
58 | self.snapp.update_user_number(self.phone_entry.get())
59 |
60 | try:
61 | token = self.snapp.load_token()
62 |
63 | if token:
64 |                     messagebox.showinfo("Success", "You're logged in!")
65 | self.switch_frame_callback()
66 | else:
67 |
68 | self.snapp.send_sms()
69 | self.send_button.destroy()
70 | messagebox.showinfo("Info", "Token not found! The verification code has been sent to your phone number.")
71 |
72 | except PhoneInvalidException as e:
73 | messagebox.showinfo("error", str(e))
74 |
75 | except Exception as e:
76 | raise(e)
77 |
78 |
79 | def verify_code(self):
80 |
81 | if self.code_entry.get() and self.phone_entry.get():
82 |
83 | # Run the async function in the event loop
84 | result = self.snapp.login(self.code_entry.get())
85 |
86 | if result:
87 |                 messagebox.showinfo("Success", "You're logged in!")
88 | self.switch_frame_callback()
89 | else:
90 | messagebox.showinfo("Error", "Error !")
91 |
92 |
93 | class MainApplicationFrame(tk.Frame):
94 | def __init__(self, master=None, snapp:SnappTaxi=None):
95 | super().__init__(master)
96 | self.master = master
97 | self.snapp = snapp
98 | self.grid()
99 |
100 |
101 | self.prize_text_label = tk.Label(self, text="prize text:")
102 | self.prize_text_label.grid(row=1, column=0)
103 | self.prize_text_entry = tk.Entry(self)
104 | self.prize_text_entry.grid(row=1, column=1)
105 |
106 |
107 | self.count_label = tk.Label(self, text="Voucher Count:")
108 | self.count_label.grid(row=2, column=0)
109 | self.count_entry = tk.Entry(self)
110 | self.count_entry.grid(row=2, column=1)
111 |
112 |
113 | self.send_button = tk.Button(self)
114 | self.send_button["text"] = "redeem"
115 |         self.send_button["command"] = self.redeem_prize
116 | self.send_button.grid(row=3, column=1)
117 |
118 | # self.counter_label = tk.Label(self, text='')
119 | # self.counter_label.grid(row=4, column=1)
120 |
121 | self.counter_label = tk.Label(self, text='None')
122 | self.counter_label.grid(row=5, column=1)
123 |
124 |     def redeem_prize(self):
125 |
126 | self.snapp.load_token()
127 | prize_id = self.snapp.get_reward_id_by_name(self.prize_text_entry.get())
128 |
129 |
130 | if prize_id :
131 |
132 | for i in range(1, int(self.count_entry.get()) + 1):
133 |
134 | try:
135 | self.counter_label.config(text=f'{i} Voucher redeemed successfully')
136 | self.update_idletasks() # Force update the GUI
137 | self.snapp.redeem_prize(prize_id)
138 | # self.master.after(2000, lambda: self.master.focufos_force()) # Bring focus back to the root window after 2 seconds
139 |
140 | except VoucherExceededException as e:
141 | messagebox.showinfo("ERROR", str(e))
142 | break
143 |
144 | except RuntimeError as e:
145 | messagebox.showinfo("ERROR", "Error !")
146 | break
147 |
148 | else:
149 | messagebox.showinfo("Successful", "All coupons have been successfully received")
150 | else:
151 |             messagebox.showerror("ERROR", "Prize not found!")
152 |
153 | class MainApplication(tk.Tk):
154 | def __init__(self):
155 | super().__init__()
156 |
157 | self.snapp = SnappTaxi()
158 | self.geometry("300x300")
159 | self.login_frame = LoginFrame(self, self.show_main_frame, self.snapp)
160 | self.main_frame = MainApplicationFrame(self, self.snapp)
161 | self.show_login_frame()
162 |
163 | def show_login_frame(self):
164 | self.main_frame.grid_remove()
165 | self.login_frame.grid()
166 |
167 | def show_main_frame(self):
168 | self.login_frame.grid_remove()
169 | self.main_frame.grid()
170 |
171 |
172 | if __name__ == "__main__":
173 | app = MainApplication()
174 | app.mainloop()
175 |
--------------------------------------------------------------------------------
/Snapp Prize/http_error.py:
--------------------------------------------------------------------------------
1 | class VoucherExceededException(Exception):
2 | def __init__(self, message="Voucher usage limit exceeded"):
3 | super().__init__(message)
4 |
5 | class PhoneInvalidException(Exception):
6 |     def __init__(self, message="Not a valid cellphone number"):
7 | super().__init__(message)
8 |
--------------------------------------------------------------------------------
/Snapp Prize/readme.md:
--------------------------------------------------------------------------------
1 | # Snapp Prize
2 |
3 | https://github.com/itrewm/Snapp-prize/tree/main
4 |
5 | This project automatically converts all of your Snapp points into lottery tickets or other prizes offered by Snapp.
6 |
7 | ## Demo
8 | https://www.instagram.com/p/CtmelffRAHX/
9 |
10 |
11 | ## Requirements
12 |
13 | ```bash
14 | pip install requests
15 | ```
16 |
17 |
18 | ## Deployment
19 |
20 | To deploy this project, run:
21 |
22 |
23 |
24 |
25 | ```bash
26 | python gui.py
27 | ```
28 |
29 | ## Authors
30 |
31 | - [@itrewm](https://github.com/itrewm)
32 |
33 |
34 | ## 🔗 Links
35 | [Instagram](https://instagram.com/itrewm)
36 |
37 | [Telegram](https://t.me/rewwm)
38 |
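39 | ## How it works
40 |
41 | `gui.py` drives the `SnappTaxi` class: it loads a saved token (or sends an SMS code and logs in), looks up a prize by its title, and redeems it in a loop until the voucher limit is hit. The sketch below is a minimal headless version of that flow. The method names are taken from `gui.py`, but the import path, the exact signatures, and the placeholder phone number and prize title are assumptions, so treat it as an illustration rather than a drop-in script.
42 |
43 | ```python
44 | from Snapp import SnappTaxi                      # assumed import path (Snapp.py in this folder)
45 | from http_error import VoucherExceededException, PhoneInvalidException
46 |
47 | snapp = SnappTaxi()
48 |
49 | # Reuse a previously saved token if one exists, otherwise do the SMS login.
50 | if not snapp.load_token():
51 |     try:
52 |         snapp.update_user_number("09xxxxxxxxx")  # placeholder phone number
53 |         snapp.send_sms()                         # Snapp texts you a verification code
54 |         snapp.login(input("Verification code: "))
55 |     except PhoneInvalidException as exc:
56 |         raise SystemExit(exc)                    # stop on an invalid phone number
57 |
58 | # Redeem a prize by the title shown in the app ("lottery ticket" is a placeholder).
59 | prize_id = snapp.get_reward_id_by_name("lottery ticket")
60 | if prize_id:
61 |     try:
62 |         for i in range(1, 11):                   # try to redeem up to 10 vouchers
63 |             snapp.redeem_prize(prize_id)
64 |             print(f"Voucher {i} redeemed")
65 |     except VoucherExceededException as exc:
66 |         print(exc)                               # voucher usage limit reached
67 | ```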
--------------------------------------------------------------------------------
/Telegram Bots/Automation/Melanee robot.py:
--------------------------------------------------------------------------------
1 | import schedule, requests, time
2 | text= ''' سلام من ربات مِلانی ام. اگه پست های این چنل رو دوست دارید لطفا برای بقیه فوروارد کنید.
3 | @melaneepython
4 | '''  # Translation: "Hi, I'm the Melanee bot. If you like this channel's posts, please forward them to others."
5 |
6 | # LINK_1_FORMAT => https://api.telegram.org/bot/getUpdates
7 | #https://core.telegram.org/bots/api
8 | # LINK_2_FORMAT => https://api.telegram.org/bot/sendMessage?chat_id=&text="ENTER YOUR TEXT"
9 |
10 | def bot():
11 |     requests.get("https://api.telegram.org/bot/sendMessage", params={"chat_id": "", "text": text})  # fill in your bot token after /bot and your chat_id
12 |
13 |
14 | schedule.every(1).day.at("19:30").do(bot)
15 | while True:
16 | schedule.run_pending()
17 | time.sleep(1)
18 |
--------------------------------------------------------------------------------
/Telegram Bots/Automation/readme.md:
--------------------------------------------------------------------------------
1 | Set up Autoposting on your Channel/Group using a Telegram Bot
2 |
3 | https://www.youtube.com/watch?v=1RJ2-kefC1I&t=23s
4 |
5 | Finally, you can run and host your program on [PythonAnywhere](https://www.pythonanywhere.com/).
6 |
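7 | The script in this folder (`Melanee robot.py`) follows exactly that pattern: call the Bot API's `sendMessage` endpoint on a schedule. Below is a minimal, self-contained sketch of the same idea; `BOT_TOKEN`, `CHAT_ID`, and the message text are placeholders you would fill in with the values from @BotFather and your own channel.
8 |
9 | ```python
10 | import time
11 |
12 | import requests
13 | import schedule
14 |
15 | BOT_TOKEN = "YOUR_BOT_TOKEN"   # placeholder: token from @BotFather
16 | CHAT_ID = "@your_channel"      # placeholder: channel username or numeric chat id
17 | TEXT = "Your daily post"       # placeholder: message to post
18 |
19 | def post():
20 |     # requests URL-encodes the text for us, so emoji and non-Latin scripts are safe
21 |     requests.get(
22 |         f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
23 |         params={"chat_id": CHAT_ID, "text": TEXT},
24 |     )
25 |
26 | schedule.every().day.at("19:30").do(post)  # same daily time as Melanee robot.py
27 |
28 | while True:
29 |     schedule.run_pending()
30 |     time.sleep(60)
31 | ```
32 |
33 | On PythonAnywhere you can either keep a loop like this running as an always-on task, or drop the `schedule` loop and let a daily scheduled task call `post()` once.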
--------------------------------------------------------------------------------
/Telegram Bots/Bot.py:
--------------------------------------------------------------------------------
1 | import logging
2 | import telegram
3 | from telegram import (ReplyKeyboardMarkup, ReplyKeyboardRemove)
4 | from telegram.ext import (Updater, CommandHandler, MessageHandler, Filters, ConversationHandler)
5 | import os
6 | # os is used to read the PORT environment variable for the webhook
7 |
8 | logging.basicConfig(format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', level=logging.INFO)
9 |
10 | logger = logging.getLogger(__name__)
11 |
12 | LOCATION, PHOTO, NAME, SERVING, TIME, CONFIRMATION = range(6)
13 |
14 | reply_keyboard = [['شروع دوباره', 'مورد تایید است']]
15 | markup = ReplyKeyboardMarkup(reply_keyboard, resize_keyboard=True, one_time_keyboard=True)
16 |
17 | TOKEN = "1283896894:AAEPphaHyJ15erYLr2m8gK__ja3tQDbd6K0"
18 | bot = telegram.Bot(token=TOKEN)
19 | chat_id = "@wasted_food"
20 |
21 | PORT = int(os.environ.get('PORT', 5000))
22 |
23 | def facts_to_str(user_data):
24 | facts = list()
25 |
26 | for key, value in user_data.items():
27 | facts.append('{} - {}'.format(key, value))
28 |
29 | return "\n".join(facts).join(['\n', '\n'])
30 |
31 | def start(update, context):
32 | update.message.reply_text("سلام من ربات دستیار اهدا غذا هستم می خوام کمکت کنم که غذای اضافه رو دور نریزی و یه نیازمندبدی. برای شروع لطفا آدرستو بنویس و بفرست:")
33 |
34 | return LOCATION
35 |
36 | def location(update, context):
37 | user = update.message.from_user
38 | user_data = context.user_data
39 | category = 'آدرس'
40 | text = update.message.text
41 | user_data[category] = text
42 |
43 | logger.info("Location of %s: %s", user.first_name, update.message.text)
44 |
45 | update.message.reply_text("خب؛ اگر تصویری از غذا داری ارسال کن اگر نه از /skip استفاده کن تا این مرحله رو رد کنی")
46 |
47 | return PHOTO
48 |
49 | def photo (update, context):
50 | user = update.message.from_user
51 | user_data = context.user_data
52 | photo_file = update.message.photo[-1].get_file()
53 | photo_file.download('food_photo.jpg')
54 | category = 'تصویر دارد'
55 | user_data[category] = 'بله'
56 |
57 | logger.info("Photo of %s: %s", user.first_name, 'food_photo.jpg')
58 |
59 | update.message.reply_text('بسیار عالی؛ اسم غذا چیه')
60 |
61 | return NAME
62 |
63 | def skip_photo(update, context):
64 | user = update.message.from_user
65 | user_data = context.user_data
66 | category = 'تصویر دارد'
67 | user_data[category] = 'نه'
68 |
69 | logger.info("User %s did not send an photo", user.first_name)
70 |
71 | update.message.reply_text('بسیار عالی؛ اسم غذا چیه')
72 |
73 | return NAME
74 |
75 | def food_name(update, context):
76 | user = update.message.from_user
77 | user_data = context.user_data
78 | category = 'نام غذا'
79 | text = update.message.text
80 | user_data[category] = text
81 |
82 | logger.info("Name of the food is %s", text)
83 |
84 | update.message.reply_text('این غذا برای چند نفر قابل استفاده است')
85 |
86 | return SERVING
87 |
88 | def serving(update, context):
89 | user = update.message.from_user
90 | user_data = context.user_data
91 | category = 'این غذا برای چند نفر است'
92 | text = update.message.text
93 | user_data[category] = text
94 |
95 | logger.info("Name of the servings %s", text)
96 |
97 | update.message.reply_text('چقدر زمان می برد تا غذا آماده شود')
98 |
99 | return TIME
100 |
101 | def time(update, context):
102 | user = update.message.from_user
103 | user_data = context.user_data
104 | category = 'چقدر زمان می برد تا غذا آماده شود'
105 | text = update.message.text
106 | user_data[category] = text
107 |
108 | logger.info("Time to take Food by %s", text)
109 |
110 | update.message.reply_text('از این که اطلاعات را برای ما ارسال کردین سپاس گزاریم . لطفا بررسی کنید که آیا اطلاعات مورد تاییدتان است یا نه {}'.format(facts_to_str(user_data)),
111 | reply_markup=markup)
112 |
113 | return CONFIRMATION
114 |
115 | def confirmation(update, context):
116 | user = update.message.from_user
117 | user_data = context.user_data
118 | update.message.reply_text('از شما سپاس گزاریم اطلاعات شما بر روی کانال ' + chat_id + ' ارسال شد.',
119 | reply_markup=ReplyKeyboardRemove())
120 | if (user_data['تصویر دارد'] == 'بله'):
121 | del user_data['تصویر دارد']
122 | bot.send_photo(chat_id=chat_id, photo=open('food_photo.jpg', 'rb'),
123 | caption=' این غذا در دسترس است جزءیات در زیر ذکر شده \n {}'.format(facts_to_str(user_data)) +
124 | "\n برای اطلاعات بیشتر با ارسال کننده در ازتباط باشید {}".format(user.name),
125 | parse_mode=telegram.ParseMode.HTML)
126 |
127 | else:
128 | del user_data['تصویر دارد']
129 | bot.send_message(chat_id=chat_id, text=' این غذا در دسترس است جزءیات در زیر ذکر شده \n {}'.format(facts_to_str(user_data)) +
130 | "\n برای اطلاعات بیشتر با ارسال کننده در ازتباط باشید {}".format(user.name),
131 | parse_mode=telegram.ParseMode.HTML)
132 |
133 |
134 | return ConversationHandler.END
135 |
136 | def cancel(update, context):
137 | user = update.message.from_user
138 | logger.info("User %s canceled the coneversation", user.first_name)
139 |
140 | update.message.reply_text("بدرود امیدوارم بازم شما رو ببینم", reply_markup=ReplyKeyboardRemove())
141 |
142 | return ConversationHandler.END
143 |
144 | def error(update, context):
145 | logger.warning('Update "%s" caused error "%s"', update, context.error)
146 |
147 | def main():
148 | updater = Updater(TOKEN, use_context=True)
149 |
150 | dp = updater.dispatcher
151 |
152 | conv_handler = ConversationHandler(
153 | entry_points = [CommandHandler('start', start)],
154 |
155 | states={
156 | LOCATION:[CommandHandler('start', start), MessageHandler(Filters.text, location)],
157 | PHOTO:[CommandHandler('start', start), MessageHandler(Filters.photo, photo), CommandHandler('skip', skip_photo)],
158 | NAME:[CommandHandler('start', start), MessageHandler(Filters.text, food_name)],
159 | SERVING:[CommandHandler('start', start), MessageHandler(Filters.text, serving)],
160 | TIME:[CommandHandler('start', start), MessageHandler(Filters.text, time)],
161 | CONFIRMATION:[CommandHandler('start', start), MessageHandler(Filters.regex('^مورد تایید است$'), confirmation),
162 | MessageHandler(Filters.regex('^شروع دوباره$'), start)]
163 | },
164 |
165 |         fallbacks=[CommandHandler('cancel', cancel)]
166 | )
167 |
168 | dp.add_handler(conv_handler)
169 |
170 | dp.add_error_handler(error)
171 |
172 | updater.start_webhook(listen='0.0.0.0', port=PORT, url_path=TOKEN)
173 | updater.bot.set_webhook('https://cb7921f3.ngrok.io/' + TOKEN)
174 |
175 | updater.idle()
176 |
177 | if __name__ == '__main__':
178 | main()
179 |
180 |
--------------------------------------------------------------------------------
/Telegram Bots/mtrx.py:
--------------------------------------------------------------------------------
1 | # _*_ coding: utf-8 _*_
2 | import re, time, pytz
3 | from telegram import Update, InlineKeyboardMarkup, InlineKeyboardButton, ParseMode
4 | from telegram.ext import Updater, CommandHandler, MessageHandler, Filters
5 | from telegram.chataction import ChatAction
6 | from PIL import Image
7 | from time import sleep
8 | from datetime import time
9 |
10 | updater = Updater(
11 | token="TOKEN", use_context=True)
12 |
13 | a = 0
14 |
15 | def start(update, context):
16 | global a
17 | context.bot.send_chat_action(update.message.chat_id, ChatAction.TYPING)
18 | if update.message.chat_id == 5074618670:
19 | if a == 0:
20 | context.job_queue.run_daily(alarm,time(hour=9, minute=00, tzinfo=pytz.timezone('Asia/Tehran')),days=(0, 1, 2, 3, 4, 5, 6),context=update.message.chat_id)
21 | context.job_queue.run_daily(alarm,time(hour=13, minute=00, tzinfo=pytz.timezone('Asia/Tehran')),days=(0, 1, 2, 3, 4, 5, 6),context=update.message.chat_id)
22 | context.job_queue.run_daily(alarm,time(hour=17, minute=00, tzinfo=pytz.timezone('Asia/Tehran')),days=(0, 1, 2, 3, 4, 5, 6),context=update.message.chat_id)
23 | context.bot.sendMessage(chat_id=update.message.chat_id,text='The robot started')
24 | a = 1
25 | else:
26 | context.bot.sendMessage(chat_id=update.message.chat_id,text='The robot is running')
27 | else:
28 | context.bot.sendMessage(chat_id=update.message.chat_id,text='Hello World !')
29 |
30 | def alarm(context):
31 | keyboard = InlineKeyboardMarkup([[InlineKeyboardButton(text='create now', url='https://carbon.now.sh')]])
32 | context.bot.sendMessage(chat_id=5074618670,text='✅ Alarm !',reply_markup=keyboard)
33 |
34 | def text(update, context):
35 |     words = ['fohsh', 'no']  # messages to delete automatically ('fohsh' is Persian for a swear word)
36 |     if update.message.text in words:
37 | context.bot.deleteMessage(message_id=update.message.message_id, chat_id=update.message.chat_id)
38 |
39 | print('The robot started')
40 | updater.dispatcher.add_handler(CommandHandler("start", start, pass_job_queue=True))
41 | updater.dispatcher.add_handler(MessageHandler(Filters.text, text))
42 | updater.start_polling()
43 | updater.idle()
--------------------------------------------------------------------------------
/Telegram Bots/readme.md:
--------------------------------------------------------------------------------
1 |
2 | YouTube:
3 |
4 | https://www.youtube.com/watch?v=Mw2KqU9FVGc&t=1041s
5 |
6 | https://www.youtube.com/watch?v=5EqErpci8eU&t=14s
7 |
8 | https://www.youtube.com/watch?v=RDZEjP8YzOM&t=313s
9 |
10 | https://www.youtube.com/watch?v=1RJ2-kefC1I&t=18s
11 |
12 | https://www.youtube.com/watch?v=uMBn1Y9Kegw&t=19s
13 |
14 | https://www.youtube.com/watch?v=PTFIVTCjcn0&t=13s
15 |
16 | GitHub:
17 |
18 | https://github.com/samyarkd/telegram_profile_auto_change
19 |
20 | https://github.com/bazzazi/group_chat_bot_for_telegram_python
21 |
22 | Others:
23 |
24 | https://medium.com/@ManHay_Hong/how-to-create-a-telegram-bot-and-send-messages-with-python-4cf314d9fa3e
25 |
26 | https://www.dignited.com/25296/how-to-create-telegram-bot-for-telegram-channel/
27 |
--------------------------------------------------------------------------------
/readme.md:
--------------------------------------------------------------------------------
1 | <