├── .github
│   ├── ISSUE_TEMPLATE
│   │   ├── bug_report.md
│   │   └── feature_request.md
│   └── PULL_REQUEST_TEMPLATE
│       └── pull_request_template.md
├── .gitignore
├── .pep8speaks.yml
├── .travis.yml
├── CODE_OF_CONDUCT.md
├── LICENSE
├── README.md
├── chatbot.py
├── config.py
├── google_places.py
├── intentClassification
│   ├── files
│   │   ├── music.zip
│   │   ├── restaurant.zip
│   │   └── weather.zip
│   └── intent_classification.py
├── microbots
│   ├── FaceFilterBot
│   │   ├── README.md
│   │   ├── demo_cute.png
│   │   ├── demo_love.png
│   │   ├── demo_spaceman.png
│   │   └── filters
│   │       └── blue_filter.py
│   ├── Sample
│   │   ├── Eye.png
│   │   ├── _DS_Store
│   │   ├── dog.py
│   │   ├── doggy_ears.png
│   │   ├── doggy_nose.png
│   │   ├── doggy_tongue.png
│   │   ├── eyes.py
│   │   ├── faceSwap.py
│   │   └── mlmodels
│   │       ├── la_muse.mlmodel
│   │       ├── rain_princess.mlmodel
│   │       ├── udnie.mlmodel
│   │       └── wave.mlmodel
│   ├── covid19
│   │   ├── Readme_covid19.md
│   │   ├── config.py
│   │   ├── covid19_summary.py
│   │   ├── files
│   │   │   └── summary.zip
│   │   └── requirements.txt
│   ├── signup
│   │   ├── config.py
│   │   ├── crud.py
│   │   ├── models.py
│   │   ├── requirements.txt
│   │   └── voiceSignup.py
│   └── splitwise
│       ├── README.md
│       ├── app.py
│       └── config.py
├── requirements.txt
├── requirements_windows.txt
├── setup.cfg
├── setup.py
├── test-requirements.txt
├── tox.ini
├── voice_conf.py
└── weather.txt
/.github/ISSUE_TEMPLATE/bug_report.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Bug report
3 | about: Create a report to help us improve
4 | title: ''
5 | labels: ''
6 | assignees: ''
7 |
8 | ---
9 |
10 | **Describe the bug**
11 | A clear and concise description of what the bug is.
12 |
13 | **To Reproduce**
14 | Steps to reproduce the behavior:
15 | 1. Go to '...'
16 | 2. Click on '....'
17 | 3. Scroll down to '....'
18 | 4. See error
19 |
20 | **Expected behavior**
21 | A clear and concise description of what you expected to happen.
22 |
23 | **Screenshots**
24 | If applicable, add screenshots to help explain your problem.
25 |
26 | **Desktop (please complete the following information):**
27 | - OS: [e.g. iOS]
28 | - Browser [e.g. chrome, safari]
29 | - Version [e.g. 22]
30 |
31 | **Smartphone (please complete the following information):**
32 | - Device: [e.g. iPhone6]
33 | - OS: [e.g. iOS8.1]
34 | - Browser [e.g. stock browser, safari]
35 | - Version [e.g. 22]
36 |
37 | **Additional context**
38 | Add any other context about the problem here.
39 |
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/feature_request.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Feature request
3 | about: Suggest an idea for this project
4 | title: ''
5 | labels: ''
6 | assignees: ''
7 |
8 | ---
9 |
10 | **Is your feature request related to a problem? Please describe.**
11 | A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
12 |
13 | **Describe the solution you'd like**
14 | A clear and concise description of what you want to happen.
15 |
16 | **Describe alternatives you've considered**
17 | A clear and concise description of any alternative solutions or features you've considered.
18 |
19 | **Additional context**
20 | Add any other context or screenshots about the feature request here.
21 |
--------------------------------------------------------------------------------
/.github/PULL_REQUEST_TEMPLATE/pull_request_template.md:
--------------------------------------------------------------------------------
1 | Please prefix your pull request with one of the following: [FEATURE] [FIX] [IMPROVEMENT].
2 |
3 |
4 | # Title
5 |
6 | ## Description
7 |
8 | Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.
9 |
10 | Fixes # (issue)
11 |
12 | ## Type of change
13 |
14 | Please delete options that are not relevant.
15 |
16 | - [ ] Bug fix (non-breaking change which fixes an issue)
17 | - [ ] New feature (non-breaking change which adds functionality)
18 | - [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
19 | - [ ] This change requires a documentation update
20 |
21 | ## How Has This Been Tested?
22 |
23 | Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration.
24 |
25 | - [ ] Test A
26 | - [ ] Test B
27 |
28 | **Test Configuration**:
29 | * Firmware version:
30 | * Hardware:
31 | * Toolchain:
32 | * SDK:
33 |
34 | ## Checklist:
35 |
36 | - [ ] My code follows the style guidelines of this project
37 | - [ ] I have performed a self-review of my own code
38 | - [ ] I have commented my code, particularly in hard-to-understand areas
39 | - [ ] I have made corresponding changes to the documentation
40 | - [ ] My changes generate no new warnings
41 | - [ ] I have added tests that prove my fix is effective or that my feature works
42 | - [ ] New and existing unit tests pass locally with my changes
43 | - [ ] Any dependent changes have been merged and published in downstream modules
44 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | *.pyc
2 |
3 | .DS_Store
4 |
--------------------------------------------------------------------------------
/.pep8speaks.yml:
--------------------------------------------------------------------------------
1 | scanner:
2 |     diff_only: True  # If False, the entire file touched by the Pull Request is scanned for errors. If True, only the diff is scanned.
3 |     linter: pycodestyle  # Other option is flake8
4 |
5 | pycodestyle:  # Same as scanner.linter value. Other option is flake8
6 |     max-line-length: 100  # Default is 79 in PEP 8
7 |     ignore:  # Errors and warnings to ignore
8 |         - W504  # line break after binary operator
9 |         - E402  # module level import not at top of file
10 |         - E731  # do not assign a lambda expression, use a def
11 |         - C406  # Unnecessary list literal - rewrite as a dict literal.
12 |         - E741  # ambiguous variable name
13 |
14 | no_blank_comment: True  # If True, no comment is made on PR without any errors.
15 | descending_issues_order: False  # If True, PEP 8 issues in message will be displayed in descending order of line numbers in the file
16 |
17 | message:  # Customize the comment made by the bot
18 |     opened:  # Messages when a new PR is submitted
19 |         header: "Hello @{name}! Thanks for opening this PR. "
20 |         # The keyword {name} is converted into the author's username
21 |         footer: "Do see the [Hitchhiker's guide to code style](https://goo.gl/hqbW4r)"
22 |         # The messages can be written as they would over GitHub
23 |     updated:  # Messages when new commits are added to the PR
24 |         header: "Hello @{name}! Thanks for updating this PR. "
25 |         footer: ""  # Why to comment the link to the style guide everytime? :)
26 |     no_errors: "There are currently no PEP 8 issues detected in this Pull Request. Cheers! :beers: "
27 |
--------------------------------------------------------------------------------
/.travis.yml:
--------------------------------------------------------------------------------
1 | language: python
2 | python:
3 |   - "2.7"
4 |   - "3.5"
5 |   - "3.6"
6 | cache: pip
7 | matrix:
8 |   include:
9 |     # ======= OSX ========
10 |     - name: "Python 2.7.14 on macOS 10.13"
11 |       os: osx
12 |       osx_image: xcode9.3  # Python 2.7.14_2 running on macOS 10.13
13 |       language: shell  # 'language: python' is an error on Travis CI macOS
14 |       before_install:
15 |         - python --version
16 |         # - pip install -U pip
17 |         # - python -m pip install --upgrade pip
18 |         - pip install pytest --user
19 |         - pip install codecov --user
20 |       install: pip install ".[test]" --user
21 |       script: python -m pytest
22 |       after_success: python -m codecov
23 |     - name: "Python 3.5.4 on macOS 10.13"
24 |       os: osx
25 |       osx_image: xcode9.4  # Python 3.5.4 running on macOS 10.13
26 |       language: shell  # 'language: python' is an error on Travis CI macOS
27 |       before_install:
28 |         - python3 --version
29 |         - pip3 install -U pip
30 |         - pip3 install -U pytest
31 |         - pip3 install codecov
32 |       script: python3 -m pytest
33 |       after_success: python3 -m codecov
34 |     - name: "Python 3.6.5 on macOS 10.13"
35 |       os: osx
36 |       osx_image: xcode9.4  # Python 3.6.5 running on macOS 10.13
37 |       language: shell  # 'language: python' is an error on Travis CI macOS
38 |       before_install:
39 |         - python3 --version
40 |         - pip3 install -U pip
41 |         - pip3 install -U pytest
42 |         - pip3 install codecov
43 |       script: python3 -m pytest
44 |       after_success: python3 -m codecov
45 |     # ====== WINDOWS =========
46 |     - name: "Python 2.7 on Windows"
47 |       os: windows  # Windows 10.0.17134 N/A Build 17134
48 |       language: shell  # 'language: python' errors Travis CI Windows
49 |       before_install:
50 |         - choco install python2
51 |         - python --version
52 |         - python -m pip install --upgrade pip
53 |         - pip install --upgrade pytest
54 |         - pip install codecov
55 |       env: PATH=/c/Python27:/c/Python27/Scripts:$PATH
56 |     - name: "Python 3.5.4 on Windows"
57 |       os: windows  # Windows 10.0.17134 N/A Build 17134
58 |       language: shell  # 'language: python' is an error on Travis CI Windows
59 |       before_install:
60 |         - choco install python --version 3.5.4
61 |         - python --version
62 |         - python -m pip install --upgrade pip
63 |         - pip3 install --upgrade pytest
64 |         - pip3 install codecov
65 |       env: PATH=/c/Python35:/c/Python35/Scripts:$PATH
66 |     - name: "Python 3.6.8 on Windows"
67 |       os: windows  # Windows 10.0.17134 N/A Build 17134
68 |       language: shell  # 'language: python' is an error on Travis CI Windows
69 |       before_install:
70 |         - choco install python --version 3.6.8
71 |         - python --version
72 |         - python -m pip install --upgrade pip
73 |         - pip3 install --upgrade pytest
74 |         - pip3 install codecov
75 |       env: PATH=/c/Python36:/c/Python36/Scripts:$PATH
76 | before_install:
77 |   - python --version
78 |   - pip install -U pip
79 |   - pip install -U pytest
80 |   - pip install codecov
81 | install:
82 |   - sudo apt-get install portaudio19-dev
83 |   - pip install -r requirements.txt
84 |   - pip install -r test-requirements.txt
85 |   - python -c "import nltk; nltk.download('stopwords')"
86 | script:
87 |   - nosetests --nocapture --with-cov --cov-config .coveragerc
88 |   - pytest
89 | after_success:
90 |   - codecov
91 |
--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
1 | # Contributor Covenant Code of Conduct
2 |
3 | ## Our Pledge
4 |
5 | In the interest of fostering an open and welcoming environment, we as
6 | contributors and maintainers pledge to making participation in our project and
7 | our community a harassment-free experience for everyone, regardless of age, body
8 | size, disability, ethnicity, sex characteristics, gender identity and expression,
9 | level of experience, education, socio-economic status, nationality, personal
10 | appearance, race, religion, or sexual identity and orientation.
11 |
12 | ## Our Standards
13 |
14 | Examples of behavior that contributes to creating a positive environment
15 | include:
16 |
17 | * Using welcoming and inclusive language
18 | * Being respectful of differing viewpoints and experiences
19 | * Gracefully accepting constructive criticism
20 | * Focusing on what is best for the community
21 | * Showing empathy towards other community members
22 |
23 | Examples of unacceptable behavior by participants include:
24 |
25 | * The use of sexualized language or imagery and unwelcome sexual attention or
26 | advances
27 | * Trolling, insulting/derogatory comments, and personal or political attacks
28 | * Public or private harassment
29 | * Publishing others' private information, such as a physical or electronic
30 | address, without explicit permission
31 | * Other conduct which could reasonably be considered inappropriate in a
32 | professional setting
33 |
34 | ## Our Responsibilities
35 |
36 | Project maintainers are responsible for clarifying the standards of acceptable
37 | behavior and are expected to take appropriate and fair corrective action in
38 | response to any instances of unacceptable behavior.
39 |
40 | Project maintainers have the right and responsibility to remove, edit, or
41 | reject comments, commits, code, wiki edits, issues, and other contributions
42 | that are not aligned to this Code of Conduct, or to ban temporarily or
43 | permanently any contributor for other behaviors that they deem inappropriate,
44 | threatening, offensive, or harmful.
45 |
46 | ## Scope
47 |
48 | This Code of Conduct applies both within project spaces and in public spaces
49 | when an individual is representing the project or its community. Examples of
50 | representing a project or community include using an official project e-mail
51 | address, posting via an official social media account, or acting as an appointed
52 | representative at an online or offline event. Representation of a project may be
53 | further defined and clarified by project maintainers.
54 |
55 | ## Enforcement
56 |
57 | Instances of abusive, harassing, or otherwise unacceptable behavior may be
58 | reported by contacting the project team at satyammittalid@gmail.com. All
59 | complaints will be reviewed and investigated and will result in a response that
60 | is deemed necessary and appropriate to the circumstances. The project team is
61 | obligated to maintain confidentiality with regard to the reporter of an incident.
62 | Further details of specific enforcement policies may be posted separately.
63 |
64 | Project maintainers who do not follow or enforce the Code of Conduct in good
65 | faith may face temporary or permanent repercussions as determined by other
66 | members of the project's leadership.
67 |
68 | ## Attribution
69 |
70 | This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
71 | available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
72 |
73 | [homepage]: https://www.contributor-covenant.org
74 |
75 | For answers to common questions about this code of conduct, see
76 | https://www.contributor-covenant.org/faq
77 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2019 Satyam Mittal
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Voice Enabled Chatbot
2 |
3 |
4 | [](https://travis-ci.org/scalability4all/voice-enabled-chatbot) [](https://codecov.io/gh/scalability4all/voice-enabled-chatbot)
5 |
6 | Implementing a voice-enabled chatbot that converses with a user via voice in natural language. The user should be able to interact with the application like a voice assistant, and an appropriate response will be returned by the application (also through voice). The number of topics to converse upon will be fixed; however, the user should be able to converse in natural language.
7 |
8 | If we have topics such as the weather, location information, or inventory information, the
9 | application should be able to converse on that particular topic. For example, given questions
10 | such as:
11 |
12 | Hey, what is the weather today? - **Weather Information**
13 |
14 | I want to know if there are any shopping stores near me? - **Location Information**
15 |
16 | Will it rain today? - **Weather Information**
17 |
18 | Can you tell me availability of item in store X? - **Inventory Information**
19 |
20 | Initially the application should be able to identify each particular topic and then return the
21 | appropriate response. After that, the chatbot should carry on the conversation regarding that
22 | particular topic. For eg:
23 |
24 | **User** - I want to know if there are any shopping stores near me.
25 |
26 | **Chatbot** - These are the nearest stores:- A, B, C. Where do you wanna go?
27 |
28 | **User** - I want to go to the most popular one among them
29 |
30 | **Chatbot**- All right, showing you directions to C.
31 |
32 | So, mainly the chatbot should formulate appropriate responses to incoming questions and
33 | carry on the conversation. Keep in mind that the conversation will be in natural language
34 | and the user is assumed to have provided sufficient information.
35 |
36 |
37 | ## Concept
38 |
39 | Before starting the conversation, the bot fetches the user's location and other details to give personalized results.
40 |
41 | **Step 1: Speech-2-Text:** Given speech input through the microphone, record it and convert it to text using the SpeechRecognition and PyAudio libraries.
42 |
43 | **Step 2: Topic Modelling:** Extract the entity and intent of the chat using a model trained on a corpus. To get the trained model, we use a classifier to categorize the query into weather, location, and inventory. After that, using RASA-NLU with the spaCy library, we extract the entities.
44 |
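As an illustration of the intent step, a minimal keyword-based scorer might look like the sketch below. This is a hypothetical, simplified stand-in for the trained classifier described above, not the project's actual model; the keyword sets are assumptions chosen to match the example questions.

```python
# Hypothetical keyword-based intent scorer, a simplified stand-in for the
# trained classifier described above (not the project's actual model).
INTENT_KEYWORDS = {
    "weather": {"weather", "rain", "temperature", "sunny", "forecast"},
    "location": {"store", "stores", "near", "nearby", "directions"},
    "inventory": {"availability", "stock", "item", "inventory"},
}


def classify_intent(sentence):
    # Score each intent by how many of its keywords appear in the sentence.
    words = set(sentence.lower().split())
    scores = {intent: len(words & kws) for intent, kws in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"
```

A real classifier generalizes beyond exact keywords, but the interface (sentence in, intent label out) is the same.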
45 | **Step 3:** After finding the intent and entity, we handle the query as follows:
46 | - Intent = Weather: based on the entity specified, we use a weather API to fetch data for the location.
47 | - Intent = Location: follow this conversation flow:
48 |   1. Get stores located nearby
49 |   2. Choose a store
50 |   3. Get inventory details about the store
51 |
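For the weather branch, the request can be assembled as below. The URL shape mirrors the OpenWeatherMap endpoint used in `chatbot.py`; the helper name and the key value are placeholders for illustration.

```python
# Build the OpenWeatherMap request for a resolved location entity.
# build_weather_url is a hypothetical helper; the api_key is a placeholder
# (the real one lives in config.py).
def build_weather_url(lat, lon, api_key, units="metric"):
    return (
        "http://api.openweathermap.org/data/2.5/weather?"
        "lat={}&lon={}&appid={}&units={}".format(lat, lon, api_key, units)
    )
```

The JSON response then carries `name`, `wind.speed`, `weather[0].description`, and the `main` temperature fields that the bot reads out.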
52 | **Step 4:** Use a caching mechanism to return results for recently used queries.
53 |
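A minimal sketch of the caching idea in Step 4, using the standard library's `lru_cache`. The backend call here is simulated; in the bot it would be the weather or places API round trip.

```python
from functools import lru_cache

CALLS = []  # records simulated backend hits, to show the cache working


@lru_cache(maxsize=32)
def answer(query):
    # In the real bot this would be the weather/places API round trip.
    CALLS.append(query)
    return "result for " + query


answer("weather in Bangalore")
answer("weather in Bangalore")  # second call is served from the cache
```

Only the first call reaches the "backend"; the repeat is answered from the cache without another round trip.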
54 | ## Changelog
55 | - #### v0.1.1
56 | - Added support for speech to text
57 | - Added support for the weather and Google Places APIs
58 | - Added basic introduction chat
59 | - Added voice support to chat
60 |
61 | ## Usage
62 |
63 | To change the language, enter the BCP-47 code from [language support](https://cloud.google.com/speech-to-text/docs/languages). If you want the language to be English (default), press enter.
64 |
65 | Arguments:
66 | ```
67 | For now, you can use it for the following domains:
68 | Introduction: Hi, Hello..
69 | Weather: Get weather info
70 | Google Search: get any google search
71 | Wikipedia Search: What is "query"?
72 | Google Places: Get me best places.
73 | ```
74 | You can quit by pressing Ctrl+C
75 |
76 | ## Build the application locally
77 | * Clone the repo
78 | - Clone the ```voice-enabled-chatbot``` repo locally. In the terminal, run:
79 | ```git clone https://github.com/satyammittal/voice-enabled-chatbot.git```
80 | * Install the dependencies
81 | - We need PyAudio, a cross-platform audio I/O library. To install it, run:
82 | ```sudo apt-get install portaudio19-dev``` (Linux)
83 | ```brew install portaudio``` (macOS)
84 | - Further, install the other requirements using:
85 | ```pip install -r requirements.txt```
86 | - On Windows, install the other requirements using:
87 | ```pip install -r requirements_windows.txt```
88 | - Install the English stopwords using:
89 | ```python -c "import nltk; nltk.download('stopwords')"```
90 | - The *pyowm* package is known to be unstable on Windows.
91 | * Configure Google API Key
92 | - Go to the [Google Cloud Platform Console](https://cloud.google.com/console/google/maps-apis/overview).
93 | - Select or create the project for which you want to add an API key.
94 | - Click the menu button and select __APIs & Services > Credentials__.
95 | - On the __Credentials__ page, click __Create credentials > API key__.
96 | - Copy and paste the API key into the [`config.py`](/config.py) file.
97 |
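After this step, the keys in `config.py` should be filled in. The values below are placeholders, not real keys:

```python
# config.py (excerpt): replace the placeholder values with your own keys
google_api_key = "YOUR_GOOGLE_API_KEY"
weather_api_key = "YOUR_OPENWEATHERMAP_API_KEY"
```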
98 | ## Run the application
99 | Run the application using command - ```python chatbot.py```
100 |
101 | ## Milestones
102 |
103 | 1. Completing the chatbot so that it works on multiple domains specified through config.
104 | 2. Adding classification techniques for intent separation.
105 | 3. Automated method for entity creation from sentences.
106 | 4. Use a caching mechanism to return results for recently used queries.
107 | 5. At the end, the deliverable will be a user interface for the sample chatbot.
108 | 6. We will also extend it to create a plugin for companies requiring a chatbot. They can put their domain in the config file and their data separately to give personalized results.
109 | 7. Multi-language support
110 |
111 | ## Sample output
112 |
113 |
114 |
115 |
116 |
117 | ## References
118 | * [Speech To Text - Python](https://medium.com/@rahulvaish/speech-to-text-python-77b510f06de)
119 | * [Topic-Focused Summarization of Chat Conversations](https://link.springer.com/content/pdf/10.1007%2F978-3-642-36973-5_88.pdf)
120 |
--------------------------------------------------------------------------------
/chatbot.py:
--------------------------------------------------------------------------------
1 | import random
2 | import datetime
3 | import webbrowser
4 | import pyttsx3
5 | import wikipedia
6 | from pygame import mixer
7 | import pyowm
8 | import config
9 | import speech_recognition as sr
10 | from google_places import (
11 |     change_location_query,
12 |     filter_sentence,
13 |     get_location,
14 |     nearby_places,
15 |     requests,
16 | )
17 | import pyjokes
18 | from googletrans import Translator
19 | from voice_conf import *
20 | # from speech_recognition.__main__ import r, audio
21 | from intentClassification.intent_classification import IntentClassification
22 |
23 | greetings = ['hey there', 'hello', 'hi', 'Hai', 'hey!', 'hey', 'hi there!']
24 | question = ['How are you?', 'How are you doing?', 'What\'s up?']
25 | responses = ['Okay', "I'm fine"]
26 | var1 = ['who made you', 'who created you']
27 | var2 = ['I_was_created_by_Edward_right_in_his_computer.',
28 |         'Edward', 'Some_guy_whom_i_never_got_to_know.']
29 | var3 = ['what time is it', 'what is the time', 'time']
30 | var4 = ['who are you', 'what is your name']
31 | var5 = ['date', 'what is the date', 'what date is it', 'tell me the date']
32 | cmd1 = ['open browser', 'open google']
33 | cmd2 = ['play music', 'play songs', 'play a song', 'open music player']
34 | cmd3 = [
35 |     'tell a joke',
36 |     'tell me a joke',
37 |     'say something funny',
38 |     'tell something funny']
39 | cmd4 = ['open youtube', 'i want to watch a video']
40 | cmd5 = [
41 |     'tell me the weather',
42 |     'weather',
43 |     'what about the weather',
44 |     'what\'s the weather']
45 | cmd6 = ['exit', 'close', 'goodbye', 'nothing', 'catch you later', 'bye']
46 | cmd7 = [
47 |     "what is your color",
48 |     "what is your colour",
49 |     "your color",
50 |     "your color?",
51 | ]
52 | colrep = [
53 |     "Right now its rainbow",
54 |     "Right now its transparent",
55 |     "Right now its non chromatic",
56 | ]
57 | cmd8 = ["what is your favourite colour", "what is your favourite color"]
58 | cmd9 = ["thank you"]
59 |
60 | repfr9 = ["you're welcome", "glad I could help you"]
61 |
62 | intentClassifier = IntentClassification()
63 |
64 | personalized, longitude, latitude = get_location()
65 | stores = []
66 | stores_data = {}
67 |
68 | print("hi", "Setting location through IP bias. Change location?")
69 | change_location = False
70 |
71 | language_conf = input('Language(en-US): ')
72 | if language_conf == '':
73 |     language_conf = "en-US"
74 | voice_language = getVoiceID(language_conf[:2])
75 |
76 | engine = pyttsx3.init()
77 | voices = engine.getProperty('voices')
78 | engine.setProperty('voice', voice_language)
79 | volume = engine.getProperty('volume')
80 | engine.setProperty('volume', 1.0)  # pyttsx3 volume ranges from 0.0 to 1.0
81 | rate = engine.getProperty('rate')
82 |
83 | engine.setProperty('rate', rate - 25)
84 |
85 | translator = Translator()
86 |
87 | while True:
88 |     speech_type = input("Speech/Text: ")
89 |     if speech_type.lower() != "speech":
90 |         translate = input("Type: ")
91 |     else:
92 |         r = sr.Recognizer()
93 |         with sr.Microphone() as source:
94 |             t = translator.translate('Say something', dest=language_conf[:2])
95 |             print(t.text)
96 |             engine.say(t.text)
97 |             engine.runAndWait()
98 |             r.adjust_for_ambient_noise(source)
99 |             audio = r.listen(source)
100 |         try:
101 |             translate = r.recognize_google(audio, language=language_conf)
102 |             print("You said:- " + translate)
103 |         except sr.UnknownValueError:
104 |             print("Could not understand audio")
105 |             engine.say("I didn't get that. Please try again.")
106 |             engine.runAndWait()
107 |             continue  # nothing was recognized; prompt again
108 |     intent = intentClassifier.intent_identifier(translate)
109 |     print("Intent:", intent)
110 |     # TODO:: entity based weather output
111 |     if intent == "weather":
112 |         print("here")
113 |         owm = pyowm.OWM(config.weather_api_key)
114 |         observation = owm.weather_at_place("Bangalore, IN")
115 |         observation_list = owm.weather_around_coords(12.972442, 77.580643)
116 |         w = observation.get_weather()
117 |         w.get_wind()
118 |         w.get_humidity()
119 |         w.get_temperature("celsius")
120 |         print(w)
121 |         print(w.get_wind())
122 |         print(w.get_humidity())
123 |         print(w.get_temperature("celsius"))
124 |         engine.say(w.get_wind())
125 |         engine.runAndWait()
126 |         engine.say("humidity")
127 |         engine.runAndWait()
128 |         engine.say(w.get_humidity())
129 |         engine.runAndWait()
130 |         engine.say("temperature")
131 |         engine.runAndWait()
132 |         engine.say(w.get_temperature("celsius"))
133 |         engine.runAndWait()
134 |     if intent == "music" or intent == "restaurant":
135 |         engine.say("please wait")
136 |         engine.runAndWait()
137 |         print(wikipedia.summary(translate))
138 |         engine.say(wikipedia.summary(translate))
139 |         engine.runAndWait()
140 |
141 |     if translate in greetings:
142 |         random_greeting = random.choice(greetings)
143 |         print(random_greeting)
144 |     elif translate.lower() == "yes":
145 |         change_location = True
146 |         print("Location?")
147 |     elif change_location is True:
148 |         personalized = change_location_query(translate, config.google_api_key)
149 |         change_location = False
150 |     elif translate in question:
151 |         print("I am fine")
152 |     elif translate in var1:
153 |         reply = random.choice(var2)
154 |         print(reply)
155 |     elif translate in cmd9:
156 |         print(random.choice(repfr9))
157 |     elif translate in cmd7:
158 |         print(random.choice(colrep))
159 |         print("It keeps changing every micro second")
160 |     elif translate in cmd8:
161 |         print(random.choice(colrep))
162 |         print("It keeps changing every micro second")
163 |     elif translate in cmd2:
164 |         mixer.init()
165 |         mixer.music.load("song.wav")
166 |         mixer.music.play()
167 |     elif translate in var4:
168 |         engine.say("I am a bot, silly")
169 |         engine.runAndWait()
170 |     elif translate in cmd4:
171 |         webbrowser.open("http://www.youtube.com")
172 |     elif translate in cmd6:
173 |         print("see you later")
174 |         exit()
175 |     elif translate in cmd5:
176 |         print("here")
177 |         url = (
178 |             "http://api.openweathermap.org/data/2.5/weather?" +
179 |             "lat={}&lon={}&appid={}&units={}"
180 |         ).format(
181 |             latitude,
182 |             longitude,
183 |             config.weather_api_key,
184 |             config.weather_temperature_format,
185 |         )
186 |         r = requests.get(url)
187 |         x = r.json()
188 |         city = x["name"]
189 |         windSpeed = x["wind"]["speed"]
190 |         skyDescription = x["weather"][0]["description"]
191 |         maxTemperature = x["main"]["temp_max"]
192 |         minTemperature = x["main"]["temp_min"]
193 |         temp = x["main"]["temp"]
194 |         humidity = x["main"]["humidity"]
195 |         pressure = x["main"]["pressure"]
196 |         # use the above variables based on user needs
197 |         print(
198 |             "Weather in {} is {} "
199 |             "with temperature {} celsius"
200 |             ", humidity in the air is {} "
201 |             "and wind blowing at a speed of {}".format(
202 |                 city, skyDescription, temp, humidity, windSpeed
203 |             )
204 |         )
205 |         engine.say(
206 |             "Weather in {} is {} "
207 |             "with temperature {} celsius"
208 |             ", humidity in the air is {} "
209 |             "and wind blowing at a speed of {}".format(
210 |                 city, skyDescription, temp, humidity, windSpeed
211 |             )
212 |         )
213 |         engine.runAndWait()
214 |     elif translate in var3 or translate in var5:
215 |         current_time = datetime.datetime.now()
216 |         if translate in var3:
217 |             print(current_time.strftime("The time is %H:%M"))
218 |             engine.say(current_time.strftime("The time is %H:%M"))
219 |             engine.runAndWait()
220 |         elif translate in var5:
221 |             print(current_time.strftime("The date is %B %d, %Y"))
222 |             engine.say(current_time.strftime("The date is %B %d %Y"))
223 |             engine.runAndWait()
224 |     elif translate in cmd1:
225 |         webbrowser.open("http://www.google.com")
226 |     elif translate in cmd3:
227 |         jokrep = pyjokes.get_joke()
228 |         print(jokrep)
229 |         engine.say(jokrep)
230 |         engine.runAndWait()
231 |     elif ("them" in translate.split(" ") or
232 |           "popular" in translate.split(" ")) and stores:
233 |         sorted_stores_data = sorted(
234 |             stores_data, key=lambda x: x["rating"], reverse=True
235 |         )
236 |         sorted_stores = [x["name"] for x in sorted_stores_data][:5]
237 |         if "order" in translate:
238 |             print("These are the stores: ")
239 |             for store in sorted_stores:
240 |                 print(store)
241 |             engine.say(", ".join(sorted_stores))  # say() expects a string
242 |             engine.runAndWait()
243 |         if "popular" in translate:
244 |             print("Most popular one is: ", sorted_stores[0])
245 |         if "go" in translate:
246 |             lat = sorted_stores_data[0]["geometry"]["location"]["lat"]
247 |             lng = sorted_stores_data[0]["geometry"]["location"]["lng"]
248 |             url = "http://maps.google.com/maps?q={},{}".format(lat, lng)
249 |             webbrowser.open_new(url)
250 |             engine.say(
251 |                 "Showing you directions to the store {}".format(
252 |                     sorted_stores[0]
253 |                 )
254 |             )
255 |             engine.runAndWait()
256 |     elif (
257 |         "stores" in translate.split(" ") or
258 |         "food" in translate.split(" ") or
259 |         "restaurant" in translate
260 |     ):
261 |         stores = []
262 |         stores_data = {}
263 |         query = filter_sentence(translate)
264 |         stores, stores_data = nearby_places(
265 |             config.google_api_key,
266 |             personalized.city,
267 |             query,
268 |             personalized.latitude,
269 |             personalized.longitude,
270 |         )
271 |         print("These are the stores: ")
272 |         for store in stores:
273 |             print(store)
274 |         engine.say(", ".join(stores))  # say() expects a string
275 |         engine.runAndWait()
276 |         print("Where do you want to go:")
277 |         engine.say("Where do you want to go:")
278 |         engine.runAndWait()
279 |     else:
280 |         engine.say("please wait")
281 |         engine.runAndWait()
282 |         print(wikipedia.summary(translate))
283 |         engine.say(wikipedia.summary(translate))
284 |         engine.runAndWait()
285 |         userInput3 = input("or else search in google")
286 |         webbrowser.open_new("http://www.google.com/search?q=" + userInput3)
287 |
--------------------------------------------------------------------------------
/config.py:
--------------------------------------------------------------------------------
1 | google_api_key = "" # add config key
2 | weather_api_key = "" # add config key
3 |
4 |
5 | # adding domains for intent classification
6 | domains = ['music', 'restaurant', 'weather']
7 |
8 | geolocation_api = "https://geolocation-db.com/json"
9 | weather_temperature_format = "metric" # It gives temperature in celsius
10 |
--------------------------------------------------------------------------------
/google_places.py:
--------------------------------------------------------------------------------
1 | import requests
2 | import json
3 | import config
4 | from collections import namedtuple
5 | from nltk.tokenize import word_tokenize
6 | from nltk.corpus import stopwords
7 | stop_words = set(stopwords.words('english'))
8 |
9 |
10 | def nearby_places(
11 | api_key,
12 | location,
13 | query,
14 | latitude,
15 | longitude,
16 | sort_by="prominence"):
17 | url = "https://maps.googleapis.com/maps/api/place/textsearch/json?"
18 | if "near" in query:
19 | sort_by = "distance"
20 | req = "{url}query='{query}'&location={latitude},{longitude}&rankby={rankby}&key={key}".format(
21 | url=url,
22 | query=query,
23 | latitude=latitude,
24 | longitude=longitude,
25 | key=api_key,
26 | rankby=sort_by
27 | )
28 | # print(req)
29 | r = requests.get(req)
30 | x = r.json()
31 | y = x['results']
32 | arr = []
33 | count = 5
34 | for i in range(min(count, len(y))):
35 | arr.append(y[i]['name'])
36 | return arr, y
37 |
38 |
39 | def get_location():
40 | # {u'city': u'Hyderabad', u'longitude':78.4744,
41 | # u'latitude': 17.3753, u'state': u'Telangana', u'IPv4': u'157.48.48.45',
42 | # u'country_code': u'IN', u'country_name': u'India', u'postal': u'500025'}
43 | url = config.geolocation_api
44 | response = requests.get(url)
45 | data = response.json()
46 | data_named = namedtuple("User", data.keys())(*data.values())
47 | return data_named, data['longitude'], data['latitude']
48 |
49 |
50 | def filter_sentence(text):
51 | word_tokens = word_tokenize(text)
52 | filtered_sentence = [w for w in word_tokens if w not in stop_words]
53 | ans = ""
54 | for x in filtered_sentence:
55 | ans = ans + x + " "
56 | return ans
57 |
58 |
59 | def change_location_query(address, key):
60 | url = "https://maps.googleapis.com/maps/api/geocode/json?address={}&key={}".format(
61 | address, key)
62 | r = requests.get(url)
63 | x = r.json()
64 | user = namedtuple('User', 'city longitude latitude place_id')
65 | try:
66 | x = x['results']
67 | location = x[0]['address_components'][2]['long_name']
68 |         latitude = x[0]['geometry']['location']['lat']
69 |         longitude = x[0]['geometry']['location']['lng']
70 |         return user(
71 |             city=location,
72 |             longitude=longitude,
73 |             latitude=latitude,
74 | place_id=x[0]['place_id'])
75 | except Exception as e:
76 | print(e)
77 | return user(
78 | city="Hyderabad",
79 |             longitude=78.34,
80 |             latitude=17.44,
81 | place_id="ChIJK_h_8QMKzjsRlrTemJvoAp0")
82 |
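As a standalone illustration of what `filter_sentence` does before the query reaches the Places API, here is a minimal sketch that uses a tiny hardcoded stop-word set instead of NLTK's corpus, so it runs without downloads (the stop-word list is an assumption for illustration only):

```python
# Hypothetical mini stop-word set; the real code uses nltk.corpus.stopwords.
STOP_WORDS = {"the", "a", "an", "to", "is", "me"}

def filter_sentence_sketch(text):
    tokens = text.lower().split()  # stand-in for nltk.word_tokenize
    # keep only content words, as filter_sentence() does
    kept = [w for w in tokens if w not in STOP_WORDS]
    return " ".join(kept)

print(filter_sentence_sketch("show me a restaurant near the station"))
# → "show restaurant near station"
```

Note that "near" survives filtering (it is not an NLTK stop word), which matters because `nearby_places` switches `rankby` to `distance` when "near" appears in the query.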
--------------------------------------------------------------------------------
/intentClassification/files/music.zip:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/scalability4all/voice-enabled-chatbot/d40c93607ed052f38b3eb52d6735069eeee5754c/intentClassification/files/music.zip
--------------------------------------------------------------------------------
/intentClassification/files/restaurant.zip:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/scalability4all/voice-enabled-chatbot/d40c93607ed052f38b3eb52d6735069eeee5754c/intentClassification/files/restaurant.zip
--------------------------------------------------------------------------------
/intentClassification/files/weather.zip:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/scalability4all/voice-enabled-chatbot/d40c93607ed052f38b3eb52d6735069eeee5754c/intentClassification/files/weather.zip
--------------------------------------------------------------------------------
/intentClassification/intent_classification.py:
--------------------------------------------------------------------------------
1 | import json
2 | import operator
3 | import os
4 | import zipfile
5 | import nltk
6 | import random
7 | import collections
8 | import pandas as pd
9 | from nltk.corpus import stopwords
10 | from nltk.corpus import wordnet
11 | from nltk.stem import WordNetLemmatizer
12 | from nltk.metrics import precision, recall, f_measure, ConfusionMatrix
13 | from config import domains
14 | nltk.download('averaged_perceptron_tagger')
15 | nltk.download('punkt')
16 | nltk.download('wordnet')
17 | nltk.download('stopwords')
18 |
19 |
20 | class IntentClassification:
21 | def __init__(self):
22 | self.lemmatizer = WordNetLemmatizer()
23 | self.data = {}
24 | self.document = []
25 | self.flat_list = []
26 |
27 | self.read_files()
28 |
29 | """Getting the words from the data"""
30 | self.get_words()
31 | """Removes the **stop words** like ( ‘off’, ‘is’, ‘s’, ‘am’, ‘or’) and
32 | ***non alphabetical*** characters"""
33 | self.flat_list = self.remove_stop_words(self.flat_list)
34 |
35 | """**Lemmatization** i.e., tranforms different
36 | forms of words to a single one"""
37 | filtered_list = self.lemmatization(self.flat_list)
38 |
39 | """Getting the ***frequency*** of each word and extracting top 2000"""
40 |
41 | frequency_distribution = nltk.FreqDist(
42 | w.lower() for w in filtered_list
43 | )
44 |
45 | self.word_features = list(frequency_distribution)[:2000]
46 |
47 | """Training the model"""
48 |
49 | self.test_set = nltk.classify.apply_features(
50 | self.feature_extraction, self.document[:500]
51 | )
52 | self.train_set = nltk.classify.apply_features(
53 | self.feature_extraction, self.document[500:]
54 | )
55 | self.classifier = nltk.NaiveBayesClassifier.train(self.train_set)
56 |
57 | def read_files(self):
58 | for domain in domains:
59 | domainpath = os.path.join(
60 | os.path.dirname(__file__), "files/" + domain + ".zip"
61 | )
62 | with zipfile.ZipFile(domainpath, "r") as z:
63 | for filename in z.namelist():
64 | with z.open(filename) as f:
65 | data = f.read()
66 | d = json.loads(data.decode("utf-8"))
67 | df = pd.DataFrame(d)
68 | self.data[domain] = df["text"].to_numpy()
69 |
70 | def get_words(self):
71 | self.document = [
72 | (text, category)
73 | for category in self.data.keys()
74 | for text in self.data[category]
75 | ]
76 |
77 | random.shuffle(self.document)
78 | array_words = [nltk.word_tokenize(w) for (w, cat) in self.document]
79 | self.flat_list = [word for sent in array_words for word in sent]
80 |
81 | def remove_stop_words(self, words):
82 | stop_words = set(stopwords.words("english"))
83 |
84 | words_filtered = []
85 |
86 | for w in words:
87 | if w not in stop_words:
88 | if w.isalpha():
89 | words_filtered.append(w)
90 |
91 | return words_filtered
92 |
93 | def get_wordnet_pos(self, word):
94 | """Map POS tag to first character lemmatize() accepts"""
95 | tag = nltk.pos_tag([word])[0][1][0].upper()
96 | tag_dict = {
97 | "J": wordnet.ADJ,
98 | "N": wordnet.NOUN,
99 | "V": wordnet.VERB,
100 | "R": wordnet.ADV,
101 | }
102 |
103 | return tag_dict.get(tag, wordnet.NOUN)
104 |
105 | def lemmatization(self, words):
106 | return [
107 | self.lemmatizer.lemmatize(w, self.get_wordnet_pos(w))
108 | for w in words
109 | ]
110 |
111 | def feature_extraction(self, doc):
112 | document_words = [word.lower() for word in nltk.word_tokenize(doc)]
113 |
114 | document_words = self.remove_stop_words(document_words)
115 | document_words = self.lemmatization(document_words)
116 | features = {}
117 |         for word in self.word_features:
118 |             # record both presence and absence of each frequent word
119 |             features["contains({})".format(word)] = word in document_words
120 | return features
121 |
122 | def measuring_accuracy(self):
123 | """Testing the model *accuracy*"""
124 | print(
125 | "Accuracy:", nltk.classify.accuracy(self.classifier, self.test_set)
126 | )
127 | self.classifier.show_most_informative_features(20)
128 | """Measuring **Precision,Recall,F-Measure** of a classifier.
129 | Finding **Confusion matrix**"""
130 | actual_set = collections.defaultdict(set)
131 | predicted_set = collections.defaultdict(set)
132 | # cm here refers to confusion matrix
133 | actual_set_cm = []
134 | predicted_set_cm = []
135 | for i, (feature, label) in enumerate(self.test_set):
136 | actual_set[label].add(i)
137 | actual_set_cm.append(label)
138 | predicted_label = self.classifier.classify(feature)
139 | predicted_set[predicted_label].add(i)
140 | predicted_set_cm.append(predicted_label)
141 |
142 | for category in self.data.keys():
143 | print(
144 | category,
145 | "precision :",
146 | precision(actual_set[category], predicted_set[category]),
147 | )
148 | print(
149 | category,
150 | "recall :",
151 | recall(actual_set[category], predicted_set[category]),
152 | )
153 | print(
154 | category,
155 | "f-measure :",
156 | f_measure(actual_set[category], predicted_set[category]),
157 | )
158 | confusion_matrix = ConfusionMatrix(actual_set_cm, predicted_set_cm)
159 | print("Confusion Matrix")
160 | print(confusion_matrix)
161 |
162 | def intent_identifier(self, text):
163 | dist = self.classifier.prob_classify(self.feature_extraction(text))
164 | first_label = next(iter(dist.samples()))
165 | all_equal = all(
166 | round(dist.prob(label), 1) == round(dist.prob(first_label), 1)
167 | for label in dist.samples()
168 | )
169 | if all_equal:
170 | return None
171 | else:
172 | return max(
173 | [(label, dist.prob(label)) for label in dist.samples()],
174 | key=operator.itemgetter(1),
175 | )[0]
176 |
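The tie-break rule in `intent_identifier` can be shown with a standalone sketch: if every label has (roughly) the same probability the intent is treated as unknown (`None`), otherwise the most probable label wins. The probability dictionaries below are hypothetical stand-ins for the classifier's `prob_classify` output:

```python
# Standalone sketch of intent_identifier()'s tie-break logic.
def pick_intent(prob_by_label):
    probs = list(prob_by_label.values())
    # same rounding-to-one-decimal comparison as the real code
    all_equal = all(round(p, 1) == round(probs[0], 1) for p in probs)
    if all_equal:
        return None  # no domain stands out -> unknown intent
    return max(prob_by_label, key=prob_by_label.get)

print(pick_intent({"music": 0.33, "restaurant": 0.34, "weather": 0.33}))  # None
print(pick_intent({"music": 0.10, "restaurant": 0.75, "weather": 0.15}))  # restaurant
```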
--------------------------------------------------------------------------------
/microbots/FaceFilterBot/README.md:
--------------------------------------------------------------------------------
1 | # FaceFilter Microbot
2 | When a user is identified as being in a "strong mood", even intelligent replies are not always effective. A creative response can positively shift that mood: showing the user their own face with a filter that is relevant to the situation in the conversation.
3 |
4 | Let's see some sample conversations in action.
5 |
6 |
7 |
8 |  |
9 | As you can see in the conversation, the user has a 'strong mood' of sadness about not looking good. Our mood detection picks up this strong signal, our intent classifier calls our bot, and the bot sends this uplifting FaceFilter using the user's camera. |
10 |
11 |
12 |
13 |
14 | Here, the user has a 'strong mood' of sadness due to loneliness. Our mood detection picks up this strong signal, our intent classifier calls our bot, and the bot sends this funny FaceFilter that adds a bit of mystery to make things fun and light. |
15 |  |
16 |
17 |
18 |
19 |
20 |  |
21 | Here, the user has a 'strong mood' of passion, tempered with patience for the future. Our mood detection picks up this strong signal, our intent classifier calls our bot, and the bot rewards that patience by helping the user visualize the future they are waiting for! |
22 |
23 |
24 |
25 | This microbot will be a really cool addition to our chatbot and make it more fun!
26 |
27 |
28 | ## Break Down
29 |
30 | So what do we need to make this work? Let's break it down into parts:
31 |
32 |
33 | - Step 1 - Building Live Filters
34 | - Live Face detection contour
35 | - Image processing to create photos for each variety of filters
36 | - Stitching corresponding image on detected contour points
37 |
38 | - Step 2 - Mood Classification Module
39 | - Step 3 - Training Params for Intent Classifier
40 | - Step 4 - Integrating and creating configuration file
41 |
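The sprite-stitching step above boils down to per-pixel alpha blending of a BGRA filter image onto the frame. A minimal NumPy sketch (standalone, with toy 1x2 images; not the bot's actual code):

```python
import numpy as np

# Alpha-blend a BGRA sprite onto a BGR frame region: where alpha is 255 the
# sprite replaces the frame, where alpha is 0 the frame shows through.
def blend(region, sprite):
    alpha = sprite[:, :, 3:4] / 255.0  # alpha channel scaled to [0, 1]
    return (sprite[:, :, :3] * alpha + region * (1 - alpha)).astype(np.uint8)

region = np.full((1, 2, 3), 100, dtype=np.uint8)  # grey background
sprite = np.zeros((1, 2, 4), dtype=np.uint8)
sprite[0, 0] = [255, 0, 0, 255]  # fully opaque blue pixel (BGRA)
sprite[0, 1] = [0, 0, 0, 0]      # fully transparent pixel
print(blend(region, sprite))  # opaque pixel replaces, transparent keeps background
```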
42 |
43 | ## Changelog
44 | - #### v0.1.1
45 | - Added demo/readme
46 |
47 | ## Usage
48 | `python main.py`
49 |
50 |
51 | ## Build the application locally
52 | Will be updated
53 |
61 | ## Run the application
62 | Will be updated
63 |
64 |
--------------------------------------------------------------------------------
/microbots/FaceFilterBot/demo_cute.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/scalability4all/voice-enabled-chatbot/d40c93607ed052f38b3eb52d6735069eeee5754c/microbots/FaceFilterBot/demo_cute.png
--------------------------------------------------------------------------------
/microbots/FaceFilterBot/demo_love.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/scalability4all/voice-enabled-chatbot/d40c93607ed052f38b3eb52d6735069eeee5754c/microbots/FaceFilterBot/demo_love.png
--------------------------------------------------------------------------------
/microbots/FaceFilterBot/demo_spaceman.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/scalability4all/voice-enabled-chatbot/d40c93607ed052f38b3eb52d6735069eeee5754c/microbots/FaceFilterBot/demo_spaceman.png
--------------------------------------------------------------------------------
/microbots/FaceFilterBot/filters/blue_filter.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import cv2
3 | from collections import Counter
4 | import imutils
5 |
6 |
7 | def extractSkin(image):
8 |     # Taking a copy of the image
9 |     img = image.copy()
10 |     # Converting from the BGR colour space to HSV
11 |     img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
12 | 
13 |     # Defining HSV thresholds
14 |     lower_threshold = np.array([0, 48, 80], dtype=np.uint8)
15 |     upper_threshold = np.array([20, 255, 255], dtype=np.uint8)
16 | 
17 |     # Single-channel mask denoting the presence of colours within the thresholds
18 |     skinMask = cv2.inRange(img, lower_threshold, upper_threshold)
19 | 
20 |     # Cleaning up the mask using a Gaussian filter
21 |     skinMask = cv2.GaussianBlur(skinMask, (3, 3), 0)
22 | 
23 |     # Extracting skin from the threshold mask
24 |     skin = cv2.bitwise_and(img, img, mask=skinMask)
25 | 
26 |     # Converting the image back to the BGR colour space
27 |     img = cv2.cvtColor(skin, cv2.COLOR_HSV2BGR)
28 | 
29 |     # Observed that BGR to RGBA conversion gives a more appropriate colour tint than OpenCV colormask options
30 |     # Added an alpha channel to make black pixels transparent for overlapping (WIP)
31 |     img_a = cv2.cvtColor(img, cv2.COLOR_BGR2RGBA)
32 | 
33 |     # Return the skin image
34 |     return img_a
35 |
36 |
37 | camera = cv2.VideoCapture(0)
38 |
39 | while True:
40 | # grab the current frame
41 | (grabbed, frame) = camera.read()
42 |     frame = imutils.resize(frame, width=500)
43 | # Call extractSkin() to extract only skin portions of the frame
44 | skin = extractSkin(frame)
45 |
46 | b_channel, g_channel, r_channel = cv2.split(frame)
47 |
48 |     # Creating a dummy alpha channel in the frame
49 | alpha_channel = np.ones(b_channel.shape, dtype=b_channel.dtype) * 50
50 |
51 | frame_BGRA = cv2.merge((b_channel, g_channel, r_channel, alpha_channel))
52 |
53 | # show the skin in the image along with the mask
54 | cv2.imshow("images", np.hstack([frame_BGRA, skin]))
55 | # if the 'q' key is pressed, stop the loop
56 | if cv2.waitKey(1) & 0xFF == ord("q"):
57 | break
58 |
59 | # cleanup the camera and close any open windows
60 | camera.release()
61 | cv2.destroyAllWindows()
62 |
63 |
64 |
65 |
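The `inRange` thresholding at the heart of `extractSkin` can be sketched without OpenCV: a pixel passes the mask only when every channel lies inside `[lower, upper]`. The toy pixel values below are made up for illustration:

```python
import numpy as np

# Pure-NumPy sketch of cv2.inRange(): per-channel band-pass, 255 where all
# channels are inside the bounds, 0 elsewhere.
def in_range(img, lower, upper):
    mask = ((img >= lower) & (img <= upper)).all(axis=-1)
    return (mask * 255).astype(np.uint8)

lower = np.array([0, 48, 80])
upper = np.array([20, 255, 255])
pixels = np.array([[[10, 100, 120],    # inside the skin-tone range
                    [40, 100, 120]]])  # hue above 20 -> rejected
print(in_range(pixels, lower, upper))  # → [[255   0]]
```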
--------------------------------------------------------------------------------
/microbots/Sample/Eye.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/scalability4all/voice-enabled-chatbot/d40c93607ed052f38b3eb52d6735069eeee5754c/microbots/Sample/Eye.png
--------------------------------------------------------------------------------
/microbots/Sample/_DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/scalability4all/voice-enabled-chatbot/d40c93607ed052f38b3eb52d6735069eeee5754c/microbots/Sample/_DS_Store
--------------------------------------------------------------------------------
/microbots/Sample/dog.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import cv2
3 | import dlib
4 | from scipy.spatial import distance as dist
5 | from scipy.spatial import ConvexHull
6 |
7 | from imutils import face_utils, rotate_bound
8 | import math
9 |
10 |
11 |
12 | def draw_sprite(frame, sprite, x_offset, y_offset):
13 | (h,w) = (sprite.shape[0], sprite.shape[1])
14 | (imgH,imgW) = (frame.shape[0], frame.shape[1])
15 |
16 | if y_offset+h >= imgH: #if sprite gets out of image in the bottom
17 | sprite = sprite[0:imgH-y_offset,:,:]
18 |
19 | if x_offset+w >= imgW: #if sprite gets out of image to the right
20 | sprite = sprite[:,0:imgW-x_offset,:]
21 |
22 | if x_offset < 0: #if sprite gets out of image to the left
23 | sprite = sprite[:,abs(x_offset)::,:]
24 | w = sprite.shape[1]
25 | x_offset = 0
26 |
27 |     # For each RGB channel
28 |     for c in range(3):
29 |         # Channel 4 is alpha: 255 is fully opaque, 0 is a transparent background
30 | frame[y_offset:y_offset+h, x_offset:x_offset+w, c] = \
31 | sprite[:,:,c] * (sprite[:,:,3]/255.0) + frame[y_offset:y_offset+h, x_offset:x_offset+w, c] * (1.0 - sprite[:,:,3]/255.0)
32 | return frame
33 |
34 | # Adjust the given sprite to the head's width and position;
35 | # if the sprite does not fit the screen at the top, it should be trimmed
36 | def adjust_sprite2head(sprite, head_width, head_ypos, ontop = True):
37 | (h_sprite,w_sprite) = (sprite.shape[0], sprite.shape[1])
38 | factor = 1.0*head_width/w_sprite
39 | sprite = cv2.resize(sprite, (0,0), fx=factor, fy=factor) # adjust to have the same width as head
40 | (h_sprite,w_sprite) = (sprite.shape[0], sprite.shape[1])
41 |
42 | y_orig = head_ypos-h_sprite if ontop else head_ypos # adjust the position of sprite to end where the head begins
43 |     if (y_orig < 0):  # if the head is too close to the top of the image, the sprite would not fit on the screen
44 |         sprite = sprite[abs(y_orig)::, :, :]  # in that case, we cut the sprite
45 |         y_orig = 0  # the sprite then begins at the top of the image
46 | return (sprite, y_orig)
47 |
48 | # Applies a sprite at the detected face's coordinates and adjusts it to the head
49 | def apply_sprite(image, path2sprite, w, x, y, angle, ontop=True):
50 |     # the -1 flag keeps the alpha channel when reading the sprite
51 |     sprite = cv2.imread(path2sprite, -1)
52 |     sprite = rotate_bound(sprite, angle)
53 |     (sprite, y_final) = adjust_sprite2head(sprite, w, y, ontop)
54 |     image = draw_sprite(image, sprite, x, y_final)
55 | 
56 | #points are tuples in the form (x,y)
57 | # returns angle between points in degrees
58 | def calculate_inclination(point1, point2):
59 | x1,x2,y1,y2 = point1[0], point2[0], point1[1], point2[1]
60 | incl = 180/math.pi*math.atan((float(y2-y1))/(x2-x1))
61 | return incl
62 |
63 |
64 | def calculate_boundbox(list_coordinates):
65 | x = min(list_coordinates[:,0])
66 | y = min(list_coordinates[:,1])
67 | w = max(list_coordinates[:,0]) - x
68 | h = max(list_coordinates[:,1]) - y
69 | return (x,y,w,h)
70 |
71 | def get_face_boundbox(points, face_part):
72 | if face_part == 1:
73 | (x,y,w,h) = calculate_boundbox(points[17:22]) #left eyebrow
74 | elif face_part == 2:
75 | (x,y,w,h) = calculate_boundbox(points[22:27]) #right eyebrow
76 | elif face_part == 3:
77 | (x,y,w,h) = calculate_boundbox(points[36:42]) #left eye
78 | elif face_part == 4:
79 | (x,y,w,h) = calculate_boundbox(points[42:48]) #right eye
80 | elif face_part == 5:
81 | (x,y,w,h) = calculate_boundbox(points[29:36]) #nose
82 | elif face_part == 6:
83 | (x,y,w,h) = calculate_boundbox(points[48:68]) #mouth
84 | return (x,y,w,h)
85 |
86 |
87 | Path = "shape_predictor_68_face_landmarks.dat"
88 |
89 | fullPoints = list(range(0, 68))
90 | facePoints = list(range(17, 68))
91 |
92 | jawLinePoints = list(range(0, 17))
93 | rightEyebrowPoints = list(range(17, 22))
94 | leftEyebrowPoints = list(range(22, 27))
95 | nosePoints = list(range(27, 36))
96 | rightEyePoints = list(range(36, 42))
97 | leftEyePoints = list(range(42, 48))
98 | mouthOutlinePoints = list(range(48, 61))
99 | mouthInnerPoints = list(range(61, 68))
100 |
101 | detector = dlib.get_frontal_face_detector()
102 |
103 | predictor = dlib.shape_predictor(Path)
104 |
105 | video_capture = cv2.VideoCapture(0)
106 |
107 | while True:
108 | ret, frame = video_capture.read()
109 |
110 | if ret:
111 | grayFrame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
112 |
113 | detectedFaces = detector(grayFrame, 0)
114 |
115 | for face in detectedFaces:
116 | x1 = face.left()
117 | y1 = face.top()
118 | x2 = face.right()
119 | y2 = face.bottom()
120 |
121 | landmarks = predictor(frame, face)
122 | landmarks = face_utils.shape_to_np(landmarks)
123 |
124 | incl = calculate_inclination(landmarks[17], landmarks[26])
125 | is_mouth_open = (landmarks[66][1] - landmarks[62][1]) >= 15
126 |
127 | (x0, y0, w0, h0) = get_face_boundbox(landmarks, 6)
128 | (x3, y3, w3, h3) = get_face_boundbox(landmarks, 5) # nose
129 | apply_sprite(frame, "doggy_nose.png", w3, x3, y3, incl, ontop=False)
130 |
131 | apply_sprite(frame, "doggy_ears.png", face.width(), x1, y1, incl)
132 |
133 | if is_mouth_open:
134 | apply_sprite(frame, "doggy_tongue.png", w0, x0, y0, incl, ontop=False)
135 |
136 |
137 | #apply_sprite(frame, "doggy_ears.png", w, x, y, incl)
138 |
139 | #if is_mouth_open:
140 | #apply_sprite(frame, "doggy_tongue.png", w0, x0, y0, incl, ontop=False)
141 | cv2.imshow("Faces with Filter", frame)
142 | cv2.waitKey(1)
143 |
--------------------------------------------------------------------------------
/microbots/Sample/doggy_ears.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/scalability4all/voice-enabled-chatbot/d40c93607ed052f38b3eb52d6735069eeee5754c/microbots/Sample/doggy_ears.png
--------------------------------------------------------------------------------
/microbots/Sample/doggy_nose.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/scalability4all/voice-enabled-chatbot/d40c93607ed052f38b3eb52d6735069eeee5754c/microbots/Sample/doggy_nose.png
--------------------------------------------------------------------------------
/microbots/Sample/doggy_tongue.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/scalability4all/voice-enabled-chatbot/d40c93607ed052f38b3eb52d6735069eeee5754c/microbots/Sample/doggy_tongue.png
--------------------------------------------------------------------------------
/microbots/Sample/eyes.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import cv2
3 | import dlib
4 | from scipy.spatial import distance as dist
5 | from scipy.spatial import ConvexHull
6 |
7 |
8 | def EyeSize(eye):
9 | eyeWidth = dist.euclidean(eye[0], eye[3])
10 | hull = ConvexHull(eye)
11 | eyeCenter = np.mean(eye[hull.vertices, :], axis=0)
12 |
13 | eyeCenter = eyeCenter.astype(int)
14 |
15 | return int(eyeWidth), eyeCenter
16 |
17 |
18 | def PlaceEye(frame, eyeCenter, eyeSize):
19 | eyeSize = int(eyeSize * 1.5)
20 |
21 | x1 = int(eyeCenter[0, 0] - (eyeSize / 2))
22 | x2 = int(eyeCenter[0, 0] + (eyeSize / 2))
23 | y1 = int(eyeCenter[0, 1] - (eyeSize / 2))
24 | y2 = int(eyeCenter[0, 1] + (eyeSize / 2))
25 |
26 | h, w = frame.shape[:2]
27 |
28 | # check for clipping
29 | if x1 < 0:
30 | x1 = 0
31 | if y1 < 0:
32 | y1 = 0
33 | if x2 > w:
34 | x2 = w
35 | if y2 > h:
36 | y2 = h
37 |
38 | #re-calculate the size to avoid clipping
39 | eyeOverlayWidth = x2 - x1
40 | eyeOverlayHeight = y2 - y1
41 |
42 | # calculate the masks for the overlay
43 | eyeOverlay = cv2.resize(filter, (eyeOverlayWidth, eyeOverlayHeight), interpolation=cv2.INTER_AREA)
44 | mask = cv2.resize(filterMask, (eyeOverlayWidth, eyeOverlayHeight), interpolation=cv2.INTER_AREA)
45 | mask_inv = cv2.resize(filterMaskInv, (eyeOverlayWidth, eyeOverlayHeight), interpolation=cv2.INTER_AREA)
46 |
47 |     # take the ROI for the overlay from the background, equal to the size of the overlay image
48 | roi = frame[y1:y2, x1:x2]
49 |
50 | # roi_bg contains the original image only where the overlay is not, in the region that is the size of the overlay.
51 | roi_bg = cv2.bitwise_and(roi, roi, mask=mask_inv)
52 |
53 | # roi_fg contains the image pixels of the overlay only where the overlay should be
54 | roi_fg = cv2.bitwise_and(eyeOverlay, eyeOverlay, mask=mask)
55 |
56 | # join the roi_bg and roi_fg
57 | dst = cv2.add(roi_bg, roi_fg)
58 |
59 | # place the joined image, saved to dst back over the original image
60 | frame[y1:y2, x1:x2] = dst
61 |
62 |
63 | Path = "shape_predictor_68_face_landmarks.dat"
64 |
65 | fullPoints = list(range(0, 68))
66 | facePoints = list(range(17, 68))
67 |
68 | jawLinePoints = list(range(0, 17))
69 | rightEyebrowPoints = list(range(17, 22))
70 | leftEyebrowPoints = list(range(22, 27))
71 | nosePoints = list(range(27, 36))
72 | rightEyePoints = list(range(36, 42))
73 | leftEyePoints = list(range(42, 48))
74 | mouthOutlinePoints = list(range(48, 61))
75 | mouthInnerPoints = list(range(61, 68))
76 |
77 | detector = dlib.get_frontal_face_detector()
78 |
79 | predictor = dlib.shape_predictor(Path)
80 |
81 | filter = cv2.imread('Eye.png',-1)
82 |
83 | filterMask = filter[:,:,3]
84 | filterMaskInv = cv2.bitwise_not(filterMask)
85 |
86 | filter = filter[:,:,0:3]
87 | origEyeHeight, origEyeWidth = filterMask.shape[:2]
88 |
89 | video_capture = cv2.VideoCapture(0)
90 |
91 | def shape_to_np(shape, dtype="int"):
92 | # initialize the list of (x, y)-coordinates
93 | coords = np.zeros((shape.num_parts, 2), dtype=dtype)
94 |
95 | # loop over all facial landmarks and convert them
96 | # to a 2-tuple of (x, y)-coordinates
97 | for i in range(0, shape.num_parts):
98 | coords[i] = (shape.part(i).x, shape.part(i).y)
99 |
100 | # return the list of (x, y)-coordinates
101 | return coords
102 |
103 |
104 |
105 | def add_landmarks(mat, face, frame):
106 |     # use the global predictor loaded above; shape_predictor() with no model path would fail
107 |     shape = predictor(mat, face)
108 |     shape = shape_to_np(shape)
109 |     for (x, y) in shape:
110 |         cv2.circle(mat, (x, y), 1, (0, 0, 255), -1)
111 |
112 |
113 | while True:
114 | ret, frame = video_capture.read()
115 |
116 | if ret:
117 | grayFrame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
118 |
119 | detectedFaces = detector(grayFrame, 0)
120 |
121 | for face in detectedFaces:
122 | x1 = face.left()
123 | y1 = face.top()
124 | x2 = face.right()
125 | y2 = face.bottom()
126 |
127 | landmarks = np.matrix([[p.x, p.y] for p in predictor(frame, face).parts()])
128 |
129 | leftEye = landmarks[leftEyePoints]
130 | rightEye = landmarks[rightEyePoints]
131 | leftEyeSize, leftEyeCenter = EyeSize(leftEye)
132 | rightEyeSize, rightEyeCenter = EyeSize(rightEye)
133 | PlaceEye(frame, leftEyeCenter, leftEyeSize)
134 | PlaceEye(frame, rightEyeCenter, rightEyeSize)
135 |
136 | cv2.imshow("Faces with Filter", frame)
137 | cv2.waitKey(1)
138 |
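The clipping step inside `PlaceEye` (clamping the overlay rectangle to the frame bounds before resizing the filter image to the clipped box) can be shown in isolation:

```python
# Standalone sketch of PlaceEye()'s clipping logic: clamp an overlay box
# (x1, y1, x2, y2) to a w x h frame so the resized filter never spills outside.
def clip_box(x1, y1, x2, y2, w, h):
    x1, y1 = max(x1, 0), max(y1, 0)
    x2, y2 = min(x2, w), min(y2, h)
    return x1, y1, x2, y2

# An eye box hanging off the top-left corner of a 640x480 frame:
print(clip_box(-20, -10, 60, 70, 640, 480))  # → (0, 0, 60, 70)
```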
--------------------------------------------------------------------------------
/microbots/Sample/faceSwap.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import cv2
3 | import dlib
4 | from scipy.spatial import distance as dist
5 | from scipy.spatial import ConvexHull
6 |
7 | from imutils import face_utils, rotate_bound
8 | import math
9 |
10 |
11 | def applyAffineTransform(src, srcTri, dstTri, size):
12 | # Given a pair of triangles, find the affine transform.
13 | warpMat = cv2.getAffineTransform(np.float32(srcTri), np.float32(dstTri))
14 |
15 | # Apply the Affine Transform just found to the src image
16 | dst = cv2.warpAffine(src, warpMat, (size[0], size[1]), None, flags=cv2.INTER_LINEAR,
17 | borderMode=cv2.BORDER_REFLECT_101)
18 |
19 | return dst
20 |
21 |
22 | # Check if a point is inside a rectangle
23 | def rectContains(rect, point):
24 | if point[0] < rect[0]:
25 | return False
26 | elif point[1] < rect[1]:
27 | return False
28 | elif point[0] > rect[0] + rect[2]:
29 | return False
30 | elif point[1] > rect[1] + rect[3]:
31 | return False
32 | return True
33 |
34 |
35 | # Calculate the Delaunay triangulation
36 | def calculateDelaunayTriangles(rect, points):
37 |     # create subdiv
38 |     subdiv = cv2.Subdiv2D(rect)
39 | # Insert points into subdiv
40 | for p in points:
41 | subdiv.insert(p)
42 |
43 |     triangleList = subdiv.getTriangleList()
44 |
45 | delaunayTri = []
46 |
47 | pt = []
48 | for t in triangleList:
49 | pt.append((t[0], t[1]))
50 | pt.append((t[2], t[3]))
51 | pt.append((t[4], t[5]))
52 |
53 | pt1 = (t[0], t[1])
54 | pt2 = (t[2], t[3])
55 | pt3 = (t[4], t[5])
56 |
57 | if rectContains(rect, pt1) and rectContains(rect, pt2) and rectContains(rect, pt3):
58 | ind = []
59 | # Get face-points (from 68 face detector) by coordinates
60 | for j in range(0, 3):
61 | for k in range(0, len(points)):
62 | if (abs(pt[j][0] - points[k][0]) < 1.0 and abs(pt[j][1] - points[k][1]) < 1.0):
63 | ind.append(k)
64 | # Three points form a triangle. Triangle array corresponds to the file tri.txt in FaceMorph
65 | if len(ind) == 3:
66 | delaunayTri.append((ind[0], ind[1], ind[2]))
67 |
68 | pt = []
69 | return delaunayTri
70 |
71 |
72 | def warpTriangle(img1, img2, t1, t2):
73 | # Find bounding rectangle for each triangle
74 | r1 = cv2.boundingRect(np.float32([t1]))
75 | r2 = cv2.boundingRect(np.float32([t2]))
76 |
77 | # Offset points by left top corner of the respective rectangles
78 | t1Rect = []
79 | t2Rect = []
80 | t2RectInt = []
81 |
82 | for i in range(0, 3):
83 | t1Rect.append(((t1[i][0] - r1[0]), (t1[i][1] - r1[1])))
84 | t2Rect.append(((t2[i][0] - r2[0]), (t2[i][1] - r2[1])))
85 | t2RectInt.append(((t2[i][0] - r2[0]), (t2[i][1] - r2[1])))
86 |
87 | # Get mask by filling triangle
88 | mask = np.zeros((r2[3], r2[2], 3), dtype=np.float32)
89 |     cv2.fillConvexPoly(mask, np.int32(t2RectInt), (1.0, 1.0, 1.0), 16, 0)
90 |
91 | # Apply warpImage to small rectangular patches
92 | img1Rect = img1[r1[1]:r1[1] + r1[3], r1[0]:r1[0] + r1[2]]
93 | # img2Rect = np.zeros((r2[3], r2[2]), dtype = img1Rect.dtype)
94 |
95 | size = (r2[2], r2[3])
96 |
97 | img2Rect = applyAffineTransform(img1Rect, t1Rect, t2Rect, size)
98 |
99 | img2Rect = img2Rect * mask
100 |
101 | # Copy triangular region of the rectangular patch to the output image
102 | img2[r2[1]:r2[1] + r2[3], r2[0]:r2[0] + r2[2]] = img2[r2[1]:r2[1] + r2[3], r2[0]:r2[0] + r2[2]] * (
103 | (1.0, 1.0, 1.0) - mask)
104 |
105 | img2[r2[1]:r2[1] + r2[3], r2[0]:r2[0] + r2[2]] = img2[r2[1]:r2[1] + r2[3], r2[0]:r2[0] + r2[2]] + img2Rect
106 |
107 |
108 | Path = "shape_predictor_68_face_landmarks.dat"
109 |
110 | fullPoints = list(range(0, 68))
111 | facePoints = list(range(17, 68))
112 |
113 | jawLinePoints = list(range(0, 17))
114 | rightEyebrowPoints = list(range(17, 22))
115 | leftEyebrowPoints = list(range(22, 27))
116 | nosePoints = list(range(27, 36))
117 | rightEyePoints = list(range(36, 42))
118 | leftEyePoints = list(range(42, 48))
119 | mouthOutlinePoints = list(range(48, 61))
120 | mouthInnerPoints = list(range(61, 68))
121 |
122 | detector = dlib.get_frontal_face_detector()
123 |
124 | predictor = dlib.shape_predictor(Path)
125 |
126 | video_capture = cv2.VideoCapture(0)
127 |
128 | while True:
129 | ret, frame = video_capture.read()
130 |
131 | if ret:
132 | grayFrame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
133 |
134 | detectedFaces = detector(grayFrame, 0)
135 |
136 | for i in range(0, len(detectedFaces) - 1, 2):
137 | faceA = detectedFaces[i]
138 | faceB = detectedFaces[i + 1]
139 | # x1 = face.left()
140 | # y1 = face.top()
141 | # x2 = face.right()
142 | # y2 = face.bottom()
143 |
144 | points1 = [(int(p.x), int(p.y)) for p in predictor(frame, faceA).parts()]
145 | points2 = [(int(p.x), int(p.y)) for p in predictor(frame, faceB).parts()]
146 | img1 = np.copy(frame)
147 | img2 = np.copy(frame)
148 | img1Warped = np.copy(img2)
149 | hull1 = []
150 | hull2 = []
151 |
152 | hullIndex = cv2.convexHull(np.array(points2), returnPoints=False)
153 | for i in range(0, len(hullIndex)):
154 | hull1.append(points1[int(hullIndex[i])])
155 | hull2.append(points2[int(hullIndex[i])])
156 |
157 |             # Find the Delaunay triangulation for the convex hull points
158 | sizeImg2 = img2.shape
159 | rect = (0, 0, sizeImg2[1], sizeImg2[0])
160 | dt = calculateDelaunayTriangles(rect, hull2)
161 | if len(dt) == 0:
162 | quit()
163 | # Apply affine transformation to Delaunay triangles
164 | for i in range(0, len(dt)):
165 | t1 = []
166 | t2 = []
167 |
168 | # get points for img1, img2 corresponding to the triangles
169 | for j in range(0, 3):
170 | t1.append(hull1[dt[i][j]])
171 | t2.append(hull2[dt[i][j]])
172 |
173 | warpTriangle(img1, img1Warped, t1, t2)
174 |
175 | # Calculate Mask
176 | hull8U = []
177 | for i in range(0, len(hull2)):
178 | hull8U.append((hull2[i][0], hull2[i][1]))
179 |
180 | mask = np.zeros(img2.shape, dtype=img2.dtype)
181 |
182 | cv2.fillConvexPoly(mask, np.int32(hull8U), (255, 255, 255))
183 |
184 | r = cv2.boundingRect(np.float32([hull2]))
185 |
186 | center = ((r[0] + int(r[2] / 2), r[1] + int(r[3] / 2)))
187 |
188 | # Clone seamlessly.
189 | output = cv2.seamlessClone(np.uint8(img1Warped), img2, mask, center, cv2.NORMAL_CLONE)
190 |
191 | points1, points2 = points2, points1
192 | img1 = np.copy(frame)
193 | img2 = np.copy(frame)
194 | img1Warped = np.copy(img2)
195 | hull1 = []
196 | hull2 = []
197 |
198 | hullIndex = cv2.convexHull(np.array(points2), returnPoints=False)
199 | for i in range(0, len(hullIndex)):
200 | hull1.append(points1[int(hullIndex[i])])
201 | hull2.append(points2[int(hullIndex[i])])
202 |
203 | # Find Delaunay triangulation for the convex hull points
204 | sizeImg2 = img2.shape
205 | rect = (0, 0, sizeImg2[1], sizeImg2[0])
206 | dt = calculateDelaunayTriangles(rect, hull2)
207 | if len(dt) == 0:
208 | quit()
209 | # Apply affine transformation to Delaunay triangles
210 | for i in range(0, len(dt)):
211 | t1 = []
212 | t2 = []
213 |
214 | # get points for img1, img2 corresponding to the triangles
215 | for j in range(0, 3):
216 | t1.append(hull1[dt[i][j]])
217 | t2.append(hull2[dt[i][j]])
218 |
219 | warpTriangle(img1, img1Warped, t1, t2)
220 |
221 | # Calculate Mask
222 | hull8U = []
223 | for i in range(0, len(hull2)):
224 | hull8U.append((hull2[i][0], hull2[i][1]))
225 |
226 | mask = np.zeros(img2.shape, dtype=img2.dtype)
227 |
228 | cv2.fillConvexPoly(mask, np.int32(hull8U), (255, 255, 255))
229 |
230 | r = cv2.boundingRect(np.float32([hull2]))
231 |
232 | center = ((r[0] + int(r[2] / 2), r[1] + int(r[3] / 2)))
233 |
234 | # Clone seamlessly.
235 | output = cv2.seamlessClone(np.uint8(img1Warped), output, mask, center, cv2.NORMAL_CLONE)
236 |
237 |
238 | cv2.imshow("Faces with Filter", output)
239 | if cv2.waitKey(1) & 0xFF == ord('q'):
240 | break
241 |
242 | video_capture.release()
243 | cv2.destroyAllWindows()
--------------------------------------------------------------------------------
/microbots/Sample/mlmodels/la_muse.mlmodel:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/scalability4all/voice-enabled-chatbot/d40c93607ed052f38b3eb52d6735069eeee5754c/microbots/Sample/mlmodels/la_muse.mlmodel
--------------------------------------------------------------------------------
/microbots/Sample/mlmodels/rain_princess.mlmodel:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/scalability4all/voice-enabled-chatbot/d40c93607ed052f38b3eb52d6735069eeee5754c/microbots/Sample/mlmodels/rain_princess.mlmodel
--------------------------------------------------------------------------------
/microbots/Sample/mlmodels/udnie.mlmodel:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/scalability4all/voice-enabled-chatbot/d40c93607ed052f38b3eb52d6735069eeee5754c/microbots/Sample/mlmodels/udnie.mlmodel
--------------------------------------------------------------------------------
/microbots/Sample/mlmodels/wave.mlmodel:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/scalability4all/voice-enabled-chatbot/d40c93607ed052f38b3eb52d6735069eeee5754c/microbots/Sample/mlmodels/wave.mlmodel
--------------------------------------------------------------------------------
/microbots/covid19/Readme_covid19.md:
--------------------------------------------------------------------------------
1 | # COVID-19
2 |
3 | ## API: A summary of new and total cases per country updated daily.
4 | Uses the free API at https://api.covid19api.com/summary.
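A minimal sketch of consuming the summary payload. The top-level `Global` and `Countries` keys and field names such as `NewConfirmed`/`TotalConfirmed` are taken from the API's public schema; the sample data below is made up for illustration:

```python
import json

# Hypothetical sample matching the shape returned by /summary
sample = json.loads("""
{
  "Global": {"NewConfirmed": 100, "TotalConfirmed": 5000},
  "Countries": [
    {"Country": "India", "NewConfirmed": 40, "TotalConfirmed": 2000},
    {"Country": "Brazil", "NewConfirmed": 60, "TotalConfirmed": 3000}
  ]
}
""")

# Per-country daily summary, as described above
for c in sample["Countries"]:
    print(f"{c['Country']}: {c['NewConfirmed']} new, {c['TotalConfirmed']} total")
```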
--------------------------------------------------------------------------------
/microbots/covid19/config.py:
--------------------------------------------------------------------------------
1 |
2 | # covid19_api from Postman
3 | # a summary of new and total cases per country updated daily
4 | endPointUrl = 'https://api.covid19api.com/summary'
5 |
--------------------------------------------------------------------------------
/microbots/covid19/covid19_summary.py:
--------------------------------------------------------------------------------
1 | import urllib.request
2 | import os.path
3 | import zipfile
4 | import pathlib
5 | import json
6 |
7 | print('COVID-19: a summary of new and total cases per country updated daily.')
8 |
9 | # Create a directory if it does not exist
10 | filesDir = 'files'
11 | if not os.path.exists(filesDir):
12 | os.makedirs(filesDir)
13 |
14 | # urllib.request provides functions for opening URLs (mostly HTTP).
15 | # urlretrieve downloads the contents of endPointUrl into jsonFileName.
16 | endPointUrl = 'https://api.covid19api.com/summary'
17 | jsonFileName = 'summary.json'
18 | zipFileName = 'summary.zip'
19 |
20 | # relativeJsonFilePath = os.path.join(filesPath, jsonFileName)
21 | relativeZipFilePath = os.path.join(filesDir, zipFileName)
22 |
23 | # Request and write Json file
24 | urllib.request.urlretrieve(endPointUrl, jsonFileName)
25 |
26 | # Create a zipped copy in the files folder
27 | zipObj = zipfile.ZipFile(relativeZipFilePath, mode='w')
28 | zipObj.write(jsonFileName)
29 | zipObj.close()
30 |
31 | # Delete json file
32 | os.remove(jsonFileName)
33 |
--------------------------------------------------------------------------------
/microbots/covid19/files/summary.zip:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/scalability4all/voice-enabled-chatbot/d40c93607ed052f38b3eb52d6735069eeee5754c/microbots/covid19/files/summary.zip
--------------------------------------------------------------------------------
/microbots/covid19/requirements.txt:
--------------------------------------------------------------------------------
1 | # covid19_summary.py uses only Python standard-library modules
2 | # (urllib, os.path, zipfile, pathlib, json), so no third-party
3 | # packages need to be installed.
--------------------------------------------------------------------------------
/microbots/signup/config.py:
--------------------------------------------------------------------------------
1 | from sqlalchemy import BigInteger, String
2 |
3 | # create a local postgresql database and server
4 | # Scheme : "postgres+psycopg2://:@:/"
5 | DATABASE_URI = ''
6 |
7 | table_name = 'register'
8 |
9 | # First column has to be primary key
10 | form_fields = [{'fname': 'name', 'ftype': String, },
11 | {'fname': 'email', 'ftype': String, 'null': False},
12 | {'fname': 'number', 'ftype': BigInteger, 'null': True}, ]
13 |
--------------------------------------------------------------------------------
/microbots/signup/crud.py:
--------------------------------------------------------------------------------
1 | import config
2 | import models
3 | from sqlalchemy.orm import sessionmaker
4 | from sqlalchemy import create_engine, inspect
5 |
6 | engine = create_engine(config.DATABASE_URI)
7 | Session = sessionmaker(bind=engine)
8 |
9 |
10 | class crudOps:
11 | def createTable(self):
12 | models.Base.metadata.create_all(engine)
13 |
14 | def recreate_database(self):
15 | models.Base.metadata.drop_all(engine)
16 | models.Base.metadata.create_all(engine)
17 |
18 | def insertRow(self, row):
19 | s = Session()
20 | s.add(row)
21 | s.commit()
22 | s.close()  # close this session (Session.close_all closes every session)
23 |
24 | def allRow(self):
25 | s = Session()
26 | data = s.query(models.Register).all()
27 | s.close()
28 | return data
29 |
30 | def primaryCol(self):
31 | s = Session()
32 | dataList = []
33 | data = s.query(models.Register.col1)
34 | for i in data:
35 | for j in i:
36 | dataList.append(j)
37 | s.close()
38 | return dataList
39 |
40 |
41 | a = crudOps()
42 |
43 | # Checking if register table present or not
44 | inspector = inspect(engine)
45 | table_present = False
46 | for table_name in inspector.get_table_names():
47 | if table_name == config.table_name:
48 | table_present = True
49 | if not table_present:
50 | a.createTable()
51 |
--------------------------------------------------------------------------------
/microbots/signup/models.py:
--------------------------------------------------------------------------------
1 | from sqlalchemy.ext.declarative import declarative_base
2 | from sqlalchemy import Column, BigInteger, String
3 | import config
4 |
5 | Base = declarative_base()
6 | formFields = config.form_fields
7 |
8 |
9 | class Register(Base):
10 | __tablename__ = config.table_name
11 | col1 = Column(formFields[0]['fname'], formFields[0]['ftype'], primary_key=True)
12 | col2 = Column(formFields[1]['fname'], formFields[1]['ftype'], nullable=formFields[1]['null'])
13 | col3 = Column(formFields[2]['fname'], formFields[2]['ftype'], nullable=formFields[2]['null'])
14 |
15 | def __repr__(self):
16 | return "Register(name ={}, email = {}, number = {})"\
17 | .format(self.col1, self.col2, self.col3)
18 |
--------------------------------------------------------------------------------
/microbots/signup/requirements.txt:
--------------------------------------------------------------------------------
1 | sqlalchemy
2 | pyttsx3
3 | psycopg2
4 |
--------------------------------------------------------------------------------
/microbots/signup/voiceSignup.py:
--------------------------------------------------------------------------------
1 | import config
2 | import models
3 | import pyttsx3
4 | import crud
5 |
6 |
7 | def speak(text):
8 | talk.say(text)
9 | talk.runAndWait()
10 |
11 |
12 | def checkIfNull(field, inp):
13 | while inp == "":
14 | print("{} should not be empty. Please enter a value".format(field))
15 | speak("{} should not be empty. Please enter a value".format(field))
16 | inp = input()
17 | return inp
18 |
19 |
20 | talk = pyttsx3.init()
21 | voices = talk.getProperty('voices')
22 | talk.setProperty('voice', voices[1].id)
23 | volume = talk.getProperty('volume')
24 | talk.setProperty('volume', 1.0)  # pyttsx3 volume ranges from 0.0 to 1.0
25 | rate = talk.getProperty('rate')
26 | talk.setProperty('rate', rate - 25)
27 | # creating object to insert data into the table
28 | obj = crud.crudOps()
29 | print("Welcome to Website")
30 | speak("Welcome to Website")
31 | print("Register")
32 | speak("Register")
33 | formFields = config.form_fields
34 | lenFields = len(formFields)
35 | tableRow = []
36 |
37 | # primary key
38 | print("Could you type a user{}".format(formFields[0]['fname']))
39 | speak("Could you type a user{}".format(formFields[0]['fname']))
40 | userfield = input()
41 |
42 | # check if it is null
43 | userfield = checkIfNull(formFields[0]['fname'], userfield)
44 |
45 | # Check if it already exists
46 | primarycol = obj.primaryCol()
47 | while userfield in primarycol:
48 | print("Sorry Username already exists type another username")
49 | speak("Sorry Username already exists type another username")
50 | userfield = input()
51 | tableRow.append(userfield)
52 |
53 | # For remaining form fields
54 | for i in range(1, lenFields):
55 | print("Could you type your {}".format(formFields[i]['fname']))
56 | speak("Could you type your {}".format(formFields[i]['fname']))
57 | userfield = input()
58 | # check if it is null
59 | if not formFields[i]['null']:
60 | userfield = checkIfNull(formFields[i]['fname'], userfield)
61 | tableRow.append(userfield)
62 |
63 | newRow = models.Register(
64 | col1=tableRow[0],
65 | col2=tableRow[1],
66 | col3=int(tableRow[2]),
67 | )
68 | obj.insertRow(newRow)
69 | getData = obj.allRow()
70 | print(getData)
71 |
--------------------------------------------------------------------------------
/microbots/splitwise/README.md:
--------------------------------------------------------------------------------
1 | # flask-splitwise-example
2 | A Flask application to show the usage of Splitwise SDK.
3 |
4 | ## Installation
5 | This application depends on Flask and Splitwise. Install these Python packages using the commands below:
6 |
7 | Install using pip :
8 |
9 | ```sh
10 | $ pip install flask
11 | $ pip install splitwise
12 | ```
13 |
14 | ## Register your application
15 |
16 | Register your application on [splitwise](https://secure.splitwise.com/oauth_clients) and get your consumer key and consumer secret.
17 |
18 | Use the following -
19 |
20 | Homepage URL - http://localhost:5000
21 |
22 | Callback URL - http://localhost:5000/authorize
23 |
24 | Make note of **Consumer Key** and **Consumer Secret**
25 |
26 | ## Set Configuration
27 |
28 | Open ```config.py``` and replace consumer_key and consumer_secret with the values you received when registering your application.
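For example, after registration ```config.py``` might look like this (placeholder values; the real keys come from the Splitwise registration page):

```python
# config.py -- fill in the values from your registered Splitwise application
consumer_key = 'your-consumer-key'
consumer_secret = 'your-consumer-secret'
```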
29 |
30 | ## Run the application
31 |
32 | Go to the cloned repository and type
33 |
34 | ```sh
35 | python app.py
36 | ```
37 |
38 | Go to http://localhost:5000/ in your browser and you will see the application running.
39 |
--------------------------------------------------------------------------------
/microbots/splitwise/app.py:
--------------------------------------------------------------------------------
1 | from flask import Flask, render_template, redirect, session, url_for, request
2 | from splitwise import Splitwise
3 | import config as Config
4 |
5 | app = Flask(__name__)
6 | app.secret_key = "test_secret_key"
7 |
8 |
9 | @app.route("/")
10 | def home():
11 | if 'access_token' in session:
12 | return redirect(url_for("friends"))
13 | return render_template("home.html")
14 |
15 |
16 | @app.route("/login")
17 | def login():
18 |
19 | sObj = Splitwise(Config.consumer_key, Config.consumer_secret)
20 | url, secret = sObj.getAuthorizeURL()
21 | session['secret'] = secret
22 | return redirect(url)
23 |
24 |
25 | @app.route("/authorize")
26 | def authorize():
27 |
28 | if 'secret' not in session:
29 | return redirect(url_for("home"))
30 |
31 | oauth_token = request.args.get('oauth_token')
32 | oauth_verifier = request.args.get('oauth_verifier')
33 |
34 | sObj = Splitwise(Config.consumer_key, Config.consumer_secret)
35 | access_token = sObj.getAccessToken(oauth_token, session['secret'], oauth_verifier)
36 | session['access_token'] = access_token
37 |
38 | return redirect(url_for("friends"))
39 |
40 |
41 | @app.route("/friends")
42 | def friends():
43 | if 'access_token' not in session:
44 | return redirect(url_for("home"))
45 |
46 | sObj = Splitwise(Config.consumer_key, Config.consumer_secret)
47 | sObj.setAccessToken(session['access_token'])
48 |
49 | friends = sObj.getFriends()
50 | return render_template("friends.html", friends=friends)
51 |
52 |
53 | if __name__ == "__main__":
54 | app.run(threaded=True, debug=True)
55 |
--------------------------------------------------------------------------------
/microbots/splitwise/config.py:
--------------------------------------------------------------------------------
1 | consumer_key = ''
2 | consumer_secret = ''
3 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | pandas
2 | wikipedia
3 | SpeechRecognition
4 | pygame
5 | python3-weather-api
6 | pyowm
7 | pyaudio
8 | pyttsx3
9 | pyttsx
10 | nltk
11 | numpy
12 | pyjokes
13 | googletrans
14 |
--------------------------------------------------------------------------------
/requirements_windows.txt:
--------------------------------------------------------------------------------
1 | anaconda-client
2 | anaconda-navigator
3 | attrs
4 | backcall
5 | beautifulsoup4
6 | bleach
7 | certifi
8 | cffi
9 | chardet
10 | clyent
11 | colorama
12 | conda
13 | conda-build
14 | conda-package-handling
15 | cryptography
16 | decorator
17 | defusedxml
18 | entrypoints
19 | filelock
20 | glob2
21 | idna
22 | importlib-metadata
23 | ipykernel
24 | ipython
25 | ipython-genutils
26 | jedi
27 | Jinja2
28 | jsonschema
29 | jupyter-client
30 | jupyter-core
31 | libarchive-c
32 | MarkupSafe
33 | menuinst
34 | mistune
35 | nbconvert
36 | nbformat
37 | notebook
38 | nltk
39 | olefile
40 | pandocfilters
41 | parso
42 | pickleshare
43 | Pillow
44 | pkginfo
45 | prometheus-client
46 | prompt-toolkit
47 | psutil
48 | pycosat
49 | pycparser
50 | Pygments
51 | pyOpenSSL
52 | pyrsistent
53 | PySocks
54 | pyttsx3==2.71
55 | python-dateutil
56 | pytz
57 | pyowm
58 | pyaudio
59 | pywin32
60 | pypiwin32
61 | pywinpty
62 | PyYAML
63 | pyzmq
64 | QtPy
65 | requests
66 | ruamel-yaml
67 | Send2Trash
68 | SpeechRecognition
69 | sip
70 | six
71 | soupsieve
72 | terminado
73 | testpath
74 | tornado
75 | tqdm
76 | traitlets
77 | urllib3
78 | wikipedia
79 | wcwidth
80 | webencodings
81 | win-inet-pton
82 | wincertstore
83 | xmltodict
84 | zipp
85 | pyjokes
86 | googletrans
87 |
--------------------------------------------------------------------------------
/setup.cfg:
--------------------------------------------------------------------------------
1 | [metadata]
2 | name = voice-enabled-chatbot
3 | summary = Voice enabled chat bot
4 | description-file =
5 | README.md
6 |
7 | classifier =
8 | Intended Audience :: Information Technology
9 | Operating System :: POSIX :: Linux
10 | Programming Language :: Python
11 | Programming Language :: Python :: 3
12 | Programming Language :: Python :: 3.6
13 |
14 | [files]
15 | packages =
16 | microbots
17 |
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | import setuptools
2 |
3 | setuptools.setup(
4 | setup_requires=['pbr>=1.8'],
5 | pbr=True)
6 |
--------------------------------------------------------------------------------
/test-requirements.txt:
--------------------------------------------------------------------------------
1 | codecov
2 | nose-cov
3 | pycodestyle
4 |
--------------------------------------------------------------------------------
/tox.ini:
--------------------------------------------------------------------------------
1 | # tox (https://tox.readthedocs.io/) is a tool for running tests
2 | # in multiple virtualenvs. This configuration file will run the
3 | # test suite on all supported python versions. To use it, "pip install tox"
4 | # and then run "tox" from this directory.
5 |
6 | [tox]
7 | skipsdist = True
8 | envlist = py37,pep8
9 |
10 | [testenv]
11 | basepython = python3
12 | usedevelop = True
13 | install_command = pip install {opts} {packages}
14 | setenv =
15 | VIRTUAL_ENV={envdir}
16 | deps =
17 | -r{toxinidir}/requirements.txt
18 | -r{toxinidir}/test-requirements.txt
19 |
20 | [testenv:pep8]
21 | commands = pycodestyle {posargs}
22 |
23 | [testenv:nosetests]
24 | commands = nosetests --nocapture --with-cov --cov-config .coveragerc
25 |
26 | [pycodestyle]
27 | # E123, E125 skipped as they are invalid PEP-8.
28 | show-source = True
29 | ignore = W504,E402,E731,C406,E741,E231,E501,
30 | E222,W293,W391,W291,E265,E251,E303,E122,
31 | E221,E703,E111,E101,W191,E117,E302,E262,
32 | E261,E226
33 | max-line-length = 100
34 | exclude=.venv,.git,.tox,dist,doc,lib/python,*egg,build
35 |
--------------------------------------------------------------------------------
/voice_conf.py:
--------------------------------------------------------------------------------
1 | def getVoiceID(arg):
2 | return {
3 | "af": "afrikaans",
4 | "bg": "bulgarian",
5 | "ca": "catalan",
6 | "cs": "czech",
7 | "da": "danish",
8 | "de": "german",
9 | "el": "greek",
10 | "en": "english",
11 | "es": "spanish",
12 | "et": "estonian",
13 | "fa": "persian",
14 | "fi": "finnish",
15 | "fr": "french",
16 | "hi": "hindi",
17 | "hr": "croatian",
18 | "hu": "hungarian",
19 | "hy": "armenian",
20 | "id": "indonesian",
21 | "is": "icelandic",
22 | "it": "italian",
23 | "ka": "georgian",
24 | "kn": "kannada",
25 | "lt": "lithuanian",
26 | "lv": "latvian",
27 | "mk": "macedonian",
28 | "ml": "malayalam",
29 | "ms": "malay",
30 | "ne": "nepali",
31 | "nl": "dutch",
32 | "pa": "punjabi",
33 | "pl": "polish",
34 | "pt": "brazil",
35 | "ro": "romanian",
36 | "ru": "russian",
37 | "sk": "slovak",
38 | "sq": "albanian",
39 | "sr": "serbian",
40 | "sv": "swedish",
41 | "sw": "swahili-test",
42 | "ta": "tamil",
43 | "tr": "turkish",
44 | "vi": "vietnam"
45 | }.get(arg, "default")
46 |
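A quick sketch of the fallback behaviour used above: `dict.get(arg, "default")` returns the mapped voice name for a known language code and the literal string `"default"` otherwise (reduced mapping here, for illustration only):

```python
# Reduced version of the mapping above, for illustration only
voices = {"en": "english", "hi": "hindi", "fr": "french"}


def get_voice_id(arg):
    # Unknown codes fall back to the literal string "default"
    return voices.get(arg, "default")


print(get_voice_id("hi"))  # hindi
print(get_voice_id("xx"))  # default
```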
--------------------------------------------------------------------------------
/weather.txt:
--------------------------------------------------------------------------------
1 | hey, what is weather today ?
2 | weather outside ?
3 | weather
4 | temperature
5 | rainfall
6 | weather in mumbai
7 | weather in delhi
8 | weather in chandigarh
9 | weather in haryana
10 | will it rain today ?
11 | is it hot?
--------------------------------------------------------------------------------