├── requirements.txt ├── Config.py ├── LICENSE ├── .gitignore ├── README.md ├── msg_monitor.py └── reddit_response.py /requirements.txt: -------------------------------------------------------------------------------- 1 | beautifulsoup4>=4.5 2 | praw>=5.2.0 3 | selenium>=3.6.0 4 | -------------------------------------------------------------------------------- /Config.py: -------------------------------------------------------------------------------- 1 | cid = 'ID goes here' 2 | secret = 'Secret goes here' 3 | agent = 'python:(name):(version) (by (author))' 4 | user = 'bot username' 5 | password = 'bot password' 6 | subreddit = 'subreddit' 7 | blacklisted_devs = [] 8 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2017 Alex Singleton 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | env/ 12 | build/ 13 | develop-eggs/ 14 | dist/ 15 | downloads/ 16 | eggs/ 17 | .eggs/ 18 | lib/ 19 | lib64/ 20 | parts/ 21 | sdist/ 22 | var/ 23 | wheels/ 24 | *.egg-info/ 25 | .installed.cfg 26 | *.egg 27 | 28 | # PyInstaller 29 | # Usually these files are written by a python script from a template 30 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 31 | *.manifest 32 | *.spec 33 | 34 | # Installer logs 35 | pip-log.txt 36 | pip-delete-this-directory.txt 37 | 38 | # Unit test / coverage reports 39 | htmlcov/ 40 | .tox/ 41 | .coverage 42 | .coverage.* 43 | .cache 44 | nosetests.xml 45 | coverage.xml 46 | *.cover 47 | .hypothesis/ 48 | 49 | # Translations 50 | *.mo 51 | *.pot 52 | 53 | # Django stuff: 54 | *.log 55 | local_settings.py 56 | 57 | # Flask stuff: 58 | instance/ 59 | .webassets-cache 60 | 61 | # Scrapy stuff: 62 | .scrapy 63 | 64 | # Sphinx documentation 65 | docs/_build/ 66 | 67 | # PyBuilder 68 | target/ 69 | 70 | # Jupyter Notebook 71 | .ipynb_checkpoints 72 | 73 | # pyenv 74 | .python-version 75 | 76 | # celery beat schedule file 77 | celerybeat-schedule 78 | 79 | # SageMath parsed files 80 | *.sage.py 81 | 82 | # dotenv 83 | .env 84 | 85 | # virtualenv 86 | .venv 87 | venv/ 88 | ENV/ 89 | 90 | # Spyder project settings 91 | .spyderproject 92 | .spyproject 93 | 94 | # Rope project settings 95 | .ropeproject 96 | 97 | # mkdocs documentation 98 | /site 99 | 100 | # mypy 101 | .mypy_cache/ 102 | 103 | .vscode/ -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | #### The Google Play Deals Bot does a few things for the /r/GooglePlayDeals subreddit: 2 | 3 | * Responds to all app submissions on /r/GooglePlayDeals with information about the app found on the store page 4 | * Flairs deals when they are expired when users reply with "expired" 5 | * Flairs deals with a new or popular tag based on the number of dowloads (New is <1,000 and popular is >10,000, and a 4 star rating) 6 | * Can also respond to self text posts that contain links to more than 1 app. 7 | 8 | #### These are the current features but here are a few more I would like to implement: 9 | 10 | * Logging (errors) - Sort of implemented 11 | 12 | ### General workflow: 13 | 14 | * reddit_response.py 15 | 1. Monitors subreddit for new posts 16 | 2. Scrapes the google play link for information using bs4 and Selenium 17 | 3. Reply to post and add flair 18 | 19 | * msg_monitor.py 20 | 1. Monitors inbox for unread comment replies 21 | 2. Replies to comment, checks if it has expired, and flairs post 22 | 3. Marks comment as read and moves on 23 | 24 | ### Needed modules 25 | 26 | * PRAW 5.2.0+ 27 | * bs4 28 | * selenium 29 | * Firefox browser and selenium drivfer 30 | 31 | ### Example response: 32 | >Info for Lucid Launcher Pro: 33 | >Current price (USD): $1.49 was $2.99 34 | >Developer: Lucid Dev Team 35 | >Rating: 4.4/5 36 | >Installs: 10,000+ 37 | >Size: 3.6M 38 | >Last updated: January 23, 2019 39 | >Contains IAPs: No 40 | >Contains Ads: No 41 | >Short description: 42 | 43 | >>Lucid Launcher Pro unlocks various features for Lucid Launcher and will also receive updates earlier than the free version. If you want to request a feature please request at our Google+ page or Contact us via E-mail. 44 | >>Pro Version Unlocks: 45 | >>★Custom Search Text (Look at screenshots) 46 | >>★Ability to hide app label in favorites bar 47 | >>★More Page Transition Animations 48 | >>★Vertical Page Transitions 49 | >>★More Home Pages 50 | >>★Custom Sidebar Theme 51 | 52 | >(footer) 53 | 54 | #### Other stuff 55 | 56 | In order to run the bot, you need to fill out the Config.py file but currently we don't need another version of the bot running around so that probably won't matter. I'm uploading this to github for better version control and also because a few users have wanted to view the source of the bot. Any help would be appreciated if you want to contribute to the project. 57 | 58 | **Feel free to submit issues for ideas you have or issues with the bot!** 59 | -------------------------------------------------------------------------------- /msg_monitor.py: -------------------------------------------------------------------------------- 1 | import time 2 | import praw 3 | import prawcore 4 | import Config 5 | footer = "\n\n*****\n\n^^^[Source](https://github.com/a-ton/gpd-bot) ^^^| ^^^[Suggestions?](https://www.reddit.com/message/compose?to=Swimmer249)" 6 | reddit = praw.Reddit(client_id=Config.cid, 7 | client_secret=Config.secret, 8 | password=Config.password, 9 | user_agent=Config.agent, 10 | username=Config.user) 11 | 12 | print("Monitoring inbox...") 13 | while True: 14 | try: 15 | for msg in reddit.inbox.stream(): 16 | # checks if bot has already replied (good if script has to restart) 17 | if isinstance(msg, praw.models.Comment): 18 | responded = False 19 | for comment in msg.refresh().replies: 20 | if comment.author.name == "GPDBot": 21 | responded = True 22 | break 23 | if responded: 24 | continue 25 | # checks if the message body contains "expired" or "oops" 26 | if isinstance(msg, praw.models.Comment): 27 | msg_text = msg.body.lower() 28 | oops = False 29 | expired = False 30 | try: 31 | if msg_text.index("oops") > -1: 32 | oops = True 33 | except ValueError: 34 | pass 35 | try: 36 | if msg_text.index("expired") > -1: 37 | expired = True 38 | except ValueError: 39 | pass 40 | reply_msg = "" 41 | if oops: 42 | msg.submission.mod.flair(text=None, css_class=None) 43 | print("unflairing... responded to: " + msg.author.name) 44 | reply_msg = "Flair removed." + footer 45 | elif expired: 46 | msg.submission.mod.flair(text='Deal Expired', css_class='expired') 47 | print("flairing... responded to: " + msg.author.name) 48 | reply_msg = "Deal marked as expired. Reply with \"oops\" if this is incorrect." + footer 49 | 50 | if reply_msg != "": 51 | msg.mark_read() 52 | msg.reply(body=reply_msg) 53 | except (prawcore.exceptions.RequestException, prawcore.exceptions.ResponseException): 54 | print ("Error connecting to reddit servers. Retrying in 5 minutes...") 55 | time.sleep(300) 56 | 57 | except praw.exceptions.APIException: 58 | print ("rate limited, wait 5 seconds") 59 | time.sleep(5) 60 | -------------------------------------------------------------------------------- /reddit_response.py: -------------------------------------------------------------------------------- 1 | import time 2 | import praw 3 | import prawcore 4 | import requests 5 | from bs4 import BeautifulSoup 6 | from selenium import webdriver 7 | from selenium.webdriver.common.by import By 8 | import Config 9 | import re 10 | import urllib 11 | import urllib.request 12 | reddit = praw.Reddit(client_id=Config.cid, 13 | client_secret=Config.secret, 14 | password=Config.password, 15 | user_agent=Config.agent, 16 | username=Config.user) 17 | subreddit = reddit.subreddit(Config.subreddit) 18 | 19 | class Error(Exception): 20 | """Base class""" 21 | pass 22 | 23 | class LinkError(Error): 24 | """Could not parse the URL""" 25 | pass 26 | 27 | class BlacklistedDev(Error): 28 | """This developer is blacklisted""" 29 | pass 30 | 31 | class AppInfo: 32 | # Get the name of the app from the store page 33 | def getName(self): 34 | return self.store_page.find("h1", class_="Fd93Bb").get_text() 35 | 36 | def get_downloads(self): 37 | for item in self.expanded_details: 38 | if "Downloads" in item.text: 39 | downloads = item.text[len("Downloads"):] 40 | return downloads 41 | return "Couldn't get downloads" 42 | 43 | def get_rating(self): 44 | try: 45 | rating = self.store_page.find("div", class_="TT9eCd").get_text() 46 | except AttributeError: 47 | return "NA " 48 | return rating[:-4] + "/5" 49 | 50 | def get_developer(self): 51 | dev = self.store_page.find("div", class_="Vbfug auoIOc") 52 | dev_url = dev.find("a").get("href") 53 | if dev.get_text() in Config.blacklisted_devs: 54 | raise BlacklistedDev 55 | return "[" + dev.get_text() + "]" + "(https://play.google.com" + dev_url + ")" 56 | 57 | def get_update_date(self): 58 | for item in self.expanded_details: 59 | if "Updated on" in item.text: 60 | return item.text[len("Updated on"):] 61 | return "Couldn't get update date" 62 | 63 | #def getSize(self): 64 | # return "Currently unavailable" 65 | 66 | def getPriceInfo(self): 67 | try: 68 | temp = self.store_page.find("span", class_="VfPpkd-vQzf8d").get_text() 69 | except TypeError: 70 | return "incorrect link" 71 | split_price = temp.split(" ") 72 | full_price = split_price[0] 73 | try: 74 | sale_price = split_price[1] 75 | except IndexError: 76 | sale_price = "Buy" 77 | if sale_price == "Buy": 78 | sale_price = full_price 79 | if sale_price == "Install": 80 | sale_price = "Free" 81 | if full_price == "Install": 82 | full_price = "Free" 83 | return sale_price + " was " + full_price 84 | 85 | 86 | def get_play_pass(self): 87 | play_pass = self.store_page.find("a", class_="pFise") 88 | if play_pass: 89 | return "\n**Included with Play Pass** " 90 | return "" 91 | 92 | def get_ads(self): 93 | check = self.store_page.findAll("span", class_="UIuSk") 94 | for item in check: 95 | if "Contains ads" in item.get_text(): 96 | return "Yes" 97 | return "No" 98 | 99 | def get_iap_info(self): 100 | iap_info = "No" 101 | check = self.store_page.findAll("span", class_="UIuSk") 102 | for item in check: 103 | if "In-app purchases" in item.get_text(): 104 | iap_info = "Yes" 105 | if iap_info == "Yes": 106 | for selenium_item in self.expanded_details: 107 | if "In-app purchases" in selenium_item.text: 108 | iap_info += ", " + selenium_item.text[len("In-app purchases"):] 109 | return iap_info 110 | 111 | def get_permissions(self): 112 | perm_list = "" 113 | 114 | for perm in self.expanded_permissions: 115 | perm_list += ", " 116 | if perm_list == ", ": 117 | perm_list = perm_list[:-2] 118 | perm_list += perm.text 119 | 120 | if perm_list == "": 121 | perm_list = "No permmissisons requested" 122 | 123 | return "Permissions: " + perm_list + " " 124 | 125 | def get_description(self): 126 | desc_strings = self.store_page.find("div", class_="bARER").stripped_strings 127 | desc = '' 128 | total_chars = 0 129 | total_lines = 0 130 | for string in desc_strings: 131 | desc += '>' + string + ' \n' 132 | total_chars += len(string) 133 | total_lines += 1 134 | if total_chars >= 400: 135 | break 136 | if total_lines >= 10: 137 | break 138 | return desc 139 | 140 | def __init__(self, submission, url): 141 | self.blacklist = False 142 | self.invalid = False 143 | self.submission = submission 144 | page = requests.get(url).text 145 | 146 | self.store_page = BeautifulSoup(page, "html.parser") 147 | self.name = self.getName() 148 | self.rating = self.get_rating() 149 | try: 150 | self.developer = self.get_developer() 151 | except BlacklistedDev: 152 | self.blacklist = True 153 | self.price_info = self.getPriceInfo() 154 | self.play_pass = self.get_play_pass() 155 | self.description = self.get_description() 156 | self.url = url 157 | self.ads = self.get_ads() 158 | 159 | self.selenium = webdriver.Firefox() 160 | self.selenium.get(url) 161 | time.sleep(5) 162 | details_button = self.selenium.find_element(By.XPATH, "/html/body/c-wiz[2]/div/div/div[1]/div/div[2]/div/div[1]/div[1]/c-wiz[3]/div/section/header/div/div[2]/button/i") 163 | details_button.click() 164 | time.sleep(1) 165 | 166 | self.expanded_details = self.selenium.find_elements(By.CLASS_NAME, "sMUprd") 167 | self.downloads = self.get_downloads() 168 | self.last_update = self.get_update_date() 169 | #self.size = self.getSize() 170 | self.iap_info = self.get_iap_info() 171 | 172 | permissions_button = self.selenium.find_element(By.CSS_SELECTOR, "span.TCqkTe") 173 | permissions_button.click() 174 | time.sleep(1) 175 | self.expanded_permissions = self.selenium.find_elements(By.CLASS_NAME, "aPeBBe") 176 | self.permissions = self.get_permissions() 177 | self.selenium.close() 178 | 179 | 180 | def flair(app_rating, num_installs, sub): 181 | inst = num_installs.split("+") 182 | if inst[0] == "Couldn't": 183 | return 184 | try: 185 | val = int(inst[0].replace(',', '')) 186 | except (ValueError): 187 | return 188 | if val <= 500: 189 | sub.mod.flair(text='New app', css_class=None) 190 | elif val >= 100000 and int(app_rating[0:1]) >= 4: 191 | sub.mod.flair(text= 'Popular app', css_class=None) 192 | 193 | # make an empty file for first run 194 | f = open("postids.txt","a+") 195 | f.close() 196 | def logID(postid): 197 | f = open("postids.txt","a+") 198 | f.write(postid + "\n") 199 | f.close() 200 | 201 | def respond(submission): 202 | footer = """ 203 | 204 | ***** 205 | 206 | ^^^[Source](https://github.com/a-ton/gpd-bot) 207 | ^^^| 208 | ^^^[Suggestions?](https://www.reddit.com/message/compose?to=Swimmer249)""" 209 | 210 | all_urls = [] 211 | if submission.is_self: 212 | all_urls = re.findall('(?:(?:https?):\/\/)?[\w/\-?=%.]+\.[\w/\-?=%.]+', submission.selftext) 213 | if len(all_urls) == 0: 214 | print("NO LINK FOUND skipping: " + submission.title) 215 | logID(submission.id) 216 | return 217 | else: 218 | all_urls.append(submission.url) 219 | 220 | # remove duplicate URLs 221 | unique_urls = [*set(all_urls)] 222 | 223 | # find apps that we can respond to 224 | valid_apps = [] 225 | required_url = ["http", "play.google"] 226 | disallowed_url = ["collection/cluster", "dev?id=", "store/search"] 227 | for url in unique_urls: 228 | # check if strings in required_url are part of url and if strings in disallowed_url do not exist in url 229 | if not all(x in url for x in required_url): 230 | continue 231 | if any(x in url for x in disallowed_url): 232 | continue 233 | app = AppInfo(submission, url) 234 | if app.blacklist: 235 | reply_text = "Sorry, deals from one or more of the developers in your post have been blacklisted. Here is the full list of blacklisted developers: https://www.reddit.com/r/googleplaydeals/wiki/blacklisted_devlopers" 236 | submission.mod.remove() 237 | submission.reply(body=reply_text).mod.distinguish() 238 | print("Removed (developer blacklist): " + submission.title) 239 | logID(submission.id) 240 | return 241 | if app.invalid: 242 | continue 243 | valid_apps.append(app) 244 | if len(valid_apps) >= 10: 245 | break 246 | 247 | if len(valid_apps) == 0: 248 | print("All invalid links, skipping: " + submission.title) 249 | logID(submission.id) 250 | return 251 | 252 | reply_text = "" 253 | 254 | if len(valid_apps) == 1: 255 | flair(app.rating, app.downloads, submission) 256 | 257 | reply_text = f"""Info for {app.name}: 258 | 259 | Current price (USD): {app.price_info} {app.play_pass} 260 | Developer: {app.developer} 261 | Rating: {app.rating} 262 | Installs: {app.downloads} 263 | Last updated: {app.last_update} 264 | Contains IAPs: {app.iap_info} 265 | Contains Ads: {app.ads} 266 | {app.permissions} 267 | Short description: 268 | 269 | {app.description} 270 | 271 | ***** 272 | 273 | If this deal has expired, please reply to this comment with \"expired\". ^^^Abuse ^^^will ^^^result ^^^in ^^^a ^^^ban.""" 274 | 275 | 276 | if len(valid_apps) > 1: 277 | reply_text = "" 278 | for app_num, app in enumerate(valid_apps): 279 | if app_num >= 10: 280 | break 281 | reply_text += f"Info for [{app.name}]({app.url}): Price (USD): {app.price_info} | Rating: {app.rating} | Installs: {app.downloads} | Updated: {app.last_update} | IAPs/Ads: {app.iap_info}/{app.ads}\n\n*****\n\n" 282 | if len(valid_apps) >= 10: 283 | reply_text += "...and more. Max of 10 apps reached.\n\n*****\n\n" 284 | reply_text += "If any of these deals have expired, please reply to this comment with \"expired\". ^^^Abuse ^^^will ^^^result ^^^in ^^^a ^^^ban." 285 | 286 | reply_text += footer 287 | submission.reply(body=reply_text) 288 | print("Replied to: " + submission.title) 289 | logID(submission.id) 290 | 291 | while True: 292 | try: 293 | print("Initializing bot...") 294 | for submission in subreddit.stream.submissions(): 295 | if submission.created < int(time.time()) - 86400: 296 | continue 297 | if submission.title[0:2].lower() == "[a" or submission.title[0:2].lower() == "[i" or submission.title[0:2].lower() == "[g": 298 | if submission.id in open('postids.txt').read(): 299 | continue 300 | for top_level_comment in submission.comments: 301 | try: 302 | if top_level_comment.author and top_level_comment.author.name == "GPDBot": 303 | logID(submission.id) 304 | break 305 | except AttributeError: 306 | pass 307 | else: # no break before, so no comment from GPDBot 308 | respond(submission) 309 | continue 310 | except (prawcore.exceptions.RequestException, prawcore.exceptions.ResponseException): 311 | print ("Error connecting to reddit servers. Retrying in 5 minutes...") 312 | time.sleep(300) 313 | 314 | except praw.exceptions.APIException: 315 | print ("Rate limited, waiting 5 seconds") 316 | time.sleep(5) 317 | --------------------------------------------------------------------------------