├── Reposter
│   ├── __init__.py
│   ├── utils
│   │   ├── __init__.py
│   │   ├── logger.py
│   │   ├── configurer.py
│   │   └── information_parser.py
│   ├── FacebookReddit
│   │   ├── __init__.py
│   │   └── post_submitter.py
│   ├── RedditFacebook
│   │   ├── __init__.py
│   │   ├── core
│   │   │   ├── __init__.py
│   │   │   ├── post.py
│   │   │   └── subreddit.py
│   │   └── post_getter.py
│   └── reposter.py
├── .dockerignore
├── screenshots
│   ├── 0.gif
│   ├── 1.PNG
│   ├── 2.PNG
│   ├── 3.PNG
│   └── 4.PNG
├── index.py
├── docker-compose.yaml
├── requirements.txt
├── config.ini
├── schedule_index.py
├── log.json
├── Dockerfile-reposter
├── information.json
├── LICENSE.md
├── .gitignore
├── facebook_perm_token.py
└── README.md

/Reposter/__init__.py:
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
/Reposter/utils/__init__.py:
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
/.dockerignore:
--------------------------------------------------------------------------------
reposter-env/

--------------------------------------------------------------------------------
/Reposter/FacebookReddit/__init__.py:
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
/Reposter/RedditFacebook/__init__.py:
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
/Reposter/RedditFacebook/core/__init__.py:
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
/screenshots/0.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SlapBot/reposter/HEAD/screenshots/0.gif
--------------------------------------------------------------------------------
/screenshots/1.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SlapBot/reposter/HEAD/screenshots/1.PNG
--------------------------------------------------------------------------------
/screenshots/2.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SlapBot/reposter/HEAD/screenshots/2.PNG
--------------------------------------------------------------------------------
/screenshots/3.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SlapBot/reposter/HEAD/screenshots/3.PNG
--------------------------------------------------------------------------------
/screenshots/4.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/SlapBot/reposter/HEAD/screenshots/4.PNG
--------------------------------------------------------------------------------
/index.py:
--------------------------------------------------------------------------------
from Reposter.reposter import Reposter


r = Reposter()
r.main()
--------------------------------------------------------------------------------
/docker-compose.yaml:
--------------------------------------------------------------------------------
version: '3'
services:
  reposter:
    image: agent-reposter
    build:
      context: .
      dockerfile: Dockerfile-reposter
    volumes:
      - "./:/app"
--------------------------------------------------------------------------------
/Reposter/RedditFacebook/core/post.py:
--------------------------------------------------------------------------------
class Post:
    def __init__(self, post_id, title, url, type):
        self.id = post_id
        self.title = title
        self.url = url
        self.type = type
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
certifi==2018.4.16
chardet==3.0.4
facebook-sdk==2.0.0
idna==2.6
praw==5.1.0
prawcore==0.12.0
requests==2.20.0
update-checker==0.16
urllib3==1.24.2
schedule==0.6.0
--------------------------------------------------------------------------------
/config.ini:
--------------------------------------------------------------------------------
[REDDIT]
client_id =
client_secret =
username =
password =
user_agent = nice bots v0.1
allowed_domains = i.imgur.com i.redd.it

[STEPHANIE]
app_id =
app_secret =
short_token =

--------------------------------------------------------------------------------
/schedule_index.py:
--------------------------------------------------------------------------------
import time
import schedule
from Reposter.reposter import Reposter


def run(reposter: Reposter):
    reposter.main()


if __name__ == "__main__":
    r = Reposter()
    schedule.every(3).hours.do(run, r)

    while True:
        schedule.run_pending()
        time.sleep(1)
--------------------------------------------------------------------------------
/log.json:
--------------------------------------------------------------------------------
{
  "8yv0xp": {
    "url": "https://i.redd.it/vl31zcrxay911.jpg",
    "posted_at": "Sat Jul 14 18:00:06 2018",
    "title": "My chubby sea otter tattoo"
  },
  "9hq5kd": {
    "url": "https://i.imgur.com/NtXdluM.gifv",
    "posted_at": "Fri Sep 21 15:00:05 2018",
    "title": "i think they otter be friends for life"
  }
}
--------------------------------------------------------------------------------
/Dockerfile-reposter:
--------------------------------------------------------------------------------
# Dockerfile-reposter
# We simply inherit the Python 3 image. This image does
# not particularly care what OS runs underneath
FROM python:3.6
# Set an environment variable with the directory
# where we'll be running the app
ENV APP /app
# Create the directory and instruct Docker to operate
# from there from now on
RUN mkdir $APP
WORKDIR $APP
# Copy the requirements file in order to install
# Python dependencies
COPY requirements.txt .
# Install Python dependencies
RUN pip install -r requirements.txt
# Copy the rest of the codebase into the image
COPY . .

CMD [ "python", "./schedule_index.py" ]
--------------------------------------------------------------------------------
/information.json:
--------------------------------------------------------------------------------
{
  "leads": [
    {
      "facebook": {
        "name": "Earthian",
        "page_id": "1855714958090094",
        "token": "",
        "message": false
      },
      "subreddits": [
        {
          "name": "earthporn"
        },
        {
          "name": "earthporngifs"
        }
      ]
    },
    {
      "facebook": {
        "name": "Otterance",
        "page_id": "264385350745645",
        "token": "",
        "message": false
      },
      "subreddits": [
        {
          "name": "otters"
        }
      ]
    },
    {
      "facebook": {
        "name": "Stephanie Virtual Assistant",
        "page_id": "319363778501603",
        "token": "",
        "message": true
      },
      "subreddits": [
        {
          "name": "ProgrammerHumor"
        }
      ]
    }
  ]
}
--------------------------------------------------------------------------------
/LICENSE.md:
--------------------------------------------------------------------------------
The MIT License (MIT)

Copyright (c) 2017 Ujjwal Gupta

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
--------------------------------------------------------------------------------
/Reposter/utils/logger.py:
--------------------------------------------------------------------------------
import os
import json
import time


class Logger:
    def __init__(self, name="log.json"):
        self.filename = self.retrieve_filename(name)
        self.json_data = self.get_json_data(self.filename)
        self.ids = self.get_already_posted_ids(self.json_data)

    @staticmethod
    def retrieve_filename(name):
        filename = os.path.abspath(os.path.join(os.path.dirname(__file__),
                                                os.pardir, os.pardir, name))
        return filename

    @staticmethod
    def get_json_data(filename):
        # The context manager closes the file; no explicit close() is needed.
        with open(filename, "r") as j:
            json_data = json.load(j)
        return json_data

    @staticmethod
    def get_already_posted_ids(data):
        return data

    def log(self, post):
        self.ids[post.id] = {
            "title": post.title,
            "url": post.url,
            "posted_at": time.ctime()
        }

        with open(self.filename, "w") as j:
            json.dump(self.ids, j)
        return True


logger = Logger()
--------------------------------------------------------------------------------
/Reposter/RedditFacebook/post_getter.py:
--------------------------------------------------------------------------------
import praw
from Reposter.RedditFacebook.core.subreddit import SubReddit
from Reposter.utils.configurer import config


class PostGetter:
    def __init__(self, name):
        self.config = config
        self.reddit = praw.Reddit(
            client_id=self.config.get_configuration("client_id"),
            client_secret=self.config.get_configuration("client_secret"),
            username=self.config.get_configuration("username"),
            password=self.config.get_configuration("password"),
            user_agent=self.config.get_configuration("user_agent")
        )
        self.subreddit = SubReddit(self.reddit, name)
        self.posts = self.subreddit.get_posts()

    def get_post(self, content):
        for post in self.posts:
            if post.type == content:
                return post
        return False

    def get_posts(self):
        return self.subreddit.get_posts()

    def get_photo_post(self):
        return self.get_post(content="photo")

    def get_gif_post(self):
        return self.get_post(content="gif")

    def get_any_post(self):
        if len(self.posts) > 0:
            return self.posts[0]
        # Write your logic to get a specific post - say the most controversial? Highest reception? Most upvotes?
        return False
--------------------------------------------------------------------------------
/Reposter/utils/configurer.py:
--------------------------------------------------------------------------------
import os
import configparser


class Configurer:
    def __init__(self, filename="config.ini"):
        self.abs_filename = self.get_abs_filename(filename)
        self.config = configparser.ConfigParser()
        self.config.read(self.abs_filename)
        self.sections = self.config.sections()

    @staticmethod
    def get_abs_filename(filename):
        return os.path.abspath(os.path.join(os.path.dirname(__file__),
                                            os.pardir, os.pardir, filename))

    def get_configuration(self, key, section="REDDIT"):
        try:
            value = self.config[section][key]
        except KeyError:
            print("The API key '%s' is not provided in the config.ini file."
                  " Refer back to the docs to see how to add it." % key)
            return False
        if value:
            return value
        print("The API key '%s' is present in config.ini but empty."
              " Refer back to the docs to see how to fill it in; it is a one-line change." % key)
        return False

    def write_configuration(self, key, value, section="REDDIT"):
        self.config.set(section, key, value)
        with open(self.abs_filename, 'w') as configfile:
            self.config.write(configfile)
        return value


config = Configurer()
--------------------------------------------------------------------------------
/Reposter/reposter.py:
--------------------------------------------------------------------------------
from Reposter.RedditFacebook.post_getter import PostGetter
from Reposter.FacebookReddit.post_submitter import PostSubmitter
from Reposter.utils.information_parser import infoparser


class Reposter:
    def __init__(self):
        self.infoparser = infoparser

    def main(self):
        for job in self.infoparser.jobs:
            try:
                self.execute(job)
            except Exception as e:
                print(e)

    def execute(self, job, tries=0, max_tries=5):
        posts = self.get_posts(job.subreddits)
        tries += 1
        if not posts and tries < max_tries:
            return self.execute(job, tries)
        else:
            self.submit_posts(job.facebook, posts)

    @staticmethod
    def get_posts(subreddits):
        posts = []
        for subreddit in subreddits:
            pg = PostGetter(subreddit.name)
            post = pg.get_any_post()
            if post:
                print("Got my post as %s from %s subreddit." % (post.url, subreddit.name))
                posts.append(post)
            else:
                print("No new post found.")
        return posts

    @staticmethod
    def submit_posts(social_media, posts):
        ps = PostSubmitter(social_media.page_id, social_media.token, social_media.message)
        for post in posts:
            ps.submit_post(post)
            print("Posted my post as %s for %s page." % (post.title, social_media.name))
--------------------------------------------------------------------------------
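For a quick end-to-end check of the class above, for example to verify one page's Reddit and Facebook credentials before scheduling anything, a minimal sketch like the following can run just the first job from `information.json`. The file name `sanity_check.py` is hypothetical; this helper is not part of the repo:

```python
# sanity_check.py - hypothetical helper, not part of this repo.
# Runs only the first job defined in information.json, so one page's
# credentials can be verified before wiring up the full schedule.
from Reposter.reposter import Reposter

r = Reposter()
first_job = r.infoparser.jobs[0]  # Job objects are built by InformationParser
r.execute(first_job)              # fetch posts from Reddit, submit to Facebook
```
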
/Reposter/RedditFacebook/core/subreddit.py:
--------------------------------------------------------------------------------
from Reposter.RedditFacebook.core.post import Post
from Reposter.utils.logger import logger
from Reposter.utils.configurer import config


class SubReddit:
    def __init__(self, reddit, name):
        self.reddit = reddit
        self.config = config
        self.logger = logger
        self.subreddit = self.reddit.subreddit(name)
        self.allowed_domains = self.parse_allowed_domains()
        self.repost_ids = self.logger.ids

    def parse_allowed_domains(self):
        return self.config.get_configuration("allowed_domains").split(" ")

    def retrieve_submissions(self, limit=10):
        submissions = []
        for submission in self.subreddit.hot(limit=limit):
            if submission.id not in self.repost_ids:
                if submission.domain in self.allowed_domains:
                    submissions.append(submission)
        return submissions

    def retrieve_posts(self, submissions):
        posts = []
        for submission in submissions:
            posts.append(Post(post_id=submission.id, title=submission.title, url=submission.url,
                              type=self.guess_type(submission.url)))
        return posts

    def get_posts(self):
        submissions = self.retrieve_submissions()
        posts = self.retrieve_posts(submissions)
        return posts

    @staticmethod
    def guess_type(url):
        # imgur-style .gifv links are treated as gifs; everything else as photos
        if url.endswith("gifv"):
            return "gif"
        return "photo"
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
# Created by .ignore support plugin (hsz.mobi)
### Python template
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# dotenv
.env

# virtualenv
.venv
venv/
ENV/

# Spyder project settings
.spyderproject

# Rope project settings
.ropeproject

# Pycharm IDE
.idea

# Virtualenv
reposter-env/
--------------------------------------------------------------------------------
/Reposter/utils/information_parser.py:
--------------------------------------------------------------------------------
import os
import json


class InformationParser:
    def __init__(self, name="information.json"):
        self.filename = self.retrieve_filename(name)
        self.json_data = self.get_json_data()
        self.jobs = []
        self.place_jobs(self.json_data)

    @staticmethod
    def retrieve_filename(name):
        filename = os.path.abspath(os.path.join(os.path.dirname(__file__),
                                                os.pardir, os.pardir, name))
        return filename

    def get_json_data(self):
        with open(self.filename, "r") as j:
            json_data = json.load(j)
        return json_data

    def write_json_data(self, data):
        with open(self.filename, "w") as j:
            json.dump(data, j, indent=4)
        self.json_data = self.get_json_data()
        return True

    def place_jobs(self, json_data):
        for data in json_data['leads']:
            self.jobs.append(Job(data))


class Facebook:
    def __init__(self, data):
        self.page_id = data['page_id']
        self.name = data['name']
        self.token = data['token']
        self.message = data['message']


class Subreddit:
    def __init__(self, data):
        self.name = data['name']


class Job:
    def __init__(self, data):
        self.facebook = Facebook(data['facebook'])
        self.subreddits = []
        self.process_subreddits(data['subreddits'])

    def process_subreddits(self, subreddits):
        for subreddit in subreddits:
            self.subreddits.append(Subreddit(subreddit))


infoparser = InformationParser()
--------------------------------------------------------------------------------
/Reposter/FacebookReddit/post_submitter.py:
--------------------------------------------------------------------------------
import os
import urllib.request

import requests
import facebook
from Reposter.utils.logger import logger


class PostSubmitter:
    def __init__(self, page_id, token, message=False):
        self.page_id = page_id
        self.oauth_access_token = token
        self.message = message
        self.graph = facebook.GraphAPI(self.oauth_access_token)
        self.logger = logger

    def post(self, post, content):
        status = True
        post_title = None
        if self.message:
            post_title = post.title
        try:
            if content == "photo":
                self.graph.put_object(self.page_id, "photos",
                                      message=post_title,
                                      url=post.url)
            elif content == "gif":
                self.graph.put_object(self.page_id, "videos",
                                      message=post_title,
                                      url=self.parse_gif_url(post.url))
        except Exception as e:
            print(e)
            status = False
        if status:
            self.logger.log(post)
        return status

    def post_photo(self, post):
        return self.post(post, content="photo")

    def post_gif(self, post):
        gif_url = self.parse_gif_url(post.url)
        gif_filename = self.get_gif_file(gif_url)
        self.post_to_facebook(post, gif_filename)
        os.unlink(gif_filename)

    @staticmethod
    def parse_gif_url(url):
        # imgur serves .gifv pages; swapping the extension yields the raw .gif
        if url.endswith("gifv"):
            return url[:-4] + "gif"
        elif url.endswith("gif"):
            return url
        return False

    def get_gif_file(self, gif_url):
        response = urllib.request.urlopen(gif_url)
        data = response.read()  # a `bytes` object
        filename = self.save_gif_file(data)
        return filename

    @staticmethod
    def save_gif_file(data):
        filename = "tempfile.gif"
        with open(filename, 'wb') as gf:
            gf.write(data)
        return filename

    def post_to_facebook(self, post, gif_filename):
        url = 'https://graph-video.facebook.com/%s/videos?access_token=%s' % (self.page_id, self.oauth_access_token)
        # Open the file in a context manager so the handle is always closed
        # before post_gif unlinks the temp file.
        with open(gif_filename, 'rb') as gif_file:
            response = requests.post(url, files={'file': gif_file})
        print(response.text)
        self.logger.log(post)

    def submit_post(self, post):
        if post.type == "gif":
            self.post_gif(post)
        else:
            self.post_photo(post)
--------------------------------------------------------------------------------
/facebook_perm_token.py:
--------------------------------------------------------------------------------
import requests
from Reposter.utils.information_parser import infoparser
from Reposter.utils.configurer import config


# https://stackoverflow.com/a/28418469
# Make sure to ask for a page access token and request the publish_pages
# permission when getting the short-lived token.

# Get page access tokens
def get_access_token_by_page_id(accounts, page_id):
    for account in accounts:
        if account['id'] == page_id:
            return account['access_token']
    return False


def get_long_permanent_token(page_token):
    url = "https://graph.facebook.com/v6.0/oauth/access_token"
    params = {
        "grant_type": "fb_exchange_token",
        "client_id": app_id,
        "client_secret": app_secret,
        "fb_exchange_token": page_token
    }
    response = requests.get(url, params=params)
    if not response.ok:
        print(response.text)
        exit("Something bad happened")
    data = response.json()
    long_access_token = data['access_token']
    account_id_url = "https://graph.facebook.com/v6.0/me"
    response = requests.get(account_id_url, params={"access_token": long_access_token})
    if not response.ok:
        print(response.text)
        exit("Something bad happened")
    data = response.json()
    account_id = data['id']
    permanent_token_url = "https://graph.facebook.com/v6.0/%s/accounts" % account_id
    response = requests.get(permanent_token_url, params={"access_token": long_access_token})
    if not response.ok:
        print(response.text)
        exit("Something bad happened")
    return long_access_token


# Initialize parameters
page = "STEPHANIE"
app_id = config.get_configuration("app_id", page)
app_secret = config.get_configuration("app_secret", page)
short_token = config.get_configuration("short_token", page)  # https://developers.facebook.com/tools/explorer

# Get account id
print("Fetching account_id...")
account_id_url = "https://graph.facebook.com/v6.0/me"
response = requests.get(account_id_url, params={"access_token": short_token})
if not response.ok:
    print(response.text)
    exit("Something bad happened")
data = response.json()
account_id = data['id']
print(account_id)

print("Fetching page-access-tokens...")
response = requests.get("https://graph.facebook.com/{0}/accounts?access_token={1}".format(
    account_id, short_token
))
if not response.ok:
    print(response.text)
    exit("Something bad happened")
data = response.json()['data']

# Get defined pages from information.json
new_information_data = {'leads': []}
pages = infoparser.get_json_data()['leads']
for page in pages:
    if 'facebook' not in page:
        continue
    page_token = get_access_token_by_page_id(data, page['facebook']['page_id'])
    if page_token:
        long_permanent_token = get_long_permanent_token(page_token)
        new_information_data['leads'].append(
            {
                "facebook": {
                    "name": page['facebook']['name'],
                    "page_id": page['facebook']['page_id'],
                    "token": long_permanent_token,
                    "message": page['facebook']['message']
                },
                "subreddits": page['subreddits']
            }
        )

# Write to information.json
infoparser.write_json_data(new_information_data)
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# REPOSTER

A framework to manage, monitor, and deploy social-media marketing by re-posting content from one place to another.

- [Motivation](#motivation)
- [Features](#features)
- [Installation](#installation)
  - [Pre-Requisites](#pre-requisites)
  - [Application Setup](#application-setup)
    - [Docker Setup](#docker-setup)
    - [Native Setup](#native-setup)
  - [Authentication Setup](#authentication-setup)
- [Usage](#usage)
  - [Periodic-Posting](#periodic-posting)
    - [Application Scheduling](#application-scheduling)
    - [Crontab](#crontab)
  - [Advanced Setup](#advanced-setup)
- [Why?](#why)

## Motivation

I wanted a setup that could give me complete control over content creation in a programmable, automated way. With this framework, I can add as many content retrievers and submitters as I like and create any kind of dynamic, programmable workflow between them.

So, for example, I could have Reddit -> multiple Facebook pages, Twitter, Instagram pages, or Tumblr, Reddit, a custom website -> multiple Facebook pages, Twitter, etc., without really worrying about the underlying integration.

Consider it a free Zapier for social-media content management.

![](https://github.com/SlapBot/reposter/blob/master/screenshots/0.gif)

## Features

1. Declarative setup to control the submission of re-posts.
2. Out of the box support for Reddit -> Facebook re-posts.
3. Super simple API to add new social media handles for retrieval or submission of content.
4. Super clean logging with logic to ensure duplicate posts don't get posted.
5. Support for many-to-many re-posts across as many handles and pages as you want:
   - Like Reddit -> Facebook, Instagram, Twitter.
   - Like Tumblr -> Facebook, Reddit, HackerNews.
   - Reddit (multiple subreddits), Tumblr -> Facebook (multiple pages), Twitter (multiple handles).
6. Remember, it's just the boilerplate, allowing you to come up with any use-case and social media handle you can imagine.


## Installation

### Pre-requisites

1. Python3
2. pip
3. virtualenv

### Application Setup

#### Docker Setup
1. Run `docker-compose build` to build the docker-image.
   - Additional notes:
     - To run: `docker-compose up` after you're done with the authentication setup.
     - By default it runs an application scheduler: [Application Scheduling](#application-scheduling)
     - You can make it run the regular index setup by swapping the line `CMD [ "python", "./schedule_index.py" ]` in `Dockerfile-reposter` with `CMD [ "python", "./index.py" ]`
#### Native Setup
1. Clone the repository: `git clone https://github.com/slapbot/reposter`
2. `cd` into the directory: `cd reposter`
3. Create a virtualenv for python: `python -m venv reposter-env`
4. Activate the virtualenv:
   - Linux: `source reposter-env/bin/activate`
   - Windows: `source reposter-env/Scripts/activate`
5. Upgrade your pip to the latest version: `pip install --upgrade pip`
6. Install the application-level dependencies: `pip install -r requirements.txt`

### Authentication Setup

1. Create a Reddit bot and add in the credentials at `config.ini` under the `[REDDIT]` section.
   - `allowed_domains`: Enter the domains that you want to parse for content retrieval.
2. Add in your page information at `config.ini` as a section like `[STEPHANIE]` shown in the example.
   - Next, add in your credentials after creating an app at Facebook.
   - Fill in the `app_id` and `app_secret` from the Facebook console.
   - Fill in the `short_token`, which will act as the user-access-token, from the [Graph API Console](https://developers.facebook.com/tools/explorer)
   - Ensure that you give it these permissions:
   ```
   user_birthday
   email
   manage_pages
   pages_manage_cta
   pages_show_list
   publish_pages
   publish_to_groups
   public_profile
   ```
3. Write the workflow logic in the declarative JSON syntax at `information.json` as follows:
   - There is already an example to showcase how it works.
   - The `facebook` tag encapsulates all of the information needed for the Facebook workflow.
     - `name`: Name of the Facebook page.
     - `page_id`: ID of the Facebook page.
     - `token`: leave it blank for now.
     - `message`: whether you want to post the title retrieved from the posts (from Reddit or any other content aggregator) as the message.
   - The `subreddits` tag lets you list the subreddits you want to retrieve the posts from.
4. Run `python facebook_perm_token.py` to get a permanent page access token for each page, which will automatically be inserted into the `information.json` file.
5. Now you should be able to run `python index.py` to automate the process of re-posting from one social media handle to another.

## Usage
```
class Reposter:
    def __init__(self):
        self.infoparser = infoparser

    def main(self):
        for job in self.infoparser.jobs:
            try:
                self.execute(job)
            except Exception as e:
                print(e)

    def execute(self, job, tries=0, max_tries=5):
        posts = self.get_posts(job.subreddits)
        tries += 1
        if not posts and tries < max_tries:
            return self.execute(job, tries)
        else:
            self.submit_posts(job.facebook, posts)

    @staticmethod
    def get_posts(subreddits):
        posts = []
        for subreddit in subreddits:
            pg = PostGetter(subreddit.name)
            post = pg.get_any_post()
            if post:
                print("Got my post as %s from %s subreddit." % (post.url, subreddit.name))
                posts.append(post)
            else:
                print("No new post found.")
        return posts

    @staticmethod
    def submit_posts(social_media, posts):
        ps = PostSubmitter(social_media.page_id, social_media.token, social_media.message)
        for post in posts:
            ps.submit_post(post)
            print("Posted my post as %s for %s page." % (post.title, social_media.name))
```
### Periodic Posting

#### Application Scheduling
1. Go to `schedule_index.py` and add commands like:
   - `schedule.every(3).hours.do(run, r)` to run every 3 hours.
   - `schedule.every(70).minutes.do(run, r)` to run every 70 minutes.
2. Run `python schedule_index.py` instead of `index.py`.
3. The default entry creates Facebook posts every 3 hours retrieved from Reddit.

#### Crontab
1. Use a crontab to schedule your posts: `crontab -e`
2. Add an entry like `0 */3 * * * <path-to>/reposter/reposter-env/bin/python <path-to>/reposter/index.py >/dev/null 2>&1`
3. The above entry creates Facebook posts every 3 hours retrieved from Reddit.


### Advanced Setup

1. Throughout the code you can abstract a lot of entries out to `information.json`, like which posts to choose from Reddit (currently it's the top hot one).
2. Adding new social media handles for retrieval and submission is super easy (see the sketch after this README):
   - Create their own module and add the hooks at `information.json`.
   - Regulate their separate logic in a much more modular way.

## Why?

It's one of the first pieces of software I wrote (back in 2017), when I was introduced to Python and wanted to manage the social media handles of one of my
open-source projects - [Stephanie](https://github.com/slapbot/stephanie-va) - which was programmed to make automated posts on its
Facebook page at [Stephanie - Facebook Page](https://www.facebook.com/Stephanie.VA17/).

More pages using the program:

1. [Earthian](https://www.facebook.com/Earthian-1855714958090094/)
2. [Otterance](https://www.facebook.com/Otterance-264385350745645/)

So, in the process of checking out my previous project, I just thought to make it open-source for anyone to use.

It's super simple and doesn't require any fancy programming to get started with.
--------------------------------------------------------------------------------
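Following up on the Advanced Setup section of the README: a new content source only needs to produce `Post` objects and expose a `get_any_post()` method, mirroring `PostGetter`, for `Reposter.get_posts` to consume it. A minimal sketch follows; the class name `StaticGetter` and its input format are illustrative, not part of the repo:

```python
# Illustrative sketch of a new content retriever (not part of the repo).
# Any source - Tumblr, an RSS feed, a custom site - can plug into Reposter's
# flow as long as it yields Post objects and exposes get_any_post().
from Reposter.RedditFacebook.core.post import Post


class StaticGetter:
    def __init__(self, items):
        # items: iterable of (post_id, title, url) tuples from your source
        self.posts = [
            Post(post_id=i, title=t, url=u,
                 type="gif" if u.endswith(("gif", "gifv")) else "photo")
            for i, t, u in items
        ]

    def get_any_post(self):
        # Mirrors PostGetter.get_any_post: first post, or False when empty
        return self.posts[0] if self.posts else False
```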