├── .github └── workflows │ └── pythonpublish.yml ├── .gitignore ├── LICENSE ├── README.md ├── images ├── ecgo.png ├── quickstart_fanspage.png ├── quickstart_group.png ├── 屏幕截图 2021-06-19 112621.png └── 屏幕截图 2021-06-19 152222.png ├── main.py ├── paser.py ├── requester.py ├── requirements.txt ├── sample ├── 20221013_Sample.ipynb ├── FansPages.ipynb ├── Group.ipynb └── data │ ├── PyConTaiwan.parquet │ └── corollacrossclub.parquet ├── setup.py ├── tests ├── test_facebook_crawler.py ├── test_page_parser.py ├── test_post_parser.py ├── test_requester.py └── test_utils.py └── utils.py /.github/workflows/pythonpublish.yml: -------------------------------------------------------------------------------- 1 | name: Upload Python Package 2 | 3 | on: 4 | release: 5 | types: [created] 6 | 7 | jobs: 8 | deploy: 9 | runs-on: ubuntu-latest 10 | steps: 11 | - uses: actions/checkout@v1 12 | - name: Set up Python 13 | uses: actions/setup-python@v1 14 | with: 15 | python-version: '3.x' 16 | - name: Install dependencies 17 | run: | 18 | python -m pip install --upgrade pip 19 | pip install setuptools wheel twine 20 | - name: Build and publish 21 | env: 22 | TWINE_USERNAME: ${{ secrets.PYPI_USERNAME }} 23 | TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }} 24 | run: | 25 | python setup.py sdist bdist_wheel 26 | twine upload dist/* 27 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | develop/ 2 | .ipynb_checkpoints/ 3 | .vscode/ 4 | *egg-info/ 5 | .idea/ 6 | venv2/ 7 | build/ 8 | dist/ 9 | __pycache__/ 10 | data/ -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2021 tlyu0419 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Facebook_Crawler 2 | [](https://pepy.tech/project/facebook-crawler) 3 | [](https://pepy.tech/project/facebook-crawler) 4 | [](https://pepy.tech/project/facebook-crawler) 5 | 6 | ## What's this? 7 | 8 | This python package aims to help people who need to collect and analyze the public Fanspages or Groups data from Facebook with ease and efficiency. 
9 | 
10 | Here are the three big points of this project: 
11 | 1. Private: You don't need to log in to your account. 
12 | 2. Easy: Just key in the link of the fanspage or group and the target date, and it will work. 
13 | 3. Efficient: It collects the data through the requests package directly instead of opening another browser. 
14 | 
15 | 
16 | 這個 Python 套件旨在幫助使用者輕鬆且快速的收集 Facebook 公開粉絲頁和公開社團的資料,藉以進行後續的分析。 
17 | 
18 | 以下是本專案的 3 個重點: 
19 | 1. 隱私: 不需要登入你個人的帳號密碼 
20 | 2. 簡單: 僅需輸入粉絲頁/社團的網址和停止的日期就可以開始執行程式 
21 | 3. 高效: 透過 requests 直接向伺服器請求資料,不需另外開啟一個新的瀏覽器 
22 | 
23 | ## Quickstart 
24 | ### Install 
25 | ```pip 
26 | pip install -U facebook-crawler 
27 | ``` 
28 | 
29 | ### Usage 
30 | - Facebook Fanspage 
31 | ```python 
32 | import facebook_crawler 
33 | pageurl = 'https://www.facebook.com/diudiu333' 
34 | facebook_crawler.Crawl_PagePosts(pageurl=pageurl, until_date='2021-01-01') 
35 | ``` 
36 |  
37 | 
38 | - Group 
39 | ```python 
40 | import facebook_crawler 
41 | groupurl = 'https://www.facebook.com/groups/pythontw' 
42 | facebook_crawler.Crawl_GroupPosts(groupurl, until_date='2021-01-01') 
43 | ``` 
44 |  
45 | 
46 | ## FAQ 
47 | - **How to get the comments or replies to the posts?** 
48 | > Please write an email to me and tell me your project goal. Thanks! 
49 | 
50 | - **How can I find out the post's link through the data?** 
51 | > You can add the string 'https://www.facebook.com/' in front of the POSTID to get the post link. For example, if the POSTID is 123456789, its link is 'https://www.facebook.com/123456789'. See the short example at the end of this README. 
52 | 
53 | - **Can I directly collect the data in a specific time period?** 
54 | > No! This is the same as the behavior when we browse Facebook: the crawler has to collect the data from the newest posts back to the older ones until it reaches the target date. 
55 | 
56 | ## License 
57 | [MIT License](https://github.com/TLYu0419/facebook_crawler/blob/main/LICENSE) 
58 | 
59 | ## Contribution 
60 | 
61 | [](https://payment.ecpay.com.tw/QuickCollect/PayData?GcM4iJGUeCvhY%2fdFqqQ%2bFAyf3uA10KRo%2fqzP4DWtVcw%3d) 
62 | 
63 | A donation is not required to use this package, but it would be great to have your support. Donating, starring or forking the project are all good ways to support me in maintaining and developing it. 
64 | 
65 | Thanks to these donors' kind help, this project can keep being maintained and developed. 
66 | 
67 | **贊助不是使用這個套件的必要條件**,但如能獲得你的支持我將會非常感謝。不論是贊助、給予星星或分享都是很好的支持方式,幫助我繼續維護和開發這個專案 
68 | 
69 | 由於這些捐助者慷慨的幫助,這個專案才得以持續維護和發展 
70 | - Universities 
71 | - [Department of Social Work, The Chinese University of Hong Kong(香港中文大學社會工作學系)](https://web.swk.cuhk.edu.hk/zh-tw/) 
72 | - [Graduate School of Curriculum and Instructional Communications Technology, National Taipei University of Education(國立台北教育大學課程與教學傳播科技研究所)](https://cict.ntue.edu.tw/?locale=zh_tw) 
73 | - [Department of Business Administration, Chung Hua University(中華大學企業管理學系)](https://ba.chu.edu.tw/?Lang=en) 
74 | 
75 | ## Contact Info 
76 | - Author: TENG-LIN YU 
77 | - Email: tlyu0419@gmail.com 
78 | - Facebook: https://www.facebook.com/tlyu0419 
79 | - PYPI: https://pypi.org/project/facebook-crawler/ 
80 | - Github: https://github.com/TLYu0419/facebook_crawler 
81 | 
82 | ## Log 
83 | - 0.0.28: Modularized the crawler function. 
84 | - 0.0.26 
85 |   1. Automatically changes the cookie after it expires to keep crawling data without changing IP. 
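## Example: Post links from the output

As mentioned in the FAQ above, a post link is simply 'https://www.facebook.com/' plus the POSTID. Here is a minimal sketch, assuming the dataframe returned by `Crawl_PagePosts` (which contains a `POSTID` column); the `POST_LINK` column name is only illustrative:

```python
import facebook_crawler

pageurl = 'https://www.facebook.com/diudiu333'
df = facebook_crawler.Crawl_PagePosts(pageurl=pageurl, until_date='2021-01-01')

# Prepend the Facebook domain to every POSTID to get a clickable link.
# 'POST_LINK' is just an illustrative column name, not something the crawler returns.
df['POST_LINK'] = 'https://www.facebook.com/' + df['POSTID'].astype(str)
print(df[['POSTID', 'POST_LINK']].head())
```

The returned dataframe may also already include a POST_URL column with each post's permalink.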
86 | 
--------------------------------------------------------------------------------
/images/ecgo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tlyu0419/facebook_crawler/82ec3ec46aa0a324252bbb6274b57cbf29c27e6e/images/ecgo.png
--------------------------------------------------------------------------------
/images/quickstart_fanspage.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tlyu0419/facebook_crawler/82ec3ec46aa0a324252bbb6274b57cbf29c27e6e/images/quickstart_fanspage.png
--------------------------------------------------------------------------------
/images/quickstart_group.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tlyu0419/facebook_crawler/82ec3ec46aa0a324252bbb6274b57cbf29c27e6e/images/quickstart_group.png
--------------------------------------------------------------------------------
/images/屏幕截图 2021-06-19 112621.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tlyu0419/facebook_crawler/82ec3ec46aa0a324252bbb6274b57cbf29c27e6e/images/屏幕截图 2021-06-19 112621.png
--------------------------------------------------------------------------------
/images/屏幕截图 2021-06-19 152222.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tlyu0419/facebook_crawler/82ec3ec46aa0a324252bbb6274b57cbf29c27e6e/images/屏幕截图 2021-06-19 152222.png
--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
1 | from paser import _parse_category, _parse_pagename, _parse_creation_time, _parse_pagetype, _parse_likes, _parse_docid, _parse_pageurl
2 | from paser import _parse_entryPoint, _parse_identifier, _parse_composite_nojs, _parse_composite_graphql, _parse_relatedpages, _parse_pageinfo
3 | from requester import _get_homepage, _get_posts, _get_headers
4 | from utils import _init_request_vars
5 | from bs4 import BeautifulSoup
6 | import os
7 | import re
8 | 
9 | import json
10 | import time
11 | from tqdm import tqdm
12 | 
13 | import pandas as pd
14 | import pickle
15 | 
16 | import datetime
17 | import warnings
18 | warnings.filterwarnings("ignore")
19 | 
20 | 
21 | def Crawl_PagePosts(pageurl, until_date='2018-01-01', cursor=''):
22 |     # Initialize request variables
23 |     df, cursor, max_date, break_times = _init_request_vars(cursor)
24 | 
25 |     # Get headers
26 |     headers = _get_headers(pageurl)
27 | 
28 |     # Get entryPoint, identifier and docid from the homepage response
29 |     homepage_response = _get_homepage(pageurl, headers)
30 |     entryPoint = _parse_entryPoint(homepage_response)
31 |     identifier = _parse_identifier(entryPoint, homepage_response)
32 |     docid = _parse_docid(entryPoint, homepage_response)
33 | 
34 |     # Keep crawling posts until reaching until_date
35 |     while max_date >= until_date:
36 |         try:
37 |             # Get posts by identifier, docid and entryPoint
38 |             resp = _get_posts(headers, identifier, entryPoint, docid, cursor)
39 |             if entryPoint == 'nojs':
40 |                 ndf, max_date, cursor = _parse_composite_nojs(resp)
41 |                 df.append(ndf)
42 |             else:
43 |                 ndf, max_date, cursor = _parse_composite_graphql(resp)
44 |                 df.append(ndf)
45 |             # Test
46 |             # print(ndf.shape[0])
47 |             break_times = 0
48 |         except:
49 |             # print(resp.json()[:3000])
50 |             try:
51 |                 if 
resp.json()['data']['node']['timeline_feed_units']['page_info']['has_next_page'] == False:
52 |                     print('All the posts of this page have been crawled!')
53 |                     break
54 |             except:
55 |                 pass
56 |             print('Break Times {}: Something went wrong with this request. Sleeping 20 seconds before sending the request again.'.format(
57 |                 break_times))
58 |             print('REQUEST LOG >> pageid: {}, docid: {}, cursor: {}'.format(
59 |                 identifier, docid, cursor))
60 |             print('RESPONSE LOG: ', resp.text[:3000])
61 |             print('================================================')
62 |             break_times += 1
63 | 
64 |             if break_times > 15:
65 |                 print('Please check whether your target page/group is already up to date.')
66 |                 print('If so, you can ignore this break-time message; if not, please change your Internet IP and run this crawler again.')
67 |                 break
68 | 
69 |             time.sleep(20)
70 |             # Get new headers
71 |             headers = _get_headers(pageurl)
72 | 
73 |     # Concatenate all dataframes
74 |     df = pd.concat(df, ignore_index=True)
75 |     df['UPDATETIME'] = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
76 |     return df
77 | 
78 | 
79 | def Crawl_GroupPosts(pageurl, until_date='2022-01-01'):
80 |     df = Crawl_PagePosts(pageurl, until_date)
81 |     return df
82 | 
83 | 
84 | def Crawl_RelatedPages(seedpages, rounds):
85 |     # init
86 |     df = pd.DataFrame(data=[], columns=['SOURCE', 'TARGET', 'ROUND'])
87 |     pageurls = list(set(seedpages))
88 |     crawled_list = list(set(df['SOURCE']))
89 |     headers = _get_headers(pageurls[0])
90 |     for i in range(rounds):
91 |         print('Round {} started at: {}!'.format(
92 |             i, datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')))
93 |         for pageurl in tqdm(pageurls):
94 |             if pageurl not in crawled_list:
95 |                 try:
96 |                     homepage_response = _get_homepage(
97 |                         pageurl=pageurl, headers=headers)
98 |                     if 'Sorry, something went wrong.' not in homepage_response.text:
99 |                         entryPoint = _parse_entryPoint(homepage_response)
100 |                         identifier = _parse_identifier(
101 |                             entryPoint, homepage_response)
102 |                         relatedpages = _parse_relatedpages(
103 |                             homepage_response, entryPoint, identifier)
104 |                         ndf = pd.DataFrame({'SOURCE': homepage_response.url,
105 |                                             'TARGET': relatedpages,
106 |                                             'ROUND': i})
107 |                         df = pd.concat([df, ndf], ignore_index=True)
108 |                 except:
109 |                     pass
110 |                     # print('ERROR: {}'.format(pageurl))
111 |         pageurls = list(set(df['TARGET']))
112 |         crawled_list = list(set(df['SOURCE']))
113 |     return df
114 | 
115 | 
116 | def Crawl_PageInfo(pagenum, pageurl):
117 |     break_times = 0
118 |     global headers
119 |     while True:
120 |         try:
121 |             homepage_response = _get_homepage(pageurl, headers)
122 |             pageinfo = _parse_pageinfo(homepage_response)
123 |             with open('data/pageinfo/' + str(pagenum) + '.pickle', "wb") as fp:
124 |                 pickle.dump(pageinfo, fp)
125 |             break
126 |         except:
127 |             break_times = break_times + 1
128 |             if break_times >= 5:
129 |                 break
130 |             time.sleep(5)
131 |             headers = _get_headers(pageurl=pageurl)
132 | 
133 | 
134 | if __name__ == '__main__':
135 | 
136 |     os.makedirs('data/', exist_ok=True)
137 |     # ===== fb_api_req_friendly_name: ProfileCometTimelineFeedRefetchQuery ====
138 |     pageurl = 'https://www.facebook.com/Gooaye'  # 股癌 Gooaye: 30.4萬追蹤,
139 |     pageurl = 'https://www.facebook.com/StockOldBull'  # 股海老牛: 16萬
140 |     pageurl = 'https://www.facebook.com/twherohan'
141 |     pageurl = 'https://www.facebook.com/diudiu333'
142 |     pageurl = 'https://www.facebook.com/chengwentsan'
143 |     pageurl = 'https://www.facebook.com/MaYingjeou'
144 |     pageurl = 'https://www.facebook.com/roberttawikofficial'
145 |     pageurl = 'https://www.facebook.com/NizamAbTitingan'
146 |     pageurl = 
'https://www.facebook.com/joebiden' 147 | 148 | # ==== fb_api_req_friendly_name: CometModernPageFeedPaginationQuery ==== 149 | pageurl = 'https://www.facebook.com/ebcmoney/' # 東森財經: 81萬追蹤 150 | pageurl = 'https://www.facebook.com/moneyweekly.tw/' # 理財周刊: 36.3萬 151 | pageurl = 'https://www.facebook.com/cmoneyapp/' # CMoney 理財寶: 84.2萬 152 | pageurl = 'https://www.facebook.com/emily0806' # 艾蜜莉-自由之路: 20.9萬追蹤 153 | pageurl = 'https://www.facebook.com/imoney889/' # 林恩如-飆股女王: 10.2萬 154 | pageurl = 'https://www.facebook.com/wealth1974/' # 財訊: 17.5萬 155 | pageurl = 'https://www.facebook.com/smart16888/' # 郭莉芳理財講堂: 1.6萬 156 | pageurl = 'https://www.facebook.com/smartmonthly/' # Smart 智富月刊: 52.6萬 157 | pageurl = 'https://www.facebook.com/ezmoney.tw/' # 統一投信: 1.5萬 158 | pageurl = 'https://www.facebook.com/MoneyMoneyMeg/' # Money錢: 20.7萬 159 | pageurl = 'https://www.facebook.com/imoneymagazine/' # iMoney 智富雜誌: 38萬 160 | pageurl = 'https://www.facebook.com/edigest/' # 經濟一週 EDigest: 36.2萬 161 | pageurl = 'https://www.facebook.com/BToday/' # 今周刊:107萬 162 | pageurl = 'https://www.facebook.com/GreenHornFans/' # 綠角財經筆記: 25萬 163 | pageurl = 'https://www.facebook.com/ec.ltn.tw/' # 自由時報財經頻道 42,656人在追蹤 164 | pageurl = 'https://www.facebook.com/MoneyDJ' # MoneyDJ理財資訊 141,302人在追蹤 165 | pageurl = 'https://www.facebook.com/YahooTWFinance/' # Yahoo奇摩股市理財 149,624人在追蹤 166 | pageurl = 'https://www.facebook.com/win3105' 167 | pageurl = 'https://www.facebook.com/Diss%E7%BA%8F%E7%B6%BF-111182238148502/' 168 | 169 | # fb_api_req_friendly_name: CometUFICommentsProviderQuery 170 | pageurl = 'https://www.facebook.com/anuetw/' # Anue鉅亨網財經新聞: 31.2萬追蹤 171 | pageurl = 'https://www.facebook.com/wealtholic/' # 投資癮 Wealtholic: 2.萬 172 | 173 | # fb_api_req_friendly_name: PresenceStatusProviderSubscription_ContactProfilesQuery 174 | # fb_api_req_friendly_name: GroupsCometFeedRegularStoriesPaginationQuery 175 | pageurl = 'https://www.facebook.com/groups/pythontw' 176 | pageurl = 'https://www.facebook.com/groups/corollacrossclub/' 177 | 178 | df = Crawl_PagePosts(pageurl, until_date='2022-08-10') 179 | # df = Crawl_RelatedPages(seedpages=pageurls, rounds=10) 180 | 181 | df = pd.read_csv( 182 | './data/relatedpages_edgetable.csv')[['SOURCE', 'TARGET', 'ROUND']] 183 | 184 | headers = _get_headers(pageurl=pageurl) 185 | 186 | for pagenum in tqdm(df['index']): 187 | try: 188 | Crawl_PageInfo(pagenum=pagenum, pageurl=df['pageurl'][pagenum]) 189 | except: 190 | pass 191 | 192 | homepage_response = _get_homepage(pageurl, headers) 193 | pageinfo = _parse_pageinfo(homepage_response) 194 | 195 | # 196 | import pandas as pd 197 | from main import Crawl_PagePosts 198 | pageurl = 'https://www.facebook.com/hatendhu' 199 | df = Crawl_PagePosts(pageurl, until_date='2014-11-01') 200 | df 201 | df.to_pickle('./data/20220926_hatendhu.pkl') 202 | -------------------------------------------------------------------------------- /paser.py: -------------------------------------------------------------------------------- 1 | import re 2 | import json 3 | import pandas as pd 4 | from bs4 import BeautifulSoup 5 | from utils import _extract_id, _init_request_vars 6 | import datetime 7 | from requester import _get_pageabout, _get_pagetransparency, _get_homepage, _get_posts, _get_headers 8 | import requests 9 | # Post-Paser 10 | 11 | 12 | def _parse_edgelist(resp): 13 | ''' 14 | Take edges from the response by graphql api 15 | ''' 16 | edges = [] 17 | try: 18 | edges = resp.json()['data']['node']['timeline_feed_units']['edges'] 19 | except: 20 | for data in 
resp.text.split('\r\n', -1): 21 | try: 22 | edges.append(json.loads(data)[ 23 | 'data']['node']['timeline_list_feed_units']['edges'][0]) 24 | except: 25 | edges.append(json.loads(data)['data']) 26 | return edges 27 | 28 | 29 | def _parse_edge(edge): 30 | ''' 31 | Parse edge to take informations, such as post name, id, message..., etc. 32 | ''' 33 | comet_sections = edge['node']['comet_sections'] 34 | # name 35 | name = comet_sections['context_layout']['story']['comet_sections']['actor_photo']['story']['actors'][0]['name'] 36 | 37 | # creation_time 38 | creation_time = comet_sections['context_layout']['story']['comet_sections']['metadata'][0]['story']['creation_time'] 39 | 40 | # message 41 | try: 42 | message = comet_sections['content']['story']['comet_sections']['message']['story']['message']['text'] 43 | except: 44 | try: 45 | message = comet_sections['content']['story']['comet_sections']['message_container']['story']['message']['text'] 46 | except: 47 | message = comet_sections['content']['story']['comet_sections']['message_container'] 48 | # postid 49 | postid = comet_sections['feedback']['story']['feedback_context'][ 50 | 'feedback_target_with_context']['ufi_renderer']['feedback']['subscription_target_id'] 51 | 52 | # actorid 53 | pageid = comet_sections['context_layout']['story']['comet_sections']['actor_photo']['story']['actors'][0]['id'] 54 | 55 | # comment_count 56 | comment_count = comet_sections['feedback']['story']['feedback_context'][ 57 | 'feedback_target_with_context']['ufi_renderer']['feedback']['comment_count']['total_count'] 58 | 59 | # reaction_count 60 | reaction_count = comet_sections['feedback']['story']['feedback_context']['feedback_target_with_context'][ 61 | 'ufi_renderer']['feedback']['comet_ufi_summary_and_actions_renderer']['feedback']['reaction_count']['count'] 62 | 63 | # share_count 64 | share_count = comet_sections['feedback']['story']['feedback_context']['feedback_target_with_context'][ 65 | 'ufi_renderer']['feedback']['comet_ufi_summary_and_actions_renderer']['feedback']['share_count']['count'] 66 | 67 | # toplevel_comment_count 68 | toplevel_comment_count = comet_sections['feedback']['story']['feedback_context'][ 69 | 'feedback_target_with_context']['ufi_renderer']['feedback']['toplevel_comment_count']['count'] 70 | 71 | # top_reactions 72 | top_reactions = comet_sections['feedback']['story']['feedback_context']['feedback_target_with_context']['ufi_renderer'][ 73 | 'feedback']['comet_ufi_summary_and_actions_renderer']['feedback']['cannot_see_top_custom_reactions']['top_reactions']['edges'] 74 | 75 | # comet_footer_renderer for link 76 | try: 77 | comet_footer_renderer = comet_sections['content']['story']['attachments'][0]['comet_footer_renderer'] 78 | # attachment_title 79 | attachment_title = comet_footer_renderer['attachment']['title_with_entities']['text'] 80 | # attachment_description 81 | attachment_description = comet_footer_renderer['attachment']['description']['text'] 82 | except: 83 | attachment_title = '' 84 | attachment_description = '' 85 | 86 | # all_subattachments for photos 87 | try: 88 | try: 89 | media = comet_sections['content']['story']['attachments'][0]['styles']['attachment']['all_subattachments']['nodes'] 90 | attachments_photos = ', '.join( 91 | [image['media']['viewer_image']['uri'] for image in media]) 92 | except: 93 | media = comet_sections['content']['story']['attachments'][0]['styles']['attachment'] 94 | attachments_photos = media['media']['photo_image']['uri'] 95 | except: 96 | attachments_photos = '' 97 | 98 | # cursor 99 | 
cursor = edge['cursor'] 100 | 101 | # actor url 102 | actor_url = comet_sections['context_layout']['story']['comet_sections']['actor_photo']['story']['actors'][0]['url'] 103 | 104 | # post url 105 | post_url = comet_sections['content']['story']['wwwURL'] 106 | 107 | return [name, pageid, postid, creation_time, message, reaction_count, comment_count, toplevel_comment_count, share_count, top_reactions, attachment_title, attachment_description, attachments_photos, cursor, actor_url, post_url] 108 | 109 | 110 | def _parse_domops(resp): 111 | ''' 112 | Take name, data id, time , message and page link from domops 113 | ''' 114 | data = re.sub(r'for \(;;\);', '', resp.text) 115 | data = json.loads(data) 116 | domops = data['domops'][0][3]['__html'] 117 | cursor = re.findall( 118 | 'timeline_cursor%22%3A%22(.*?)%22%2C%22timeline_section_cursor', domops)[0] 119 | content_list = [] 120 | soup = BeautifulSoup(domops, 'lxml') 121 | 122 | for content in soup.findAll('div', {'class': 'userContentWrapper'}): 123 | # name 124 | name = content.find('img')['aria-label'] 125 | # id 126 | dataid = content.find('div', {'data-testid': 'story-subtitle'})['id'] 127 | # actorid 128 | pageid = _extract_id(dataid, 0) 129 | # postid 130 | postid = _extract_id(dataid, 1) 131 | # time 132 | time = content.find('abbr')['data-utime'] 133 | # message 134 | message = content.find('div', {'data-testid': 'post_message'}) 135 | if message == None: 136 | message = '' 137 | else: 138 | if len(message.findAll('p')) >= 1: 139 | message = ''.join(p.text for p in message.findAll('p')) 140 | elif len(message.select('span > span')) >= 2: 141 | message = message.find('span').text 142 | 143 | # attachment_title 144 | try: 145 | attachment_title = content.find( 146 | 'a', {'data-lynx-mode': 'hover'})['aria-label'] 147 | except: 148 | attachment_title = '' 149 | # attachment_description 150 | try: 151 | attachment_description = content.find( 152 | 'a', {'data-lynx-mode': 'hover'}).text 153 | except: 154 | attachment_description = '' 155 | # actor_url 156 | actor_url = content.find('a')['href'].split('?')[0] 157 | 158 | # post_url 159 | post_url = 'https://www.facebook.com/' + postid 160 | content_list.append([name, pageid, postid, time, message, attachment_title, 161 | attachment_description, cursor, actor_url, post_url]) 162 | return content_list, cursor 163 | 164 | 165 | def _parse_jsmods(resp): 166 | ''' 167 | Take postid, pageid, comment count , reaction count, sharecount, reactions and display_comments_count from jsmods 168 | ''' 169 | data = re.sub(r'for \(;;\);', '', resp.text) 170 | data = json.loads(data) 171 | jsmods = data['jsmods'] 172 | 173 | requires_list = [] 174 | for requires in jsmods['pre_display_requires']: 175 | try: 176 | feedback = requires[3][1]['__bbox']['result']['data']['feedback'] 177 | # subscription_target_id ==> postid 178 | subscription_target_id = feedback['subscription_target_id'] 179 | # owning_profile_id ==> pageid 180 | owning_profile_id = feedback['owning_profile']['id'] 181 | # comment_count 182 | comment_count = feedback['comment_count']['total_count'] 183 | # reaction_count 184 | reaction_count = feedback['reaction_count']['count'] 185 | # share_count 186 | share_count = feedback['share_count']['count'] 187 | # top_reactions 188 | top_reactions = feedback['top_reactions']['edges'] 189 | # display_comments_count 190 | display_comments_count = feedback['display_comments_count']['count'] 191 | 192 | # append data to list 193 | requires_list.append([subscription_target_id, owning_profile_id, 
comment_count, 194 | reaction_count, share_count, top_reactions, display_comments_count]) 195 | except: 196 | pass 197 | 198 | # reactions--video posts 199 | for requires in jsmods['require']: 200 | try: 201 | # entidentifier ==> postid 202 | entidentifier = requires[3][2]['feedbacktarget']['entidentifier'] 203 | # pageid 204 | actorid = requires[3][2]['feedbacktarget']['actorid'] 205 | # comment count 206 | commentcount = requires[3][2]['feedbacktarget']['commentcount'] 207 | # reaction count 208 | likecount = requires[3][2]['feedbacktarget']['likecount'] 209 | # sharecount 210 | sharecount = requires[3][2]['feedbacktarget']['sharecount'] 211 | # reactions 212 | reactions = [] 213 | # display_comments_count 214 | commentcount = requires[3][2]['feedbacktarget']['commentcount'] 215 | 216 | # append data to list 217 | requires_list.append( 218 | [entidentifier, actorid, commentcount, likecount, sharecount, reactions, commentcount]) 219 | except: 220 | pass 221 | return requires_list 222 | 223 | 224 | def _parse_composite_graphql(resp): 225 | edges = _parse_edgelist(resp) 226 | df = [] 227 | for edge in edges: 228 | try: 229 | ndf = _parse_edge(edge) 230 | df.append(ndf) 231 | except: 232 | pass 233 | df = pd.DataFrame(df, columns=['NAME', 'PAGEID', 'POSTID', 'TIME', 'MESSAGE', 'REACTIONCOUNT', 'COMMENTCOUNT', 'DISPLAYCOMMENTCOUNT', 234 | 'SHARECOUNT', 'REACTIONS', 'ATTACHMENT_TITLE', 'ATTACHMENT_DESCRIPTION', 'ATTACHMENT_PHOTOS', 'CURSOR', 'ACTOR_URL', 'POST_URL']) 235 | df = df[['NAME', 'PAGEID', 'POSTID', 'TIME', 'MESSAGE', 'ATTACHMENT_TITLE', 'ATTACHMENT_DESCRIPTION', 'ATTACHMENT_PHOTOS', 'REACTIONCOUNT', 236 | 'COMMENTCOUNT', 'DISPLAYCOMMENTCOUNT', 'SHARECOUNT', 'REACTIONS', 'CURSOR', 'ACTOR_URL', 'POST_URL']] 237 | cursor = df['CURSOR'].to_list()[-1] 238 | df['TIME'] = df['TIME'].apply(lambda x: datetime.datetime.fromtimestamp( 239 | int(x)).strftime("%Y-%m-%d %H:%M:%S")) 240 | max_date = df['TIME'].max() 241 | print('The maximum date of these posts is: {}, keep crawling...'.format(max_date)) 242 | return df, max_date, cursor 243 | 244 | 245 | def _parse_composite_nojs(resp): 246 | domops, cursor = _parse_domops(resp) 247 | domops = pd.DataFrame(domops, columns=['NAME', 'PAGEID', 'POSTID', 'TIME', 'MESSAGE', 248 | 'ATTACHMENT_TITLE', 'ATTACHMENT_DESCRIPTION', 'CURSOR', 'ACTOR_URL', 'POST_URL']) 249 | domops['TIME'] = domops['TIME'].apply( 250 | lambda x: datetime.datetime.fromtimestamp(int(x)).strftime("%Y-%m-%d %H:%M:%S")) 251 | 252 | jsmods = _parse_jsmods(resp) 253 | jsmods = pd.DataFrame(jsmods, columns=[ 254 | 'POSTID', 'PAGEID', 'COMMENTCOUNT', 'REACTIONCOUNT', 'SHARECOUNT', 'REACTIONS', 'DISPLAYCOMMENTCOUNT']) 255 | 256 | df = pd.merge(left=domops, 257 | right=jsmods, 258 | how='inner', 259 | on=['PAGEID', 'POSTID']) 260 | 261 | df = df[['NAME', 'PAGEID', 'POSTID', 'TIME', 'MESSAGE', 'ATTACHMENT_TITLE', 'ATTACHMENT_DESCRIPTION', 262 | 'REACTIONCOUNT', 'COMMENTCOUNT', 'DISPLAYCOMMENTCOUNT', 'SHARECOUNT', 'REACTIONS', 'CURSOR', 263 | 'ACTOR_URL', 'POST_URL']] 264 | max_date = df['TIME'].max() 265 | print('The maximum date of these posts is: {}, keep crawling...'.format(max_date)) 266 | return df, max_date, cursor 267 | 268 | # Page paser 269 | 270 | 271 | def _parse_pagetype(homepage_response): 272 | if '/groups/' in homepage_response.url: 273 | pagetype = 'Group' 274 | else: 275 | pagetype = 'Fanspage' 276 | return pagetype 277 | 278 | 279 | def _parse_pagename(homepage_response): 280 | raw_json = homepage_response.text.encode('utf-8').decode('unicode_escape') 281 | # pattern1 282 
| if len(re.findall(r'{"page":{"name":"(.*?)",', raw_json)) >= 1: 283 | pagename = re.findall(r'{"page":{"name":"(.*?)",', raw_json)[0] 284 | pagename = re.sub(r'\s\|\sFacebook', '', pagename) 285 | return pagename 286 | # pattern2 287 | if len(re.findall('","name":"(.*?)","', raw_json)) >= 1: 288 | pagename = re.findall('","name":"(.*?)","', raw_json)[0] 289 | pagename = re.sub(r'\s\|\sFacebook', '', pagename) 290 | return pagename 291 | 292 | 293 | def _parse_entryPoint(homepage_response): 294 | try: 295 | entryPoint = re.findall( 296 | '"entryPoint":{"__dr":"(.*?)"}}', homepage_response.text)[0] 297 | except: 298 | entryPoint = 'nojs' 299 | return entryPoint 300 | 301 | 302 | def _parse_identifier(entryPoint, homepage_response): 303 | if entryPoint in ['ProfilePlusCometLoggedOutRouteRoot.entrypoint', 'CometGroupDiscussionRoot.entrypoint']: 304 | # pattern 1 305 | if len(re.findall('"identifier":"{0,1}([0-9]{5,})"{0,1},', homepage_response.text)) >= 1: 306 | identifier = re.findall( 307 | '"identifier":"{0,1}([0-9]{5,})"{0,1},', homepage_response.text)[0] 308 | 309 | # pattern 2 310 | elif len(re.findall('fb://profile/(.*?)"', homepage_response.text)) >= 1: 311 | identifier = re.findall( 312 | 'fb://profile/(.*?)"', homepage_response.text)[0] 313 | 314 | # pattern 3 315 | elif len(re.findall('content="fb://group/([0-9]{1,})" />', homepage_response.text)) >= 1: 316 | identifier = re.findall( 317 | 'content="fb://group/([0-9]{1,})" />', homepage_response.text)[0] 318 | 319 | elif entryPoint in ['CometSinglePageHomeRoot.entrypoint', 'nojs']: 320 | # pattern 1 321 | if len(re.findall('"pageID":"{0,1}([0-9]{5,})"{0,1},', homepage_response.text)) >= 1: 322 | identifier = re.findall( 323 | '"pageID":"{0,1}([0-9]{5,})"{0,1},', homepage_response.text)[0] 324 | 325 | return identifier 326 | 327 | 328 | def _parse_docid(entryPoint, homepage_response): 329 | soup = BeautifulSoup(homepage_response.text, 'lxml') 330 | if entryPoint == 'nojs': 331 | docid = 'NoDocid' 332 | else: 333 | for link in soup.findAll('link', {'rel': 'preload'}): 334 | resp = requests.get(link['href']) 335 | for line in resp.text.split('\n', -1): 336 | if 'ProfileCometTimelineFeedRefetchQuery_' in line: 337 | docid = re.findall('e.exports="([0-9]{1,})"', line)[0] 338 | break 339 | 340 | if 'CometModernPageFeedPaginationQuery_' in line: 341 | docid = re.findall('e.exports="([0-9]{1,})"', line)[0] 342 | break 343 | 344 | if 'CometUFICommentsProviderQuery_' in line: 345 | docid = re.findall('e.exports="([0-9]{1,})"', line)[0] 346 | break 347 | 348 | if 'GroupsCometFeedRegularStoriesPaginationQuery' in line: 349 | docid = re.findall('e.exports="([0-9]{1,})"', line)[0] 350 | break 351 | if 'docid' in locals(): 352 | break 353 | return docid 354 | 355 | 356 | def _parse_likes(homepage_response, entryPoint, headers): 357 | if entryPoint in ['CometGroupDiscussionRoot.entrypoint']: 358 | pageabout = _get_pageabout(homepage_response, entryPoint, headers) 359 | members = re.findall( 360 | ',"group_total_members_info_text":"(.*?) 
total members","', pageabout.text)[0] 361 | members = re.sub(',', '', members) 362 | return members 363 | else: 364 | # pattern 1 365 | data = re.findall( 366 | '"page_likers":{"global_likers_count":([0-9]{1,})},"', homepage_response.text) 367 | if len(data) >= 1: 368 | likes = data[0] 369 | return likes 370 | # pattern 2 371 | data = re.findall( 372 | ' ([0-9]{0,},{0,}[0-9]{0,},{0,}[0-9]{0,},{0,}[0-9]{0,},{0,}[0-9]{0,},{0,}) likes', homepage_response.text) 373 | if len(data) >= 1: 374 | likes = data[0] 375 | likes = re.sub(',', '', likes) 376 | return likes 377 | 378 | 379 | def _parse_creation_time(homepage_response, entryPoint, headers): 380 | try: 381 | if entryPoint in ['ProfilePlusCometLoggedOutRouteRoot.entrypoint']: 382 | transparency_response = _get_pagetransparency( 383 | homepage_response, entryPoint, headers) 384 | transparency_info = re.findall( 385 | '"field_section_type":"transparency","profile_fields":{"nodes":\[{"title":(.*?}),"field_type":"creation_date",', transparency_response.text)[0] 386 | creation_time = json.loads(transparency_info)['text'] 387 | 388 | elif entryPoint in ['CometSinglePageHomeRoot.entrypoint']: 389 | creation_time = re.findall( 390 | ',"page_creation_date":{"text":"Page created - (.*?)"},', homepage_response.text)[0] 391 | 392 | elif entryPoint in ['nojs']: 393 | if len(re.findall('Page created - (.*?)', homepage_response.text)) >= 1: 394 | creation_time = re.findall( 395 | 'Page created - (.*?)', homepage_response.text)[0] 396 | else: 397 | creation_time = re.findall( 398 | ',"foundingDate":"(.*?)"}', homepage_response.text)[0][:10] 399 | 400 | elif entryPoint in ['CometGroupDiscussionRoot.entrypoint']: 401 | pageabout = _get_pageabout(homepage_response, entryPoint, headers) 402 | creation_time = re.findall( 403 | '"group_history_summary":{"text":"Group created on (.*?)"}},', pageabout.text)[0] 404 | 405 | try: 406 | creation_time = datetime.datetime.strptime( 407 | creation_time, '%B %d, %Y') 408 | except: 409 | creation_time = creation_time + ', ' + datetime.datetime.now().year 410 | creation_time = datetime.datetime.strptime( 411 | creation_time, '%B %d, %Y') 412 | creation_time = creation_time.strftime('%Y-%m-%d') 413 | except: 414 | creation_time = 'NotAvailable' 415 | return creation_time 416 | 417 | 418 | def _parse_category(homepage_response, entryPoint, headers): 419 | pageabout = _get_pageabout(homepage_response, entryPoint, headers) 420 | if entryPoint in ['ProfilePlusCometLoggedOutRouteRoot.entrypoint']: 421 | if 'Page \\u00b7 Politician' in pageabout.text: 422 | category = 'Politician' 423 | if len(re.findall(r'"text":"Page \\u00b7 (.*?)"}', homepage_response.text)) >= 1: 424 | category = re.findall( 425 | r'"text":"Page \\u00b7 (.*?)"}', homepage_response.text)[0] 426 | else: 427 | soup = BeautifulSoup(pageabout.text) 428 | for script in soup.findAll('script', {'type': 'application/ld+json'}): 429 | if 'BreadcrumbList' in script.text: 430 | data = script.text.encode('utf-8').decode('unicode_escape') 431 | category = json.loads(data)['itemListElement'] 432 | category = ' / '.join([cate['name'] for cate in category]) 433 | elif entryPoint in ['CometSinglePageHomeRoot.entrypoint', 'nojs']: 434 | if len(re.findall('","category_name":"(.*?)","', homepage_response.text)) >= 1: 435 | category = re.findall( 436 | '","category_name":"(.*?)","', homepage_response.text) 437 | category = ' / '.join([cate for cate in category]) 438 | else: 439 | soup = BeautifulSoup(homepage_response.text) 440 | if len(soup.findAll('span', {'itemprop': 
'itemListElement'})) >= 1: 441 | category = [span.text for span in soup.findAll( 442 | 'span', {'itemprop': 'itemListElement'})] 443 | category = ' / '.join(category) 444 | else: 445 | for script in soup.findAll('script', {'type': 'application/ld+json'}): 446 | if 'BreadcrumbList' in script.text: 447 | data = script.text.encode( 448 | 'utf-8').decode('unicode_escape') 449 | category = json.loads(data)['itemListElement'] 450 | category = ' / '.join([cate['name'] 451 | for cate in category]) 452 | elif entryPoint in ['PagesCometAdminSelfViewAboutContainerRoot.entrypoint']: 453 | category = eval(re.findall( 454 | '"page_categories":(.*?),"addressEditable', homepage_response.text)[0]) 455 | category = ' / '.join([cate['text'] for cate in category]) 456 | elif entryPoint in ['CometGroupDiscussionRoot.entrypoint']: 457 | category = 'Group' 458 | try: 459 | category = re.sub(r'\\/', '/', category) 460 | except: 461 | category = '' 462 | return category 463 | 464 | 465 | def _parse_pageurl(homepage_response): 466 | pageurl = homepage_response.url 467 | pageurl = re.sub('/$', '', pageurl) 468 | return pageurl 469 | 470 | 471 | def _parse_relatedpages(homepage_response, entryPoint, identifier): 472 | relatedpages = [] 473 | if entryPoint in ['CometSinglePageHomeRoot.entrypoint']: 474 | try: 475 | data = re.findall( 476 | r'"related_pages":\[(.*?)\],"view_signature"', homepage_response.text)[0] 477 | data = re.sub('},{', '},,,,{', data) 478 | for pages in data.split(',,,,', -1): 479 | # print('id:', json.loads(pages)['id']) 480 | # print('category_name:', json.loads(pages)['category_name']) 481 | # print('name:', json.loads(pages)['name']) 482 | url = json.loads(pages)['url'] 483 | url = url.split('?', -1)[0] 484 | url = re.sub(r'/$', '', url) 485 | # print('url:', url) 486 | # print('========') 487 | relatedpages.append(url) 488 | except: 489 | pass 490 | 491 | elif entryPoint in ['nojs']: 492 | soup = BeautifulSoup(homepage_response.text, 'lxml') 493 | soup = soup.find( 494 | 'div', {'id': 'PageRelatedPagesSecondaryPagelet_{}'.format(identifier)}) 495 | for page in soup.select('ul > li > div'): 496 | # print('name: ', page.find('img')['aria-label']) 497 | url = page.find('a')['href'] 498 | url = url.split('?', -1)[0] 499 | url = re.sub(r'/$', '', url) 500 | # print('url:', url) 501 | # print('===========') 502 | relatedpages.append(url) 503 | 504 | elif entryPoint in ['ProfilePlusCometLoggedOutRouteRoot.entrypoint', 'CometGroupDiscussionRoot.entrypoint']: 505 | pass 506 | # print('There\'s no related pages recommend.') 507 | return relatedpages 508 | 509 | 510 | def _parse_pageinfo(homepage_response): 511 | ''' 512 | Parse the homepage response to get the page information, including id, docid and api_name. 
513 | ''' 514 | # pagetype 515 | pagetype = _parse_pagetype(homepage_response) 516 | 517 | # pagename 518 | pagename = _parse_pagename(homepage_response) 519 | 520 | # entryPoint 521 | entryPoint = _parse_entryPoint(homepage_response) 522 | 523 | # identifier 524 | identifier = _parse_identifier(entryPoint, homepage_response) 525 | 526 | # docid 527 | docid = _parse_docid(entryPoint, homepage_response) 528 | 529 | # likes / members 530 | likes = _parse_likes(homepage_response, entryPoint, headers) 531 | 532 | # creation time 533 | creation_time = _parse_creation_time( 534 | homepage_response, entryPoint, headers) 535 | 536 | # category 537 | category = _parse_category(homepage_response, entryPoint, headers) 538 | 539 | # pageurl 540 | pageurl = _parse_pageurl(homepage_response) 541 | 542 | return [pagetype, pagename, identifier, likes, creation_time, category, pageurl] 543 | 544 | 545 | if __name__ == '__main__': 546 | # pageurls 547 | pageurl = 'https://www.facebook.com/mohw.gov.tw' 548 | pageurl = 'https://www.facebook.com/groups/pythontw' 549 | pageurl = 'https://www.facebook.com/Gooaye' 550 | pageurl = 'https://www.facebook.com/emily0806' 551 | pageurl = 'https://www.facebook.com/anuetw/' 552 | pageurl = 'https://www.facebook.com/wealtholic/' 553 | pageurl = 'https://www.facebook.com/hatendhu' 554 | 555 | headers = _get_headers(pageurl) 556 | headers['Referer'] = 'https://www.facebook.com/hatendhu' 557 | headers['Origin'] = 'https://www.facebook.com' 558 | headers['Cookie'] = 'dpr=1.5; datr=rzIwY5yARwMzcR9H2GyqId_l' 559 | 560 | homepage_response = _get_homepage(pageurl=pageurl, headers=headers) 561 | 562 | entryPoint = _parse_entryPoint(homepage_response) 563 | print(entryPoint) 564 | 565 | identifier = _parse_identifier(entryPoint, homepage_response) 566 | 567 | docid = _parse_docid(entryPoint, homepage_response) 568 | 569 | df, cursor, max_date, break_times = _init_request_vars(cursor='') 570 | cursor = 'AQHRlIMW9sczmHGnME47XeSdDNj6Jk9EcBOMlyxBdMNbZHM7dwd0rn8wsaxQxeXUsuhKVaMgVwPHb9YS9468INvb5yw2osoEmXd_sMXvj8rLhmBxeaJucMSPIDux_JuiHToC' 571 | cursor = 'AQHRxSZTqUvlLpkXCnrOjdX0gZeyn-Q1cuJzn4SPJuZ5rkYi7nZFByE5pwy4AsBoUOtcmF28lNfXR_rqv7oO7545iURm_mx46aZLBDiYfPmgI2mjscHUTiVi5vv1vj5EXiF4' 572 | resp = _get_posts(headers=headers, identifier=identifier, 573 | entryPoint=entryPoint, docid=docid, cursor=cursor) 574 | 575 | # graphql 576 | edges = _parse_edgelist(resp) 577 | print(len(edges)) 578 | _parse_edge(edges[0]) 579 | edges[0].keys() 580 | edges[0]['node'].keys() 581 | edges[0]['node']['comet_sections'].keys() 582 | edges[0]['node']['comet_sections'] 583 | df, max_date, cursor = _parse_composite_graphql(resp) 584 | df 585 | # nojs 586 | content_list, cursor = _parse_domops(resp) 587 | 588 | df, max_date, cursor = _parse_composite_nojs(resp) 589 | 590 | # page paser 591 | 592 | pagename = _parse_pagename(homepage_response).encode('utf-8').decode() 593 | likes = _parse_likes(homepage_response, entryPoint, headers) 594 | creation_time = _parse_creation_time( 595 | homepage_response=homepage_response, entryPoint=entryPoint, headers=headers) 596 | category = _parse_category(homepage_response, entryPoint, headers) 597 | pageurl = _parse_pageurl(homepage_response) 598 | -------------------------------------------------------------------------------- /requester.py: -------------------------------------------------------------------------------- 1 | import re 2 | import requests 3 | import time 4 | from utils import _init_request_vars 5 | 6 | 7 | def _get_headers(pageurl): 8 | ''' 9 | Send a 
request to get cookieid as headers. 10 | ''' 11 | pageurl = re.sub('www', 'm', pageurl) 12 | resp = requests.get(pageurl) 13 | headers = {'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', 14 | 'accept-language': 'en'} 15 | headers['cookie'] = '; '.join(['{}={}'.format(cookieid, resp.cookies.get_dict()[ 16 | cookieid]) for cookieid in resp.cookies.get_dict()]) 17 | # headers['cookie'] = headers['cookie'] + '; locale=en_US' 18 | return headers 19 | 20 | 21 | def _get_homepage(pageurl, headers): 22 | ''' 23 | Send a request to get the homepage response 24 | ''' 25 | pageurl = re.sub('/$', '', pageurl) 26 | timeout_cnt = 0 27 | while True: 28 | try: 29 | homepage_response = requests.get( 30 | pageurl, headers=headers, timeout=3) 31 | return homepage_response 32 | except: 33 | time.sleep(5) 34 | timeout_cnt = timeout_cnt + 1 35 | if timeout_cnt > 20: 36 | class homepage_response(): 37 | text = 'Sorry, something went wrong.' 38 | return homepage_response 39 | 40 | 41 | def _get_pageabout(homepage_response, entryPoint, headers): 42 | ''' 43 | Send a request to get the about page response 44 | ''' 45 | pageurl = re.sub('/$', '', homepage_response.url) 46 | pageabout = requests.get(pageurl + '/about', headers=headers) 47 | return pageabout 48 | 49 | 50 | def _get_pagetransparency(homepage_response, entryPoint, headers): 51 | ''' 52 | Send a request to get the transparency page response 53 | ''' 54 | pageurl = re.sub('/$', '', homepage_response.url) 55 | if entryPoint in ['ProfilePlusCometLoggedOutRouteRoot.entrypoint']: 56 | transparency_response = requests.get( 57 | pageurl + '/about_profile_transparency', headers=headers) 58 | return transparency_response 59 | 60 | 61 | def _get_posts(headers, identifier, entryPoint, docid, cursor): 62 | ''' 63 | Send a request to get new posts from fanspage/group. 
64 | ''' 65 | if entryPoint in ['nojs']: 66 | params = {'page_id': identifier, 67 | 'cursor': str({"timeline_cursor": cursor, 68 | "timeline_section_cursor": '{}', 69 | "has_next_page": 'true'}), 70 | 'surface': 'www_pages_posts', 71 | 'unit_count': 10, 72 | '__a': '1'} 73 | resp = requests.get(url='https://www.facebook.com/pages_reaction_units/more/', 74 | params=params) 75 | 76 | else: # entryPoint in ['CometSinglePageHomeRoot.entrypoint', 'ProfilePlusCometLoggedOutRouteRoot.entrypoint', 'CometGroupDiscussionRoot.entrypoint'] 77 | data = {'variables': str({'cursor': cursor, 78 | 'id': identifier, 79 | 'count': 3}), 80 | 'doc_id': docid} 81 | resp = requests.post(url='https://www.facebook.com/api/graphql/', 82 | data=data, 83 | headers=headers) 84 | return resp 85 | 86 | 87 | if __name__ == '__main__': 88 | pageurl = 'https://www.facebook.com/ec.ltn.tw/' 89 | pageurl = 'https://www.facebook.com/Gooaye' 90 | pageurl = 'https://www.facebook.com/groups/pythontw' 91 | pageurl = 'https://www.facebook.com/hatendhu' 92 | headers = _get_headers(pageurl) 93 | homepage_response = _get_homepage(pageurl=pageurl, headers=headers) 94 | 95 | df, cursor, max_date, break_times = _init_request_vars() 96 | cursor = 'AQHRlIMW9sczmHGnME47XeSdDNj6Jk9EcBOMlyxBdMNbZHM7dwd0rn8wsaxQxeXUsuhKVaMgVwPHb9YS9468INvb5yw2osoEmXd_sMXvj8rLhmBxeaJucMSPIDux_JuiHToC' 97 | cursor = 'AQHRixL5fPMA_nM-78jGg4LohG3M4a2-YQR6WSaWOTiqPRJ1dOGchYRzp1wdDtusNd-5FkCPXwByL_kZM2iyLIz1XHB8WIEzHYXTU3vQzviOI9GexNv__RPn1xnFJZddnjX3' 98 | 99 | from paser import _parse_entryPoint, _parse_identifier, _parse_docid, _parse_composite_graphql 100 | entryPoint = _parse_entryPoint(homepage_response) 101 | identifier = _parse_identifier(entryPoint, homepage_response) 102 | docid = _parse_docid(entryPoint, homepage_response) 103 | df, cursor, max_date, break_times = _init_request_vars(cursor='') 104 | 105 | resp = _get_posts(headers=headers, identifier=identifier, 106 | entryPoint=entryPoint, docid=docid, cursor=cursor) 107 | ndf, max_date, cursor = _parse_composite_graphql(resp) 108 | resp.json() 109 | ndf 110 | max_date 111 | cursor 112 | # print(len(resp.text)) 113 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | requests==2.24.0 2 | bs4==0.0.1 3 | pandas==1.2.4 4 | numpy==1.20.3 5 | dicttoxml==1.7.4 6 | lxml==4.9.2 -------------------------------------------------------------------------------- /sample/20221013_Sample.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 3, 6 | "id": "7946c9a9-0a27-4de2-a32e-8de574f8d7fb", 7 | "metadata": { 8 | "execution": { 9 | "iopub.execute_input": "2022-10-13T14:01:50.402181Z", 10 | "iopub.status.busy": "2022-10-13T14:01:50.402181Z", 11 | "iopub.status.idle": "2022-10-13T14:02:32.391460Z", 12 | "shell.execute_reply": "2022-10-13T14:02:32.391460Z", 13 | "shell.execute_reply.started": "2022-10-13T14:01:50.402181Z" 14 | }, 15 | "tags": [] 16 | }, 17 | "outputs": [ 18 | { 19 | "name": "stdout", 20 | "output_type": "stream", 21 | "text": [ 22 | "The maximum date of these posts is: 2022-10-13 01:06:00, keep crawling...\n", 23 | "The maximum date of these posts is: 2022-10-12 19:47:59, keep crawling...\n", 24 | "The maximum date of these posts is: 2022-10-12 19:47:37, keep crawling...\n", 25 | "The maximum date of these posts is: 2022-10-12 19:47:19, keep crawling...\n", 26 | 
"The maximum date of these posts is: 2022-10-12 02:37:08, keep crawling...\n", 27 | "The maximum date of these posts is: 2022-10-12 02:36:56, keep crawling...\n", 28 | "The maximum date of these posts is: 2022-10-12 02:36:50, keep crawling...\n", 29 | "The maximum date of these posts is: 2022-10-10 23:07:48, keep crawling...\n", 30 | "The maximum date of these posts is: 2022-10-10 18:18:13, keep crawling...\n", 31 | "The maximum date of these posts is: 2022-10-09 22:47:33, keep crawling...\n", 32 | "The maximum date of these posts is: 2022-10-08 23:59:35, keep crawling...\n", 33 | "The maximum date of these posts is: 2022-10-08 16:27:26, keep crawling...\n", 34 | "The maximum date of these posts is: 2022-10-07 21:05:48, keep crawling...\n", 35 | "The maximum date of these posts is: 2022-10-07 21:05:34, keep crawling...\n", 36 | "The maximum date of these posts is: 2022-10-07 21:05:14, keep crawling...\n", 37 | "The maximum date of these posts is: 2022-10-07 21:04:46, keep crawling...\n", 38 | "The maximum date of these posts is: 2022-10-07 21:04:17, keep crawling...\n", 39 | "The maximum date of these posts is: 2022-10-07 00:58:39, keep crawling...\n", 40 | "The maximum date of these posts is: 2022-10-06 23:56:05, keep crawling...\n", 41 | "The maximum date of these posts is: 2022-10-06 22:25:05, keep crawling...\n", 42 | "The maximum date of these posts is: 2022-10-06 22:24:57, keep crawling...\n", 43 | "The maximum date of these posts is: 2022-10-06 22:24:48, keep crawling...\n", 44 | "The maximum date of these posts is: 2022-10-06 22:24:39, keep crawling...\n", 45 | "The maximum date of these posts is: 2022-10-06 22:24:31, keep crawling...\n", 46 | "The maximum date of these posts is: 2022-10-06 22:24:19, keep crawling...\n", 47 | "The maximum date of these posts is: 2022-10-05 21:33:20, keep crawling...\n", 48 | "The maximum date of these posts is: 2022-10-05 21:33:12, keep crawling...\n", 49 | "The maximum date of these posts is: 2022-10-05 21:33:05, keep crawling...\n", 50 | "The maximum date of these posts is: 2022-10-05 21:32:59, keep crawling...\n", 51 | "The maximum date of these posts is: 2022-10-04 23:21:34, keep crawling...\n", 52 | "The maximum date of these posts is: 2022-10-04 23:21:14, keep crawling...\n", 53 | "The maximum date of these posts is: 2022-10-04 23:20:37, keep crawling...\n", 54 | "The maximum date of these posts is: 2022-10-04 23:19:57, keep crawling...\n", 55 | "The maximum date of these posts is: 2022-10-04 23:19:33, keep crawling...\n", 56 | "The maximum date of these posts is: 2022-10-03 22:40:18, keep crawling...\n", 57 | "The maximum date of these posts is: 2022-10-01 06:49:51, keep crawling...\n", 58 | "The maximum date of these posts is: 2022-10-01 06:49:39, keep crawling...\n", 59 | "The maximum date of these posts is: 2022-10-01 06:49:32, keep crawling...\n", 60 | "The maximum date of these posts is: 2022-09-30 00:28:37, keep crawling...\n" 61 | ] 62 | }, 63 | { 64 | "data": { 65 | "text/html": [ 66 | "
| \n", 84 | " | NAME\n", 85 | " | PAGEID\n", 86 | " | POSTID\n", 87 | " | TIME\n", 88 | " | MESSAGE\n", 89 | " | ATTACHMENT_TITLE\n", 90 | " | ATTACHMENT_DESCRIPTION\n", 91 | " | ATTACHMENT_PHOTOS\n", 92 | " | REACTIONCOUNT\n", 93 | " | COMMENTCOUNT\n", 94 | " | DISPLAYCOMMENTCOUNT\n", 95 | " | SHARECOUNT\n", 96 | " | REACTIONS\n", 97 | " | CURSOR\n", 98 | " | ACTOR_URL\n", 99 | " | POST_URL\n", 100 | " | UPDATETIME\n", 101 | " | 
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0\n", 106 | " | 黑特東華 NDHU Hate\n", 107 | " | 100064708507691\n", 108 | " | 476738834493063\n", 109 | " | 2022-10-13 01:06:00\n", 110 | " | #52635\\n\\n志學街一堆店家把東華學生當搖錢樹,大家應該聯合抵制一下,盤子店通通給他倒...\n", 111 | " | \n", 112 | " | \n", 113 | " | \n", 114 | " | 166\n", 115 | " | 25\n", 116 | " | 15\n", 117 | " | 2\n", 118 | " | [{'reaction_count': 138, 'node': {'id': '16358...\n", 119 | " | AQHRWkzldCEuXabhI1tJPZnVEn7FKGxga7nPHhIBgGzMmB...\n", 120 | " | https://www.facebook.com/people/%E9%BB%91%E7%8...\n", 121 | " | https://www.facebook.com/permalink.php?story_f...\n", 122 | " | 2022-10-13 22:02:32\n", 123 | " | 
| 1\n", 126 | " | 黑特東華 NDHU Hate\n", 127 | " | 100064708507691\n", 128 | " | 476738777826402\n", 129 | " | 2022-10-13 01:05:58\n", 130 | " | #52634\\n\\n電音社第二次社課\\n\\n提醒各位這禮拜日為電音社第二次上課,不管是沒有參...\n", 131 | " | \n", 132 | " | \n", 133 | " | https://scontent.ftpe10-1.fna.fbcdn.net/v/t39....\n", 134 | " | 3\n", 135 | " | 0\n", 136 | " | 0\n", 137 | " | 0\n", 138 | " | [{'reaction_count': 2, 'node': {'id': '1635855...\n", 139 | " | AQHRRN-x6Hvn-AZ35SOZ2CNtJpaM3_9yYj9j9dTx5LxCWy...\n", 140 | " | https://www.facebook.com/people/%E9%BB%91%E7%8...\n", 141 | " | https://www.facebook.com/permalink.php?story_f...\n", 142 | " | 2022-10-13 22:02:32\n", 143 | " | 
| 2\n", 146 | " | 黑特東華 NDHU Hate\n", 147 | " | 100064708507691\n", 148 | " | 476533761180237\n", 149 | " | 2022-10-12 19:48:11\n", 150 | " | #52633\\n\\n10/12晚上7.左右在理工停車場靠學活那側撿到BKS1(安全帽藍芽耳機...\n", 151 | " | \n", 152 | " | \n", 153 | " | https://scontent.ftpe10-1.fna.fbcdn.net/v/t39....\n", 154 | " | 3\n", 155 | " | 0\n", 156 | " | 0\n", 157 | " | 0\n", 158 | " | [{'reaction_count': 3, 'node': {'id': '1635855...\n", 159 | " | AQHRD6AVoPeV1kHsvTjT0XPsC42w8-nrRawALu9lBRlhH5...\n", 160 | " | https://www.facebook.com/people/%E9%BB%91%E7%8...\n", 161 | " | https://www.facebook.com/permalink.php?story_f...\n", 162 | " | 2022-10-13 22:02:32\n", 163 | " | 
| 3\n", 166 | " | 黑特東華 NDHU Hate\n", 167 | " | 100064708507691\n", 168 | " | 476533567846923\n", 169 | " | 2022-10-12 19:47:59\n", 170 | " | #52632\\n\\n#非黑特\\nThis is 頌啦!浪‧人‧鬆‧餅👏👏👏\\n東華巡迴場10...\n", 171 | " | \n", 172 | " | \n", 173 | " | https://scontent.ftpe10-1.fna.fbcdn.net/v/t39....\n", 174 | " | 16\n", 175 | " | 0\n", 176 | " | 0\n", 177 | " | 1\n", 178 | " | [{'reaction_count': 16, 'node': {'id': '163585...\n", 179 | " | AQHRMe5o8KJIMTVYawF1o8Vd0cJzDc1dF1eum3VLMIYTpL...\n", 180 | " | https://www.facebook.com/people/%E9%BB%91%E7%8...\n", 181 | " | https://www.facebook.com/permalink.php?story_f...\n", 182 | " | 2022-10-13 22:02:32\n", 183 | " | 
| 4\n", 186 | " | 黑特東華 NDHU Hate\n", 187 | " | 100064708507691\n", 188 | " | 476533317846948\n", 189 | " | 2022-10-12 19:47:47\n", 190 | " | #52631\\n\\n想問下 禮拜一晚上有上拳擊課的學生 \\n因為連假所以補課 想問是什麼時候...\n", 191 | " | \n", 192 | " | \n", 193 | " | \n", 194 | " | 9\n", 195 | " | 9\n", 196 | " | 4\n", 197 | " | 0\n", 198 | " | [{'reaction_count': 9, 'node': {'id': '1635855...\n", 199 | " | AQHRppObbMGsrzRdDW6mI7HAOUiedMntD77Xe_-pnyteE3...\n", 200 | " | https://www.facebook.com/people/%E9%BB%91%E7%8...\n", 201 | " | https://www.facebook.com/permalink.php?story_f...\n", 202 | " | 2022-10-13 22:02:32\n", 203 | " | 
| ...\n", 206 | " | ...\n", 207 | " | ...\n", 208 | " | ...\n", 209 | " | ...\n", 210 | " | ...\n", 211 | " | ...\n", 212 | " | ...\n", 213 | " | ...\n", 214 | " | ...\n", 215 | " | ...\n", 216 | " | ...\n", 217 | " | ...\n", 218 | " | ...\n", 219 | " | ...\n", 220 | " | ...\n", 221 | " | ...\n", 222 | " | ...\n", 223 | " | 
| 112\n", 226 | " | 黑特東華 NDHU Hate\n", 227 | " | 100064708507691\n", 228 | " | 466889638811316\n", 229 | " | 2022-10-01 06:49:29\n", 230 | " | #52524\\n\\n29號接近午夜時分在外環拉轉按喇叭,後來一路騎進學人宿舍往舊宿去、還繼續...\n", 231 | " | \n", 232 | " | \n", 233 | " | https://scontent.ftpe10-1.fna.fbcdn.net/v/t39....\n", 234 | " | 47\n", 235 | " | 5\n", 236 | " | 2\n", 237 | " | 1\n", 238 | " | [{'reaction_count': 34, 'node': {'id': '163585...\n", 239 | " | AQHRsblILurBMfS7TsLUVyW3GVjCkGIVSJLXMn3lb0_a0P...\n", 240 | " | https://www.facebook.com/people/%E9%BB%91%E7%8...\n", 241 | " | https://www.facebook.com/permalink.php?story_f...\n", 242 | " | 2022-10-13 22:02:32\n", 243 | " | 
| 113\n", 246 | " | 黑特東華 NDHU Hate\n", 247 | " | 100064708507691\n", 248 | " | 465849725581974\n", 249 | " | 2022-09-30 00:28:39\n", 250 | " | #52523\\n\\n向晴裝有買車停在宿舍旁的車主可不可以管好自己的車,常常半夜一直逼逼逼逼讓...\n", 251 | " | \n", 252 | " | \n", 253 | " | \n", 254 | " | 6\n", 255 | " | 0\n", 256 | " | 0\n", 257 | " | 0\n", 258 | " | [{'reaction_count': 6, 'node': {'id': '1635855...\n", 259 | " | AQHRXXwiAQgdKEmnvEo3mUXyzuH6f-wXNQDeyzxZr2kEky...\n", 260 | " | https://www.facebook.com/people/%E9%BB%91%E7%8...\n", 261 | " | https://www.facebook.com/permalink.php?story_f...\n", 262 | " | 2022-10-13 22:02:32\n", 263 | " | 
| 114\n", 266 | " | 黑特東華 NDHU Hate\n", 267 | " | 100064708507691\n", 268 | " | 465849705581976\n", 269 | " | 2022-09-30 00:28:37\n", 270 | " | #52522\\n\\n宿舍都知道要公告說晚上十點過後要小聲要安靜\\n然後擷雲宿委還可以快十一點...\n", 271 | " | \n", 272 | " | \n", 273 | " | \n", 274 | " | 27\n", 275 | " | 6\n", 276 | " | 5\n", 277 | " | 0\n", 278 | " | [{'reaction_count': 21, 'node': {'id': '163585...\n", 279 | " | AQHRB_nYO_BnR2qBBs4LXzM6CPeLfOlTMiCE3ZR6Ed9uZB...\n", 280 | " | https://www.facebook.com/people/%E9%BB%91%E7%8...\n", 281 | " | https://www.facebook.com/permalink.php?story_f...\n", 282 | " | 2022-10-13 22:02:32\n", 283 | " | 
| 115\n", 286 | " | 黑特東華 NDHU Hate\n", 287 | " | 100064708507691\n", 288 | " | 465849682248645\n", 289 | " | 2022-09-30 00:28:35\n", 290 | " | #52521\\n\\n*非黑特*\\n誠徵*日領* 兼職、打工、工讀, 10/3(一)3位Pm1...\n", 291 | " | \n", 292 | " | \n", 293 | " | \n", 294 | " | 8\n", 295 | " | 1\n", 296 | " | 1\n", 297 | " | 0\n", 298 | " | [{'reaction_count': 8, 'node': {'id': '1635855...\n", 299 | " | AQHRQ11A6xsDyh3EgTdpIoAWmxFJQkoSXUrlVoeHXSDylA...\n", 300 | " | https://www.facebook.com/people/%E9%BB%91%E7%8...\n", 301 | " | https://www.facebook.com/permalink.php?story_f...\n", 302 | " | 2022-10-13 22:02:32\n", 303 | " | 
| 116\n", 306 | " | 黑特東華 NDHU Hate\n", 307 | " | 100064708507691\n", 308 | " | 465849665581980\n", 309 | " | 2022-09-30 00:28:33\n", 310 | " | #52520\\n\\n同學你的便當🍱留在統冠(志學)\\n由於一直等不到你來拿\\n先幫你冰起來\\...\n", 311 | " | \n", 312 | " | \n", 313 | " | https://scontent.ftpe10-1.fna.fbcdn.net/v/t39....\n", 314 | " | 11\n", 315 | " | 0\n", 316 | " | 0\n", 317 | " | 0\n", 318 | " | [{'reaction_count': 8, 'node': {'id': '1635855...\n", 319 | " | AQHR5AKAESdSG7AYj3DfxnH6Gb8piuyeh9hP-f4Y9IFOdv...\n", 320 | " | https://www.facebook.com/people/%E9%BB%91%E7%8...\n", 321 | " | https://www.facebook.com/permalink.php?story_f...\n", 322 | " | 2022-10-13 22:02:32\n", 323 | " | 
117 rows × 17 columns
\n", 327 | "| \n", 75 | " | NAME\n", 76 | " | TIME\n", 77 | " | MESSAGE\n", 78 | " | LINK\n", 79 | " | PAGEID\n", 80 | " | POSTID\n", 81 | " | COMMENTCOUNT\n", 82 | " | REACTIONCOUNT\n", 83 | " | SHARECOUNT\n", 84 | " | DISPLAYCOMMENTCOUNT\n", 85 | " | ANGER\n", 86 | " | HAHA\n", 87 | " | LIKE\n", 88 | " | LOVE\n", 89 | " | SORRY\n", 90 | " | SUPPORT\n", 91 | " | WOW\n", 92 | " | UPDATETIME\n", 93 | " | 
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0\n", 98 | " | 丟丟妹\n", 99 | " | 2021-12-03 02:36:02\n", 100 | " | 怎麼分不清 到底是誰帶壞誰XD 丟丟妹 和 Alizabeth 娘娘 根本姐妹淘🤣 互相「曉...\n", 101 | " | https://www.facebook.com/diudiu333\n", 102 | " | 1723714034327589\n", 103 | " | 4964611303571163\n", 104 | " | 704.0\n", 105 | " | 21797.0\n", 106 | " | 546.0\n", 107 | " | 572.0\n", 108 | " | 6.0\n", 109 | " | 5798.0\n", 110 | " | 15841.0\n", 111 | " | 108.0\n", 112 | " | 6.0\n", 113 | " | 17.0\n", 114 | " | 21.0\n", 115 | " | 2022-01-03 12:42:18\n", 116 | " | 
| 1\n", 119 | " | 丟丟妹\n", 120 | " | 2021-11-28 21:11:13\n", 121 | " | 我喜歡這樣的陽光☀️ 開放留言 想要丟丟賣什麼? 遇到瓶頸了⋯⋯⋯😢 ❤️ #留起來\n", 122 | " | https://www.facebook.com/diudiu333\n", 123 | " | 1723714034327589\n", 124 | " | 4949186545113639\n", 125 | " | 1675.0\n", 126 | " | 12674.0\n", 127 | " | 22.0\n", 128 | " | 1142.0\n", 129 | " | 1.0\n", 130 | " | 7.0\n", 131 | " | 12510.0\n", 132 | " | 125.0\n", 133 | " | 0.0\n", 134 | " | 25.0\n", 135 | " | 6.0\n", 136 | " | 2022-01-03 12:42:18\n", 137 | " | 
| 2\n", 140 | " | 丟丟妹\n", 141 | " | 2021-11-19 18:02:56\n", 142 | " | 人客啊 客倌們😆歡迎留言+1 2A頂級大閘蟹 (含繩6兩) 特價3088免運 (6隻一箱)最...\n", 143 | " | https://www.facebook.com/diudiu333\n", 144 | " | 1723714034327589\n", 145 | " | 4918431521522475\n", 146 | " | 719.0\n", 147 | " | 32242.0\n", 148 | " | 65.0\n", 149 | " | 543.0\n", 150 | " | 1.0\n", 151 | " | 31.0\n", 152 | " | 31884.0\n", 153 | " | 290.0\n", 154 | " | 3.0\n", 155 | " | 18.0\n", 156 | " | 15.0\n", 157 | " | 2022-01-03 12:42:18\n", 158 | " | 
| 3\n", 161 | " | 丟丟妹\n", 162 | " | 2021-11-12 22:35:08\n", 163 | " | 丟丟妹 終於不用離婚🤣 這次直播半路認走 #乾哥哥:#郭丟丟! 叫買能力根本 #隔空遺傳?!...\n", 164 | " | https://www.facebook.com/diudiu333\n", 165 | " | 1723714034327589\n", 166 | " | 4894343770597917\n", 167 | " | 477.0\n", 168 | " | 18072.0\n", 169 | " | 656.0\n", 170 | " | 431.0\n", 171 | " | 23.0\n", 172 | " | 3495.0\n", 173 | " | 14351.0\n", 174 | " | 125.0\n", 175 | " | 17.0\n", 176 | " | 29.0\n", 177 | " | 32.0\n", 178 | " | 2022-01-03 12:42:18\n", 179 | " | 
| 4\n", 182 | " | 丟丟妹\n", 183 | " | 2021-11-10 21:00:52\n", 184 | " | 各位親愛的帥哥美女們 由於訊息無限爆炸中 (已經快馬加鞭) 有留訊息就是 已接結單唷 造成...\n", 185 | " | https://www.facebook.com/diudiu333\n", 186 | " | 1723714034327589\n", 187 | " | 4887230927975868\n", 188 | " | 352.0\n", 189 | " | 9598.0\n", 190 | " | 17.0\n", 191 | " | 220.0\n", 192 | " | 1.0\n", 193 | " | 19.0\n", 194 | " | 9476.0\n", 195 | " | 59.0\n", 196 | " | 0.0\n", 197 | " | 34.0\n", 198 | " | 9.0\n", 199 | " | 2022-01-03 12:42:18\n", 200 | " | 
| ...\n", 203 | " | ...\n", 204 | " | ...\n", 205 | " | ...\n", 206 | " | ...\n", 207 | " | ...\n", 208 | " | ...\n", 209 | " | ...\n", 210 | " | ...\n", 211 | " | ...\n", 212 | " | ...\n", 213 | " | ...\n", 214 | " | ...\n", 215 | " | ...\n", 216 | " | ...\n", 217 | " | ...\n", 218 | " | ...\n", 219 | " | ...\n", 220 | " | ...\n", 221 | " | 
| 75\n", 224 | " | 丟丟妹\n", 225 | " | 2020-05-03 16:52:40\n", 226 | " | 猜猜我在哪裡 陳冠霖牛仔部落格 連靜雯joanne lien\n", 227 | " | https://www.facebook.com/diudiu333\n", 228 | " | 1723714034327589\n", 229 | " | 184909892573719\n", 230 | " | 2921.0\n", 231 | " | 12355.0\n", 232 | " | 851.0\n", 233 | " | 2921.0\n", 234 | " | 0.0\n", 235 | " | 0.0\n", 236 | " | 0.0\n", 237 | " | 0.0\n", 238 | " | 0.0\n", 239 | " | 0.0\n", 240 | " | 0.0\n", 241 | " | 2022-01-03 12:42:18\n", 242 | " | 
| 76\n", 245 | " | 丟丟妹\n", 246 | " | 2020-04-30 13:36:40\n", 247 | " | 踢爆娃娃機...\n", 248 | " | https://www.facebook.com/diudiu333\n", 249 | " | 1723714034327589\n", 250 | " | 3173673282664983\n", 251 | " | 946.0\n", 252 | " | 22357.0\n", 253 | " | 1035.0\n", 254 | " | 856.0\n", 255 | " | 76.0\n", 256 | " | 2661.0\n", 257 | " | 19275.0\n", 258 | " | 161.0\n", 259 | " | 25.0\n", 260 | " | 71.0\n", 261 | " | 88.0\n", 262 | " | 2022-01-03 12:42:18\n", 263 | " | 
| 77\n", 266 | " | 丟丟妹\n", 267 | " | 2020-04-29 21:51:13\n", 268 | " | 今日直播暫停乙次 丟丟身體不舒服了😭 明天下午見 開播 要想我喲 嗚嗚😢\n", 269 | " | https://www.facebook.com/diudiu333\n", 270 | " | 1723714034327589\n", 271 | " | 3171969702835341\n", 272 | " | 167.0\n", 273 | " | 4123.0\n", 274 | " | 8.0\n", 275 | " | 164.0\n", 276 | " | 0.0\n", 277 | " | 3.0\n", 278 | " | 3992.0\n", 279 | " | 96.0\n", 280 | " | 7.0\n", 281 | " | 5.0\n", 282 | " | 20.0\n", 283 | " | 2022-01-03 12:42:18\n", 284 | " | 
| 78\n", 287 | " | 丟丟妹\n", 288 | " | 2020-04-17 19:46:50\n", 289 | " | 帥哥美女今天星期五,丟妹偷懶去 明天下午二點 準時見❤️❤️😄有人會報到嗎 刷起來 😂\n", 290 | " | https://www.facebook.com/diudiu333\n", 291 | " | 1723714034327589\n", 292 | " | 3141849955847316\n", 293 | " | 116.0\n", 294 | " | 4804.0\n", 295 | " | 20.0\n", 296 | " | 114.0\n", 297 | " | 0.0\n", 298 | " | 6.0\n", 299 | " | 4744.0\n", 300 | " | 48.0\n", 301 | " | 1.0\n", 302 | " | 0.0\n", 303 | " | 5.0\n", 304 | " | 2022-01-03 12:42:18\n", 305 | " | 
| 79\n", 308 | " | 丟丟妹\n", 309 | " | 2020-04-15 20:48:08\n", 310 | " | 丟丟 今天太累累了 😢 直播暫停乙次 王董想要替代丟丟當家 睡覺去 ~~ 可以 (到來...\n", 311 | " | https://www.facebook.com/diudiu333\n", 312 | " | 1723714034327589\n", 313 | " | 3136987929666852\n", 314 | " | 145.0\n", 315 | " | 6943.0\n", 316 | " | 9.0\n", 317 | " | 131.0\n", 318 | " | 0.0\n", 319 | " | 23.0\n", 320 | " | 6847.0\n", 321 | " | 59.0\n", 322 | " | 0.0\n", 323 | " | 0.0\n", 324 | " | 14.0\n", 325 | " | 2022-01-03 12:42:18\n", 326 | " | 
80 rows × 18 columns
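A minimal sketch of post-processing, assuming `Crawl_PagePosts` returns a pandas DataFrame with the columns shown above (which the notebook preview suggests); the `top_posts` helper and the `ENGAGEMENT` column are illustrative additions, not part of the package:

```python
import pandas as pd

def top_posts(df: pd.DataFrame, n: int = 5) -> pd.DataFrame:
    """Rank crawled posts by a simple engagement score (reactions + comments + shares)."""
    out = df.copy()
    out["ENGAGEMENT"] = (
        out["REACTIONCOUNT"].fillna(0)
        + out["COMMENTCOUNT"].fillna(0)
        + out["SHARECOUNT"].fillna(0)
    )
    cols = ["TIME", "POSTID", "MESSAGE", "ENGAGEMENT"]
    return out.sort_values("ENGAGEMENT", ascending=False).head(n)[cols]

# Usage, once `df` holds a crawled fanspage result:
# print(top_posts(df))
```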
\n", 330 | "| \n", 469 | " | NAME\n", 470 | " | TIME\n", 471 | " | MESSAGE\n", 472 | " | LINK\n", 473 | " | PAGEID\n", 474 | " | POSTID\n", 475 | " | COMMENTCOUNT\n", 476 | " | REACTIONCOUNT\n", 477 | " | SHARECOUNT\n", 478 | " | DISPLAYCOMMENTCOUNT\n", 479 | " | ANGER\n", 480 | " | DOROTHY\n", 481 | " | HAHA\n", 482 | " | LIKE\n", 483 | " | LOVE\n", 484 | " | SORRY\n", 485 | " | SUPPORT\n", 486 | " | TOTO\n", 487 | " | WOW\n", 488 | " | UPDATETIME\n", 489 | " | 
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0\n", 494 | " | 馬英九\n", 495 | " | 2022-01-01 08:00:56\n", 496 | " | 今天是中華民國一百一十一年元旦,清晨的氣溫很低,看著國旗在空中飄揚,心中交集著感謝與感慨之情...\n", 497 | " | https://www.facebook.com/MaYingjeou/\n", 498 | " | 118250504903757\n", 499 | " | 4936286629766763\n", 500 | " | 1243.0\n", 501 | " | 38906.0\n", 502 | " | 403.0\n", 503 | " | 1022.0\n", 504 | " | 5.0\n", 505 | " | NaN\n", 506 | " | 22.0\n", 507 | " | 38254.0\n", 508 | " | 458.0\n", 509 | " | 4.0\n", 510 | " | 157.0\n", 511 | " | NaN\n", 512 | " | 6.0\n", 513 | " | 2022-01-05 23:16:30\n", 514 | " | 
| 1\n", 517 | " | 馬英九\n", 518 | " | 2021-12-21 17:05:14\n", 519 | " | 冬至天氣冷颼颼,來一碗熱熱的燒湯圓 🤤 大家喜歡吃芝麻湯圓還是花生湯圓呢? #冬至 #吃湯圓...\n", 520 | " | https://www.facebook.com/MaYingjeou/\n", 521 | " | 118250504903757\n", 522 | " | 4893842230677870\n", 523 | " | 1047.0\n", 524 | " | 28432.0\n", 525 | " | 179.0\n", 526 | " | 844.0\n", 527 | " | 15.0\n", 528 | " | NaN\n", 529 | " | 64.0\n", 530 | " | 27961.0\n", 531 | " | 325.0\n", 532 | " | 2.0\n", 533 | " | 55.0\n", 534 | " | NaN\n", 535 | " | 10.0\n", 536 | " | 2022-01-05 23:16:30\n", 537 | " | 
| 2\n", 540 | " | 馬英九\n", 541 | " | 2021-12-17 17:00:00\n", 542 | " | 公民投票是憲法賦予人民的權利 1218,穿得暖暖,出門投票! #四個都同意人民最有利 #返鄉...\n", 543 | " | https://www.facebook.com/MaYingjeou/\n", 544 | " | 118250504903757\n", 545 | " | 4874626962599397\n", 546 | " | 1883.0\n", 547 | " | 45037.0\n", 548 | " | 519.0\n", 549 | " | 1372.0\n", 550 | " | 31.0\n", 551 | " | NaN\n", 552 | " | 63.0\n", 553 | " | 43941.0\n", 554 | " | 336.0\n", 555 | " | 16.0\n", 556 | " | 634.0\n", 557 | " | NaN\n", 558 | " | 16.0\n", 559 | " | 2022-01-05 23:16:30\n", 560 | " | 
| 3\n", 563 | " | 馬英九\n", 564 | " | 2021-12-11 09:00:00\n", 565 | " | 公投前最後一個周末,你收到投票通知單了嗎? 萊劑不是保健品 公投重新綁大選 三接興建傷藻礁⋯...\n", 566 | " | https://www.facebook.com/MaYingjeou/\n", 567 | " | 118250504903757\n", 568 | " | 4846491325412961\n", 569 | " | 3254.0\n", 570 | " | 39639.0\n", 571 | " | 809.0\n", 572 | " | 1742.0\n", 573 | " | 60.0\n", 574 | " | NaN\n", 575 | " | 78.0\n", 576 | " | 38702.0\n", 577 | " | 196.0\n", 578 | " | 11.0\n", 579 | " | 568.0\n", 580 | " | NaN\n", 581 | " | 24.0\n", 582 | " | 2022-01-05 23:16:30\n", 583 | " | 
| 4\n", 586 | " | 馬英九\n", 587 | " | 2021-11-14 13:52:11\n", 588 | " | 鯉魚潭鐵三初體驗🏊♂️🚴♂️🏃♂️ 順利完賽💪💪💪 #2021年花蓮太平洋鐵人三項錦標...\n", 589 | " | https://www.facebook.com/MaYingjeou/\n", 590 | " | 118250504903757\n", 591 | " | 4764707240258037\n", 592 | " | 2409.0\n", 593 | " | 73538.0\n", 594 | " | 501.0\n", 595 | " | 2017.0\n", 596 | " | 15.0\n", 597 | " | NaN\n", 598 | " | 41.0\n", 599 | " | 72407.0\n", 600 | " | 557.0\n", 601 | " | 4.0\n", 602 | " | 292.0\n", 603 | " | NaN\n", 604 | " | 222.0\n", 605 | " | 2022-01-05 23:16:30\n", 606 | " | 
| ...\n", 609 | " | ...\n", 610 | " | ...\n", 611 | " | ...\n", 612 | " | ...\n", 613 | " | ...\n", 614 | " | ...\n", 615 | " | ...\n", 616 | " | ...\n", 617 | " | ...\n", 618 | " | ...\n", 619 | " | ...\n", 620 | " | ...\n", 621 | " | ...\n", 622 | " | ...\n", 623 | " | ...\n", 624 | " | ...\n", 625 | " | ...\n", 626 | " | ...\n", 627 | " | ...\n", 628 | " | ...\n", 629 | " | 
| 475\n", 632 | " | 馬英九\n", 633 | " | 2015-10-19 18:10:33\n", 634 | " | 「六堆忠義祠」源於清康熙60年,六堆團練協助平定朱一貴起事,清廷興建了「西勢忠義亭」奉祀在戰...\n", 635 | " | https://www.facebook.com/MaYingjeou/\n", 636 | " | 118250504903757\n", 637 | " | 1050731538322311\n", 638 | " | 1176.0\n", 639 | " | 21803.0\n", 640 | " | 316.0\n", 641 | " | 705.0\n", 642 | " | 0.0\n", 643 | " | NaN\n", 644 | " | 0.0\n", 645 | " | 21801.0\n", 646 | " | 2.0\n", 647 | " | 0.0\n", 648 | " | 0.0\n", 649 | " | NaN\n", 650 | " | 0.0\n", 651 | " | 2022-01-05 23:16:30\n", 652 | " | 
| 476\n", 655 | " | 馬英九\n", 656 | " | 2015-10-12 18:07:34\n", 657 | " | \n", 658 | " | https://www.facebook.com/MaYingjeou/\n", 659 | " | 118250504903757\n", 660 | " | 1047937641935034\n", 661 | " | NaN\n", 662 | " | NaN\n", 663 | " | NaN\n", 664 | " | NaN\n", 665 | " | 0.0\n", 666 | " | NaN\n", 667 | " | 0.0\n", 668 | " | 0.0\n", 669 | " | 0.0\n", 670 | " | 0.0\n", 671 | " | 0.0\n", 672 | " | NaN\n", 673 | " | 0.0\n", 674 | " | 2022-01-05 23:16:30\n", 675 | " | 
| 477\n", 678 | " | 馬英九\n", 679 | " | 2015-10-11 13:40:25\n", 680 | " | \n", 681 | " | https://www.facebook.com/MaYingjeou/\n", 682 | " | 118250504903757\n", 683 | " | 1047435965318535\n", 684 | " | NaN\n", 685 | " | NaN\n", 686 | " | NaN\n", 687 | " | NaN\n", 688 | " | 0.0\n", 689 | " | NaN\n", 690 | " | 0.0\n", 691 | " | 0.0\n", 692 | " | 0.0\n", 693 | " | 0.0\n", 694 | " | 0.0\n", 695 | " | NaN\n", 696 | " | 0.0\n", 697 | " | 2022-01-05 23:16:30\n", 698 | " | 
| 478\n", 701 | " | 馬英九\n", 702 | " | 2015-10-10 14:08:46\n", 703 | " | 今天是民國104年國慶日,讓我們一起祝中華民國生日快樂! 今年的國慶日,還有特別的意義。今年...\n", 704 | " | https://www.facebook.com/MaYingjeou/\n", 705 | " | 118250504903757\n", 706 | " | 1047031385358993\n", 707 | " | 2219.0\n", 708 | " | 36765.0\n", 709 | " | 679.0\n", 710 | " | 1357.0\n", 711 | " | 3.0\n", 712 | " | NaN\n", 713 | " | 1.0\n", 714 | " | 36761.0\n", 715 | " | 0.0\n", 716 | " | 0.0\n", 717 | " | 0.0\n", 718 | " | NaN\n", 719 | " | 0.0\n", 720 | " | 2022-01-05 23:16:30\n", 721 | " | 
| 479\n", 724 | " | 馬英九\n", 725 | " | 2015-10-09 19:58:31\n", 726 | " | 故宮博物院於民國14年的國慶日成立,今年是它的90歲生日。故宮成立後幾經波折,對日抗戰爆發後...\n", 727 | " | https://www.facebook.com/MaYingjeou/\n", 728 | " | 118250504903757\n", 729 | " | 1046725805389551\n", 730 | " | 993.0\n", 731 | " | 14761.0\n", 732 | " | 306.0\n", 733 | " | 590.0\n", 734 | " | 3.0\n", 735 | " | NaN\n", 736 | " | 0.0\n", 737 | " | 14757.0\n", 738 | " | 1.0\n", 739 | " | 0.0\n", 740 | " | 0.0\n", 741 | " | NaN\n", 742 | " | 0.0\n", 743 | " | 2022-01-05 23:16:30\n", 744 | " | 
480 rows × 20 columns
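Because this page exposes reaction columns the previous one does not, previews from different fanspages may not share an identical schema. A minimal sketch of stacking several such DataFrames into one consistent table; `combine_pages` and `CORE_REACTIONS` are illustrative names, not package API:

```python
from typing import List

import pandas as pd

# The seven reaction columns common to both previews above.
CORE_REACTIONS = ["ANGER", "HAHA", "LIKE", "LOVE", "SORRY", "SUPPORT", "WOW"]

def combine_pages(frames: List[pd.DataFrame]) -> pd.DataFrame:
    """Stack per-page results; reaction columns missing for a page become 0."""
    combined = pd.concat(frames, ignore_index=True, sort=False)
    for col in CORE_REACTIONS:
        if col not in combined.columns:
            combined[col] = 0
    combined[CORE_REACTIONS] = combined[CORE_REACTIONS].fillna(0)
    return combined
```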
\n", 748 | "| \n", 233 | " | ACTORID\n", 234 | " | NAME\n", 235 | " | GROUPID\n", 236 | " | POSTID\n", 237 | " | TIME\n", 238 | " | CONTENT\n", 239 | " | COMMENTCOUNT\n", 240 | " | SHARECOUNT\n", 241 | " | LIKECOUNT\n", 242 | " | UPDATETIME\n", 243 | " | 
|---|---|---|---|---|---|---|---|---|---|---|
| 0\n", 248 | " | \"100008360976636\"\n", 249 | " | 蔣忠祐\n", 250 | " | 197223143437\n", 251 | " | 10161825149923438\n", 252 | " | 2022-01-04 19:28:20\n", 253 | " | 小弟想問一下,因為本身只寫過python也只會ML其他的都不太熟悉。 如果我今天想要寫一個簡...\n", 254 | " | 49\n", 255 | " | 7\n", 256 | " | 42\n", 257 | " | 2022-01-05 22:47:28\n", 258 | " | 
| 1\n", 261 | " | 1372450133\n", 262 | " | 陳柏勳\n", 263 | " | 197223143437\n", 264 | " | 10161825076258438\n", 265 | " | 2022-01-04 18:16:42\n", 266 | " | 各位高手大大 小弟剛自學 python 我先在 Jupyter 寫一個檔案 並存檔為Hell...\n", 267 | " | 14\n", 268 | " | 0\n", 269 | " | 10\n", 270 | " | 2022-01-05 22:47:28\n", 271 | " | 
| 2\n", 274 | " | \"100029978773864\"\n", 275 | " | 沈宗叡\n", 276 | " | 197223143437\n", 277 | " | 10161818603678438\n", 278 | " | 2022-01-01 14:46:35\n", 279 | " | 在練習網路上的題目: https://tioj.ck.tp.edu.tw/problems/...\n", 280 | " | 15\n", 281 | " | 3\n", 282 | " | 3\n", 283 | " | 2022-01-05 22:47:28\n", 284 | " | 
| 3\n", 287 | " | 696780981\n", 288 | " | MaoYang Chien\n", 289 | " | 197223143437\n", 290 | " | 10161826718243438\n", 291 | " | 2022-01-05 11:29:06\n", 292 | " | 這個開源工具使用 python、flask 等技術開發 如果你用「專案管理」的方法來完成一堂...\n", 293 | " | 0\n", 294 | " | 9\n", 295 | " | 24\n", 296 | " | 2022-01-05 22:47:28\n", 297 | " | 
| 4\n", 300 | " | \"100006223006386\"\n", 301 | " | 劉奕德\n", 302 | " | 197223143437\n", 303 | " | 10161816780833438\n", 304 | " | 2021-12-31 23:20:43\n", 305 | " | 想詢問版上各位大大 近期在做廠房的耗電量改善 想到python可以應用在建立模型及預測 請問...\n", 306 | " | 37\n", 307 | " | 2\n", 308 | " | 29\n", 309 | " | 2022-01-05 22:47:28\n", 310 | " | 
| ...\n", 313 | " | ...\n", 314 | " | ...\n", 315 | " | ...\n", 316 | " | ...\n", 317 | " | ...\n", 318 | " | ...\n", 319 | " | ...\n", 320 | " | ...\n", 321 | " | ...\n", 322 | " | ...\n", 323 | " | 
| 3935\n", 326 | " | \"105673814305452\"\n", 327 | " | 用圖片高效學程式\n", 328 | " | 105673814305452\n", 329 | " | 10160837021273438\n", 330 | " | 2020-12-27 13:00:34\n", 331 | " | 【圖解演算法教學】Bubble Sort 大隊接力賽 這次首次嘗試以「動畫」形式,來演示Bu...\n", 332 | " | 2\n", 333 | " | 9\n", 334 | " | 14\n", 335 | " | 2022-01-05 22:47:28\n", 336 | " | 
| 3936\n", 339 | " | \"100051655785472\"\n", 340 | " | 劉昶林\n", 341 | " | 197223143437\n", 342 | " | 10160814944163438\n", 343 | " | 2020-12-19 23:56:39\n", 344 | " | 嗨各位, 我現在有個問題,我想要將yolov4(darknet) 跟 Arduino做結合,...\n", 345 | " | 34\n", 346 | " | 13\n", 347 | " | 72\n", 348 | " | 2022-01-05 22:47:28\n", 349 | " | 
| 3937\n", 352 | " | \"100002220077374\"\n", 353 | " | Hugh LI\n", 354 | " | 197223143437\n", 355 | " | 10160840505808438\n", 356 | " | 2020-12-28 20:01:25\n", 357 | " | 想問一下各位一個蠻基礎的問題 (先撇開其他條件) c=s[a:b] c[::-1] 為甚麼...\n", 358 | " | 10\n", 359 | " | 1\n", 360 | " | 12\n", 361 | " | 2022-01-05 22:47:28\n", 362 | " | 
| 3938\n", 365 | " | 504805657\n", 366 | " | Roy Hsu\n", 367 | " | 197223143437\n", 368 | " | 10160837961533438\n", 369 | " | 2020-12-27 22:27:50\n", 370 | " | 我想實現 用python 去更換 資料夾的icon ( windows ) 網路上看到這個 ...\n", 371 | " | 6\n", 372 | " | 1\n", 373 | " | 3\n", 374 | " | 2022-01-05 22:47:28\n", 375 | " | 
| 3939\n", 378 | " | \"100001849455014\"\n", 379 | " | Vivi Chen\n", 380 | " | 197223143437\n", 381 | " | 10160837910348438\n", 382 | " | 2020-12-27 22:05:17\n", 383 | " | Test 的 SayHello 函式在初始化後, 就不用再輸入'Amy' 最終目的就是只要在...\n", 384 | " | 6\n", 385 | " | 1\n", 386 | " | 9\n", 387 | " | 2022-01-05 22:47:28\n", 388 | " | 
3940 rows × 10 columns
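The repository's `sample/data/` folder stores crawled results as Parquet files. A minimal sketch of doing the same, assuming `df` is the group DataFrame previewed above and a Parquet engine (pyarrow or fastparquet) is installed; the path `data/pythontw.parquet` is only an example:

```python
import pandas as pd

def save_group_posts(df: pd.DataFrame, path: str = "data/pythontw.parquet") -> None:
    """Persist crawled group posts; Parquet keeps dtypes and reloads quickly."""
    df.to_parquet(path, index=False)

# Later sessions can reload the data without re-crawling:
# posts = pd.read_parquet("data/pythontw.parquet")
```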
\n", 392 | "