├── README.md
├── backlink_monitoring.py
└── v2
├── README.md
└── backlink_monitoring.py
/README.md:
--------------------------------------------------------------------------------
1 | # Backlink checker
2 | [Python](https://github.com/topics/python) [Web Scraping](https://github.com/topics/web-scraping)
3 |
4 | [Discord](https://discord.gg/GbxmdGhZjq)
5 |
6 | ## Table of Contents
7 |
8 | - [Packages Required](#packages-required)
9 | - [Checking backlinks](#checking-backlinks)
10 | - [STEP 1: Check if backlink is reachable](#step-1-check-if-backlink-is-reachable)
11 | - [STEP 2: Check if backlink HTML has noindex element](#step-2-check-if-backlink-html-has-noindex-element)
12 | - [STEP 3: Check if backlink HTML contains a link to a referent page](#step-3-check-if-backlink-html-contains-a-link-to-a-referent-page)
13 | - [STEP 4: Check if referent page is marked as "nofollow"](#step-4-check-if-referent-page-is-marked-as-nofollow)
14 | - [Assigning results to Pandas DataFrame](#assigning-results-to-pandas-dataframe)
15 | - [Pushing results to Slack](#pushing-results-to-slack)
16 |
17 |
18 | Backlink checker is a simple tool that checks backlink quality, identifies problematic backlinks, and reports them to a specific [Slack](https://slack.com/) channel.
19 |
20 | The tool tries to reach a backlink, which is supposed to contain a referent link, and checks whether it actually does. If the backlink is reachable, the tool retrieves its HTML and checks for certain elements that indicate the quality of the backlink.
21 |
22 | ## Packages Required
23 |
24 | The first step is to prepare the environment. The backlink checker is written in Python. The most common Python packages for building a web crawling tool are Requests and Beautiful Soup 4, a library for pulling data out of HTML. Also, make sure the Pandas package is installed, as it will be used for some simple data wrangling.
25 |
26 | These packages can be installed using the `pip install` command.
27 |
30 |
31 | ```bash
32 | pip install beautifulsoup4 requests pandas
33 | ```
34 |
35 | This will install all three required packages.
36 |
37 | Important: Note that version 4 of [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) is being installed here. Earlier versions are now obsolete.
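
Note that the package installs as `beautifulsoup4` but is imported as `bs4`. A quick sanity check that all three imports work:

```python
import requests
import pandas as pd
from bs4 import BeautifulSoup

# Parse a trivial snippet to confirm Beautiful Soup is usable
print(BeautifulSoup('<p>ok</p>', 'html.parser').p.text)  # prints "ok"
```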
38 |
39 | ## Checking backlinks
40 |
41 | The script scrapes backlink websites and checks for several backlink quality signs:
42 | - whether the backlink is reachable
43 | - whether the backlink contains a _noindex_ element
44 | - whether the backlink contains a link to a referent page
45 | - whether the link to the referent page is marked as _nofollow_
46 |
47 | ### STEP 1: Check if backlink is reachable
48 |
49 | The first step is to try to reach the backlink. This can be done using the Requests library's `get()` method.
50 |
51 | ```python
52 | try:
53 | resp = requests.get(
54 | backlink,
55 | allow_redirects=True
56 | )
57 | except Exception as e:
58 | return ("Backlink not reachable", "None")
59 |
60 | response_code = resp.status_code
61 | if response_code != 200:
62 | return ("Backlink not reachable", response_code)
63 | ```
64 |
65 | If the request returns an error (such as `404 Not Found`) or the backlink cannot be reached at all, the backlink is assigned the _Backlink not reachable_ status.
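
Note that `requests.get()` waits indefinitely by default, so a single unresponsive backlink can stall the whole run. A minimal variation of the request above that guards against this (the 10-second limit is an arbitrary choice):

```python
try:
    resp = requests.get(
        backlink,
        allow_redirects=True,
        timeout=10  # seconds; arbitrary, tune to your needs
    )
except requests.RequestException:
    return ("Backlink not reachable", "None")
```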
66 |
67 | ### STEP 2: Check if backlink HTML has `noindex` element
68 |
69 | To navigate the HTML of a backlink, a Beautiful Soup object needs to be created.
70 |
71 | ```python
72 | bsObj = BeautifulSoup(resp.content, 'lxml', from_encoding=encoding)
73 | ```
74 |
75 | Note that if you do not have lxml installed already, you can install it by running `pip install lxml`.
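
The `encoding` argument passed to `BeautifulSoup` above is resolved earlier in the complete script: it prefers the encoding declared in the HTML itself and falls back to the HTTP `Content-Type` header.

```python
from bs4.dammit import EncodingDetector

# Encoding from the HTTP header, if the Content-Type declares a charset
http_encoding = resp.encoding if 'charset' in resp.headers.get(
    'content-type', '').lower() else None
# Encoding declared inside the HTML document itself (e.g. <meta charset="...">)
html_encoding = EncodingDetector.find_declared_encoding(
    resp.content, is_html=True)
encoding = html_encoding or http_encoding
```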
76 |
77 | Beautiful Soup's `find_all()` method can be used to check whether the HTML contains `meta` tags whose `content` attribute includes `noindex`, for example `<meta name="robots" content="noindex">`. If it does, let's assign the _Noindex_ status to that backlink.
78 |
79 | ```python
80 | if len(bsObj.find_all('meta', content=re.compile("noindex"))) > 0:
81 |     return ('Noindex', response_code)
82 | ```
83 |
84 | ### STEP 3: Check if backlink HTML contains a link to a referent page
85 |
86 | Next, check whether the HTML contains an anchor tag (`<a>`) whose `href` points to the referent link. If no referent link is found, let's assign the _Link was not found_ status to that particular backlink.
87 |
88 | ```python
89 | elements = bsObj.find_all('a', href=re.compile(re.escape(our_link)))
90 | if not elements:
91 |     return ('Link was not found', response_code)
92 | ```
93 |
94 | ### STEP 4: Check if referent page is marked as `nofollow`
95 |
96 | Finally, let's check whether the HTML element containing the link to the referent page is marked as `nofollow`. This marker is found in the element's `rel` attribute, for example `<a href="https://example.com" rel="nofollow">`.
97 |
98 | ```python
99 | try:
100 | if 'nofollow' in element['rel']:
101 | return ('Link found, nofollow', response_code)
102 | except KeyError:
103 | return ('Link found, dofollow', response_code)
104 | ```
105 |
106 | Based on the result, let's assign either the _Link found, nofollow_ or the _Link found, dofollow_ status. Note that if the element has no `rel` attribute at all, accessing it raises a `KeyError`, which also means the link is dofollow.
107 |
108 |
109 | ## Assigning results to Pandas DataFrame
110 |
111 | After getting a status for each backlink and referent link pair, let's collect this information (along with the response code from the backlink) into a pandas DataFrame.
112 |
113 | ```python
114 | # Build the rows first, then create the DataFrame in one go
115 | # (DataFrame.append was removed in pandas 2.0)
116 | rows = []
117 | for backlink, referent_link in zip(backlinks_list, referent_links_list):
118 |     status, response_code = get_page(backlink, referent_link)
119 |     rows.append([backlink, status, response_code])
120 | 
121 | df = pd.DataFrame(rows, columns=['Backlink', 'Status', 'Response code'])
122 | ```
123 |
124 | The `get_page()` function implements the four-step process described above (please see the complete code for a better understanding).
125 |
126 |
127 | ## Pushing results to Slack
128 |
129 | In order to automatically report backlinks and their statuses in a convenient way, a Slack app can be used. You will need to create an app in Slack and set up an incoming webhook to connect it to the Slack channel you would like to post notifications to. More on Slack apps and webhooks: https://api.slack.com/messaging/webhooks
130 |
131 | ```python
132 | SLACK_WEBHOOK = "YOUR_SLACK_CHANNEL_WEBHOOK"
133 | ```
134 |
135 | Although the following piece of code may look a bit complicated, all it does is format the data into a readable message and push it to the Slack channel via a POST request to the Slack webhook.
136 |
137 | ```python
138 | cols = df.columns.tolist()
139 | dict_df = df.to_dict()
140 | rows = []
141 |
142 | # Format each row as a line of inline code blocks
143 | for i in range(len(df)):
144 |     row = ''
145 |     for col in cols:
146 |         row += "`" + str(dict_df[col][i]) + "` "
147 |     row = ':black_small_square:' + row
148 |     rows.append(row)
149 |
150 | data = ["*Backlinks*\n"] + rows
151 |
152 | slack_data = {
153 |     "text": '\n'.join(data)
154 | }
155 |
156 | requests.post(SLACK_WEBHOOK, json=slack_data)
157 | ```
158 |
159 |
160 | That's it! In this example, Slack was used for reporting purposes, but it is possible to adjust the code so that backlinks and their statuses are exported to a .csv file, Google Sheets, or a database.
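
For example, saving the results to a .csv file takes a single pandas call (the complete script does exactly this):

```python
df.to_csv('statuses.csv', index=False)
```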
161 |
162 | Please see [backlink_monitoring.py](https://github.com/oxylabs/backlink-monitoring/blob/main/backlink_monitoring.py) for the complete code.
163 |
164 |
--------------------------------------------------------------------------------
/backlink_monitoring.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 |
3 | ###############################
4 | # This script scrapes backlink websites and checks that each backlink:
5 | # a) gives a valid response
6 | # b) contains a link to our websites in its HTML
7 | # c) has no "noindex" element in its HTML
8 | # d) has no "nofollow" tag on our link's element in the HTML
9 | # Info about backlinks is then saved to a local .csv file. Also, a notification is pushed to a Slack channel with potentially problematic backlinks and their statuses
10 | ###############################
11 |
12 | # Import BeautifulSoup - a library for pulling data out of HTML
13 | from bs4 import BeautifulSoup
14 | from bs4.dammit import EncodingDetector
15 |
16 | # Import other standard Python libraries that will be used later
17 | import requests
18 | import pandas as pd
19 | from datetime import datetime
20 | import traceback
21 | import re
25 |
26 | # More on Slack apps and webhooks: https://api.slack.com/messaging/webhooks
27 | SLACK_WEBHOOK = "YOUR_SLACK_CHANNEL_WEBHOOK"
28 |
29 |
30 | def get_page(website, our_link):
31 | # Add "http://" prefix to website if there is no protocol defined for a backlink
32 |     if not website.startswith('http'):
33 |         website = 'http://' + website
34 |
35 | # Get contents of a website
36 | try:
37 | resp = requests.get(
38 | website,
39 | allow_redirects=True
40 | )
41 | except Exception as e:
42 | return ("Website not reachable", "None")
43 |
44 | # Check if a website is reachable
45 | response_code = resp.status_code
46 | if response_code != 200:
47 | return ("Website not reachable", response_code)
48 |
49 | # Try to find encoding, decode content and load it to BeautifulSoup object
50 | http_encoding = resp.encoding if 'charset' in resp.headers.get(
51 | 'content-type', '').lower() else None
52 | html_encoding = EncodingDetector.find_declared_encoding(
53 | resp.content, is_html=True)
54 | encoding = html_encoding or http_encoding
55 | bsObj = BeautifulSoup(resp.content, 'lxml', from_encoding=encoding)
56 |
57 | # Check if website's HTML contains "noindex" element
58 |     if len(bsObj.find_all('meta', content=re.compile("noindex"))) > 0:
59 |         return ('Noindex', response_code)
60 |
61 |     # Check if HTML contains a link to your page
62 |     # Strip the scheme and trailing slash so both http and https versions match
63 |     our_link = re.sub(r'^https?:', '', our_link).rstrip('/')
64 |     elements = bsObj.find_all('a', href=re.compile(re.escape(our_link)))
65 |     if not elements:
66 |         return ('Link was not found', response_code)
67 |     element = elements[0]
68 |
69 | # Find your link and check if its element contains a "nofollow" tag
70 | try:
71 | if 'nofollow' in element['rel']:
72 | return ('Link found, nofollow', response_code)
73 | except KeyError:
74 | return ('Link found, dofollow', response_code)
75 |
76 | # If every check is passed, return "Link found, dofollow" status
77 | return ('Link found, dofollow', response_code)
78 |
79 |
80 | def push_to_slack(df, webhook_url=SLACK_WEBHOOK):
81 | df.reset_index(drop=True, inplace=True)
82 |
83 | # Align text beautifully
84 | for col in df.columns.tolist():
85 | max_len = len(max(df[col].astype(str).tolist(), key=len))
86 | for value in df[col].tolist():
87 | df.loc[df[col].astype(str) == str(value), col] = str(
88 | value) + (max_len - len(str(value)))*" "
89 |
90 |     cols = df.columns.tolist()
91 |     dict_df = df.to_dict()
92 |     rows = []
94 |
95 | for i in range(len(df)):
96 | row = ''
97 | for col in cols:
98 | row += "`" + str(dict_df[col][i]) + "` "
99 | row = ':black_small_square:' + row
100 | rows.append(row)
101 |
102 |     if not rows:
103 |         rows = ['\n' + "`" + "No problematic backlinks were found" + "`"]
104 |
105 |     data = ["*Problematic backlinks*\n"] + rows
106 |
107 | slack_data = {
108 | "text": '\n'.join(data)
109 | }
110 |
111 | response = requests.post(webhook_url, json=slack_data)
112 |
113 | return df
114 |
115 |
116 | def main():
117 | # Wrap everything in try-except clause in order to catch all errors
118 | try:
119 | # 1) Get backlinks and our links
120 | websites = ['https://example.com', 'www.example.co.uk',
121 | 'http://www.geekscab.com/2019/07/how-much-information-can-ip-address.html'] # A LIST OF BACKLINKS
122 | our_links = ['https://oxylabs.io/blog/what-is-web-scraping', 'https://oxylabs.io/blog/the-difference-between-data-center-and-residential-proxies',
123 | 'https://oxylabs.io/blog/what-is-proxy'] # A LIST OF YOUR LINKS
124 |
125 | # 2) Scrape a website and check if:
126 | # a) gives a valid response (not a blank / error)
127 | # b) its HTML contains a link to our websites
128 |         # c) there is no "nofollow" tag in our link's element
129 |
130 |         # Build the rows first, then create the DataFrame in one go
131 |         # (DataFrame.append was removed in pandas 2.0)
132 |         rows = []
133 |         print('Scraping and parsing data from websites...')
134 |         for website, our_link in zip(websites, our_links):
135 |             status, response_code = get_page(website, our_link)
136 |             rows.append([website, status, response_code])
137 |
138 |         df = pd.DataFrame(
139 |             rows, columns=['Website', 'Status', 'Response code'])
140 |
141 | # Save to .csv file locally
142 | df.to_csv('statuses.csv', index=None)
143 |
144 |         # 3) Push notification to Slack
145 | print('Pushing notification to Slack...')
146 | df_to_push = df.loc[df['Status'] !=
147 | 'Link found, dofollow'][['Website', 'Status']]
148 | push_to_slack(df_to_push)
149 |
150 |     except Exception:
151 | tb = traceback.format_exc()
152 | print(str(datetime.now()) + '\n' + tb + '\n')
153 | return -1
154 |
155 | return 1
156 |
157 |
158 | if __name__ == '__main__':
159 | main()
160 |
--------------------------------------------------------------------------------
/v2/README.md:
--------------------------------------------------------------------------------
1 | # Backlink checker
2 | [Python](https://github.com/topics/python) [Web Scraping](https://github.com/topics/web-scraping)
3 |
4 | ## Table of Contents
5 |
6 | - [Packages Required](#packages-required)
7 | - [Checking backlinks](#checking-backlinks)
8 | - [STEP 1: Check if backlink is reachable](#step-1-check-if-backlink-is-reachable)
9 | - [STEP 2: Check if backlink HTML has noindex element](#step-2-check-if-backlink-html-has-noindex-element)
10 | - [STEP 3: Check if backlink HTML contains a link to a referent page](#step-3-check-if-backlink-html-contains-a-link-to-a-referent-page)
11 | - [STEP 4: Check if referent page is marked as "nofollow"](#step-4-check-if-referent-page-is-marked-as-nofollow)
12 | - [Assigning results to Pandas DataFrame](#assigning-results-to-pandas-dataframe)
13 | - [Pushing results to Slack](#pushing-results-to-slack)
14 |
15 |
16 | Backlink checker is a simple tool that checks backlink quality, identifies problematic backlinks, and reports them to a specific [Slack](https://slack.com/) channel.
17 |
18 | The tool tries to reach a backlink, which is supposed to contain a referent link, and checks whether it actually does. If the backlink is reachable, the tool retrieves its HTML and checks for certain elements that indicate the quality of the backlink.
19 |
20 | ## Packages Required
21 |
22 | The first step is to prepare the environment. The backlink checker is written in Python. The most common Python packages for building a web crawling tool are Requests and Beautiful Soup 4, a library for pulling data out of HTML. Also, make sure the Pandas package is installed, as it will be used for some simple data wrangling.
23 |
24 | These packages can be installed using the `pip install` command.
25 |
28 |
29 | ```bash
30 | pip install beautifulsoup4 requests pandas
31 | ```
32 |
33 | This will install all three required packages.
34 |
35 | Important: Note that version 4 of [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) is being installed here. Earlier versions are now obsolete.
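
Note that the package installs as `beautifulsoup4` but is imported as `bs4`. A quick sanity check that all three imports work:

```python
import requests
import pandas as pd
from bs4 import BeautifulSoup

# Parse a trivial snippet to confirm Beautiful Soup is usable
print(BeautifulSoup('<p>ok</p>', 'html.parser').p.text)  # prints "ok"
```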
36 |
37 | ## Checking backlinks
38 |
39 | The script scrapes backlink websites and checks for several backlink quality signs:
40 | - whether the backlink is reachable
41 | - whether the backlink contains a _noindex_ element
42 | - whether the backlink contains a link to a referent page
43 | - whether the link to the referent page is marked as _nofollow_
44 |
45 | ### STEP 1: Check if backlink is reachable
46 |
47 | The first step is to try to reach the backlink. This can be done using the Requests library's `get()` method.
48 |
49 | ```python
50 | try:
51 | resp = requests.get(
52 | backlink,
53 | allow_redirects=True
54 | )
55 | except Exception as e:
56 | return ("Backlink not reachable", "None")
57 |
58 | response_code = resp.status_code
59 | if response_code != 200:
60 | return ("Backlink not reachable", response_code)
61 | ```
62 |
63 | If the request returns an error (such as `404 Not Found`) or the backlink cannot be reached at all, the backlink is assigned the _Backlink not reachable_ status.
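
Note that `requests.get()` waits indefinitely by default, so a single unresponsive backlink can stall the whole run. A minimal variation of the request above that guards against this (the 10-second limit is an arbitrary choice):

```python
try:
    resp = requests.get(
        backlink,
        allow_redirects=True,
        timeout=10  # seconds; arbitrary, tune to your needs
    )
except requests.RequestException:
    return ("Backlink not reachable", "None")
```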
64 |
65 | ### STEP 2: Check if backlink HTML has `noindex` element
66 |
67 | To navigate the HTML of a backlink, a Beautiful Soup object needs to be created.
68 |
69 | ```python
70 | bsObj = BeautifulSoup(resp.content, 'lxml', from_encoding=encoding)
71 | ```
72 |
73 | Note that if you do not have lxml installed already, you can install it by running `pip install lxml`.
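
The `encoding` argument passed to `BeautifulSoup` above is resolved earlier in the complete script: it prefers the encoding declared in the HTML itself and falls back to the HTTP `Content-Type` header.

```python
from bs4.dammit import EncodingDetector

# Encoding from the HTTP header, if the Content-Type declares a charset
http_encoding = resp.encoding if 'charset' in resp.headers.get(
    'content-type', '').lower() else None
# Encoding declared inside the HTML document itself (e.g. <meta charset="...">)
html_encoding = EncodingDetector.find_declared_encoding(
    resp.content, is_html=True)
encoding = html_encoding or http_encoding
```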
74 |
75 | Beautiful Soup's `find_all()` method can be used to check whether the HTML contains `meta` tags whose `content` attribute includes `noindex`, for example `<meta name="robots" content="noindex">`. If it does, let's assign the _Noindex_ status to that backlink.
76 |
77 | ```python
78 | if len(bsObj.find_all('meta', content=re.compile("noindex"))) > 0:
79 |     return ('Noindex', response_code)
80 | ```
81 |
82 | ### STEP 3: Check if backlink HTML contains a link to a referent page
83 |
84 | Next, check whether the HTML contains an anchor tag (`<a>`) whose `href` points to the referent link. If no referent link is found, let's assign the _Link was not found_ status to that particular backlink.
85 |
86 | ```python
87 | elements = bsObj.find_all('a', href=re.compile(re.escape(our_link)))
88 | if not elements:
89 |     return ('Link was not found', response_code)
90 | ```
91 |
92 | ### STEP 4: Check if referent page is marked as `nofollow`
93 |
94 | Finally, let's check whether the HTML element containing the link to the referent page is marked as `nofollow`. This marker is found in the element's `rel` attribute, for example `<a href="https://example.com" rel="nofollow">`.
95 |
96 | ```python
97 | try:
98 | if 'nofollow' in element['rel']:
99 | return ('Link found, nofollow', response_code)
100 | except KeyError:
101 | return ('Link found, dofollow', response_code)
102 | ```
103 |
104 | Based on the result, let's assign either the _Link found, nofollow_ or the _Link found, dofollow_ status. Note that if the element has no `rel` attribute at all, accessing it raises a `KeyError`, which also means the link is dofollow.
105 |
106 |
107 | ## Assigning results to Pandas DataFrame
108 |
109 | After getting a status for each backlink and referent link pair, let's collect this information (along with the response code from the backlink) into a pandas DataFrame.
110 |
111 | ```python
112 | # Build the rows first, then create the DataFrame in one go
113 | # (DataFrame.append was removed in pandas 2.0)
114 | rows = []
115 | for backlink, referent_link in zip(backlinks_list, referent_links_list):
116 |     status, response_code = get_page(backlink, referent_link)
117 |     rows.append([backlink, status, response_code])
118 | 
119 | df = pd.DataFrame(rows, columns=['Backlink', 'Status', 'Response code'])
120 | ```
121 |
122 | The `get_page()` function implements the four-step process described above (please see the complete code for a better understanding).
123 |
124 |
125 | ## Pushing results to Slack
126 |
127 | In order to automatically report backlinks and their statuses in a convenient way, a Slack app can be used. You will need to create an app in Slack and set up an incoming webhook to connect it to the Slack channel you would like to post notifications to. More on Slack apps and webhooks: https://api.slack.com/messaging/webhooks
128 |
129 | ```python
130 | SLACK_WEBHOOK = "YOUR_SLACK_CHANNEL_WEBHOOK"
131 | ```
132 |
133 | Although the following piece of code may look a bit complicated, all it does is format the data into a readable message and push it to the Slack channel via a POST request to the Slack webhook.
134 |
135 | ```python
136 | cols = df.columns.tolist()
137 | dict_df = df.to_dict()
138 | rows = []
139 |
140 | # Format each row as a line of inline code blocks
141 | for i in range(len(df)):
142 |     row = ''
143 |     for col in cols:
144 |         row += "`" + str(dict_df[col][i]) + "` "
145 |     row = ':black_small_square:' + row
146 |     rows.append(row)
147 |
148 | data = ["*Backlinks*\n"] + rows
149 |
150 | slack_data = {
151 |     "text": '\n'.join(data)
152 | }
153 |
154 | requests.post(SLACK_WEBHOOK, json=slack_data)
155 | ```
156 |
157 |
158 | That's it! In this example, Slack was used for reporting purposes, but it is possible to adjust the code so that backlinks and their statuses are exported to a .csv file, Google Sheets, or a database.
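
For example, saving the results to a .csv file takes a single pandas call:

```python
df.to_csv('statuses.csv', index=False)
```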
159 |
160 | Please see [backlink_monitoring.py](backlink_monitoring.py) for the complete code.
161 |
--------------------------------------------------------------------------------
/v2/backlink_monitoring.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 |
3 | from bs4 import BeautifulSoup
4 | from bs4.dammit import EncodingDetector
5 | import requests
6 | import pandas as pd
7 | from datetime import datetime
8 | import traceback
9 | import re
10 |
11 | SLACK_WEBHOOK = "YOUR_SLACK_CHANNEL_WEBHOOK" # REPLACE WITH YOUR WEBHOOK
12 |
13 | def get_page(backlink, referent_link):
14 | # Add "http://" prefix to backlink if there is no protocol defined for a backlink
15 |     if not backlink.startswith('http'):
16 |         backlink = 'http://' + backlink
17 |
18 | try:
19 | resp = requests.get(
20 | backlink,
21 | allow_redirects=True
22 | )
23 | except Exception as e:
24 | return ("Backlink not reachable", "None")
25 |
26 | response_code = resp.status_code
27 | if response_code != 200:
28 | return ("Backlink not reachable", response_code)
29 |
30 | # Try to find encoding, decode content and load it to BeautifulSoup object
31 | http_encoding = resp.encoding if 'charset' in resp.headers.get('content-type', '').lower() else None
32 | html_encoding = EncodingDetector.find_declared_encoding(resp.content, is_html=True)
33 | encoding = html_encoding or http_encoding
34 | bsObj = BeautifulSoup(resp.content, 'lxml', from_encoding=encoding)
35 |
36 |     if len(bsObj.find_all('meta', content=re.compile("noindex"))) > 0:
37 |         return ('Noindex', response_code)
38 |
39 |     # Strip the scheme and trailing slash so both http and https versions match
40 |     referent_link = re.sub(r'^https?:', '', referent_link).rstrip('/')
41 |     elements = bsObj.find_all('a', href=re.compile(re.escape(referent_link)))
42 |     if not elements:
43 |         return ('Link was not found', response_code)
44 |     element = elements[0]
45 |
46 | try:
47 | if 'nofollow' in element['rel']:
48 | return ('Link found, nofollow', response_code)
49 | except KeyError:
50 | return ('Link found, dofollow', response_code)
51 |
52 | # If every check is passed, return "Link found, dofollow" status
53 | return ('Link found, dofollow', response_code)
54 |
55 |
56 | def push_to_slack(df, webhook_url=SLACK_WEBHOOK):
57 |     df.reset_index(drop=True, inplace=True)
58 |
59 | # Align text beautifully
60 | for col in df.columns.tolist():
61 | max_len = len(max(df[col].astype(str).tolist(), key=len))
62 | for value in df[col].tolist():
63 | df.loc[df[col].astype(str) == str(value), col] = str(value) + (max_len - len(str(value)))*" "
64 |
65 |     cols = df.columns.tolist()
66 |     dict_df = df.to_dict()
67 |     rows = []
69 |
70 | for i in range(len(df)):
71 | row = ''
72 | for col in cols:
73 | row += "`" + str(dict_df[col][i]) + "` "
74 | row = ':black_small_square:' + row
75 | rows.append(row)
76 |
77 | data = ["*" + "Backlinks" "*\n"] + rows
78 |
79 | slack_data = {
80 | "text": '\n'.join(data)
81 | }
82 |
83 |     requests.post(webhook_url, json=slack_data)
84 |
85 | return df
86 |
87 |
88 | def main():
89 | # Wrap everything in try-except clause in order to catch all errors
90 | try:
91 | backlinks_list = ['https://example.com', 'www.example.co.uk', 'http://www.test.com'] # REPLACE WITH YOUR BACKLINKS
92 | referent_links_list = ['https://oxylabs.io/blog/what-is-web-scraping', 'https://oxylabs.io/blog/the-difference-between-data-center-and-residential-proxies', 'https://oxylabs.io/blog/what-is-proxy'] # REPLACE WITH YOUR REFERENT LINKS
93 |
94 |         # Build the rows first, then create the DataFrame in one go
95 |         # (DataFrame.append was removed in pandas 2.0)
96 |         rows = []
97 |         print('Scraping and parsing data from backlinks...')
98 |         for backlink, referent_link in zip(backlinks_list, referent_links_list):
99 |             status, response_code = get_page(backlink, referent_link)
100 |             rows.append([backlink, status, response_code])
101 |
102 |         df = pd.DataFrame(
103 |             rows, columns=['Backlink', 'Status', 'Response code'])
104 |
105 | print('Pushing notification to Slack...')
106 | push_to_slack(df)
107 |
108 |     except Exception:
109 | tb = traceback.format_exc()
110 | print(str(datetime.now()) + '\n' + tb + '\n')
111 | return -1
112 |
113 | return 1
114 |
115 | if __name__ == '__main__':
116 | main()
117 |
--------------------------------------------------------------------------------