├── DataFiles
│   ├── 1.Benign_list_big_final.csv
│   ├── 2.online-valid.csv
│   ├── 3.legitimate.csv
│   ├── 4.phishing.csv
│   ├── 5.urldata.csv
│   └── README.md
├── Phishing Website Detection by Machine Learning Techniques Presentation.pdf
├── Phishing Website Detection_Models & Training.ipynb
├── README.md
├── URL Feature Extraction.ipynb
├── URLFeatureExtraction.py
└── XGBoostClassifier.pickle.dat
/DataFiles/README.md:
--------------------------------------------------------------------------------
1 | # Data Files
2 |
3 | This folder contains the raw & extracted data files of this project. The description of each file is as follows:
4 |
5 | * [1.Benign_list_big_final.csv:](https://github.com/shreyagopal/Phishing-Website-Detection-by-Machine-Learning-Techniques/blob/master/DataFiles/1.Benign_list_big_final.csv) This file has a list of legitimate URLs. The total count is 35,300. The source of the dataset is the University of New Brunswick, https://www.unb.ca/cic/datasets/url-2016.html.
6 |
7 | * [2.online-valid.csv:](https://github.com/shreyagopal/Phishing-Website-Detection-by-Machine-Learning-Techniques/blob/master/DataFiles/2.online-valid.csv) This file is downloaded from the open-source service called PhishTank. This service provides a set of phishing URLs in multiple formats like CSV, JSON, etc. that is updated hourly. To download the latest data: https://www.phishtank.com/developer_info.php.
8 |
9 | * [3.legitimate.csv:](https://github.com/shreyagopal/Phishing-Website-Detection-by-Machine-Learning-Techniques/blob/master/DataFiles/3.legitimate.csv) This file has the extracted features of the 5000 legitimate URLs which are randomly selected from the '1.Benign_list_big_final.csv' file.
10 |
11 | * [4.phishing.csv](https://github.com/shreyagopal/Phishing-Website-Detection-by-Machine-Learning-Techniques/blob/master/DataFiles/4.phishing.csv) This file has the extracted features of the 5000 phishing URLs which are randomly selected from the '2.online-valid.csv' file.
12 |
13 | * [5.urldata.csv](https://github.com/shreyagopal/Phishing-Website-Detection-by-Machine-Learning-Techniques/blob/master/DataFiles/5.urldata.csv) This file is the combination of the above two files. It contains the extracted features of all 10,000 URLs, both legitimate & phishing.
14 |
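For reference, here is a minimal sketch of how '5.urldata.csv' can be rebuilt from the two feature files above (it assumes the CSVs are in the working directory):

```python
import pandas as pd

# load the two 5,000-row feature files
legitimate = pd.read_csv('3.legitimate.csv')
phishing = pd.read_csv('4.phishing.csv')

# stack them into the combined 10,000-URL dataset
urldata = pd.concat([legitimate, phishing]).reset_index(drop=True)
urldata.to_csv('5.urldata.csv', index=False)
```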
--------------------------------------------------------------------------------
/Phishing Website Detection by Machine Learning Techniques Presentation.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/shreyagopal/Phishing-Website-Detection-by-Machine-Learning-Techniques/3cf68511e4a85988180be1d41687c7980181221c/Phishing Website Detection by Machine Learning Techniques Presentation.pdf
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Phishing Website Detection by Machine Learning Techniques
2 |
3 | ## Objective
4 | A phishing website is a common social engineering method that mimics trustworthy uniform resource locators (URLs) and webpages. The objective of this project is to train machine learning models and deep neural nets on the created dataset to predict phishing websites. Both phishing and benign URLs of websites are gathered to form a dataset, and from them the required URL- and website-content-based features are extracted. The performance level of each model is measured and compared.
5 |
6 | ## Data Collection
7 | The set of phishing URLs is collected from the open-source service called **PhishTank**. This service provides a set of phishing URLs in multiple formats like CSV, JSON, etc. that is updated hourly. To download the data: https://www.phishtank.com/developer_info.php. From this dataset, 5000 random phishing URLs are collected to train the ML models.
8 |
9 | The legitimate URLs are obtained from the open datasets of the University of New Brunswick, https://www.unb.ca/cic/datasets/url-2016.html. This dataset has a collection of benign, spam, phishing, malware & defacement URLs. Out of all these types, the benign URL dataset is considered for this project. From this dataset, 5000 random legitimate URLs are collected to train the ML models.
10 |
11 | The above-mentioned datasets are uploaded to the '[DataFiles](https://github.com/shreyagopal/Phishing-Website-Detection-by-Machine-Learning-Techniques/tree/master/DataFiles)' folder of this repository.
12 |
13 | ## Feature Extraction
14 | The below-mentioned categories of features are extracted from the URL data:
15 |
16 | 1. Address Bar based Features
17 | In this category 9 features are extracted.
18 | 2. Domain based Features
19 | In this category 4 features are extracted.
20 | 3. HTML & Javascript based Features
21 | In this category 4 features are extracted.
22 |
23 | *The details pertaining to these features are mentioned in [URL Feature Extraction.ipynb](https://github.com/shreyagopal/Phishing-Website-Detection-by-Machine-Learning-Techniques/blob/master/URL%20Feature%20Extraction.ipynb).*
24 |
25 | So, altogether 17 features are extracted from the 10,000-URL dataset and are stored in the '[5.urldata.csv](https://github.com/shreyagopal/Phishing-Website-Detection-by-Machine-Learning-Techniques/blob/master/DataFiles/5.urldata.csv)' file in the DataFiles folder.
26 | The features are referenced from https://archive.ics.uci.edu/ml/datasets/Phishing+Websites.
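
A quick look at the final dataset (a minimal sketch; it assumes the repository has been cloned so the relative path resolves):

```python
import pandas as pd

# load the combined feature set of 10,000 URLs
urldata = pd.read_csv('DataFiles/5.urldata.csv')
print(urldata.shape)             # expected: 10,000 rows
print(urldata.columns.tolist())  # the extracted features (plus, presumably, the class label)
```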
27 |
28 | ## Models & Training
29 |
30 | Before starting the ML model training, the data is split 80-20, i.e., 8000 training samples & 2000 testing samples. From the dataset, it is clear that this is a supervised machine learning task. There are two major types of supervised machine learning problems, called classification and regression.
31 |
32 | This dataset comes under a classification problem, as the input URL is classified as phishing (1) or legitimate (0). The supervised machine learning (classification) models considered to train the dataset in this project are:
33 |
34 | * Decision Tree
35 | * Random Forest
36 | * Multilayer Perceptrons
37 | * XGBoost
38 | * Autoencoder Neural Network
39 | * Support Vector Machines
40 |
41 | All these models are trained on the dataset, and each model is evaluated on the test dataset. The elaborate details of the models & their training are mentioned in [Phishing Website Detection_Models & Training.ipynb](https://github.com/shreyagopal/Phishing-Website-Detection-by-Machine-Learning-Techniques/blob/master/Phishing%20Website%20Detection_Models%20%26%20Training.ipynb).
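
A minimal sketch of this split-train-evaluate workflow (the column names 'Domain' and 'Label' are assumptions about the extracted dataset, and XGBoost stands in for any of the listed models):

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

urldata = pd.read_csv('DataFiles/5.urldata.csv')

# 'Domain' is a non-numeric identifier and 'Label' is the target (assumed names)
X = urldata.drop(['Domain', 'Label'], axis=1)
y = urldata['Label']

# 80-20 split: 8000 training samples & 2000 testing samples
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=12)

model = XGBClassifier()
model.fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))
```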
42 |
43 | ## Presentation
44 |
45 | The short video presentation for this project is at https://youtu.be/I1refTZp-pg.
46 | The slide deck used in this video is [Phishing Website Detection by Machine Learning Techniques Presentation.pdf](https://github.com/shreyagopal/Phishing-Website-Detection-by-Machine-Learning-Techniques/blob/master/Phishing%20Website%20Detection%20by%20Machine%20Learning%20Techniques%20Presentation.pdf).
47 |
48 | ## End Results
49 | From the obtained results of the above models, the XGBoost classifier has the highest model performance of 86.4%. So the model is saved to the file '[XGBoostClassifier.pickle.dat](https://github.com/shreyagopal/Phishing-Website-Detection-by-Machine-Learning-Techniques/blob/master/XGBoostClassifier.pickle.dat)'.
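
A minimal sketch of reusing the saved model (the import path and the exact `featureExtraction` signature are assumptions based on `URLFeatureExtraction.py`):

```python
import pickle
from URLFeatureExtraction import featureExtraction  # assumed helper from this repo

# load the persisted XGBoost classifier
with open('XGBoostClassifier.pickle.dat', 'rb') as f:
    model = pickle.load(f)

url = 'http://example.com/some/page'  # illustrative URL
features = featureExtraction(url)     # assumed to return the model's feature vector
print(model.predict([features])[0])   # 1 = phishing, 0 = legitimate
```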
50 |
51 | ### Next Steps
52 |
53 | This project can be further extended to the creation of a browser extension, or a GUI can be developed which takes a URL and predicts its nature, i.e., legitimate or phishing. *As of now, I am working towards the creation of a browser extension for this project, and may even try the GUI option as well.* Further developments will be updated at the earliest.
54 |
--------------------------------------------------------------------------------
/URL Feature Extraction.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "nbformat": 4,
3 | "nbformat_minor": 0,
4 | "metadata": {
5 | "colab": {
6 | "name": "URL Feature Extraction.ipynb",
7 | "provenance": [],
8 | "collapsed_sections": [],
9 | "toc_visible": true
10 | },
11 | "kernelspec": {
12 | "name": "python3",
13 | "display_name": "Python 3"
14 | },
15 | "accelerator": "GPU"
16 | },
17 | "cells": [
18 | {
19 | "cell_type": "markdown",
20 | "metadata": {
21 | "id": "x8WZXl8z-1VC",
22 | "colab_type": "text"
23 | },
24 | "source": [
25 | "# **Phishing Website Detection Feature Extraction**\n",
26 | "\n",
27 | "*Final project of AI & Cybersecurity Course*"
28 | ]
29 | },
30 | {
31 | "cell_type": "markdown",
32 | "metadata": {
33 | "id": "SUwXaC2JiIMF",
34 | "colab_type": "text"
35 | },
36 | "source": [
37 | "# **1. Objective:**\n",
38 |         "A phishing website is a common social engineering method that mimics trustworthy uniform resource locators (URLs) and webpages. The objective of this notebook is to collect data & extract the selected features from the URLs.\n",
39 | "\n",
40 |         "*This project is worked on in Google Colaboratory.*\n",
41 | "\n"
42 | ]
43 | },
44 | {
45 | "cell_type": "markdown",
46 | "metadata": {
47 | "id": "pkIGg-nGiqEO",
48 | "colab_type": "text"
49 | },
50 | "source": [
51 | "# **2. Collecting the Data:**\n",
52 |         "For this project, we need a collection of URLs of type legitimate (0) and phishing (1). \n",
53 | "\n",
54 |         "The collection of phishing URLs is rather easy because of the open-source service called PhishTank. This service provides a set of phishing URLs in multiple formats like CSV, JSON, etc. that is updated hourly. To download the data: https://www.phishtank.com/developer_info.php\n",
55 | "\n",
56 |         "For the legitimate URLs, I found a source that has a collection of benign, spam, phishing, malware & defacement URLs. The source of the dataset is the University of New Brunswick, https://www.unb.ca/cic/datasets/url-2016.html. The number of legitimate URLs in this collection is 35,300. The URL collection is downloaded & from that, *'Benign_list_big_final.csv'* is the file of our interest. This file is then uploaded to Colab for the feature extraction. \n"
57 | ]
58 | },
59 | {
60 | "cell_type": "markdown",
61 | "metadata": {
62 | "id": "SRyCQJnSmACu",
63 | "colab_type": "text"
64 | },
65 | "source": [
66 | "## **2.1. Phishing URLs:**\n",
67 | "\n",
68 |         "The phishing URLs are collected from PhishTank via the link provided above. The CSV file of phishing URLs is obtained using the wget command. After downloading the dataset, it is loaded into a DataFrame."
69 | ]
70 | },
71 | {
72 | "cell_type": "code",
73 | "metadata": {
74 | "id": "PH13wfswmyDv",
75 | "colab_type": "code",
76 | "colab": {}
77 | },
78 | "source": [
79 | "#importing required packages for this module\n",
80 | "import pandas as pd"
81 | ],
82 | "execution_count": 0,
83 | "outputs": []
84 | },
85 | {
86 | "cell_type": "code",
87 | "metadata": {
88 | "id": "FF5vM84YriWc",
89 | "colab_type": "code",
90 | "outputId": "34b29509-57f2-48c9-a862-db8390b6af1c",
91 | "colab": {
92 | "base_uri": "https://localhost:8080/",
93 | "height": 392
94 | }
95 | },
96 | "source": [
97 | "#Downloading the phishing URLs file\n",
98 | "!wget http://data.phishtank.com/data/online-valid.csv"
99 | ],
100 | "execution_count": 0,
101 | "outputs": [
102 | {
103 | "output_type": "stream",
104 | "text": [
105 | "--2020-05-10 07:33:37-- http://data.phishtank.com/data/online-valid.csv\n",
106 | "Resolving data.phishtank.com (data.phishtank.com)... 104.16.101.75, 104.17.177.85, 2606:4700::6810:654b, ...\n",
107 | "Connecting to data.phishtank.com (data.phishtank.com)|104.16.101.75|:80... connected.\n",
108 | "HTTP request sent, awaiting response... 301 Moved Permanently\n",
109 | "Location: https://data.phishtank.com/data/online-valid.csv [following]\n",
110 | "--2020-05-10 07:33:37-- https://data.phishtank.com/data/online-valid.csv\n",
111 | "Connecting to data.phishtank.com (data.phishtank.com)|104.16.101.75|:443... connected.\n",
112 | "HTTP request sent, awaiting response... 302 Found\n",
113 | "Location: https://d1750zhbc38ec0.cloudfront.net/datadumps/verified_online.csv?Expires=1589096027&Signature=NeznemrBS2h3ozoDsM8x9fZ73pTe1OhCjCyYEEtKcqjyJlO62TdCD9eAh4tC0fvlytZAq4ihqhtRGtgkwaWfw6QJE8HhE-UfnzUlOxU6w-lnHJppNbsbWsIqCjYeBoNbGvLTpa4CklK5Lo7PV6vd3bSl8wAq0PNjyct7f6qyO2nazZilc0NIdzHp2t-XwAozQj39S7czLORAzloGH98cqa1XBc3honvarNeV3S6d8QJCO8dHf3zk201KUSJFRIky6sFZP3--z5aDSL06fZj-yAyIDE-Xn0SNaiqLFVuMQUx0tTo5eIdk98zC2D7R5XOvAkGdpo1fGHT45f77MzUv4Q__&Key-Pair-Id=APKAILB45UG3RB4CSOJA [following]\n",
114 | "--2020-05-10 07:33:37-- https://d1750zhbc38ec0.cloudfront.net/datadumps/verified_online.csv?Expires=1589096027&Signature=NeznemrBS2h3ozoDsM8x9fZ73pTe1OhCjCyYEEtKcqjyJlO62TdCD9eAh4tC0fvlytZAq4ihqhtRGtgkwaWfw6QJE8HhE-UfnzUlOxU6w-lnHJppNbsbWsIqCjYeBoNbGvLTpa4CklK5Lo7PV6vd3bSl8wAq0PNjyct7f6qyO2nazZilc0NIdzHp2t-XwAozQj39S7czLORAzloGH98cqa1XBc3honvarNeV3S6d8QJCO8dHf3zk201KUSJFRIky6sFZP3--z5aDSL06fZj-yAyIDE-Xn0SNaiqLFVuMQUx0tTo5eIdk98zC2D7R5XOvAkGdpo1fGHT45f77MzUv4Q__&Key-Pair-Id=APKAILB45UG3RB4CSOJA\n",
115 | "Resolving d1750zhbc38ec0.cloudfront.net (d1750zhbc38ec0.cloudfront.net)... 143.204.101.142, 143.204.101.147, 143.204.101.48, ...\n",
116 | "Connecting to d1750zhbc38ec0.cloudfront.net (d1750zhbc38ec0.cloudfront.net)|143.204.101.142|:443... connected.\n",
117 | "HTTP request sent, awaiting response... 200 OK\n",
118 | "Length: 3232768 (3.1M) [text/csv]\n",
119 | "Saving to: ‘online-valid.csv’\n",
120 | "\n",
121 | "online-valid.csv 100%[===================>] 3.08M 5.13MB/s in 0.6s \n",
122 | "\n",
123 | "2020-05-10 07:33:38 (5.13 MB/s) - ‘online-valid.csv’ saved [3232768/3232768]\n",
124 | "\n"
125 | ],
126 | "name": "stdout"
127 | }
128 | ]
129 | },
130 | {
131 | "cell_type": "markdown",
132 | "metadata": {
133 | "id": "paesHH9rnX8r",
134 | "colab_type": "text"
135 | },
136 | "source": [
137 |         "The above command downloads the file of phishing URLs, *online-valid.csv*, and stores it in the */content/* folder. "
138 | ]
139 | },
140 | {
141 | "cell_type": "code",
142 | "metadata": {
143 | "id": "GaGVL9gYKXma",
144 | "colab_type": "code",
145 | "outputId": "fad0a947-4996-44bf-d46f-89abc4306e62",
146 | "colab": {
147 | "base_uri": "https://localhost:8080/",
148 | "height": 305
149 | }
150 | },
151 | "source": [
152 | "#loading the phishing URLs data to dataframe\n",
153 | "data0 = pd.read_csv(\"online-valid.csv\")\n",
154 | "data0.head()"
155 | ],
156 | "execution_count": 0,
157 | "outputs": [
158 | {
159 | "output_type": "execute_result",
160 | "data": {
250 | "text/plain": [
251 | " phish_id ... target\n",
252 | "0 6557033 ... Other\n",
253 | "1 6557032 ... Other\n",
254 | "2 6557011 ... Facebook\n",
255 | "3 6557010 ... Facebook\n",
256 | "4 6557009 ... Microsoft\n",
257 | "\n",
258 | "[5 rows x 8 columns]"
259 | ]
260 | },
261 | "metadata": {
262 | "tags": []
263 | },
264 | "execution_count": 3
265 | }
266 | ]
267 | },
268 | {
269 | "cell_type": "code",
270 | "metadata": {
271 | "id": "mAZAvSe2n1oT",
272 | "colab_type": "code",
273 | "outputId": "da2fbbb6-871f-4070-df86-cc9a135ac37a",
274 | "colab": {
275 | "base_uri": "https://localhost:8080/",
276 | "height": 35
277 | }
278 | },
279 | "source": [
280 | "data0.shape"
281 | ],
282 | "execution_count": 0,
283 | "outputs": [
284 | {
285 | "output_type": "execute_result",
286 | "data": {
287 | "text/plain": [
288 | "(14858, 8)"
289 | ]
290 | },
291 | "metadata": {
292 | "tags": []
293 | },
294 | "execution_count": 4
295 | }
296 | ]
297 | },
298 | {
299 | "cell_type": "markdown",
300 | "metadata": {
301 | "id": "fBFvH8h0oFkO",
302 | "colab_type": "text"
303 | },
304 | "source": [
305 |         "So, the data has thousands of phishing URLs. Note that this data gets updated hourly. To avoid the risk of data imbalance, I am limiting this project to 10,000 URLs in total: 5000 phishing & 5000 legitimate. \n",
306 |         "\n",
307 |         "Thereby, 5000 samples are picked randomly from the above dataframe."
308 | ]
309 | },
310 | {
311 | "cell_type": "code",
312 | "metadata": {
313 | "id": "9CTCI_EgERPM",
314 | "colab_type": "code",
315 | "outputId": "cb74e74c-5591-4523-e077-bbf13ef89245",
316 | "colab": {
317 | "base_uri": "https://localhost:8080/",
318 | "height": 305
319 | }
320 | },
321 | "source": [
322 | "#Collecting 5,000 Phishing URLs randomly\n",
323 | "phishurl = data0.sample(n = 5000, random_state = 12).copy()\n",
324 | "phishurl = phishurl.reset_index(drop=True)\n",
325 | "phishurl.head()"
326 | ],
327 | "execution_count": 0,
328 | "outputs": [
329 | {
330 | "output_type": "execute_result",
331 | "data": {
332 | "text/html": [
333 | "\n",
334 | "\n",
347 | "
\n",
348 | " \n",
349 | " \n",
350 | " | \n",
351 | " phish_id | \n",
352 | " url | \n",
353 | " phish_detail_url | \n",
354 | " submission_time | \n",
355 | " verified | \n",
356 | " verification_time | \n",
357 | " online | \n",
358 | " target | \n",
359 | "
\n",
360 | " \n",
361 | " \n",
362 | " \n",
363 | " 0 | \n",
364 | " 6485787 | \n",
365 | " https://eevee.tv/Bootstrap/assets/css/acces | \n",
366 | " http://www.phishtank.com/phish_detail.php?phis... | \n",
367 | " 2020-04-04T03:01:00+00:00 | \n",
368 | " yes | \n",
369 | " 2020-04-04T03:03:56+00:00 | \n",
370 | " yes | \n",
371 | " Other | \n",
372 | "
\n",
373 | " \n",
374 | " 1 | \n",
375 | " 6422543 | \n",
376 | " https://appleid.apple.com-sa.pm/appleid/? | \n",
377 | " http://www.phishtank.com/phish_detail.php?phis... | \n",
378 | " 2020-02-27T17:01:01+00:00 | \n",
379 | " yes | \n",
380 | " 2020-03-17T01:50:51+00:00 | \n",
381 | " yes | \n",
382 | " Other | \n",
383 | "
\n",
384 | " \n",
385 | " 2 | \n",
386 | " 6543602 | \n",
387 | " https://grandcup.xyz/ | \n",
388 | " http://www.phishtank.com/phish_detail.php?phis... | \n",
389 | " 2020-05-02T23:07:29+00:00 | \n",
390 | " yes | \n",
391 | " 2020-05-02T23:09:03+00:00 | \n",
392 | " yes | \n",
393 | " Steam | \n",
394 | "
\n",
395 | " \n",
396 | " 3 | \n",
397 | " 6528783 | \n",
398 | " https://villa-azzurro.com/onedrive/ | \n",
399 | " http://www.phishtank.com/phish_detail.php?phis... | \n",
400 | " 2020-04-25T20:54:02+00:00 | \n",
401 | " yes | \n",
402 | " 2020-04-25T21:46:55+00:00 | \n",
403 | " yes | \n",
404 | " Other | \n",
405 | "
\n",
406 | " \n",
407 | " 4 | \n",
408 | " 6498136 | \n",
409 | " http://mygpstrip.net/ii/u.php | \n",
410 | " http://www.phishtank.com/phish_detail.php?phis... | \n",
411 | " 2020-04-10T15:01:56+00:00 | \n",
412 | " yes | \n",
413 | " 2020-04-10T16:01:37+00:00 | \n",
414 | " yes | \n",
415 | " Other | \n",
416 | "
\n",
417 | " \n",
418 | "
\n",
419 | "
"
420 | ],
421 | "text/plain": [
422 | " phish_id url ... online target\n",
423 | "0 6485787 https://eevee.tv/Bootstrap/assets/css/acces ... yes Other\n",
424 | "1 6422543 https://appleid.apple.com-sa.pm/appleid/? ... yes Other\n",
425 | "2 6543602 https://grandcup.xyz/ ... yes Steam\n",
426 | "3 6528783 https://villa-azzurro.com/onedrive/ ... yes Other\n",
427 | "4 6498136 http://mygpstrip.net/ii/u.php ... yes Other\n",
428 | "\n",
429 | "[5 rows x 8 columns]"
430 | ]
431 | },
432 | "metadata": {
433 | "tags": []
434 | },
435 | "execution_count": 5
436 | }
437 | ]
438 | },
439 | {
440 | "cell_type": "code",
441 | "metadata": {
442 | "id": "-FOfv0bspc8N",
443 | "colab_type": "code",
444 | "outputId": "48e76e11-37d7-4ba1-e04a-c2fa661e9219",
445 | "colab": {
446 | "base_uri": "https://localhost:8080/",
447 | "height": 35
448 | }
449 | },
450 | "source": [
451 | "phishurl.shape"
452 | ],
453 | "execution_count": 0,
454 | "outputs": [
455 | {
456 | "output_type": "execute_result",
457 | "data": {
458 | "text/plain": [
459 | "(5000, 8)"
460 | ]
461 | },
462 | "metadata": {
463 | "tags": []
464 | },
465 | "execution_count": 6
466 | }
467 | ]
468 | },
469 | {
470 | "cell_type": "markdown",
471 | "metadata": {
472 | "id": "Sb4afcd5pges",
473 | "colab_type": "text"
474 | },
475 | "source": [
476 |         "So far, we have collected 5000 phishing URLs. Now, we need to collect the legitimate URLs.\n",
477 | "\n",
478 | "## **2.2. Legitimate URLs:**\n",
479 | "\n",
480 | "From the uploaded *Benign_list_big_final.csv* file, the URLs are loaded into a dataframe."
481 | ]
482 | },
483 | {
484 | "cell_type": "code",
485 | "metadata": {
486 | "id": "0wkw4wGAsIbT",
487 | "colab_type": "code",
488 | "outputId": "4395a2bd-dd8b-49ea-fb1e-36cf0b67e75f",
489 | "colab": {
490 | "base_uri": "https://localhost:8080/",
491 | "height": 200
492 | }
493 | },
494 | "source": [
495 | "#Loading legitimate files \n",
496 | "data1 = pd.read_csv(\"Benign_list_big_final.csv\")\n",
497 | "data1.columns = ['URLs']\n",
498 | "data1.head()"
499 | ],
500 | "execution_count": 0,
501 | "outputs": [
502 | {
503 | "output_type": "execute_result",
504 | "data": {
552 | "text/plain": [
553 | " URLs\n",
554 | "0 http://1337x.to/torrent/1110018/Blackhat-2015-...\n",
555 | "1 http://1337x.to/torrent/1122940/Blackhat-2015-...\n",
556 | "2 http://1337x.to/torrent/1124395/Fast-and-Furio...\n",
557 | "3 http://1337x.to/torrent/1145504/Avengers-Age-o...\n",
558 | "4 http://1337x.to/torrent/1160078/Avengers-age-o..."
559 | ]
560 | },
561 | "metadata": {
562 | "tags": []
563 | },
564 | "execution_count": 7
565 | }
566 | ]
567 | },
568 | {
569 | "cell_type": "markdown",
570 | "metadata": {
571 | "id": "MdvE4YfWtCJr",
572 | "colab_type": "text"
573 | },
574 | "source": [
575 |         "As stated above, 5000 legitimate URLs are randomly picked from the above dataframe."
576 | ]
577 | },
578 | {
579 | "cell_type": "code",
580 | "metadata": {
581 | "id": "EQRtf9Ybs5sv",
582 | "colab_type": "code",
583 | "outputId": "227e262b-1483-4549-8bdf-49da2f321b06",
584 | "colab": {
585 | "base_uri": "https://localhost:8080/",
586 | "height": 200
587 | }
588 | },
589 | "source": [
590 | "#Collecting 5,000 Legitimate URLs randomly\n",
591 | "legiurl = data1.sample(n = 5000, random_state = 12).copy()\n",
592 | "legiurl = legiurl.reset_index(drop=True)\n",
593 | "legiurl.head()"
594 | ],
595 | "execution_count": 0,
596 | "outputs": [
597 | {
598 | "output_type": "execute_result",
599 | "data": {
647 | "text/plain": [
648 | " URLs\n",
649 | "0 http://graphicriver.net/search?date=this-month...\n",
650 | "1 http://ecnavi.jp/redirect/?url=http://www.cros...\n",
651 | "2 https://hubpages.com/signin?explain=follow+Hub...\n",
652 | "3 http://extratorrent.cc/torrent/4190536/AOMEI+B...\n",
653 | "4 http://icicibank.com/Personal-Banking/offers/o..."
654 | ]
655 | },
656 | "metadata": {
657 | "tags": []
658 | },
659 | "execution_count": 8
660 | }
661 | ]
662 | },
663 | {
664 | "cell_type": "code",
665 | "metadata": {
666 | "id": "QrpSRXzDuKwW",
667 | "colab_type": "code",
668 | "outputId": "8b8e5220-be59-4893-9dd5-3ffc381d2b1d",
669 | "colab": {
670 | "base_uri": "https://localhost:8080/",
671 | "height": 35
672 | }
673 | },
674 | "source": [
675 | "legiurl.shape"
676 | ],
677 | "execution_count": 0,
678 | "outputs": [
679 | {
680 | "output_type": "execute_result",
681 | "data": {
682 | "text/plain": [
683 | "(5000, 1)"
684 | ]
685 | },
686 | "metadata": {
687 | "tags": []
688 | },
689 | "execution_count": 9
690 | }
691 | ]
692 | },
693 | {
694 | "cell_type": "markdown",
695 | "metadata": {
696 | "id": "xbzZbsWIEV6J",
697 | "colab_type": "text"
698 | },
699 | "source": [
700 | "# **3. Feature Extraction:**\n",
701 | "\n",
702 | "In this step, features are extracted from the URLs dataset.\n",
703 | "\n",
704 | "The extracted features are categorized into\n",
705 | "\n",
706 | "\n",
707 | "1. Address Bar based Features\n",
708 | "2. Domain based Features\n",
709 | "3. HTML & Javascript based Features\n",
710 | "\n"
711 | ]
712 | },
713 | {
714 | "cell_type": "markdown",
715 | "metadata": {
716 | "id": "vABNo39RQljI",
717 | "colab_type": "text"
718 | },
719 | "source": [
720 | "### **3.1. Address Bar Based Features:**\n",
721 | "\n",
722 |         "Many features can be extracted that can be considered address bar based features. Out of them, the below-mentioned features were considered for this project.\n",
723 | "\n",
724 | "\n",
725 | "* Domain of URL\n",
726 | "* IP Address in URL\n",
727 | "* \"@\" Symbol in URL\n",
728 | "* Length of URL\n",
729 | "* Depth of URL\n",
730 | "* Redirection \"//\" in URL\n",
731 | "* \"http/https\" in Domain name\n",
732 | "* Using URL Shortening Services “TinyURL”\n",
733 | "* Prefix or Suffix \"-\" in Domain\n",
734 | "\n",
735 |         "Each of these features is explained and coded below:"
736 | ]
737 | },
738 | {
739 | "cell_type": "code",
740 | "metadata": {
741 | "id": "Rk4HFWsEKXpS",
742 | "colab_type": "code",
743 | "colab": {}
744 | },
745 | "source": [
746 | "# importing required packages for this section\n",
747 | "from urllib.parse import urlparse,urlencode\n",
748 | "import ipaddress\n",
749 | "import re"
750 | ],
751 | "execution_count": 0,
752 | "outputs": []
753 | },
754 | {
755 | "cell_type": "markdown",
756 | "metadata": {
757 | "id": "xd-UZd3c60-a",
758 | "colab_type": "text"
759 | },
760 | "source": [
761 | "#### **3.1.1. Domain of the URL**\n",
762 |         "Here, we are just extracting the domain present in the URL. This feature doesn't have much significance in training and may even be dropped while training the model."
763 | ]
764 | },
765 | {
766 | "cell_type": "code",
767 | "metadata": {
768 | "id": "S0QorYenhaOD",
769 | "colab_type": "code",
770 | "colab": {}
771 | },
772 | "source": [
773 | "# 1.Domain of the URL (Domain) \n",
774 | "def getDomain(url): \n",
775 | " domain = urlparse(url).netloc\n",
776 |         "  if re.match(r\"^www\\.\",domain):\n",
777 |         "    domain = domain.replace(\"www.\",\"\")\n",
778 | " return domain"
779 | ],
780 | "execution_count": 0,
781 | "outputs": []
782 | },
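    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A minimal sanity check of this helper on illustrative URLs (the example domains below are assumptions, not part of the dataset):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# quick sketch: the 'www.' prefix is stripped, other subdomains are kept\n",
        "print(getDomain('http://www.example.com/path/page.html'))  # example.com\n",
        "print(getDomain('https://sub.example.org/login'))          # sub.example.org"
      ],
      "execution_count": 0,
      "outputs": []
    },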
783 | {
784 | "cell_type": "markdown",
785 | "metadata": {
786 | "id": "1EPO6HJ87Pdv",
787 | "colab_type": "text"
788 | },
789 | "source": [
790 | "#### **3.1.2. IP Address in the URL**\n",
791 | "\n",
792 |         "Checks for the presence of an IP address in the URL. URLs may have an IP address instead of a domain name. If an IP address is used as an alternative to the domain name in the URL, we can be fairly sure that someone is trying to steal personal information with this URL.\n",
793 | "\n",
794 | "If the domain part of URL has IP address, the value assigned to this feature is 1 (phishing) or else 0 (legitimate).\n",
795 | "\n"
796 | ]
797 | },
798 | {
799 | "cell_type": "code",
800 | "metadata": {
801 | "id": "SX-4mbq27QBj",
802 | "colab_type": "code",
803 | "colab": {}
804 | },
805 | "source": [
806 | "# 2.Checks for IP address in URL (Have_IP)\n",
807 | "def havingIP(url):\n",
808 | " try:\n",
809 | " ipaddress.ip_address(url)\n",
810 | " ip = 1\n",
811 |         "  except ValueError:\n",
812 | " ip = 0\n",
813 | " return ip\n"
814 | ],
815 | "execution_count": 0,
816 | "outputs": []
817 | },
818 | {
819 | "cell_type": "markdown",
820 | "metadata": {
821 | "id": "Vcy-zay47S-q",
822 | "colab_type": "text"
823 | },
824 | "source": [
825 | "#### **3.1.3. \"@\" Symbol in URL**\n",
826 | "\n",
827 |         "Checks for the presence of the '@' symbol in the URL. Using the “@” symbol in the URL leads the browser to ignore everything preceding it, and the real address often follows the “@” symbol. \n",
828 | "\n",
829 | "If the URL has '@' symbol, the value assigned to this feature is 1 (phishing) or else 0 (legitimate)."
830 | ]
831 | },
832 | {
833 | "cell_type": "code",
834 | "metadata": {
835 | "id": "XZQZi3K17TcR",
836 | "colab_type": "code",
837 | "colab": {}
838 | },
839 | "source": [
840 | "# 3.Checks the presence of @ in URL (Have_At)\n",
841 | "def haveAtSign(url):\n",
842 | " if \"@\" in url:\n",
843 | " at = 1 \n",
844 | " else:\n",
845 | " at = 0 \n",
846 | " return at"
847 | ],
848 | "execution_count": 0,
849 | "outputs": []
850 | },
851 | {
852 | "cell_type": "markdown",
853 | "metadata": {
854 | "id": "mhFeCv2N9KLU",
855 | "colab_type": "text"
856 | },
857 | "source": [
858 | "#### **3.1.4. Length of URL**\n",
859 | "\n",
860 |         "Computes the length of the URL. Phishers can use a long URL to hide the doubtful part in the address bar. In this project, if the length of the URL is greater than or equal to 54 characters, the URL is classified as phishing; otherwise, legitimate.\n",
861 | "\n",
862 | "If the length of URL >= 54 , the value assigned to this feature is 1 (phishing) or else 0 (legitimate)."
863 | ]
864 | },
865 | {
866 | "cell_type": "code",
867 | "metadata": {
868 | "id": "fnQazil39Kra",
869 | "colab_type": "code",
870 | "colab": {}
871 | },
872 | "source": [
873 | "# 4.Finding the length of URL and categorizing (URL_Length)\n",
874 | "def getLength(url):\n",
875 | " if len(url) < 54:\n",
876 | " length = 0 \n",
877 | " else:\n",
878 | " length = 1 \n",
879 | " return length"
880 | ],
881 | "execution_count": 0,
882 | "outputs": []
883 | },
884 | {
885 | "cell_type": "markdown",
886 | "metadata": {
887 | "id": "8ICyOWg59LHt",
888 | "colab_type": "text"
889 | },
890 | "source": [
891 | "#### **3.1.5. Depth of URL**\n",
892 | "\n",
893 |         "Computes the depth of the URL. This feature counts the number of subpages in the given URL, based on the '/' separators in the path.\n",
894 |         "\n",
895 |         "The value of this feature is a number computed from the URL."
896 | ]
897 | },
898 | {
899 | "cell_type": "code",
900 | "metadata": {
901 | "id": "yILgNFf_9L3X",
902 | "colab_type": "code",
903 | "colab": {}
904 | },
905 | "source": [
906 | "# 5.Gives number of '/' in URL (URL_Depth)\n",
907 | "def getDepth(url):\n",
908 | " s = urlparse(url).path.split('/')\n",
909 | " depth = 0\n",
910 | " for j in range(len(s)):\n",
911 | " if len(s[j]) != 0:\n",
912 | " depth = depth+1\n",
913 | " return depth"
914 | ],
915 | "execution_count": 0,
916 | "outputs": []
917 | },
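    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A minimal sketch exercising the two helpers above on illustrative URLs:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "print(getLength('https://example.com'))           # 0: shorter than 54 characters\n",
        "print(getDepth('http://example.com/a/b/c.html'))  # 3: three non-empty path segments"
      ],
      "execution_count": 0,
      "outputs": []
    },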
918 | {
919 | "cell_type": "markdown",
920 | "metadata": {
921 | "id": "T5-eL0bBBRdx",
922 | "colab_type": "text"
923 | },
924 | "source": [
925 | "#### **3.1.6. Redirection \"//\" in URL**\n",
926 | "\n",
927 | "Checks the presence of \"//\" in the URL. The existence of “//” within the URL path means that the user will be redirected to another website. The location of the “//” in URL is computed. We find that if the URL starts with “HTTP”, that means the “//” should appear in the sixth position. However, if the URL employs “HTTPS” then the “//” should appear in seventh position.\n",
928 | "\n",
929 |         "If the \"//\" appears anywhere in the URL apart from right after the protocol, the value assigned to this feature is 1 (phishing) or else 0 (legitimate)."
930 | ]
931 | },
932 | {
933 | "cell_type": "code",
934 | "metadata": {
935 | "id": "RIJEiq51BSy0",
936 | "colab_type": "code",
937 | "colab": {}
938 | },
939 | "source": [
940 | "# 6.Checking for redirection '//' in the url (Redirection)\n",
941 |         "def redirection(url):\n",
942 |         "  # the protocol's '//' sits at index 5 (http) or 6 (https);\n",
943 |         "  # a '//' found at any later index is treated as a redirection\n",
944 |         "  pos = url.rfind('//')\n",
945 |         "  if pos > 7:\n",
946 |         "    return 1\n",
947 |         "  else:\n",
948 |         "    return 0"
950 | ],
951 | "execution_count": 0,
952 | "outputs": []
953 | },
954 | {
955 | "cell_type": "markdown",
956 | "metadata": {
957 | "id": "hHWQDIrtBa7n",
958 | "colab_type": "text"
959 | },
960 | "source": [
961 | "#### **3.1.7. \"http/https\" in Domain name**\n",
962 | "\n",
963 |         "Checks for the presence of \"http/https\" in the domain part of the URL. Phishers may add the “HTTPS” token to the domain part of a URL in order to trick users.\n",
964 | "\n",
965 | "If the URL has \"http/https\" in the domain part, the value assigned to this feature is 1 (phishing) or else 0 (legitimate)."
966 | ]
967 | },
968 | {
969 | "cell_type": "code",
970 | "metadata": {
971 | "id": "h2vW23O1BbWl",
972 | "colab_type": "code",
973 | "colab": {}
974 | },
975 | "source": [
976 | "# 7.Existence of “HTTPS” Token in the Domain Part of the URL (https_Domain)\n",
977 | "def httpDomain(url):\n",
978 | " domain = urlparse(url).netloc\n",
979 | " if 'https' in domain:\n",
980 | " return 1\n",
981 | " else:\n",
982 | " return 0"
983 | ],
984 | "execution_count": 0,
985 | "outputs": []
986 | },
987 | {
988 | "cell_type": "markdown",
989 | "metadata": {
990 | "id": "rKL4jpeaPIvA",
991 | "colab_type": "text"
992 | },
993 | "source": [
994 | "#### **3.1.8. Using URL Shortening Services “TinyURL”**\n",
995 | "\n",
996 | "URL shortening is a method on the “World Wide Web” in which a URL may be made considerably smaller in length and still lead to the required webpage. This is accomplished by means of an “HTTP Redirect” on a domain name that is short, which links to the webpage that has a long URL. \n",
997 | "\n",
998 | "If the URL is using Shortening Services, the value assigned to this feature is 1 (phishing) or else 0 (legitimate)."
999 | ]
1000 | },
1001 | {
1002 | "cell_type": "code",
1003 | "metadata": {
1004 | "id": "UdC9pUdTAVRU",
1005 | "colab_type": "code",
1006 | "colab": {}
1007 | },
1008 | "source": [
1009 | "#listing shortening services\n",
1010 | "shortening_services = r\"bit\\.ly|goo\\.gl|shorte\\.st|go2l\\.ink|x\\.co|ow\\.ly|t\\.co|tinyurl|tr\\.im|is\\.gd|cli\\.gs|\" \\\n",
1011 | " r\"yfrog\\.com|migre\\.me|ff\\.im|tiny\\.cc|url4\\.eu|twit\\.ac|su\\.pr|twurl\\.nl|snipurl\\.com|\" \\\n",
1012 | " r\"short\\.to|BudURL\\.com|ping\\.fm|post\\.ly|Just\\.as|bkite\\.com|snipr\\.com|fic\\.kr|loopt\\.us|\" \\\n",
1013 | " r\"doiop\\.com|short\\.ie|kl\\.am|wp\\.me|rubyurl\\.com|om\\.ly|to\\.ly|bit\\.do|t\\.co|lnkd\\.in|db\\.tt|\" \\\n",
1014 | " r\"qr\\.ae|adf\\.ly|goo\\.gl|bitly\\.com|cur\\.lv|tinyurl\\.com|ow\\.ly|bit\\.ly|ity\\.im|q\\.gs|is\\.gd|\" \\\n",
1015 | " r\"po\\.st|bc\\.vc|twitthis\\.com|u\\.to|j\\.mp|buzurl\\.com|cutt\\.us|u\\.bb|yourls\\.org|x\\.co|\" \\\n",
1016 | " r\"prettylinkpro\\.com|scrnch\\.me|filoops\\.info|vzturl\\.com|qr\\.net|1url\\.com|tweez\\.me|v\\.gd|\" \\\n",
1017 | " r\"tr\\.im|link\\.zip\\.net\""
1018 | ],
1019 | "execution_count": 0,
1020 | "outputs": []
1021 | },
1022 | {
1023 | "cell_type": "code",
1024 | "metadata": {
1025 | "id": "IUkU9UbbnKpY",
1026 | "colab_type": "code",
1027 | "colab": {}
1028 | },
1029 | "source": [
1030 | "# 8. Checking for Shortening Services in URL (Tiny_URL)\n",
1031 | "def tinyURL(url):\n",
1032 | " match=re.search(shortening_services,url)\n",
1033 | " if match:\n",
1034 | " return 1\n",
1035 | " else:\n",
1036 | " return 0"
1037 | ],
1038 | "execution_count": 0,
1039 | "outputs": []
1040 | },
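    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A minimal sketch of the check on illustrative URLs (the bit.ly path below is made up):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "print(tinyURL('http://bit.ly/3abcde'))      # 1: matches a known shortening service\n",
        "print(tinyURL('https://example.com/page'))  # 0: no shortener matched"
      ],
      "execution_count": 0,
      "outputs": []
    },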
1041 | {
1042 | "cell_type": "markdown",
1043 | "metadata": {
1044 | "id": "HS-BuQJzPkaZ",
1045 | "colab_type": "text"
1046 | },
1047 | "source": [
1048 | "#### **3.1.9. Prefix or Suffix \"-\" in Domain**\n",
1049 | "\n",
1050 |         "Checks for the presence of '-' in the domain part of the URL. The dash symbol is rarely used in legitimate URLs. Phishers tend to add prefixes or suffixes, separated by (-), to the domain name so that users feel that they are dealing with a legitimate webpage. \n",
1051 | "\n",
1052 | "If the URL has '-' symbol in the domain part of the URL, the value assigned to this feature is 1 (phishing) or else 0 (legitimate)."
1053 | ]
1054 | },
1055 | {
1056 | "cell_type": "code",
1057 | "metadata": {
1058 | "id": "vLyjiIUgPjuw",
1059 | "colab_type": "code",
1060 | "colab": {}
1061 | },
1062 | "source": [
1063 | "# 9.Checking for Prefix or Suffix Separated by (-) in the Domain (Prefix/Suffix)\n",
1064 | "def prefixSuffix(url):\n",
1065 | " if '-' in urlparse(url).netloc:\n",
1066 | " return 1 # phishing\n",
1067 | " else:\n",
1068 | " return 0 # legitimate"
1069 | ],
1070 | "execution_count": 0,
1071 | "outputs": []
1072 | },
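    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before moving to the next category, a minimal sketch that runs all nine address bar based features on one illustrative, phishing-style URL:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# illustrative URL only; note the '@', the extra '//' and the '-' in the domain\n",
        "url = 'http://www.fake-login.example.com//signin@verify'\n",
        "print([getDomain(url), havingIP(url), haveAtSign(url), getLength(url), getDepth(url),\n",
        "       redirection(url), httpDomain(url), tinyURL(url), prefixSuffix(url)])"
      ],
      "execution_count": 0,
      "outputs": []
    },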
1073 | {
1074 | "cell_type": "markdown",
1075 | "metadata": {
1076 | "id": "zO485F_BPk-k",
1077 | "colab_type": "text"
1078 | },
1079 | "source": [
1080 | "### **3.2. Domain Based Features:**\n",
1081 | "\n",
1082 |         "Many features can be extracted that come under this category. Out of them, the below-mentioned features were considered for this project.\n",
1083 | "\n",
1084 | "* DNS Record\n",
1085 | "* Website Traffic \n",
1086 | "* Age of Domain\n",
1087 | "* End Period of Domain\n",
1088 | "\n",
1089 |         "Each of these features is explained and coded below:"
1090 | ]
1091 | },
1092 | {
1093 | "cell_type": "code",
1094 | "metadata": {
1095 | "id": "NbkEYJ_JOVa7",
1096 | "colab_type": "code",
1097 | "outputId": "f08b25f8-3852-432c-e141-8eb57ff916d8",
1098 | "colab": {
1099 | "base_uri": "https://localhost:8080/",
1100 | "height": 232
1101 | }
1102 | },
1103 | "source": [
1104 | "!pip install python-whois"
1105 | ],
1106 | "execution_count": 0,
1107 | "outputs": [
1108 | {
1109 | "output_type": "stream",
1110 | "text": [
1111 | "Collecting python-whois\n",
1112 | "\u001b[?25l Downloading https://files.pythonhosted.org/packages/f0/ab/11c2d01db2554bbaabb2c32b06b6a73f7277372533484c320c78a304dfd7/python-whois-0.7.2.tar.gz (90kB)\n",
1113 | "\r\u001b[K |███▋ | 10kB 24.0MB/s eta 0:00:01\r\u001b[K |███████▎ | 20kB 6.5MB/s eta 0:00:01\r\u001b[K |███████████ | 30kB 6.8MB/s eta 0:00:01\r\u001b[K |██████████████▋ | 40kB 7.8MB/s eta 0:00:01\r\u001b[K |██████████████████▏ | 51kB 7.6MB/s eta 0:00:01\r\u001b[K |█████████████████████▉ | 61kB 8.6MB/s eta 0:00:01\r\u001b[K |█████████████████████████▌ | 71kB 8.4MB/s eta 0:00:01\r\u001b[K |█████████████████████████████▏ | 81kB 9.3MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 92kB 5.5MB/s \n",
1114 | "\u001b[?25hRequirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from python-whois) (0.16.0)\n",
1115 | "Building wheels for collected packages: python-whois\n",
1116 | " Building wheel for python-whois (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
1117 | " Created wheel for python-whois: filename=python_whois-0.7.2-cp36-none-any.whl size=85245 sha256=900afbc18f144913762a57978778098dda65b687b3b5a1f14f7998e9631564e8\n",
1118 | " Stored in directory: /root/.cache/pip/wheels/69/e6/62/1e6a746ca8e690f472611511b6948c325b232aaf693245ce46\n",
1119 | "Successfully built python-whois\n",
1120 | "Installing collected packages: python-whois\n",
1121 | "Successfully installed python-whois-0.7.2\n"
1122 | ],
1123 | "name": "stdout"
1124 | }
1125 | ]
1126 | },
1127 | {
1128 | "cell_type": "code",
1129 | "metadata": {
1130 | "id": "esZ7FcvlOMZu",
1131 | "colab_type": "code",
1132 | "colab": {}
1133 | },
1134 | "source": [
1135 | "# importing required packages for this section\n",
1136 | "import re\n",
1137 | "from bs4 import BeautifulSoup\n",
1138 | "import whois\n",
1139 | "import urllib\n",
1140 | "import urllib.request\n",
1141 | "from datetime import datetime"
1142 | ],
1143 | "execution_count": 0,
1144 | "outputs": []
1145 | },
1146 | {
1147 | "cell_type": "markdown",
1148 | "metadata": {
1149 | "id": "4ExXkkXYZWWZ",
1150 | "colab_type": "text"
1151 | },
1152 | "source": [
1153 | "#### **3.2.1. DNS Record**\n",
1154 | "\n",
1155 |         "For phishing websites, either the claimed identity is not recognized by the WHOIS database or no records are found for the hostname. \n",
1156 |         "If the DNS record is empty or not found, the value assigned to this feature is 1 (phishing) or else 0 (legitimate)."
1157 | ]
1158 | },
1159 | {
1160 | "cell_type": "code",
1161 | "metadata": {
1162 | "id": "8O5D1jH0IDgf",
1163 | "colab_type": "code",
1164 | "colab": {}
1165 | },
1166 | "source": [
1167 | "# 11.DNS Record availability (DNS_Record)\n",
1168 | "# obtained in the featureExtraction function itself"
1169 | ],
1170 | "execution_count": 0,
1171 | "outputs": []
1172 | },
1173 | {
1174 | "cell_type": "markdown",
1175 | "metadata": {
1176 | "id": "M5DKTVPMZ1Yk",
1177 | "colab_type": "text"
1178 | },
1179 | "source": [
1180 | "#### **3.2.2. Web Traffic**\n",
1181 | "\n",
1182 | "This feature measures the popularity of the website by determining the number of visitors and the number of pages they visit. However, since phishing websites live for a short period of time, they may not be recognized by the Alexa database (Alexa the Web Information Company., 1996). By reviewing our dataset, we find that in worst scenarios, legitimate websites ranked among the top 100,000. Furthermore, if the domain has no traffic or is not recognized by the Alexa database, it is classified as “Phishing”.\n",
1183 | "\n",
1184 |         "If the rank of the domain < 100,000, the value of this feature is 0 (legitimate) else 1 (phishing)."
1185 | ]
1186 | },
1187 | {
1188 | "cell_type": "code",
1189 | "metadata": {
1190 | "id": "mtwQiRotZ2GD",
1191 | "colab_type": "code",
1192 | "colab": {}
1193 | },
1194 | "source": [
1195 | "# 12.Web traffic (Web_Traffic)\n",
1196 | "def web_traffic(url):\n",
1197 | " try:\n",
1198 | " #Filling the whitespaces in the URL if any\n",
1199 | " url = urllib.parse.quote(url)\n",
1200 | " rank = BeautifulSoup(urllib.request.urlopen(\"http://data.alexa.com/data?cli=10&dat=s&url=\" + url).read(), \"xml\").find(\n",
1201 | " \"REACH\")['RANK']\n",
1202 | " rank = int(rank)\n",
1203 | " except TypeError:\n",
1204 | " return 1\n",
1205 |         "  if rank < 100000:\n",
1206 |         "    return 0\n",
1207 |         "  else:\n",
1208 |         "    return 1"
1209 | ],
1210 | "execution_count": 0,
1211 | "outputs": []
1212 | },
1213 | {
1214 | "cell_type": "markdown",
1215 | "metadata": {
1216 | "id": "jKHhfv2AacXq",
1217 | "colab_type": "text"
1218 | },
1219 | "source": [
1220 | "#### **3.2.3. Age of Domain**\n",
1221 | "\n",
1222 |         "This feature can be extracted from the WHOIS database. Most phishing websites live for a short period of time. The minimum age of a legitimate domain is considered to be 6 months for this project. Age here is nothing but the difference between the creation and expiration times.\n",
1223 |         "\n",
1224 |         "If the age of the domain is less than 6 months, the value of this feature is 1 (phishing) else 0 (legitimate)."
1225 | ]
1226 | },
1227 | {
1228 | "cell_type": "code",
1229 | "metadata": {
1230 | "id": "li03hqJgH__j",
1231 | "colab_type": "code",
1232 | "colab": {}
1233 | },
1234 | "source": [
1235 | "# 13.Survival time of domain: The difference between termination time and creation time (Domain_Age) \n",
1236 | "def domainAge(domain_name):\n",
1237 | " creation_date = domain_name.creation_date\n",
1238 | " expiration_date = domain_name.expiration_date\n",
1239 | " if (isinstance(creation_date,str) or isinstance(expiration_date,str)):\n",
1240 | " try:\n",
1241 | " creation_date = datetime.strptime(creation_date,'%Y-%m-%d')\n",
1242 | " expiration_date = datetime.strptime(expiration_date,\"%Y-%m-%d\")\n",
1243 | " except:\n",
1244 | " return 1\n",
1245 | " if ((expiration_date is None) or (creation_date is None)):\n",
1246 | " return 1\n",
1247 | " elif ((type(expiration_date) is list) or (type(creation_date) is list)):\n",
1248 | " return 1\n",
1249 | " else:\n",
1250 | " ageofdomain = abs((expiration_date - creation_date).days)\n",
1251 | " if ((ageofdomain/30) < 6):\n",
1252 | " age = 1\n",
1253 | " else:\n",
1254 | " age = 0\n",
1255 | " return age"
1256 | ],
1257 | "execution_count": 0,
1258 | "outputs": []
1259 | },
1260 | {
1261 | "cell_type": "markdown",
1262 | "metadata": {
1263 | "id": "AbjRrzA7aenm",
1264 | "colab_type": "text"
1265 | },
1266 | "source": [
1267 | "#### **3.2.4. End Period of Domain**\n",
1268 | "\n",
1269 |         "This feature can be extracted from the WHOIS database. For this feature, the remaining domain time is calculated by finding the difference between the expiration time & the current time. The end period considered for a legitimate domain is less than 6 months for this project. \n",
1270 |         "\n",
1271 |         "If the end period of the domain is 6 months or more, the value of this feature is 1 (phishing) else 0 (legitimate)."
1272 | ]
1273 | },
1274 | {
1275 | "cell_type": "code",
1276 | "metadata": {
1277 | "id": "NueO81-ttKYd",
1278 | "colab_type": "code",
1279 | "colab": {}
1280 | },
1281 | "source": [
1282 | "# 14.End time of domain: The difference between termination time and current time (Domain_End) \n",
1283 | "def domainEnd(domain_name):\n",
1284 | " expiration_date = domain_name.expiration_date\n",
1285 | " if isinstance(expiration_date,str):\n",
1286 | " try:\n",
1287 | " expiration_date = datetime.strptime(expiration_date,\"%Y-%m-%d\")\n",
1288 | " except:\n",
1289 | " return 1\n",
1290 | " if (expiration_date is None):\n",
1291 | " return 1\n",
1292 | " elif (type(expiration_date) is list):\n",
1293 | " return 1\n",
1294 | " else:\n",
1295 | " today = datetime.now()\n",
1296 | " end = abs((expiration_date - today).days)\n",
1297 | " if ((end/30) < 6):\n",
1298 | " end = 0\n",
1299 | " else:\n",
1300 | " end = 1\n",
1301 | " return end"
1302 | ],
1303 | "execution_count": 0,
1304 | "outputs": []
1305 | },
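    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A minimal sketch of how these two helpers are driven by a WHOIS lookup (requires network access; google.com is just an illustrative domain):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# the whois package queries the WHOIS record of the domain\n",
        "domain_name = whois.whois('google.com')\n",
        "print(domainAge(domain_name))  # 1 = phishing-like record, 0 = legitimate-like\n",
        "print(domainEnd(domain_name))"
      ],
      "execution_count": 0,
      "outputs": []
    },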
1306 | {
1307 | "cell_type": "markdown",
1308 | "metadata": {
1309 | "id": "Oln3Xj-9t-Y6",
1310 | "colab_type": "text"
1311 | },
1312 | "source": [
1313 | "## **3.3. HTML and JavaScript based Features**\n",
1314 | "\n",
1315 |         "Many features can be extracted that come under this category. Out of them, the below-mentioned features were considered for this project.\n",
1316 | "\n",
1317 | "* IFrame Redirection\n",
1318 | "* Status Bar Customization\n",
1319 | "* Disabling Right Click\n",
1320 | "* Website Forwarding\n",
1321 | "\n",
1322 |         "Each of these features is explained and coded below:"
1323 | ]
1324 | },
1325 | {
1326 | "cell_type": "code",
1327 | "metadata": {
1328 | "id": "lw0JmOGEQPwb",
1329 | "colab_type": "code",
1330 | "colab": {}
1331 | },
1332 | "source": [
1333 | "# importing required packages for this section\n",
1334 | "import requests"
1335 | ],
1336 | "execution_count": 0,
1337 | "outputs": []
1338 | },
1339 | {
1340 | "cell_type": "markdown",
1341 | "metadata": {
1342 | "id": "RES6bSWPy-Bj",
1343 | "colab_type": "text"
1344 | },
1345 | "source": [
1346 | "### **3.3.1. IFrame Redirection**\n",
1347 | "\n",
1348 |         "IFrame is an HTML tag used to display an additional webpage within the one currently shown. Phishers can make use of the “iframe” tag and make it invisible, i.e., without frame borders. In this regard, phishers make use of the “frameBorder” attribute which causes the browser to render a visual delineation. \n",
1349 | "\n",
1350 |         "If the iframe is empty or the response is not found, the value assigned to this feature is 1 (phishing) or else 0 (legitimate)."
1351 | ]
1352 | },
1353 | {
1354 | "cell_type": "code",
1355 | "metadata": {
1356 | "id": "F2gpZEMSQGpu",
1357 | "colab_type": "code",
1358 | "colab": {}
1359 | },
1360 | "source": [
1361 | "# 15. IFrame Redirection (iFrame)\n",
1362 | "def iframe(response):\n",
1363 | " if response == \"\":\n",
1364 | " return 1\n",
1365 | " else:\n",
1366 | " if re.findall(r\"[