├── COPYRIGHT
├── README.md
├── additive_util.py
├── iterative_additive.png
├── predict_cube.ipynb
├── predict_cube_iterative.ipynb
└── prepare_date_cube.ipynb
/COPYRIGHT:
--------------------------------------------------------------------------------
1 | Copyright 2018-19 Northwestern University
2 |
3 | Access and use of this software shall impose the following obligations
4 | and understandings on the user. The user is granted the right, without
5 | any fee or cost, to use, copy, modify, alter, enhance and distribute
6 | this software, and any derivative works thereof, and its supporting
7 | documentation for any purpose whatsoever, provided that this entire
8 | notice appears in all copies of the software, derivative works and
9 | supporting documentation. Further, Northwestern University requests
10 | that the user credit Northwestern University in any publications that
11 | result from the use of this software or in any product that includes
12 | this software. The name Northwestern University, however, may not be
13 | used in any advertising or publicity to endorse or promote any
14 | products or commercial entity unless specific written permission is
15 | obtained from Northwestern University. The user also understands that
16 | Northwestern University is not obligated to provide the user with
17 | any support, consulting, training or assistance of any kind with regard
18 | to the use, operation and performance of this software nor to provide
19 | the user with any updates, revisions, new versions or "bug fixes."
20 |
21 | THIS SOFTWARE IS PROVIDED BY NORTHWESTERN UNIVERSITY "AS IS" AND ANY
22 | EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
23 | IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
24 | PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL NORTHWESTERN UNIVERSITY BE
25 | LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
26 | DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
27 | WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION,
28 | ARISING OUT OF OR IN CONNECTION WITH THE ACCESS, USE OR PERFORMANCE
29 | OF THIS SOFTWARE.
30 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # ml-iter-additive
2 | This repository provides the code for an iterative machine learning framework that uses extremely randomized trees (Extra-Trees) to predict temperature profiles in an additive manufacturing process; a minimal usage sketch is given at the end of this README.
3 |
4 |
5 |
6 |
7 |
8 | ## Requirements
9 | 
10 | * scikit-learn 0.19.1
11 | * NumPy 1.14
12 | * pandas 0.22
13 | * XGBoost 0.7 or higher
14 |
15 | ## Files
16 |
17 | #### Core Files
18 | - additive_util.py: Core utility file for this repository (including routines for incorporating neighbor information)
19 |
20 |
21 | #### Notebooks
22 | - predict_cube.ipynb: Trains an Extra-Trees model on early timesteps and evaluates temperature predictions on later timesteps
23 | - prepare_date_cube.ipynb: Prepares the processed data cube used by the prediction notebooks
24 | - predict_cube_iterative.ipynb: Compares iterative prediction (feeding each window's predictions back into the training set) with direct, non-iterative prediction
25 |
26 | ## Data
27 |
28 | The prepared dataset is available at https://www.dropbox.com/s/cbwyhy18ofw0t7j/data.zip?dl=0
29 |
30 | ## Citation
31 |
32 | If you use this code or data, please cite:
33 |
34 | A. Paul, M. Mozaffar, Z. Yang, W. Liao, A. Choudhary, J. Cao, and A. Agrawal. A real-time iterative approach for temperature profile prediction in additive manufacturing processes. 6th IEEE International Conference on Data Science and Advanced Analytics (DSAA), 2019.
35 |
36 |
37 | ## Developer Team & Collaborators
38 |
39 | The code was developed by the CUCIS group in the Department of Electrical and Computer Engineering at Northwestern University.
40 |
41 | 1. Arindam Paul (arindam.paul@eecs.northwestern.edu)
42 | 2. Jagat Sesh Challa (jagatsesh@northwestern.edu)
43 | 3. Ankit Agrawal (ankitag@eecs.northwestern.edu)
44 | 4. Wei-keng Liao (wkliao@eecs.northwestern.edu)
45 | 5. Alok Choudhary (choudhar@eecs.northwestern.edu)
46 |
47 | The development team would like to thank our collaborators Mojtaba Mozaffar and Prof. Jian Cao from Northwestern's Advanced Manufacturing Processes Laboratory.
48 |
49 |
50 | ## Questions/Comments
51 |
52 | email: arindam.paul@eecs.northwestern.edu or ankitag@eecs.northwestern.edu
53 | Copyright (C) 2019, Northwestern University.
54 | See COPYRIGHT notice in top-level directory.
55 |
56 | ## Funding Support
57 |
58 | This work was performed under financial assistance award 70NANB19H005 from the U.S. Department of Commerce, National Institute of Standards and Technology, as part of the Center for Hierarchical Materials Design (CHiMaD). Partial support from DOE awards DE-SC0014330 and DE-SC0019358 is also acknowledged.
59 |
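60 | ## Example Usage
61 | 
62 | The helper below is an illustrative sketch (not part of the repository) of the iterative scheme implemented in predict_cube_iterative.ipynb. It assumes `df` is a DataFrame produced by `combineDataFrames` with the feature columns and the `T_self` target used in the notebooks:
63 | 
64 | ```python
65 | import numpy as np
66 | from sklearn.ensemble import ExtraTreesRegressor
67 | from sklearn.utils import shuffle
68 | 
69 | def iterative_predict(df, feature_cols, init=200, step=20, n_iter=5):
70 |     """Train on timesteps < init, then roll forward `step` timesteps per
71 |     iteration, feeding each window's predictions back into the training pool."""
72 |     train = df[df['timestep'] < init]
73 |     X_train, y_train = train[feature_cols].values, train['T_self'].values
74 |     results = []
75 |     for i in range(n_iter):
76 |         start, stop = init + i * step, init + (i + 1) * step
77 |         test = df[(df['timestep'] >= start) & (df['timestep'] < stop)]
78 |         X_test, y_test = test[feature_cols].values, test['T_self'].values
79 |         et = ExtraTreesRegressor(n_estimators=10, n_jobs=-1, random_state=300)
80 |         et.fit(*shuffle(X_train, y_train, random_state=300))
81 |         y_pred = et.predict(X_test)
82 |         results.append((y_test, y_pred))
83 |         # predictions (not ground truth) are appended to the training pool
84 |         X_train = np.vstack([X_train, X_test])
85 |         y_train = np.append(y_train, y_pred)
86 |     return results
87 | ```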
--------------------------------------------------------------------------------
/additive_util.py:
--------------------------------------------------------------------------------
1 | import pandas as pd
2 | import numpy as np
3 | import warnings
4 | import math
5 | import time
6 | import json
7 | import os
8 | from numpy import save,load
9 |
10 | DEFAULT_TEMP = 300        # default/ambient voxel temperature (presumably in Kelvin)
11 | # NON_EXISTING_TEMP = -99
12 | NON_EXISTING_TEMP = 300   # placeholder temperature for voxels/neighbors that do not exist yet
13 |
14 | class Coordinate(object):
15 | """
16 | Class for treating voxels in 3-D coordinates
17 | """
18 | def __init__(self, x,y,z ):
19 | self.X = x
20 | self.Y = y
21 | self.Z = z
22 |
23 | def __str__(self):
24 | return "Coordinate(%s,%s,%s)"%(self.X, self.Y, self.Z)
25 |
26 | def getX(self):
27 | return self.X
28 |
29 | def getY(self):
30 | return self.Y
31 |
32 | def getZ(self):
33 | return self.Z
34 |
35 | def getXYZ(self):
36 | return self.X, self.Y, self.Z
37 |
38 | def distance(self, other):
39 | dx = self.X - other.X
40 | dy = self.Y - other.Y
41 | dz = self.Z - other.Z
42 | return math.sqrt(dx**2 + dy**2 + dz**2)
43 |
44 | def modLog(num):
45 |     """
46 |     Returns the natural log of num if it is defined, else returns 0
47 |     """
48 |     try:
49 |         return math.log(num)
50 |     except (ValueError, TypeError):
51 |         return 0
52 |
53 | def loadNumpy(name,path='.'):
54 | """
55 |     Loads a numpy (.npy) file
56 |     - If no path is provided, the current directory '.' is used
57 |       as the default location of the file
58 | """
59 | if ".npy" in name:
60 | fullPath = path+'/'+name
61 | else:
62 | fullPath = path+'/'+name+'.npy'
63 | return load(fullPath, allow_pickle=True)
64 |
65 |
66 | def saveNumpy(obj, name, path='.'):
67 | """
68 | Saves numpy file
69 | """
70 | if ".npy" not in name:
71 | fullPath = path+'/'+name
72 | save(fullPath, obj, allow_pickle=True)
73 | print(name,'saved successfully in',path)
74 | else:
75 | fullPath = path+'/'+name.split(".npy")[0]
76 | save(fullPath, obj, allow_pickle=True)
77 | print(name,'saved successfully in',path)
78 |
79 |
80 | def loadDict(name,path='.'):
81 | """
82 | Load dictionary of voxels
83 | """
84 | if ".dict" not in name:
85 | name += '.dict'
86 |
87 | if ".npy" in name:
88 | fullPath = path+'/'+name
89 | else:
90 | fullPath = path+'/'+name+'.npy'
91 |     return load(fullPath, allow_pickle=True).tolist()
92 |
93 |
94 | def saveDict(obj, name, path='.'):
95 | """
96 | Save dictionary of voxels
97 | """
98 | if ".dict" not in name:
99 | fullPath = path+'/'+name+'.dict'
100 | save(fullPath, obj)
101 | print(name,'saved successfully in',path)
102 | else:
103 | fullPath = path+'/'+name
104 | save(fullPath, obj)
105 | print(name,'saved successfully in',path)
106 |
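107 | 
108 | # Usage sketch (illustrative; the name 'voxels' below is hypothetical):
109 | # saveDict() writes through numpy.save(), so a dictionary saved under the name
110 | # 'voxels' ends up on disk as 'voxels.dict.npy'; loadDict('voxels') resolves the
111 | # same suffixes and returns the original dictionary.
112 | #
113 | #     voxel_temps = {(0, 0, 0): DEFAULT_TEMP, (0, 0, 1): 415.2}
114 | #     saveDict(voxel_temps, 'voxels', path='.')   # -> ./voxels.dict.npy
115 | #     restored = loadDict('voxels', path='.')     # -> same dictionary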
--------------------------------------------------------------------------------
/iterative_additive.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NU-CUCIS/ml-iter-additive/b7e308d8caf3bff134691f087b44a33f041e1591/iterative_additive.png
--------------------------------------------------------------------------------
/predict_cube.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "code",
5 | "execution_count": 1,
6 | "metadata": {},
7 | "outputs": [],
8 | "source": [
9 | "%matplotlib inline\n",
10 | "import pandas as pd\n",
11 | "import numpy as np\n",
12 | "import math,os\n",
13 | "from numpy.random import choice\n",
14 | "import scikitplot as skplt\n",
15 | "from time import time"
16 | ]
17 | },
18 | {
19 | "cell_type": "code",
20 | "execution_count": 2,
21 | "metadata": {},
22 | "outputs": [],
23 | "source": [
24 | "import warnings\n",
25 | "warnings.filterwarnings('ignore')"
26 | ]
27 | },
28 | {
29 | "cell_type": "code",
30 | "execution_count": 3,
31 | "metadata": {},
32 | "outputs": [],
33 | "source": [
34 | "pd.options.display.max_rows = 20\n",
35 | "pd.set_option('display.max_columns', None)"
36 | ]
37 | },
38 | {
39 | "cell_type": "code",
40 | "execution_count": 4,
41 | "metadata": {},
42 | "outputs": [],
43 | "source": [
44 | "from sklearn.linear_model import LinearRegression,Ridge,SGDRegressor,ElasticNet\n",
45 | "from sklearn.model_selection import train_test_split\n",
46 | "from sklearn.utils import shuffle\n",
47 | "from sklearn.utils.validation import check_array \n",
48 | "from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error\n",
49 | "from sklearn.svm import LinearSVR\n",
50 | "from sklearn.tree import DecisionTreeRegressor\n",
51 | "from sklearn.ensemble import RandomForestRegressor,ExtraTreesRegressor,GradientBoostingRegressor,AdaBoostRegressor,BaggingRegressor"
52 | ]
53 | },
54 | {
55 | "cell_type": "code",
56 | "execution_count": 5,
57 | "metadata": {},
58 | "outputs": [],
59 | "source": [
60 | "historicalColumns,neighborColumns,neighborColumnsAggregated = [],[],[]\n",
61 | "\n",
62 | "for historical in range(5):\n",
63 | " historicalColumns += ['Tminus'+str(historical+1)]\n",
64 | "\n",
65 | "for neighbor in range(26):\n",
66 | " neighborColumns += ['T'+str(neighbor+1)+'_t-1']\n",
67 | " \n",
68 | "for neighborDegree in range(3):\n",
69 | " neighborColumnsAggregated += ['T_nbhDeg'+str(neighborDegree+1)+'_t-1']\n",
70 | "\n",
71 | "columns = ['voxelLat','voxelLong','voxelVert','voxelType','timestep','x_voxel','y_voxel','z_voxel','layerNum','time_creation', 'time_elapsed', 'x_laser','y_laser','z_laser','x_distance','y_distance','z_distance','euclidean_distance_laser'] + historicalColumns+ neighborColumns + neighborColumnsAggregated + ['T_self']"
72 | ]
73 | },
74 | {
75 | "cell_type": "code",
76 | "execution_count": 6,
77 | "metadata": {},
78 | "outputs": [
79 | {
80 | "data": {
81 | "text/plain": [
82 | "53"
83 | ]
84 | },
85 | "execution_count": 6,
86 | "metadata": {},
87 | "output_type": "execute_result"
88 | }
89 | ],
90 | "source": [
91 | "len(columns)"
92 | ]
93 | },
94 | {
95 | "cell_type": "code",
96 | "execution_count": 7,
97 | "metadata": {},
98 | "outputs": [],
99 | "source": [
100 | "def roundup(a, digits=4):\n",
101 | " n = 10**-digits\n",
102 | " return round(math.ceil(a / n) * n, digits)\n",
103 | "\n",
104 | "def isEven(num):\n",
105 | " if num%2 ==0:\n",
106 | " return True\n",
107 | " return False\n",
108 | "\n",
109 | "def modLog(num):\n",
110 | " try:\n",
111 | "        return math.log(num)\n",
112 | " except:\n",
113 | " return 0\n",
114 | "\n",
115 | "def loadNumpy(name,path='.'):\n",
116 | " if \".npy\" in name:\n",
117 | " fullPath = path+'/'+name\n",
118 | " else:\n",
119 | " fullPath = path+'/'+name+'.npy'\n",
120 | " return np.load(fullPath, allow_pickle=True)"
121 | ]
122 | },
123 | {
124 | "cell_type": "code",
125 | "execution_count": 8,
126 | "metadata": {},
127 | "outputs": [],
128 | "source": [
129 | "#featureColumns = ['timestep','x_distance','y_distance','z_distance','layerNum','Tminus1','Tminus2']+neighborColumns\n",
130 | "featureColumns = ['timestep','x_distance','y_distance','z_distance','time_elapsed'] + historicalColumns + neighborColumns\n",
131 | "\n",
132 | "featureDisplay = featureColumns"
133 | ]
134 | },
135 | {
136 | "cell_type": "code",
137 | "execution_count": 9,
138 | "metadata": {},
139 | "outputs": [
140 | {
141 | "data": {
142 | "text/plain": [
143 | "36"
144 | ]
145 | },
146 | "execution_count": 9,
147 | "metadata": {},
148 | "output_type": "execute_result"
149 | }
150 | ],
151 | "source": [
152 | "len(featureColumns)"
153 | ]
154 | },
155 | {
156 | "cell_type": "code",
157 | "execution_count": 10,
158 | "metadata": {},
159 | "outputs": [],
160 | "source": [
161 | "def plot_feature_importances(et):\n",
162 | " skplt.estimators.plot_feature_importances(et,text_fontsize=16,max_num_features=6,figsize=(24,4),feature_names=featureDisplay)"
163 | ]
164 | },
165 | {
166 | "cell_type": "code",
167 | "execution_count": 11,
168 | "metadata": {},
169 | "outputs": [],
170 | "source": [
171 | "def mean_absolute_percentage_error(y_true, y_pred):\n",
172 | "\t'''\n",
173 | "\tscikit-learn does not provide mean absolute percentage error (MAPE)\n",
174 | "\tbecause the denominator can theoretically be 0, making the value undefined,\n",
175 | "\tso this is our own implementation.\n",
176 | "\t'''\n",
177 | "# \ty_true = check_array(y_true)\n",
178 | "# \ty_pred = check_array(y_pred)\n",
179 | "\n",
180 | "\treturn np.mean(np.abs((y_true - y_pred) / y_true)) * 100\n",
181 | "\n",
182 | "def r2(y_true,y_pred):\n",
183 | " return roundup(r2_score(y_true,y_pred))\n",
184 | "\n",
185 | "def mse(y_true,y_pred):\n",
186 | " return roundup(mean_squared_error(y_true,y_pred))\n",
187 | "\n",
188 | "def mae(y_true,y_pred):\n",
189 | " return roundup(mean_absolute_error(y_true,y_pred))\n",
190 | "\n",
191 | "def mape(y_true, y_pred):\n",
192 | " return roundup(mean_absolute_percentage_error(y_true,y_pred))"
193 | ]
194 | },
195 | {
196 | "cell_type": "code",
197 | "execution_count": 12,
198 | "metadata": {},
199 | "outputs": [],
200 | "source": [
201 | "def combineDataFrames(prefix,columns=columns):\n",
202 | " List = []\n",
203 | " nums_start,nums_stop = [],[]\n",
204 | " for item in os.listdir('../data/cube-20-20-10-800-processed'):\n",
205 | " if \"cubeAgg-20-20-10-800_\" in item and \".npy\" in item:\n",
206 | " timeStep_start = int(item.split('cubeAgg-20-20-10-800_')[1].split('_')[0])\n",
207 | " nums_start += [timeStep_start]\n",
208 | " \n",
209 | " timeStep_stop = int(item.split('_')[2].split('.npy')[0])\n",
210 | " nums_stop += [timeStep_stop]\n",
211 | " \n",
212 | " nums_start = sorted(nums_start)\n",
213 | " nums_stop = sorted(nums_stop)\n",
214 | " \n",
215 | "# print (nums_start)\n",
216 | "# print (nums_stop)\n",
217 | " \n",
218 | " array = loadNumpy('../data/cube-20-20-10-800-processed/'+prefix+'_'+str(nums_start[0])+'_'+str(nums_stop[0])+'.npy')\n",
219 | " for i in range(1,len(nums_start)):\n",
220 | " newFile = '../data/cube-20-20-10-800-processed/'+prefix+'_'+str(nums_start[i])+'_'+str(nums_stop[i])+'.npy'\n",
221 | " array = np.append(array,loadNumpy(newFile),axis=0)\n",
222 | " return pd.DataFrame(array,columns=columns)\n",
223 | "\n",
224 | "\n",
225 | "# def combineDataFrames(columns=columns):\n",
226 | "# dir = './temp/'\n",
227 | "# array = loadNumpy(dir+'file1.npy')\n",
228 | "# array = np.append(array,loadNumpy(dir+'file2.npy'),axis=0)\n",
229 | " \n",
230 | "# return pd.DataFrame(array,columns=columns)"
231 | ]
232 | },
233 | {
234 | "cell_type": "code",
235 | "execution_count": 13,
236 | "metadata": {},
237 | "outputs": [],
238 | "source": [
239 | "# df_big = combineDataFrames('data_big')\n",
240 | "df_big = combineDataFrames('cubeAgg-20-20-10-800')"
241 | ]
242 | },
243 | {
244 | "cell_type": "code",
245 | "execution_count": 14,
246 | "metadata": {},
247 | "outputs": [
248 | {
249 | "data": {
250 | "text/plain": [
251 | "(9705190, 53)"
252 | ]
253 | },
254 | "execution_count": 14,
255 | "metadata": {},
256 | "output_type": "execute_result"
257 | }
258 | ],
259 | "source": [
260 | "df_big.shape"
261 | ]
262 | },
263 | {
264 | "cell_type": "code",
265 | "execution_count": 23,
266 | "metadata": {},
267 | "outputs": [],
268 | "source": [
269 | "df_train = df_big[df_big['timestep'] < 200.0]\n",
270 | "df_test = df_big[(df_big['timestep'] >= 200.0) & (df_big['timestep'] < 1200.0)]\n",
271 | "\n",
272 | "\n",
273 | "X_train = df_train.loc[:,featureColumns]\n",
274 | "y_train = df_train['T_self']\n",
275 | "\n",
276 | "X_test = df_test.loc[:,featureColumns]\n",
277 | "y_test = df_test['T_self']\n",
278 | "\n",
279 | "X_train,y_train = shuffle(X_train,y_train,random_state=300)\n",
280 | "X_test,y_test = shuffle(X_test,y_test,random_state=300)"
281 | ]
282 | },
283 | {
284 | "cell_type": "code",
285 | "execution_count": 24,
286 | "metadata": {},
287 | "outputs": [
288 | {
289 | "data": {
290 | "text/plain": [
291 | "(0.8276, 4.846)"
292 | ]
293 | },
294 | "execution_count": 24,
295 | "metadata": {},
296 | "output_type": "execute_result"
297 | }
298 | ],
299 | "source": [
300 | "et_500 = ExtraTreesRegressor(n_estimators=5, n_jobs=-1, random_state=300)\n",
301 | "et_500.fit(X_train,y_train)\n",
302 | "predicted = et_500.predict(X_test)\n",
303 | "r2(y_test,predicted) ,mape(y_test,predicted)"
304 | ]
305 | },
306 | {
307 | "cell_type": "code",
308 | "execution_count": 21,
309 | "metadata": {},
310 | "outputs": [
311 | {
312 | "data": {
313 | "image/png": "<base64-encoded PNG omitted: feature-importance bar chart produced by plot_feature_importances(et_500)>\n",
314 | "text/plain": [
315 | ""
316 | ]
317 | },
318 | "metadata": {
319 | "needs_background": "light"
320 | },
321 | "output_type": "display_data"
322 | }
323 | ],
324 | "source": [
325 | "plot_feature_importances(et_500)"
326 | ]
327 | }
328 | ],
329 | "metadata": {
330 | "kernelspec": {
331 | "display_name": "Python 3",
332 | "language": "python",
333 | "name": "python3"
334 | },
335 | "language_info": {
336 | "codemirror_mode": {
337 | "name": "ipython",
338 | "version": 3
339 | },
340 | "file_extension": ".py",
341 | "mimetype": "text/x-python",
342 | "name": "python",
343 | "nbconvert_exporter": "python",
344 | "pygments_lexer": "ipython3",
345 | "version": "3.7.4"
346 | }
347 | },
348 | "nbformat": 4,
349 | "nbformat_minor": 4
350 | }
351 |
--------------------------------------------------------------------------------
/predict_cube_iterative.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "code",
5 | "execution_count": 1,
6 | "metadata": {},
7 | "outputs": [],
8 | "source": [
9 | "%matplotlib inline\n",
10 | "import pandas as pd\n",
11 | "import numpy as np\n",
12 | "import math,os\n",
13 | "from numpy.random import choice\n",
14 | "import scikitplot as skplt\n",
15 | "from time import time"
16 | ]
17 | },
18 | {
19 | "cell_type": "code",
20 | "execution_count": 2,
21 | "metadata": {},
22 | "outputs": [],
23 | "source": [
24 | "import warnings\n",
25 | "warnings.filterwarnings('ignore')"
26 | ]
27 | },
28 | {
29 | "cell_type": "code",
30 | "execution_count": 3,
31 | "metadata": {},
32 | "outputs": [],
33 | "source": [
34 | "pd.options.display.max_rows = 100\n",
35 | "pd.set_option('display.max_columns', None)"
36 | ]
37 | },
38 | {
39 | "cell_type": "code",
40 | "execution_count": 4,
41 | "metadata": {},
42 | "outputs": [],
43 | "source": [
44 | "from sklearn.linear_model import LinearRegression,Ridge,SGDRegressor,ElasticNet\n",
45 | "from sklearn.model_selection import train_test_split\n",
46 | "from sklearn.utils import shuffle\n",
47 | "from sklearn.utils.validation import check_array \n",
48 | "from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error\n",
49 | "from sklearn.svm import LinearSVR\n",
50 | "from sklearn.tree import DecisionTreeRegressor\n",
51 | "from sklearn.ensemble import RandomForestRegressor,ExtraTreesRegressor,GradientBoostingRegressor,AdaBoostRegressor,BaggingRegressor"
52 | ]
53 | },
54 | {
55 | "cell_type": "code",
56 | "execution_count": 5,
57 | "metadata": {},
58 | "outputs": [],
59 | "source": [
60 | "historicalColumns,neighborColumns,neighborColumnsAggregated = [],[],[]\n",
61 | "\n",
62 | "for historical in range(5):\n",
63 | " historicalColumns += ['Tminus'+str(historical+1)]\n",
64 | "\n",
65 | "for neighbor in range(26):\n",
66 | " neighborColumns += ['T'+str(neighbor+1)+'_t-1']\n",
67 | " \n",
68 | "for neighborDegree in range(3):\n",
69 | " neighborColumnsAggregated += ['T_nbhDeg'+str(neighborDegree+1)+'_t-1']\n",
70 | "\n",
71 | "columns = ['voxelLat','voxelLong','voxelVert','voxelType','timestep','x_voxel','y_voxel','z_voxel','layerNum','time_creation', 'time_elapsed', 'x_laser','y_laser','z_laser','x_distance','y_distance','z_distance','euclidean_distance_laser'] + historicalColumns+ neighborColumns + neighborColumnsAggregated + ['T_self']"
72 | ]
73 | },
74 | {
75 | "cell_type": "code",
76 | "execution_count": 6,
77 | "metadata": {},
78 | "outputs": [],
79 | "source": [
80 | "def roundup(a, digits=4):\n",
81 | " n = 10**-digits\n",
82 | " return round(math.ceil(a / n) * n, digits)\n",
83 | "\n",
84 | "def isEven(num):\n",
85 | " if num%2 ==0:\n",
86 | " return True\n",
87 | " return False\n",
88 | "\n",
89 | "def modLog(num):\n",
90 | " try:\n",
91 | "        return math.log(num)\n",
92 | " except:\n",
93 | " return 0\n",
94 | "\n",
95 | "def loadNumpy(name,path='.'):\n",
96 | " if \".npy\" in name:\n",
97 | " fullPath = path+'/'+name\n",
98 | " else:\n",
99 | " fullPath = path+'/'+name+'.npy'\n",
100 | " return np.load(fullPath, allow_pickle=True)"
101 | ]
102 | },
103 | {
104 | "cell_type": "code",
105 | "execution_count": 7,
106 | "metadata": {},
107 | "outputs": [],
108 | "source": [
109 | "featureColumns = ['timestep','x_distance','y_distance','z_distance','time_elapsed'] + historicalColumns + neighborColumns\n",
110 | "featureDisplay = featureColumns\n",
111 | "\n",
112 | "# featureDisplay[7] = 'T_immediate_x-1'\n",
113 | "# featureDisplay[8] = 'T_immediate_x+1'\n",
114 | "# featureDisplay[9] = 'T_immediate_y-1'\n",
115 | "# featureDisplay[10] = 'T_immediate_y+1'\n",
116 | "# featureDisplay[11] = 'T_immediate_z-1'\n",
117 | "\n",
118 | "# featureDisplay[13] = 'T_immediate_x-1,y-1'\n",
119 | "# featureDisplay[14] = 'T_immediate_x-1,y+1'\n",
120 | "# featureDisplay[15] = 'T_immediate_x+1,y-1'\n",
121 | "# featureDisplay[16] = 'T_immediate_x+1,y+1'\n",
122 | "\n",
123 | "# featureDisplay[21] = 'T_immediate_x-1,z-1'"
124 | ]
125 | },
126 | {
127 | "cell_type": "code",
128 | "execution_count": 8,
129 | "metadata": {},
130 | "outputs": [],
131 | "source": [
132 | "def plot_feature_importances(et):\n",
133 | " skplt.estimators.plot_feature_importances(et,text_fontsize=16,max_num_features=6,figsize=(24,4),feature_names=featureDisplay)"
134 | ]
135 | },
136 | {
137 | "cell_type": "code",
138 | "execution_count": 9,
139 | "metadata": {},
140 | "outputs": [],
141 | "source": [
142 | "def mean_absolute_percentage_error(y_true, y_pred):\n",
143 | "\t'''\n",
144 | "\tscikit-learn does not provide mean absolute percentage error (MAPE)\n",
145 | "\tbecause the denominator can theoretically be 0, making the value undefined,\n",
146 | "\tso this is our own implementation.\n",
147 | "\t'''\n",
148 | "# \ty_true = check_array(y_true)\n",
149 | "# \ty_pred = check_array(y_pred)\n",
150 | "\n",
151 | "\treturn np.mean(np.abs((y_true - y_pred) / y_true)) * 100\n",
152 | "\n",
153 | "def r2(y_true,y_pred):\n",
154 | " return roundup(r2_score(y_true,y_pred))\n",
155 | "\n",
156 | "def mse(y_true,y_pred):\n",
157 | " return roundup(mean_squared_error(y_true,y_pred))\n",
158 | "\n",
159 | "def mae(y_true,y_pred):\n",
160 | " return roundup(mean_absolute_error(y_true,y_pred))\n",
161 | "\n",
162 | "def mape(y_true, y_pred):\n",
163 | " return roundup(mean_absolute_percentage_error(y_true,y_pred))"
164 | ]
165 | },
166 | {
167 | "cell_type": "code",
168 | "execution_count": 10,
169 | "metadata": {},
170 | "outputs": [],
171 | "source": [
172 | "def combineDataFrames(prefix,columns=columns):\n",
173 | " List = []\n",
174 | " nums_start,nums_stop = [],[]\n",
175 | " for item in os.listdir('../data/cube-20-20-10-800-processed'):\n",
176 | " if \"cubeAgg-20-20-10-800_\" in item and \".npy\" in item:\n",
177 | " timeStep_start = int(item.split('cubeAgg-20-20-10-800_')[1].split('_')[0])\n",
178 | " nums_start += [timeStep_start]\n",
179 | " \n",
180 | " timeStep_stop = int(item.split('_')[2].split('.npy')[0])\n",
181 | " nums_stop += [timeStep_stop]\n",
182 | " \n",
183 | " nums_start = sorted(nums_start)\n",
184 | " nums_stop = sorted(nums_stop)\n",
185 | " \n",
186 | "# print (nums_start)\n",
187 | "# print (nums_stop)\n",
188 | " \n",
189 | " array = loadNumpy('../data/cube-20-20-10-800-processed/'+prefix+'_'+str(nums_start[0])+'_'+str(nums_stop[0])+'.npy')\n",
190 | " for i in range(1,len(nums_start)):\n",
191 | " newFile = '../data/cube-20-20-10-800-processed/'+prefix+'_'+str(nums_start[i])+'_'+str(nums_stop[i])+'.npy'\n",
192 | " array = np.append(array,loadNumpy(newFile),axis=0)\n",
193 | " return pd.DataFrame(array,columns=columns)\n",
194 | "\n",
195 | "\n",
196 | "# def combineDataFrames(columns=columns):\n",
197 | "# dir = './temp/'\n",
198 | "# array = loadNumpy(dir+'file1.npy')\n",
199 | "# array = np.append(array,loadNumpy(dir+'file2.npy'),axis=0)\n",
200 | " \n",
201 | "# return pd.DataFrame(array,columns=columns)"
202 | ]
203 | },
204 | {
205 | "cell_type": "code",
206 | "execution_count": 11,
207 | "metadata": {},
208 | "outputs": [],
209 | "source": [
210 | "df_big = combineDataFrames('cubeAgg-20-20-10-800')"
211 | ]
212 | },
213 | {
214 | "cell_type": "markdown",
215 | "metadata": {},
216 | "source": [
217 | "## Iterative Prediction (Trial Run)"
218 | ]
219 | },
220 | {
221 | "cell_type": "code",
222 | "execution_count": null,
223 | "metadata": {},
224 | "outputs": [],
225 | "source": [
226 | "df_train_1 = df_big[df_big.timestep < 200.0]\n",
227 | "df_test_1 = df_big[(df_big.timestep >= 200.0) & (df_big.timestep < 220.0)]\n",
228 | "\n",
229 | "# featureColumns = ['timestep','x_distance','y_distance','z_distance','layerNum','Tminus1','Tminus2']+neighborColumns\n",
230 | "\n",
231 | "X_train_1, y_train_1= shuffle(df_train_1.loc[:,featureColumns ], df_train_1['T_self'].values, random_state=300)\n",
232 | "X_test_1,y_test_1 = shuffle(df_test_1.loc[:,featureColumns],df_test_1['T_self'],random_state=300)\n",
233 | "\n",
234 | "et_1 = ExtraTreesRegressor(n_estimators=10, n_jobs=-1,random_state=300)\n",
235 | "et_1.fit(X_train_1,y_train_1)\n",
236 | "y_predicted_1 = et_1.predict(X_test_1)\n",
237 | "print('Iteration 1 over....')\n",
238 | "\n",
239 | "X_train_2, y_train_2= X_train_1.append(X_test_1, ignore_index=True), np.append( y_train_1, y_predicted_1)\n",
240 | "X_train_2,y_train_2 = shuffle(X_train_2,y_train_2,random_state=300)\n",
241 | "\n",
242 | "df_test_2 = df_big[(df_big.timestep >= 220.0) & (df_big.timestep < 240.0)]\n",
243 | "X_test_2,y_test_2 = shuffle(df_test_2.loc[:,featureColumns],df_test_2['T_self'],random_state=300)\n",
244 | "\n",
245 | "et_2 = ExtraTreesRegressor(n_estimators=10, n_jobs=-1,random_state=300)\n",
246 | "et_2.fit(X_train_2,y_train_2)\n",
247 | "y_predicted_2 = et_2.predict(X_test_2)\n",
248 | "print('Iteration 2 over....')\n",
249 | "\n",
250 | "X_train_3, y_train_3= X_train_2.append(X_test_2, ignore_index=True), np.append( y_train_2, y_predicted_2)\n",
251 | "X_train_3,y_train_3 = shuffle(X_train_3,y_train_3,random_state=300)\n",
252 | "\n",
253 | "df_test_3 = df_big[(df_big.timestep >= 240.0) & (df_big.timestep < 260.0)]\n",
254 | "X_test_3,y_test_3 = shuffle(df_test_3.loc[:,featureColumns],df_test_3['T_self'],random_state=300)\n",
255 | "\n",
256 | "et_3 = ExtraTreesRegressor(n_estimators=10, n_jobs=-1,random_state=300)\n",
257 | "et_3.fit(X_train_3,y_train_3)\n",
258 | "y_predicted_3 = et_3.predict(X_test_3)\n",
259 | "print('Iteration 3 over....')\n",
260 | "\n",
261 | "X_train_4, y_train_4= X_train_3.append(X_test_3, ignore_index=True), np.append( y_train_3, y_predicted_3)\n",
262 | "X_train_4,y_train_4 = shuffle(X_train_4,y_train_4,random_state=300)\n",
263 | "\n",
264 | "df_test_4 = df_big[(df_big.timestep >= 260.0) & (df_big.timestep < 280.0)]\n",
265 | "X_test_4,y_test_4 = shuffle(df_test_4.loc[:,featureColumns],df_test_4['T_self'],random_state=300)\n",
266 | "\n",
267 | "et_4 = ExtraTreesRegressor(n_estimators=10, n_jobs=-1,random_state=300)\n",
268 | "et_4.fit(X_train_4,y_train_4)\n",
269 | "y_predicted_4 = et_4.predict(X_test_4)\n",
270 | "print('Iteration 4 over....')\n",
271 | "\n",
272 | "X_train_5, y_train_5= X_train_4.append(X_test_4, ignore_index=True), np.append( y_train_4, y_predicted_4)\n",
273 | "X_train_5,y_train_5 = shuffle(X_train_5,y_train_5,random_state=300)\n",
274 | "\n",
275 | "df_test_5 = df_big[(df_big.timestep >= 280.0) & (df_big.timestep < 300.0)]\n",
276 | "X_test_5,y_test_5 = shuffle(df_test_5.loc[:,featureColumns],df_test_5['T_self'],random_state=300)\n",
277 | "\n",
278 | "et_5 = ExtraTreesRegressor(n_estimators=10, n_jobs=-1,random_state=300)\n",
279 | "et_5.fit(X_train_5,y_train_5)\n",
280 | "y_predicted_5 = et_5.predict(X_test_5)\n",
281 | "print('Iteration 5 over....')\n",
282 | "\n",
283 | "\n",
284 | "print ('Iterative training results')\n",
285 | "print(r2(y_test_1,y_predicted_1), mape(y_test_1,y_predicted_1))\n",
286 | "print(r2(y_test_2,y_predicted_2), mape(y_test_2,y_predicted_2))\n",
287 | "print(r2(y_test_3,y_predicted_3), mape(y_test_3,y_predicted_3))\n",
288 | "print(r2(y_test_4,y_predicted_4), mape(y_test_4,y_predicted_4))\n",
289 | "print(r2(y_test_5,y_predicted_5), mape(y_test_5,y_predicted_5))\n",
290 | "\n",
291 | "print ('One step training results')\n",
292 | "et_direct = ExtraTreesRegressor(n_estimators=10, n_jobs=-1,random_state=300)\n",
293 | "et_direct.fit(X_train_1,y_train_1)\n",
294 | "y_predicted = et_direct.predict(X_test_5)\n",
295 | "\n",
296 | "r2(y_test_5,y_predicted) ,mape(y_test_5,y_predicted)"
297 | ]
298 | },
299 | {
300 | "cell_type": "markdown",
301 | "metadata": {},
302 | "source": [
303 | "## Iterative vs Non-iterative prediction"
304 | ]
305 | },
306 | {
307 | "cell_type": "code",
308 | "execution_count": 12,
309 | "metadata": {},
310 | "outputs": [],
311 | "source": [
312 | "def compare_iterative_direct_prediction(n=5, init=100, TIMESTEP_ITER = 50, n_estimators=10):\n",
313 | " \n",
314 | " df_train_i_minus_1 = df_big[df_big.timestep < init]\n",
315 | " df_test_i_minus_1 = df_big[(df_big.timestep >= init) & (df_big.timestep < init+TIMESTEP_ITER)]\n",
316 | "# print (df_test_i_minus_1)\n",
317 | "\n",
318 | " X_train_i_minus_1, y_train_i_minus_1= shuffle(df_train_i_minus_1.loc[:,featureColumns ], df_train_i_minus_1['T_self'].values, random_state=300)\n",
319 | " X_test_i_minus_1,y_test_i_minus_1 = shuffle(df_test_i_minus_1.loc[:,featureColumns],df_test_i_minus_1['T_self'],random_state=300)\n",
320 | "\n",
321 | " et_i_minus_1 = ExtraTreesRegressor(n_estimators=n_estimators, n_jobs=-1,random_state=300)\n",
322 | " start = time()\n",
323 | " et_i_minus_1.fit(X_train_i_minus_1,y_train_i_minus_1)\n",
324 | " y_predicted_i_minus_1 = et_i_minus_1.predict(X_test_i_minus_1)\n",
325 | " \n",
326 | " START = init\n",
327 | " STOP = init + TIMESTEP_ITER\n",
328 | " \n",
329 | "# print (\"START is: \",START)\n",
330 | "# print (\"STOP is: \", STOP)\n",
331 | " \n",
332 | "# print (\"r2 is: \",r2(y_test_i_minus_1,y_predicted_i_minus_1) , \"mape is: \", mape(y_test_i_minus_1,y_predicted_i_minus_1))\n",
333 | " \n",
334 | " temp_y = y_test_i_minus_1\n",
335 | " temp_predicted = y_predicted_i_minus_1\n",
336 | "\n",
337 | " print('Iteration 1 over....')\n",
338 | "# print('\\n')\n",
339 | " \n",
340 | " \n",
341 | " for i in range(2,n+1):\n",
342 | " X_train_i, y_train_i= X_train_i_minus_1.append(X_test_i_minus_1, ignore_index=True), np.append(y_train_i_minus_1, y_predicted_i_minus_1)\n",
343 | "# print(X_train_i.iloc[:,0:2].tail(50))\n",
344 | "# print(X_train_i_minus_1.iloc[:,0:2].tail(50))\n",
345 | " X_train_i,y_train_i = shuffle(X_train_i,y_train_i,random_state=300)\n",
346 | " START = START + TIMESTEP_ITER\n",
347 | " STOP = STOP + TIMESTEP_ITER\n",
348 | "# print (\"START is: \",START)\n",
349 | "# print (\"STOP is: \", STOP)\n",
350 | " df_test_i = df_big[(df_big.timestep >= START) & (df_big.timestep < STOP)]\n",
351 | "# print (df_test_i)\n",
352 | " X_test_i,y_test_i = shuffle(df_test_i.loc[:,featureColumns],df_test_i['T_self'],random_state=300)\n",
353 | "\n",
354 | " et_i = ExtraTreesRegressor(n_estimators=n_estimators, n_jobs=-1,random_state=300)\n",
355 | " et_i.fit(X_train_i,y_train_i)\n",
356 | " y_predicted_i = et_i.predict(X_test_i)\n",
357 | "# print (\"r2 is: \", r2(y_test_i,y_predicted_i),\"mape is: \",mape(y_test_i,y_predicted_i))\n",
358 | " \n",
359 | " temp_y = temp_y.append(y_test_i)\n",
360 | " temp_predicted = np.append(temp_predicted,y_predicted_i)\n",
361 | " \n",
362 | " X_train_i_minus_1 = X_train_i\n",
363 | " X_test_i_minus_1 = X_test_i\n",
364 | " y_train_i_minus_1 = y_train_i\n",
365 | "# y_test_i_minus_1 = y_test_i\n",
366 | " y_predicted_i_minus_1 = y_predicted_i\n",
367 | " \n",
368 | " print('Iteration '+str(i)+' over....')\n",
369 | "# print('\\n')\n",
370 | "# if i==2:\n",
371 | "# break\n",
372 | "\n",
373 | " stop = time()\n",
374 | "# print ('time elapsed for iterative is ',(stop-start),'seconds')\n",
375 | "# print ('\\n')\n",
376 | "\n",
377 | "# print (r2(y_test_i,y_predicted_i) ,mape(y_test_i,y_predicted_i))\n",
378 | "\n",
379 | " print('\\n')\n",
380 | " print ('Iterative Prediction Accuracy for all timesteps predicted: ')\n",
381 | " \n",
382 | "# print (temp_y)\n",
383 | "# print (temp_predicted)\n",
384 | " \n",
385 | "# print (len(temp_y),len(temp_predicted))\n",
386 | " \n",
387 | " print (\"r2 is: \", r2(temp_y,temp_predicted) ,\"mape is: \", mape(temp_y,temp_predicted)) \n",
388 | " print('\\n')\n",
389 | " \n",
390 | "\n",
391 | " print ('Non-iterative accuracy for last TIME_STEP_ITER timesteps: ')\n",
392 | " df_train_1 = df_big[df_big.timestep < init]\n",
393 | " X_train_1, y_train_1= shuffle(df_train_1.loc[:,featureColumns ], df_train_1['T_self'].values, random_state=300)\n",
394 | " start = time()\n",
395 | " et_direct = ExtraTreesRegressor(n_estimators=n_estimators, n_jobs=-1,random_state=300)\n",
396 | " et_direct.fit(X_train_1,y_train_1)\n",
397 | " y_predicted_direct = et_direct.predict(X_test_i)\n",
398 | "\n",
399 | " stop = time()\n",
400 | "# print ('time elapsed for direct is ',(stop-start),'seconds')\n",
401 | " print (\"r2 is: \", r2(y_test_i,y_predicted_direct), \"mape is: \", mape(y_test_i,y_predicted_direct))\n",
402 | " print('\\n')\n",
403 | "\n",
404 | " \n",
405 | " \n",
406 | " print ('Non-iterative accuracy for all timesteps predicted: ')\n",
407 | " \n",
408 | " df_train = df_big[df_big['timestep'] < init]\n",
409 | " df_test = df_big[(df_big['timestep'] >= init) & (df_big['timestep'] < (init+ (TIMESTEP_ITER * n) ))]\n",
410 | "\n",
411 | "\n",
412 | " X_train = df_train.loc[:,featureColumns]\n",
413 | " y_train = df_train['T_self']\n",
414 | "\n",
415 | " X_test = df_test.loc[:,featureColumns]\n",
416 | " y_test = df_test['T_self']\n",
417 | "\n",
418 | " X_train,y_train = shuffle(X_train,y_train,random_state=300)\n",
419 | " X_test,y_test = shuffle(X_test,y_test,random_state=300)\n",
420 | " \n",
421 | " et_direct = ExtraTreesRegressor(n_estimators=n_estimators, n_jobs=-1,random_state=300)\n",
422 | " et_direct.fit(X_train,y_train)\n",
423 | " predicted = et_direct.predict(X_test)\n",
424 | " print (\"r2 is: \", r2(y_test,predicted), \"mape is: \", mape(y_test,predicted))"
425 | ]
426 | },
427 | {
428 | "cell_type": "code",
429 | "execution_count": 13,
430 | "metadata": {},
431 | "outputs": [
432 | {
433 | "name": "stdout",
434 | "output_type": "stream",
435 | "text": [
436 | "Iteration 1 over....\n",
437 | "Iteration 2 over....\n",
438 | "Iteration 3 over....\n",
439 | "Iteration 4 over....\n",
440 | "Iteration 5 over....\n",
441 | "Iteration 6 over....\n",
442 | "Iteration 7 over....\n",
443 | "Iteration 8 over....\n",
444 | "Iteration 9 over....\n",
445 | "Iteration 10 over....\n",
446 | "\n",
447 | "\n",
448 | "Iterative Prediction Accuracy for all timesteps predicted: \n",
449 | "r2 is: 0.9601 mape is: 2.9672\n",
450 | "\n",
451 | "\n",
452 | "Non-iterative accuracy for last TIME_STEP_ITER timesteps: \n",
453 | "r2 is: 0.9183 mape is: 3.2508\n",
454 | "\n",
455 | "\n",
456 | "Non-iterative accuracy for all timesteps predicted: \n",
457 | "r2 is: 0.9614 mape is: 2.4556\n"
458 | ]
459 | }
460 | ],
461 | "source": [
462 | "compare_iterative_direct_prediction(n=10,init=200,TIMESTEP_ITER=20,n_estimators=10) # n: no. of iterations, init: initial timestep to start prediction, TIMESTEP_ITER: no. of timesteps predicted per iteration, n_estimators: no. of estimators in the Extra Trees model"
463 | ]
464 | },
465 | {
466 | "cell_type": "code",
467 | "execution_count": 14,
468 | "metadata": {},
469 | "outputs": [
470 | {
471 | "name": "stdout",
472 | "output_type": "stream",
473 | "text": [
474 | "Iteration 1 over....\n",
475 | "Iteration 2 over....\n",
476 | "Iteration 3 over....\n",
477 | "Iteration 4 over....\n",
478 | "Iteration 5 over....\n",
479 | "Iteration 6 over....\n",
480 | "Iteration 7 over....\n",
481 | "Iteration 8 over....\n",
482 | "Iteration 9 over....\n",
483 | "Iteration 10 over....\n",
484 | "Iteration 11 over....\n",
485 | "Iteration 12 over....\n",
486 | "Iteration 13 over....\n",
487 | "Iteration 14 over....\n",
488 | "Iteration 15 over....\n",
489 | "Iteration 16 over....\n",
490 | "Iteration 17 over....\n",
491 | "Iteration 18 over....\n",
492 | "Iteration 19 over....\n",
493 | "Iteration 20 over....\n",
494 | "\n",
495 | "\n",
496 | "Iterative Prediction Accuracy for all timesteps predicted: \n",
497 | "r2 is: 0.9562 mape is: 2.6861\n",
498 | "\n",
499 | "\n",
500 | "Non-iterative accuracy for last TIME_STEP_ITER timesteps: \n",
501 | "r2 is: 0.8914 mape is: 3.9053\n",
502 | "\n",
503 | "\n",
504 | "Non-iterative accuracy for all timesteps predicted: \n",
505 | "r2 is: 0.9373 mape is: 3.1593\n"
506 | ]
507 | }
508 | ],
509 | "source": [
510 | "compare_iterative_direct_prediction(n=20,init=200,TIMESTEP_ITER=20,n_estimators=10)"
511 | ]
512 | },
513 | {
514 | "cell_type": "code",
515 | "execution_count": 15,
516 | "metadata": {},
517 | "outputs": [
518 | {
519 | "name": "stdout",
520 | "output_type": "stream",
521 | "text": [
522 | "Iteration 1 over....\n",
523 | "Iteration 2 over....\n",
524 | "Iteration 3 over....\n",
525 | "Iteration 4 over....\n",
526 | "Iteration 5 over....\n",
527 | "Iteration 6 over....\n",
528 | "Iteration 7 over....\n",
529 | "Iteration 8 over....\n",
530 | "Iteration 9 over....\n",
531 | "Iteration 10 over....\n",
532 | "Iteration 11 over....\n",
533 | "Iteration 12 over....\n",
534 | "Iteration 13 over....\n",
535 | "Iteration 14 over....\n",
536 | "Iteration 15 over....\n",
537 | "Iteration 16 over....\n",
538 | "Iteration 17 over....\n",
539 | "Iteration 18 over....\n",
540 | "Iteration 19 over....\n",
541 | "Iteration 20 over....\n",
542 | "Iteration 21 over....\n",
543 | "Iteration 22 over....\n",
544 | "Iteration 23 over....\n",
545 | "Iteration 24 over....\n",
546 | "Iteration 25 over....\n",
547 | "Iteration 26 over....\n",
548 | "Iteration 27 over....\n",
549 | "Iteration 28 over....\n",
550 | "Iteration 29 over....\n",
551 | "Iteration 30 over....\n",
552 | "\n",
553 | "\n",
554 | "Iterative Prediction Accuracy for all timesteps predicted: \n",
555 | "r2 is: 0.935 mape is: 3.375\n",
556 | "\n",
557 | "\n",
558 | "Non-iterative accuracy for last TIME_STEP_ITER timesteps: \n",
559 | "r2 is: 0.822 mape is: 4.61\n",
560 | "\n",
561 | "\n",
562 | "Non-iterative accuracy for all timesteps predicted: \n",
563 | "r2 is: 0.9058 mape is: 3.5953\n"
564 | ]
565 | }
566 | ],
567 | "source": [
568 | "compare_iterative_direct_prediction(n=30,init=200,TIMESTEP_ITER=20,n_estimators=10)"
569 | ]
570 | },
571 | {
572 | "cell_type": "code",
573 | "execution_count": 16,
574 | "metadata": {},
575 | "outputs": [
576 | {
577 | "name": "stdout",
578 | "output_type": "stream",
579 | "text": [
580 | "Iteration 1 over....\n",
581 | "Iteration 2 over....\n",
582 | "Iteration 3 over....\n",
583 | "Iteration 4 over....\n",
584 | "Iteration 5 over....\n",
585 | "Iteration 6 over....\n",
586 | "Iteration 7 over....\n",
587 | "Iteration 8 over....\n",
588 | "Iteration 9 over....\n",
589 | "Iteration 10 over....\n",
590 | "Iteration 11 over....\n",
591 | "Iteration 12 over....\n",
592 | "Iteration 13 over....\n",
593 | "Iteration 14 over....\n",
594 | "Iteration 15 over....\n",
595 | "Iteration 16 over....\n",
596 | "Iteration 17 over....\n",
597 | "Iteration 18 over....\n",
598 | "Iteration 19 over....\n",
599 | "Iteration 20 over....\n",
600 | "Iteration 21 over....\n",
601 | "Iteration 22 over....\n",
602 | "Iteration 23 over....\n",
603 | "Iteration 24 over....\n",
604 | "Iteration 25 over....\n",
605 | "Iteration 26 over....\n",
606 | "Iteration 27 over....\n",
607 | "Iteration 28 over....\n",
608 | "Iteration 29 over....\n",
609 | "Iteration 30 over....\n",
610 | "Iteration 31 over....\n",
611 | "Iteration 32 over....\n",
612 | "Iteration 33 over....\n",
613 | "Iteration 34 over....\n",
614 | "Iteration 35 over....\n",
615 | "Iteration 36 over....\n",
616 | "Iteration 37 over....\n",
617 | "Iteration 38 over....\n",
618 | "Iteration 39 over....\n",
619 | "Iteration 40 over....\n",
620 | "\n",
621 | "\n",
622 | "Iterative Prediction Accuracy for all timesteps predicted: \n",
623 | "r2 is: 0.9243 mape is: 3.0525\n",
624 | "\n",
625 | "\n",
626 | "Non-iterative accuracy for last TIME_STEP_ITER timesteps: \n",
627 | "r2 is: 0.7719 mape is: 4.22\n",
628 | "\n",
629 | "\n",
630 | "Non-iterative accuracy for all timesteps predicted: \n",
631 | "r2 is: 0.8772 mape is: 3.959\n"
632 | ]
633 | }
634 | ],
635 | "source": [
636 | "compare_iterative_direct_prediction(n=40,init=200,TIMESTEP_ITER=20,n_estimators=10)"
637 | ]
638 | },
639 | {
640 | "cell_type": "code",
641 | "execution_count": 17,
642 | "metadata": {},
643 | "outputs": [
644 | {
645 | "name": "stdout",
646 | "output_type": "stream",
647 | "text": [
648 | "Iteration 1 over....\n",
649 | "Iteration 2 over....\n",
650 | "Iteration 3 over....\n",
651 | "Iteration 4 over....\n",
652 | "Iteration 5 over....\n",
653 | "Iteration 6 over....\n",
654 | "Iteration 7 over....\n",
655 | "Iteration 8 over....\n",
656 | "Iteration 9 over....\n",
657 | "Iteration 10 over....\n",
658 | "Iteration 11 over....\n",
659 | "Iteration 12 over....\n",
660 | "Iteration 13 over....\n",
661 | "Iteration 14 over....\n",
662 | "Iteration 15 over....\n",
663 | "Iteration 16 over....\n",
664 | "Iteration 17 over....\n",
665 | "Iteration 18 over....\n",
666 | "Iteration 19 over....\n",
667 | "Iteration 20 over....\n",
668 | "Iteration 21 over....\n",
669 | "Iteration 22 over....\n",
670 | "Iteration 23 over....\n",
671 | "Iteration 24 over....\n",
672 | "Iteration 25 over....\n",
673 | "Iteration 26 over....\n",
674 | "Iteration 27 over....\n",
675 | "Iteration 28 over....\n",
676 | "Iteration 29 over....\n",
677 | "Iteration 30 over....\n",
678 | "Iteration 31 over....\n",
679 | "Iteration 32 over....\n",
680 | "Iteration 33 over....\n",
681 | "Iteration 34 over....\n",
682 | "Iteration 35 over....\n",
683 | "Iteration 36 over....\n",
684 | "Iteration 37 over....\n",
685 | "Iteration 38 over....\n",
686 | "Iteration 39 over....\n",
687 | "Iteration 40 over....\n",
688 | "Iteration 41 over....\n",
689 | "Iteration 42 over....\n",
690 | "Iteration 43 over....\n",
691 | "Iteration 44 over....\n",
692 | "Iteration 45 over....\n",
693 | "Iteration 46 over....\n",
694 | "Iteration 47 over....\n",
695 | "Iteration 48 over....\n",
696 | "Iteration 49 over....\n",
697 | "Iteration 50 over....\n",
698 | "\n",
699 | "\n",
700 | "Iterative Prediction Accuracy for all timesteps predicted: \n",
701 | "r2 is: 0.9041 mape is: 3.4048\n",
702 | "\n",
703 | "\n",
704 | "Non-iterative accuracy for last TIME_STEP_ITER timesteps: \n",
705 | "r2 is: 0.7242 mape is: 5.3229\n",
706 | "\n",
707 | "\n",
708 | "Non-iterative accuracy for all timesteps predicted: \n",
709 | "r2 is: 0.8448 mape is: 4.2655\n"
710 | ]
711 | }
712 | ],
713 | "source": [
714 | "compare_iterative_direct_prediction(n=50,init=200,TIMESTEP_ITER=20,n_estimators=10)"
715 | ]
716 | },
717 | {
718 | "cell_type": "code",
719 | "execution_count": null,
720 | "metadata": {},
721 | "outputs": [],
722 | "source": []
723 | }
724 | ],
725 | "metadata": {
726 | "kernelspec": {
727 | "display_name": "Python 3",
728 | "language": "python",
729 | "name": "python3"
730 | },
731 | "language_info": {
732 | "codemirror_mode": {
733 | "name": "ipython",
734 | "version": 3
735 | },
736 | "file_extension": ".py",
737 | "mimetype": "text/x-python",
738 | "name": "python",
739 | "nbconvert_exporter": "python",
740 | "pygments_lexer": "ipython3",
741 | "version": "3.7.4"
742 | }
743 | },
744 | "nbformat": 4,
745 | "nbformat_minor": 4
746 | }
747 |
--------------------------------------------------------------------------------