├── LICENSE
├── README.md
├── screenshots
│   ├── flappy_01.png
│   ├── flappy_02.png
│   ├── flappy_03.png
│   ├── flappy_04.png
│   ├── flappy_05.png
│   ├── flappy_06.png
│   ├── flappy_07.png
│   ├── flappy_08.png
│   ├── flappy_09.png
│   └── flappy_10.png
└── source
    ├── assets
    │   ├── fnt_chars_black.fnt
    │   ├── fnt_chars_black.png
    │   ├── fnt_digits_blue.fnt
    │   ├── fnt_digits_blue.png
    │   ├── fnt_digits_green.fnt
    │   ├── fnt_digits_green.png
    │   ├── fnt_digits_red.fnt
    │   ├── fnt_digits_red.png
    │   ├── img_bird.png
    │   ├── img_buttons.png
    │   ├── img_ground.png
    │   ├── img_logo.png
    │   ├── img_pause.png
    │   ├── img_target.png
    │   └── img_tree.png
    ├── gameplay.js
    ├── genetic.js
    ├── index.html
    ├── phaser.min.js
    └── synaptic.min.js
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2017 Srdjan Susnic
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Machine Learning for Flappy Bird using Neural Network and Genetic Algorithm
2 |
3 | Here is the source code for an HTML5 project that implements machine learning in the Flappy Bird video game using neural networks and a genetic algorithm. The program teaches a little bird how to flap optimally in order to fly safely through barriers for as long as possible.
4 |
5 | You can find the complete tutorial, with many more details and a demo, here:
6 | [http://www.askforgametask.com/tutorial/machine-learning-algorithm-flappy-bird](http://www.askforgametask.com/tutorial/machine-learning-algorithm-flappy-bird)
7 |
8 | You can also watch a short video with a brief presentation of the algorithm:
9 | [https://www.youtube.com/watch?v=aeWmdojEJf0](https://www.youtube.com/watch?v=aeWmdojEJf0)
10 |
11 | All code is written in HTML5 using the [Phaser framework](http://phaser.io/) and the [Synaptic Neural Network library](https://synaptic.juancazala.com) for the neural network implementation.
12 |
13 | 
14 |
15 | ## Neural Network Architecture
16 |
17 | To play the game, each unit (bird) has its own neural network consisting of the following 3 layers:
18 | 1. an input layer with 2 neurons representing what the bird sees:
19 |
20 | ```
21 | 1) horizontal distance between the bird and the closest gap
22 | 2) height difference between the bird and the closest gap
23 | ```
24 |
25 | 2. a hidden layer with 6 neurons
26 | 3. an output layer with 1 neuron used to provide an action as follows:
27 |
28 | ```
29 | if output > 0.5 then flap else do nothing
30 | ```
31 |
32 | 
33 |
34 |
35 | The [Synaptic Neural Network library](https://synaptic.juancazala.com) is used to implement the entire artificial neural network instead of building a new one from scratch.
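
A minimal sketch of building and querying such a 2-6-1 network with the Synaptic API is shown below. It is illustrative only: the variable names, the scaling constant and the sample values are assumptions, not an excerpt of the project code.

```
// minimal sketch, assuming the global exposed by synaptic.min.js (or the npm package in Node)
var synaptic = (typeof window !== 'undefined') ? window.synaptic : require('synaptic');

// 2 input neurons, 6 hidden neurons, 1 output neuron
var network = new synaptic.Architect.Perceptron(2, 6, 1);

// illustrative inputs: horizontal distance and height difference to the closest gap,
// scaled into a small range before activation (the names and SCALE are assumptions)
var distanceToGap = 220, heightDifference = -40, SCALE = 500;
var output = network.activate([distanceToGap / SCALE, heightDifference / SCALE])[0];

// the decision rule described above: flap only when the output exceeds 0.5
if (output > 0.5) console.log('flap');
else console.log('do nothing');
```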
36 |
37 | ## The Main Concept of Machine Learning
38 |
39 | The main concept of machine learning implemented in this program is based on neuro-evolution, which uses evolutionary algorithms such as genetic algorithms to train artificial neural networks. Here are the main steps:
40 |
41 | 1. create a new population of 10 units (birds), each with a **random neural network**
42 | 2. let all units play the game simultaneously by using their own neural networks
43 | 3. for each unit, calculate its **fitness** to measure its quality as:
44 |
45 | ```
46 | fitness = total travelled distance - distance to the closest gap
47 | ```
48 |
49 | 
50 |
51 |
52 | 4. when all units are killed, evolve the current population into the next one using **genetic algorithm operators** (selection, crossover and mutation) as follows (see the sketch after this list):
53 |
54 | ```
55 | 1. sort the units of the current population in decreasing order by their fitness
56 | 2. select the top 4 units and mark them as the winners of the current population
57 | 3. the 4 winners are directly passed on to the next population
58 | 4. to fill the rest of the next population, create 6 offspring as follows:
59 | - 1 offspring is made by a crossover of the two best winners
60 | - 3 offspring are made by a crossover of two random winners
61 | - 2 offspring are direct copies of two random winners
62 | 5. to add some variation, apply random mutations to each offspring.
63 | ```
64 |
65 | 5. go back to step 2
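
The loop above can be summarized in a short, runnable sketch. It operates on plain weight arrays for brevity; the actual project evolves Synaptic network objects and uses the fixed offspring mix listed in step 4 (1 from the two best winners, 3 from random winners, 2 direct copies), so every name and constant below is illustrative rather than the project's exact code.

```
// hedged sketch of selection, single point crossover and mutation over flat weight arrays
function evolvePopulation(population, topUnits) {
	// step 1: sort by fitness in decreasing order
	var sorted = population.slice().sort(function(a, b){ return b.fitness - a.fitness; });

	// steps 2-3: the top units (winners) survive unchanged
	var winners = sorted.slice(0, topUnits);
	var next = winners.map(function(w){ return { weights: w.weights.slice(), fitness: 0 }; });

	// step 4: breed offspring from the winners until the population is full again
	while (next.length < population.length){
		var a = winners[Math.floor(Math.random() * winners.length)];
		var b = winners[Math.floor(Math.random() * winners.length)];

		// single point crossover: a prefix from one parent, the rest from the other
		var cut = Math.floor(Math.random() * a.weights.length);
		var child = a.weights.slice(0, cut).concat(b.weights.slice(cut));

		// step 5: apply random mutations to the offspring
		for (var i = 0; i < child.length; i++){
			if (Math.random() < 0.2) child[i] += (Math.random() - 0.5) * 0.5;
		}
		next.push({ weights: child, fitness: 0 });
	}
	return next;
}

// usage: a toy population of 10 units with 3 weights each
var pop = [];
for (var i = 0; i < 10; i++) pop.push({ weights: [Math.random(), Math.random(), Math.random()], fitness: Math.random() });
console.log(evolvePopulation(pop, 4).length); // 10
```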
66 |
67 | ## Implementation
68 |
69 | ### Requirements
70 |
71 | Since the program is written in HTML5 using the [Phaser framework](http://phaser.io/) and the [Synaptic Neural Network library](https://synaptic.juancazala.com), you need these files:
72 |
73 | - **phaser.min.js**
74 | - **synaptic.min.js**
75 |
76 | ### gameplay.js
77 | The entire game logic is implemented in the **gameplay.js** file. It consists of the following classes:
78 |
79 | - `App.Main`, the main routine with the following essential functions:
80 | - _preload()_ to preload all assets
81 | - _create()_ to create all objects and initialize a new genetic algorithm object
82 | - _update()_ to run the main loop in which the Flappy Bird game is played by the AI neural networks and the population is evolved by the genetic algorithm
83 | - _drawStatus()_ to display information about all units
84 |
85 | - `TreeGroup Class`, an extended Phaser Group class representing a moving barrier. This group contains a top and a bottom Tree sprite.
86 |
87 | - `Tree Class`, an extended Phaser Sprite class representing a Tree sprite.
88 |
89 | - `Bird Class`, an extended Phaser Sprite class representing a Bird sprite.
90 |
91 | - `Text Class`, an extended Phaser BitmapText class used for drawing text.
92 |
93 | ### genetic.js
94 |
95 | The genetic algorithm is implemented in the **genetic.js** file, which consists of the following class:
96 |
97 | - `GeneticAlgorithm Class`, the main class that handles all genetic algorithm operations. It takes two parameters: **_max_units_** to set the total number of units in the population and **_top_units_** to set the number of top units (winners) used for evolving the population. Here are its essential functions:
98 |
99 | - _reset()_ to reset genetic algorithm parameters
100 | - _createPopulation()_ to create a new population
101 | - _activateBrain()_ to activate the AI neural network of a unit and get its output action according to the inputs
102 | - _evolvePopulation()_ to evolve the population by using genetic operators (selection, crossover and mutations)
103 | - _selection()_ to select the best units from the current population
104 | - _crossOver()_ to perform a single point crossover between two parents (see the sketch below)
105 | - _mutation()_ to perform random mutations on an offspring
106 |
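
Under the assumption that the two parents are Synaptic networks exchanged through their JSON form (`toJSON()` / `Network.fromJSON()`), the sketch below shows one way such a single point crossover and random mutation can look. The cut point choice, the mutation amount and the helper names are assumptions for illustration, not the exact genetic.js implementation.

```
var synaptic = (typeof window !== 'undefined') ? window.synaptic : require('synaptic');

// single point crossover over the neuron biases of two parent networks (sketch)
function crossOver(parentA, parentB){
	var a = parentA.toJSON();
	var b = parentB.toJSON();

	// pick a cut point and swap every bias after it between the two parents
	var cut = Math.floor(Math.random() * a.neurons.length);
	for (var i = cut; i < a.neurons.length; i++){
		var tmp = a.neurons[i].bias;
		a.neurons[i].bias = b.neurons[i].bias;
		b.neurons[i].bias = tmp;
	}

	// return one of the two recombined genomes as a new network
	return synaptic.Network.fromJSON(Math.random() < 0.5 ? a : b);
}

// random mutation of connection weights (rate and nudge size are assumptions)
function mutate(network, mutateRate){
	var json = network.toJSON();
	json.connections.forEach(function(c){
		if (Math.random() < mutateRate) c.weight += (Math.random() - 0.5);
	});
	return synaptic.Network.fromJSON(json);
}

// usage: breed one mutated offspring from two random 2-6-1 parents
var Perceptron = synaptic.Architect.Perceptron;
var child = mutate(crossOver(new Perceptron(2, 6, 1), new Perceptron(2, 6, 1)), 0.2);
console.log(child.activate([0.3, 0.7])); // array with one output value between 0 and 1
```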
--------------------------------------------------------------------------------
/screenshots/flappy_01.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ssusnic/Machine-Learning-Flappy-Bird/2b03276ea25f2ea508ca6917197be4017f31eda2/screenshots/flappy_01.png
--------------------------------------------------------------------------------
/screenshots/flappy_02.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ssusnic/Machine-Learning-Flappy-Bird/2b03276ea25f2ea508ca6917197be4017f31eda2/screenshots/flappy_02.png
--------------------------------------------------------------------------------
/screenshots/flappy_03.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ssusnic/Machine-Learning-Flappy-Bird/2b03276ea25f2ea508ca6917197be4017f31eda2/screenshots/flappy_03.png
--------------------------------------------------------------------------------
/screenshots/flappy_04.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ssusnic/Machine-Learning-Flappy-Bird/2b03276ea25f2ea508ca6917197be4017f31eda2/screenshots/flappy_04.png
--------------------------------------------------------------------------------
/screenshots/flappy_05.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ssusnic/Machine-Learning-Flappy-Bird/2b03276ea25f2ea508ca6917197be4017f31eda2/screenshots/flappy_05.png
--------------------------------------------------------------------------------
/screenshots/flappy_06.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ssusnic/Machine-Learning-Flappy-Bird/2b03276ea25f2ea508ca6917197be4017f31eda2/screenshots/flappy_06.png
--------------------------------------------------------------------------------
/screenshots/flappy_07.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ssusnic/Machine-Learning-Flappy-Bird/2b03276ea25f2ea508ca6917197be4017f31eda2/screenshots/flappy_07.png
--------------------------------------------------------------------------------
/screenshots/flappy_08.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ssusnic/Machine-Learning-Flappy-Bird/2b03276ea25f2ea508ca6917197be4017f31eda2/screenshots/flappy_08.png
--------------------------------------------------------------------------------
/screenshots/flappy_09.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ssusnic/Machine-Learning-Flappy-Bird/2b03276ea25f2ea508ca6917197be4017f31eda2/screenshots/flappy_09.png
--------------------------------------------------------------------------------
/screenshots/flappy_10.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ssusnic/Machine-Learning-Flappy-Bird/2b03276ea25f2ea508ca6917197be4017f31eda2/screenshots/flappy_10.png
--------------------------------------------------------------------------------
/source/assets/fnt_chars_black.fnt:
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
/source/assets/fnt_chars_black.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ssusnic/Machine-Learning-Flappy-Bird/2b03276ea25f2ea508ca6917197be4017f31eda2/source/assets/fnt_chars_black.png
--------------------------------------------------------------------------------
/source/assets/fnt_digits_blue.fnt:
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
/source/assets/fnt_digits_blue.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ssusnic/Machine-Learning-Flappy-Bird/2b03276ea25f2ea508ca6917197be4017f31eda2/source/assets/fnt_digits_blue.png
--------------------------------------------------------------------------------
/source/assets/fnt_digits_green.fnt:
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
/source/assets/fnt_digits_green.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ssusnic/Machine-Learning-Flappy-Bird/2b03276ea25f2ea508ca6917197be4017f31eda2/source/assets/fnt_digits_green.png
--------------------------------------------------------------------------------
/source/assets/fnt_digits_red.fnt:
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
/source/assets/fnt_digits_red.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ssusnic/Machine-Learning-Flappy-Bird/2b03276ea25f2ea508ca6917197be4017f31eda2/source/assets/fnt_digits_red.png
--------------------------------------------------------------------------------
/source/assets/img_bird.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ssusnic/Machine-Learning-Flappy-Bird/2b03276ea25f2ea508ca6917197be4017f31eda2/source/assets/img_bird.png
--------------------------------------------------------------------------------
/source/assets/img_buttons.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ssusnic/Machine-Learning-Flappy-Bird/2b03276ea25f2ea508ca6917197be4017f31eda2/source/assets/img_buttons.png
--------------------------------------------------------------------------------
/source/assets/img_ground.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ssusnic/Machine-Learning-Flappy-Bird/2b03276ea25f2ea508ca6917197be4017f31eda2/source/assets/img_ground.png
--------------------------------------------------------------------------------
/source/assets/img_logo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ssusnic/Machine-Learning-Flappy-Bird/2b03276ea25f2ea508ca6917197be4017f31eda2/source/assets/img_logo.png
--------------------------------------------------------------------------------
/source/assets/img_pause.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ssusnic/Machine-Learning-Flappy-Bird/2b03276ea25f2ea508ca6917197be4017f31eda2/source/assets/img_pause.png
--------------------------------------------------------------------------------
/source/assets/img_target.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ssusnic/Machine-Learning-Flappy-Bird/2b03276ea25f2ea508ca6917197be4017f31eda2/source/assets/img_target.png
--------------------------------------------------------------------------------
/source/assets/img_tree.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ssusnic/Machine-Learning-Flappy-Bird/2b03276ea25f2ea508ca6917197be4017f31eda2/source/assets/img_tree.png
--------------------------------------------------------------------------------
/source/gameplay.js:
--------------------------------------------------------------------------------
1 | /***********************************************************************************
2 | /* Create a new Phaser Game on window load
3 | /***********************************************************************************/
4 |
5 | window.onload = function () {
6 | var game = new Phaser.Game(1280, 720, Phaser.CANVAS, 'game');
7 |
8 | game.state.add('Main', App.Main);
9 | game.state.start('Main');
10 | };
11 |
12 | /***********************************************************************************
13 | /* Main program
14 | /***********************************************************************************/
15 |
16 | var App = {};
17 |
18 | App.Main = function(game){
19 | this.STATE_INIT = 1;
20 | this.STATE_START = 2;
21 | this.STATE_PLAY = 3;
22 | this.STATE_GAMEOVER = 4;
23 |
24 | this.BARRIER_DISTANCE = 300;
25 | }
26 |
27 | App.Main.prototype = {
28 | preload : function(){
29 | this.game.load.spritesheet('imgBird', 'assets/img_bird.png', 36, 36, 20);
30 | this.game.load.spritesheet('imgTree', 'assets/img_tree.png', 90, 400, 2);
31 | this.game.load.spritesheet('imgButtons', 'assets/img_buttons.png', 110, 40, 3);
32 |
33 | this.game.load.image('imgTarget', 'assets/img_target.png');
34 | this.game.load.image('imgGround', 'assets/img_ground.png');
35 | this.game.load.image('imgPause', 'assets/img_pause.png');
36 | this.game.load.image('imgLogo', 'assets/img_logo.png');
37 |
38 | this.load.bitmapFont('fnt_chars_black', 'assets/fnt_chars_black.png', 'assets/fnt_chars_black.fnt');
39 | this.load.bitmapFont('fnt_digits_blue', 'assets/fnt_digits_blue.png', 'assets/fnt_digits_blue.fnt');
40 | this.load.bitmapFont('fnt_digits_green', 'assets/fnt_digits_green.png', 'assets/fnt_digits_green.fnt');
41 | this.load.bitmapFont('fnt_digits_red', 'assets/fnt_digits_red.png', 'assets/fnt_digits_red.fnt');
42 | },
43 |
44 | create : function(){
45 | // set scale mode to cover the entire screen
46 | this.scale.scaleMode = Phaser.ScaleManager.SHOW_ALL;
47 | this.scale.pageAlignVertically = true;
48 | this.scale.pageAlignHorizontally = true;
49 |
50 | // set a blue color for the background of the stage
51 | this.game.stage.backgroundColor = "#89bfdc";
52 |
53 | // keep game running if it loses the focus
54 | this.game.stage.disableVisibilityChange = true;
55 |
56 | // start the Phaser arcade physics engine
57 | this.game.physics.startSystem(Phaser.Physics.ARCADE);
58 |
59 | // set the gravity of the world
60 | this.game.physics.arcade.gravity.y = 1300;
61 |
62 | // create a new Genetic Algorithm with a population of 10 units which will be evolving by using 4 top units
63 | this.GA = new GeneticAlgorithm(10, 4);
64 |
65 | // create a BirdGroup which contains a number of Bird objects
66 | this.BirdGroup = this.game.add.group();
67 | for (var i = 0; i < this.GA.max_units; i++){
68 | this.BirdGroup.add(new Bird(this.game, 0, 0, i));
69 | }
70 |
71 | // create a BarrierGroup which contains a number of Tree Groups
72 | // (each Tree Group contains a top and bottom Tree object)
73 | this.BarrierGroup = this.game.add.group();
74 | for (var i = 0; i < 4; i++){
75 | new TreeGroup(this.game, this.BarrierGroup, i);
76 | }
77 |
78 | // create a Target Point sprite
79 | this.TargetPoint = this.game.add.sprite(0, 0, 'imgTarget');
80 | this.TargetPoint.anchor.setTo(0.5);
81 |
82 | // create a scrolling Ground object
83 | this.Ground = this.game.add.tileSprite(0, this.game.height-100, this.game.width-370, 100, 'imgGround');
84 | this.Ground.autoScroll(-200, 0);
85 |
86 | // create a BitmapData image for drawing head-up display (HUD) on it
87 | this.bmdStatus = this.game.make.bitmapData(370, this.game.height);
88 | this.bmdStatus.addToWorld(this.game.width - this.bmdStatus.width, 0);
89 |
90 | // create text objects displayed in the HUD header
91 | new Text(this.game, 1047, 10, "In1 In2 Out", "right", "fnt_chars_black"); // Input 1 | Input 2 | Output
92 | this.txtPopulationPrev = new Text(this.game, 1190, 10, "", "right", "fnt_chars_black"); // No. of the previous population
93 | this.txtPopulationCurr = new Text(this.game, 1270, 10, "", "right", "fnt_chars_black"); // No. of the current population
94 |
95 | // create text objects for each bird to show their info on the HUD
96 | this.txtStatusPrevGreen = []; // array of green text objects to show info of top units from the previous population
97 | this.txtStatusPrevRed = []; // array of red text objects to show info of weak units from the previous population
98 | this.txtStatusCurr = []; // array of blue text objects to show info of all units from the current population
99 |
100 | for (var i=0; i this.TargetPoint.x) isNextTarget = true;
200 |
201 | // check if a bird flies out of vertical bounds
202 | if (bird.y<0 || bird.y>610) this.onDeath(bird);
203 |
204 | // perform a proper action (flap yes/no) for this bird by activating its neural network
205 | this.GA.activateBrain(bird, this.TargetPoint);
206 | }
207 | }, this);
208 |
209 | // if any bird passed through the current target barrier then set the next target barrier
210 | if (isNextTarget){
211 | this.score++;
212 | this.targetBarrier = this.getNextBarrier(this.targetBarrier.index);
213 | }
214 |
215 | // if the first barrier went out of the left bound then restart it on the right side
216 | if (this.firstBarrier.getWorldX() < -this.firstBarrier.width){
217 | this.firstBarrier.restart(this.lastBarrier.getWorldX() + this.BARRIER_DISTANCE);
218 |
219 | this.firstBarrier = this.getNextBarrier(this.firstBarrier.index);
220 | this.lastBarrier = this.getNextBarrier(this.lastBarrier.index);
221 | }
222 |
223 | // increase the travelled distance
224 | this.distance += Math.abs(this.firstBarrier.topTree.deltaX);
225 |
226 | this.drawStatus();
227 | break;
228 |
229 | case this.STATE_GAMEOVER: // when all birds are killed evolve the population
230 | this.GA.evolvePopulation();
231 | this.GA.iteration++;
232 |
233 | this.state = this.STATE_START;
234 | break;
235 | }
236 | },
237 |
238 | drawStatus : function(){
239 | this.bmdStatus.fill(180, 180, 180); // clear bitmap data by filling it with a gray color
240 | this.bmdStatus.rect(0, 0, this.bmdStatus.width, 35, "#8e8e8e"); // draw the HUD header rect
241 |
242 | this.BirdGroup.forEach(function(bird){
243 | var y = 85 + bird.index*50;
244 |
245 | this.bmdStatus.draw(bird, 25, y-25); // draw bird's image
246 | this.bmdStatus.rect(0, y, this.bmdStatus.width, 2, "#888"); // draw line separator
247 |
248 | if (bird.alive){
249 | var brain = this.GA.Population[bird.index].toJSON();
250 | var scale = this.GA.SCALE_FACTOR*0.02;
251 |
252 | this.bmdStatus.rect(62, y, 9, -(50 - brain.neurons[0].activation/scale), "#000088"); // input 1
253 | this.bmdStatus.rect(90, y, 9, brain.neurons[1].activation/scale, "#000088"); // input 2
254 |
255 | if (brain.neurons[brain.neurons.length-1].activation<0.5) this.bmdStatus.rect(118, y, 9, -20, "#880000"); // output: flap = no
256 | else this.bmdStatus.rect(118, y, 9, -40, "#008800"); // output: flap = yes
257 | }
258 |
259 | // draw bird's fitness and score
260 | this.txtStatusCurr[bird.index].setText(bird.fitness_curr.toFixed(2)+"\n" + bird.score_curr);
261 | }, this);
262 | },
263 |
264 | getNextBarrier : function(index){
265 | return this.BarrierGroup.getAt((index + 1) % this.BarrierGroup.length);
266 | },
267 |
268 | onDeath : function(bird){
269 | this.GA.Population[bird.index].fitness = bird.fitness_curr;
270 | this.GA.Population[bird.index].score = bird.score_curr;
271 |
272 | bird.death();
273 | if (this.BirdGroup.countLiving() == 0) this.state = this.STATE_GAMEOVER;
274 | },
275 |
276 | onRestartClick : function(){
277 | this.state = this.STATE_INIT;
278 | },
279 |
280 | onMoreGamesClick : function(){
281 | window.open("http://www.askforgametask.com", "_blank");
282 | },
283 |
284 | onPauseClick : function(){
285 | this.game.paused = true;
286 | this.btnPause.input.reset();
287 | this.sprPause.revive();
288 | },
289 |
290 | onResumeClick : function(){
291 | if (this.game.paused){
292 | this.game.paused = false;
293 | this.btnPause.input.enabled = true;
294 | this.sprPause.kill();
295 | }
296 | }
297 | }
298 |
299 | /***********************************************************************************
300 | /* TreeGroup Class extends Phaser.Group
301 | /***********************************************************************************/
302 |
303 | var TreeGroup = function(game, parent, index){
304 | Phaser.Group.call(this, game, parent);
305 |
306 | this.index = index;
307 |
308 | this.topTree = new Tree(this.game, 0); // create a top Tree object
309 | this.bottomTree = new Tree(this.game, 1); // create a bottom Tree object
310 |
311 | this.add(this.topTree); // add the top Tree to this group
312 | this.add(this.bottomTree); // add the bottom Tree to this group
313 | };
314 |
315 | TreeGroup.prototype = Object.create(Phaser.Group.prototype);
316 | TreeGroup.prototype.constructor = TreeGroup;
317 |
318 | TreeGroup.prototype.restart = function(x) {
319 | this.topTree.reset(0, 0);
320 | this.bottomTree.reset(0, this.topTree.height + 130);
321 |
322 | this.x = x;
323 | this.y = this.game.rnd.integerInRange(110-this.topTree.height, -20);
324 |
325 | this.setAll('body.velocity.x', -200);
326 | };
327 |
328 | TreeGroup.prototype.getWorldX = function() {
329 | return this.topTree.world.x;
330 | };
331 |
332 | TreeGroup.prototype.getGapX = function() {
333 | return this.bottomTree.world.x + this.bottomTree.width;
334 | };
335 |
336 | TreeGroup.prototype.getGapY = function() {
337 | return this.bottomTree.world.y - 65;
338 | };
339 |
340 | /***********************************************************************************
341 | /* Tree Class extends Phaser.Sprite
342 | /***********************************************************************************/
343 |
344 | var Tree = function(game, frame) {
345 | Phaser.Sprite.call(this, game, 0, 0, 'imgTree', frame);
346 |
347 | this.game.physics.arcade.enableBody(this);
348 |
349 | this.body.allowGravity = false;
350 | this.body.immovable = true;
351 | };
352 |
353 | Tree.prototype = Object.create(Phaser.Sprite.prototype);
354 | Tree.prototype.constructor = Tree;
355 |
356 | /***********************************************************************************
357 | /* Bird Class extends Phaser.Sprite
358 | /***********************************************************************************/
359 |
360 | var Bird = function(game, x, y, index) {
361 | Phaser.Sprite.call(this, game, x, y, 'imgBird');
362 |
363 | this.index = index;
364 | this.anchor.setTo(0.5);
365 |
366 | // add flap animation and start to play it
367 | var i=index*2;
368 | this.animations.add('flap', [i, i+1]);
369 | this.animations.play('flap', 8, true);
370 |
371 | // enable physics on the bird
372 | this.game.physics.arcade.enableBody(this);
373 | };
374 |
375 | Bird.prototype = Object.create(Phaser.Sprite.prototype);
376 | Bird.prototype.constructor = Bird;
377 |
378 | Bird.prototype.restart = function(iteration){
379 | this.fitness_prev = (iteration == 1) ? 0 : this.fitness_curr;
380 | this.fitness_curr = 0;
381 |
382 | this.score_prev = (iteration == 1) ? 0: this.score_curr;
383 | this.score_curr = 0;
384 |
385 | this.alpha = 1;
386 | this.reset(150, 300 + this.index * 20);
387 | };
388 |
389 | Bird.prototype.flap = function(){
390 | this.body.velocity.y = -400;
391 | };
392 |
393 | Bird.prototype.death = function(){
394 | this.alpha = 0.5;
395 | this.kill();
396 | };
397 |
398 | /***********************************************************************************
399 | /* Text Class extends Phaser.BitmapText
400 | /***********************************************************************************/
401 |
402 | var Text = function(game, x, y, text, align, font){
403 | Phaser.BitmapText.call(this, game, x, y, font, text, 16);
404 |
405 | this.align = align;
406 |
407 | if (align == "right") this.anchor.setTo(1, 0);
408 | else this.anchor.setTo(0.5);
409 |
410 | this.game.add.existing(this);
411 | };
412 |
413 | Text.prototype = Object.create(Phaser.BitmapText.prototype);
414 | Text.prototype.constructor = Text;
415 |
416 |
--------------------------------------------------------------------------------
/source/genetic.js:
--------------------------------------------------------------------------------
1 | /***********************************************************************************
2 | /* Genetic Algorithm implementation
3 | /***********************************************************************************/
4 |
5 | var GeneticAlgorithm = function(max_units, top_units){
6 | this.max_units = max_units; // max number of units in population
7 | this.top_units = top_units; // number of top units (winners) used for evolving population
8 |
9 | if (this.max_units < this.top_units) this.top_units = this.max_units;
10 |
11 | this.Population = []; // array of all units in current population
12 |
13 | this.SCALE_FACTOR = 200; // the factor used to scale normalized input values
14 | }
15 |
16 | GeneticAlgorithm.prototype = {
17 | // resets genetic algorithm parameters
18 | reset : function(){
19 | this.iteration = 1; // current iteration number (it is equal to the current population number)
20 | this.mutateRate = 1; // initial mutation rate
21 |
22 | this.best_population = 0; // the population number of the best unit
23 | this.best_fitness = 0; // the fitness of the best unit
24 | this.best_score = 0; // the score of the best unit ever
25 | },
26 |
27 | // creates a new population
28 | createPopulation : function(){
29 | // clear any existing population
30 | this.Population.splice(0, this.Population.length);
31 |
32 | for (var i=0; i 0.5) bird.flap();
65 | },
66 |
67 | // evolves the population by performing selection, crossover and mutations on the units
68 | evolvePopulation : function(){
69 | // select the top units of the current population to get an array of winners
70 | // (they will be copied to the next population)
71 | var Winners = this.selection();
72 |
73 | if (this.mutateRate == 1 && Winners[0].fitness < 0){
74 | // If the best unit from the initial population has a negative fitness
75 | // then it means there is no any bird which reached the first barrier!
76 | // Playing as the God, we can destroy this bad population and try with another one.
77 | this.createPopulation();
78 | } else {
79 | this.mutateRate = 0.2; // else set the mutatation rate to the real value
80 | }
81 |
82 | // fill the rest of the next population with new units using crossover and mutation
83 | for (var i=this.top_units; i this.best_fitness){
119 | this.best_population = this.iteration;
120 | this.best_fitness = Winners[0].fitness;
121 | this.best_score = Winners[0].score;
122 | }
123 |
124 | // sort the units of the new population in ascending order by their index
125 | this.Population.sort(function(unitA, unitB){
126 | return unitA.index - unitB.index;
127 | });
128 | },
129 |
130 | // selects the best units from the current population
131 | selection : function(){
132 | // sort the units of the current population in descending order by their fitness
133 | var sortedPopulation = this.Population.sort(
134 | function(unitA, unitB){
135 | return unitB.fitness - unitA.fitness;
136 | }
137 | );
138 |
139 | // mark the top units as the winners!
140 | for (var i=0; i max) value = max;
200 |
201 | // normalize the clamped value
202 | return (value/max);
203 | }
204 | }
--------------------------------------------------------------------------------
/source/index.html:
--------------------------------------------------------------------------------
13 | Flappy Bird Genetic Algorithm
--------------------------------------------------------------------------------
/source/phaser.min.js:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/ssusnic/Machine-Learning-Flappy-Bird/2b03276ea25f2ea508ca6917197be4017f31eda2/source/phaser.min.js
--------------------------------------------------------------------------------
/source/synaptic.min.js:
--------------------------------------------------------------------------------
1 | /*!
2 | * The MIT License (MIT)
3 | *
4 | * Copyright (c) 2016 Juan Cazala - juancazala.com
5 | *
6 | * Permission is hereby granted, free of charge, to any person obtaining a copy
7 | * of this software and associated documentation files (the "Software"), to deal
8 | * in the Software without restriction, including without limitation the rights
9 | * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
10 | * copies of the Software, and to permit persons to whom the Software is
11 | * furnished to do so, subject to the following conditions:
12 | *
13 | * The above copyright notice and this permission notice shall be included in
14 | * all copies or substantial portions of the Software.
15 | *
16 | * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
17 | * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
18 | * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
19 | * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
20 | * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
21 | * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
22 | * THE SOFTWARE
23 | *
24 | *
25 | *
26 | * ********************************************************************************************
27 | * SYNAPTIC (v1.0.8)
28 | * ********************************************************************************************
29 | *
30 | * Synaptic is a javascript neural network library for node.js and the browser, its generalized
31 | * algorithm is architecture-free, so you can build and train basically any type of first order
32 | * or even second order neural network architectures.
33 | *
34 | * http://en.wikipedia.org/wiki/Recurrent_neural_network#Second_Order_Recurrent_Neural_Network
35 | *
36 | * The library includes a few built-in architectures like multilayer perceptrons, multilayer
37 | * long-short term memory networks (LSTM) or liquid state machines, and a trainer capable of
38 | * training any given network, and includes built-in training tasks/tests like solving an XOR,
39 | * passing a Distracted Sequence Recall test or an Embeded Reber Grammar test.
40 | *
41 | * The algorithm implemented by this library has been taken from Derek D. Monner's paper:
42 | *
43 | *
44 | * A generalized LSTM-like training algorithm for second-order recurrent neural networks
45 | * http://www.overcomplete.net/papers/nn2012.pdf
46 | *
47 | * There are references to the equations in that paper commented through the source code.
48 | *
49 | */
50 | /******/ (function(modules) { // webpackBootstrap
51 | /******/ // The module cache
52 | /******/ var installedModules = {};
53 |
54 | /******/ // The require function
55 | /******/ function __webpack_require__(moduleId) {
56 |
57 | /******/ // Check if module is in cache
58 | /******/ if(installedModules[moduleId])
59 | /******/ return installedModules[moduleId].exports;
60 |
61 | /******/ // Create a new module (and put it into the cache)
62 | /******/ var module = installedModules[moduleId] = {
63 | /******/ exports: {},
64 | /******/ id: moduleId,
65 | /******/ loaded: false
66 | /******/ };
67 |
68 | /******/ // Execute the module function
69 | /******/ modules[moduleId].call(module.exports, module, module.exports, __webpack_require__);
70 |
71 | /******/ // Flag the module as loaded
72 | /******/ module.loaded = true;
73 |
74 | /******/ // Return the exports of the module
75 | /******/ return module.exports;
76 | /******/ }
77 |
78 |
79 | /******/ // expose the modules object (__webpack_modules__)
80 | /******/ __webpack_require__.m = modules;
81 |
82 | /******/ // expose the module cache
83 | /******/ __webpack_require__.c = installedModules;
84 |
85 | /******/ // __webpack_public_path__
86 | /******/ __webpack_require__.p = "";
87 |
88 | /******/ // Load entry module and return exports
89 | /******/ return __webpack_require__(0);
90 | /******/ })
91 | /************************************************************************/
92 | /******/ ([
93 | /* 0 */
94 | /***/ function(module, exports, __webpack_require__) {
95 |
96 | var __WEBPACK_AMD_DEFINE_ARRAY__, __WEBPACK_AMD_DEFINE_RESULT__;var Synaptic = {
97 | Neuron: __webpack_require__(1),
98 | Layer: __webpack_require__(3),
99 | Network: __webpack_require__(4),
100 | Trainer: __webpack_require__(5),
101 | Architect: __webpack_require__(6)
102 | };
103 |
104 | // CommonJS & AMD
105 | if (true)
106 | {
107 | !(__WEBPACK_AMD_DEFINE_ARRAY__ = [], __WEBPACK_AMD_DEFINE_RESULT__ = function(){ return Synaptic }.apply(exports, __WEBPACK_AMD_DEFINE_ARRAY__), __WEBPACK_AMD_DEFINE_RESULT__ !== undefined && (module.exports = __WEBPACK_AMD_DEFINE_RESULT__));
108 | }
109 |
110 | // Node.js
111 | if (typeof module !== 'undefined' && module.exports)
112 | {
113 | module.exports = Synaptic;
114 | }
115 |
116 | // Browser
117 | if (typeof window == 'object')
118 | {
119 | (function(){
120 | var oldSynaptic = window['synaptic'];
121 | Synaptic.ninja = function(){
122 | window['synaptic'] = oldSynaptic;
123 | return Synaptic;
124 | };
125 | })();
126 |
127 | window['synaptic'] = Synaptic;
128 | }
129 |
130 |
131 | /***/ },
132 | /* 1 */
133 | /***/ function(module, exports, __webpack_require__) {
134 |
135 | /* WEBPACK VAR INJECTION */(function(module) {// export
136 | if (module) module.exports = Neuron;
137 |
138 | /******************************************************************************************
139 | NEURON
140 | *******************************************************************************************/
141 |
142 | function Neuron() {
143 | this.ID = Neuron.uid();
144 | this.label = null;
145 | this.connections = {
146 | inputs: {},
147 | projected: {},
148 | gated: {}
149 | };
150 | this.error = {
151 | responsibility: 0,
152 | projected: 0,
153 | gated: 0
154 | };
155 | this.trace = {
156 | elegibility: {},
157 | extended: {},
158 | influences: {}
159 | };
160 | this.state = 0;
161 | this.old = 0;
162 | this.activation = 0;
163 | this.selfconnection = new Neuron.connection(this, this, 0); // weight = 0 -> not connected
164 | this.squash = Neuron.squash.LOGISTIC;
165 | this.neighboors = {};
166 | this.bias = Math.random() * .2 - .1;
167 | }
168 |
169 | Neuron.prototype = {
170 |
171 | // activate the neuron
172 | activate: function(input) {
173 | // activation from enviroment (for input neurons)
174 | if (typeof input != 'undefined') {
175 | this.activation = input;
176 | this.derivative = 0;
177 | this.bias = 0;
178 | return this.activation;
179 | }
180 |
181 | // old state
182 | this.old = this.state;
183 |
184 | // eq. 15
185 | this.state = this.selfconnection.gain * this.selfconnection.weight *
186 | this.state + this.bias;
187 |
188 | for (var i in this.connections.inputs) {
189 | var input = this.connections.inputs[i];
190 | this.state += input.from.activation * input.weight * input.gain;
191 | }
192 |
193 | // eq. 16
194 | this.activation = this.squash(this.state);
195 |
196 | // f'(s)
197 | this.derivative = this.squash(this.state, true);
198 |
199 | // update traces
200 | var influences = [];
201 | for (var id in this.trace.extended) {
202 | // extended elegibility trace
203 | var neuron = this.neighboors[id];
204 |
205 | // if gated neuron's selfconnection is gated by this unit, the influence keeps track of the neuron's old state
206 | var influence = neuron.selfconnection.gater == this ? neuron.old : 0;
207 |
208 | // index runs over all the incoming connections to the gated neuron that are gated by this unit
209 | for (var incoming in this.trace.influences[neuron.ID]) { // captures the effect that has an input connection to this unit, on a neuron that is gated by this unit
210 | influence += this.trace.influences[neuron.ID][incoming].weight *
211 | this.trace.influences[neuron.ID][incoming].from.activation;
212 | }
213 | influences[neuron.ID] = influence;
214 | }
215 |
216 | for (var i in this.connections.inputs) {
217 | var input = this.connections.inputs[i];
218 |
219 | // elegibility trace - Eq. 17
220 | this.trace.elegibility[input.ID] = this.selfconnection.gain * this.selfconnection
221 | .weight * this.trace.elegibility[input.ID] + input.gain * input.from
222 | .activation;
223 |
224 | for (var id in this.trace.extended) {
225 | // extended elegibility trace
226 | var xtrace = this.trace.extended[id];
227 | var neuron = this.neighboors[id];
228 | var influence = influences[neuron.ID];
229 |
230 | // eq. 18
231 | xtrace[input.ID] = neuron.selfconnection.gain * neuron.selfconnection
232 | .weight * xtrace[input.ID] + this.derivative * this.trace.elegibility[
233 | input.ID] * influence;
234 | }
235 | }
236 |
237 | // update gated connection's gains
238 | for (var connection in this.connections.gated) {
239 | this.connections.gated[connection].gain = this.activation;
240 | }
241 |
242 | return this.activation;
243 | },
244 |
245 | // back-propagate the error
246 | propagate: function(rate, target) {
247 | // error accumulator
248 | var error = 0;
249 |
250 | // whether or not this neuron is in the output layer
251 | var isOutput = typeof target != 'undefined';
252 |
253 | // output neurons get their error from the enviroment
254 | if (isOutput)
255 | this.error.responsibility = this.error.projected = target - this.activation; // Eq. 10
256 |
257 | else // the rest of the neuron compute their error responsibilities by backpropagation
258 | {
259 | // error responsibilities from all the connections projected from this neuron
260 | for (var id in this.connections.projected) {
261 | var connection = this.connections.projected[id];
262 | var neuron = connection.to;
263 | // Eq. 21
264 | error += neuron.error.responsibility * connection.gain * connection.weight;
265 | }
266 |
267 | // projected error responsibility
268 | this.error.projected = this.derivative * error;
269 |
270 | error = 0;
271 | // error responsibilities from all the connections gated by this neuron
272 | for (var id in this.trace.extended) {
273 | var neuron = this.neighboors[id]; // gated neuron
274 | var influence = neuron.selfconnection.gater == this ? neuron.old : 0; // if gated neuron's selfconnection is gated by this neuron
275 |
276 | // index runs over all the connections to the gated neuron that are gated by this neuron
277 | for (var input in this.trace.influences[id]) { // captures the effect that the input connection of this neuron have, on a neuron which its input/s is/are gated by this neuron
278 | influence += this.trace.influences[id][input].weight * this.trace.influences[
279 | neuron.ID][input].from.activation;
280 | }
281 | // eq. 22
282 | error += neuron.error.responsibility * influence;
283 | }
284 |
285 | // gated error responsibility
286 | this.error.gated = this.derivative * error;
287 |
288 | // error responsibility - Eq. 23
289 | this.error.responsibility = this.error.projected + this.error.gated;
290 | }
291 |
292 | // learning rate
293 | rate = rate || .1;
294 |
295 | // adjust all the neuron's incoming connections
296 | for (var id in this.connections.inputs) {
297 | var input = this.connections.inputs[id];
298 |
299 | // Eq. 24
300 | var gradient = this.error.projected * this.trace.elegibility[input.ID];
301 | for (var id in this.trace.extended) {
302 | var neuron = this.neighboors[id];
303 | gradient += neuron.error.responsibility * this.trace.extended[
304 | neuron.ID][input.ID];
305 | }
306 | input.weight += rate * gradient; // adjust weights - aka learn
307 | }
308 |
309 | // adjust bias
310 | this.bias += rate * this.error.responsibility;
311 | },
312 |
313 | project: function(neuron, weight) {
314 | // self-connection
315 | if (neuron == this) {
316 | this.selfconnection.weight = 1;
317 | return this.selfconnection;
318 | }
319 |
320 | // check if connection already exists
321 | var connected = this.connected(neuron);
322 | if (connected && connected.type == "projected") {
323 | // update connection
324 | if (typeof weight != 'undefined')
325 | connected.connection.weight = weight;
326 | // return existing connection
327 | return connected.connection;
328 | } else {
329 | // create a new connection
330 | var connection = new Neuron.connection(this, neuron, weight);
331 | }
332 |
333 | // reference all the connections and traces
334 | this.connections.projected[connection.ID] = connection;
335 | this.neighboors[neuron.ID] = neuron;
336 | neuron.connections.inputs[connection.ID] = connection;
337 | neuron.trace.elegibility[connection.ID] = 0;
338 |
339 | for (var id in neuron.trace.extended) {
340 | var trace = neuron.trace.extended[id];
341 | trace[connection.ID] = 0;
342 | }
343 |
344 | return connection;
345 | },
346 |
347 | gate: function(connection) {
348 | // add connection to gated list
349 | this.connections.gated[connection.ID] = connection;
350 |
351 | var neuron = connection.to;
352 | if (!(neuron.ID in this.trace.extended)) {
353 | // extended trace
354 | this.neighboors[neuron.ID] = neuron;
355 | var xtrace = this.trace.extended[neuron.ID] = {};
356 | for (var id in this.connections.inputs) {
357 | var input = this.connections.inputs[id];
358 | xtrace[input.ID] = 0;
359 | }
360 | }
361 |
362 | // keep track
363 | if (neuron.ID in this.trace.influences)
364 | this.trace.influences[neuron.ID].push(connection);
365 | else
366 | this.trace.influences[neuron.ID] = [connection];
367 |
368 | // set gater
369 | connection.gater = this;
370 | },
371 |
372 | // returns true or false whether the neuron is self-connected or not
373 | selfconnected: function() {
374 | return this.selfconnection.weight !== 0;
375 | },
376 |
377 | // returns true or false whether the neuron is connected to another neuron (parameter)
378 | connected: function(neuron) {
379 | var result = {
380 | type: null,
381 | connection: false
382 | };
383 |
384 | if (this == neuron) {
385 | if (this.selfconnected()) {
386 | result.type = 'selfconnection';
387 | result.connection = this.selfconnection;
388 | return result;
389 | } else
390 | return false;
391 | }
392 |
393 | for (var type in this.connections) {
394 | for (var connection in this.connections[type]) {
395 | var connection = this.connections[type][connection];
396 | if (connection.to == neuron) {
397 | result.type = type;
398 | result.connection = connection;
399 | return result;
400 | } else if (connection.from == neuron) {
401 | result.type = type;
402 | result.connection = connection;
403 | return result;
404 | }
405 | }
406 | }
407 |
408 | return false;
409 | },
410 |
411 | // clears all the traces (the neuron forgets it's context, but the connections remain intact)
412 | clear: function() {
413 |
414 | for (var trace in this.trace.elegibility)
415 | this.trace.elegibility[trace] = 0;
416 |
417 | for (var trace in this.trace.extended)
418 | for (var extended in this.trace.extended[trace])
419 | this.trace.extended[trace][extended] = 0;
420 |
421 | this.error.responsibility = this.error.projected = this.error.gated = 0;
422 | },
423 |
424 | // all the connections are randomized and the traces are cleared
425 | reset: function() {
426 | this.clear();
427 |
428 | for (var type in this.connections)
429 | for (var connection in this.connections[type])
430 | this.connections[type][connection].weight = Math.random() * .2 - .1;
431 | this.bias = Math.random() * .2 - .1;
432 |
433 | this.old = this.state = this.activation = 0;
434 | },
435 |
436 | // hardcodes the behaviour of the neuron into an optimized function
437 | optimize: function(optimized, layer) {
438 |
439 | optimized = optimized || {};
440 | var store_activation = [];
441 | var store_trace = [];
442 | var store_propagation = [];
443 | var varID = optimized.memory || 0;
444 | var neurons = optimized.neurons || 1;
445 | var inputs = optimized.inputs || [];
446 | var targets = optimized.targets || [];
447 | var outputs = optimized.outputs || [];
448 | var variables = optimized.variables || {};
449 | var activation_sentences = optimized.activation_sentences || [];
450 | var trace_sentences = optimized.trace_sentences || [];
451 | var propagation_sentences = optimized.propagation_sentences || [];
452 | var layers = optimized.layers || { __count: 0, __neuron: 0 };
453 |
454 | // allocate sentences
455 | var allocate = function(store){
456 | var allocated = layer in layers && store[layers.__count];
457 | if (!allocated)
458 | {
459 | layers.__count = store.push([]) - 1;
460 | layers[layer] = layers.__count;
461 | }
462 | };
463 | allocate(activation_sentences);
464 | allocate(trace_sentences);
465 | allocate(propagation_sentences);
466 | var currentLayer = layers.__count;
467 |
468 | // get/reserve space in memory by creating a unique ID for a variablel
469 | var getVar = function() {
470 | var args = Array.prototype.slice.call(arguments);
471 |
472 | if (args.length == 1) {
473 | if (args[0] == 'target') {
474 | var id = 'target_' + targets.length;
475 | targets.push(varID);
476 | } else
477 | var id = args[0];
478 | if (id in variables)
479 | return variables[id];
480 | return variables[id] = {
481 | value: 0,
482 | id: varID++
483 | };
484 | } else {
485 | var extended = args.length > 2;
486 | if (extended)
487 | var value = args.pop();
488 |
489 | var unit = args.shift();
490 | var prop = args.pop();
491 |
492 | if (!extended)
493 | var value = unit[prop];
494 |
495 | var id = prop + '_';
496 | for (var property in args)
497 | id += args[property] + '_';
498 | id += unit.ID;
499 | if (id in variables)
500 | return variables[id];
501 |
502 | return variables[id] = {
503 | value: value,
504 | id: varID++
505 | };
506 | }
507 | };
508 |
509 | // build sentence
510 | var buildSentence = function() {
511 | var args = Array.prototype.slice.call(arguments);
512 | var store = args.pop();
513 | var sentence = "";
514 | for (var i in args)
515 | if (typeof args[i] == 'string')
516 | sentence += args[i];
517 | else
518 | sentence += 'F[' + args[i].id + ']';
519 |
520 | store.push(sentence + ';');
521 | };
522 |
523 | // helper to check if an object is empty
524 | var isEmpty = function(obj) {
525 | for (var prop in obj) {
526 | if (obj.hasOwnProperty(prop))
527 | return false;
528 | }
529 | return true;
530 | };
531 |
532 | // characteristics of the neuron
533 | var noProjections = isEmpty(this.connections.projected);
534 | var noGates = isEmpty(this.connections.gated);
535 | var isInput = layer == 'input' ? true : isEmpty(this.connections.inputs);
536 | var isOutput = layer == 'output' ? true : noProjections && noGates;
537 |
538 | // optimize neuron's behaviour
539 | var rate = getVar('rate');
540 | var activation = getVar(this, 'activation');
541 | if (isInput)
542 | inputs.push(activation.id);
543 | else {
544 | activation_sentences[currentLayer].push(store_activation);
545 | trace_sentences[currentLayer].push(store_trace);
546 | propagation_sentences[currentLayer].push(store_propagation);
547 | var old = getVar(this, 'old');
548 | var state = getVar(this, 'state');
549 | var bias = getVar(this, 'bias');
550 | if (this.selfconnection.gater)
551 | var self_gain = getVar(this.selfconnection, 'gain');
552 | if (this.selfconnected())
553 | var self_weight = getVar(this.selfconnection, 'weight');
554 | buildSentence(old, ' = ', state, store_activation);
555 | if (this.selfconnected())
556 | if (this.selfconnection.gater)
557 | buildSentence(state, ' = ', self_gain, ' * ', self_weight, ' * ',
558 | state, ' + ', bias, store_activation);
559 | else
560 | buildSentence(state, ' = ', self_weight, ' * ', state, ' + ',
561 | bias, store_activation);
562 | else
563 | buildSentence(state, ' = ', bias, store_activation);
564 | for (var i in this.connections.inputs) {
565 | var input = this.connections.inputs[i];
566 | var input_activation = getVar(input.from, 'activation');
567 | var input_weight = getVar(input, 'weight');
568 | if (input.gater)
569 | var input_gain = getVar(input, 'gain');
570 | if (this.connections.inputs[i].gater)
571 | buildSentence(state, ' += ', input_activation, ' * ',
572 | input_weight, ' * ', input_gain, store_activation);
573 | else
574 | buildSentence(state, ' += ', input_activation, ' * ',
575 | input_weight, store_activation);
576 | }
577 | var derivative = getVar(this, 'derivative');
578 | switch (this.squash) {
579 | case Neuron.squash.LOGISTIC:
580 | buildSentence(activation, ' = (1 / (1 + Math.exp(-', state, ')))',
581 | store_activation);
582 | buildSentence(derivative, ' = ', activation, ' * (1 - ',
583 | activation, ')', store_activation);
584 | break;
585 | case Neuron.squash.TANH:
586 | var eP = getVar('aux');
587 | var eN = getVar('aux_2');
588 | buildSentence(eP, ' = Math.exp(', state, ')', store_activation);
589 | buildSentence(eN, ' = 1 / ', eP, store_activation);
590 | buildSentence(activation, ' = (', eP, ' - ', eN, ') / (', eP, ' + ', eN, ')', store_activation);
591 | buildSentence(derivative, ' = 1 - (', activation, ' * ', activation, ')', store_activation);
592 | break;
593 | case Neuron.squash.IDENTITY:
594 | buildSentence(activation, ' = ', state, store_activation);
595 | buildSentence(derivative, ' = 1', store_activation);
596 | break;
597 | case Neuron.squash.HLIM:
598 | buildSentence(activation, ' = +(', state, ' > 0)', store_activation);
599 | buildSentence(derivative, ' = 1', store_activation);
600 | case Neuron.squash.RELU:
601 | buildSentence(activation, ' = ', state, ' > 0 ? ', state, ' : 0', store_activation);
602 | buildSentence(derivative, ' = ', state, ' > 0 ? 1 : 0', store_activation);
603 | break;
604 | }
605 |
606 | for (var id in this.trace.extended) {
607 | // calculate extended elegibility traces in advance
608 |
609 | var neuron = this.neighboors[id];
610 | var influence = getVar('influences[' + neuron.ID + ']');
611 | var neuron_old = getVar(neuron, 'old');
612 | var initialized = false;
613 | if (neuron.selfconnection.gater == this)
614 | {
615 | buildSentence(influence, ' = ', neuron_old, store_trace);
616 | initialized = true;
617 | }
618 | for (var incoming in this.trace.influences[neuron.ID]) {
619 | var incoming_weight = getVar(this.trace.influences[neuron.ID]
620 | [incoming], 'weight');
621 | var incoming_activation = getVar(this.trace.influences[neuron.ID]
622 | [incoming].from, 'activation');
623 |
624 | if (initialized)
625 | buildSentence(influence, ' += ', incoming_weight, ' * ', incoming_activation, store_trace);
626 | else {
627 | buildSentence(influence, ' = ', incoming_weight, ' * ', incoming_activation, store_trace);
628 | initialized = true;
629 | }
630 | }
631 | }
632 |
633 | for (var i in this.connections.inputs) {
634 | var input = this.connections.inputs[i];
635 | if (input.gater)
636 | var input_gain = getVar(input, 'gain');
637 | var input_activation = getVar(input.from, 'activation');
638 | var trace = getVar(this, 'trace', 'elegibility', input.ID, this.trace
639 | .elegibility[input.ID]);
640 | if (this.selfconnected()) {
641 | if (this.selfconnection.gater) {
642 | if (input.gater)
643 | buildSentence(trace, ' = ', self_gain, ' * ', self_weight,
644 | ' * ', trace, ' + ', input_gain, ' * ', input_activation,
645 | store_trace);
646 | else
647 | buildSentence(trace, ' = ', self_gain, ' * ', self_weight,
648 | ' * ', trace, ' + ', input_activation, store_trace);
649 | } else {
650 | if (input.gater)
651 | buildSentence(trace, ' = ', self_weight, ' * ', trace, ' + ',
652 | input_gain, ' * ', input_activation, store_trace);
653 | else
654 | buildSentence(trace, ' = ', self_weight, ' * ', trace, ' + ',
655 | input_activation, store_trace);
656 | }
657 | } else {
658 | if (input.gater)
659 | buildSentence(trace, ' = ', input_gain, ' * ', input_activation,
660 | store_trace);
661 | else
662 | buildSentence(trace, ' = ', input_activation, store_trace);
663 | }
664 | for (var id in this.trace.extended) {
665 | // extended elegibility trace
666 | var neuron = this.neighboors[id];
667 | var influence = getVar('influences[' + neuron.ID + ']');
668 |
669 | var trace = getVar(this, 'trace', 'elegibility', input.ID, this.trace
670 | .elegibility[input.ID]);
671 | var xtrace = getVar(this, 'trace', 'extended', neuron.ID, input.ID,
672 | this.trace.extended[neuron.ID][input.ID]);
673 | if (neuron.selfconnected())
674 | var neuron_self_weight = getVar(neuron.selfconnection, 'weight');
675 | if (neuron.selfconnection.gater)
676 | var neuron_self_gain = getVar(neuron.selfconnection, 'gain');
677 | if (neuron.selfconnected())
678 | if (neuron.selfconnection.gater)
679 | buildSentence(xtrace, ' = ', neuron_self_gain, ' * ',
680 | neuron_self_weight, ' * ', xtrace, ' + ', derivative, ' * ',
681 | trace, ' * ', influence, store_trace);
682 | else
683 | buildSentence(xtrace, ' = ', neuron_self_weight, ' * ',
684 | xtrace, ' + ', derivative, ' * ', trace, ' * ',
685 | influence, store_trace);
686 | else
687 | buildSentence(xtrace, ' = ', derivative, ' * ', trace, ' * ',
688 | influence, store_trace);
689 | }
690 | }
691 | for (var connection in this.connections.gated) {
692 | var gated_gain = getVar(this.connections.gated[connection], 'gain');
693 | buildSentence(gated_gain, ' = ', activation, store_activation);
694 | }
695 | }
696 | if (!isInput) {
697 | var responsibility = getVar(this, 'error', 'responsibility', this.error
698 | .responsibility);
699 | if (isOutput) {
700 | var target = getVar('target');
701 | buildSentence(responsibility, ' = ', target, ' - ', activation,
702 | store_propagation);
703 | for (var id in this.connections.inputs) {
704 | var input = this.connections.inputs[id];
705 | var trace = getVar(this, 'trace', 'elegibility', input.ID, this.trace
706 | .elegibility[input.ID]);
707 | var input_weight = getVar(input, 'weight');
708 | buildSentence(input_weight, ' += ', rate, ' * (', responsibility,
709 | ' * ', trace, ')', store_propagation);
710 | }
711 | outputs.push(activation.id);
712 | } else {
713 | if (!noProjections && !noGates) {
714 | var error = getVar('aux');
715 | for (var id in this.connections.projected) {
716 | var connection = this.connections.projected[id];
717 | var neuron = connection.to;
718 | var connection_weight = getVar(connection, 'weight');
719 | var neuron_responsibility = getVar(neuron, 'error',
720 | 'responsibility', neuron.error.responsibility);
721 | if (connection.gater) {
722 | var connection_gain = getVar(connection, 'gain');
723 | buildSentence(error, ' += ', neuron_responsibility, ' * ',
724 | connection_gain, ' * ', connection_weight,
725 | store_propagation);
726 | } else
727 | buildSentence(error, ' += ', neuron_responsibility, ' * ',
728 | connection_weight, store_propagation);
729 | }
730 | var projected = getVar(this, 'error', 'projected', this.error.projected);
731 | buildSentence(projected, ' = ', derivative, ' * ', error,
732 | store_propagation);
733 | buildSentence(error, ' = 0', store_propagation);
734 | for (var id in this.trace.extended) {
735 | var neuron = this.neighboors[id];
736 | var influence = getVar('aux_2');
737 | var neuron_old = getVar(neuron, 'old');
738 | if (neuron.selfconnection.gater == this)
739 | buildSentence(influence, ' = ', neuron_old, store_propagation);
740 | else
741 | buildSentence(influence, ' = 0', store_propagation);
742 | for (var input in this.trace.influences[neuron.ID]) {
743 | var connection = this.trace.influences[neuron.ID][input];
744 | var connection_weight = getVar(connection, 'weight');
745 | var neuron_activation = getVar(connection.from, 'activation');
746 | buildSentence(influence, ' += ', connection_weight, ' * ',
747 | neuron_activation, store_propagation);
748 | }
749 | var neuron_responsibility = getVar(neuron, 'error',
750 | 'responsibility', neuron.error.responsibility);
751 | buildSentence(error, ' += ', neuron_responsibility, ' * ',
752 | influence, store_propagation);
753 | }
754 | var gated = getVar(this, 'error', 'gated', this.error.gated);
755 | buildSentence(gated, ' = ', derivative, ' * ', error,
756 | store_propagation);
757 | buildSentence(responsibility, ' = ', projected, ' + ', gated,
758 | store_propagation);
759 | for (var id in this.connections.inputs) {
760 | var input = this.connections.inputs[id];
761 | var gradient = getVar('aux');
762 | var trace = getVar(this, 'trace', 'elegibility', input.ID, this
763 | .trace.elegibility[input.ID]);
764 | buildSentence(gradient, ' = ', projected, ' * ', trace,
765 | store_propagation);
766 | for (var id in this.trace.extended) {
767 | var neuron = this.neighboors[id];
768 | var neuron_responsibility = getVar(neuron, 'error',
769 | 'responsibility', neuron.error.responsibility);
770 | var xtrace = getVar(this, 'trace', 'extended', neuron.ID,
771 | input.ID, this.trace.extended[neuron.ID][input.ID]);
772 | buildSentence(gradient, ' += ', neuron_responsibility, ' * ',
773 | xtrace, store_propagation);
774 | }
775 | var input_weight = getVar(input, 'weight');
776 | buildSentence(input_weight, ' += ', rate, ' * ', gradient,
777 | store_propagation);
778 | }
779 |
780 | } else if (noGates) {
781 | buildSentence(responsibility, ' = 0', store_propagation);
782 | for (var id in this.connections.projected) {
783 | var connection = this.connections.projected[id];
784 | var neuron = connection.to;
785 | var connection_weight = getVar(connection, 'weight');
786 | var neuron_responsibility = getVar(neuron, 'error',
787 | 'responsibility', neuron.error.responsibility);
788 | if (connection.gater) {
789 | var connection_gain = getVar(connection, 'gain');
790 | buildSentence(responsibility, ' += ', neuron_responsibility,
791 | ' * ', connection_gain, ' * ', connection_weight,
792 | store_propagation);
793 | } else
794 | buildSentence(responsibility, ' += ', neuron_responsibility,
795 | ' * ', connection_weight, store_propagation);
796 | }
797 | buildSentence(responsibility, ' *= ', derivative,
798 | store_propagation);
799 | for (var id in this.connections.inputs) {
800 | var input = this.connections.inputs[id];
801 | var trace = getVar(this, 'trace', 'elegibility', input.ID, this
802 | .trace.elegibility[input.ID]);
803 | var input_weight = getVar(input, 'weight');
804 | buildSentence(input_weight, ' += ', rate, ' * (',
805 | responsibility, ' * ', trace, ')', store_propagation);
806 | }
807 | } else if (noProjections) {
808 | buildSentence(responsibility, ' = 0', store_propagation);
809 | for (var id in this.trace.extended) {
810 | var neuron = this.neighboors[id];
811 | var influence = getVar('aux');
812 | var neuron_old = getVar(neuron, 'old');
813 | if (neuron.selfconnection.gater == this)
814 | buildSentence(influence, ' = ', neuron_old, store_propagation);
815 | else
816 | buildSentence(influence, ' = 0', store_propagation);
817 | for (var input in this.trace.influences[neuron.ID]) {
818 | var connection = this.trace.influences[neuron.ID][input];
819 | var connection_weight = getVar(connection, 'weight');
820 | var neuron_activation = getVar(connection.from, 'activation');
821 | buildSentence(influence, ' += ', connection_weight, ' * ',
822 | neuron_activation, store_propagation);
823 | }
824 | var neuron_responsibility = getVar(neuron, 'error',
825 | 'responsibility', neuron.error.responsibility);
826 | buildSentence(responsibility, ' += ', neuron_responsibility,
827 | ' * ', influence, store_propagation);
828 | }
829 | buildSentence(responsibility, ' *= ', derivative,
830 | store_propagation);
831 | for (var id in this.connections.inputs) {
832 | var input = this.connections.inputs[id];
833 | var gradient = getVar('aux');
834 | buildSentence(gradient, ' = 0', store_propagation);
835 | for (var id in this.trace.extended) {
836 | var neuron = this.neighboors[id];
837 | var neuron_responsibility = getVar(neuron, 'error',
838 | 'responsibility', neuron.error.responsibility);
839 | var xtrace = getVar(this, 'trace', 'extended', neuron.ID,
840 | input.ID, this.trace.extended[neuron.ID][input.ID]);
841 | buildSentence(gradient, ' += ', neuron_responsibility, ' * ',
842 | xtrace, store_propagation);
843 | }
844 | var input_weight = getVar(input, 'weight');
845 | buildSentence(input_weight, ' += ', rate, ' * ', gradient,
846 | store_propagation);
847 | }
848 | }
849 | }
850 | buildSentence(bias, ' += ', rate, ' * ', responsibility,
851 | store_propagation);
852 | }
853 | return {
854 | memory: varID,
855 | neurons: neurons + 1,
856 | inputs: inputs,
857 | outputs: outputs,
858 | targets: targets,
859 | variables: variables,
860 | activation_sentences: activation_sentences,
861 | trace_sentences: trace_sentences,
862 | propagation_sentences: propagation_sentences,
863 | layers: layers
864 | }
865 | }
866 | }
867 |
868 |
869 | // represents a connection between two neurons
870 | Neuron.connection = function Connection(from, to, weight) {
871 |
872 | if (!from || !to)
873 | throw new Error("Connection Error: Invalid neurons");
874 |
875 | this.ID = Neuron.connection.uid();
876 | this.from = from;
877 | this.to = to;
878 | this.weight = typeof weight == 'undefined' ? Math.random() * .2 - .1 :
879 | weight;
880 | this.gain = 1;
881 | this.gater = null;
882 | }
883 |
884 |
885 | // squashing functions
886 | Neuron.squash = {};
887 |
888 | // eq. 5 & 5'
889 | Neuron.squash.LOGISTIC = function(x, derivate) {
890 | if (!derivate)
891 | return 1 / (1 + Math.exp(-x));
892 | var fx = Neuron.squash.LOGISTIC(x);
893 | return fx * (1 - fx);
894 | };
895 | Neuron.squash.TANH = function(x, derivate) {
896 | if (derivate)
897 | return 1 - Math.pow(Neuron.squash.TANH(x), 2);
898 | var eP = Math.exp(x);
899 | var eN = 1 / eP;
900 | return (eP - eN) / (eP + eN);
901 | };
902 | Neuron.squash.IDENTITY = function(x, derivate) {
903 | return derivate ? 1 : x;
904 | };
905 | Neuron.squash.HLIM = function(x, derivate) {
906 | return derivate ? 1 : x > 0 ? 1 : 0;
907 | };
908 | Neuron.squash.RELU = function(x, derivate) {
909 | if (derivate)
910 | return x > 0 ? 1 : 0;
911 | return x > 0 ? x : 0;
912 | };
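// Illustrative sketch (not part of the library source): sample values of the squashing
// functions above, assuming a pre-activation state of x = 0.5 (or the value shown):
//
//   Neuron.squash.LOGISTIC(0.5)        // ~0.6225  = 1 / (1 + e^-0.5)
//   Neuron.squash.LOGISTIC(0.5, true)  // ~0.2350  = f(x) * (1 - f(x))
//   Neuron.squash.TANH(0.5)            // ~0.4621
//   Neuron.squash.RELU(-1)             // 0
//   Neuron.squash.RELU(2)              // 2
//   Neuron.squash.HLIM(2)              // 1 (hard-limit / step function)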
913 |
914 | // unique ID's
915 | (function() {
916 | var neurons = 0;
917 | var connections = 0;
918 | Neuron.uid = function() {
919 | return neurons++;
920 | }
921 | Neuron.connection.uid = function() {
922 | return connections++;
923 | }
924 | Neuron.quantity = function() {
925 | return {
926 | neurons: neurons,
927 | connections: connections
928 | }
929 | }
930 | })();
931 |
932 | /* WEBPACK VAR INJECTION */}.call(exports, __webpack_require__(2)(module)))
933 |
934 | /***/ },
935 | /* 2 */
936 | /***/ function(module, exports) {
937 |
938 | module.exports = function(module) {
939 | if(!module.webpackPolyfill) {
940 | module.deprecate = function() {};
941 | module.paths = [];
942 | // module.parent = undefined by default
943 | module.children = [];
944 | module.webpackPolyfill = 1;
945 | }
946 | return module;
947 | }
948 |
949 |
950 | /***/ },
951 | /* 3 */
952 | /***/ function(module, exports, __webpack_require__) {
953 |
954 | /* WEBPACK VAR INJECTION */(function(module) {// export
955 | if (module) module.exports = Layer;
956 |
957 | // import
958 | var Neuron = __webpack_require__(1)
959 | , Network = __webpack_require__(4)
960 |
961 | /*******************************************************************************************
962 | LAYER
963 | *******************************************************************************************/
964 |
965 | function Layer(size, label) {
966 | this.size = size | 0;
967 | this.list = [];
968 | this.label = label || null;
969 | this.connectedTo = [];
970 |
971 | while (size--) {
972 | var neuron = new Neuron();
973 | this.list.push(neuron);
974 | }
975 | }
976 |
977 | Layer.prototype = {
978 |
979 | // activates all the neurons in the layer
980 | activate: function(input) {
981 |
982 | var activations = [];
983 |
984 | if (typeof input != 'undefined') {
985 | if (input.length != this.size)
986 | throw new Error("INPUT size and LAYER size must be the same to activate!");
987 |
988 | for (var id in this.list) {
989 | var neuron = this.list[id];
990 | var activation = neuron.activate(input[id]);
991 | activations.push(activation);
992 | }
993 | } else {
994 | for (var id in this.list) {
995 | var neuron = this.list[id];
996 | var activation = neuron.activate();
997 | activations.push(activation);
998 | }
999 | }
1000 | return activations;
1001 | },
1002 |
1003 | // propagates the error on all the neurons of the layer
1004 | propagate: function(rate, target) {
1005 |
1006 | if (typeof target != 'undefined') {
1007 | if (target.length != this.size)
1008 | throw new Error("TARGET size and LAYER size must be the same to propagate!");
1009 |
1010 | for (var id = this.list.length - 1; id >= 0; id--) {
1011 | var neuron = this.list[id];
1012 | neuron.propagate(rate, target[id]);
1013 | }
1014 | } else {
1015 | for (var id = this.list.length - 1; id >= 0; id--) {
1016 | var neuron = this.list[id];
1017 | neuron.propagate(rate);
1018 | }
1019 | }
1020 | },
1021 |
1022 | // projects a connection from this layer to another one
1023 | project: function(layer, type, weights) {
1024 |
1025 | if (layer instanceof Network)
1026 | layer = layer.layers.input;
1027 |
1028 | if (layer instanceof Layer) {
1029 | if (!this.connected(layer))
1030 | return new Layer.connection(this, layer, type, weights);
1031 | } else
1032 | throw new Error("Invalid argument, you can only project connections to LAYERS and NETWORKS!");
1033 |
1034 |
1035 | },
1036 |
1037 | // gates a connection between two layers
1038 | gate: function(connection, type) {
1039 |
1040 | if (type == Layer.gateType.INPUT) {
1041 | if (connection.to.size != this.size)
1042 | throw new Error("GATER layer and CONNECTION.TO layer must be the same size in order to gate!");
1043 |
1044 | for (var id in connection.to.list) {
1045 | var neuron = connection.to.list[id];
1046 | var gater = this.list[id];
1047 | for (var input in neuron.connections.inputs) {
1048 | var gated = neuron.connections.inputs[input];
1049 | if (gated.ID in connection.connections)
1050 | gater.gate(gated);
1051 | }
1052 | }
1053 | } else if (type == Layer.gateType.OUTPUT) {
1054 | if (connection.from.size != this.size)
1055 | throw new Error("GATER layer and CONNECTION.FROM layer must be the same size in order to gate!");
1056 |
1057 | for (var id in connection.from.list) {
1058 | var neuron = connection.from.list[id];
1059 | var gater = this.list[id];
1060 | for (var projected in neuron.connections.projected) {
1061 | var gated = neuron.connections.projected[projected];
1062 | if (gated.ID in connection.connections)
1063 | gater.gate(gated);
1064 | }
1065 | }
1066 | } else if (type == Layer.gateType.ONE_TO_ONE) {
1067 | if (connection.size != this.size)
1068 | throw new Error("The number of GATER UNITS must be the same as the number of CONNECTIONS to gate!");
1069 |
1070 | for (var id in connection.list) {
1071 | var gater = this.list[id];
1072 | var gated = connection.list[id];
1073 | gater.gate(gated);
1074 | }
1075 | }
1076 | connection.gatedfrom.push({layer: this, type: type});
1077 | },
1078 |
1079 | // true or false whether the whole layer is self-connected or not
1080 | selfconnected: function() {
1081 |
1082 | for (var id in this.list) {
1083 | var neuron = this.list[id];
1084 | if (!neuron.selfconnected())
1085 | return false;
1086 | }
1087 | return true;
1088 | },
1089 |
1090 | // true or false whether this layer is connected to another layer (the parameter) or not
1091 | connected: function(layer) {
1092 | // Check if ALL to ALL connection
1093 | var connections = 0;
1094 | for (var here in this.list) {
1095 | for (var there in layer.list) {
1096 | var from = this.list[here];
1097 | var to = layer.list[there];
1098 | var connected = from.connected(to);
1099 | if (connected.type == 'projected')
1100 | connections++;
1101 | }
1102 | }
1103 | if (connections == this.size * layer.size)
1104 | return Layer.connectionType.ALL_TO_ALL;
1105 |
1106 | // Check if ONE to ONE connection
1107 | connections = 0;
1108 | for (var neuron in this.list) {
1109 | var from = this.list[neuron];
1110 | var to = layer.list[neuron];
1111 | var connected = from.connected(to);
1112 | if (connected.type == 'projected')
1113 | connections++;
1114 | }
1115 | if (connections == this.size)
1116 | return Layer.connectionType.ONE_TO_ONE;
1117 | },
1118 |
1119 | // clears all the neurons in the layer
1120 | clear: function() {
1121 | for (var id in this.list) {
1122 | var neuron = this.list[id];
1123 | neuron.clear();
1124 | }
1125 | },
1126 |
1127 | // resets all the neurons in the layer
1128 | reset: function() {
1129 | for (var id in this.list) {
1130 | var neuron = this.list[id];
1131 | neuron.reset();
1132 | }
1133 | },
1134 |
1135 | // returns all the neurons in the layer (array)
1136 | neurons: function() {
1137 | return this.list;
1138 | },
1139 |
1140 | // adds a neuron to the layer
1141 | add: function(neuron) {
1142 | this.neurons[neuron.ID] = neuron || new Neuron();
1143 | this.list.push(neuron);
1144 | this.size++;
1145 | },
1146 |
1147 | set: function(options) {
1148 | options = options || {};
1149 |
1150 | for (var i in this.list) {
1151 | var neuron = this.list[i];
1152 | if (options.label)
1153 | neuron.label = options.label + '_' + neuron.ID;
1154 | if (options.squash)
1155 | neuron.squash = options.squash;
1156 | if (options.bias)
1157 | neuron.bias = options.bias;
1158 | }
1159 | return this;
1160 | }
1161 | }
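// Usage sketch (illustrative only; variable names are hypothetical): creating two layers,
// projecting a connection between them and activating them in order with the Layer API above:
//
//   var inputLayer  = new Layer(2);      // layer with 2 neurons
//   var hiddenLayer = new Layer(6);      // layer with 6 neurons
//   inputLayer.project(hiddenLayer);     // ALL_TO_ALL connection by default
//   inputLayer.activate([0.1, 0.7]);     // feed the input values
//   hiddenLayer.activate();              // propagate through the hidden layer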
1162 |
1163 | // represents a connection from one layer to another, and keeps track of its weight and gain
1164 | Layer.connection = function LayerConnection(fromLayer, toLayer, type, weights) {
1165 | this.ID = Layer.connection.uid();
1166 | this.from = fromLayer;
1167 | this.to = toLayer;
1168 | this.selfconnection = toLayer == fromLayer;
1169 | this.type = type;
1170 | this.connections = {};
1171 | this.list = [];
1172 | this.size = 0;
1173 | this.gatedfrom = [];
1174 |
1175 | if (typeof this.type == 'undefined')
1176 | {
1177 | if (fromLayer == toLayer)
1178 | this.type = Layer.connectionType.ONE_TO_ONE;
1179 | else
1180 | this.type = Layer.connectionType.ALL_TO_ALL;
1181 | }
1182 |
1183 | if (this.type == Layer.connectionType.ALL_TO_ALL ||
1184 | this.type == Layer.connectionType.ALL_TO_ELSE) {
1185 | for (var here in this.from.list) {
1186 | for (var there in this.to.list) {
1187 | var from = this.from.list[here];
1188 | var to = this.to.list[there];
1189 | if(this.type == Layer.connectionType.ALL_TO_ELSE && from == to)
1190 | continue;
1191 | var connection = from.project(to, weights);
1192 |
1193 | this.connections[connection.ID] = connection;
1194 | this.size = this.list.push(connection);
1195 | }
1196 | }
1197 | } else if (this.type == Layer.connectionType.ONE_TO_ONE) {
1198 |
1199 | for (var neuron in this.from.list) {
1200 | var from = this.from.list[neuron];
1201 | var to = this.to.list[neuron];
1202 | var connection = from.project(to, weights);
1203 |
1204 | this.connections[connection.ID] = connection;
1205 | this.size = this.list.push(connection);
1206 | }
1207 | }
1208 |
1209 | fromLayer.connectedTo.push(this);
1210 | }
1211 |
1212 | // types of connections
1213 | Layer.connectionType = {};
1214 | Layer.connectionType.ALL_TO_ALL = "ALL TO ALL";
1215 | Layer.connectionType.ONE_TO_ONE = "ONE TO ONE";
1216 | Layer.connectionType.ALL_TO_ELSE = "ALL TO ELSE";
1217 |
1218 | // types of gates
1219 | Layer.gateType = {};
1220 | Layer.gateType.INPUT = "INPUT";
1221 | Layer.gateType.OUTPUT = "OUTPUT";
1222 | Layer.gateType.ONE_TO_ONE = "ONE TO ONE";
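// Gating sketch (illustrative only; variable names are hypothetical): a layer can gate the
// LayerConnection returned by project(), using one of the gate types above:
//
//   var connection = inputLayer.project(outputLayer);
//   gatingLayer.gate(connection, Layer.gateType.INPUT);
//   // for Layer.gateType.INPUT the gating layer must have the same size as connection.to,
//   // for Layer.gateType.OUTPUT the same size as connection.from (see gate() above)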
1223 |
1224 | (function() {
1225 | var connections = 0;
1226 | Layer.connection.uid = function() {
1227 | return connections++;
1228 | }
1229 | })();
1230 |
1231 | /* WEBPACK VAR INJECTION */}.call(exports, __webpack_require__(2)(module)))
1232 |
1233 | /***/ },
1234 | /* 4 */
1235 | /***/ function(module, exports, __webpack_require__) {
1236 |
1237 | /* WEBPACK VAR INJECTION */(function(module) {// export
1238 | if (module) module.exports = Network;
1239 |
1240 | // import
1241 | var Neuron = __webpack_require__(1)
1242 | , Layer = __webpack_require__(3)
1243 | , Trainer = __webpack_require__(5)
1244 |
1245 | /*******************************************************************************************
1246 | NETWORK
1247 | *******************************************************************************************/
1248 |
1249 | function Network(layers) {
1250 | if (typeof layers != 'undefined') {
1251 | this.layers = layers || {
1252 | input: null,
1253 | hidden: {},
1254 | output: null
1255 | };
1256 | this.optimized = null;
1257 | }
1258 | }
1259 | Network.prototype = {
1260 |
1261 | // feed-forward activation of all the layers to produce an output
1262 | activate: function(input) {
1263 |
1264 | if (this.optimized === false)
1265 | {
1266 | this.layers.input.activate(input);
1267 | for (var layer in this.layers.hidden)
1268 | this.layers.hidden[layer].activate();
1269 | return this.layers.output.activate();
1270 | }
1271 | else
1272 | {
1273 | if (this.optimized == null)
1274 | this.optimize();
1275 | return this.optimized.activate(input);
1276 | }
1277 | },
1278 |
1279 | // back-propagate the error through the network
1280 | propagate: function(rate, target) {
1281 |
1282 | if (this.optimized === false)
1283 | {
1284 | this.layers.output.propagate(rate, target);
1285 | var reverse = [];
1286 | for (var layer in this.layers.hidden)
1287 | reverse.push(this.layers.hidden[layer]);
1288 | reverse.reverse();
1289 | for (var layer in reverse)
1290 | reverse[layer].propagate(rate);
1291 | }
1292 | else
1293 | {
1294 | if (this.optimized == null)
1295 | this.optimize();
1296 | this.optimized.propagate(rate, target);
1297 | }
1298 | },
1299 |
1300 | // project a connection to another unit (either a network or a layer)
1301 | project: function(unit, type, weights) {
1302 |
1303 | if (this.optimized)
1304 | this.optimized.reset();
1305 |
1306 | if (unit instanceof Network)
1307 | return this.layers.output.project(unit.layers.input, type, weights);
1308 |
1309 | if (unit instanceof Layer)
1310 | return this.layers.output.project(unit, type, weights);
1311 |
1312 | throw new Error("Invalid argument, you can only project connections to LAYERS and NETWORKS!");
1313 | },
1314 |
1315 | // let this network gate a connection
1316 | gate: function(connection, type) {
1317 | if (this.optimized)
1318 | this.optimized.reset();
1319 | this.layers.output.gate(connection, type);
1320 | },
1321 |
1322 | // clear all elegibility traces and extended elegibility traces (the network forgets its context, but not what was trained)
1323 | clear: function() {
1324 |
1325 | this.restore();
1326 |
1327 | var inputLayer = this.layers.input,
1328 | outputLayer = this.layers.output;
1329 |
1330 | inputLayer.clear();
1331 | for (var layer in this.layers.hidden) {
1332 | var hiddenLayer = this.layers.hidden[layer];
1333 | hiddenLayer.clear();
1334 | }
1335 | outputLayer.clear();
1336 |
1337 | if (this.optimized)
1338 | this.optimized.reset();
1339 | },
1340 |
1341 | // reset all weights and clear all traces (ends up like a new network)
1342 | reset: function() {
1343 |
1344 | this.restore();
1345 |
1346 | var inputLayer = this.layers.input,
1347 | outputLayer = this.layers.output;
1348 |
1349 | inputLayer.reset();
1350 | for (var layer in this.layers.hidden) {
1351 | var hiddenLayer = this.layers.hidden[layer];
1352 | hiddenLayer.reset();
1353 | }
1354 | outputLayer.reset();
1355 |
1356 | if (this.optimized)
1357 | this.optimized.reset();
1358 | },
1359 |
1360 | // hardcodes the behaviour of the whole network into a single optimized function
1361 | optimize: function() {
1362 |
1363 | var that = this;
1364 | var optimized = {};
1365 | var neurons = this.neurons();
1366 |
1367 | for (var i in neurons) {
1368 | var neuron = neurons[i].neuron;
1369 | var layer = neurons[i].layer;
1370 | while (neuron.neuron)
1371 | neuron = neuron.neuron;
1372 | optimized = neuron.optimize(optimized, layer);
1373 | }
1374 | for (var i in optimized.propagation_sentences)
1375 | optimized.propagation_sentences[i].reverse();
1376 | optimized.propagation_sentences.reverse();
1377 |
1378 | var hardcode = "";
1379 | hardcode += "var F = Float64Array ? new Float64Array(" + optimized.memory +
1380 | ") : []; ";
1381 | for (var i in optimized.variables)
1382 | hardcode += "F[" + optimized.variables[i].id + "] = " + (optimized.variables[
1383 | i].value || 0) + "; ";
1384 | hardcode += "var activate = function(input){\n";
1385 | for (var i in optimized.inputs)
1386 | hardcode += "F[" + optimized.inputs[i] + "] = input[" + i + "]; ";
1387 | for (var currentLayer in optimized.activation_sentences) {
1388 | if (optimized.activation_sentences[currentLayer].length > 0) {
1389 | for (var currentNeuron in optimized.activation_sentences[currentLayer]) {
1390 | hardcode += optimized.activation_sentences[currentLayer][currentNeuron].join(" ");
1391 | hardcode += optimized.trace_sentences[currentLayer][currentNeuron].join(" ");
1392 | }
1393 | }
1394 | }
1395 | hardcode += " var output = []; "
1396 | for (var i in optimized.outputs)
1397 | hardcode += "output[" + i + "] = F[" + optimized.outputs[i] + "]; ";
1398 | hardcode += "return output; }; "
1399 | hardcode += "var propagate = function(rate, target){\n";
1400 | hardcode += "F[" + optimized.variables.rate.id + "] = rate; ";
1401 | for (var i in optimized.targets)
1402 | hardcode += "F[" + optimized.targets[i] + "] = target[" + i + "]; ";
1403 | for (var currentLayer in optimized.propagation_sentences)
1404 | for (var currentNeuron in optimized.propagation_sentences[currentLayer])
1405 | hardcode += optimized.propagation_sentences[currentLayer][currentNeuron].join(" ") + " ";
1406 | hardcode += " };\n";
1407 | hardcode +=
1408 | "var ownership = function(memoryBuffer){\nF = memoryBuffer;\nthis.memory = F;\n};\n";
1409 | hardcode +=
1410 | "return {\nmemory: F,\nactivate: activate,\npropagate: propagate,\nownership: ownership\n};";
1411 | hardcode = hardcode.split(";").join(";\n");
1412 |
1413 | var constructor = new Function(hardcode);
1414 |
1415 | var network = constructor();
1416 | network.data = {
1417 | variables: optimized.variables,
1418 | activate: optimized.activation_sentences,
1419 | propagate: optimized.propagation_sentences,
1420 | trace: optimized.trace_sentences,
1421 | inputs: optimized.inputs,
1422 | outputs: optimized.outputs,
1423 | check_activation: this.activate,
1424 | check_propagation: this.propagate
1425 | }
1426 |
1427 | network.reset = function() {
1428 | if (that.optimized) {
1429 | that.optimized = null;
1430 | that.activate = network.data.check_activation;
1431 | that.propagate = network.data.check_propagation;
1432 | }
1433 | }
1434 |
1435 | this.optimized = network;
1436 | this.activate = network.activate;
1437 | this.propagate = network.propagate;
1438 | },
1439 |
1440 | // restores all the values from the optimized network to their respective objects in order to manipulate the network
1441 | restore: function() {
1442 | if (!this.optimized)
1443 | return;
1444 |
1445 | var optimized = this.optimized;
1446 |
1447 | var getValue = function() {
1448 | var args = Array.prototype.slice.call(arguments);
1449 |
1450 | var unit = args.shift();
1451 | var prop = args.pop();
1452 |
1453 | var id = prop + '_';
1454 | for (var property in args)
1455 | id += args[property] + '_';
1456 | id += unit.ID;
1457 |
1458 | var memory = optimized.memory;
1459 | var variables = optimized.data.variables;
1460 |
1461 | if (id in variables)
1462 | return memory[variables[id].id];
1463 | return 0;
1464 | }
1465 |
1466 | var list = this.neurons();
1467 |
1468 | // link id's to positions in the array
1469 | var ids = {};
1470 | for (var i in list) {
1471 | var neuron = list[i].neuron;
1472 | while (neuron.neuron)
1473 | neuron = neuron.neuron;
1474 |
1475 | neuron.state = getValue(neuron, 'state');
1476 | neuron.old = getValue(neuron, 'old');
1477 | neuron.activation = getValue(neuron, 'activation');
1478 | neuron.bias = getValue(neuron, 'bias');
1479 |
1480 | for (var input in neuron.trace.elegibility)
1481 | neuron.trace.elegibility[input] = getValue(neuron, 'trace',
1482 | 'elegibility', input);
1483 |
1484 | for (var gated in neuron.trace.extended)
1485 | for (var input in neuron.trace.extended[gated])
1486 | neuron.trace.extended[gated][input] = getValue(neuron, 'trace',
1487 | 'extended', gated, input);
1488 | }
1489 |
1490 | // get connections
1491 | for (var i in list) {
1492 | var neuron = list[i].neuron;
1493 | while (neuron.neuron)
1494 | neuron = neuron.neuron;
1495 |
1496 | for (var j in neuron.connections.projected) {
1497 | var connection = neuron.connections.projected[j];
1498 | connection.weight = getValue(connection, 'weight');
1499 | connection.gain = getValue(connection, 'gain');
1500 | }
1501 | }
1502 | },
1503 |
1504 | // returns all the neurons in the network
1505 | neurons: function() {
1506 |
1507 | var neurons = [];
1508 |
1509 | var inputLayer = this.layers.input.neurons(),
1510 | outputLayer = this.layers.output.neurons();
1511 |
1512 | for (var neuron in inputLayer)
1513 | neurons.push({
1514 | neuron: inputLayer[neuron],
1515 | layer: 'input'
1516 | });
1517 |
1518 | for (var layer in this.layers.hidden) {
1519 | var hiddenLayer = this.layers.hidden[layer].neurons();
1520 | for (var neuron in hiddenLayer)
1521 | neurons.push({
1522 | neuron: hiddenLayer[neuron],
1523 | layer: layer
1524 | });
1525 | }
1526 | for (var neuron in outputLayer)
1527 | neurons.push({
1528 | neuron: outputLayer[neuron],
1529 | layer: 'output'
1530 | });
1531 |
1532 | return neurons;
1533 | },
1534 |
1535 | // returns the number of inputs of the network
1536 | inputs: function() {
1537 | return this.layers.input.size;
1538 | },
1539 |
1540 | // returns the number of outputs of the network
1541 | outputs: function() {
1542 | return this.layers.output.size;
1543 | },
1544 |
1545 | // sets the layers of the network
1546 | set: function(layers) {
1547 |
1548 | this.layers = layers;
1549 | if (this.optimized)
1550 | this.optimized.reset();
1551 | },
1552 |
1553 | setOptimize: function(bool){
1554 | this.restore();
1555 | if (this.optimized)
1556 | this.optimized.reset();
1557 | this.optimized = bool? null : false;
1558 | },
1559 |
1560 | // returns a JSON object that represents all the neurons and connections of the network
1561 | toJSON: function(ignoreTraces) {
1562 |
1563 | this.restore();
1564 |
1565 | var list = this.neurons();
1566 | var neurons = [];
1567 | var connections = [];
1568 |
1569 | // link id's to positions in the array
1570 | var ids = {};
1571 | for (var i in list) {
1572 | var neuron = list[i].neuron;
1573 | while (neuron.neuron)
1574 | neuron = neuron.neuron;
1575 | ids[neuron.ID] = i;
1576 |
1577 | var copy = {
1578 | trace: {
1579 | elegibility: {},
1580 | extended: {}
1581 | },
1582 | state: neuron.state,
1583 | old: neuron.old,
1584 | activation: neuron.activation,
1585 | bias: neuron.bias,
1586 | layer: list[i].layer
1587 | };
1588 |
1589 | copy.squash = neuron.squash == Neuron.squash.LOGISTIC ? "LOGISTIC" :
1590 | neuron.squash == Neuron.squash.TANH ? "TANH" :
1591 | neuron.squash == Neuron.squash.IDENTITY ? "IDENTITY" :
1592 | neuron.squash == Neuron.squash.HLIM ? "HLIM" :
1593 | neuron.squash == Neuron.squash.RELU ? "RELU" : null;
1594 |
1595 | neurons.push(copy);
1596 | }
1597 |
1598 | // get connections
1599 | for (var i in list) {
1600 | var neuron = list[i].neuron;
1601 | while (neuron.neuron)
1602 | neuron = neuron.neuron;
1603 |
1604 | for (var j in neuron.connections.projected) {
1605 | var connection = neuron.connections.projected[j];
1606 | connections.push({
1607 | from: ids[connection.from.ID],
1608 | to: ids[connection.to.ID],
1609 | weight: connection.weight,
1610 | gater: connection.gater ? ids[connection.gater.ID] : null,
1611 | });
1612 | }
1613 | if (neuron.selfconnected())
1614 | connections.push({
1615 | from: ids[neuron.ID],
1616 | to: ids[neuron.ID],
1617 | weight: neuron.selfconnection.weight,
1618 | gater: neuron.selfconnection.gater ? ids[neuron.selfconnection.gater.ID] : null,
1619 | });
1620 | }
1621 |
1622 | return {
1623 | neurons: neurons,
1624 | connections: connections
1625 | }
1626 | },
1627 |
1628 | // export the topology into dot language which can be visualized as graphs using dot
1629 | /* example: ... console.log(net.toDotLang());
1630 | $ node example.js > example.dot
1631 | $ dot example.dot -Tpng > out.png
1632 | */
1633 | toDot: function(edgeConnection) {
1634 | if (typeof edgeConnection == 'undefined')
1635 | edgeConnection = false;
1636 | var code = "digraph nn {\n rankdir = BT\n";
1637 | var layers = [this.layers.input].concat(this.layers.hidden, this.layers.output);
1638 | for (var layer in layers) {
1639 | for (var to in layers[layer].connectedTo) { // projections
1640 | var connection = layers[layer].connectedTo[to];
1641 | var layerTo = connection.to;
1642 | var size = connection.size;
1643 | var layerID = layers.indexOf(layers[layer]);
1644 | var layerToID = layers.indexOf(layerTo);
1645 | /* http://stackoverflow.com/questions/26845540/connect-edges-with-graph-dot
1646 | * DOT does not support edge-to-edge connections
1647 | * This workaround produces somewhat weird graphs ...
1648 | */
1649 | if ( edgeConnection) {
1650 | if (connection.gatedfrom.length) {
1651 | var fakeNode = "fake" + layerID + "_" + layerToID;
1652 | code += " " + fakeNode +
1653 | " [label = \"\", shape = point, width = 0.01, height = 0.01]\n";
1654 | code += " " + layerID + " -> " + fakeNode + " [label = " + size + ", arrowhead = none]\n";
1655 | code += " " + fakeNode + " -> " + layerToID + "\n";
1656 | } else
1657 | code += " " + layerID + " -> " + layerToID + " [label = " + size + "]\n";
1658 | for (var from in connection.gatedfrom) { // gatings
1659 | var layerfrom = connection.gatedfrom[from].layer;
1660 | var layerfromID = layers.indexOf(layerfrom);
1661 | code += " " + layerfromID + " -> " + fakeNode + " [color = blue]\n";
1662 | }
1663 | } else {
1664 | code += " " + layerID + " -> " + layerToID + " [label = " + size + "]\n";
1665 | for (var from in connection.gatedfrom) { // gatings
1666 | var layerfrom = connection.gatedfrom[from].layer;
1667 | var layerfromID = layers.indexOf(layerfrom);
1668 | code += " " + layerfromID + " -> " + layerToID + " [color = blue]\n";
1669 | }
1670 | }
1671 | }
1672 | }
1673 | code += "}\n";
1674 | return {
1675 | code: code,
1676 | link: "https://chart.googleapis.com/chart?chl=" + escape(code.replace("/ /g", "+")) + "&cht=gv"
1677 | }
1678 | },
1679 |
1680 | // returns a function that works as the activation of the network and can be used without depending on the library
1681 | standalone: function() {
1682 | if (!this.optimized)
1683 | this.optimize();
1684 |
1685 | var data = this.optimized.data;
1686 |
1687 | // build activation function
1688 | var activation = "function (input) {\n";
1689 |
1690 | // build inputs
1691 | for (var i in data.inputs)
1692 | activation += "F[" + data.inputs[i] + "] = input[" + i + "];\n";
1693 |
1694 | // build network activation
1695 | for (var neuron in data.activate) { // shouldn't this be layer?
1696 | for (var sentence in data.activate[neuron])
1697 | activation += data.activate[neuron][sentence].join('') + "\n";
1698 | }
1699 |
1700 | // build outputs
1701 | activation += "var output = [];\n";
1702 | for (var i in data.outputs)
1703 | activation += "output[" + i + "] = F[" + data.outputs[i] + "];\n";
1704 | activation += "return output;\n}";
1705 |
1706 | // reference all the positions in memory
1707 | var memory = activation.match(/F\[(\d+)\]/g);
1708 | var dimension = 0;
1709 | var ids = {};
1710 | for (var address in memory) {
1711 | var tmp = memory[address].match(/\d+/)[0];
1712 | if (!(tmp in ids)) {
1713 | ids[tmp] = dimension++;
1714 | }
1715 | }
1716 | var hardcode = "F = {\n";
1717 | for (var i in ids)
1718 | hardcode += ids[i] + ": " + this.optimized.memory[i] + ",\n";
1719 | hardcode = hardcode.substring(0, hardcode.length - 2) + "\n};\n";
1720 | hardcode = "var run = " + activation.replace(/F\[(\d+)]/g, function(
1721 | index) {
1722 | return 'F[' + ids[index.match(/\d+/)[0]] + ']'
1723 | }).replace("{\n", "{\n" + hardcode + "") + ";\n";
1724 | hardcode += "return run";
1725 |
1726 | // return standalone function
1727 | return new Function(hardcode)();
1728 | },
1729 |
1730 |
1731 | // Returns an HTML5 WebWorker specialized in training the network stored in `memory`.
1732 | // Train based on the given dataSet and options.
1733 | // The worker returns the updated `memory` when done.
1734 | worker: function(memory, set, options) {
1735 |
1736 | // Copy the options and set defaults (options might be different for each worker)
1737 | var workerOptions = {};
1738 | if(options) workerOptions = options;
1739 | workerOptions.rate = options.rate || .2;
1740 | workerOptions.iterations = options.iterations || 100000;
1741 | workerOptions.error = options.error || .005;
1742 | workerOptions.cost = options.cost || null;
1743 | workerOptions.crossValidate = options.crossValidate || null;
1744 |
1745 | // Cost function might be different for each worker
1746 | var costFunction = "var cost = " + (options && options.cost || this.cost || Trainer.cost.MSE) + ";\n";
1747 | var workerFunction = Network.getWorkerSharedFunctions();
1748 | workerFunction = workerFunction.replace(/var cost = options && options\.cost \|\| this\.cost \|\| Trainer\.cost\.MSE;/g, costFunction);
1749 |
1750 | // Set what we do when training is finished
1751 | workerFunction = workerFunction.replace('return results;',
1752 | 'postMessage({action: "done", message: results, memoryBuffer: F}, [F.buffer]);');
1753 |
1754 | // Replace log with postmessage
1755 | workerFunction = workerFunction.replace("console.log('iterations', iterations, 'error', error, 'rate', currentRate)",
1756 | "postMessage({action: 'log', message: {\n" +
1757 | "iterations: iterations,\n" +
1758 | "error: error,\n" +
1759 | "rate: currentRate\n" +
1760 | "}\n" +
1761 | "})");
1762 |
1763 | // Replace schedule with postmessage
1764 | workerFunction = workerFunction.replace("abort = this.schedule.do({ error: error, iterations: iterations, rate: currentRate })",
1765 | "postMessage({action: 'schedule', message: {\n" +
1766 | "iterations: iterations,\n" +
1767 | "error: error,\n" +
1768 | "rate: currentRate\n" +
1769 | "}\n" +
1770 | "})");
1771 |
1772 | if (!this.optimized)
1773 | this.optimize();
1774 |
1775 | var hardcode = "var inputs = " + this.optimized.data.inputs.length + ";\n";
1776 | hardcode += "var outputs = " + this.optimized.data.outputs.length + ";\n";
1777 | hardcode += "var F = new Float64Array([" + this.optimized.memory.toString() + "]);\n";
1778 | hardcode += "var activate = " + this.optimized.activate.toString() + ";\n";
1779 | hardcode += "var propagate = " + this.optimized.propagate.toString() + ";\n";
1780 | hardcode +=
1781 | "onmessage = function(e) {\n" +
1782 | "if (e.data.action == 'startTraining') {\n" +
1783 | "train(" + JSON.stringify(set) + "," + JSON.stringify(workerOptions) + ");\n" +
1784 | "}\n" +
1785 | "}";
1786 |
1787 | var workerSourceCode = workerFunction + '\n' + hardcode;
1788 | var blob = new Blob([workerSourceCode]);
1789 | var blobURL = window.URL.createObjectURL(blob);
1790 |
1791 | return new Worker(blobURL);
1792 | },
1793 |
1794 | // returns a copy of the network
1795 | clone: function() {
1796 | return Network.fromJSON(this.toJSON());
1797 | }
1798 | };
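// Usage sketch (illustrative only; variable names are hypothetical): wiring plain Layers
// into a small 2-6-1 Network with the constructor and prototype methods defined above:
//
//   var inputLayer  = new Layer(2);
//   var hiddenLayer = new Layer(6);
//   var outputLayer = new Layer(1);
//   inputLayer.project(hiddenLayer);
//   hiddenLayer.project(outputLayer);
//   var net = new Network({ input: inputLayer, hidden: [hiddenLayer], output: outputLayer });
//   var out = net.activate([0.5, -0.2]);   // forward pass, returns an array with 1 value
//   net.propagate(0.2, [1]);               // back-propagation with rate 0.2 towards target [1]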
1799 |
1800 | /**
1801 | * Creates a static String to store the source code of the functions
1802 | * that are identical for all the workers (train, _trainSet, test)
1803 | *
1804 | * @return {String} Source code that can train a network inside a worker.
1805 | * @static
1806 | */
1807 | Network.getWorkerSharedFunctions = function() {
1808 | // If we already computed the source code for the shared functions
1809 | if(typeof Network._SHARED_WORKER_FUNCTIONS !== 'undefined')
1810 | return Network._SHARED_WORKER_FUNCTIONS;
1811 |
1812 | // Otherwise compute and return the source code
1813 | // We compute them by simply copying the source code of the train, _trainSet and test functions
1814 | // using the .toString() method
1815 |
1816 | // Load and name the train function
1817 | var train_f = Trainer.prototype.train.toString();
1818 | train_f = train_f.replace('function (set', 'function train(set') + '\n';
1819 |
1820 | // Load and name the _trainSet function
1821 | var _trainSet_f = Trainer.prototype._trainSet.toString().replace(/this.network./g, '');
1822 | _trainSet_f = _trainSet_f.replace('function (set', 'function _trainSet(set') + '\n';
1823 | _trainSet_f = _trainSet_f.replace('this.crossValidate', 'crossValidate');
1824 | _trainSet_f = _trainSet_f.replace('crossValidate = true', 'crossValidate = { }');
1825 |
1826 | // Load and name the test function
1827 | var test_f = Trainer.prototype.test.toString().replace(/this.network./g, '');
1828 | test_f = test_f.replace('function (set', 'function test(set') + '\n';
1829 |
1830 | return Network._SHARED_WORKER_FUNCTIONS = train_f + _trainSet_f + test_f;
1831 | };
1832 |
1833 | // rebuild a network that has been stored in a json using the method toJSON()
1834 | Network.fromJSON = function(json) {
1835 |
1836 | var neurons = [];
1837 |
1838 | var layers = {
1839 | input: new Layer(),
1840 | hidden: [],
1841 | output: new Layer()
1842 | };
1843 |
1844 | for (var i in json.neurons) {
1845 | var config = json.neurons[i];
1846 |
1847 | var neuron = new Neuron();
1848 | neuron.trace.elegibility = {};
1849 | neuron.trace.extended = {};
1850 | neuron.state = config.state;
1851 | neuron.old = config.old;
1852 | neuron.activation = config.activation;
1853 | neuron.bias = config.bias;
1854 | neuron.squash = config.squash in Neuron.squash ? Neuron.squash[config.squash] : Neuron.squash.LOGISTIC;
1855 | neurons.push(neuron);
1856 |
1857 | if (config.layer == 'input')
1858 | layers.input.add(neuron);
1859 | else if (config.layer == 'output')
1860 | layers.output.add(neuron);
1861 | else {
1862 | if (typeof layers.hidden[config.layer] == 'undefined')
1863 | layers.hidden[config.layer] = new Layer();
1864 | layers.hidden[config.layer].add(neuron);
1865 | }
1866 | }
1867 |
1868 | for (var i in json.connections) {
1869 | var config = json.connections[i];
1870 | var from = neurons[config.from];
1871 | var to = neurons[config.to];
1872 | var weight = config.weight;
1873 | var gater = neurons[config.gater];
1874 |
1875 | var connection = from.project(to, weight);
1876 | if (gater)
1877 | gater.gate(connection);
1878 | }
1879 |
1880 | return new Network(layers);
1881 | };
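// Serialization sketch (illustrative only): toJSON() and fromJSON() above allow a network
// to be persisted and rebuilt, which is also how clone() works:
//
//   var json  = net.toJSON();                        // plain object (neurons + connections)
//   var saved = JSON.stringify(json);                // store it somewhere
//   var copy  = Network.fromJSON(JSON.parse(saved)); // rebuild an equivalent network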
1882 |
1883 | /* WEBPACK VAR INJECTION */}.call(exports, __webpack_require__(2)(module)))
1884 |
1885 | /***/ },
1886 | /* 5 */
1887 | /***/ function(module, exports, __webpack_require__) {
1888 |
1889 | /* WEBPACK VAR INJECTION */(function(module) {// export
1890 | if (module) module.exports = Trainer;
1891 |
1892 | /*******************************************************************************************
1893 | TRAINER
1894 | *******************************************************************************************/
1895 |
1896 | function Trainer(network, options) {
1897 | options = options || {};
1898 | this.network = network;
1899 | this.rate = options.rate || .2;
1900 | this.iterations = options.iterations || 100000;
1901 | this.error = options.error || .005;
1902 | this.cost = options.cost || null;
1903 | this.crossValidate = options.crossValidate || null;
1904 | }
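// Usage sketch (illustrative only; variable names are hypothetical): a Trainer wraps a
// network and exposes train()/test() (defined below); options follow the defaults above:
//
//   var trainer = new Trainer(net, { rate: 0.1, iterations: 20000, error: 0.005 });
//   var result  = trainer.train([{ input: [0, 0], output: [0] },
//                                { input: [1, 1], output: [0] }], { log: 1000 });
//   // result -> { error: ..., iterations: ..., time: ... }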
1905 |
1906 | Trainer.prototype = {
1907 |
1908 | // trains the network on any given data set
1909 | train: function(set, options) {
1910 |
1911 | var error = 1;
1912 | var iterations = bucketSize = 0;
1913 | var abort = false;
1914 | var currentRate;
1915 | var cost = options && options.cost || this.cost || Trainer.cost.MSE;
1916 | var crossValidate = false, testSet, trainSet;
1917 |
1918 | var start = Date.now();
1919 |
1920 | if (options) {
1921 | if (options.shuffle) {
1922 | //+ Jonas Raoni Soares Silva
1923 | //@ http://jsfromhell.com/array/shuffle [v1.0]
1924 | function shuffle(o) { //v1.0
1925 | for (var j, x, i = o.length; i; j = Math.floor(Math.random() * i), x = o[--i], o[i] = o[j], o[j] = x);
1926 | return o;
1927 | };
1928 | }
1929 | if (options.iterations)
1930 | this.iterations = options.iterations;
1931 | if (options.error)
1932 | this.error = options.error;
1933 | if (options.rate)
1934 | this.rate = options.rate;
1935 | if (options.cost)
1936 | this.cost = options.cost;
1937 | if (options.schedule)
1938 | this.schedule = options.schedule;
1939 | if (options.customLog){
1940 | // for backward compatibility with code that used customLog
1941 | console.log('Deprecated: use schedule instead of customLog')
1942 | this.schedule = options.customLog;
1943 | }
1944 | if (this.crossValidate || options.crossValidate) {
1945 | if(!this.crossValidate) this.crossValidate = {};
1946 | crossValidate = true;
1947 | if (options.crossValidate.testSize)
1948 | this.crossValidate.testSize = options.crossValidate.testSize;
1949 | if (options.crossValidate.testError)
1950 | this.crossValidate.testError = options.crossValidate.testError;
1951 | }
1952 | }
1953 |
1954 | currentRate = this.rate;
1955 | if(Array.isArray(this.rate)) {
1956 | var bucketSize = Math.floor(this.iterations / this.rate.length);
1957 | }
1958 |
1959 | if(crossValidate) {
1960 | var numTrain = Math.ceil((1 - this.crossValidate.testSize) * set.length);
1961 | trainSet = set.slice(0, numTrain);
1962 | testSet = set.slice(numTrain);
1963 | }
1964 |
1965 | var lastError = 0;
1966 | while ((!abort && iterations < this.iterations && error > this.error)) {
1967 | if (crossValidate && error <= this.crossValidate.testError) {
1968 | break;
1969 | }
1970 |
1971 | var currentSetSize = set.length;
1972 | error = 0;
1973 | iterations++;
1974 |
1975 | if(bucketSize > 0) {
1976 | var currentBucket = Math.floor(iterations / bucketSize);
1977 | currentRate = this.rate[currentBucket] || currentRate;
1978 | }
1979 |
1980 | if(typeof this.rate === 'function') {
1981 | currentRate = this.rate(iterations, lastError);
1982 | }
1983 |
1984 | if (crossValidate) {
1985 | this._trainSet(trainSet, currentRate, cost);
1986 | error += this.test(testSet).error;
1987 | currentSetSize = 1;
1988 | } else {
1989 | error += this._trainSet(set, currentRate, cost);
1990 | currentSetSize = set.length;
1991 | }
1992 |
1993 | // check error
1994 | error /= currentSetSize;
1995 | lastError = error;
1996 |
1997 | if (options) {
1998 | if (this.schedule && this.schedule.every && iterations %
1999 | this.schedule.every == 0)
2000 | abort = this.schedule.do({ error: error, iterations: iterations, rate: currentRate });
2001 | else if (options.log && iterations % options.log == 0) {
2002 | console.log('iterations', iterations, 'error', error, 'rate', currentRate);
2003 | };
2004 | if (options.shuffle)
2005 | shuffle(set);
2006 | }
2007 | }
2008 |
2009 | var results = {
2010 | error: error,
2011 | iterations: iterations,
2012 | time: Date.now() - start
2013 | };
2014 |
2015 | return results;
2016 | },
2017 |
2018 | // trains the network on any given data set using a WebWorker (browser only). Returns a Promise of the results.
2019 | trainAsync: function(set, options) {
2020 | var train = this.workerTrain.bind(this);
2021 | return new Promise(function(resolve, reject) {
2022 | try {
2023 | train(set, resolve, options, true)
2024 | } catch(e) {
2025 | reject(e)
2026 | }
2027 | })
2028 | },
2029 |
2030 | // performs one training epoch and returns the error (private function used in this.train)
2031 | _trainSet: function(set, currentRate, costFunction) {
2032 | var errorSum = 0;
2033 | for (var train in set) {
2034 | var input = set[train].input;
2035 | var target = set[train].output;
2036 |
2037 | var output = this.network.activate(input);
2038 | this.network.propagate(currentRate, target);
2039 |
2040 | errorSum += costFunction(target, output);
2041 | }
2042 | return errorSum;
2043 | },
2044 |
2045 | // tests a set and returns the error and elapsed time
2046 | test: function(set, options) {
2047 |
2048 | var error = 0;
2049 | var input, output, target;
2050 | var cost = options && options.cost || this.cost || Trainer.cost.MSE;
2051 |
2052 | var start = Date.now();
2053 |
2054 | for (var test in set) {
2055 | input = set[test].input;
2056 | target = set[test].output;
2057 | output = this.network.activate(input);
2058 | error += cost(target, output);
2059 | }
2060 |
2061 | error /= set.length;
2062 |
2063 | var results = {
2064 | error: error,
2065 | time: Date.now() - start
2066 | };
2067 |
2068 | return results;
2069 | },
2070 |
2071 | // trains the network on any given data set using a WebWorker [deprecated: use trainAsync instead]
2072 | workerTrain: function(set, callback, options, suppressWarning) {
2073 |
2074 | if (!suppressWarning) {
2075 | console.warn('Deprecated: do not use `workerTrain`, use `trainAsync` instead.')
2076 | }
2077 | var that = this;
2078 |
2079 | if (!this.network.optimized)
2080 | this.network.optimize();
2081 |
2082 | // Create a new worker
2083 | var worker = this.network.worker(this.network.optimized.memory, set, options);
2084 |
2085 | // train the worker
2086 | worker.onmessage = function(e) {
2087 | switch(e.data.action) {
2088 | case 'done':
2089 | var iterations = e.data.message.iterations;
2090 | var error = e.data.message.error;
2091 | var time = e.data.message.time;
2092 |
2093 | that.network.optimized.ownership(e.data.memoryBuffer);
2094 |
2095 | // Done callback
2096 | callback({
2097 | error: error,
2098 | iterations: iterations,
2099 | time: time
2100 | });
2101 |
2102 | // Delete the worker and all its associated memory
2103 | worker.terminate();
2104 | break;
2105 |
2106 | case 'log':
2107 | console.log(e.data.message); break;
2108 |
2109 | case 'schedule':
2110 | if (options && options.schedule && typeof options.schedule.do === 'function') {
2111 | var scheduled = options.schedule.do
2112 | scheduled(e.data.message)
2113 | }
2114 | break;
2115 | }
2116 | };
2117 |
2118 | // Start the worker
2119 | worker.postMessage({action: 'startTraining'});
2120 | },
2121 |
2122 | // trains the network on the XOR problem
2123 | XOR: function(options) {
2124 |
2125 | if (this.network.inputs() != 2 || this.network.outputs() != 1)
2126 | throw new Error("Incompatible network (2 inputs, 1 output)");
2127 |
2128 | var defaults = {
2129 | iterations: 100000,
2130 | log: false,
2131 | shuffle: true,
2132 | cost: Trainer.cost.MSE
2133 | };
2134 |
2135 | if (options)
2136 | for (var i in options)
2137 | defaults[i] = options[i];
2138 |
2139 | return this.train([{
2140 | input: [0, 0],
2141 | output: [0]
2142 | }, {
2143 | input: [1, 0],
2144 | output: [1]
2145 | }, {
2146 | input: [0, 1],
2147 | output: [1]
2148 | }, {
2149 | input: [1, 1],
2150 | output: [0]
2151 | }], defaults);
2152 | },
2153 |
2154 | // trains the network to pass a Distracted Sequence Recall test
2155 | DSR: function(options) {
2156 | options = options || {};
2157 |
2158 | var targets = options.targets || [2, 4, 7, 8];
2159 | var distractors = options.distractors || [3, 5, 6, 9];
2160 | var prompts = options.prompts || [0, 1];
2161 | var length = options.length || 24;
2162 | var criterion = options.success || 0.95;
2163 | var iterations = options.iterations || 100000;
2164 | var rate = options.rate || .1;
2165 | var log = options.log || 0;
2166 | var schedule = options.schedule || {};
2167 | var cost = options.cost || this.cost || Trainer.cost.CROSS_ENTROPY;
2168 |
2169 | var trial, correct, i, j, success;
2170 | trial = correct = i = j = success = 0;
2171 | var error = 1,
2172 | symbols = targets.length + distractors.length + prompts.length;
2173 |
2174 | var noRepeat = function(range, avoid) {
2175 | var number = Math.random() * range | 0;
2176 | var used = false;
2177 | for (var i in avoid)
2178 | if (number == avoid[i])
2179 | used = true;
2180 | return used ? noRepeat(range, avoid) : number;
2181 | };
2182 |
2183 | var equal = function(prediction, output) {
2184 | for (var i in prediction)
2185 | if (Math.round(prediction[i]) != output[i])
2186 | return false;
2187 | return true;
2188 | };
2189 |
2190 | var start = Date.now();
2191 |
2192 | while (trial < iterations && (success < criterion || trial % 1000 != 0)) {
2193 | // generate sequence
2194 | var sequence = [],
2195 | sequenceLength = length - prompts.length;
2196 | for (i = 0; i < sequenceLength; i++) {
2197 | var any = Math.random() * distractors.length | 0;
2198 | sequence.push(distractors[any]);
2199 | }
2200 | var indexes = [],
2201 | positions = [];
2202 | for (i = 0; i < prompts.length; i++) {
2203 | indexes.push(Math.random() * targets.length | 0);
2204 | positions.push(noRepeat(sequenceLength, positions));
2205 | }
2206 | positions = positions.sort();
2207 | for (i = 0; i < prompts.length; i++) {
2208 | sequence[positions[i]] = targets[indexes[i]];
2209 | sequence.push(prompts[i]);
2210 | }
2211 |
2212 | //train sequence
2213 | var distractorsCorrect;
2214 | var targetsCorrect = distractorsCorrect = 0;
2215 | error = 0;
2216 | for (i = 0; i < length; i++) {
2217 | // generate input from sequence
2218 | var input = [];
2219 | for (j = 0; j < symbols; j++)
2220 | input[j] = 0;
2221 | input[sequence[i]] = 1;
2222 |
2223 | // generate target output
2224 | var output = [];
2225 | for (j = 0; j < targets.length; j++)
2226 | output[j] = 0;
2227 |
2228 | if (i >= sequenceLength) {
2229 | var index = i - sequenceLength;
2230 | output[indexes[index]] = 1;
2231 | }
2232 |
2233 | // check result
2234 | var prediction = this.network.activate(input);
2235 |
2236 | if (equal(prediction, output))
2237 | if (i < sequenceLength)
2238 | distractorsCorrect++;
2239 | else
2240 | targetsCorrect++;
2241 | else {
2242 | this.network.propagate(rate, output);
2243 | }
2244 |
2245 | error += cost(output, prediction);
2246 |
2247 | if (distractorsCorrect + targetsCorrect == length)
2248 | correct++;
2249 | }
2250 |
2251 | // calculate error
2252 | if (trial % 1000 == 0)
2253 | correct = 0;
2254 | trial++;
2255 | var divideError = trial % 1000;
2256 | divideError = divideError == 0 ? 1000 : divideError;
2257 | success = correct / divideError;
2258 | error /= length;
2259 |
2260 | // log
2261 | if (log && trial % log == 0)
2262 | console.log("iterations:", trial, " success:", success, " correct:",
2263 | correct, " time:", Date.now() - start, " error:", error);
2264 | if (schedule.do && schedule.every && trial % schedule.every == 0)
2265 | schedule.do({
2266 | iterations: trial,
2267 | success: success,
2268 | error: error,
2269 | time: Date.now() - start,
2270 | correct: correct
2271 | });
2272 | }
2273 |
2274 | return {
2275 | iterations: trial,
2276 | success: success,
2277 | error: error,
2278 | time: Date.now() - start
2279 | }
2280 | },
2281 |
2282 | // trains the network to learn an Embedded Reber Grammar
2283 | ERG: function(options) {
2284 |
2285 | options = options || {};
2286 | var iterations = options.iterations || 150000;
2287 | var criterion = options.error || .05;
2288 | var rate = options.rate || .1;
2289 | var log = options.log || 500;
2290 | var cost = options.cost || this.cost || Trainer.cost.CROSS_ENTROPY;
2291 |
2292 | // grammar node
2293 | var Node = function() {
2294 | this.paths = [];
2295 | };
2296 | Node.prototype = {
2297 | connect: function(node, value) {
2298 | this.paths.push({
2299 | node: node,
2300 | value: value
2301 | });
2302 | return this;
2303 | },
2304 | any: function() {
2305 | if (this.paths.length == 0)
2306 | return false;
2307 | var index = Math.random() * this.paths.length | 0;
2308 | return this.paths[index];
2309 | },
2310 | test: function(value) {
2311 | for (var i in this.paths)
2312 | if (this.paths[i].value == value)
2313 | return this.paths[i];
2314 | return false;
2315 | }
2316 | };
2317 |
2318 | var reberGrammar = function() {
2319 |
2320 | // build a Reber grammar
2321 | var output = new Node();
2322 | var n1 = (new Node()).connect(output, "E");
2323 | var n2 = (new Node()).connect(n1, "S");
2324 | var n3 = (new Node()).connect(n1, "V").connect(n2, "P");
2325 | var n4 = (new Node()).connect(n2, "X");
2326 | n4.connect(n4, "S");
2327 | var n5 = (new Node()).connect(n3, "V");
2328 | n5.connect(n5, "T");
2329 | n2.connect(n5, "X");
2330 | var n6 = (new Node()).connect(n4, "T").connect(n5, "P");
2331 | var input = (new Node()).connect(n6, "B");
2332 |
2333 | return {
2334 | input: input,
2335 | output: output
2336 | }
2337 | };
2338 |
2339 | // build an embedded Reber grammar
2340 | var embededReberGrammar = function() {
2341 | var reber1 = reberGrammar();
2342 | var reber2 = reberGrammar();
2343 |
2344 | var output = new Node();
2345 | var n1 = (new Node).connect(output, "E");
2346 | reber1.output.connect(n1, "T");
2347 | reber2.output.connect(n1, "P");
2348 | var n2 = (new Node).connect(reber1.input, "P").connect(reber2.input,
2349 | "T");
2350 | var input = (new Node).connect(n2, "B");
2351 |
2352 | return {
2353 | input: input,
2354 | output: output
2355 | }
2356 |
2357 | };
2358 |
2359 | // generate an ERG sequence
2360 | var generate = function() {
2361 | var node = embededReberGrammar().input;
2362 | var next = node.any();
2363 | var str = "";
2364 | while (next) {
2365 | str += next.value;
2366 | next = next.node.any();
2367 | }
2368 | return str;
2369 | };
2370 |
2371 |     // test if a string matches an embedded Reber grammar
2372 | var test = function(str) {
2373 | var node = embededReberGrammar().input;
2374 | var i = 0;
2375 | var ch = str.charAt(i);
2376 | while (i < str.length) {
2377 | var next = node.test(ch);
2378 | if (!next)
2379 | return false;
2380 | node = next.node;
2381 | ch = str.charAt(++i);
2382 | }
2383 | return true;
2384 | };
2385 |
2386 |     // helper to check whether the output and the target vectors disagree (compares the index of their largest components)
2387 | var different = function(array1, array2) {
2388 | var max1 = 0;
2389 | var i1 = -1;
2390 | var max2 = 0;
2391 | var i2 = -1;
2392 | for (var i in array1) {
2393 | if (array1[i] > max1) {
2394 | max1 = array1[i];
2395 | i1 = i;
2396 | }
2397 | if (array2[i] > max2) {
2398 | max2 = array2[i];
2399 | i2 = i;
2400 | }
2401 | }
2402 |
2403 | return i1 != i2;
2404 | };
2405 |
2406 | var iteration = 0;
2407 | var error = 1;
2408 | var table = {
2409 | "B": 0,
2410 | "P": 1,
2411 | "T": 2,
2412 | "X": 3,
2413 | "S": 4,
2414 | "E": 5
2415 | };
2416 |
2417 | var start = Date.now();
2418 | while (iteration < iterations && error > criterion) {
2419 | var i = 0;
2420 | error = 0;
2421 |
2422 | // ERG sequence to learn
2423 | var sequence = generate();
2424 |
2425 | // input
2426 | var read = sequence.charAt(i);
2427 | // target
2428 | var predict = sequence.charAt(i + 1);
2429 |
2430 | // train
2431 | while (i < sequence.length - 1) {
2432 | var input = [];
2433 | var target = [];
2434 | for (var j = 0; j < 6; j++) {
2435 | input[j] = 0;
2436 | target[j] = 0;
2437 | }
2438 | input[table[read]] = 1;
2439 | target[table[predict]] = 1;
2440 |
2441 | var output = this.network.activate(input);
2442 |
2443 | if (different(output, target))
2444 | this.network.propagate(rate, target);
2445 |
2446 | read = sequence.charAt(++i);
2447 | predict = sequence.charAt(i + 1);
2448 |
2449 | error += cost(target, output);
2450 | }
2451 | error /= sequence.length;
2452 | iteration++;
2453 | if (iteration % log == 0) {
2454 | console.log("iterations:", iteration, " time:", Date.now() - start,
2455 | " error:", error);
2456 | }
2457 | }
2458 |
2459 | return {
2460 | iterations: iteration,
2461 | error: error,
2462 | time: Date.now() - start,
2463 | test: test,
2464 | generate: generate
2465 | }
2466 | },
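  // Usage sketch (illustrative, not part of the library): the ERG task is meant to be
  // run through a trainer attached to a recurrent network whose input and output layers
  // match the 6-symbol one-hot encoding (B, P, T, X, S, E) used above. `myLSTM` is a
  // hypothetical network, e.g. new Architect.LSTM(6, 7, 6):
  //
  //   var results = myLSTM.trainer.ERG({
  //     iterations: 150000,   // same defaults as above
  //     error: .05,
  //     rate: .1,
  //     log: 500
  //   });
  //   results.test(results.generate());   // a generated string should pass the test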
2467 |
2468 | timingTask: function(options){
2469 |
2470 | if (this.network.inputs() != 2 || this.network.outputs() != 1)
2471 | throw new Error("Invalid Network: must have 2 inputs and one output");
2472 |
2473 | if (typeof options == 'undefined')
2474 | options = {};
2475 |
2476 | // helper
2477 | function getSamples (trainingSize, testSize){
2478 |
2479 | // sample size
2480 | var size = trainingSize + testSize;
2481 |
2482 | // generate samples
2483 | var t = 0;
2484 | var set = [];
2485 | for (var i = 0; i < size; i++) {
2486 | set.push({ input: [0,0], output: [0] });
2487 | }
2488 | while(t < size - 20) {
2489 | var n = Math.round(Math.random() * 20);
2490 | set[t].input[0] = 1;
2491 | for (var j = t; j <= t + n; j++){
2492 | set[j].input[1] = n / 20;
2493 | set[j].output[0] = 0.5;
2494 | }
2495 | t += n;
2496 | n = Math.round(Math.random() * 20);
2497 | for (var k = t+1; k <= (t + n) && k < size; k++)
2498 | set[k].input[1] = set[t].input[1];
2499 | t += n;
2500 | }
2501 |
2502 | // separate samples between train and test sets
2503 | var trainingSet = []; var testSet = [];
2504 | for (var l = 0; l < size; l++)
2505 | (l < trainingSize ? trainingSet : testSet).push(set[l]);
2506 |
2507 | // return samples
2508 | return {
2509 | train: trainingSet,
2510 | test: testSet
2511 | }
2512 | }
2513 |
2514 | var iterations = options.iterations || 200;
2515 | var error = options.error || .005;
2516 | var rate = options.rate || [.03, .02];
2517 | var log = options.log === false ? false : options.log || 10;
2518 | var cost = options.cost || this.cost || Trainer.cost.MSE;
2519 | var trainingSamples = options.trainSamples || 7000;
2520 |     var testSamples = options.testSamples || 1000;
2521 |
2522 | // samples for training and testing
2523 | var samples = getSamples(trainingSamples, testSamples);
2524 |
2525 | // train
2526 | var result = this.train(samples.train, {
2527 | rate: rate,
2528 | log: log,
2529 | iterations: iterations,
2530 | error: error,
2531 | cost: cost
2532 | });
2533 |
2534 | return {
2535 | train: result,
2536 | test: this.test(samples.test)
2537 | }
2538 | }
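  // Usage sketch (illustrative, not part of the library): timingTask generates its own
  // train/test samples, so only a 2-input / 1-output network and the options above are
  // needed. `myNetwork` is hypothetical:
  //
  //   var results = myNetwork.trainer.timingTask({
  //     iterations: 200,
  //     rate: [.03, .02],      // the default two-stage learning rate above
  //     trainSamples: 7000,
  //     testSamples: 1000
  //   });
  //   console.log(results.train, results.test);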
2539 | };
2540 |
2541 | // Built-in cost functions
2542 | Trainer.cost = {
2543 | // Eq. 9
2544 | CROSS_ENTROPY: function(target, output)
2545 | {
2546 | var crossentropy = 0;
2547 | for (var i in output)
2548 | crossentropy -= (target[i] * Math.log(output[i]+1e-15)) + ((1-target[i]) * Math.log((1+1e-15)-output[i])); // +1e-15 is a tiny push away to avoid Math.log(0)
2549 | return crossentropy;
2550 | },
2551 | MSE: function(target, output)
2552 | {
2553 | var mse = 0;
2554 | for (var i in output)
2555 | mse += Math.pow(target[i] - output[i], 2);
2556 | return mse / output.length;
2557 | },
2558 | BINARY: function(target, output){
2559 | var misses = 0;
2560 | for (var i in output)
2561 | misses += Math.round(target[i] * 2) != Math.round(output[i] * 2);
2562 | return misses;
2563 | }
2564 | }
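// The built-in cost functions share the signature cost(target, output) and return a
// scalar, so they can be passed as the `cost` option of the training tasks above or
// called directly. Cross-entropy as implemented here is
//   E = -sum_i [ target[i] * ln(output[i]) + (1 - target[i]) * ln(1 - output[i]) ]
// with the 1e-15 terms guarding against ln(0). Illustrative values:
//
//   Trainer.cost.CROSS_ENTROPY([1, 0], [0.9, 0.1]);   // ~0.21
//   Trainer.cost.MSE([1, 0], [0.9, 0.1]);             // 0.01
//   Trainer.cost.BINARY([1, 0], [0.9, 0.1]);          // 0 misses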
2565 |
2566 | /* WEBPACK VAR INJECTION */}.call(exports, __webpack_require__(2)(module)))
2567 |
2568 | /***/ },
2569 | /* 6 */
2570 | /***/ function(module, exports, __webpack_require__) {
2571 |
2572 | /* WEBPACK VAR INJECTION */(function(module) {// import
2573 | var Layer = __webpack_require__(3)
2574 | , Network = __webpack_require__(4)
2575 | , Trainer = __webpack_require__(5)
2576 |
2577 | /*******************************************************************************************
2578 | ARCHITECT
2579 | *******************************************************************************************/
2580 |
2581 | // Collection of useful built-in architectures
2582 | var Architect = {
2583 |
2584 | // Multilayer Perceptron
2585 | Perceptron: function Perceptron() {
2586 |
2587 | var args = Array.prototype.slice.call(arguments); // convert arguments to Array
2588 | if (args.length < 3)
2589 | throw new Error("not enough layers (minimum 3) !!");
2590 |
2591 | var inputs = args.shift(); // first argument
2592 | var outputs = args.pop(); // last argument
2593 | var layers = args; // all the arguments in the middle
2594 |
2595 | var input = new Layer(inputs);
2596 | var hidden = [];
2597 | var output = new Layer(outputs);
2598 |
2599 | var previous = input;
2600 |
2601 | // generate hidden layers
2602 | for (var level in layers) {
2603 | var size = layers[level];
2604 | var layer = new Layer(size);
2605 | hidden.push(layer);
2606 | previous.project(layer);
2607 | previous = layer;
2608 | }
2609 | previous.project(output);
2610 |
2611 | // set layers of the neural network
2612 | this.set({
2613 | input: input,
2614 | hidden: hidden,
2615 | output: output
2616 | });
2617 |
2618 | // trainer for the network
2619 | this.trainer = new Trainer(this);
2620 | },
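  // Usage sketch (illustrative, not part of the library): arguments are the input size,
  // any number of hidden layer sizes, and the output size. The input values below are
  // hypothetical:
  //
  //   var net = new Architect.Perceptron(2, 6, 1);   // 2 inputs, 6 hidden, 1 output
  //   var out = net.activate([0.45, -0.12]);         // -> [value in (0, 1)]
  //   net.trainer.train([
  //     { input: [0, 0], output: [0] },
  //     { input: [1, 1], output: [1] }
  //   ], { rate: .2, iterations: 1000 });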
2621 |
2622 | // Multilayer Long Short-Term Memory
2623 | LSTM: function LSTM() {
2624 |
2625 | var args = Array.prototype.slice.call(arguments); // convert arguments to array
2626 | if (args.length < 3)
2627 | throw new Error("not enough layers (minimum 3) !!");
2628 |
2629 | var last = args.pop();
2630 | var option = {
2631 | peepholes: Layer.connectionType.ALL_TO_ALL,
2632 | hiddenToHidden: false,
2633 | outputToHidden: false,
2634 | outputToGates: false,
2635 | inputToOutput: true,
2636 | };
2637 | if (typeof last != 'number') {
2638 | var outputs = args.pop();
2639 | if (last.hasOwnProperty('peepholes'))
2640 | option.peepholes = last.peepholes;
2641 | if (last.hasOwnProperty('hiddenToHidden'))
2642 | option.hiddenToHidden = last.hiddenToHidden;
2643 | if (last.hasOwnProperty('outputToHidden'))
2644 | option.outputToHidden = last.outputToHidden;
2645 | if (last.hasOwnProperty('outputToGates'))
2646 | option.outputToGates = last.outputToGates;
2647 | if (last.hasOwnProperty('inputToOutput'))
2648 | option.inputToOutput = last.inputToOutput;
2649 | } else
2650 | var outputs = last;
2651 |
2652 | var inputs = args.shift();
2653 | var layers = args;
2654 |
2655 | var inputLayer = new Layer(inputs);
2656 | var hiddenLayers = [];
2657 | var outputLayer = new Layer(outputs);
2658 |
2659 | var previous = null;
2660 |
2661 | // generate layers
2662 | for (var layer in layers) {
2663 | // generate memory blocks (memory cell and respective gates)
2664 | var size = layers[layer];
2665 |
2666 | var inputGate = new Layer(size).set({
2667 | bias: 1
2668 | });
2669 | var forgetGate = new Layer(size).set({
2670 | bias: 1
2671 | });
2672 | var memoryCell = new Layer(size);
2673 | var outputGate = new Layer(size).set({
2674 | bias: 1
2675 | });
2676 |
2677 | hiddenLayers.push(inputGate);
2678 | hiddenLayers.push(forgetGate);
2679 | hiddenLayers.push(memoryCell);
2680 | hiddenLayers.push(outputGate);
2681 |
2682 | // connections from input layer
2683 | var input = inputLayer.project(memoryCell);
2684 | inputLayer.project(inputGate);
2685 | inputLayer.project(forgetGate);
2686 | inputLayer.project(outputGate);
2687 |
2688 | // connections from previous memory-block layer to this one
2689 | if (previous != null) {
2690 | var cell = previous.project(memoryCell);
2691 | previous.project(inputGate);
2692 | previous.project(forgetGate);
2693 | previous.project(outputGate);
2694 | }
2695 |
2696 | // connections from memory cell
2697 | var output = memoryCell.project(outputLayer);
2698 |
2699 | // self-connection
2700 | var self = memoryCell.project(memoryCell);
2701 |
2702 | // hidden to hidden recurrent connection
2703 | if (option.hiddenToHidden)
2704 | memoryCell.project(memoryCell, Layer.connectionType.ALL_TO_ELSE);
2705 |
2706 | // out to hidden recurrent connection
2707 | if (option.outputToHidden)
2708 | outputLayer.project(memoryCell);
2709 |
2710 | // out to gates recurrent connection
2711 | if (option.outputToGates) {
2712 | outputLayer.project(inputGate);
2713 | outputLayer.project(outputGate);
2714 | outputLayer.project(forgetGate);
2715 | }
2716 |
2717 | // peepholes
2718 | memoryCell.project(inputGate, option.peepholes);
2719 | memoryCell.project(forgetGate, option.peepholes);
2720 | memoryCell.project(outputGate, option.peepholes);
2721 |
2722 | // gates
2723 | inputGate.gate(input, Layer.gateType.INPUT);
2724 | forgetGate.gate(self, Layer.gateType.ONE_TO_ONE);
2725 | outputGate.gate(output, Layer.gateType.OUTPUT);
2726 | if (previous != null)
2727 | inputGate.gate(cell, Layer.gateType.INPUT);
2728 |
2729 | previous = memoryCell;
2730 | }
2731 |
2732 | // input to output direct connection
2733 | if (option.inputToOutput)
2734 | inputLayer.project(outputLayer);
2735 |
2736 | // set the layers of the neural network
2737 | this.set({
2738 | input: inputLayer,
2739 | hidden: hiddenLayers,
2740 | output: outputLayer
2741 | });
2742 |
2743 | // trainer
2744 | this.trainer = new Trainer(this);
2745 | },
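  // Usage sketch (illustrative, not part of the library): arguments are the input size,
  // one or more memory-block sizes, the output size, and optionally a trailing object
  // with the connection flags handled above:
  //
  //   var lstm = new Architect.LSTM(6, 7, 6);        // 6 in / 6 out suits the ERG task
  //   var custom = new Architect.LSTM(2, 4, 4, 1, {
  //     peepholes: Layer.connectionType.ALL_TO_ALL,
  //     inputToOutput: false
  //   });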
2746 |
2747 | // Liquid State Machine
2748 | Liquid: function Liquid(inputs, hidden, outputs, connections, gates) {
2749 |
2750 | // create layers
2751 | var inputLayer = new Layer(inputs);
2752 | var hiddenLayer = new Layer(hidden);
2753 | var outputLayer = new Layer(outputs);
2754 |
2755 | // make connections and gates randomly among the neurons
2756 | var neurons = hiddenLayer.neurons();
2757 | var connectionList = [];
2758 |
2759 | for (var i = 0; i < connections; i++) {
2760 | // connect two random neurons
2761 | var from = Math.random() * neurons.length | 0;
2762 | var to = Math.random() * neurons.length | 0;
2763 | var connection = neurons[from].project(neurons[to]);
2764 | connectionList.push(connection);
2765 | }
2766 |
2767 | for (var j = 0; j < gates; j++) {
2768 | // pick a random gater neuron
2769 | var gater = Math.random() * neurons.length | 0;
2770 | // pick a random connection to gate
2771 | var connection = Math.random() * connectionList.length | 0;
2772 | // let the gater gate the connection
2773 | neurons[gater].gate(connectionList[connection]);
2774 | }
2775 |
2776 | // connect the layers
2777 | inputLayer.project(hiddenLayer);
2778 | hiddenLayer.project(outputLayer);
2779 |
2780 | // set the layers of the network
2781 | this.set({
2782 | input: inputLayer,
2783 | hidden: [hiddenLayer],
2784 | output: outputLayer
2785 | });
2786 |
2787 | // trainer
2788 | this.trainer = new Trainer(this);
2789 | },
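  // Usage sketch (illustrative, not part of the library): 2 inputs, a pool of 20
  // randomly wired hidden neurons, 1 output, 30 random connections and 10 random
  // gates inside the pool:
  //
  //   var liquid = new Architect.Liquid(2, 20, 1, 30, 10);
  //   liquid.activate([0.5, 1]);                     // -> [output value]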
2790 |
2791 | Hopfield: function Hopfield(size) {
2792 |
2793 | var inputLayer = new Layer(size);
2794 | var outputLayer = new Layer(size);
2795 |
2796 | inputLayer.project(outputLayer, Layer.connectionType.ALL_TO_ALL);
2797 |
2798 | this.set({
2799 | input: inputLayer,
2800 | hidden: [],
2801 | output: outputLayer
2802 | });
2803 |
2804 | var trainer = new Trainer(this);
2805 |
2806 | var proto = Architect.Hopfield.prototype;
2807 |
2808 | proto.learn = proto.learn || function(patterns)
2809 | {
2810 | var set = [];
2811 | for (var p in patterns)
2812 | set.push({
2813 | input: patterns[p],
2814 | output: patterns[p]
2815 | });
2816 |
2817 | return trainer.train(set, {
2818 | iterations: 500000,
2819 | error: .00005,
2820 | rate: 1
2821 | });
2822 | };
2823 |
2824 | proto.feed = proto.feed || function(pattern)
2825 | {
2826 | var output = this.activate(pattern);
2827 |
2828 |       var result = [];
2829 |       for (var i in output)
2830 |         result[i] = output[i] > .5 ? 1 : 0;
2831 | 
2832 |       return result;
2833 | }
2834 | }
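  // Usage sketch (illustrative, not part of the library): learn() memorizes binary
  // patterns and feed() maps a noisy input back onto the closest stored pattern:
  //
  //   var hopfield = new Architect.Hopfield(10);
  //   hopfield.learn([
  //     [0, 1, 0, 1, 0, 1, 0, 1, 0, 1],
  //     [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
  //   ]);
  //   hopfield.feed([0, 1, 0, 1, 0, 1, 0, 1, 1, 1]); // expected to recall the first pattern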
2835 | }
2836 |
2837 | // Extend prototype chain (so every architecture is an instance of Network)
2838 | for (var architecture in Architect) {
2839 | Architect[architecture].prototype = new Network();
2840 | Architect[architecture].prototype.constructor = Architect[architecture];
2841 | }
2842 |
2843 | // export
2844 | if (module) module.exports = Architect;
2845 | /* WEBPACK VAR INJECTION */}.call(exports, __webpack_require__(2)(module)))
2846 |
2847 | /***/ }
2848 | /******/ ]);
--------------------------------------------------------------------------------