├── .gitignore ├── .rspec ├── .travis.yml ├── Examples ├── configuration.rb ├── default.rb └── train.csv ├── Gemfile ├── LICENSE.txt ├── README.md ├── Rakefile ├── agnet.gemspec ├── bin ├── console └── setup ├── lib ├── agnet.rb └── agnet │ └── version.rb └── spec ├── agnet_spec.rb └── spec_helper.rb /.gitignore: -------------------------------------------------------------------------------- 1 | /.bundle/ 2 | /.yardoc 3 | /Gemfile.lock 4 | /_yardoc/ 5 | /coverage/ 6 | /doc/ 7 | /pkg/ 8 | /spec/reports/ 9 | /tmp/ 10 | .DS_Store 11 | /gem_hist/ 12 | -------------------------------------------------------------------------------- /.rspec: -------------------------------------------------------------------------------- 1 | --format documentation 2 | --color 3 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | language: ruby 2 | rvm: 3 | - 2.2.1 4 | before_install: gem install bundler -v 1.11.2 5 | -------------------------------------------------------------------------------- /Examples/configuration.rb: -------------------------------------------------------------------------------- 1 | require 'agnet' 2 | 3 | net = Agnet.new(bits: 255, # dictated by data 4 | input_nodes: 784, # dictated by data 5 | hidden_nodes: 100, # the greater the number of nodes, the slower 6 | output_nodes: 10, # dictated by classes, 10 for 10 digits 0-9 7 | input_bias: 1.0, #hyperparameters 8 | hidden_bias: 1.0, 9 | hidden_layer_function: 'sigmoid', # choices between 'sigmoid' 10 | output_layer_function: 'soft_max', # and 'soft_max' currently 11 | learning_rate: 0.05 12 | ) 13 | 14 | net.train('Examples/train.csv', training_size: 30, testing_size: 10) 15 | 16 | net.training_score 17 | 18 | net.test(test_first: 5) # only test first 5 in testing_data set 19 | 20 | net.testing_score 21 | -------------------------------------------------------------------------------- 
/Examples/default.rb: -------------------------------------------------------------------------------- 1 | require 'agnet' 2 | 3 | net = Agnet.new(hidden_nodes: 20) 4 | net.train('Examples/train.csv') 5 | net.training_score 6 | net.test 7 | net.testing_score 8 | -------------------------------------------------------------------------------- /Gemfile: -------------------------------------------------------------------------------- 1 | source 'https://rubygems.org' 2 | 3 | # Specify your gem's dependencies in agnet.gemspec 4 | gemspec 5 | -------------------------------------------------------------------------------- /LICENSE.txt: -------------------------------------------------------------------------------- 1 | The MIT License (MIT) 2 | 3 | Copyright (c) 2016 Scott Silverstein 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in 13 | all copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN 21 | THE SOFTWARE. 
22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Agnet 2 | 3 | A 2-layer feedforward neural network. 4 | 5 | A little about the gem and neural networks. 6 | 7 | 1. It takes a csv file with an array of 784 pixel values from 0-255. These represent a 28 x 28 = 784 pixel image of a hand-drawn number from 0-9. 8 | 9 | 2. This array is sent through a neural network, which applies a function to the array and produces a 10-value output array. 10 | 11 | 3. The function converts this 784-value array into a 10-value array containing the most likely classification of the hand-drawn image. 12 | - An image with a hand-drawn 1, sent through this function, would result in an output array like 13 | [0.01, 0.98, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01] 14 | - However, as a learning machine, the function does not start out correctly classifying the images. It must learn from a labeled training set of these images. 15 | 16 | 4. I have set the gem to load 3,000 labeled training samples into the function when you call the training method and pass in a csv. 17 | 18 | 5. These samples get sent through the function and the result is compared to the label. The error between the result and the label is then used to make automatic adjustments to the function. These adjustments accumulate, and the function's accuracy in classifying a sample correctly rises. It will top out in the 80-95% range. 
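Steps 2-3 above can be sketched in plain Ruby. This is a simplified illustration, not the full network: it mirrors the gem's `soft_max` and `guess` helpers but skips the weights and hidden layer, starting from arbitrary output-layer scores.

```ruby
# Turn a 10-value output layer into a digit guess.
# soft_max squashes raw scores into values from 0.0 to 1.0 that sum to 1;
# guess picks the index of the strongest activation, i.e. the classified digit.
def soft_max(z)
  exps = z.map { |v| Math.exp(v) }
  sum = exps.inject(:+)
  exps.map { |e| e / sum }
end

def guess(outputs)
  outputs.index(outputs.max)
end

# Scores where output node 1 dominates, as for a hand-drawn "1":
activation = soft_max([0.1, 4.0, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])
guess(activation) # => 1
```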
19 | 20 | 21 | ###Diagram of the Network 22 | 23 | [![Mnist Neural Net](http://neuralnetworksanddeeplearning.com/images/tikz12.png)](http://neuralnetworksanddeeplearning.com/chap1.html) 24 | http://neuralnetworksanddeeplearning.com/chap1.html 25 | 26 | ###The algorithm behind the training function 27 | 28 | [![AgNet Webm](http://i.imgur.com/MfTKqMv.gif)](https://zippy.gfycat.com/SparklingUnsightlyGopher.webm) 29 | 30 | Click image for .webm/pausable version 31 | 32 | Neural nets are cool because the technology around them has been responsible for big advances in AI over the past few years. From advances in computer vision to recent developments in computer Go, neural nets have been breaking some cool new barriers. Since the rudimentary idea is runnable on consumer computers, anyone can have fun experimenting with it. 33 | 34 | ## Installation 35 | 36 | Add this line to your application's Gemfile: 37 | 38 | ```ruby 39 | gem 'agnet' 40 | ``` 41 | 42 | And then execute: 43 | 44 | $ bundle 45 | 46 | Or install it yourself as: 47 | 48 | $ gem install agnet 49 | 50 | ## Usage 51 | ### Initialize a network 52 | ```ruby 53 | net = Agnet.new # default configuration, values detailed in the Configuration section below 54 | ``` 55 | 56 | ### Or Configure and Initialize a custom network 57 | ```ruby 58 | net = Agnet.new(bits: INT, 59 | input_nodes: INT, 60 | hidden_nodes: INT, 61 | output_nodes: INT, 62 | input_bias: FLOAT, 63 | hidden_bias: FLOAT) 64 | # Each key is an optional configuration parameter 65 | ``` 66 | 67 | ### Train your Network with a csv file 68 | ```ruby 69 | net.train('path_to_your_csv_data_file') 70 | # defaults to training_size: 3000, testing_size: training_size / 5 71 | 72 | net.train('path_to_your_csv_data_file', 73 | training_size: 60000, 74 | testing_size: 10000) # can set custom sizes for training and testing 75 | 76 | ``` 77 | Returns the running average of the last 100 iterations of the training phase 78 | 79 | (currently supports a csv in the format: header 
label,pixel0,pixel1,...pixel783) 80 | 81 | label from 0 to 9 (10 output nodes), 784 pixels (28x28 px image) 82 | 83 | ### Classify a portion of data with a test of your trained net 84 | ```ruby 85 | net.test # test all the samples loaded into the testing_data Array 86 | 87 | # if you only want to do a short test on a chunk of the testing_data 88 | 89 | net.test(test_first: 10) # only tests the first 10 samples in the testing_data 90 | 91 | ``` 92 | 93 | Returns the average correct across the test, a Float between 0 and 1 94 | 95 | (by default the net.train(file) method grabs 3000 rows for training data and 600 rows for test data) 96 | 97 | ### Classify an individual sample 98 | ```ruby 99 | net.classify([784 pixel array with values from 0-255]) 100 | ``` 101 | 102 | Returns the output activation array [10 values from 0.0 to 1.0] 103 | 104 | ### Training Scores 105 | ```ruby 106 | net.training_score 107 | ``` 108 | 109 | Returns an array with the iteration, the training score log, and the running averages 110 | 111 | [iteration, training_score_log, total_running_average, running_average_100, running_average_1000, running_average_5000] 112 | 113 | ### Testing Scores 114 | ```ruby 115 | net.testing_score 116 | ``` 117 | 118 | Returns an array with the iteration, the testing score log, and the running averages 119 | 120 | [iteration, testing_score_log, total_running_average, running_average_100, running_average_1000, running_average_5000] 121 | 122 | ## Configuration 123 | 124 | ####Default Configuration 125 | 126 | - input_nodes = 784 127 | - hidden_nodes = 15 128 | - output_nodes = 10 129 | - hidden_layer_function = 'sigmoid' 130 | - output_layer_function = 'soft_max' 131 | - input_bias = 1.0 132 | - hidden_bias = 1.0 133 | - bits = 255.0 134 | - learning_rate = 0.05 135 | - training_size = 3000 136 | - testing_size = 600 (training_size / 5) 137 | 138 | ####Optional Parameters 139 | 140 | Initialize 141 | 142 | - bits 143 | - input_nodes 144 | - hidden_nodes 145 | - output_nodes 146 | - input_bias 147 | - hidden_bias 148 | - learning_rate 149 | 150 | Train 
151 | 152 | - training_size 153 | - testing_size 154 | 155 | Test 156 | 157 | - test_first 158 | 159 | 160 | ## Development 161 | 162 | After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment. 163 | 164 | To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and tags, and push the `.gem` file to [rubygems.org](https://rubygems.org). 165 | 166 | ## Contributing 167 | 168 | Bug reports and pull requests are welcome on GitHub at https://github.com/scsilver/agnet. 169 | 170 | 171 | ## License 172 | 173 | The gem is available as open source under the terms of the [MIT License](http://opensource.org/licenses/MIT). 174 | 175 | ##Attribution 176 | 177 | Michael Nielsen's articles on [http://neuralnetworksanddeeplearning.com/](http://neuralnetworksanddeeplearning.com/) for network design reference and diagram 178 | -------------------------------------------------------------------------------- /Rakefile: -------------------------------------------------------------------------------- 1 | require "bundler/gem_tasks" 2 | require "rspec/core/rake_task" 3 | 4 | RSpec::Core::RakeTask.new(:spec) 5 | 6 | task :default => :spec 7 | -------------------------------------------------------------------------------- /agnet.gemspec: -------------------------------------------------------------------------------- 1 | # coding: utf-8 2 | lib = File.expand_path('../lib', __FILE__) 3 | $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib) 4 | require 'agnet/version' 5 | 6 | Gem::Specification.new do |spec| 7 | spec.name = "agnet" 8 | spec.version = Agnet::VERSION 9 | spec.authors = ["Scott Silverstein"] 10 | spec.email = ["scsilver.umd@gmail.com"] 11 | 12 
| spec.summary = %q{2 Layer Neural Network} 13 | spec.description = %q{AgNet is a 2-layer feedforward neural network with backpropagation for training. It can be used to approximate many functions and is useful for classification tasks such as character recognition.} 14 | spec.homepage = "https://github.com/scsilver/AgNet" 15 | spec.license = "MIT" 16 | 17 | # Prevent pushing this gem to RubyGems.org by setting 'allowed_push_host', or 18 | # delete this section to allow pushing this gem to any host. 19 | 20 | 21 | spec.files = Dir["lib/agnet.rb"] + Dir["lib/agnet/train.csv"] + Dir["lib/agnet/version.rb"] 22 | spec.bindir = "exe" 23 | spec.executables = spec.files.grep(%r{^exe/}) { |f| File.basename(f) } 24 | spec.require_paths = ["lib", "lib/data", "lib/agnet"] 25 | 26 | spec.add_development_dependency "bundler", "~> 1.11" 27 | spec.add_development_dependency "rake", "~> 10.0" 28 | spec.add_development_dependency "rspec", "~> 3.0" 29 | end 30 | -------------------------------------------------------------------------------- /bin/console: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env ruby 2 | 3 | require "bundler/setup" 4 | require "agnet" 5 | 6 | # You can add fixtures and/or initialization code here to make experimenting 7 | # with your gem easier. You can also use a different console, if you like. 8 | 9 | # (If you use this, don't forget to add pry to your Gemfile!) 
10 | # require "pry" 11 | # Pry.start 12 | 13 | require "irb" 14 | IRB.start 15 | -------------------------------------------------------------------------------- /bin/setup: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | set -euo pipefail 3 | IFS=$'\n\t' 4 | set -vx 5 | 6 | bundle install 7 | 8 | # Do any other automated setup that you need to do here 9 | -------------------------------------------------------------------------------- /lib/agnet.rb: -------------------------------------------------------------------------------- 1 | require 'agnet/version' 2 | require 'csv' 3 | require 'matrix' 4 | 5 | 6 | class Agnet 7 | def initialize(opt = {}) 8 | # (input_nodes, hidden_nodes, output_nodes, function, 9 | # input_activation, input_bias, hidden_bias, bits, training_size, learning_rate) 10 | @function = Array.new(2) 11 | opt[:input_nodes] ? @in_nodes = opt[:input_nodes] : @in_nodes = 784 12 | opt[:bits] ? @bits = opt[:bits].to_f : @bits = 255.0 13 | opt[:hidden_nodes] ? @hdn_nodes = opt[:hidden_nodes] : @hdn_nodes = 15 14 | opt[:output_nodes] ? @out_nodes = opt[:output_nodes] : @out_nodes = 10 15 | opt[:hidden_layer_function] ? @function[0] = opt[:hidden_layer_function] : @function[0] = 'sigmoid' 16 | opt[:output_layer_function] ? @function[1] = opt[:output_layer_function] : @function[1] = 'soft_max' 17 | opt[:input_bias] ? @input_bias = opt[:input_bias] : @input_bias = 1.0 18 | opt[:hidden_bias] ? @hidden_bias = opt[:hidden_bias] : @hidden_bias = 1.0 19 | opt[:learning_rate] ? 
@learning_rate = opt[:learning_rate] : @learning_rate = 0.05 20 | @input_activation = Array.new(@in_nodes, 0) # 784 zeros; overwritten with real pixel data by train/classify 21 | @label = 1 22 | @weights = Array.new(2) 23 | @training_score_log = [] 24 | @testing_score_log = [] 25 | @out_z = [] 26 | @hdn_z = [] 27 | @error = nil 28 | @training_data = [] 29 | @testing_data = [] 30 | @iteration = 0 31 | end 32 | 33 | def train(path, opt = {}) 34 | @training_data = [] 35 | @testing_data = [] 36 | @training_score_log = [] 37 | opt[:training_size] ? @training_size = opt[:training_size] : @training_size = 3000 38 | opt[:testing_size] ? @testing_size = opt[:testing_size] : @testing_size = (@training_size / 5.0).ceil 39 | @iteration = 0 40 | 41 | load_data(path, @training_size, @testing_size) 42 | set_initial_weights 43 | scale_initial_weights 44 | 45 | @training_data.each_with_index do |row, i| 46 | @iteration = (i + 1) 47 | @input_activation = row[1..@in_nodes] 48 | @label = row[0] 49 | 50 | classify(@input_activation) 51 | 52 | training_score 53 | output_error 54 | back_prop_output 55 | hidden_weights_change 56 | output_weights_change 57 | weights_change 58 | puts @weights[0][0, 0] 59 | puts @weights[1][0, 0] 60 | end 61 | @label = nil 62 | @running_average_100 63 | end 64 | 65 | def classify(sample) 66 | @input_activation = sample 67 | normalize_input_activation 68 | hidden_layer_weighted_sum 69 | hidden_layer_activation 70 | output_layer_weighted_sum 71 | @output_layer_activation = output_layer_activation 72 | puts 'Activation: ', @output_layer_activation 73 | gs = guess(@output_layer_activation) 74 | puts 'Guess: ', gs 75 | puts 'Label: ', @label 76 | @output_layer_activation 77 | end 78 | 79 | def 
test(opt = {}) 80 | @testing_score_log = [] 81 | opt[:test_first] ? test_first = opt[:test_first] : test_first = @testing_size 82 | @iteration = 0 83 | test_chunk = @testing_data.first(test_first) 84 | test_chunk.each_with_index do |row, i| 85 | @iteration = (i + 1) 86 | @input_activation = row[1..@in_nodes] 87 | @label = row[0] 88 | classify(@input_activation) 89 | testing_score 90 | end 91 | @label = nil 92 | @total_running_average 93 | end 94 | 95 | def it_score 96 | @it_score 97 | end 98 | 99 | 100 | def iteration 101 | @iteration 102 | end 103 | 104 | def training_score 105 | @it_score = @label == guess(@output_layer_activation) 106 | @training_score_log << @it_score 107 | count_true = @training_score_log.count(true) 108 | puts 'Iteration: ', @iteration 109 | puts '# Correct: ', count_true 110 | puts 'Total Accuracy: ', @total_running_average = count_true.to_f / @iteration.to_f 111 | puts 'Last 100 Accuracy: ', @running_average_100 = @training_score_log.last(100).count(true).to_f / @training_score_log.last(100).count.to_f 112 | puts 'Last 1000 Accuracy: ', @running_average_1000 = @training_score_log.last(1000).count(true).to_f / @training_score_log.last(1000).count.to_f 113 | puts 'Last 5000 Accuracy: ', @running_average_5000 = @training_score_log.last(5000).count(true).to_f / @training_score_log.last(5000).count.to_f 114 | @it_score = nil 115 | [@iteration, @training_score_log, @total_running_average, @running_average_100, @running_average_1000, @running_average_5000] 116 | 117 | end 118 | 119 | def testing_score 120 | @it_score = @label == guess(@output_layer_activation) 121 | @testing_score_log << @it_score 122 | count_true = @testing_score_log.count(true) 123 | puts 'Iteration: ', @iteration 124 | puts '# Correct: ', count_true 125 | puts 'Total Accuracy: ', @total_running_average = count_true.to_f / @iteration.to_f 126 | puts 'Last 100 Accuracy: ', @running_average_100 = @testing_score_log.last(100).count(true).to_f / @testing_score_log.last(100).count.to_f 127 | puts 'Last 1000 Accuracy: ', @running_average_1000 = 
@testing_score_log.last(1000).count(true).to_f / @testing_score_log.last(1000).count.to_f 128 | puts 'Last 5000 Accuracy: ', @running_average_5000 = @testing_score_log.last(5000).count(true).to_f / @testing_score_log.last(5000).count.to_f 129 | @it_score = nil 130 | [@iteration, @testing_score_log, @total_running_average, @running_average_100, @running_average_1000, @running_average_5000] 131 | end 132 | 133 | def load_data(path, training_size, testing_size) 134 | CSV.foreach(path) do |row| 135 | if @training_data.size < training_size 136 | @training_data << row.map(&:to_i) 137 | puts 'Row: ', @training_data.size 138 | else 139 | @testing_data << row.map(&:to_i) 140 | puts 'Row: ', (@training_data.size + @testing_data.size) 141 | end 142 | break if @training_data.size + @testing_data.size == training_size + testing_size 143 | end 144 | end 145 | 146 | def scale_initial_weights 147 | @weights = set_initial_factors 148 | .map.with_index { |a, i| a * initial_weights[i] } 149 | end 150 | 151 | def set_initial_factors 152 | hdn_init_factor = 1 / (@in_nodes**0.5) 153 | [hdn_init_factor] << 1 / (@hdn_nodes**0.5) 154 | end 155 | 156 | def set_initial_weights 157 | h_w = [] 158 | o_w = [] 159 | @hdn_nodes.times do |i| 160 | if i == 0 161 | h_w = Vector.elements(Array.new(@in_nodes + 1) { (rand - 0.5) }) 162 | .covector 163 | else 164 | n_w = Vector.elements(Array.new(@in_nodes + 1) { (rand - 0.5) }) 165 | .covector 166 | h_w = h_w.vstack(n_w) 167 | end 168 | end 169 | @out_nodes.times do |i| 170 | if i == 0 171 | o_w = Vector.elements(Array.new(@hdn_nodes + 1) { (rand - 0.5) }) 172 | .covector 173 | else 174 | l_w = Vector.elements(Array.new(@hdn_nodes + 1) { (rand - 0.5) }) 175 | .covector 176 | o_w = o_w.vstack(l_w) 177 | end 178 | end 179 | @initial_weights = [h_w] 180 | @initial_weights << o_w 181 | end 182 | 183 | def initial_weights 184 | @initial_weights 185 | end 186 | 187 | def normalize_input_activation 188 | array = Array.new(@in_nodes + 1) { 0 } 
189 | array[0...@in_nodes] = @input_activation 190 | array[@in_nodes] = @input_bias * @bits 191 | @normalize_input_activation = Vector.elements(array) / @bits 192 | end 193 | 194 | def hidden_layer_weighted_sum 195 | @hdn_z = [] 196 | @hdn_nodes.times do |i| 197 | @hdn_z << @weights[0].row(i).inner_product(@normalize_input_activation) 198 | end 199 | back_prop_derivation 200 | @hdn_z 201 | end 202 | 203 | def hidden_layer_activation 204 | @hidden_layer_activation = activation_function(@hdn_z, 0) 205 | end 206 | 207 | def back_prop_derivation 208 | @back_prop_derivation = sigmoid_prime(@hdn_z) 209 | end 210 | 211 | def activation_function(z, layer) 212 | fin = Array.new(z.size) 213 | if @function[layer] == 'sigmoid' 214 | fin = sigmoid(z) 215 | elsif @function[layer] == 'soft_max' 216 | fin = soft_max(z) 217 | end 218 | fin 219 | end 220 | def sigmoid(z) 221 | res = Array.new(z.size) 222 | z.each_with_index do |zee, i| 223 | res[i] = (1.0 / (1.0 + Math.exp(-zee))) 224 | end 225 | res 226 | end 227 | 228 | def sigmoid_prime(z) 229 | res = Array.new(z.size) 230 | z.each_with_index do |zee, i| 231 | res[i] = ((1.0 / (1.0 + Math.exp(-zee))) * 232 | (1 - (1.0 / (1.0 + Math.exp(-zee))))) 233 | end 234 | res 235 | end 236 | 237 | def soft_max(z) 238 | res = Array.new(z.size) 239 | z.each_with_index do |zee, i| 240 | res[i] = Math.exp(zee) 241 | end 242 | sum = res.inject(:+) 243 | res.each_with_index do |l, i| 244 | res[i] = l / sum 245 | end 246 | res 247 | end 248 | 249 | def vectorize_hidden_layer 250 | array = Array.new(@hdn_nodes + 1) 251 | array[0...@hdn_nodes] = @hidden_layer_activation 252 | array[@hdn_nodes] = @hidden_bias 253 | @vectorize_hidden_layer = Vector.elements(array) 254 | end 255 | 256 | def output_layer_weighted_sum 257 | @out_z = [] 258 | @out_nodes.times do |i| 259 | @out_z << @weights[1].row(i) 260 | .inner_product(vectorize_hidden_layer) 261 | end 262 | @out_z 263 | end 264 | 265 | def output_layer_activation 266 | @output_layer_activation = activation_function(@out_z, 1) 267 | end 
268 | 269 | def guess(outputs) 270 | outputs.find_index(outputs.max(1)[0]) 271 | end 272 | 273 | def vectorize_output_layer 274 | 275 | Vector.elements(@output_layer_activation) 276 | end 277 | 278 | def label_array 279 | y = Array.new(10) { 0 } 280 | y[@label] = 1 281 | y 282 | end 283 | 284 | def output_error 285 | @output_error = vectorize_output_layer - Vector.elements(label_array) 286 | puts 'Error: ', @output_error.collect(&:abs).inject(:+) 287 | @output_error 288 | end 289 | 290 | def back_prop_output 291 | 292 | 293 | h_e = [] 294 | @hdn_nodes.times do |w| 295 | h_e << (@weights[1].column(w).inner_product(@output_error) * 296 | @back_prop_derivation[w]) 297 | end 298 | @back_prop_output = h_e 299 | end 300 | 301 | def output_weights_change 302 | 303 | 304 | dwo = [] 305 | @out_nodes.times do |l| 306 | if l == 0 307 | r = [] 308 | (@hdn_nodes + 1).times do |o| 309 | r << @output_error[l] * @vectorize_hidden_layer[o] 310 | end 311 | dwo = Vector.elements(r).covector 312 | r=[] 313 | else 314 | r = [] 315 | (@hdn_nodes + 1).times do |o| 316 | r << @output_error[l] * @vectorize_hidden_layer[o] 317 | end 318 | dwv = Vector.elements(r).covector 319 | dwo = dwo.vstack(dwv) 320 | r=[] 321 | end 322 | end 323 | @output_weights_change = dwo 324 | end 325 | 326 | def hidden_weights_change 327 | 328 | 329 | dwo = [] 330 | @hdn_nodes.times do |l| 331 | if l == 0 332 | r = [] 333 | (@in_nodes + 1).times do |o| 334 | r << @back_prop_output[l] * @normalize_input_activation[o] 335 | end 336 | dwo = Vector.elements(r).covector 337 | r = [] 338 | else 339 | r = [] 340 | (@in_nodes + 1).times do |o| 341 | r << @back_prop_output[l] * @normalize_input_activation[o] 342 | end 343 | dwv = Vector.elements(r).covector 344 | dwo = dwo.vstack(dwv) 345 | r= [] 346 | end 347 | end 348 | @hidden_weights_change = dwo 349 | end 350 | 351 | def weights_change 352 | @weights[0] = (@weights[0] - @hidden_weights_change * @learning_rate * 10) 353 | @weights[1] = (@weights[1] - 
@output_weights_change * @learning_rate) 354 | @weights 355 | end 356 | 357 | def weights 358 | @weights 359 | end 360 | end 361 | -------------------------------------------------------------------------------- /lib/agnet/version.rb: -------------------------------------------------------------------------------- 1 | class Agnet 2 | VERSION = "0.2.0" 3 | end 4 | -------------------------------------------------------------------------------- /spec/agnet_spec.rb: -------------------------------------------------------------------------------- 1 | require 'spec_helper' 2 | 3 | describe Agnet do 4 | it 'has a version number' do 5 | expect(Agnet::VERSION).not_to be nil 6 | end 7 | 8 | describe 'public instance methods' do 9 | before do 10 | @input_nodes = 784 11 | @hidden_nodes = 15 12 | @output_nodes = 10 13 | @input_activation = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] 14 | @input_bias = 1.0 15 | @hidden_bias = 1.0 16 | @bits = 255.0 17 | @training_size = 10 18 | @testing_size = 10 19 | 20 | @learning_rate = 0.5 21 | @function = 'sigmax' 22 | @weights = Array.new(2) 23 | @label = 1 24 | @test_input_activation = [0, 1, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] 25 | end 26 | subject do 27 | Agnet.new 28 | end 29 | 30 | let(:input) do 31 | end 32 | 33 | let(:output) { subject.process(input) } 34 | context 'responds to its methods' do 35 | it { 
        expect(subject).to respond_to(:set_initial_weights) }
      it { expect(subject).to respond_to(:set_initial_factors) }
    end

    context 'executes methods correctly' do
      context '#load_data' do
        # it 'returns a vector' do
        #   expect(subject.load_data('/Users/scott/Documents/code/custom_gems/agnet/train.csv')).to be_a(Array)
        # end
        # it 'has values in range 0 to @bits - 1' do
        #   expect(subject.load_data('/Users/scott/Documents/code/custom_gems/agnet/train.csv')[0][0]).to be_between(0, @bits).inclusive
        # end
      end
      context '#train' do
        it 'adjusts weights' do
          weights = subject.weights
          subject.train('')
          expect(weights).not_to eq(subject.weights)
        end
      end
      context '#iteration' do
        it 'returns current iteration' do
          expect(subject.iteration).to be_a(Integer)
        end
      end
      context '#classify' do
        it 'returns guess array for data' do
          subject.train('/Users/scott/Documents/code/custom_gems/agnet/train.csv')
          expect(subject.classify(@test_input_activation)).to be_a(Array)
        end
      end
      context '#training_score' do
        it 'updates boolean for each iteration' do
          # subject.train('/Users/scott/Documents/code/custom_gems/agnet/train.csv')
          #
          # it_score = @it_score
          # subject.training_score
          # expect(it_score).not_to eq(@it_score)
        end
        it 'updates training-score log with boolean' do
        end
        it 'updates total running average' do
        end
        it 'updates 5000 running average' do
        end
        it 'updates 1000 running average' do
        end
        it 'updates 100 running average' do
        end
      end

      context '#test_score' do
        it 'updates boolean for each iteration' do
        end
        it 'updates test-score log with boolean' do
        end
        it 'updates total running average' do
        end
        it 'updates 5000 running average' do
        end
        it 'updates 1000 running average' do
        end
        it 'updates 100 running average' do
        end
      end
      context '#logs' do
      end
      context '#set_initial_factors' do
        it 'sets array of initial factors for hidden and output weight matrices' do
          expect(subject.set_initial_factors.size).to eq(2)
        end
        it 'sets array values according to the function 1 / input_size**0.5' do
          expect(subject.set_initial_factors[0]).to eq(1 / @input_nodes**0.5)
          expect(subject.set_initial_factors[1]).to eq(1 / @hidden_nodes**0.5)
        end
      end
      context '#set_initial_weights' do
        it 'sets initial hidden layer weight matrix' do
          expect(subject.set_initial_weights[0].column_count)
            .to eq(@input_nodes + 1)
          expect(subject.set_initial_weights[0].row_count).to eq(@hidden_nodes)
        end
        it 'sets initial output layer weight matrix' do
          expect(subject.set_initial_weights[1].column_count)
            .to eq(@hidden_nodes + 1)
          expect(subject.set_initial_weights[1].row_count).to eq(@output_nodes)
        end
        it 'sets initial hidden weight matrix values between -0.5 and 0.5' do
          expect(subject.set_initial_weights[0][0, 0]).to be_within(0.5).of(0)
        end
        it 'sets initial output weight matrix values between -0.5 and 0.5' do
          expect(subject.set_initial_weights[1][0, 0]).to be_within(0.5).of(0)
        end
      end
      context '#scale_initial_weights' do
        it 'multiplies the hidden initial factor into the hidden weight matrix' do
          subject.set_initial_weights

          expect(subject.scale_initial_weights[0])
            .to eq(subject.initial_weights[0] * subject.set_initial_factors[0])
        end
        it 'multiplies the output initial factor into the output weight matrix' do
          subject.set_initial_weights

          expect(subject.scale_initial_weights[1])
            .to eq(subject.initial_weights[1] * subject.set_initial_factors[1])
        end
      end
      context '#normalize_input_activation' do
        it 'returns a vector' do
          expect(subject.normalize_input_activation).to be_a(Vector)
        end
        it 'returns vector with values between 0 and 1' do
          expect(subject.normalize_input_activation[0])
            .to be_within(0.5).of(0.5)
        end
        it 'has a vector size of input_nodes + 1 for a bias value' do
          expect(subject.normalize_input_activation.size)
            .to eq(@input_nodes + 1)
        end
      end
      context '#hidden_layer_weighted_sum' do
        it 'calcs weighted sum vector for hidden layer nodes with hidden weights' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation

          expect(subject.hidden_layer_weighted_sum[0])
            .to eq(subject.scale_initial_weights[0]
              .row(0).inner_product(subject.normalize_input_activation))
        end
        it 'returns array of size hidden nodes' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation

          expect(subject.hidden_layer_weighted_sum.size).to eq(@hidden_nodes)
        end
      end
      context '#hidden_layer_activation' do
        it 'applies activation function to each hidden node value' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum

          expect(subject.hidden_layer_activation)
            .to eq(subject.activation_function(subject.hidden_layer_weighted_sum, 0))
        end
      end
      context '#activation_function' do
        it 'returns the selected activation function' do
          subject.set_initial_weights
          subject.normalize_input_activation
          # expect(subject.activation_function([0.3, 0.6]))
          #   .to eq(subject.hidden_layer_weighted_sum)
        end
      end

      context '#vectorize_hidden_layer' do
        it 'returns a vector' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation

          expect(subject.vectorize_hidden_layer).to be_a(Vector)
        end
        it 'does not change the initial values from array to vector' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation

          expect(subject.vectorize_hidden_layer[0])
            .to eq(subject.hidden_layer_activation[0])
        end
        it 'adds bias to end of vector' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation

          expect(subject.vectorize_hidden_layer.size)
            .to eq(subject.hidden_layer_activation.size + 1)
        end
      end
      context '#output_layer_weighted_sum' do
        it 'calcs weighted sum vector for output layer nodes with output weights' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation
          subject.vectorize_hidden_layer

          expect(subject.output_layer_weighted_sum[0])
            .to eq(subject.scale_initial_weights[1]
              .row(0).inner_product(subject.vectorize_hidden_layer))
        end
        it 'returns array of size output nodes' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation
          subject.vectorize_hidden_layer

          expect(subject.output_layer_weighted_sum.size)
            .to eq(@output_nodes)
        end
      end
      context '#output_layer_activation' do
        it 'applies activation function to each output node value' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation
          subject.vectorize_hidden_layer
          subject.output_layer_weighted_sum

          expect(subject.output_layer_activation)
            .to eq(subject.activation_function(subject.output_layer_weighted_sum, 1))
        end
      end
      context '#guess' do
        it 'returns the index of the max value in an array' do
          array = [2, 4.3, 6.32, 1]

          expect(subject.guess(array))
            .to eq(2)
        end
      end
      context '#vectorize_output_layer' do
        it 'returns a vector' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation
          subject.vectorize_hidden_layer
          subject.output_layer_weighted_sum
          subject.output_layer_activation

          expect(subject.vectorize_output_layer).to be_a(Vector)
        end
        it 'does not change the initial values from array to vector' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation
          subject.vectorize_hidden_layer
          subject.output_layer_weighted_sum
          subject.output_layer_activation

          expect(subject.vectorize_output_layer[0])
            .to eq(subject.output_layer_activation[0])
        end
        it 'keeps the same size as the output activation (no bias appended)' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation
          subject.vectorize_hidden_layer
          subject.output_layer_weighted_sum
          subject.output_layer_activation

          expect(subject.vectorize_output_layer.size)
            .to eq(subject.output_layer_activation.size)
        end
      end
      context '#label_array' do
        it 'returns an array' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation
          subject.vectorize_hidden_layer
          subject.output_layer_weighted_sum
          subject.output_layer_activation
          subject.vectorize_output_layer

          expect(subject.label_array).to be_a(Array)
        end
        it 'has a size of output size' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation
          subject.vectorize_hidden_layer
          subject.output_layer_weighted_sum
          subject.output_layer_activation
          subject.vectorize_output_layer

          expect(subject.label_array.size).to eq(@output_nodes)
        end
        it 'has exactly one value equal to 1' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation
          subject.vectorize_hidden_layer
          subject.output_layer_weighted_sum
          subject.output_layer_activation
          subject.vectorize_output_layer

          expect(subject.label_array.count(1)).to eq(1)
        end
        it 'has all other values equal to 0' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation
          subject.vectorize_hidden_layer
          subject.output_layer_weighted_sum
          subject.output_layer_activation
          subject.vectorize_output_layer

          expect(subject.label_array.count(0)).to eq(9)
        end
      end
      context '#output_error' do
        it 'returns a vector' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation
          subject.vectorize_hidden_layer
          subject.output_layer_weighted_sum
          subject.output_layer_activation
          subject.vectorize_output_layer
          subject.label_array

          expect(subject.output_error).to be_a(Vector)
        end
        it 'has a size of output size' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation
          subject.vectorize_hidden_layer
          subject.output_layer_weighted_sum
          subject.output_layer_activation
          subject.vectorize_output_layer
          subject.label_array

          expect(subject.output_error.size).to eq(@output_nodes)
        end
        it 'returns the difference between label and output as a vector' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation
          subject.vectorize_hidden_layer
          subject.output_layer_weighted_sum
          subject.output_layer_activation
          subject.vectorize_output_layer
          subject.label_array

          expect(subject.output_error).to eq(subject.vectorize_output_layer -
                                             Vector.elements(subject.label_array))
        end
      end
      context '#back_prop_output' do
        it 'returns an array' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation
          subject.vectorize_hidden_layer
          subject.output_layer_weighted_sum
          subject.output_layer_activation
          subject.vectorize_output_layer
          subject.label_array
          subject.output_error

          expect(subject.back_prop_output).to be_a(Array)
        end
        it 'back propagates output error' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation
          subject.vectorize_hidden_layer
          subject.output_layer_weighted_sum
          subject.output_layer_activation
          subject.vectorize_output_layer
          subject.label_array
          subject.output_error

          expect(subject.back_prop_output[0])
            .to eq(subject.weights[1]
              .column(0).inner_product(subject.output_error) *
              subject.back_prop_derivation[0])
        end
      end
      context '#hidden_weights_change' do
        it 'returns a matrix' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation
          subject.vectorize_hidden_layer
          subject.output_layer_weighted_sum
          subject.output_layer_activation
          subject.vectorize_output_layer
          subject.label_array
          subject.output_error
          subject.back_prop_output

          expect(subject.hidden_weights_change).to be_a(Matrix)
        end
        it 'matrix has hidden_nodes # of rows' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation
          subject.vectorize_hidden_layer
          subject.output_layer_weighted_sum
          subject.output_layer_activation
          subject.vectorize_output_layer
          subject.label_array
          subject.output_error
          subject.back_prop_output

          expect(subject.hidden_weights_change.row_count).to eq(@hidden_nodes)
        end
        it 'matrix has input_nodes + 1 # of columns' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation
          subject.vectorize_hidden_layer
          subject.output_layer_weighted_sum
          subject.output_layer_activation
          subject.vectorize_output_layer
          subject.label_array
          subject.output_error
          subject.back_prop_output

          expect(subject.hidden_weights_change.column_count)
            .to eq(@input_nodes + 1)
        end
      end
      context '#output_weights_change' do
        it 'returns a matrix' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation
          subject.vectorize_hidden_layer
          subject.output_layer_weighted_sum
          subject.output_layer_activation
          subject.vectorize_output_layer
          subject.label_array
          subject.output_error
          subject.back_prop_output

          expect(subject.output_weights_change).to be_a(Matrix)
        end
        it 'matrix has output_nodes # of rows' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation
          subject.vectorize_hidden_layer
          subject.output_layer_weighted_sum
          subject.output_layer_activation
          subject.vectorize_output_layer
          subject.label_array
          subject.output_error
          subject.back_prop_output

          expect(subject.output_weights_change.row_count).to eq(@output_nodes)
        end
        it 'matrix has hidden_nodes + 1 # of columns' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation
          subject.vectorize_hidden_layer
          subject.output_layer_weighted_sum
          subject.output_layer_activation
          subject.vectorize_output_layer
          subject.label_array
          subject.output_error
          subject.back_prop_output

          expect(subject.output_weights_change.column_count)
            .to eq(@hidden_nodes + 1)
        end
      end
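As an aside for readers following these specs: the `#guess` and `#label_array` contexts above pin down two small pieces of the pipeline, an argmax over output activations and a one-hot encoding of the training label. A minimal standalone sketch of that asserted behavior (the free-standing helper definitions here are hypothetical illustrations, not the gem's implementation):

```ruby
# Hypothetical standalone helpers mirroring the behavior the specs assert.

# The predicted class is the index of the maximum output activation.
def guess(activations)
  activations.each_with_index.max_by { |value, _index| value }.last
end

# One-hot encoding: a single 1 at the label's index, 0 everywhere else.
def label_array(label, output_nodes)
  Array.new(output_nodes, 0).tap { |encoding| encoding[label] = 1 }
end

guess([2, 4.3, 6.32, 1])  # => 2
label_array(1, 10)        # => [0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
```

This matches the expectations above: exactly one element equals 1, the other nine equal 0, and `guess` returns 2 for the sample array.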
      context '#weights_change' do
        it 'returns an array' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation
          subject.vectorize_hidden_layer
          subject.output_layer_weighted_sum
          subject.output_layer_activation
          subject.vectorize_output_layer
          subject.label_array
          subject.output_error
          subject.back_prop_output
          subject.hidden_weights_change
          subject.output_weights_change

          expect(subject.weights_change).to be_a(Array)
        end
        it 'has first value in array that is a matrix' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation
          subject.vectorize_hidden_layer
          subject.output_layer_weighted_sum
          subject.output_layer_activation
          subject.vectorize_output_layer
          subject.label_array
          subject.output_error
          subject.back_prop_output
          subject.hidden_weights_change
          subject.output_weights_change

          expect(subject.weights_change[0]).to be_a(Matrix)
        end
        it 'has first value in array that has hidden_nodes # of rows' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation
          subject.vectorize_hidden_layer
          subject.output_layer_weighted_sum
          subject.output_layer_activation
          subject.vectorize_output_layer
          subject.label_array
          subject.output_error
          subject.back_prop_output
          subject.hidden_weights_change
          subject.output_weights_change

          expect(subject.weights_change[0].row_count).to eq(@hidden_nodes)
        end
        it 'has first value in array that has input_nodes + 1 # of columns' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation
          subject.vectorize_hidden_layer
          subject.output_layer_weighted_sum
          subject.output_layer_activation
          subject.vectorize_output_layer
          subject.label_array
          subject.output_error
          subject.back_prop_output
          subject.hidden_weights_change
          subject.output_weights_change

          expect(subject.weights_change[0].column_count).to eq(@input_nodes + 1)
        end
        it 'has second value in array that is a matrix' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation
          subject.vectorize_hidden_layer
          subject.output_layer_weighted_sum
          subject.output_layer_activation
          subject.vectorize_output_layer
          subject.label_array
          subject.output_error
          subject.back_prop_output
          subject.hidden_weights_change
          subject.output_weights_change

          expect(subject.weights_change[1]).to be_a(Matrix)
        end
        it 'has second value in array that has output_nodes # of rows' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation
          subject.vectorize_hidden_layer
          subject.output_layer_weighted_sum
          subject.output_layer_activation
          subject.vectorize_output_layer
          subject.label_array
          subject.output_error
          subject.back_prop_output
          subject.hidden_weights_change
          subject.output_weights_change

          expect(subject.weights_change[1].row_count).to eq(@output_nodes)
        end
        it 'has second value in array that has hidden_nodes + 1 # of columns' do
          subject.set_initial_weights
          subject.scale_initial_weights
          subject.normalize_input_activation
          subject.hidden_layer_weighted_sum
          subject.hidden_layer_activation
          subject.vectorize_hidden_layer
          subject.output_layer_weighted_sum
          subject.output_layer_activation
          subject.vectorize_output_layer
          subject.label_array
          subject.output_error
          subject.back_prop_output
          subject.hidden_weights_change
          subject.output_weights_change

          expect(subject.weights_change[1].column_count)
            .to eq(@hidden_nodes + 1)
        end
      end
    end
  end
end
--------------------------------------------------------------------------------
/spec/spec_helper.rb:
--------------------------------------------------------------------------------
$LOAD_PATH.unshift File.expand_path('../../lib', __FILE__)
require 'agnet'
--------------------------------------------------------------------------------
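For readers tracing the forward-pass contexts in the spec (`#hidden_layer_weighted_sum` as a weight-row-by-input inner product, then an activation squash), here is a minimal toy sketch using Ruby's stdlib `Matrix`/`Vector`. The sizes and weight values are invented for illustration, and a standard logistic sigmoid is assumed as the hidden activation:

```ruby
require 'matrix'

# Standard logistic sigmoid, assumed here as the hidden-layer activation.
def sigmoid(x)
  1.0 / (1.0 + Math.exp(-x))
end

# Toy sizes: 2 hidden nodes, 3 inputs plus 1 bias column (values invented).
weights = Matrix[[0.1, -0.2,  0.3, 0.05],
                 [0.0,  0.4, -0.1, 0.05]]
input   = Vector[0.5, 0.25, 1.0, 1.0] # final entry is the bias activation

# Each hidden node's weighted sum is one weight row dotted with the input,
# mirroring `row(0).inner_product(normalize_input_activation)` in the specs.
weighted_sums = weights.row_vectors.map { |row| row.inner_product(input) }
activations   = weighted_sums.map { |sum| sigmoid(sum) }

weighted_sums.size # => 2 (one entry per hidden node)
```

The sigmoid keeps every activation strictly between 0 and 1, which is why the weight matrices can be dimensioned purely from node counts plus one bias column.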