├── .gitignore
├── LICENSE.txt
├── README.txt
├── ann-test.lisp
├── ann.lisp
├── cl-ann.asd
└── package.lisp

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
*~
*#*#

--------------------------------------------------------------------------------
/LICENSE.txt:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2018 Lucas Vieira

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

--------------------------------------------------------------------------------
/README.txt:
--------------------------------------------------------------------------------
This is an implementation of a linear artificial neural network in Common Lisp.
This project is a port of a similar algorithm I once built in C++.

The Common Lisp implementation was written in two days, using my own C++ code as
a basis, though I stripped out most of the object-oriented programming aspects
(except encapsulation and methods, which work differently in CL -- and come in
handy!). I also tried to keep imperative approaches and data mutation to a
minimum, though the code still relies on them quite a lot. I am not ashamed of
that, since I did not want to put many constraints on my coding style, but I do
believe I pulled off some clever functional tricks wherever I could.

I also make extensive use of arrays and the loop macro, so if you're allergic to
those, be warned.

Also, I am not necessarily trying to formalize this as a Quicklisp package, but
you can clone/symlink it into your local-projects folder and load the cl-ann
system, and it will work just fine. I hope.

Mind that, for now, the Eta and Alpha values of an ANN are global to the whole
system, and the transfer function is fixed to a simple f(x) = tanh x operation
(whose derivative, written in terms of the output y = tanh x, is 1 - y^2).

This project is distributed under the MIT License.

-----

Installation

cl-ann can be installed by means of Quicklisp + Ultralisp. See ultralisp.org
for more details.
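
Alternatively, for a manual checkout (the clone/symlink route mentioned above),
the following should also work -- a sketch, assuming the repository was cloned
into Quicklisp's local-projects directory:

(ql:register-local-projects)  ; assumes ~/quicklisp/local-projects/cl-ann
(ql:quickload :cl-ann)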

After installing Quicklisp and Ultralisp, one may use the following to test the project:

(ql:quickload :cl-ann)
(cl-ann/test:xor-run-test)

-----

TODO list

- Make the system more flexible in terms of used variables
- Make the XOR test more contained

-----

Here is some brief documentation for the meaningful parts of this project:


* package "cl-ann"
This is the core package of the system. It contains the whole implementation of
the artificial neural network.


** *neuron-eta*
[variable] Overall network training rate. Must be a number in [0.0..1.0].
0.0 is a slow learner; 0.2 is a medium learner; 1.0 is a reckless learner.
Defaults to 0.15.

** *neuron-alpha*
[variable] Weight-changing momentum for the inter-layer connections between neurons.
Must be a number in [0.0..1.0]. 0.0 is no momentum; 0.5 is moderate momentum.
Defaults to 0.5.

** ann
[struct] Represents a whole artificial neural network, with its internal data.

** neuron
[struct] Represents a single neuron in a layer of the artificial neural network.

** connection
[struct] Represents a single connection between a neuron in layer A and another
neuron in the next layer B.

** (build-ann topology)
[function] Builds an artificial neural network using a topology, given as a list
of natural numbers. For example, the topology '(2 4 1) yields a neural network with
two input neurons, one output neuron, and a single hidden layer containing four
neurons. Each layer will also have a single extra bias neuron at the end.

** (feed-forward ann input-values)
[method] Feeds the given input values forward through the neural network, effectively
performing the pattern recognition. The values fed must be a list of single-float
values which pair up with the input neurons -- e.g. a neural network with two
input neurons must be fed a list of two single-floats, like '(1.0 0.0).
The bias neuron of the input layer is disregarded as input; if needed, remember
to coerce each input value to a float.

** (back-propagate ann target-values)
[method] Backpropagates an expected result after a successful feed-forward operation,
effectively training the artificial neural network to better recognize the pattern
of the last executed test case. The target values given must be a list of single-float
values which pair up with the output neurons -- e.g. a neural network with one output
neuron yields a single result number, so the target value must be a list
containing one single-float, like '(1.0).
The bias neuron of the output layer is disregarded as a target; if needed, remember
to coerce each target value to a float.

** (collect-results ann)
[method] Yields a list containing the results the neural network produced after
processing the last fed-forward data. The result is a list of single-floats containing
the output value of every neuron on the output layer, except the bias neuron's.

** (run-training ann input-test-cases target-test-cases &optional show-output)
[method] Performs an automated training cycle comprised of feeding forward each test
case and backpropagating the target for that test case. The input cases must be a
list of lists; each sublist must be a possible and valid input for a feed-forward
operation. The target test cases are also a list of lists, and must likewise be
comprised of possible and valid inputs for a backpropagation operation -- e.g. for a
single input case (1.0, 0.0) which expects a target of 1.0, one can create the list
'((1.0 0.0)) of input tests and the list '((1.0)) of target tests.
One can also request that each test case and its yielded value be printed by
specifying show-output.
The whole training process is monitored using the time function, so at the end of the
training, the function will yield information regarding execution time, CPU usage
and so on.
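
Putting the core API together, a minimal training session could look like the
sketch below. The AND gate, its four test cases, and the '(2 3 1) topology are
made up for illustration; as with the XOR test, real training would repeat such
cases many thousands of times:

(let ((net (cl-ann:build-ann '(2 3 1))))
  ;; Teach the logical AND operation (illustrative; far too few cases)
  (cl-ann:run-training net
                       '((0.0 0.0) (1.0 0.0) (0.0 1.0) (1.0 1.0))
                       '((0.0) (0.0) (0.0) (1.0)))
  ;; Query the (barely) trained network for one case
  (cl-ann:feed-forward net '(1.0 1.0))
  (cl-ann:collect-results net))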


* package "cl-ann/test"
This package contains tests for the overall artificial neural network implementation.


** *xor-ann*
[parameter] When not nil, contains an instance of a neural network which is supposed
to learn the exclusive-or operation. Since we cannot simulate bits on this ANN
implementation, we use 1.0 and 0.0 as 1 and 0.

** (xor-begin)
[function] Creates a neural network on parameter *xor-ann* and generates test cases for
training it in exclusive-or operations.
By default, this function generates a neural network with topology (2 4 1) and 20,000
test cases.

** (xor-train &optional show-input)
[function] Performs training on *xor-ann* so it can perform exclusive-or operations,
then releases all generated test cases for garbage collection.
Since this function uses the default training method defined on the artificial neural
network infrastructure, one can also opt to show the results for each backpropagated
test case on the console by specifying show-input.
This function does nothing unless *xor-ann* is non-nil and the test cases have been
properly generated.

** (xor-finish)
[function] Performs one last test, this time with no training whatsoever, only feeding
forward the four possible cases of the exclusive-or operation and showing them onscreen
in a readable way, along with the yielded output values.
This function does nothing unless *xor-ann* is non-nil.

** (xor-run-test &optional show-input)
[function] Performs the whole creation and training cycle for the artificial neural
network which simulates the exclusive-or operation.
One can also opt to show the results of each training test case on the console by
specifying show-input.
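
Since *neuron-eta* and *neuron-alpha* are plain special variables, they can also
be rebound around a training run without touching the network itself -- a sketch,
where 0.3 and 0.2 are arbitrary illustration values:

(let ((cl-ann:*neuron-eta* 0.3)    ; a faster learner
      (cl-ann:*neuron-alpha* 0.2)) ; with less momentum
  (cl-ann/test:xor-run-test))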
--------------------------------------------------------------------------------
/ann-test.lisp:
--------------------------------------------------------------------------------
;;;; ann-test.lisp

(in-package #:cl-ann/test)

;;; ================

;;; The following test is for building a neural network which can accurately
;;; perform the XOR operation.
;;; From my previous calculations on another implementation, about 20,000
;;; iterations of tests using random data should be enough.

;;; For any test, we need to
;;; a) Effectively build our neural network;
;;; b) Create test data, be it random or not.
;;;    Test data consists of two lists, of input and target values, where the
;;;    first has sublists with the same amount of input neurons (less the bias
;;;    neuron), and the second has the same amount of output neurons (less the
;;;    bias neuron as well);
;;; c) Train the network, using the generated test data, by feeding forward the
;;;    input test cases and back-propagating the expected target values;
;;; d) We then run a few test cases on our own, treating the output specially so
;;;    that we can present a better interface to the user; and finally
;;; e) We put the generated test network at the user's disposal by using some
;;;    kind of interface for that. Here, I figured it would be better to do this
;;;    by exposing the symbols holding the test networks, since I know that, if
;;;    you're programming in Lisp, you probably already know what you're doing,
;;;    so I don't have to underestimate you with any weird encapsulation.

;;; Test networks
(defparameter *xor-ann* nil
  "Single instance of a neural network which learns to recognize the pattern
of an exclusive-or binary operation.")

;;; Training cases
(defparameter *xor-test-cases* nil)

(defun xor-begin ()
  "Initializes the test for the simulation of the EXCLUSIVE-OR operation."
  ;; The XOR neural net uses a topology of two input neurons,
  ;; one hidden layer with four neurons, and a single output.
  (format t "Creating the neural network~&")
  (setf *xor-ann* (build-ann '(2 4 1)))
  ;; Generate test cases
  (format t "Generating test cases~&")
  (setf *xor-test-cases* (list nil nil))
  (loop repeat 20000
        for test-case = (list (coerce (floor (random 2.0)) 'float)
                              (coerce (floor (random 2.0)) 'float))
        ;; Manually deducing the values with no clever arithmetic
        ;; because I'm not confident that I can do it in Common
        ;; Lisp. So bear with me.
        for result =
          (list (cond ((and (= (car test-case) 0.0)
                            (= (cadr test-case) 0.0))
                       0.0)
                      ((and (= (car test-case) 1.0)
                            (= (cadr test-case) 0.0))
                       1.0)
                      ((and (= (car test-case) 0.0)
                            (= (cadr test-case) 1.0))
                       1.0)
                      ((and (= (car test-case) 1.0)
                            (= (cadr test-case) 1.0))
                       0.0)))
        do (push test-case (car *xor-test-cases*))
           (push result (cadr *xor-test-cases*))))
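
;; For the record, the truth table above could also be computed arithmetically
;; instead of enumerating the four cases -- a hypothetical helper, kept out of
;; the actual code to preserve the original's explicitness:
;;
;;   (defun xor-target (a b)
;;     "1.0 when exactly one of A and B is 1.0, else 0.0."
;;     (if (= a b) 0.0 1.0))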

(defun xor-train (&optional (show-input nil))
  "Trains the neural network which simulates the XOR operation."
  (when (and *xor-ann* *xor-test-cases*)
    (format t "Training the network using ~a test cases~&"
            (list-length (cadr *xor-test-cases*)))
    (run-training *xor-ann*
                  (car *xor-test-cases*)
                  (cadr *xor-test-cases*)
                  show-input)
    (setf *xor-test-cases* nil)))

(defun xor-finish ()
  "Performs a final check of the results on the XOR neural network."
  (when *xor-ann*
    (format t "Performing final test: evaluate all four possible test cases.~&")
    (labels ((interpret-result (value)
               (cond ((<= value 0.25) "certainly false")
                     ((<= value 0.375) "probably false")
                     ((<= value 0.625) "uncertain")
                     ((<= value 0.75) "probably true")
                     (t "certainly true"))))
      (format t "Interpreting results for common operations:~&")
      ;; Feed each of the four possible cases forward and report the single
      ;; output value along with its human-readable interpretation
      (dolist (input '((0.0 0.0) (1.0 0.0) (0.0 1.0) (1.0 1.0)))
        (feed-forward *xor-ann* input)
        (let ((result (car (collect-results *xor-ann*))))
          (format t "~a => ~a ~a~&"
                  input
                  (interpret-result result)
                  result))))))

(defun xor-run-test (&optional (show-input nil))
  "Performs the whole test cycle on the XOR neural network at once."
  (format t "Performing artificial neural network test: learning the XOR operation.~&")
  (when *xor-ann*
    (format t "WARNING: this will rebuild cl-ann/test:*xor-ann*.~&"))
  (xor-begin)
  (xor-train show-input)
  (xor-finish)
  (format t "Testing finished.~&Use cl-ann/test:*xor-ann* for further experiments.~&"))

--------------------------------------------------------------------------------
/ann.lisp:
--------------------------------------------------------------------------------
;;;; ann.lisp

(in-package #:cl-ann)

;; A connection specifies the weight between neurons
;; of different layers
(defstruct connection
  "Represents a single connection between a neuron in layer A and
another neuron in the next layer B."
  (weight 0.0 :type single-float) ; placeholder; randomized in BUILD-NEURON
  (delta-weight 0.0 :type single-float))

;; Representation for neuron data
(defstruct neuron
  "Represents a single neuron in a layer of the artificial neural
network."
  index
  output-weights
  (output 0.0 :type single-float)
  (gradient 0.0 :type single-float))
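
;; NOTE: a neuron stores the weights of its *outgoing* connections, indexed by
;; the index of the destination neuron in the next layer. The weight from
;; neuron P into neuron N is therefore
;;   (connection-weight (aref (neuron-output-weights p) (neuron-index n)))
;; which is exactly how FEED-FORWARD and UPDATE-INPUT-WEIGHTS look it up.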

;; Overall network training rate
(defvar *neuron-eta* 0.15
  "Defines an overall network training rate.
Value must be between 0.0 and 1.0, where 0.0
is a slow learner, 0.2 is a medium learner,
and 1.0 is a reckless learner. Defaults to
0.15.")

;; Momentum; multiplier of the last weight change
(defvar *neuron-alpha* 0.5
  "Defines the weight-changing momentum for
the inter-layer connections between neurons.
Value must be between 0.0 and 1.0, where 0.0
is no momentum, and 0.5 is moderate momentum.
Defaults to 0.5.")

;; Represents an actual neural network.
(defstruct ann
  "Represents a whole artificial neural network, with its internal data."
  layers
  (error 0.0 :type single-float)
  (recent-avg-error 0.0 :type single-float)
  (recent-avg-smth-factor 0.0 :type single-float))

;; Transfer function
(defun transfer-function (x)
  (tanh x))

;; Transfer function derivative. Note that X here is the neuron's
;; *output* y = tanh(s), for which d(tanh s)/ds = 1 - y^2.
(defun transfer-function-derivative (x)
  (- 1.0 (* x x)))


;;; ==============

;; Though I decided to implement this from the general part
;; to the most specific part, I will be using a wishful-thinking
;; approach, so I'll begin by building the neural network, and then
;; get to the stuff I need.

(defun build-neuron (outputs index)
  "Builds a single neuron, given the amount of outputs and the current index on the layer."
  (make-neuron :index index
               :output-weights (make-array outputs
                                           :element-type 'connection
                                           :initial-contents
                                           (loop for x from 0 below outputs
                                                 collect (make-connection :weight (random 1.0))))))


(defun build-ann (topology)
  "Builds a new neural network. TOPOLOGY is a list containing the network topology."
  ;; Assume a topology of at least two layers (input and output). Otherwise, yield
  ;; an error
  (when (< (length topology) 2)
    (error "Invalid number of layers. ANN must have at least an input and an output layer."))
  (let ((net (make-array (length topology)
                         ;; This also ensures an extra bias neuron per layer
                         :initial-contents
                         (loop for num in topology collect (make-array (1+ num)))))
        ;; Each neuron's outgoing connections cover the next layer plus its bias
        (outputs (mapcar #'1+ (append (cdr topology) '(0)))))
    ;; Iterate through the layers
    (loop for layer across net
          for layer-size in topology
          for number-of-outputs in outputs
          ;; Create every neuron
          do (loop for index from 0 to layer-size ; the extra one here is the bias
                   do (setf (aref layer index)
                            (build-neuron number-of-outputs
                                          index)))
             ;; The last neuron of each layer is the bias neuron, with a
             ;; constant output of 1.0
             (setf (neuron-output (aref layer layer-size)) 1.0))
    (make-ann :layers net)))
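
;; For example, (build-ann '(2 4 1)) allocates layers of 3, 5 and 2 neurons
;; (the topology sizes plus one bias each); every input-layer neuron then owns
;; 5 outgoing connections, every hidden neuron owns 2, and the output layer's
;; neurons own a single dangling connection apiece.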


;;; ==============

;; All we're looking for is a way to feed the input values forward and obtain a
;; result; and then, to correct it, we backpropagate the expected values.

;; For that we'll define methods, just to make sure we're using our neural network.

;;; ==============

;;; Necessary for feeding the information forward

(defmacro get-ann-layer (net layer-index)
  `(aref (ann-layers ,net) ,layer-index))

(defmacro get-ann-number-of-layers (net)
  `(length (ann-layers ,net)))

(defmethod feed-forward ((neuron neuron) (previous-layer vector))
  "Feeds forward the output values of the previous layer into the current neuron.
In other words: output(n) = tanh(sum of output(p) * weight(p -> n)) over every
neuron P of the previous layer, including its bias neuron, whose constant 1.0
output turns its weight into the additive bias term."
  (setf (neuron-output neuron)
        (transfer-function
         (reduce #'+
                 (loop for prev-neuron across previous-layer
                       collect (* (neuron-output prev-neuron)
                                  (connection-weight (aref (neuron-output-weights prev-neuron)
                                                           (neuron-index neuron)))))))))

(defmethod feed-forward ((net ann) input-values)
  "Feeds the input values forward through a neural network.
The given input values must contain as many values as the first layer
of the neural network has neurons, minus the bias neuron."
  ;; Make sure our input values are exactly what we want
  (when (not (= (length input-values)
                (1- (length (get-ann-layer net 0)))))
    (error "Invalid number of input values for the ANN."))
  ;; Assign / latch the input values to the input neurons
  (loop for neuron across (get-ann-layer net 0)
        for input in input-values
        do (setf (neuron-output neuron) input))
  ;; Forward-propagate the values on each neuron, skipping each layer's
  ;; last (bias) neuron so that its output stays latched at 1.0
  (loop for layer-index from 1 below (get-ann-number-of-layers net)
        for previous-layer-index = (1- layer-index)
        for layer = (get-ann-layer net layer-index)
        do (loop for neuron-index from 0 below (1- (length layer))
                 do (feed-forward (aref layer neuron-index)
                                  (get-ann-layer net previous-layer-index)))))


;;; =============

;;; Necessary for back-propagating the results

(defmacro get-ann-last-layer (net)
  `(let ((layer-index (1- (get-ann-number-of-layers ,net))))
     (aref (ann-layers ,net) layer-index)))

(defmethod calculate-output-gradient ((neuron neuron) target)
  "Calculates the output gradient for the current neuron."
  (* (- target (neuron-output neuron)) ; delta
     (transfer-function-derivative (neuron-output neuron))))

(defmethod calculate-hidden-gradient ((neuron neuron) next-layer)
  "Calculates the gradient of a neuron on a hidden layer."
  (let ((sum-of-weighted-gradients
          (reduce #'+
                  ;; Sum over the next layer's neurons, excluding its bias
                  ;; neuron -- nothing actually feeds into a bias neuron
                  (loop for index from 0 below (1- (length next-layer))
                        for next-layer-neuron = (aref next-layer index)
                        collect (* (connection-weight (aref (neuron-output-weights neuron)
                                                            index))
                                   (neuron-gradient next-layer-neuron))))))
    (* sum-of-weighted-gradients
       (transfer-function-derivative (neuron-output neuron)))))

(defmethod update-input-weights ((neuron neuron) previous-layer)
  "Updates the weights of every connection feeding into the current neuron.
The weights which need to be updated live on the connections owned by the
previous layer's neurons."
  (loop for previous-neuron across previous-layer
        for old-delta-weight = (connection-delta-weight
                                (aref (neuron-output-weights previous-neuron)
                                      (neuron-index neuron)))
        for new-delta-weight = (+ (* *neuron-eta* ; overall net learning rate
                                     (neuron-output previous-neuron)
                                     (neuron-gradient neuron))
                                  (* *neuron-alpha* ; learning momentum
                                     old-delta-weight))
        do (setf (connection-delta-weight (aref (neuron-output-weights previous-neuron)
                                                (neuron-index neuron)))
                 new-delta-weight)
           (incf (connection-weight (aref (neuron-output-weights previous-neuron)
                                          (neuron-index neuron)))
                 new-delta-weight)))
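
;; The update above is the classic gradient step with momentum:
;;   delta-w(t) = eta * output(prev) * gradient(n) + alpha * delta-w(t-1)
;;   w         += delta-w(t)
;; where eta is *NEURON-ETA* and alpha is *NEURON-ALPHA*.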

(defmethod back-propagate ((net ann) target-values)
  "Backpropagates the target values to effectively make the ANN learn.
The given target values must contain as many values as the last layer
of the neural network has neurons, minus the bias neuron."
  ;; Make sure our target values are exactly what we want
  (when (not (= (length target-values)
                (1- (length (get-ann-last-layer net)))))
    (error "Invalid number of target values for ANN backpropagation."))
  ;; Calculate the overall ANN error using the root mean square error of the
  ;; output layer (the loop pairs neurons with targets, so the bias neuron
  ;; is naturally left out)
  (setf (ann-error net)
        (let ((sum-of-squared-deltas
                (reduce #'+
                        (loop for neuron across (get-ann-last-layer net)
                              for target in target-values
                              collect (let ((delta (- target (neuron-output neuron))))
                                        (* delta delta))))))
          (sqrt (/ sum-of-squared-deltas
                   (1- (length (get-ann-last-layer net)))))))
  ;; Implement a recent average measurement, just for kicks:
  ;; recent-avg = (recent-avg * smoothing + error) / (smoothing + 1)
  (setf (ann-recent-avg-error net)
        (/ (+ (* (ann-recent-avg-error net)
                 (ann-recent-avg-smth-factor net))
              (ann-error net))
           (1+ (ann-recent-avg-smth-factor net))))
  ;; Calculate output layer gradients
  (loop for neuron across (get-ann-last-layer net)
        for target in target-values
        do (setf (neuron-gradient neuron)
                 (calculate-output-gradient neuron target)))
  ;; Calculate gradients on hidden layers:
  ;; reverse iteration from the rightmost hidden layer to the first hidden layer
  (loop for layer-index from (- (get-ann-number-of-layers net) 2) downto 1
        for next-layer-index = (1+ layer-index)
        do (loop for neuron across (get-ann-layer net layer-index)
                 do (setf (neuron-gradient neuron)
                          (calculate-hidden-gradient neuron
                                                     (get-ann-layer net next-layer-index)))))
  ;; Update connection weights, from the output layer back to the first hidden layer
  (loop for layer-index from (1- (get-ann-number-of-layers net)) downto 1
        for previous-layer-index = (1- layer-index)
        do (loop for neuron across (get-ann-layer net layer-index)
                 do (update-input-weights neuron (get-ann-layer net previous-layer-index)))))


;;; =============

;;; The following are mostly related to collecting results.

(defmethod collect-results ((net ann))
  "Collects the result of the last operation on the neural network.
Returns a list containing all the results but the bias neuron's."
  (butlast (loop for neuron across (get-ann-last-layer net)
                 collect (neuron-output neuron))))
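
;; Example: after (feed-forward net '(1.0 0.0)) on a '(2 4 1) network,
;; (collect-results net) yields a one-element list such as (0.93...),
;; the single output neuron's activation.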


;;; =============

;;; The following are mostly related to training the ANN.

(defmethod run-training ((net ann) (input-test-cases list) (target-test-cases list) &optional (show-output nil))
  "Performs an automated training run on the neural network, and also counts the time taken.
Requires several test cases: the input list is a list of lists, where each sublist
is a different input case matching the exact amount of input neurons; the test targets
follow the same principle, but must pair up their number with the output neurons.
The sizes of these test cases must not count the bias neuron of each layer."
  (time (loop for test-input in input-test-cases
              for test-target in target-test-cases
              do (feed-forward net test-input)
                 (back-propagate net test-target)
                 (when show-output
                   (format t "Case: ~a, Target: ~a, Answer: ~a~&"
                           test-input
                           test-target
                           (collect-results net))))))

--------------------------------------------------------------------------------
/cl-ann.asd:
--------------------------------------------------------------------------------
;;;; cl-ann.asd

(asdf:defsystem #:cl-ann
  :description "Artificial Neural Network implementation"
  :author "Lucas Vieira "
  :license "MIT"
  :serial t
  :components ((:file "package")
               (:file "ann")
               (:file "ann-test")))

--------------------------------------------------------------------------------
/package.lisp:
--------------------------------------------------------------------------------
;;;; package.lisp

(defpackage #:cl-ann
  (:use #:cl)
  (:export :*neuron-eta*
           :*neuron-alpha*
           :ann
           :neuron
           :connection
           :build-ann
           :feed-forward
           :back-propagate
           :collect-results
           :run-training))

(defpackage #:cl-ann/test
  (:use #:cl #:cl-ann)
  (:export :*xor-ann*
           :xor-begin
           :xor-train
           :xor-finish
           :xor-run-test))
--------------------------------------------------------------------------------