├── README.md~
├── neuralisp.asd
├── docs
│   ├── layers.md
│   ├── losses.md
│   ├── optimizers.md
│   ├── activations.md
│   ├── getting_started.md
│   ├── transformers.md
│   ├── manifesto.md
│   ├── cognition
│   │   └── cognitive-modules.md
│   ├── autograd.md
│   ├── tensor.md
│   ├── primitives
│   │   └── neural-primitives.md
│   └── internals
│       └── tensor-autograd.md
├── src
│   ├── layers
│   │   ├── base.lisp
│   │   ├── linear.lisp
│   │   ├── attention.lisp
│   │   ├── recurrent.lisp
│   │   ├── convolutional.lisp
│   │   └── multihead_attention.lisp
│   ├── losses
│   │   ├── base.lisp
│   │   ├── mse.lisp
│   │   └── cross_entropy.lisp
│   ├── optimizers
│   │   ├── sgd.lisp
│   │   ├── adam.lisp
│   │   └── base.lisp
│   ├── activations
│   │   ├── base.lisp
│   │   ├── relu.lisp
│   │   └── sigmoid.lisp
│   ├── utils
│   │   ├── data_loader.lisp
│   │   └── debugging.lisp
│   ├── transformers
│   │   ├── decoder.lisp
│   │   ├── encoder.lisp
│   │   └── transformer_layer.lisp
│   └── core
│       ├── autograd.lisp
│       ├── gpu.lisp
│       └── tensor.lisp
├── .gitignore
├── tests
│   ├── core
│   │   ├── test_gpu.lisp
│   │   ├── test_autograd.lisp
│   │   └── test_tensor.lisp
│   ├── layers
│   │   └── test_layers.lisp
│   ├── losses
│   │   └── test_losses.lisp
│   ├── optimizers
│   │   └── test_optimizers.lisp
│   ├── activations
│   │   └── test_activations.lisp
│   ├── transformers
│   │   └── test_transformers.lisp
│   └── run-smoke.sh
├── .github
│   └── workflows
│       └── ci.yml
├── CHANGELOG.md
├── examples
│   ├── sequence-model.lisp
│   ├── simple-mlp.lisp
│   └── cognitive-loop.lisp
├── ROADMAP.md
├── generate_project_files.sh
├── CONTRIBUTING.md
└── README.md

/README.md~:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/neuralisp.asd:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/docs/layers.md:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/docs/losses.md:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/docs/optimizers.md:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/docs/activations.md:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/docs/getting_started.md:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/docs/transformers.md:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/src/layers/base.lisp:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/src/layers/linear.lisp:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/src/losses/base.lisp:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/src/losses/mse.lisp:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/src/optimizers/sgd.lisp:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | .DS_Store
2 |
--------------------------------------------------------------------------------
/src/activations/base.lisp:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/src/activations/relu.lisp:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/src/activations/sigmoid.lisp:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/src/layers/attention.lisp:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/src/layers/recurrent.lisp:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/src/optimizers/adam.lisp:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/src/optimizers/base.lisp:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/src/utils/data_loader.lisp:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/src/utils/debugging.lisp:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/tests/core/test_gpu.lisp:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/src/layers/convolutional.lisp:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/src/losses/cross_entropy.lisp:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/src/transformers/decoder.lisp:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/src/transformers/encoder.lisp:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/tests/core/test_autograd.lisp:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/tests/layers/test_layers.lisp:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/tests/losses/test_losses.lisp:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/src/layers/multihead_attention.lisp:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/tests/optimizers/test_optimizers.lisp:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/src/transformers/transformer_layer.lisp:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/tests/activations/test_activations.lisp:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/tests/transformers/test_transformers.lisp:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/.github/workflows/ci.yml:
--------------------------------------------------------------------------------
1 | name: CI
2 |
3 | on:
4 |   push:
5 |     branches: [ main, master ]
6 |   pull_request:
7 |
8 | jobs:
9 |   smoke:
10 |     runs-on: ubuntu-latest
11 |     steps:
12 |       - name: Checkout repository
13 |         uses: actions/checkout@v4
14 |
15 |       - name: Install SBCL
16 |         run: |
17 |           sudo apt-get update
18 |           sudo apt-get install -y sbcl
19 |
20 |       - name: Run smoke suite
21 |         run: |
22 |           chmod +x tests/run-smoke.sh
23 |           ./tests/run-smoke.sh
24 |
--------------------------------------------------------------------------------
/tests/run-smoke.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 | set -euo pipefail
3 |
4 | ROOT="$(cd "$(dirname "$0")/.." && pwd)"
5 |
6 | check_file() {
7 |   local path="$1"
8 |   if [[ ! -f "$ROOT/$path" ]]; then
9 |     echo "[ERROR] Missing required file: $path" >&2
10 |     exit 1
11 |   fi
12 | }
13 |
14 | check_file "docs/internals/tensor-autograd.md"
15 | check_file "docs/primitives/neural-primitives.md"
16 | check_file "docs/cognition/cognitive-modules.md"
17 | check_file "docs/manifesto.md"
18 | check_file "CHANGELOG.md"
19 | check_file "ROADMAP.md"
20 | check_file "CONTRIBUTING.md"
21 |
22 | # Execute example scripts to ensure they run without errors.
23 | for script in "examples/simple-mlp.lisp" \
24 |   "examples/sequence-model.lisp" \
25 |   "examples/cognitive-loop.lisp"; do
26 |   echo "[INFO] Running $script"
27 |   sbcl --script "$ROOT/$script" >/dev/null
28 | done
29 |
30 | echo "[INFO] Smoke suite completed successfully."
31 |
--------------------------------------------------------------------------------
/CHANGELOG.md:
--------------------------------------------------------------------------------
1 | # Changelog
2 |
3 | All notable changes to this project will be documented in this file. The log is organised by manifesto phase so that
4 | stakeholders can correlate technical progress with the long-term research plan. Dates use the ISO format `YYYY-MM-DD`.
5 |
6 | ## [Unreleased]
7 |
8 | ### Phase 0 – Foundational Infrastructure
9 | - Set up documentation architecture covering tensor/autograd internals, neural primitives, and cognitive modules.
10 | - Replaced placeholder README with an accurate quickstart, dependency overview, and documentation index.
11 | - Added runnable example scripts (`examples/simple-mlp.lisp`, `examples/sequence-model.lisp`, `examples/cognitive-loop.lisp`).
12 | - Introduced contribution guidelines, roadmap alignment rules, and a smoke-test-based CI workflow.
13 | - Published the NeuraLisp manifesto to anchor roadmap discussions.
14 |
15 | Future releases will promote entries out of the *Unreleased* section as milestones (0.1.0, 0.2.0, etc.) are tagged.
--------------------------------------------------------------------------------
/examples/sequence-model.lisp:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env sbcl --script
2 |
3 | ;;; Minimal recurrent sequence model demonstration.
4 | ;;; Run with: sbcl --script examples/sequence-model.lisp
5 | ;;; Expected output:
6 | ;;; Time step 0 -> state 0.050, output 0.050
7 | ;;; Time step 1 -> state 0.129, output 0.129
8 | ;;; Time step 2 -> state 0.224, output 0.224
9 | ;;; Time step 3 -> state 0.322, output 0.322
10 |
11 | (defun step-rnn (state input weight recurrent-weight bias)
12 |   "Single RNN step using tanh activation."
13 |   (let* ((pre-activation (+ (* weight input)
14 |                             (* recurrent-weight state)
15 |                             bias))
16 |          (new-state (tanh pre-activation)))
17 |     new-state))
18 |
19 | (let* ((inputs '(0.1 0.2 0.3 0.4))
20 |        (state 0.0)
21 |        (weight 0.5)
22 |        (recurrent-weight 0.6)
23 |        (bias 0.0))
24 |   (loop for value in inputs
25 |         for time from 0
26 |         do (setf state (step-rnn state value weight recurrent-weight bias))
27 |            (format t "Time step ~d -> state ~,3f, output ~,3f~%"
28 |                    time state state)))
--------------------------------------------------------------------------------
/examples/simple-mlp.lisp:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env sbcl --script
2 |
3 | ;;; Simple two-layer perceptron forward pass.
4 | ;;; Run with: sbcl --script examples/simple-mlp.lisp
5 | ;;; Expected output:
6 | ;;; Input vector: (0.1 0.5 0.9)
7 | ;;; Hidden activations: 0.470 0.470
8 | ;;; Output logits: 0.2880 0.2880
9 |
10 | (defun dot-product (vector weights)
11 |   (loop for w in weights
12 |         for x in vector
13 |         sum (* w x)))
14 |
15 | (defun add-bias (values biases)
16 |   (mapcar #'+ values biases))
17 |
18 | (defun relu (values)
19 |   (mapcar (lambda (x) (max 0 x)) values))
20 |
21 | (defun dense-layer (inputs weight-matrix bias-vector &key (activation #'identity))
22 |   (let* ((raw (mapcar (lambda (row) (dot-product inputs row)) weight-matrix))
23 |          (biased (add-bias raw bias-vector)))
24 |     (funcall activation biased)))
25 |
26 | (let* ((input '(0.1 0.5 0.9))
27 |        ;; 2x3 weight matrix and bias vector for the hidden layer.
28 |        (hidden-weights '((0.4 0.3 0.2)
29 |                          (0.4 0.3 0.2)))
30 |        (hidden-bias '(0.1 0.1))
31 |        (hidden (dense-layer input hidden-weights hidden-bias :activation #'relu))
32 |        ;; Output layer (2 units, identity activation).
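;; With hidden activations of 0.47, each output unit below computes
;; 0.2*0.47 + 0.2*0.47 + 0.1 = 0.288, matching the expected logits above.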
33 |        (output-weights '((0.2 0.2)
34 |                          (0.2 0.2)))
35 |        (output-bias '(0.1 0.1))
36 |        (output (dense-layer hidden output-weights output-bias)))
37 |   (format t "Input vector: ~a~%" input)
38 |   (format t "Hidden activations: ~,3f ~,3f~%" (first hidden) (second hidden))
39 |   (format t "Output logits: ~,4f ~,4f~%" (first output) (second output)))
--------------------------------------------------------------------------------
/docs/manifesto.md:
--------------------------------------------------------------------------------
1 | # NeuraLisp Manifesto
2 |
3 | NeuraLisp envisions a Common Lisp environment where differentiable programming and symbolic reasoning co-exist. The
4 | manifesto divides the journey into three phases so that contributors can orient their work and track progress through the
5 | roadmap and changelog.
6 |
7 | ## Phase 0 – Foundational Infrastructure
8 |
9 | *Goal:* establish reliable tensor storage, GPU hooks, and documentation tooling.
10 |
11 | *Status:* **In progress.** The tensor/autograd prototypes, documentation overhaul, runnable examples, and CI pipeline
12 | added in this release all belong to this phase. Remaining tasks include robust tensor constructors, feature-complete
13 | GPU transfer helpers, and a comprehensive smoke test suite.
14 |
15 | ## Phase 1 – Differentiable Primitives
16 |
17 | *Goal:* implement the reusable building blocks required for deep learning workloads (layers, activations, losses,
18 | optimisers).
19 |
20 | *Key deliverables:*
21 |
22 | - Layer constructors that manage weight tensors and register backward functions.
23 | - Numerically stable activation and loss operators.
24 | - Optimiser modules (SGD, Adam) that iterate over trainable parameters.
25 | - Extensive unit tests validating CPU and GPU parity.
26 |
27 | ## Phase 2 – Cognitive Routines
28 |
29 | *Goal:* compose differentiable primitives with symbolic loops to create autonomous cognitive agents.
30 |
31 | *Key deliverables:*
32 |
33 | - Memory and attention modules for representing internal state.
34 | - Policy/value learners that drive decision-making.
35 | - Sensor and effector APIs for integrating with external environments.
36 | - Diagnostic tooling for visualising cognition loops in real time.
37 |
38 | Progress across these phases is summarised in [`ROADMAP.md`](../ROADMAP.md) and annotated release-by-release in
39 | [`CHANGELOG.md`](../CHANGELOG.md). Each pull request should map its scope to at least one phase to keep the community
40 | aligned on long-term goals.
--------------------------------------------------------------------------------
/examples/cognitive-loop.lisp:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env sbcl --script
2 |
3 | ;;; Cognitive loop scenario sketch.
4 | ;;; Run with: sbcl --script examples/cognitive-loop.lisp
5 | ;;; Expected output:
6 | ;;; Normalised sensors: (0.2 0.4 0.8)
7 | ;;; Working memory after update: 0.14 0.28 0.50
8 | ;;; Selected action: TRACK-TARGET
9 | ;;; Confidence score: 0.376
10 |
11 | (defun normalise-sensors (raw-values)
12 |   "Scale raw readings into the [0,1] band given a 0-10 calibration."
13 |   (mapcar (lambda (x) (/ x 10.0)) raw-values))
14 |
15 | (defun update-working-memory (memory sensors)
16 |   "Blend the previous memory with the new sensor reading."
17 |   (mapcar (lambda (old new)
18 |             (+ (* 0.6 old) (* 0.4 new)))
19 |           memory sensors))
20 |
21 | (defun score-actions (memory)
22 |   "Score symbolic actions using a handcrafted heuristic."
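;; Each weight row below sums to 1.0, so every score is a convex
;; combination of the memory slots and stays within the [0, 1] band.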
23 |   (let* ((align-score (+ (* 0.5 (first memory))
24 |                          (* 0.3 (second memory))
25 |                          (* 0.2 (third memory))))
26 |          (evade-score (+ (* 0.2 (first memory))
27 |                          (* 0.4 (second memory))
28 |                          (* 0.4 (third memory))))
29 |          (track-score (+ (* 0.1 (first memory))
30 |                          (* 0.4 (second memory))
31 |                          (* 0.5 (third memory)))))
32 |     `((hold-position . ,align-score)
33 |       (evade . ,evade-score)
34 |       (track-target . ,track-score))))
35 |
36 | (let* ((raw-sensors '(2 4 8))
37 |        (normalised (normalise-sensors raw-sensors))
38 |        (previous-memory '(0.1 0.2 0.3))
39 |        (memory (update-working-memory previous-memory normalised))
40 |        (action-scores (score-actions memory))
41 |        (decision (car (sort action-scores #'> :key #'cdr))))
42 |   (format t "Normalised sensors: ~a~%" normalised)
43 |   (format t "Working memory after update: ~,2f ~,2f ~,2f~%"
44 |           (first memory) (second memory) (third memory))
45 |   (format t "Selected action: ~a~%" (car decision))
46 |   (format t "Confidence score: ~,3f~%" (cdr decision)))
--------------------------------------------------------------------------------
/ROADMAP.md:
--------------------------------------------------------------------------------
1 | # Roadmap
2 |
3 | This roadmap maps upcoming work to the manifesto phases defined in [`docs/manifesto.md`](docs/manifesto.md). Each table
4 | tracks the status of the major deliverables and links to the documentation or examples that showcase current progress.
5 |
6 | ## Phase 0 – Foundational Infrastructure (In progress)
7 |
8 | | Deliverable | Status | Notes |
9 | |-------------|--------|-------|
10 | | Tensor/autograd documentation | ✅ Complete | See [`docs/internals/tensor-autograd.md`](docs/internals/tensor-autograd.md) |
11 | | Example gallery | ✅ Complete | [`examples/`](examples) now ships three runnable scenarios |
12 | | Contributor workflow | ✅ Complete | [`CONTRIBUTING.md`](CONTRIBUTING.md) + CI smoke tests |
13 | | GPU integration plan | ⏳ Planned | Requires optional dependency management and runtime guards |
14 |
15 | ## Phase 1 – Differentiable Primitives (Planned)
16 |
17 | | Deliverable | Status | Notes |
18 | |-------------|--------|-------|
19 | | Layer constructors | ⏳ Planned | Define `linear`, `convolutional`, and `recurrent` forwards/backwards |
20 | | Activation implementations | ⏳ Planned | Fill in `src/activations/*.lisp` with numerically stable ops |
21 | | Loss and optimiser suites | ⏳ Planned | Implement prototypes documented in [`docs/primitives/neural-primitives.md`](docs/primitives/neural-primitives.md) |
22 | | Unit tests | ⏳ Planned | Extend `tests/` with CPU/GPU parity checks |
23 |
24 | ## Phase 2 – Cognitive Routines (Planned)
25 |
26 | | Deliverable | Status | Notes |
27 | |-------------|--------|-------|
28 | | Working memory module | ⏳ Planned | Follow the flow in [`docs/cognition/cognitive-modules.md`](docs/cognition/cognitive-modules.md) |
29 | | Policy/value learners | ⏳ Planned | Requires primitives from Phase 1 |
30 | | Sensor/effector bridges | ⏳ Planned | Documented in manifesto and illustrated in the cognitive-loop example |
31 | | Diagnostics | ⏳ Planned | Build instrumentation around the cognitive loop trace outputs |
32 |
33 | The roadmap is reviewed whenever a pull request lands. Contributors should update the relevant status markers and add
34 | links to new documentation, examples, or tests so stakeholders can track progress across phases.
35 |
--------------------------------------------------------------------------------
/generate_project_files.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # Create directories
4 | mkdir -p src/core
5 | mkdir -p src/layers
6 | mkdir -p src/activations
7 | mkdir -p src/optimizers
8 | mkdir -p src/losses
9 | mkdir -p src/transformers
10 | mkdir -p src/utils
11 | mkdir -p tests/core
12 | mkdir -p tests/layers
13 | mkdir -p tests/activations
14 | mkdir -p tests/optimizers
15 | mkdir -p tests/losses
16 | mkdir -p tests/transformers
17 | mkdir -p examples
18 | mkdir -p docs
19 |
20 | # Create files
21 | touch src/core/tensor.lisp
22 | touch src/core/gpu.lisp
23 | touch src/core/autograd.lisp
24 | touch src/layers/base.lisp
25 | touch src/layers/linear.lisp
26 | touch src/layers/convolutional.lisp
27 | touch src/layers/recurrent.lisp
28 | touch src/layers/attention.lisp
29 | touch src/layers/multihead_attention.lisp
30 | touch src/activations/base.lisp
31 | touch src/activations/relu.lisp
32 | touch src/activations/sigmoid.lisp
33 | touch src/optimizers/base.lisp
34 | touch src/optimizers/sgd.lisp
35 | touch src/optimizers/adam.lisp
36 | touch src/losses/base.lisp
37 | touch src/losses/mse.lisp
38 | touch src/losses/cross_entropy.lisp
39 | touch src/transformers/transformer_layer.lisp
40 | touch src/transformers/encoder.lisp
41 | touch src/transformers/decoder.lisp
42 | touch src/utils/data_loader.lisp
43 | touch src/utils/debugging.lisp
44 | touch tests/core/test_tensor.lisp
45 | touch tests/core/test_gpu.lisp
46 | touch tests/core/test_autograd.lisp
47 | touch tests/layers/test_layers.lisp
48 | touch tests/activations/test_activations.lisp
49 | touch tests/optimizers/test_optimizers.lisp
50 | touch tests/losses/test_losses.lisp
51 | touch tests/transformers/test_transformers.lisp
52 | touch examples/example1.lisp
53 | touch examples/example2.lisp
54 | touch docs/getting_started.md
55 | touch docs/tensor.md
56 | touch docs/autograd.md
57 | touch docs/layers.md
58 | touch docs/activations.md
59 | touch docs/optimizers.md
60 | touch docs/losses.md
61 | touch docs/transformers.md
62 | # touch README.md
63 | touch .gitignore
64 | touch neuralisp.asd
65 |
66 | # Git setup
67 | git init
68 |
69 | # Optional: Configure user details
70 | # (Replace "Your Name" and "your.email@example.com" with your information)
71 | git config user.name "ck46"
72 | git config user.email "prof.chakas@gmail.com"
73 |
74 | git add .
75 | git commit -m "Initial commit"
--------------------------------------------------------------------------------
/tests/core/test_tensor.lisp:
--------------------------------------------------------------------------------
1 | (defpackage :neuralisp.tests.core.tensor
2 |   (:use :cl
3 |         :lisp-unit)
4 |   (:import-from :neuralisp.core.tensor
5 |                 :make-tensor :tensor-add :tensor-subtract :tensor-multiply
6 |                 :tensor-divide :tensor-matmul :tensor-sum :tensor-mean
7 |                 :tensor-data :tensor-shape))
8 | (in-package :neuralisp.tests.core.tensor)
9 |
10 | (defun same-shape-and-equal-data (tensor-a tensor-b)
11 |   (and (equal (tensor-shape tensor-a) (tensor-shape tensor-b))
12 |        (magicl:matrix-equalp (tensor-data tensor-a) (tensor-data tensor-b))))
13 |
14 | (define-test test-tensor-add
15 |   (let ((a (make-tensor '(2 2) :initial-element 1))
16 |         (b (make-tensor '(2 2) :initial-element 2))
17 |         (expected (make-tensor '(2 2) :initial-element 3)))
18 |     (assert-true (same-shape-and-equal-data (tensor-add a b) expected))))
19 |
20 | (define-test test-tensor-subtract
21 |   (let ((a (make-tensor '(2 2) :initial-element 4))
22 |         (b (make-tensor '(2 2) :initial-element 1))
23 |         (expected (make-tensor '(2 2) :initial-element 3)))
24 |     (assert-true (same-shape-and-equal-data (tensor-subtract a b) expected))))
25 |
26 | (define-test test-tensor-multiply
27 |   (let ((a (make-tensor '(2 2) :initial-element 2))
28 |         (b (make-tensor '(2 2) :initial-element 3))
29 |         (expected (make-tensor '(2 2) :initial-element 6)))
30 |     (assert-true (same-shape-and-equal-data (tensor-multiply a b) expected))))
31 |
32 | (define-test test-tensor-divide
33 |   (let ((a (make-tensor '(2 2) :initial-element 6))
34 |         (b (make-tensor '(2 2) :initial-element 2))
35 |         (expected (make-tensor '(2 2) :initial-element 3)))
36 |     (assert-true (same-shape-and-equal-data (tensor-divide a b) expected))))
37 |
38 | (define-test test-tensor-matmul
39 |   (let ((a (make-tensor '(2 2) :initial-element 1))
40 |         (b (make-tensor '(2 2) :initial-element 2))
41 |         (expected (make-tensor '(2 2) :initial-element 4)))
42 |     (assert-true (same-shape-and-equal-data (tensor-matmul a b) expected))))
43 |
44 | (define-test test-tensor-sum
45 |   (let ((a (make-tensor '(2 2) :initial-element 2)))
46 |     (assert-equal (tensor-sum a) 8)))
47 |
48 | (define-test test-tensor-mean
49 |   (let ((a (make-tensor '(2 2) :initial-element 4)))
50 |     (assert-equal (tensor-mean a) 4)))
51 |
52 | ;;; Load and run the tests
53 | (lisp-unit:run-tests :all :neuralisp.tests.core.tensor)
--------------------------------------------------------------------------------
/src/core/autograd.lisp:
--------------------------------------------------------------------------------
1 | (defpackage :neuralisp.core.autograd
2 |   (:use :common-lisp)
3 |   (:import-from :neuralisp.core.tensor :tensor :tensor-data :tensor-shape :make-tensor)
4 |   (:import-from :neuralisp.core.gpu :move-to-gpu :move-to-cpu)
5 |   (:export :variable :variable-value :variable-gradient :variable-backward
6 |            :create-variable :backward :zero-gradient :partial-grad :apply-partial-grad))
7 | (in-package :neuralisp.core.autograd)
8 |
9 | (defclass variable ()
10 |   ((value :initarg :value
11 |           :reader variable-value
12 |           :type tensor
13 |           :documentation "The tensor object representing the value of the variable.")
14 |    (gradient :initarg :gradient
15 |              :accessor variable-gradient
16 |              :type (or null tensor)
17 |              :documentation "The tensor object representing the gradient of the variable.")
18 |    (backward :initarg :backward
19 |              :accessor variable-backward
20 |              :type (or null function)
21 |              :documentation "The backward function for computing gradients.")))
22 |
23 | (defun create-variable (value &key (requires-grad t) (on-gpu nil))
24 |   "Create a new autograd variable holding the input value and a zero-filled
25 | gradient buffer, with the backward function explicitly set to nil."
26 |   (let ((tensor-value (if on-gpu (move-to-gpu value) value))
27 |         (tensor-grad (when requires-grad
28 |                        (let ((grad-tensor (make-tensor (tensor-shape value) :initial-element 0)))
29 |                          (if on-gpu (move-to-gpu grad-tensor) grad-tensor)))))
30 |     (make-instance 'variable :value tensor-value
31 |                    :gradient tensor-grad :backward nil)))
32 |
33 | (defun backward (var &optional (grad-output 1.0))
34 |   "Invoke the variable's backward function, if any, passing the accumulated
35 | gradient output."
36 |   (when (functionp (variable-backward var))
37 |     (funcall (variable-backward var) grad-output)))
38 |
39 | (defun zero-gradient (var)
40 |   "Set the gradient of the variable to zero."
41 |   (setf (variable-gradient var)
42 |         (let ((zero-tensor (make-tensor (tensor-shape (variable-value var)) :initial-element 0)))
43 |           (if (eql (tensor-data (variable-value var)) :gpu) (move-to-gpu zero-tensor) zero-tensor))))
44 |
45 | (defun partial-grad (node-a node-b)
46 |   "Compute the partial derivatives between two 'variable' nodes."
47 |   (let ((prev-node node-b) (grad 1))
48 |     (loop
49 |       (when (eq prev-node node-a)
50 |         (return grad))
51 |       (setq grad (* grad (variable-gradient prev-node)))
52 |       (setq prev-node (variable-backward prev-node)))))
53 |
54 | (defun apply-partial-grad (node-a node-b)
55 |   "Apply the computed partial gradients of node-b with respect to node-a to the gradients of both nodes."
56 |   (let ((partial (partial-grad node-a node-b)))
57 |     (setf (variable-gradient node-a) (* (variable-gradient node-a) partial))
58 |     (setf (variable-gradient node-b) (* (variable-gradient node-b) partial))))
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # Contributing to NeuraLisp
2 |
3 | Thank you for considering a contribution! NeuraLisp is still in its foundational phase, so every change should reinforce
4 | the core tensor/autograd modules and keep the roadmap realistic. This guide summarises the expectations for code style,
5 | documentation, and workflow.
6 |
7 | ## Getting started
8 |
9 | 1. Fork the repository and create a feature branch.
10 | 2. Install the dependencies listed in [`README.md`](README.md) (SBCL, Quicklisp, `magicl`, and optionally `cl-cuda`).
11 | 3. Run the smoke suite before making changes to ensure your environment is wired correctly:
12 |    ```bash
13 |    ./tests/run-smoke.sh
14 |    ```
15 |
16 | ## Coding standards
17 |
18 | - **Common Lisp style.** Follow the community conventions promoted by the SBCL project: two-space indentation, hyphenated
19 |   symbol names, and docstrings for all public functions.
20 | - **Packages.** Export only the symbols that must be consumed by other modules. Use `in-package` at the top of each file
21 |   and keep `defpackage` forms in `*.lisp` files rather than ASDF metadata.
22 | - **Error handling.** Prefer signalling meaningful conditions over returning `nil`. Wrap foreign-library calls with
23 |   guards so that missing optional dependencies (e.g. CUDA) degrade gracefully, as in the sketch below.
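A minimal sketch of that guard pattern (the `*gpu-available-p*` flag and `call-with-gpu-fallback` helper are
illustrative names for this guide, not current project code):

```common-lisp
;; Illustrative only: probe once for the optional CUDA bindings at load time.
(defvar *gpu-available-p* (not (null (find-package :cl-cuda)))
  "True when the optional cl-cuda system has been loaded.")

(defun call-with-gpu-fallback (gpu-thunk cpu-thunk)
  "Run GPU-THUNK when CUDA is available; otherwise degrade gracefully to CPU-THUNK."
  (if *gpu-available-p*
      (funcall gpu-thunk)
      (funcall cpu-thunk)))
```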
24 |
25 | ## Documentation requirements
26 |
27 | Every pull request should include the relevant documentation updates:
28 |
29 | - Update the appropriate page in `docs/` if you change behaviour or add a new module. The `docs/internals/` and
30 |   `docs/primitives/` sections describe the canonical structure expected for architecture diagrams and code snippets.
31 | - Expand the examples if you introduce new workflows. Each example script must run via `sbcl --script` and print its
32 |   own expected output for quick verification.
33 | - Amend [`CHANGELOG.md`](CHANGELOG.md) and [`ROADMAP.md`](ROADMAP.md) when your change advances a manifesto phase.
34 |
35 | ## Testing
36 |
37 | The repository currently ships with a documentation-focused smoke test:
38 |
39 | ```bash
40 | ./tests/run-smoke.sh
41 | ```
42 |
43 | This command checks that the example scripts execute and that critical documentation files exist. As the tensor and
44 | autograd libraries stabilise we will extend the suite with unit tests that validate numerical correctness across CPU and
45 | GPU backends.
46 |
47 | The CI workflow in [`.github/workflows/ci.yml`](.github/workflows/ci.yml) must stay green. Please run the smoke suite
48 | locally before submitting a pull request.
49 |
50 | ## Pull request checklist
51 |
52 | - [ ] Tests (`./tests/run-smoke.sh`) pass locally.
53 | - [ ] Documentation in `docs/` or `examples/` reflects your change.
54 | - [ ] `CHANGELOG.md` contains an entry for your work under the correct manifesto phase.
55 | - [ ] `ROADMAP.md` status tables are updated if scope has shifted.
56 | - [ ] The pull request description references the manifesto phase(s) touched.
57 |
58 | Keeping these checkpoints in sync ensures that contributors, reviewers, and stakeholders share the same context as the
59 | project progresses through the manifesto phases.
--------------------------------------------------------------------------------
/docs/cognition/cognitive-modules.md:
--------------------------------------------------------------------------------
1 | # Cognitive Modules
2 |
3 | The NeuraLisp manifesto ultimately targets autonomous cognitive systems that blend differentiable reasoning with symbolic
4 | control. This primer summarises the planned module graph and links it to the working examples.
5 |
6 | ## Architectural overview
7 |
8 | ```mermaid
9 | flowchart LR
10 |     subgraph Perception
11 |         Enc[Sensor Encoder]
12 |         WM[Working Memory]
13 |     end
14 |
15 |     subgraph Cognition
16 |         Planner[Deliberation Loop]
17 |         Value[Value Model]
18 |         Policy[Policy Model]
19 |     end
20 |
21 |     subgraph Action
22 |         Eff[Effector API]
23 |     end
24 |
25 |     Enc --> WM
26 |     WM --> Planner
27 |     Planner --> Policy
28 |     Planner --> Value
29 |     Policy --> Eff
30 |     Value --> Planner
31 | ```
32 |
33 | - **Perception** captures sensory tensors, normalises them, and writes them into working memory.
34 | - **Cognition** evaluates the memory state, queries learned value/policy models, and decides on the next action chunk.
35 | - **Action** publishes the chosen command to the environment interface.
36 |
37 | Each box will be implemented as a composition of differentiable primitives plus symbolic glue once the lower-level
38 | libraries stabilise.
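As a thought experiment, a single pass through these boxes could eventually read as the sketch below. Every package
and function named here is hypothetical, mirroring the package prefixes tabulated in the next section:

```common-lisp
;; Hypothetical cycle; none of these packages or functions exist yet.
(defun run-cognitive-cycle (environment working-memory)
  (let* ((sensors (neuralisp.cognition.sensors:encode environment))          ; Perception
         (memory (neuralisp.cognition.memory:blend working-memory sensors))
         (action (neuralisp.cognition.control:deliberate memory)))           ; Cognition
    (neuralisp.cognition.effectors:dispatch action)                          ; Action
    memory))
```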
39 |
40 | ## Module responsibilities
41 |
42 | | Module | Package prefix | Summary |
43 | |--------|----------------|---------|
44 | | Sensor encoders | `neuralisp.cognition.sensors` | Convert raw environment data into tensors, potentially streaming on GPU |
45 | | Working memory | `neuralisp.cognition.memory` | Maintain differentiable buffers and expose attention-style addressing |
46 | | Deliberation loop | `neuralisp.cognition.control` | Run cognitive cycles that call policy/value networks and symbolic planners |
47 | | Policy model | `neuralisp.cognition.policy` | Choose candidate actions based on latent state |
48 | | Value model | `neuralisp.cognition.value` | Score state-action pairs for planning |
49 | | Effector API | `neuralisp.cognition.effectors` | Send decisions to external processes |
50 |
51 | ## Example: cognitive loop scaffold
52 |
53 | [`examples/cognitive-loop.lisp`](../../examples/cognitive-loop.lisp) demonstrates how these modules stitch together with
54 | mock data today. The script simulates sensor input, updates a working memory tensor, chooses a symbolic action, and
55 | prints the resulting decision trace. Although the underlying tensor maths is still manual, the flow mirrors the roadmap
56 | expectations for *Phase 2 – Cognitive Routines*.
57 |
58 | Key takeaways from the example:
59 |
60 | 1. **State normalisation** is performed up front to keep downstream modules agnostic to raw sensor scaling.
61 | 2. **Working memory updates** use the tensor helpers to highlight where differentiable attention mechanisms will live.
62 | 3. **Loop instrumentation** prints metrics that the future reinforcement-learning stack will optimise.
63 |
64 | As new primitives arrive, contributors should extend the example to call the proper packages listed above and document
65 | the changes in [`CHANGELOG.md`](../../CHANGELOG.md) so the research cadence remains transparent.
--------------------------------------------------------------------------------
/src/core/gpu.lisp:
--------------------------------------------------------------------------------
1 | (defpackage :neuralisp.core.gpu
2 |   (:use :common-lisp :cl-cuda)
3 |   (:import-from :neuralisp.core.tensor :tensor :tensor-data :tensor-shape :make-tensor
4 |                 :tensor-gpu-pointer)
5 |   (:export :initialize-gpu :shutdown-gpu :to-gpu :from-gpu :gpu-allocate :gpu-deallocate
6 |            :tensor-on-gpu-p :tensor-ensure-on-gpu :tensor-ensure-on-cpu :move-to-gpu :move-to-cpu))
7 | (in-package :neuralisp.core.gpu)
8 |
9 | (defun initialize-gpu ()
10 |   "Initialize the GPU and cl-cuda library."
11 |   (setf cl-cuda.all:use-cache-p t) ; Enable caching of compiled CUDA programs
12 |   (dolist (platform (cl-cuda.all:get-platform-ids))
13 |     (dolist (device (cl-cuda.all:get-device-ids platform))
14 |       (format t "Platform: ~A, Device: ~A~%" platform device))))
15 |
16 | (defun shutdown-gpu ()
17 |   "Clean up the GPU and cl-cuda library before exiting."
18 |   (cl-cuda.basic:shutdown))
19 |
20 | (defun to-gpu (tensor)
21 |   "Send the tensor to GPU memory."
22 |   (let ((tensor-dev-ptr (cublas:allocate (reduce #'* (tensor-shape tensor)))))
23 |     (cublas:with-cublas
24 |       (cublas:send-to tensor-dev-ptr (tensor-data tensor) (reduce #'* (tensor-shape tensor))))
25 |     tensor-dev-ptr))
26 |
27 | (defun from-gpu (tensor-dev-ptr shape)
28 |   "Retrieve tensor from GPU memory, given its device-pointer and shape."
29 |   (let ((tensor-data (make-array (reduce #'* shape) :element-type 'single-float)))
30 |     (cublas:with-cublas
31 |       (cublas:retrieve-from tensor-dev-ptr tensor-data (reduce #'* shape)))
32 |     (make-instance 'tensor :data tensor-data :shape shape)))
33 |
34 | (defmacro gpu-allocate (var &rest args)
35 |   "Create and allocate GPU memory for a tensor, storing its device-pointer in the given var."
36 |   `(let ((,var (cublas:allocate (reduce #'* ',args))))
37 |      ,var))
38 |
39 | (defmacro gpu-deallocate (var)
40 |   "Deallocate GPU memory associated with a tensor's device-pointer."
41 |   `(cublas:deallocate ,var))
42 |
43 | (defun tensor-on-gpu-p (tensor)
44 |   "Verify if the provided tensor is stored on the GPU."
45 |   (eql (tensor-data tensor) :gpu))
46 |
47 | (defun move-to-gpu (tensor)
48 |   "Move the provided tensor to the GPU."
49 |   (if (tensor-on-gpu-p tensor)
50 |       tensor
51 |       (make-instance 'tensor :data :gpu
52 |                      :shape (tensor-shape tensor))))
53 |
54 | (defun move-to-cpu (tensor)
55 |   "Move the provided tensor back to the CPU."
56 |   (if (tensor-on-gpu-p tensor)
57 |       (make-instance 'tensor :data (copy-seq (tensor-data tensor)) :shape (tensor-shape tensor))
58 |       tensor))
59 |
60 | (defun tensor-ensure-on-gpu (tensor)
61 |   "Ensures tensor's data is on GPU. If not, sends the data to GPU."
62 |   (unless (tensor-on-gpu-p tensor)
63 |     (setf (tensor-gpu-pointer tensor) (to-gpu tensor))
64 |     (setf (tensor-data tensor) nil))
65 |   tensor)
66 |
67 | (defun tensor-ensure-on-cpu (tensor)
68 |   "Ensures tensor's data is on CPU. If not, retrieves the data from GPU."
69 |   (when (tensor-on-gpu-p tensor)
70 |     (setf (tensor-data tensor) (tensor-data (from-gpu (tensor-gpu-pointer tensor) (tensor-shape tensor))))
71 |     (gpu-deallocate (tensor-gpu-pointer tensor))
72 |     (setf (tensor-gpu-pointer tensor) nil))
73 |   tensor)
--------------------------------------------------------------------------------
/docs/autograd.md:
--------------------------------------------------------------------------------
1 | # Autograd
2 |
3 | The autograd module provides automatic differentiation and gradient computation for the Neuralisp machine learning framework. This module defines variable classes and functions for creating variables that hold tensor values, gradient tensors, and backward functions to compute changes in variables when updating the network.
4 |
5 | ## Usage
6 |
7 | To use the autograd module, import the functions and classes provided by the `neuralisp.core.autograd` package:
8 |
9 | ```common-lisp
10 | (use-package :neuralisp.core.autograd)
11 | ```
12 |
13 | ## Classes
14 |
15 | ### `variable`
16 |
17 | The `variable` class represents an autograd variable and contains the following slots:
18 |
19 | - `value`: A tensor that represents the value of the variable.
20 | - `gradient`: A tensor (or null) that represents the gradient of the variable.
21 | - `backward`: A function (or null) used for computing the gradients during the backward pass.
22 |
23 | ## Functions
24 |
25 | ### `create-variable` (value &key (requires-grad t) (on-gpu nil))
26 |
27 | This function creates a new autograd variable with the input value, a gradient tensor (if requires-grad is true), and sets the backward function to `nil`. If `on-gpu` is true, the created variable and its gradients will be moved to the GPU.
28 |
29 | Arguments:
30 |
31 | - `value`: A tensor representing the value of the new variable.
32 | - `requires-grad` (optional, default: `t`): If true, the variable's gradient tensor will be created.
33 | - `on-gpu` (optional, default: `nil`): If true, the variable's tensor and gradients will be moved to the GPU.
34 |
35 | Returns:
36 |
37 | - A new autograd `variable` instance.
38 |
39 | ### `backward` (var &optional (grad-output 1.0))
40 |
41 | Invokes the variable's backward function, when one is assigned, with the accumulated gradient output.
42 |
43 | Arguments:
44 |
45 | - `var`: An autograd `variable` to compute gradients for.
46 | - `grad-output` (optional, default: `1.0`): A scalar value for the gradient output's accumulation.
47 |
48 | ### `zero-gradient` (var)
49 |
50 | Sets the gradient of the variable to zero.
51 |
52 | Arguments:
53 |
54 | - `var`: An autograd `variable` instance whose gradient will be set to zero.
55 |
56 | ### `partial-grad` (node-a node-b)
57 |
58 | Computes the partial derivatives between two 'variable' nodes.
59 |
60 | Arguments:
61 |
62 | - `node-a` and `node-b`: `variable` nodes in the computational graph.
63 |
64 | Returns:
65 |
66 | - A scalar representing the computed partial gradients.
67 |
68 | ### `apply-partial-grad` (node-a node-b)
69 |
70 | Applies the computed partial gradients of node-b with respect to node-a to the gradients of both nodes.
71 |
72 | Arguments:
73 |
74 | - `node-a` and `node-b`: `variable` nodes in the computational graph.
75 |
76 | ## Examples
77 |
78 | Creating an autograd variable:
79 |
80 | ```common-lisp
81 | (defparameter *var*
82 |   (create-variable (make-tensor (list 3 3) :initial-element 0.5d0)))
83 | ```
84 |
85 | Performing the backward pass on a variable:
86 |
87 | ```common-lisp
88 | ; Assuming *var* has a backward function assigned
89 | (backward *var*)
90 | ```
91 |
92 | Setting the gradient of a variable to zero:
93 |
94 | ```common-lisp
95 | (zero-gradient *var*)
96 | ```
97 |
98 | Computing and applying partial gradients between variables:
99 |
100 | ```common-lisp
101 | ; Assuming *var-a* and *var-b* are variables in the computational graph
102 | (defparameter *partial*
103 |   (partial-grad *var-a* *var-b*))
104 |
105 | (apply-partial-grad *var-a* *var-b*)
106 | ```
--------------------------------------------------------------------------------
/src/core/tensor.lisp:
--------------------------------------------------------------------------------
1 | (defpackage :neuralisp.core.tensor
2 |   (:use :cl)
3 |   (:import-from :magicl :matrix :addf :subf :mult :divf
4 |                 :transpose :dot :sum :mean)
5 |   (:export :tensor :tensor-data :tensor-shape :make-tensor
6 |            :tensor-add :tensor-subtract :tensor-multiply :tensor-divide
7 |            :tensor-matmul :tensor-sum :tensor-mean))
8 | (in-package :neuralisp.core.tensor)
9 |
10 | (defclass tensor ()
11 |   ((data :initarg :data
12 |          :accessor tensor-data
13 |          :type magicl:matrix
14 |          :documentation "N-dimensional array holding the tensor's data.")
15 |    (shape :initarg :shape
16 |           :accessor tensor-shape
17 |           :type list
18 |           :documentation "List of integers representing the tensor's shape.")
19 |    (gpu-pointer :initform nil
20 |                 :accessor tensor-gpu-pointer
21 |                 :type (or null cl-cuda.buffer:cublas-device-pointer)
22 |                 :documentation "Pointer to tensor's data on GPU memory.")))
23 |
24 | (defun make-tensor (shape &key (initial-element 0) (on-gpu nil))
25 |   "Create a new tensor from input 'shape' and initialize it with 'initial-element'.
26 | Optionally move the tensor to GPU memory if 'on-gpu' is true."
27 |   (let ((tensor (make-instance 'tensor
28 |                                :data (magicl:const initial-element shape :layout :row-major)
29 |                                :shape shape)))
30 |     (when on-gpu
31 |       (move-to-gpu tensor))
32 |     tensor))
33 |
34 | ; Basic tensor operations
35 |
36 | (defun tensor-add (tensor-a tensor-b)
37 |   "Compute the element-wise addition of two tensors."
38 |   (let ((res (magicl:copy-matrix (tensor-data tensor-a))))
39 |     (magicl:addf res (tensor-data tensor-b))
40 |     (make-instance 'tensor :data res :shape (tensor-shape tensor-a))))
41 |
42 | (defun tensor-subtract (tensor-a tensor-b)
43 |   "Compute the element-wise subtraction of two tensors."
44 |   (let ((res (magicl:copy-matrix (tensor-data tensor-a))))
45 |     (magicl:subf res (tensor-data tensor-b))
46 |     (make-instance 'tensor :data res :shape (tensor-shape tensor-a))))
47 |
48 | (defun tensor-multiply (tensor-a tensor-b)
49 |   "Compute the element-wise multiplication of two tensors."
50 |   (let ((res (magicl:copy-matrix (tensor-data tensor-a))))
51 |     (magicl:mult ".*" res (tensor-data tensor-b))
52 |     (make-instance 'tensor :data res :shape (tensor-shape tensor-a))))
53 |
54 | (defun tensor-divide (tensor-a tensor-b)
55 |   "Compute the element-wise division of two tensors."
56 |   (let ((res (magicl:copy-matrix (tensor-data tensor-a))))
57 |     (magicl:divf ".*" res (tensor-data tensor-b))
58 |     (make-instance 'tensor :data res :shape (tensor-shape tensor-a))))
59 |
60 | ; Matrix multiplication and reduction functions
61 |
62 | (defun tensor-matmul (tensor-a tensor-b &key (transpose-a nil) (transpose-b nil))
63 |   "Compute the matrix multiplication of two tensors."
64 |   (let* ((a (if transpose-a (magicl:transpose (tensor-data tensor-a)) (tensor-data tensor-a)))
65 |          (b (if transpose-b (magicl:transpose (tensor-data tensor-b)) (tensor-data tensor-b)))
66 |          (res (magicl:mult a b)))
67 |     (make-instance 'tensor :data res :shape (list (magicl:matrix-rows res) (magicl:matrix-cols res)))))
68 |
69 | (defun tensor-sum (tensor &key (axis nil) (keepdims nil))
70 |   "Compute the sum of tensor elements along the specified axis or axes."
71 |   (let ((sum (magicl:sum (tensor-data tensor) :axis axis :keepdims keepdims)))
72 |     (if keepdims
73 |         (make-instance 'tensor :data sum :shape (magicl:matrix-shape sum))
74 |         sum))) ; return scalar
75 |
76 | (defun tensor-mean (tensor &key (axis nil) (keepdims nil))
77 |   "Compute the mean of tensor elements along the specified axis or axes."
78 |   (let ((mean (magicl:mean (tensor-data tensor) :axis axis :keepdims keepdims)))
79 |     (if keepdims
80 |         (make-instance 'tensor :data mean :shape (magicl:matrix-shape mean))
81 |         mean))) ; return scalar
--------------------------------------------------------------------------------
/docs/tensor.md:
--------------------------------------------------------------------------------
1 | # Tensor
2 |
3 | The tensor module provides basic tensor operations and data structures for the Neuralisp machine learning framework. This module defines tensor classes and functions to create, manipulate and perform mathematical operations on multi-dimensional tensors, including support for GPU acceleration.
4 |
5 | ## Usage
6 |
7 | To use the tensor module, import the functions and classes provided by the `neuralisp.core.tensor` package:
8 |
9 | ```common-lisp
10 | (use-package :neuralisp.core.tensor)
11 | ```
12 |
13 | ## Classes
14 |
15 | ### `tensor`
16 |
17 | The `tensor` class represents a multi-dimensional array (tensor) and contains the following slots:
18 |
19 | - `data`: A `magicl:matrix` holding the numerical values of the tensor.
20 | - `shape`: A list of integers representing the dimensions of the tensor.
- `gpu-pointer`: A device pointer used while the tensor's data resides in GPU memory, `nil` otherwise.
21 |
22 | ## Functions
23 |
24 | ### `make-tensor` (shape &key (initial-element 0) (on-gpu nil))
25 |
26 | This function creates a new tensor with the specified shape and initializes it with the given initial-element. If `on-gpu` is true, the created tensor will be moved to the GPU.
27 |
28 | Arguments:
29 |
30 | - `shape`: A list of integers representing the dimensions of the new tensor.
31 | - `initial-element` (optional, default: `0`): The initial value used to fill the tensor.
32 | - `on-gpu` (optional, default: `nil`): If true, the tensor will be moved to the GPU.
33 |
34 | Returns:
35 |
36 | - A new `tensor` instance.
37 |
38 | ## Tensor Operations
39 |
40 | The following functions perform element-wise operations on tensors.
41 |
42 | - `tensor-add` (tensor-a tensor-b)
43 | - `tensor-subtract` (tensor-a tensor-b)
44 | - `tensor-multiply` (tensor-a tensor-b)
45 | - `tensor-divide` (tensor-a tensor-b)
46 |
47 | ### Broadcasting
48 |
49 | When performing element-wise operations on tensors with different shapes, the tensor module is intended to broadcast the smaller tensor to match the shape of the larger tensor when the shapes are compatible. The current prototypes assume compatible shapes; automatic broadcasting and shape validation are still planned work.
50 |
51 | ### Matrix Multiplication
52 |
53 | - `tensor-matmul` (tensor-a tensor-b &key (transpose-a nil) (transpose-b nil))
54 |
55 | This function performs matrix multiplication between two tensors.
56 |
57 | Arguments:
58 |
59 | - `tensor-a` and `tensor-b`: Tensors to be multiplied
60 | - `transpose-a` (optional, default: `nil`): If true, tensor-a will be transposed before multiplying
61 | - `transpose-b` (optional, default: `nil`): If true, tensor-b will be transposed before multiplying
62 |
63 | Returns:
64 |
65 | - A new `tensor` instance representing the result of the multiplication.
66 |
67 | ### Reduction Operations
68 |
69 | - `tensor-sum` (tensor &key (axis nil) (keepdims nil))
70 | - `tensor-mean` (tensor &key (axis nil) (keepdims nil))
71 |
72 | These functions perform reduction operations on a tensor along the specified axis or axes. If no axis is specified, the reduction is applied across all elements of the tensor.
73 |
74 | Arguments:
75 |
76 | - `tensor`: A tensor to perform the reduction operation on
77 | - `axis` (optional, default: `nil`): An integer or list of integers representing the axis or axes to reduce
78 | - `keepdims` (optional, default: `nil`): If true, the reduced axes will be kept with size 1
79 |
80 | ## Examples
81 |
82 | Creating a tensor:
83 |
84 | ```common-lisp
85 | (defparameter *a*
86 |   (make-tensor (list 2 3) :initial-element 0.5d0))
87 | ```
88 |
89 | Performing element-wise operations on tensors:
90 |
91 | ```common-lisp
92 | (defparameter *b*
93 |   (tensor-add *a* *a*))
94 | ```
95 |
96 | Matrix multiplication:
97 |
98 | ```common-lisp
99 | (defparameter *c*
100 |   (tensor-matmul *a* *a* :transpose-b t))
101 | ```
102 |
103 | Performing reduction operations:
104 |
105 | ```common-lisp
106 | (defparameter *sum*
107 |   (tensor-sum *a* :axis 0))
108 |
109 | (defparameter *mean*
110 |   (tensor-mean *a* :axis 1 :keepdims t))
111 | ```
--------------------------------------------------------------------------------
/docs/primitives/neural-primitives.md:
--------------------------------------------------------------------------------
1 | # Neural Primitives
2 |
3 | This reference tracks the differentiable building blocks that NeuraLisp exposes today and the ones scheduled for the
4 | next manifesto phase. The primitives fall into three families: activations, parametric layers, and optimisation
5 | operators.
6 |
7 | ## Current surface
8 |
9 | Even though many source files are still skeletal, the repository establishes the public packages that the final
10 | implementations will live in. Contributors can load the stubs to experiment with API shapes while the core tensor and
11 | autograd layers are stabilised.
12 |
13 | ```mermaid
14 | flowchart TD
15 |     Core[neuralisp.core.tensor]
16 |     Auto[neuralisp.core.autograd]
17 |     Act[:neuralisp.activations]
18 |     Layer[:neuralisp.layers]
19 |     Loss[:neuralisp.losses]
20 |     Opt[:neuralisp.optimizers]
21 |
22 |     Core --> Act
23 |     Core --> Layer
24 |     Core --> Loss
25 |     Core --> Opt
26 |     Auto --> Layer
27 |     Auto --> Loss
28 |     Auto --> Opt
29 | ```
30 |
31 | The diagram highlights the dependency direction: all primitives flow through the tensor core, while the trainable
32 | components also depend on autograd metadata.
33 |
34 | ## Activation functions
35 |
36 | `src/activations/` currently defines package scaffolding for the standard set of nonlinearities (ReLU, Sigmoid, Tanh).
37 | The files will eventually define generic methods that accept `variable` instances and register their backward passes.
38 | During *Phase 1 – Differentiable Primitives* the following checklist will guide implementation:
39 |
40 | - [ ] Define a `defgeneric`/`defmethod` pair for each activation that accepts tensors and returns variables.
41 | - [ ] Register backward lambdas that compose with `partial-grad`.
42 | - [ ] Provide numerical stability tests that exercise CPU and GPU tensors.
43 |
44 | ## Linear and convolutional layers
45 |
46 | Layer files under `src/layers/` are intentionally empty so that the community can agree on constructor signatures before
47 | hardening the implementation. The proposed flow is illustrated below.
48 |
49 | ```mermaid
50 | sequenceDiagram
51 |     participant User
52 |     participant Layer
53 |     participant Tensor
54 |     participant Autograd
55 |
56 |     User->>Layer: (make-instance 'linear :in 4 :out 8)
57 |     Layer->>Tensor: allocate weight/bias tensors
58 |     Layer->>Autograd: wrap trainable parameters with variable objects
59 |     User->>Layer: (forward layer input)
60 |     Layer->>Tensor: compute matmul / bias add
61 |     Layer->>Autograd: record backward closure
62 | ```
63 |
64 | ## Loss functions and optimisers
65 |
66 | Loss modules (`src/losses/`) will consume predictions and targets, producing scalar variables whose gradients backpropagate
67 | through the model graph. Optimisers (`src/optimizers/`) will own the parameter update rules. Both modules depend on the
68 | core tensor math plus future broadcasting utilities outlined in the internals document.
69 |
70 | ### Planned components
71 |
72 | | Primitive | Status | Notes |
73 | |-----------|--------|-------|
74 | | Mean-squared error | Prototype signature present | Implementation pending better tensor broadcasting |
75 | | Cross-entropy | Prototype signature present | Requires numerically stable `log-softmax` helper |
76 | | SGD | Header stub | Needs parameter iteration helpers |
77 | | Adam | Header stub | Depends on per-parameter moment buffers |
78 |
79 | ## Working with the stubs today
80 |
81 | Until the primitives are fully implemented, examples rely on manual tensor composition to showcase the intended usage
82 | pattern. The comments in [`examples/simple-mlp.lisp`](../../examples/simple-mlp.lisp) describe how to migrate the
83 | hand-rolled computations into layer abstractions once the library fills in.
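As a rough before/after, the migration would swap the example's hand-rolled `dense-layer` calls for the constructor
flow from the sequence diagram above. The `linear` class, `forward` generic, and `:in`/`:out` initargs are proposed
names only and are not implemented yet:

```common-lisp
;; Today: (dense-layer input hidden-weights hidden-bias :activation #'relu)
;; Proposed Phase 1 shape (sketch only):
(let* ((layer (make-instance 'linear :in 3 :out 2)) ; allocates weight/bias tensors
       (hidden (forward layer input)))              ; records the backward closure
  hidden)
```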
84 |
85 | Contributors experimenting with new primitives should link their work to the roadmap items in [`ROADMAP.md`](../../ROADMAP.md)
86 | so that the phase progression remains visible to downstream users.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # NeuraLisp
2 |
3 | NeuraLisp is an experimental neural computing environment for Common Lisp. The current codebase focuses on
4 | foundational tensor structures, automatic differentiation scaffolding, and the research manifesto that guides the
5 | future cognitive roadmap. Many higher-level layers, optimisers, and cognitive agents are still stubs, but the
6 | supporting infrastructure—documentation, examples, and contributor workflow—is now in place so that the community can
7 | iterate safely.
8 |
9 | ## Project highlights
10 |
11 | - **Tensor core prototypes** implemented in [`src/core/tensor.lisp`](src/core/tensor.lisp) for constructing tensors,
12 |   moving data between CPU/GPU backends, and performing elementary arithmetic.
13 | - **Autograd scaffolding** in [`src/core/autograd.lisp`](src/core/autograd.lisp) outlining differentiable variables and
14 |   gradient accumulation primitives for future optimisation work.
15 | - **GPU hooks** via [`src/core/gpu.lisp`](src/core/gpu.lisp) demonstrating how CUDA bindings will be integrated (the
16 |   module currently targets `cl-cuda` and is optional during development).
17 | - **Living manifesto and roadmap** that document the long-term vision and the current development phase.
18 | - **Runnable example scripts** under [`examples/`](examples) that illustrate a minimal MLP forward pass, a symbolic
19 |   sequence model sketch, and a cognitive control loop narrative, all instrumented with comments and expected output.
20 | - **Automated smoke tests** and contribution guidelines that keep documentation, examples, and roadmap updates aligned.
21 |
22 | ## Quickstart
23 |
24 | ### 1. Install dependencies
25 |
26 | | Dependency | Purpose | Notes |
27 | |------------|---------|-------|
28 | | [SBCL](https://www.sbcl.org/) (or another ANSI Common Lisp) | Runs the NeuraLisp source and examples | Tested with SBCL ≥ 2.3 |
29 | | [Quicklisp](https://www.quicklisp.org/beta/) | Manages third-party libraries | Required to pull `magicl` and other math deps |
30 | | [`magicl`](https://github.com/quil-lang/magicl) | Dense linear algebra backend | Load through Quicklisp (`(ql:quickload :magicl)`) |
31 | | [`cl-cuda`](https://github.com/takagi/cl-cuda) *(optional)* | CUDA bindings for GPU experiments | Only needed if you intend to evaluate `neuralisp.core.gpu` |
32 |
33 | Clone the repository and register the project directory with ASDF (Quicklisp does this automatically when the repo lives
34 | under `~/quicklisp/local-projects/`):
35 |
36 | ```bash
37 | git clone https://github.com/yourusername/neuralisp.git
38 | cd neuralisp
39 | ```
40 |
41 | ### 2. Load the core packages
42 |
43 | From an SBCL/Quicklisp REPL:
44 |
45 | ```lisp
46 | (ql:quickload :magicl) ; core tensor backend
47 | (load "src/core/tensor.lisp")
48 | (load "src/core/autograd.lisp")
49 | #+cl-cuda (load "src/core/gpu.lisp")
50 | ```
51 |
52 | If CUDA is unavailable you can skip the GPU module—the tensor and autograd packages do not require it yet.
53 |
54 | ### 3. Run the examples
### 3. Run the examples

Each example is a standalone script that prints its own expected output for quick verification:

```bash
sbcl --script examples/simple-mlp.lisp
sbcl --script examples/sequence-model.lisp
sbcl --script examples/cognitive-loop.lisp
```

Refer to the inline comments in each script for an explanation of the computation that is being demonstrated.

### 4. Execute the smoke tests (optional)

The automated smoke suite ensures that documentation and examples stay synchronised. Run it locally before opening a
pull request:

```bash
./tests/run-smoke.sh
```

The CI workflow in [`.github/workflows/ci.yml`](.github/workflows/ci.yml) executes the same command on GitHub Actions.

## Documentation

The `docs/` directory is organised by topic:

- [`docs/internals/tensor-autograd.md`](docs/internals/tensor-autograd.md) dives into the tensor storage model and the
  current automatic differentiation pipeline with architecture diagrams.
- [`docs/primitives/neural-primitives.md`](docs/primitives/neural-primitives.md) catalogues the differentiable building
  blocks that exist today and those planned for the next phase.
- [`docs/cognition/cognitive-modules.md`](docs/cognition/cognitive-modules.md) describes how higher-level cognitive
  agents will be composed once the primitives mature, complete with flow diagrams and reference code snippets.
- [`docs/manifesto.md`](docs/manifesto.md) articulates the long-term research manifesto that informs the changelog and
  roadmap.

Start with [`docs/getting_started.md`](docs/getting_started.md) for a lighter introduction, then follow the cross-links
into the detailed internals.

## Contributing

Please read [`CONTRIBUTING.md`](CONTRIBUTING.md) for coding standards, documentation expectations, and workflow
requirements. The high-level roadmap in [`ROADMAP.md`](ROADMAP.md) and the annotated release history in
[`CHANGELOG.md`](CHANGELOG.md) show how ongoing work maps onto the manifesto phases. Every pull request should update the
relevant entries when behaviour or developer-facing guarantees change.

## License

NeuraLisp is released under the MIT License. See [`LICENSE`](LICENSE) for the full text.
--------------------------------------------------------------------------------
/docs/internals/tensor-autograd.md:
--------------------------------------------------------------------------------
# Tensor & Autograd Internals

NeuraLisp is presently anchored by two core subsystems: `neuralisp.core.tensor`, which represents numerical state, and
`neuralisp.core.autograd`, which wraps tensors with differentiable bookkeeping. This document traces how both modules
fit together and how the future gradient pipeline will mature.
## High-level architecture

```mermaid
flowchart LR
    subgraph TensorCore
        TClass[tensor class]
        MakeTensor[make-tensor]
        Ops[tensor-add / tensor-matmul / tensor-mean]
    end

    subgraph Autograd
        VarClass[variable class]
        CreateVar[create-variable]
        Backward[backward]
    end

    MakeTensor --> TClass
    Ops --> TClass
    CreateVar --> VarClass
    Backward --> VarClass

    TClass -.value.-> VarClass
    VarClass -.gradient.-> TClass
```

The tensor package constructs numerical containers and delegates the heavy lifting to
[`magicl`](https://github.com/quil-lang/magicl). The autograd package wraps those tensors with the metadata required to
propagate gradients once the operation registry is complete.

## Tensor storage model

```common-lisp
(defclass tensor ()
  ((data :initarg :data
         :accessor tensor-data
         :type magicl:matrix
         :documentation "N-dimensional array holding the tensor's data.")
   (shape :initarg :shape
          :accessor tensor-shape
          :type list
          :documentation "List of integers representing the tensor's shape.")
   (gpu-pointer :initform nil
                :accessor tensor-gpu-pointer
                :type (or null cl-cuda.buffer:cublas-device-pointer)
                :documentation "Pointer to the tensor's data in GPU memory.")))
```

The `tensor` class couples a `magicl:matrix` with auxiliary state. During early development the GPU pointer is a stub,
allowing the `tensor` API to be exercised without CUDA present.

Tensors are normally built through `make-tensor`, which allocates a constant matrix and optionally migrates it to the
GPU hook:

```common-lisp
(defun make-tensor (shape &key (initial-element 0) (on-gpu nil))
  (let ((tensor (make-instance 'tensor
                               :data (magicl:const initial-element shape :layout :row-major)
                               :shape shape)))
    (when on-gpu
      (move-to-gpu tensor)) ; currently a stub until the CUDA backend lands
    tensor))
```

Higher-level operations (`tensor-add`, `tensor-matmul`, `tensor-sum`, etc.) return new tensor instances by copying the
underlying `magicl` matrix. Broadcasting is currently manual: the arithmetic helpers assume operand shapes are already
compatible, which makes it obvious where future validation hooks must be inserted.

## Autograd scaffolding

Automatic differentiation is managed by the `variable` class:

```common-lisp
(defclass variable ()
  ((value :initarg :value
          :reader variable-value
          :type tensor
          :documentation "The tensor object representing the value of the variable.")
   (gradient :initarg :gradient
             :accessor variable-gradient
             :type (or null tensor)
             :documentation "The tensor object representing the gradient of the variable.")
   (backward :initarg :backward
             :accessor variable-backward
             :type (or null function)
             :documentation "The backward function for computing gradients.")))
```

`create-variable` couples a tensor value with an optional gradient buffer. Backward propagation is currently a manual
hook: if a node has a `backward` function, it is invoked with the incoming gradient. This makes it easy to prototype
custom differentiable primitives while the standard layer library is still under construction.
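As a concrete illustration, a prototype primitive can attach its backward pass directly to the node it returns. The
sketch below is hedged: it assumes only the `variable` class above, `make-tensor`, and the `backward` driver shown
next, and it merely stashes the incoming gradient rather than chaining to upstream nodes.

```common-lisp
;; Prototype sketch: a variable whose backward hook records the incoming
;; gradient so it can be inspected after the call. Only the variable class,
;; make-tensor, and the backward driver below are assumed.
(let ((v (make-instance 'variable
                        :value (make-tensor '(2 2) :initial-element 3)
                        :gradient nil
                        :backward nil)))
  (setf (variable-backward v)
        (lambda (grad-output)
          ;; A real primitive would apply the chain rule and forward the
          ;; result to its parents; here we just store the gradient.
          (setf (variable-gradient v) grad-output)))
  (backward v (make-tensor '(2 2) :initial-element 1))
  (variable-gradient v)) ; => the all-ones tensor passed above
```

The driver that fires these hooks is deliberately tiny: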
```common-lisp
(defun backward (var &optional (grad-output 1.0))
  ;; Delegate to the node's registered hook, if any; nodes without a
  ;; backward function are treated as leaves.
  (when (functionp (variable-backward var))
    (funcall (variable-backward var) grad-output)))
```

`partial-grad` and `apply-partial-grad` demonstrate how the chain rule will thread through connected variables. The
placeholders purposely expose their intermediate state so that contributors can iterate on a full computational-graph
engine during *Phase 1 – Differentiable Primitives* on the roadmap.

## Planned evolution

The current module layout supports two short-term enhancements:

1. **Tensor data adapters.** Introduce constructor helpers that accept row-major lists and perform broadcast-safe shape
   inference before handing control to `magicl`. This will simplify dataset ingestion and reduce the amount of manual
   bookkeeping in the examples (see the sketch at the end of this document).
2. **Autograd operation registry.** Extend the `variable` type with references to forward operation nodes. Each tensor
   primitive will register a matching backward lambda, enabling automatic gradient propagation across composed graphs.

These improvements are tracked in the roadmap under *Phase 1 – Differentiable Primitives* and will unlock the optimiser
and layer libraries that are currently stubbed out in `src/`.
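To ground the first enhancement, here is one possible shape for a list-based constructor. It is a hedged sketch:
`tensor-from-list` does not exist yet, only 2-D nested lists are handled, no broadcast-safe validation is attempted,
and it leans on `magicl:from-list` plus the `make-tensor` helper above.

```common-lisp
;; Hypothetical adapter for enhancement 1: build a tensor from a nested
;; row-major list, inferring a 2-D shape before delegating to magicl.
;; Only make-tensor and tensor-data from this document are assumed.
(defun tensor-from-list (rows)
  (let* ((shape  (list (length rows) (length (first rows))))
         (tensor (make-tensor shape)))
    (setf (tensor-data tensor)
          (magicl:from-list (apply #'append rows) shape :layout :row-major))
    tensor))

;; Example: (tensor-from-list '((1d0 2d0) (3d0 4d0))) ; => a 2x2 tensor
```
--------------------------------------------------------------------------------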