├── .github └── dependabot.yml ├── .gitignore ├── .travis.yml ├── LICENSE ├── README.md ├── bench ├── list_swap_bench.exs └── snapshots │ ├── 2016-07-06_20-05-05.snapshot │ ├── 2016-07-06_20-05-15.snapshot │ ├── 2016-07-06_20-09-52.snapshot │ ├── 2016-07-06_20-15-54.snapshot │ └── 2016-07-07_15-43-51.snapshot ├── config └── config.exs ├── lib ├── tensor.ex └── tensor │ ├── matrix.ex │ ├── tensor.ex │ ├── tensor │ ├── helper.ex │ └── inspect.ex │ └── vector.ex ├── mix.exs ├── mix.lock └── test ├── matrix_test.exs ├── tensor_test.exs └── test_helper.exs /.github/dependabot.yml: -------------------------------------------------------------------------------- 1 | version: 2 2 | updates: 3 | - package-ecosystem: mix 4 | directory: "/" 5 | schedule: 6 | interval: daily 7 | open-pull-requests-limit: 10 8 | ignore: 9 | - dependency-name: ex_doc 10 | versions: 11 | - 0.23.0 12 | - 0.24.0 13 | - 0.24.1 14 | - dependency-name: numbers 15 | versions: 16 | - 5.2.3 17 | - dependency-name: dialyxir 18 | versions: 19 | - 1.0.0 20 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # The directory Mix will write compiled artifacts to. 2 | /_build 3 | 4 | # If you run "mix test --cover", coverage assets end up here. 5 | /cover 6 | 7 | # The directory Mix downloads your dependencies sources to. 8 | /deps 9 | 10 | # Where 3rd-party dependencies like ExDoc output generated docs. 11 | /doc 12 | 13 | # If the VM crashes, it generates a dump, let's ignore it too. 14 | erl_crash.dump 15 | 16 | # Also ignore archive artifacts (built via "mix archive.build"). 
17 | *.ez 18 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | language: elixir 2 | dist: trusty 3 | elixir: 4 | - 1.3.2 5 | - 1.4.5 6 | - 1.5.3 7 | - 1.6.6 8 | - 1.7.4 9 | - 1.8.2 10 | - 1.9.1 11 | after_script: 12 | - mix deps.get --only docs 13 | - MIX_ENV=docs mix inch.report 14 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) [2016] [Wiebe-Marten Wijnja] 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
-------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Tensor 2 | 3 | [![hex.pm version](https://img.shields.io/hexpm/v/tensor.svg)](https://hex.pm/packages/tensor) 4 | [![Build Status](https://travis-ci.org/Qqwy/tensor.svg?branch=master)](https://travis-ci.org/Qqwy/tensor) 5 | 6 | The Tensor library adds support for Vectors, Matrices and higher-dimension Tensors to Elixir. 7 | These data structures allow easier creation and manipulation of multi-dimensional collections of things. 8 | One could use them for math, but also to build e.g. board game representations. 9 | 10 | The Tensor library builds them in a sparse way. 11 | 12 | 13 | ## Vector 14 | 15 | A Vector is a one-dimensional collection of elements. It can be viewed as a list with a known length. 16 | 17 | ```elixir 18 | iex> use Tensor 19 | iex> vec = Vector.new([1,2,3,4,5]) 20 | #Vector-(5)[1, 2, 3, 4, 5] 21 | iex> vec2 = Vector.new(~w{foo bar baz qux}) 22 | #Vector-(4)["foo", "bar", "baz", "qux"] 23 | iex> vec2[2] 24 | "baz" 25 | iex> Vector.add(vec, 3) 26 | #Vector-(5)[4, 5, 6, 7, 8] 27 | iex> Vector.add(vec, vec) 28 | #Vector-(5)[2, 4, 6, 8, 10] 29 | ``` 30 | 31 | It is nicer than a list because: 32 | 33 | - retrieving the length happens in O(1) 34 | - reading/writing elements happens in O(log n), as maps are used internally. 35 | - concatenation and similar operations also take less than O(n) time, for the same reason. 36 | 37 | Vectors are very cool, so the following things have been defined to make working with them a breeze: 38 | 39 | - create vectors from lists 40 | - append values to vectors 41 | - reverse a vector 42 | 43 | When working with numerical vectors, you might also like to: 44 | 45 | - add a number to all elements in a vector. 46 | - add two vectors of the same size elementwise.
- calculate the dot product of two numerical vectors 48 | 49 | 50 | ## Matrix 51 | 52 | A Matrix is a two-dimensional collection of elements, with known width and height. 53 | 54 | These are highly useful for certain mathematical calculations, but also for e.g. board games. 55 | 56 | Matrices are super useful, so there are many helper functions defined to work with them. 57 | 58 | ```elixir 59 | 60 | iex> use Tensor 61 | iex> mat = Matrix.new([[1,2,3],[4,5,6],[7,8,9]],3,3) 62 | #Matrix-(3×3) 63 | ┌ ┐ 64 | │ 1, 2, 3│ 65 | │ 4, 5, 6│ 66 | │ 7, 8, 9│ 67 | └ ┘ 68 | iex> Matrix.rotate_clockwise(mat) 69 | #Matrix-(3×3) 70 | ┌ ┐ 71 | │ 7, 4, 1│ 72 | │ 8, 5, 2│ 73 | │ 9, 6, 3│ 74 | └ ┘ 75 | iex> mat[0] 76 | #Vector-(3)[1, 2, 3] 77 | iex> mat[2][2] 78 | 9 79 | iex> Matrix.diag([1,2,3]) 80 | #Matrix-(3×3) 81 | ┌ ┐ 82 | │ 1, 0, 0│ 83 | │ 0, 2, 0│ 84 | │ 0, 0, 3│ 85 | └ ┘ 86 | 87 | iex> Matrix.add(mat, 2) 88 | #Matrix-(3×3) 89 | ┌ ┐ 90 | │ 3, 4, 5│ 91 | │ 6, 7, 8│ 92 | │ 9, 10, 11│ 93 | └ ┘ 94 | iex> Matrix.add(mat, mat) 96 | #Matrix-(3×3) 97 | ┌ ┐ 98 | │ 2, 4, 6│ 99 | │ 8, 10, 12│ 100 | │ 14, 16, 18│ 101 | └ ┘ 102 | 103 | ``` 104 | 105 | The Matrix module lets you: 106 | 107 | - create matrices from lists 108 | - create an identity matrix 109 | - create a diagonal matrix from a list 110 | - transpose matrices 111 | - rotate and flip matrices 112 | - check if a matrix is `square?`, `diagonal?`, or `symmetric?` 113 | - create row or column matrices from vectors 114 | - extract specific rows or columns from a matrix 115 | - extract values from the main diagonal 116 | 117 | 118 | As well as some common math operations: 119 | 120 | - add a number to all values inside a matrix 121 | - multiply all values inside a matrix by a number 122 | - matrix multiplication of two matrices 123 | - the `trace` operation for square matrices 124 | 125 | 126 | ## Higher-Dimension Tensor 127 | 128 | Tensors are implemented using maps internally.
This means that read and write access to elements in them is O(log n). 129 | 130 | ```elixir 131 | iex> use Tensor 132 | iex> tensor = Tensor.new([[[1,2],[3,4],[5,6]],[[7,8],[9,10],[11,12]]], [3,3,2]) 133 | #Tensor(3×3×2) 134 | 1, 2 135 | 3, 4 136 | 5, 6 137 | 7, 8 138 | 9, 10 139 | 11, 12 140 | 0, 0 141 | 0, 0 142 | 0, 0 143 | iex> tensor[1] 144 | #Matrix-(3×2) 145 | ┌ ┐ 146 | │ 7, 8│ 147 | │ 9, 10│ 148 | │ 11, 12│ 149 | └ ┘ 150 | ``` 151 | 152 | Vectors and Matrices are also Tensors. Some functions only make sense when used on these one- or two-dimensional structures; that is why the extra Vector and Matrix modules exist. 153 | 154 | ## Sparsity 155 | 156 | Vectors/Matrices/Tensors are stored in a *sparse* way. 157 | Only the values that differ from the __identity__ (defaults to `nil`) are actually stored in the Vector/Matrix/Tensor. 158 | 159 | This allows for smaller data sizes, as well as faster operations when working with, for instance, diagonal matrices. 160 | 161 | 162 | ## Numbers 163 | 164 | Tensor uses the [Numbers](https://hex.pm/packages/numbers) library for the implementations of elementwise addition/subtraction/multiplication etc. 165 | This means that you can fill a Tensor with e.g. [`Decimal`](https://hex.pm/packages/decimal)s or [`Ratio`](https://hex.pm/packages/ratio)nals, and it will **Just Work**! 166 | 167 | Tensor itself even implements the Numeric behaviour, which means that you can nest Vectors/Matrices/Tensors in your Vectors/Matrices/Tensors, and doing math with them will still work!
168 | _(as long as the elements inside the innermost Vector/Matrix/Tensor follow the Numeric behaviour as well, of course.)_ 169 | 170 | ## Syntactic Sugar 171 | 172 | For Tensors, many sugary protocols and behaviours have been implemented to let them play nicely with other parts of your applications: 173 | 174 | ### Access Behaviour 175 | 176 | Tensors implement the Access behaviour, which lets you do: 177 | 178 | ```elixir 179 | iex> use Tensor 180 | iex> mat = Matrix.new([[1,2],[3,4]], 2,2) 181 | iex> mat[0] 182 | #Vector-(2)[1, 2] 183 | iex> mat[1][1] 184 | 4 185 | iex> put_in mat[1][0], 100 186 | #Matrix-(2×2) 187 | ┌ ┐ 188 | │ 1, 2│ 189 | │ 100, 4│ 190 | └ ┘ 191 | ``` 192 | 193 | It is even possible to use negative indices to index from the end of the Vector/Matrix/Tensor! 194 | 195 | ### Enumerable Protocol 196 | 197 | Tensors allow you to enumerate over the values inside, using the Enumerable protocol. 198 | Note that: 199 | 200 | - enumerating over a Vector iterates over the values inside, 201 | - enumerating over a Matrix iterates over the Vectors that make up the rows of the matrix, 202 | - enumerating over an order-3 Tensor iterates over the Matrices that make up the 2-dimensional slices of this Tensor, 203 | - *etc...* 204 | 205 | As there are many other ways to iterate over the values inside tensors, functions like `Tensor.to_list` and `Matrix.columns` also exist. 206 | 207 | There is also `Tensor.map`, which returns a new Tensor containing the results of the mapping operation. `Tensor.map` only iterates over the 208 | values that differ from the default, which makes it fast. 209 | 210 | 211 | If you can think of other nice ways to enumerate over Tensors, please let me know, as these would make great additions to the library!
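Because the Enumerable protocol is implemented, all of the `Enum` module works on Tensors out of the box. A brief sketch (hypothetical values; note that `Enum` functions return plain lists rather than Tensors, so reach for `Tensor.map` when you want a Tensor back):

```elixir
use Tensor

# Enumerating a Vector visits its values directly:
vec = Vector.new([1, 2, 3, 4])
Enum.sum(vec)                         # 1 + 2 + 3 + 4 = 10
Enum.filter(vec, &(rem(&1, 2) == 0))  # the even values, as a plain list: [2, 4]

# Enumerating a Matrix visits its row Vectors,
# so mapping over it produces one result per row:
mat = Matrix.new([[1, 2], [3, 4]], 2, 2)
Enum.map(mat, fn row -> Enum.sum(row) end)  # row sums: [3, 7]
```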
212 | 213 | 214 | ### Collectable Protocol 215 | 216 | If you want to build up a Vector from a collection of values, or a Matrix from a collection of Vectors (or an order-3 tensor from a collection of Matrices, etc.), you can do so by harnessing the power of the Collectable protocol. 217 | 218 | ```elixir 219 | iex> use Tensor 220 | iex> mat = Matrix.new(0,3) 221 | iex> v = Vector.new([1,2,3]) 222 | iex> Enum.into([v,v,v], mat) 223 | #Matrix-(3×3) 224 | ┌ ┐ 225 | │ 1, 2, 3│ 226 | │ 1, 2, 3│ 227 | │ 1, 2, 3│ 228 | └ ┘ 229 | ``` 230 | 231 | ### Inspect Protocol 232 | 233 | The Inspect protocol has been overridden for all Tensors. 234 | 235 | - Vectors are shown as a list with the length given. 236 | - Matrices are shown in a two-dimensional grid, with the dimensions given. 237 | - Three-dimensional tensors are shown with indentation and colour changes, to show the relationship of the values inside. 238 | - Four-dimensional Tensors and higher print their lower-dimension values from top-to-bottom. 239 | 240 | ### FunLand.Reducable Semiprotocol 241 | 242 | This is a lightweight version of the Enumerable protocol, with a simple implementation. 243 | 244 | ```elixir 245 | iex> use Tensor 246 | iex> x = Vector.new([1,2,3,4]) 247 | iex> FunLand.Reducable.reduce(x, 0, fn x, acc -> acc + x end) 248 | 10 249 | ``` 250 | 251 | ### Extractable Protocol 252 | 253 | This allows you to extract one element at a time from the Vector/Tensor/Matrix. 254 | Because it is fastest to extract the elements with the highest index, these are returned first.
255 | 256 | ```elixir 257 | 258 | iex> use Tensor 259 | iex> x = Matrix.new([[1,2],[3,4]], 2, 2) 260 | iex> {:ok, {item, x}} = Extractable.extract(x) 261 | iex> item 262 | #Vector<(2)[3, 4]> 263 | iex> {:ok, {item, x}} = Extractable.extract(x) 264 | iex> item 265 | #Vector<(2)[1, 2]> 266 | iex> Extractable.extract(x) 267 | {:error, :empty} 268 | ``` 269 | 270 | ### Insertable Protocol 271 | 272 | This allows you to insert one element at a time into the Vector/Tensor/Matrix. 273 | Insertion always happens at the highest index location. (The size of the highest dimension of the Tensor is increased by one.) 274 | 275 | ```elixir 276 | iex> use Tensor 277 | iex> x = Matrix.new(0, 2) 278 | iex> {:ok, x} = Insertable.insert(x, Vector.new([1, 2])) 279 | iex> {:ok, x} = Insertable.insert(x, Vector.new([3, 4])) 280 | #Matrix<(2×2) 281 | ┌ ┐ 282 | │ 1, 2│ 283 | │ 3, 4│ 284 | └ ┘ 285 | > 286 | ``` 287 | 288 | ### Efficiency 289 | 290 | The Tensor package is completely built in Elixir. It is _not_ a wrapper for any functionality written in other languages. 291 | 292 | This does mean that if you want to do heavy number crunching, you might want to look for something else. 293 | 294 | However, as Tensor uses a _sparse_ tensor implementation, many calculations 295 | can be much faster than you might expect from a _dense_ tensor implementation, depending on your input data. 296 | 297 | ## Installation 298 | 299 | The package can be installed by adding `tensor` to your list of dependencies in `mix.exs`: 300 | 301 | ```elixir 302 | def deps do 303 | [ 304 | {:tensor, "~> 2.0"} 305 | ] 306 | end 307 | ``` 308 | 309 | ## Changelog 310 | 311 | - 2.0.2 - Bugfix w.r.t. optional dependency `FunLand`. 312 | - 2.0.1 - Make `FunLand` an optional dependency. 313 | - 2.0.0 - Many changes, including backwards-incompatible ones: 314 | - Increase version number of `Numbers`. Backwards-incompatible change, as `mult` is now used instead of `mul` for multiplication.
315 | - Move `Tensor`, `Vector` and `Matrix` all under the `Tensor` namespace (so they are now `Tensor.Tensor`, `Tensor.Vector`, `Tensor.Matrix`), to follow the HexPM rules of library management (which require a single top-level module name). Write `use Tensor` to alias the modules in your code. 316 | - Also introduces `FunLand.Mappable`, `FunLand.Reducable`, `Extractable` and `Insertable` protocol implementations. 317 | 318 | - 1.2.0 - `Tensor.to_sparse_map`, `Tensor.from_sparse_map`. Also hid some functions that were supposed to be private but were not yet. 319 | - 1.1.0 - Add `Matrix.width` and `Matrix.height` functions. 320 | - 1.0.1 - Made documentation of `Matrix.new` clearer. Thank you, @wsmoak! 321 | - 1.0.0 - First stable version. 322 | - 0.8 - Most functionality has been implemented. 323 | 324 | ## Roadmap 325 | 326 | - [x] Operation to swap any two arbitrary dimensions of a Tensor, a generalized version of `Matrix.transpose` 327 | - [x] Improve Tensor inspect output. 328 | - [x] Move more functionality to Tensor. 329 | - [x] Add Dialyzer specs to all important functions. 330 | - [x] Add aliases to common methods of Tensor to: 331 | - [x] Vector 332 | - [x] Matrix 333 | - [x] Ensure that the identity value is never actually stored in a Tensor, so the Tensor is kept sparse. 334 | - [x] `Tensor.new` 335 | - [x] `Tensor.map` 336 | - [x] `Tensor.sparse_map_with_coordinates` 337 | - [x] `Tensor.dense_map_with_coordinates` 338 | - [x] `Tensor.merge` 339 | - [x] `Tensor.merge_with_coordinates` 340 | - [x] Possibility to use any kind of numbers, including custom data types, for `Tensor.add`, `Tensor.sub`, `Tensor.mul`, `Tensor.div` and `Tensor.pow`. 341 | - [ ] Write (doc)tests for all public functions. 342 | - [x] Improve documentation.
343 | 344 | -------------------------------------------------------------------------------- /bench/list_swap_bench.exs: -------------------------------------------------------------------------------- 1 | defmodule ListSwapBench do 2 | use Benchfella 3 | 4 | @list Enum.to_list(1..1000) 5 | 6 | for list_length <- [2,10,100,1_000] do 7 | @list_length list_length 8 | bench "old swap #{@list_length}" do 9 | Tensor.Helper.swap_elems_in_list(Enum.to_list(1..@list_length), 1, div(@list_length,2)) 10 | end 11 | 12 | bench "new swap #{@list_length}" do 13 | Tensor.Helper.swap2(Enum.to_list(1..@list_length), 1, div(@list_length,2)) 14 | end 15 | 16 | bench "map swap #{@list_length}" do 17 | Tensor.Helper.map_swap(Enum.to_list(1..@list_length), 1, div(@list_length,2)) 18 | end 19 | 20 | bench "map swap2 #{@list_length}" do 21 | Tensor.Helper.map_swap(Enum.to_list(1..@list_length), 1, div(@list_length,2)) 22 | end 23 | end 24 | 25 | 26 | 27 | end -------------------------------------------------------------------------------- /bench/snapshots/2016-07-06_20-05-05.snapshot: -------------------------------------------------------------------------------- 1 | duration:1.0;mem stats:false;sys mem stats:false 2 | module;test;tags;iterations;elapsed 3 | ListSwapBench new swap 100000 1208737 4 | ListSwapBench old swap 100000 1971744 5 | -------------------------------------------------------------------------------- /bench/snapshots/2016-07-06_20-05-15.snapshot: -------------------------------------------------------------------------------- 1 | duration:1.0;mem stats:false;sys mem stats:false 2 | module;test;tags;iterations;elapsed 3 | ListSwapBench new swap 100000 1155839 4 | ListSwapBench old swap 100000 1616536 5 | -------------------------------------------------------------------------------- /bench/snapshots/2016-07-06_20-09-52.snapshot: -------------------------------------------------------------------------------- 1 | duration:1.0;mem stats:false;sys mem stats:false 2 | 
module;test;tags;iterations;elapsed 3 | ListSwapBench new swap 10 1000000 1453465 4 | ListSwapBench new swap 100 200000 1616338 5 | ListSwapBench new swap 1000 50000 3743585 6 | ListSwapBench new swap 2 10000000 4718624 7 | ListSwapBench old swap 10 1000000 1429320 8 | ListSwapBench old swap 100 200000 1792739 9 | ListSwapBench old swap 1000 50000 5704660 10 | ListSwapBench old swap 2 10000000 8497680 11 | -------------------------------------------------------------------------------- /bench/snapshots/2016-07-06_20-15-54.snapshot: -------------------------------------------------------------------------------- 1 | duration:1.0;mem stats:false;sys mem stats:false 2 | module;test;tags;iterations;elapsed 3 | ListSwapBench map swap 10 500000 1899054 4 | ListSwapBench map swap 100 50000 1785739 5 | ListSwapBench map swap 1000 5000 2101422 6 | ListSwapBench map swap 2 1000000 2178125 7 | ListSwapBench new swap 10 1000000 1300966 8 | ListSwapBench new swap 100 500000 3930528 9 | ListSwapBench new swap 1000 50000 3708161 10 | ListSwapBench new swap 2 10000000 4675942 11 | ListSwapBench old swap 10 1000000 1404643 12 | ListSwapBench old swap 100 200000 1773905 13 | ListSwapBench old swap 1000 50000 3646036 14 | ListSwapBench old swap 2 10000000 8496380 15 | -------------------------------------------------------------------------------- /bench/snapshots/2016-07-07_15-43-51.snapshot: -------------------------------------------------------------------------------- 1 | duration:1.0;mem stats:false;sys mem stats:false 2 | module;test;tags;iterations;elapsed 3 | ListSwapBench map swap 10 500000 2685319 4 | ListSwapBench map swap 100 20000 1994892 5 | ListSwapBench map swap 1000 1000 1182608 6 | ListSwapBench map swap 2 500000 3094316 7 | ListSwapBench map swap2 10 100000 1017891 8 | ListSwapBench map swap2 100 20000 1979420 9 | ListSwapBench map swap2 1000 1000 1181287 10 | ListSwapBench map swap2 2 500000 3094359 11 | ListSwapBench new swap 10 500000 1831921 12 | ListSwapBench 
new swap 100 100000 2194436 13 | ListSwapBench new swap 1000 10000 2084968 14 | ListSwapBench new swap 2 1000000 1329664 15 | ListSwapBench old swap 10 500000 1982579 16 | ListSwapBench old swap 100 100000 2483612 17 | ListSwapBench old swap 1000 10000 2044092 18 | ListSwapBench old swap 2 1000000 2410540 19 | -------------------------------------------------------------------------------- /config/config.exs: -------------------------------------------------------------------------------- 1 | # This file is responsible for configuring your application 2 | # and its dependencies with the aid of the Mix.Config module. 3 | use Mix.Config 4 | 5 | # This configuration is loaded before any dependency and is restricted 6 | # to this project. If another project depends on this project, this 7 | # file won't be loaded nor affect the parent project. For this reason, 8 | # if you want to provide default values for your application for 9 | # 3rd-party users, it should be done in your "mix.exs" file. 10 | 11 | # You can configure for your application as: 12 | # 13 | # config :tensor, key: :value 14 | # 15 | # And access this configuration in your application as: 16 | # 17 | # Application.get_env(:tensor, :key) 18 | # 19 | # Or configure a 3rd-party app: 20 | # 21 | # config :logger, level: :info 22 | # 23 | 24 | # It is also possible to import configuration files, relative to this 25 | # directory. For example, you can emulate configuration per environment 26 | # by uncommenting the line below and defining dev.exs, test.exs and such. 27 | # Configuration from the imported file will override the ones defined 28 | # here (which is why it is important to import them last). 29 | # 30 | # import_config "#{Mix.env}.exs" 31 | -------------------------------------------------------------------------------- /lib/tensor.ex: -------------------------------------------------------------------------------- 1 | defmodule Tensor do 2 | @moduledoc """ 3 | Tensor library namespace. 
4 | 5 | use `use Tensor` to alias `Tensor`, `Matrix` and `Vector`. 6 | """ 7 | defmacro __using__(_opts) do 8 | quote do 9 | alias Tensor.{Vector, Matrix, Tensor} 10 | end 11 | end 12 | end 13 | -------------------------------------------------------------------------------- /lib/tensor/matrix.ex: -------------------------------------------------------------------------------- 1 | defmodule Tensor.Matrix do 2 | alias Tensor.{Vector, Matrix, Tensor} 3 | 4 | defmodule Inspect do 5 | @doc false 6 | def inspect(matrix, _opts) do 7 | """ 8 | #Matrix<(#{Tensor.Inspect.dimension_string(matrix)}) 9 | #{inspect_contents(matrix)} 10 | > 11 | """ 12 | end 13 | 14 | defp inspect_contents(matrix) do 15 | contents_inspect = 16 | matrix 17 | |> Matrix.to_list 18 | |> Enum.map(fn row -> 19 | row 20 | |> Enum.map(fn elem -> 21 | elem 22 | |> inspect 23 | |> String.pad_leading(8) 24 | end) 25 | |> Enum.join(",") 26 | end) 27 | # |> Enum.join("│\n│") 28 | top_row_length = String.length(List.first(contents_inspect) || "") 29 | bottom_row_length = String.length(List.last(contents_inspect) || "") 30 | top = "┌#{String.pad_trailing("", top_row_length)}┐\n│" 31 | bottom = "│\n└#{String.pad_trailing("", bottom_row_length)}┘" 32 | contents_str = contents_inspect |> Enum.join("│\n│") 33 | "#{top}#{contents_str}#{bottom}" 34 | end 35 | end 36 | 37 | @doc """ 38 | Creates a new matrix of dimensions `height` x `width`. 39 | 40 | Optionally pass in a fourth argument, which will be the default values the matrix will be filled with. (default: `0`) 41 | """ 42 | def new(list_of_lists \\ [], height, width, identity \\ 0) when width >= 0 and height >= 0 and (width > 0 or height > 0) do 43 | Tensor.new(list_of_lists, [height, width], identity) 44 | end 45 | 46 | @doc """ 47 | Creates an 'identity' matrix. 48 | 49 | This is a square matrix of size `size` that has the `diag_identity` value (default: `1`) at the diagonal, and the rest is `0`. 
50 | Optionally pass in a third argument, which is the value the rest of the elements in the matrix will be set to. 51 | """ 52 | def identity_matrix(diag_identity \\ 1, size, rest_identity \\ 0) when size > 0 do 53 | elems = Stream.cycle([diag_identity]) |> Enum.take(size) 54 | diag(elems, rest_identity) 55 | end 56 | 57 | @doc """ 58 | Creates a square matrix where the diagonal elements are filled with the elements of the given List or Vector. 59 | The second argument is an optional `identity` to be used for all elements not part of the diagonal. 60 | """ 61 | def diag(list_or_vector, identity \\ 0) 62 | 63 | def diag(vector = %Tensor{dimensions: [_length]}, identity) do 64 | diag(Tensor.to_list(vector), identity) 65 | end 66 | 67 | def diag(list = [_|_], identity) when is_list(list) do 68 | size = length(list) 69 | matrix = new([], size, size, identity) 70 | list 71 | |> Enum.with_index 72 | |> Enum.reduce(matrix, fn {e, i}, mat -> 73 | put_in(mat, [i,i], e) 74 | end) 75 | end 76 | 77 | @doc """ 78 | True if the matrix is square and the same as its transpose. 79 | """ 80 | def symmetric?(matrix = %Tensor{dimensions: [s,s]}) do 81 | matrix == matrix |> transpose 82 | end 83 | def symmetric?(%Tensor{dimensions: [_,_]}), do: false 84 | 85 | def square?(%Tensor{dimensions: [s,s]}), do: true 86 | def square?(%Tensor{dimensions: [_,_]}), do: false 87 | 88 | @doc """ 89 | Returns the `width` of the matrix. 90 | """ 91 | def width(%Tensor{dimensions: [_height, width]}), do: width 92 | @doc """ 93 | Returns the `height` of the matrix. 
94 | """ 95 | def height(%Tensor{dimensions: [height, _width]}), do: height 96 | 97 | def transpose(matrix = %Tensor{dimensions: [_,_]}) do 98 | Tensor.transpose(matrix, 1) 99 | # new_contents = Enum.reduce(matrix.contents, %{}, fn {row_key, row_map}, new_row_map -> 100 | # Enum.reduce(row_map, new_row_map, fn {col_key, value}, new_row_map -> 101 | # map = Map.put_new(new_row_map, col_key, %{}) 102 | # put_in(map, [col_key, row_key], value) 103 | # end) 104 | # end) 105 | # %Tensor{identity: matrix.identity, contents: new_contents, dimensions: [h, w]} 106 | end 107 | 108 | @doc """ 109 | Takes a vector, and returns a 1×`n` matrix. 110 | """ 111 | def row_matrix(vector = %Tensor{dimensions: [_]}) do 112 | Tensor.lift(vector) 113 | end 114 | 115 | @doc """ 116 | """ 117 | def column_matrix(vector = %Tensor{dimensions: [_]}) do 118 | vector 119 | |> Tensor.lift 120 | |> Matrix.transpose 121 | end 122 | 123 | @doc """ 124 | Returns the rows of this matrix as a list of Vectors. 125 | """ 126 | def rows(matrix = %Tensor{dimensions: [_w,_h]}) do 127 | Tensor.slices(matrix) 128 | end 129 | 130 | @doc """ 131 | Builds a Matrix up from a list of vectors. 132 | 133 | Will only work as long as the vectors have the same length. 134 | """ 135 | def from_rows(list_of_vectors) do 136 | Tensor.from_slices(list_of_vectors) 137 | end 138 | 139 | @doc """ 140 | Returns the columns of this matrix as a list of Vectors. 141 | """ 142 | def columns(matrix = %Tensor{dimensions: [_,_]}) do 143 | matrix 144 | |> transpose 145 | |> rows 146 | end 147 | 148 | @doc """ 149 | Returns the `n`-th row of the matrix as a Vector. 150 | 151 | This is the same as doing matrix[n] 152 | """ 153 | def row(matrix, n) do 154 | matrix[n] 155 | end 156 | 157 | @doc """ 158 | Returns the `n`-th column of the matrix as a Vector. 159 | 160 | If you're doing a lot of calls to `column`, consider transposing the matrix 161 | and calling `rows` on that transposed matrix, as it will be faster. 
162 | """ 163 | def column(matrix, n) do 164 | transpose(matrix)[n] 165 | end 166 | 167 | @doc """ 168 | Returns the values in the main diagonal (top left to bottom right) as list 169 | """ 170 | def main_diagonal(matrix = %Tensor{dimensions: [h,w]}) do 171 | for i <- 0..min(w,h)-1 do 172 | matrix[i][i] 173 | end 174 | end 175 | 176 | def flip_vertical(matrix = %Tensor{dimensions: [_w, h]}) do 177 | new_contents = 178 | for {r, v} <- matrix.contents, into: %{} do 179 | {h-1 - r, v} 180 | end 181 | %Tensor{matrix | contents: new_contents} 182 | end 183 | 184 | def flip_horizontal(matrix) do 185 | matrix 186 | |> transpose 187 | |> flip_vertical 188 | |> transpose 189 | end 190 | 191 | def rotate_counterclockwise(matrix) do 192 | matrix 193 | |> transpose 194 | |> flip_vertical 195 | end 196 | 197 | def rotate_clockwise(matrix) do 198 | matrix 199 | |> flip_vertical 200 | |> transpose 201 | end 202 | 203 | def rotate_180(matrix) do 204 | matrix 205 | |> flip_vertical 206 | |> flip_horizontal 207 | end 208 | 209 | 210 | @doc """ 211 | Returns the sum of the main diagonal of a square matrix. 212 | 213 | Note that this method will fail when called with a non-square matrix 214 | """ 215 | def trace(matrix = %Tensor{dimensions: [n,n]}) do 216 | Enum.sum(main_diagonal(matrix)) 217 | end 218 | 219 | def trace(%Tensor{dimensions: [_,_]}) do 220 | raise Tensor.ArithmeticError, "Matrix.trace/1 is not defined for non-square matrices!" 221 | end 222 | 223 | 224 | @doc """ 225 | Returns the current identity of matrix `matrix`. 226 | """ 227 | defdelegate identity(matrix), to: Tensor 228 | 229 | @doc """ 230 | `true` if `a` is a Matrix. 231 | """ 232 | defdelegate matrix?(a), to: Tensor 233 | 234 | @doc """ 235 | Returns the element at `index` from `matrix`. 236 | """ 237 | defdelegate fetch(matrix, index), to: Tensor 238 | 239 | @doc """ 240 | Returns the element at `index` from `matrix`. If `index` is out of bounds, returns `default`. 
241 | """ 242 | defdelegate get(matrix, index, default), to: Tensor 243 | defdelegate pop(matrix, index, default), to: Tensor 244 | defdelegate get_and_update(matrix, index, function), to: Tensor 245 | 246 | defdelegate merge_with_index(matrix_a, matrix_b, function), to: Tensor 247 | defdelegate merge(matrix_a, matrix_b, function), to: Tensor 248 | 249 | defdelegate to_list(matrix), to: Tensor 250 | defdelegate lift(matrix), to: Tensor 251 | 252 | defdelegate map(matrix, function), to: Tensor 253 | defdelegate with_coordinates(matrix), to: Tensor 254 | defdelegate sparse_map_with_coordinates(matrix, function), to: Tensor 255 | defdelegate dense_map_with_coordinates(matrix, function), to: Tensor 256 | defdelegate to_sparse_map(matrix), to: Tensor 257 | 258 | @doc """ 259 | Converts a sparse map, where each key is a [height, width] coordinate list 260 | and each value is an arbitrary element, to a Matrix with the given height, width and contents. 261 | 262 | See `to_sparse_map/1` for the inverse operation. 263 | """ 264 | def from_sparse_map(matrix, height, width, identity \\ 0) do 265 | Tensor.from_sparse_map(matrix, [height, width], identity) 266 | end 267 | 268 | 269 | defdelegate add(a, b), to: Tensor 270 | defdelegate sub(a, b), to: Tensor 271 | defdelegate mult(a, b), to: Tensor 272 | defdelegate div(a, b), to: Tensor 273 | 274 | defdelegate add_number(a, b), to: Tensor 275 | defdelegate sub_number(a, b), to: Tensor 276 | defdelegate mult_number(a, b), to: Tensor 277 | defdelegate div_number(a, b), to: Tensor 278 | 279 | @doc """ 280 | Elementwise addition of matrices `matrix_a` and `matrix_b`. 281 | """ 282 | defdelegate add_matrix(matrix_a, matrix_b), to: Tensor, as: :add_tensor 283 | 284 | @doc """ 285 | Elementwise subtraction of `matrix_b` from `matrix_a`. 286 | """ 287 | defdelegate sub_matrix(matrix_a, matrix_b), to: Tensor, as: :sub_tensor 288 | 289 | @doc """ 290 | Elementwise multiplication of `matrix_a` with `matrix_b`.
291 | """ 292 | defdelegate mult_matrix(matrix_a, matrix_b), to: Tensor, as: :mult_tensor 293 | 294 | @doc """ 295 | Elementwise division of `matrix_a` by `matrix_b`. 296 | Make sure that the identity of `matrix_b` isn't 0 before doing this. 297 | """ 298 | defdelegate div_matrix(matrix_a, matrix_b), to: Tensor, as: :div_tensor 299 | 300 | @doc """ 301 | Calculates the Matrix Product. This is a new matrix, obtained by 302 | taking each of the `m` rows of the `m_by_n_matrix` and each of the `p` columns of the `n_by_p_matrix`, 303 | and calculating the dot product (see `Vector.dot_product/2`) of these two `n`-length vectors. 304 | The resulting values are stored at position [m][p] in the final matrix. 305 | 306 | There is no way to perform this operation in a sparse way, so it is performed densely. 307 | The identities of the two matrices cannot be kept; `nil` is used as the identity of the output Matrix. 308 | """ 309 | def product(m_by_n_matrix, n_by_p_matrix) 310 | def product(a = %Tensor{dimensions: [m,n]}, b = %Tensor{dimensions: [n,p]}) do 311 | b_t = transpose(b) 312 | list_of_lists = 313 | for r <- (0..m-1) do 314 | for c <- (0..p-1) do 315 | Vector.dot_product(a[r], b_t[c]) 316 | end 317 | end 318 | Tensor.new(list_of_lists, [m, p]) 319 | end 320 | 321 | 322 | 323 | def product(_a = %Tensor{dimensions: [_,_]}, _b = %Tensor{dimensions: [_,_]}) do 324 | raise Tensor.ArithmeticError, "Cannot compute Matrix.product if the width of matrix `a` does not match the height of matrix `b`!" 325 | end 326 | 327 | @doc """ 328 | Calculates the product of `matrix` with itself, `exponent` times. 329 | If `exponent` == 0, then the result will be the identity matrix with the same dimensions as the given matrix. 330 | 331 | This is calculated using the fast [exponentiation by squaring](https://en.wikipedia.org/wiki/Exponentiation_by_squaring) algorithm.
332 | """ 333 | def power(matrix, exponent) 334 | 335 | def power(matrix = %Tensor{dimensions: [a,a]}, negative_number) when negative_number < 0 do 336 | product(Matrix.identity_matrix(-1, a), power(matrix, -negative_number)) 337 | end 338 | 339 | def power(%Tensor{dimensions: [a,a]}, 0), do: Matrix.identity_matrix(a) 340 | def power(matrix = %Tensor{dimensions: [a,a]}, 1), do: matrix 341 | 342 | def power(matrix = %Tensor{dimensions: [a,a]}, exponent) when rem(exponent, 2) == 0 do 343 | power(product(matrix, matrix), Kernel.div(exponent, 2)) 344 | end 345 | 346 | def power(matrix = %Tensor{dimensions: [a,a]}, exponent) when rem(exponent, 2) == 1 do 347 | product(matrix, power(product(matrix, matrix), Kernel.div(exponent, 2))) 348 | end 349 | 350 | def power(%Tensor{dimensions: [_,_]}, _exponent) do 351 | raise Tensor.ArithmeticError, "Cannot compute Matrix.power with non-square matrices" 352 | end 353 | 354 | 355 | end 356 | 357 | -------------------------------------------------------------------------------- /lib/tensor/tensor.ex: -------------------------------------------------------------------------------- 1 | defmodule Tensor.Tensor do 2 | alias Tensor.Tensor.Helper 3 | alias Tensor.{Vector, Matrix, Tensor} 4 | require Helper 5 | import Helper, only: [use_if_exists?: 2] 6 | 7 | defstruct [:identity, contents: %{}, dimensions: [1]] 8 | 9 | defimpl Inspect do 10 | def inspect(tensor, opts) do 11 | case length(tensor.dimensions) do 12 | 1 -> 13 | Vector.Inspect.inspect(tensor, opts) 14 | 2 -> 15 | Matrix.Inspect.inspect(tensor, opts) 16 | _ -> 17 | Tensor.Inspect.inspect(tensor, opts) 18 | #"#Tensor-(#{tensor.dimensions |> Enum.join("×")}) (#{inspect tensor.contents})" 19 | end 20 | end 21 | end 22 | 23 | defmodule ArithmeticError do 24 | defexception message: "This arithmetic operation is not allowed when working with Vectors/Matrices/Tensors."
25 | end 26 | 27 | defmodule AccessError do 28 | defexception [:message] 29 | 30 | def exception(key: key) do 31 | %AccessError{message: "The requested key `#{inspect key}` could not be found inside this Vector/Matrix/Tensor. It is probably out of range."} 32 | end 33 | end 34 | 35 | defmodule CollectableError do 36 | defexception [:message] 37 | 38 | def exception(value), do: %CollectableError{message: """ 39 | Could not insert `#{inspect value}` into the Vector/Matrix/Tensor. 40 | Make sure that you pass in a list of Tensors that are of order n-1 relative to the tensor you add them to, 41 | and that they have the same dimensions (save for the highest one). 42 | 43 | For instance, you can only add vectors of length 3 to a n×3 matrix, 44 | and matrices of size 2×4 can only be added to an order-3 tensor of size n×2×4 45 | """} 46 | end 47 | 48 | @behaviour Numbers.Numeric # Yes, you can use Tensors themselves with Numbers as well! 49 | 50 | @opaque tensor :: %Tensor{} 51 | 52 | 53 | @doc """ 54 | Returns true if the tensor is a 1-order Tensor, which is also known as a Vector. 55 | """ 56 | @spec vector?(tensor) :: boolean 57 | def vector?(%Tensor{dimensions: [_]}), do: true 58 | def vector?(%Tensor{}), do: false 59 | 60 | @doc """ 61 | Returns true if the tensor is a 2-order Tensor, which is also known as a Matrix. 62 | """ 63 | @spec matrix?(tensor) :: boolean 64 | def matrix?(%Tensor{dimensions: [_,_]}), do: true 65 | def matrix?(%Tensor{}), do: false 66 | 67 | 68 | @doc """ 69 | Returns the _order_ of the Tensor. 70 | 71 | This is 1 for Vectors, 2 for Matrices, etc. 72 | It is the amount of dimensions the tensor has. 73 | """ 74 | @spec order(tensor) :: non_neg_integer 75 | def order(tensor) do 76 | length(tensor.dimensions) 77 | end 78 | 79 | @doc """ 80 | Returns the dimensions of the tensor.
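For example:

    iex> Tensor.new([[1,2],[3,4]], [2,2]) |> Tensor.dimensions
    [2, 2]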
81 | """ 82 | @spec dimensions(tensor) :: [non_neg_integer] 83 | def dimensions(tensor = %Tensor{}) do 84 | tensor.dimensions 85 | end 86 | 87 | @doc """ 88 | Returns the identity, the default value a tensor inserts at a position when no other value is set. 89 | 90 | This is mostly used internally, and is used to allow Tensors to take a lot less space because 91 | only values that are not `empty` have to be stored. 92 | """ 93 | @spec identity(tensor) :: any 94 | def identity(tensor = %Tensor{}) do 95 | tensor.identity 96 | end 97 | 98 | 99 | @behaviour Access 100 | 101 | @doc """ 102 | Returns a Tensor of one order less, containing all fields for which the highest-order accessor location matches `index`. 103 | 104 | In the case of a Vector, returns the simple value at the given `index` location. 105 | In the case of a Matrix, returns a Vector containing the row indicated by `index`. 106 | 107 | 108 | `index` has to be an integer, smaller than the size of the highest dimension of the tensor. 109 | When `index` is negative, we will look from the right side of the Tensor. 110 | 111 | If `index` falls outside of the range of the Tensor's highest dimension, `:error` is returned. 112 | See also `get/3`. 113 | 114 | This is part of the `Access` Behaviour implementation for Tensor. 115 | """ 116 | @spec fetch(tensor, integer) :: {:ok, any} | :error 117 | def fetch(tensor, index) 118 | def fetch(%Tensor{}, index) when not(is_number(index)), do: :error 119 | def fetch(tensor = %Tensor{dimensions: [current_dimension|_]}, index) when is_number(index) do 120 | index = (index < 0) && (current_dimension + index) || index 121 | if index >= current_dimension || index < 0 do 122 | :error 123 | else 124 | if vector?(tensor) do # Return item inside vector. 125 | {:ok, Map.get(tensor.contents, index, tensor.identity)} 126 | else 127 | # Return lower dimension slice of tensor.
128 | contents = Map.get(tensor.contents, index, %{}) 129 | dimensions = tl(tensor.dimensions) 130 | {:ok, %Tensor{identity: tensor.identity, contents: contents, dimensions: dimensions}} 131 | end 132 | end 133 | end 134 | 135 | @doc """ 136 | Returns the element at `index` from `tensor`. If `index` is out of bounds, returns `default`. 137 | """ 138 | @spec get(tensor, integer, any) :: any 139 | def get(tensor, index, default) do 140 | case fetch(tensor, index) do 141 | {:ok, result} -> result 142 | :error -> default 143 | end 144 | end 145 | 146 | @doc """ 147 | Removes the element associated with `index` from the tensor. 148 | Returns a tuple, the first element being the removed element (or `nil` if nothing was removed), 149 | the second the updated Tensor with the element removed. 150 | 151 | `index` has to be an integer, smaller than the size of the highest dimension of the tensor. 152 | When `index` is negative, we will look from the right side of the Tensor. 153 | 154 | Notice that because of how Tensors are structured, the structure of the tensor will not change. 155 | Elements that are popped are reset to the 'identity' value. 156 | 157 | This is part of the Access Behaviour implementation for Tensor. 
158 | 159 | ## Examples 160 | 161 | iex> mat = Matrix.new([[1,2],[3,4]], 2,2) 162 | iex> {vector, mat2} = Tensor.pop(mat, 0) 163 | iex> vector 164 | #Vector<(2)[1, 2]> 165 | iex> inspect(mat2) 166 | "#Matrix<(2×2) 167 | ┌ ┐ 168 | │ 0, 0│ 169 | │ 3, 4│ 170 | └ ┘ 171 | > 172 | " 173 | """ 174 | @spec pop(tensor, integer, any) :: { tensor | any, tensor} 175 | def pop(tensor, index, default \\ nil) 176 | 177 | def pop(tensor = %Tensor{}, index, default) when not(is_integer(index)) do 178 | {default, tensor} 179 | end 180 | 181 | def pop(tensor = %Tensor{dimensions: [current_dimension|_]}, index, default) do 182 | index = (index < 0) && (current_dimension + index) || index 183 | if index < 0 || index >= current_dimension do 184 | {default, tensor} 185 | else 186 | if vector?(tensor) do 187 | {popped_value, new_contents} = Map.pop(tensor.contents, index, default) 188 | {popped_value, %Tensor{tensor | contents: new_contents} } 189 | else 190 | {popped_contents, new_contents} = Map.pop(tensor.contents, index, %{}) 191 | lower_dimensions = tl(tensor.dimensions) 192 | { 193 | %Tensor{contents: popped_contents, dimensions: lower_dimensions, identity: tensor.identity}, 194 | %Tensor{tensor | contents: new_contents} 195 | } 196 | end 197 | end 198 | end 199 | 200 | @doc """ 201 | Gets the value inside `tensor` at key `key`, and calls the passed function `fun` on it, 202 | which might update it, or return `:pop` if it ought to be removed. 203 | 204 | 205 | `key` has to be an integer, smaller than the size of the highest dimension of the tensor. 206 | When `key` is negative, we will look from the right side of the Tensor.
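For example, updating the first element of a Vector:

    iex> v = Tensor.new([1, 2, 3])
    iex> {old, v2} = Tensor.get_and_update(v, 0, fn x -> {x, x * 10} end)
    iex> {old, Tensor.to_list(v2)}
    {1, [10, 2, 3]}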
207 | 208 | """ 209 | @spec get_and_update(tensor, integer, (any -> {get, any})) :: {get, tensor} when get: var 210 | def get_and_update(tensor = %Tensor{dimensions: [current_dimension|_], identity: identity}, key, fun) do 211 | key = (key < 0) && (current_dimension + key) || key 212 | if !is_integer(key) || key < 0 || key >= current_dimension do 213 | raise Tensor.AccessError, key: key 214 | end 215 | {result, contents} = 216 | if vector? tensor do 217 | current_value = Map.get(tensor.contents, key, identity) 218 | case fun.(current_value) do 219 | :pop -> {current_value, Map.delete(tensor.contents, key)} 220 | {get, ^identity} -> {get, Map.delete(tensor.contents, key)} 221 | {get, update} -> {get, Map.put(tensor.contents, key, update)} 222 | other -> raise "the given function must return a two-element tuple or :pop, got: #{inspect(other)}" 223 | end 224 | else 225 | {:ok, ll_tensor} = fetch(tensor, key) 226 | {result, ll_tensor2} = fun.(ll_tensor) 227 | {result, Map.put(tensor.contents, key, ll_tensor2.contents)} 228 | end 229 | {result, %Tensor{tensor | contents: contents}} 230 | end 231 | 232 | 233 | 234 | @doc """ 235 | Creates a new Tensor from a list of lists (of lists of lists of ...). 236 | The second argument should be the dimensions the tensor should become. 237 | The optional third argument is an identity value for the tensor, that all non-set values will default to. 238 | 239 | TODO: Solve this, maybe find a nicer way to create tensors. 240 | """ 241 | @spec new([], [integer], any) :: tensor 242 | def new(nested_list_of_values, dimensions \\ nil, identity \\ 0) do 243 | dimensions = dimensions || [length(nested_list_of_values)] 244 | # TODO: Dimension inference.
245 | contents = 246 | nested_list_of_values 247 | |> nested_list_to_sparse_nested_map(identity) 248 | %Tensor{contents: contents, identity: identity, dimensions: dimensions} 249 | end 250 | 251 | defp nested_list_to_sparse_nested_map(list, identity) do 252 | list 253 | |> Enum.with_index 254 | |> Enum.reduce(%{}, fn 255 | {sublist, index}, map when is_list(sublist) -> 256 | Map.put(map, index, nested_list_to_sparse_nested_map(sublist, identity)) 257 | {^identity, _index}, map -> 258 | map 259 | {item, index}, map -> 260 | Map.put(map, index, item) 261 | end) 262 | end 263 | 264 | @doc """ 265 | Converts the tensor to a nested list of values. 266 | 267 | For a Vector, returns a list of values. 268 | For a Matrix, returns a list of lists of values. 269 | For an order-3 Tensor, returns a list of lists of lists of values. 270 | Etc. 271 | """ 272 | @spec to_list(tensor) :: list 273 | def to_list(tensor) do 274 | do_to_list(tensor.contents, tensor.dimensions, tensor.identity) 275 | end 276 | 277 | defp do_to_list(_tensor_contents, [dimension | _dimensions], _identity) when dimension <= 0 do 278 | [] 279 | end 280 | 281 | defp do_to_list(tensor_contents, [dimension], identity) do 282 | for x <- 0..dimension-1 do 283 | Map.get(tensor_contents, x, identity) 284 | end 285 | end 286 | 287 | defp do_to_list(tensor_contents, [dimension | dimensions], identity) do 288 | for x <- 0..dimension-1 do 289 | do_to_list(Map.get(tensor_contents, x, %{}), dimensions, identity) 290 | end 291 | end 292 | 293 | @doc """ 294 | `lifts` a Tensor up one order, by adding a dimension of size `1` to the start. 295 | 296 | This transforms a length-`n` Vector to a 1×`n` Matrix, a `n`×`m` matrix to a `1`×`n`×`m` 3-order Tensor, etc.
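For example:

    iex> Tensor.new([1, 2, 3]) |> Tensor.lift |> Tensor.dimensions
    [1, 3]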
297 | 298 | See also `Tensor.slices/1` 299 | """ 300 | @spec lift(tensor) :: tensor 301 | def lift(tensor) do 302 | %Tensor{ 303 | identity: tensor.identity, 304 | dimensions: [1|tensor.dimensions], 305 | contents: %{0 => tensor.contents} 306 | } 307 | end 308 | 309 | 310 | 311 | 312 | 313 | 314 | 315 | 316 | 317 | 318 | 319 | use_if_exists?(FunLand.Mappable, []) 320 | 321 | @doc """ 322 | Maps `fun` over all values in the Tensor. 323 | 324 | This is a _true_ mapping operation, as the result will be a new Tensor. 325 | 326 | `fun` gets the current value as input, and should return the new value to use. 327 | 328 | It is important that `fun` is a pure function, as internally it will only be mapped over all values 329 | that are non-empty, and once over the identity of the tensor. 330 | """ 331 | @spec map(tensor, (any -> any)) :: tensor 332 | def map(tensor, fun) do 333 | new_identity = fun.(tensor.identity) 334 | new_contents = do_map(tensor.contents, tensor.dimensions, fun, new_identity) 335 | %Tensor{tensor | identity: new_identity, contents: new_contents} 336 | end 337 | 338 | defp do_map(tensor_contents, [_lowest_dimension], fun, new_identity) do 339 | for {k,v} <- tensor_contents, into: %{} do 340 | case fun.(v) do 341 | ^new_identity -> 342 | {:new_identity, new_identity} 343 | other_value -> 344 | {k, other_value} 345 | end 346 | end 347 | |> Map.delete(:new_identity) 348 | end 349 | 350 | defp do_map(tensor_contents, [_dimension | dimensions], fun, new_identity) do 351 | for {k,v} <- tensor_contents, into: %{} do 352 | {k, do_map(v, dimensions, fun, new_identity)} 353 | end 354 | end 355 | 356 | @doc """ 357 | Returns a new tensor, where all values are `{list_of_coordinates, value}` tuples.
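For example, for a Vector:

    iex> Tensor.new([5, 6]) |> Tensor.with_coordinates |> Tensor.to_list
    [{[0], 5}, {[1], 6}]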
358 | 359 | Note that this new tensor is always dense, as the coordinates of all values are different. 360 | The identity of the resulting tensor is `nil`. 361 | """ 362 | @spec with_coordinates(tensor) :: tensor 363 | def with_coordinates(tensor = %Tensor{}) do 364 | with_coordinates(tensor, []) 365 | end 366 | def with_coordinates(tensor = %Tensor{dimensions: [current_dimension]}, coordinates) do 367 | for i <- 0..(current_dimension-1), into: %Tensor{dimensions: [0]} do 368 | {:lists.reverse([i|coordinates]), tensor[i]} 369 | end 370 | end 371 | 372 | def with_coordinates(tensor = %Tensor{dimensions: [current_dimension | lower_dimensions]}, coordinates) do 373 | for i <- 0..(current_dimension-1), into: %Tensor{dimensions: [0 | lower_dimensions]} do 374 | with_coordinates(tensor[i], [i|coordinates]) 375 | end 376 | end 377 | 378 | @doc """ 379 | Maps a function over the values in the tensor. 380 | 381 | The function will receive a tuple of the form {list_of_coordinates, value}. 382 | 383 | Note that only the values that are not the same as the identity will call the function. 384 | The function will be called once to calculate the new identity. This call will be of shape {:identity, value}. 385 | 386 | Because of this _sparse/lazy_ invocation, it is important that `fun` is a pure function, as this is the only way 387 | to guarantee that the results will be the same, regardless of at what place the identity is used.
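For example (only the two stored values, plus the identity once, invoke the function):

    iex> t = Tensor.new([1, 0, 2])
    iex> Tensor.sparse_map_with_coordinates(t, fn {_coords, v} -> v * 10 end) |> Tensor.to_list
    [10, 0, 20]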
388 | """ 389 | @spec sparse_map_with_coordinates(tensor, ({list | :identity, any} -> any)) :: tensor 390 | def sparse_map_with_coordinates(tensor, fun) do 391 | new_identity = fun.({:identity, tensor.identity}) 392 | new_contents = do_sparse_map_with_coordinates(tensor.contents, tensor.dimensions, fun, [], new_identity) 393 | 394 | %Tensor{tensor | identity: new_identity, contents: new_contents} 395 | end 396 | 397 | defp do_sparse_map_with_coordinates(tensor_contents, [_lowest_dimension], fun, coordinates, new_identity) do 398 | for {k,v} <- tensor_contents, into: %{} do 399 | case fun.({:lists.reverse([k|coordinates]), v}) do 400 | ^new_identity -> 401 | {:new_identity, new_identity} 402 | other_value -> 403 | {k, other_value} 404 | end 405 | end 406 | |> Map.delete(:new_identity) # Values that become the new identity are removed from the sparse map. 407 | end 408 | 409 | defp do_sparse_map_with_coordinates(tensor_contents, [_current_dimension | lower_dimensions], fun, coordinates, new_identity) do 410 | for {k,v} <- tensor_contents, into: %{} do 411 | {k, do_sparse_map_with_coordinates(v, lower_dimensions, fun, [k|coordinates], new_identity)} 412 | end 413 | end 414 | 415 | @doc """ 416 | Returns a map where the keys are coordinate lists, 417 | and the values are the values stored in the tensor. 418 | 419 | This is a flattened representation of the values that are actually stored 420 | inside the sparse tensor. 421 | 422 | This representation could be used to do advanced manipulation of the 423 | sparsely-stored tensor elements. 424 | 425 | See `from_sparse_map/2` for the inverse operation.
426 | 427 | 428 | iex> mat = Matrix.new([[1,2,3],[4,5,6],[7,8,9]], 3,3) 429 | iex> Matrix.to_sparse_map(mat) 430 | %{[0, 0] => 1, [0, 1] => 2, [0, 2] => 3, [1, 0] => 4, [1, 1] => 5, [1, 2] => 6, 431 | [2, 0] => 7, [2, 1] => 8, [2, 2] => 9} 432 | 433 | """ 434 | def to_sparse_map(tensor) do 435 | do_to_sparse_map(tensor.contents, tensor.dimensions, []) 436 | end 437 | 438 | defp do_to_sparse_map(tensor_contents, [_lowest_dimension], coordinates) do 439 | for {k, v} <- tensor_contents, into: %{} do 440 | {:lists.reverse([k | coordinates]), v} 441 | end 442 | end 443 | 444 | defp do_to_sparse_map(tensor_contents, [_current_dimension | lower_dimensions], coordinates) do 445 | Enum.reduce(tensor_contents, %{}, fn {k, v}, result_map -> 446 | Map.merge(result_map, do_to_sparse_map(v, lower_dimensions, [k|coordinates])) 447 | end) 448 | end 449 | 450 | @doc """ 451 | Converts a sparse map, where each key is a coordinate list, 452 | into a Tensor with the given dimensions and contents. 453 | 454 | See `to_sparse_map/1` for the inverse operation.
455 | 456 | iex> mat_map = %{[0, 0] => 1, [0, 1] => 2, [0, 2] => 3, [1, 0] => 4, [1, 1] => 5, [1, 2] => 6, [2, 0] => 7, [2, 1] => 8, [2, 2] => 9} 457 | iex> mat = Matrix.from_sparse_map(mat_map, 3, 3) 458 | iex> mat == Matrix.new([[1,2,3],[4,5,6],[7,8,9]], 3, 3) 459 | true 460 | """ 461 | def from_sparse_map(list, dimensions, identity \\ 0) when is_list(dimensions) do 462 | new_contents = 463 | Enum.reduce(list, %{}, fn {k, v}, tensor_contents -> 464 | do_from_sparse_map(v, k, tensor_contents) 465 | end) 466 | %Tensor{dimensions: dimensions, identity: identity, contents: new_contents} 467 | end 468 | 469 | defp do_from_sparse_map(value, [coordinate], tensor_contents) do 470 | Map.merge(tensor_contents, %{coordinate => value}) 471 | end 472 | 473 | defp do_from_sparse_map(value, [current_coordinate | lower_coordinates], tensor_contents) do 474 | put_in(tensor_contents[current_coordinate], do_from_sparse_map(value, lower_coordinates, Access.get(tensor_contents, current_coordinate, %{}))) 475 | end 476 | 477 | @doc """ 478 | Maps a function over _all_ values in the tensor, including all values that are equal to the tensor identity. 479 | This is useful to map a function with side effects over the Tensor. 480 | 481 | The function will be called once to calculate the new identity. This call will be of shape {:identity, value}. 482 | After the dense map, all values that are the same as the newly calculated identity are again removed, to make the Tensor sparse again. 
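For example (here every position, including the empty one, is visited):

    iex> t = Tensor.new([1, 0, 2])
    iex> Tensor.dense_map_with_coordinates(t, fn {_coords, v} -> v * 2 end) |> Tensor.to_list
    [2, 0, 4]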
483 | 484 | The function will receive a tuple of the form {list_of_coordinates, value}. 485 | """ 486 | @spec dense_map_with_coordinates(tensor, ({list | :identity, any} -> any)) :: tensor 487 | def dense_map_with_coordinates(tensor, fun) do 488 | new_identity = fun.({:identity, tensor.identity}) 489 | tensor = %Tensor{tensor | identity: new_identity} 490 | do_dense_map_with_coordinates(tensor, tensor.dimensions, fun, []) 491 | end 492 | 493 | defp do_dense_map_with_coordinates(tensor, [dimension], fun, coordinates) do 494 | for i <- 0..(dimension-1), into: %Tensor{dimensions: [0], identity: tensor.identity} do 495 | fun.({:lists.reverse([i|coordinates]), tensor[i]}) 496 | end 497 | end 498 | 499 | defp do_dense_map_with_coordinates(tensor, [dimension | lower_dimensions], fun, coordinates) do 500 | for i <- 0..(dimension-1), into: %Tensor{dimensions: [0|lower_dimensions], identity: tensor.identity} do 501 | do_dense_map_with_coordinates(tensor[i], lower_dimensions, fun, [i | coordinates]) 502 | end 503 | end 504 | 505 | 506 | @doc """ 507 | Returns a list containing all lower-dimension Tensors in the Tensor. 508 | 509 | For a Vector, this will just be a list of values. 510 | For a Matrix, this will be a list of rows. 511 | For an order-3 Tensor, this will be a list of matrices, etc. 512 | """ 513 | @spec slices(tensor) :: tensor | [] 514 | def slices(tensor = %Tensor{dimensions: [current_dimension | _lower_dimensions]}) do 515 | for i <- 0..current_dimension-1 do 516 | tensor[i] 517 | end 518 | end 519 | 520 | @doc """ 521 | Builds up a tensor from a list of slices in a lower dimension. 522 | A list of values will build a Vector. 523 | A list of same-length vectors will create a Matrix. 524 | A list of same-size matrices will create an order-3 Tensor.
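For example, building a Matrix from two Vectors:

    iex> v1 = Tensor.new([1, 2])
    iex> v2 = Tensor.new([3, 4])
    iex> Tensor.from_slices([v1, v2]) |> Tensor.to_list
    [[1, 2], [3, 4]]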
525 | """ 526 | @spec from_slices([] | tensor) :: tensor 527 | def from_slices(list_of_slices = [%Tensor{dimensions: dimensions , identity: identity} | _rest]) do 528 | Enum.into(list_of_slices, Tensor.new([], [0 | dimensions], identity)) 529 | end 530 | 531 | def from_slices(list_of_values) do 532 | Tensor.new(list_of_values) 533 | end 534 | 535 | 536 | @doc """ 537 | Transposes the Tensor, by swapping the `a`-th dimension for the `b`-th dimension. 538 | 539 | This is done in three steps (outside <-> a, outside <-> b, outside <-> a), so it is not extremely fast. 540 | """ 541 | @spec transpose(tensor, non_neg_integer, non_neg_integer) :: tensor 542 | def transpose(tensor, dimension_a_index, dimension_b_index) do 543 | tensor 544 | |> transpose(dimension_a_index) 545 | |> transpose(dimension_b_index) 546 | |> transpose(dimension_a_index) 547 | end 548 | 549 | 550 | @doc """ 551 | Transposes the Tensor, by swapping the outermost dimension for the `b`-th dimension. 552 | """ 553 | @spec transpose(tensor, non_neg_integer) :: tensor 554 | def transpose(tensor, dimension_b_index) do 555 | # Note that dimensions are not correct as we change them. 556 | transposed_tensor = 557 | sparse_contents_map(tensor, fn {coords, v} -> 558 | {Helper.swap_elems_in_list(coords, 0, dimension_b_index), v} 559 | end) 560 | # So we recompute them, and return a tensor where the dimensions are updated as well. 561 | transposed_dimensions = Helper.swap_elems_in_list(tensor.dimensions, 0, dimension_b_index) 562 | %Tensor{tensor | dimensions: transposed_dimensions, contents: transposed_tensor.contents} 563 | end 564 | 565 | # Maps over a tensor's contents in a sparse way 566 | # 1. deflate contents 567 | # 2. map over deflated contents map where each key is a coords list. 568 | # 3. inflate contents 569 | # returns the new contents for the new tensor 570 | # Note that the new dimensions might be invalid if no special care is taken when they are changed, to keep them within bounds. 
571 | defp sparse_contents_map(tensor, fun) do 572 | new_contents = 573 | tensor 574 | |> sparse_tensor_with_coordinates 575 | |> Map.fetch!(:contents) 576 | |> flatten_nested_map_of_tuples 577 | |> Enum.map(fun) 578 | |> Enum.into(%{}) 579 | |> inflate_map 580 | %Tensor{tensor | contents: new_contents} 581 | end 582 | 583 | # Returns a tensor where all internal values are changed to `{coordinates, value}` tuples. 584 | defp sparse_tensor_with_coordinates(tensor) do 585 | Tensor.sparse_map_with_coordinates(tensor, fn {coords, v} -> {coords, v} end) 586 | end 587 | 588 | # Turns a map of the format `%{1 => %{2 => %{3 => {[1,2,3], 4} }}}` 589 | # into `[{[1,2,3], 4}]` 590 | defp flatten_nested_map_of_tuples(nested_map_of_tuples = %{}) do 591 | values = Map.values(nested_map_of_tuples) 592 | if values != [] && match?({_,_}, hd(values)) do 593 | values 594 | else 595 | Enum.flat_map(values, &flatten_nested_map_of_tuples/1) 596 | end 597 | end 598 | 599 | 600 | # elements in map are supposed to be {list_of_coords, val} 601 | defp inflate_map(map) do 602 | Enum.reduce(map, %{}, fn {list_of_coords, val}, new_map -> 603 | Helper.put_in_path(new_map, list_of_coords, val) 604 | end) 605 | end 606 | 607 | defmodule DimensionsDoNotMatchError do 608 | defexception message: "The dimensions of the two given tensors do not match."
609 | end 610 | 611 | @doc """ 612 | Merges `tensor_a` with `tensor_b` by calling `fun` for each element that exists in at least one of them: 613 | 614 | - When a certain location is occupied in `tensor_a`, `fun` is called using `tensor_b`'s identity, with three arguments: `coords_list, tensor_a_val, tensor_b_identity` 615 | - When a certain location is occupied in `tensor_b`, `fun` is called using `tensor_a`'s identity, with three arguments: `coords_list, tensor_a_identity, tensor_b_val` 616 | - When a certain location is occupied in both `tensor_a` and `tensor_b`, `fun` is called with three arguments: `coords_list, tensor_a_val, tensor_b_val` 617 | 618 | Finally, `fun` is invoked one last time, with `:identity, tensor_a_identity, tensor_b_identity`. 619 | 620 | A `DimensionsDoNotMatchError` will be raised unless `tensor_a` and `tensor_b` have the same dimensions. 621 | """ 622 | 623 | @spec merge_with_index(%Tensor{}, %Tensor{}, ([integer] | :identity, a, a -> any)) :: %Tensor{} when a: any 624 | def merge_with_index(tensor_a = %Tensor{dimensions: dimensions}, tensor_b = %Tensor{dimensions: dimensions}, fun) do 625 | a_flat_contents = sparse_tensor_with_coordinates(tensor_a).contents |> flatten_nested_map_of_tuples |> Map.new 626 | b_flat_contents = sparse_tensor_with_coordinates(tensor_b).contents |> flatten_nested_map_of_tuples |> Map.new 627 | 628 | new_identity = fun.(:identity, tensor_a.identity, tensor_b.identity) 629 | 630 | a_diff = Tensor.Helper.map_difference(a_flat_contents, b_flat_contents) 631 | b_diff = Tensor.Helper.map_difference(b_flat_contents, a_flat_contents) 632 | 633 | a_overlap = Tensor.Helper.map_difference(a_flat_contents, a_diff) 634 | b_overlap = Tensor.Helper.map_difference(b_flat_contents, b_diff) 635 | 636 | overlap = Map.merge(a_overlap, b_overlap, fun) 637 | 638 | merged_a_diff = Enum.into(a_diff, %{}, fn {k, v} -> {k, fun.(k, v, tensor_b.identity)} end) 639 | merged_b_diff = Enum.into(b_diff, %{}, fn {k, v}
-> {k, fun.(k, tensor_a.identity, v)} end) 640 | 641 | 642 | new_contents = 643 | overlap 644 | |> Map.merge(merged_a_diff) 645 | |> Map.merge(merged_b_diff) 646 | |> inflate_map 647 | 648 | %Tensor{dimensions: dimensions, identity: new_identity, contents: new_contents} 649 | |> make_sparse 650 | end 651 | 652 | def merge_with_index(_tensor_a, _tensor_b, _fun) do 653 | raise DimensionsDoNotMatchError 654 | end 655 | 656 | # Map the identity function over the tensor, to ensure that all values that are equal to the Tensor identity are removed again. 657 | # So it is sparse once again. 658 | defp make_sparse(tensor = %Tensor{}) do 659 | map(tensor, fn x -> x end) 660 | end 661 | 662 | 663 | @doc """ 664 | Merges `tensor_a` with `tensor_b` by calling `fun` for each element that exists in at least one of them: 665 | 666 | - When a certain location is occupied in `tensor_a`, `fun` is called using `tensor_b`'s identity, with two arguments: `tensor_a_val, tensor_b_identity` 667 | - When a certain location is occupied in `tensor_b`, `fun` is called using `tensor_a`'s identity, with two arguments: `tensor_a_identity, tensor_b_val` 668 | - When a certain location is occupied in both `tensor_a` and `tensor_b`, `fun` is called with two arguments: `tensor_a_val, tensor_b_val` 669 | 670 | Finally, `fun` is invoked one last time, with `tensor_a_identity, tensor_b_identity`. 671 | 672 | An error will be raised unless `tensor_a` and `tensor_b` have the same dimensions. 673 | """ 674 | @spec merge(%Tensor{}, %Tensor{}, (a, a -> any)) :: %Tensor{} when a: any 675 | def merge(tensor_a, tensor_b, fun) do 676 | merge_with_index(tensor_a, tensor_b, fn _k, a, b -> fun.(a, b) end) 677 | end 678 | 679 | 680 | @doc """ 681 | Elementwise addition. 682 | 683 | - If both `a` and `b` are Tensors, the same as calling `add_tensor/2`. 684 | - If one of `a` or `b` is any kind of number, the same as calling `add_number/2`. 
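For example:

    iex> Matrix.new([[1,2],[3,4]], 2, 2) |> Tensor.add(1) |> Tensor.to_list
    [[2, 3], [4, 5]]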
685 | """ 686 | @spec add(tensor , tensor) :: tensor 687 | @spec add(Numeric.t, tensor) :: tensor 688 | @spec add(tensor, Numeric.t) :: tensor 689 | def add(a = %Tensor{}, b = %Tensor{}), do: add_tensor(a, b) 690 | def add(a = %Tensor{}, b), do: add_number(a, b) 691 | def add(a, b = %Tensor{}), do: add_number(a, b) 692 | 693 | @doc """ 694 | Elementwise subtraction. 695 | 696 | - If both `a` and `b` are Tensors, the same as calling `sub_tensor/2`. 697 | - If one of `a` or `b` is any kind of number, the same as calling `sub_number/2`. 698 | """ 699 | @spec sub(tensor , tensor) :: tensor 700 | @spec sub(Numeric.t, tensor) :: tensor 701 | @spec sub(tensor, Numeric.t) :: tensor 702 | def sub(a = %Tensor{}, b = %Tensor{}), do: sub_tensor(a, b) 703 | def sub(a = %Tensor{}, b), do: sub_number(a, b) 704 | def sub(a, b = %Tensor{}), do: sub_number(a, b) 705 | 706 | 707 | @doc """ 708 | Elementwise multiplication. 709 | 710 | - If both `a` and `b` are Tensors, the same as calling `mult_tensor/2`. 711 | - If one of `a` or `b` is any kind of number, the same as calling `mult_number/2`. 712 | """ 713 | @spec mult(tensor , tensor) :: tensor 714 | @spec mult(Numeric.t, tensor) :: tensor 715 | @spec mult(tensor, Numeric.t) :: tensor 716 | def mult(a = %Tensor{}, b = %Tensor{}), do: mult_tensor(a, b) 717 | def mult(a = %Tensor{}, b), do: mult_number(a, b) 718 | def mult(a, b = %Tensor{}), do: mult_number(a, b) 719 | 720 | 721 | @doc """ 722 | Elementwise division. 723 | 724 | - If both `a` and `b` are Tensors, the same as calling `div_tensor/2`. 725 | - If one of `a` or `b` is any kind of number, the same as calling `div_number/2`.
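For example (floats are used here, since dividing built-in numbers yields floats):

    iex> Tensor.new([8.0, 6.0], [2], 1.0) |> Tensor.div(2) |> Tensor.to_list
    [4.0, 3.0]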
726 | """ 727 | @spec div(tensor , tensor) :: tensor 728 | @spec div(Numeric.t, tensor) :: tensor 729 | @spec div(tensor, Numeric.t) :: tensor 730 | def div(a = %Tensor{}, b = %Tensor{}), do: div_tensor(a, b) 731 | def div(a = %Tensor{}, b), do: div_number(a, b) 732 | def div(a, b = %Tensor{}), do: div_number(a, b) 733 | 734 | 735 | 736 | @doc """ 737 | If the Tensor is the first argument `a`, adds the number `b` to all elements in Tensor `a`. 738 | 739 | If the Tensor is the second argument `b`, adds all numbers in `b` to `a`. 740 | 741 | _(There only is a difference in the outcomes of these two cases if on the underlying numeric type, addition is not commutative)_ 742 | """ 743 | @spec add_number(tensor, Numeric.t) :: tensor 744 | def add_number(a = %Tensor{}, b) do 745 | Tensor.map(a, &(Numbers.add(&1, b))) 746 | end 747 | def add_number(a, b = %Tensor{}) do 748 | Tensor.map(b, &(Numbers.add(a, &1))) 749 | end 750 | 751 | 752 | @doc """ 753 | If the Tensor is the first argument `a`, subtracts the number `b` from all elements in Tensor `a`. 754 | 755 | If the Tensor is the second argument `b`, the result is a Tensor where each element is `a` minus the corresponding element of `b`. 756 | _(Note that these two cases differ, because subtraction is not commutative.)_ 757 | """ 758 | @spec sub_number(tensor, Numeric.t) :: tensor 759 | def sub_number(a = %Tensor{}, b) do 760 | Tensor.map(a, &(Numbers.sub(&1, b))) 761 | end 762 | def sub_number(a, b = %Tensor{}) do 763 | Tensor.map(b, &(Numbers.sub(a, &1))) 764 | end 765 | 766 | @doc """ 767 | If the Tensor is the first argument `a`, multiplies all elements of Tensor `a` with the number `b`. 768 | 769 | If the Tensor is the second argument `b`, multiplies `a` with all elements of Tensor `b`.
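For example:

    iex> Tensor.new([1, 2, 3]) |> Tensor.mult_number(2) |> Tensor.to_list
    [2, 4, 6]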
770 | """ 771 | @spec mult_number(tensor, number) :: tensor 772 | def mult_number(a = %Tensor{}, b) do 773 | Tensor.map(a, &(Numbers.mult(&1, b))) 774 | end 775 | def mult_number(a, b = %Tensor{}) do 776 | Tensor.map(b, &(Numbers.mult(a, &1))) 777 | end 778 | 779 | 780 | @doc """ 781 | If the Tensor is the first argument `a`, divides all elements of Tensor `a` by the number `b`. 782 | 783 | If the Tensor is the second argument `b`, the result is a Tensor filled with `a` divided by all numbers in Tensor `b`. 784 | """ 785 | @spec div_number(tensor, number) :: tensor 786 | def div_number(a = %Tensor{}, b) do 787 | Tensor.map(a, &(Numbers.div(&1, b))) 788 | end 789 | def div_number(a, b = %Tensor{}) do 790 | Tensor.map(b, &(Numbers.div(a, &1))) 791 | end 792 | 793 | 794 | @doc """ 795 | Elementwise addition of `tensor_a` and `tensor_b`. 796 | """ 797 | @spec add_tensor(tensor, tensor) :: tensor 798 | def add_tensor(tensor_a = %Tensor{}, tensor_b = %Tensor{}) do 799 | Tensor.merge(tensor_a, tensor_b, &(Numbers.add(&1, &2))) 800 | end 801 | 802 | @doc """ 803 | Elementwise subtraction of `tensor_b` from `tensor_a`. 804 | """ 805 | @spec sub_tensor(tensor, tensor) :: tensor 806 | def sub_tensor(tensor_a = %Tensor{}, tensor_b = %Tensor{}) do 807 | Tensor.merge(tensor_a, tensor_b, &(Numbers.sub(&1, &2))) 808 | end 809 | 810 | @doc """ 811 | Elementwise multiplication of `tensor_a` with `tensor_b`. 812 | """ 813 | @spec mult_tensor(tensor, tensor) :: tensor 814 | def mult_tensor(tensor_a = %Tensor{}, tensor_b = %Tensor{}) do 815 | Tensor.merge(tensor_a, tensor_b, &(Numbers.mult(&1, &2))) 816 | end 817 | 818 | @doc """ 819 | Elementwise division of `tensor_a` by `tensor_b`.
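For example (the identities are chosen nonzero here, so dividing the identities is defined):

    iex> a = Tensor.new([8.0, 9.0], [2], 1.0)
    iex> b = Tensor.new([2.0, 3.0], [2], 1.0)
    iex> Tensor.div_tensor(a, b) |> Tensor.to_list
    [4.0, 3.0]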
820 | """ 821 | @spec div_tensor(tensor, tensor) :: tensor 822 | def div_tensor(tensor_a = %Tensor{}, tensor_b = %Tensor{}) do 823 | Tensor.merge(tensor_a, tensor_b, &(Numbers.div(&1, &2))) 824 | end 825 | 826 | @doc """ 827 | Returns the Tensor where all elements are converted to 828 | their absolute values. 829 | """ 830 | def abs(tensor = %Tensor{}) do 831 | Tensor.map(tensor, &(Numbers.abs(&1))) 832 | end 833 | 834 | @doc """ 835 | Returns the Tensor where all elements have been negated. 836 | """ 837 | def minus(tensor = %Tensor{}) do 838 | Tensor.map(tensor, &(Numbers.minus(&1))) 839 | end 840 | 841 | 842 | defimpl Enumerable do 843 | 844 | def count(tensor), do: {:ok, Enum.reduce(tensor.dimensions, 1, &(&1 * &2))} 845 | 846 | def member?(_tensor, _element), do: {:error, __MODULE__} 847 | 848 | def reduce(tensor, acc, fun) do 849 | tensor 850 | |> Tensor.slices 851 | |> do_reduce(acc, fun) 852 | end 853 | 854 | defp do_reduce(_, {:halt, acc}, _fun), do: {:halted, acc} 855 | defp do_reduce(list, {:suspend, acc}, fun), do: {:suspended, acc, &do_reduce(list, &1, fun)} 856 | defp do_reduce([], {:cont, acc}, _fun), do: {:done, acc} 857 | defp do_reduce([h | t], {:cont, acc}, fun), do: do_reduce(t, fun.(h, acc), fun) 858 | 859 | def slice(_tensor) do 860 | {:error, __MODULE__} 861 | end 862 | end 863 | 864 | 865 | # if Code.ensure_loaded?(FunLand) do 866 | # Code.eval_quoted( 867 | # quote do 868 | # apply(Kernel, :use, [FunLand.Reducable, auto_enumerable: false]) 869 | # end, __ENV__) 870 | # end 871 | use_if_exists?(FunLand.Reducable, auto_enumerable: false) 872 | def reduce(tensor, acc, fun) do 873 | case Extractable.extract(tensor) do 874 | {:error, :empty} -> acc 875 | {:ok, {item, rest}} -> 876 | reduce(rest, fun.(item, acc), fun) 877 | end 878 | end 879 | 880 | defimpl Collectable do 881 | # This implementation is sparse. Values that equal the identity are not inserted. 
882 | def into(original) do
883 | {original, fn
884 | # Building a higher-order tensor from lower-order tensors.
885 | tensor = %Tensor{dimensions: [cur_dimension | lower_dimensions]},
886 | {:cont, elem = %Tensor{dimensions: elem_dimensions}}
887 | when lower_dimensions == elem_dimensions ->
888 | new_dimensions = [cur_dimension + 1 | lower_dimensions]
889 | new_tensor = %Tensor{tensor | dimensions: new_dimensions, contents: tensor.contents}
890 | put_in new_tensor, [cur_dimension], elem
891 | # Inserting values directly into a Vector
892 | tensor = %Tensor{dimensions: [length], identity: identity}, {:cont, elem} ->
893 | new_length = length + 1
894 | new_contents =
895 | if elem == identity do
896 | tensor.contents
897 | else
898 | put_in(tensor.contents, [length], elem)
899 | end
900 | %Tensor{tensor | dimensions: [new_length], contents: new_contents}
901 | _, {:cont, elem} ->
902 | # Other operations not permitted
903 | raise Tensor.CollectableError, elem
904 | tensor, :done -> tensor
905 | _tensor, :halt -> :ok
906 | end}
907 | end
908 | end
909 |
910 | defimpl Extractable do
911 | def extract(%Tensor{dimensions: [0 | _lower_dimensions]}) do
912 | {:error, :empty}
913 | end
914 | def extract(tensor = %Tensor{dimensions: [cur_dimension | lower_dimensions]}) do
915 | new_dimension = cur_dimension - 1
916 | highest_elem = tensor[new_dimension]
917 | new_dimensions = [new_dimension | lower_dimensions]
918 | new_contents = Map.delete(tensor.contents, new_dimension)
919 | new_tensor = %Tensor{tensor | dimensions: new_dimensions, contents: new_contents}
920 |
921 | {:ok, {highest_elem, new_tensor}}
922 | end
923 | end
924 |
925 | defimpl Insertable do
926 | # Vector
927 | def insert(tensor = %Tensor{dimensions: [length], identity: identity}, item) do
928 | new_length = length + 1
929 | new_contents =
930 | if item == identity do
931 | tensor.contents
932 | else
933 | put_in(tensor.contents, [length], item)
934 | end
935 | new_vector = %Tensor{tensor | dimensions:
[new_length], contents: new_contents}
936 | {:ok, new_vector}
937 | end
938 | # Matrix, Tensor
939 | def insert(tensor = %Tensor{dimensions: [cur_dimension | lower_dimensions]}, item = %Tensor{dimensions: lower_dimensions}) do
940 | new_dimension = cur_dimension + 1
941 | new_contents = put_in(tensor.contents, [cur_dimension], item.contents)
942 | new_tensor = %Tensor{tensor | dimensions: [new_dimension | lower_dimensions], contents: new_contents}
943 | {:ok, new_tensor}
944 | end
945 | def insert(%Tensor{}, _) do
946 | {:error, :invalid_item_type}
947 | end
948 | end
949 | end
950 |
--------------------------------------------------------------------------------
/lib/tensor/tensor/helper.ex:
--------------------------------------------------------------------------------
1 | defmodule Tensor.Tensor.Helper do
2 |
3 | # LISTS
4 |
5 | @doc """
6 | Swaps the element at position `pos_a` with the element at position `pos_b` inside a list.
7 |
8 |
9 | ## Examples
10 |
11 | iex> swap_elems_in_list([1,2,3,4,5], 1, 3)
12 | [1, 4, 3, 2, 5]
13 | """
14 | def swap_elems_in_list(list, pos_a, pos_a), do: list
15 | def swap_elems_in_list(list, pos_a, pos_b) when pos_a < pos_b do
16 | {initial, rest} = Enum.split(list, pos_a)
17 | {between, tail} = Enum.split(rest, pos_b - pos_a)
18 | a = hd(between)
19 | b = hd(tail)
20 | initial ++ [b] ++ tl(between) ++ [a] ++ tl(tail)
21 | end
22 |
23 | def swap_elems_in_list(list, pos_a, pos_b) when pos_b < pos_a, do: swap_elems_in_list(list, pos_b, pos_a)
24 |
25 |
26 | # MAPS
27 |
28 | @doc """
29 | Puts `val` inside the nested map `map`, at the path indicated by `keys`.
30 | This is needed because the normal `put_in` will fail if one of the levels
31 | indicated by `keys` has not been initialized to a map yet.
32 |
33 |
34 | ## Examples:
35 |
36 | iex> put_in_path(%{}, [1,2,3], 4)
37 | %{1 => %{2 => %{3 => 4}}}
38 | """
39 | def put_in_path(map, keys, val) do
40 | nestable_keys = keys |> Enum.map(&Access.key(&1, %{}))
41 | put_in(map, nestable_keys, val)
42 | end
43 | # def put_in_path(map, keys, val) do
44 | # do_put_in_path(map, keys, val, [])
45 | # end
46 |
47 | # defp do_put_in_path(map, [key], val, acc) do
48 | # new_acc = acc ++ [key]
49 | # put_in(map, new_acc, val)
50 | # end
51 |
52 | # defp do_put_in_path(map, [key | keys], val, acc) do
53 | # new_acc = acc ++ [key]
54 | # new_map = put_in(map, new_acc, get_in(map, new_acc) || %{})
55 | # do_put_in_path(new_map, keys, val, new_acc)
56 | # end
57 |
58 | @doc """
59 | Returns the keywise difference of two maps.
60 | So: only those entries of `map_a` are returned whose keys do not appear in `map_b`.
61 |
62 | ## Examples:
63 |
64 | iex> Tensor.Helper.map_difference(%{a: 1, b: 2, c: 3, d: 4}, %{b: 3, d: 5})
65 | %{a: 1, c: 3}
66 |
67 | """
68 | def map_difference(map_a, map_b) do
69 | Map.keys(map_b)
70 | |> Enum.reduce(map_a, fn key, map ->
71 | {_, new_map} = Map.pop(map, key)
72 | new_map
73 | end)
74 | end
75 |
76 | @doc """
77 | Returns the keywise intersection of two maps.
78 | So: only those entries of `map_a` are returned whose keys also appear in `map_b`.
79 |
80 | ## Examples:
81 |
82 | iex> Tensor.Helper.map_intersection(%{a: 1, b: 2, c: 3, d: 4}, %{b: 3, d: 5})
83 | %{b: 2, d: 4}
84 |
85 | """
86 | def map_intersection(map_a, map_b) do
87 | diff = map_difference(map_a, map_b)
88 | map_difference(map_a, diff)
89 | end
90 |
91 | @doc false
92 | defmacro use_if_exists?(module, opts) do
93 | module = Macro.expand(module, __ENV__)
94 | if Code.ensure_loaded?(module) do
95 | quote do
96 | use unquote(module), unquote(opts)
97 | end
98 | end
99 | end
100 |
101 | end
102 |
103 |
--------------------------------------------------------------------------------
/lib/tensor/tensor/inspect.ex:
--------------------------------------------------------------------------------
1 | defmodule Tensor.Tensor.Inspect do
2 | alias Tensor.{Tensor}
3 | def inspect(tensor, _opts) do
4 | """
5 | #Tensor<(#{dimension_string(tensor)})
6 | #{inspect_tensor_contents(tensor)}
7 | >
8 | """
9 | end
10 |
11 | def dimension_string(tensor) do
12 | tensor.dimensions |> Enum.join("×")
13 | end
14 |
15 | defp inspect_tensor_contents(tensor = %Tensor{dimensions: dimensions}) when length(dimensions) == 3 do
16 |
17 | [_, deepness | _] = Tensor.dimensions(tensor)
18 |
19 | tensor
20 | |> Tensor.to_list
21 | |> Enum.map(fn slice ->
22 | slice
23 | |> Enum.with_index
24 | |> Enum.map(fn {row, index} ->
25 | rowstr =
26 | row
27 | |> Enum.map(fn elem ->
28 | elem
29 | |> inspect
30 | |> String.pad_leading(8)
31 | end)
32 | |> Enum.join(",")
33 | "#{String.pad_leading("", 2 * index)}#{color(deepness, rem(index, deepness))}#{rowstr}#{IO.ANSI.reset}"
34 | end)
35 | |> Enum.join("\n")
36 | end)
37 | |> Enum.join(slice_join_str(deepness))
38 |
39 | end
40 |
41 | defp inspect_tensor_contents(tensor, is \\ []) do
42 | tensor
43 | |> Tensor.slices
44 | |> Enum.with_index
45 | |> Enum.map(fn {slice, i} ->
46 |
47 | if Tensor.order(slice) <= 3 do
48 | """
49 | #{inspect(:lists.reverse([i|is]))}
50 | #{inspect_tensor_contents(slice)}
51 |
""" 52 | else 53 | inspect_tensor_contents(slice, [i|is]) 54 | end 55 | end) 56 | |> Enum.join("\n\n\n") 57 | end 58 | 59 | defp color(deepness, depth) when deepness <= 3, do: [[IO.ANSI.bright, IO.ANSI.white], [IO.ANSI.white], [IO.ANSI.bright, IO.ANSI.black]] |> Enum.fetch!(depth) 60 | defp color(deepness, depth) when deepness <= 5, do: [[IO.ANSI.bright, IO.ANSI.white], [IO.ANSI.white], [IO.ANSI.bright, IO.ANSI.blue], [IO.ANSI.blue], [IO.ANSI.bright, IO.ANSI.black]] |> Enum.fetch!(depth) 61 | defp color(deepness, depth) when deepness <= 6, do: [[IO.ANSI.bright, IO.ANSI.white], [IO.ANSI.white], [IO.ANSI.yellow], [IO.ANSI.bright, IO.ANSI.blue], [IO.ANSI.blue], [IO.ANSI.bright, IO.ANSI.black]] |> Enum.fetch!(depth) 62 | defp color(_deepness, _depth), do: [IO.ANSI.white] 63 | 64 | defp slice_join_str(deepness) when deepness < 4, do: "\n" 65 | defp slice_join_str(_deepness), do: "\n\n" 66 | end 67 | -------------------------------------------------------------------------------- /lib/tensor/vector.ex: -------------------------------------------------------------------------------- 1 | defmodule Tensor.Vector do 2 | alias Tensor.{Vector, Tensor} 3 | 4 | import Kernel, except: [length: 1] 5 | defmodule Inspect do 6 | @doc false 7 | def inspect(vector, _opts) do 8 | "#Vector<(#{Tensor.Inspect.dimension_string(vector)})#{inspect Vector.to_list(vector)}>" 9 | end 10 | end 11 | 12 | def new() do 13 | Tensor.new([], [0], 0) 14 | end 15 | 16 | def new(length_or_list_or_range, identity \\ 0) 17 | 18 | def new(list, identity) when is_list(list) do 19 | Tensor.new(list, [Kernel.length(list)], identity) 20 | end 21 | 22 | def new(length, identity) when is_number(length) do 23 | Tensor.new([], [length], identity) 24 | end 25 | 26 | def new(range = _.._, identity) do 27 | new(range |> Enum.to_list, identity) 28 | end 29 | 30 | def length(vector) do 31 | hd(vector.dimensions) 32 | end 33 | 34 | def from_list(list, identity \\ 0) do 35 | Tensor.new(list, [Kernel.length(list)], 
identity) 36 | end 37 | 38 | def reverse(vector = %Tensor{dimensions: [l]}) do 39 | new_contents = 40 | for {i, v} <- vector.contents, into: %{} do 41 | {l-1 - i, v} 42 | end 43 | %Tensor{vector | contents: new_contents} 44 | end 45 | 46 | def dot_product(a = %Tensor{dimensions: [l]}, b = %Tensor{dimensions: [l]}) do 47 | products = 48 | for i <- 0..(l-1) do 49 | a[i] * b[i] 50 | end 51 | Enum.sum(products) 52 | end 53 | def dot_product(_a, _b), do: raise Tensor.ArithmeticError, "Two Vectors have to have the same length to be able to compute the dot product" 54 | 55 | 56 | @doc """ 57 | Returns the current identity of vector `vector`. 58 | """ 59 | defdelegate identity(vector), to: Tensor 60 | 61 | @doc """ 62 | `true` if `a` is a Vector. 63 | """ 64 | defdelegate vector?(a), to: Tensor 65 | 66 | @doc """ 67 | Returns the element at `index` from `vector`. 68 | """ 69 | defdelegate fetch(vector, index), to: Tensor 70 | 71 | @doc """ 72 | Returns the element at `index` from `vector`. If `index` is out of bounds, returns `default`. 
73 | """
74 | defdelegate get(vector, index, default), to: Tensor
75 | defdelegate pop(vector, index, default), to: Tensor
76 | defdelegate get_and_update(vector, index, function), to: Tensor
77 |
78 | defdelegate merge_with_index(vector_a, vector_b, function), to: Tensor
79 | defdelegate merge(vector_a, vector_b, function), to: Tensor
80 |
81 | defdelegate to_list(vector), to: Tensor
82 | defdelegate lift(vector), to: Tensor
83 |
84 | defdelegate map(vector, function), to: Tensor
85 | defdelegate with_coordinates(vector), to: Tensor
86 | defdelegate sparse_map_with_coordinates(vector, function), to: Tensor
87 | defdelegate dense_map_with_coordinates(vector, function), to: Tensor
88 |
89 |
90 | defdelegate add(a, b), to: Tensor
91 | defdelegate sub(a, b), to: Tensor
92 | defdelegate mult(a, b), to: Tensor
93 | defdelegate div(a, b), to: Tensor
94 |
95 | defdelegate add_number(a, b), to: Tensor
96 | defdelegate sub_number(a, b), to: Tensor
97 | defdelegate mult_number(a, b), to: Tensor
98 | defdelegate div_number(a, b), to: Tensor
99 |
100 | @doc """
101 | Elementwise addition of vectors `vector_a` and `vector_b`.
102 | """
103 | defdelegate add_vector(vector_a, vector_b), to: Tensor, as: :add_tensor
104 |
105 | @doc """
106 | Elementwise subtraction of `vector_b` from `vector_a`.
107 | """
108 | defdelegate sub_vector(vector_a, vector_b), to: Tensor, as: :sub_tensor
109 |
110 | @doc """
111 | Elementwise multiplication of `vector_a` with `vector_b`.
112 | """
113 | defdelegate mult_vector(vector_a, vector_b), to: Tensor, as: :mult_tensor
114 |
115 | @doc """
116 | Elementwise division of `vector_a` by `vector_b`.
117 | Make sure that the identity of `vector_b` isn't 0 before doing this.
118 | """ 119 | defdelegate div_vector(vector_a, vector_b), to: Tensor, as: :div_tensor 120 | 121 | end 122 | -------------------------------------------------------------------------------- /mix.exs: -------------------------------------------------------------------------------- 1 | defmodule Tensor.Mixfile do 2 | use Mix.Project 3 | 4 | def project do 5 | [app: :tensor, 6 | version: "2.1.2", 7 | elixir: "~> 1.3", 8 | build_embedded: Mix.env == :prod, 9 | start_permanent: Mix.env == :prod, 10 | description: description(), 11 | package: package(), 12 | deps: deps()] 13 | end 14 | 15 | # Configuration for the OTP application 16 | # 17 | # Type "mix help compile.app" for more information 18 | def application do 19 | [ 20 | applications: [ 21 | :logger, 22 | :numbers 23 | ] 24 | ] 25 | end 26 | 27 | # Dependencies can be Hex packages: 28 | # 29 | # {:mydep, "~> 0.3.0"} 30 | # 31 | # Or git/path repositories: 32 | # 33 | # {:mydep, git: "https://github.com/elixir-lang/mydep.git", tag: "0.1.0"} 34 | # 35 | # Type "mix help deps" for more examples and options 36 | defp deps do 37 | [ 38 | {:dialyxir, "~> 1.1", only: :dev}, 39 | {:ex_doc, ">= 0.14.0", only: :dev}, 40 | 41 | {:numbers, "~> 5.0"}, 42 | {:fun_land, "~> 0.10.0", optional: true}, 43 | {:extractable, "~> 0.2.0"}, 44 | {:insertable, "~> 0.2.0"}, 45 | ] 46 | end 47 | 48 | defp description do 49 | """ 50 | Tensor adds Vectors, Matrices and Tensors to your application. These are a lot faster than a list (of lists). 
51 | """ 52 | end 53 | 54 | defp package do 55 | [# These are the default files included in the package 56 | name: :tensor, 57 | files: ["lib", "mix.exs", "README*", "LICENSE*"], 58 | maintainers: ["Wiebe-Marten/Qqwy"], 59 | licenses: ["MIT"], 60 | links: %{"GitHub" => "https://github.com/qqwy/tensor", 61 | }] 62 | end 63 | end 64 | -------------------------------------------------------------------------------- /mix.lock: -------------------------------------------------------------------------------- 1 | %{ 2 | "benchfella": {:hex, :benchfella, "0.3.2", "b9648e77fa8d8b8b9fe8f54293bee63f7de03909b3af6ab22a0e546716a396fb", [:mix], []}, 3 | "benchwarmer": {:hex, :benchwarmer, "0.0.2", "902e5c020608647b07c38b82103e4af6d2667dfd5d5d13c67382238de6943136", [:mix], []}, 4 | "coerce": {:hex, :coerce, "1.0.1", "211c27386315dc2894ac11bc1f413a0e38505d808153367bd5c6e75a4003d096", [:mix], [], "hexpm", "b44a691700f7a1a15b4b7e2ff1fa30bebd669929ac8aa43cffe9e2f8bf051cf1"}, 5 | "currying": {:hex, :currying, "1.0.3", "6d036c86c38663a795858c9e744e406a62d7e5a66243a7da15326e731cdcd672", [:mix], [], "hexpm", "7b9b8cd644ecac635f0abf6267ca46d497e73144a5882449a843461a014f1154"}, 6 | "dialyxir": {:hex, :dialyxir, "1.1.0", "c5aab0d6e71e5522e77beff7ba9e08f8e02bad90dfbeffae60eaf0cb47e29488", [:mix], [{:erlex, ">= 0.2.6", [hex: :erlex, repo: "hexpm", optional: false]}], "hexpm", "07ea8e49c45f15264ebe6d5b93799d4dd56a44036cf42d0ad9c960bc266c0b9a"}, 7 | "earmark": {:hex, :earmark, "1.3.2", "b840562ea3d67795ffbb5bd88940b1bed0ed9fa32834915125ea7d02e35888a5", [:mix], [], "hexpm"}, 8 | "earmark_parser": {:hex, :earmark_parser, "1.4.15", "b29e8e729f4aa4a00436580dcc2c9c5c51890613457c193cc8525c388ccb2f06", [:mix], [], "hexpm", "044523d6438ea19c1b8ec877ec221b008661d3c27e3b848f4c879f500421ca5c"}, 9 | "erlex": {:hex, :erlex, "0.2.6", "c7987d15e899c7a2f34f5420d2a2ea0d659682c06ac607572df55a43753aa12e", [:mix], [], "hexpm", "2ed2e25711feb44d52b17d2780eabf998452f6efda104877a3881c2f8c0c0c75"}, 10 | "ex_doc": 
{:hex, :ex_doc, "0.25.3", "3edf6a0d70a39d2eafde030b8895501b1c93692effcbd21347296c18e47618ce", [:mix], [{:earmark_parser, "~> 1.4.0", [hex: :earmark_parser, repo: "hexpm", optional: false]}, {:makeup_elixir, "~> 0.14", [hex: :makeup_elixir, repo: "hexpm", optional: false]}, {:makeup_erlang, "~> 0.1", [hex: :makeup_erlang, repo: "hexpm", optional: false]}], "hexpm", "9ebebc2169ec732a38e9e779fd0418c9189b3ca93f4a676c961be6c1527913f5"}, 11 | "extractable": {:hex, :extractable, "0.2.1", "cf32f0cf2328c073505be285fedecbc984a4f5fec300370dc9c6125bf4d99975", [:mix], [], "hexpm", "42532209510e365c3c9a56c33a2b5448ab1645c0ae124f261162f2c166061019"}, 12 | "fun_land": {:hex, :fun_land, "0.10.0", "edd9cc1907961048c7e87c85f9868956d22857557d2d52ee8e0d3997587e33a8", [:mix], [{:currying, "~> 1.0", [hex: :currying, repo: "hexpm", optional: false]}, {:numbers, "~> 5.0", [hex: :numbers, repo: "hexpm", optional: false]}], "hexpm", "0b52048e9de50b65805a7ed9699562fb16d9f253b5c708329b6d8e45eb3d5eb7"}, 13 | "insertable": {:hex, :insertable, "0.2.0", "f9ef5e484d1cc0756f1d248a54466aa6388142a81df18f588158e3eda2501395", [:mix], [], "hexpm", "518a8b5870344c784dec4560a296e3ef539cd08e58df7f425485bb6158ab70d6"}, 14 | "makeup": {:hex, :makeup, "1.0.5", "d5a830bc42c9800ce07dd97fa94669dfb93d3bf5fcf6ea7a0c67b2e0e4a7f26c", [:mix], [{:nimble_parsec, "~> 0.5 or ~> 1.0", [hex: :nimble_parsec, repo: "hexpm", optional: false]}], "hexpm", "cfa158c02d3f5c0c665d0af11512fed3fba0144cf1aadee0f2ce17747fba2ca9"}, 15 | "makeup_elixir": {:hex, :makeup_elixir, "0.15.1", "b5888c880d17d1cc3e598f05cdb5b5a91b7b17ac4eaf5f297cb697663a1094dd", [:mix], [{:makeup, "~> 1.0", [hex: :makeup, repo: "hexpm", optional: false]}, {:nimble_parsec, "~> 1.1", [hex: :nimble_parsec, repo: "hexpm", optional: false]}], "hexpm", "db68c173234b07ab2a07f645a5acdc117b9f99d69ebf521821d89690ae6c6ec8"}, 16 | "makeup_erlang": {:hex, :makeup_erlang, "0.1.1", "3fcb7f09eb9d98dc4d208f49cc955a34218fc41ff6b84df7c75b3e6e533cc65f", [:mix], [{:makeup, "~> 1.0", 
[hex: :makeup, repo: "hexpm", optional: false]}], "hexpm", "174d0809e98a4ef0b3309256cbf97101c6ec01c4ab0b23e926a9e17df2077cbb"},
17 | "nimble_parsec": {:hex, :nimble_parsec, "1.1.0", "3a6fca1550363552e54c216debb6a9e95bd8d32348938e13de5eda962c0d7f89", [:mix], [], "hexpm", "08eb32d66b706e913ff748f11694b17981c0b04a33ef470e33e11b3d3ac8f54b"},
18 | "numbers": {:hex, :numbers, "5.2.4", "f123d5bb7f6acc366f8f445e10a32bd403c8469bdbce8ce049e1f0972b607080", [:mix], [{:coerce, "~> 1.0", [hex: :coerce, repo: "hexpm", optional: false]}, {:decimal, "~> 1.9 or ~> 2.0", [hex: :decimal, repo: "hexpm", optional: true]}], "hexpm", "eeccf5c61d5f4922198395bf87a465b6f980b8b862dd22d28198c5e6fab38582"},
19 | }
20 |
--------------------------------------------------------------------------------
/test/matrix_test.exs:
--------------------------------------------------------------------------------
1 | defmodule MatrixTest do
2 | use ExUnit.Case
3 | use Tensor
4 | doctest Matrix
5 |
6 | test "the truth" do
7 | assert 1 + 1 == 2
8 | end
9 |
10 | test "Inspect" do
11 | matrix = Matrix.new([[1,2],[3,4]], 2,2)
12 | assert Inspect.inspect(matrix, []) == """
13 | #Matrix<(2×2)
14 | ┌ ┐
15 | │ 1, 2│
16 | │ 3, 4│
17 | └ ┘
18 | >
19 | """
20 | end
21 |
22 | test "identity_matrix" do
23 | assert inspect(Matrix.identity_matrix(3)) == """
24 | #Matrix<(3×3)
25 | ┌ ┐
26 | │ 1, 0, 0│
27 | │ 0, 1, 0│
28 | │ 0, 0, 1│
29 | └ ┘
30 | >
31 | """
32 | end
33 |
34 | test "transpose |> transpose is the same as original" do
35 | matrix = Matrix.new([[1,2],[3,4]], 2,2)
36 | assert matrix |> Matrix.transpose |> Matrix.transpose == matrix
37 | end
38 |
39 | test "Scalar Addition" do
40 | matrix = Matrix.new([[1,2],[3,4]], 2,2)
41 | result = Matrix.new([[3,4],[5,6]], 2,2, 2)
42 |
43 | assert Matrix.add(matrix, 2) == result
44 | end
45 |
46 | test "scalar addition is commutative with transposition" do
47 | matrix = Matrix.new([[1,2],[3,4]], 2,2)
48 |
49 | assert matrix |> Matrix.transpose |> Matrix.add(2) == matrix |>
Matrix.add(2) |> Matrix.transpose
50 | end
51 |
52 | test "Matrix Multiplication" do
53 | m1 = Matrix.new([[2,3,4],[1,0,0]], 2,3)
54 | m2 = Matrix.new([[0,1000],[1,100],[0,10]], 3,2)
55 | assert Matrix.product(m1, m2) |> inspect == """
56 | #Matrix<(2×2)
57 | ┌ ┐
58 | │ 3, 2340│
59 | │ 0, 1000│
60 | └ ┘
61 | >
62 | """
63 | end
64 |
65 | test "matrix product with the identity matrix results in same matrix" do
66 | m1 = Matrix.new([[2,3,4],[1,0,0]], 2,3)
67 | mid = Matrix.identity_matrix(3)
68 |
69 | assert Matrix.product(m1, mid) == m1
70 | end
71 |
72 |
73 | test "chess" do
74 |
75 | board_as_list =
76 | [
77 | ["♜","♞","♝","♛","♚","♝","♞","♜"],
78 | ["♟","♟","♟","♟","♟","♟","♟","♟"],
79 | [" "," "," "," "," "," "," "," "],
80 | [" "," "," "," "," "," "," "," "],
81 | [" "," "," "," "," "," "," "," "],
82 | [" "," "," "," "," "," "," "," "],
83 | ["♙","♙","♙","♙","♙","♙","♙","♙"],
84 | ["♖","♘","♗","♕","♔","♗","♘","♖"]
85 | ]
86 | matrix = Matrix.new(board_as_list, 8,8)
87 | assert inspect(matrix) ==
88 | """
89 | #Matrix<(8×8)
90 | ┌ ┐
91 | │ "♜", "♞", "♝", "♛", "♚", "♝", "♞", "♜"│
92 | │ "♟", "♟", "♟", "♟", "♟", "♟", "♟", "♟"│
93 | │ " ", " ", " ", " ", " ", " ", " ", " "│
94 | │ " ", " ", " ", " ", " ", " ", " ", " "│
95 | │ " ", " ", " ", " ", " ", " ", " ", " "│
96 | │ " ", " ", " ", " ", " ", " ", " ", " "│
97 | │ "♙", "♙", "♙", "♙", "♙", "♙", "♙", "♙"│
98 | │ "♖", "♘", "♗", "♕", "♔", "♗", "♘", "♖"│
99 | └ ┘
100 | >
101 | """
102 |
103 | end
104 |
105 | end
106 |
--------------------------------------------------------------------------------
/test/tensor_test.exs:
--------------------------------------------------------------------------------
1 | defmodule TensorTest do
2 | use ExUnit.Case
3 | use Tensor
4 | doctest Tensor
5 |
6 | test "the truth" do
7 | assert 1 + 1 == 2
8 | end
9 |
10 | test "transpose" do
11 | t3 = Tensor.new([[[1,2],[3,4]],[[5,6],[7,8]]], [2,2,2])
12 | t3_transpose_1 = Tensor.new([[[1,2],[5,6]],[[3,4],[7,8]]], [2,2,2])
13 |
t3_transpose_2 = Tensor.new([[[1,5],[3,7]],[[2,6],[4,8]]], [2,2,2]) 14 | 15 | assert Tensor.transpose(t3, 1) == t3_transpose_1 16 | assert Tensor.transpose(t3, 2) == t3_transpose_2 17 | end 18 | 19 | test "tensor merge" do 20 | prefixes = Tensor.new(["foo", "bar"], [2], "") 21 | postfixes = Tensor.new(["baz", "qux"], [2], "") 22 | 23 | assert Tensor.merge(prefixes, postfixes, fn a, b -> a <> b end) == Tensor.new(["foobaz", "barqux"], [2], "") 24 | end 25 | 26 | test "merging tensors only works when same dimensions" do 27 | prefixes = Tensor.new(["foo", "bar"], [2], "") 28 | postfixes = Tensor.new(["baz"], [1], "") 29 | assert_raise(Tensor.DimensionsDoNotMatchError, fn -> Tensor.merge(prefixes, postfixes, fn a, b -> a <> b end) end) 30 | end 31 | 32 | test "Tensor.add_tensor" do 33 | mat = Matrix.new([[1,2],[3,4]], 2, 2) 34 | mat2 = Matrix.new([[1,1],[1,1]], 2, 2) 35 | assert Tensor.add_tensor(mat, mat2) == Matrix.new([[2,3],[4,5]], 2, 2) 36 | end 37 | 38 | test "Tensor.sub_tensor" do 39 | mat = Matrix.new([[1,2],[3,4]], 2, 2) 40 | mat2 = Matrix.new([[1,1],[1,1]], 2, 2) 41 | assert Tensor.sub_tensor(mat, mat2) == Matrix.new([[0,1],[2,3]], 2, 2) 42 | end 43 | 44 | test "Tensor.mult_tensor" do 45 | mat = Matrix.new([[1,2],[3,4]], 2, 2) 46 | assert Tensor.mult_tensor(mat, mat) == Matrix.new([[1,4],[9,16]], 2, 2) 47 | end 48 | 49 | test "Tensor.div_tensor" do 50 | mat = Matrix.new([[1,2],[3,4]], 2, 2, 1) 51 | assert Tensor.div_tensor(mat, mat) == Matrix.new([[1.0,1.0],[1.0,1.0]], 2, 2, 1.0) 52 | end 53 | 54 | 55 | 56 | 57 | test "map changes identity" do 58 | mat = Matrix.new([[1,2],[3,4]],2,2,3) 59 | mat2 = Tensor.map(mat, fn x -> x*x end) 60 | assert mat2.identity == 9 61 | end 62 | 63 | test "map removes values that have new identity" do 64 | mat = Matrix.new([[1,2],[3,4]],2,2,3) 65 | mat2 = Tensor.map(mat, fn x -> x*x end) 66 | assert mat2.contents == %{0 => %{0 => 1, 1 => 4}, 1 => %{1 => 16}} 67 | 68 | mat2b = Tensor.map(mat, fn _x -> 1 end) 69 | assert 
mat2b.identity == 1 70 | assert mat2b.contents == %{0 => %{}, 1 => %{}} 71 | 72 | end 73 | 74 | test "sparse_map_with_coordinates changes identity" do 75 | mat = Matrix.new([[1,2],[3,4]],2,2,3) 76 | mat2 = Tensor.sparse_map_with_coordinates(mat, fn {_coords, x} -> x*x end) 77 | assert mat2.identity == 9 78 | end 79 | 80 | test "sparse_map_with_coordinates removes values that have new identity" do 81 | mat = Matrix.new([[1,2],[3,4]],2,2,3) 82 | mat2 = Tensor.sparse_map_with_coordinates(mat, fn {_coords, x} -> x*x end) 83 | assert mat2.contents == %{0 => %{0 => 1, 1 => 4}, 1 => %{1 => 16}} 84 | 85 | mat2b = Tensor.sparse_map_with_coordinates(mat, fn {_coords, _x} -> 1 end) 86 | assert mat2b.identity == 1 87 | assert mat2b.contents == %{0 => %{}, 1 => %{}} 88 | 89 | end 90 | 91 | test "Extractable vector" do 92 | x = Vector.new([1,2,3,4]) 93 | {:ok, {item, x}} = Extractable.extract(x) 94 | assert item == 4 95 | {:ok, {item, x}} = Extractable.extract(x) 96 | assert item == 3 97 | {:ok, {item, x}} = Extractable.extract(x) 98 | assert item == 2 99 | {:ok, {item, x}} = Extractable.extract(x) 100 | assert item == 1 101 | assert Extractable.extract(x) == {:error, :empty} 102 | end 103 | 104 | test "Extractable matrix" do 105 | x = Matrix.new([[1,2],[3,4]], 2, 2) 106 | {:ok, {item, x}} = Extractable.extract(x) 107 | assert item == Vector.new([3,4]) 108 | {:ok, {item, x}} = Extractable.extract(x) 109 | assert item == Vector.new([1,2]) 110 | assert Extractable.extract(x) == {:error, :empty} 111 | end 112 | 113 | test "Insertable vector" do 114 | x = Vector.new() 115 | {:ok, x} = Insertable.insert(x, 10) 116 | {:ok, x} = Insertable.insert(x, 20) 117 | {:ok, x} = Insertable.insert(x, 30) 118 | assert x == Vector.new([10, 20, 30]) 119 | end 120 | 121 | test "Insertable matrix" do 122 | x = Matrix.new(0, 2) 123 | {:ok, x} = Insertable.insert(x, Vector.new([1, 2])) 124 | {:ok, x} = Insertable.insert(x, Vector.new([3, 4])) 125 | assert x == Matrix.new([[1,2],[3,4]], 2, 2) 126 | end 
127 | 128 | test "Reducable vector" do 129 | x = Vector.new([1,2,3,4]) 130 | assert FunLand.Reducable.reduce(x, 0, fn x, acc -> acc + x end) == 10 131 | end 132 | 133 | test "Tensor Access get" do 134 | mat = Matrix.new([[1,0],[0,4]],2,2,0) 135 | assert mat.contents == %{0 => %{0 => 1}, 1 => %{1 => 4}} 136 | assert mat[0][0] == 1 137 | assert mat[0][1] == 0 138 | end 139 | 140 | test "Tensor Access put_in non-identity" do 141 | mat = Matrix.new([[1,0],[0,4]],2,2,0) 142 | assert mat.contents == %{0 => %{0 => 1}, 1 => %{1 => 4}} 143 | mat2 = put_in(mat[0][0], 6) 144 | assert mat2.contents == %{0 => %{0 => 6}, 1 => %{1 => 4}} 145 | end 146 | 147 | test "Tensor Access put_in removes identity value" do 148 | mat = Matrix.new([[1,0],[0,4]],2,2,0) 149 | assert mat.contents == %{0 => %{0 => 1}, 1 => %{1 => 4}} 150 | mat2 = put_in(mat[0][0], 0) 151 | assert mat2.contents == %{0 => %{}, 1 => %{1 => 4}} 152 | end 153 | 154 | test "Tensor Access update_in non-identity" do 155 | mat = Matrix.new([[1,0],[0,4]],2,2,0) 156 | assert mat.contents == %{0 => %{0 => 1}, 1 => %{1 => 4}} 157 | mat2 = update_in(mat[0][0], fn 1 -> 6; _ -> :bad end) 158 | assert mat2.contents == %{0 => %{0 => 6}, 1 => %{1 => 4}} 159 | end 160 | 161 | test "Tensor Access update_in non-identity to nil" do 162 | mat = Matrix.new([[1,0],[0,4]],2,2,0) 163 | assert mat.contents == %{0 => %{0 => 1}, 1 => %{1 => 4}} 164 | mat2 = update_in(mat[0][0], fn 1 -> nil; _ -> :bad end) 165 | assert mat2.contents == %{0 => %{0 => nil}, 1 => %{1 => 4}} 166 | end 167 | 168 | test "Tensor Access update_in removes identity value" do 169 | mat = Matrix.new([[1,0],[0,4]],2,2,0) 170 | assert mat.contents == %{0 => %{0 => 1}, 1 => %{1 => 4}} 171 | mat2 = update_in(mat[0][0], fn 1 -> 0; _ -> :bad end) 172 | assert mat2.contents == %{0 => %{}, 1 => %{1 => 4}} 173 | end 174 | 175 | test "Tensor Access update_in updates identity value" do 176 | mat = Matrix.new([[1,0],[0,4]],2,2,0) 177 | assert mat.contents == %{0 => %{0 => 1}, 1 => %{1 
=> 4}} 178 | mat2 = update_in(mat[0][1], fn 0 -> 6; _ -> :bad end) 179 | assert mat2.contents == %{0 => %{0 => 1, 1 => 6}, 1 => %{1 => 4}} 180 | end 181 | end 182 | -------------------------------------------------------------------------------- /test/test_helper.exs: -------------------------------------------------------------------------------- 1 | ExUnit.start() 2 | --------------------------------------------------------------------------------