├── .gitignore ├── .tool-versions ├── CONTRIBUTING.md ├── README.md ├── code └── general │ ├── comparing_strings_vs_atoms.exs │ ├── concat_vs_cons.exs │ ├── ets_vs_gen_server.exs │ ├── ets_vs_gen_server_write.exs │ ├── filter_map.exs │ ├── filtering_maps.exs │ ├── into_vs_map_into.exs │ ├── io_lists_vs_concatenation.exs │ ├── map_lookup_vs_pattern_matching.exs │ ├── map_put_vs_put_in.exs │ ├── send_after_vs_apply_after.exs │ ├── sort_vs_sort_by.exs │ ├── spawn_vs_spawn_link.exs │ ├── string_slice.exs │ └── string_split_large_strings.exs ├── config └── config.exs ├── mix.exs └── mix.lock /.gitignore: -------------------------------------------------------------------------------- 1 | _build/**/* 2 | .DS_Store 3 | deps/**/* -------------------------------------------------------------------------------- /.tool-versions: -------------------------------------------------------------------------------- 1 | erlang 23.0.2 2 | elixir 1.11.0-rc.0-otp-23 3 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing to fast-elixir 2 | 3 | Thank you for contributing! Let's get you started! 4 | 5 | ## Adding a benchmark 6 | 7 | 1. Fork and clone the repository 8 | 9 | 2. Install dependencies: `mix deps.get` 10 | 11 | 3. Write a benchmark using the following template: 12 | 13 | ```elixir 14 | defmodule IdiomName.Fast do 15 | def function_name do 16 | end 17 | end 18 | 19 | defmodule IdiomName.Slow do 20 | def function_name do 21 | end 22 | end 23 | 24 | defmodule IdiomName.Benchmark do 25 | def benchmark do 26 | Benchee.run(%{ 27 | "Idiom Name Fast" => fn -> bench(IdiomName.Fast) end, 28 | "Idiom Name Slow" => fn -> bench(IdiomName.Slow) end 29 | }, time: 10, print: [fast_warning: false]) 30 | end 31 | 32 | defp bench(module) do 33 | module.function_name() 34 | end 35 | end 36 | 37 | IdiomName.Benchmark.benchmark() 38 | ``` 39 | 40 | 4. 
Run your benchmark: `mix run code//.exs` 41 | 42 | 5. Add the output along with a description to the [README](README.md). 43 | 44 | 6. Open a Pull Request! 45 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Fast Elixir 2 | 3 | There is a wonderful project in Ruby called [fast-ruby](https://github.com/JuanitoFatas/fast-ruby), from which I got the inspiration for this repo. The idea is to collect various idioms for writing performant code when there is more than one _essentially_ semantically identical way of computing something. There may be slight differences, so please be sure that any change you make doesn't alter the correctness of your program. 4 | 5 | Each idiom has a corresponding code example that resides in [code](code). 6 | 7 | **Let's write faster code, together! <3** 8 | 9 | ## Measurement Tool 10 | 11 | We use [benchee](https://github.com/PragTob/benchee). 12 | 13 | ## Contributing 14 | 15 | Help us collect benchmarks! Please [read the contributing guide](CONTRIBUTING.md). 16 | 17 | ## Idioms 18 | 19 | - [Map Lookup vs. Pattern Matching Lookup](#map-lookup-vs-pattern-matching-lookup-code) 20 | - [IO Lists vs. String Concatenation](#io-lists-vs-string-concatenation-code) 21 | - [Combining lists with `|` vs. `++`](#combining-lists-with--vs--code) 22 | - [Putting into maps with `Map.put` and `put_in`](#putting-into-maps-with-mapput-and-put_in-code) 23 | - [Splitting Strings](#splitting-large-strings-code) 24 | - [`sort` vs. `sort_by`](#sort-vs-sort_by-code) 25 | - [Retrieving state from ets tables vs. Gen Servers](#retrieving-state-from-ets-tables-vs-gen-servers-code) 26 | - [Writing state in ets tables, persistent_term and Gen Servers](#writing-state-from-ets-tables-persistent-term-gen-servers-code) 27 | - [Comparing strings vs. atoms](#comparing-strings-vs-atoms-code) 28 | - [spawn vs. 
spawn_link](#spawn-vs-spawn_link-code) 29 | - [Replacements for Enum.filter_map/3](#replacements-for-enumfilter_map3-code) 30 | - [Filtering maps](#filtering-maps-code) 31 | 32 | #### Map Lookup vs. Pattern Matching Lookup [code](code/general/map_lookup_vs_pattern_matching.exs) 33 | 34 | If you need to look up static values in a key-value based structure, you might at 35 | first consider assigning a map as a module attribute and looking that up. 36 | However, it's significantly faster to use pattern matching to define functions 37 | that behave like a key-value based data structure. 38 | 39 | ``` 40 | $ mix run code/general/map_lookup_vs_pattern_matching.exs 41 | Operating System: macOS 42 | CPU Information: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz 43 | Number of Available Cores: 16 44 | Available memory: 16 GB 45 | Elixir 1.11.0-rc.0 46 | Erlang 23.0.2 47 | 48 | Benchmark suite executing with the following configuration: 49 | warmup: 2 s 50 | time: 10 s 51 | memory time: 0 ns 52 | parallel: 1 53 | inputs: none specified 54 | Estimated total run time: 24 s 55 | 56 | Benchmarking Map Lookup... 57 | Benchmarking Pattern Matching... 58 | 59 | Name ips average deviation median 99th % 60 | Pattern Matching 909.12 K 1.10 μs ±3606.70% 1 μs 2 μs 61 | Map Lookup 792.96 K 1.26 μs ±532.10% 1 μs 2 μs 62 | 63 | Comparison: 64 | Pattern Matching 909.12 K 65 | Map Lookup 792.96 K - 1.15x slower +0.161 μs 66 | ``` 67 | 68 | #### IO Lists vs. String Concatenation [code](code/general/io_lists_vs_concatenation.exs) 69 | 70 | Chances are, eventually you'll need to concatenate strings for some sort of 71 | output. This could be in a web response, a CLI output, or writing to a file. The 72 | faster way to do this is to use IO Lists rather than string concatenation or 73 | interpolation. 
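As a minimal sketch of the difference (the `parts` list here is illustrative): concatenation copies the accumulated binary on every step, while an IO list just nests the existing binaries.

```elixir
parts = ["Hello", ", ", "world", "!"]

# Concatenation copies the accumulated binary on every step.
concatenated = Enum.reduce(parts, "", fn part, acc -> acc <> part end)

# An IO list only wraps the existing binaries in a nested list; nothing
# is copied until the data is finally written or converted.
io_list = Enum.reduce(parts, [], fn part, acc -> [acc, part] end)

# IO.puts/1, File.write/2, and sockets accept iodata directly, so this
# conversion is only needed when an actual binary is required.
binary = IO.iodata_to_binary(io_list)
```

Because most output functions accept iodata, in practice the IO list often never needs to be converted at all.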
74 | 75 | ``` 76 | $ mix run code/general/io_lists_vs_concatenation.exs 77 | Operating System: macOS 78 | CPU Information: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz 79 | Number of Available Cores: 16 80 | Available memory: 16 GB 81 | Elixir 1.11.0-rc.0 82 | Erlang 23.0.2 83 | 84 | Benchmark suite executing with the following configuration: 85 | warmup: 2 s 86 | time: 10 s 87 | memory time: 0 ns 88 | parallel: 1 89 | inputs: 100 3-character strings, 100 300-character strings, 5 3-character_strings, 5 300-character_strings, 50 3-character strings, 50 300-character strings 90 | Estimated total run time: 2.40 min 91 | 92 | Benchmarking IO List with input 100 3-character strings... 93 | Benchmarking IO List with input 100 300-character strings... 94 | Benchmarking IO List with input 5 3-character_strings... 95 | Benchmarking IO List with input 5 300-character_strings... 96 | Benchmarking IO List with input 50 3-character strings... 97 | Benchmarking IO List with input 50 300-character strings... 98 | Benchmarking Interpolation with input 100 3-character strings... 99 | Benchmarking Interpolation with input 100 300-character strings... 100 | Benchmarking Interpolation with input 5 3-character_strings... 101 | Benchmarking Interpolation with input 5 300-character_strings... 102 | Benchmarking Interpolation with input 50 3-character strings... 103 | Benchmarking Interpolation with input 50 300-character strings... 
104 | 105 | ##### With input 100 3-character strings ##### 106 | Name ips average deviation median 99th % 107 | IO List 1.41 M 0.71 μs ±4475.40% 1 μs 2 μs 108 | Interpolation 0.31 M 3.27 μs ±76.91% 3 μs 11 μs 109 | 110 | Comparison: 111 | IO List 1.41 M 112 | Interpolation 0.31 M - 4.61x slower +2.56 μs 113 | 114 | ##### With input 100 300-character strings ##### 115 | Name ips average deviation median 99th % 116 | IO List 1.40 M 0.71 μs ±4411.36% 1 μs 1 μs 117 | Interpolation 0.20 M 4.90 μs ±248.22% 4 μs 22 μs 118 | 119 | Comparison: 120 | IO List 1.40 M 121 | Interpolation 0.20 M - 6.86x slower +4.18 μs 122 | 123 | ##### With input 5 3-character_strings ##### 124 | Name ips average deviation median 99th % 125 | IO List 5.15 M 194.15 ns ±2555.27% 0 ns 1000 ns 126 | Interpolation 1.84 M 544.12 ns ±4764.73% 0 ns 2000 ns 127 | 128 | Comparison: 129 | IO List 5.15 M 130 | Interpolation 1.84 M - 2.80x slower +349.96 ns 131 | 132 | ##### With input 5 300-character_strings ##### 133 | Name ips average deviation median 99th % 134 | IO List 5.03 M 198.76 ns ±4663.45% 0 ns 1000 ns 135 | Interpolation 1.92 M 521.81 ns ±193.09% 0 ns 1000 ns 136 | 137 | Comparison: 138 | IO List 5.03 M 139 | Interpolation 1.92 M - 2.63x slower +323.05 ns 140 | 141 | ##### With input 50 3-character strings ##### 142 | Name ips average deviation median 99th % 143 | IO List 1.94 M 0.52 μs ±6397.19% 0 μs 2 μs 144 | Interpolation 0.57 M 1.75 μs ±130.98% 2 μs 2 μs 145 | 146 | Comparison: 147 | IO List 1.94 M 148 | Interpolation 0.57 M - 3.40x slower +1.24 μs 149 | 150 | ##### With input 50 300-character strings ##### 151 | Name ips average deviation median 99th % 152 | IO List 2.06 M 0.49 μs ±8825.39% 0 μs 2 μs 153 | Interpolation 0.37 M 2.71 μs ±657.41% 2 μs 14 μs 154 | 155 | Comparison: 156 | IO List 2.06 M 157 | Interpolation 0.37 M - 5.58x slower +2.22 μs 158 | ``` 159 | 160 | #### Combining lists with `|` vs. 
`++` [code](code/general/concat_vs_cons.exs) 161 | 162 | Adding two lists together might seem like a simple problem to solve, but in 163 | Elixir there are a couple of ways to do it. We can use `++` to 164 | concatenate two lists easily: `[1, 2] ++ [3, 4] #=> [1, 2, 3, 4]`, but the 165 | problem with that approach is that once you start dealing with larger lists it 166 | becomes **VERY** slow! Because of this, when combining two lists, you should try 167 | to use the cons operator (`|`) whenever possible. This will require you to 168 | remember to flatten the resulting nested list, but it's a huge performance 169 | optimization on larger lists. 170 | 171 | ``` 172 | $ mix run ./code/general/concat_vs_cons.exs 173 | Operating System: macOS 174 | CPU Information: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz 175 | Number of Available Cores: 16 176 | Available memory: 16 GB 177 | Elixir 1.11.0-rc.0 178 | Erlang 23.0.2 179 | 180 | Benchmark suite executing with the following configuration: 181 | warmup: 2 s 182 | time: 10 s 183 | memory time: 0 ns 184 | parallel: 1 185 | inputs: 1,000 large items, 1,000 small items, 10 large items, 10 small items, 100 large items, 100 small items 186 | Estimated total run time: 3.60 min 187 | 188 | Benchmarking Concatenation with input 1,000 large items... 189 | Benchmarking Concatenation with input 1,000 small items... 190 | Benchmarking Concatenation with input 10 large items... 191 | Benchmarking Concatenation with input 10 small items... 192 | Benchmarking Concatenation with input 100 large items... 193 | Benchmarking Concatenation with input 100 small items... 194 | Benchmarking Cons + Flatten with input 1,000 large items... 195 | Benchmarking Cons + Flatten with input 1,000 small items... 196 | Benchmarking Cons + Flatten with input 10 large items... 197 | Benchmarking Cons + Flatten with input 10 small items... 198 | Benchmarking Cons + Flatten with input 100 large items... 
199 | Benchmarking Cons + Flatten with input 100 small items... 200 | Benchmarking Cons + Reverse + Flatten with input 1,000 large items... 201 | Benchmarking Cons + Reverse + Flatten with input 1,000 small items... 202 | Benchmarking Cons + Reverse + Flatten with input 10 large items... 203 | Benchmarking Cons + Reverse + Flatten with input 10 small items... 204 | Benchmarking Cons + Reverse + Flatten with input 100 large items... 205 | Benchmarking Cons + Reverse + Flatten with input 100 small items... 206 | 207 | ##### With input 1,000 large items ##### 208 | Name ips average deviation median 99th % 209 | Cons + Reverse + Flatten 38.45 26.01 ms ±6.11% 25.91 ms 30.56 ms 210 | Cons + Flatten 38.38 26.06 ms ±6.39% 26.06 ms 29.32 ms 211 | Concatenation 0.179 5573.57 ms ±0.26% 5573.57 ms 5583.94 ms 212 | 213 | Comparison: 214 | Cons + Reverse + Flatten 38.45 215 | Cons + Flatten 38.38 - 1.00x slower +0.0501 ms 216 | Concatenation 0.179 - 214.32x slower +5547.56 ms 217 | 218 | ##### With input 1,000 small items ##### 219 | Name ips average deviation median 99th % 220 | Cons + Reverse + Flatten 3.78 K 264.27 μs ±19.49% 243 μs 496 μs 221 | Cons + Flatten 3.76 K 266.16 μs ±18.53% 246 μs 491.83 μs 222 | Concatenation 0.0626 K 15984.51 μs ±8.58% 15927 μs 20412.82 μs 223 | 224 | Comparison: 225 | Cons + Reverse + Flatten 3.78 K 226 | Cons + Flatten 3.76 K - 1.01x slower +1.90 μs 227 | Concatenation 0.0626 K - 60.49x slower +15720.24 μs 228 | 229 | ##### With input 10 large items ##### 230 | Name ips average deviation median 99th % 231 | Concatenation 8.33 K 120.04 μs ±31.79% 111 μs 268 μs 232 | Cons + Flatten 5.12 K 195.17 μs ±20.09% 181 μs 378 μs 233 | Cons + Reverse + Flatten 5.11 K 195.88 μs ±20.32% 181 μs 378 μs 234 | 235 | Comparison: 236 | Concatenation 8.33 K 237 | Cons + Flatten 5.12 K - 1.63x slower +75.13 μs 238 | Cons + Reverse + Flatten 5.11 K - 1.63x slower +75.85 μs 239 | 240 | ##### With input 10 small items ##### 241 | Name ips average deviation median 99th 
% 242 | Concatenation 575.41 K 1.74 μs ±1951.31% 1 μs 4 μs 243 | Cons + Flatten 331.62 K 3.02 μs ±972.07% 3 μs 7 μs 244 | Cons + Reverse + Flatten 330.05 K 3.03 μs ±853.79% 3 μs 8 μs 245 | 246 | Comparison: 247 | Concatenation 575.41 K 248 | Cons + Flatten 331.62 K - 1.74x slower +1.28 μs 249 | Cons + Reverse + Flatten 330.05 K - 1.74x slower +1.29 μs 250 | 251 | ##### With input 100 large items ##### 252 | Name ips average deviation median 99th % 253 | Cons + Reverse + Flatten 38.56 25.93 ms ±6.25% 25.85 ms 32.02 ms 254 | Cons + Flatten 38.35 26.08 ms ±6.30% 26.04 ms 30.68 ms 255 | Concatenation 0.180 5561.40 ms ±0.41% 5561.40 ms 5577.71 ms 256 | 257 | Comparison: 258 | Cons + Reverse + Flatten 38.56 259 | Cons + Flatten 38.35 - 1.01x slower +0.145 ms 260 | Concatenation 0.180 - 214.47x slower +5535.47 ms 261 | 262 | ##### With input 100 small items ##### 263 | Name ips average deviation median 99th % 264 | Cons + Flatten 38.68 K 25.85 μs ±32.87% 24 μs 69 μs 265 | Cons + Reverse + Flatten 38.23 K 26.16 μs ±39.65% 24 μs 70 μs 266 | Concatenation 4.33 K 230.99 μs ±50.47% 213 μs 590.06 μs 267 | 268 | Comparison: 269 | Cons + Flatten 38.68 K 270 | Cons + Reverse + Flatten 38.23 K - 1.01x slower +0.31 μs 271 | Concatenation 4.33 K - 8.94x slower +205.13 μs 272 | ``` 273 | 274 | #### Putting into maps with `Map.put` and `put_in` [code](code/general/map_put_vs_put_in.exs) 275 | 276 | Do not put data into the root of a map with `put_in`; it is roughly 2x slower than `Map.put/3`. Also, `put_in/2` 277 | is more efficient than `put_in/3`. 
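A quick sketch of the three calls on a toy map (the map and key names here are illustrative):

```elixir
map = %{name: "fast-elixir", stars: 0}

# Fastest for a top-level key: a plain Map.put/3.
map_put = Map.put(map, :stars, 100)

# put_in/2 takes a compile-time path, but still adds some overhead
# at the root of a map.
put_in_2 = put_in(map.stars, 100)

# put_in/3 walks the path at runtime via Access and is the slowest
# of the three.
put_in_3 = put_in(map, [:stars], 100)
```

All three produce the same map; `put_in` only earns its overhead once the path is actually nested.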
278 | 279 | ``` 280 | $ mix run ./code/general/map_put_vs_put_in.exs 281 | Operating System: macOS 282 | CPU Information: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz 283 | Number of Available Cores: 16 284 | Available memory: 16 GB 285 | Elixir 1.11.0-rc.0 286 | Erlang 23.0.2 287 | 288 | Benchmark suite executing with the following configuration: 289 | warmup: 2 s 290 | time: 10 s 291 | memory time: 0 ns 292 | parallel: 1 293 | inputs: Large (30,000 items), Medium (3,000 items), Small (30 items) 294 | Estimated total run time: 1.80 min 295 | 296 | Benchmarking Map.put/3 with input Large (30,000 items)... 297 | Benchmarking Map.put/3 with input Medium (3,000 items)... 298 | Benchmarking Map.put/3 with input Small (30 items)... 299 | Benchmarking put_in/2 with input Large (30,000 items)... 300 | Benchmarking put_in/2 with input Medium (3,000 items)... 301 | Benchmarking put_in/2 with input Small (30 items)... 302 | Benchmarking put_in/3 with input Large (30,000 items)... 303 | Benchmarking put_in/3 with input Medium (3,000 items)... 304 | Benchmarking put_in/3 with input Small (30 items)... 
305 | 306 | ##### With input Large (30,000 items) ##### 307 | Name ips average deviation median 99th % 308 | Map.put/3 247.43 4.04 ms ±10.45% 3.97 ms 5.41 ms 309 | put_in/2 242.10 4.13 ms ±12.48% 4.01 ms 5.74 ms 310 | put_in/3 221.53 4.51 ms ±11.11% 4.41 ms 6.13 ms 311 | 312 | Comparison: 313 | Map.put/3 247.43 314 | put_in/2 242.10 - 1.02x slower +0.0888 ms 315 | put_in/3 221.53 - 1.12x slower +0.47 ms 316 | 317 | ##### With input Medium (3,000 items) ##### 318 | Name ips average deviation median 99th % 319 | Map.put/3 5.68 K 175.98 μs ±34.49% 150.98 μs 400.98 μs 320 | put_in/2 3.62 K 276.42 μs ±23.76% 252.98 μs 546.98 μs 321 | put_in/3 3.09 K 323.22 μs ±22.44% 296.98 μs 630.98 μs 322 | 323 | Comparison: 324 | Map.put/3 5.68 K 325 | put_in/2 3.62 K - 1.57x slower +100.44 μs 326 | put_in/3 3.09 K - 1.84x slower +147.23 μs 327 | 328 | ##### With input Small (30 items) ##### 329 | Name ips average deviation median 99th % 330 | Map.put/3 1040.86 K 0.96 μs ±3795.74% 0.98 μs 1.98 μs 331 | put_in/2 400.53 K 2.50 μs ±1295.21% 1.98 μs 2.98 μs 332 | put_in/3 338.63 K 2.95 μs ±1124.35% 1.98 μs 3.98 μs 333 | 334 | Comparison: 335 | Map.put/3 1040.86 K 336 | put_in/2 400.53 K - 2.60x slower +1.54 μs 337 | put_in/3 338.63 K - 3.07x slower +1.99 μs 338 | ``` 339 | 340 | #### Splitting Large Strings [code](code/general/string_split_large_strings.exs) 341 | 342 | Elixir's `String.split/2` is by far the fastest option for splitting strings, and 343 | using a string literal as the splitter instead of a regex will yield significant 344 | performance benefits. 
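A minimal sketch of the variants on a tiny input (the input string is illustrative; the benchmark uses far larger ones):

```elixir
input = "1,2,3,4,5"

# Fastest: split on a plain string pattern.
split_plain = String.split(input, ",")

# Much slower on large inputs: a regex splitter.
split_regex = String.split(input, ~r/,/)

# The Erlang alternative needs the :global option to split on every
# occurrence rather than just the first one.
split_erlang = :binary.split(input, ",", [:global])
```

All three return the same list; only the work done per match differs.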
345 | 346 | ``` 347 | $ mix run code/general/string_split_large_strings.exs 348 | Operating System: macOS 349 | CPU Information: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz 350 | Number of Available Cores: 16 351 | Available memory: 16 GB 352 | Elixir 1.11.0-rc.0 353 | Erlang 23.0.2 354 | 355 | Benchmark suite executing with the following configuration: 356 | warmup: 2 s 357 | time: 10 s 358 | memory time: 0 ns 359 | parallel: 1 360 | inputs: Large string (1 Million Numbers), Medium string (10 Thousand Numbers), Small string (1 Hundred Numbers) 361 | Estimated total run time: 2.40 min 362 | 363 | Benchmarking split with input Large string (1 Million Numbers)... 364 | Benchmarking split with input Medium string (10 Thousand Numbers)... 365 | Benchmarking split with input Small string (1 Hundred Numbers)... 366 | Benchmarking split erlang with input Large string (1 Million Numbers)... 367 | Benchmarking split erlang with input Medium string (10 Thousand Numbers)... 368 | Benchmarking split erlang with input Small string (1 Hundred Numbers)... 369 | Benchmarking split regex with input Large string (1 Million Numbers)... 370 | Benchmarking split regex with input Medium string (10 Thousand Numbers)... 371 | Benchmarking split regex with input Small string (1 Hundred Numbers)... 372 | Benchmarking splitter |> to_list with input Large string (1 Million Numbers)... 373 | Benchmarking splitter |> to_list with input Medium string (10 Thousand Numbers)... 374 | Benchmarking splitter |> to_list with input Small string (1 Hundred Numbers)... 
375 | 376 | ##### With input Large string (1 Million Numbers) ##### 377 | Name ips average deviation median 99th % 378 | split 13.96 71.63 ms ±29.57% 59.81 ms 121.28 ms 379 | splitter |> to_list 3.24 308.26 ms ±14.54% 290.97 ms 442.09 ms 380 | split erlang 1.09 919.28 ms ±4.86% 939.75 ms 998.24 ms 381 | split regex 0.78 1286.40 ms ±9.80% 1253.48 ms 1489.63 ms 382 | 383 | Comparison: 384 | split 13.96 385 | splitter |> to_list 3.24 - 4.30x slower +236.62 ms 386 | split erlang 1.09 - 12.83x slower +847.65 ms 387 | split regex 0.78 - 17.96x slower +1214.77 ms 388 | 389 | ##### With input Medium string (10 Thousand Numbers) ##### 390 | Name ips average deviation median 99th % 391 | split 3813.15 0.26 ms ±45.13% 0.21 ms 0.57 ms 392 | splitter |> to_list 397.04 2.52 ms ±14.65% 2.48 ms 3.73 ms 393 | split erlang 137.55 7.27 ms ±8.52% 7.17 ms 9.35 ms 394 | split regex 93.73 10.67 ms ±7.46% 10.56 ms 13.07 ms 395 | 396 | Comparison: 397 | split 3813.15 398 | splitter |> to_list 397.04 - 9.60x slower +2.26 ms 399 | split erlang 137.55 - 27.72x slower +7.01 ms 400 | split regex 93.73 - 40.68x slower +10.41 ms 401 | 402 | ##### With input Small string (1 Hundred Numbers) ##### 403 | Name ips average deviation median 99th % 404 | split 365.94 K 2.73 μs ±634.81% 2 μs 14 μs 405 | splitter |> to_list 45.63 K 21.92 μs ±45.25% 20 μs 63 μs 406 | split erlang 14.19 K 70.48 μs ±48.03% 53 μs 186.91 μs 407 | split regex 9.87 K 101.28 μs ±24.68% 93 μs 222 μs 408 | 409 | Comparison: 410 | split 365.94 K 411 | splitter |> to_list 45.63 K - 8.02x slower +19.18 μs 412 | split erlang 14.19 K - 25.79x slower +67.74 μs 413 | split regex 9.87 K - 37.06x slower +98.55 μs 414 | ``` 415 | 416 | #### `sort` vs. `sort_by` [code](code/general/sort_vs_sort_by.exs) 417 | 418 | Sorting a list of maps or keyword lists can be done in various ways. 
However, since the sort 419 | behavior is fairly implicit if you're sorting without a defined sort function, and since the 420 | speed difference is quite small, it's probably best to use `sort/2` or `sort_by/2` in all 421 | cases when sorting lists and maps (including keyword lists and structs). 422 | 423 | ``` 424 | $ mix run code/general/sort_vs_sort_by.exs 425 | Operating System: macOS 426 | CPU Information: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz 427 | Number of Available Cores: 16 428 | Available memory: 16 GB 429 | Elixir 1.11.0-rc.0 430 | Erlang 23.0.2 431 | 432 | Benchmark suite executing with the following configuration: 433 | warmup: 2 s 434 | time: 10 s 435 | memory time: 0 ns 436 | parallel: 1 437 | inputs: none specified 438 | Estimated total run time: 36 s 439 | 440 | Benchmarking sort/1... 441 | Benchmarking sort/2... 442 | Benchmarking sort_by/2... 443 | 444 | Name ips average deviation median 99th % 445 | sort/1 7.82 K 127.86 μs ±23.45% 118 μs 269 μs 446 | sort/2 7.01 K 142.57 μs ±22.48% 132 μs 294 μs 447 | sort_by/2 6.68 K 149.62 μs ±22.70% 138 μs 308 μs 448 | 449 | Comparison: 450 | sort/1 7.82 K 451 | sort/2 7.01 K - 1.12x slower +14.71 μs 452 | sort_by/2 6.68 K - 1.17x slower +21.76 μs 453 | ``` 454 | 455 | #### Retrieving state from ets tables vs. Gen Servers [code](code/general/ets_vs_gen_server.exs) 456 | 457 | There are many differences between Gen Servers and ets tables, but many people 458 | have often praised ets tables for being extremely fast. For the simple case of 459 | retrieving information from a key-value store, the ets table is indeed much 460 | faster for reads. For more complicated use cases, and for comparisons of writes 461 | instead of reads, further benchmarks are needed, but so far ets lives up to its 462 | reputation for speed. 
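A minimal sketch of the two read paths (the module, table, and key names below are made up for illustration):

```elixir
defmodule StateServer do
  use GenServer

  def start_link(state), do: GenServer.start_link(__MODULE__, state, name: __MODULE__)
  def get(key), do: GenServer.call(__MODULE__, {:get, key})

  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_call({:get, key}, _from, state), do: {:reply, Map.get(state, key), state}
end

{:ok, _pid} = StateServer.start_link(%{answer: 42})

table = :ets.new(:cache, [:set, :protected, read_concurrency: true])
true = :ets.insert(table, {:answer, 42})

# The ets lookup runs entirely in the calling process...
[{:answer, from_ets}] = :ets.lookup(table, :answer)

# ...while the GenServer read is a full message round-trip to another process.
from_server = StateServer.get(:answer)
```

The round-trip is what the GenServer pays for: every read is serialized through the server's mailbox, whereas ets reads can happen concurrently from many processes.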
463 | 464 | ``` 465 | $ mix run code/general/ets_vs_gen_server.exs 466 | Operating System: macOS 467 | CPU Information: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz 468 | Number of Available Cores: 16 469 | Available memory: 16 GB 470 | Elixir 1.11.0-rc.0 471 | Erlang 23.0.2 472 | 473 | Benchmark suite executing with the following configuration: 474 | warmup: 2 s 475 | time: 10 s 476 | memory time: 0 ns 477 | parallel: 1 478 | inputs: none specified 479 | Estimated total run time: 24 s 480 | 481 | Benchmarking ets table... 482 | Benchmarking gen server... 483 | 484 | Name ips average deviation median 99th % 485 | ets table 5.11 M 0.196 μs ±8972.86% 0 μs 0.98 μs 486 | gen server 0.55 M 1.82 μs ±997.04% 1.98 μs 2.98 μs 487 | 488 | Comparison: 489 | ets table 5.11 M 490 | gen server 0.55 M - 9.31x slower +1.63 μs 491 | ``` 492 | 493 | #### Writing state in ets tables, persistent_term and Gen Servers [code](code/general/ets_vs_gen_server_write.exs) 494 | 495 | Not only is it faster to read from `ets` or `persistent_term` versus a `GenServer`, but it's also 496 | much faster to write state in these two options. If you need to store state but don't need much 497 | behavior around that state, `ets` or `persistent_term` is always going 498 | to be the better choice over a `GenServer`. `persistent_term` is the fastest to read from by far, 499 | but is global across the VM and also slower to write to, so in most cases `ets` will be the best 500 | choice for storing state and should be the default option to start with. 
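The two write paths can be sketched as follows (the table name and keys are illustrative):

```elixir
table = :ets.new(:settings, [:set, :public, write_concurrency: true])

# An ets write is fast and scoped to the table.
true = :ets.insert(table, {:timeout_ms, 5_000})

# A :persistent_term write is global to the whole VM, and an update
# forces a scan of all processes, so writes should be rare.
:ok = :persistent_term.put({:my_app, :timeout_ms}, 5_000)

[{:timeout_ms, from_ets}] = :ets.lookup(table, :timeout_ms)
from_term = :persistent_term.get({:my_app, :timeout_ms})
```

Namespacing the `:persistent_term` key with a tuple like `{:my_app, key}` is a common convention precisely because the store is global.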
501 | 502 | ``` 503 | $ mix run code/general/ets_vs_gen_server_write.exs 504 | Operating System: macOS 505 | CPU Information: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz 506 | Number of Available Cores: 16 507 | Available memory: 16 GB 508 | Elixir 1.11.0-rc.0 509 | Erlang 23.0.2 510 | 511 | Benchmark suite executing with the following configuration: 512 | warmup: 2 s 513 | time: 10 s 514 | memory time: 0 ns 515 | parallel: 1 516 | inputs: none specified 517 | Estimated total run time: 36 s 518 | 519 | Benchmarking ets table... 520 | Benchmarking gen server... 521 | Benchmarking persistent term... 522 | 523 | Name ips average deviation median 99th % 524 | ets table 5.22 M 191.61 ns ±798.69% 0 ns 1000 ns 525 | persistent term 2.43 M 410.87 ns ±11324.51% 0 ns 1000 ns 526 | gen server 0.58 M 1715.61 ns ±367.31% 2000 ns 2000 ns 527 | 528 | Comparison: 529 | ets table 5.22 M 530 | persistent term 2.43 M - 2.14x slower +219.26 ns 531 | gen server 0.58 M - 8.95x slower +1524.00 ns 532 | ``` 533 | 534 | 535 | 536 | #### Comparing strings vs. atoms [code](code/general/comparing_strings_vs_atoms.exs) 537 | 538 | Because atoms are stored in a special table in the BEAM, comparing atoms is 539 | rather fast compared to comparing strings, where the underlying binaries have to 540 | be compared byte by byte. When you have a choice of what type to 541 | use, atoms are the faster choice. However, what you should not do is 542 | convert strings to atoms solely for the perceived speed benefit, since the conversion 543 | ends up being much slower than just comparing the strings, even dozens of times. 
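A minimal sketch of the three cases (the `"pending"` values are illustrative):

```elixir
# Atom comparison is effectively a single word comparison in the BEAM.
atoms_equal? = :pending == :pending

# String comparison walks the underlying binaries byte by byte.
strings_equal? = "pending" == "pending"

# The anti-pattern: converting before comparing pays for the conversion
# on every call. Note that String.to_atom/1 on untrusted input can also
# exhaust the atom table; to_existing_atom/1 at least avoids that.
converted_equal? = String.to_existing_atom("pending") == :pending
```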
544 | 545 | ``` 546 | $ mix run code/general/comparing_strings_vs_atoms.exs 547 | Operating System: macOS 548 | CPU Information: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz 549 | Number of Available Cores: 16 550 | Available memory: 16 GB 551 | Elixir 1.11.0-rc.0 552 | Erlang 23.0.2 553 | 554 | Benchmark suite executing with the following configuration: 555 | warmup: 2 s 556 | time: 10 s 557 | memory time: 0 ns 558 | parallel: 1 559 | inputs: Large (1-100), Medium (1-50), Small (1-5) 560 | Estimated total run time: 1.80 min 561 | 562 | Benchmarking Comparing atoms with input Large (1-100)... 563 | Benchmarking Comparing atoms with input Medium (1-50)... 564 | Benchmarking Comparing atoms with input Small (1-5)... 565 | Benchmarking Comparing strings with input Large (1-100)... 566 | Benchmarking Comparing strings with input Medium (1-50)... 567 | Benchmarking Comparing strings with input Small (1-5)... 568 | Benchmarking Converting to atoms and then comparing with input Large (1-100)... 569 | Benchmarking Converting to atoms and then comparing with input Medium (1-50)... 570 | Benchmarking Converting to atoms and then comparing with input Small (1-5)... 
571 | 572 | ##### With input Large (1-100) ##### 573 | Name ips average deviation median 99th % 574 | Comparing atoms 3.74 M 267.46 ns ±12198.11% 0 ns 1000 ns 575 | Comparing strings 3.71 M 269.25 ns ±11719.28% 0 ns 1000 ns 576 | Converting to atoms and then comparing 0.94 M 1065.67 ns ±290.55% 1000 ns 2000 ns 577 | 578 | Comparison: 579 | Comparing atoms 3.74 M 580 | Comparing strings 3.71 M - 1.01x slower +1.79 ns 581 | Converting to atoms and then comparing 0.94 M - 3.98x slower +798.21 ns 582 | 583 | ##### With input Medium (1-50) ##### 584 | Name ips average deviation median 99th % 585 | Comparing atoms 3.70 M 270.08 ns ±11419.92% 0 ns 1000 ns 586 | Comparing strings 3.68 M 271.52 ns ±11603.67% 0 ns 1000 ns 587 | Converting to atoms and then comparing 1.34 M 743.76 ns ±2924.56% 1000 ns 1000 ns 588 | 589 | Comparison: 590 | Comparing atoms 3.70 M 591 | Comparing strings 3.68 M - 1.01x slower +1.44 ns 592 | Converting to atoms and then comparing 1.34 M - 2.75x slower +473.68 ns 593 | 594 | ##### With input Small (1-5) ##### 595 | Name ips average deviation median 99th % 596 | Comparing atoms 3.81 M 262.27 ns ±11438.39% 0 ns 1000 ns 597 | Comparing strings 3.69 M 270.86 ns ±11945.32% 0 ns 1000 ns 598 | Converting to atoms and then comparing 2.45 M 407.62 ns ±8371.44% 0 ns 1000 ns 599 | 600 | Comparison: 601 | Comparing atoms 3.81 M 602 | Comparing strings 3.69 M - 1.03x slower +8.59 ns 603 | Converting to atoms and then comparing 2.45 M - 1.55x slower +145.34 ns 604 | ``` 605 | 606 | #### spawn vs. spawn_link [code](code/general/spawn_vs_spawn_link.exs) 607 | 608 | There are two ways to spawn a process on the BEAM: `spawn` and `spawn_link`. 609 | Because `spawn_link` links the child process to the process which spawned it, it 610 | takes slightly longer. 
The way in which processes are spawned is unlikely to be 611 | a bottleneck in most applications, though, and the resiliency benefits of OTP 612 | supervision trees vastly outweigh the slightly slower run time of `spawn_link`, 613 | so that should still be favored in nearly every case in which processes need to 614 | be spawned. 615 | 616 | ``` 617 | $ mix run code/general/spawn_vs_spawn_link.exs 618 | Operating System: macOS 619 | CPU Information: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz 620 | Number of Available Cores: 16 621 | Available memory: 16 GB 622 | Elixir 1.11.0-rc.0 623 | Erlang 23.0.2 624 | 625 | Benchmark suite executing with the following configuration: 626 | warmup: 2 s 627 | time: 10 s 628 | memory time: 2 s 629 | parallel: 1 630 | inputs: none specified 631 | Estimated total run time: 28 s 632 | 633 | Benchmarking spawn/1... 634 | Benchmarking spawn_link/1... 635 | 636 | Name ips average deviation median 99th % 637 | spawn/1 636.00 K 1.57 μs ±1512.39% 1 μs 2 μs 638 | spawn_link/1 576.18 K 1.74 μs ±1402.58% 2 μs 2 μs 639 | 640 | Comparison: 641 | spawn/1 636.00 K 642 | spawn_link/1 576.18 K - 1.10x slower +0.163 μs 643 | 644 | Memory usage statistics: 645 | 646 | Name Memory usage 647 | spawn/1 72 B 648 | spawn_link/1 72 B - 1.00x memory usage +0 B 649 | 650 | **All measurements for memory usage were the same** 651 | ``` 652 | 653 | #### Replacements for Enum.filter_map/3 [code](code/general/filter_map.exs) 654 | 655 | Elixir used to have an `Enum.filter_map/3` function that would filter a list and 656 | also apply a function to each element in the list that was not removed, but it 657 | was deprecated in version 1.5. Luckily there are still four other ways to do 658 | that same thing! They're all mostly the same, but if you're looking for the 659 | options with the best performance your best bet is to use either a `for` 660 | comprehension or `Enum.reduce/3` and then `Enum.reverse/1`. 
Using 661 | `Enum.filter/2` and then `Enum.map/2` is also a fine choice, but it has higher 662 | memory usage than the other two options. 663 | 664 | The one option you should avoid is using `Enum.flat_map/2` as it is both slower 665 | and has higher memory usage. 666 | 667 | ``` 668 | $ mix run code/general/filter_map.exs 669 | Operating System: macOS 670 | CPU Information: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz 671 | Number of Available Cores: 16 672 | Available memory: 16 GB 673 | Elixir 1.11.0-rc.0 674 | Erlang 23.0.2 675 | 676 | Benchmark suite executing with the following configuration: 677 | warmup: 2 s 678 | time: 10 s 679 | memory time: 10 ms 680 | parallel: 1 681 | inputs: Large, Medium, Small 682 | Estimated total run time: 2.40 min 683 | 684 | Benchmarking filter |> map with input Large... 685 | Benchmarking filter |> map with input Medium... 686 | Benchmarking filter |> map with input Small... 687 | Benchmarking flat_map with input Large... 688 | Benchmarking flat_map with input Medium... 689 | Benchmarking flat_map with input Small... 690 | Benchmarking for comprehension with input Large... 691 | Benchmarking for comprehension with input Medium... 692 | Benchmarking for comprehension with input Small... 693 | Benchmarking reduce |> reverse with input Large... 694 | Benchmarking reduce |> reverse with input Medium... 695 | Benchmarking reduce |> reverse with input Small... 
696 | 697 | ##### With input Large ##### 698 | Name ips average deviation median 99th % 699 | reduce |> reverse 12.12 82.51 ms ±4.60% 81.46 ms 97.24 ms 700 | for comprehension 12.12 82.51 ms ±4.53% 81.87 ms 94.38 ms 701 | filter |> map 10.78 92.75 ms ±4.91% 92.15 ms 103.58 ms 702 | flat_map 8.41 118.89 ms ±3.22% 118.22 ms 134.28 ms 703 | 704 | Comparison: 705 | reduce |> reverse 12.12 706 | for comprehension 12.12 - 1.00x slower +0.00348 ms 707 | filter |> map 10.78 - 1.12x slower +10.24 ms 708 | flat_map 8.41 - 1.44x slower +36.38 ms 709 | 710 | Memory usage statistics: 711 | 712 | Name Memory usage 713 | reduce |> reverse 7.57 MB 714 | for comprehension 7.57 MB - 1.00x memory usage +0 MB 715 | filter |> map 13.28 MB - 1.75x memory usage +5.71 MB 716 | flat_map 14.32 MB - 1.89x memory usage +6.75 MB 717 | 718 | **All measurements for memory usage were the same** 719 | 720 | ##### With input Medium ##### 721 | Name ips average deviation median 99th % 722 | for comprehension 1.27 K 788.69 μs ±14.54% 732 μs 1287.38 μs 723 | reduce |> reverse 1.26 K 792.37 μs ±14.73% 732 μs 1283.97 μs 724 | filter |> map 1.16 K 859.07 μs ±14.68% 802 μs 1377.75 μs 725 | flat_map 0.86 K 1157.55 μs ±15.68% 1093 μs 1838.80 μs 726 | 727 | Comparison: 728 | for comprehension 1.27 K 729 | reduce |> reverse 1.26 K - 1.00x slower +3.68 μs 730 | filter |> map 1.16 K - 1.09x slower +70.38 μs 731 | flat_map 0.86 K - 1.47x slower +368.87 μs 732 | 733 | Memory usage statistics: 734 | 735 | Name Memory usage 736 | for comprehension 57.13 KB 737 | reduce |> reverse 57.13 KB - 1.00x memory usage +0 KB 738 | filter |> map 109.12 KB - 1.91x memory usage +51.99 KB 739 | flat_map 130.66 KB - 2.29x memory usage +73.54 KB 740 | 741 | **All measurements for memory usage were the same** 742 | 743 | ##### With input Small ##### 744 | Name ips average deviation median 99th % 745 | reduce |> reverse 121.39 K 8.24 μs ±179.26% 8 μs 30 μs 746 | for comprehension 121.20 K 8.25 μs ±180.01% 8 μs 30 μs 747 | filter |> 
map 111.29 K 8.99 μs ±144.77% 8 μs 31 μs 748 | flat_map 85.08 K 11.75 μs ±119.95% 11 μs 37 μs 749 | 750 | Comparison: 751 | reduce |> reverse 121.39 K 752 | for comprehension 121.20 K - 1.00x slower +0.0133 μs 753 | filter |> map 111.29 K - 1.09x slower +0.75 μs 754 | flat_map 85.08 K - 1.43x slower +3.52 μs 755 | 756 | Memory usage statistics: 757 | 758 | Name Memory usage 759 | reduce |> reverse 1.09 KB 760 | for comprehension 1.09 KB - 1.00x memory usage +0 KB 761 | filter |> map 1.60 KB - 1.46x memory usage +0.51 KB 762 | flat_map 1.62 KB - 1.48x memory usage +0.52 KB 763 | 764 | **All measurements for memory usage were the same** 765 | ``` 766 | 767 | #### String.slice/3 vs :binary.part/3 [code](code/general/string_slice.exs) 768 | 769 | From `String.slice/3` [documentation](https://hexdocs.pm/elixir/String.html#slice/3): 770 | Remember this function works with Unicode graphemes and considers the slices to represent grapheme offsets. If you want to split on raw bytes, check `Kernel.binary_part/3` instead. 771 | 772 | ``` 773 | $ mix run code/general/string_slice.exs 774 | Operating System: macOS 775 | CPU Information: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz 776 | Number of Available Cores: 16 777 | Available memory: 16 GB 778 | Elixir 1.11.0-rc.0 779 | Erlang 23.0.2 780 | 781 | Benchmark suite executing with the following configuration: 782 | warmup: 100 ms 783 | time: 2 s 784 | memory time: 10 ms 785 | parallel: 1 786 | inputs: Large string (10 Thousand Numbers), Small string (10 Numbers) 787 | Estimated total run time: 12.66 s 788 | 789 | Benchmarking :binary.part/3 with input Large string (10 Thousand Numbers)... 790 | Benchmarking :binary.part/3 with input Small string (10 Numbers)... 791 | Benchmarking String.slice/3 with input Large string (10 Thousand Numbers)... 792 | Benchmarking String.slice/3 with input Small string (10 Numbers)... 793 | Benchmarking binary_part/3 with input Large string (10 Thousand Numbers)... 
794 | Benchmarking binary_part/3 with input Small string (10 Numbers)... 795 | 796 | ##### With input Large string (10 Thousand Numbers) ##### 797 | Name ips average deviation median 99th % 798 | binary_part/3 11.14 M 89.78 ns ±2513.45% 100 ns 200 ns 799 | :binary.part/3 3.59 M 278.65 ns ±9466.55% 0 ns 1000 ns 800 | String.slice/3 0.90 M 1112.12 ns ±440.40% 1000 ns 2000 ns 801 | 802 | Comparison: 803 | binary_part/3 11.14 M 804 | :binary.part/3 3.59 M - 3.10x slower +188.87 ns 805 | String.slice/3 0.90 M - 12.39x slower +1022.34 ns 806 | 807 | Memory usage statistics: 808 | 809 | Name Memory usage 810 | binary_part/3 0 B 811 | :binary.part/3 0 B - 1.00x memory usage +0 B 812 | String.slice/3 880 B - ∞ x memory usage +880 B 813 | 814 | **All measurements for memory usage were the same** 815 | 816 | ##### With input Small string (10 Numbers) ##### 817 | Name ips average deviation median 99th % 818 | binary_part/3 3.64 M 274.57 ns ±7776.31% 0 ns 1000 ns 819 | :binary.part/3 3.56 M 281.06 ns ±9071.16% 0 ns 1000 ns 820 | String.slice/3 0.91 M 1103.31 ns ±246.39% 1000 ns 2000 ns 821 | 822 | Comparison: 823 | binary_part/3 3.64 M 824 | :binary.part/3 3.56 M - 1.02x slower +6.48 ns 825 | String.slice/3 0.91 M - 4.02x slower +828.73 ns 826 | 827 | Memory usage statistics: 828 | 829 | Name Memory usage 830 | binary_part/3 0 B 831 | :binary.part/3 0 B - 1.00x memory usage +0 B 832 | String.slice/3 880 B - ∞ x memory usage +880 B 833 | 834 | **All measurements for memory usage were the same** 835 | ``` 836 | 837 | #### Filtering maps [code](code/general/filtering_maps.exs) 838 | 839 | If we have a map and want to filter out key-value pairs from that map, there are 840 | several ways to do it. However, because of some optimizations in Erlang, 841 | `:maps.filter/2` is faster than any of the versions implemented in Elixir. 
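As a minimal sketch (using a toy three-entry map, not the benchmark's generated inputs), the call shapes being compared look like this — note that `:maps.filter/2` takes its predicate as a two-argument function of key and value, while the Elixir pipelines pass one `{key, value}` tuple per entry:

```elixir
# Toy map for illustration; the benchmark uses much larger generated maps.
map = %{a: 1, b: 2, c: 3}

# Erlang's :maps.filter/2 passes key and value as two separate arguments.
erlang_way = :maps.filter(fn _key, value -> rem(value, 2) == 0 end, map)

# The Elixir versions enumerate {key, value} tuples and rebuild the map afterwards.
elixir_way =
  map
  |> Enum.filter(fn {_key, value} -> rem(value, 2) == 0 end)
  |> Map.new()

erlang_way == elixir_way
# => true (both are %{b: 2})
```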
842 | If you look at the benchmark code, you'll notice that the function used for 843 | filtering takes two arguments (the key and value) instead of one (a tuple with 844 | the key and value), and it's this difference that is responsible for the 845 | decreased execution time and memory usage. 846 | 847 | ``` 848 | $ mix run code/general/filtering_maps.exs 849 | Operating System: macOS 850 | CPU Information: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz 851 | Number of Available Cores: 16 852 | Available memory: 16 GB 853 | Elixir 1.11.0-rc.0 854 | Erlang 23.0.2 855 | 856 | Benchmark suite executing with the following configuration: 857 | warmup: 2 s 858 | time: 10 s 859 | memory time: 1 s 860 | parallel: 1 861 | inputs: Large (10_000), Medium (100), Small (1) 862 | Estimated total run time: 2.60 min 863 | 864 | Benchmarking :maps.filter with input Large (10_000)... 865 | Benchmarking :maps.filter with input Medium (100)... 866 | Benchmarking :maps.filter with input Small (1)... 867 | Benchmarking Enum.filter/2 |> Enum.into/2 with input Large (10_000)... 868 | Benchmarking Enum.filter/2 |> Enum.into/2 with input Medium (100)... 869 | Benchmarking Enum.filter/2 |> Enum.into/2 with input Small (1)... 870 | Benchmarking Enum.filter/2 |> Map.new/1 with input Large (10_000)... 871 | Benchmarking Enum.filter/2 |> Map.new/1 with input Medium (100)... 872 | Benchmarking Enum.filter/2 |> Map.new/1 with input Small (1)... 873 | Benchmarking for with input Large (10_000)... 874 | Benchmarking for with input Medium (100)... 875 | Benchmarking for with input Small (1)... 
876 | 877 | ##### With input Large (10_000) ##### 878 | Name ips average deviation median 99th % 879 | :maps.filter 669.86 1.49 ms ±14.38% 1.44 ms 2.31 ms 880 | Enum.filter/2 |> Enum.into/2 532.59 1.88 ms ±19.86% 1.78 ms 2.87 ms 881 | Enum.filter/2 |> Map.new/1 527.37 1.90 ms ±25.17% 1.79 ms 2.85 ms 882 | for 524.51 1.91 ms ±31.33% 1.80 ms 2.83 ms 883 | 884 | Comparison: 885 | :maps.filter 669.86 886 | Enum.filter/2 |> Enum.into/2 532.59 - 1.26x slower +0.38 ms 887 | Enum.filter/2 |> Map.new/1 527.37 - 1.27x slower +0.40 ms 888 | for 524.51 - 1.28x slower +0.41 ms 889 | 890 | Memory usage statistics: 891 | 892 | Name Memory usage 893 | :maps.filter 780.45 KB 894 | Enum.filter/2 |> Enum.into/2 782.85 KB - 1.00x memory usage +2.41 KB 895 | Enum.filter/2 |> Map.new/1 782.87 KB - 1.00x memory usage +2.42 KB 896 | for 782.86 KB - 1.00x memory usage +2.41 KB 897 | 898 | **All measurements for memory usage were the same** 899 | 900 | ##### With input Medium (100) ##### 901 | Name ips average deviation median 99th % 902 | :maps.filter 76.01 K 13.16 μs ±90.13% 12 μs 42 μs 903 | Enum.filter/2 |> Map.new/1 61.19 K 16.34 μs ±61.27% 15 μs 50 μs 904 | for 60.89 K 16.42 μs ±65.36% 15 μs 51 μs 905 | Enum.filter/2 |> Enum.into/2 60.60 K 16.50 μs ±60.52% 15 μs 51 μs 906 | 907 | Comparison: 908 | :maps.filter 76.01 K 909 | Enum.filter/2 |> Map.new/1 61.19 K - 1.24x slower +3.19 μs 910 | for 60.89 K - 1.25x slower +3.27 μs 911 | Enum.filter/2 |> Enum.into/2 60.60 K - 1.25x slower +3.35 μs 912 | 913 | Memory usage statistics: 914 | 915 | Name Memory usage 916 | :maps.filter 5.67 KB 917 | Enum.filter/2 |> Map.new/1 7.84 KB - 1.38x memory usage +2.17 KB 918 | for 7.84 KB - 1.38x memory usage +2.17 KB 919 | Enum.filter/2 |> Enum.into/2 7.84 KB - 1.38x memory usage +2.17 KB 920 | 921 | **All measurements for memory usage were the same** 922 | 923 | ##### With input Small (1) ##### 924 | Name ips average deviation median 99th % 925 | :maps.filter 2.46 M 406.55 ns ±6862.02% 0 ns 1000 ns 926 
| for 1.81 M 551.70 ns ±4974.10% 0 ns 1000 ns 927 | Enum.filter/2 |> Map.new/1 1.78 M 562.13 ns ±5004.53% 0 ns 1000 ns 928 | Enum.filter/2 |> Enum.into/2 1.64 M 608.18 ns ±4796.51% 1000 ns 1000 ns 929 | 930 | Comparison: 931 | :maps.filter 2.46 M 932 | for 1.81 M - 1.36x slower +145.15 ns 933 | Enum.filter/2 |> Map.new/1 1.78 M - 1.38x slower +155.58 ns 934 | Enum.filter/2 |> Enum.into/2 1.64 M - 1.50x slower +201.63 ns 935 | 936 | Memory usage statistics: 937 | 938 | Name Memory usage 939 | :maps.filter 136 B 940 | for 248 B - 1.82x memory usage +112 B 941 | Enum.filter/2 |> Map.new/1 248 B - 1.82x memory usage +112 B 942 | Enum.filter/2 |> Enum.into/2 248 B - 1.82x memory usage +112 B 943 | 944 | **All measurements for memory usage were the same** 945 | ``` 946 | 947 | ## Something went wrong 948 | 949 | Something looks wrong to you? :cry: Have a better example? :heart_eyes: Excellent! 950 | 951 | [Please open an Issue](https://github.com/devonestes/fast-elixir/issues/new) or [open a Pull Request](https://github.com/devonestes/fast-elixir/pulls) to fix it. 952 | 953 | Thank you in advance! :wink: :beer: 954 | 955 | ## Also Check Out 956 | 957 | - [Benchmarking in Practice](https://www.youtube.com/watch?v=7-mE5CKXjkw) 958 | 959 | Talk by [@PragTob](https://github.com/PragTob) from ElixirLive 2016 about benchmarking in Elixir. 960 | 961 | - [Credo](https://github.com/rrrene/credo) 962 | 963 | Wonderful static analysis tool by [@rrrene](https://github.com/rrrene). It's not _just_ about speed, but it will flag some performance issues. 964 | 965 | Brought to you by [@devoncestes](https://twitter.com/devoncestes) 966 | 967 | ## License 968 | 969 | This work is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/).
970 | 971 | ## Code License 972 | 973 | ### CC0 1.0 Universal 974 | 975 | To the extent possible under law, @devonestes has waived all copyright and related or neighboring rights to "fast-elixir". 976 | 977 | This work belongs to the community. 978 | -------------------------------------------------------------------------------- /code/general/comparing_strings_vs_atoms.exs: -------------------------------------------------------------------------------- 1 | defmodule Compare.Fast do 2 | def compare(first, second) do 3 | first == second 4 | end 5 | end 6 | 7 | defmodule Compare.Medium do 8 | def compare(first, second) do 9 | String.to_atom(first) == String.to_atom(second) 10 | end 11 | end 12 | 13 | defmodule Compare.Slow do 14 | def compare(first, second) do 15 | first == second 16 | end 17 | end 18 | 19 | defmodule Compare.Benchmark do 20 | @inputs %{ 21 | "Large (1-100)" => :large, 22 | "Medium (1-50)" => :medium, 23 | "Small (1-5)" => :small 24 | } 25 | 26 | @strings_right %{ 27 | large: Enum.join(1..100), 28 | medium: Enum.join(1..50), 29 | small: Enum.join(1..5) 30 | } 31 | 32 | @strings_left %{ 33 | large: Enum.join(2..101), 34 | medium: Enum.join(2..51), 35 | small: Enum.join(2..6) 36 | } 37 | 38 | @atoms_right %{ 39 | large: 1..100 |> Enum.join |> String.to_atom, 40 | medium: 1..50 |> Enum.join |> String.to_atom, 41 | small: 1..5 |> Enum.join |> String.to_atom 42 | } 43 | 44 | @atoms_left %{ 45 | large: 2..101 |> Enum.join |> String.to_atom, 46 | medium: 2..51 |> Enum.join |> String.to_atom, 47 | small: 2..6 |> Enum.join |> String.to_atom 48 | } 49 | 50 | def benchmark do 51 | Benchee.run( 52 | %{ 53 | "Comparing atoms" => fn key -> bench_func(@atoms_left[key], @atoms_right[key], Compare.Fast) end, 54 | "Converting to atoms and then comparing" => fn key -> bench_func(@strings_left[key], @strings_right[key], Compare.Medium) end, 55 | "Comparing strings" => fn key -> bench_func(@strings_left[key], @strings_right[key], Compare.Slow) end 56 | }, 57 | time: 10, 
58 | inputs: @inputs, 59 | print: [fast_warning: false] 60 | ) 61 | end 62 | 63 | def bench_func(first, second, module) do 64 | module.compare(first, second) 65 | end 66 | end 67 | 68 | Compare.Benchmark.benchmark() 69 | -------------------------------------------------------------------------------- /code/general/concat_vs_cons.exs: -------------------------------------------------------------------------------- 1 | defmodule ListAdd.Fast do 2 | def add_lists(enumerator, list) do 3 | enumerator 4 | |> Enum.reduce([0], fn _, acc -> 5 | [acc | list] 6 | end) 7 | |> List.flatten() 8 | end 9 | end 10 | 11 | defmodule ListAdd.Medium do 12 | def add_lists(enumerator, list) do 13 | enumerator 14 | |> Enum.reduce([0], fn _, acc -> 15 | [list | acc] 16 | end) 17 | |> Enum.reverse() 18 | |> List.flatten() 19 | end 20 | end 21 | 22 | defmodule ListAdd.Slow do 23 | def add_lists(enumerator, list) do 24 | Enum.reduce(enumerator, [0], fn _, acc -> 25 | acc ++ list 26 | end) 27 | end 28 | end 29 | 30 | defmodule ListAdd.Benchmark do 31 | @small_list Enum.to_list(1..10) 32 | @large_list Enum.to_list(1..1_000) 33 | 34 | @inputs %{ 35 | "1,000 small items" => {1..1_000, @small_list}, 36 | "100 small items" => {1..100, @small_list}, 37 | "10 small items" => {1..10, @small_list}, 38 | "1,000 large items" => {1..1_000, @large_list}, 39 | "100 large items" => {1..100, @large_list}, 40 | "10 large items" => {1..10, @large_list}, 41 | } 42 | 43 | def benchmark do 44 | Benchee.run( 45 | %{ 46 | "Cons + Flatten" => fn enumerator -> bench_func(enumerator, ListAdd.Fast) end, 47 | "Cons + Reverse + Flatten" => fn enumerator -> bench_func(enumerator, ListAdd.Medium) end, 48 | "Concatenation" => fn enumerator -> bench_func(enumerator, ListAdd.Slow) end 49 | }, 50 | time: 10, 51 | inputs: @inputs, 52 | print: [fast_warning: false] 53 | ) 54 | end 55 | 56 | def bench_func({enumerator, list}, module) do 57 | module.add_lists(enumerator, list) 58 | end 59 | end 60 | 61 | # Enum.each([ListAdd.Slow,
ListAdd.Medium, ListAdd.Fast], fn module -> 62 | # IO.inspect( 63 | # [0, 1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3] == module.add_lists(0..4, [1, 2, 3]) 64 | # ) 65 | # end) 66 | 67 | ListAdd.Benchmark.benchmark() 68 | -------------------------------------------------------------------------------- /code/general/ets_vs_gen_server.exs: -------------------------------------------------------------------------------- 1 | defmodule RetrieveState.Fast do 2 | def get_state(ets_pid) do 3 | :ets.lookup(ets_pid, :stored_state) 4 | end 5 | end 6 | 7 | defmodule StateHolder do 8 | use GenServer 9 | 10 | def init(_), do: {:ok, %{stored_state: :returned_state}} 11 | 12 | def start_link(state \\ []), do: GenServer.start_link(__MODULE__, state, name: __MODULE__) 13 | 14 | def get_state(key), do: GenServer.call(__MODULE__, {:get_state, key}) 15 | 16 | def handle_call({:get_state, key}, _from, state), do: {:reply, state[key], state} 17 | end 18 | 19 | defmodule RetrieveState.Slow do 20 | def get_state do 21 | StateHolder.get_state(:stored_state) 22 | end 23 | end 24 | 25 | defmodule RetrieveState.Benchmark do 26 | def benchmark do 27 | ets_pid = :ets.new(:state_store, [:set, :public]) 28 | :ets.insert(ets_pid, {:stored_state, :returned_state}) 29 | StateHolder.start_link() 30 | 31 | Benchee.run( 32 | %{ 33 | "ets table" => fn -> RetrieveState.Fast.get_state(ets_pid) end, 34 | "gen server" => fn -> RetrieveState.Slow.get_state() end 35 | }, 36 | time: 10, 37 | print: [fast_warning: false] 38 | ) 39 | end 40 | end 41 | 42 | RetrieveState.Benchmark.benchmark() 43 | -------------------------------------------------------------------------------- /code/general/ets_vs_gen_server_write.exs: -------------------------------------------------------------------------------- 1 | defmodule RetrieveState.Fast do 2 | def put_state(ets_pid, state) do 3 | :ets.insert(ets_pid, {:stored_state, state}) 4 | end 5 | end 6 | 7 | defmodule StateHolder do 8 | use GenServer 9 | 10 | def init(_), do: 
{:ok, %{}} 11 | 12 | def start_link(state \\ []), do: GenServer.start_link(__MODULE__, state, name: __MODULE__) 13 | 14 | def put_state(value), do: GenServer.call(__MODULE__, {:put_state, value}) 15 | 16 | def handle_call({:put_state, value}, _from, state), do: {:reply, true, Map.put(state, :stored_state, value)} 17 | end 18 | 19 | defmodule RetrieveState.Medium do 20 | def put_state(value) do 21 | StateHolder.put_state(value) 22 | end 23 | end 24 | 25 | defmodule RetrieveState.Slow do 26 | def put_state(value) do 27 | :persistent_term.put(:stored_state, value) 28 | end 29 | end 30 | 31 | defmodule RetrieveState.Benchmark do 32 | def benchmark do 33 | ets_pid = :ets.new(:state_store, [:set, :public]) 34 | StateHolder.start_link() 35 | 36 | Benchee.run( 37 | %{ 38 | "ets table" => fn -> RetrieveState.Fast.put_state(ets_pid, :returned_value) end, 39 | "gen server" => fn -> RetrieveState.Medium.put_state(:returned_value) end, 40 | "persistent term" => fn -> RetrieveState.Slow.put_state(:returned_value) end 41 | }, 42 | time: 10, 43 | print: [fast_warning: false] 44 | ) 45 | end 46 | end 47 | 48 | RetrieveState.Benchmark.benchmark() 49 | -------------------------------------------------------------------------------- /code/general/filter_map.exs: -------------------------------------------------------------------------------- 1 | defmodule FilterMap.For do 2 | def filter_map(list, filter_fun, map_fun) do 3 | for num <- list, filter_fun.(num), do: map_fun.(num) 4 | end 5 | end 6 | 7 | defmodule FilterMap.FilterMap do 8 | def filter_map(list, filter_fun, map_fun) do 9 | list 10 | |> Enum.filter(filter_fun) 11 | |> Enum.map(map_fun) 12 | end 13 | end 14 | 15 | defmodule FilterMap.FlatMap do 16 | def filter_map(list, filter_fun, map_fun) do 17 | Enum.flat_map(list, fn num -> 18 | if filter_fun.(num) do 19 | [map_fun.(num)] 20 | else 21 | [] 22 | end 23 | end) 24 | end 25 | end 26 | 27 | defmodule FilterMap.ReduceReverse do 28 | def filter_map(list, filter_fun, map_fun) do 
29 | list 30 | |> Enum.reduce([], fn num, acc -> 31 | if filter_fun.(num) do 32 | [map_fun.(num) | acc] 33 | else 34 | acc 35 | end 36 | end) 37 | |> Enum.reverse() 38 | end 39 | end 40 | 41 | defmodule FilterMap.Benchmark do 42 | @inputs %{ 43 | "Large" => 1..1_000_000, 44 | "Medium" => 1..10000, 45 | "Small" => 1..100 46 | } 47 | 48 | def benchmark do 49 | Benchee.run( 50 | %{ 51 | "for comprehension" => fn range -> bench_func(FilterMap.For, range) end, 52 | "filter |> map" => fn range -> bench_func(FilterMap.FilterMap, range) end, 53 | "flat_map" => fn range -> bench_func(FilterMap.FlatMap, range) end, 54 | "reduce |> reverse" => fn range -> bench_func(FilterMap.ReduceReverse, range) end 55 | }, 56 | time: 10, 57 | memory_time: 0.01, 58 | inputs: @inputs, 59 | print: [fast_warning: false] 60 | ) 61 | end 62 | 63 | def bench_func(module, range) do 64 | module.filter_map(range, &(rem(&1, 3) == 0), &(&1 + 1)) 65 | end 66 | end 67 | 68 | FilterMap.Benchmark.benchmark() 69 | -------------------------------------------------------------------------------- /code/general/filtering_maps.exs: -------------------------------------------------------------------------------- 1 | defmodule FilterMap.EnumFilterMapNew do 2 | def filter(map, func) do 3 | map 4 | |> Enum.filter(func) 5 | |> Map.new() 6 | end 7 | end 8 | 9 | defmodule FilterMap.EnumFilterEnumInto do 10 | def filter(map, func) do 11 | map 12 | |> Enum.filter(func) 13 | |> Enum.into(%{}) 14 | end 15 | end 16 | 17 | defmodule FilterMap.For do 18 | def filter(map, func) do 19 | for tuple <- map, func.(tuple), into: %{}, do: tuple 20 | end 21 | end 22 | 23 | defmodule FilterMap.MapsFilter do 24 | def filter(map, func) do 25 | :maps.filter(func, map) 26 | end 27 | end 28 | 29 | defmodule Compare.Benchmark do 30 | @inputs %{ 31 | "Large (10_000)" => 1..10_000 |> Enum.map(&{&1, &1+1}) |> Map.new(), 32 | "Medium (100)" => 1..100 |> Enum.map(&{&1, &1+1}) |> Map.new(), 33 | "Small (1)" => %{1 => 2} 34 | } 35 | 36 | def 
func({key, value}) do 37 | key != value 38 | end 39 | 40 | def func(key, value) do 41 | key != value 42 | end 43 | 44 | def benchmark do 45 | Benchee.run( 46 | %{ 47 | "Enum.filter/2 |> Map.new/1" => fn map -> bench_func(FilterMap.EnumFilterMapNew, map, &func/1) end, 48 | "Enum.filter/2 |> Enum.into/2" => fn map -> bench_func(FilterMap.EnumFilterEnumInto, map, &func/1) end, 49 | "for" => fn map -> bench_func(FilterMap.For, map, &func/1) end, 50 | ":maps.filter" => fn map -> bench_func(FilterMap.MapsFilter, map, &func/2) end 51 | }, 52 | time: 10, 53 | memory_time: 1, 54 | inputs: @inputs, 55 | print: [fast_warning: false] 56 | ) 57 | end 58 | 59 | def bench_func(module, map, func) do 60 | module.filter(map, func) 61 | end 62 | end 63 | 64 | Compare.Benchmark.benchmark() 65 | -------------------------------------------------------------------------------- /code/general/into_vs_map_into.exs: -------------------------------------------------------------------------------- 1 | defmodule Into.Benchmark do 2 | @inputs %{ 3 | "Large (30k)" => 0..30_000, 4 | "Medium (3k)" => 0..3000, 5 | "Small (30)" => 0..30 6 | } 7 | 8 | def benchmark do 9 | fun = fn num -> {num, num} end 10 | Benchee.run( 11 | %{ 12 | "Enum.into/3" => fn input -> Enum.into(input, %{}, fun) end, 13 | "Enum.map/2 |> Enum.into/2" => fn input -> input |> Enum.map(fun) |> Enum.into(%{}) end, 14 | "for |> into" => fn input -> for num <- input, into: %{}, do: {num, num} end 15 | }, 16 | time: 10, 17 | memory_time: 0.1, 18 | inputs: @inputs, 19 | print: [fast_warning: false] 20 | ) 21 | end 22 | end 23 | 24 | Into.Benchmark.benchmark() 25 | -------------------------------------------------------------------------------- /code/general/io_lists_vs_concatenation.exs: -------------------------------------------------------------------------------- 1 | # To make this a fair test and to use varying numbers of arguments as inputs, I needed to write 2 | # the actual modules in a very macro-heavy way. 
Both of these modules are essentially doing the 3 | # following: 4 | # 5 | # defmodule Output.Fast do 6 | # def output(a, b, c, d, e) do 7 | # [e | [d | [c | [a | b]]]] 8 | # end 9 | # 10 | # # and so on for the 50 and 100 argument versions of that function. 11 | # end 12 | # 13 | # defmodule Output.Slow do 14 | # def output(a, b, c, d, e) do 15 | # "#{e}#{d}#{c}#{b}#{a}" 16 | # end 17 | # 18 | # # and so on for the 50 and 100 argument versions of that function. 19 | # end 20 | 21 | defmodule Output.Fast do 22 | Enum.each([5, 50, 100], fn count -> 23 | [first, second | rest] = vars = Macro.generate_arguments(count, __MODULE__) 24 | 25 | starting = 26 | quote do 27 | [unquote(first) | unquote(second)] 28 | end 29 | 30 | cons_expr = 31 | Enum.reduce(rest, starting, fn var, acc -> 32 | quote do 33 | [unquote(var) | unquote(acc)] 34 | end 35 | end) 36 | 37 | def output(unquote_splicing(vars)) do 38 | unquote(cons_expr) 39 | end 40 | end) 41 | end 42 | 43 | defmodule Output.Slow do 44 | Enum.each([5, 50, 100], fn count -> 45 | [first | rest] = vars = Macro.generate_arguments(count, __MODULE__) 46 | 47 | starting = [ 48 | {:"::", [], [{{:., [], [Kernel, :to_string]}, [], [first]}, {:binary, [], Output.Slow}]} 49 | ] 50 | 51 | interpolation_expr = 52 | Enum.reduce(rest, starting, fn var, acc -> 53 | [ 54 | {:"::", [], [{{:., [], [Kernel, :to_string]}, [], [var]}, {:binary, [], Output.Slow}]} 55 | | acc 56 | ] 57 | end) 58 | 59 | def output(unquote_splicing(vars)) do 60 | unquote({:<<>>, [], interpolation_expr}) 61 | end 62 | end) 63 | end 64 | 65 | defmodule Output.Benchmark do 66 | def inputs do 67 | %{ 68 | "100 3-character strings" => generate_chunks(3, 100), 69 | "100 300-character strings" => generate_chunks(300, 100), 70 | "50 3-character strings" => generate_chunks(3, 50), 71 | "50 300-character strings" => generate_chunks(300, 50), 72 | "5 3-character_strings" => generate_chunks(3, 5), 73 | "5 300-character_strings" => generate_chunks(300, 5) 74 | } 75 | end 76 |
77 | def generate_chunks(chunk_size, count) do 78 | ?a..?z 79 | |> Stream.cycle() 80 | |> Stream.chunk_every(chunk_size) 81 | |> Stream.map(&to_string/1) 82 | |> Enum.take(count) 83 | end 84 | 85 | def benchmark do 86 | Benchee.run( 87 | %{ 88 | "IO List" => fn input -> apply(Output.Fast, :output, input) end, 89 | "Interpolation" => fn input -> apply(Output.Slow, :output, input) end 90 | }, 91 | inputs: inputs(), 92 | time: 10, 93 | print: [fast_warning: false] 94 | ) 95 | end 96 | end 97 | 98 | Output.Benchmark.benchmark() 99 | -------------------------------------------------------------------------------- /code/general/map_lookup_vs_pattern_matching.exs: -------------------------------------------------------------------------------- 1 | defmodule Lookup.Slow do 2 | @lookup %{ 3 | "one" => 1, 4 | "two" => 2, 5 | "three" => 3, 6 | "four" => 4, 7 | "five" => 5 8 | } 9 | 10 | def int_for(str) do 11 | @lookup[str] 12 | end 13 | end 14 | 15 | defmodule Lookup.Fast do 16 | def int_for(str) do 17 | do_int_for(str) 18 | end 19 | 20 | def do_int_for("one"), do: 1 21 | def do_int_for("two"), do: 2 22 | def do_int_for("three"), do: 3 23 | def do_int_for("four"), do: 4 24 | def do_int_for("five"), do: 5 25 | end 26 | 27 | defmodule Lookup.Benchmark do 28 | def benchmark do 29 | Benchee.run(%{ 30 | "Pattern Matching" => fn -> bench_func(Lookup.Fast) end, 31 | "Map Lookup" => fn -> bench_func(Lookup.Slow) end 32 | }, time: 10, print: [fast_warning: false]) 33 | end 34 | 35 | def bench_func(module) do 36 | module.int_for("one") 37 | module.int_for("two") 38 | module.int_for("three") 39 | module.int_for("four") 40 | module.int_for("five") 41 | module.int_for("one") 42 | module.int_for("two") 43 | module.int_for("three") 44 | module.int_for("four") 45 | module.int_for("five") 46 | module.int_for("one") 47 | module.int_for("two") 48 | module.int_for("three") 49 | module.int_for("four") 50 | module.int_for("five") 51 | end 52 | end 53 | 54 | Lookup.Benchmark.benchmark() 55 | 
-------------------------------------------------------------------------------- /code/general/map_put_vs_put_in.exs: -------------------------------------------------------------------------------- 1 | defmodule MapPut.Fast do 2 | def map_put(enumerator, map) do 3 | enumerator 4 | |> Enum.reduce(map, fn value, acc -> 5 | Map.put(acc, value, value) 6 | end) 7 | end 8 | end 9 | 10 | defmodule MapPut.Slower do 11 | def map_put(enumerator, map) do 12 | enumerator 13 | |> Enum.reduce(map, fn value, acc -> 14 | put_in(acc[value], value) 15 | end) 16 | end 17 | end 18 | 19 | defmodule MapPut.Slowest do 20 | def map_put(enumerator, map) do 21 | enumerator 22 | |> Enum.reduce(map, fn value, acc -> 23 | put_in(acc, [value], value) 24 | end) 25 | end 26 | end 27 | 28 | defmodule MapPut.Benchmark do 29 | @inputs %{ 30 | "Large (30,000 items)" => 1..30_000, 31 | "Medium (3,000 items)" => 1..3_000, 32 | "Small (30 items)" => 1..30 33 | } 34 | 35 | def benchmark do 36 | Benchee.run( 37 | %{ 38 | "Map.put/3" => fn enumerator -> bench_func(enumerator, MapPut.Fast) end, 39 | "put_in/2" => fn enumerator -> bench_func(enumerator, MapPut.Slower) end, 40 | "put_in/3" => fn enumerator -> bench_func(enumerator, MapPut.Slowest) end 41 | }, 42 | time: 10, 43 | inputs: @inputs, 44 | print: [fast_warning: false] 45 | ) 46 | end 47 | 48 | @map %{ 49 | a: 1, 50 | b: 2, 51 | c: 3 52 | } 53 | 54 | def bench_func(enumerator, module) do 55 | module.map_put(enumerator, @map) 56 | end 57 | end 58 | 59 | MapPut.Benchmark.benchmark() 60 | -------------------------------------------------------------------------------- /code/general/send_after_vs_apply_after.exs: -------------------------------------------------------------------------------- 1 | defmodule Apply.After do 2 | def call_me(arg) do 3 | send(self(), {:sent, arg}) 4 | end 5 | end 6 | 7 | defmodule SendVsApply do 8 | @arg {:from, :me} 9 | @time 1 10 | 11 | def benchmark do 12 | Benchee.run( 13 | %{ 14 | "send_after/2" => fn -> 15 |
Process.send_after(self(), @arg, @time) 16 | end, 17 | "apply_after/4" => fn -> 18 | :timer.apply_after(@time, Apply.After, :call_me, [@arg]) 19 | end 20 | }, 21 | time: 10, 22 | memory_time: 1, 23 | print: [fast_warning: false] 24 | ) 25 | end 26 | end 27 | 28 | SendVsApply.benchmark() 29 | -------------------------------------------------------------------------------- /code/general/sort_vs_sort_by.exs: -------------------------------------------------------------------------------- 1 | defmodule Card, do: defstruct([:rank, :suit]) 2 | 3 | defmodule Sort.Fast do 4 | def sort(enumerable), do: Enum.sort(enumerable) 5 | end 6 | 7 | defmodule Sort.Medium do 8 | def sort(enumerable), do: Enum.sort(enumerable, &(&1.rank <= &2.rank)) 9 | end 10 | 11 | defmodule Sort.Slow do 12 | def sort(enumerable), do: Enum.sort_by(enumerable, & &1.rank) 13 | end 14 | 15 | defmodule Sort.Benchmark do 16 | def benchmark do 17 | Benchee.run( 18 | %{ 19 | "sort/1" => fn -> bench(Sort.Fast) end, 20 | "sort/2" => fn -> bench(Sort.Medium) end, 21 | "sort_by/2" => fn -> bench(Sort.Slow) end 22 | }, 23 | time: 10, 24 | print: [fast_warning: false] 25 | ) 26 | end 27 | 28 | defp bench(module) do 29 | cards = 30 | Enum.map(1..100, fn _ -> 31 | %Card{rank: Enum.random(0..100), suit: Enum.random(~w[red green blue]a)} 32 | end) 33 | 34 | module.sort(cards) 35 | end 36 | end 37 | 38 | Sort.Benchmark.benchmark() 39 | -------------------------------------------------------------------------------- /code/general/spawn_vs_spawn_link.exs: -------------------------------------------------------------------------------- 1 | defmodule Spawn.Benchmark do 2 | def benchmark do 3 | Benchee.run( 4 | %{ 5 | "spawn/1" => fn -> bench(&spawn/1) end, 6 | "spawn_link/1" => fn -> bench(&spawn_link/1) end 7 | }, 8 | time: 10, 9 | memory_time: 2, 10 | print: [fast_warning: false] 11 | ) 12 | end 13 | 14 | defp bench(fun) do 15 | me = self() 16 | fun.(fn -> send(me, nil) end) 17 | 18 | receive do 19 | nil -> nil 20 | end 
21 | end 22 | end 23 | 24 | Spawn.Benchmark.benchmark() 25 | -------------------------------------------------------------------------------- /code/general/string_slice.exs: -------------------------------------------------------------------------------- 1 | defmodule StringSlice.Benchmark do 2 | @inputs %{ 3 | "Large string (10 Thousand Numbers)" => Enum.join(1..10_000, ","), 4 | "Small string (10 Numbers)" => Enum.join(1..10, ",") 5 | } 6 | 7 | def benchmark do 8 | Benchee.run( 9 | %{ 10 | "String.slice/3" => fn string -> string |> String.slice(3, 5) end, 11 | "binary_part/3" => fn string -> string |> binary_part(3, 5) end, 12 | ":binary.part/3" => fn string -> string |> :binary.part(3, 5) end 13 | }, 14 | warmup: 0.1, 15 | time: 2, 16 | memory_time: 0.01, 17 | inputs: @inputs, 18 | print: [fast_warning: false] 19 | ) 20 | end 21 | end 22 | 23 | StringSlice.Benchmark.benchmark() 24 | -------------------------------------------------------------------------------- /code/general/string_split_large_strings.exs: -------------------------------------------------------------------------------- 1 | defmodule Split.Fast do 2 | def split(str) do 3 | str |> String.splitter(",") |> Enum.to_list 4 | end 5 | end 6 | 7 | defmodule Split.Slow do 8 | def split(str) do 9 | String.split(str, ",") 10 | end 11 | end 12 | 13 | defmodule Split.Regex do 14 | def split(str) do 15 | String.split(str, ~r/,/) 16 | end 17 | end 18 | 19 | defmodule Split.Erlang do 20 | def split(str) do 21 | :string.split(str, ",", :all) 22 | end 23 | end 24 | 25 | 26 | defmodule Split.Benchmark do 27 | @inputs %{ 28 | "Large string (1 Million Numbers)" => Enum.join((1..1_000_000), ","), 29 | "Medium string (10 Thousand Numbers)" => Enum.join((1..10_000), ","), 30 | "Small string (1 Hundred Numbers)" => Enum.join((1..100), ",") 31 | } 32 | 33 | def benchmark do 34 | Benchee.run(%{ 35 | "splitter |> to_list" => fn(str) -> bench_func(Split.Fast, str) end, 36 | "split" => fn(str) -> bench_func(Split.Slow, str) 
end, 37 | "split regex" => fn(str) -> bench_func(Split.Regex, str) end, 38 | "split erlang" => fn(str) -> bench_func(Split.Erlang, str) end 39 | }, time: 10, inputs: @inputs, print: [fast_warning: false]) 40 | end 41 | 42 | def bench_func(module, str) do 43 | module.split(str) 44 | end 45 | end 46 | 47 | Split.Benchmark.benchmark() 48 | -------------------------------------------------------------------------------- /config/config.exs: -------------------------------------------------------------------------------- 1 | # This file is responsible for configuring your application 2 | # and its dependencies with the aid of the Mix.Config module. 3 | use Mix.Config 4 | 5 | # This configuration is loaded before any dependency and is restricted 6 | # to this project. If another project depends on this project, this 7 | # file won't be loaded nor affect the parent project. For this reason, 8 | # if you want to provide default values for your application for 9 | # 3rd-party users, it should be done in your "mix.exs" file. 10 | 11 | # You can configure for your application as: 12 | # 13 | # config :benchmarks, key: :value 14 | # 15 | # And access this configuration in your application as: 16 | # 17 | # Application.get_env(:benchmarks, :key) 18 | # 19 | # Or configure a 3rd-party app: 20 | # 21 | # config :logger, level: :info 22 | # 23 | 24 | # It is also possible to import configuration files, relative to this 25 | # directory. For example, you can emulate configuration per environment 26 | # by uncommenting the line below and defining dev.exs, test.exs and such. 27 | # Configuration from the imported file will override the ones defined 28 | # here (which is why it is important to import them last). 
29 | # 30 | # import_config "#{Mix.env}.exs" 31 | -------------------------------------------------------------------------------- /mix.exs: -------------------------------------------------------------------------------- 1 | defmodule FastElixir.Mixfile do 2 | use Mix.Project 3 | 4 | def project do 5 | [ 6 | app: :fast_elixir, 7 | version: "0.1.0", 8 | elixir: "~> 1.6", 9 | build_embedded: Mix.env() == :prod, 10 | start_permanent: Mix.env() == :prod, 11 | deps: deps() 12 | ] 13 | end 14 | 15 | # Type "mix help deps" for more information 16 | defp deps do 17 | [{:benchee, "~> 1.0"}] 18 | end 19 | end 20 | -------------------------------------------------------------------------------- /mix.lock: -------------------------------------------------------------------------------- 1 | %{ 2 | "benchee": {:hex, :benchee, "1.0.1", "66b211f9bfd84bd97e6d1beaddf8fc2312aaabe192f776e8931cb0c16f53a521", [:mix], [{:deep_merge, "~> 1.0", [hex: :deep_merge, repo: "hexpm", optional: false]}], "hexpm", "3ad58ae787e9c7c94dd7ceda3b587ec2c64604563e049b2a0e8baafae832addb"}, 3 | "deep_merge": {:hex, :deep_merge, "1.0.0", "b4aa1a0d1acac393bdf38b2291af38cb1d4a52806cf7a4906f718e1feb5ee961", [:mix], [], "hexpm", "ce708e5f094b9cd4e8f2be4f00d2f4250c4095be93f8cd6d018c753894885430"}, 4 | "poison": {:hex, :poison, "3.1.0", "d9eb636610e096f86f25d9a46f35a9facac35609a7591b3be3326e99a0484665", [:mix], []}, 5 | } 6 | --------------------------------------------------------------------------------