├── .gitignore ├── LICENSE ├── README.md ├── build.sbt ├── project └── build.properties └── src ├── main └── scala │ └── MonadMacro.scala └── test └── scala ├── ADTs.scala ├── ChurchEncodings.scala ├── Filter.scala ├── GADTs.scala ├── InitialAlgebras.scala ├── LensStateIsYourFather.scala ├── MonadAlgebras.scala ├── MonadMacro.scala ├── coalgebras ├── README.md ├── package.scala └── scalaz │ ├── automata.scala │ ├── automatasample.scala │ ├── cofreeactor.scala │ ├── cofreecochurch.scala │ ├── cofreecomonad.scala │ ├── cofreeweb.scala │ ├── finaladhoc.scala │ ├── programmingapplicatively.scala │ ├── programmingimperatively.scala │ └── programmingwithexceptions.scala ├── hello-monads ├── README.md ├── partI.scala ├── partII.scala └── partIII.scala └── objectalgebras-vs-free-vs-eff ├── Eff.scala ├── FreeMonad.scala ├── FreeMonadCoproduct.scala ├── ObjectAlgebras.scala ├── ObjectAlgebrasMultipleEffects.scala └── README.md /.gitignore: -------------------------------------------------------------------------------- 1 | *.class 2 | *.log 3 | 4 | # sbt specific 5 | .cache 6 | .history 7 | .lib/ 8 | dist/* 9 | target/ 10 | lib_managed/ 11 | src_managed/ 12 | project/boot/ 13 | project/plugins/project/ 14 | 15 | # Scala-IDE specific 16 | .scala_dependencies 17 | .worksheet 18 | 19 | # ENSIME specific 20 | .ensime_cache/ 21 | .ensime 22 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright 2016, Habla Computing SL (http://hablapps.com) 2 | 3 | Licensed under the Apache License, Version 2.0 (the "License"); 4 | you may not use this file except in compliance with the License. 5 | You may obtain a copy of the License at 6 | 7 | http://www.apache.org/licenses/LICENSE-2.0 8 | 9 | Unless required by applicable law or agreed to in writing, software 10 | distributed under the License is distributed on an "AS IS" BASIS, 11 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | See the License for the specific language governing permissions and 13 | limitations under the License. -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | This repository contains little (and not so little) snippets of code that illustrate techniques 2 | from functional programming, mainly. 3 | 4 | Current gists 5 | ============= 6 | 7 | Some of these gists make reference or build upon results of previous ones. You'll also find 8 | some explanations throughout the code, although, surely, not enough to make them self-contained. This is the list of current gists: 9 | 10 | * [ADTs](src/test/scala/ADTs.scala). How do we represent embedded DSLs using algebraic data types, and how 11 | do we implement both compositional and non-compositional interpreters. 12 | * [GADTs](src/test/scala/GADTs.scala). How do we represent embedded DSLs using generalized algebraic data types, and how do we implement both compositional and non-compositional interpreters. 13 | * [Church encodings](src/test/scala/ChurchEncodings.scala). What are Church encodings and how can we pattern match against them. 14 | * [Church encodings HK](https://github.com/hablapps/gist/blob/hablacats/src/test/scala/ChurchEncodingsHK.scala). What are Higher-Kinded Church encodings and how can we pattern match against them. 15 | * [Church vs. ADTs](src/test/scala/InitialAlgebras.scala). 
What is the relationship between these encodings? Algebras to the rescue! 16 | * [Church vs. GADTs](https://github.com/hablapps/gist/blob/hablacats/src/test/scala/IsomorphismsHK.scala). Some isomorphisms between GADTs and Higher-Kinded Church encodings. 17 | * [Natural Numbers](https://github.com/hablapps/gist/blob/hablacats/src/test/scala/NaturalEncodings.scala). Shows how we can represent Natural Numbers with different encodings: Church, Scott and Parigot. 18 | * [Bypassing Free](src/test/scala/objectalgebras-vs-free-vs-eff). Check out how we can exploit object algebras to obtain the very same benefits of free monads, in many circumstances. 19 | * [Coalgebras](src/test/scala/coalgebras). What are coalgebras? How are they related to monads and algebras in general? These gists attempt to shed some light to these questions. 20 | * [Lens, State Is Your Father](src/test/scala/LensStateIsYourFather.scala). Encodings associated to a [blog post](https://blog.hablapps.com/2016/11/10/lens-state-is-your-father/). There, we provide `IOCoalgebra` representations for several optics, along with some interesting insights. 21 | * [From "Hello, world!" to "Hello, monad!"](src/test/scala/hello-monads/). Code associated to the [blog post](https://blog.hablapps.com/2016/01/22/from-hello-world-to-hello-monad-part-i/) series about purification of effectful programs. 22 | * [Macro `monad`](src/main/scala/MonadMacro.scala). A macro that allows you to write monadic code using neither `flatMap`s nor for-comprehensions, but conventional syntax, i.e. semicolons. Written with a pure didactic purpose: showing that monadic code is, in essence, simple imperative code. 23 | 24 | Executing gists 25 | =============== 26 | 27 | Each gist is implemented as a Scalatest file. In order to check its assertions, just enter `sbt` and launch the test. 
For instance, in order to launch the `ADTs` gist, enter `sbt` and type the following: 28 | 29 | ```scala 30 | > test-only org.hablapps.gist.ADTs 31 | ``` 32 | -------------------------------------------------------------------------------- /build.sbt: -------------------------------------------------------------------------------- 1 | name := "gist" 2 | 3 | scalaVersion := "2.11.8" 4 | 5 | scalaOrganization := "org.typelevel" 6 | 7 | scalaBinaryVersion := "2.11" 8 | 9 | organization := "org.hablapps" 10 | 11 | version := "0.1-SNAPSHOT" 12 | 13 | addCompilerPlugin("org.spire-math" %% "kind-projector" % "0.9.3") 14 | 15 | addCompilerPlugin("org.scalamacros" %% "paradise" % "2.1.0" cross CrossVersion.full) 16 | 17 | resolvers ++= Seq( 18 | "Speech repo - releases" at "http://repo.hablapps.com/releases") 19 | 20 | libraryDependencies ++= Seq( 21 | "org.scalatest" %% "scalatest" % "3.0.0", 22 | "org.typelevel" %% "cats" % "0.9.0", 23 | "com.typesafe.akka" %% "akka-http-experimental" % "2.4.11", 24 | "com.typesafe.akka" %% "akka-http-spray-json-experimental" % "2.4.11", 25 | "com.lihaoyi" %% "sourcecode" % "0.1.2", 26 | "com.github.julien-truffaut" %% "monocle-core" % "1.3.2", 27 | "com.github.julien-truffaut" %% "monocle-generic" % "1.3.2", 28 | "com.github.julien-truffaut" %% "monocle-macro" % "1.3.2", 29 | "com.github.julien-truffaut" %% "monocle-state" % "1.3.2", 30 | "com.github.julien-truffaut" %% "monocle-refined" % "1.3.2", 31 | "com.github.julien-truffaut" %% "monocle-unsafe" % "1.3.2", 32 | "com.github.julien-truffaut" %% "monocle-law" % "1.3.2" % "test", 33 | "org.atnos" %% "eff-cats" % "2.0.0-RC26" 34 | ) 35 | 36 | dependencyOverrides += "org.scalaz" %% "scalaz-core" % "7.2.7-HABLAPPS" 37 | 38 | scalacOptions ++= Seq( 39 | "-unchecked", 40 | "-deprecation", 41 | "-Ypartial-unification", 42 | // "-Xprint:typer", 43 | // "-Xlog-implicit-conversions", 44 | "-feature", 45 | "-language:implicitConversions", 46 | "-language:postfixOps", 47 | "-language:higherKinds") 48 | 49 | initialCommands in console := """ 50 | |import org.hablapps.gist._ 51 | """.stripMargin 52 | -------------------------------------------------------------------------------- /project/build.properties: -------------------------------------------------------------------------------- 1 | sbt.version=0.13.13 2 | -------------------------------------------------------------------------------- /src/main/scala/MonadMacro.scala: -------------------------------------------------------------------------------- 1 | package org.hablapps.gist 2 | 3 | import scala.reflect.macros._ 4 | import scala.language.experimental.macros 5 | 6 | /* 7 | This gist implements a macro to write monadic programs without using 8 | neither explicit `flatMap`s nor for-comprehensions, but conventional 9 | `val` definitions and semicolons. We wrote this macro just with a 10 | didactic purpose: showing that monadic code is simple imperative code. 11 | This is so true that the conventional imperative syntax of Scala can 12 | be used to write monadic code. You can find some examples in the 13 | following file. 14 | 15 | https://github.com/hablapps/gist/blob/master/src/test/scala/MonadMacro.scala 16 | 17 | Last, note that this macro is far from being complete. We just included 18 | coverage for typical use cases that allows us to illustrate our claim. 19 | */ 20 | object monad{ 21 | import cats.Monad 22 | 23 | /* 24 | Dirty trick to unlift programs. Not intended to be executed ever, 25 | but inside `monad` macro blocks. 
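    For instance (a sketch mirroring the tests in the file linked above, with `p` standing
    for some program of type `P[String]`), a block such as

      monad{
        val s: String = p.run
        s.length
      }

    is rewritten by the macro into `M.flatMap(p){ s: String => M.pure(s.length) }`:
    every `x.run` call site is pattern-matched away by `liftValue` below, so the `???`
    body of `run` is never actually evaluated.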
26 | */ 27 | implicit class RunOp[P[_],A](program: P[A]){ 28 | def run: A = ??? 29 | } 30 | 31 | /* 32 | This macro allows us to transform a block of conventional imperative code 33 | into an imperative program over monad `P` 34 | */ 35 | def apply[P[_]: Monad,T](t: T): P[T] = macro impl[P,T] 36 | 37 | def impl[P[_], T]( 38 | c: whitebox.Context)( 39 | t: c.Expr[T])( 40 | M: c.Expr[Monad[P]])(implicit 41 | e1: c.WeakTypeTag[P[_]], 42 | e2: c.WeakTypeTag[T]): c.Expr[P[T]] = { 43 | import c.universe._ 44 | 45 | def liftValue(b: Tree): Tree = { 46 | b match { 47 | case Select(Apply(_,List(v)),TermName("run")) => v 48 | case _ => q"$M.pure($b)" 49 | } 50 | } 51 | 52 | def liftBlock(b: Block): Tree = 53 | b match { 54 | case Block(List(),i) => 55 | liftValue(i) 56 | case Block(head::tail,i) => 57 | val (name, tpe, value) = head match { 58 | case q"val $name: $tpe = $value" => (name,tpe,value) 59 | case q"$value" => (termNames.WILDCARD, tq"Unit", value) 60 | } 61 | val liftedValue = liftValue(value) 62 | val liftedTail = liftBlock(Block(tail,i)) 63 | q"$M.flatMap($liftedValue){ $name: $tpe => $liftedTail }" 64 | } 65 | 66 | val r: Tree = c.untypecheck(t.tree) match { 67 | case b: Block => liftBlock(b) 68 | case e => liftValue(e) 69 | } 70 | 71 | c.Expr[P[T]](r) 72 | } 73 | } -------------------------------------------------------------------------------- /src/test/scala/ADTs.scala: -------------------------------------------------------------------------------- 1 | package org.hablapps.gist 2 | 3 | import org.scalatest._ 4 | 5 | /* 6 | This gist revolves around deep embeddings using a simple language for arithmetic 7 | expressions. Besides dealing with ADTs, we will also mention interpreters, both 8 | compositional and non-compositional ones, and show how can we implement compositional 9 | interpreters using `folds` (aka catamorphisms). 10 | */ 11 | class ADTs extends FlatSpec with Matchers{ 12 | 13 | /* 14 | Let's start with a simple ADT (Algebraic Data Type) representation of expressions. 15 | As it's customary, ADTs in Scala are represented as `sealed abstract class`es. 16 | */ 17 | sealed abstract class Expr 18 | case class Lit(i: Int) extends Expr 19 | case class Neg(e: Expr) extends Expr 20 | case class Add(e1: Expr, e2: Expr) extends Expr 21 | 22 | /* 23 | This kind of representation allows us to write expressions as follows 24 | */ 25 | val e1: Expr = Add(Lit(1), Neg(Lit(2))) // 1 + (-2) 26 | val e2: Expr = Neg(Neg(Lit(2))) // (-(-2)) 27 | val e3: Expr = Neg(Neg(Add(Lit(1), Neg(Add(Neg(Lit(1)),Lit(2)))))) 28 | val e4: Expr = Add(Lit(1), Add(Lit(1),Neg(Lit(2)))) 29 | 30 | /* 31 | In order to evaluate these expressions, converting them to strings, transforming 32 | them into normal forms, etc., we implement independent functions. These functions 33 | can be understood as different "interpreters" of the language of arithmetic expressions. 34 | We can consider two kinds of interpreters: compositional and non-compositional. 35 | In order to understand the different between both types of interpreters, note that 36 | arithmetic expressions have a recursive structure. If the result of some interpreter 37 | for a given expression can always be obtained from the result of the interpreter for 38 | its subexpressions, then the interpreter is said to be compositional. Otherwise, it's 39 | non-compositional. 
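  For instance, `eval` below is compositional, since

      eval(Add(e1,e2)) == eval(e1) + eval(e2)
      eval(Neg(e))     == -eval(e)

  whereas `reassociate` is not: to right-associate `Add(Add(e1,e2),e3)` it has to inspect
  the structure of the expression itself, rather than just combine the results already
  computed for the subexpressions `Add(e1,e2)` and `e3`.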
40 | */ 41 | 42 | trait CompositionalInterpreters{ 43 | def eval(e: Expr): Int 44 | def write(e: Expr): String 45 | } 46 | 47 | trait NonCompositionalInterpreters{ 48 | def pushNeg(e: Expr): Expr 49 | def reassociate(e: Expr): Expr 50 | } 51 | 52 | /* 53 | And here there are some simple tests for the implemented functions 54 | */ 55 | case class TestCompositional(interpreters: CompositionalInterpreters){ 56 | import interpreters._ 57 | 58 | write(e1) should be("(1+(-2))") 59 | write(e2) should be("(-(-2))") 60 | write(e3) should be("(-(-(1+(-((-1)+2)))))") 61 | write(e4) should be("(1+(1+(-2)))") 62 | 63 | eval(e1) should be(-1) 64 | eval(e3) should be(eval(e4)) 65 | } 66 | 67 | case class TestNonCompositional(interpreters: NonCompositionalInterpreters){ 68 | import interpreters._ 69 | 70 | pushNeg(e1) should be(e1) 71 | pushNeg(e2) should be(Lit(2)) 72 | pushNeg(e3) should be(e4) 73 | 74 | reassociate(Add(Add(Add(Lit(1),Lit(2)),Lit(3)),Lit(4))) should 75 | be(Add(Lit(1), Add(Lit(2), Add(Lit(3), Lit(4))))) 76 | } 77 | 78 | /* 79 | We can implement our interpreters, both compositional and non-compositional, using pattern 80 | matching. 81 | */ 82 | object PatternMatchingInterpreters 83 | extends NonCompositionalInterpreters 84 | with CompositionalInterpreters{ 85 | 86 | def eval(e: Expr): Int = e match { 87 | case Lit(i) => i 88 | case Neg(e) => -eval(e) 89 | case Add(e1,e2) => eval(e1) + eval(e2) 90 | } 91 | 92 | def write(e: Expr): String = e match{ 93 | case Lit(i) => s"$i" 94 | case Neg(e) => s"(-${write(e)})" 95 | case Add(e1,e2) => s"(${write(e1)}+${write(e2)})" 96 | } 97 | 98 | def pushNeg(e: Expr): Expr = e match { 99 | case Lit(i) => e 100 | case Neg(Lit(_)) => e 101 | case Neg(Neg(e1)) => pushNeg(e1) 102 | case Neg(Add(e1,e2)) => Add(pushNeg(Neg(e1)), pushNeg(Neg(e2))) 103 | case Add(e1,e2) => Add(pushNeg(e1), pushNeg(e2)) 104 | } 105 | 106 | def reassociate(e: Expr): Expr = e match { 107 | case Add(Add(e1,e2), e3) => reassociate(Add(e1, Add(e2,e3))) 108 | case Add(e1, e2) => Add(e1, reassociate(e2)) 109 | case Neg(e1) => Neg(reassociate(e1)) 110 | case _ => e 111 | } 112 | } 113 | 114 | "Pattern matching" should "work" in { 115 | TestCompositional(PatternMatchingInterpreters) 116 | TestNonCompositional(PatternMatchingInterpreters) 117 | } 118 | 119 | /* 120 | Or else, we can try implementing those interpreters using predefined recursion 121 | schemes. For instance, we can try using `fold`s. 122 | */ 123 | 124 | def fold[A](lit: Int => A, neg: A => A, add: (A,A) => A): Expr => A = { 125 | case Lit(i) => lit(i) 126 | case Neg(e) => neg(fold(lit,neg,add)(e)) 127 | case Add(e1,e2) => add(fold(lit,neg,add)(e1),fold(lit,neg,add)(e2)) 128 | } 129 | 130 | /* 131 | But with this kind of recursion scheme we can only implement (in a direct way) 132 | compositional interpreters. 
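  For instance, `pushNeg` distinguishes `Neg(Lit(_))`, `Neg(Neg(_))` and `Neg(Add(_,_))`,
  i.e. it needs to look at the shape of the negated subexpression, while a fold only hands
  us the already-interpreted results of the subexpressions. (It can still be encoded
  indirectly, e.g. by folding into the function type `Boolean => Expr`, where the boolean
  flag records whether the current subexpression occurs under a negation, but that is no
  longer a direct transcription of the pattern-matching definition.)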
133 | */ 134 | object FoldInterpreters extends CompositionalInterpreters{ 135 | 136 | def eval(e: Expr): Int = 137 | fold[Int](i => i, -_, _ + _)(e) 138 | 139 | def write(e: Expr): String = 140 | fold[String]( 141 | i => s"$i", 142 | e => s"(-$e)", 143 | (e1,e2) => s"($e1+$e2)" 144 | )(e) 145 | } 146 | 147 | "Catamorphisms" should "work" in TestCompositional(FoldInterpreters) 148 | 149 | } 150 | 151 | object ADTs extends ADTs -------------------------------------------------------------------------------- /src/test/scala/ChurchEncodings.scala: -------------------------------------------------------------------------------- 1 | package org.hablapps.gist 2 | import org.scalatest._ 3 | 4 | /* 5 | The purpose of this gist is explaining what are church encodings of data types, 6 | and how can we implement functions that use pattern matching over them. We will 7 | use the common domain of arithmetic expressions to illustrate our findings. 8 | 9 | Throughout the code some references will be made to the deep encoding of arithmetic 10 | expressions using ADTs. You can find the relevant code in this gist: 11 | 12 | https://github.com/hablapps/gist/blob/master/src/test/scala/ADTs.scala 13 | 14 | For the most part, this gist can be considered as an Scala translation of this post: 15 | 16 | http://okmij.org/ftp/tagless-final/course/Boehm-Berarducci.html 17 | 18 | */ 19 | class ChurchEncodings extends FlatSpec with Matchers{ 20 | 21 | /* 22 | In embedded DSLs we distinguished between "data" and "functions". We'll 23 | see now that this distinction is somewhat artificial, and data too can be 24 | also regarded as purely functional, i.e. data can be represented by functions 25 | alone. 26 | */ 27 | object Church{ 28 | 29 | /* 30 | The key to thinking about data as functions is considering that the essence 31 | of data types are their constructors, and these constructors are, of course, 32 | functions. For instance, the constructors of the `Expr` ADT were `Lit`, `Neg` 33 | and `Add`. These constructors are automatically generated by Scala through 34 | the companion object of the `case class`. The types of these constructors 35 | are `Lit: Int => Expr`, `Neg: Expr => Expr` and `Add: (Expr, Expr) => Expr`. 36 | 37 | A Church encoding represent data types through its constructors. But, we can't 38 | make reference to the particular `Expr` ADT, of course, so we abstract away 39 | from any particular representation. The resulting Church encoding (actually, 40 | the Boehm-Berarducci encoding, since the Church encoding refers to the untyped 41 | lambda-calculus) is: 42 | */ 43 | 44 | trait Expr{ 45 | def apply[E](lit: Int => E, neg: E => E, add: (E,E) => E): E 46 | } 47 | 48 | /* 49 | Note the similarity with the implementation of the `fold` recursion scheme for 50 | the `Expr` ADTs. Indeed, Church encodings implement data as folds. However, as you can 51 | see, the new `Expr` type is still represented by a class (particularly, a `trait`). 52 | After all, this is Scala and any data type has to be represented as a class of objects. 53 | The only class member of this class is a polymorphic function that allows us to create 54 | objects of an arbitrary type `E`, using generic versions of the constructors that we introduced 55 | in our `Expr` ADTs. 56 | 57 | Let's see some examples of arithmetic expressions represented as Church encodings. 58 | These values represent the expressions "(1+(-2))" and "(-(-2))", respectively. 59 | And they do it in a rather generic fashion, i.e. 
in a completely independent way of the 60 | many possible types `E` that we may alternatively choose to represent our expressions. 61 | */ 62 | 63 | val e1: Expr = new Expr{ 64 | def apply[E](lit: Int => E, neg: E => E, add: (E,E) => E): E = 65 | add(lit(1), neg(lit(2))) 66 | } 67 | 68 | val e2: Expr = new Expr{ 69 | def apply[E](lit: Int => E, neg: E => E, add: (E,E) => E): E = 70 | neg(neg(lit(2))) 71 | } 72 | 73 | /* 74 | In a sense, values `e1` and `e2` are canonical ways of representing arithmetic 75 | expressions, and subsume any other possible representation. Thus, we can 76 | use these generic values as "recipes" that will allow us to create expressions 77 | written using concrete representations. We do this simply by passing their corresponding 78 | constructors to the polymorphic function (the actual "recipe"). 79 | 80 | For instance, we can create values of the ADT representation as follows: 81 | */ 82 | 83 | val e1_ADT: ADTs.Expr = e1(ADTs.Lit, ADTs.Neg, ADTs.Add) 84 | val e2_ADT: ADTs.Expr = e2(ADTs.Lit, ADTs.Neg, ADTs.Add) 85 | 86 | e1_ADT shouldBe ADTs.Add(ADTs.Lit(1), ADTs.Neg(ADTs.Lit(2))) 87 | e2_ADT shouldBe ADTs.Neg(ADTs.Neg(ADTs.Lit(2))) 88 | 89 | /* 90 | An alternative way of creating Church expressions is by using smart constructors 91 | that instantiate the trait `Expr` for us. 92 | */ 93 | 94 | object Expr{ 95 | def lit(i: Int): Expr = new Expr{ 96 | def apply[E](lit: Int => E, neg: E => E, add: (E,E) => E): E = 97 | lit(i) 98 | } 99 | 100 | def neg(e: Expr): Expr = new Expr{ 101 | def apply[E](lit: Int => E, neg: E => E, add: (E,E) => E): E = 102 | neg(e(lit,neg,add)) 103 | } 104 | 105 | def add(e1: Expr, e2: Expr): Expr = new Expr{ 106 | def apply[E](lit: Int => E, neg: E => E, add: (E,E) => E): E = 107 | add(e1(lit,neg,add), e2(lit,neg,add)) 108 | } 109 | } 110 | 111 | /* 112 | Using these constructors we can write arithmetic expressions in a very concise and 113 | elegant manner (just as we wrote them with the ADT's constructors). On the other hand, 114 | they create many intermediate objects which may not be necessary at all. 115 | */ 116 | import Expr.{lit, neg, add} 117 | val e1_v2: Expr = add(lit(1), neg(lit(2))) 118 | 119 | e1_v2(ADTs.Lit, ADTs.Neg, ADTs.Add) shouldBe ADTs.Add(ADTs.Lit(1), ADTs.Neg(ADTs.Lit(2))) 120 | 121 | /* 122 | What should it happen if we apply the smart constructors of the Church encoding itself 123 | to a Church value. Well, we should obtain a Church value, and that value shouldBe 124 | equivalent to the original one. In order to test the equivalence of two Church values 125 | we test the equality of the resulting value when applied to a concrete representation. 126 | */ 127 | 128 | val e1_v3: Expr = e1(lit, neg, add) 129 | e1_v3(ADTs.Lit, ADTs.Neg, ADTs.Add) shouldBe e1(ADTs.Lit, ADTs.Neg, ADTs.Add) 130 | } 131 | 132 | // Let's actually test the previous checks 133 | "Church encondings" should "work" in Church 134 | 135 | /* 136 | Ok, that's for values. But, how can we represent the interpreters `eval`, `write`, etc., 137 | that we implemented for the ADT representation? We start by considering compositional 138 | interpreters, which is the easy case. Indeed, since Church expressions are simply folds 139 | the new implementations recall almost exactly the implementations that we made for the 140 | `Expr` ADTs. 
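  For instance, `eval` below is nothing but the Church expression instantiated at `E = Int`
  with the arguments `i => i`, `e => -e` and `_ + _`, literally the same functions we
  passed to `fold` in the ADT gist.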
141 | */ 142 | object CompositionalInterpreters{ 143 | import Church._, Expr._ 144 | 145 | // Evaluation 146 | 147 | def eval(e: Expr): Int = e[Int]( 148 | i => i, 149 | e1 => -e1, 150 | (e1,e2) => e1 + e2 151 | ) 152 | 153 | eval(add(lit(1),lit(2))) shouldBe 3 154 | 155 | // Printing 156 | 157 | def write(e: Expr): String = e[String]( 158 | i => s"$i", 159 | e => s"(-$e)", 160 | (e1,e2) => s"($e1+$e2)" 161 | ) 162 | 163 | write(add(lit(1),lit(2))) shouldBe "(1+2)" 164 | } 165 | 166 | "Church functions" should "work" in CompositionalInterpreters 167 | 168 | /* 169 | What about non-compositional interpreters? Can we pattern match Church expressions in the 170 | same way that we did for ADTs? It turns out that we can! First of all, note that in order 171 | to apply pattern matching we should be able to represent two things: first, the kind of 172 | expression we are dealing with, i.e. have we received a simple literal, a negated expression 173 | or a sum of expressions? These are the different "cases"; second, we have to represent 174 | what we want to do in each "case". 175 | */ 176 | object DeconstructingChurch{ 177 | import Church._, Expr._ 178 | 179 | /* 180 | The following type actually encodes all the information we need to pattern 181 | match. 182 | */ 183 | trait Match{ 184 | def apply[W](dlit: Int => W, dneg: Expr => W, dadd: (Expr, Expr) => W): W 185 | } 186 | 187 | /* 188 | First, note that in order to instantiate this trait we have to implement its 189 | polymorphic function. And the only way to implement this function, i.e. obtaining 190 | a value of type `W`, is by using *one* of the arguments `dlit`, `dneg` or `dadd`. 191 | But in using one of these arguments we will have to provide either an `Int`, and 192 | expression `Expr`, or a pair of expressions. Hence, an instance of this type `Match` 193 | somehow encodes the information we need for pattern matching concerning the kind of 194 | expression we are dealing with. For instance: 195 | */ 196 | 197 | // A match of the expression `lit(1)` 198 | val litM1: Match = new Match{ 199 | def apply[W](dlit: Int => W, dneg: Expr => W, dadd: (Expr, Expr) => W): W = 200 | dlit(1) // Note how the interger represented by this match is simply 201 | // encoded as an argument of the function `dlit` 202 | } 203 | 204 | // A match of the expression `neg(lit(1))` 205 | val negM1: Match = new Match{ 206 | def apply[W](dlit: Int => W, dneg: Expr => W, dadd: (Expr, Expr) => W): W = 207 | dneg(lit(1)) 208 | } 209 | 210 | // A match of the expression `add(lit(1), lit(2))` 211 | val addM1: Match = new Match{ 212 | def apply[W](dlit: Int => W, dneg: Expr => W, dadd: (Expr, Expr) => W): W = 213 | dadd(lit(1),lit(2)) 214 | } 215 | 216 | /* 217 | Second, note that the "things" that we want to do for each different case of 218 | the pattern match are represented by the functions `dlit`, `dneg` and `dadd` 219 | themselves (you can look at these functions as the possible "continuations"). 220 | For instance, let's say that we want simply to return 1 if the match represents 221 | a literal, 2 if it represents a negated expression, and 3 if it represents a sum. 
222 | */ 223 | 224 | litM1(_ => 1, _ => 2, (_,_) => 3) shouldBe 1 225 | negM1(_ => 1, _ => 2, (_,_) => 3) shouldBe 2 226 | addM1(_ => 1, _ => 2, (_,_) => 3) shouldBe 3 227 | 228 | /* 229 | Another interesting example that we will use later on is reconstructing the 230 | original expression being matched: 231 | */ 232 | 233 | val lit1: Expr = litM1(lit, neg, add) 234 | val neg1: Expr = negM1(lit, neg, add) 235 | val add1: Expr = addM1(lit, neg, add) 236 | 237 | import CompositionalInterpreters._ 238 | 239 | write(lit1) shouldBe "1" 240 | write(neg1) shouldBe "(-1)" 241 | write(add1) shouldBe "(1+2)" 242 | 243 | /* 244 | Given all this, the only thing that we need now in order to implement pattern 245 | matching-based functions over Church encodings is some way of obtaining for some 246 | arbitrary expression its corresponding match information, i.e. a function with signature 247 | `match: Expr => Match`. But the only way to implement this function is as a fold, 248 | so we need to find functions `Int => Match`, `Match => Match` and `(Match,Match) 249 | => Match` that can be passed to our expression. 250 | */ 251 | 252 | def `match`(e: Expr): Match = 253 | e(Match.lit, Match.neg, Match.add) 254 | 255 | /* 256 | In other words, in order to implement that function as a fold, we have to find 257 | a way of obtaining a match for literals, a match for negated expressions taking 258 | into account the match of the negated expression, and a match for sums taking 259 | into account the matches of the corresponding subexpressions. 260 | 261 | We implement these functions as part of the companion object for the `Match` type. 262 | */ 263 | object Match{ 264 | 265 | def lit(i: Int): Match = new Match{ 266 | def apply[W](dlit: Int => W, dneg: Expr => W, dadd: (Expr, Expr) => W): W = 267 | dlit(i) 268 | } 269 | 270 | def neg(e: Match): Match = new Match{ 271 | def apply[W](dlit: Int => W, dneg: Expr => W, dadd: (Expr, Expr) => W): W = 272 | dneg(e(Expr.lit, Expr.neg, Expr.add)) 273 | // Note how this match encodes the subexpression being negated, and 274 | // how we obtain this subexpression from its corresponding match. 275 | } 276 | 277 | def add(e1: Match, e2: Match): Match = new Match{ 278 | def apply[W](dlit: Int => W, dneg: Expr => W, dadd: (Expr, Expr) => W): W = 279 | dadd(e1(Expr.lit, Expr.neg, Expr.add),e2(Expr.lit, Expr.neg, Expr.add)) 280 | } 281 | } 282 | 283 | /* 284 | Now we are ready to implement both compositional and non-compositional interpreters 285 | for Church-encoded expressions. 286 | */ 287 | 288 | def write(e: Expr): String = 289 | `match`(e)( 290 | i => s"$i", 291 | e1 => "(-"+write(e1)+")", 292 | (e1, e2) => "("+write(e1)+"+"+write(e2)+")" 293 | ) 294 | 295 | write(lit(1)) shouldBe "1" 296 | write(neg(lit(1))) shouldBe "(-1)" 297 | write(add(lit(1),lit(2))) shouldBe "(1+2)" 298 | 299 | /* 300 | Note how we pattern match the inner expression `e1` in the negated case for the `pushNeg` 301 | interpreter. 
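  The nested match is needed because `Match` exposes a single `dneg` continuation for
  *all* negated expressions: the three cases `Neg(Lit(_))`, `Neg(Neg(_))` and
  `Neg(Add(_,_))` of the original ADT interpreter can only be told apart by matching
  the inner expression again.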
302 | */ 303 | def pushNeg(e: Expr): Expr = 304 | `match`(e)( 305 | _ => e, 306 | e1 => `match`(e1)( 307 | _ => e, 308 | e2 => pushNeg(e2), 309 | (e2, e3) => add(pushNeg(neg(e2)), pushNeg(neg(e3))) 310 | ), 311 | (e1, e2) => add(pushNeg(e1),pushNeg(e2)) 312 | ) 313 | 314 | import ADTs.{Lit, Neg, Add} 315 | 316 | pushNeg(neg(lit(1)))(Lit, Neg, Add) should 317 | be(Neg(Lit(1))) 318 | 319 | pushNeg(neg(add(neg(lit(1)), lit(2))))(Lit, Neg, Add) should 320 | be(Add(Lit(1), Neg(Lit(2)))) 321 | } 322 | 323 | "Deconstructing Church" should "work" in DeconstructingChurch 324 | 325 | /* 326 | The last part of this gist will simply add some syntactic sugar to the above 327 | code, so that we can implement recursive functions over Church encodings exactly 328 | as we do with ADT-based representations. 329 | */ 330 | object ScalaExtractors{ 331 | import Church._, Expr._, DeconstructingChurch.{pushNeg => _, _} 332 | 333 | /* 334 | In order to achieve this extra level of conciseness and clarity, we use 335 | Scala extractors. These are given to us by the Scala compiler each time 336 | we implement a case class. Since we did not implement `Expr` as a case 337 | class, we have to implement them ourselves. 338 | */ 339 | 340 | object Lit{ 341 | def unapply(e: Expr): Option[Int] = 342 | `match`(e)(i => Some(1), _ => None, (_,_) => None) 343 | } 344 | 345 | object Neg{ 346 | def unapply(e: Expr): Option[Expr] = 347 | `match`(e)(i => None, e1 => Some(e1), (_,_) => None) 348 | } 349 | 350 | object Add{ 351 | def unapply(e: Expr): Option[(Expr, Expr)] = 352 | `match`(e)(_ => None, _ => None, (e1, e2) => Some((e1,e2))) 353 | } 354 | 355 | /* 356 | With these extractors we can implement the `pushNeg` interpreter in a more 357 | familiar way. 358 | */ 359 | 360 | def pushNeg(e: Expr): Expr = e match { 361 | case Lit(_) => e 362 | case Neg(Lit(_)) => e 363 | case Neg(Neg(e1)) => pushNeg(e1) 364 | case Neg(Add(e1,e2)) => add(pushNeg(neg(e1)), pushNeg(neg(e2))) 365 | case Add(e1,e2) => add(pushNeg(e1),pushNeg(e2)) 366 | } 367 | 368 | write(pushNeg(neg(neg(lit(1))))) shouldBe "1" 369 | write(pushNeg(neg(add(neg(lit(1)),lit(2))))) shouldBe "(1+(-2))" 370 | } 371 | 372 | "ScalaExtractors" should "work" in ScalaExtractors 373 | 374 | } 375 | 376 | object ChurchEncodings extends ChurchEncodings 377 | 378 | -------------------------------------------------------------------------------- /src/test/scala/Filter.scala: -------------------------------------------------------------------------------- 1 | package org.hablapps.gist 2 | 3 | trait Filter[F[_]]{ 4 | def filter[A](fa: F[A])(f: A => Boolean)(implicit 5 | F: sourcecode.File, L: sourcecode.Line): F[A] 6 | } 7 | 8 | object Filter{ 9 | 10 | def apply[F[_]](implicit S: Filter[F]) = S 11 | 12 | // Use in for-comprehensions 13 | 14 | implicit class FilterOps[F[_],A](fa: F[A])(implicit SF: Filter[F]){ 15 | def filter(f: A => Boolean)(implicit F: sourcecode.File, L: sourcecode.Line): F[A] = 16 | SF.filter(fa)(f) 17 | def withFilter(f: A => Boolean)(implicit F: sourcecode.File, L: sourcecode.Line): F[A] = 18 | filter(f) 19 | } 20 | 21 | import scalaz.MonadError, scalaz.syntax.monadError._ 22 | 23 | type Location = (_root_.sourcecode.File,_root_.sourcecode.Line) 24 | 25 | def FilterForMonadError[F[_],S](error: Location => S)( 26 | implicit merror: MonadError[F,S]) = 27 | new Filter[F]{ 28 | def filter[A](fa: F[A])(f: A => Boolean)(implicit 29 | F: sourcecode.File, L: sourcecode.Line): F[A] = 30 | merror.bind(fa)(a => 31 | if (f(a)) a.point else merror.raiseError(error((F,L))) 32 
| ) 33 | } 34 | 35 | implicit def FilterForMonadErrorOnLocation[F[_]](implicit merror: MonadError[F,Location]) = 36 | FilterForMonadError(identity) 37 | } 38 | -------------------------------------------------------------------------------- /src/test/scala/GADTs.scala: -------------------------------------------------------------------------------- 1 | package org.hablapps.gist 2 | 3 | import scala.language.existentials 4 | import org.scalatest._ 5 | import cats.{~>, Eval, Id, MonadState, MonadWriter, Monad} 6 | import cats.arrow.FunctionK 7 | import cats.data.{State, StateT, Writer, WriterT} 8 | 9 | /* 10 | The gist mainly illustrates how to implement compositional interpreters of GADTs 11 | using catamorphisms. Since GADTs are built using type constructors, the `fold` 12 | function to be implemented requires higher-kinded polymorphism. Moreover, 13 | in some cases we will need natural transformations instead of regular functions. 14 | 15 | You may want to check first a similar gist for regular ADTs: 16 | 17 | https://github.com/hablapps/gist/blob/master/src/test/scala/ADTs.scala 18 | */ 19 | class GADTs extends FlatSpec with Matchers{ 20 | 21 | /* 22 | The GADT to be used as example allow us to represent imperative IO programs, 23 | made of simple "read" and "write" instructions. 24 | */ 25 | sealed abstract class IO[_] 26 | case object Read extends IO[String] 27 | case class Write(msg: String) extends IO[Unit] 28 | case class FlatMap[A, B](p: IO[A], f: A => IO[B]) extends IO[B] 29 | case class Pure[A](a: A) extends IO[A] 30 | 31 | /* 32 | This kind of representation allows us to write the following IO programs 33 | */ 34 | val e1: IO[Unit] = FlatMap(Read, Write) 35 | 36 | val e2: IO[String] = 37 | FlatMap[Unit, String](e1, _ => Read) 38 | 39 | val e3: IO[String] = 40 | FlatMap[String, String](Read, s1 => 41 | FlatMap[Unit, String]( 42 | FlatMap[String, Unit](Read, s2 => Write(s1+s2)), _ => 43 | Read) 44 | ) 45 | 46 | /* 47 | In order to run IO programs, converting them to strings, transforming 48 | them into normal forms, etc., we implement independent functions. 49 | */ 50 | 51 | trait CompositionalInterpreters { 52 | // Side-effectful interpretation of IO programs 53 | def run[A](e: IO[A]): A 54 | // (Approximate) representation of IO programs as strings 55 | def write[A](io: IO[A]): String 56 | } 57 | 58 | trait NonCompositionalInterpreters{ 59 | // Reassociate `FlatMap`s to the right 60 | def reassociate[A](e: IO[A]): IO[A] 61 | } 62 | 63 | /* 64 | And here there are some simple tests for some of the implemented functions 65 | */ 66 | case class TestCompositional(interpreters: CompositionalInterpreters){ 67 | import interpreters._ 68 | 69 | write(e1) shouldBe "FlatMap(Read, Write)" 70 | write(e2) shouldBe "FlatMap(FlatMap(Read, Write), Read)" 71 | write(e3) shouldBe "FlatMap(Read, FlatMap(FlatMap(Read, Write), Read))" 72 | } 73 | 74 | case class TestNonCompositional(interpreters: NonCompositionalInterpreters with CompositionalInterpreters){ 75 | import interpreters._ 76 | 77 | write(reassociate(e1)) shouldBe 78 | write(e1) 79 | 80 | write(reassociate(e2)) shouldBe 81 | "FlatMap(Read, FlatMap(Write, Read))" 82 | 83 | write(reassociate(e3)) shouldBe 84 | "FlatMap(Read, FlatMap(Read, FlatMap(Write, Read)))" 85 | 86 | write(reassociate(FlatMap(e3, (s: String) => Read))) shouldBe 87 | "FlatMap(Read, FlatMap(Read, FlatMap(Write, FlatMap(Read, Read))))" 88 | } 89 | 90 | /* 91 | We will first implement these functions using pattern matching. 
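  As in the ADT gist, `run` and `write` below are compositional interpreters, while
  `reassociate` is not: it has to inspect nested `FlatMap(FlatMap(_, _), _)` structure.
  Note also that `write` threads a `Writer[String, ?]` computation to accumulate the
  textual representation of the program as it is traversed.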
92 | */ 93 | object PatternMatchingInterpreters extends NonCompositionalInterpreters with CompositionalInterpreters{ 94 | 95 | def run[A](e: IO[A]): A = e match { 96 | case Read => scala.io.StdIn.readLine 97 | case Write(msg) => println(msg) 98 | case FlatMap(p, next) => run(next(run(p))) 99 | case Pure(a) => a 100 | } 101 | 102 | def write[A](io: IO[A]): String = { 103 | implicit val stringMonoid = new cats.kernel.instances.StringMonoid 104 | val monad = WriterT.catsDataMonadWriterForWriterT[Id, String] 105 | import monad._ 106 | 107 | def aux[A](_io: IO[A]): Writer[String, A] = 108 | _io match { 109 | case Read => 110 | writer(("Read", "")) 111 | case Write(_) => 112 | tell(s"Write") 113 | case FlatMap(p, f) => 114 | for { 115 | _ <- tell(s"FlatMap(") 116 | a <- aux(p) 117 | _ <- tell(", ") 118 | b <- aux(f(a)) 119 | _ <- tell(")") 120 | } yield b 121 | case Pure(a) => 122 | writer((s"Pure($a)", a)) 123 | } 124 | 125 | aux(io).written 126 | } 127 | 128 | def reassociate[A](e: IO[A]): IO[A] = e match { 129 | case FlatMap(FlatMap(p1, next1), next2) => 130 | reassociate(FlatMap(p1, next1 andThen (FlatMap(_,next2)))) 131 | case FlatMap(p1, next1) => 132 | FlatMap(reassociate(p1), next1 andThen reassociate) 133 | case other => other 134 | } 135 | } 136 | 137 | "Pattern matching" should "work" in { 138 | TestCompositional(PatternMatchingInterpreters) 139 | TestNonCompositional(PatternMatchingInterpreters) 140 | } 141 | 142 | /* 143 | This module implements the catamorphism for IO programs. Note that "read" and 144 | "write" interpretations are normal functions, since these instructions are not 145 | parameterised. On the contrary, interpretations for "sequenced" and "pure" programs 146 | are represented through natural transformations. 147 | */ 148 | object HKFold{ 149 | 150 | // Type alias for the interpretation of composite IO programs 151 | 152 | type FlatMapNatTrans[M[_]] = FlatMapNatTrans.F2[M, ?] ~> M 153 | 154 | object FlatMapNatTrans{ 155 | 156 | trait F2[M[_], A] { 157 | type I 158 | val fi: M[I] 159 | val f: I => M[A] 160 | } 161 | 162 | implicit def apply[M[_]: Monad]: FlatMapNatTrans[M] = 163 | new (FlatMapNatTrans[M]){ 164 | def apply[A](fa: F2[M,A]): M[A] = Monad[M].flatMap(fa.fi)(fa.f) 165 | } 166 | } 167 | 168 | // Type alias for the interpretation of pure programs 169 | 170 | type PureNatTrans[M[_]] = Id ~> M 171 | 172 | object PureNatTrans{ 173 | implicit def apply[M[_]: Monad]: PureNatTrans[M] = new (Id~>M){ 174 | def apply[X](a: X): M[X] = Monad[M].pure(a) 175 | } 176 | } 177 | 178 | // Higher-kinded catamorphism 179 | 180 | def fold[F[_]]( 181 | read: => F[String], 182 | write: String => F[Unit], 183 | flatMap: FlatMapNatTrans.F2[F, ?] ~> F, 184 | pure: PureNatTrans[F]): IO ~> F = { 185 | 186 | def foldFlatMap[A,B](fm: FlatMap[A,B]) = 187 | flatMap(new FlatMapNatTrans.F2[F, B] { 188 | type I = A 189 | val fi: F[A] = fold(read, write, flatMap, pure)(fm.p) 190 | val f: A => F[B] = (x: A) => fold(read, write, flatMap, pure)(fm.f(x)) 191 | }) 192 | 193 | new (IO ~> F) { 194 | def apply[A](io: IO[A]): F[A] = io match { 195 | case Read => read 196 | case Write(msg) => write(msg) 197 | case fm: FlatMap[_,_] => foldFlatMap(fm) 198 | case Pure(a) => pure(a) 199 | } 200 | } 201 | } 202 | } 203 | 204 | /* 205 | We can now implement the compositional interpreters using catamorphisms. Whenever 206 | possible we create natural transformations for `FlatMap` and `Pure` programs 207 | from available monad instances. 
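  In other words, for any monad `M`, supplying a read action `M[String]`, a write action
  `String => M[Unit]`, and the canonical `FlatMapNatTrans[M]` and `PureNatTrans[M]`
  instances yields an interpreter `IO ~> M`. The `run` interpreter below follows this
  recipe at `M = Id`, and `write` at `M = Writer[String, ?]`, with a custom natural
  transformation for `FlatMap` so that the opening, separator and closing markers can be
  emitted around the recursive calls.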
208 | */ 209 | object FoldInterpreters extends CompositionalInterpreters { 210 | import HKFold._ 211 | 212 | def run[A](io: IO[A]): A = 213 | fold[Id]( 214 | scala.io.StdIn.readLine, 215 | println, 216 | FlatMapNatTrans[Id], 217 | FunctionK.id)(io) 218 | 219 | 220 | def write[A](io: IO[A]): String = { 221 | implicit val stringMonoid = new cats.kernel.instances.StringMonoid 222 | val monad = WriterT.catsDataMonadWriterForWriterT[Id, String] 223 | import monad._ 224 | 225 | val FlatMapNatTransForWrite = new FlatMapNatTrans[Writer[String,?]]{ 226 | def apply[A](fa: FlatMapNatTrans.F2[Writer[String,?],A]): Writer[String,?][A] = 227 | for { 228 | _ <- tell(s"FlatMap(") 229 | a <- fa.fi 230 | _ <- tell(", ") 231 | b <- fa.f(a) 232 | _ <- tell(")") 233 | } yield b 234 | } 235 | 236 | fold[Writer[String, ?]]( 237 | writer(("Read", "")), 238 | msg => tell(s"Write"), 239 | FlatMapNatTransForWrite, 240 | PureNatTrans[Writer[String,?]])(io).written 241 | } 242 | } 243 | 244 | "Catamorphisms" should "work" in TestCompositional(FoldInterpreters) 245 | 246 | } 247 | 248 | object GADTs extends GADTs 249 | -------------------------------------------------------------------------------- /src/test/scala/InitialAlgebras.scala: -------------------------------------------------------------------------------- 1 | package org.hablapps.gist 2 | 3 | /* 4 | The purpose of this gist is explaining the relationships between ADT and Church 5 | encodings using algebraic concepts. We'll use the familiar domain of arithmetic expressions. 6 | Essentially, we'll see that given an algebraic theory for arithmetic expressions, ADT 7 | and Church encodings correspond to initial algebras of that theory, completely equivalent 8 | for all purposes (functionally speaking, since they may differ significantly in non-functional 9 | concerns such as efficiency, modularity, etc.). 10 | 11 | We will make reference to these others gists, on ADTs and Church encodings, respectively: 12 | * https://github.com/hablapps/gist/blob/master/src/test/scala/ADTs.scala 13 | * https://github.com/hablapps/gist/blob/master/src/test/scala/ChurchEncodings.scala 14 | 15 | There are two basic ways of representing algebras in Scala: as functor algebras, 16 | and as object algebras. We'll follow the later approach in this gist. For more information on 17 | object algebras check the following source: 18 | 19 | Extensibility for the Masses. Practical Extensibility with Object Algebras 20 | Bruno C. d. S. Oliveira and William R. Cook 21 | https://www.cs.utexas.edu/~wcook/Drafts/2012/ecoop2012.pdf 22 | 23 | */ 24 | 25 | import org.scalatest._ 26 | 27 | class InitialAlgebras extends FlatSpec with Matchers{ 28 | 29 | /* 30 | An algebraic theory is simply a collection of operations that allow us to build 31 | objects according to certain rules. In our case, we want to create arithmetic 32 | expressions. 33 | 34 | The following trait tells us that we can create arithmetic expressions using the 35 | operations, or constructors, `lit`, `neg` and `add`. Any type `E` for which 36 | we can implement the following trait qualifies as an arithmetic expression. 37 | 38 | As you can see, object algebras are directly represented by type classes in Scala. 
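  For contrast, a functor-algebra presentation of the same signature (not used in this
  gist; the names `ExprF`, `LitF`, `NegF`, `AddF` and `FExprAlgebra` are only illustrative)
  would reify the operations as a data type and take an evaluation function out of it:

      sealed abstract class ExprF[E]
      case class LitF[E](i: Int) extends ExprF[E]
      case class NegF[E](e: E) extends ExprF[E]
      case class AddF[E](e1: E, e2: E) extends ExprF[E]

      type FExprAlgebra[E] = ExprF[E] => E

  The object-algebra style used below packages the same information as a plain type class.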
39 | */ 40 | trait ExprAlg[E]{ 41 | def lit(i: Int): E 42 | def neg(e: E): E 43 | def add(e1: E, e2: E): E 44 | } 45 | 46 | object ExprAlg{ 47 | 48 | // Syntactical helper 49 | def apply[E](clit: Int => E, cneg: E => E, cadd: (E,E)=>E): ExprAlg[E] = 50 | new ExprAlg[E]{ 51 | def lit(i: Int) = clit(i) 52 | def neg(e: E) = cneg(e) 53 | def add(e1: E, e2: E) = cadd(e1,e2) 54 | } 55 | 56 | } 57 | 58 | /* 59 | Which types can be regarded as arithmetic expressions? I.e. Which kinds of values 60 | can be created according to the rules of arithmetic expresions? There are many 61 | examples: integers, strings, etc. For instance: 62 | */ 63 | 64 | object Eval{ 65 | 66 | val algebra: ExprAlg[Int] = 67 | ExprAlg(i => i, 68 | i => -i, 69 | (i1,i2) => i1 + i2) 70 | 71 | /* 72 | Using the algebra, we can create integers as if they were aritmethic expressions 73 | */ 74 | import algebra._ 75 | 76 | add(lit(1),lit(2)) shouldBe 3 77 | add(neg(add(neg(lit(1)),lit(2))),lit(3)) shouldBe 2 78 | } 79 | 80 | object Write{ 81 | 82 | val algebra: ExprAlg[String] = 83 | ExprAlg(i => i.toString, 84 | s => s"(-$s)", 85 | (s1,s2) => s"($s1+$s2)") 86 | 87 | /* 88 | This time, using the algebra we can create strings as if they were aritmethic expressions 89 | */ 90 | import algebra._ 91 | 92 | add(lit(1),lit(2)) shouldBe "(1+2)" 93 | add(neg(add(neg(lit(1)),lit(2))),lit(3)) shouldBe "((-((-1)+2))+3)" 94 | } 95 | 96 | "Sample algebras" should "work" in { Eval; Write } 97 | 98 | /* 99 | Although we have seen that strings and integers "are" arithmetic expressions, since there 100 | are expression algebras for them, it seems somewhat odd to say that. Intuitively, 101 | we may rather say that arithmetic expressions can be *interpreted* as strings or integers. 102 | From this perspective, expression algebras such as `Eval.algebra` and `Write.algebra` are 103 | the interpreters. 104 | 105 | Now, there are interpreters which are special, in the sense that they allow us to create 106 | arithmetic expressions that are fully general, and apparently free of any particular 107 | interpretation. For instance, let's consider an ADT representation of arithmetic expressions. 108 | We can indeed interpret the expression algebra over the `Expr` ADT. 109 | */ 110 | 111 | object ADTAlgebra{ 112 | 113 | // The constructors of the ADT are in a one-to-one correspondence with the 114 | // algebra operations 115 | import ADTs.{Lit,Neg,Add,Expr} 116 | 117 | val algebra: ExprAlg[Expr] = ExprAlg(Lit, Neg, Add) 118 | 119 | // Let's create some arithmetic expressions 120 | import algebra._ 121 | 122 | add(lit(1),lit(2)) shouldBe Add(Lit(1),Lit(2)) 123 | add(neg(add(neg(lit(1)),lit(2))),lit(3)) shouldBe Add(Neg(Add(Neg(Lit(1)),Lit(2))),Lit(3)) 124 | 125 | } 126 | 127 | /* 128 | But not only we can understand ADT expressions through the lens of expression algebras. 129 | More importantly, we can obtain *any* other interpretation from it. Technically, 130 | this means that the ADT algebra is initial, i.e. there is an algebra homomorphism (and 131 | only one) from the ADT algebra to any other one. More informally, ADT expressions carry 132 | no particular meaning at all. So, given a particular interpretation (e.g. an algebra 133 | for integers), we can interpret the ADT expression to obtain a value in the domain of 134 | that interpretation. This is what the `fold` function accomplishes. Taking this into 135 | account, we can say that initial domains are canonical ways of representing algebraic 136 | expressions. 
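  Concretely, interpreting via `fold` is an algebra homomorphism: writing `fold(alg)` for
  the interpretation into an algebra `alg: ExprAlg[A]` (as defined just below), we have

      fold(alg)(Add(e1,e2)) == alg.add(fold(alg)(e1), fold(alg)(e2))

  and similarly for `Lit` and `Neg`. So, for example, `Add(Lit(1),Lit(2))` is sent to `3`
  by `Eval.algebra` and to `"(1+2)"` by `Write.algebra`.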
137 | 138 | In sum, an initial algebra is an algebra `Alg[I]` such that for any other possible 139 | algebra `Alg[A]`, we can interpret `I` as `A`: 140 | */ 141 | 142 | trait InitialAlgebra[I, Alg[_]]{ 143 | def algebra: Alg[I] 144 | def fold[A](alg: Alg[A]): I => A 145 | } 146 | 147 | /* 148 | We already implemented `fold` for the ADT representation, so we can easily check that 149 | ADT expressions make up an initial algebra. 150 | */ 151 | object ADTInitial{ 152 | import ADTs.Expr 153 | 154 | val initial = new InitialAlgebra[Expr, ExprAlg]{ 155 | val algebra: ExprAlg[Expr] = ADTAlgebra.algebra 156 | 157 | def fold[A](alg: ExprAlg[A]): Expr => A = 158 | ADTs.fold(alg.lit, alg.neg, alg.add) 159 | } 160 | 161 | // We can now interpret ADT expressions as strings or integers using 162 | // the corresponding algebras (i.e. interpreters) 163 | import initial._, algebra._ 164 | 165 | val e1: Expr = add(lit(1),lit(2)) // we create ADT expressions using 166 | // the "smart" constructors of the 167 | // algebra, rather than the ADT constructors. 168 | 169 | fold(Eval.algebra)(e1) shouldBe 3 170 | fold(Write.algebra)(e1) shouldBe "(1+2)" 171 | } 172 | 173 | "ADTInitial" should "work" in ADTInitial 174 | 175 | /* 176 | Besides ADT representations, in which other ways can we represent expressions in a 177 | canonical way? In other words, how can we construct other initial algebras? There are at 178 | least two other ways: one uses fixed points of functors, and the other one Church encodings. 179 | We will review this second one now. 180 | 181 | To motivate Church encodings, note that we have being using two particular expressions 182 | throughout this gist: 183 | 184 | add(lit(1),lit(2)), and 185 | add(neg(add(neg(lit(1)),lit(2))),lit(3)) 186 | 187 | These expressions were written using the constructors `add`, `lit` and `neg` of particular 188 | expression algebras (`Eval.algebra`, `Write.algebra` or `ADTAlgebra.algebra`). Can we write 189 | these expressions just once, for any possible algebra? Yes, we can! 190 | */ 191 | object TowardsChurch{ 192 | 193 | // We simply build upon a generic algebra, instead of a particular one. 194 | def e0[E](alg: ExprAlg[E]): E = { 195 | import alg._ 196 | add(lit(1),lit(2)) 197 | } 198 | 199 | def e1[E](alg: ExprAlg[E]): E = { 200 | import alg._ 201 | add(neg(add(neg(lit(1)),lit(2))),lit(3)) 202 | } 203 | 204 | // Then, we can obtain the original interpretations by applying the corresponding 205 | // algebra to the generic expression 206 | 207 | import ADTs.{Lit, Add, Neg} 208 | 209 | e0(ADTAlgebra.algebra) shouldBe Add(Lit(1),Lit(2)) 210 | e0(Write.algebra) shouldBe "(1+2)" 211 | e0(Eval.algebra) shouldBe 3 212 | } 213 | 214 | "Towards church" should "work" in TowardsChurch 215 | 216 | /* 217 | Now, please note the close similarity between the representation of arithmetic expressions 218 | using ad-hoc polymorphic functions such as `e0` and `e1`, and the Church encoding of 219 | arithmetic expressions: 220 | 221 | https://github.com/hablapps/gist/blob/master/src/test/scala/ChurchEncodings.scala#L45 222 | 223 | Essentially, the Church encoding is just a reification of a polymorphic function which 224 | creates an expression using a number of constructors. This polymorphic function is almost 225 | identical to the ones that we wrote before, `e0` and `e1`, the difference just being that 226 | the constructors in these later functions are packaged within an algebra. 
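  Side by side, for the expression 1+2:

      // Church encoding: the constructors are passed as separate arguments
      new Expr{
        def apply[E](lit: Int => E, neg: E => E, add: (E,E) => E): E = add(lit(1), lit(2))
      }

      // Parametric function over an algebra: the same constructors, packaged in ExprAlg
      def e0[E](alg: ExprAlg[E]): E = alg.add(alg.lit(1), alg.lit(2))

  Both are the same polymorphic recipe; only the way its ingredients are passed differs.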
227 | 228 | It's actually very easy to come up with the proof that Church encodings are initial 229 | algebras. 230 | */ 231 | 232 | object ChurchInitial{ 233 | import ChurchEncodings.Church, Church._ 234 | 235 | val initial = new InitialAlgebra[Expr, ExprAlg]{ 236 | 237 | val algebra: ExprAlg[Expr] = ExprAlg(Expr.lit, Expr.neg, Expr.add) 238 | 239 | def fold[A](alg: ExprAlg[A]): Expr => A = 240 | _(alg.lit, alg.neg, alg.add) 241 | } 242 | 243 | /* 244 | Using the Church algebra we can write the expressions `e0` and `e1` step-by-step, 245 | instead of performing a single instantiation (not saying that this is good, simply 246 | that you can). 247 | */ 248 | 249 | import initial._, algebra._ 250 | 251 | val e0: Expr = add(lit(1),lit(2)) 252 | val e1: Expr = add(neg(add(neg(lit(1)),lit(2))),lit(3)) 253 | 254 | /* 255 | Being a canonical domain, we can interpret Church expressions as integers, string, 256 | or even ADT expressions (another canonical domain), using their corresponding 257 | interpreters. 258 | */ 259 | import ADTs.{Lit, Add, Neg} 260 | 261 | fold(ADTAlgebra.algebra)(e0) shouldBe Add(Lit(1),Lit(2)) 262 | fold(Eval.algebra)(e0) shouldBe 3 263 | fold(Write.algebra)(e0) shouldBe "(1+2)" 264 | } 265 | 266 | "Church Initial" should "work" in ChurchInitial 267 | 268 | } 269 | 270 | object InitialAlgebras extends InitialAlgebras -------------------------------------------------------------------------------- /src/test/scala/LensStateIsYourFather.scala: -------------------------------------------------------------------------------- 1 | package org.hablapps.gist 2 | 3 | import org.scalatest._ 4 | 5 | import scalaz.{ Reader, State } 6 | import scalaz.Isomorphism.<=> 7 | 8 | import monocle.{ Getter, Lens, Optional, Setter, Fold } 9 | 10 | /* IMPORTANT: this gist belongs to the blog post "Lens, State Is Your Father". 
11 | * Please, visit the following link to get a complete description of the 12 | * contents: 13 | * 14 | * https://blog.hablapps.com/2016/11/10/lens-state-is-your-father/ 15 | */ 16 | class LensStateIsYourFather extends FlatSpec with Matchers { 17 | 18 | type IOCoalgebra[IOAlg[_[_]], Step[_, _], S] = IOAlg[Step[S, ?]] 19 | 20 | object OpticsAsCoalgebras { 21 | import scalaz.{ Functor, Monad } 22 | import scalaz.syntax.monad._ 23 | 24 | /* IOLens */ 25 | 26 | trait LensAlg[A, P[_]] { 27 | def get: P[A] 28 | def set(a: A): P[Unit] 29 | 30 | def gets[B]( 31 | f: A => B)(implicit 32 | F: Functor[P]): P[B] = 33 | get map f 34 | 35 | def modify( 36 | f: A => A)(implicit 37 | M: Monad[P]): P[Unit] = 38 | get >>= (f andThen set) 39 | } 40 | 41 | type IOLens[S, A] = IOCoalgebra[LensAlg[A, ?[_]], State, S] 42 | 43 | object IOLens { 44 | 45 | def apply[S, A](_get: S => A)(_set: A => S => S): IOLens[S, A] = 46 | new LensAlg[A, State[S, ?]] { 47 | def get: State[S, A] = State.gets(_get) 48 | def set(a: A): State[S, Unit] = State.modify(_set(a)) 49 | } 50 | 51 | def lensIso[S, A] = new (Lens[S, A] <=> IOLens[S, A]) { 52 | 53 | def from: IOLens[S, A] => Lens[S, A] = 54 | ioln => Lens[S, A](ioln.get.eval)(a => ioln.set(a).exec) 55 | 56 | def to: Lens[S, A] => IOLens[S, A] = ln => new IOLens[S, A] { 57 | def get: State[S, A] = State.gets(ln.get) 58 | def set(a: A): State[S, Unit] = State.modify(ln.set(a)) 59 | } 60 | } 61 | } 62 | 63 | /* IOOptional */ 64 | 65 | trait OptionalAlg[A, P[_]] { 66 | def getOption: P[Option[A]] 67 | def set(a: A): P[Unit] 68 | } 69 | 70 | type IOOptional[S, A] = IOCoalgebra[OptionalAlg[A, ?[_]], State, S] 71 | 72 | object IOOptional { 73 | 74 | def optionalIso[S, A] = new (Optional[S, A] <=> IOOptional[S, A]) { 75 | 76 | def from: IOOptional[S, A] => Optional[S, A] = 77 | ioopt => Optional[S, A](ioopt.getOption.eval)(a => ioopt.set(a).exec) 78 | 79 | def to: Optional[S, A] => IOOptional[S, A] = opt => new IOOptional[S, A] { 80 | def getOption: State[S, Option[A]] = State.gets(opt.getOption) 81 | def set(a: A): State[S, Unit] = State.modify(opt.set(a)) 82 | } 83 | } 84 | } 85 | 86 | /* IOSetter */ 87 | 88 | trait SetterAlg[A, P[_]] { 89 | def modify(f: A => A): P[Unit] 90 | } 91 | 92 | type IOSetter[S, A] = IOCoalgebra[SetterAlg[A, ?[_]], State, S] 93 | 94 | object IOSetter { 95 | 96 | def setterIso[S, A] = new (Setter[S, A] <=> IOSetter[S, A]) { 97 | 98 | def from: IOSetter[S, A] => Setter[S, A] = 99 | iost => Setter[S, A](f => iost.modify(f).exec) 100 | 101 | def to: Setter[S, A] => IOSetter[S, A] = st => new IOSetter[S, A] { 102 | def modify(f: A => A): State[S, Unit] = State.modify(st.modify(f)) 103 | } 104 | } 105 | } 106 | 107 | /* IOGetter */ 108 | 109 | trait GetterAlg[A, P[_]] { 110 | def get: P[A] 111 | } 112 | 113 | type IOGetter[S, A] = IOCoalgebra[GetterAlg[A, ?[_]], Reader, S] 114 | 115 | object IOGetter { 116 | 117 | def getterIso[S, A] = new (Getter[S, A] <=> IOGetter[S, A]) { 118 | 119 | def from: IOGetter[S, A] => Getter[S, A] = 120 | iogt => Getter[S, A](iogt.get.run) 121 | 122 | def to: Getter[S, A] => IOGetter[S, A] = gt => new IOGetter[S, A] { 123 | def get: Reader[S, A] = Reader(gt.get) 124 | } 125 | } 126 | } 127 | } 128 | 129 | object OpticsAndStateConnections { 130 | import scalaz.{ Monad, MonadState } 131 | import OpticsAsCoalgebras.IOLens 132 | 133 | type MSLens[S, A] = MonadState[State[S, ?], A] 134 | 135 | def lensIso[S, A] = new (IOLens[S, A] <=> MSLens[S, A]) { 136 | 137 | def from: MSLens[S, A] => IOLens[S, A] = msln => new IOLens[S, A] { 138 | 
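      // A MonadState instance for State[S, ?] already provides `get` and `put`,
      // so turning it into an IOLens is pure delegation: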
def get: State[S, A] = msln.get 139 | def set(a: A): State[S, Unit] = msln.put(a) 140 | } 141 | 142 | def to: IOLens[S, A] => MSLens[S, A] = ioln => new MSLens[S, A] { 143 | private val SM: Monad[State[S, ?]] = Monad[State[S, ?]] 144 | 145 | def point[A](a: => A): State[S, A] = SM.point(a) 146 | def bind[A, B](fa: State[S, A])(f: A => State[S, B]): State[S, B] = 147 | SM.bind(fa)(f) 148 | def get: State[S, A] = ioln.get 149 | def put(a: A): State[S, Unit] = ioln.set(a) 150 | def init: State[S, A] = get 151 | } 152 | } 153 | } 154 | 155 | object MonocleAndState { 156 | import Function.const 157 | import scalaz.syntax.monad._ 158 | import monocle.macros.GenLens 159 | import monocle.state.all._ 160 | import OpticsAsCoalgebras.IOLens 161 | 162 | case class Person(name: String, age: Int) 163 | val p: Person = Person("John", 30) 164 | 165 | /* Example using Monocle's state module */ 166 | 167 | val _age: Lens[Person, Int] = GenLens[Person](_.age) 168 | val increment: State[Person, Int] = _age mod (_ + 1) 169 | 170 | increment.run(p) shouldEqual (Person("John", 31), 31) 171 | 172 | /* Example using IOLens (returns Unit instead) */ 173 | 174 | val _ioage: IOLens[Person, Int] = 175 | IOLens[Person, Int](_.age)(age => _.copy(age = age)) 176 | val ioincrement: State[Person, Int] = 177 | (_ioage modify (_ + 1)) >> (_ioage.get) 178 | 179 | ioincrement.run(p) shouldEqual (Person("John", 31), 31) 180 | } 181 | 182 | "IOLens" should "work" in MonocleAndState 183 | 184 | object DiscussionAndOngoingWork { 185 | import scalaz.IndexedState 186 | import monocle.PLens 187 | 188 | type IOCoalgebra[IOAlg[_[_], _[_]], Step[_, _, _], S, T] = 189 | IOAlg[Step[S, S, ?], Step[S, T, ?]] 190 | 191 | trait PBind[F[_], G[_], H[_]] { 192 | def pbind[A, B](fa: F[A])(f: A => G[B]): H[B] 193 | } 194 | 195 | object PBind { 196 | 197 | private type IS[S, T, A] = IndexedState[S, T, A] 198 | 199 | implicit def IndexedStateInstance[S1, S2, S3] = 200 | new PBind[IS[S1, S2, ?], IS[S2, S3, ?], IS[S1, S3, ?]] { 201 | def pbind[A, B]( 202 | fa: IS[S1, S2, A])( 203 | f: A => IS[S2, S3, B]): IS[S1, S3, B] = 204 | fa flatMap f 205 | } 206 | } 207 | 208 | trait PLensAlg[A, B, P[_], Q[_]] { 209 | def get: P[A] 210 | def set(b: B): Q[Unit] 211 | 212 | import scalaz.{ Functor, Monad } 213 | import scalaz.Isomorphism.<=> 214 | import scalaz.syntax.monad._ 215 | 216 | def gets[C]( 217 | f: A => C)(implicit 218 | F: Functor[P]): P[C] = 219 | get map f 220 | 221 | def modify[H[_]]( 222 | f: A => B)(implicit 223 | PB: PBind[P, Q, H]): H[Unit] = 224 | PB.pbind(get)(f andThen set) 225 | } 226 | 227 | type IOPLens[S, T, A, B] = 228 | IOCoalgebra[PLensAlg[A, B, ?[_], ?[_]], IndexedState, S, T] 229 | 230 | object IOPLens { 231 | 232 | def apply[S, T, A, B]( 233 | _get: S => A)( 234 | _set: B => S => T): IOPLens[S, T, A, B] = new IOPLens[S, T, A, B] { 235 | def get: IndexedState[S, S, A] = 236 | IndexedState(s => (s, _get(s))) 237 | def set(b: B): IndexedState[S, T, Unit] = 238 | IndexedState(s => (_set(b)(s), ())) 239 | } 240 | 241 | def plensIso[S, T, A, B] = new (PLens[S, T, A, B] <=> IOPLens[S, T, A, B]) { 242 | 243 | def from: IOPLens[S, T, A, B] => PLens[S, T, A, B] = 244 | ioln => PLens(ioln.get.eval)(b => ioln.set(b).exec) 245 | 246 | def to: PLens[S, T, A, B] => IOPLens[S, T, A, B] = 247 | ln => new IOPLens[S, T, A, B] { 248 | def get: IndexedState[S, S, A] = 249 | IndexedState(s => (s, ln.get(s))) 250 | def set(b: B): scalaz.IndexedState[S, T, Unit] = 251 | IndexedState(s => (ln.set(b)(s), ())) 252 | } 253 | } 254 | } 255 | 256 | def 
_second[A, B, C]: IOPLens[(A, B), (A, C), B, C] = 257 | IOPLens[(A, B), (A, C), B, C](_._2)(b => s => (s._1, b)) 258 | 259 | val tp: (Int, String) = (1, "hi") 260 | 261 | _second[Int, String, Nothing].get.eval(tp) shouldEqual "hi" 262 | _second[Int, String, Nothing].gets(_.length).eval(tp) shouldEqual 2 263 | _second[Int, String, Int].modify(_.length).exec(tp) shouldEqual ((1, 2)) 264 | _second[Int, String, Char].set('a').exec(tp) shouldEqual ((1, 'a')) 265 | } 266 | 267 | "IOPLens" should "work" in DiscussionAndOngoingWork 268 | } 269 | -------------------------------------------------------------------------------- /src/test/scala/MonadAlgebras.scala: -------------------------------------------------------------------------------- 1 | 2 | object MonadAlgebras{ 3 | 4 | trait TC_MonadAlgebra[P[_]]{ 5 | def returns[A](a: A): P[A] 6 | def flatMap[A,B](fa: P[A])(f: A => P[B]): P[B] 7 | } 8 | 9 | sealed abstract class MonadF[P[_],_] 10 | case class Returns[P[_],A](a: A) extends MonadF[P,A] 11 | case class FlatMap[P[_],A,B](fa: P[A])(f: A => P[B]) extends MonadF[P,B] 12 | 13 | import scalaz.~> 14 | type F_MonadAlgebra[P[_]] = MonadF[P,?]~>P 15 | 16 | 17 | } -------------------------------------------------------------------------------- /src/test/scala/MonadMacro.scala: -------------------------------------------------------------------------------- 1 | package org.hablapps.gist 2 | 3 | import org.scalatest._ 4 | 5 | /** 6 | Some tests for the `monad` macro. Run these tests as follows: 7 | 8 | test-only org.hablapps.gist.MonadMacro 9 | 10 | */ 11 | class MonadMacro extends FunSpec with Matchers with Inside{ 12 | import cats.Monad 13 | 14 | /** 15 | This is a simple program with just one "return" instruction. 16 | */ 17 | describe("Simple pure translation"){ 18 | 19 | def test[P[_]: Monad](i: Int): P[Int] = monad{ 20 | i + 1 21 | } 22 | 23 | it("should work with Option"){ 24 | import cats.instances.option._ 25 | test[Option](2) shouldBe Some(3) 26 | } 27 | 28 | it("should work with Id"){ 29 | import cats.Id 30 | test[Id](2) shouldBe 3 31 | } 32 | } 33 | 34 | 35 | /** 36 | Simple program with several nested `flatMap`s 37 | */ 38 | describe("Several simple flatMaps"){ 39 | 40 | def test[P[_]: Monad](i: Int): P[Int] = monad{ 41 | val s: String = "2" 42 | val j: Int = s.length + i 43 | j+1 44 | } 45 | 46 | it("should work with Option"){ 47 | import cats.instances.option._ 48 | test[Option](2) shouldBe Some(4) 49 | } 50 | 51 | it("should work with Id"){ 52 | import cats.Id 53 | test[Id](2) shouldBe 4 54 | } 55 | 56 | it("should work with reified programs"){ 57 | 58 | abstract class Program[_] 59 | case class Returns[A](a: A) extends Program[A] 60 | case class DoAndThen[A,B](a: Program[A], 61 | f: A => Program[B]) extends Program[B] 62 | 63 | object Program{ 64 | implicit val M = new Monad[Program]{ 65 | def pure[A](a: A) = Returns(a) 66 | def flatMap[A,B](p: Program[A])(f: A => Program[B]) = 67 | DoAndThen(p,f) 68 | def tailRecM[A,B](a: A)(f: A => Program[Either[A,B]]) = ??? 69 | } 70 | } 71 | 72 | inside(test[Program](2)) { 73 | case DoAndThen(Returns("2"), f) => 74 | inside(f("2")) { 75 | case DoAndThen(Returns(3), g) => 76 | g(3) shouldBe Returns(4) 77 | } 78 | } 79 | } 80 | } 81 | 82 | /** 83 | What if our pure function has to deal with programs? 84 | Then, we simulate their execution using a fake `run` method. 
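    For instance, rather than sequencing `p1` and `p2` explicitly with `flatMap`,
    the program below just writes `val i: String = p1.run`, and the macro
    translates each such assignment back into the corresponding `flatMap` call.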
85 | */ 86 | describe("Simple example with .run"){ 87 | 88 | import monad._ 89 | 90 | def test[P[_]: Monad](p1: P[String], p2: P[Int]): P[Int] = monad{ 91 | val i: String = p1.run 92 | val j: Int = p2.run 93 | i.length + j 94 | } 95 | 96 | import cats.instances.option._ 97 | 98 | it("should work with Option"){ 99 | test[Option](Some("ab"),Some(1)) shouldBe Some(3) 100 | } 101 | 102 | } 103 | 104 | /** 105 | If our monadic program needs access to instructions 106 | of particular APIs (which is the normal case), we can 107 | also use the `.run` trick. 108 | */ 109 | describe("Monadic programs over particular APIs"){ 110 | import cats.Id 111 | 112 | // Non-declarative IO API & program 113 | 114 | object NonDeclarative{ 115 | 116 | trait IO{ 117 | def read(): String 118 | def write(msg: String): Unit 119 | } 120 | 121 | def echo()(io: IO): String = { 122 | val msg: String = io.read() 123 | io.write(msg) 124 | msg 125 | } 126 | } 127 | 128 | // Declarative IO programs with type classes 129 | 130 | trait IO[P[_]]{ 131 | def read(): P[String] 132 | def write(msg: String): P[Unit] 133 | } 134 | 135 | object IO{ 136 | object Syntax{ 137 | def read[P[_]]()(implicit IO: IO[P]) = IO.read() 138 | def write[P[_]](msg: String)(implicit IO: IO[P]) = IO.write(msg) 139 | } 140 | 141 | // Side-effectful interpretation 142 | 143 | implicit object IOId extends IO[Id]{ 144 | def read() = scala.io.StdIn.readLine() 145 | def write(msg: String) = println(msg) 146 | } 147 | 148 | // Simple state transformation for purely functional testing 149 | 150 | case class IOState(toBeRead: List[String], written: List[String]) 151 | 152 | object IOState{ 153 | import cats.data.State 154 | 155 | type Action[T] = State[IOState,T] 156 | 157 | implicit object IOAction extends IO[Action]{ 158 | def read(): Action[String] = 159 | for { 160 | s <- State.get 161 | _ <- State.set(s.copy(toBeRead = s.toBeRead.tail)) 162 | } yield s.toBeRead.head 163 | 164 | def write(msg: String): Action[Unit] = 165 | State.modify{ s => 166 | s.copy(written = msg :: s.written) 167 | } 168 | } 169 | } 170 | } 171 | 172 | // Three different monadic versions of the non-declarative `echo` program 173 | 174 | // With the `monad` macro 175 | object WithMonadMacro{ 176 | import IO.Syntax._, monad._ 177 | 178 | def echo[P[_]: Monad: IO](): P[String] = monad{ 179 | val msg: String = read().run 180 | write(msg).run 181 | msg 182 | } 183 | } 184 | 185 | // The `monad` macro generates the following program with 186 | // `flatMap`s and `pure`. 187 | object WithFlatMap{ 188 | import IO.Syntax._ 189 | 190 | def echo[P[_]: Monad: IO](): P[String] = 191 | Monad[P].flatMap(read()){ msg => 192 | Monad[P].flatMap(write(msg)){ _ => 193 | Monad[P].pure(msg) 194 | } 195 | } 196 | } 197 | 198 | // An alternative version with for-comprehensions. Just for 199 | // comparison with the previous ones. 200 | object WithForComprehensions{ 201 | import IO.Syntax._, cats.syntax.flatMap._, cats.syntax.functor._ 202 | 203 | def echo[P[_]: Monad: IO](): P[String] = for{ 204 | msg <- read() 205 | _ <- write(msg) 206 | } yield msg 207 | } 208 | 209 | // Test it! 
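    // The State-based interpreter below lets us check, purely, that the three
    // versions of `echo` (macro, explicit `flatMap`s, for-comprehension) behave
    // identically: same final `IOState`, same result.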
210 | 211 | it("should work with State"){ 212 | import IO.IOState 213 | 214 | val initialState = IOState(List("hi!"),List()) 215 | 216 | WithMonadMacro.echo[IOState.Action]().run(initialState).value shouldBe 217 | WithFlatMap.echo[IOState.Action]().run(initialState).value 218 | 219 | WithMonadMacro.echo[IOState.Action]().run(initialState).value shouldBe 220 | WithForComprehensions.echo[IOState.Action]().run(initialState).value 221 | } 222 | 223 | it("should work with Id"){ 224 | // Uncomment to be prompted at the console 225 | // echo[Id]() shouldBe "hi!" 226 | } 227 | } 228 | } -------------------------------------------------------------------------------- /src/test/scala/coalgebras/README.md: -------------------------------------------------------------------------------- 1 | You'll find here an implementation of coalgebraic-related type classes and demonstrators. It's the accompanied code to the [Cádiz typelevel summit](http://typelevel.org/event/2016-09-conf-cadiz/)'s proposal "We are reative! Programming actor systems through cofree coalgebras". Some slides can be found [here](https://docs.google.com/presentation/d/16kBjlXNtPFnNjZCx2n4ZoPypPO5eUSt1cjAI7QxnVyU/edit?usp=sharing). 2 | 3 | The `coalgebras` object package contains common definitions used in the different gists: `FinalCoalgebra`, `CofreeCoalgebra`, `F-Coalgebra`, `IO-Coalgebra`, etc. The `cats` and `scalaz` subpackages contains the following gists (note: `cats` version under preparation). 4 | 5 | ### Machines 6 | 7 | * `automata.scala` ([scalaz](scalaz/automata.scala),cats), sample definition of a Moore automaton as an IO-coalgebra, i.e. as an interpretation of an input algebra over a state-based language; it's used throughout the other gists. 8 | * `automatasample.scala` ([scalaz](scalaz/automatasample.scala),cats), sample instantiation of the Moore automaton. 9 | 10 | ### Programming machines 11 | 12 | Given the IO language of the automaton, we can implement different kinds of programs over it. The IO language just provides the "instructions" or buttons of the machine, which can be combined as we wish: monadically, applicatively, etc. 13 | 14 | * `programmingimperatively.scala` ([scalaz](scalaz/programmingimperatively.scala),cats). Example of imperative programming over Moore automata. 15 | * `programmingapplicatively.scala` ([scalaz](scalaz/programmingapplicatively.scala),cats). Sometimes, monadic combinators are not really needed. In that case, we can simply use applicative ones. 16 | * `programmingwithexceptions.scala` ([scalaz](scalaz/programmingwithexceptions.scala),cats). We can also use pattern matching in for-comprehensions, with the help of the `MonadError` API. 17 | 18 | ### Universal machines 19 | 20 | These are final and cofree coalgebra instantiations of Moore machines. 21 | 22 | * `finaladhoc.scala` ([scalaz](scalaz/finaladhoc.scala),cats), represents the behaviour of Moore automata in terms of the accepted language. 23 | * `cofreecochurch.scala` ([scalaz](scalaz/cofreecochurch.scala),cats), represents the behaviour implicitly, in terms of the Church's dual encoding of greatest fix points. 24 | * `cofreecomonad.scala` ([scalaz](scalaz/cofreecomonad.scala),cats), uses the cofree comonad for F-algebras. 25 | * `cofreeactor.scala` ([scalaz](scalaz/cofreeactor.scala),cats), uses actors to provide a framework for execution in terms of cofree coalgebras. 26 | * `cofreeweb.scala` ([scalaz](scalaz/cofreeweb.scala),cats), does the same for a Web-based interface. 
27 | 28 | -------------------------------------------------------------------------------- /src/test/scala/coalgebras/package.scala: -------------------------------------------------------------------------------- 1 | package org.hablapps.gist 2 | 3 | package object coalgebras{ 4 | 5 | /* Algebras */ 6 | 7 | import scalaz.~> 8 | type HK_F_Algebra[F[_[_], _], G[_]] = F[G,?]~>G 9 | 10 | /* Coalgebras */ 11 | 12 | type F_Coalgebra[F[_],S]= 13 | S=>F[S] 14 | 15 | /* IO Coalgebras */ 16 | 17 | object IO{ 18 | 19 | type Coalgebra[IOAlg[_[_]],Step[_,_],S]= 20 | IOAlg[Step[S,?]] 21 | 22 | import scalaz.StateT 23 | 24 | type CoalgebraFromFAlgebra[F[_[_],_],Step[_,_],S]= 25 | Coalgebra[HK_F_Algebra[F,?[_]], Step, S] 26 | } 27 | 28 | import scalaz.StateT 29 | type Entity[F[_],InputADT[_],S] = 30 | HK_F_Algebra[λ[(_[_],T) => InputADT[T]], StateT[F,S,?]] 31 | // More simply: InputADT~>StateT[F,S,?] 32 | 33 | /* Final coalgebras */ 34 | 35 | trait FinalCoalgebra[Final, Coalg[_]]{ 36 | def coalgebra: Coalg[Final] 37 | def unfold[X](coalg: Coalg[X]): X => Final 38 | } 39 | 40 | object FinalCoalgebra{ 41 | object Syntax{ 42 | implicit class Unfold[Coalg[_],X](coalg: Coalg[X]){ 43 | def unfold[Fi](implicit Fi: FinalCoalgebra[Fi,Coalg]) = 44 | Fi.unfold(coalg) 45 | } 46 | } 47 | 48 | implicit def toCoalg[Coalg[_],Fi](implicit Fi: FinalCoalgebra[Fi,Coalg]): Coalg[Fi] = 49 | Fi.coalgebra 50 | } 51 | 52 | /* Cofree coalgebras */ 53 | 54 | 55 | // We need this version of the cofree coalgebra type class for 56 | // the Web-based universal machine 57 | trait CofreeCoalgebra2[Cofree[_], Coalg[_]] { 58 | 59 | type YCat[_] 60 | 61 | def machine[Y]: Coalg[Cofree[Y]] 62 | def label[Y: YCat](cy: Cofree[Y]): Y 63 | def trace[X: Coalg, Y: YCat](f: X => Y): X => Cofree[Y] 64 | 65 | def trace[X: Coalg: YCat]: X => Cofree[X] = 66 | trace[X,X](identity[X]) 67 | 68 | } 69 | 70 | trait CofreeCoalgebra[Cofree[_], Coalg[_]]{ 71 | 72 | def machine[Y]: Coalg[Cofree[Y]] 73 | def label[X](cx: Cofree[X]): X 74 | def trace[X: Coalg, Y](f: X => Y): X => Cofree[Y] 75 | 76 | def trace[X: Coalg]: X => Cofree[X] = 77 | trace[X,X](identity[X]) 78 | } 79 | 80 | object CofreeCoalgebra{ 81 | import scalaz.Comonad 82 | 83 | implicit def Comonad[Cofree[_],Coalg[_]](implicit C: CofreeCoalgebra[Cofree,Coalg]) = 84 | new Comonad[Cofree]{ 85 | def copoint[X](cx: Cofree[X]): X = 86 | C.label(cx) 87 | 88 | def cobind[X, Y](cx: Cofree[X])(f: Cofree[X] => Y): Cofree[Y] = 89 | C.trace(f)(C.machine[X])(cx) 90 | 91 | def map[X, Y](cx: Cofree[X])(f: X => Y): Cofree[Y] = 92 | C.trace{ cx2: Cofree[X] => f(C.label(cx2)) }(C.machine[X])(cx) 93 | } 94 | 95 | implicit def toFinalCoalgebra[Cofree[_],Coalg[_]](implicit C: CofreeCoalgebra[Cofree,Coalg]) = 96 | new FinalCoalgebra[Cofree[Unit],Coalg]{ 97 | def coalgebra = C.machine[Unit] 98 | def unfold[X](coalg: Coalg[X]) = C.trace{_ : X => ()}(coalg) 99 | } 100 | 101 | object Syntax{ 102 | implicit class CofreeCoalgebraOps[Cofree[_],Coalg[_],Y]( 103 | cf: Cofree[Y])(implicit C: CofreeCoalgebra[Cofree,Coalg]){ 104 | def label(): Y = C.label(cf) 105 | } 106 | } 107 | } 108 | 109 | } -------------------------------------------------------------------------------- /src/test/scala/coalgebras/scalaz/automata.scala: -------------------------------------------------------------------------------- 1 | package org.hablapps.gist 2 | package coalgebras 3 | package scalazimpl 4 | 5 | import scalaz.{StateT, State} 6 | 7 | /* Coalgebra specification */ 8 | 9 | trait Automata[F[_],I,S] extends Automata.Input[I,StateT[F,S,?]]{ 10 | 
type Program[T]=StateT[F,S,T] 11 | } 12 | 13 | object Automata{ 14 | 15 | /* Input language of the machine */ 16 | 17 | trait Input[I,P[_]]{ 18 | def isFinal(): P[Boolean] 19 | def next(i: I): P[Unit] 20 | } 21 | 22 | object Input{ 23 | 24 | object Syntax{ 25 | def isFinal[I,P[_]]()(implicit I: Input[I,P]) = I.isFinal() 26 | def next[I,P[_]](i: I)(implicit I: Input[I,P]) = I.next(i) 27 | } 28 | 29 | abstract sealed class ADT[I,_] 30 | case class IsFinal[I]() extends ADT[I,Boolean] 31 | case class Next[I](i: I) extends ADT[I,Unit] 32 | 33 | type InputF[I,P[_],T]=ADT[I,T] 34 | 35 | type IOAutomata[F[_],I,S]= 36 | IO.CoalgebraFromFAlgebra[InputF[I,?[_],?],StateT[F,?,?],S] 37 | } 38 | 39 | /* Automata are entities */ 40 | 41 | implicit def toEntity[F[_],I,S](automata: Automata[F,I,S]): Entity[F,Input.ADT[I,?],S] = 42 | new Entity[F,Input.ADT[I,?],S]{ 43 | def apply[X](input: Input.ADT[I,X]): StateT[F,S,X] = input match{ 44 | case Input.IsFinal() => automata.isFinal 45 | case Input.Next(i) => automata.next(i) 46 | } 47 | } 48 | 49 | /* Auxiliary helpers */ 50 | 51 | import scalaz.Monad 52 | 53 | def apply[F[_]: Monad,I,S](_isFinal: S => Boolean, _next: I => S => S): Automata[F,I,S] = 54 | new Automata[F,I,S]{ 55 | val M = StateT.stateTMonadState[S,F] 56 | def isFinal(): Program[Boolean] = M.gets(_isFinal) 57 | def next(i: I): Program[Unit] = M.modify(_next(i)) 58 | } 59 | 60 | object Syntax{ 61 | implicit class AutomataOps[F[_]: Monad,I,S](s: S)(implicit A: Automata[F,I,S]){ 62 | def isFinal(): F[Boolean] = A.isFinal().eval(s) 63 | def next(i: I): F[S] = A.next(i).exec(s) 64 | } 65 | } 66 | } 67 | -------------------------------------------------------------------------------- /src/test/scala/coalgebras/scalaz/automatasample.scala: -------------------------------------------------------------------------------- 1 | package org.hablapps.gist 2 | package coalgebras 3 | package scalazimpl 4 | 5 | import org.scalatest.{FlatSpec, Matchers} 6 | 7 | object AnAutomata{ 8 | 9 | /* A particular automata */ 10 | 11 | import scalaz.Monad 12 | 13 | implicit def Even[F[_]: Monad]: Automata[F,Boolean,Int] = 14 | Automata[F, Boolean, Int]( 15 | _ % 2 == 0, 16 | i => _ + (if (i) 1 else 0)) 17 | } 18 | 19 | class AnAutomata extends FlatSpec with Matchers{ 20 | import AnAutomata._ 21 | 22 | "`Even` automata" should "work with `Id`" in { 23 | import scalaz.Id, Id._ 24 | 25 | Even[Id].isFinal().eval(1) shouldBe false 26 | Even[Id].isFinal().eval(2) shouldBe true 27 | 28 | Even[Id].next(true).exec(1) shouldBe 2 29 | Even[Id].next(false).exec(1) shouldBe 1 30 | 31 | // Using syntactic helpers 32 | 33 | import Automata.Syntax._ 34 | 35 | 1.isFinal() shouldBe false 36 | 2.isFinal() shouldBe true 37 | 38 | 1.next(true) shouldBe 2 39 | 1.next(false) shouldBe 1 40 | } 41 | 42 | "`Even` automata" should """work with `String \/ ?`""" in { 43 | import scalaz.\/, \/._ 44 | type Errorful[T]=String\/T 45 | 46 | Even[Errorful].isFinal().eval(1) shouldBe right(false) 47 | Even[Errorful].isFinal().eval(2) shouldBe right(true) 48 | 49 | Even[Errorful].next(true).exec(1) shouldBe right(2) 50 | Even[Errorful].next(false).exec(1) shouldBe right(1) 51 | 52 | // Using syntactic helpers 53 | 54 | import Automata.Syntax._, scalaz.syntax.either._ 55 | 56 | 1.isFinal() shouldBe false.right 57 | 2.isFinal() shouldBe true.right 58 | 59 | 1.next(true) shouldBe 2.right 60 | 1.next(false) shouldBe 1.right 61 | } 62 | } 63 | -------------------------------------------------------------------------------- 
/src/test/scala/coalgebras/scalaz/cofreeactor.scala: -------------------------------------------------------------------------------- 1 | package org.hablapps.gist 2 | package coalgebras 3 | package scalazimpl 4 | 5 | import akka.actor.{Actor, ActorLogging, ActorRef, ActorSystem, Props} 6 | import akka.pattern.{ask, pipe} 7 | import akka.util.Timeout 8 | import scala.concurrent.{Await, Future} 9 | import scala.concurrent.duration._ 10 | import scala.reflect.ClassTag 11 | import scalaz.{Const, StateT} 12 | import scalaz.std.scalaFuture._ 13 | 14 | import Automata.Input.{Next, IsFinal} 15 | 16 | object ActorEntity { 17 | 18 | trait Proof 19 | object Proof { 20 | implicit val instance: Proof = new Proof {} 21 | } 22 | 23 | case object Label 24 | case class LabelResponse[Y](y: Y) 25 | 26 | // ACTOR 27 | 28 | class AutomataActor[I: ClassTag, X, Y]( 29 | machine: Automata[Future, I, X], 30 | initialState: X, 31 | f: X => Y) extends Actor { 32 | 33 | import context.dispatcher 34 | 35 | var state: Future[X] = Future.successful(initialState) 36 | 37 | def receive = { 38 | case Next(i: I) => 39 | val step = state flatMap machine.next(i).run 40 | state = step.map(_._1) 41 | step.map(_._2) pipeTo sender 42 | case IsFinal() => 43 | val step = state flatMap machine.isFinal().run 44 | state = step.map(_._1) 45 | step.map(_._2) pipeTo sender 46 | case Label => 47 | state map (f andThen LabelResponse.apply) pipeTo sender 48 | } 49 | } 50 | 51 | object AutomataActor { 52 | def props[I: ClassTag, X, Y]( 53 | machine: Automata[Future, I, X], 54 | initialState: X, 55 | f: X => Y) = Props(new AutomataActor(machine, initialState, f)) 56 | } 57 | 58 | // COFREE COALGEBRA FOR ENTITIES 59 | 60 | type CofreeActor[Y] = Const[ActorRef,Y] 61 | 62 | def cofree[I: ClassTag]( 63 | as: ActorSystem)(implicit 64 | timeout: Timeout) = 65 | new CofreeCoalgebra2[CofreeActor, Automata[Future, I, ?]] { 66 | import as.dispatcher 67 | 68 | type YCat[Y] = Proof 69 | 70 | def label[Y: YCat](cx: Const[ActorRef, Y]): Y = 71 | Await.result((cx.getConst ? Label).mapTo[LabelResponse[Y]].map(_.y), 5 seconds) 72 | 73 | def machine[Y]: Automata[Future, I, Const[ActorRef, Y]] = 74 | new Automata[Future, I, Const[ActorRef, Y]] { 75 | def isFinal(): StateT[Future, Const[ActorRef, Y], Boolean] = 76 | StateT { case c@Const(actor) => 77 | (actor ? IsFinal()) 78 | .mapTo[Boolean] 79 | .map((c, _)) 80 | } 81 | def next(i: I): StateT[Future, Const[ActorRef, Y], Unit] = 82 | StateT { case c@Const(actor) => 83 | (actor ? Next(i)) 84 | .mapTo[Unit] 85 | .map((c, _)) 86 | } 87 | } 88 | 89 | def trace[X: Automata[Future, I, ?], Y: YCat]( 90 | f: X => Y): X => Const[ActorRef, Y] = 91 | x => Const[ActorRef, Y] { 92 | as.actorOf(AutomataActor.props[I, X, Y](implicitly[Automata[Future, I, X]], x, f)) 93 | } 94 | } 95 | 96 | } 97 | 98 | object ActorEntityTest extends App { 99 | import ActorEntity._ 100 | import AnAutomata._ 101 | 102 | implicit val system = ActorSystem("cofree-actor") 103 | import system.dispatcher 104 | implicit val timeout = Timeout(5 seconds) 105 | 106 | val CofreeAutomata = cofree[Boolean](system) 107 | 108 | val Const(even) = CofreeAutomata.trace[Int, String](_.toString)(Even[Future], implicitly)(0) 109 | 110 | system.actorOf(Props(new Actor with ActorLogging { 111 | 112 | even ! IsFinal() 113 | even ! IsFinal() 114 | even ! IsFinal() 115 | even ! Next(true) 116 | even ! IsFinal() 117 | even ! IsFinal() 118 | even ! IsFinal() 119 | even ! Next(false) 120 | even ! IsFinal() 121 | even ! IsFinal() 122 | even ! 
IsFinal() 123 | 124 | def receive = { 125 | case x => log.info(s"Received: $x") 126 | } 127 | 128 | })) 129 | } 130 | -------------------------------------------------------------------------------- /src/test/scala/coalgebras/scalaz/cofreecochurch.scala: -------------------------------------------------------------------------------- 1 | package org.hablapps.gist 2 | package coalgebras 3 | package scalazimpl 4 | 5 | object CoChurchFinalCoalgebra{ 6 | import scalaz.{Monad, State, StateT, ~>} 7 | import scalaz.Id, Id._ 8 | import scalaz.syntax.monad._ 9 | 10 | import Automata._, Input._ 11 | 12 | trait Cofree[F[_],I,Y]{ 13 | type X 14 | val x: X 15 | val f: X => Y 16 | val coalg: Automata[F,I,X] 17 | } 18 | 19 | def cofree[F[_]: Monad, I] = 20 | new CofreeCoalgebra[Cofree[F,I,?], Automata[F,I,?]]{ self => 21 | 22 | def machine[Y] = new Automata[F, I, Cofree[F,I,Y]]{ 23 | 24 | def isFinal() = StateT{ cofree => 25 | (cofree.coalg.isFinal().eval(cofree.x).map((cofree,_))) 26 | } 27 | 28 | def next(i: I) = StateT{ cofree => 29 | cofree.coalg.next(i).exec(cofree.x).map{ 30 | case x2 => (new Cofree[F,I,Y]{ 31 | type X = cofree.X 32 | val x = x2 33 | val f = cofree.f 34 | val coalg = cofree.coalg 35 | },()) 36 | } 37 | } 38 | } 39 | 40 | def label[X](cf: Cofree[F,I,X]): X = 41 | cf.f(cf.x) 42 | 43 | def trace[_X,Y](_f: _X => Y)(implicit 44 | _coalg: Automata[F,I,_X]): _X => Cofree[F,I,Y] = { _x: _X => 45 | new Cofree[F,I,Y]{ 46 | type X = _X 47 | val x = _x 48 | val f = _f 49 | val coalg = _coalg 50 | } 51 | } 52 | } 53 | } 54 | 55 | import org.scalatest.{FlatSpec, Matchers} 56 | 57 | class CoChurchFinalCoalgebra extends FlatSpec with Matchers{ 58 | import CofreeCoalgebra._, Syntax._ 59 | import CoChurchFinalCoalgebra._ 60 | import Automata.Syntax._, AnAutomata._ 61 | import scalaz.Id, Id._ 62 | 63 | "Cofree comonad cofree coalgebra" should "simulate `Even`" in { 64 | 65 | implicit val CofreeAutomata = cofree[Id,Boolean] 66 | implicit val CofreeAutomataMachine = CofreeAutomata.machine[String] 67 | 68 | // Language accepted from state 0 69 | 70 | val initial0: Cofree[Id,Boolean,String] = 71 | CofreeAutomata.trace[Int,String](_.toString)(Even[Id])(0) 72 | 73 | // Behaviours are machines! 74 | 75 | initial0.isFinal() shouldBe true 76 | initial0.label() shouldBe "0" 77 | 78 | initial0.next(true).isFinal() shouldBe false 79 | (initial0.next(true): Cofree[Id,Boolean,String]).label() shouldBe "1" 80 | } 81 | 82 | } 83 | -------------------------------------------------------------------------------- /src/test/scala/coalgebras/scalaz/cofreecomonad.scala: -------------------------------------------------------------------------------- 1 | package org.hablapps.gist 2 | package coalgebras 3 | package scalazimpl 4 | 5 | object CofreezFinalCoalgebra{ 6 | import scalaz.{Monad, State, StateT, ~>}, scalaz.Cofree._ 7 | 8 | import Automata._, Input._ 9 | 10 | type CofreeF[F[_],I,S] = ADT[I,?] 
~> λ[T=>F[(S, T)]] 11 | type Cofree[F[_],I,Y] = scalaz.Cofree[CofreeF[F,I,?],Y] 12 | 13 | def cofree[F[_]: Monad, I] = 14 | new CofreeCoalgebra[Cofree[F,I,?], Automata[F,I,?]]{ self => 15 | 16 | def machine[Y] = new Automata[F, I, Cofree[F,I,Y]]{ 17 | def isFinal() = StateT{ _.tail(IsFinal()) } 18 | def next(i: I) = StateT{ _.tail(Next(i)) } 19 | } 20 | 21 | def label[X](cf: Cofree[F,I,X]): X = 22 | cf.extract 23 | 24 | import scalaz.std.tuple._, scalaz.syntax.functor._, scalaz.syntax.bitraverse._ 25 | 26 | def trace[X,Y](f: X => Y)(implicit 27 | coalg: Automata[F,I,X]): X => Cofree[F,I,Y] = { x: X => 28 | scalaz.Cofree[CofreeF[F,I,?],Y](f(x), new CofreeF[F,I,Cofree[F,I,Y]]{ 29 | def apply[O](input: ADT[I,O]) = 30 | coalg(input).apply(x) 31 | .map(_.bimap(trace(f)(coalg),identity)) 32 | }) 33 | } 34 | } 35 | } 36 | 37 | import org.scalatest.{FlatSpec, Matchers} 38 | 39 | class CofreezFinalCoalgebra extends FlatSpec with Matchers{ 40 | import CofreeCoalgebra._, Syntax._ 41 | import CofreezFinalCoalgebra._ 42 | import Automata.Syntax._, AnAutomata._ 43 | import scalaz.Id, Id._ 44 | 45 | "Cofree comonad cofree coalgebra" should "simulate `Even`" in { 46 | 47 | implicit val CofreeAutomata = cofree[Id,Boolean] 48 | implicit val CofreeAutomataMachine = CofreeAutomata.machine[String] 49 | 50 | // Language accepted from state 0 51 | 52 | val initial0: Cofree[Id,Boolean,String] = 53 | CofreeAutomata.trace[Int,String](_.toString)(Even[Id])(0) 54 | 55 | // Behaviours are machines! 56 | 57 | initial0.isFinal() shouldBe true 58 | initial0.label() shouldBe "0" 59 | 60 | initial0.next(true).isFinal() shouldBe false 61 | (initial0.next(true): Cofree[Id,Boolean,String]).label() shouldBe "1" 62 | } 63 | 64 | } 65 | -------------------------------------------------------------------------------- /src/test/scala/coalgebras/scalaz/cofreeweb.scala: -------------------------------------------------------------------------------- 1 | package org.hablapps.gist 2 | package coalgebras 3 | package scalazimpl 4 | 5 | import akka.http.scaladsl._ 6 | import akka.http.scaladsl.model._ 7 | import akka.http.scaladsl.unmarshalling._ 8 | import akka.http.scaladsl.marshalling._ 9 | import akka.stream.Materializer 10 | 11 | import scala.concurrent.Future 12 | import scalaz.{StateT, ~>} 13 | import scalaz.std.scalaFuture._ 14 | 15 | import CoChurchFinalCoalgebra.Cofree 16 | import Automata.Input.ADT 17 | 18 | object WebEntity { 19 | 20 | trait CofreeWeb[I, Y] { 21 | type X 22 | var x: X 23 | val f: X => Y 24 | val coalg: Automata[Future, I, X] 25 | 26 | val handler: HttpRequest => Future[HttpResponse] 27 | } 28 | 29 | trait MyTypeClass[I] { 30 | def fromEntity: FromEntityUnmarshaller[I] 31 | } 32 | 33 | def cofree[I]( 34 | host: String, 35 | port: Int, 36 | httpExt: HttpExt)(implicit 37 | mat: Materializer, 38 | typ: MyTypeClass[I]) = 39 | new CofreeCoalgebra2[CofreeWeb[I, ?], Automata[Future, I, ?]] { 40 | import httpExt.system.dispatcher 41 | 42 | type YCat[Y] = ToEntityMarshaller[Y] 43 | 44 | def label[Y: YCat](cy: CofreeWeb[I, Y]): Y = 45 | cy.f(cy.x) 46 | 47 | def machine[Y]: Automata[Future, I, CofreeWeb[I, Y]] = 48 | new Automata[Future, I, CofreeWeb[I, Y]] { 49 | def isFinal(): StateT[Future, CofreeWeb[I, Y], Boolean] = 50 | StateT[Future, CofreeWeb[I, Y], Boolean] { cw => 51 | cw.coalg 52 | .isFinal() 53 | .eval(cw.x) 54 | .map((cw, _)) 55 | } 56 | def next(i: I): StateT[Future, CofreeWeb[I, Y], Unit] = 57 | StateT[Future, CofreeWeb[I, Y], Unit] { cw => 58 | cw.coalg 59 | .next(i) 60 | .eval(cw.x) 61 | .map((cw, _)) 62 
| } 63 | } 64 | 65 | def trace[_X: Automata[Future, I, ?], Y: YCat](_f: _X => Y): _X => CofreeWeb[I, Y] = 66 | _x => new CofreeWeb[I, Y] { 67 | type X = _X 68 | 69 | var x: X = _x 70 | val coalg = implicitly[Automata[Future, I, X]] 71 | val handler: HttpRequest => Future[HttpResponse] = { 72 | case HttpRequest(HttpMethods.POST, Uri.Path("/next"), _, entity, _) => 73 | typ.fromEntity(entity) flatMap { input => 74 | coalg.next(input).run(x) map { case (s, o) => 75 | x = s 76 | HttpResponse() 77 | } 78 | } 79 | case HttpRequest(HttpMethods.GET, Uri.Path("/isFinal"), _, _, _) => 80 | coalg.isFinal().run(x) map { case (s, o) => 81 | x = s 82 | HttpResponse(entity = HttpEntity(o.toString)) 83 | } 84 | case HttpRequest(HttpMethods.GET, Uri.Path("/"), _, _, _) => 85 | Marshal(f(x)).to[HttpResponse] 86 | } 87 | 88 | httpExt.bindAndHandleAsync(handler, host, port) 89 | 90 | val f: X => Y = _f 91 | } 92 | 93 | } 94 | } 95 | 96 | import akka.actor.ActorSystem 97 | import akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport._ 98 | import akka.stream.ActorMaterializer 99 | import spray.json._ 100 | 101 | object WebEntityTest extends App with DefaultJsonProtocol { 102 | import WebEntity._ 103 | import AnAutomata._ 104 | 105 | case class Wrapper[A](a: A) 106 | object Wrapper { 107 | implicit def jsonFormat[A: JsonFormat]: RootJsonFormat[Wrapper[A]] = 108 | jsonFormat1(Wrapper.apply[A]) 109 | } 110 | 111 | implicit val system = ActorSystem("cofree-web") 112 | import system.dispatcher 113 | implicit val materializer = ActorMaterializer() 114 | implicit object foo extends MyTypeClass[Boolean] { 115 | def fromEntity: FromEntityUnmarshaller[Boolean] = 116 | implicitly[FromEntityUnmarshaller[Wrapper[Boolean]]].map(_.a) 117 | } 118 | val httpExt = Http() 119 | 120 | implicit val CofreeAutomata = cofree[Boolean]("localhost", 8080, httpExt) 121 | 122 | CofreeAutomata.trace[Int, String](_.toString)(Even[Future], implicitly[ToEntityMarshaller[String]])(0) 123 | 124 | } -------------------------------------------------------------------------------- /src/test/scala/coalgebras/scalaz/finaladhoc.scala: -------------------------------------------------------------------------------- 1 | package org.hablapps.gist 2 | package coalgebras 3 | package scalazimpl 4 | 5 | object AdHocFinalCoalgebra{ 6 | 7 | import scalaz.Id, Id.Id 8 | 9 | type Language[I] = List[I]=>Boolean 10 | 11 | object Language{ 12 | 13 | implicit def apply[I]: FinalCoalgebra[Language[I], Automata[Id,I,?]] = 14 | new FinalCoalgebra[Language[I], Automata[Id,I,?]]{ 15 | 16 | def coalgebra: Automata[Id,I,Language[I]] = 17 | Automata(_(Nil), 18 | input => language => word => language(input::word)) 19 | 20 | def unfold[X](coalg: Automata[Id,I,X]): X => Language[I] = 21 | x => { 22 | case Nil => coalg.isFinal().eval(x) 23 | case input::tail => 24 | unfold(coalg)(coalg.next(input).exec(x))(tail) 25 | } 26 | } 27 | } 28 | } 29 | 30 | import org.scalatest.{FlatSpec, Matchers} 31 | 32 | class AdHocFinalCoalgebra extends FlatSpec with Matchers{ 33 | import AdHocFinalCoalgebra._, Automata.Syntax._, FinalCoalgebra.Syntax._ 34 | 35 | "Ad-hoc final coalgebra" should "simulate `Even`" in { 36 | import AnAutomata._ 37 | import scalaz.Id, Id._ 38 | 39 | implicit val BoolLanguage = Language[Boolean] 40 | implicit val BoolLanguageCoalg = BoolLanguage.coalgebra 41 | 42 | // Language accepted from state 0 43 | 44 | val initial0: Language[Boolean] = BoolLanguage.unfold(Even[Id])(0) 45 | 46 | initial0(List()) shouldBe true 47 | initial0(List(true,false,true)) shouldBe true 
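    // Recall that `Even` adds 1 to its state on a `true` input and leaves it
    // untouched on `false`, and a state is final iff it is even. Starting from
    // 0, a word is therefore accepted exactly when it contains an even number
    // of `true`s: List(true, false, true) carries two of them.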
48 | 49 | // Language accepted from state 1 50 | 51 | val initial1: Language[Boolean] = BoolLanguage.unfold(Even[Id])(1) 52 | 53 | initial1(List()) shouldBe false 54 | initial1(List(true,false,true)) shouldBe false 55 | initial1(List(true)) shouldBe true 56 | 57 | // Behaviours are machines! 58 | 59 | initial0.isFinal() shouldBe true 60 | initial1.isFinal() shouldBe false 61 | 62 | initial0.next(true).isFinal() shouldBe false 63 | initial1.next(true).isFinal() shouldBe true 64 | } 65 | 66 | 67 | } 68 | -------------------------------------------------------------------------------- /src/test/scala/coalgebras/scalaz/programmingapplicatively.scala: -------------------------------------------------------------------------------- 1 | package org.hablapps.gist 2 | package coalgebras 3 | package scalazimpl 4 | 5 | import org.scalatest.{FlatSpec, Matchers} 6 | 7 | /* Programming an automata applicatively */ 8 | 9 | object ApplicativeProgramming{ 10 | import scalaz.Apply, scalaz.syntax.apply._ 11 | import Automata.Input.Syntax._ 12 | 13 | def isTTFAccepted[P[_]: Automata.Input[Boolean,?[_]]: Apply]: P[Boolean] = 14 | next(true) *> 15 | next(true) *> 16 | next(false) *> 17 | isFinal 18 | 19 | def isAccepted[P[_]: Automata.Input[Boolean,?[_]]: Apply]( 20 | boolList: List[Boolean]): P[Boolean] = boolList match { 21 | case Nil => isFinal[Boolean,P] 22 | case head::tail => next(head) *> isAccepted(tail) 23 | } 24 | 25 | } 26 | 27 | class ApplicativeProgramming extends FlatSpec with Matchers{ 28 | import ApplicativeProgramming._ 29 | 30 | "Applicative programs" should "work" in { 31 | import AnAutomata._ 32 | import scalaz.Id, Id._ 33 | 34 | val EvenId = Even[Id] 35 | 36 | isTTFAccepted[EvenId.Program].eval(0) shouldBe true 37 | 38 | isAccepted(List()).eval(0) shouldBe true 39 | isAccepted(List(true)).eval(0) shouldBe false 40 | isAccepted(List(false)).eval(0) shouldBe true 41 | isAccepted(List(false,true)).eval(0) shouldBe false 42 | isAccepted(List(false,false)).eval(0) shouldBe true 43 | } 44 | 45 | } 46 | 47 | -------------------------------------------------------------------------------- /src/test/scala/coalgebras/scalaz/programmingimperatively.scala: -------------------------------------------------------------------------------- 1 | package org.hablapps.gist 2 | package coalgebras 3 | package scalazimpl 4 | 5 | import org.scalatest.{FlatSpec, Matchers} 6 | 7 | /* Programming an automata applicatively */ 8 | 9 | object ImperativeProgramming{ 10 | 11 | import scalaz.Monad, scalaz.syntax.monad._ 12 | import Automata.Input.Syntax._ 13 | 14 | def largestPrefix[P[_]: Automata.Input[Boolean,?[_]]: Monad]( 15 | boolList: List[Boolean]): P[List[Boolean]] = boolList match { 16 | case Nil => List[Boolean]().point 17 | case head::tail => 18 | next(head) >> isFinal.ifM( 19 | ifTrue = largestPrefix(tail) map ( head :: _ ), 20 | ifFalse = List[Boolean]().point 21 | ) 22 | } 23 | 24 | def split[P[_]: Automata.Input[Boolean,?[_]]: Monad]( 25 | boolList: List[Boolean]): P[(List[Boolean],List[Boolean])] = 26 | boolList match { 27 | case Nil => (List[Boolean](),List[Boolean]()).point[P] 28 | case head::tail => 29 | (next(head) >> isFinal).ifM( 30 | ifTrue = split(tail) map { 31 | case (list1,list2) => (head::list1,list2) 32 | }, 33 | ifFalse = (List[Boolean](),boolList).point 34 | ) 35 | } 36 | 37 | } 38 | 39 | class ImperativeProgramming extends FlatSpec with Matchers{ 40 | import ImperativeProgramming._ 41 | 42 | "Imperative programs" should "work" in { 43 | import AnAutomata._ 44 | import scalaz.Id, Id._ 45 | 46 
| val EvenId = Even[Id] 47 | 48 | split[EvenId.Program](List(true,true,false)).eval(0) shouldBe (List(), List(true,true,false)) 49 | split[EvenId.Program](List(false,false,true)).eval(0) shouldBe (List(false,false), List(true)) 50 | split[EvenId.Program](List(false,false,false)).eval(0) shouldBe (List(false,false,false), List()) 51 | 52 | largestPrefix[EvenId.Program](List(true,false,true)).eval(0) shouldBe List() 53 | largestPrefix[EvenId.Program](List(false,false,true)).eval(0) shouldBe List(false,false) 54 | largestPrefix[EvenId.Program](List(false,false,false)).eval(0) shouldBe List(false,false,false) 55 | 56 | } 57 | } -------------------------------------------------------------------------------- /src/test/scala/coalgebras/scalaz/programmingwithexceptions.scala: -------------------------------------------------------------------------------- 1 | package org.hablapps.gist 2 | package coalgebras 3 | package scalazimpl 4 | 5 | import org.scalatest.{FlatSpec, Matchers} 6 | 7 | /* Programming an automata with the possibility of errors */ 8 | object ProgrammingWithExceptions{ 9 | 10 | import scalaz.Monad, scalaz.syntax.monad._, Filter._ 11 | import Automata.Input.Syntax._ 12 | 13 | def test1[I, P[_]: Automata.Input[I,?[_]]: Monad: Filter]( 14 | i1: I, i2: I, i3: I): P[Unit] = 15 | for { 16 | true <- isFinal[I,P] 17 | _ <- next(i1) 18 | false <- isFinal 19 | _ <- next(i2) 20 | true <- isFinal 21 | _ <- next(i3) 22 | } yield () 23 | 24 | import ImperativeProgramming._ 25 | 26 | def isPrefix[I](l1: List[I], l2: List[I]): Boolean = (l1,l2) match { 27 | case (Nil,_) => true 28 | case (head1::tail1, head2::tail2) if head1 == head2 => isPrefix(tail1,tail2) 29 | case _ => false 30 | } 31 | 32 | def test2[P[_]: Automata.Input[Boolean,?[_]]: Monad: Filter]( 33 | l: List[Boolean]): P[Unit] = 34 | for { 35 | l1 <- largestPrefix(l) 36 | true <- isPrefix(l1,l).point 37 | l2 <- largestPrefix(l1) 38 | true <- isPrefix(l2,l1).point 39 | l3 <- largestPrefix(l2) 40 | true <- isPrefix(l3,l2).point 41 | } yield () 42 | } 43 | 44 | class ProgrammingWithExceptions extends FlatSpec with Matchers{ 45 | import ProgrammingWithExceptions._ 46 | 47 | "Programs with filter" should "work" in { 48 | import AnAutomata._ 49 | import scalaz.\/, \/._ 50 | import Filter._ 51 | 52 | val EvenEither = Even[Location \/ ?] 53 | 54 | test1[Boolean, EvenEither.Program](true,true,true).eval(0) shouldBe right(()) 55 | test2[EvenEither.Program](List(true,true,true)).eval(0) shouldBe right(()) 56 | } 57 | 58 | } -------------------------------------------------------------------------------- /src/test/scala/hello-monads/README.md: -------------------------------------------------------------------------------- 1 | From "Hello, world!" to "Hello, monad!" 2 | ============= 3 | 4 | Repo containing all code from the blog post series: 5 | 6 | * [Part I](https://blog.hablapps.com/2016/01/22/from-hello-world-to-hello-monad-part-i/) 7 | * [Part II](https://blog.hablapps.com/2017/01/09/from-hello-world-to-hello-monad-part-iiiii/) 8 | * [Part III](http://blog.hablapps.com/2017/05/30/from-hello-world-to-hello-monad-part-iiiiii/) 9 | -------------------------------------------------------------------------------- /src/test/scala/hello-monads/partI.scala: -------------------------------------------------------------------------------- 1 | package org.hablapps.gist 2 | package hello 3 | 4 | // Hello, functional world! 
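// Every step in this series follows the same recipe: a pure language of
// effects, a program written as a value of that language, an interpreter
// that actually performs the effects, and the composition of both, which
// recovers the behaviour of the original impure version.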
5 | object Step1 { 6 | 7 | /* Impure program */ 8 | def hello(): Unit = 9 | println("Hello, world!") 10 | 11 | /* Functional purification */ 12 | object Fun { 13 | 14 | // Language 15 | type IOProgram = Print 16 | case class Print(msg: String) 17 | 18 | // Program 19 | def pureHello(): IOProgram = 20 | Print("Hello, world!") 21 | 22 | // Interpreter 23 | def run(program: IOProgram): Unit = 24 | program match { 25 | case Print(msg) => println(msg) 26 | } 27 | 28 | // Composition 29 | def hello(): Unit = run(pureHello()) 30 | } 31 | 32 | } 33 | -------------------------------------------------------------------------------- /src/test/scala/hello-monads/partII.scala: -------------------------------------------------------------------------------- 1 | package org.hablapps.gist 2 | package hello 3 | 4 | import scala.io.StdIn.readLine 5 | 6 | // Say what? 7 | object Step2 { 8 | 9 | /* Impure program */ 10 | def sayWhat: String = readLine 11 | 12 | /* Functional solution */ 13 | object Fun { 14 | 15 | // Language 16 | type IOProgram[A] = IOEffect[A] 17 | 18 | sealed trait IOEffect[A] 19 | case class Write(s: String) extends IOEffect[Unit] 20 | case object Read extends IOEffect[String] 21 | 22 | // Program 23 | def pureSayWhat: IOProgram[String] = Read 24 | 25 | // Interpreter 26 | def run[A](program: IOProgram[A]): A = 27 | program match { 28 | case Write(msg) => println(msg) 29 | case Read => readLine 30 | } 31 | 32 | // Composition 33 | def sayWhat: String = run(pureSayWhat) 34 | 35 | } 36 | 37 | } 38 | 39 | // Say what? (reloaded) 40 | object Step3 { 41 | 42 | /* Impure program */ 43 | def helloSayWhat: String = { 44 | println("Hello, say something:") 45 | readLine 46 | } 47 | 48 | /* Functional solution */ 49 | object Fun { 50 | 51 | // Language 52 | sealed trait IOProgram[A] 53 | case class Single[A](e: IOEffect[A]) extends IOProgram[A] 54 | case class Sequence[A, B](p1: IOProgram[A], p2: IOProgram[B]) extends IOProgram[B] 55 | 56 | sealed trait IOEffect[A] 57 | case class Write(s: String) extends IOEffect[Unit] 58 | case object Read extends IOEffect[String] 59 | 60 | // Program 61 | def pureHelloSayWhat: IOProgram[String] = 62 | Sequence( 63 | Single(Write("Hello, say something:")), 64 | Single(Read)) 65 | 66 | // Interpreter 67 | def run[A](program: IOProgram[A]): A = 68 | program match { 69 | case Single(e) => runEffect(e) 70 | case Sequence(p1, p2) => 71 | run(p1) 72 | run(p2) 73 | } 74 | 75 | def runEffect[A](effect: IOEffect[A]): A = 76 | effect match { 77 | case Write(msg) => println(msg) 78 | case Read => readLine 79 | } 80 | 81 | // Composition 82 | def helloSayWhat: String = run(pureHelloSayWhat) 83 | 84 | } 85 | 86 | } 87 | 88 | // Echo, echo! 
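// The essential change with respect to Step3: `Sequence` now takes a
// continuation `A => IOProgram[B]` rather than a fixed second program, so the
// second effect can depend on the result of the first (here, writing back
// exactly what was read).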
89 | object Step4 { 90 | 91 | /* Impure program */ 92 | def echo: Unit = { 93 | val read: String = readLine 94 | println(read) 95 | } 96 | 97 | /* Functional solution */ 98 | object Fun { 99 | 100 | // Language 101 | sealed trait IOProgram[A] 102 | case class Single[A](e: IOEffect[A]) extends IOProgram[A] 103 | case class Sequence[A, B](p1: IOProgram[A], p2: A => IOProgram[B]) extends IOProgram[B] 104 | 105 | sealed trait IOEffect[A] 106 | case class Write(s: String) extends IOEffect[Unit] 107 | case object Read extends IOEffect[String] 108 | 109 | // Program 110 | def pureEcho: IOProgram[Unit] = 111 | Sequence( 112 | Single(Read), (read: String) => 113 | Single(Write(read))) 114 | 115 | // Interpreter 116 | def run[A](program: IOProgram[A]): A = 117 | program match { 118 | case Single(e) => runEffect(e) 119 | case Sequence(p1, p2) => 120 | val res1 = run(p1) 121 | run(p2(res1)) 122 | } 123 | 124 | def runEffect[A](effect: IOEffect[A]): A = 125 | effect match { 126 | case Write(msg) => println(msg) 127 | case Read => readLine 128 | } 129 | 130 | // Composition 131 | def echo: Unit = run(pureEcho) 132 | 133 | } 134 | 135 | } 136 | 137 | // On pure values 138 | object Step5 { 139 | 140 | /* Impure program */ 141 | def echo: String = { 142 | val read: String = readLine 143 | println(read) 144 | read 145 | } 146 | 147 | /* Functional solution */ 148 | object Fun { 149 | 150 | // Language 151 | sealed trait IOProgram[A] 152 | case class Single[A](e: IOEffect[A]) extends IOProgram[A] 153 | case class Sequence[A, B](p1: IOProgram[A], p2: A => IOProgram[B]) extends IOProgram[B] 154 | case class Value[A](a: A) extends IOProgram[A] 155 | 156 | sealed trait IOEffect[A] 157 | case class Write(s: String) extends IOEffect[Unit] 158 | case object Read extends IOEffect[String] 159 | 160 | // Program 161 | def pureEcho: IOProgram[String] = 162 | Sequence( 163 | Single(Read), (read: String) => 164 | Sequence( 165 | Single(Write(read)), (_: Unit) => 166 | Value(read) 167 | ) 168 | ) 169 | 170 | // Interpreter 171 | def run[A](program: IOProgram[A]): A = 172 | program match { 173 | case Single(e) => runEffect(e) 174 | case Sequence(p1, p2) => 175 | val res1 = run(p1) 176 | run(p2(res1)) 177 | case Value(a) => a 178 | } 179 | 180 | def runEffect[A](effect: IOEffect[A]): A = 181 | effect match { 182 | case Write(msg) => println(msg) 183 | case Read => readLine 184 | } 185 | 186 | // Composition 187 | def echo: String = run(pureEcho) 188 | 189 | } 190 | 191 | } 192 | -------------------------------------------------------------------------------- /src/test/scala/hello-monads/partIII.scala: -------------------------------------------------------------------------------- 1 | package org.hablapps.gist 2 | package hello 3 | 4 | import scala.io.StdIn.readLine 5 | 6 | /** 7 | * Code for the post 'From "Hello, world!" to "Hello, monad!"'' (part III/III) 8 | * 9 | * https://purelyfunctional.wordpress.com/?p=1195 10 | * 11 | * It continues the code for the first and second part of this series that 12 | * you can find in files ./partI.scala and ./partII.scala. 
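 *
 * Step6 below equips programs with `flatMap` and `map`, so that they can be
 * written with infix notation and for-comprehensions; Step7 then abstracts
 * the sequencing machinery away from the particular set of effects.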
13 | */ 14 | 15 | // First objection: Readability 16 | 17 | object Step6 { 18 | 19 | // Language 20 | 21 | sealed trait IOProgram[A]{ 22 | def flatMap[B](f: A => IOProgram[B]): IOProgram[B] = 23 | Sequence(this, f) 24 | def map[B](f: A => B): IOProgram[B] = 25 | flatMap(f andThen Value.apply) 26 | } 27 | case class Single[A](e: IOProgram.Effect[A]) extends IOProgram[A] 28 | case class Sequence[A, B](p1: IOProgram[A], 29 | p2: A => IOProgram[B]) extends IOProgram[B] 30 | case class Value[A](a: A) extends IOProgram[A] 31 | 32 | object IOProgram{ 33 | 34 | sealed trait Effect[A] 35 | case class Write(s: String) extends Effect[Unit] 36 | case object Read extends Effect[String] 37 | 38 | object Syntax{ 39 | def read(): IOProgram[String] = 40 | Single(Read) 41 | 42 | def write(msg: String): IOProgram[Unit] = 43 | Single(Write(msg)) 44 | } 45 | } 46 | 47 | // Program using `flatMap` and `map` operators 48 | 49 | object ProgramWithInfixOps{ 50 | import IOProgram.Syntax._ 51 | 52 | def echo(): IOProgram[String] = 53 | read() flatMap { msg => 54 | write(msg) map { _ => 55 | msg 56 | } 57 | } 58 | } 59 | 60 | // Program using for-comprehensions 61 | 62 | object ProgramWithForComprehensions{ 63 | import IOProgram.Syntax._ 64 | 65 | def echo(): IOProgram[String] = for{ 66 | msg <- read() 67 | _ <- write(msg) 68 | } yield msg 69 | } 70 | 71 | 72 | // Interpreter: Doesn't change from previous designs 73 | 74 | import ProgramWithInfixOps._ 75 | 76 | def run[A](program: IOProgram[A]): A = 77 | program match { 78 | case Single(e) => runEffect(e) 79 | case Sequence(p1, p2) => 80 | val res1 = run(p1) 81 | run(p2(res1)) 82 | case Value(a) => a 83 | } 84 | 85 | def runEffect[A](effect: IOProgram.Effect[A]): A = 86 | effect match { 87 | case IOProgram.Write(msg) => println(msg) 88 | case IOProgram.Read => readLine 89 | } 90 | 91 | // Composition: doesn't change either 92 | 93 | def consoleEcho: String = run(echo()) 94 | 95 | } 96 | 97 | 98 | // Modularity problems: Helo, Monad! 
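// Step7 factors the generic sequencing machinery (`Single`, `Sequence`,
// `Value`) out into `ImperativeProgram[Effect[_], A]`, parameterised by the
// instruction set (essentially, a free monad over `Effect`). `IOProgram`
// becomes a mere type alias, and the `echo` program compiles unchanged.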
99 | object Step7 { 100 | 101 | // Other imperative language 102 | object MonolithicDSL{ 103 | 104 | sealed trait FileSystemProgram[A] 105 | case class Single[A](e: FileSystemProgram.Effect[A]) extends FileSystemProgram[A] 106 | case class Sequence[A, B](p1: FileSystemProgram[A], p2: A => FileSystemProgram[B]) extends FileSystemProgram[B] 107 | case class Value[A](a: A) extends FileSystemProgram[A] 108 | 109 | object FileSystemProgram{ 110 | sealed abstract class Effect[_] 111 | case class ReadFile(path: String) extends Effect[String] 112 | case class DeleteFile(path: String) extends Effect[Unit] 113 | case class WriteFile(path: String, content: String) extends Effect[Unit] 114 | } 115 | } 116 | 117 | // Abstract imperative DSL 118 | 119 | sealed trait ImperativeProgram[Effect[_],A]{ 120 | def flatMap[B](f: A => ImperativeProgram[Effect,B]): ImperativeProgram[Effect,B] = 121 | Sequence(this, f) 122 | def map[B](f: A => B): ImperativeProgram[Effect,B] = 123 | flatMap(f andThen Value.apply) 124 | } 125 | case class Single[Effect[_],A](e: Effect[A]) extends ImperativeProgram[Effect,A] 126 | case class Sequence[Effect[_],A, B](p1: ImperativeProgram[Effect,A], 127 | p2: A => ImperativeProgram[Effect,B]) extends ImperativeProgram[Effect,B] 128 | case class Value[Effect[_],A](a: A) extends ImperativeProgram[Effect,A] 129 | 130 | // Modular redefinition of IO programs 131 | 132 | type IOProgram[A] = ImperativeProgram[IOProgram.Effect, A] 133 | 134 | object IOProgram{ 135 | 136 | sealed trait Effect[A] 137 | case class Write(s: String) extends Effect[Unit] 138 | case object Read extends Effect[String] 139 | 140 | object Syntax{ 141 | def read(): IOProgram[String] = 142 | Single(Read) 143 | 144 | def write(msg: String): IOProgram[Unit] = 145 | Single(Write(msg)) 146 | } 147 | } 148 | 149 | // Program: doesn't change at all! 150 | 151 | import IOProgram.Syntax._ 152 | 153 | def echo(): IOProgram[String] = for{ 154 | msg <- read() 155 | _ <- write(msg) 156 | } yield msg 157 | 158 | } 159 | 160 | -------------------------------------------------------------------------------- /src/test/scala/objectalgebras-vs-free-vs-eff/Eff.scala: -------------------------------------------------------------------------------- 1 | package org.atnos 2 | 3 | import org.atnos.eff._ 4 | import all._ 5 | import org.atnos.eff.syntax.all._ 6 | import cats.data._ 7 | import cats.implicits._ 8 | 9 | object Main { 10 | 11 | // type alias for indicating the possibility of 12 | // creating IO instructions in a stack of effects R 13 | type _io[R] = IOInst |= R 14 | 15 | // IO Instructions 16 | sealed abstract class IOInst[_] 17 | case class Read() extends IOInst[String] 18 | case class Write(msg: String) extends IOInst[Unit] 19 | 20 | // IO Programs 21 | def read[R :_io]: Eff[R, String] = 22 | send[IOInst, R, String](Read()) 23 | 24 | def write[R :_io](msg: String): Eff[R, Unit] = 25 | send[IOInst, R, Unit](Write(msg)) 26 | 27 | // A particular IO Program 28 | 29 | object SingleEffectProgram { 30 | def echo[R :_io]: Eff[R, String] = for { 31 | msg <- read 32 | _ <- write(msg) 33 | } yield msg 34 | } 35 | 36 | // Interpretation of IO programs 37 | 38 | import cats._ 39 | 40 | // natural transformation from IO instructions to 41 | // Eval effects 42 | def consoleIO[R, U :_eval]: IOInst ~> Eff[U, ?] 
= 43 | new (IOInst ~> Eff[U, ?]) { 44 | import scala.io.StdIn 45 | def apply[T](inst: IOInst[T]): Eff[U, T] = inst match { 46 | case Read() => delay[U, T](StdIn.readLine) 47 | case Write(msg) => delay[U, T](println(msg)) 48 | } 49 | } 50 | 51 | // natural transformation from IO instructions to 52 | // State effects 53 | case class IOState(in: List[String], out: List[String]) { 54 | def addIn(i: String): IOState = copy(in = i :: in) 55 | def addOut(o: String): IOState = copy(out = o :: out) 56 | } 57 | 58 | type StateIO[A] = State[IOState, A] 59 | type _state[R] = StateIO |= R 60 | 61 | def stateAction[R, U :_state]: IOInst ~> Eff[U, ?] = 62 | new (IOInst ~> Eff[U, ?]) { 63 | def apply[T](inst: IOInst[T]): Eff[U, T] = inst match { 64 | case Read() => get[U, IOState].flatMap { 65 | case IOState(msg :: reads, writes) => 66 | put[U, IOState](IOState(reads, writes)).as(msg) 67 | 68 | case other => 69 | pure[U, T]("nothing to read from!") 70 | } 71 | 72 | case Write(msg) => get[U, IOState].flatMap { 73 | case IOState(reads, writes) => 74 | put[U, IOState](IOState(reads, msg :: writes)) 75 | } 76 | } 77 | } 78 | 79 | // Echo interpretations 80 | object SingleEffectInterpretations { 81 | import SingleEffectProgram._ 82 | 83 | def consoleEcho[R, U](implicit m: Member.Aux[IOInst, R, U], 84 | eval: Eval |= U): Eff[U, String] = 85 | interpret.translateNat(echo[R])(consoleIO) 86 | 87 | def ioToState[R, U, A](e: Eff[R, A])(implicit m: Member.Aux[IOInst, R, U], 88 | state: StateIO |= U): Eff[U, A] = 89 | interpret.translateNat(e)(stateAction) 90 | 91 | type S = Fx.fx2[IOInst, StateIO] 92 | 93 | ioToState(echo[S]).evalState(IOState(List("hi"),List())).run == "hi" 94 | 95 | ioToState(echo[S]).execState(IOState(List("hi"),List())).run == IOState(List(),List("hi")) 96 | } 97 | 98 | // Logging instructions 99 | 100 | trait LogInst[A] 101 | case class Logging(level: Level, msg: String) extends LogInst[Unit] 102 | 103 | type _log[R] = LogInst |= R 104 | 105 | sealed abstract class Level 106 | case object WARNING extends Level 107 | case object DEBUG extends Level 108 | case object INFO extends Level 109 | 110 | // Log Programs 111 | def log[R :_log](level: Level, msg: String): Eff[R, Unit] = 112 | send[LogInst, R, Unit](Logging(level, msg)) 113 | 114 | // Interpretations over IO actions 115 | def logAction[R, U :_io]: LogInst ~> Eff[U, ?] 
= 116 | new (LogInst ~> Eff[U, ?]) { 117 | def apply[T](t: LogInst[T]) = 118 | t match { case Logging(level, msg) => write(s"$level: $msg") } 119 | } 120 | 121 | def logToIo[R, U, A](e: Eff[R, A])(implicit m: Member.Aux[LogInst, R, U], 122 | io: IOInst |= U): Eff[U, A] = 123 | interpret.translateNat(e)(logAction) 124 | 125 | // Particular program 126 | 127 | object MultipleEffectProgram { 128 | 129 | def echo[R :_io :_log]: Eff[R, String] = for { 130 | msg <- read 131 | _ <- log(INFO, s"read '$msg'") 132 | _ <- write(msg) 133 | _ <- log(INFO, s"written '$msg'") 134 | } yield msg 135 | 136 | } 137 | 138 | // IO with logging programs work with io actions 139 | object MultipleEffectProgramTest { 140 | import MultipleEffectProgram._ 141 | import SingleEffectInterpretations._ 142 | 143 | val init: IOState = IOState(List("hi"),List()) 144 | 145 | type S = Fx.fx3[LogInst, StateIO, IOInst] 146 | 147 | ioToState(logToIo(echo[S])).evalState(init).run == "hi" 148 | ioToState(logToIo(echo[S])).execState(init).run == IOState(List(), List("INFO: written 'hi'", "hi", "INFO: read 'hi'")) 149 | } 150 | 151 | } 152 | -------------------------------------------------------------------------------- /src/test/scala/objectalgebras-vs-free-vs-eff/FreeMonad.scala: -------------------------------------------------------------------------------- 1 | package org.hablapps.gist 2 | 3 | import org.scalatest._ 4 | import cats._ 5 | import cats.implicits._ 6 | import cats.data.State 7 | import cats.free.Free 8 | import scala.util.Try 9 | 10 | /* 11 | This gist shows the use of Free (monad) to successfully build programs 12 | that are interpretation free. 13 | 14 | IMPORTANT: Compare this pattern with the one described on `ObjectAlgebras.scala` 15 | */ 16 | class FreeMonad extends FlatSpec with Matchers { 17 | 18 | // We start off with the definition of our `Functor`. This represents 19 | // the instructions of our language. 20 | sealed abstract class IOF[_] 21 | case object Read extends IOF[String] 22 | case class Write(msg: String) extends IOF[Unit] 23 | 24 | // OPTIONAL: This would be the equivalent of the IOAlg defined in 25 | // `ObjectAlgebras.scala#L19` although is not needed using this technique. 26 | // type IOAlg[F[_]] = IOF ~> F 27 | 28 | // Free[IOF, A] is just an initial instance of the algebra formed by IOF[_] 29 | object InitialInstance { 30 | type IO[A] = Free[IOF, A] 31 | 32 | // We add some syntax to ease the proccess of writing programs with this 33 | // algebra. These are usually called "smart constructors". 34 | object IO { 35 | object syntax { 36 | def read: IO[String] = 37 | Free.liftF(Read) 38 | 39 | def write(msg: String): IO[Unit] = 40 | Free.liftF(Write(msg)) 41 | } 42 | 43 | // OPTIONAL: We could easily give an instance of `IOAlg[IO]` 44 | // object IOIOAlg extends IOAlg[IO] { 45 | // def apply[A](fa: IOF[A]): IO[A] = fa match { 46 | // case Read => syntax.read 47 | // case Write(msg) => syntax.write(msg) 48 | // } 49 | // } 50 | } 51 | } 52 | import InitialInstance.IO 53 | 54 | // Now we'll write some simple programs to show how we can use algebras 55 | // to produce interpretation-free programs. 56 | object GenericPrograms { 57 | import IO.syntax._ 58 | import IO._ 59 | 60 | // We just need to use the "smart constructors" defined above 61 | val echo: IO[Unit] = 62 | read >>= 63 | write 64 | 65 | val askName: IO[String] = 66 | write("What's your name?") >> 67 | read 68 | 69 | // Here we are still using the power of `Monad`, even when it's not 70 | // necessary at all. 
We only require it to be a `Functor`. 71 | val toInt: IO[Option[Int]] = 72 | read map { s => 73 | Try(s.toInt).toOption 74 | } 75 | 76 | // Something similar occurs here, we only needed an `Apply`. 77 | val doubleRead: IO[String] = 78 | (read |@| read).map(_ + _) 79 | 80 | } 81 | 82 | // Now we write an effectful instance of our algebra, it works with 83 | // the standard input/output of our system. 84 | object EvalInstance { 85 | import GenericPrograms._ 86 | 87 | object IOEval extends ~>[IOF, Eval] { 88 | def apply[A](fa: IOF[A]): Eval[A] = fa match { 89 | case Read => Eval.always(scala.io.StdIn.readLine) 90 | case Write(s) => Eval.always(println(s)) 91 | } 92 | } 93 | 94 | // It doesn't make much sense to write test for an impure interpreter 95 | // but just to show how it works. 96 | object Test { 97 | print("Write anything: ") 98 | echo.foldMap(IOEval).value shouldBe (()) 99 | 100 | print("Write `John Doe`: ") 101 | askName.foldMap(IOEval).value shouldBe "John Doe" 102 | 103 | print("Write `123`: ") 104 | toInt.foldMap(IOEval).value shouldBe Option(123) 105 | print("Write anything that's not a number: ") 106 | toInt.foldMap(IOEval).value shouldBe Option.empty 107 | 108 | println("Write `hello` and then `world`:") 109 | doubleRead.foldMap(IOEval).value shouldBe "helloworld" 110 | } 111 | } 112 | 113 | // And for test purposes we give a pure instance that works with 114 | // the `MonadState` algebra. 115 | object StateInstance { 116 | import GenericPrograms._ 117 | 118 | case class IOState(readList: List[String], writeList: List[String]) { 119 | def read: IOState = this.copy(readList = readList.tail) 120 | def write(s: String): IOState = this.copy(writeList = s :: writeList) 121 | } 122 | 123 | type IOAction[A] = State[IOState, A] 124 | 125 | object IOAction extends ~>[IOF, IOAction] { 126 | val monadState = implicitly[MonadState[IOAction, IOState]] 127 | import monadState._ 128 | 129 | def apply[A](fa: IOF[A]): IOAction[A] = fa match { 130 | case Read => 131 | for { 132 | h <- inspect(_.readList.head) 133 | _ <- modify(_.read) 134 | } yield h 135 | case Write(msg) => 136 | modify(_.write(msg)) 137 | } 138 | } 139 | 140 | // Usually we want to write tests for a pure interpretation like this one 141 | // We can verify every input, output and result. 142 | object Test { 143 | echo.foldMap(IOAction).run(IOState("hello!" :: Nil, Nil)).value shouldBe 144 | (IOState(Nil, "hello!" :: Nil), ()) 145 | 146 | askName.foldMap(IOAction).run(IOState("Javier Fuentes" :: Nil, Nil)).value shouldBe 147 | (IOState(Nil, "What's your name?" 
:: Nil), "Javier Fuentes") 148 | 149 | toInt.foldMap(IOAction).run(IOState("123" :: Nil, Nil)).value shouldBe 150 | (IOState(Nil, Nil), Option(123)) 151 | toInt.foldMap(IOAction).run(IOState("This is not an Integer" :: Nil, Nil)).value shouldBe 152 | (IOState(Nil, Nil), Option.empty) 153 | 154 | doubleRead.foldMap(IOAction).run(IOState("Reading 1" :: "Reading 2" :: Nil, Nil)).value shouldBe 155 | (IOState(Nil, Nil), "Reading 1Reading 2") 156 | } 157 | } 158 | 159 | // Uncomment this if you want to try the effectful interpreter 160 | // "IO programs with Eval" should "work fine" in { 161 | // EvalInstance.Test 162 | // } 163 | 164 | "IO programs with State" should "work fine" in { 165 | StateInstance.Test 166 | } 167 | 168 | } 169 | 170 | object FreeMonad extends FreeMonad -------------------------------------------------------------------------------- /src/test/scala/objectalgebras-vs-free-vs-eff/FreeMonadCoproduct.scala: -------------------------------------------------------------------------------- 1 | package org.hablapps.gist 2 | 3 | import FreeMonad.{IOF, Read, Write, EvalInstance} 4 | 5 | import cats.{Eval, ~>} 6 | import cats.data.Coproduct 7 | import cats.free.{Free, Inject} 8 | 9 | /* 10 | This gist shows how we can combine effects (IOF from the previous gist 11 | `FreeMonad.scala` and FSF) using Coproduct and Free to successfully build 12 | programs that are interpretation-free. 13 | 14 | IMPORTANT: Compare this pattern with the one described in 15 | `ObjectAlgebrasMultipleEffects.scala` 16 | */ 17 | object FreeMonadCoproduct extends App { 18 | 19 | // As always we define our `Functor`, representing File System operations. 20 | sealed abstract class FSF[_] 21 | case class ReadFile(path: String) extends FSF[String] 22 | case class DeleteFile(path: String) extends FSF[Unit] 23 | case class WriteFile(path: String, content: String) extends FSF[Unit] 24 | 25 | // This time though, our "smart constructors" are going to be quite 26 | // different, as we need to inject instructions from the functors (IOF 27 | // and FSF) into a functor that includes both of them. This 28 | // functor will be a `Coproduct` of all the functors involved, 29 | // but we don't need to be concrete at this point, so we leave it generic. 30 | // 31 | // Given an `Inject[IOF, F]`, we can lift instructions from `IOF[A]` to 32 | // `Free[F, A]`. 33 | object Coproducts { 34 | 35 | type FSInj[F[_]] = Inject[FSF, F] 36 | object FSInj { 37 | object syntax { 38 | def readFile[F[_]: FSInj](path: String): Free[F, String] = 39 | Free.inject[FSF, F](ReadFile(path)) 40 | def deleteFile[F[_]: FSInj](path: String): Free[F, Unit] = 41 | Free.inject[FSF, F](DeleteFile(path)) 42 | def writeFile[F[_]: FSInj](path: String)(content: String): Free[F, Unit] = 43 | Free.inject[FSF, F](WriteFile(path, content)) 44 | } 45 | } 46 | 47 | // In contrast to object algebras, here we need new "smart constructors" 48 | // for IOF. The ones defined in `FreeMonad.scala` are no longer useful. 49 | type IOInj[F[_]] = Inject[IOF, F] 50 | object IOInj { 51 | object syntax { 52 | def read[F[_]: IOInj]: Free[F, String] = 53 | Free.inject[IOF, F](Read) 54 | def write[F[_]: IOInj](msg: String): Free[F, Unit] = 55 | Free.inject[IOF, F](Write(msg)) 56 | } 57 | } 58 | } 59 | 60 | // Now we can import syntax from all the algebras involved and create our programs.
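  // For reference, the `copy` program defined just below could equivalently be
  // written as a for-comprehension, since `Free` already provides `map` and
  // `flatMap`. This is only a sketch (the name `copyFor` is hypothetical and is
  // not part of this gist):
  //
  //   def copyFor[F[_]: IOInj: FSInj]: Free[F, Unit] =
  //     for {
  //       _        <- write("Name a file you want to copy: ")
  //       fileName <- read
  //       content  <- readFile(fileName)
  //       _        <- writeFile(s"$fileName copy")(content)
  //     } yield ()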
61 | object GenericProgramsWithCoproduct { 62 | import cats.syntax.flatMap._ 63 | import Coproducts.IOInj, IOInj.syntax._ 64 | import Coproducts.FSInj, FSInj.syntax._ 65 | 66 | def copy[F[_]: IOInj: FSInj]: Free[F, Unit] = 67 | write("Name a file you want to copy: ") >> 68 | read >>= { fileName => 69 | readFile(fileName) >>= 70 | writeFile(s"$fileName copy") 71 | } 72 | } 73 | 74 | // We just need to add a new interpreter for `Eval` in this case. 75 | object EvalInstanceWithCoproduct { 76 | 77 | object FSEval extends ~>[FSF, Eval] { 78 | import java.io.{File, FileWriter, BufferedWriter} 79 | def apply[A](fa: FSF[A]): Eval[A] = fa match { 80 | case ReadFile(path) => Eval.always(scala.io.Source.fromFile(path).mkString) 81 | case DeleteFile(path) => Eval.always { 82 | new File(path).delete() 83 | () 84 | } 85 | case WriteFile(path, content) => Eval.always { 86 | val bw = new BufferedWriter(new FileWriter(new File(path))) 87 | bw.write(content) 88 | bw.close() 89 | } 90 | } 91 | } 92 | 93 | // We are very close now; what's left to do is: 94 | // - Make `F[_]` concrete: the coproduct of our effects (App[_]) 95 | // - Combine both interpreters 96 | // - Run it! 97 | object Test { 98 | import GenericProgramsWithCoproduct._ 99 | 100 | type App[A] = Coproduct[IOF, FSF, A] 101 | val interpreter: App ~> Eval = EvalInstance.IOEval or FSEval 102 | 103 | def apply() = 104 | copy[App].foldMap(interpreter).value 105 | 106 | } 107 | 108 | } 109 | 110 | EvalInstanceWithCoproduct.Test() 111 | 112 | } 113 | -------------------------------------------------------------------------------- /src/test/scala/objectalgebras-vs-free-vs-eff/ObjectAlgebras.scala: -------------------------------------------------------------------------------- 1 | package org.hablapps.gist 2 | 3 | import org.scalatest._ 4 | import cats.{Apply, Functor, Monad, MonadState, Eval} 5 | import cats.data.State 6 | import scala.util.Try 7 | 8 | /* 9 | This gist shows how we can use and combine algebras (in this case a custom 10 | algebra IOAlg and Monad from the cats library) to successfully build programs 11 | that are interpretation-free. 12 | 13 | IMPORTANT: Compare this pattern with the one described in `FreeMonad.scala` 14 | */ 15 | class ObjectAlgebras extends FlatSpec with Matchers { 16 | 17 | // We start off with the definition of our custom algebra. This algebra 18 | // represents the effect of reading and writing from some IO interface. 19 | trait IOAlg[F[_]] { 20 | def read: F[String] 21 | def write(s: String): F[Unit] 22 | } 23 | 24 | // We add some syntax to ease the process of writing programs with this 25 | // algebra. These "smart constructors" are actually Church representations 26 | // of the basic operations of our algebra. 27 | object IOAlg { 28 | object syntax { 29 | def read[F[_]](implicit F: IOAlg[F]): F[String] = 30 | F.read 31 | def write[F[_]](s: String)(implicit F: IOAlg[F]): F[Unit] = 32 | F.write(s) 33 | } 34 | } 35 | 36 | object InitialInstance { 37 | 38 | // OPTIONAL: This would be the initial instance of our algebra. It's 39 | // a Church-encoded representation of the initial algebra. This representation 40 | // is not needed for defining our simple programs as they are already 41 | // Church representations on their own. 42 | // trait IO[A] { 43 | // def apply[F[_]: IOAlg]: F[A] 44 | // } 45 | } 46 | 47 | // Now we'll write some simple programs to show how we can use algebras 48 | // to produce interpretation-free programs.
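  // OPTIONAL: just as `FreeMonad.scala` sketches an `IOAlg[IO]` instance for
  // `Free[IOF, ?]`, we could give the Church-encoded `IO` above an instance of
  // the type class, witnessing that it is indeed an instance of the algebra.
  // A minimal sketch, not needed for what follows:
  //
  //   implicit object IOIOAlg extends IOAlg[IO] {
  //     def read: IO[String] = new IO[String] {
  //       def apply[F[_]: IOAlg]: F[String] = IOAlg.syntax.read[F]
  //     }
  //     def write(s: String): IO[Unit] = new IO[Unit] {
  //       def apply[F[_]: IOAlg]: F[Unit] = IOAlg.syntax.write[F](s)
  //     }
  //   }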
49 | object GenericPrograms { 50 | import IOAlg.syntax._, cats.syntax.flatMap._, cats.syntax.functor._, cats.syntax.cartesian._ 51 | 52 | // In order to write this "echo" program, we need two algebras: 53 | // - IOAlg: to be able to write and read from somewhere 54 | // - Monad: to write sequential programs 55 | def echo[F[_]: IOAlg: Monad]: F[Unit] = 56 | read >>= 57 | write[F] 58 | 59 | def askName[F[_]: IOAlg: Monad]: F[String] = 60 | write("What's your name?") >> 61 | read 62 | 63 | // For the following scenario, we don't need the `Monad` algebra: it's 64 | // more powerful than we need. With `Functor` we have enough power, as 65 | // it allows us to modify the content of a structure, which is exactly what we 66 | // need. 67 | def toInt[F[_]: IOAlg: Functor]: F[Option[Int]] = 68 | read map { s => 69 | Try(s.toInt).toOption 70 | } 71 | 72 | // In this case we need an intermediate point between the `Functor` and 73 | // `Monad` algebras. The `Apply` algebra lets us combine independent computations 74 | // and aggregate their results. 75 | def doubleRead[F[_]: IOAlg: Apply]: F[String] = 76 | (read |@| read).map(_ + _) 77 | 78 | } 79 | 80 | // Now we write an effectful instance of our algebra that works with 81 | // the standard input/output of our system. 82 | object EvalInstance { 83 | import GenericPrograms._ 84 | 85 | implicit object IOEval extends IOAlg[Eval] { 86 | def read: Eval[String] = 87 | Eval.always(scala.io.StdIn.readLine) 88 | 89 | def write(s: String): Eval[Unit] = 90 | Eval.always(println(s)) 91 | } 92 | 93 | // It doesn't make much sense to write tests for an impure interpreter, 94 | // but just to show how it works, here they are. 95 | object Test { 96 | print("Write anything: ") 97 | echo[Eval].value shouldBe (()) 98 | 99 | print("Write `John Doe`: ") 100 | askName[Eval].value shouldBe "John Doe" 101 | 102 | print("Write `123`: ") 103 | toInt[Eval].value shouldBe Option(123) 104 | print("Write anything that's not a number: ") 105 | toInt[Eval].value shouldBe Option.empty 106 | 107 | println("Write `hello` and then `world`:") 108 | doubleRead[Eval].value shouldBe "helloworld" 109 | } 110 | } 111 | 112 | // And for testing purposes we give a pure instance that works with 113 | // the `MonadState` algebra. 114 | object StateInstance { 115 | import GenericPrograms._ 116 | 117 | case class IOState(readList: List[String], writeList: List[String]) { 118 | def read: IOState = this.copy(readList = readList.tail) 119 | def write(s: String): IOState = this.copy(writeList = s :: writeList) 120 | } 121 | 122 | type IOAction[A] = State[IOState, A] 123 | 124 | implicit object stateIO extends IOAlg[IOAction] { 125 | val monadState = implicitly[MonadState[IOAction, IOState]] 126 | import monadState._ 127 | 128 | def read: IOAction[String] = 129 | for { 130 | s <- get 131 | _ <- set(s.read) 132 | } yield s.readList.head 133 | 134 | def write(msg: String): IOAction[Unit] = 135 | modify(_.write(msg)) 136 | } 137 | 138 | // Usually we want to write tests for a pure interpretation like this one. 139 | // We can verify every input, output and result. 140 | object Test { 141 | echo[IOAction].run(IOState("hello!" :: Nil, Nil)).value shouldBe 142 | (IOState(Nil, "hello!" :: Nil), ()) 143 | 144 | askName[IOAction].run(IOState("Javier Fuentes" :: Nil, Nil)).value shouldBe 145 | (IOState(Nil, "What's your name?"
:: Nil), "Javier Fuentes") 146 | 147 | toInt[IOAction].run(IOState("123" :: Nil, Nil)).value shouldBe 148 | (IOState(Nil, Nil), Option(123)) 149 | 150 | toInt[IOAction].run(IOState("This is not an Integer" :: Nil, Nil)).value shouldBe 151 | (IOState(Nil, Nil), Option.empty) 152 | 153 | doubleRead[IOAction].run(IOState("Reading 1" :: "Reading 2" :: Nil, Nil)).value shouldBe 154 | (IOState(Nil, Nil), "Reading 1Reading 2") 155 | } 156 | } 157 | 158 | // Uncomment this if you want to try the effectful interpreter 159 | // "IO programs with Eval" should "work fine" in { 160 | // EvalInstance.Test 161 | // } 162 | 163 | "IO programs with State" should "work fine" in { 164 | StateInstance.Test 165 | } 166 | 167 | } 168 | 169 | object ObjectAlgebras extends ObjectAlgebras -------------------------------------------------------------------------------- /src/test/scala/objectalgebras-vs-free-vs-eff/ObjectAlgebrasMultipleEffects.scala: -------------------------------------------------------------------------------- 1 | package org.hablapps.gist 2 | 3 | import ObjectAlgebras.{IOAlg, EvalInstance} 4 | 5 | import cats.{Monad, Eval} 6 | 7 | /* 8 | This gist shows how we can use and combine effects using object algebras 9 | (IOAlg from the previous gist `ObjectAlgebras.scala` and FSAlg, an algebra 10 | for File System operations). 11 | 12 | Actually, we have already seen how to combine algebras, since we treated Monad as 13 | just another algebra, like IOAlg. But for the sake of completeness we'll mix in another effect. 14 | 15 | IMPORTANT: Compare this pattern with the one described in `FreeMonadCoproduct.scala` 16 | */ 17 | object ObjectAlgebrasMultipleEffects extends App { 18 | 19 | // As always we define our algebra to represent File System operations. 20 | trait FSAlg[F[_]] { 21 | def readFile(path: String): F[String] 22 | def deleteFile(path: String): F[Unit] 23 | def writeFile(path: String, content: String): F[Unit] 24 | } 25 | 26 | // We add the syntax 27 | object FSAlg { 28 | object syntax { 29 | def readFile[F[_]](path: String)(implicit FS: FSAlg[F]): F[String] = 30 | FS.readFile(path) 31 | def deleteFile[F[_]](path: String)(implicit FS: FSAlg[F]): F[Unit] = 32 | FS.deleteFile(path) 33 | def writeFile[F[_]](path: String)(content: String)(implicit FS: FSAlg[F]): F[Unit] = 34 | FS.writeFile(path, content) 35 | } 36 | } 37 | 38 | // Now we can import syntax from all the algebras involved and create our programs. 39 | object GenericProgramsMultipleEffects { 40 | import IOAlg.syntax._, FSAlg.syntax._, cats.syntax.flatMap._ 41 | 42 | def copy[F[_]: IOAlg: FSAlg: Monad]: F[Unit] = 43 | write("Name a file you want to copy: ") >> 44 | read >>= { fileName => 45 | readFile(fileName) >>= 46 | writeFile[F](s"$fileName copy") 47 | } 48 | } 49 | 50 | // We just need to add a concrete instance of our algebra, for `Eval` in this case.
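  // OPTIONAL: for testing purposes we could also give a pure instance of FSAlg,
  // e.g. backed by `State` over an in-memory `Map[String, String]` that plays the
  // role of the file system. This is just a sketch with hypothetical names; it is
  // not used in this gist:
  //
  //   type FSAction[A] = cats.data.State[Map[String, String], A]
  //
  //   implicit object stateFS extends FSAlg[FSAction] {
  //     def readFile(path: String): FSAction[String] =
  //       cats.data.State.inspect(_(path)) // fails if the path is missing, fine for tests
  //     def deleteFile(path: String): FSAction[Unit] =
  //       cats.data.State.modify(_ - path)
  //     def writeFile(path: String, content: String): FSAction[Unit] =
  //       cats.data.State.modify(_ + (path -> content))
  //   }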
51 | object EvalInstanceMultipleEffects { 52 | 53 | implicit object FSEval extends FSAlg[Eval] { 54 | import java.io.{File, FileWriter, BufferedWriter} 55 | def readFile(path: String): Eval[String] = 56 | Eval.always(scala.io.Source.fromFile(path).mkString) 57 | def deleteFile(path: String): Eval[Unit] = Eval.always { 58 | new File(path).delete() 59 | () 60 | } 61 | def writeFile(path: String, content: String): Eval[Unit] = Eval.always { 62 | val bw = new BufferedWriter(new FileWriter(new File(path))) 63 | bw.write(content) 64 | bw.close() 65 | } 66 | } 67 | 68 | // And here we are: we can already instantiate the program, as we have instances of 69 | // all three of our algebras (IOAlg, FSAlg, Monad) for `Eval`. 70 | object Test { 71 | import GenericProgramsMultipleEffects._ 72 | import EvalInstance.IOEval 73 | 74 | def apply() = copy[Eval].value 75 | 76 | } 77 | } 78 | 79 | EvalInstanceMultipleEffects.Test() 80 | 81 | } 82 | -------------------------------------------------------------------------------- /src/test/scala/objectalgebras-vs-free-vs-eff/README.md: -------------------------------------------------------------------------------- 1 | # Object algebras vs. Free vs. Eff 2 | 3 | This gist aims to compare the common approach of using Free (Monads) to describe programs that are free of interpretation, with the less well-known approach of using object algebras (type classes) to achieve the very same thing. 4 | 5 | These two approaches are actually closely related to each other: 6 | * Given the *Functor* used with `Free` (`IOF[_]`) we can define an algebra `type IOAlg[F[_]] = IOF ~> F` 7 | * That algebra is equivalent to the object algebra defined by the type class [`IOAlg[F[_]]`](ObjectAlgebras.scala#L19). 8 | * `Free[IOF, ?]` is just an initial instance of that algebra. (It's an initial instance of the *Monad* algebra as well :wink:) 9 | * Likewise, [`IO[A]`](ObjectAlgebras.scala#L41) is another initial instance of that algebra. 10 | 11 | The Eff monad is also an example of a Free monad with a different approach for combining algebras (or "effects"). 12 | 13 | ## Conclusion 14 | 15 | In both cases we are working with initial algebras, and therefore we can achieve the very same functionality: namely, programming at the most abstract level, without committing to particular interpretations. That said, the use of object algebras is much more advisable when we are dealing with compositional interpreters. In those cases, they have two major advantages over *Free*: there is no need to create instances of intermediate data structures (the Free ADT), and it's trivial to combine algebras and use the exact level of generality we need (e.g. Apply or Functor, instead of Monad). On the other hand, the use of *Free* is preferable when we need more control over the execution of our programs, since we have the reification of our program at our disposal. 16 | 17 | ### Related Gists 18 | 19 | [Church vs. ADTs](https://github.com/hablapps/gist/blob/master/src/test/scala/InitialAlgebras.scala): What is the relationship between these encodings? Algebras to the rescue! 20 | 21 | [ChurchEncodingsHK](https://github.com/hablapps/gist/blob/hablacats/src/test/scala/ChurchEncodingsHK.scala): We can also create non-compositional interpreters for object algebras, though it's not easy. 22 | --------------------------------------------------------------------------------