├── .github
│   └── workflows
│       └── static.yml
├── 2016_03_05_see-ma-no-vars.md
├── 2016_03_12_chan-chan-chan.md
├── 2016_04_28_typelevel_isprime
│   ├── build.sbt
│   └── src
│       └── main
│           └── scala
│               └── x
│                   └── X.scala
├── 2016_05-29-contributing.md
├── 2016_27_03_back-my-stack-traces.md
├── 2017_08_06_concurrency_in_new_language.md
├── 2017_12_14-variance-from-lsp.md
├── 2021_06_27_automatic-coloring-for-effects.md
├── 2021_11_03_business_vs_match.md
├── 2022_06_05_structured-concurrency-scala-future.md
├── 2023_05_05_two_cents_about_scala_web_development_in_industry.md
├── 2024_01_30_logic-monad-1.md
├── 2024_12_09_dependency-injection.md
├── 2024_12_30_dependency_injection_tf.md
├── README.md
├── feed.xml
└── scripts
    └── generate-feed.sc

--------------------------------------------------------------------------------
/.github/workflows/static.yml:
--------------------------------------------------------------------------------
 1 | # Simple workflow for deploying static content to GitHub Pages
 2 | name: Deploy static content to Pages
 3 | 
 4 | on:
 5 |   # Runs on pushes targeting the default branch
 6 |   push:
 7 |     branches: ["master"]
 8 | 
 9 |   # Allows you to run this workflow manually from the Actions tab
10 |   workflow_dispatch:
11 | 
12 | # Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
13 | permissions:
14 |   contents: read
15 |   pages: write
16 |   id-token: write
17 | 
18 | # Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
19 | # However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
20 | concurrency:
21 |   group: "pages"
22 |   cancel-in-progress: false
23 | 
24 | jobs:
25 |   # Single deploy job since we're just deploying
26 |   deploy:
27 |     environment:
28 |       name: github-pages
29 |       url: ${{ steps.deployment.outputs.page_url }}
30 |     runs-on: ubuntu-latest
31 |     steps:
32 |       - name: Checkout
33 |         uses: actions/checkout@v4
34 |       - name: Setup Pages
35 |         uses: actions/configure-pages@v4
36 |       - name: Upload artifact
37 |         uses: actions/upload-pages-artifact@v3
38 |         with:
39 |           # Upload entire repository
40 |           path: '.'
41 |       - name: Deploy to GitHub Pages
42 |         id: deployment
43 |         uses: actions/deploy-pages@v4
44 | 

--------------------------------------------------------------------------------
/2016_03_05_see-ma-no-vars.md:
--------------------------------------------------------------------------------
 1 | 
 2 | # Idiomatic Go <=> Idiomatic Scala
 3 | 
 4 | While preparing for the [Wix R&D meetup](http://www.meetup.com/Wix-Ukraine-Meetup-Group/events/229042251/), I found a way to improve
 5 | the [scala-gopher](https://github.com/rssh/scala-gopher) API.
 6 | 
 7 | With the previous versions of scala-gopher, it is very easy to translate Go code into Scala; the readability of the resulting code is nearly the same as the original, but it still does not look 'right' to a functional Scala programmer: idiomatic Go makes heavy use of mutable variables.
 8 | 
 9 | ## Fold over selector
10 | 
11 | One of the typical patterns in Go programs looks like this:
12 | 
13 | ```
14 | 
15 | for(;; ) {
16 |   select {
17 |     case v <- ch1 ...
18 |     case ...
19 |   }
20 | }
21 | ```
22 | 
23 | The natural analog in Scala contains a select loop with state kept in mutable vars. But starting from the next version of scala-gopher, we will have a more functional alternative: fold over a selector,
24 | 
25 | ```
26 | select.fold(...) ((state, select) => select match {
27 |    case v <- ch => ...
28 | })
29 | ```
30 | A fluent syntax exists for the case when the state consists of a few variables:
31 | 
32 | ```
33 | def fib(out:Output[Long],quit:Future[Boolean]):Future[(Long,Long)] =
34 |   select.afold((0L,1L)){ case ((x,y),s) =>
35 |     s match {
36 |       case x: out.write => (y,x+y)
37 |       case q: quit.read => CurrentFlowTermination.exit((x,y))
38 |     }
39 |   }
40 | ```
41 | 
42 | See ma, no mutable variables.
43 | 
44 | ## Effected input
45 | 
46 | Another common pattern in Go programming is modifying a channel while reading from or writing to it. The immutable scala-gopher alternative is EffectedInput, which holds a channel, can participate in a selector read, and has an operation that applies an effect to the current state.
47 | 
48 | Let's look at an example:
49 | ```
50 | def generate(n:Int, quit:Promise[Boolean]):Channel[Int] =
51 | {
52 |   val channel = makeChannel[Int]()
53 |   channel.awriteAll(2 to n) andThen (_ => quit success true)
54 |   channel
55 | }
56 | 
57 | def filter(in:Channel[Int]):Input[Int] =
58 | {
59 |   val filtered = makeChannel[Int]()
60 |   val sieve = makeEffectedInput(in)
61 |   sieve.aforeach { prime =>
62 |     sieve <<= (_.filter(_ % prime != 0))
63 |     filtered <~ prime
64 |   }
65 |   filtered
66 | }
67 | 
68 | ```
69 | 
70 | Here, in 'generate' we produce the candidate numbers, and in 'filter' we build a sieve of Eratosthenes by sequentially applying filtering effects to the sieve input.
71 | 
72 | -----------------------
73 | [index](https://github.com/rssh/notes)
74 | 

--------------------------------------------------------------------------------
/2016_03_12_chan-chan-chan.md:
--------------------------------------------------------------------------------
 1 | # chan chan chan
 2 | 
 3 | Why can channels of channels of channels be useful? Why do we often see statements like `make(chan chan chan int)` in Go code?
 4 | 
 5 | Let's look at the example [here](https://rogpeppe.wordpress.com/2009/12/01/concurrent-idioms-1-broadcasting-values-in-go-with-linked-channels/)
 6 | or at the [scala version](https://github.com/rssh/scala-gopher/blob/master/src/test/scala/example/BroadcasterSuite.scala).
 7 | 
 8 | Why is the type of `listenc` `Channel[Channel[Channel[Message]]]`?
 9 | 
10 | The answer is: exchanging information between goroutines in dynamic ways, like an emulation of
11 | [asynchronous method calls](https://en.wikipedia.org/wiki/Asynchronous_method_invocation) between objects.
12 | 
13 | Suppose we want to request information which must return to us some 'A'.
14 | Instead of calling `o.f(x)` and invoking a method which will return `A` on the stack, we create a channel
15 | (of type `Channel[A]`) and pass it to the target goroutine via an endpoint channel (which will have type `Channel[Channel[A]]`).
16 | After this, the goroutine will retrieve `a` and send it back through the channel which was received from the endpoint.
17 | 
18 | So, if we see `Channel[Channel[A]]` in a signature, this usually means "an entry point for requesting information about 'A'".
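
As a minimal sketch of this request/response pattern (assuming the scala-gopher-style API used in the previous post -- `makeChannel`, `awrite`, `aread`, `aforeach`; everything else here is illustrative):

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// 'server' goroutine: each incoming request carries its own reply channel.
def server(endpoint: Channel[Channel[Int]]): Future[Unit] =
  endpoint.aforeach { replyTo =>
    replyTo.awrite(42)       // the 'return value' travels back through it
  }

// 'client': the reply channel plays the role of the stack slot for the result.
def asyncCall(endpoint: Channel[Channel[Int]]): Future[Int] = {
  val replyTo = makeChannel[Int]()
  endpoint.awrite(replyTo)   // the 'call': send the reply channel
  replyTo.aread              // the 'return': await the result
}
```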
19 | 20 | ----------------------- 21 | [index](https://github.com/rssh/notes) 22 | -------------------------------------------------------------------------------- /2016_04_28_typelevel_isprime/build.sbt: -------------------------------------------------------------------------------- 1 | 2 | name := "typelevel-peano-isprime" 3 | 4 | scalaVersion := "2.11.8" 5 | 6 | 7 | -------------------------------------------------------------------------------- /2016_04_28_typelevel_isprime/src/main/scala/x/X.scala: -------------------------------------------------------------------------------- 1 | package x 2 | 3 | import scala.concurrent._ 4 | import scala.concurrent.duration._ 5 | 6 | sealed trait Nat 7 | { 8 | type Current <: Nat 9 | type Next <: Nat 10 | type Prev <: Nat 11 | 12 | type Plus[X <: Nat] <: Nat 13 | type Mult[X <: Nat] <: Nat 14 | type Minus[X <: Nat] <: Nat 15 | type Ack[X <: Nat] <: Nat 16 | type Ack2[X <: Nat] <: Nat 17 | 18 | type IfZero[X<:Nat,Y<:Nat] <: Nat 19 | type IfOne[X<:Nat,Y<:Nat] <: Nat 20 | type IfLess[Y<:Nat,Z<:Nat,W<:Nat] <: Nat 21 | 22 | type IsPrime <: Nat 23 | type EmptyDivides[X<:Nat] <: Nat 24 | type EmptyDivides2[X<:Nat] <: Nat 25 | type DivideModule[X<:Nat] <: Nat 26 | } 27 | 28 | 29 | class Zero extends Nat 30 | { 31 | type Current = Zero 32 | type Next = Succ[Zero] 33 | type Prev = Zero 34 | 35 | type Plus[X<:Nat] = X 36 | type Mult[X<:Nat] = Nat._0 37 | type Minus[X<:Nat] = Zero 38 | type Ack[X <: Nat] = Succ[X] 39 | type Ack2[Y <: Nat] = Y#Ack[Nat._1] 40 | 41 | type IfZero[X<:Nat,Y<:Nat] = X 42 | type IfOne[X<:Nat,Y<:Nat] = Y 43 | type IfLess[Y<:Nat,Z<:Nat,W<:Nat] = Y#IfZero[W,Z] 44 | 45 | type IsPrime = Nat._1 46 | type EmptyDivides[X<:Nat] = Nat._1 47 | type EmptyDivides2[X<:Nat] = Nat._1 48 | type DivideModule[Y<:Nat] = Zero 49 | 50 | } 51 | 52 | 53 | class Succ[X<:Nat] extends Nat 54 | { 55 | type Current = Succ[X] 56 | type Next = Succ[Succ[X]] 57 | type Prev = X 58 | 59 | type Plus[Y<:Nat] = Succ[X#Plus[Y]] 60 | type Mult[Y<:Nat] = X#IfZero[X,Y#Plus[Prev#Mult[Y]]] 61 | type Minus[Y<:Nat] = Y#IfZero[X,X#Minus[Y#Prev]] 62 | type Ack[Y <: Nat] = Y#Ack2[X] 63 | type Ack2[Y <: Nat] = Y#Ack[Succ[Y]#Ack[X]] 64 | 65 | type IfZero[Y<:Nat,Z<:Nat] = Z 66 | type IfOne[Y<:Nat,Z<:Nat] = Z#IfZero[Y,Z] 67 | type IfLess[Y<:Nat,Z<:Nat,W<:Nat] = IfZero[Y#IfZero[W,Z],X#IfLess[Y#Prev,Z,W]] 68 | 69 | type IsPrime = EmptyDivides[X] 70 | type EmptyDivides[Y<:Nat] = Y#IfZero[Nat._1, 71 | Y#IfOne[Nat._1, 72 | DivideModule[Y]#IfZero[Nat._0,Y#Prev#EmptyDivides2[Succ[X]]]]] 73 | type EmptyDivides2[Y<:Nat] = Y#EmptyDivides[X] 74 | type DivideModule[Y<:Nat] = IfLess[Y,Y,X#DivideModule[X#Minus[Y]]] 75 | 76 | 77 | } 78 | 79 | object Nat 80 | { 81 | 82 | type _0 = Zero 83 | type _1 = Succ[_0] 84 | type _2 = Succ[_1] 85 | type _3 = Succ[_2] 86 | type _4 = _2#Plus[_2] 87 | type _16 = _4#Mult[_4] 88 | type _256 = _16#Mult[_16] 89 | 90 | } 91 | 92 | 93 | 94 | object Test 95 | { 96 | import Nat._ 97 | 98 | def longfun()(implicit evidence: Succ[_3]#IsPrime =:= Succ[_256]#IsPrime){ 99 | System.out.println("Hi!") 100 | } 101 | 102 | //def sofun()(implicit evidence: _4#Ack[_2] =:= _3#Ack[_4]){ 103 | // System.out.println("Hi!") 104 | //} 105 | 106 | def main(args: Array[String]):Unit = 107 | { 108 | val x = new _2() {} 109 | System.out.println(s"x=$x") 110 | longfun() 111 | } 112 | 113 | 114 | } 115 | -------------------------------------------------------------------------------- /2016_05-29-contributing.md: -------------------------------------------------------------------------------- 1 
 2 | # Some possible contribution tasks
 3 | 
 4 | 1. More substitutes for Future callback methods. (project = trackedfuture, level = easy)
 5 | 
 6 |    - look at the Scala Future interface: https://github.com/scala/scala/blob/2.11.x/src/library/scala/concurrent/Future.scala and find a method which accepts a callback and is not yet handled in https://github.com/rssh/trackedfuture/blob/master/agent/src/main/scala/trackedfuture/agent/TFMethodAdapter.scala. Today this can be `onSuccess` or `onFailure`
 7 |    - add an implementation of the debugging variant to https://github.com/rssh/trackedfuture/blob/master/agent/src/main/scala/trackedfuture/runtime/TrackedFuture.scala
 8 |    - add a test (analogously to the other tests) in the example subproject
 9 | 
10 | 2. Write a replacement for Await.result / Await.ready which, in case of a timeout, will also show the state of the
11 | Future in the Await argument, if this is possible. (project = trackedfuture, level = middle)
12 | 
13 |    - Add to [StackTraces](https://github.com/rssh/trackedfuture/blob/master/agent/src/main/scala/trackedfuture/runtime/StackTraces.scala) a field with the current future, which will be set on crossing thread boundaries.
14 |    - add the setting of such a field during the creation of the appropriate thread-local
15 |    - in ClassVisitor, add a translator for Await.result & Await.ready
16 |    - in the translated Await runtime, handle the timeout exception, and in this handler search for the currently
17 | set Future in the thread-locals of all threads.
18 |    - If found, print the stacktrace of that thread as additional information.
19 | 
20 | 3. Implement select.timeout (project = scala-gopher, level=expert)
21 | 
22 | 4. Implement lifting of async through higher-order functions (project = scala-gopher, level=expert, size=big)
23 | 
24 | If you decide to do something, then select a timeframe, create an issue, and contact me.

--------------------------------------------------------------------------------
/2016_27_03_back-my-stack-traces.md:
--------------------------------------------------------------------------------
 1 | 
 2 | # Give my stacktraces back
 3 | 
 4 | One of the annoying things when debugging concurrent Scala programs is the lack of stack traces from asynchronous calls
 5 | with ```Future```s. I.e., when we spawn a task and receive an exception from it, it is impossible to find in the exception trace
 6 | from where this task was spawned.
 7 | 
 8 | I just wrote a debug agent which fixes this issue: https://github.com/rssh/trackedfuture
 9 | 
10 | ----------
11 | [index](https://github.com/rssh/notes)

--------------------------------------------------------------------------------
/2017_08_06_concurrency_in_new_language.md:
--------------------------------------------------------------------------------
 1 | 
 2 | 
 3 | # So, I want to add concurrency to my language...
 4 | 
 5 | // in addition to a talk with @anemish about the spectrum of design choices for the [lasca language](https://github.com/nau/lasca-compiler)
 6 | 
 7 | ## TLDR:
 8 | * All high-level models are not universal and are 'bad' in some sense.
 9 | * There exists one 'good' low-level model which allows implementing all known high-level models on top of it, but it's not free. We can call it 'stackless execution', 'fast stack switching', or 'total CPS'.
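
To make the 'total CPS' point concrete, here is a minimal self-contained sketch (plain Scala, illustration only): the same computation in direct style and in continuation-passing style, where 'the rest of the program' becomes a value that a runtime can park and resume without a native stack.

```scala
// Direct style: `compute` occupies the native stack until it returns.
def compute(x: Int): Int = x * 2
def program(): Int = compute(20) + 2

// CPS style: the continuation `k` reifies "what happens after the call",
// so a scheduler may store it and resume it later, on any thread.
def computeK(x: Int, k: Int => Unit): Unit = k(x * 2)
def programK(k: Int => Unit): Unit = computeK(20, r => k(r + 2))

@main def cpsDemo(): Unit =
  println(program())        // 42
  programK(r => println(r)) // 42 again, via an explicit continuation
```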
10 | 
11 | ## Long version: what is bad with each of these?
12 | 
13 | 1. Actors  [Erlang, Scala, Pony, ….]
14 |    * Actors are not composable. (Or, in other words, they are composable exactly like objects in OO: if you have two actors (A, B), you have no automatic 'composition' of them. Note that this problem does not depend on the 'typed/untyped' question: even if you have actor behavior coded in some typed calculus (behavior or session types), you still have no 'natural' composition.)
15 | So, actors are an effective way of implementing low-level behavior, but 'actor programming' does not scale.
16 | 
17 | 2. Channels (CSP = Communicating Sequential Processes: Occam, Limbo, Go ….)
18 |    * CSP operations are not safe by construction. CSP is, in some sense, 'better' than actors: if we have two processes, we can build a pipe between them. But it is easy to construct a pair of pipes waiting for each other.
19 |    * The properties of the original CSP model are difficult to implement in a distributed environment: (A → B: when A sends a message to B, A resumes execution only after B receives it.) This property requires an extra roundtrip if A and B are situated on different hosts.
20 |    * Most implementations of CSP in programming languages are too low-level and do not introduce the term 'process' with explicit operations, but rather force the programmer to think in terms of informal, implicit processes. So, again - programming with channels does not scale.
21 | 
22 | 3. Dataflow programming (sometimes named Oz-style concurrency): [Oz, Alice …]
23 | 
24 |    * The main problem is the absence of control over evaluation. I.e., in a situation where Future[X] is indistinguishable from X, it is hard to know the exact sequencing and timing characteristics.
25 | 
26 | 
27 | 4. Implicit parallelism - too high-level.
28 | 
29 | 
30 | For now, most programming languages provide more limited mechanisms, such as asynchronous method calls (JavaScript, Python) for interpreted languages and async/await macro transformations (Scala, Nim) for compiled ones (which are very limited without runtime support for switching stacks).
31 | 
32 | 
33 | What would be the optimal concurrency mechanism for a new general-purpose language?
34 | 
35 | A set of runtime routines which allows one to:
36 | * spawn a lightweight process
37 | * suspend a lightweight process, awaiting some result (maybe from another process), and allow using this result as a value
38 | * switch to another lightweight process (previously stored in a value), giving it a result, if needed
39 | 
40 | Two implementation techniques exist:
41 | * a total CPS [Continuation Passing Style] transform, which 'eliminates' the stack-switching effect by representing the rest of each computation as an explicit continuation;
42 | * implementing fast stack switching (using a stackless code generator and runtime support) and automatic transformation of 'all' calls into asynchronous form.
43 | 
44 | ----------
45 | [index](https://github.com/rssh/notes)

--------------------------------------------------------------------------------
/2017_12_14-variance-from-lsp.md:
--------------------------------------------------------------------------------
 1 | 
 2 | 
 3 | # Can we deduce variance rules (in Scala, for example) from a useful software engineering principle?
 4 | 
 5 | Sometimes, a description of how to deduce the Scala variance rules from the Liskov substitution principle can be helpful for students (see the [presentation](https://www.slideshare.net/rssh1/cocontr-variance-from-lsp)).
 6 | 
 7 | 
 8 | 
 9 | ----------
10 | [index](https://github.com/rssh/notes)

--------------------------------------------------------------------------------
/2021_06_27_automatic-coloring-for-effects.md:
--------------------------------------------------------------------------------
 1 | ## Problem: automatic coloring of effect monads in [dotty-cps-async](https://github.com/rssh/dotty-cps-async)
 2 | 
 3 | 
 4 | What this is all about, you can read at https://rssh.github.io/dotty-cps-async/Features.html#automatic-coloring
 5 | 
 6 | 
 7 | (or here is a quick recap)
 8 | 
 9 | 
10 | If we have some async/await structure, then technically we should split our code into two parts (colors): one works with async expressions (i.e., F[T]) and one with sync values (T without F).
11 | 
12 | If we want to use an asynchronous expression inside a synchronous function, we should write `await(expr)` instead of `expr`, transforming the asynchronous call into a synchronous one:
13 | 
14 | ```
15 | val c = async[F]{
16 |   val url = "http://www.example.com"
17 |   val data = await(api.fetchUrl("http://www.example.com"))
18 |   val theme = api.classifyText(data)
19 |   val dmpInfo: String = await(api.retrieveDMPInfo(url, await(theme), "1"))
20 |   dmpInfo
21 | }
22 | ```
23 | 
24 | Note that placing async/await does not add anything to the business logic. Let's hide the awaits, which can be inserted automatically:
25 | 
26 | ```
27 | import cps.automaticColoring.{*, given}
28 | 
29 | val c = async[F]{
30 |   val url = "http://www.example.com"
31 |   val data = api.fetchUrl("http://www.example.com")
32 |   val theme = api.classifyText(data)
33 |   val dmpInfo: String = api.retrieveDMPInfo(url, theme, "1")
34 |   dmpInfo
35 | }
36 | ```
37 | 
38 | We see - the code is much cleaner.
39 | 
40 | 
41 | 
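Before going further, here is a small self-contained illustration of the distinction that drives the rest of this post (the `IO` type below is a toy stand-in, assumed for illustration only): a Future caches its result, while an effect value is re-evaluated on every run.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration.*
import scala.concurrent.ExecutionContext.Implicits.global

final case class IO[A](run: () => A) // toy effect: re-evaluated on every run

@main def coloringDemo(): Unit =
  var n = 0
  val fut = Future { n += 1; n }     // the computation starts once, right here
  Await.result(fut, 1.second)
  Await.result(fut, 1.second)
  println(n)                         // 1: a Future memoizes its result

  var m = 0
  val io = IO(() => { m += 1; m })   // a description, not a running computation
  io.run()
  io.run()
  println(m)                         // 2: each run of an effect re-executes it
```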
42 | 
43 | Automatic coloring is easy for caching monads, like futures. But what to do with effect monads like cats or scalaz IO, monix Task, or ziverge ZIO?
44 | 
45 | ### Attempt 0: no coloring at all.
46 | 
47 | The problem: the notation becomes impractical, because in programming with effects nearly any action is effectful, so we need to place `await` on each line of the code.
48 | 
49 | Look at the next block of code:
50 | 
51 | ```scala
52 | val c = async[PureEffect] {
53 |   val logger = await(PEToyLogger.make())
54 |   val counter = await(PEIntRef.make(-1))
55 |   while {
56 |     val v = await(counter.increment())
57 |     await(logger.log(v.toString))
58 |     if (v % 3 == 0) then
59 |       await(logger.log("Fizz"))
60 |     if (v % 5 == 0) then
61 |       await(logger.log("Buzz"))
62 |     v < 10
63 |   } do ()
64 |   await(logger.all())
65 | }
66 | ```
67 | 
68 | The ergonomics of such a code style is worse than the ergonomics of `for` loops, so we need to search for a solution.
69 | 
70 | Note that automatic coloring for immediate monads (like futures) uses implicit conversions to transform `t: F[T]` into `await(t): T`. But this will not work for effect monads, since multiple awaits will trigger multiple effect evaluations. So, code with awaits inserted by such conversions would look like:
71 | 
72 | ```scala
73 | val c = async[PureEffect] {
74 |   val logger = PEToyLogger.make()
75 |   val counter = PEIntRef.make(-1)
76 |   while {
77 |     val v = await(counter).increment()
78 |     logger.log(v.toString)
79 |     if (await(v) % 3 == 0) then
80 |       await(await(logger)).log("Fizz")
81 |     if (await(v) % 5 == 0) then
82 |       await(await(logger).log)("Buzz")
83 |     await(v) < 10
84 |   } do ()
85 |   await(logger.all())
86 | }
87 | ```
88 | 
89 | This will increment `v` three times during each loop iteration, and each time it will point to a new counter.
90 | 
91 | ### Attempt 1 (published in 0.8.1)
92 | 
93 | Let us memoize all values. If the user defines a value inside the async block, then this value will be evaluated once, and later usages of this value will not trigger reevaluation. This works in simple cases and is not hard to remember; our FizzBuzz can now look like:
94 | 
95 | ```scala
96 | val c = async[PureEffect] {
97 |   val logger = PEToyLogger.make()
98 |   val counter = PEIntRef.make(-1)
99 |   while {
100 |     val v = counter.increment()
101 |     logger.log(await(v).toString)
102 |     if (v % 3 == 0) then
103 |       logger.log("Fizz")
104 |     if (v % 5 == 0) then
105 |       logger.log("Buzz")
106 |     v < 10
107 |   } do ()
108 |   await(logger.all())
109 | }
110 | ```
111 | 
112 | But when we try to use this approach in scenarios where effects are passed around to construct other effects, we start to see problematic cases.
113 | 
114 | Why is the semantics of `val f = effect(ref,logger); doTenTimes(f)` different from the semantics of `doTenTimes(effect(ref,logger))`?
115 | Can we create a solution which will work intuitively in most cases?
116 | 
117 | ### Attempt 2 (failure, implemented, not published)
118 | 
119 | When we memoize an effect, we create two values each time: one for the original and one for the memoized version. Can we use the first value for the 'presence of awaits' and the second for the async construction of other effects?
120 | 
121 | This approach works, but the main problem is that we can't explain the type of a variable just by looking at the code. Each variable has two semantics, and we should deduce which semantics applies from the usage. Looks good, but too much type information is hidden.
122 | 
123 | ### Attempt 3: (failure, partially implemented but not published)
124 | 
125 | Ok, can we create a specialized monad with the semantics of an 'already memoized effect' and use `async[Cached[PureEffect]]` instead of `async[PureEffect]`, automatically translating instances of effects into caching effect monads? Interestingly, building such a monad is not trivial. The problem: when we have an expression like `val x = pureEffect(..)`, the compiler has already typed the variable, and we can't change this type easily. So, we should wrap Cached[PureEffect[X]] back into PureEffect[X]. Potentially this could be interesting, but for now I have stopped, since the resulting construction became too heavy.
126 | 
127 | ### Attempt 4: current
128 | 
129 | Let us return to a relatively lightweight solution. We can define the rules for variable memoization with the help of an additional preliminary analysis. If some variable is used only in a synchronous context (i.e., via await), it should be colored as synchronous (i.e., cached). If some variable is passed to other functions as an effect, it should be colored as asynchronous (i.e., uncached). If the variable is used in both synchronous and asynchronous contexts, we can't deduce the programmer's intention, and we report an error.
130 | 
131 | These rules are relatively intuitive and straightforward. As a side effect, we also catch typical errors where developers forget to specify the proper context in places where both synchronous and asynchronous readings are possible.
132 | 
133 | Look at line 6 of our auto-colored fizz-buzz (the line marked `[error here]`):
134 | 
135 | ```scala
136 | val c = async[PureEffect] {
137 |   val logger = PEToyLogger.make()
138 |   val counter = PEIntRef.make(-1)
139 |   while {
140 |     val v = counter.increment()
141 |     logger.log(v.toString) // [error here]
142 |     if (v % 3 == 0) then
143 |       logger.log("Fizz")
144 |     if (v % 5 == 0) then
145 |       logger.log("Buzz")
146 |     v < 10
147 |   } do ()
148 |   await(logger.all())
149 | }
150 | ```
151 | 
152 | Here, `toString` is possible for both `PureEffect[X]` and `X`, so the compiler would not insert `await` here, and the program would print the internal string representation of the effect. The coloring macro will report an error here.
153 | 
154 | Also, the preliminary analysis allows us to catch the situation where a variable defined outside of the async block is used in a synchronous context more than once.
155 | 
156 | Have ideas? Let's discuss at https://github.com/rssh/dotty-cps-async/discussions/43
157 | 
158 | ----------
159 | [index](https://github.com/rssh/notes)

--------------------------------------------------------------------------------
/2021_11_03_business_vs_match.md:
--------------------------------------------------------------------------------
 1 | ## The world is loosely coupled
 2 | 
 3 | 'Business programming in the large' is orthogonal to mathematical structures. I.e., in the naive view, deep mathematical abstractions should lead to beautiful, small, easily maintainable programs. In practice, it is not so: software systems in a condensed, abstracted style are often hard to support because of strong coupling. Math is like a crystal -- it has a stable structure formed around the main idea. Business is like a natural object, a river or a tree -- its structure is formed by many different thoughts, which often contradict each other.
 4 | We can think about metrics here -- the rate of changes should be commensurate with the cost of ongoing refactoring.
5 | 6 | -------------------------------------------------------------------------------- /2022_06_05_structured-concurrency-scala-future.md: -------------------------------------------------------------------------------- 1 | ## Structured concurrency with Scala Future 2 | 3 | ### FutureScope 4 | 5 | I sketched a [small API](https://github.com/rssh/dotty-cps-async/tree/master/jvm/src/test/scala/futureScope), how structured concurrency on top of plain Scala Futures can look. 6 | The idea is to have a scope context, which can be injected into an async block, and a set of structured operations defined for scope, automatically canceled when the main execution flow is terminated. 7 | 8 | ```Scala 9 | def doSomething = async[Future].in(Scope){ 10 | 11 | val firstWin = Promise[String]() 12 | firstWin completeWith FutureScope.spawn{ 13 | firstActivity() 14 | } 15 | firstWin completeWith FutureScope.spawn{ 16 | secondActivity() 17 | } 18 | 19 | await(firstWin.future) 20 | // unfinished activity will be canceled here. 21 | } 22 | ``` 23 | 24 | `FutureScope.spawn` is a main structured concurrency operator, fibers created by spawns is cancelled when the main flow is finished; when unhandled exception is raised inside child, wrapped exception is propagated to the main flow. 25 | 26 | ### EventFlow 27 | 28 | When organizing concurrent execution, we also need some mechanism for message passing between execution flows. 29 | 30 | `EventFlow` is an event queue that provides API for concurrent writing and linearized sequenced reading via an async iterator. The idea of linearized representation is inspired by joinads (see http://tryjoinads.org/ ); unlike joinads, we here do not maintain the execution state but provide a linearized API for reading that allows us to restore the state. 31 | 32 | The signature of the EventFlow follows: 33 | ```Scala 34 | trait EventFlow[E] { 35 | 36 | def events: AsyncIterator[Future, E] 37 | 38 | def post(e: E): Unit = 39 | postTry(Success(e)) 40 | 41 | def postFailure(e: Throwable): Unit = 42 | postTry(Failure(e)) 43 | 44 | def postTry(v: Try[E]): Unit 45 | 46 | def finish(): Unit 47 | 48 | } 49 | ``` 50 | 51 | A classical example, with the parallel search in the binary tree [], which should be finished after first success looks like: 52 | 53 | ```Scala 54 | enum BinaryTree[+T:Ordering] { 55 | case Empty extends BinaryTree[Nothing] 56 | case Node[T:Ordering](value: T, left: BinaryTree[T], right: BinaryTree[T]) extends BinaryTree[T] 57 | 58 | } 59 | 60 | object BinaryTree { 61 | 62 | import scala.concurrent.ExecutionContext.Implicits.global 63 | 64 | def findFirst[T:Ordering](tree: BinaryTree[T], p: T=>Boolean): Future[Option[T]] = async[Future].in(Scope) { 65 | val eventFlow = EventFlow[T]() 66 | val runner = findFirstInContext(tree,eventFlow,p,0) 67 | await(eventFlow.events.next) 68 | } 69 | 70 | 71 | def findFirstInContext[T:Ordering](tree: BinaryTree[T], events: EventFlow[T], p: T=> Boolean, level: Int)(using FutureScopeContext): Future[Unit] = 72 | async[Future]{ 73 | tree match 74 | case BinaryTree.Empty => 75 | case BinaryTree.Node(value, left, right) => 76 | if (p(value)) then 77 | events.post(value) 78 | else 79 | val p1 = FutureScope.spawn( findFirstInContext(left, events, p, level+1) ) 80 | val p2 = FutureScope.spawn( findFirstInContext(right, events, p, level+1) ) 81 | await(p1) 82 | await(p2) 83 | if (level == 0) then 84 | events.finish() 85 | } 86 | 87 | } 88 | ``` 89 | 90 | Here, we send to events found element. 
The search stops after reading the first found element (or after exhausting the search tree).
91 | 
92 | ### FutureGroup
93 | 
94 | One more practical construction is a FutureGroup, which can be built from a collection of context functions returning Future, and an EventFlow. When one of the Futures finishes, its result is sent to the event flow. Also, we can cancel, or join with, the execution of all futures in the group.
95 | Consider the next toy example: we have a list of URLs and want to fetch the 10 pages that are read first:
96 | 
97 | ```Scala
98 | def readFirstN(networkApi: NetworkApi, urls: Seq[String], n:Int)(using ctx:FutureScopeContext): Future[Seq[String]] =
99 |   async[Future].in(Scope.child(ctx)) {
100 |     val all = FutureGroup.collect( urls.map(url => networkApi.fetch(url)) )
101 |     val successful = all.events.inTry.filter(_.isSuccess).take[Seq](n)
102 |     await(successful).map(_.get)
103 |   }
104 | ```
105 | 
106 | Here, FutureGroup.collect creates a future group; then we read the first `n` successful results from this group's events. Finishing the main flow will cancel all remaining processes in the group.
107 | 
108 | ### Happy Eyeball
109 | 
110 | The classical illustration of structured concurrency is an implementation of some subset of RFC8305 [Happy Eyeball](https://datatracker.ietf.org/doc/html/rfc8305), which specifies requirements for an algorithm for opening a connection to a host that can be resolved to multiple IP addresses in multiple address families. We should open a client socket, preferring the IPv6 address family (if one is available) and minimizing visible delay.
111 | 
112 | Here are implementations of two subsets:
113 | - [HappyEyeball](https://github.com/rssh/dotty-cps-async/blob/master/jvm/src/test/scala/futureScope/examples/HappyEyeballs2.scala): here we model the choice of address family and the opening of a connection.
114 | - [LiteHappyEyeball](https://github.com/rssh/dotty-cps-async/blob/master/jvm/src/test/scala/futureScope/examples/LiteHappyEyeballs.scala) – here we model only the opening of a connection. It's interesting because we can compare it with the [ZIO-based implementation by Adam Warski](https://blog.softwaremill.com/happy-eyeballs-algorithm-using-zio-120997ba5152), which implements the same subset.
115 | (Here I don't want to say that one style is better than the other: my view of programming styles is not a strong preference of one over another, but rather a view of a landscape, where the difference between places is a part of the beauty.)
116 | 
117 | ### Why I'm writing this now:
118 | 
119 | Currently, this is just a [directory](https://github.com/rssh/dotty-cps-async/tree/master/jvm/src/test/scala/futureScope) inside the JVM tests in dotty-cps-async. To make this a usable library, we need to do some work: document the API, add tests, implement obvious extensions, port it to js/native, etc.
120 | There also exists a set of choices which can be configured and which are interesting to discuss:
121 | - should we cancel() the child context implicitly (as now), or join it, or ask the user to finish with cancel or join?
122 | - should we propagate exceptions, or leave this duty to the user and issue a warning when we see a possible unhandled exception?
123 | 
124 | [Discussion](https://github.com/rssh/dotty-cps-async/discussions/57)
125 | 
126 | Now I have a minimal time budget, because my attention is focused on other issues due to well-known factors. On the other side, I see that the possibility of structured concurrency with Futures can have a practical impact on choosing the Scala language for a set of projects.
So, I am throwing this into the air, hoping that somebody will find it helpful (and maybe create an open-source library, to which I will be happy to contribute). I'm planning to return to this later, but can't say when.
127 | 
128 | P.S. You can bring this time closer by [donating to a charity supporting the Ukrainian army](https://aerorozvidka.xyz/).
129 | 
130 | 

--------------------------------------------------------------------------------
/2023_05_05_two_cents_about_scala_web_development_in_industry.md:
--------------------------------------------------------------------------------
 1 | 
 2 | # About the current debates about the shrinking Scala user base and the slow adoption of Scala 3:
 3 | 
 4 | 
 5 | The issue is that most Scala web backend projects use a Future-based stack (something like 53% at the end of 2022, as I remember). And after the Lightbend license change, they still have no clear path for migration, nor for starting a new project in this space. Moving to effect systems, like cats-effect IO or ZIO, is overkill[1][2]; cask is 'underkill'; tapir + netty, in principle, can be a foundation of the next solution, but it is not ready yet (the last time I looked at it, the Netty backend did not support websockets). And if the ecosystem has no proposal for more than 50% of the current market, it loses.
 6 | 
 7 | 
 8 | Assembling a stack which will align with the mainstream is not a problem; we have all the elements available now, from structured concurrency to async/await. All that is needed is to aggregate the existing stuff, mainly writing documentation and some glue code, and to provide one point of support. In an ideal world, I imagine some organization (maybe a DAO) that provides subscription services and customer support for such a stack [like RedHat] and distributes part of the subscription fee to the authors of the components. If you are planning something similar - let me know ;)
 9 | 
10 | 
11 | The problem is that nobody does this, for economic reasons: the Lightbend story shows that the model "we support an open-source stack for everybody for free and optionally sell support" is not profitable enough.
12 | 
13 | 
14 | Yet another complication is that after project Loom finishes incubation and becomes a part of the JDK, we will see a second round of disruption of the async frameworks. There is no point in investing in a solution that will become obsolete in a few months.
15 | 
16 | I'm sure that after those few months, we will see an exciting offering in this space. How niche/mainstream it will be (along with the Scala language itself) is a question that depends on finding a suitable business model.
17 | 
18 | Update:
19 | 
20 | [1] 'Overkill' is some simplification. It is better to say that it is overkill and that converting Future code to some effect system is a lot of work, because those models are different: with Future, the responsibility for running and canceling the appropriate process is on the supplier; in IO/ZIO, the consumer runs this process. Those semantics are different – you can't automatically or semi-automatically convert applications written with Future to IO/ZIO: you will need to rethink the whole flow and redefine it entirely with the new distribution of responsibilities between consumer and supplier.
21 | 
22 | [2] Having a good Future stack will be in the interest of the effect-system adherents, too, because it will then be possible to convert neophytes to using effects in their next Scala project. With an existing user base, this will be a little easier.
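
To make the responsibility difference from [1] concrete, here is a minimal self-contained sketch (the `IO` type below is a toy stand-in, assumed for illustration, not any particular library's API):

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// Future: the supplier has already started the work by the time the caller
// holds the value, so cancellation must be arranged on the supplier side.
def fetchF(): Future[String] = Future { "data" } // running as soon as created

// IO-style: an inert description; the consumer decides when (and whether) it runs.
final case class IO[A](unsafeRun: () => A)
def fetchIO(): IO[String] = IO(() => "data")

@main def responsibilityDemo(): Unit =
  val f  = fetchF()  // computation already in flight
  val io = fetchIO() // nothing has happened yet
  println(io.unsafeRun())
```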
23 | -------------------------------------------------------------------------------- /2024_01_30_logic-monad-1.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: Scala and logical monad programming. 3 | --- 4 | 5 | 6 | Here (https://github.com/rssh/dotty-cps-async/tree/master/logic) is a small library, 'dotty-cps-async-logic,' where a type class and default implementation are defined for logical programming with backtracking. 7 | 8 | 9 | ## What is a logic monad for application developers? 10 | 11 | From the name, operations in this monad should allow us to perform logical operations over the logical computation wrapped in a monadic value. 12 | We can interpret the monad expression `M[A]` as a computation of some puzzle, which can have answers of type `A` 13 | 14 | To use those answers, we should represent the execution result as an asynchronous stream, which forces the computation of the following value when the previous value is consumed. 15 | 16 | How to generate logical values: the puzzle has no solution, or we should sequentially review a few possible solutions. 17 | Appropriative interface: 18 | 19 | ```Scala 20 | def empty[A]: M[A] 21 | def seqOr(x: M[A], y: M[A]): M[A] 22 | ``` 23 | 24 | The Haskell base library has two variants of standard interfaces for this: the traditional interface is `MonadPlus`, a modern - `Alternative`[^1]. 25 | 26 | [^1]: The history behind these two names is that MonadPlus has an appropriate signature. Still, often, people think that a MonadPlust operation (seqOr in our case) should form a monoid, which is not true for the case logical search:. Details: https://stackoverflow.com/questions/15722906/must-mplus-always-be-associative-haskell-wiki-vs-oleg-kiselyov 27 | 28 | In the traditional Haskell notation, `empty` is `mzero` and `seqOr` is `mplus`. 29 | 30 | We can represent other logical operations as operations on streams. The most useful are: 31 | 32 | ```Scala 33 | def filter[A](ma: M[A])(p: A=>Boolean):M[A] 34 | ``` 35 | Or better, let's use Scala3 extension syntax and get actual definitions: 36 | 37 | ```Scala 38 | extension[M[_]:CpsLogicMonad,A](ma: M[A]) 39 | .... 40 | 41 | // processing only the value that satisfies the condition. 42 | def filter(p: A=>Boolean): M[A] 43 | 44 | // interleaves the result streams from two choices. Often, this operation is named `fair or`. 45 | def interleave(mb: M[A]): M[A] 46 | 47 | // retrieves only the first result. This operation is often called 'cut' and associated with Prolog soft cut expression. 48 | def once: M[A] 49 | 50 | // combinator which continues evaluation via thenp if ma is not empty or return elsep 51 | def ifThenElse[B](thenp: a=>M[B], elsep: => M[B]) 52 | 53 | // specify only the fail path for the computation 54 | def otherwise(elsep: =>M[A]) 55 | 56 | ..... 57 | 58 | ``` 59 | 60 | The standard introduction of Haskell implementation is a `LogicT` article: *Oleg Kiselyov, Chung-chieh Shan, Daniel P. Friedman, and Amr Sabry. 2005. Backtracking, interleaving, and terminating monad transformers: *. Sometimes it is hard to understand, I recommend to first read *Ralf Hinze. 2000. Deriving backtracking monad transformers. * and then return to the LogicT article. 61 | 62 | In LogicT, all stream combination operations (i.e., `interleave`, `once`, ) are built by using one function `msplit`. 63 | 64 | ```Scala 65 | def msplit: M[Option[A,M[A]]] 66 | ``` 67 | 68 | We can make some operations fancier by providing callable synonyms inside direct syntax. 
For example, we can offer the same functionality as
70 | `filter` with a `guard` statement:
71 | 
72 | ```Scala
73 | inline def guard[M[_]](p: =>Boolean)(using CpsLogicMonadContext[M]): Unit
74 | ```
75 | 
76 | Note that not all logical operations are better represented as effects – for example, the monadic definition of `once` is simple and intuitive. Suppose we want to make an analogous effect with a signature `def cut[M[_]](using CpsLogicMonadContext[M]): Unit`. In that case, we would need to extend our monad with a scope construction, and overall this operation would not be intuitive and understandable without explanation.
77 | 
78 | Therefore, both direct and monadic styles are helpful; it is better to have the ability to use both of these techniques where they are appropriate. That's why we have an `asynchronized` operator in the direct-style API of dotty-cps-async.
79 | 
80 | 
81 | ## A few examples
82 | 
83 | ### List all primes:
84 | 
85 | ```Scala
86 | def primes: LogicStream[Int] = {
87 |   eratosphen(2, TreeSet.empty[Int])
88 | }
89 | 
90 | 
91 | def eratosphen(c:Int, knownPrimes: TreeSet[Int]): LogicStream[Int] = reify[LogicStream]{
92 |   guard(
93 |     knownPrimes.takeWhile(x => x*x <= c).forall(x => c % x != 0)
94 |   )
95 |   c
96 | } |+| eratosphen(c+1, knownPrimes + c)
97 | 
98 | ```
99 | 
100 | ### Eight queens:
101 | 
102 | ```Scala
103 | case class Pos(x:Int, y:Int)
104 | 
105 | def isBeat(p1:Pos, p2:Pos):Boolean =
106 |   (p1.x == p2.x) || (p1.y == p2.y) || (p1.x - p1.y == p2.x - p2.y) || (p1.x + p1.y == p2.x + p2.y)
107 | 
108 | 
109 | def isFree(p:Pos, prefix:IndexedSeq[Pos]):Boolean =
110 |   prefix.forall(pp => !isBeat(p, pp))
111 | 
112 | 
113 | def queens[M[_]:CpsLogicMonad](n:Int, prefix:IndexedSeq[Pos]=IndexedSeq.empty): M[IndexedSeq[Pos]] = reify[M] {
114 |   if (prefix.length >= n) then
115 |     prefix
116 |   else
117 |     val nextPos = (1 to n).map(Pos(prefix.length+1,_)).filter(pos => isFree(pos, prefix))
118 |     reflect(queens(n, prefix :+ reflect(all(nextPos))))
119 | }
120 | 
121 | ```
122 | # Q/A
123 | 
124 | ## What makes the monad logical and different from the other streaming monads?
125 | 
126 | - It should be lazy (in most cases, enumeration of all possible results would cause a combinatorial explosion).
127 | - It should be possible to define logical operators efficiently.
128 | 
129 | ## Can we define logical monadic operations on top of an existing streaming framework?
130 | 
131 | Yes, when the streaming library can efficiently implement concatenation. In practice, an optimized `mplus` implementation is not trivial. For example, for the synchronous variant, in the standard Scala library there exists `LazyList`, where we can define all the logic operations, ~~but running a long sequence of mplus will cause stack overflow errors~~ Update: and it is
132 | possible to defer concatenation.
133 | 
134 | ## But such logic programming is quite limited, because it is applicable only to 'generate and apply' algorithms.
135 | 
136 | True. For the beauty of an entirely logical programming environment, we need a notation for logical terms, and unification, which is not defined here. Can we design a monad for this? Sure, but that is the theme of a future blog post.

--------------------------------------------------------------------------------
/2024_12_09_dependency-injection.md:
--------------------------------------------------------------------------------
 1 | ---
 2 | title: Relatively simple and small type-driven dependency injection
 3 | ---
 4 | # First, why is type-based injection better than name-based injection?
 5 | 
 6 | 
 7 | I found time to modernize dependency injection in some services. The previous version involved simply passing a context object with fields for the services:
 8 | 
 9 | ```Scala
10 | trait AppContext {
11 |   def service1(): Service1
12 |   def service2(): Service2
13 |   ….
14 | }
15 | ```
16 | 
17 | It worked well, except for a few problems:
18 | - tests, where we need to create context objects with all fields, even if we need only one.
19 | - modularization: when we want to move part of the functionality to a shared library, we should also create a library context, and our context should extend the library context.
20 | - one AppContext gives us a 'dependency loop': nearly all services depend on AppContext, which depends on all services. So, the recompilation of AppContext causes the recompilation of all services.
21 | 
22 | However, for relatively small applications, it is possible to live with this.
23 | 
24 | 
25 | If we switch to type-driven context resolving (i.e., use `AppContext[Service1]` instead of `appContext.service1`), we will solve the modularization problem.
26 | 
27 | The first look was at the approach described by @odersky in https://old.reddit.com/r/scala/comments/1eksdo2/automatic_dependency_injection_in_pure_scala/.
28 | (code: https://github.com/scala/scala3/blob/main/tests/run/Providers.scala )
29 | 
30 | Unfortunately, the proposed technique is not directly applicable to our cases. The main reason is that the machinery with match types works only when all the types in the tuple are distinct. This means that components participating in dependency injection cannot be traits (because if we have two traits (A, B), we can't prove that A and B are, in general, distinct; therefore, Index[(A, B)] will not be resolved). The limitation of not using traits as components is too strict for practical usage. Two other problems (the absence of chaining of providers and the absence of caching) are fixable and demonstrate the usual gap between a 'proof of concept' and the real stuff.
31 | 
32 | We will use an approach very close to what sideEffffECt describes in https://www.reddit.com/r/scala/comments/1eqqng2/the_simplest_dependency_injection_pure_scala_no/ with a few additional steps.
33 | 
34 | 
35 | # Basic
36 | 
37 | We will consider a component `C` provided if we can find an `AppContextProvider[C]`:
38 | 
39 | ```Scala
40 | trait AppContextProvider[T] {
41 |   def get: T
42 | }
43 | 
44 | ...
45 | 
46 | object AppContext {
47 | 
48 | 
49 |   def apply[T](using AppContextProvider[T]): T =
50 |     summon[AppContextProvider[T]].get
51 | 
52 |   ….
53 | 
54 | }
55 | ```
56 | 
57 | If we have an implicit instance of the object itself, we consider it provided:
58 | 
59 | 
60 | ```Scala
61 | object AppContextProvider {
62 | 
63 |   ....
64 | 
65 |   given ofGiven[T](using T): AppContextProvider[T] with {
66 |     def get: T = summon[T]
67 |   }
68 | 
69 | 
70 | }
71 | ```
72 | 
73 | Also, the usual practice for a component is to define its implicit provider in the companion object.
74 | 
75 | 
76 | An example with UserSubscription looks like:
77 | 
78 | ```Scala
79 | class UserSubscription(using AppContextProvider[EmailService],
80 |                              AppContextProvider[UserDatabase]
81 |                       ) {
82 | 
83 | 
84 |   def subscribe(user: User): Unit =
85 |     AppContext[EmailService].sendEmail(user, "You have been subscribed")
86 |     AppContext[UserDatabase].insert(user)
87 | 
88 |   ….
89 | 90 | } 91 | 92 | object UserSubscription { 93 | // boilerplate 94 | given (using AppContextProvider[EmailService], 95 | AppContextProvider[UserDatabase]): 96 | AppContextProvider[UserSubscription] with 97 | def get: UserSubscription = new UserSubscription 98 | } 99 | ``` 100 | 101 | 102 | 103 | Okay, this works, but we have to write some boilerplate. Can we have the same in a shorter form, for example, a List of provided types instead of a set of implicit variants? 104 | 105 | # Shrinking boilerplate: 106 | 107 | Sure, we can pack the provided parameter types in the tuple and use macroses for extraction. 108 | 109 | ```Scala 110 | class UserSubscription(using AppContextProviders[(EmailService,UserDatabase)]) { 111 | 112 | 113 | def subscribe(user: User): Unit = 114 | AppContext[EmailService].sendEmail(user, "You have been subscribed") 115 | AppContext[UserDatabase].insert(user) 116 | 117 | 118 | …. 119 | 120 | 121 | } 122 | ``` 123 | 124 | How to do this: at first, we will need a type-level machinery, which will select a first supertype of `T` from the tuple `Xs`: 125 | 126 | ```Scala 127 | object TupleIndex { 128 | 129 | 130 | opaque type OfSubtype[Xs <: Tuple, T, N<:Int] = N 131 | 132 | 133 | extension [Xs <: Tuple, T, N<:Int](idx: TupleIndex.OfSubtype[Xs, T, N]) 134 | def index: Int = idx 135 | 136 | 137 | inline given zeroOfSubtype[XsH, XsT <:Tuple, T<:XsH]: OfSubtype[XsH *: XsT, T, 0] = 0 138 | 139 | 140 | inline given nextOfSubtype[XsH, XsT <:NonEmptyTuple, T, N <: Int](using idx: OfSubtype[XsT, T, N]): OfSubtype[XsH *: XsT, T, S[N]] = 141 | constValue[S[N]] 142 | 143 | 144 | } 145 | ``` 146 | 147 | 148 | Then, we can define a type for search in AppProviders: 149 | 150 | ```Scala 151 | trait AppContextProvidersSearch[Xs<:NonEmptyTuple] { 152 | 153 | 154 | def getProvider[T,N<:Int](using TupleIndex.OfSubtype[Xs,T,N]): AppContextProvider[T] 155 | 156 | 157 | def get[T, N<:Int](using TupleIndex.OfSubtype[Xs,T, N]): T = getProvider[T,N].get 158 | 159 | 160 | } 161 | 162 | 163 | trait AppContextProviders[Xs <: NonEmptyTuple] extends AppContextProvidersSearch[Xs] 164 | 165 | ``` 166 | 167 | 168 | and expression, which will assemble the instance of the AppContextProvider from available context providers when it will be called. 169 | 170 | 171 | ```Scala 172 | object AppContextProviders { 173 | 174 | inline given generate[T<:NonEmptyTuple]: AppContextProviders[T] = ${ generateImpl[T] } 175 | 176 | …… 177 | 178 | } 179 | ``` 180 | 181 | 182 | (complete code is available in the repository: https://github.com/rssh/scala-appcontext ; permalink to generateImpl: https://github.com/rssh/scala-appcontext/blob/666a02e788aa57922104569541511a16431690fb/shared/src/main/scala/com/github/rssh/appcontext/AppContextProviders.scala#L52 ) 183 | 184 | We separate `AppContextProvidersSearch` and `AppContextProviders` because we don't want to trigger `AppContextProviders` implicit generation during implicit search outside of service instance generation. 185 | Note that Scala currently has no way to make a macro that generates a given instance to fail an implicit search silently. We can only make errors during the search, which will abandon the whole compilation. 186 | 187 | Can we also remove the boilerplate when defining the implicit AppContext provider? 188 | I.e. 
189 | 190 | ```Scala 191 | object UserSubscription { 192 | // boilerplate 193 | given (using AppContextProvider[EmailService], 194 | AppContextProvider[UserDatabase]): AppContextProvider[UserSubscription] with 195 | def get: UserSubscription = new UserSubscription 196 | } 197 | ``` 198 | 199 | Will become 200 | 201 | 202 | ```Scala 203 | object UserSubscription { 204 | 205 | given (using AppContextProviders[(EmailService, UserDatabase)]): AppContextProvider[UserSubscription] with 206 | def get: UserSubscription = new UserSubscription 207 | } 208 | ``` 209 | 210 | But this will still be boilerplate: We must enumerate dependencies twice and write trivial instance creation. On the other hand, this instance creation is not entirely meaningless: we can imagine the situation when it's not automatic. 211 | 212 | To minimize this kind of boilerplate, we can introduce a convention for `AppContextProviderModule`, which defines its dependencies in type and automatic generation of instance providers: 213 | 214 | ```Scala 215 | trait AppContextProviderModule[T] { 216 | 217 | 218 | /** 219 | * Dependencies providers: AppContextProviders[(T1,T2,...,Tn)], where T1,T2,...,Tn are dependencies. 220 | */ 221 | type DependenciesProviders 222 | 223 | 224 | /** 225 | * Component type, which we provide. 226 | */ 227 | type Component = T 228 | 229 | 230 | 231 | 232 | inline given provider(using dependenciesProvider: DependenciesProviders): AppContextProvider[Component] = ${ 233 | AppContextProviderModule.providerImpl[Component, DependenciesProviders]('dependenciesProvider) 234 | } 235 | 236 | … 237 | 238 | } 239 | ``` 240 | 241 | 242 | Now, the definition of UserSubscriber can look as described below: 243 | 244 | ```Scala 245 | class UserSubscription(using UserSubscription.DependenciesProviders) 246 | … 247 | 248 | 249 | object UserSubscription extends AppContextProviderModule[UserSubscription] { 250 | type DependenciesProviders = AppContextProviders[(EmailService, UserDatabase)] 251 | } 252 | ``` 253 | 254 | 255 | Is that all – not yet. 256 | 257 | # Caching 258 | 259 | Yet one facility usually needed from the dependency injection framework is caching. In all our examples, `AppContextProvider` returns a new instance of services. However, some services have a state that should be shared between all service clients. An example is a connection pool or service that gathers internal statistics into the local cache. 260 | 261 | Let’s add cache type to the AppContext: 262 | 263 | ```Scala 264 | object AppContext { 265 | 266 | 267 | … 268 | 269 | 270 | opaque type Cache = TrieMap[String, Any] 271 | 272 | 273 | opaque type CacheKey[T] = String 274 | 275 | 276 | inline def cacheKey[T] = ${ cacheKeyImpl[T] } 277 | 278 | 279 | extension (c: Cache) 280 | inline def get[T]: Option[T] 281 | inline def getOrCreate[T](value: => T): T 282 | inline def put[T](value: T): Unit 283 | 284 | 285 | } 286 | ``` 287 | 288 | 289 | And let's deploy a simple convention: if the service requires `AppContext.Cache` as a dependency, then we consider this service cached. I.e., with manual setup of `AppContextProvider` this should look like this: 290 | 291 | ```Scala 292 | object FuelUsage { 293 | 294 | given (using AppContextProviders[(AppContext.Cache, Wheel, Rotor, Tail)]): AppContextProvider[FuelUsage] with 295 | def get: FuelUsage = AppContext[AppContext.Cache].getOrCreate(FuelUsage) 296 | 297 | } 298 | ``` 299 | 300 | Automatically generated providers follow the same convention. 301 | 302 | The cache key now is just the name of a type. 
But now we are facing a problem: if we have more than one implementation of a service (e.g., test/real), these are different types. Usually, developers have in mind some 'base type' that a service implementation replaces, but macros can't extract this information indirectly. So, let's allow the developer to write this class in an annotation:
304 | 
305 | ```Scala
306 | class appContextCacheClass[T] extends scala.annotation.StaticAnnotation
307 | ```
308 | 
309 | Cache operations will follow that annotation when calculating cacheKey[T].
310 | Typical usage:
311 | 
312 | ```Scala
313 | trait UserSubscription
314 | 
315 | @appContextCacheClass[UserSubscription]
316 | class TestUserSubscription(using TestUserSubscription.DependenciesProviders)
317 | 
318 | ...
319 | ```
320 | 
321 | # Preventing pitfalls
322 | 
323 | Can this be considered a complete mini-framework? Not yet.
324 | Let's look at the following code:
325 | 
326 | ```Scala
327 | case class Dependency1(name: String)
328 | 
329 | 
330 | object Dependency1 {
331 |   given AppContextProvider[Dependency1] = AppContextProvider.of(Dependency1("dep1:module"))
332 | }
333 | 
334 | 
335 | case class Dependency2(name: String)
336 | 
337 | class Component(using AppContextProviders[(Dependency1,Dependency2)]) {
338 |   def doSomething(): String = {
339 |     s"${AppContext[Dependency1]}:${AppContext[Dependency2]}"
340 |   }
341 | }
342 | 
343 | val dep1 = Dependency1("dep1:local")
344 | val dep2 = Dependency2("dep2:local")
345 | val c = Component(using AppContextProviders.of(dep1, dep2))
346 | println(c.doSomething())
347 | ```
348 | 
349 | What will be printed?
350 | 
351 | The correct answer is `"dep1:module:dep2:local"`, because the resolution of `Dependency1` via the given in its companion object is preferred over the resolution from the supplied `AppContextProviders`. Unfortunately, I don't know how to change this.
352 | 
353 | We can add a check to determine whether the supplied providers are actually needed. Again, unfortunately, we can't add it 'behind the scenes' by modifying the generator of `AppContextProviders`, because the generator is inlined in the caller context of the component instance, where all the dependencies should be resolved.
354 | We can write a macro that should be called from the context inside a component definition. This will require the developer to call it explicitly.
355 | 
356 | I.e., a typical component definition will look like this:
357 | 
358 | ```Scala
359 | class Component(using AppContextProviders[(Dependency1,Dependency2)]) {
360 |   assert(AppContextProviders.allDependenciesAreNeeded)
361 |   ….
362 | }
363 | ```
364 | 
365 | Now we can use our micro-framework without pitfalls. Automatic checking that all the listed dependencies are actually used is a good idea, which can balance the cost of the extra line of code.
366 | 
367 | 
368 | In the end, we have received something usable. After doing all these steps, I can understand why developers whose main expertise is in another ecosystem can hate Scala.
369 | In any other language, a developer can take the default, best-known dependency injection framework for that language and use it. But in Scala, we have no defaults. All the alternatives are heavy or not type-driven. Building our own small library takes time, distracting developers from business-related tasks. And we can't eliminate the library users' need to write some boilerplate code. On the other hand, things look good: Scala's flexibility allows one to quickly develop a 'good enough' solution despite the fragmented ecosystem.
370 | 
371 | 
372 | The repository for this mini-framework can be found at https://github.com/rssh/scala-appcontext
373 | 
374 | 

--------------------------------------------------------------------------------
/2024_12_30_dependency_injection_tf.md:
--------------------------------------------------------------------------------
 1 | ---
 2 | title: Small type-driven dependency injection in effect systems.
 3 | ---
 4 | 
 5 | At first, a detective story about Reddit moderation: under the previous post from this series, there was a comment about effect systems. I gave two answers: one with a technical overview, and a second with a note that it is possible to use a static AppContextProvider. A few days later, I accidentally saw this discussion on another computer, where I was not logged into Reddit, and discovered that a moderator had deleted my first reply. Interestingly, I don't see this when logged in as myself. Quite strange; I guess this is automatic moderation based on some pattern. [UPD: it was automatic moderation due to a link to `telegra.ph`, which was used by Ivan to post about a project]
 6 | 
 7 | Prerequisite: the reader is familiar with the previous part: https://github.com/rssh/notes/blob/master/2024_12_09_dependency-injection.md
 8 | 
 9 | Ok. Let's adjust our type-based injection framework to effect systems. This text is the result of joint work with Ivan Kyrylov during GSoC-2024. The main work was not about dependency injection but about abstract representations of effects; static dependency injection was a starting point for Ivan's journey. Our first attempt was based on a different approach than the one described here (we tried to automatically assemble a type map of the needed injections, which, in the current state of Scala 3.x, is impossible because we can't return a context function from a macro), but during this stage we gained some understanding of what should work.
10 | 
11 | First, what makes dependency injection different in an effect-system environment?
12 | - The types of dependencies are encoded into the type of the enclosing monad.
13 | - Retrieving dependencies can be asynchronous.
14 | 
15 | I.e., typical usage of dependency injection in an effect environment looks like:
16 | 
17 | ## Tagless-Final style
18 | 
19 | ```Scala
20 | def newSubscriber1[F[_]:InAppContext[(UserDatabase[F],EmailService)]:Effect](user: User):F[User] = {
21 |   for{
22 |     db <- InAppContext.get[F, UserDatabase[F]]
23 |     u <- db.insert(user)
24 |     emailService <- AppContextEffect.get[F, EmailService]
25 |     _ <- emailService.sendEmail(u, "You have been subscribed")
26 |   } yield u
27 | }
28 | ```
29 | 
30 | Here, we assume tagless-final style, and the dependencies `(UserDatabase[F],EmailService)` are listed as properties of `F[_]`. I.e., there exists an
31 | `InAppContext[(UserDatabase[F],EmailService)][F]` from which we can retrieve monadized references during computations.
32 | 
33 | We can define InAppContext as a reference to the list of providers:
34 | 
35 | ```Scala
36 | type InAppContext[D <: NonEmptyTuple] = [F[_]] =>> AppContextAsyncProviders[F,D]
37 | ```
38 | 
39 | where AppContextAsyncProviders is constructed in the same way as AppContextProviders in the core case.
40 | 
41 | Before diving into details, let's speak about the second difference: monadic (or asynchronous) retrieval of dependencies:
42 | 
43 | ```Scala
44 | trait AppContextAsyncProvider[F[_],T] {
45 | 
46 |   def get: F[T]
47 | 
48 | }
49 | ```
Here, the async provider returns the value wrapped in the monad.
50 | 
51 | Of course, we can make an async provider from a sync one:
52 | 
53 | ```Scala
54 | given fromSync[F[_] : AppContextPure, T](using syncProvider: AppContextProvider[T]): AppContextAsyncProvider[F, T] with
55 |   def get: F[T] = summon[AppContextPure[F]].pure(syncProvider.get)
56 | ```
57 | 
58 | But note that for this we need a `Pure`-like typeclass defined somewhere (there is none in the Scala standard library). Also, in theory, `syncProvider.get` can produce side effects, and developers who prefer a pure functional style would want to delay the potentially effectful invocation. One more wrinkle: `pure` in cats is eager, so maybe a better name exists. Let's define our own generic typeclass, [AppContextPure](https://github.com/rssh/scala-appcontext/blob/59014c7aecacf81ea3fb6f9415ed603001032248/tagless-final/shared/src/main/scala/com/github/rssh/appcontext/util/AppContextPure.scala#L5), and provide an implementation based on dotty-cps-async (which becomes an optional dependency).
59 | If you have an idea for a better name, please write a comment.
60 | 
61 | OK, now back to constructing providers.
62 | Suppose we have a method with the signature:
63 | 
64 | ```Scala
65 | def newSubscriber1[F[_]:InAppContext[(UserDatabase[F],EmailService)]:Effect](user: User): F[User] = ...
66 | ```
67 | 
68 | When we call this method from outside the `newSubscriber1` scope, we should synthesize an `AppContextAsyncProviders[F,(UserDatabase[F],EmailService)]` by searching for providers for the tuple elements.
69 | When we resolve a service from inside, we should take it from the `AppContextAsyncProviders` tuple if it is listed there. At first glance, we can build `AppContextAsyncProviders` with a macro, just like in the core case (described in the previous post).
70 | But wait, there is one difference: an `AppContextAsyncProviders` instance is always in the lexical scope inside the `newSubscriber1` function. This means that a search for an `AppContextAsyncProvider` can itself trigger the implicit construction of an `AppContextAsyncProviders` instance.
71 | 
72 | Let's look at the next block of code:
73 | 
74 | ```Scala
75 | trait ConnectionPool {
76 |   def get[F[_]:Effect]():F[Connection]
77 | }
78 | 
79 | trait UserDatabase[F[_]:Effect:InAppContext[ConnectionPool *: EmptyTuple]] {
80 |   def insert(user: User):F[User]
81 | }
82 | 
83 | object UserDatabase {
84 |   given [F[_]:Effect:InAppContext[ConnectionPool *: EmptyTuple]]
85 |     : AppContextAsyncProvider[F,UserDatabase[F]]  // implementation elided
86 | }
87 | 
88 | def newSubscriber1[F[_]:InAppContext[(UserDatabase[F],EmailService)]:Effect](user: User):F[User] = {
89 |   ...
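  // (the body is the same as in the tagless-final example at the top of this
  //  post: obtain UserDatabase[F] and EmailService via InAppContext.get,
  //  insert the user, then send the notification email)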
90 | }
91 | 
92 | 
93 | def main(): Unit = {
94 |   given EmailService = new MyLocalEmailService
95 |   given ConnectionPool = new MyLocalConnectionPool
96 |   val user = User("John", "john@example.com")
97 |   val r = newSubscriber1[ToyMonad](user)
98 |   val fr = ToyMonad.run(r)
99 | }
100 | 
101 | 
102 | ```
103 | (assuming a minimal [ToyMonad](https://github.com/rssh/scala-appcontext/blob/59014c7aecacf81ea3fb6f9415ed603001032248/tagless-final/shared/src/test/scala/com/github/rssh/toymonad/ToyMonad.scala#L15))
104 | 
105 | Here, the context bounds of `newSubscriber1[ToyMonad]` trigger a search for a `UserDatabase[ToyMonad]` provider (1); the given in the `UserDatabase` companion object (2) in turn requires a `ConnectionPool` provider (3), which is first looked up in the `InAppContext[..][F]` scope; that lookup triggers the implicit construction of `AppContextAsyncProviders[ConnectionPool *: EmptyTuple]` (4), because `InAppContext[ConnectionPool *: EmptyTuple]` is a type parameter of the enclosing function, and only afterwards does the search continue in the enclosing scope (5).
106 | 
107 | The problem is that if step (4) triggers our macro and the macro reports an error, the whole implicit search fails with that error; we cannot continue the search and never reach step (5).
108 | 
109 | In the core case, we escaped this problem by defining the class `AppContextProvidersSearch`. Here we can't do that.
110 | 
111 | Let's think about how to write a macro for an implicit search that can fail the search without producing an error. For this, our macro should return some result (success or failure) whose type is determined by the macro itself, and then use evidence of success in the implicit search for the value:
112 | 
113 | ```Scala
114 | object AppContextAsyncProviders {
115 | 
116 |   trait TryBuild[F[_], Xs <: NonEmptyTuple]
117 |   case class TryBuildSuccess[F[_], Xs <: NonEmptyTuple](providers: AppContextAsyncProviders[F,Xs]) extends TryBuild[F,Xs]
118 |   case class TryBuildFailure[F[_], Xs <: NonEmptyTuple](message: String) extends TryBuild[F,Xs]
119 | 
120 |   transparent inline given tryBuild[F[_], Xs <: NonEmptyTuple]: TryBuild[F,Xs] = ${
121 |     tryBuildImpl[F,Xs]
122 |   }
123 | 
124 |   inline given build[F[_]:CpsMonad, Xs <: NonEmptyTuple, R <: TryBuild[F,Xs]](using inline trb: R, inline ev: R <:< TryBuildSuccess[F,Xs]): AppContextAsyncProviders[F,Xs] = {
125 |     trb.providers
126 |   }
127 | 
128 |   def tryBuildImpl[F[_]:Type, Xs <: NonEmptyTuple:Type](using Quotes): Expr[TryBuild[F,Xs]] = {
129 |     // our macro, which now returns TryBuildFailure instead of reporting an error.
130 |   }
131 | 
132 |   ..
133 | 
134 | }
135 | 
136 | 
137 | ```
138 | 
139 | Full code: [AppContextAsyncProviders](https://github.com/rssh/scala-appcontext/blob/main/tagless-final/shared/src/main/scala/com/github/rssh/appcontext/AppContextAsyncProviders.scala).
140 | 
141 | Now, let's port the standard example to the monadic case: [see example 3](https://github.com/rssh/scala-appcontext/blob/59014c7aecacf81ea3fb6f9415ed603001032248/tagless-final/shared/src/test/scala/com/github/rssh/appcontext/Example3Test.scala#L12). The next block of code instantiates `UserDatabase` and passes it to `newSubscriber1` under the hood:
142 | 
143 | ```Scala
144 | given EmailService = ..
145 | given ConnectionPool = ..
146 | 
147 | 
148 | val user = User("John", "john@example.com")
149 | val r = newSubscriber1[ToyMonad](user)
150 | ```
151 | 
152 | Hmm... notice that we don't actually use `AppContextAsyncProvider` here: all the givens in scope are plain synchronous values.
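If we wanted the asynchronous path to be exercised, we could register an async provider explicitly. A hypothetical given might look like this (assuming the `ToyMonad` companion offers a `pure`-like constructor; that name is illustrative, not actual library code):

```Scala
// Hypothetical: an explicitly asynchronous provider, so resolution goes
// through AppContextAsyncProvider instead of the fromSync bridge.
given AppContextAsyncProvider[ToyMonad, ConnectionPool] with
  def get: ToyMonad[ConnectionPool] = ToyMonad.pure(new MyLocalConnectionPool)
```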
153 | 
154 | Let's make the model example closer to reality: use a real `IO` and an asynchronous `Connection` created inside a resource:
155 | 
156 | See [Example 5](https://github.com/rssh/scala-appcontext/blob/main/tagless-final/jvm/src/test/scala/com/github/rssh/appcontexttest/Example5Test.scala)
157 | 
158 | 
159 | ## Concrete monad style
160 | 
161 | One more popular style is using a concrete monad, for example `IO` instead of `F[_]`. In that case, we don't need `InAppContext` and can pass providers as context parameters, just as in the core case. Which providers to use, `AppContextProvider` or `AppContextAsyncProviders`, becomes a question of taste. You can even use `AppContextProviderModule` with async dependencies.
162 | 
163 | [Example](https://github.com/rssh/scala-appcontext/blob/main/tagless-final/jvm/src/test/scala/com/github/rssh/appcontexttest/Example7Test.scala)
164 | 
165 | ## Environment effects
166 | 
167 | Since we touch the theme of type-driven dependency injection in effect systems, we should say a few words about libraries like ZIO or Kyo, which provide their own implementations of dependency injection.
168 | All of them are based on the concept that the types needed for a computation are encoded in its signature (similar to our tagless-final approach). In theory, our approach can simplify interaction points with such libraries (i.e., we can assemble the needed computation environment from providers).
169 | 
170 | 
171 | That's all for today. The tagless-final part is published as a subproject of `appcontext` under the name "appcontext-tf"
172 | (github: https://github.com/rssh/scala-appcontext ).
173 | You can try it by adding `"com.github.rssh" %%% "appcontext-tf" % "0.2.0"` as a dependency. (Maybe it should be merged with the core?) I will be grateful for problem reports and suggestions for better names.
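In sbt terms, that is the usual one-liner (the `%%%` operator assumes a cross-built project via sbt-crossproject; for a JVM-only build, `%%` would be used instead):

```Scala
libraryDependencies += "com.github.rssh" %%% "appcontext-tf" % "0.2.0"
```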
174 | 
175 | 
176 | 
177 | 
178 | 
179 | 
180 | 
181 | 
182 | 
183 | 
184 | 
185 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # notes
2 | just a place for some random notes about programming
3 | 
4 | * 2024/12/30 [small type-driven dependency injection in effect systems.](https://github.com/rssh/notes/blob/master/2024_12_30_dependency_injection_tf.md)
5 | * 2024/12/09 [Small and simple type-driven dependency injection](https://github.com/rssh/notes/blob/master/2024_12_09_dependency-injection.md)
6 | * 2024/01/30 [Monad for backtracked logical programming](https://github.com/rssh/notes/blob/master/2024_01_30_logic-monad-1.md)
7 | * 2023/05/05 [Two cents about shrinking Scala user base debates](https://github.com/rssh/notes/blob/master/2023_05_05_two_cents_about_scala_web_development_in_industry.md)
8 | * 2022.06.05 [Structured concurrency on top of plain Scala Future](https://github.com/rssh/notes/blob/master/2022_06_05_structured-concurrency-scala-future.md)
9 | * 2021.11.03 [The worlds is loosely coupled](https://github.com/rssh/notes/blob/master/2021_11_03_business_vs_match.md)
10 | * 2021.06.27 [automatic coloring of effect monads](https://github.com/rssh/notes/blob/master/2021_06_27_automatic-coloring-for-effects.md)
11 | * 2017.12.14 [deduce variance rules from Liskov Substitution Principle](https://github.com/rssh/notes/blob/master/2017_12_14-variance-from-lsp.md)
12 | * 2017.08.06 [So, I want to add concurrency to my language...](https://github.com/rssh/notes/blob/master/2017_08_06_concurrency_in_new_language.md)
13 | * 2016.04.28 [test-case for scala compiler timeout](https://github.com/rssh/notes/blob/master/2016_04_28_typelevel_isprime/)
14 | * 2016.03.27 [Bring my stacktrace back from Future](https://github.com/rssh/notes/blob/master/2016_27_03_back-my-stack-traces.md)
15 | * 2016.03.12 [chan chan chan](https://github.com/rssh/notes/blob/master/2016_03_12_chan-chan-chan.md)
16 | * 2016.03.05 [(see ma, no vars): idiomatic go <=> idiomatic scala](https://github.com/rssh/notes/blob/master/2016_03_05_see-ma-no-vars.md)
17 | 
--------------------------------------------------------------------------------
/feed.xml:
--------------------------------------------------------------------------------
1 | 
2 | 
3 | 
4 | Random notes about programming
5 | https://rssh.github.io/notes/feed.xml
6 | 
7 | https://cyber.harvard.edu/rss/rss.html
8 | 
9 | Mon, 30 Dec 2024 00:00:00 +0200
10 | Ruslan Shevchenko <ruslan@shevchenko.kiev.ua>
11 | 
12 | https://github.com/rssh/notes/blob/master/2024_12_30_dependency_injection_tf.md
13 | small type-driven dependency injection in effect systems.
14 | 
15 | 
16 | Mon, 9 Dec 2024 00:00:00 +0200
17 | Ruslan Shevchenko <ruslan@shevchenko.kiev.ua>
18 | 
19 | https://github.com/rssh/notes/blob/master/2024_12_09_dependency-injection.md
20 | Relative simple and small type-driven dependency injection
21 | 
22 | 
23 | Tue, 30 Jan 2024 00:00:00 +0200
24 | Ruslan Shevchenko <ruslan@shevchenko.kiev.ua>
25 | 
26 | https://github.com/rssh/notes/blob/master/2024_01_30_logic-monad-1.md
27 | Scala and logical monad programming.
28 | 
29 | 
30 | Fri, 5 May 2023 00:00:00 +0300
31 | Ruslan Shevchenko <ruslan@shevchenko.kiev.ua>
32 | 
33 | https://github.com/rssh/notes/blob/master/2023_05_05_two_cents_about_scala_web_development_in_industry.md
34 | About current debates about shrinking the Scala user base and the slow adoption of Scala 3:
35 | 
36 | 
37 | Sun, 5 Jun 2022 00:00:00 +0300
38 | Ruslan Shevchenko <ruslan@shevchenko.kiev.ua>
39 | 
40 | https://github.com/rssh/notes/blob/master/2022_06_05_structured-concurrency-scala-future.md
41 | Structured concurrency with Scala Future
42 | 
43 | 
44 | Wed, 3 Nov 2021 00:00:00 +0200
45 | Ruslan Shevchenko <ruslan@shevchenko.kiev.ua>
46 | 
47 | https://github.com/rssh/notes/blob/master/2021_11_03_business_vs_match.md
48 | The worlds is  loosely coupled
49 | 
50 | 
51 | Sun, 27 Jun 2021 00:00:00 +0300
52 | Ruslan Shevchenko <ruslan@shevchenko.kiev.ua>
53 | 
54 | https://github.com/rssh/notes/blob/master/2021_06_27_automatic-coloring-for-effects.md
55 | 
56 | Problem: automatic coloring of effect monads in [dotty-cps-async](https://github.com/rssh/dotty-cps-async)
57 | 
58 | 
59 | 
60 | Sun, 6 Aug 2017 00:00:00 +0300
61 | Ruslan Shevchenko <ruslan@shevchenko.kiev.ua>
62 | 
63 | https://github.com/rssh/notes/blob/master/2017_08_06_concurrency_in_new_language.md
64 | So, I want to add concurrency to my language...
65 | 
66 | 
67 | Sun, 27 Mar 2016 00:00:00 +0200
68 | Ruslan Shevchenko <ruslan@shevchenko.kiev.ua>
69 | 
70 | https://github.com/rssh/notes/blob/master/2016_27_03_back-my-stack-traces.md
71 | Give my stacktraces back
72 | 
73 | 
74 | Sat, 12 Mar 2016 00:00:00 +0200
75 | Ruslan Shevchenko <ruslan@shevchenko.kiev.ua>
76 | 
77 | https://github.com/rssh/notes/blob/master/2016_03_12_chan-chan-chan.md
78 | chan chan chan
79 | 
80 | 
81 | Sat, 5 Mar 2016 00:00:00 +0200
82 | Ruslan Shevchenko <ruslan@shevchenko.kiev.ua>
83 | 
84 | https://github.com/rssh/notes/blob/master/2016_03_05_see-ma-no-vars.md
85 | idiomatic Go <=> Idiomatic Scala
86 | 
87 | 
88 | 
--------------------------------------------------------------------------------
/scripts/generate-feed.sc:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env -S scala-cli -S 3
2 | 
3 | //> using scala "3.3.3"
4 | //> using dep "com.mchange::audiofluidity-rss:0.0.6"
5 | //> using dep "com.lihaoyi::os-lib::0.11.3"
6 | //
7 | 
8 | val blogTitle = "Random notes about programming"
9 | val author = "Ruslan Shevchenko <ruslan@shevchenko.kiev.ua>"
10 | val baseUrl = "https://github.com/rssh/notes/blob/master"
11 | val feedUrl = "https://rssh.github.io/notes/feed.xml"
12 | val wd = os.pwd
13 | val path = if wd.toString.endsWith("scripts") && os.exists(wd / "generate-feed.sc") then
14 |   os.Path(wd.wrapped.getParent)
15 | else if os.exists(wd/"scripts"/"generate-feed.sc") then
16 |   wd
17 | else
18 |   println(s"Can't determine directory: should be scripts or the current directory, not ${wd}, exiting")
19 |   System.exit(1)
20 |   ???
21 | 
22 | import audiofluidity.rss.*
23 | import java.time.*
24 | 
25 | def extractTitle(lines:IndexedSeq[String]): Option[String] = {
26 |   val titleLine = "^title: (.*)$".r
27 |   val head1Line = """^\# (.*)$""".r
28 |   val head2Line = """^\#\# (.*)$""".r
29 |   lines.collectFirst{
30 |     case titleLine(title) => title
31 |     case head1Line(title) => title
32 |     case head2Line(title) => title
33 |   }
34 | 
35 | }
36 | 
37 | println(s"path=$path")
38 | val items = os.list(path).filter(file =>
39 |   os.isFile(file) && file.ext == "md" &&
40 |   file.baseName != "README"
41 | ).flatMap{ file =>
42 |   val dateRegExpr = """([0-9]+)_([0-9]+)_([0-9]+)_(.*)$""".r
43 |   file.baseName.toString match
44 |     case dateRegExpr(sYear,sMonth,sDay, rest) =>
45 |       val (month, day) = if (sMonth.toInt > 12) {  // some filenames use DD_MM order: a 'month' > 12 means the parts are swapped
46 |         (sDay.toInt, sMonth.toInt)
47 |       } else (sMonth.toInt, sDay.toInt)
48 |       val year = sYear.toInt
49 |       val date = LocalDate.of(year,month,day)
50 |       println(s"file=$file, date=$date")
51 |       Some((file, date))
52 |     case _ =>
53 |       println(s"file $file without date prefix, skipping")
54 |       None
55 | }.sortBy(_._2).reverse.map{ (file, ctime) =>
56 |   val mdContent = os.read.lines(file)
57 |   val title = extractTitle(mdContent).getOrElse(file.toString)
58 |   val pubDate = ZonedDateTime.of(ctime, LocalTime.MIN, ZoneId.systemDefault)
59 |   Element.Item.create(title, s"${baseUrl}/${file.baseName}.${file.ext}", "at {}", author, pubDate=Some(pubDate))
60 | }
61 | 
62 | val channel = Element.Channel.create(blogTitle,feedUrl,"random unsorted notes",items)
63 | val rss = Element.Rss(channel).asXmlText  // render once, then print and write the same text
64 | println(rss)
65 | os.write.over(path/"feed.xml",rss)
66 | 
67 | 
--------------------------------------------------------------------------------