├── .travis.yml ├── README.md ├── example_test.go ├── helpers.go ├── iter.go ├── logger.go ├── predicates.go ├── processors.go ├── transducers.go └── transducers_test.go /.travis.yml: -------------------------------------------------------------------------------- 1 | language: go 2 | go: 3 | - 1.2 4 | - 1.3 5 | - tip 6 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Transducers for Go 2 | 3 | [![Build Status](https://travis-ci.org/sdboyer/transducers-go.svg?branch=master)](https://travis-ci.org/sdboyer/transducers-go) 4 | 5 | This is an implementation of transducers, a concept from [Clojure](http://clojure.org), for Go. 6 | 7 | Transducers can be tricky to understand with just an abstract description, but here it is: 8 | 9 | > Transducers are a composable way to build reusable algorithmic transformations. 10 | 11 | Transducers were introduced in Clojure for a sorta-similar reason that `range` exists in Go: having one way of writing element-wise operations on channels *and* other collection structures (though that's just the tip of the iceberg). 12 | 13 | I'm honestly not sure if these are a good idea for Go. I've written this library as an exploratory experiment in their utility for Go, and would love feedback. 14 | 15 | **UPDATE**: After much reflection, I’m pretty sure they’re a bad idea. They’re just not worth departing the island of type-safety. Still open to being convinced otherwise :) 16 | 17 | ## What Transducers are 18 | 19 | There's a lot out there already, and I don't want to duplicate that here. Here are some bullets to quickly orient you: 20 | 21 | * They're a framework for performing structured transformations on streams of values. If you're familiar with pipeline processing, kinda like that. 
22 | * They separate the WAY a transformation is run (concurrently or not, eagerly or lazily) from WHAT the transformation is, while also decomposing HOW the transformation works into its smallest possible reusable parts. 23 | * The **whole entire thing** is built on a single observation: you can express every possible collection-type operation (map, filter, etc.) in the form of a reduce operation. 24 | * Clojure is a Lisp. Lisp, as in, "list processing". We have different-looking list primitives in Go (slices/arrays, basically). This difference is much of why transducers can be so fundamental for Clojure, but may seem foreign (though not *necessarily* wrong) in Go. 25 | 26 | Beyond that, here are some resources (mostly in Clojure): 27 | 28 | * If Clojure makes your eyes cross, here's a writeup in [Javascript](http://phuu.net/2014/08/31/csp-and-transducers.html), and one in [PHP](https://github.com/mtdowling/transducers.php). Cognitect has also [implemented transducers](http://cognitect-labs.github.io/) in Python, Javascript, Ruby, and Java. 
29 | * Rich Hickey's [StrangeLoop talk](https://www.youtube.com/watch?v=6mTbuzafcII) introducing transducers (and his recent [ClojureConj talk](https://www.youtube.com/watch?v=4KqUvG8HPYo)) 30 | * The [Clojure docs](http://clojure.org/transducers) page for transducers 31 | * [Some](https://gist.github.com/ptaoussanis/e537bd8ffdc943bbbce7) [high-level](https://bendyworks.com/transducers-clojures-next-big-idea/) [summaries](http://thecomputersarewinning.com/post/Transducers-Are-Fundamental/) of transducers 32 | * Some [examples](http://ianrumford.github.io/blog/2014/08/08/Some-trivial-examples-of-using-Clojure-Transducers/) of [uses](http://matthiasnehlsen.com/blog/2014/10/06/Building-Systems-in-Clojure-2/) for transducers...mostly just toy stuff 33 | * A couple [blog](http://blog.podsnap.com/ducers2.html) [posts](http://conscientiousprogrammer.com/blog/2014/08/07/understanding-cloure-transducers-through-types/) examining type issues with transducers 34 | 35 | ## Pudding <- Proof 36 | 37 | I'm calling this proof of concept "done" because [it can pretty much replicate](http://godoc.org/github.com/sdboyer/transducers-go#ex-package--ClojureParity) (expand the ClojureParity example) a [thorough demo case](https://gist.github.com/sdboyer/9fca652f492257f35a41) Rich Hickey put out there. 38 | 39 | Here's some quick eye candy, though: 40 | 41 | ```go 42 | // dot import for brevity, remember this is a nono 43 | import . 
"github.com/sdboyer/transducers-go" 44 | 45 | func main() { 46 | // To make things work, we need four things (definitions in glossary): 47 | // 1) an input stream 48 | input := Range(4) // ValueStream containing [0 1 2 3] 49 | // 2) a stack of Transducers 50 | transducers := []Transducer{Map(Inc), Filter(Even)} // increment then filter odds 51 | // 3) a reducer to put at the bottom of the transducer stack 52 | reducer := Append() // very simple reducer - just appends values into a []interface{} 53 | // 4) a processor that puts it all together 54 | result := Transduce(input, reducer, transducers...) 55 | 56 | fmt.Println(result) // [2 4] 57 | 58 | 59 | // Or, we can use the Go processor, which does the work in a separate goroutine 60 | // and returns results through a channel. 61 | 62 | // Make an input chan, and stream each value from Range(4) into it 63 | in_chan := make(chan interface{}, 0) 64 | go StreamIntoChan(Range(4), in_chan) 65 | 66 | // Go provides its own bottom reducer (that's where it sends values out through 67 | // the return channel). So we don't provide one - just the input channel. 68 | out_chan := Go(in_chan, 0, transducers...) 69 | // Note that we reuse the transducer stack declared for the first example. 70 | // THIS. THIS is why transducers are cool. 71 | 72 | result2 := make([]interface{}, 0) // fresh slice to collect the results 73 | for v := range out_chan { 74 | result2 = append(result2, v) 75 | } 76 | 77 | fmt.Println(result2) // [2 4] 78 | 79 | } 80 | ``` 81 | 82 | Remember - what's important here is *not* the particular problem being solved, or the idiosyncrasies of the Transduce or Go processors (you can always write your own). What's important is that we can reuse the Transducer stack we declared, and it works - regardless of eagerness vs laziness, parallelism, etc. That's what breaking down transformations into their smallest constituent parts gets us. 
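
The "everything is a reduce" observation from the bullets above is easy to demonstrate in plain Go, without this library's API. The sketch below is purely illustrative - the `step`/`transducer` types and the `mapping`/`filtering` helpers are made-up names, not part of this package - but it shows the core trick: map and filter written as wrappers around a reducing step, composed onto a plain append step at the bottom.

```go
package main

import "fmt"

// A reducing step: folds one value into an accumulator.
type step func(accum []int, value int) []int

// A transducer, conceptually: it transforms one reducing step into another.
type transducer func(step) step

// mapping wraps a step so each value is transformed before being folded in.
func mapping(f func(int) int) transducer {
	return func(next step) step {
		return func(accum []int, value int) []int {
			return next(accum, f(value))
		}
	}
}

// filtering wraps a step so values failing the predicate are skipped.
func filtering(pred func(int) bool) transducer {
	return func(next step) step {
		return func(accum []int, value int) []int {
			if pred(value) {
				return next(accum, value)
			}
			return accum
		}
	}
}

func main() {
	// The bottom step just appends - everything else is layered on top of it.
	appendStep := func(accum []int, value int) []int { return append(accum, value) }

	inc := func(n int) int { return n + 1 }
	even := func(n int) bool { return n%2 == 0 }

	// Compose: increment, then keep evens, then append.
	xform := mapping(inc)(filtering(even)(appendStep))

	var out []int
	for _, v := range []int{0, 1, 2, 3} {
		out = xform(out, v)
	}
	fmt.Println(out) // [2 4]
}
```

Because `mapping(inc)` and `filtering(even)` only know about steps, not about slices or channels, the same composed `xform` could be driven by any loop - which is exactly the reuse property the example above demonstrates with the real `Transduce` and `Go` processors.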
83 | 84 | I also worked up another more sorta-real example in response to an idea on the mailing list: using transducers to [decode signals from airplane transponders](https://gist.github.com/sdboyer/4b116fd78d8bad07a9ff). 85 | 86 | ## The Arguments 87 | 88 | I figure there's pros and cons to something like this. Makes sense to put em up front. 89 | 90 | Please feel free to send PRs with more things to put in this section :) 91 | 92 | ### Cons 93 | 94 | * Dodges around the type system - there is little to no compile-time safety here. 95 | * To that end: is Yet Another Generics Attempt™...though, see [#1](https://github.com/sdboyer/transducers-go/issues/1). 96 | * Syntax is not as fluid as Clojure's (though creating such things is kind of a Lisp specialty). 97 | * Pursuant to all of the above, it'd be hard to call this idiomatic Go. 98 | * The `ValueStream` notion is a bedrock for this system, and has significant flaws. 99 | * Since this is based on streams/sequences/iteration, there will be cases where it is unequivocally less efficient than batch processing (slices). 100 | * Performance in general. While Reflect is not used at all (duh), I haven't done perf analysis yet, so I'm not sure how much overhead we're looking at. The stream operations in particular (splitting, slice->stream->slice) probably mean a lot of heap allocs and duplication of data. 101 | * re: performance - Go's compiler doesn't do tail call optimizations, ever (only a bit of inlining), which guarantees every function call means a new stack frame. There's idle talk about a [become](https://groups.google.com/forum/#!searchin/golang-nuts/tail$20become/golang-nuts/JZZrFjFKkuM/ug8itJda6wMJ) keyword, but... 102 | 103 | ### Pros 104 | 105 | * Stream-based data processing - so, amenable to dealing with continuous/infinite/larger-than-memory datasets. 106 | * Sure, channels let you be stream-based. But they're [low-level primitives](https://gist.github.com/kachayev/21e7fe149bc5ae0bd878). 
Plus they're largely orthogonal to this, which is about decomposing processing pipelines into their constituent parts. 107 | * Transducers could be an interesting, powerful way of structuring applications into segments, or for encapsulating library logic in a way that is easy to reuse, and whose purpose is widely understood. 108 | * While the loss of type assurances hurts - a lot - the spec for transducer behavior is clear enough that it's probably feasible to aim at "correctness" via exhaustive black-box tests. (hah) 109 | * And about types - I found a little kernel of something useful when looking beyond parametric polymorphism - [more here](https://github.com/sdboyer/transducers-go/issues/1). 110 | 111 | ## Glossary 112 | 113 | Transducers have some jargon. Here's an attempt to cut it down. These go more or less in order. 114 | 115 | * **Reduce:** If you're not familiar with the general concept of reduction, [LMGTFY](http://en.wikipedia.org/wiki/Fold_(higher-order_function)). 116 | * **Reduce Step:** A [function](http://godoc.org/github.com/sdboyer/transducers-go#ReduceStep)/method with a reduce-like signature: `(accum, value) -> (accum, terminate)` 117 | * **Reducer:** A [set of three](http://godoc.org/github.com/sdboyer/transducers-go#Reducer) functions - the Reduce Step, plus Complete and Init methods. 118 | * **Transducer:** A function that *transforms* a *reducing* function. They [take a Reducer and return a Reducer](http://godoc.org/github.com/sdboyer/transducers-go#Transducer). 119 | * **Predicate:** Some transducers - for example, [Map](http://godoc.org/github.com/sdboyer/transducers-go#Map) and [Filter](http://godoc.org/github.com/sdboyer/transducers-go#Filter) - take a function to do their work. These injected functions are referred to as predicates. 120 | * **Transducer stack:** In short: `[]Transducer`. A stack is stateless (it's just logic) and can be reused in as many processes as desired. 
121 | * **Bottom reducer:** The reducer that a stack of transducers will operate on. 122 | * **Processor:** Processors take (at minimum) some kind of collection and a transducer stack, compose a transducer pipeline from the stack, and apply it across the elements of the collection. 123 | 124 | -------------------------------------------------------------------------------- /example_test.go: -------------------------------------------------------------------------------- 1 | package transducers 2 | 3 | import "fmt" 4 | 5 | func Example_clojureParity() { 6 | // Mirrors Rich Hickey's original "transducerfun.clj" gist: 7 | // https://gist.github.com/richhickey/b5aefa622180681e1c81 8 | // Note that that syntax is out of date and will not run, but this does: 9 | // https://gist.github.com/sdboyer/9fca652f492257f35a41 10 | // 11 | // Note also that this can be hard to follow - see the corollary example 12 | // under AttachLoggers for step-by-step output. 13 | xform := []Transducer{ 14 | Map(Inc), 15 | Filter(Even), 16 | Dedupe(), 17 | Mapcat(Range), 18 | Chunk(3), 19 | ChunkBy(func(value interface{}) interface{} { 20 | return sum(value.(ValueStream)) > 7 21 | }), 22 | Mapcat(Flatten), 23 | RandomSample(1.0), 24 | TakeNth(1), 25 | Keep(func(v interface{}) interface{} { 26 | if v.(int)%2 != 0 { 27 | return v.(int) * v.(int) 28 | } else { 29 | return nil 30 | } 31 | }), 32 | KeepIndexed(func(i int, v interface{}) interface{} { 33 | if i%2 == 0 { 34 | return i * v.(int) 35 | } else { 36 | return nil 37 | } 38 | }), 39 | Replace(map[interface{}]interface{}{2: "two", 6: "six", 18: "eighteen"}), 40 | Take(11), 41 | TakeWhile(func(v interface{}) bool { 42 | return v != 300 43 | }), 44 | Drop(1), 45 | DropWhile(IsString), 46 | Remove(IsString), 47 | } 48 | 49 | // An []interface{} slice (containing only ints) with vals [0 0 1 1 2 2 ... 17 17] 50 | data := ToSlice(Interleave(Range(18), Range(20))) 51 | 52 | // NB: the original clojure gists also have (sequence ...) and (into []...) 
53 | // processors. I didn't replicate `sequence` because, as best I can figure, 54 | // it's redundant with Eduction in the context I've created (no seqs). 55 | // I didn't replicate `into` because it's a use pattern that is awkward with 56 | // static typing, and is readily accomplished via Transduce. 57 | 58 | // reduce immediately, appending the results of transduction into an int slice. 59 | fmt.Println(Transduce(data, Append(), xform...)) 60 | // produces first line of Output: [36 200 10] 61 | 62 | // Eduction takes the same transduction stack, but operates lazily - it returns 63 | // a ValueStream that triggers transduction only as a result of requesting 64 | // values from that returned stream. 65 | fmt.Println(ToSlice(Eduction(data, xform...))) 66 | // produces second line of Output: [36 200 10] 67 | 68 | // Same transduction stack again, but now with the Go processor, which takes an 69 | // input channel, runs transduction in a separate goroutine, and sends results 70 | // back out through an output channel (the one returned from the Go func). 71 | input := make(chan interface{}, 0) 72 | go StreamIntoChan(ToStream(data), input) 73 | output := Go(input, 0, xform...) 74 | for v := range output { 75 | fmt.Println(v) 76 | } 77 | // same output as other processors, just one value per line 78 | 79 | // Output: 80 | // [36 200 10] 81 | // [36 200 10] 82 | // 36 83 | // 200 84 | // 10 85 | } 86 | 87 | func ExampleAttachLoggers_clojureParity() { 88 | // This is the same as the ClojureParity package example, but with loggers 89 | // interleaved through each step. 
90 | xform := AttachLoggers(fmt.Printf, 91 | Map(Inc), 92 | Filter(Even), 93 | Dedupe(), 94 | Mapcat(Range), 95 | Chunk(3), 96 | ChunkBy(func(value interface{}) interface{} { 97 | return sum(value.(ValueStream)) > 7 98 | }), 99 | Mapcat(Flatten), 100 | RandomSample(1.0), 101 | TakeNth(1), 102 | Keep(func(v interface{}) interface{} { 103 | if v.(int)%2 != 0 { 104 | return v.(int) * v.(int) 105 | } else { 106 | return nil 107 | } 108 | }), 109 | KeepIndexed(func(i int, v interface{}) interface{} { 110 | if i%2 == 0 { 111 | return i * v.(int) 112 | } else { 113 | return nil 114 | } 115 | }), 116 | Replace(map[interface{}]interface{}{2: "two", 6: "six", 18: "eighteen"}), 117 | Take(11), 118 | TakeWhile(func(v interface{}) bool { 119 | return v != 300 120 | }), 121 | Drop(1), 122 | DropWhile(IsString), 123 | Remove(IsString), 124 | ) 125 | 126 | data := ToSlice(Interleave(Range(18), Range(20))) 127 | Transduce(data, Append(), xform...) 128 | 129 | // Output: 130 | //SRC -> [0 0 1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8 9 9 10 10 11] |TERM| 131 | // transducers.map_r -> [1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8 9 9 10 10 11 11 12] |TERM| 132 | // transducers.filter -> [2 2 4 4 6 6 8 8 10 10 12] |TERM| 133 | // *transducers.dedupe -> [2 4 6 8 10 12] |TERM| 134 | // transducers.mapcat -> [0 1 0 1 2 3 0 1 2 3 4 5 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 8 9 0 1 2] |TERM| 135 | // *transducers.chunk -> [[0 1 0] [1 2 3] [0 1 2] [3 4 5] [0 1 2] [3 4 5] [6 7 0] [1 2 3] [4 5 6] [7 8 9] [0 1 2]] |TERM| 136 | // *transducers.chunkBy -> [[[0 1 0] [1 2 3] [0 1 2]] [[3 4 5]] [[0 1 2]] [[3 4 5] [6 7 0]] [[1 2 3]] [[4 5 6] [7 8 9]]] |TERM| 137 | // transducers.mapcat -> [0 1 0 1 2 3 0 1 2 3 4 5 0 1 2 3 4 5 6 7 0 1 2 3 4 5] |TERM| 138 | // transducers.randomSample -> [0 1 0 1 2 3 0 1 2 3 4 5 0 1 2 3 4 5 6 7 0 1 2 3 4 5] |TERM| 139 | // transducers.takeNth -> [0 1 0 1 2 3 0 1 2 3 4 5 0 1 2 3 4 5 6 7 0 1 2 3 4 5] |TERM| 140 | // transducers.keep -> [1 1 9 1 9 25 1 9 25 49 1 9 25] |TERM| 141 | // 
*transducers.keepIndexed -> [0 18 36 6 200 10 300] |TERM| 142 | // transducers.replace -> [0 eighteen 36 six 200 10 300] |TERM| 143 | // *transducers.take -> [0 eighteen 36 six 200 10 300] |TERM| 144 | // transducers.takeWhile -> [0 eighteen 36 six 200 10] 145 | // *transducers.drop -> [eighteen 36 six 200 10] 146 | // *transducers.dropWhile -> [36 six 200 10] 147 | // transducers.remove -> [36 200 10] 148 | // transducers.append_bottom 149 | //END 150 | } 151 | 152 | func ExampleTransduce() { 153 | // Transducing does an immediate (non-lazy) application of the transduction 154 | // stack to the incoming ValueStream. 155 | 156 | // Increments [0 1 2 3 4] by 1, then filters out odd values. 157 | fmt.Println(Transduce(Range(6), Append(), Map(Inc), Filter(Even))) 158 | // Output: 159 | // [2 4 6] 160 | } 161 | 162 | func ExampleIntoSlice() { 163 | // ValueStreams are forward-only iterators, so reading them into a slice will 164 | // exhaust the iterator. Not great if you have to pass the stream along. 165 | stream := Range(3) 166 | fmt.Println(ToSlice(stream)) // [0 1 2] 167 | fmt.Println(ToSlice(stream)) // [], because the first call exhausted the stream 168 | 169 | // IntoSlice() will read a dup/split of the stream, then use some pointer 170 | // trickery to update local variable in your calling scope: 171 | stream = Range(3) 172 | fmt.Println(IntoSlice(&stream)) // [0 1 2] 173 | fmt.Println(IntoSlice(&stream)) // [0 1 2] 174 | 175 | // Output: 176 | // [0 1 2] 177 | // [] 178 | // [0 1 2] 179 | // [0 1 2] 180 | } 181 | 182 | func ExampleEscape() { 183 | // The Escape transducer allows values in a transduction process to "escape" 184 | // partway through processing into a channel. That channel can be used to 185 | // feed another transduction process, creating a well-defined cascade of 186 | // transduction processes. 
187 | 188 | c := make(chan interface{}, 0) 189 | input := make(chan interface{}, 0) 190 | // send [0 1 2 3 4] into the input channel in separate goroutine 191 | go StreamIntoChan(Range(5), input) 192 | // connect c's input end to Escape transducer, start transduction in goroutine 193 | out1 := Go(input, 0, Escape(Even, c, true)) 194 | // connect c's output end to a second transduction process in goroutine 195 | out2 := Go(c, 0, Map(Inc), Map(Inc), Map(Inc)) 196 | 197 | slice1, slice2 := make([]int, 0), make([]int, 0) 198 | // Consume this in a separate goroutine because it's sorta in the middle 199 | go func() { 200 | for v := range out1 { 201 | slice1 = append(slice1, v.(int)) 202 | } 203 | }() 204 | 205 | // Safe to consume this chan in the calling goroutine because it's at the bottom 206 | for v := range out2 { 207 | slice2 = append(slice2, v.(int)) 208 | } 209 | 210 | fmt.Println(slice1) 211 | fmt.Println(slice2) 212 | // Output: 213 | // [1 3] 214 | // [3 5 7] 215 | } 216 | -------------------------------------------------------------------------------- /helpers.go: -------------------------------------------------------------------------------- 1 | package transducers 2 | 3 | type reducerBase struct { 4 | next Reducer 5 | } 6 | 7 | func (r reducerBase) Complete(accum interface{}) interface{} { 8 | // Pure functions inherently can't have any completion work, so flow through 9 | return r.next.Complete(accum) 10 | } 11 | 12 | func (r reducerBase) Init() interface{} { 13 | return r.next.Init() 14 | } 15 | 16 | type reducerHelper struct { 17 | S func(accum interface{}, value interface{}) (interface{}, bool) 18 | C func(accum interface{}) interface{} 19 | I func() interface{} 20 | } 21 | 22 | func (r reducerHelper) Step(accum interface{}, value interface{}) (interface{}, bool) { 23 | return r.S(accum, value) 24 | } 25 | 26 | func (r reducerHelper) Complete(accum interface{}) interface{} { 27 | return r.C(accum) 28 | } 29 | 30 | func (r reducerHelper) Init() 
interface{} { 31 | return r.I() 32 | } 33 | 34 | // Creates a helper struct for defining a Reducer on the fly. 35 | // 36 | // This is mostly useful for creating a bottom reducer with minimal fanfare. 37 | // 38 | // This returns an instance of reducerHelper, which is a struct containing three 39 | // function pointers, one for each of the three methods of Reducer - S for 40 | // Step, C for Complete, I for Init. The struct implements Reducer 41 | // by simply passing method calls along to the contained function pointers. 42 | // 43 | // This makes it easier to create Reducers on the fly. The first argument is a 44 | // reduce step - if you pass nil, it'll create a no-op step for you. If you want to 45 | // overwrite the other two, do it on the returned struct. 46 | func CreateStep(s ReduceStep) reducerHelper { 47 | if s == nil { 48 | s = func(accum interface{}, value interface{}) (interface{}, bool) { 49 | return accum, false 50 | } 51 | } 52 | return reducerHelper{ 53 | S: s, 54 | C: func(accum interface{}) interface{} { 55 | return accum 56 | }, 57 | I: func() interface{} { 58 | return make([]interface{}, 0) 59 | }, 60 | } 61 | } 62 | -------------------------------------------------------------------------------- /iter.go: -------------------------------------------------------------------------------- 1 | package transducers 2 | 3 | // ValueStreams are the core abstraction that facilitate value-oriented 4 | // communication in a transduction pipeline. Unfortunately, various typing 5 | // issues preclude the use of slices directly. 6 | // 7 | // They are a lowest-possible-denominator iterator concept: forward-only, 8 | // non-seeking, non-rewinding. They're kinda terrible. But they're also 9 | // very simple: call the function. If the second val is true, the iterator 10 | // is exhausted. If not, the first return value is the next value. 
11 | // 12 | // The most important thing to understand is that ValueStreams are the 13 | // *only* thing that transducers treat as being a set of values: if a slice 14 | // or map is passed through a transduction process, transducers that deal 15 | // with multiple values will treat a slice as a single value, but a ValueStream 16 | // as multiple. This is why Exploders return a ValueStream. 17 | // 18 | // TODO At minimum, a proper implementation would probably need to include 19 | // adding a parameter that allows the caller to indicate they no longer 20 | // need the stream. (Not quite a 'close', but possibly interpreted that way) 21 | type ValueStream func() (value interface{}, done bool) 22 | 23 | // Convenience function that receives from the stream and passes the 24 | // emitted value to an injected function. Will not return until the 25 | // stream reports being exhausted - watch for deadlocks! 26 | func (vs ValueStream) Each(f func(interface{})) { 27 | for { 28 | v, done := vs() 29 | if done { 30 | return 31 | } 32 | f(v) 33 | } 34 | } 35 | 36 | // Recursively reads this stream out into an []interface{}. 37 | // 38 | // Will consume until the stream says it's done - unsafe for infinite streams, 39 | // and will block if the stream is based on a blocking datasource (e.g., chan). 40 | func ToSlice(vs ValueStream) (into []interface{}) { 41 | for value, done := vs(); !done; value, done = vs() { 42 | if ivs, ok := value.(ValueStream); ok { 43 | ivs, value = ivs.Split() 44 | into = append(into, ToSlice(ivs)) 45 | } else { 46 | into = append(into, value) 47 | } 48 | } 49 | 50 | return into 51 | } 52 | 53 | // Recursively create a slice from a duplicate (Split()) of the given stream. 54 | // 55 | // Will consume until the stream says it's done - unsafe for infinite streams, 56 | // and will block if the stream is based on a blocking datasource (e.g., chan). 
57 | // 58 | // The slice is read out of the duplicate, and the passed value is repointed to 59 | // an unconsumed stream, so it is safe for convenient use patterns. See example. 60 | func IntoSlice(vs *ValueStream) (into []interface{}) { 61 | var dup ValueStream 62 | // write through pointer to replace original val with first split 63 | *vs, dup = vs.Split() 64 | 65 | for value, done := dup(); !done; value, done = dup() { 66 | if ivs, ok := value.(ValueStream); ok { 67 | into = append(into, IntoSlice(&ivs)) 68 | } else { 69 | into = append(into, value) 70 | } 71 | } 72 | 73 | return into 74 | } 75 | 76 | // Splits a value stream into two identical streams that can both be 77 | // consumed independently. 78 | // 79 | // Note that calls to the original stream will still work - and any values 80 | // consumed that way will be missed by the split streams. Be very careful! 81 | // 82 | // TODO I think this might leak? 83 | // TODO figure out if there's a nifty way to make this threadsafe 84 | func (vs ValueStream) Split() (ValueStream, ValueStream) { 85 | var src ValueStream = vs 86 | var f1i, f2i int 87 | var held []interface{} 88 | 89 | return func() (value interface{}, done bool) { 90 | if f1i >= f2i { 91 | // this stream is ahead, pull from the source 92 | value, done = src() 93 | if !done { 94 | // recursively dup streams 95 | if vs, ok := value.(ValueStream); ok { 96 | vs, value = vs.Split() 97 | held = append(held, vs) 98 | } else { 99 | held = append(held, value) 100 | } 101 | } 102 | } else { 103 | value, held = held[0], held[1:] 104 | } 105 | 106 | if !done { 107 | f1i++ 108 | } 109 | return 110 | }, func() (value interface{}, done bool) { 111 | if f2i >= f1i { 112 | // this stream is ahead, pull from the source 113 | value, done = src() 114 | if !done { 115 | // recursively dup streams 116 | if vs, ok := value.(ValueStream); ok { 117 | vs, value = vs.Split() 118 | held = append(held, vs) 119 | } else { 120 | held = 
append(held, value) 121 | } 122 | } 123 | } else { 124 | value, held = held[0], held[1:] 125 | } 126 | 127 | if !done { 128 | f2i++ 129 | } 130 | return 131 | } 132 | } 133 | 134 | func StreamIntoChan(vs ValueStream, c chan<- interface{}) { 135 | vs.Each(func(v interface{}) { 136 | c <- v 137 | }) 138 | close(c) 139 | } 140 | 141 | // Assuming this stream contains other streams, Flatten returns a new stream 142 | // that removes all nesting on the fly, flattening all the streams down into 143 | // the same top-level construct. It's the linearized results of a 144 | // depth-first tree traversal on an arbitrarily deep tree of nested streams. 145 | // 146 | // Meaning that it takes a stream containing other streams, looking like: 147 | // 148 | // [[1 [2 3] 4] 5 6 [7 [8 [9] [10 11]]]] 149 | // 150 | // And presents it through the returned stream as: 151 | // 152 | // [1 2 3 4 5 6 7 8 9 10 11] 153 | // 154 | // Remember - streams are forward-only, and the flattening stream wraps the 155 | // original source stream. Once you call this, if you consume from the original 156 | // source stream again, that value will be lost to the flattener. 157 | func (vs ValueStream) Flatten() ValueStream { 158 | // create stack of streams and push the first one on 159 | var ss []ValueStream 160 | ss = append(ss, vs) 161 | 162 | var f ValueStream 163 | 164 | f = func() (value interface{}, done bool) { 165 | size := len(ss) 166 | if size == 0 { 167 | // no streams left, we're definitely done 168 | return nil, true 169 | } 170 | 171 | // grab value from stream on top of stack 172 | value, done = ss[size-1]() 173 | 174 | if done { 175 | // this stream is done; pop the stack and recurse 176 | ss = ss[:size-1] 177 | return f() 178 | } 179 | 180 | if innerstream, ok := value.(ValueStream); ok { 181 | // we got another stream, push it on the stack and recurse 182 | ss = append(ss, innerstream) 183 | return f() 184 | } 185 | 186 | // most basic case - we found a leaf. return it. 
187 | return 188 | } 189 | 190 | return f 191 | } 192 | 193 | // If something has a special way of representing itself as a stream, it should 194 | // implement this method. 195 | type Streamable interface { 196 | AsStream() ValueStream 197 | } 198 | 199 | // Bind a function to the given collection that will allow traversal for reducing 200 | func ToStream(collection interface{}) ValueStream { 201 | // If the structure already provides a streaming method, just use that. 202 | if c, ok := collection.(Streamable); ok { 203 | return c.AsStream() 204 | } 205 | 206 | switch c := collection.(type) { 207 | case []int: 208 | return iteratorToValueStream(&intSliceIterator{slice: c}) 209 | case []interface{}: 210 | return valueSlice(c).AsStream() 211 | case ValueStream: 212 | return c 213 | case <-chan interface{}: 214 | return func() (value interface{}, done bool) { 215 | value, done = <-c 216 | return 217 | } 218 | case chan interface{}: 219 | return func() (value interface{}, done bool) { 220 | value, done = <-c 221 | return 222 | } 223 | default: 224 | panic("not supported...yet") 225 | } 226 | } 227 | 228 | // Wrap an iterator up into a ValueStream func. 229 | func iteratorToValueStream(i Iterator) func() (value interface{}, done bool) { 230 | return func() (interface{}, bool) { 231 | if !i.Valid() { 232 | i.Done() 233 | return nil, true 234 | } 235 | 236 | v := i.Current() 237 | i.Next() 238 | 239 | return v, false 240 | } 241 | } 242 | 243 | // Interleaves two streams together, getting a value from the first stream, 244 | // then a value from the second stream. 245 | // 246 | // Values from both streams are collected at the same time, and if either 247 | // input is exhausted, the interleaved stream terminates immediately, even if 248 | // the other stream does have a value available. 
249 | func Interleave(s1 ValueStream, s2 ValueStream) ValueStream { 250 | var done bool 251 | var v1, v2 interface{} 252 | var index int 253 | 254 | return func() (interface{}, bool) { 255 | if done { 256 | return nil, done 257 | } 258 | 259 | if index%2 == 0 { 260 | // check both streams at once - if either is exhausted, stop 261 | v1, done = s1() 262 | if !done { 263 | v2, done = s2() 264 | } 265 | if done { 266 | return nil, done 267 | } 268 | 269 | index++ 270 | return v1, false 271 | } else { 272 | index++ 273 | return v2, false 274 | } 275 | 276 | } 277 | } 278 | 279 | // Simple iterator interface. Mostly used internally to handle slices. 280 | // 281 | // TODO Done() isn't used at all. also kinda terrible in general. 282 | type Iterator interface { 283 | Current() (value interface{}) 284 | Next() 285 | Valid() bool 286 | Done() 287 | } 288 | 289 | type intSliceIterator struct { 290 | slice []int 291 | pos int 292 | } 293 | 294 | func (i *intSliceIterator) Current() interface{} { 295 | return i.slice[i.pos] 296 | } 297 | 298 | func (i *intSliceIterator) Next() { 299 | // TODO atomicity 300 | i.pos++ 301 | } 302 | 303 | func (i *intSliceIterator) Valid() (valid bool) { 304 | return i.pos < len(i.slice) 305 | } 306 | 307 | func (i *intSliceIterator) Done() { 308 | 309 | } 310 | 311 | type valueSlice []interface{} 312 | 313 | func (s valueSlice) AsStream() ValueStream { 314 | return iteratorToValueStream(&interfaceSliceIterator{slice: s}) 315 | } 316 | 317 | type interfaceSliceIterator struct { 318 | slice []interface{} 319 | pos int 320 | } 321 | 322 | func (i *interfaceSliceIterator) Current() interface{} { 323 | return i.slice[i.pos] 324 | } 325 | 326 | func (i *interfaceSliceIterator) Next() { 327 | // TODO atomicity 328 | i.pos++ 329 | } 330 | 331 | func (i *interfaceSliceIterator) Valid() (valid bool) { 332 | return i.pos < len(i.slice) 333 | } 334 | 335 | func (i *interfaceSliceIterator) Done() { 336 | 337 | } 338 | 
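
The ValueStream contract documented at the top of iter.go is small enough to demonstrate in a few lines: call the function; if the second return is true the stream is exhausted, otherwise the first return is the next value. Here is a self-contained sketch of that contract - the `ValueStream` type is restated locally so the snippet compiles on its own, and `sliceStream` is an illustrative helper, not part of this package:

```go
package main

import "fmt"

// ValueStream mirrors the package's iterator type: call it repeatedly;
// done == true means the stream is exhausted.
type ValueStream func() (value interface{}, done bool)

// sliceStream builds a forward-only stream over a slice. Like the streams
// in iter.go, it cannot seek or rewind - once consumed, values are gone.
func sliceStream(vals []interface{}) ValueStream {
	pos := 0
	return func() (interface{}, bool) {
		if pos >= len(vals) {
			return nil, true
		}
		v := vals[pos]
		pos++
		return v, false
	}
}

func main() {
	vs := sliceStream([]interface{}{1, 2, 3})
	for v, done := vs(); !done; v, done = vs() {
		fmt.Println(v)
	}
	// A second call after exhaustion just reports done again.
	_, done := vs()
	fmt.Println(done) // true
}
```

This forward-only behavior is why ToSlice exhausts a stream and why IntoSlice has to Split() a duplicate first.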
-------------------------------------------------------------------------------- /logger.go: -------------------------------------------------------------------------------- 1 | package transducers 2 | 3 | // Interleaves logging transducers into the provided transducer stack. 4 | // 5 | // The first parameter is a logging function - e.g., fmt.Printf - through which 6 | // the logging transducers will send their output. 7 | // 8 | // NOTE: this will block, and/or exhaust memory, on infinite streams. 9 | func AttachLoggers(logger func(string, ...interface{}) (int, error), tds ...Transducer) []Transducer { 10 | tlfunc := func(r Reducer) Reducer { 11 | tl := &topLogger{reduceLogger{logger: logger, next: r}} 12 | return tl 13 | } 14 | 15 | newstack := make([]Transducer, 0) 16 | newstack = append(newstack, tlfunc) 17 | for i := 0; i < len(tds); i++ { 18 | newstack = append(newstack, tds[i]) 19 | newstack = append(newstack, logtd(logger)) 20 | } 21 | 22 | return newstack 23 | } 24 | 25 | type topLogger struct { 26 | reduceLogger 27 | } 28 | 29 | func (r *topLogger) Complete(accum interface{}) interface{} { 30 | if r.term { 31 | r.logger("SRC -> %v |TERM|\n\t%T", r.values, r.next) 32 | } else { 33 | r.logger("SRC -> %v\n\t%T", r.values, r.next) 34 | } 35 | //r.logger("SRC -> %v\n\t%T", r.values, r.next) 36 | accum = r.next.Complete(accum) 37 | r.logger("\nEND\n") 38 | 39 | return accum 40 | } 41 | 42 | func logtd(logger func(string, ...interface{}) (int, error)) Transducer { 43 | return func(r Reducer) Reducer { 44 | lt := &reduceLogger{logger: logger, next: r} 45 | return lt 46 | } 47 | } 48 | 49 | type reduceLogger struct { 50 | values []interface{} 51 | logger func(string, ...interface{}) (int, error) 52 | next Reducer 53 | term bool 54 | } 55 | 56 | func (r *reduceLogger) Init() interface{} { 57 | return r.next.Init() 58 | } 59 | 60 | func (r *reduceLogger) Complete(accum interface{}) interface{} { 61 | if r.term { 62 | r.logger(" -> %v 
|TERM|\n\t%T", r.values, r.next) 63 | } else { 64 | r.logger(" -> %v\n\t%T", r.values, r.next) 65 | } 66 | return r.next.Complete(accum) 67 | } 68 | 69 | func (r *reduceLogger) Step(accum interface{}, value interface{}) (interface{}, bool) { 70 | // if the transducer produces a ValueStream, dup and dump it. (so, already not infinite-safe) 71 | if vs, ok := value.(ValueStream); ok { 72 | r.values = append(r.values, IntoSlice(&vs)) 73 | value = vs 74 | } else { 75 | r.values = append(r.values, value) 76 | } 77 | 78 | accum, r.term = r.next.Step(accum, value) 79 | return accum, r.term 80 | } 81 | -------------------------------------------------------------------------------- /predicates.go: -------------------------------------------------------------------------------- 1 | package transducers 2 | 3 | // Transducer predicate function; used by Map, and others. Takes a value, 4 | // transforms it, and returns the result. 5 | type Mapper func(value interface{}) interface{} 6 | 7 | // Transducer predicate function. Same as Mapper, but passes an index value 8 | // indicating the number of times the predicate has been called. 9 | type IndexedMapper func(index int, value interface{}) interface{} 10 | 11 | // Transducer predicate function; used by most filtering-ish transducers. Takes 12 | // a value and returns a bool, which the transducer uses to make a decision, 13 | // typically (though not necessarily) about whether or not that value gets to 14 | // proceed in the reduction chain. 15 | type Filterer func(interface{}) bool 16 | 17 | // Transducer predicate function. Exploders transform a value of some type into 18 | // a stream of values. Used by Mapcat. 19 | type Exploder func(interface{}) ValueStream 20 | 21 | func sum(vs ValueStream) (total int) { 22 | vs.Each(func(value interface{}) { 23 | total += value.(int) 24 | }) 25 | 26 | return 27 | } 28 | 29 | // Mapper: asserts the value is a ValueStream, then sums all contained 30 | // values (asserts that they are ints). 
31 | func Sum(value interface{}) interface{} { 32 | return sum(value.(ValueStream)) 33 | } 34 | 35 | // Filterer: returns true if dynamic type of the value is string 36 | func IsString(v interface{}) bool { 37 | _, ok := v.(string) 38 | return ok 39 | } 40 | 41 | // Mapper: asserts value to int, increments by 1 42 | func Inc(value interface{}) interface{} { 43 | return value.(int) + 1 44 | } 45 | 46 | // Filterer: asserts value to int, returns true if even. 47 | func Even(value interface{}) bool { 48 | return value.(int)%2 == 0 49 | } 50 | 51 | // Dumb little thing to emulate clojure's range behavior 52 | func t_range(l int) []int { 53 | slice := make([]int, l) 54 | 55 | for i := 0; i < l; i++ { 56 | slice[i] = i 57 | } 58 | 59 | return slice 60 | } 61 | 62 | // Flattens arbitrarily deep datastructures into a single ValueStream. 63 | func Flatten(value interface{}) ValueStream { 64 | switch v := value.(type) { 65 | case ValueStream: 66 | return v.Flatten() 67 | case []interface{}: 68 | // TODO maybe detect ValueStreams here, too, but probably better to just be consistent 69 | return valueSlice(v).AsStream() 70 | case []int: 71 | return ToStream(v) 72 | case int, interface{}: 73 | var done bool 74 | // create single-element value stream 75 | return func() (interface{}, bool) { 76 | if done { 77 | return nil, true 78 | } else { 79 | done = true 80 | return v, false 81 | } 82 | } 83 | default: 84 | panic("not supported") 85 | } 86 | } 87 | 88 | // Exploder: Given an int, returns a ValueStream of ints in the range [0, n). 
89 | func Range(limit interface{}) ValueStream { 90 | // lazy and inefficient to use MakeReduce here, do it directly 91 | return ToStream(t_range(limit.(int))) 92 | } 93 | -------------------------------------------------------------------------------- /processors.go: -------------------------------------------------------------------------------- 1 | package transducers 2 | 3 | // Transduce performs a non-lazy traversal/reduction over the provided value stream. 4 | func Transduce(coll interface{}, bottom Reducer, tlist ...Transducer) interface{} { 5 | // Compose the transducer stack onto the provided bottom reducer 6 | t := CreatePipeline(bottom, tlist...) 7 | 8 | vs := ToStream(coll) 9 | var ret interface{} = t.Init() 10 | var terminate bool 11 | 12 | for v, done := vs(); !done; v, done = vs() { 13 | ret, terminate = t.Step(ret, v) 14 | if terminate { 15 | break 16 | } 17 | } 18 | 19 | ret = t.Complete(ret) 20 | 21 | return ret 22 | } 23 | 24 | // Applies the transducer stack to the provided collection, then encapsulates 25 | // flow within a ValueStream, and returns the stream. 26 | // 27 | // Calling for a value on the returned ValueStream will send a value into 28 | // the transduction pipe. Depending on the transducer components involved, 29 | // a given pipeline could produce 0..N values for each input value. 30 | // Nevertheless, Eduction ValueStreams still must return one value per call. 31 | // 32 | // In the case that sending an input results in 0 outputs reaching the 33 | // bottom reducer, an Eduction will simply continue sending values to the input 34 | // end until a value emerges, or the input stream is exhausted. In the latter 35 | // case, the returned ValueStream will also indicate it is exhausted. 36 | // 37 | // If more than one value emerges for a single input, the eduction stream 38 | // places the additional results into a FIFO queue. 
Subsequent calls to 39 | // the stream will drain the queue before sending another value from the 40 | // source stream into the pipeline. 41 | // 42 | // Note that using this processor with transducers that have side effects is a 43 | // particularly bad idea. It's also a bad idea to use it if your transduction 44 | // stack has a high degree of fanout, as the queue can become quite large. 45 | func Eduction(coll interface{}, tlist ...Transducer) ValueStream { 46 | var bottom ReduceStep = func(accum interface{}, value interface{}) (interface{}, bool) { 47 | return append(accum.([]interface{}), value), false 48 | } 49 | 50 | src := ToStream(coll) 51 | pipe := CreatePipeline(bottom, tlist...) 52 | var queue []interface{} 53 | var input interface{} 54 | var exhausted, terminate bool 55 | 56 | return func() (value interface{}, done bool) { 57 | // first, check the queue to see if it needs draining 58 | if len(queue) > 0 { 59 | // consume from queue - unshift slice 60 | // TODO is this leaky? not sure how to reason about that 61 | value, queue = queue[0], queue[1:] 62 | return value, false 63 | } 64 | 65 | if exhausted || terminate { 66 | // we'll get here if Complete enqueued more values after: 67 | // a) we got an empty signal from the src stream, or 68 | // b) if the pipeline returned the termination signal 69 | return nil, true 70 | } 71 | 72 | for { 73 | // queue is empty. feed the pipe till it's not, or we exhaust src 74 | input, exhausted = src() 75 | if exhausted || terminate { 76 | // src is exhausted, send Complete signal 77 | queue = pipe.Complete(queue).([]interface{}) 78 | // Complete may have flushed some stuff into the accum/queue 79 | if len(queue) > 0 { 80 | value, queue = queue[0], queue[1:] 81 | return value, false 82 | } 83 | // nope. so, we're done. 
84 | return nil, true 85 | } 86 | 87 | // temporarily use the value var 88 | value, terminate = pipe.Step(queue, input) 89 | queue = value.([]interface{}) 90 | if terminate { 91 | // this is here because it's less horrifying than the alternative 92 | queue = pipe.Complete(queue).([]interface{}) 93 | } 94 | 95 | if len(queue) > 0 { 96 | value, queue = queue[0], queue[1:] 97 | return value, false 98 | } 99 | } 100 | } 101 | } 102 | 103 | // Given a channel, apply the transducer stack to values it produces, 104 | // emitting values out the other end through the returned channel. 105 | // 106 | // This processor will spawn a separate goroutine that runs the transduction. 107 | // This goroutine consumes on the provided chan, runs transduction, and sends 108 | // resultant values into the returned chan. Thus, make sure that you're filling 109 | // the input channel and consuming on the resultant channel from separate 110 | // goroutines. 111 | // 112 | // The second parameter determines the buffering of the returned channel (it is 113 | // passed directly to the make() call). 114 | func Go(c <-chan interface{}, retcap int, tlist ...Transducer) <-chan interface{} { 115 | out := make(chan interface{}, retcap) 116 | pipe := CreatePipeline(chanReducer{c: out}, tlist...) 
117 | 118 | var accum struct{} // accum is unused in this mode 119 | var terminate bool 120 | 121 | go func() { 122 | for v := range c { 123 | _, terminate = pipe.Step(accum, v) 124 | if terminate { 125 | break 126 | } 127 | } 128 | 129 | pipe.Complete(accum) 130 | }() 131 | 132 | return out 133 | } 134 | 135 | type chanReducer struct { 136 | c chan<- interface{} 137 | } 138 | 139 | func (c chanReducer) Step(accum interface{}, value interface{}) (interface{}, bool) { 140 | c.c <- value 141 | return accum, false 142 | } 143 | 144 | func (c chanReducer) Complete(accum interface{}) interface{} { 145 | close(c.c) 146 | return accum 147 | } 148 | 149 | func (c chanReducer) Init() interface{} { 150 | return nil 151 | } 152 | -------------------------------------------------------------------------------- /transducers.go: -------------------------------------------------------------------------------- 1 | package transducers 2 | 3 | import "math/rand" 4 | 5 | // The master signature: a reducing step function. 6 | type ReduceStep func(accum interface{}, value interface{}) (result interface{}, terminate bool) 7 | 8 | // A transducer transforms a reducing function into a new reducing function. 9 | type Transducer func(Reducer) Reducer 10 | 11 | // This is separated out because, while Transducers must support Init() (in order to pass 12 | // the call along), not all processors require Init, so they may take a partial step instead. 13 | type Reducer interface { 14 | // The primary reducing step function, called during normal operation. 15 | Step(accum interface{}, value interface{}) (result interface{}, terminate bool) // Reducer 16 | 17 | // Complete is called when the input has been exhausted; stateful transducers 18 | // should flush any held state (e.g. values awaiting a full chunk) through here. 19 | Complete(accum interface{}) (result interface{}) 20 | 21 | // Certain processors will call this to get an initial value for the accumulator. 
22 | // TODO maybe should split this out to separate interface 23 | Init() interface{} 24 | } 25 | 26 | // Creates a transduction pipeline from a reducing function and a stack of transducers. 27 | // 28 | // Creating the pipeline means that state in the transducers has been initialized. 29 | // In other words, while you can create as many pipelines as you want from 30 | // a stack of transducers, you have to create a new pipeline for each 31 | // process/collection you run. 32 | // 33 | // This function is usually called by processors (Transduce, Eduction, etc.). 34 | // If you're using one of those, they'll call it when the time is right. 35 | func CreatePipeline(r Reducer, tds ...Transducer) (rs Reducer) { 36 | rs = Reducer(r) 37 | // Because a pipeline is a series of wrapped functions, we must walk the list 38 | // in reverse order and apply each transducer, starting from the bottom reducer. 39 | for i := len(tds) - 1; i >= 0; i-- { 40 | rs = tds[i](rs) 41 | } 42 | 43 | return 44 | } 45 | 46 | func (r ReduceStep) Step(accum interface{}, value interface{}) (result interface{}, terminate bool) { 47 | return r(accum, value) 48 | } 49 | 50 | func (r ReduceStep) Complete(accum interface{}) (result interface{}) { 51 | return accum 52 | } 53 | 54 | // TODO binding this to the general function type is maybe not the best idea, but it's simple 55 | // and works for now 56 | func (r ReduceStep) Init() interface{} { 57 | return make([]interface{}, 0) 58 | } 59 | 60 | /* Transducer implementations */ 61 | 62 | type map_r struct { 63 | reducerBase 64 | f Mapper 65 | } 66 | 67 | func (r map_r) Step(accum interface{}, value interface{}) (interface{}, bool) { 68 | return r.next.Step(accum, r.f(value)) 69 | } 70 | 71 | // Map calls its predicate once for each value coming through, passing the 72 | // result along to the next step. 
73 | func Map(f Mapper) Transducer { 74 | return func(r Reducer) Reducer { 75 | return map_r{reducerBase{r}, f} 76 | } 77 | } 78 | 79 | type filter struct { 80 | reducerBase 81 | f Filterer 82 | } 83 | 84 | func (r filter) Step(accum interface{}, value interface{}) (interface{}, bool) { 85 | var check bool 86 | if vs, ok := value.(ValueStream); ok { 87 | vs, value = vs.Split() 88 | check = r.f(vs) 89 | } else { 90 | check = r.f(value) 91 | } 92 | 93 | if check { 94 | return r.next.Step(accum, value) 95 | } else { 96 | return accum, false 97 | } 98 | } 99 | 100 | // Returns a Filter transducer. Filters call their predicate function for 101 | // each incoming value, dropping the value if the predicate returns false 102 | // and passing it along if it returns true. 103 | func Filter(f Filterer) Transducer { 104 | return func(r Reducer) Reducer { 105 | return filter{reducerBase{r}, f} 106 | } 107 | } 108 | 109 | // for append operations at the bottom of a transducer stack 110 | type append_bottom struct{} 111 | 112 | func (r append_bottom) Step(accum interface{}, value interface{}) (interface{}, bool) { 113 | switch v := value.(type) { 114 | case []int: 115 | return append(accum.([]int), v...), false 116 | case int: 117 | return append(accum.([]int), v), false 118 | case ValueStream: 119 | v.Flatten().Each(func(value interface{}) { 120 | accum = append(accum.([]int), value.(int)) 121 | }) 122 | return accum, false 123 | default: 124 | panic("not supported") 125 | } 126 | } 127 | 128 | func (r append_bottom) Complete(accum interface{}) interface{} { 129 | return accum 130 | } 131 | 132 | func (r append_bottom) Init() interface{} { 133 | return make([]int, 0) 134 | } 135 | 136 | func Append() Reducer { 137 | return append_bottom{} 138 | } 139 | 140 | type mapcat struct { 141 | reducerBase 142 | f Exploder 143 | } 144 | 145 | func (r mapcat) Step(accum interface{}, value interface{}) (interface{}, bool) { 146 | stream := r.f(value) 147 | 148 | var v interface{} 149 | var 
done, terminate bool 150 | 151 | for { // <-- the *loop* is the 'cat' 152 | v, done = stream() 153 | if done { 154 | break 155 | } 156 | 157 | accum, terminate = r.next.Step(accum, v) 158 | if terminate { 159 | break 160 | } 161 | } 162 | 163 | return accum, terminate 164 | } 165 | 166 | // Mapcat first runs an exploder, then 'concats' results by passing each 167 | // individual value along to the next transducer in the pipeline. 168 | func Mapcat(f Exploder) Transducer { 169 | return func(r Reducer) Reducer { 170 | return mapcat{reducerBase{r}, f} 171 | } 172 | } 173 | 174 | type dedupe struct { 175 | reducerBase 176 | // TODO Slice is fine for prototype, but should replace with type-appropriate 177 | // search tree later 178 | seen valueSlice 179 | } 180 | 181 | func (r *dedupe) Step(accum interface{}, value interface{}) (interface{}, bool) { 182 | for _, v := range r.seen { 183 | if value == v { 184 | return accum, false 185 | } 186 | } 187 | 188 | r.seen = append(r.seen, value) 189 | return r.next.Step(accum, value) 190 | } 191 | 192 | // Dedupe keeps track of values that have passed through it during this 193 | // transduction process and drops any duplicates. 194 | // 195 | // Simple equality (==) is used for comparison. That will panic on values of 196 | // non-comparable type (maps, slices, functions)! 197 | func Dedupe() Transducer { 198 | return func(r Reducer) Reducer { 199 | return &dedupe{reducerBase{r}, make([]interface{}, 0)} 200 | } 201 | } 202 | 203 | // Condense the traversed collection by partitioning it into 204 | // chunks of []interface{} of the given length. 205 | // 206 | // Here's one place we sorely feel the lack of algebraic types. 
207 | func Chunk(length int) Transducer { 208 | if length < 1 { 209 | panic("chunks must be at least one element in size") 210 | } 211 | 212 | return func(r Reducer) Reducer { 213 | // TODO look into most memory-savvy ways of doing this 214 | return &chunk{length: length, coll: make(valueSlice, length, length), next: r} 215 | } 216 | } 217 | 218 | // Chunk is stateful, so it's handled with a struct instead of a pure function 219 | type chunk struct { 220 | length int 221 | count int 222 | terminate bool 223 | coll valueSlice 224 | next Reducer 225 | } 226 | 227 | func (t *chunk) Step(accum interface{}, value interface{}) (interface{}, bool) { 228 | t.coll[t.count] = value 229 | t.count++ // TODO atomic 230 | 231 | if t.count == t.length { 232 | t.count = 0 233 | newcoll := make(valueSlice, t.length, t.length) 234 | copy(newcoll, t.coll) 235 | accum, t.terminate = t.next.Step(accum, newcoll.AsStream()) 236 | return accum, t.terminate 237 | } else { 238 | return accum, false 239 | } 240 | } 241 | 242 | func (t *chunk) Complete(accum interface{}) interface{} { 243 | // if there's a partially-completed chunk, send it through reduction as-is 244 | if t.count != 0 && !t.terminate { 245 | // should be fine to send the original, we know we're done 246 | accum, t.terminate = t.next.Step(accum, t.coll[:t.count].AsStream()) 247 | } 248 | 249 | return t.next.Complete(accum) 250 | } 251 | 252 | func (t *chunk) Init() interface{} { 253 | return t.next.Init() 254 | } 255 | 256 | // Condense the traversed collection by partitioning it into chunks, 257 | // represented by ValueStreams. A new contiguous stream is created every time 258 | // the injected chunking function returns a different value from the previous. 
259 | func ChunkBy(f Mapper) Transducer { 260 | return func(r Reducer) Reducer { 261 | // TODO look into most memory-savvy ways of doing this 262 | return &chunkBy{chunker: f, coll: make(valueSlice, 0), next: r, first: true, last: nil} 263 | } 264 | } 265 | 266 | type chunkBy struct { 267 | chunker Mapper 268 | first bool 269 | last interface{} 270 | coll valueSlice 271 | next Reducer 272 | terminate bool 273 | } 274 | 275 | func (t *chunkBy) Step(accum interface{}, value interface{}) (interface{}, bool) { 276 | var chunkval interface{} 277 | if vs, ok := value.(ValueStream); ok { 278 | vs, value = vs.Split() 279 | chunkval = t.chunker(vs) 280 | } else { 281 | chunkval = t.chunker(value) 282 | } 283 | 284 | if t.first { // nothing to compare against if first pass 285 | t.first = false 286 | t.last = chunkval 287 | t.coll = append(t.coll, value) 288 | } else if t.last == chunkval { 289 | t.coll = append(t.coll, value) 290 | } else { 291 | t.last = chunkval 292 | accum, t.terminate = t.next.Step(accum, t.coll.AsStream()) 293 | t.coll = nil 294 | t.coll = append(t.coll, value) 295 | } 296 | return accum, t.terminate 297 | } 298 | 299 | func (t *chunkBy) Complete(accum interface{}) interface{} { 300 | // if there's a partially-completed chunk, send it through reduction as-is 301 | if len(t.coll) != 0 && !t.terminate { 302 | accum, t.terminate = t.next.Step(accum, t.coll.AsStream()) 303 | } 304 | 305 | return t.next.Complete(accum) 306 | } 307 | 308 | func (t *chunkBy) Init() interface{} { 309 | return t.next.Init() 310 | } 311 | 312 | type randomSample struct { 313 | filter 314 | } 315 | 316 | // Passes the received value along to the next transducer, with the 317 | // given probability. 
318 | func RandomSample(ρ float64) Transducer { 319 | if ρ < 0.0 || ρ > 1.0 { 320 | panic("ρ must be in the range [0.0,1.0].") 321 | } 322 | 323 | return func(r Reducer) Reducer { 324 | return randomSample{filter{reducerBase{r}, func(_ interface{}) bool { 325 | 326 | return rand.Float64() < ρ 327 | }}} 328 | } 329 | } 330 | 331 | type takeNth struct { 332 | filter 333 | } 334 | 335 | // TakeNth takes every nth element to pass through it, discarding the remainder. 336 | func TakeNth(n int) Transducer { 337 | var count int 338 | 339 | return func(r Reducer) Reducer { 340 | return takeNth{filter{reducerBase{r}, func(_ interface{}) bool { 341 | count++ // TODO atomic 342 | return count%n == 0 343 | }}} 344 | } 345 | } 346 | 347 | type keep struct { 348 | reducerBase 349 | f Mapper 350 | } 351 | 352 | func (r keep) Step(accum interface{}, value interface{}) (interface{}, bool) { 353 | nv := r.f(value) 354 | if nv != nil { 355 | return r.next.Step(accum, nv) 356 | } 357 | return accum, false 358 | } 359 | 360 | // Keep calls the provided mapper, then discards any nil value returned from the mapper. 361 | func Keep(f Mapper) Transducer { 362 | return func(r Reducer) Reducer { 363 | return keep{reducerBase{r}, f} 364 | } 365 | } 366 | 367 | type keepIndexed struct { 368 | reducerBase 369 | count int 370 | f IndexedMapper 371 | } 372 | 373 | func (r *keepIndexed) Step(accum interface{}, value interface{}) (interface{}, bool) { 374 | nv := r.f(r.count, value) 375 | r.count++ // TODO atomic 376 | 377 | if nv != nil { 378 | return r.next.Step(accum, nv) 379 | } 380 | 381 | return accum, false 382 | 383 | } 384 | 385 | // KeepIndexed calls the provided indexed mapper, then discards any nil value 386 | // returned from the mapper. 
387 | func KeepIndexed(f IndexedMapper) Transducer { 388 | return func(r Reducer) Reducer { 389 | return &keepIndexed{reducerBase{r}, 0, f} 390 | } 391 | } 392 | 393 | type replace struct { 394 | reducerBase 395 | pairs map[interface{}]interface{} 396 | } 397 | 398 | func (r replace) Step(accum interface{}, value interface{}) (interface{}, bool) { 399 | if v, exists := r.pairs[value]; exists { 400 | return r.next.Step(accum, v) 401 | } 402 | return r.next.Step(accum, value) 403 | } 404 | 405 | // Given a map of replacement value pairs, will replace any value moving through 406 | // that has a key in the map with the corresponding value. 407 | func Replace(pairs map[interface{}]interface{}) Transducer { 408 | return func(r Reducer) Reducer { 409 | return replace{reducerBase{r}, pairs} 410 | } 411 | } 412 | 413 | type take struct { 414 | reducerBase 415 | max uint 416 | count uint 417 | } 418 | 419 | func (r *take) Step(accum interface{}, value interface{}) (interface{}, bool) { 420 | r.count++ // TODO atomic 421 | if r.count < r.max { 422 | return r.next.Step(accum, value) 423 | } 424 | // should NEVER be called again after this. add a panic branch? 425 | accum, _ = r.next.Step(accum, value) 426 | return accum, true 427 | } 428 | 429 | // Take specifies a maximum number of values to receive, after which it will 430 | // terminate the transducing process. 431 | func Take(max uint) Transducer { 432 | return func(r Reducer) Reducer { 433 | return &take{reducerBase{r}, max, 0} 434 | } 435 | } 436 | 437 | type takeWhile struct { 438 | reducerBase 439 | f Filterer 440 | } 441 | 442 | func (r takeWhile) Step(accum interface{}, value interface{}) (interface{}, bool) { 443 | if !r.f(value) { 444 | return accum, true 445 | } 446 | return r.next.Step(accum, value) 447 | } 448 | 449 | // TakeWhile accepts values until the injected filterer function returns false. 
450 | func TakeWhile(f Filterer) Transducer { 451 | return func(r Reducer) Reducer { 452 | return takeWhile{reducerBase{r}, f} 453 | } 454 | } 455 | 456 | type drop struct { 457 | reducerBase 458 | min uint 459 | count uint 460 | } 461 | 462 | func (r *drop) Step(accum interface{}, value interface{}) (interface{}, bool) { 463 | if r.count < r.min { 464 | // Increment inside so no mutation after threshold is met 465 | r.count++ // TODO atomic 466 | return accum, false 467 | } 468 | // threshold met; everything passes through untouched from here on 469 | return r.next.Step(accum, value) 470 | } 471 | 472 | // Drop specifies a number of values to initially ignore, after which it will 473 | // let everything through unchanged. 474 | func Drop(min uint) Transducer { 475 | return func(r Reducer) Reducer { 476 | return &drop{reducerBase{r}, min, 0} 477 | } 478 | } 479 | 480 | type dropWhile struct { 481 | reducerBase 482 | f Filterer 483 | accepted bool 484 | } 485 | 486 | func (r *dropWhile) Step(accum interface{}, value interface{}) (interface{}, bool) { 487 | if !r.accepted { 488 | if !r.f(value) { 489 | r.accepted = true 490 | } else { 491 | return accum, false 492 | } 493 | } 494 | return r.next.Step(accum, value) 495 | } 496 | 497 | // DropWhile drops values until the injected filterer function returns false. 498 | func DropWhile(f Filterer) Transducer { 499 | return func(r Reducer) Reducer { 500 | return &dropWhile{reducerBase{r}, f, false} 501 | } 502 | } 503 | 504 | type remove struct { 505 | filter 506 | } 507 | 508 | // Remove drops items when the injected filterer function returns true. 509 | // 510 | // It is the inverse of Filter. 
511 | func Remove(f Filterer) Transducer { 512 | return func(r Reducer) Reducer { 513 | return remove{filter{reducerBase{r}, func(value interface{}) bool { 514 | return !f(value) 515 | }}} 516 | } 517 | } 518 | 519 | type escape struct { 520 | reducerBase 521 | f Filterer 522 | c chan<- interface{} 523 | coc bool 524 | } 525 | 526 | func (r escape) Step(accum interface{}, value interface{}) (interface{}, bool) { 527 | var check bool 528 | if vs, ok := value.(ValueStream); ok { 529 | vs, value = vs.Split() 530 | check = r.f(vs) 531 | } else { 532 | check = r.f(value) 533 | } 534 | 535 | if check { 536 | r.c <- value 537 | return accum, false 538 | } else { 539 | return r.next.Step(accum, value) 540 | } 541 | } 542 | 543 | func (r escape) Complete(accum interface{}) interface{} { 544 | if r.coc { 545 | close(r.c) 546 | } 547 | return r.next.Complete(accum) 548 | } 549 | 550 | // An Escape transducer takes a filter func and a send-only value channel. If 551 | // the filtering func returns true, it allows the value to 'escape' from the 552 | // current transduction process and into the channel - which itself may be, 553 | // though is not necessarily, the entry point to a transducing process itself. 554 | // If the filtering func returns false, then the value is passed along to the 555 | // next reducing step unchanged. 556 | // 557 | // Obviously, this transducer has side effects. 558 | // 559 | // The third parameter governs whether the passed channel is closed when the 560 | // Escape reduce step's Complete() method is called (which occurs when the 561 | // transducing process this is involved in is complete). This is very useful for 562 | // auto-cleanup, but could cause panics (send on closed channel) if the channel 563 | // is being sent to from elsewhere. Be cognizant. 
564 | func Escape(f Filterer, c chan<- interface{}, closeOnComplete bool) Transducer { 565 | return func(r Reducer) Reducer { 566 | return escape{reducerBase{r}, f, c, closeOnComplete} 567 | } 568 | } 569 | -------------------------------------------------------------------------------- /transducers_test.go: -------------------------------------------------------------------------------- 1 | package transducers 2 | 3 | import ( 4 | "fmt" 5 | "testing" 6 | ) 7 | 8 | var ints = []int{1, 2, 3, 4, 5} 9 | var evens = []int{2, 4} 10 | 11 | func dt(t []Transducer) []Transducer { 12 | if testing.Verbose() { 13 | return AttachLoggers(fmt.Printf, t...) 14 | } else { 15 | return AttachLoggers(func(s string, v ...interface{}) (int, error) { 16 | return 0, nil 17 | }, t...) 18 | } 19 | } 20 | 21 | // tb == testbottom. simple appender 22 | func tb() Reducer { 23 | b := CreateStep(func(accum interface{}, value interface{}) (interface{}, bool) { 24 | return append(accum.([]int), value.(int)), false 25 | }) 26 | b.I = func() interface{} { 27 | return make([]int, 0) 28 | } 29 | 30 | return b 31 | } 32 | 33 | func rchan(i int) chan interface{} { 34 | c := make(chan interface{}, 0) 35 | go StreamIntoChan(Range(i), c) 36 | return c 37 | } 38 | 39 | func intSliceEquals(a []int, b []int, t *testing.T) { 40 | if len(a) != len(b) { 41 | t.Error("Slices not even length") 42 | } 43 | 44 | for k, v := range a { 45 | if b[k] != v { 46 | t.Error("Error on index", k, ": expected", v, "got", b[k]) 47 | } 48 | } 49 | } 50 | 51 | func toi(i ...interface{}) []interface{} { 52 | return i 53 | } 54 | 55 | func streamEquals(expected []interface{}, s ValueStream, t *testing.T) { 56 | for k, v := range expected { 57 | val, done := s() 58 | if done { 59 | t.Errorf("Stream terminated before end of slice reached") 60 | } 61 | if v != val { 62 | t.Errorf("Error on index %v: expected %v got %v", k, v, val) 63 | } 64 | } 65 | 66 | _, done := s() 67 | if !done { 68 | t.Errorf("Exhausted slice, but stream had 
more values") 69 | } 70 | } 71 | 72 | func chanEquals(expected []interface{}, c <-chan interface{}, t *testing.T) { 73 | var i int 74 | for val := range c { 75 | if len(expected) <= i { 76 | t.Errorf("Exhausted slice, but channel had more values") 77 | } 78 | 79 | if val != expected[i] { 80 | t.Errorf("Error on index %v: expected %v got %v", i, expected[i], val) 81 | } 82 | i++ 83 | } 84 | 85 | if i < len(expected) { 86 | t.Errorf("Expected slice was longer than list of channel vals") 87 | } 88 | } 89 | 90 | func TestTransduceMF(t *testing.T) { 91 | mf := Transduce(ToStream(ints), tb(), Map(Inc), Filter(Even)).([]int) 92 | fm := Transduce(ToStream(ints), tb(), Filter(Even), Map(Inc)).([]int) 93 | 94 | intSliceEquals([]int{2, 4, 6}, mf, t) 95 | intSliceEquals([]int{3, 5}, fm, t) 96 | } 97 | 98 | func TestTransduceMapFilterMapcat(t *testing.T) { 99 | xform := []Transducer{Filter(Even), Map(Inc), Mapcat(Range)} 100 | result := Transduce(ToStream(ints), tb(), dt(xform)...).([]int) 101 | 102 | intSliceEquals([]int{0, 1, 2, 0, 1, 2, 3, 4}, result, t) 103 | } 104 | 105 | func TestTransduceMapFilterMapcatDedupe(t *testing.T) { 106 | xform := []Transducer{Filter(Even), Map(Inc), Mapcat(Range), Dedupe()} 107 | 108 | result := Transduce(ToStream(ints), tb(), dt(xform)...).([]int) 109 | intSliceEquals([]int{0, 1, 2, 3, 4}, result, t) 110 | 111 | // Dedupe is stateful. 
Do it twice to demonstrate that's handled 112 | result2 := Transduce(ToStream(ints), tb(), dt(xform)...).([]int) 113 | intSliceEquals([]int{0, 1, 2, 3, 4}, result2, t) 114 | } 115 | 116 | func TestTransduceChunkFlatten(t *testing.T) { 117 | xform := []Transducer{Chunk(3), Mapcat(Flatten)} 118 | result := Transduce(Range(6), tb(), dt(xform)...).([]int) 119 | // TODO crappy test b/c the steps are logical inversions - need to improv on Seq for better test 120 | 121 | intSliceEquals(([]int{0, 1, 2, 3, 4, 5}), result, t) 122 | } 123 | 124 | func TestTransduceChunkChunkByFlatten(t *testing.T) { 125 | chunker := func(value interface{}) interface{} { 126 | return sum(value.(ValueStream)) > 7 127 | } 128 | xform := []Transducer{Chunk(3), ChunkBy(chunker), Mapcat(Flatten)} 129 | result := Transduce(Range(19), tb(), dt(xform)...).([]int) 130 | 131 | intSliceEquals(t_range(19), result, t) 132 | } 133 | 134 | func TestMultiChunkBy(t *testing.T) { 135 | chunker := func(value interface{}) interface{} { 136 | switch v := value.(int); { 137 | case v < 4: 138 | return "boo" 139 | case v < 7: 140 | return false 141 | default: 142 | return "boo" 143 | } 144 | } 145 | 146 | xform := []Transducer{ChunkBy(chunker), Map(Sum)} 147 | result := Transduce(Range(10), tb(), dt(xform)...).([]int) 148 | intSliceEquals([]int{6, 15, 24}, result, t) 149 | } 150 | 151 | func TestTransduceSample(t *testing.T) { 152 | result := Transduce(Range(12), tb(), RandomSample(1)).([]int) 153 | intSliceEquals(t_range(12), result, t) 154 | 155 | result2 := Transduce(Range(12), tb(), RandomSample(0)).([]int) 156 | if len(result2) != 0 { 157 | t.Error("Random sampling with 0 ρ should filter out all results") 158 | } 159 | } 160 | 161 | func TestTakeNth(t *testing.T) { 162 | result := Transduce(Range(21), tb(), TakeNth(7)).([]int) 163 | 164 | intSliceEquals([]int{6, 13, 20}, result, t) 165 | } 166 | 167 | func TestKeep(t *testing.T) { 168 | v := []interface{}{0, nil, 1, 2, nil, false} 169 | 170 | // include this 
type converter to make the bool into an int, or seq will have 171 | // a type panic at the end. Just to prove that Keep retains false vals. 172 | mapf := func(val interface{}) interface{} { 173 | if _, ok := val.(bool); ok { 174 | return 15 175 | } 176 | return val 177 | } 178 | 179 | keepf := func(val interface{}) interface{} { 180 | return val 181 | } 182 | 183 | xform := []Transducer{Keep(keepf), Map(mapf)} 184 | 185 | result := Transduce(ToStream(v), tb(), dt(xform)...).([]int) 186 | 187 | intSliceEquals([]int{0, 1, 2, 15}, result, t) 188 | } 189 | 190 | func TestKeepIndexed(t *testing.T) { 191 | keepf := func(index int, value interface{}) interface{} { 192 | if !Even(index) { 193 | return nil 194 | } 195 | return index * value.(int) 196 | } 197 | 198 | td := KeepIndexed(keepf) 199 | 200 | result := Transduce(Range(7), tb(), td).([]int) 201 | intSliceEquals([]int{0, 4, 16, 36}, result, t) 202 | 203 | result2 := Transduce(Range(7), tb(), td).([]int) 204 | intSliceEquals([]int{0, 4, 16, 36}, result2, t) 205 | } 206 | 207 | func TestReplace(t *testing.T) { 208 | tostrings := map[interface{}]interface{}{ 209 | 2: "two", 210 | 6: "six", 211 | 18: "eighteen", 212 | } 213 | 214 | toints := map[interface{}]interface{}{ 215 | "two": 55, 216 | "six": 35, 217 | "eighteen": 41, 218 | } 219 | 220 | xform := []Transducer{Replace(tostrings), Replace(toints)} 221 | result := Transduce(Range(19), tb(), dt(xform)...).([]int) 222 | 223 | intSliceEquals([]int{0, 1, 55, 3, 4, 5, 35, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 41}, result, t) 224 | } 225 | 226 | func TestMapChunkTakeFlatten(t *testing.T) { 227 | xform := []Transducer{Map(Inc), Chunk(2), Take(2), Mapcat(Flatten)} 228 | result := Transduce(Range(6), tb(), dt(xform)...).([]int) 229 | intSliceEquals([]int{1, 2, 3, 4}, result, t) 230 | 231 | result2 := Transduce(Range(6), tb(), dt(xform)...).([]int) 232 | intSliceEquals([]int{1, 2, 3, 4}, result2, t) 233 | } 234 | 235 | func TestTakeWhile(t *testing.T) { 236 | filter := 
func(value interface{}) bool { 237 | return value.(int) < 4 238 | } 239 | result := Transduce(Range(6), tb(), TakeWhile(filter)).([]int) 240 | intSliceEquals([]int{0, 1, 2, 3}, result, t) 241 | } 242 | 243 | func TestDropDropDropWhileTake(t *testing.T) { 244 | dw := func(value interface{}) bool { 245 | return value.(int) < 5 246 | } 247 | td := []Transducer{Drop(1), Drop(1), DropWhile(dw), Take(5)} 248 | result := Transduce(Range(50), tb(), dt(td)...).([]int) 249 | 250 | intSliceEquals([]int{5, 6, 7, 8, 9}, result, t) 251 | } 252 | 253 | func TestRemove(t *testing.T) { 254 | result := Transduce(Range(8), tb(), Remove(Even)).([]int) 255 | intSliceEquals([]int{1, 3, 5, 7}, result, t) 256 | } 257 | 258 | func TestFlattenValueStream(t *testing.T) { 259 | stream := valueSlice{ 260 | ToStream([]int{0, 1}), 261 | valueSlice{ 262 | ToStream([]int{2, 3}), 263 | ToStream([]int{4, 5, 6}), 264 | }.AsStream(), 265 | ToStream([]int{7, 8}), 266 | }.AsStream().Flatten() 267 | 268 | var flattened []int 269 | stream.Each(func(v interface{}) { 270 | flattened = append(flattened, v.(int)) 271 | }) 272 | 273 | intSliceEquals(t_range(9), flattened, t) 274 | } 275 | 276 | func TestEscape(t *testing.T) { 277 | c := make(chan interface{}, 0) 278 | res := Go(rchan(5), 0, dt([]Transducer{Escape(Even, c, true)})...) 279 | go chanEquals(toi(0, 2, 4), c, t) 280 | chanEquals(toi(1, 3), res, t) 281 | 282 | // connect two transduction processes together 283 | c = make(chan interface{}, 0) 284 | res1 := Go(rchan(5), 0, dt([]Transducer{Escape(Even, c, true)})...) 285 | res2 := Go(c, 0, dt([]Transducer{Map(Inc), Map(Inc), Map(Inc)})...) 
286 | go chanEquals(toi(1, 3), res1, t) 287 | go chanEquals(toi(3, 5, 7), res2, t) 288 | } 289 | 290 | func TestStreamSplit(t *testing.T) { 291 | stream := Range(3) 292 | stream, dupd := stream.Split() 293 | 294 | var res1, res2 []int 295 | stream.Each(func(value interface{}) { 296 | res1 = append(res1, value.(int)) 297 | }) 298 | 299 | dupd.Each(func(value interface{}) { 300 | res2 = append(res2, value.(int)) 301 | }) 302 | 303 | intSliceEquals([]int{0, 1, 2}, res1, t) 304 | intSliceEquals([]int{0, 1, 2}, res2, t) 305 | 306 | // test recursive 307 | base := valueSlice{ 308 | []int{0, 1}, 309 | valueSlice{ 310 | []int{2, 3}, 311 | []int{4, 5, 6}, 312 | }, 313 | []int{7, 8}, 314 | } 315 | rstream := valueSlice{ 316 | ToStream([]int{0, 1}), 317 | valueSlice{ 318 | ToStream([]int{2, 3}), 319 | ToStream([]int{4, 5, 6}), 320 | }.AsStream(), 321 | ToStream([]int{7, 8}), 322 | }.AsStream() 323 | 324 | rstream, dup := rstream.Split() 325 | r1 := ToSlice(rstream) 326 | if fmt.Sprintf("%v", r1) != fmt.Sprintf("%v", base) { 327 | t.Error("First stream not expected value, got", r1) 328 | } 329 | r2 := ToSlice(dup) 330 | if fmt.Sprintf("%v", r2) != fmt.Sprintf("%v", base) { 331 | t.Error("Second stream not expected value, got", r2) 332 | } 333 | } 334 | 335 | func TestEduction(t *testing.T) { 336 | var res ValueStream 337 | 338 | // simple 1:1 339 | xf1 := []Transducer{Map(Inc)} 340 | res = Eduction(Range(5), dt(xf1)...) 341 | streamEquals(toi(1, 2, 3, 4, 5), res, t) 342 | 343 | // contractor 344 | xf2 := append(xf1, Filter(Even)) 345 | res = Eduction(Range(5), dt(xf2)...) 346 | streamEquals(toi(2, 4), res, t) 347 | 348 | // expander 349 | xf3 := append(xf2, Mapcat(Range)) 350 | res = Eduction(Range(5), dt(xf3)...) 351 | streamEquals(toi(0, 1, 0, 1, 2, 3), res, t) 352 | 353 | // terminator 354 | xf4 := append(xf3, Take(5)) 355 | res = Eduction(Range(5), dt(xf4)...) 
356 | streamEquals(toi(0, 1, 0, 1, 2), res, t) 357 | 358 | // stateful/flusher 359 | xf5 := append(xf4, Chunk(2), Mapcat(Flatten)) // add flatten b/c no auto-recursive compare 360 | res = Eduction(Range(5), dt(xf5)...) 361 | streamEquals(toi(0, 1, 0, 1, 2), res, t) 362 | 363 | // feels like there are more permutations to check 364 | } 365 | 366 | func TestGo(t *testing.T) { 367 | var res <-chan interface{} 368 | 369 | // simple 1:1 370 | xf1 := []Transducer{Map(Inc)} 371 | res = Go(rchan(5), 0, dt(xf1)...) 372 | chanEquals(toi(1, 2, 3, 4, 5), res, t) 373 | res = Go(rchan(5), 2, dt(xf1)...) 374 | chanEquals(toi(1, 2, 3, 4, 5), res, t) 375 | 376 | // contractor 377 | xf2 := append(xf1, Filter(Even)) 378 | res = Go(rchan(5), 0, dt(xf2)...) 379 | chanEquals(toi(2, 4), res, t) 380 | res = Go(rchan(5), 2, dt(xf2)...) 381 | chanEquals(toi(2, 4), res, t) 382 | 383 | // expander 384 | xf3 := append(xf2, Mapcat(Range)) 385 | res = Go(rchan(5), 0, dt(xf3)...) 386 | chanEquals(toi(0, 1, 0, 1, 2, 3), res, t) 387 | res = Go(rchan(5), 2, dt(xf3)...) 388 | chanEquals(toi(0, 1, 0, 1, 2, 3), res, t) 389 | 390 | // terminator 391 | xf4 := append(xf3, Take(5)) 392 | res = Go(rchan(5), 0, dt(xf4)...) 393 | chanEquals(toi(0, 1, 0, 1, 2), res, t) 394 | res = Go(rchan(5), 2, dt(xf4)...) 395 | chanEquals(toi(0, 1, 0, 1, 2), res, t) 396 | 397 | // stateful/flusher 398 | xf5 := append(xf4, Chunk(2), Mapcat(Flatten)) // add flatten b/c no auto-recursive compare 399 | res = Go(rchan(5), 0, dt(xf5)...) 400 | chanEquals(toi(0, 1, 0, 1, 2), res, t) 401 | res = Go(rchan(5), 2, dt(xf5)...) 402 | chanEquals(toi(0, 1, 0, 1, 2), res, t) 403 | 404 | // feels like there are more permutations to check 405 | } 406 | --------------------------------------------------------------------------------