 * Use parameters `server 0.0.0.0 6001` to start server listening on port
 * 6001.
 *
 * Use parameters `client 127.0.0.1 6001` to start client connecting to server
 * on 127.0.0.1:6001.
 */
public static void main(String[] args) throws Exception {
  if (args.length == 0) {
    ActorSystem serverSystem = ActorSystem.create("Server");
    ActorSystem clientSystem = ActorSystem.create("Client");
    InetSocketAddress serverAddress = new InetSocketAddress("127.0.0.1", 6000);

    server(serverSystem, serverAddress);

    // http://typesafehub.github.io/ssl-config/CertificateGeneration.html#client-configuration
    client(clientSystem, serverAddress);
  } else {
    InetSocketAddress serverAddress;
    if (args.length == 3) {
      serverAddress = new InetSocketAddress(args[1], Integer.valueOf(args[2]));
    } else {
      serverAddress = new InetSocketAddress("127.0.0.1", 6000);
    }
    if (args[0].equals("server")) {
      ActorSystem system = ActorSystem.create("Server");
      server(system, serverAddress);
    } else if (args[0].equals("client")) {
      ActorSystem system = ActorSystem.create("Client");
      client(system, serverAddress);
    }
  }
}

// ------------------------ server ------------------------------

public static void server(ActorSystem system, InetSocketAddress serverAddress) {
  final ActorMaterializer materializer = ActorMaterializer.create(system);

  final Sink
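The excerpt above shows the entry point of the TcpEcho sample. For intuition about the echo exchange itself, here is a minimal sketch using plain java.net sockets, with no Akka involved; the class and variable names are illustrative, not part of the sample.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class PlainEcho {
    public static void main(String[] args) throws Exception {
        // Server: accept one connection and echo its bytes back until EOF.
        try (ServerSocket server = new ServerSocket(0)) { // port 0 = any free port
            int port = server.getLocalPort();
            Thread serverThread = new Thread(() -> {
                try (Socket conn = server.accept()) {
                    conn.getInputStream().transferTo(conn.getOutputStream());
                } catch (IOException e) {
                    e.printStackTrace();
                }
            });
            serverThread.start();

            // Client: send a message, signal end of input, read the echoed reply.
            try (Socket client = new Socket("127.0.0.1", port)) {
                client.getOutputStream().write("hello".getBytes(StandardCharsets.UTF_8));
                client.shutdownOutput(); // EOF to the server so the echo loop finishes
                byte[] reply = client.getInputStream().readAllBytes();
                System.out.println(new String(reply, StandardCharsets.UTF_8)); // prints "hello"
            }
            serverThread.join();
        }
    }
}
```

The Akka Streams version replaces the blocking read/write loop with a Flow of ByteString elements, but the wire-level behavior is the same.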
This tutorial contains a few samples that demonstrate Akka Streams.

Akka Streams is an implementation of Reactive Streams, which is a standard for asynchronous stream processing with non-blocking backpressure. Akka Streams is interoperable with other Reactive Streams implementations.

Akka Streams is currently under development and these samples use a preview release, i.e. changes can be expected. Please try it out and send feedback to the Akka mailing list.

Akka Streams provides a way to express and run a chain of asynchronous processing steps acting on a sequence of elements. Every step is processed by one actor to support parallelism. The user describes the "what" instead of the "how", i.e. things like batching, buffering and thread-safety are handled behind the scenes.
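The "describe the what" idea can be previewed with plain java.util.stream — a strict, single-threaded analogy, not the Akka Streams API: you declare the transformation steps and a terminal operation, and the library decides how to execute them.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class DeclarativePipeline {
    public static void main(String[] args) {
        String text = "akka streams is a library for stream processing";
        // Declare the "what": split into words, keep the longer ones, upper-case them.
        List<String> result = Arrays.stream(text.split(" "))
            .filter(word -> word.length() > 3)   // skip short words
            .map(String::toUpperCase)            // transform each element
            .collect(Collectors.toList());
        System.out.println(result); // [AKKA, STREAMS, LIBRARY, STREAM, PROCESSING]
    }
}
```

Akka Streams pipelines read much the same way, but each step may run asynchronously in its own actor and the stream can be unbounded.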
The processing steps are declared with a DSL, a so called Flow. A Flow may be connected to a Source and/or a Sink. It may also exist without either of these end points, as an "open" flow. Any open flow when connected to a Source itself becomes a Source and likewise when connected to a Sink becomes a Sink. A Flow with both a Source and a Sink is called a RunnableFlow and may be executed.

The Source can be constructed from a collection, an iterator, a future, or a function which is evaluated repeatedly.

Each DSL element produces a new Flow that can be further transformed, building up a description of the complete transformation pipeline. In order to execute this pipeline the Flow must be runnable (have both Source and Sink endpoints), and is materialized by calling one of the execution methods, which include .run, .runWith and .runForeach.

Running a Flow involves a process called materialization, which requires a FlowMaterializer configured for an actor system.

It should be noted that the streams modeled by this library are "hot", meaning that they asynchronously flow through a series of processors without detailed control by the user. In particular it is not predictable how many elements a given transformation step might buffer before handing elements downstream, which means that transformation functions may be invoked more often than for corresponding transformations on strict collections like List. An important consequence is that elements that were produced into a stream may be discarded by later processors, e.g. when using the take combinator.

By default every operation is executed within its own Actor to enable full pipelining of the chained set of computations. This behavior is determined by the akka.stream.ActorMaterializer which is required by those methods that materialize the Flow into a series of org.reactivestreams.Processor instances that are started and active. Synchronous compaction of steps is possible (but not yet implemented).

Basic transformation

What does a Flow look like?

Open BasicTransformation.java

Here we use an Iterator over the Array produced by splitting the text using the spaces, as input producer; note that the iterator is an Iterator<String> and this produces a Source<String>. The flow written to use this Source must match the type, so we could not treat the source as a source of Int, for example.

In this sample we convert each read line to upper case and print it to the console. This is done in the lines map(e -> e.toUpperCase()) and runForeach(System.out::println, materializer).

The map(e -> e.toUpperCase()) takes Strings and produces Strings. Behind the scenes, this constructs a Transformer<String, String> which is itself a Flow. When this is attached to the Source<String>, the result is a new Flow that is also a Source<String>. If the map was over a function that converted, say, String to Int, the result would be a Source<Int> when attaching it to this Source<String>.

The runForeach(System.out::println, materializer) constructs and attaches a Sink, in this case an implementation called Sink.runForeach, and again this is specifically a Sink.runForeach<String> which matches the type of the Source<String>. The result of attaching this matching Sink to the Source creates a RunnableFlow which is then also run by the runForeach call.

Unlike a runForeach on a collection (which returns Unit), the runForeach on a Flow returns a CompletionStage<Done> instead. Because we get a CompletionStage back, we can use it to shut down the actor system once the flow is completed. This is accomplished by the final line in the flow:

    handle((done, failure) -> {
      system.terminate();
      return NotUsed.getInstance();
    });

Try to run the sample.stream.BasicTransformation class by selecting it in the 'Main class' menu in the Run tab and clicking the 'Run' button.

Try to add additional steps in the flow, for example skip short lines:

    filter(line -> line.length() > 3)

The API is intended to be familiar to anyone used to the collections API in Scala.

All stream manipulation operations can be found in the API documentation.

Backpressure

The mandatory non-blocking backpressure is a key feature of Reactive Streams.

Open WritePrimes.java

In this sample we use a fast producer and several consumers, with potentially different throughput capacity. To avoid out-of-memory problems it is important that the producer does not generate elements faster than what can be consumed. Also the speed of the slowest consumer must be taken into account to avoid unbounded buffering in intermediate steps.

Here we use a random number generator as input. The input producer is a block of code which is evaluated repeatedly. It can generate elements very fast if needed.

We filter the numbers through two prime number checks and end up with a stream of prime numbers whose neighbor +2 is also a prime number. These two filter steps can potentially be pipelined, i.e. executed in parallel.

Then we connect that prime number producer to two consumers: one writing to a file, and another printing to the console. To simulate that the file writer is slow we have added an additional sleep in a map stage right before the SynchronousFileSink.

The connections are made using the FlowGraph DSL: we use the FlowGraph.factory().closed(...) method to construct a runnable graph (substituting 'partial' for 'closed' would create one that has open inputs or outputs that remain to be connected). The first argument is the 'runForeach' Sink that materializes to a Future that we can use to detect (abnormal) stream termination; the second argument is a lambda expression that takes a FlowGraph.Builder and the imported 'slowSink' shape and performs the actual wiring.

The first step is to create and import a broadcast node with two outputs, then we use the builder to connect the source via the broadcast to both sinks.

Try to run the sample.stream.WritePrimes class by selecting it in the 'Main class' menu in the Run tab and clicking the 'Run' button.

Note that the speed of the output in the console is limited by the slow file writer, i.e. one element per second.

Open primes.txt to see the file output.

Stream of streams

Let us take a look at an example of more advanced stream manipulation.

Open GroupLogFile.java

We want to read a log file and pipe entries of different log levels to separate files.

In this sample we extract the level with a regular expression matching the log levels and then write the elements of each group to a separate file.

Try to run the sample.stream.GroupLogFile class by selecting it in the 'Main class' menu in the Run tab and clicking the 'Run' button.

Open the input logfile.txt and look at the resulting output log files in the target directory.

TCP Stream

Akka Streams also provides a stream based API on top of Akka I/O.

Open TcpEcho.java

When you Run TcpEcho without parameters it starts both client and server in the same JVM and the client connects to the server over port 6000.

The server is started by calling bind on the akka.stream.javadsl.Tcp extension. It returns a StreamTcp.ServerBinding. Each new client connection is represented by a new IncomingTcpConnection element produced by the connections Source<StreamTcp.IncomingConnection> of the ServerBinding. From the connection the server can operate on the ByteString elements.

In this sample the server sends back the same bytes as it receives.

You can add transformation of the bytes using a Flow, for example to convert characters to upper case:

    final Flow<ByteString, ByteString, NotUsed> toUpper =
        Flow.<ByteString>create().map(byteStr -> ByteString.fromString(byteStr.utf8String().toUpperCase()));
    conn.handleWith(toUpper, materializer);

The connection from the client is established by calling outgoingConnection on the akka.stream.javadsl.Tcp extension and attaching the corresponding flow to the input Source and output Sink.

In this sample the client sends a sequence of characters one-by-one to the server, aggregates the replies into a single ByteString, and finally prints that:

    Future<ByteString> result = responseStream.runFold(
        ByteString.empty(), (acc, in) -> acc.concat(in), materializer);

Try to run the sample.stream.TcpEcho class by selecting it in the 'Main class' menu in the Run tab and clicking the 'Run' button.

That runs the client and server in the same JVM process. It can be more interesting to run them in separate processes. Run the following commands in separate terminal windows:

    <path to activator dir>/activator "run-main sample.stream.TcpEcho server 0.0.0.0 6001"

    <path to activator dir>/activator "run-main sample.stream.TcpEcho client 127.0.0.1 6001"

You can also interact with the server with telnet:

    telnet 127.0.0.1 6001

Type a few characters in the telnet session and press enter to see them echoed back to the terminal.
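The backpressure behavior discussed in the Backpressure section — a fast producer throttled to the pace of its slowest consumer — can be sketched outside Akka with a bounded java.util.concurrent queue: put() blocks while the buffer is full, so the producer can never get more than the buffer size ahead. This is only an analogy; Reactive Streams achieves the same bound with asynchronous demand signalling rather than blocking. All names below are illustrative.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedBufferDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(4); // at most 4 elements in flight

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 20; i++) {
                    buffer.put(i); // blocks while the buffer is full -> backpressure
                }
                buffer.put(-1); // end-of-stream marker
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        // Slow consumer: drains the buffer one element at a time.
        int sum = 0;
        while (true) {
            int n = buffer.take();
            if (n == -1) break;
            Thread.sleep(1); // simulate slow processing
            sum += n;
        }
        producer.join();
        System.out.println(sum); // 1 + 2 + ... + 20 = 210
    }
}
```

However fast the producer loop runs, memory use stays bounded by the buffer capacity — the same guarantee the WritePrimes sample gets from non-blocking backpressure.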
Links