This wrapper is transparently handled by the {@link ChkStream}
16 |  * infrastructure. You should not see it thrown from {@link ChkStream}
17 | * methods, unless you call {@link ChkStream#toStream()} to get the underlying
18 | * raw {@link Stream} from a {@link ChkStream}.
19 | *
20 | * @author Alexander Dorokhine
21 | */
22 | public class ChkStreamWrappedException extends RuntimeException {
23 | private static final long serialVersionUID = 1L;
24 | public ChkStreamWrappedException(Exception e) {
25 | super(e);
26 | }
27 | }
28 |
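A minimal sketch of how the wrapper surfaces once the raw Stream is obtained via
toStream() (assuming a ChkStream<String, IOException> named `chk` built through
ChkStreams; the throwing lambda is illustrative only):

    // Assumption: chk is a ChkStream<String, IOException>.
    Stream<String> raw = chk
        .map(s -> { if (s.isEmpty()) throw new IOException("empty"); return s; })
        .toStream();
    try {
        raw.count();  // terminal operation runs the throwing lambda
    } catch (ChkStreamWrappedException wrapped) {
        // the original checked exception is preserved as the cause
        IOException cause = (IOException) wrapped.getCause();
    }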
--------------------------------------------------------------------------------
/third_party/openjdk/java8/build.gradle:
--------------------------------------------------------------------------------
1 | apply plugin: 'java-library'
2 | apply plugin: 'maven-publish'
3 |
4 | sourceSets {
5 | main {
6 | java.srcDir file('src/main/generated')
7 | }
8 | }
9 |
10 | sourceCompatibility = 1.8
11 | targetCompatibility = 1.8
12 |
13 | dependencies {
14 | api project(':third_party:openjdk:common')
15 |
16 | testImplementation group: 'com.google.truth', name: 'truth', version: '0.32'
17 | testImplementation group: 'junit', name: 'junit', version: '4.11'
18 | }
19 |
20 | task srcJar(type: Jar, dependsOn: classes) {
21 | classifier = 'src'
22 | from sourceSets.main.allSource
23 | }
24 |
25 | task javadocJar(type: Jar, dependsOn: javadoc) {
26 | classifier = 'javadoc'
27 | from javadoc.destinationDir
28 | }
29 |
30 | artifacts {
31 | archives srcJar
32 | archives javadocJar
33 | }
34 |
35 | publishing {
36 | publications {
37 | maven(MavenPublication) {
38 | groupId 'com.google.chkstream'
39 | artifactId 'chkstream-java8'
40 | version '1.0.0-SNAPSHOT'
41 |
42 | from components.java
43 | }
44 | }
45 | }
46 |
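A consuming build could then reference the published artifact by the coordinates
declared above (a sketch only; repository configuration is omitted):

    dependencies {
        implementation 'com.google.chkstream:chkstream-java8:1.0.0-SNAPSHOT'
    }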
--------------------------------------------------------------------------------
/third_party/openjdk/java8/src/test/java/com/google/chkstream/ChkStreamFunctionalTest.java:
--------------------------------------------------------------------------------
1 | /*
2 | * Copyright 2017 Google.
3 | *
4 |  * This file is licensed under the GPLv2+Classpath Exception, the full text of
5 |  * which is found in the LICENSE file at the root of this project.
6 | *
7 | * Google designates this particular file as subject to the "Classpath"
8 | * exception as provided in the LICENSE file that accompanied this code.
9 | */
10 | package com.google.chkstream;
11 |
12 | import static com.google.common.truth.Truth.assertThat;
13 | import static org.junit.Assert.*;
14 |
15 | import java.io.IOException;
16 | import java.util.ArrayList;
17 | import java.util.List;
18 | import java.util.stream.Collectors;
19 | import java.util.stream.IntStream;
20 | import java.util.stream.Stream;
21 |
22 | import org.junit.Test;
23 |
24 | import com.google.chkstream.java8.ChkStream;
25 | import com.google.chkstream.java8.ChkStreams;
26 |
27 | /** Tests for the streaming functionality (maps and transforms) of ChkStream. */
28 | public class ChkStreamFunctionalTest {
29 |
30 | // Tests for transforms (non-sinks)
31 |
32 | @Test
33 | public void testMap() {
34 | assertThat(
35 | wrap(Stream.of(1, 2, 3))
36 | .map(x -> x * 10)
37 | .collect(Collectors.toList()))
38 | .containsExactly(10, 20, 30).inOrder();
39 | }
40 |
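    // A hedged sketch (not in the original file): the same wrap(...) helper used by
    // testMap() above, exercising filter() with the list collector.
    @Test
    public void testFilter() {
        assertThat(
            wrap(Stream.of(1, 2, 3, 4))
                .filter(x -> x % 2 == 0)
                .collect(Collectors.toList()))
            .containsExactly(2, 4).inOrder();
    }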
41 | @Test
42 | public void testFlatMap_Stream() throws IOException {
43 | List It is possible, and in fact common, for a base class to
39 | * implement AutoCloseable even though not all of its subclasses or
40 | * instances will hold releasable resources. For code that must operate
41 | * in complete generality, or when it is known that the {@code AutoCloseable}
42 | * instance requires resource release, it is recommended to use {@code
43 | * try}-with-resources constructions. However, when using facilities such as
44 | * {@link java.util.stream.Stream} that support both I/O-based and
45 | * non-I/O-based forms, {@code try}-with-resources blocks are in
46 | * general unnecessary when using non-I/O-based forms.
47 | *
48 | * @author Josh Bloch
49 | * @since 1.7
50 | */
51 | public interface AutoCloseable {
52 | /**
53 | * Closes this resource, relinquishing any underlying resources.
54 | * This method is invoked automatically on objects managed by the
55 | * {@code try}-with-resources statement.
56 | *
57 | * While this interface method is declared to throw {@code
58 | * Exception}, implementers are strongly encouraged to
59 | * declare concrete implementations of the {@code close} method to
60 | * throw more specific exceptions, or to throw no exception at all
61 | * if the close operation cannot fail.
62 | *
63 | * Cases where the close operation may fail require careful
64 | * attention by implementers. It is strongly advised to relinquish
65 | * the underlying resources and to internally mark the
66 | * resource as closed, prior to throwing the exception. The {@code
67 | * close} method is unlikely to be invoked more than once and so
68 | * this ensures that the resources are released in a timely manner.
69 | * Furthermore it reduces problems that could arise when the resource
70 | * wraps, or is wrapped, by another resource.
71 | *
72 | * Implementers of this interface are also strongly advised
73 | * to not have the {@code close} method throw {@link
74 | * InterruptedException}.
75 | *
76 | * This exception interacts with a thread's interrupted status,
77 | * and runtime misbehavior is likely to occur if an {@code
78 | * InterruptedException} is {@linkplain Throwable#addSuppressed
79 | * suppressed}.
80 | *
81 | * More generally, if it would cause problems for an
82 | * exception to be suppressed, the {@code AutoCloseable.close}
83 | * method should not throw it.
84 | *
85 | * Note that unlike the {@link java.io.Closeable#close close}
86 | * method of {@link java.io.Closeable}, this {@code close} method
87 | * is not required to be idempotent. In other words,
88 | * calling this {@code close} method more than once may have some
89 | * visible side effect, unlike {@code Closeable.close} which is
90 | * required to have no effect if called more than once.
91 | *
92 | * However, implementers of this interface are strongly encouraged
93 | * to make their {@code close} methods idempotent.
94 | *
95 | * @throws Exception if this resource cannot be closed
96 | */
97 | void close() throws Exception;
98 | }
99 |
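A short illustration of the contract described above, using only standard JDK types
(the file path is hypothetical): the reader is closed automatically, and a failure in
close() surfaces through the same catch block.

    try (BufferedReader reader = Files.newBufferedReader(Paths.get("/tmp/example.txt"))) {
        System.out.println(reader.readLine());
    } catch (IOException e) {
        // thrown by readLine() or by the implicit close()
    }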
--------------------------------------------------------------------------------
/third_party/openjdk/streamsupport/src/test/java/com/google/chkstream/ChkStreamFunctionalTest.java:
--------------------------------------------------------------------------------
1 | /*
2 | * Copyright 2017 Google.
3 | *
4 |  * This file is licensed under the GPLv2+Classpath Exception, the full text of
5 |  * which is found in the LICENSE file at the root of this project.
6 | *
7 | * Google designates this particular file as subject to the "Classpath"
8 | * exception as provided in the LICENSE file that accompanied this code.
9 | */
10 | package com.google.chkstream;
11 |
12 | import static com.google.common.truth.Truth.assertThat;
13 | import static org.junit.Assert.*;
14 |
15 | import java.io.IOException;
16 | import java.sql.SQLException;
17 | import java.util.ArrayList;
18 | import java.util.List;
19 |
20 | import org.junit.Test;
21 |
22 | import com.google.chkstream.streamsupport.ChkStream;
23 | import com.google.chkstream.streamsupport.ChkStreams;
24 |
25 | import java8.util.stream.Collectors;
26 | import java8.util.stream.IntStreams;
27 | import java8.util.stream.RefStreams;
28 | import java8.util.stream.Stream;
29 |
30 | /** Tests for the streaming functionality (maps and transforms) of ChkStream. */
31 | public class ChkStreamFunctionalTest {
32 |
33 | // Tests for transforms (non-sinks)
34 |
35 | @Test
36 | public void testMap() {
37 | assertThat(
38 | wrap(RefStreams.of(1, 2, 3))
39 | .map(x -> x * 10)
40 | .collect(Collectors.toList()))
41 | .containsExactly(10, 20, 30).inOrder();
42 | }
43 |
44 | @Test
45 | public void testFlatMap_Stream() throws IOException {
46 | List {@link ChkStream} is similar to {@link Stream} except that it can throw
125 | * checked exceptions that have been added to the stream via the
126 |  * {@link #canThrow(Class)} method.
127 | *
128 |  * See the official {@link Stream} documentation for details.
131 | *
132 | * @see Stream
133 | */
134 | public class ${class_type(declare=True)}
135 | implements AutoCloseable {
136 | private final ${specialization}Stream stream;
137 | % for i in xrange(0, num_e):
138 | private final Class This is a terminal
160 | * operation.
161 | *
162 | * @return the element iterator for this stream
163 | */
164 | public PrimitiveIterator.Of${specialization} iterator() {
165 | return stream.iterator();
166 | }
167 |
168 | /**
169 | * Returns a spliterator for the elements of this stream.
170 | *
171 | * This is a terminal
172 | * operation.
173 | *
174 | * @return the element spliterator for this stream
175 | */
176 | public Spliterator.Of${specialization} spliterator() {
177 | return stream.spliterator();
178 | }
179 |
180 | /**
181 | * Returns whether this stream, if a terminal operation were to be executed,
182 |  * would execute in parallel. Calling this method after invoking a
183 | * terminal stream operation method may yield unpredictable results.
184 | *
185 | * @return {@code true} if this stream would execute in parallel if executed
186 | */
187 | public boolean isParallel() {
188 | return stream.isParallel();
189 | }
190 |
191 | /**
192 | * Returns an equivalent stream that is sequential. May return
193 | * itself, either because the stream was already sequential, or because
194 | * the underlying stream state was modified to be sequential.
195 | *
196 | * This is an intermediate
197 | * operation.
198 | *
199 | * @return a sequential stream
200 | */
201 | public ${class_type()} sequential() {
202 | return fromStream(stream.sequential());
203 | }
204 |
205 | /**
206 | * Returns an equivalent stream that is parallel. May return
207 | * itself, either because the stream was already parallel, or because
208 | * the underlying stream state was modified to be parallel.
209 | *
210 | * This is an intermediate
211 | * operation.
212 | *
213 | * @return a parallel stream
214 | */
215 | public ${class_type()} parallel() {
216 | return fromStream(stream.parallel());
217 | }
218 |
219 | /**
220 | * Returns an equivalent stream that is
221 | * unordered. May return
222 | * itself, either because the stream was already unordered, or because
223 | * the underlying stream state was modified to be unordered.
224 | *
225 | * This is an intermediate
226 | * operation.
227 | *
228 | * @return an unordered stream
229 | */
230 | public ${class_type()} unordered() {
231 | return fromStream(stream.unordered());
232 | }
233 |
234 | /**
235 | * Returns an equivalent stream with an additional close handler. Close
236 | * handlers are run when the {@link #close()} method
237 | * is called on the stream, and are executed in the order they were
238 | * added. All close handlers are run, even if earlier close handlers throw
239 | * exceptions. If any close handler throws an exception, the first
240 | * exception thrown will be relayed to the caller of {@code close()}, with
241 | * any remaining exceptions added to that exception as suppressed exceptions
242 | * (unless one of the remaining exceptions is the same exception as the
243 | * first exception, since an exception cannot suppress itself.) May
244 | * return itself.
245 | *
246 | * This is an intermediate
247 | * operation.
248 | *
249 | * @param closeHandler A task to execute when the stream is closed
250 | * @return a stream with a handler that is run if the stream is closed
251 | */
252 | public ${class_type()} onClose(
253 | final ChkRunnable${_ThrowN}
254 | <${exc_list(extends=True)}>
255 | closeHandler) {
256 | return fromStream(
257 | stream.onClose(functionUtils.wrapChkRunnable(closeHandler)));
258 | }
259 |
260 | /**
261 | * Closes this stream, causing all close handlers for this stream pipeline
262 | * to be called.
263 | *
264 | * @see AutoCloseable#close()
265 | */
266 | @Override
267 | public void close() ${throws_list} {
268 | try {
269 | stream.close();
270 | } catch (ChkStreamWrappedException e) {
271 | rethrowException(e);
272 | }
273 | }
274 |
275 | /**
276 | * Returns a stream consisting of the elements of this stream that match
277 | * the given predicate.
278 | *
279 | * This is an intermediate
280 | * operation.
281 | *
282 | * @param predicate a non-interfering,
283 | * stateless
284 | * predicate to apply to each element to determine if it
285 | * should be included
286 | * @return the new stream
287 | */
288 | public ${class_type()} filter(
289 | final Chk${specialization}Predicate${_ThrowN}
290 | <${exc_list(extends=True)}>
291 | predicate) {
292 | return fromStream(
293 | stream.filter(
294 | functionUtils.wrapChk${specialization}Predicate(predicate)));
295 | }
296 |
297 | /**
298 | * Returns an object-valued {@code Stream} consisting of the results of
299 | * applying the given function to the elements of this stream.
300 | *
301 | * This is an
302 | * intermediate operation.
303 | *
304 |  * @param <U> the element type of the new stream
305 | * @param mapper a non-interfering,
306 | * stateless
307 | * function to apply to each element
308 | * @return the new stream
309 | */
310 | public ChkStream${_ThrowN} mapToObj(
311 | Chk${specialization}Function${_ThrowN}
312 | extends U,
313 | ${exc_list(extends=True)}>
314 | mapper) {
315 | Stream newStream =
316 | stream.mapToObj(
317 | functionUtils.wrapChk${specialization}Function(mapper));
318 | return new ChkStream${_ThrowN}(
319 | ${''.join(['e%dClass,' % i for i in xrange(0, num_e)])}
320 | newStream);
321 | }
322 |
323 | /**
324 | * Returns a stream consisting of the results of applying the given
325 | * function to the elements of this stream.
326 | *
327 | * This is an intermediate
328 | * operation.
329 | *
330 | * @param mapper a non-interfering,
331 | * stateless
332 | * function to apply to each element
333 | * @return the new stream
334 | */
335 | public
336 | Chk${specialization}Stream${_ThrowN}
337 | <${exc_list()}>
338 | map(
339 | Chk${specialization}UnaryOperator${_ThrowN}
340 | <${exc_list(extends=True)}>
341 | mapper) {
342 | return fromStream(
343 | stream.map(
344 | functionUtils.wrapChk${specialization}UnaryOperator(mapper)));
345 | }
346 |
347 | % for dest_specialization in SPECIALIZATIONS:
348 | % if dest_specialization != specialization:
349 | /**
350 | * Returns an {@code ${dest_specialization}Stream} consisting of the results
351 | * of applying the given function to the elements of this stream.
352 | *
353 | * This is an
354 | * intermediate operation.
355 | *
356 | * @param mapper a non-interfering,
357 | * stateless
358 | * function to apply to each element
359 | * @return the new stream
360 | */
361 | public
362 | Chk${dest_specialization}Stream${_ThrowN}<${exc_list()}>
363 | mapTo${dest_specialization}(
364 | final Chk${specialization}To${dest_specialization}Function${_ThrowN}
365 | <${exc_list(extends=True)}>
366 | mapper) {
367 | ${dest_specialization}Stream newStream =
368 | stream.mapTo${dest_specialization}(
369 | functionUtils.wrapChk${specialization}To${dest_specialization}Function(
370 | mapper));
371 | return new Chk${dest_specialization}Stream${_ThrowN}<${exc_list()}>(
372 | ${''.join(['e%dClass,' % i for i in xrange(0, num_e)])}
373 | newStream);
374 | }
375 | % endif
376 | % endfor
377 |
378 | /**
379 | * Returns a stream consisting of the results of replacing each element of
380 | * this stream with the contents of a mapped stream produced by applying
381 | * the provided mapping function to each element. Each mapped stream is
382 | * {@link java.util.stream.BaseStream#close() closed} after its contents
383 | * have been placed into this stream. (If a mapped stream is {@code null}
384 | * an empty stream is used, instead.)
385 | *
386 | * This is an intermediate
387 | * operation.
388 | *
389 | * @apiNote
390 | * The {@code flatMap()} operation has the effect of applying a one-to-many
391 | * transformation to the elements of the stream, and then flattening the
392 | * resulting elements into a new stream.
393 | *
394 | * Examples.
395 | *
396 | * If {@code orders} is a stream of purchase orders, and each purchase
397 | * order contains a collection of line items, then the following produces a
398 | * stream containing all the line items in all the orders:
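     * (The example from the upstream {@code Stream#flatMap} documentation,
     * restored here; {@code getLineItems()} is illustrative.)
     * <pre>{@code
     *     orders.flatMap(order -> order.getLineItems().stream())...
     * }</pre>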
399 | * If {@code path} is the path to a file, then the following produces a
404 | * stream of the {@code words} contained in that file:
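     * (Restored from the upstream documentation; {@code Files.lines} and the
     * whitespace split are as given there.)
     * <pre>{@code
     *     Stream<String> lines = Files.lines(path, StandardCharsets.UTF_8);
     *     Stream<String> words = lines.flatMap(line -> Stream.of(line.split(" +")));
     * }</pre>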
405 | * For ordered streams, the selection of distinct elements is stable
460 | * (for duplicated elements, the element appearing first in the encounter
461 | * order is preserved.) For unordered streams, no stability guarantees
462 | * are made.
463 | *
464 | * This is a stateful
465 | * intermediate operation.
466 | *
467 | * @apiNote
468 | * Preserving stability for {@code distinct()} in parallel pipelines is
469 | * relatively expensive (requires that the operation act as a full barrier,
470 | * with substantial buffering overhead), and stability is often not needed.
471 | * Using an unordered stream source (such as {@link #generate(Supplier)})
472 | * or removing the ordering constraint with {@link #unordered()} may result
473 | * in significantly more efficient execution for {@code distinct()} in parallel
474 | * pipelines, if the semantics of your situation permit. If consistency
475 | * with encounter order is required, and you are experiencing poor performance
476 | * or memory utilization with {@code distinct()} in parallel pipelines,
477 | * switching to sequential execution with {@link #sequential()} may improve
478 | * performance.
479 | *
480 | * @return the new stream
481 | */
482 | public ${class_type()} distinct() {
483 | return fromStream(stream.distinct());
484 | }
485 |
486 | /**
487 | * Returns a stream consisting of the elements of this stream, sorted
488 | * according to natural order.
489 | *
490 | * For ordered streams, the sort is stable. For unordered streams, no
491 | * stability guarantees are made.
492 | *
493 | * This is a stateful
494 | * intermediate operation.
495 | *
496 | * @return the new stream
497 | */
498 | public ${class_type()} sorted() {
499 | return fromStream(stream.sorted());
500 | }
501 |
502 | /**
503 | * Returns a stream consisting of the elements of this stream, additionally
504 | * performing the provided action on each element as elements are consumed
505 | * from the resulting stream.
506 | *
507 | * This is an intermediate
508 | * operation.
509 | *
510 | * For parallel stream pipelines, the action may be called at
511 | * whatever time and in whatever thread the element is made available by the
512 | * upstream operation. If the action modifies shared state,
513 | * it is responsible for providing the required synchronization.
514 | *
515 | * @apiNote This method exists mainly to support debugging, where you want
516 | * to see the elements as they flow past a certain point in a pipeline:
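     * (Example restored from the upstream documentation, shown here for the Int
     * specialization.)
     * <pre>{@code
     *     IntStream.of(1, 2, 3, 4)
     *         .filter(e -> e > 2)
     *         .peek(e -> System.out.println("Filtered value: " + e))
     *         .sum();
     * }</pre>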
517 | * This is a short-circuiting
545 | * stateful intermediate operation.
546 | *
547 | * @apiNote
548 | * While {@code limit()} is generally a cheap operation on sequential
549 | * stream pipelines, it can be quite expensive on ordered parallel pipelines,
550 | * especially for large values of {@code maxSize}, since {@code limit(n)}
551 | * is constrained to return not just any n elements, but the
552 | * first n elements in the encounter order. Using an unordered
553 | * stream source (such as {@link #generate(Supplier)}) or removing the
554 | * ordering constraint with {@link #unordered()} may result in significant
555 | * speedups of {@code limit()} in parallel pipelines, if the semantics of
556 | * your situation permit. If consistency with encounter order is required,
557 | * and you are experiencing poor performance or memory utilization with
558 | * {@code limit()} in parallel pipelines, switching to sequential execution
559 | * with {@link #sequential()} may improve performance.
560 | *
561 | * @param maxSize the number of elements the stream should be limited to
562 | * @return the new stream
563 | * @throws IllegalArgumentException if {@code maxSize} is negative
564 | */
565 | public ${class_type()} limit(long maxSize) {
566 | return fromStream(stream.limit(maxSize));
567 | }
568 |
569 | /**
570 | * Returns a stream consisting of the remaining elements of this stream
571 | * after discarding the first {@code n} elements of the stream.
572 | * If this stream contains fewer than {@code n} elements then an
573 | * empty stream will be returned.
574 | *
575 | * This is a stateful
576 | * intermediate operation.
577 | *
578 | * @apiNote
579 | * While {@code skip()} is generally a cheap operation on sequential
580 | * stream pipelines, it can be quite expensive on ordered parallel pipelines,
581 | * especially for large values of {@code n}, since {@code skip(n)}
582 | * is constrained to skip not just any n elements, but the
583 | * first n elements in the encounter order. Using an unordered
584 | * stream source (such as {@link #generate(Supplier)}) or removing the
585 | * ordering constraint with {@link #unordered()} may result in significant
586 | * speedups of {@code skip()} in parallel pipelines, if the semantics of
587 | * your situation permit. If consistency with encounter order is required,
588 | * and you are experiencing poor performance or memory utilization with
589 | * {@code skip()} in parallel pipelines, switching to sequential execution
590 | * with {@link #sequential()} may improve performance.
591 | *
592 | * @param n the number of leading elements to skip
593 | * @return the new stream
594 | * @throws IllegalArgumentException if {@code n} is negative
595 | */
596 | public ${class_type()} skip(long n) {
597 | return fromStream(stream.skip(n));
598 | }
599 |
600 | /**
601 | * Performs an action for each element of this stream.
602 | *
603 | * This is a terminal
604 | * operation.
605 | *
606 | * The behavior of this operation is explicitly nondeterministic.
607 | * For parallel stream pipelines, this operation does not
608 | * guarantee to respect the encounter order of the stream, as doing so
609 | * would sacrifice the benefit of parallelism. For any given element, the
610 | * action may be performed at whatever time and in whatever thread the
611 | * library chooses. If the action accesses shared state, it is
612 | * responsible for providing the required synchronization.
613 | *
614 | * @param action a
615 | * non-interfering action to perform on the elements
616 | */
617 | public void forEach(
618 | final Chk${specialization}Consumer${_ThrowN}
619 | <${exc_list(extends=True)}>
620 | action) ${throws_list} {
621 | try {
622 | stream.forEach(
623 | functionUtils.wrapChk${specialization}Consumer(action));
624 | } catch (ChkStreamWrappedException e) {
625 | rethrowException(e);
626 | }
627 | }
628 |
629 | /**
630 | * Performs an action for each element of this stream, in the encounter
631 | * order of the stream if the stream has a defined encounter order.
632 | *
633 | * This is a terminal
634 | * operation.
635 | *
636 | * This operation processes the elements one at a time, in encounter
637 | * order if one exists. Performing the action for one element
638 | * happens-before
639 | * performing the action for subsequent elements, but for any given element,
640 | * the action may be performed in whatever thread the library chooses.
641 | *
642 | * @param action a
643 | * non-interfering action to perform on the elements
644 | * @see #forEach(Consumer)
645 | */
646 | public void forEachOrdered(
647 | final Chk${specialization}Consumer${_ThrowN}
648 | <${exc_list(extends=True)}>
649 | action) ${throws_list} {
650 | try {
651 | stream.forEachOrdered(
652 | functionUtils.wrapChk${specialization}Consumer(action));
653 | } catch (ChkStreamWrappedException e) {
654 | rethrowException(e);
655 | }
656 | }
657 |
658 | /**
659 | * Returns an array containing the elements of this stream.
660 | *
661 | * This is a terminal
662 | * operation.
663 | *
664 | * @return an array containing the elements of this stream
665 | */
666 | public ${specialization.lower()}[] toArray() ${throws_list} {
667 | try {
668 | return stream.toArray();
669 | } catch (ChkStreamWrappedException e) {
670 | rethrowException(e);
671 | return null;
672 | }
673 | }
674 |
675 | /**
676 | * Performs a reduction on the
677 | * elements of this stream, using the provided identity value and an
678 | * associative
679 | * accumulation function, and returns the reduced value. This is equivalent
680 | * to:
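     * (Pseudocode from the upstream documentation, shown for the Int
     * specialization; "this stream" is not literal Java.)
     * <pre>{@code
     *     int result = identity;
     *     for (int element : this stream)
     *         result = accumulator.applyAsInt(result, element);
     *     return result;
     * }</pre>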
681 | * The {@code identity} value must be an identity for the accumulator
691 | * function. This means that for all {@code t},
692 | * {@code accumulator.apply(identity, t)} is equal to {@code t}.
693 | * The {@code accumulator} function must be an
694 | * associative function.
695 | *
696 | * This is a terminal
697 | * operation.
698 | *
699 | * @apiNote Sum, min, max, average, and string concatenation are all special
700 | * cases of reduction. Summing a stream of numbers can be expressed as:
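     * (Example from the upstream documentation, shown for the Int specialization.)
     * <pre>{@code
     *     int sum = numbers.reduce(0, (a, b) -> a + b);
     * }</pre>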
701 | *
702 | * While this may seem a more roundabout way to perform an aggregation
713 | * compared to simply mutating a running total in a loop, reduction
714 | * operations parallelize more gracefully, without needing additional
715 | * synchronization and with greatly reduced risk of data races.
716 | *
717 | * @param identity the identity value for the accumulating function
718 | * @param accumulator an associative,
719 | * non-interfering,
720 | * stateless
721 | * function for combining two values
722 | * @return the result of the reduction
723 | */
724 | public ${specialization.lower()} reduce(
725 | ${specialization.lower()} identity,
726 | ${specialization}BinaryOperator accumulator) ${throws_list} {
727 | try {
728 | return stream.reduce(identity, accumulator);
729 | } catch (ChkStreamWrappedException e) {
730 | rethrowException(e);
731 | return -1;
732 | }
733 | }
734 |
735 | /**
736 | * Performs a reduction on the
737 | * elements of this stream, using an
738 | * associative accumulation
739 | * function, and returns an {@code Optional} describing the reduced value,
740 | * if any. This is equivalent to:
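     * (Pseudocode from the upstream documentation, shown for the Int
     * specialization; "this stream" is not literal Java.)
     * <pre>{@code
     *     boolean foundAny = false;
     *     int result = 0;
     *     for (int element : this stream) {
     *         if (!foundAny) {
     *             foundAny = true;
     *             result = element;
     *         } else {
     *             result = accumulator.applyAsInt(result, element);
     *         }
     *     }
     *     return foundAny ? OptionalInt.of(result) : OptionalInt.empty();
     * }</pre>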
741 | * The {@code accumulator} function must be an
758 | * associative function.
759 | *
760 | * This is a terminal
761 | * operation.
762 | *
763 | * @param accumulator an associative,
764 | * non-interfering,
765 | * stateless
766 | * function for combining two values
767 | * @return an {@link Optional} describing the result of the reduction
768 | * @throws NullPointerException if the result of the reduction is null
769 | * @see #reduce(Object, BinaryOperator)
770 | * @see #min()
771 | * @see #max()
772 | */
773 | public Optional${specialization} reduce(
774 | ${specialization}BinaryOperator op) ${throws_list} {
775 | try {
776 | return stream.reduce(op);
777 | } catch (ChkStreamWrappedException e) {
778 | rethrowException(e);
779 | return null;
780 | }
781 | }
782 |
783 | /**
784 | * Performs a mutable
785 | * reduction operation on the elements of this stream. A mutable
786 | * reduction is one in which the reduced value is a mutable result container,
787 | * such as an {@code ArrayList}, and elements are incorporated by updating
788 | * the state of the result rather than by replacing the result. This
789 | * produces a result equivalent to:
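     * (Pseudocode from the upstream documentation; {@code R} is the result
     * container type and "this stream" is not literal Java.)
     * <pre>{@code
     *     R result = supplier.get();
     *     for (int element : this stream)
     *         accumulator.accept(result, element);
     *     return result;
     * }</pre>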
790 | * Like {@link #reduce(Object, BinaryOperator)}, {@code collect} operations
798 | * can be parallelized without requiring additional synchronization.
799 | *
800 | * This is a terminal
801 | * operation.
802 | *
803 | * @apiNote There are many existing classes in the JDK whose signatures are
804 | * well-suited for use with method references as arguments to {@code collect()}.
805 | * For example, the following will accumulate strings into an {@code ArrayList}:
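     * (Example from the upstream {@code Stream#collect} documentation;
     * {@code stringStream} is assumed to be a {@code Stream<String>}.)
     * <pre>{@code
     *     List<String> asList = stringStream.collect(
     *         ArrayList::new, ArrayList::add, ArrayList::addAll);
     * }</pre>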
806 | * The following will take a stream of strings and concatenates them into a
812 | * single string:
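     * (Example from the upstream {@code Stream#collect} documentation.)
     * <pre>{@code
     *     String concat = stringStream.collect(
     *             StringBuilder::new, StringBuilder::append, StringBuilder::append)
     *         .toString();
     * }</pre>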
813 | * This is a terminal
855 | * operation.
856 | *
857 | * @return the sum of elements in this stream
858 | */
859 | public ${specialization.lower()} sum() ${throws_list} {
860 | try {
861 | return stream.sum();
862 | } catch (ChkStreamWrappedException e) {
863 | rethrowException(e);
864 | return -1;
865 | }
866 | }
867 |
868 | /**
869 | * Returns an {@code Optional} describing the minimum element of this
870 | * stream, or an empty optional if this stream is empty. This is a special
871 | * case of a reduction
872 | * and is equivalent to:
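     * (Equivalent form from the upstream documentation, e.g. for the Int
     * specialization.)
     * <pre>{@code
     *     return reduce(Math::min);
     * }</pre>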
873 | * This is a terminal operation.
878 | *
879 | * @return an {@code Optional} containing the minimum element of this
880 | * stream, or an empty {@code Optional} if the stream is empty
881 | */
882 | public Optional${specialization} min() ${throws_list} {
883 | try {
884 | return stream.min();
885 | } catch (ChkStreamWrappedException e) {
886 | rethrowException(e);
887 | return null;
888 | }
889 | }
890 |
891 | /**
892 | * Returns an {@code Optional} describing the maximum element of this
893 | * stream, or an empty optional if this stream is empty. This is a special
894 | * case of a reduction
895 | * and is equivalent to:
896 | * This is a terminal
901 | * operation.
902 | *
903 | * @return an {@code Optional} containing the maximum element of this
904 | * stream, or an empty {@code Optional} if the stream is empty
905 | */
906 | public Optional${specialization} max() ${throws_list} {
907 | try {
908 | return stream.max();
909 | } catch (ChkStreamWrappedException e) {
910 | rethrowException(e);
911 | return null;
912 | }
913 | }
914 |
915 | /**
916 | * Returns the count of elements in this stream. This is a special case of
917 | * a reduction and is
918 | * equivalent to:
919 | * This is a terminal operation.
924 | *
925 | * @return the count of elements in this stream
926 | */
927 | public long count() ${throws_list} {
928 | try {
929 | return stream.count();
930 | } catch (ChkStreamWrappedException e) {
931 | rethrowException(e);
932 | return -1;
933 | }
934 | }
935 |
936 | /**
937 | * Returns an {@code OptionalDouble} describing the arithmetic mean of elements of
938 | * this stream, or an empty optional if this stream is empty. This is a
939 | * special case of a
940 | * reduction.
941 | *
942 | * This is a terminal
943 | * operation.
944 | *
945 | * @return an {@code OptionalDouble} containing the average element of this
946 | * stream, or an empty optional if the stream is empty
947 | */
948 | public OptionalDouble average() ${throws_list} {
949 | try {
950 | return stream.average();
951 | } catch (ChkStreamWrappedException e) {
952 | rethrowException(e);
953 | return null;
954 | }
955 | }
956 |
957 | /**
958 | * Returns an {@code IntSummaryStatistics} describing various
959 | * summary data about the elements of this stream. This is a special
960 | * case of a reduction.
961 | *
962 | * This is a terminal
963 | * operation.
964 | *
965 | * @return an {@code IntSummaryStatistics} describing various summary data
966 | * about the elements of this stream
967 | */
968 | public ${specialization}SummaryStatistics summaryStatistics()
969 | ${throws_list} {
970 | try {
971 | return stream.summaryStatistics();
972 | } catch (ChkStreamWrappedException e) {
973 | rethrowException(e);
974 | return null;
975 | }
976 | }
977 |
978 | /**
979 | * Returns whether any elements of this stream match the provided
980 | * predicate. May not evaluate the predicate on all elements if not
981 | * necessary for determining the result. If the stream is empty then
982 | * {@code false} is returned and the predicate is not evaluated.
983 | *
984 | * This is a short-circuiting
985 | * terminal operation.
986 | *
987 | * @apiNote
988 | * This method evaluates the existential quantification of the
989 | * predicate over the elements of the stream (for some x P(x)).
990 | *
991 | * @param predicate a non-interfering,
992 | * stateless
993 | * predicate to apply to elements of this stream
994 | * @return {@code true} if any elements of the stream match the provided
995 | * predicate, otherwise {@code false}
996 | */
997 | public boolean anyMatch(
998 | final Chk${specialization}Predicate${_ThrowN}
999 | <${exc_list(extends=True)}>
1000 | predicate) ${throws_list} {
1001 | try {
1002 | return stream.anyMatch(
1003 | functionUtils.wrapChk${specialization}Predicate(predicate));
1004 | } catch (ChkStreamWrappedException e) {
1005 | rethrowException(e);
1006 | return false;
1007 | }
1008 | }
1009 |
1010 | /**
1011 | * Returns whether all elements of this stream match the provided predicate.
1012 | * May not evaluate the predicate on all elements if not necessary for
1013 | * determining the result. If the stream is empty then {@code true} is
1014 | * returned and the predicate is not evaluated.
1015 | *
1016 | * This is a short-circuiting
1017 | * terminal operation.
1018 | *
1019 | * @apiNote
1020 | * This method evaluates the universal quantification of the
1021 | * predicate over the elements of the stream (for all x P(x)). If the
1022 | * stream is empty, the quantification is said to be vacuously
1023 | * satisfied and is always {@code true} (regardless of P(x)).
1024 | *
1025 | * @param predicate a non-interfering,
1026 | * stateless
1027 | * predicate to apply to elements of this stream
1028 | * @return {@code true} if either all elements of the stream match the
1029 | * provided predicate or the stream is empty, otherwise {@code false}
1030 | */
1031 | public boolean allMatch(
1032 | final Chk${specialization}Predicate${_ThrowN}
1033 | <${exc_list(extends=True)}>
1034 | predicate) ${throws_list} {
1035 | try {
1036 | return stream.allMatch(
1037 | functionUtils.wrapChk${specialization}Predicate(predicate));
1038 | } catch (ChkStreamWrappedException e) {
1039 | rethrowException(e);
1040 | return false;
1041 | }
1042 | }
1043 |
1044 | /**
1045 | * Returns whether no elements of this stream match the provided predicate.
1046 | * May not evaluate the predicate on all elements if not necessary for
1047 | * determining the result. If the stream is empty then {@code true} is
1048 | * returned and the predicate is not evaluated.
1049 | *
1050 | * This is a short-circuiting
1051 | * terminal operation.
1052 | *
1053 | * @apiNote
1054 | * This method evaluates the universal quantification of the
1055 | * negated predicate over the elements of the stream (for all x ~P(x)). If
1056 | * the stream is empty, the quantification is said to be vacuously satisfied
1057 | * and is always {@code true}, regardless of P(x).
1058 | *
1059 | * @param predicate a non-interfering,
1060 | * stateless
1061 | * predicate to apply to elements of this stream
1062 | * @return {@code true} if either no elements of the stream match the
1063 | * provided predicate or the stream is empty, otherwise {@code false}
1064 | */
1065 | public boolean noneMatch(
1066 | final Chk${specialization}Predicate${_ThrowN}
1067 | <${exc_list(extends=True)}>
1068 | predicate) ${throws_list} {
1069 | try {
1070 | return stream.noneMatch(
1071 | functionUtils.wrapChk${specialization}Predicate(predicate));
1072 | } catch (ChkStreamWrappedException e) {
1073 | rethrowException(e);
1074 | return false;
1075 | }
1076 | }
1077 |
1078 | /**
1079 | * Returns an {@link Optional} describing the first element of this stream,
1080 | * or an empty {@code Optional} if the stream is empty. If the stream has
1081 | * no encounter order, then any element may be returned.
1082 | *
1083 | * This is a short-circuiting
1084 | * terminal operation.
1085 | *
1086 | * @return an {@code Optional} describing the first element of this stream,
1087 | * or an empty {@code Optional} if the stream is empty
1088 | * @throws NullPointerException if the element selected is null
1089 | */
1090 | public Optional${specialization} findFirst() ${throws_list} {
1091 | try {
1092 | return stream.findFirst();
1093 | } catch (ChkStreamWrappedException e) {
1094 | rethrowException(e);
1095 | return null;
1096 | }
1097 | }
1098 |
1099 | /**
1100 | * Returns an {@link Optional} describing some element of the stream, or an
1101 | * empty {@code Optional} if the stream is empty.
1102 | *
1103 | * This is a short-circuiting
1104 | * terminal operation.
1105 | *
1106 | * The behavior of this operation is explicitly nondeterministic; it is
1107 | * free to select any element in the stream. This is to allow for maximal
1108 | * performance in parallel operations; the cost is that multiple invocations
1109 | * on the same source may not return the same result. (If a stable result
1110 | * is desired, use {@link #findFirst()} instead.)
1111 | *
1112 | * @return an {@code Optional} describing some element of this stream, or an
1113 | * empty {@code Optional} if the stream is empty
1114 | * @throws NullPointerException if the element selected is null
1115 | * @see #findFirst()
1116 | */
1117 | public Optional${specialization} findAny() ${throws_list} {
1118 | try {
1119 | return stream.findAny();
1120 | } catch (ChkStreamWrappedException e) {
1121 | rethrowException(e);
1122 | return null;
1123 | }
1124 | }
1125 |
1126 | <%
1127 | banned_combinations = [
1128 | ('Long', 'Int'),
1129 | ('Double', 'Int'),
1130 | ('Double', 'Long'),
1131 | ]
1132 | %>
1133 | % for dest_specialization in SPECIALIZATIONS:
1134 | % if specialization != dest_specialization:
1135 | % if (specialization, dest_specialization) not in banned_combinations:
1136 | /**
1137 | * Returns a {@code ${dest_specialization}Stream} consisting of the elements
1138 | * of this stream.
1139 | *
1140 | * This is an intermediate
1141 | * operation.
1142 | *
1143 | * @return a {@code ${dest_specialization}Stream} consisting of the elements
1144 | * of this stream.
1145 | */
1146 | public Chk${dest_specialization}Stream${_ThrowN}<${exc_list()}>
1147 | as${dest_specialization}Stream() {
1148 | ${dest_specialization}Stream newStream =
1149 | stream.as${dest_specialization}Stream();
1150 | return new Chk${dest_specialization}Stream${_ThrowN}<${exc_list()}>(
1151 | ${''.join(['e%dClass,' % i for i in xrange(0, num_e)])}
1152 | newStream);
1153 | }
1154 | % else:
1155 | // Chk${specialization}#as${dest_specialization}Stream() not generated, it
1156 | // is not an allowed combination.
1157 | % endif
1158 | % endif
1159 | % endfor
1160 |
1161 | /**
1162 | * Returns a {@code Stream} consisting of the elements of this stream,
1163 | * each boxed to an {@code Integer}.
1164 | *
1165 | * This is an intermediate
1166 | * operation.
1167 | *
1168 |  * @return a {@code Stream} consisting of the elements of this stream,
1169 | * each boxed to an {@code Integer}
1170 | */
1171 | public ChkStream${_ThrowN}<${boxed_type}, ${exc_list()}> boxed() {
1172 | Stream<${boxed_type}> newStream = stream.boxed();
1173 | return new ChkStream${_ThrowN}<${boxed_type}, ${exc_list()}>(
1174 | ${''.join(['e%dClass,' % i for i in xrange(0, num_e)])}
1175 | newStream);
1176 | }
1177 |
1178 | // Adapted static methods.
1179 |
1180 | /**
1181 | * Creates a lazily concatenated stream whose elements are all the
1182 | * elements of this stream followed by all the elements of the
1183 | * given stream. The resulting stream is ordered if both
1184 | * of the input streams are ordered, and parallel if either of the input
1185 | * streams is parallel. When the resulting stream is closed, the close
1186 | * handlers for both input streams are invoked.
1187 | *
1188 | * @implNote
1189 | * Use caution when constructing streams from repeated concatenation.
1190 | * Accessing an element of a deeply concatenated stream can result in deep
1191 |  * call chains, or even {@code StackOverflowError}.
1192 | *
1193 | * @param Any checked exceptions thrown by stream operations that have already
1251 | * been added will be wrapped in {@link ChkStreamWrappedException}, an
1252 | * instance of {@link RuntimeException}.
1253 | */
1254 | public ${specialization}Stream toStream() {
1255 | return stream;
1256 | }
1257 |
1258 | // Private methods.
1259 |
1260 | private ${class_type()} fromStream(${specialization}Stream stream) {
1261 | return new ${class_type()}(
1262 | ${''.join(['e%dClass,' % i for i in xrange(0, num_e)])}
1263 | stream);
1264 | }
1265 |
1266 | @SuppressWarnings("unchecked")
1267 | private void rethrowException(ChkStreamWrappedException wrapE)
1268 | ${throws_list} {
1269 | Throwable e = wrapE.getCause();
1270 | if (e instanceof RuntimeException) {
1271 | throw (RuntimeException) e;
1272 | }
1273 | % for i in xrange(0, num_e):
1274 | if (e${i}Class.isInstance(e)) { throw (E${i}) e; }
1275 | % endfor
1276 | throw wrapE;
1277 | }
1278 | }
1279 |
--------------------------------------------------------------------------------
/third_party/openjdk/tmpl/com/google/chkstream/ChkStreams.tmpl.java:
--------------------------------------------------------------------------------
1 | /*
2 | * Copyright 2017 Google.
3 | *
4 |  * This file is licensed under the GPLv2+Classpath Exception, the full text of
5 |  * which is found in the LICENSE file at the root of this project.
6 | *
7 | * Google designates this particular file as subject to the "Classpath"
8 | * exception as provided in the LICENSE file that accompanied this code.
9 | */
10 |
11 | <%! for_each_stream_impl = True %>
12 |
13 | package com.google.chkstream.${stream_impl};
14 |
15 | % if stream_impl == 'java8':
16 | import java.util.stream.DoubleStream;
17 | import java.util.stream.IntStream;
18 | import java.util.stream.LongStream;
19 | import java.util.stream.Stream;
20 |
21 | % else:
22 | import java8.util.stream.DoubleStream;
23 | import java8.util.stream.IntStream;
24 | import java8.util.stream.LongStream;
25 | import java8.util.stream.Stream;
26 | % endif
27 |
28 | /**
29 | * Static methods for creating new instances of {@link ChkStream}.
30 | *
31 | * @author Alexander Dorokhine
32 | */
33 | public final class ChkStreams {
34 | private ChkStreams() {}
35 |
36 | /**
37 | * A builder for initializing a {@link ChkStream} with a checked exception.
38 | *
39 | * Call {@link #canThrow(Class)} to start a {@link ChkStream}.
40 | * {@link ChkStream} implements the same method so more exceptions can be
41 | * added later.
42 | *
43 | * @author Alexander Dorokhine
44 | */
45 | public static final class ChkStreamStarter Additional exceptions can be added later by calling
56 | * {@link ChkStream#canThrow(Class)}.
57 | *
58 | * @param exceptionClass Class of the exception this stream can throw.
59 | * @return the newly created stream.
60 | */
61 | public Additional exceptions can be added later by calling
90 | * {@link Chk${specialization}Stream#canThrow(Class)}.
91 | *
92 | * @param exceptionClass Class of the exception this stream can throw.
93 | * @return the newly created stream.
94 | */
95 | public This is a functional interface
44 | * whose functional method is {@link #accept(Object)}.
45 | *
46 | * @param This is a functional interface
66 | * whose functional method is {@link #accept(Object)}.
67 | *
68 | * @param This is a functional interface
42 | * whose functional method is {@link #apply(Object)}.
43 | *
44 | * @param This is a functional interface
42 | * whose functional method is {@link #test(Object)}.
43 | *
44 | * @param This is a functional interface
65 | * whose functional method is {@link #test(${specialization})}.
66 | */
67 | public static interface Chk${specialization}Predicate${_ThrowN}
68 | <${exc_decl_list}>
69 | {
70 | /**
71 | * Evaluates this predicate on the given argument.
72 | *
73 | * @param t the input argument
74 | * @return {@code true} if the input argument matches the predicate,
75 | * otherwise {@code false}
76 | */
77 | boolean test(${specialization.lower()} value) ${throws_list};
78 | }
79 | % endfor
80 | % endfor
81 | }
82 |
--------------------------------------------------------------------------------
/third_party/openjdk/tmpl/com/google/chkstream/function/ChkRunnables.tmpl.java:
--------------------------------------------------------------------------------
1 | /*
2 | * Copyright (c) 1994, 2013, Oracle and/or its affiliates. All rights reserved.
3 | * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
4 | *
5 | * This code is free software; you can redistribute it and/or modify it
6 | * under the terms of the GNU General Public License version 2 only, as
7 | * published by the Free Software Foundation. Oracle designates this
8 | * particular file as subject to the "Classpath" exception as provided
9 | * by Oracle in the LICENSE file that accompanied this code.
10 | *
11 | * This code is distributed in the hope that it will be useful, but WITHOUT
12 | * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
13 | * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
14 | * version 2 for more details (a copy is included in the LICENSE file that
15 | * accompanied this code).
16 | *
17 | * You should have received a copy of the GNU General Public License version
18 | * 2 along with this work; if not, write to the Free Software Foundation,
19 | * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
20 | *
21 | * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
22 | * or visit www.oracle.com if you need additional information or have any
23 | * questions.
24 | */
25 | package com.google.chkstream.function;
26 |
27 | public final class ChkRunnables {
28 | private ChkRunnables() {}
29 |
30 | % for num_e in xrange(MIN_EXCEPTIONS, MAX_EXCEPTIONS + 1):
31 | <%
32 | exc_decl_list = ', '.join(
33 | ['E%d extends Exception' % i for i in xrange(0, num_e)])
34 | throws_list = 'throws ' + ', '.join(['E%d' % i for i in xrange(0, num_e)])
35 | _ThrowN = '' if num_e == MIN_EXCEPTIONS else '_Throw%d' % num_e
36 | %>
37 |
38 | /**
39 | * The
43 | * This interface is designed to provide a common protocol for objects that
44 | * wish to execute code while they are active. For example,
45 | *
49 | * In addition,
69 | * The general contract of the method Here we use {@code widgets}, a {@code Collection The key abstraction introduced in this package is stream. The
44 | * classes {@link java.util.stream.Stream}, {@link java.util.stream.IntStream},
45 | * {@link java.util.stream.LongStream}, and {@link java.util.stream.DoubleStream}
46 | * are streams over objects and the primitive {@code int}, {@code long} and
47 | * {@code double} types. Streams differ from collections in several ways:
48 | *
49 | * Additional stream sources can be provided by third-party libraries using
95 | * these techniques.
96 | *
97 | * Stream operations are divided into intermediate and
100 | * terminal operations, and are combined to form stream
101 | * pipelines. A stream pipeline consists of a source (such as a
102 | * {@code Collection}, an array, a generator function, or an I/O channel);
103 | * followed by zero or more intermediate operations such as
104 | * {@code Stream.filter} or {@code Stream.map}; and a terminal operation such
105 | * as {@code Stream.forEach} or {@code Stream.reduce}.
106 | *
107 | * Intermediate operations return a new stream. They are always
108 | * lazy; executing an intermediate operation such as
109 | * {@code filter()} does not actually perform any filtering, but instead
110 | * creates a new stream that, when traversed, contains the elements of
111 | * the initial stream that match the given predicate. Traversal
112 | * of the pipeline source does not begin until the terminal operation of the
113 | * pipeline is executed.
114 | *
115 | * Terminal operations, such as {@code Stream.forEach} or
116 | * {@code IntStream.sum}, may traverse the stream to produce a result or a
117 | * side-effect. After the terminal operation is performed, the stream pipeline
118 | * is considered consumed, and can no longer be used; if you need to traverse
119 | * the same data source again, you must return to the data source to get a new
120 | * stream. In almost all cases, terminal operations are eager,
121 | * completing their traversal of the data source and processing of the pipeline
122 | * before returning. Only the terminal operations {@code iterator()} and
123 | * {@code spliterator()} are not; these are provided as an "escape hatch" to enable
124 | * arbitrary client-controlled pipeline traversals in the event that the
125 | * existing operations are not sufficient to the task.
126 | *
127 | * Processing streams lazily allows for significant efficiencies; in a
128 | * pipeline such as the filter-map-sum example above, filtering, mapping, and
129 | * summing can be fused into a single pass on the data, with minimal
130 | * intermediate state. Laziness also allows avoiding examining all the data
131 | * when it is not necessary; for operations such as "find the first string
132 | * longer than 1000 characters", it is only necessary to examine just enough
133 | * strings to find one that has the desired characteristics without examining
134 | * all of the strings available from the source. (This behavior becomes even
135 | * more important when the input stream is infinite and not merely large.)
136 | *
137 | * Intermediate operations are further divided into stateless
138 | * and stateful operations. Stateless operations, such as {@code filter}
139 |  * and {@code map}, retain no state from previously seen elements when processing
140 | * a new element -- each element can be processed
141 | * independently of operations on other elements. Stateful operations, such as
142 | * {@code distinct} and {@code sorted}, may incorporate state from previously
143 | * seen elements when processing new elements.
144 | *
145 | * Stateful operations may need to process the entire input
146 | * before producing a result. For example, one cannot produce any results from
147 | * sorting a stream until one has seen all elements of the stream. As a result,
148 | * under parallel computation, some pipelines containing stateful intermediate
149 | * operations may require multiple passes on the data or may need to buffer
150 | * significant data. Pipelines containing exclusively stateless intermediate
151 | * operations can be processed in a single pass, whether sequential or parallel,
152 | * with minimal data buffering.
153 | *
154 | * Further, some operations are deemed short-circuiting operations.
155 | * An intermediate operation is short-circuiting if, when presented with
156 | * infinite input, it may produce a finite stream as a result. A terminal
157 | * operation is short-circuiting if, when presented with infinite input, it may
158 | * terminate in finite time. Having a short-circuiting operation in the pipeline
159 | * is a necessary, but not sufficient, condition for the processing of an infinite
160 | * stream to terminate normally in finite time.
161 | *
162 | * Processing elements with an explicit {@code for-}loop is inherently serial.
165 | * Streams facilitate parallel execution by reframing the computation as a pipeline of
166 | * aggregate operations, rather than as imperative operations on each individual
167 | * element. All streams operations can execute either in serial or in parallel.
168 | * The stream implementations in the JDK create serial streams unless parallelism is
169 | * explicitly requested. For example, {@code Collection} has methods
170 | * {@link java.util.Collection#stream} and {@link java.util.Collection#parallelStream},
171 | * which produce sequential and parallel streams respectively; other
172 | * stream-bearing methods such as {@link java.util.stream.IntStream#range(int, int)}
173 | * produce sequential streams but these streams can be efficiently parallelized by
174 | * invoking their {@link java.util.stream.BaseStream#parallel()} method.
175 | * To execute the prior "sum of weights of widgets" query in parallel, we would
176 | * do:
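 * (Example restored from the upstream {@code java.util.stream} package
 * documentation; {@code Widget}, {@code getColor()}, {@code getWeight()} and
 * {@code RED} are illustrative.)
 * <pre>{@code
 *     int sumOfWeights = widgets.parallelStream()
 *                               .filter(w -> w.getColor() == RED)
 *                               .mapToInt(w -> w.getWeight())
 *                               .sum();
 * }</pre>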
177 | *
178 | * The only difference between the serial and parallel versions of this
186 | * example is the creation of the initial stream, using "{@code parallelStream()}"
187 | * instead of "{@code stream()}". When the terminal operation is initiated,
188 | * the stream pipeline is executed sequentially or in parallel depending on the
189 | * orientation of the stream on which it is invoked. Whether a stream will execute in serial or
190 | * parallel can be determined with the {@code isParallel()} method, and the
191 | * orientation of a stream can be modified with the
192 | * {@link java.util.stream.BaseStream#sequential()} and
193 | * {@link java.util.stream.BaseStream#parallel()} operations. When the terminal
194 | * operation is initiated, the stream pipeline is executed sequentially or in
195 | * parallel depending on the mode of the stream on which it is invoked.
196 | *
197 | * Except for operations identified as explicitly nondeterministic, such
198 | * as {@code findAny()}, whether a stream executes sequentially or in parallel
199 | * should not change the result of the computation.
200 | *
201 | * Most stream operations accept parameters that describe user-specified
202 | * behavior, which are often lambda expressions. To preserve correct behavior,
203 | * these behavioral parameters must be non-interfering, and in
204 | * most cases must be stateless. Such parameters are always instances
205 | * of a functional interface such
206 | * as {@link java.util.function.Function}, and are often lambda expressions or
207 | * method references.
208 | *
209 | * Accordingly, behavioral parameters in stream pipelines whose source might
226 | * not be concurrent should never modify the stream's data source.
227 | * A behavioral parameter is said to interfere with a non-concurrent
228 | * data source if it modifies, or causes to be
229 | * modified, the stream's data source. The need for non-interference applies
230 | * to all pipelines, not just parallel ones. Unless the stream source is
231 | * concurrent, modifying a stream's data source during execution of a stream
232 | * pipeline can cause exceptions, incorrect answers, or nonconformant behavior.
233 | *
234 | * For well-behaved stream sources, the source can be modified before the
235 | * terminal operation commences and those modifications will be reflected in
236 | * the covered elements. For example, consider the following code:
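 * (Example restored from the upstream package documentation; {@code joining}
 * is {@code Collectors.joining}. The late {@code add} is reflected in the
 * result, which is "one two three".)
 * <pre>{@code
 *     List<String> l = new ArrayList<>(Arrays.asList("one", "two"));
 *     Stream<String> sl = l.stream();
 *     l.add("three");
 *     String s = sl.collect(joining(" "));
 * }</pre>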
237 | *
238 | * Note also that attempting to access mutable state from behavioral parameters
275 | * presents you with a bad choice with respect to safety and performance; if
276 | * you do not synchronize access to that state, you have a data race and
277 | * therefore your code is broken, but if you do synchronize access to that
278 | * state, you risk having contention undermine the parallelism you are seeking
279 | * to benefit from. The best approach is to avoid stateful behavioral
280 | * parameters to stream operations entirely; there is usually a way to
281 | * restructure the stream pipeline to avoid statefulness.
282 | *
283 | * If the behavioral parameters do have side-effects, unless explicitly
290 | * stated, there are no guarantees as to the
291 | * visibility
292 | * of those side-effects to other threads, nor are there any guarantees that
293 | * different operations on the "same" element within the same stream pipeline
294 | * are executed in the same thread. Further, the ordering of those effects
295 | * may be surprising. Even when a pipeline is constrained to produce a
296 | * result that is consistent with the encounter order of the stream
297 | * source (for example, {@code IntStream.range(0,5).parallel().map(x -> x*2).toArray()}
298 | * must produce {@code [0, 2, 4, 6, 8]}), no guarantees are made as to the order
299 | * in which the mapper function is applied to individual elements, or in what
300 | * thread any behavioral parameter is executed for a given element.
301 | *
302 | * Many computations where one might be tempted to use side effects can be more
303 | * safely and efficiently expressed without side-effects, such as using
304 | * reduction instead of mutable
305 | * accumulators. However, side-effects such as using {@code println()} for debugging
306 | * purposes are usually harmless. A small number of stream operations, such as
307 | * {@code forEach()} and {@code peek()}, can operate only via side-effects;
308 | * these should be used with care.
309 | *
310 | * As an example of how to transform a stream pipeline that inappropriately
311 | * uses side-effects to one that does not, the following code searches a stream
312 | * of strings for those matching a given regular expression, and puts the
313 | * matches in a list.
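 * (Examples restored from the upstream package documentation; {@code stream}
 * is a {@code Stream<String>} and {@code pattern} a compiled
 * {@code java.util.regex.Pattern}.)
 * <pre>{@code
 *     ArrayList<String> results = new ArrayList<>();
 *     stream.filter(s -> pattern.matcher(s).matches())
 *           .forEach(s -> results.add(s));  // Unnecessary use of side-effects!
 * }</pre>
 * The same computation without side-effects:
 * <pre>{@code
 *     List<String> results =
 *         stream.filter(s -> pattern.matcher(s).matches())
 *               .collect(Collectors.toList());  // No side-effects!
 * }</pre>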
314 | *
315 | * Streams may or may not have a defined encounter order. Whether
338 | * or not a stream has an encounter order depends on the source and the
339 | * intermediate operations. Certain stream sources (such as {@code List} or
340 | * arrays) are intrinsically ordered, whereas others (such as {@code HashSet})
341 | * are not. Some intermediate operations, such as {@code sorted()}, may impose
342 | * an encounter order on an otherwise unordered stream, and others may render an
343 | * ordered stream unordered, such as {@link java.util.stream.BaseStream#unordered()}.
344 | * Further, some terminal operations may ignore encounter order, such as
345 | * {@code forEach()}.
346 | *
347 | * If a stream is ordered, most operations are constrained to operate on the
348 | * elements in their encounter order; if the source of a stream is a {@code List}
349 | * containing {@code [1, 2, 3]}, then the result of executing {@code map(x -> x*2)}
350 | * must be {@code [2, 4, 6]}. However, if the source has no defined encounter
351 | * order, then any permutation of the values {@code [2, 4, 6]} would be a valid
352 | * result.
353 | *
354 | * For sequential streams, the presence or absence of an encounter order does
355 | * not affect performance, only determinism. If a stream is ordered, repeated
356 | * execution of identical stream pipelines on an identical source will produce
357 | * an identical result; if it is not ordered, repeated execution might produce
358 | * different results.
359 | *
360 | * For parallel streams, relaxing the ordering constraint can sometimes enable
361 | * more efficient execution. Certain aggregate operations,
362 | * such as filtering duplicates ({@code distinct()}) or grouped reductions
363 | * ({@code Collectors.groupingBy()}) can be implemented more efficiently if ordering of elements
364 | * is not relevant. Similarly, operations that are intrinsically tied to encounter order,
365 | * such as {@code limit()}, may require
366 | * buffering to ensure proper ordering, undermining the benefit of parallelism.
367 | * In cases where the stream has an encounter order, but the user does not
368 | * particularly care about that encounter order, explicitly de-ordering
369 | * the stream with {@link java.util.stream.BaseStream#unordered() unordered()} may
370 | * improve parallel performance for some stateful or terminal operations.
371 | * However, most stream pipelines, such as the "sum of weight of blocks" example
372 | * above, still parallelize efficiently even under ordering constraints.
373 | *
374 | * Of course, such operations can be readily implemented as simple sequential
388 | * loops, as in:
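 * {@code
 *     int sum = 0;
 *     for (int x : numbers) {
 *         sum += x;
 *     }
 * }
 * A properly constructed reduce operation expresses the same computation in a
 * form that is inherently parallelizable, so long as the functions used to
 * process the elements are associative and stateless. For example, given a
 * stream of numbers for which we want to find the sum, we can write:
 * {@code
 *     int sum = numbers.stream().reduce(0, (x,y) -> x+y);
 * }
 * or:
 * {@code
 *     int sum = numbers.stream().reduce(0, Integer::sum);
 * }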
389 | * These reduction operations can run safely in parallel with almost no
413 |  * modification:
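 * {@code
 *     int sum = numbers.parallelStream().reduce(0, Integer::sum);
 * }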
414 |  * Reduction parallelizes well because the implementation
419 | * can operate on subsets of the data in parallel, and then combine the
420 | * intermediate results to get the final correct answer. (Even if the language
421 | * had a "parallel for-each" construct, the mutative accumulation approach would
422 |  * still require the developer to provide
423 | * thread-safe updates to the shared accumulating variable {@code sum}, and
424 | * the required synchronization would then likely eliminate any performance gain from
425 | * parallelism.) Using {@code reduce()} instead removes all of the
426 | * burden of parallelizing the reduction operation, and the library can provide
427 | * an efficient parallel implementation with no additional synchronization
428 | * required.
429 | *
430 |  * The "widgets" example shown earlier shows how reduction combines with
431 | * other operations to replace for loops with bulk operations. If {@code widgets}
432 | * is a collection of {@code Widget} objects, which have a {@code getWeight} method,
433 | * we can find the heaviest widget with:
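 * {@code
 *     OptionalInt heaviest = widgets.parallelStream()
 *                                   .mapToInt(Widget::getWeight)
 *                                   .max();
 * }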
434 |  * In its more general form, a {@code reduce} operation on elements of type {@code <T>} yielding a result of type {@code <U>} requires three parameters:
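 * {@code
 *     <U> U reduce(U identity,
 *                  BiFunction<U, ? super T, U> accumulator,
 *                  BinaryOperator<U> combiner);
 * }
 * Here, the identity element is both an initial seed value for the reduction
 * and a default result if there are no input elements. The accumulator
 * function takes a partial result and the next element and produces a new
 * partial result; the combiner function merges two partial results, which is
 * necessary in parallel reductions where the input is partitioned.
 *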
441 |  * More formally, the {@code identity} value must be an identity for
457 | * the combiner function. This means that for all {@code u},
458 | * {@code combiner.apply(identity, u)} is equal to {@code u}. Additionally, the
459 | * {@code combiner} function must be associative and
460 | * must be compatible with the {@code accumulator} function: for all {@code u}
461 | * and {@code t}, {@code combiner.apply(u, accumulator.apply(identity, t))} must
462 | * be {@code equals()} to {@code accumulator.apply(u, t)}.
463 | *
464 | * The three-argument form is a generalization of the two-argument form,
465 | * incorporating a mapping step into the accumulation step. We could
466 | * re-cast the simple sum-of-weights example using the more general form as
467 | * follows:
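 * {@code
 *     int sumOfWeights = widgets.stream()
 *                               .reduce(0,
 *                                       (sum, b) -> sum + b.getWeight(),
 *                                       Integer::sum);
 * }
 * though the explicit map-reduce form is more readable and therefore should
 * usually be preferred.
 *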
468 | * If we wanted to take a stream of strings and concatenate them into a
486 | * single long string, we could achieve this with ordinary reduction:
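 * {@code
 *     String concatenated = strings.reduce("", String::concat);
 * }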
487 | * We would get the desired result, and it would even work in parallel. However,
492 | * we might not be happy about the performance! Such an implementation would do
493 | * a great deal of string copying, and the run time would be O(n^2) in
494 | * the number of characters. A more performant approach would be to accumulate
495 | * the results into a {@link java.lang.StringBuilder}, which is a mutable
496 | * container for accumulating strings. We can use the same technique to
497 | * parallelize mutable reduction as we do with ordinary reduction.
498 | *
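 * For instance (a minimal sketch, using the {@code collect()} operation
 * described below on a hypothetical {@code Stream<String> strings}):
 * {@code
 *     String concatenated = strings.collect(StringBuilder::new,
 *                                           StringBuilder::append,
 *                                           StringBuilder::append)
 *                                  .toString();
 * }
 *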
499 | * The mutable reduction operation is called
500 | * {@link java.util.stream.Stream#collect(Collector) collect()},
501 | * as it collects together the desired results into a result container such
502 | * as a {@code Collection}.
503 | * A {@code collect} operation requires three functions:
504 | * a supplier function to construct new instances of the result container, an
505 | * accumulator function to incorporate an input element into a result
506 | * container, and a combining function to merge the contents of one result
507 | * container into another. The form of this is very similar to the general
508 | * form of ordinary reduction:
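 * {@code
 *     <R> R collect(Supplier<R> supplier,
 *                   BiConsumer<R, ? super T> accumulator,
 *                   BiConsumer<R, R> combiner);
 * }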
509 | * As with {@code reduce()}, a benefit of expressing {@code collect} in this
515 | * abstract way is that it is directly amenable to parallelization: we can
516 | * accumulate partial results in parallel and then combine them, so long as the
517 | * accumulation and combining functions satisfy the appropriate requirements.
518 | * For example, to collect the String representations of the elements in a
519 | * stream into an {@code ArrayList}, we could write the obvious sequential
520 | * for-each form:
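 * {@code
 *     ArrayList<String> strings = new ArrayList<>();
 *     for (T element : stream) {
 *         strings.add(element.toString());
 *     }
 * }
 * Or we could use a parallelizable collect form:
 * {@code
 *     ArrayList<String> strings = stream.collect(() -> new ArrayList<>(),
 *                                                (c, e) -> c.add(e.toString()),
 *                                                (c1, c2) -> c1.addAll(c2));
 * }
 * or, pulling the mapping operation out of the accumulator function, we could
 * express it more succinctly as:
 * {@code
 *     List<String> strings = stream.map(Object::toString)
 *                                  .collect(ArrayList::new, ArrayList::add, ArrayList::addAll);
 * }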
521 | * The three aspects of {@code collect} -- supplier, accumulator, and
545 | * combiner -- are tightly coupled. We can use the abstraction of a
546 | * {@link java.util.stream.Collector} to capture all three aspects. The
547 | * above example for collecting strings into a {@code List} can be rewritten
548 | * using a standard {@code Collector} as:
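 * {@code
 *     List<String> strings = stream.map(Object::toString)
 *                                  .collect(Collectors.toList());
 * }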
549 | * Packaging mutable reductions into a Collector has another advantage:
555 | * composability. The class {@link java.util.stream.Collectors} contains a
556 | * number of predefined factories for collectors, including combinators
557 | * that transform one collector into another. For example, suppose we have a
558 | * collector that computes the sum of the salaries of a stream of
559 | * employees, as follows:
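 * {@code
 *     Collector<Employee, ?, Integer> summingSalaries
 *         = Collectors.summingInt(Employee::getSalary);
 * }
 * (Here {@code Employee}, with its {@code getSalary} and {@code getDepartment}
 * methods, is an illustrative type.) If we wanted to create a collector to
 * tabulate the sum of salaries by department, we could reuse
 * {@code summingSalaries} using
 * {@link java.util.stream.Collectors#groupingBy(java.util.function.Function, java.util.stream.Collector) groupingBy}:
 * {@code
 *     Map<Department, Integer> salariesByDept
 *         = employees.stream().collect(Collectors.groupingBy(Employee::getDepartment,
 *                                                            summingSalaries));
 * }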
560 | *
561 | * As with the regular reduction operation, {@code collect()} operations can
579 | * only be parallelized if appropriate conditions are met. For any partially
580 | * accumulated result, combining it with an empty result container must
581 | * produce an equivalent result. That is, for a partially accumulated result
582 | * {@code p} that is the result of any series of accumulator and combiner
583 | * invocations, {@code p} must be equivalent to
584 | * {@code combiner.apply(p, supplier.get())}.
585 | *
586 | * Further, however the computation is split, it must produce an equivalent
587 | * result. For any input elements {@code t1} and {@code t2}, the results
588 | * {@code r1} and {@code r2} in the computation below must be equivalent:
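 * {@code
 *     A a1 = supplier.get();
 *     accumulator.accept(a1, t1);
 *     accumulator.accept(a1, t2);
 *     R r1 = finisher.apply(a1);  // result without splitting
 *
 *     A a2 = supplier.get();
 *     accumulator.accept(a2, t1);
 *     A a3 = supplier.get();
 *     accumulator.accept(a3, t2);
 *     R r2 = finisher.apply(combiner.apply(a2, a3));  // result with splitting
 * }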
589 | * Here, equivalence generally means according to {@link java.lang.Object#equals(Object)}.
603 | * but in some cases equivalence may be relaxed to account for differences in
604 | * order.
605 | *
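 * With some complex reduction operations, for example a {@code collect()} that
 * produces a {@code Map}, such as the following (where {@code txns} is a
 * collection of hypothetical {@code Transaction} objects):
 * {@code
 *     Map<Buyer, List<Transaction>> salesByBuyer
 *         = txns.parallelStream()
 *               .collect(Collectors.groupingBy(Transaction::getBuyer));
 * }
 * it may actually be counterproductive to perform the operation in parallel,
 * because the combining step (merging one {@code Map} into another by key) can
 * be expensive for some {@code Map} implementations.
 *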
606 | * Suppose, however, that the result container used in this reduction
620 | * was a concurrently modifiable collection -- such as a
621 | * {@link java.util.concurrent.ConcurrentHashMap}. In that case, the parallel
622 | * invocations of the accumulator could actually deposit their results
623 | * concurrently into the same shared result container, eliminating the need for
624 | * the combiner to merge distinct result containers. This potentially provides
625 | * a boost to the parallel execution performance. We call this a
626 | * concurrent reduction.
627 | *
628 | * A {@link java.util.stream.Collector} that supports concurrent reduction is
629 | * marked with the {@link java.util.stream.Collector.Characteristics#CONCURRENT}
630 | * characteristic. However, a concurrent collection also has a downside. If
631 | * multiple threads are depositing results concurrently into a shared container,
632 | * the order in which results are deposited is non-deterministic. Consequently,
633 | * a concurrent reduction is only possible if ordering is not important for the
634 | * stream being processed. The {@link java.util.stream.Stream#collect(Collector)}
635 | * implementation will only perform a concurrent reduction if
636 | * Note that if it is important that the elements for a given key appear in
656 | * the order they appear in the source, then we cannot use a concurrent
657 | * reduction, as ordering is one of the casualties of concurrent insertion.
658 | * We would then be constrained to implement either a sequential reduction or
659 | * a merge-based parallel reduction.
660 | *
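 * An operator or function {@code op} is associative if the following holds:
 * {@code
 *     (a op b) op c == a op (b op c)
 * }
 * The importance of this to parallel evaluation can be seen if we expand this
 * to four terms: {@code a op b op c op d == (a op b) op (c op d)}, so we can
 * evaluate {@code (a op b)} in parallel with {@code (c op d)} and then invoke
 * {@code op} on the results.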
661 | * Examples of associative operations include numeric addition, min, and
677 | * max, and string concatenation.
678 | *
679 | * The class {@link java.util.stream.StreamSupport} has a number of
686 | * low-level methods for creating a stream, all using some form of a
687 | * {@link java.util.Spliterator}. A spliterator is the parallel analogue of an
688 | * {@link java.util.Iterator}; it describes a (possibly infinite) collection of
689 | * elements, with support for sequentially advancing, bulk traversal, and
690 | * splitting off some portion of the input into another spliterator which can
691 | * be processed in parallel. At the lowest level, all streams are driven by a
692 | * spliterator.
693 | *
694 | * There are a number of implementation choices in implementing a
695 | * spliterator, nearly all of which are tradeoffs between simplicity of
696 | * implementation and runtime performance of streams using that spliterator.
697 | * The simplest, but least performant, way to create a spliterator is to
698 | * create one from an iterator using
699 | * {@link java.util.Spliterators#spliteratorUnknownSize(java.util.Iterator, int)}.
700 | * While such a spliterator will work, it will likely offer poor parallel
701 | * performance, since we have lost sizing information (how big is the
702 | * underlying data set), as well as being constrained to a simplistic
703 | * splitting algorithm.
704 | *
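 * For example, a stream can be built from an arbitrary iterator as follows
 * (a minimal sketch; {@code it} is assumed to be an existing
 * {@code Iterator<String>}):
 * {@code
 *     Stream<String> s = StreamSupport.stream(
 *         Spliterators.spliteratorUnknownSize(it, Spliterator.ORDERED), false);
 * }
 *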
705 | * A higher-quality spliterator will provide balanced and known-size
706 | * splits, accurate sizing information, and a number of other
707 | * {@link java.util.Spliterator#characteristics() characteristics} of the
708 | * spliterator or data that can be used by implementations to optimize
709 | * execution.
710 | *
711 |  * Spliterators for mutable data sources have an additional challenge:
712 | * timing of binding to the data, since the data could change between the time
713 | * the spliterator is created and the time the stream pipeline is executed.
714 | * Ideally, a spliterator for a stream would report a characteristic of
716 | * {@code IMMUTABLE} or {@code CONCURRENT}; if not it should be
717 | * late-binding. If a source
718 | * cannot directly supply a recommended spliterator, it may indirectly supply
719 | * a spliterator using a {@code Supplier}, and construct a stream via the
720 | * {@code Supplier}-accepting versions of
721 | * {@link java.util.stream.StreamSupport#stream(Supplier, int, boolean) stream()}.
722 | * The spliterator is obtained from the supplier only after the terminal
723 | * operation of the stream pipeline commences.
724 | *
725 | * These requirements significantly reduce the scope of potential
726 | * interference between mutations of the stream source and execution of stream
727 | * pipelines. Streams based on spliterators with the desired characteristics,
728 | * or those using the Supplier-based factory forms, are immune to
729 | * modifications of the data source prior to commencement of the terminal
730 | * operation (provided the behavioral parameters to the stream operations meet
731 | * the required criteria for non-interference and statelessness). See
732 | * Non-Interference
733 | * for more details.
734 | */
735 | package com.google.chkstream;
736 |
--------------------------------------------------------------------------------
{@code
400 | * orders.flatMap(order -> order.getLineItems().stream())...
401 | * }
402 | *
403 | * {@code
406 | * ChkStream
409 | * The {@code mapper} function passed to {@code flatMap} splits a line,
410 | * using a simple regular expression, into an array of words, and then
411 | * creates a stream of words from that array.
412 | *
413 | * @param {@code
518 | * Stream.of("one", "two", "three", "four")
519 | * .filter(e -> e.length() > 3)
520 | * .peek(e -> System.out.println("Filtered value: " + e))
521 | * .map(String::toUpperCase)
522 | * .peek(e -> System.out.println("Mapped value: " + e))
523 | * .collect(Collectors.toList());
524 | * }
525 | *
526 | * @param action a
527 | * non-interfering action to perform on the elements as
528 | * they are consumed from the stream
529 | * @return the new stream
530 | */
531 | public ${class_type()} peek(
532 | final Chk${specialization}Consumer${_ThrowN}
533 | <${exc_list(extends=True)}>
534 | action) {
535 | return fromStream(
536 | stream.peek(
537 | functionUtils.wrapChk${specialization}Consumer(action)));
538 | }
539 |
540 | /**
541 | * Returns a stream consisting of the elements of this stream, truncated
542 | * to be no longer than {@code maxSize} in length.
543 | *
544 | * {@code
682 | * T result = identity;
683 | * for (T element : this stream)
684 | * result = accumulator.apply(result, element)
685 | * return result;
686 | * }
687 | *
688 | * but is not constrained to execute sequentially.
689 | *
690 | * {@code
703 | * Integer sum = integers.reduce(0, (a, b) -> a+b);
704 | * }
705 | *
706 | * or:
707 | *
708 | * {@code
709 | * Integer sum = integers.reduce(0, Integer::sum);
710 | * }
711 | *
712 | * {@code
742 | * boolean foundAny = false;
743 | * T result = null;
744 | * for (T element : this stream) {
745 | * if (!foundAny) {
746 | * foundAny = true;
747 | * result = element;
748 | * }
749 | * else
750 | * result = accumulator.apply(result, element);
751 | * }
752 | * return foundAny ? Optional.of(result) : Optional.empty();
753 | * }
754 | *
755 | * but is not constrained to execute sequentially.
756 | *
757 | * {@code
791 | * R result = supplier.get();
792 | * for (T element : this stream)
793 | * accumulator.accept(result, element);
794 | * return result;
795 | * }
796 | *
797 | * {@code
807 |  *     List<String> asList = stringStream.collect(ArrayList::new, ArrayList::add,
808 |  *                                                ArrayList::addAll);
809 |  * }
810 | *
811 | * {@code
814 | * String concat = stringStream.collect(StringBuilder::new, StringBuilder::append,
815 | * StringBuilder::append)
816 | * .toString();
817 | * }
818 | *
819 | * @param {@code
851 | * return reduce(0, Integer::sum);
852 | * }
853 | *
854 | * {@code
874 | * return reduce(Integer::min);
875 | * }
876 | *
877 | * {@code
897 | * return reduce(Integer::max);
898 | * }
899 | *
900 | * {@code
920 | * return mapToLong(e -> 1L).sum();
921 | * }
922 | *
923 |  * The {@code Runnable} interface should be implemented by any
40 | * class whose instances are intended to be executed by a thread. The
41 |  * class must define a method of no arguments called {@code run}.
42 |  * {@code Runnable} is implemented by class {@code Thread}.
46 | * Being active simply means that a thread has been started and has not
47 | * yet been stopped.
48 |  * {@code Runnable} provides the means for a class to be
50 |  * active while not subclassing {@code Thread}. A class that implements
51 |  * {@code Runnable} can run without subclassing {@code Thread}
52 |  * by instantiating a {@code Thread} instance and passing itself in
53 |  * as the target. In most cases, the {@code Runnable} interface should
54 |  * be used if you are only planning to override the {@code run()}
55 |  * method and no other {@code Thread} methods.
56 | * This is important because classes should not be subclassed
57 | * unless the programmer intends on modifying or enhancing the fundamental
58 | * behavior of the class.
59 | */
60 | public static interface ChkRunnable${_ThrowN}
61 | <${exc_decl_list}>
62 | {
63 | /**
64 |      * When an object implementing interface {@code Runnable} is used
65 |      * to create a thread, starting the thread causes the object's
66 |      * {@code run} method to be called in that separately executing
67 |      * thread.
68 |      *
69 |      * The general contract of the method {@code run} is that it may
70 |      * take any action whatsoever.
71 | */
72 | void run() ${throws_list};
73 | }
74 | % endfor
75 | }
76 |
--------------------------------------------------------------------------------
/third_party/openjdk/tmpl/com/google/chkstream/function/FunctionUtils.tmpl.java:
--------------------------------------------------------------------------------
1 | /*
2 | * Copyright 2017 Google.
3 | *
4 | * This file is licensed under the GPLv2+Classpath Exception, which full text
5 | * is found in the LICENSE file at the root of this project.
6 | *
7 | * Google designates this particular file as subject to the "Classpath"
8 | * exception as provided in the LICENSE file that accompanied this code.
9 | */
10 |
11 | <%!
12 | split = True
13 | for_each_stream_impl = True
14 |
15 | def get_filename(num_exceptions, min_exceptions, specialization=None):
16 | return 'FunctionUtils_Throw%d.java' % num_exceptions
17 | %>
18 |
19 | <%
20 | exc_extend_list = ', '.join(['?' for i in xrange(0, num_e)])
21 | throws_list = 'throws ' + ', '.join(['E%d' % i for i in xrange(0, num_e)])
22 | _ThrowN = '' if num_e == MIN_EXCEPTIONS else '_Throw%d' % num_e
23 | %>
24 |
25 | package com.google.chkstream.${stream_impl}.function;
26 |
27 | % if stream_impl == 'java8':
28 | import java.util.function.*;
29 | % else:
30 | import java8.util.function.*;
31 | % endif
32 |
33 | import com.google.chkstream.ChkStreamWrappedException;
34 | import com.google.chkstream.function.ChkConsumers.*;
35 | import com.google.chkstream.function.ChkFunctions.*;
36 | import com.google.chkstream.function.ChkPredicates.*;
37 | import com.google.chkstream.function.ChkRunnables.*;
38 |
39 | public class FunctionUtils_Throw${num_e} {
40 |
41 | // Consumer
42 |
43 | public {@code
31 | * int sum = widgets.stream()
32 | * .filter(b -> b.getColor() == RED)
33 | * .mapToInt(b -> b.getWeight())
34 | * .sum();
35 | * }
36 | *
37 | *
50 | *
75 | *
76 | * Streams can be obtained in a number of ways. Some examples include:
77 | *
78 | *
93 | *
94 | * Stream operations and pipelines
98 | *
99 | * Parallelism
163 | *
164 | * {@code
179 |  *     int sumOfWeights = widgets.parallelStream()
180 |  *                               .filter(b -> b.getColor() == RED)
181 |  *                               .mapToInt(b -> b.getWeight())
182 |  *                               .sum();
183 |  * }
184 |  *
185 |  * Non-interference
210 | *
211 | * Streams enable you to execute possibly-parallel aggregate operations over a
212 | * variety of data sources, including even non-thread-safe collections such as
213 | * {@code ArrayList}. This is possible only if we can prevent
214 | * interference with the data source during the execution of a stream
215 | * pipeline. Except for the escape-hatch operations {@code iterator()} and
216 | * {@code spliterator()}, execution begins when the terminal operation is
217 | * invoked, and ends when the terminal operation completes. For most data
218 | * sources, preventing interference means ensuring that the data source is
219 | * not modified at all during the execution of the stream pipeline.
220 | * The notable exception to this are streams whose sources are concurrent
221 | * collections, which are specifically designed to handle concurrent modification.
222 | * Concurrent stream sources are those whose {@code Spliterator} reports the
223 | * {@code CONCURRENT} characteristic.
224 | *
225 | * {@code
239 |  *     List<String> l = new ArrayList<>(Arrays.asList("one", "two"));
240 |  *     Stream<String> sl = l.stream();
241 |  *     l.add("three");
242 |  *     String s = sl.collect(joining(" "));
243 |  * }
244 | *
245 | * First a list is created consisting of two strings: "one"; and "two". Then a
246 | * stream is created from that list. Next the list is modified by adding a third
247 | * string: "three". Finally the elements of the stream are collected and joined
248 | * together. Since the list was modified before the terminal {@code collect}
249 | * operation commenced the result will be a string of "one two three". All the
250 | * streams returned from JDK collections, and most other JDK classes,
251 | * are well-behaved in this manner; for streams generated by other libraries, see
252 | * Low-level stream
253 | * construction for requirements for building well-behaved streams.
254 | *
255 | * Stateless behaviors
256 | *
257 | * Stream pipeline results may be nondeterministic or incorrect if the behavioral
258 | * parameters to the stream operations are stateful. A stateful lambda
259 | * (or other object implementing the appropriate functional interface) is one
260 | * whose result depends on any state which might change during the execution
261 | * of the stream pipeline. An example of a stateful lambda is the parameter
262 | * to {@code map()} in:
263 | *
264 | * {@code
265 |  *     Set<Integer> seen = Collections.synchronizedSet(new HashSet<>());
266 |  *     stream.parallel().map(e -> { if (seen.add(e)) return 0; else return e; })...
267 |  * }
268 | *
269 | * Here, if the mapping operation is performed in parallel, the results for the
270 | * same input could vary from run to run, due to thread scheduling differences,
271 | * whereas, with a stateless lambda expression the results would always be the
272 | * same.
273 | *
274 | * Side-effects
284 | *
285 | * Side-effects in behavioral parameters to stream operations are, in general,
286 | * discouraged, as they can often lead to unwitting violations of the
287 | * statelessness requirement, as well as other thread-safety hazards.
288 | *
289 | * {@code
316 |  *     ArrayList<String> results = new ArrayList<>();
317 |  *     stream.filter(s -> pattern.matcher(s).matches())
318 |  *           .forEach(s -> results.add(s));  // Unnecessary use of side-effects!
319 |  * }
320 | *
321 | * This code unnecessarily uses side-effects. If executed in parallel, the
322 | * non-thread-safety of {@code ArrayList} would cause incorrect results, and
323 | * adding needed synchronization would cause contention, undermining the
324 | * benefit of parallelism. Furthermore, using side-effects here is completely
325 | * unnecessary; the {@code forEach()} can simply be replaced with a reduction
326 | * operation that is safer, more efficient, and more amenable to
327 | * parallelization:
328 | *
329 | * {@code
330 |  *     List<String> results =
331 |  *         stream.filter(s -> pattern.matcher(s).matches())
332 |  *               .collect(Collectors.toList());  // No side-effects!
333 |  * }
334 | *
335 | * Ordering
336 | *
337 | * Reduction operations
375 | *
376 | * A reduction operation (also called a fold) takes a sequence
377 | * of input elements and combines them into a single summary result by repeated
378 | * application of a combining operation, such as finding the sum or maximum of
379 | * a set of numbers, or accumulating elements into a list. The streams classes have
380 | * multiple forms of general reduction operations, called
381 | * {@link java.util.stream.Stream#reduce(java.util.function.BinaryOperator) reduce()}
382 | * and {@link java.util.stream.Stream#collect(java.util.stream.Collector) collect()},
383 | * as well as multiple specialized reduction forms such as
384 | * {@link java.util.stream.IntStream#sum() sum()}, {@link java.util.stream.IntStream#max() max()},
385 | * or {@link java.util.stream.IntStream#count() count()}.
386 | *
387 | * {@code
390 | * int sum = 0;
391 | * for (int x : numbers) {
392 | * sum += x;
393 | * }
394 | * }
395 | * However, there are good reasons to prefer a reduce operation
396 | * over a mutative accumulation such as the above. Not only is a reduction
397 | * "more abstract" -- it operates on the stream as a whole rather than individual
398 | * elements -- but a properly constructed reduce operation is inherently
399 | * parallelizable, so long as the function(s) used to process the elements
400 | * are associative and
401 | * stateless.
402 | * For example, given a stream of numbers for which we want to find the sum, we
403 | * can write:
404 | * {@code
405 | * int sum = numbers.stream().reduce(0, (x,y) -> x+y);
406 | * }
407 | * or:
408 | * {@code
409 | * int sum = numbers.stream().reduce(0, Integer::sum);
410 | * }
411 | *
412 | * {@code
415 | * int sum = numbers.parallelStream().reduce(0, Integer::sum);
416 | * }
417 | *
418 | * {@code
435 | * OptionalInt heaviest = widgets.parallelStream()
436 | * .mapToInt(Widget::getWeight)
437 | * .max();
438 | * }
439 | *
440 | * {@code
443 |  *     <U> U reduce(U identity,
444 |  *                  BiFunction<U, ? super T, U> accumulator,
445 |  *                  BinaryOperator<U> combiner);
446 | * }
447 | * Here, the identity element is both an initial seed value for the reduction
448 | * and a default result if there are no input elements. The accumulator
449 | * function takes a partial result and the next element, and produces a new
450 | * partial result. The combiner function combines two partial results
451 | * to produce a new partial result. (The combiner is necessary in parallel
452 | * reductions, where the input is partitioned, a partial accumulation computed
453 | * for each partition, and then the partial results are combined to produce a
454 | * final result.)
455 | *
456 | * {@code
469 | * int sumOfWeights = widgets.stream()
470 | * .reduce(0,
471 | * (sum, b) -> sum + b.getWeight())
472 | * Integer::sum);
473 | * }
474 | * though the explicit map-reduce form is more readable and therefore should
475 | * usually be preferred. The generalized form is provided for cases where
476 | * significant work can be optimized away by combining mapping and reducing
477 | * into a single function.
478 | *
479 | * Mutable reduction
480 | *
481 | * A mutable reduction operation accumulates input elements into a
482 | * mutable result container, such as a {@code Collection} or {@code StringBuilder},
483 | * as it processes the elements in the stream.
484 | *
485 | * {@code
488 | * String concatenated = strings.reduce("", String::concat)
489 | * }
490 | *
491 | * {@code
510 | *
514 | * {@code
522 |  *     ArrayList<String> strings = new ArrayList<>();
523 |  *     for (T element : stream) {
524 |  *         strings.add(element.toString());
525 |  *     }
526 |  * }
527 | * Or we could use a parallelizable collect form:
528 | * {@code
529 |  *     ArrayList<String> strings = stream.collect(() -> new ArrayList<>(),
530 |  *                                                (c, e) -> c.add(e.toString()),
531 |  *                                                (c1, c2) -> c1.addAll(c2));
532 |  * }
533 | * or, pulling the mapping operation out of the accumulator function, we could
534 | * express it more succinctly as:
535 | * {@code
536 |  *     List<String> strings = stream.map(Object::toString)
537 |  *                                  .collect(ArrayList::new, ArrayList::add, ArrayList::addAll);
538 |  * }
539 | * Here, our supplier is just the {@link java.util.ArrayList#ArrayList()
540 | * ArrayList constructor}, the accumulator adds the stringified element to an
541 | * {@code ArrayList}, and the combiner simply uses {@link java.util.ArrayList#addAll addAll}
542 | * to copy the strings from one container into the other.
543 | *
544 | * {@code
550 |  *     List<String> strings = stream.map(Object::toString)
551 |  *                                  .collect(Collectors.toList());
552 |  * }
553 | *
554 | * {@code
562 |  *     Collector<Employee, ?, Integer> summingSalaries
563 |  *         = Collectors.summingInt(Employee::getSalary);
564 |  * }
565 | *
566 | * (The {@code ?} for the second type parameter merely indicates that we don't
567 | * care about the intermediate representation used by this collector.)
568 | * If we wanted to create a collector to tabulate the sum of salaries by
569 | * department, we could reuse {@code summingSalaries} using
570 | * {@link java.util.stream.Collectors#groupingBy(java.util.function.Function, java.util.stream.Collector) groupingBy}:
571 | *
572 | * {@code
573 |  *     Map<Department, Integer> salariesByDept
574 |  *         = employees.stream().collect(Collectors.groupingBy(Employee::getDepartment,
575 |  *                                                            summingSalaries));
576 |  * }
577 | *
578 | * {@code
590 | * A a1 = supplier.get();
591 | * accumulator.accept(a1, t1);
592 | * accumulator.accept(a1, t2);
593 | * R r1 = finisher.apply(a1); // result without splitting
594 | *
595 | * A a2 = supplier.get();
596 | * accumulator.accept(a2, t1);
597 | * A a3 = supplier.get();
598 | * accumulator.accept(a3, t2);
599 | * R r2 = finisher.apply(combiner.apply(a2, a3)); // result with splitting
600 | * }
601 | *
602 | * Reduction, concurrency, and ordering
607 | *
608 | * With some complex reduction operations, for example a {@code collect()} that
609 | * produces a {@code Map}, such as:
610 | * {@code
611 |  *     Map<Buyer, List<Transaction>> salesByBuyer
612 |  *         = txns.parallelStream()
613 |  *               .collect(Collectors.groupingBy(Transaction::getBuyer));
614 |  * }
615 | * it may actually be counterproductive to perform the operation in parallel.
616 | * This is because the combining step (merging one {@code Map} into another by
617 | * key) can be expensive for some {@code Map} implementations.
618 | *
619 | *
637 | *
644 | * You can ensure the stream is unordered by using the
645 | * {@link java.util.stream.BaseStream#unordered()} method. For example:
646 | * {@code
647 |  *     Map<Buyer, List<Transaction>> salesByBuyer
648 |  *         = txns.parallelStream()
649 |  *               .unordered()
650 |  *               .collect(groupingByConcurrent(Transaction::getBuyer));
651 |  * }
652 | * (where {@link java.util.stream.Collectors#groupingByConcurrent} is the
653 | * concurrent equivalent of {@code groupingBy}).
654 | *
655 | * Associativity
662 | *
663 | * An operator or function {@code op} is associative if the following
664 | * holds:
665 | * {@code
666 | * (a op b) op c == a op (b op c)
667 | * }
668 | * The importance of this to parallel evaluation can be seen if we expand this
669 | * to four terms:
670 | * {@code
671 | * a op b op c op d == (a op b) op (c op d)
672 | * }
673 | * So we can evaluate {@code (a op b)} in parallel with {@code (c op d)}, and
674 | * then invoke {@code op} on the results.
675 | *
676 | * Low-level stream construction
680 | *
681 | * So far, all the stream examples have used methods like
682 | * {@link java.util.Collection#stream()} or {@link java.util.Arrays#stream(Object[])}
683 | * to obtain a stream. How are those stream-bearing methods implemented?
684 | *
685 | *