 * {@link java.util.stream.Stream#reduce(java.util.function.BinaryOperator) reduce()}
 * and {@link java.util.stream.Stream#collect(java.util.stream.Collector) collect()},
 * as well as multiple specialized reduction forms such as
 * {@link java.util.stream.IntStream#sum() sum()}, {@link java.util.stream.IntStream#max() max()},
 * or {@link java.util.stream.IntStream#count() count()}.
 *
 * <p>Of course, such operations can be readily implemented as simple sequential
 * loops, as in:
 * <pre>{@code
 *     int sum = 0;
 *     for (int x : numbers) {
 *         sum += x;
 *     }
 * }</pre>
 * However, there are good reasons to prefer a reduce operation
 * over a mutative accumulation such as the above.  Not only is a reduction
 * "more abstract" -- it operates on the stream as a whole rather than individual
 * elements -- but a properly constructed reduce operation is inherently
 * parallelizable, so long as the function(s) used to process the elements
 * are <a href="package-summary.html#Associativity">associative</a> and
 * <a href="package-summary.html#NonInterfering">stateless</a>.
 * For example, given a stream of numbers for which we want to find the sum, we
 * can write:
 * <pre>{@code
 *     int sum = numbers.stream().reduce(0, (x,y) -> x+y);
 * }</pre>
 * or:
 * <pre>{@code
 *     int sum = numbers.stream().reduce(0, Integer::sum);
 * }</pre>
 *
 * <p>These reduction operations can run safely in parallel with almost no
 * modification:
 * <pre>{@code
 *     int sum = numbers.parallelStream().reduce(0, Integer::sum);
 * }</pre>
 * <p>Reduction parallelizes well because the implementation
 * can operate on subsets of the data in parallel, and then combine the
 * intermediate results to get the final correct answer.  (Even if the language
 * had a "parallel for-each" construct, the mutative accumulation approach would
 * still require the developer to provide thread-safe updates to the shared
 * accumulating variable {@code sum}, and the required synchronization would
 * then likely eliminate any performance gain from parallelism.)
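As a self-contained sketch of the contrast drawn above (the {@code List<Integer> numbers} variable and its contents are assumed here for illustration; they are not part of the original examples), the sequential mutative loop and the associative parallel reduction compute the same sum -- but only the latter is safe to run across threads without extra synchronization:

```java
import java.util.List;
import java.util.stream.IntStream;

public class ReduceDemo {
    public static void main(String[] args) {
        // Hypothetical input; any List<Integer> behaves the same way.
        List<Integer> numbers = IntStream.rangeClosed(1, 100).boxed().toList();

        // Mutative accumulation: correct sequentially, but the shared
        // variable would need synchronization under parallel execution.
        int loopSum = 0;
        for (int x : numbers) {
            loopSum += x;
        }

        // Associative, stateless reduction: safely parallelizable as-is,
        // because partial sums of subsets can be combined in any grouping.
        int parallelSum = numbers.parallelStream().reduce(0, Integer::sum);

        System.out.println(loopSum);      // 5050
        System.out.println(parallelSum);  // 5050
    }
}
```

The parallel form works because {@code Integer::sum} is associative: the runtime may split {@code numbers} into arbitrary contiguous chunks, reduce each chunk independently, and combine the chunk results in any grouping without changing the answer.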