Learn how concurrency options differ across the Java next-generation languages



Java engineers have long tried to make concurrency easier for developers. Although many improvements have been made, concurrency remains a complex, error-prone part of the Java platform. Part of the complexity is the low-level abstraction of concurrency in the language itself, which fills your code with synchronized blocks. Another complication comes from newer libraries, such as fork/join, which are very useful in some scenarios but of little help in others. Understanding this large number of confusing low-level options takes experience and time.

One advantage of moving beyond the Java language is the opportunity to improve and simplify areas such as concurrency. Each Java next-generation language provides its own answer to this problem, playing to the language’s default programming style. In this article, I first cover an advantage of the functional programming style: easy parallelization. I go into detail for Scala and Groovy (Clojure will be covered in the next installment). Then I introduce Scala actors.

Perfect numbers

The mathematician Nicomachus (born in the 1st century AD) classified natural numbers as perfect, abundant, or deficient. A perfect number equals the sum of its positive factors, excluding itself. For example, 6 is perfect because its proper factors are 1, 2, and 3 (1 + 2 + 3 = 6), and 28 is also perfect (28 = 1 + 2 + 4 + 7 + 14). An abundant number’s factor sum is greater than the number itself, and a deficient number’s factor sum is less.
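The classification is mechanical to check. Here is a minimal sequential sketch in plain Java (the class and method names are mine, not part of the series):

```java
// Classify a natural number as perfect, abundant, or deficient
// by comparing it with the sum of its proper divisors.
public class NumberKind {
    // Sum of positive factors of n, excluding n itself.
    static int properDivisorSum(int n) {
        int sum = 0;
        for (int i = 1; i <= n / 2; i++) {
            if (n % i == 0) sum += i;
        }
        return sum;
    }

    static String classify(int n) {
        int sum = properDivisorSum(n);
        if (sum == n) return "perfect";
        return sum > n ? "abundant" : "deficient";
    }

    public static void main(String[] args) {
        System.out.println(6 + " is " + classify(6));    // 1 + 2 + 3 = 6
        System.out.println(28 + " is " + classify(28));  // 1 + 2 + 4 + 7 + 14 = 28
        System.out.println(12 + " is " + classify(12));  // 1 + 2 + 3 + 4 + 6 = 16 > 12
    }
}
```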

The perfect-number classification is used here for convenience of exposition. Unless you are dealing with very large numbers, finding factors is too trivial a problem to benefit much from parallelism. Using more threads brings some benefit, but the overhead of switching between threads is expensive for fine-grained jobs.

Parallelize existing code

In my articles on functional coding style, I encourage the use of higher-level abstractions, such as map, filter, and reduce, rather than iteration. One advantage of this approach is that it is easy to parallelize.
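The same idea eventually reached the Java language itself with the Java 8 streams API (my illustration, not from this series): because map and filter declare what to compute rather than how to iterate, switching a pipeline from sequential to parallel is often a one-word change.

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ParallelFilterDemo {
    // Collect the even squares of 1..20, sequentially or in parallel.
    static List<Integer> evenSquares(boolean parallel) {
        IntStream range = IntStream.rangeClosed(1, 20);
        if (parallel) range = range.parallel(); // the only change needed
        return range.map(n -> n * n)
                    .filter(sq -> sq % 2 == 0)
                    .boxed()
                    .sorted() // sort so the two variants compare equal
                    .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Both pipelines compute the same result.
        System.out.println(evenSquares(false).equals(evenSquares(true)));
    }
}
```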

Readers of my Functional thinking series are familiar with the number-classification example built around perfect numbers (see the Perfect numbers sidebar). None of the solutions I have shown in that series takes advantage of concurrency. But because those solutions use transformation functions such as map, very little work is needed in each Java next-generation language to create parallelized versions.

Listing 1 is a Scala example of a perfect number classifier.

Listing 1. Parallel perfect number classifier in Scala

object NumberClassifier {
  def isFactor(factor: Int, number: Int) =
    number % factor == 0

  def factors(number: Int) = {
    val factorsBelowSqrt = (1 to Math.sqrt(number).toInt).par.filter(isFactor(_, number))
    val factorsAboveSqrt = factorsBelowSqrt.par.map(number / _)
    (factorsBelowSqrt ++ factorsAboveSqrt).toList.distinct
  }

  def sum(factors: Seq[Int]) =
    factors.par.foldLeft(0)(_ + _)

  def isPerfect(number: Int) =
    sum(factors(number)) - number == number
}

The factors() method in Listing 1 returns the list of factors of a number, using the isFactor() method to filter all candidate values. The factors() method uses an optimization I described in more detail in “Functional thinking: Transformation and optimization.” In short, testing every number up to the target to find factors is inefficient, because by definition a factor is one of a pair of numbers whose product equals the target number.

Instead, I filter only the numbers up to the square root of the target, then generate the list of symmetric factors by dividing the target by each factor found below the square root. In Listing 1, the factorsBelowSqrt variable holds the results of the filter operation. The values of factorsAboveSqrt come from mapping over the existing list to generate those symmetric values. Finally, the return value of factors() is the concatenation of the two lists, converted from a parallel collection back to a regular list.

Notice the par modifier added in Listing 1. It causes filter, map, and foldLeft to run in parallel, enabling multiple threads to process the work. The par method (available consistently throughout the Scala collection library) converts a sequence into a parallel sequence. Because the two kinds of sequences mirror each other’s signatures, par becomes an ad hoc way to parallelize an operation.

The ease of parallelization in Scala is a testament both to the language’s design and to the functional paradigm. Functional programming encourages the use of generic functions, such as map, filter, and reduce, which the runtime can optimize behind the scenes. Scala’s language designers took these optimizations into account in producing the design of the collections API.

Edge case

In the implementation of the factors() method in Listing 1, the integer square root of a perfect square (for example, 4, the square root of 16) would appear in both lists. Therefore the last line of factors() calls the distinct function, which removes duplicate values from the list. I could also have used sets rather than lists throughout, but lists often have useful functions that sets lack.
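The duplicated square root is easy to see in a sequential plain-Java transcription of the same optimization; here a set (the alternative the text mentions) absorbs the duplicate, and insertion order is preserved for readability (names are mine):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class Factors {
    // Find all factors of n by testing only candidates up to sqrt(n),
    // then mirroring each hit: if i divides n, so does n / i.
    static List<Integer> factors(int n) {
        Set<Integer> result = new LinkedHashSet<>(); // absorbs the duplicate sqrt
        for (int i = 1; i * i <= n; i++) {
            if (n % i == 0) {
                result.add(i);
                result.add(n / i); // for i == sqrt(n), this is a duplicate
            }
        }
        return new ArrayList<>(result);
    }

    public static void main(String[] args) {
        System.out.println(factors(28)); // six factors, no duplicates
        System.out.println(factors(16)); // 4 appears once, not twice
    }
}
```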

Groovy also lets you easily parallelize existing functional code, through the GPars library, which is bundled with recent Groovy distributions. The GPars framework builds useful abstractions, often wrapped in syntactic sugar, on top of Java’s built-in concurrency primitives. GPars offers a dazzling array of parallelism mechanisms; one of them lets you allocate a thread pool and distribute operations across it. Listing 2 shows a perfect-number classifier written in Groovy using a GPars thread pool.

Listing 2. Parallel perfect number classifier in groovy

import groovyx.gpars.GParsPool
import static java.lang.Math.*

class NumberClassifierPar {
  static def factors(number) {
    GParsPool.withPool {
      def factors = (1..round(sqrt(number) + 1)).findAllParallel { number % it == 0 }
      (factors + factors.collectParallel { number / it }).unique()
    }
  }

  static def sumFactors(number) {
    factors(number).inject(0, { i, j -> i + j })
  }

  static def isPerfect(number) {
    sumFactors(number) - number == number
  }
}

The factors() method in Listing 2 uses the same algorithm as Listing 1: it finds all factors up to the square root of the target number, then generates the remaining factors by division and returns the concatenated collection. As in Listing 1, I use the unique() method to ensure that the square root of a perfect square does not produce a duplicate value.

Instead of retrofitting the collections with symmetric parallel versions, as Scala does, Groovy’s designers created xxxParallel() versions of the language’s transformation methods (such as findAllParallel() and collectParallel()). But these methods do nothing unless they are wrapped in a GPars thread-pool code block.

In Listing 2, I create a thread pool and enable the use of the xxxParallel() methods by calling GParsPool.withPool with a code block. The withPool method has other variants; for example, you can specify the number of threads in the pool.
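GPars pools sit atop the JDK’s fork/join machinery, so the same sizing knob exists in plain Java as the parallelism argument of ForkJoinPool. The sketch below (names mine) runs a parallel pipeline inside an explicitly sized pool; note that it relies on the common but informally specified behavior that parallel streams submitted from inside a ForkJoinPool execute in that pool:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.stream.IntStream;

public class PoolSizeDemo {
    // Sum the factors of number (including number itself) using a
    // parallel stream run inside an explicitly sized ForkJoinPool,
    // roughly analogous to GParsPool.withPool(n) { ... } in Groovy.
    static int sumFactors(int number, int threads) {
        ForkJoinPool pool = new ForkJoinPool(threads);
        try {
            return pool.submit(() ->
                IntStream.rangeClosed(1, number)
                         .parallel()
                         .filter(i -> number % i == 0)
                         .sum()
            ).join(); // join() rethrows unchecked, so no checked exceptions
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(sumFactors(28, 4)); // 1+2+4+7+14+28 = 56
    }
}
```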

Clojure provides a similar ad hoc parallelization mechanism through its reducers library. Using the reducer version of a transformation function gives you automatic parallelization: for example, r/map instead of map (r/ being the conventional alias for the reducers namespace). The implementation of reducers is a compelling case study in Clojure’s syntactic flexibility, which enables powerful additions through minimal changes.

Actors in Scala

Scala includes many concurrency and parallelism mechanisms. One popular mechanism is the actor model, which offers a way to distribute work across threads without the complexity of synchronization. Conceptually, an actor does some work and then sends the result back to a coordinator in a non-blocking way. To create an actor, you subclass the Actor class and implement the act() method. Using Scala’s syntactic sugar, you can bypass much of the definition ceremony and define an actor within a code block.

One optimization I did not perform in the number classifier in Listing 1 is using threads to partition the factor-finding part of the job. If my computer has four processors, I can create a thread per processor and split the work among them. For example, to find the sum of the factors of the number 16, I could assign processor 1 the range 1 to 4 (summing the factors it finds), processor 2 the range 5 to 8, and so on. Actors are a natural fit: I create an actor for each range, let each actor execute independently (implicitly via syntactic sugar, or explicitly by calling its act() method), and then gather the results, as shown in Listing 3.

Listing 3. Identifying perfect numbers using actors in Scala

import scala.actors.Actor._

object NumberClassifier extends App {
  def isPerfect(candidate: Int) = {
    val RANGE = 10000
    val numberOfPartitions = (candidate.toDouble / RANGE).ceil.toInt
    val coordinator = self
    for (i <- 0 until numberOfPartitions) {
      val lower = i * RANGE + 1
      val upper = candidate.min((i + 1) * RANGE)
      actor {
        var partialSum = 0
        for (j <- lower to upper)
          if (candidate % j == 0) partialSum += j
        coordinator ! partialSum
      }
    }
    var responsesExpected = numberOfPartitions
    var sum = 0
    while (responsesExpected > 0) {
      receive {
        case partialSum: Int =>
          responsesExpected -= 1
          sum += partialSum
      }
    }
    sum == 2 * candidate
  }
}

To keep the example simple, I wrote isPerfect() as a single monolithic function. I first compute the number of partitions based on the RANGE constant. Second, I need a way to gather the messages the actors generate. The coordinator variable holds the reference the actors will send messages to; self, a member of Actor, is the reliable way to obtain the current thread’s actor identity in Scala.

I then loop over the partition numbers, using the RANGE offset to generate the lower and upper bounds of each range. Next, I create an actor for the range, using Scala’s syntactic sugar to avoid a formal class definition. Within the actor, I create a holder for partialSum, walk the range, and accumulate any factors found into partialSum. After the partial sum (the sum of all factors in this range) is computed, coordinator ! partialSum sends it back to the coordinator as a message, using the exclamation-point operator. (This message-passing syntax, inspired by the Erlang language, makes a non-blocking call to another thread.)

Next, I start a loop that waits for all the actors to finish. While waiting, I enter a receive block; in it I expect an Int message, which I bind locally to partialSum, then decrement the count of responses still expected and add the partial sum to the running total. After all actors have reported their results, the last line of the method compares the sum against twice the candidate number. Because the ranges cover the candidate itself, which is its own factor, the sum of all factors of a perfect number equals twice the number. If the comparison is true, the candidate is a perfect number, and the function returns true.

A nice benefit of actors is the partitioning of ownership. Each actor has its own partialSum local variable, and the actors never touch one another’s state. The underlying mechanism for receiving messages through the coordinator is likewise invisible: you create a receive block, and the other implementation details stay hidden.

The actor mechanism in Scala is an excellent example of a Java next-generation language wrapping the JVM’s existing facilities and extending them with consistent abstractions. Writing similar code in the Java language means using low-level concurrency primitives and coordinating multiple threads by hand, which is notoriously complex. Actors in Scala hide all of that complexity, leaving behind an easy-to-understand abstraction.
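To make the comparison concrete, here is a sketch of the same partitioned factor sum written against Java’s low-level primitives (the constants and names are mine). Notice everything the actor version hides: manual coordination with a latch, a shared mutable accumulator that must be atomic, and an explicit pool lifecycle.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class PerfectNumberThreads {
    static final int RANGE = 10000;

    static boolean isPerfect(int candidate) {
        int partitions = (candidate + RANGE - 1) / RANGE;     // ceiling division
        AtomicInteger sum = new AtomicInteger(0);             // shared, so must be atomic
        CountDownLatch done = new CountDownLatch(partitions); // manual coordination
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        for (int i = 0; i < partitions; i++) {
            final int lower = i * RANGE + 1;
            final int upper = Math.min(candidate, (i + 1) * RANGE);
            pool.execute(() -> {
                int partial = 0;
                for (int j = lower; j <= upper; j++)
                    if (candidate % j == 0) partial += j;
                sum.addAndGet(partial);
                done.countDown();
            });
        }
        try {
            done.await(); // block until every partition reports
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        pool.shutdown();
        // Factor sum includes the candidate itself, so perfect means 2x.
        return sum.get() == 2 * candidate;
    }

    public static void main(String[] args) {
        System.out.println(isPerfect(8128)); // 8128 is a perfect number
    }
}
```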

Concluding remarks

The Java next-generation languages offer answers to the concurrency problems of the Java language, and each language solves them in a different way. In this installment, I demonstrated how all three Java next-generation languages achieve ad hoc parallelism. I also demonstrated Scala’s actor model, building a number classifier that computes the sum of factors in parallel.
