Groovy concurrency: A better way to aggregate results semantically?
I need to call a number of methods in parallel and wait for results. Each relies on different resources, so they may return at different times. I need to wait until I receive all results or time out after a certain amount of time.
I could just spawn threads and have them write their results to a shared object, but is there a better, more Groovy way to do this?
Current Implementation:
import java.util.concurrent.*

ExecutorService exec = Executors.newFixedThreadPool(10)
def callables = []
for (obj in objects) {
    // Groovy closures can be coerced to Callable with 'as'
    def method = {
        def result = new ResultObject(a: obj, b: obj.callSomeMethod())
        result
    } as Callable<ResultObject>
    callables << method
}
List<Future<ResultObject>> results = exec.invokeAll(callables)
for (result in results) {
    try {
        def searchResult = result.get()
        println 'result retrieved'
    } catch (Exception e) {
        println 'exception'
        e.printStackTrace()
    }
}
A Groovier solution is to use GPars - a concurrency library written in Groovy.
import static groovyx.gpars.GParsExecutorsPool.withPool

withPool {
    def callable = { obj -> new ResultObject(a: obj, b: obj.callSomeMethod()) }.async()
    List<ResultObject> results = objects.collect(callable)*.get()
}
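If the overall timeout still matters, the futures produced inside withPool are ordinary java.util.concurrent Futures, so a bounded wait is just a timed get(). A minimal sketch, assuming objects and ResultObject from the question and an assumed 30-second overall limit (error handling beyond the timeout is omitted):

import static groovyx.gpars.GParsExecutorsPool.withPool
import java.util.concurrent.TimeUnit
import java.util.concurrent.TimeoutException

withPool {
    // async() wraps the closure so each call submits work to the pool and returns a Future
    def callable = { obj -> new ResultObject(a: obj, b: obj.callSomeMethod()) }.async()
    def futures = objects.collect(callable)                  // List<Future<ResultObject>>

    long deadline = System.currentTimeMillis() + 30000L      // assumed 30s overall limit
    def results = futures.collect { future ->
        try {
            long remaining = Math.max(deadline - System.currentTimeMillis(), 0L)
            future.get(remaining, TimeUnit.MILLISECONDS)     // wait against the shared deadline
        } catch (TimeoutException ignored) {
            println 'timed out waiting for a result'
            null
        }
    }.findAll { it != null }
}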
Staying with plain java.util.concurrent, the timeout requirement is covered by the timed overload of invokeAll (declared on ExecutorService and implemented by AbstractExecutorService): invokeAll(Collection<? extends Callable<T>> tasks, long timeout, TimeUnit unit). It blocks until all tasks complete or the timeout elapses, cancelling any tasks that have not finished.
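Applied to the executor code from the question, a sketch under the assumption of a 30-second overall limit (exec and callables as defined above):

import java.util.concurrent.*

// The timed invokeAll waits until everything finishes or the 30 seconds pass,
// cancelling whatever is still running.
List<Future<ResultObject>> results = exec.invokeAll(callables, 30, TimeUnit.SECONDS)
for (result in results) {
    try {
        def searchResult = result.get()        // does not block further; invokeAll already waited
        println 'result retrieved'
    } catch (CancellationException e) {
        println 'task timed out and was cancelled'
    } catch (ExecutionException e) {
        println 'exception'
        e.printStackTrace()
    }
}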
The Groovy part is that a closure can be coerced directly to a Callable with the as keyword, as the question's code already does with } as Callable<ResultObject>, so no anonymous inner classes are needed.
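For illustration, a minimal standalone example of that coercion (the task body here is made up):

import java.util.concurrent.Callable

// A Groovy closure coerced to a Callable via the 'as' operator
Callable<String> task = { -> "computed on ${Thread.currentThread().name}" } as Callable<String>
println task.call()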