
How to use the doSMP and the foreach packages correctly?

I am trying to use the doSMP package, which provides a parallel backend for the foreach package.

Can you point out what I am doing wrong? Using foreach this way significantly increases the computing time...

#------register doSMP to be used with foreach------
library(doSMP)
w <- startWorkers(4)
registerDoSMP(w)
#--------------------------------------------------

#------A simple function------
sim <- function(a, b)
{
    return(10 * a + b)
}
avec <- 1:200
bvec <- 1:400
#-----------------------------

#------The naive method------
ptime <- system.time({
mat <- matrix(NA, nrow=length(avec), ncol=length(bvec))
for(i in 1:length(avec))
{
    for(j in 1:length(bvec))
    {
         mat[i, j] <- sim(avec[i], bvec[j])
    }
}
})[3]
ptime

elapsed 
   0.36
#----------------------------

#------Using foreach------
ptime <- system.time({
mat2 <- foreach(b=bvec, .combine="cbind") %:%
    foreach(a=avec, .combine="c") %dopar% {
        sim(a, b)
    }
})[3]
ptime

elapsed 
  86.98
#-------------------------

EDIT

This question is very similar to this one and has been migrated from stats.stackexchange.


I personally don't like the doSMP package, as it often crashes my R session. It is developed for the REvolution build and somehow fails to run smoothly on my machine. For example, your code above, unaltered, just crashes my R.

Apart from that, it seems strange to nest a parallelized loop inside another loop construct. It is more logical to do the parallelization in the outer loop only. The communication involved in nested parallel computing is what causes the dramatic increase in calculation time; you don't gain anything, as your sim function is incredibly fast. Keeping the inner loop serialized makes more sense, because then the calculation time on one core becomes larger than the communication overhead.
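For illustration, a minimal sketch that keeps %dopar% only on the outer foreach loop and lets each worker fill a whole column serially (assuming a parallel backend, e.g. the doSMP workers from the question, is already registered; mat3 is just a placeholder name):

library(foreach)
# outer loop parallel: each worker computes one full column of the result
mat3 <- foreach(b = bvec, .combine = "cbind") %dopar% {
    # inner loop kept serial on the worker
    sapply(avec, function(a) sim(a, b))
}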

An illustration using the snowfall package, with apply-style looping instead of for loops. This is still very naive, as there is a lot to gain from vectorization (see below).

library(snowfall)
sfInit(parallel=TRUE, cpus=2)
#same avec, bvec, sim

system.time({
    out1 <- sapply(avec, function(i) {
        sapply(bvec, function(j) {
            sim(i, j)
        })
    })
})[3]
elapsed 
   0.33 

sfExport("avec", "bvec", "sim")
system.time({
    out2 <- sfSapply(avec, function(i) { # this one is parallel
        sapply(bvec, function(j) {       # this one is not; no sense in doing so
            sim(i, j)
        })
    })
})[3]
elapsed 
   0.17 

Both matrices are equal, apart from the dimension names that result from the different call structure:

> all.equal(out1,out2)
[1] "Attributes: < Length mismatch: comparison on first 1 components >"
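If you only care about the values, a quick check after stripping the dimension names confirms this (just a sketch; unname() drops the dimnames):

# drop names/dimnames before comparing; should report TRUE if only the
# dimension names differed
all.equal(unname(out1), unname(out2))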

The correct R way to do this would be:

system.time(
  out3 <- outer(avec*10,bvec,"+")
)[3]
elapsed 
   0.01 

which is significantly faster and creates an identical (though transposed) matrix:

> all.equal(out1,t(out3))
[1] TRUE
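Note that outer() works like this because sim() is built from vectorized arithmetic; outer() hands whole vectors to the function. For a scalar-only function you would have to wrap it in Vectorize(), which gives up most of the speed advantage. A minimal sketch (out4 and out5 are just placeholder names):

# works directly: sim() accepts whole vectors
out4 <- outer(avec, bvec, sim)
# general fallback for a function that only handles scalars (much slower)
out5 <- outer(avec, bvec, Vectorize(sim))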

(For reference, your double for loop runs in 0.73 seconds elapsed time on my system...)


Joris Meys gave me a nice answer here that applies in this situation as well. Sorry for the "double post".
