R optimization: How can I avoid a for loop in this situation?
I'm trying to do a simple genomic track intersection in R, and running into major performance problems, probably related to my use of for loops.
In this situation, I have pre-defined windows at intervals of 100bp and I'm trying to calculate how much of each window is covered by the annotations in mylist. Graphically, it looks something like this:
          0     100   200   300   400   500   600
windows:  |-----|-----|-----|-----|-----|-----|
mylist:    |-|   |-----------|
So I wrote some code to do just that, but it's fairly slow and has become a bottleneck in my code:
## window for each 100-bp segment
windows <- numeric(6)
## second track
mylist <- vector("list")
mylist[[1]] <- c(1, 20)
mylist[[2]] <- c(120, 320)
## do the intersection
for (i in 1:length(mylist)) {
  st <- floor(mylist[[i]][1] / 100) + 1
  sp <- floor(mylist[[i]][2] / 100) + 1
  for (j in st:sp) {
    b <- max((j - 1) * 100, mylist[[i]][1])
    e <- min(j * 100, mylist[[i]][2])
    windows[j] <- windows[j] + e - b + 1
  }
}
print(windows)
[1] 20 81 101 21 0 0
Naturally, this is being used on data sets that are much larger than the example I provide here. Through some profiling, I can see that the bottleneck is in the for loops, but my clumsy attempt to vectorize it using *apply functions resulted in code that runs an order of magnitude more slowly.
I suppose I could write something in C, but I'd like to avoid that if possible. Can anyone suggest another approach that will speed this calculation up?
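For what it's worth, here is a minimal vectorized sketch in base R (an illustration of the binning arithmetic, not a full answer). One caveat: it counts every base exactly once, so windows 3 and 4 come out as 100 and 20 rather than the 101 and 21 printed above, because the loop above counts a base sitting on a shared window boundary in both adjacent windows.

```r
# Vectorized sketch: expand each interval into the base positions it
# covers, then bin those positions into 100-bp windows with tabulate().
mylist <- list(c(1, 20), c(120, 320))
pos <- unlist(lapply(mylist, function(x) x[1]:x[2]))  # all covered bases
windows <- tabulate(floor((pos - 1) / 100) + 1, nbins = 6)
# windows is now c(20, 81, 100, 20, 0, 0)
```

Note that expanding every covered base is memory-hungry for long intervals, so this only sketches the windowing logic rather than being a serious speed contender.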
The "right" thing to do is to use the Bioconductor IRanges package, which uses an interval tree data structure to represent these ranges. With both of your objects in their own IRanges objects, you would then use the findOverlaps function to do the intersection.
Get it here:
http://www.bioconductor.org/packages/release/bioc/html/IRanges.html
By the by, the internals of the package are written in C, so it's super fast.
EDIT
On second thought, it's not as much of a slam dunk (a one-liner) as I'm suggesting, but you should definitely start using this library if you're working with genomic intervals (or other interval types) at all; you'll likely need to do set operations and the like. Sorry, I don't have time to provide the exact answer, though.
I just thought it's important to point this library out to you.
So I'm not entirely sure why the third and fourth windows aren't 100 and 20, since that would make more sense to me. Here's a one-liner for that behavior:
Reduce('+', lapply(mylist, function(x) hist(x[1]:x[2], breaks = (0:6) * 100, plot = F)$counts))
Note that you need to specify the upper bound in breaks, but it shouldn't be hard to make another pass to get it if you don't know it in advance.
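For instance, that extra pass might look like this (a sketch; nwin is a name introduced here, not something from the original answer):

```r
# Derive the upper break from the data instead of hard-coding it.
mylist <- list(c(1, 20), c(120, 320))
nwin <- ceiling(max(sapply(mylist, `[`, 2)) / 100)  # highest endpoint -> 4 windows
windows <- Reduce(`+`, lapply(mylist, function(x)
  hist(x[1]:x[2], breaks = (0:nwin) * 100, plot = FALSE)$counts))
# windows is now c(20, 81, 100, 20)
```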
Okay, so I wasted WAY too much time on this, and still only got a factor of 3 speed-up. Can anyone beat this?
The code:
my <- do.call(rbind, mylist)
myFloor <- floor(my / 100)
myRem <- my %% 100
# add intervals, over-counting interval endpoints
counts <- table(do.call(c, apply(myFloor, 1, function(r) r[1]:r[2])))
windows[as.numeric(names(counts)) + 1] <- counts * 101
# subtract off lower and upper endpoints
lowerUncovered <- tapply(myRem[, 1], myFloor[, 1], sum)
windows[as.numeric(names(lowerUncovered)) + 1] <- windows[as.numeric(names(lowerUncovered)) + 1] - lowerUncovered
upperUncovered <- tapply(myRem[, 2], myFloor[, 2], function(x) 100 * length(x) - sum(x))
windows[as.numeric(names(upperUncovered)) + 1] <- windows[as.numeric(names(upperUncovered)) + 1] - upperUncovered
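As a quick sanity check (an addition, not part of the original answer), running this version on the question's small example reproduces the original loop's output exactly, boundary double-counting included:

```r
# The table/tapply version applied to the question's example.
mylist <- list(c(1, 20), c(120, 320))
windows <- numeric(6)

my <- do.call(rbind, mylist)
myFloor <- floor(my / 100)
myRem <- my %% 100

# credit each touched window with 101 bases, over-counting the ends ...
counts <- table(do.call(c, apply(myFloor, 1, function(r) r[1]:r[2])))
windows[as.numeric(names(counts)) + 1] <- counts * 101

# ... then subtract the uncovered parts of the first and last windows
lowerUncovered <- tapply(myRem[, 1], myFloor[, 1], sum)
windows[as.numeric(names(lowerUncovered)) + 1] <-
  windows[as.numeric(names(lowerUncovered)) + 1] - lowerUncovered
upperUncovered <- tapply(myRem[, 2], myFloor[, 2], function(x) 100 * length(x) - sum(x))
windows[as.numeric(names(upperUncovered)) + 1] <-
  windows[as.numeric(names(upperUncovered)) + 1] - upperUncovered

# windows now matches the question's output: 20 81 101 21 0 0
```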
The test:
mylist <- vector("list")
for (i in 1:20000) {
  d <- round(runif(1, , 500))
  mylist[[i]] <- c(d, d + round(runif(1, , 700)))
}
windows <- numeric(200)
new_code <- function() {
  my <- do.call(rbind, mylist)
  myFloor <- floor(my / 100)
  myRem <- my %% 100
  counts <- table(do.call(c, apply(myFloor, 1, function(r) r[1]:r[2])))
  windows[as.numeric(names(counts)) + 1] <- counts * 101
  lowerUncovered <- tapply(myRem[, 1], myFloor[, 1], sum)
  windows[as.numeric(names(lowerUncovered)) + 1] <- windows[as.numeric(names(lowerUncovered)) + 1] - lowerUncovered
  upperUncovered <- tapply(myRem[, 2], myFloor[, 2], function(x) 100 * length(x) - sum(x))
  windows[as.numeric(names(upperUncovered)) + 1] <- windows[as.numeric(names(upperUncovered)) + 1] - upperUncovered
  #print(windows)
}
#old code
old_code <- function() {
  for (i in 1:length(mylist)) {
    st <- floor(mylist[[i]][1] / 100) + 1
    sp <- floor(mylist[[i]][2] / 100) + 1
    for (j in st:sp) {
      b <- max((j - 1) * 100, mylist[[i]][1])
      e <- min(j * 100, mylist[[i]][2])
      windows[j] <- windows[j] + e - b + 1
    }
  }
  #print(windows)
}
system.time(old_code())
system.time(new_code())
The result:
> system.time(old_code())
user system elapsed
2.403 0.021 2.183
> system.time(new_code())
user system elapsed
0.739 0.033 0.588
Very frustrating that the system time is basically 0 while the elapsed time is still so large. I bet that if you did go down to C you would get a 50-100x speed-up.
I think I have made it much more complicated... system.time() didn't help me evaluate performance on such a small dataset.
windows <- numeric(6)
mylist = vector("list")
mylist[[1]] = c(1,20)
mylist[[2]] = c(120,320)
library(plyr)
l_ply(mylist, function(x) {
  sapply((floor(x[1] / 100) + 1):(floor(x[2] / 100) + 1), function(z) {
    eval.parent(parse(text = paste("windows[", z, "] <- ",
        min(z * 100, x[2]) - max((z - 1) * 100, x[1]) + 1, sep = "")),
      sys.nframe())
  })
})
print(windows)
EDIT
A modification to eliminate eval:
g <- llply(mylist, function(x) {
  ldply((floor(x[1] / 100) + 1):(floor(x[2] / 100) + 1), function(z) {
    t(matrix(c(z, min(z * 100, x[2]) - max((z - 1) * 100, x[1]) + 1), nrow = 2))
  })
})
for (i in 1:length(g)) {
  windows[unlist(g[[i]][1])] <- unlist(g[[i]][2])
}
I don't have a bright idea, but you can get rid of the inner loop and speed things up a bit. Notice that if a window falls fully within a mylist interval, then you just have to add 100 to the corresponding windows element. So only the st-th and sp-th windows need special handling.
windows <- numeric(100)
for (i in 1:length(mylist)) {
  win <- mylist[[i]]  # for cleaner code
  st <- floor(win[1] / 100) + 1
  sp <- floor(win[2] / 100) + 1
  # start and stop are within the same window
  if (sp == st) {
    windows[st] <- windows[st] + (win[2] %% 100) - (win[1] %% 100) + 1
  }
  # start and stop are in separate windows - take care of edges
  if (sp > st) {
    windows[st] <- windows[st] + 100 - (win[1] %% 100) + 1
    windows[sp] <- windows[sp] + (win[2] %% 100)
  }
  # windows completely inside win
  if (sp > st + 1) {
    windows[(st + 1):(sp - 1)] <- windows[(st + 1):(sp - 1)] + 100
  }
}
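One caveat worth flagging (an observation about the code above, not the answerer's own claim): because this version counts each base exactly once, it does not reproduce the question's output exactly. Wrapping the loop in a function makes the difference easy to see:

```r
# The edge-handling loop above, wrapped in a function for testing.
count_windows <- function(mylist, nwin) {
  windows <- numeric(nwin)
  for (i in seq_along(mylist)) {
    win <- mylist[[i]]
    st <- floor(win[1] / 100) + 1
    sp <- floor(win[2] / 100) + 1
    if (sp == st)      # start and stop inside the same window
      windows[st] <- windows[st] + (win[2] %% 100) - (win[1] %% 100) + 1
    if (sp > st) {     # handle the two edge windows
      windows[st] <- windows[st] + 100 - (win[1] %% 100) + 1
      windows[sp] <- windows[sp] + (win[2] %% 100)
    }
    if (sp > st + 1)   # windows completely inside the interval
      windows[(st + 1):(sp - 1)] <- windows[(st + 1):(sp - 1)] + 100
  }
  windows
}

count_windows(list(c(1, 20), c(120, 320)), 6)
# c(20, 81, 100, 20, 0, 0): windows 3 and 4 differ from the question's
# 101 and 21, which double-count bases sitting on shared window boundaries
```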
I generated a bigger list:
cuts <- sort(sample(1:10000, 70)) # random interval endpoints
mylist <- split(cuts, gl(35,2))
and got 1.08 sec for 1000 replicates of this version versus 1.72 sec for 1000 replicates of the original. With real data, the speed-up will depend on whether the intervals in mylist tend to be much longer than 100 bp or not.
By the way, one could rewrite the inner loop as a separate function and lapply it over mylist, but that does not make it run any faster.