
Best way to allocate matrix in R, NULL vs NA?

I am writing R code to create a square matrix. So my approach is:

  1. Allocate a matrix of the correct size
  2. Loop through each element of my matrix and fill it with an appropriate value

My question is really simple: what is the best way to pre-allocate this matrix? Thus far, I have two ways:

> x <- matrix(data=NA,nrow=3,ncol=3)
> x
     [,1] [,2] [,3]
[1,]   NA   NA   NA
[2,]   NA   NA   NA
[3,]   NA   NA   NA

or

> x <- list()
> length(x) <- 3^2
> dim(x) <- c(3,3)
> x
     [,1] [,2] [,3]
[1,] NULL NULL NULL
[2,] NULL NULL NULL
[3,] NULL NULL NULL

As far as I can see, the former is a more concise method than the latter. Also, the former fills the matrix with NAs, whereas the latter is filled with NULLs.

Which is the "better" way to do this? In this case, I'm defining "better" as "better performance", because this is statistical computing and this operation will be taking place with large datasets.

While the former is more concise, it isn't breathtakingly easier to understand, so I feel like this could go either way.

Also, what is the difference between NA and NULL in R? ?NA and ?NULL tell me that NA has a length of 1 whereas NULL has a length of 0 - but is there more to it? Or a best practice? This will affect which method I use to create my matrix.


When in doubt, test yourself. The first approach is both easier and faster.

> create.matrix <- function(size) {
+ x <- matrix()
+ length(x) <- size^2
+ dim(x) <- c(size,size)
+ x
+ }
> 
> system.time(x <- matrix(data=NA,nrow=10000,ncol=10000))
   user  system elapsed 
   4.59    0.23    4.84 
> system.time(y <- create.matrix(size=10000))
   user  system elapsed 
   0.59    0.97   15.81 
> identical(x,y)
[1] TRUE

Regarding the difference between NA and NULL:

There are actually four special constants; quoting the R language definition:

In addition, there are four special constants, NULL, NA, Inf, and NaN.

NULL is used to indicate the empty object. NA is used for absent (“Not Available”) data values. Inf denotes infinity and NaN is not-a-number in the IEEE floating point calculus (results of the operations respectively 1/0 and 0/0, for instance).

You can read more in the R manual on language definition.
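
A quick console illustration of the practical difference (my own sketch, not from the manual): NULL is a zero-length object that simply disappears when combined into a vector, while NA is a length-one missing value that keeps its slot.

> length(NULL)
[1] 0
> length(NA)
[1] 1
> c(1, NULL)    # NULL vanishes when concatenated
[1] 1
> c(1, NA)      # NA keeps its position as a missing value
[1]  1 NA
> is.na(NA)
[1] TRUE
> is.null(NULL)
[1] TRUE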


According to this article we can do better than preallocating with NA by preallocating with NA_real_. From the article:

as soon as you assign a numeric value to any of the cells in 'x', the matrix will first have to be coerced to numeric when a new value is assigned. The originally allocated logical matrix was allocated in vain and just adds an unnecessary memory footprint and extra work for the garbage collector. Instead allocate it using NA_real_ (or NA_integer_ for integers)
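
To see the coercion the article describes, here is a small check of my own using typeof(): a matrix filled with plain NA starts out logical and is coerced to double on the first numeric assignment, whereas NA_real_ allocates a double matrix from the start.

> m <- matrix(NA, nrow = 2, ncol = 2)
> typeof(m)             # allocated as logical
[1] "logical"
> m[1, 1] <- 1.5        # first numeric assignment coerces the whole matrix
> typeof(m)
[1] "double"
> m2 <- matrix(NA_real_, nrow = 2, ncol = 2)
> typeof(m2)            # already double, so no coercion is needed later
[1] "double"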

As recommended: let's test it.

testfloat <- function(mat) {
  n <- nrow(mat)
  for (i in 1:n) {
    mat[i, ] <- 1.2   # assigning a double row by row; a logical NA matrix is coerced on the first assignment
  }
}

> system.time(testfloat(matrix(data=NA,nrow=1e4,ncol=1e4)))
   user  system elapsed 
   3.08    0.24    3.32 
> system.time(testfloat(matrix(data=NA_real_,nrow=1e4,ncol=1e4)))
   user  system elapsed 
   2.91    0.23    3.14 

And for integers:

testint <- function(mat) {
  n <- nrow(mat)
  for (i in 1:n) {
    mat[i, ] <- 3   # note: 3 is a double literal (3L would be a true integer assignment)
  }
}

> system.time(testint(matrix(data=NA,nrow=1e4,ncol=1e4)))
   user  system elapsed 
   2.96    0.29    3.31 
> system.time(testint(matrix(data=NA_integer_,nrow=1e4,ncol=1e4)))
   user  system elapsed 
   2.92    0.35    3.28 

The difference is small in my test cases, but it's there.


rows <- 3
cols <- 3
x <- rep(NA, rows * cols)
x1 <- matrix(x, nrow = rows, ncol = cols)
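
For what it's worth, a quick check (my addition, not part of the answer above) that this rep()-based allocation yields the same logical NA matrix as the direct matrix() call:

> identical(x1, matrix(NA, nrow = rows, ncol = cols))
[1] TRUE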
