Java: what's the big-O time of declaring an array of size n?
What is the running time of declaring an array of size n in Java? I suppose this would depend on whether the memory is zeroed out on garbage collection (in which case it could be O(1)) or on initialization (in which case it would have to be O(n)).
It's O(n). Consider this simple program:
public class ArrayTest {
    public static void main(String[] args) {
        int[] var = new int[5];
    }
}
The bytecode generated (as shown by javap -c ArrayTest) is:
Compiled from "ArrayTest.java"
public class ArrayTest extends java.lang.Object{
public ArrayTest();
  Code:
   0:   aload_0
   1:   invokespecial   #1; //Method java/lang/Object."<init>":()V
   4:   return

public static void main(java.lang.String[]);
  Code:
   0:   iconst_5
   1:   newarray int
   3:   astore_1
   4:   return

}
The instruction to take a look at is the newarray instruction (just search for newarray). From the VM Spec:
A new array whose components are of type atype and of length count is allocated from the garbage-collected heap. A reference arrayref to this new array object is pushed onto the operand stack. Each of the elements of the new array is initialized to the default initial value for the type of the array (§2.5.1).
Since each element is being initialized, it would take O(n) time.
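That default initialization is observable from plain Java. A minimal demonstration (my own example, not from the quoted spec):

public class DefaultInit {
    public static void main(String[] args) {
        int[] numbers = new int[3];
        String[] names = new String[2];
        System.out.println(numbers[0]); // prints 0: int elements default to zero
        System.out.println(names[0]);   // prints null: reference elements default to null
    }
}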
EDIT
Looking at the link amit provided, it is possible to implement array initialization with a default value in constant time. So I guess it ultimately depends on the JVM; you could do some rough benchmarking to see if this is the case.
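For context, here is a minimal sketch of the usual constant-time-initialization trick, using version stamps (the class and all names are my own invention, not from the linked answer). Note that in Java the backing arrays are still zeroed once at allocation, so the O(1) saving applies to re-initialization via clear(), not to the first allocation:

// Sketch: an array whose "reset everything to the default" is O(1).
class LazyInitArray {
    private final long[] data;
    private final int[] stamp;   // data[i] is valid only if stamp[i] == version
    private int version = 1;     // starts above 0, the JVM's default stamp value
    private final long defaultValue;

    LazyInitArray(int n, long defaultValue) {
        this.data = new long[n];  // Java still zeroes these once, at allocation
        this.stamp = new int[n];
        this.defaultValue = defaultValue;
    }

    long get(int i) {
        return stamp[i] == version ? data[i] : defaultValue;
    }

    void set(int i, long value) {
        data[i] = value;
        stamp[i] = version;
    }

    void clear() {
        version++;  // invalidates every slot at once, in O(1)
    }
}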
A small, non-professional benchmark on JRE 1.6:
public static void main(String[] args) {
    long start = System.nanoTime();
    int[] x = new int[50];
    long smallArray = System.nanoTime();
    int[] m = new int[1000000];
    long bigArray = System.nanoTime();
    System.out.println("big:" + (bigArray - smallArray));
    System.out.println("small:" + (smallArray - start));
}
gave the following result:
big:6133612
small:6159
So I assume O(n). Of course, a single run is not enough to be sure, but it's a hint.
I am pretty sure that it is O(n), as the memory is initialized when the array is allocated. It should not be higher than O(n), and I can see no way to make it less than O(n), so that seems the only option.
To elaborate further, Java initializes arrays on allocation. There is no way to zero a region of memory without walking across it, and the size of the region dictates the number of instructions. Therefore, the lower bound is O(n). Also, it would make no sense to use a zeroing algorithm slower than linear, since there is a linear solution, so the upper bound must be O(n). Therefore, O(n) is the only answer that makes sense.
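To make that concrete, a software zeroing routine is just a linear walk, one store per element (an illustrative sketch; real JVMs use optimized memset-style code, but the work is still proportional to the region's size):

static void zero(long[] a) {
    for (int i = 0; i < a.length; i++) {
        a[i] = 0L;
    }
}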
Just for fun, though, imagine a weird piece of hardware where the OS has control over the power to individual regions of memory and can zero a region by flipping the power off and then on. That seems like it would be O(1). But a region can only be so large before the utility disappears (you wouldn't want to lose everything), so asking to zero a region will still be O(n), just with a large constant divisor.
Let's just test it.
class ArrayAlloc {
    static long alloc(int n) {
        long start = System.nanoTime();
        long[] var = new long[n];
        long total = System.nanoTime() - start;
        var[n/2] = 8; // touch the array so the allocation cannot be optimized away
        return total;
    }

    public static void main(String[] args) {
        for (int i = 1; i < 100000000; i += 1000000) {
            System.out.println(i + "," + alloc(i));
        }
    }
}
And the results on my Linux laptop (i7-4600M @ 2.90GHz):

(plot of allocation time in nanoseconds against array size omitted)

So it clearly looks like O(n), but it also looks like the JVM switches to a more efficient allocation method at around 5 million elements.