Practical limitations of JVM memory and CPU usage?
Let's say money was not a limiting factor, and I wanted to write a Java program that ran on a single powerful machine.
The goal would be to make the Java program run as fast as possible without having to swap or go to disk for anything.
Let's say that this computer has:
- 1 TB of RAM (64 16GB DIMMs)
- 64 processor cores (8 8-core processors)
- running 64-bit Ubuntu
Could a single instance of a Java program running in a JVM take advantage of this much RAM and this many processors?
Are there any practical considerations that might limit the usage and efficiency?
- OS process (memory & threads) limitations?
- JVM memory/heap limitations?
- JVM thread limitations?
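As a starting point for the questions above, the JVM itself will report how much of the machine it can see. The class name below is just for illustration; the `Runtime` methods are standard JDK API:

```java
public class ResourceProbe {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // Number of cores the JVM can schedule threads on
        System.out.println("cores: " + rt.availableProcessors());
        // Maximum heap this JVM will attempt to grow to (set with -Xmx)
        System.out.println("max heap bytes: " + rt.maxMemory());
    }
}
```

Running this with `-Xmx900g` on the machine described would tell you immediately whether the OS and JVM actually grant that much address space.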
Thanks, Galen
A single instance can try to access all the memory; however, NUMA regions mean that operations such as GC perform badly when accessing memory in another region. This is getting faster, and the JVM has some NUMA support, but it needs to improve if you want scalability. Even so, you can have 256 GB of heap and use 700 GB of native/direct memory without this issue. ;)
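The native/direct memory mentioned above is typically reached through direct `ByteBuffer`s, which live outside the Java heap and are not walked by the garbage collector. A minimal sketch (the 64 MB size is just an example; each individual buffer is still capped at `Integer.MAX_VALUE` bytes):

```java
import java.nio.ByteBuffer;

public class OffHeapDemo {
    public static void main(String[] args) {
        // allocateDirect reserves memory outside the GC-managed heap,
        // so large allocations here do not inflate GC pause times.
        ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024 * 1024); // 64 MB
        buf.putLong(0, 42L);
        System.out.println(buf.getLong(0)); // 42
    }
}
```

The total amount of direct memory is controlled separately from the heap, via `-XX:MaxDirectMemorySize`.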
The biggest limitation if you have loads of memory is that arrays, collections and ByteBuffer (for memory-mapped files) are all limited to about 2 billion elements (2^31 − 1), because they are indexed by `int`.
You can work around these problems with custom collections, but it's really something Java should support out of the box, IMHO.
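One common shape for such a custom collection is a `long`-indexed array backed by fixed-size chunks of ordinary `int`-indexed arrays. `BigLongArray` below is a hypothetical name, not a JDK class; it is a sketch of the chunking idea, not a production implementation:

```java
// A logical array indexed by long, backed by fixed-size chunks.
public class BigLongArray {
    private static final int CHUNK_BITS = 27;              // 2^27 longs = 1 GB per chunk
    private static final int CHUNK_SIZE = 1 << CHUNK_BITS;
    private static final int CHUNK_MASK = CHUNK_SIZE - 1;

    private final long[][] chunks;
    private final long length;

    public BigLongArray(long length) {
        this.length = length;
        int n = (int) ((length + CHUNK_SIZE - 1) >>> CHUNK_BITS);
        chunks = new long[n][];
        for (int i = 0; i < n; i++) {
            // The last chunk may be shorter than CHUNK_SIZE.
            long remaining = length - ((long) i << CHUNK_BITS);
            chunks[i] = new long[(int) Math.min(CHUNK_SIZE, remaining)];
        }
    }

    public long get(long index) {
        return chunks[(int) (index >>> CHUNK_BITS)][(int) (index & CHUNK_MASK)];
    }

    public void set(long index, long value) {
        chunks[(int) (index >>> CHUNK_BITS)][(int) (index & CHUNK_MASK)] = value;
    }

    public long length() { return length; }

    public static void main(String[] args) {
        BigLongArray a = new BigLongArray(10);
        a.set(5, 7L);
        System.out.println(a.get(5)); // 7
    }
}
```

Because each chunk is an ordinary `long[]` of at most 2^27 elements, no single allocation ever hits the 2^31 − 1 limit, while the logical array can address far more than 2 billion entries.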
BTW: You can buy a Dell R910 with 1 TB of memory and 24 cores/48 threads with Ubuntu for £40K.
BTW: I only have experience of JVMs up to 40 GB in size.
First of all, the Java program itself matters: poorly designed code won't make use of that much computing power. Badly implemented threading, for example, can leave most of the 64 cores idle while contended locks serialize the work.
The OS is a limiting factor too: not every OS handles that amount of memory well.
I think the JVM can deal with that amount of memory, as long as the OS supports it.