Why can't the JVM add synchronized/volatile/Lock at runtime?
Since all Java applications are eventually run by the JVM, why can't the JVM wrap single-threaded code into multi-threaded code at runtime, depending on how many threads are running or accessing a given part of the code? The JVM is surely aware of the number of running threads, and it surely knows which classes are Threads and which parts of the code can be accessed by multiple threads.
What are the reasons this cannot be implemented or what can make this complex?
Simply spraying synchronized/volatile/Lock on anything that's used by multiple threads does not result in correct multi-threaded behavior. How would the runtime know the correct granularity of locks, for example? How would it avoid deadlocks?
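To make the deadlock point concrete, here is a minimal sketch (the Account class and transferTo method are hypothetical, not from any library): even if a runtime wrapped every method in synchronized, the lock acquisition order would still depend on the calling code, so two transfers in opposite directions can deadlock.

```java
// Hypothetical example: if the runtime wrapped every method in synchronized,
// two "transfer" calls running in opposite directions could still deadlock.
class Account {
    private int balance;

    Account(int balance) { this.balance = balance; }

    // Imagine the runtime added "synchronized" here automatically.
    synchronized void withdraw(int amount) { balance -= amount; }
    synchronized void deposit(int amount)  { balance += amount; }

    // Locks "this" first, then "to" -- the lock order depends on the caller.
    synchronized void transferTo(Account to, int amount) {
        withdraw(amount);
        synchronized (to) {          // second lock acquired while holding the first
            to.deposit(amount);
        }
    }
}

public class DeadlockDemo {
    public static void main(String[] args) {
        Account a = new Account(100);
        Account b = new Account(100);
        // Thread 1 locks a then b; thread 2 locks b then a -> possible deadlock.
        new Thread(() -> { for (int i = 0; i < 10_000; i++) a.transferTo(b, 1); }).start();
        new Thread(() -> { for (int i = 0; i < 10_000; i++) b.transferTo(a, 1); }).start();
    }
}
```

No amount of automatically inserted locking fixes this; avoiding it requires a consistent lock ordering that only the programmer (or a much smarter analysis) can know about.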
The early collections classes, e.g. Vector and Hashtable, were designed with a similarly naive view of concurrency: everything was synchronized. It turns out that you could still get into trouble quite easily, however. For example, suppose you wanted to check that a Vector contained at least one element, and if so remove one. Each of the calls to the Vector would be synchronized, but another thread could execute between those calls, so you could still end up with race condition bugs. (This is what I was referring to when I mentioned granularity of locks earlier.)
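A runnable sketch of that race (the class name VectorRace is just for illustration): both Vector calls are individually synchronized, but the gap between the check and the removal is not, so with unlucky timing one thread can fail.

```java
import java.util.Vector;

// Illustrative sketch of the check-then-act race described above:
// both Vector calls are individually synchronized, but the gap between
// them is not, so another thread can empty the vector in between.
public class VectorRace {
    public static void main(String[] args) {
        Vector<String> queue = new Vector<>();
        queue.add("only element");

        Runnable consumer = () -> {
            if (!queue.isEmpty()) {            // synchronized check
                // another thread may remove the element right here
                String item = queue.remove(0); // synchronized act -> may throw
                System.out.println(Thread.currentThread().getName() + " got " + item);
            }
        };

        // With unlucky timing one thread sees a non-empty vector, the other
        // removes the element first, and remove(0) throws ArrayIndexOutOfBoundsException.
        new Thread(consumer, "t1").start();
        new Thread(consumer, "t2").start();
    }
}
```

The fix is to hold one lock around the whole check-and-remove sequence, which is exactly the kind of application-level knowledge a runtime cannot infer on its own.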
Not possible in general for the JVM
Automatically adding synchronization usually does not have a positive effect. Synchronization costs both performance and memory: performance, because the processor must check and acquire the underlying locks, and memory, because the locks must be stored somewhere. If the runtime added locks everywhere, the program would effectively run single-threaded (because every method could only be entered by one thread at a time), but now with higher CPU cost and more memory load (because of the lock handling).
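As a hedged illustration of that cost (GLOBAL_LOCK and doWork are made-up names, not anything the JVM actually inserts): if the runtime guarded all the work below with one lock, the result would be correct, but the two threads would simply take turns while still paying for every lock acquisition.

```java
// Sketch of what "locks everywhere" would mean in practice: if the runtime
// guarded every method with a lock, the two worker threads below would simply
// take turns, so the work is serialized but the locking cost is still paid.
public class SerializedWorkers {
    private static final Object GLOBAL_LOCK = new Object();
    private static long counter = 0;

    static void doWork() {
        synchronized (GLOBAL_LOCK) {      // pretend the runtime inserted this
            for (int i = 0; i < 1_000; i++) {
                counter++;                // no parallelism possible inside the lock
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1_000; i++) doWork(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1_000; i++) doWork(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter);      // correct, but effectively single-threaded
    }
}
```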
The JVM can remove locks automatically
Usually the Java runtime does not have enough information to add locks in a clever way. But it can do the opposite: with so-called escape analysis it can prove that an object never escapes a certain code block (and is never shared with another thread). If that is the case, several optimizations are applied; one of them is that the VM removes all synchronization on that object (lock elision).
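A small sketch of the kind of code where this applies (the method is purely illustrative): StringBuffer's methods are synchronized, but the buffer never escapes the method, so the JIT's escape analysis is free to elide those locks.

```java
// Sketch of the lock-elision case: StringBuffer's methods are synchronized,
// but the buffer below never escapes this method, so escape analysis lets the
// JIT drop the locking entirely (an optimization we do not control directly).
public class LockElision {
    static String greeting(String name) {
        StringBuffer sb = new StringBuffer();  // never visible to another thread
        sb.append("Hello, ");                  // synchronized, but the lock can be elided
        sb.append(name);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(greeting("world"));
    }
}
```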
Database engines can do it
There are systems that have enough information to apply locks automatically: database management systems. The more sophisticated database engines use a technique called multiversion concurrency control (MVCC). With this technique, locks are only needed for writing data, not for reading it. So fewer locks are needed than with a traditional approach, and more code can run in parallel. But this comes at a cost: sometimes the degree of parallelism becomes too high and the system ends up in an inconsistent state. The system then undoes some of the changes and repeats them at a later time.
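The JVM's standard library has a small-scale analogue of this optimistic idea: java.util.concurrent.locks.StampedLock lets readers proceed without taking a lock and only fall back if a writer got in between. The sketch below (the OptimisticPoint class is illustrative) is not MVCC, but it shows the same "read without locking, validate afterwards" spirit.

```java
import java.util.concurrent.locks.StampedLock;

// A rough JVM-side analogue of the optimistic idea described above:
// readers proceed without a lock and only re-read if a writer intervened,
// similar in spirit to how MVCC avoids read locks (the class is illustrative).
public class OptimisticPoint {
    private final StampedLock lock = new StampedLock();
    private double x, y;

    void move(double dx, double dy) {
        long stamp = lock.writeLock();          // writers still need a lock
        try { x += dx; y += dy; } finally { lock.unlockWrite(stamp); }
    }

    double distanceFromOrigin() {
        long stamp = lock.tryOptimisticRead();  // no lock taken for the read
        double curX = x, curY = y;
        if (!lock.validate(stamp)) {            // a writer got in between: retry pessimistically
            stamp = lock.readLock();
            try { curX = x; curY = y; } finally { lock.unlockRead(stamp); }
        }
        return Math.hypot(curX, curY);
    }
}
```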
Automatic locks with STM and Clojure
This approach can be brought to the JVM in a (to some degree) automatic way. It is then called software transactional memory (STM). This is very close to your idea of automatic locks, and it leaves enough room for parallelism to be useful. On the JVM, the language Clojure uses software transactional memory.
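This is not Clojure's actual API, but as a rough Java sketch of the transactional retry idea (the TinyRef class and alter method are made-up names, loosely echoing Clojure's ref/alter): a pure function is applied to a snapshot of the value, and the update is retried automatically if another thread committed first.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.UnaryOperator;

// Not Clojure's actual STM API -- just a tiny Java sketch of the underlying
// "optimistic transaction" idea: apply a pure function to a snapshot and
// retry automatically if another thread committed a change in the meantime.
public class TinyRef<T> {
    private final AtomicReference<T> value;

    public TinyRef(T initial) { value = new AtomicReference<>(initial); }

    public T deref() { return value.get(); }

    // Retries until the compare-and-set succeeds, like a transaction retry.
    public T alter(UnaryOperator<T> fn) {
        while (true) {
            T current = value.get();
            T next = fn.apply(current);
            if (value.compareAndSet(current, next)) {
                return next;
            }
            // someone else committed first -> loop and retry with a fresh value
        }
    }

    public static void main(String[] args) throws InterruptedException {
        TinyRef<Integer> counter = new TinyRef<>(0);
        Runnable inc = () -> { for (int i = 0; i < 10_000; i++) counter.alter(n -> n + 1); };
        Thread t1 = new Thread(inc), t2 = new Thread(inc);
        t1.start(); t2.start(); t1.join(); t2.join();
        System.out.println(counter.deref());   // always 20000, no explicit locks
    }
}
```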
So while the JVM cannot add locks automatically in general, Clojure enables this to a certain degree. Try it and see how well it serves you.
I can think of the following reason: applications may use static variables, so two applications that partially share the classes they use could interfere with each other by changing shared state.
What you actually want is implemented by a Java EE container that runs several applications simultaneously. It seems that you are suggesting a "JSE container" (I have no idea whether this term exists). Try suggesting it to Oracle. It could be a cool JSR!