Writing to the same memory location, is this possible?
Consider the following:
ThreadA and ThreadB are two threads writing diagnostic information to a common object which stores a list of diagnostic entries. Is it possible for ThreadA and ThreadB to write to the same memory address at the same time? If so, what would result?
I'm using .NET; however, I'm not necessarily interested in an answer specific to one particular language.
Corruption
Irrespective of the system [concurrent or truly parallel], the state of the memory depends on the implementation of the memory device. Generally speaking, memory reads and writes are not atomic, which means multiple concurrent accesses to the same memory address may return inconsistent results [i.e. data corruption].
Imagine two concurrent requests on a simple integer value: one write and one read. Let's say an integer is 4 bytes, a read takes 2 ns to execute, and a write takes 4 ns.
- t0, Initial value of the underlying 4-byte tuple: [0, 0, 0, 0]
- t1, Write op begins, writes first byte: [255, 0, 0, 0]
- t2, Write op continues, writes second byte: [255, 255, 0, 0]
- t2, Read op begins, reads first 2 bytes: [255, 255, -, -]
- t3, Write op continues, writes third byte: [255, 255, 255, 0]
- t3, Read op ends, reads last 2 bytes: [255, 255, 255, 0]
- t4, Write op ends, writes fourth byte: [255, 255, 255, 255]
The value returned by the read is neither the original nor the new value. The value is completely corrupted.
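To make this concrete in .NET terms, here is a minimal C# sketch of the same idea (my own illustration, not part of the original example). The CLR only guarantees atomic reads and writes for values up to the native word size, so when run as a 32-bit process the unsynchronized 64-bit field below can eventually yield a "torn" value that neither thread ever wrote; on a 64-bit process it will likely never show.

```csharp
using System;
using System.Threading;

class TornReadDemo
{
    // Written by one thread and read by another with no synchronization.
    // The CLR only guarantees atomic reads/writes for values up to the
    // native word size, so on a 32-bit process this 64-bit field can be
    // observed "half old, half new".
    static long shared;

    static void Main()
    {
        const long A = 0L;   // all bits clear
        const long B = -1L;  // all bits set (0xFFFFFFFFFFFFFFFF)

        var writer = new Thread(() =>
        {
            while (true)
            {
                shared = A;   // plain writes, not atomic on 32-bit
                shared = B;
            }
        }) { IsBackground = true };
        writer.Start();

        while (true)
        {
            long observed = shared;              // unsynchronized read
            if (observed != A && observed != B)  // a value nobody ever wrote
            {
                Console.WriteLine("Torn read: 0x" + observed.ToString("X16"));
                return;
            }
        }
    }
}
```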
And what it means to you!
Admittedly, that is an incredibly simplified and contrived example, but what possible effect could this have in your scenario? In my opinion, the most vulnerable piece of your diagnostics system is the list of diagnostics data.
If your list is of fixed size, say an array of references to objects, at best you may lose whole objects as array elements are overwritten by competing threads; at worst you get a segfault/access violation if an element contains a corrupted object reference [as in the corruption scenario above].
If your list is dynamic, then it is possible the underlying data structure becomes corrupted [if it is backed by an array, as .Net List<> is, corruption can occur when the array is re-allocated; if it is a linked list, your next/prev references can become lost or corrupted].
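As a rough illustration of that failure mode (a sketch of my own; the entry strings and counts are arbitrary), the snippet below hammers an unsynchronized .Net List<> from many threads. Depending on timing it either throws from inside the list's re-allocation logic or silently drops entries; neither outcome is guaranteed on any given run.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class UnsafeListDemo
{
    static void Main()
    {
        var diagnostics = new List<string>();   // List<> is not thread-safe

        try
        {
            // Many writers hammering the same list with no synchronization.
            Parallel.For(0, 100000, i => diagnostics.Add("entry " + i));
        }
        catch (AggregateException ex)
        {
            // The list's internal array re-allocation can be caught mid-flight,
            // typically surfacing as an IndexOutOfRange/ArgumentException.
            Console.WriteLine("List corrupted: " + ex.InnerException.GetType().Name);
        }

        // Even when nothing throws, entries are often silently lost.
        Console.WriteLine("Expected 100000, got " + diagnostics.Count);
    }
}
```

Wrapping each Add in a lock on a shared object, or switching to a concurrent collection, makes the same run behave deterministically.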
As an aside
Why isn't memory access atomic? For the same reason base collection implementations are not atomic - it would be too restrictive and introduce overhead, effectively penalizing simple scenarios. Therefore it is left to the consumer [us!] to synchronize our own memory accesses.
"The same time" is only possible to a certain granularity - at some point the actual writing to the memory array will get serialized. At that time the value at that address will be that of whichever write happened most recently. You could have some funny behaviour on a multiprocessor system. If each thread were running on a different processor, each with its own cache, each thread might only ever see the results of its own write, and never even know the other thread tried.
In your question you refer to both an object and a memory location. I'm going to assume you mean a memory location, as objects might do things differently (allocating new memory or whatever). You can have both threads write to the same address, but the results are not predictable: if you execute both threads and look at the result afterwards, you cannot rely on a particular outcome. If you need to do this, you have to protect the write operation with a mutex or critical section or whatever. If it's an object, the object might be protecting itself to be thread-safe. Hard to say without more details ...
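In C# terms, "protect the write operation with a mutex or critical section" might look like the small wrapper below (a sketch with made-up names, not a prescription): every access to the shared list goes through the same lock.

```csharp
using System.Collections.Generic;

class DiagnosticsLog
{
    private readonly object _gate = new object();        // our critical section
    private readonly List<string> _entries = new List<string>();

    // Every mutation of the shared list happens while holding the lock,
    // so two threads can never be inside Add at the same time.
    public void Add(string message)
    {
        lock (_gate)
        {
            _entries.Add(message);
        }
    }

    // Reads take the same lock, so they never observe a half-updated list.
    public string[] Snapshot()
    {
        lock (_gate)
        {
            return _entries.ToArray();
        }
    }
}
```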
Mostly, it would be an app crash or corrupted data. Imagine a mix of atomic operations from two different threads interleaving with each other.
Writing to the same global memory address at exactly the same time isn't possible by hardware design! What is possible is a "fictive" write when each CPU core has its own cache, but the result is unpredictable; that is why the "LOCK" instruction prefix was invented, which guarantees unimpeded, atomic access to a global memory address.
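For what it's worth, in .NET the usual way to reach those atomic instructions from managed code is the Interlocked class. A tiny sketch (the counter name is just illustrative):

```csharp
using System.Threading;

static class DiagnosticCounters
{
    private static long _errors;

    // Interlocked maps to the CPU's atomic (LOCK-prefixed on x86) instructions,
    // so concurrent increments are never lost or torn.
    public static void RecordError() => Interlocked.Increment(ref _errors);

    // Reads the 64-bit value atomically, even in a 32-bit process.
    public static long ErrorCount => Interlocked.Read(ref _errors);
}
```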
There have been some very good answers here that explain what can go wrong with multiple threads writing to shared memory addresses so I won't revisit that material.
To guard against those problems some very useful mechanisms have been developed to create robust multi-threaded systems by design.
When multiple threads need to submit data to a common destination, it can be very effective (and generally language-agnostic) to use an OS queue (or, if one is not available on your OS, a pipe whose writes are protected by a mutex). Another thread can block reading from the queue and process data as it arrives in a nicely synchronized fashion.
As long as the queue depth (or pipe buffer) is sufficiently large to prevent writes from blocking, the threads would not suffer a performance hit save for the slight overhead of acquiring and releasing the mutex.
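A hedged .NET sketch of that pattern, using BlockingCollection<> as the queue (any thread-safe blocking queue would do; the names and sizes here are arbitrary):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class DiagnosticsQueueDemo
{
    static void Main()
    {
        // Bounded queue: producers only block if the consumer falls far behind.
        var queue = new BlockingCollection<string>(boundedCapacity: 10000);

        // A single consumer drains the queue and is the only code that touches
        // the real destination (list, file, socket, ...), so no locking is
        // needed around that destination.
        var consumer = Task.Run(() =>
        {
            foreach (var entry in queue.GetConsumingEnumerable())
                Console.WriteLine(entry);        // stand-in for "store diagnostics"
        });

        // Any number of producer threads can submit entries concurrently.
        Parallel.For(0, 4, worker =>
        {
            for (int i = 0; i < 5; i++)
                queue.Add("worker " + worker + ": event " + i);
        });

        queue.CompleteAdding();                  // signal "no more items"
        consumer.Wait();
    }
}
```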