When to use volatile with multithreading?
If there are two threads accessing a global variable, many tutorials say to make the variable volatile to prevent the compiler caching it in a register and it thus not getting updated correctly. However, two threads both accessing a shared variable is something that calls for protection via a mutex, isn't it? And in that case, between the thread locking and releasing the mutex, the code is in a critical section where only that one thread can access the variable, in which case the variable doesn't need to be volatile?
So what is the use/purpose of volatile in a multi-threaded program?
Short & quick answer: volatile is (nearly) useless for platform-agnostic, multithreaded application programming. It does not provide any synchronization, it does not create memory fences, nor does it ensure the order of execution of operations. It does not make operations atomic. It does not make your code magically thread-safe. volatile may be the single most misunderstood facility in all of C++. See this, this and this for more information about volatile.
On the other hand, volatile does have some use that may not be so obvious. It can be used much in the same way one would use const to help the compiler show you where you might be making a mistake in accessing some shared resource in a non-protected way. This use is discussed by Alexandrescu in this article. However, this is basically using the C++ type system in a way that is often viewed as a contrivance and can evoke Undefined Behavior.
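For illustration, here is a minimal sketch of that Alexandrescu-style idea (the names are my own, not from the article): the shared object is declared volatile so that unsynchronized member access fails to compile, and a small wrapper that holds the mutex casts volatile away. The cast is the contrived part, and accessing an object actually defined volatile through a non-volatile reference is where the Undefined Behavior concern comes from.
#include <mutex>
struct Widget {
    void update() {}                 // only callable through a non-volatile Widget&
};
template <class T>
class LockedRef {                    // hypothetical helper: holds the lock for its lifetime
    std::lock_guard<std::mutex> guard_;
    T& obj_;
public:
    LockedRef(volatile T& obj, std::mutex& m)
        : guard_(m), obj_(const_cast<T&>(obj)) {}   // casting away volatile: only "safe" while locked
    T* operator->() { return &obj_; }
};
volatile Widget shared_widget;       // a bare shared_widget.update() will not compile
std::mutex widget_mutex;
void safe_use() {
    LockedRef<Widget>(shared_widget, widget_mutex)->update();
}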
volatile was specifically intended to be used when interfacing with memory-mapped hardware, signal handlers, and the setjmp/longjmp facility. This makes volatile directly applicable to systems-level programming rather than normal applications-level programming.
The 2003 C++ Standard does not say that volatile applies any kind of Acquire or Release semantics on variables. In fact, the Standard is completely silent on all matters of multithreading. However, specific platforms do apply Acquire and Release semantics on volatile variables.
[Update for C++11]
The C++11 Standard now does acknowledge multithreading directly in the memory model and the language, and it provides library facilities to deal with it in a platform-independent way. However, the semantics of volatile still have not changed. volatile is still not a synchronization mechanism. Bjarne Stroustrup says as much in TCPPPL4E:
Do not use volatile except in low-level code that deals directly with hardware.
Do not assume volatile has special meaning in the memory model. It does not. It is not -- as in some later languages -- a synchronization mechanism. To get synchronization, use atomic, a mutex, or a condition_variable.
[/End update]
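As a concrete illustration of the "volatile does not make operations atomic" point above, here is a small sketch (illustrative, not from the quoted text): the volatile increment is still a racy read-modify-write, while the std::atomic increment is indivisible.
#include <atomic>
#include <iostream>
#include <thread>
volatile int v_counter = 0;          // volatile: loads/stores not optimized away, but ++ is NOT atomic
std::atomic<int> a_counter{0};       // atomic: fetch_add is a single indivisible RMW
void worker() {
    for (int i = 0; i < 100000; ++i) {
        ++v_counter;                                        // data race: updates can be lost
        a_counter.fetch_add(1, std::memory_order_relaxed);  // always counts correctly
    }
}
int main() {
    std::thread t1(worker), t2(worker);
    t1.join(); t2.join();
    // v_counter is typically less than 200000; a_counter is exactly 200000
    std::cout << v_counter << ' ' << a_counter << '\n';
}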
The above all applies to the C++ language itself, as defined by the 2003 Standard (and now the 2011 Standard). Some specific platforms, however, do add additional functionality or restrictions to what volatile does. For example, in MSVC 2010 (at least) Acquire and Release semantics do apply to certain operations on volatile variables. From the MSDN:
When optimizing, the compiler must maintain ordering among references to volatile objects as well as references to other global objects. In particular,
A write to a volatile object (volatile write) has Release semantics; a reference to a global or static object that occurs before a write to a volatile object in the instruction sequence will occur before that volatile write in the compiled binary.
A read of a volatile object (volatile read) has Acquire semantics; a reference to a global or static object that occurs after a read of volatile memory in the instruction sequence will occur after that volatile read in the compiled binary.
However, note that if you follow the above link, there is some debate in the comments as to whether or not Acquire and Release semantics actually apply in this case.
In C++11, don't use volatile for threading, only for MMIO
But TL;DR, it does "work" sort of like atomic with mo_relaxed on hardware with coherent caches (i.e. everything); it is sufficient to stop compilers keeping vars in registers. atomic doesn't need memory barriers to create atomicity or inter-thread visibility, only to make the current thread wait before/after an operation to create ordering between this thread's accesses to different variables. mo_relaxed never needs any barriers, just load, store, or RMW.
For roll-your-own atomics with volatile (and inline asm for barriers) in the bad old days before C++11 std::atomic, volatile was the only good way to get some things to work. But it depended on a lot of assumptions about how implementations worked and was never guaranteed by any standard.
For example, the Linux kernel still uses its own hand-rolled atomics with volatile, but only supports a few specific C implementations (GNU C, clang, and maybe ICC). Partly that's because of GNU C extensions and inline asm syntax and semantics, but also because it depends on some assumptions about how compilers work.
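Roughly, the pattern such hand-rolled atomics rely on looks like the sketch below (a simplified C++ rendering of the idea behind the kernel's READ_ONCE / WRITE_ONCE; the real macros are more involved, and none of this is guaranteed by ISO C++ -- it leans on compiler behaviour and coherent hardware):
template <class T>
T read_once(const T& x) {            // force a real load on every call
    return *const_cast<const volatile T*>(&x);
}
template <class T>
void write_once(T& x, T v) {         // force a real store
    *const_cast<volatile T*>(&x) = v;
}
int shared_flag = 0;                 // plain object, accessed through volatile-qualified glvalues
void wait_for_flag() {
    while (!read_once(shared_flag)) {
        // spin; real code would add barriers / cpu_relax(), omitted in this sketch
    }
}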
It's almost always the wrong choice for new projects; you can use std::atomic (with std::memory_order_relaxed) to get a compiler to emit the same efficient machine code you could with volatile. std::atomic with mo_relaxed obsoletes volatile for threading purposes. (except maybe to work around missed-optimization bugs with atomic<double> on some compilers.)
The internal implementation of std::atomic on mainstream compilers (like gcc and clang) does not just use volatile internally; compilers directly expose atomic load, store and RMW builtin functions. (e.g. GNU C __atomic builtins which operate on "plain" objects.)
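For example (a hedged illustration of those GCC/Clang extensions, not ISO C++), the __atomic builtins operate directly on ordinary objects:
int plain_counter = 0;               // an ordinary int, not std::atomic
void builtin_demo() {
    __atomic_store_n(&plain_counter, 1, __ATOMIC_RELAXED);       // relaxed atomic store
    int v = __atomic_load_n(&plain_counter, __ATOMIC_ACQUIRE);   // acquire load
    __atomic_fetch_add(&plain_counter, v, __ATOMIC_SEQ_CST);     // seq_cst read-modify-write
}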
Volatile is usable in practice (but don't do it)
That said, volatile is usable in practice for things like an exit_now flag on all(?) existing C++ implementations on real CPUs, because of how CPUs work (coherent caches) and shared assumptions about how volatile should work. But not much else, and it is not recommended. The purpose of this answer is to explain how existing CPUs and C++ implementations actually work. If you don't care about that, all you need to know is that std::atomic with mo_relaxed obsoletes volatile for threading.
(The ISO C++ standard is pretty vague on it, just saying that volatile accesses should be evaluated strictly according to the rules of the C++ abstract machine, not optimized away. Given that real implementations use the machine's memory address space to model C++ address space, this means volatile reads and assignments have to compile to load/store instructions to access the object representation in memory.)
As another answer points out, an exit_now flag is a simple case of inter-thread communication that doesn't need any synchronization: it's not publishing that array contents are ready or anything like that. Just a store that's noticed promptly by a not-optimized-away load in another thread.
// global
bool exit_now = false;
// in one thread
while (!exit_now) { do_stuff; }
// in another thread, or signal handler in this thread
exit_now = true;
Without volatile or atomic, the as-if rule and the assumption of no data-race UB allow a compiler to optimize it into asm that only checks the flag once, before entering (or not) an infinite loop. This is exactly what happens in real life with real compilers. (And they usually optimize away much of do_stuff too, because the loop never exits, so any later code that might have used the result is not reachable if we enter the loop.)
// Optimizing compilers transform the loop into asm like this
if (!exit_now) { // check once before entering loop
while(1) do_stuff; // infinite loop
}
Multithreading program stuck in optimized mode but runs normally in -O0 is an example (with description of GCC's asm output) of how exactly this happens with GCC on x86-64. Also MCU programming - C++ O2 optimization breaks while loop on electronics.SE shows another example.
We normally want aggressive optimizations that CSE and hoist loads out of loops, including for global variables.
Before C++11, volatile bool exit_now was one way to make this work as intended (on normal C++ implementations). But in C++11, data-race UB still applies to volatile, so it's not actually guaranteed by the ISO standard to work everywhere, even assuming HW coherent caches.
Note that for wider types, volatile gives no guarantee of lack of tearing. I ignored that distinction here for bool because it's a non-issue on normal implementations. But that's also part of why volatile is still subject to data-race UB instead of being equivalent to relaxed atomic.
Note that "as intended" doesn't mean the thread doing exit_now waits for the other thread to actually exit. Or even that it waits for the volatile exit_now=true store to be globally visible before continuing to later operations in this thread. (atomic<bool> with the default mo_seq_cst would make it wait before any later seq_cst loads at least. On many ISAs you'd just get a full barrier after the store.)
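To make that concrete, a sketch (exact code-gen varies by compiler and ISA; the comments show typical x86 output):
#include <atomic>
std::atomic<bool> exit_now_flag{false};                   // illustrative name
void request_exit_seq_cst() {
    exit_now_flag.store(true);                            // default mo_seq_cst:
}                                                         // typically xchg (or mov + mfence) on x86
void request_exit_relaxed() {
    exit_now_flag.store(true, std::memory_order_relaxed); // plain mov on x86, no barrier
}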
C++11 provides a non-UB way that compiles the same
A "keep running" or "exit now" flag should use std::atomic<bool> flag with mo_relaxed. Using
flag.store(true, std::memory_order_relaxed)
while( !flag.load(std::memory_order_relaxed) ) { ... }
will give you the exact same asm (with no expensive barrier instructions) that you'd get from volatile flag.
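Put together as a self-contained sketch (names illustrative):
#include <atomic>
#include <chrono>
#include <thread>
std::atomic<bool> flag{false};       // "exit now" flag shared between threads
void worker() {
    while (!flag.load(std::memory_order_relaxed)) {
        // do_stuff;                 // the load is re-done every iteration, no barriers emitted
    }
}
int main() {
    std::thread t(worker);
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    flag.store(true, std::memory_order_relaxed);   // noticed promptly by the worker thread
    t.join();
}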
As well as no-tearing, atomic also gives you the ability to store in one thread and load in another without UB, so the compiler can't hoist the load out of a loop. (The assumption of no data-race UB is what allows the aggressive optimizations we want for non-atomic non-volatile objects.) This feature of atomic<T> is pretty much the same as what volatile does for pure loads and pure stores.
atomic<T> also makes += and so on into atomic RMW operations (significantly more expensive than an atomic load into a temporary, operate, then a separate atomic store. If you don't want an atomic RMW, write your code with a local temporary).
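A quick sketch of that distinction:
#include <atomic>
std::atomic<int> counter{0};
void rmw_vs_separate() {
    counter += 1;                     // one atomic RMW (e.g. lock add on x86): indivisible but costlier
    int tmp = counter.load(std::memory_order_relaxed);   // separate load into a local temporary,
    tmp += 1;                                             // plain modification,
    counter.store(tmp, std::memory_order_relaxed);        // separate store: cheaper, but other threads
}                                                         // can update counter in between (not atomic as a whole)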
With the default seq_cst ordering you'd get from while(!flag), it also adds ordering guarantees wrt. non-atomic accesses, and to other atomic accesses.
(In theory, the ISO C++ standard doesn't rule out compile-time optimization of atomics. But in practice compilers don't, because there's no way to control when that wouldn't be ok. There are a few cases where even volatile atomic<T> might not be enough control over optimization of atomics if compilers did optimize, so for now compilers don't. See Why don't compilers merge redundant std::atomic writes? Note that wg21/p0062 recommends against using volatile atomic in current code to guard against optimization of atomics.)
volatile does actually work for this on real CPUs (but still don't use it)
even with weakly-ordered memory models (non-x86). But don't actually use it, use atomic<T> with mo_relaxed instead!! The point of this section is to address misconceptions about how real CPUs work, not to justify volatile. If you're writing lockless code, you probably care about performance. Understanding caches and the costs of inter-thread communication is usually important for good performance.
Real CPUs have coherent caches / shared memory: after a store from one core becomes globally visible, no other core can load a stale value. (See also Myths Programmers Believe about CPU Caches, which talks some about Java volatiles, equivalent to C++ atomic<T> with seq_cst memory order.)
When I say load, I mean an asm instruction that accesses memory. That's what a volatile access ensures, and it is not the same thing as lvalue-to-rvalue conversion of a non-atomic / non-volatile C++ variable (e.g. local_tmp = flag or while(!flag)).
The only thing you need to defeat is compile-time optimization that doesn't reload the flag at all after the first check. Any load+check on each iteration is sufficient, without any ordering. Without synchronization between this thread and the main thread, it's not meaningful to talk about when exactly the store happened, or about ordering of the load wrt. other operations in the loop. Only when it becomes visible to this thread matters: when you see the exit_now flag set, you exit. Inter-core latency on a typical x86 Xeon can be something like 40ns between separate physical cores.
In theory: C++ threads on hardware without coherent caches
In theory you could have a C++ implementation on a machine that wasn't like this, requiring compiler-generated explicit flushes to make things visible to other threads on other cores (or for reads not to use a maybe-stale copy). I don't see any way this could be remotely efficient with just pure ISO C++, short of requiring the programmer to do explicit flushes in the source code. The C++ standard doesn't make this impossible, but C++'s memory model is designed around being efficient on coherent shared-memory machines. E.g. the C++ standard even talks about "read-read coherence", "write-read coherence", etc. One note in the standard even points out the connection to hardware:
http://eel.is/c++draft/intro.races#19
[ Note: The four preceding coherence requirements effectively disallow compiler reordering of atomic operations to a single object, even if both operations are relaxed loads. This effectively makes the cache coherence guarantee provided by most hardware available to C++ atomic operations. — end note ]
There's no mechanism for a release store to only flush itself and a few select address ranges: it would have to sync everything because it wouldn't know what other threads might want to read if their acquire-load saw this release-store (forming a release-sequence that establishes a happens-before relationship across threads, guaranteeing that earlier non-atomic operations done by the writing thread are now safe to read. Unless it did further writes to them after the release store...) Or compilers would have to be really smart to prove that only a few cache lines needed flushing.
Related: my answer on Is mov + mfence safe on NUMA? goes into detail about the non-existence of x86 systems without coherent shared memory. Also related: Loads and stores reordering on ARM for more about loads/stores to the same location.
There are I think clusters with non-coherent shared memory, but they're not single-system-image machines. Each coherency domain runs a separate kernel, so you can't run threads of a single C++ program across it. Instead you run separate instances of the program (each with their own address space: pointers in one instance aren't valid in the other).
To get them to communicate with each other via explicit flushes, you'd typically use MPI or other message-passing API to make the program specify which address ranges need flushing.
Real hardware doesn't run std::thread across cache coherency boundaries:
Some asymmetric ARM chips exist, with shared physical address space but not inner-shareable cache domains. So not coherent. (e.g. comment thread: an A8 core and a Cortex-M3 like TI Sitara AM335x).
But different kernels would run on those cores, not a single system image that could run threads across both cores. I'm not aware of any C++ implementations that run std::thread threads across CPU cores without coherent caches.
For ARM specifically, GCC and clang generate code assuming all threads run in the same inner-shareable domain. In fact, the ARMv7 ISA manual says
This architecture (ARMv7) is written with an expectation that all processors using the same operating system or hypervisor are in the same Inner Shareable shareability domain
So non-coherent shared memory between separate domains is only a thing for explicit system-specific use of shared memory regions for communication between different processes under different kernels.
See also this CoreCLR discussion about code-gen using dmb ish (Inner Shareable barrier) vs. dmb sy (System) memory barriers in that compiler.
I make the assertion that no C++ implementation for any other ISA runs std::thread across cores with non-coherent caches. I don't have proof that no such implementation exists, but it seems highly unlikely. Unless you're targeting a specific exotic piece of HW that works that way, your thinking about performance should assume MESI-like cache coherency between all threads. (Preferably use atomic<T> in ways that guarantee correctness, though!)
Coherent caches make it simple
But on a multi-core system with coherent caches, implementing a release-store just means ordering commit into cache for this thread's stores, not doing any explicit flushing. (https://preshing.com/20120913/acquire-and-release-semantics/ and https://preshing.com/20120710/memory-barriers-are-like-source-control-operations/). (And an acquire-load means ordering access to cache in the other core).
A memory barrier instruction just blocks the current thread's loads and/or stores until the store buffer drains; that always happens as fast as possible on its own. (Or for LoadLoad / LoadStore barriers, it blocks until previous loads have completed.) (Does a memory barrier ensure that the cache coherence has been completed? addresses this misconception.) So if you don't need ordering, just prompt visibility in other threads, mo_relaxed is fine. (And so is volatile, but don't do that.)
See also C/C++11 mappings to processors
Fun fact: on x86, every asm store is a release-store because the x86 memory model is basically seq-cst plus a store buffer (with store forwarding).
Semi-related re: store buffer, global visibility, and coherency: C++11 guarantees very little. Most real ISAs (except PowerPC) do guarantee that all threads can agree on the order of appearance of two stores by two other threads. (In formal computer-architecture memory model terminology, they're "multi-copy atomic".)
- Will two atomic writes to different locations in different threads always be seen in the same order by other threads?
- Concurrent stores seen in a consistent order
Another misconception is that memory fence asm instructions are needed to flush the store buffer for other cores to see our stores at all. Actually the store buffer is always trying to drain itself (commit to L1d cache) as fast as possible, otherwise it would fill up and stall execution. What a full barrier / fence does is stall the current thread until the store buffer is drained, so our later loads appear in the global order after our earlier stores.
- Are loads and stores the only instructions that gets reordered?
- x86 mfence and C++ memory barrier
- Globally Invisible load instructions
(x86's strongly ordered asm memory model means that volatile on x86 may end up giving you closer to mo_acq_rel, except that compile-time reordering with non-atomic variables can still happen. But most non-x86 have weakly-ordered memory models, so volatile and relaxed are about as weak as mo_relaxed allows.)
(Editor's note: in C++11 volatile is not the right tool for this job and still has data-race UB. Use std::atomic<bool> with std::memory_order_relaxed loads/stores to do this without UB. On real implementations it will compile to the same asm as volatile. I added an answer with more detail, also addressing the misconceptions in comments that weakly-ordered memory might be a problem for this use-case: all real-world CPUs have coherent shared memory, so volatile will work for this on real C++ implementations. But still don't do it.
Some discussion in comments seems to be talking about other use-cases where you would need something stronger than relaxed atomics. This answer already points out that volatile gives you no ordering.)
Volatile is occasionally useful for the following reason: this code:
/* global */ bool flag = false;
while (!flag) {}
is optimized by gcc to:
if (!flag) { while (true) {} }
This is obviously incorrect if the flag is written to by the other thread. Note that without this optimization the synchronization mechanism probably works (depending on the other code, some memory barriers may be needed); there is no need for a mutex in a 1-producer / 1-consumer scenario.
Otherwise the volatile keyword is too weird to be usable: it does not provide any memory-ordering guarantees wrt either volatile or non-volatile accesses, and it does not provide any atomic operations; i.e. you get no help from the compiler with the volatile keyword except disabled register caching.
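For the 1-producer / 1-consumer case described above, a hedged sketch of what that looks like with C++11 atomics (the release/acquire pair supplies the memory ordering that may be needed when the flag publishes other data):
#include <atomic>
int payload = 0;                                     // non-atomic data being published
std::atomic<bool> ready{false};
void producer() {
    payload = 42;                                    // write the data first
    ready.store(true, std::memory_order_release);    // then publish the flag
}
void consumer() {
    while (!ready.load(std::memory_order_acquire)) {}  // wait for the flag (never hoisted out of the loop)
    int value = payload;                                // safe: happens-after the release store
    (void)value;
}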
You need volatile and possibly locking.
volatile tells the optimiser that the value can change asynchronously, thus
volatile bool flag = false;
while (!flag) {
/*do something*/
}
will read flag every time around the loop.
If you turn optimisation off or make every variable volatile, a program will behave the same but slower. volatile just means 'I know you may have just read it and know what it says, but if I say read it, then read it.'
Locking is a part of the program. So, by the way, if you are implementing semaphores then, among other things, they must be volatile. (Don't try it, it is hard, will probably need a little assembler or the new atomic stuff, and it has already been done.)
#include <iostream>
#include <thread>
#include <unistd.h>
using namespace std;
bool checkValue = false;   // plain non-atomic, non-volatile global shared between threads
int main()
{
    // writer: sets the flag after a delay
    std::thread writer([&](){
        sleep(2);
        checkValue = true;
        std::cout << "Value of checkValue set to " << checkValue << std::endl;
    });
    // reader: busy-waits on the flag; at -O3 the load can be hoisted out of the loop (data race / UB)
    std::thread reader([&](){
        while(!checkValue);
    });
    writer.join();
    reader.join();
}
Once an interviewer, who also believed that volatile is useless, argued with me that optimisation wouldn't cause any issues, referring to different cores having separate cache lines and so on (I didn't really understand what he was exactly referring to). But when this piece of code is compiled with -O3 on g++ (g++ -O3 thread.cpp -lpthread), it shows undefined behaviour. Basically, if the value gets set before the while check, it works fine; if not, it goes into a loop without bothering to fetch the value (which was actually changed by the other thread). Basically, I believe the value of checkValue only gets fetched once into a register and never gets checked again under the highest level of optimisation. If it's set to true before the fetch, it works fine; if not, it goes into a loop. Please correct me if I am wrong.
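For reference, a hedged rewrite of the snippet above with the data race removed: std::atomic<bool> keeps the reader's load inside the loop even at -O3, without relying on volatile.
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>
std::atomic<bool> checkValue{false};
int main()
{
    std::thread writer([](){
        std::this_thread::sleep_for(std::chrono::seconds(2));
        checkValue.store(true, std::memory_order_relaxed);
        std::cout << "Value of checkValue set to " << checkValue.load() << std::endl;
    });
    std::thread reader([](){
        while (!checkValue.load(std::memory_order_relaxed)) {}   // re-loaded every iteration
    });
    writer.join();
    reader.join();
}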