
Using interlocked operations for thread synchronization and maintaining cache coherency

Suppose I use an algorithm built on InterlockedCompareExchange operations on a C++ variable that indicates whether a set of data is being written by a particular thread (essentially rolling my own little lock). How do I ensure that the interlocked update is immediately seen by the other thread if the variable is sitting in, say, the level 2 cache of an i7?
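For concreteness, something along these lines is what I have in mind. This is only a sketch assuming the Win32 Interlocked* API, and the variable and function names are made up:

#include <windows.h>

// Illustrative only: 0 means the data is free, 1 means a thread owns it.
volatile LONG g_lock = 0;

void AcquireLock()
{
    // InterlockedCompareExchange returns the value it observed before
    // the exchange, so a return of 0 means we swapped 0 -> 1 and now
    // own the lock; anything else means another thread holds it.
    while (InterlockedCompareExchange(&g_lock, 1, 0) != 0)
    {
        YieldProcessor();   // spin politely while we wait
    }
}

void ReleaseLock()
{
    // Interlocked store so the release is atomic and ordered with
    // respect to the writes made while the lock was held.
    InterlockedExchange(&g_lock, 0);
}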

I know that cache coherency keeps data consistent across the caches of a multi-core processor, but what about the small window of time between one core updating the variable with an interlocked function and the cache hardware resolving the coherency traffic, while another core is checking the copy of that variable it holds in its own cache? Would this be fixed by making the variable that receives the InterlockedCompareExchange operation volatile, so that changes are written straight to memory? And am I correct in believing that a memory barrier (MemoryBarrier() on VS) does not ensure cache coherency but only prevents unwanted instruction reordering?

I hope my question isn't too vague; I will try to answer any comments if it is. I don't have any source code to post, since I don't have a specific problem, but I would like to know for future reference whether there could be any problems with this, especially with C++0x adding atomic operations to its standard library.
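In C++0x terms, I imagine the equivalent would look roughly like the following sketch using std::atomic; again, the names are just placeholders:

#include <atomic>

std::atomic<int> lock_flag(0);   // 0 = free, 1 = held

void acquire()
{
    int expected = 0;
    // compare_exchange_weak may fail spuriously, so loop until the
    // flag goes from 0 to 1; the default ordering is sequentially
    // consistent, which already implies the needed barriers.
    while (!lock_flag.compare_exchange_weak(expected, 1))
    {
        expected = 0;   // a failed CAS overwrites 'expected', so reset it
    }
}

void release()
{
    lock_flag.store(0);   // ordered after the writes made under the lock
}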

Thank you.


The compiler can't reorder loads or stores across an interlocked function call, and the implementation will include whatever machine instructions are needed to make sure the CPU core won't.

Cache coherency is always maintained; the only thing you have to worry about is when the value actually gets written out from the instruction pipeline to the cache, and that's an ordering issue.
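As a rough sketch of what that means in practice (illustrative names, assuming the Win32 interlocked functions), a publish/consume handoff like the following is safe because the interlocked calls act as full barriers:

#include <windows.h>

int g_payload = 0;           // data being handed off
volatile LONG g_ready = 0;   // 0 = not published, 1 = published

void Producer()
{
    g_payload = 42;          // ordinary store to the data

    // The interlocked operation is a full barrier, so the store to
    // g_payload cannot be reordered after the flag update, and the
    // coherency protocol makes both visible to the other core.
    InterlockedExchange(&g_ready, 1);
}

void Consumer()
{
    // Spin until the flag is observed as 1; using an interlocked call
    // to read it also gives us the barrier on the consuming side.
    while (InterlockedCompareExchange(&g_ready, 1, 1) != 1)
    {
        YieldProcessor();
    }

    int value = g_payload;   // guaranteed to observe 42 at this point
    (void)value;
}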

