Does accessing shared memory simultaneously cause a performance hit?

I have a simple multi-threaded app running on a multi-core system. The app has a parallel region in which no thread writes to a given memory address, but several threads may read it simultaneously.

Will there still be some type of overhead or performance hit associated with several threads accessing the same memory, even though no locking is used? If so, why? How big an impact can it have, and what can be done about it?


This can depend on the specific cache coherence protocol in use, but most modern CPUs allow the same cache line to be held in several processor caches at once, provided there is no write activity to that line. That said, make sure you align your allocations to the cache line size: if you don't, data that is being written to can end up sharing a cache line with your read-only data, and every write will invalidate that line in the other processors' caches, causing a performance hit (false sharing).
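
To illustrate the layout advice, here is a minimal C++ sketch that keeps read-only data and frequently written data on separate, aligned cache lines. The 64-byte line size, the struct names, and the thread counts are assumptions for the example, not something from the original question:

```cpp
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

// Assumed cache line size; 64 bytes is typical on x86-64, but it is
// hardware-dependent.
constexpr std::size_t kCacheLine = 64;

// Read-only data gets its own cache line, so writes to the counter
// below can never dirty the line the readers are sharing.
struct alignas(kCacheLine) ReadOnlyData {
    int values[8] = {1, 2, 3, 4, 5, 6, 7, 8};
};

// Frequently written data lives on a separate, aligned cache line.
struct alignas(kCacheLine) Counter {
    std::atomic<long> hits{0};
};

int main() {
    ReadOnlyData shared;   // read by every thread, never written
    Counter counter;       // written by one thread only

    std::vector<std::thread> readers;
    for (int t = 0; t < 4; ++t) {
        readers.emplace_back([&shared] {
            long sum = 0;
            // Concurrent reads of the same line can stay in the shared
            // coherence state on every core: no invalidation traffic.
            for (int i = 0; i < 1000000; ++i)
                for (int v : shared.values) sum += v;
            std::printf("reader sum = %ld\n", sum);
        });
    }

    std::thread writer([&counter] {
        for (int i = 0; i < 1000000; ++i)
            counter.hits.fetch_add(1, std::memory_order_relaxed);
    });

    for (auto& r : readers) r.join();
    writer.join();
    std::printf("hits = %ld\n", counter.hits.load());
}
```

If the two structs were packed into the same cache line instead, the writer's updates would repeatedly invalidate the readers' cached copies even though the readers never touch the counter.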


I would say there won't be. The problem arises when you have multiple writers to the same memory location.
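
A short sketch of that distinction, with names and counts chosen just for illustration: concurrent reads of a plain variable are fine, but once several threads write the same location you need synchronization (here, std::atomic) to avoid a data race.

```cpp
#include <atomic>
#include <cstdio>
#include <thread>

int main() {
    // With a plain int, concurrent reads alone would be safe, but two
    // threads incrementing it concurrently is a data race (undefined
    // behavior). std::atomic makes the concurrent writes well-defined.
    std::atomic<int> counter{0};

    auto writer = [&counter] {
        for (int i = 0; i < 100000; ++i)
            counter.fetch_add(1, std::memory_order_relaxed);
    };

    std::thread a(writer), b(writer);
    a.join();
    b.join();

    std::printf("counter = %d\n", counter.load());  // always 200000
}
```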
