From C# language specification - 3.10 Execution order:
"Execution of a C# program proceeds such that the side effects of each executing thread are preserved at critical execution points. A side effect is defined as a read or write of a volatile field, a write to a non-volatile variable, a write to an external resource, and the throwing of an exception. The critical execution points at which the order of these side effects must be preserved are references to volatile fields (§10.5.3), lock statements (§8.12), and thread creation and termination. The execution environment is free to change the order of execution of a C# program, subject to the following constraints:
- Data dependence is preserved within a thread of execution. That is, the value of each variable is computed as if all statements in the thread were executed in original program order.
- Initialization ordering rules are preserved (§10.5.4 and §10.5.5).
- The ordering of side effects is preserved with respect to volatile reads and writes (§10.5.3). Additionally, the execution environment need not evaluate part of an expression if it can deduce that that expression’s value is not used and that no needed side effects are produced (including any caused by calling a method or accessing a volatile field). When program execution is interrupted by an asynchronous event (such as an exception thrown by another thread), it is not guaranteed that the observable side effects are visible in the original program order."
"The memory model in .NET talks about when reads and writes "actually" happen compared with when they occur in the program's instruction sequence. Reads and writes can be reordered in any way which doesn't violate the rules given by the memory model. As well as "normal" reads and writes there are volatile reads and writes. Every read which occurs after a volatile read in the instruction sequence occurs after the volatile read in the memory model too - they can't be reordered to before the volatile read. A volatile write goes the other way round - every write which occurs before a volatile write in the instruction sequence occurs before the volatile write in the memory model too." (from http://www.yoda.arachsys.com/csharp/threads/volatility.shtml)
"... finished has been declared volatile, the main thread must read the actual value from the field result." (from http://msdn.microsoft.com/en-us/library/aa645755(VS.71).aspx)
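The truncated MSDN quote above refers to the classic volatile publication pattern: an ordinary field (result) is written first and then "published" by a volatile write to a flag (finished). Since this thread is about C# but a runnable analogue is easier to check, here is the same pattern sketched in Java, whose volatile provides at least the acquire/release guarantees discussed here; the field names follow the quote:

```java
// Publication via a volatile flag: the ordinary write to `result`
// cannot be reordered past the volatile write to `finished` (release),
// and the read of `result` cannot be reordered before the volatile
// read of `finished` (acquire). So once the reader sees finished == true,
// it is guaranteed to see result == 42.
public class VolatileFlag {
    static int result;                // ordinary (non-volatile) field
    static volatile boolean finished; // volatile flag

    public static void main(String[] args) {
        Thread writer = new Thread(() -> {
            result = 42;      // ordinary write...
            finished = true;  // ...published by the volatile write
        });
        writer.start();
        while (!finished) { } // volatile read: spin until published
        System.out.println(result); // guaranteed to print 42
    }
}
```

Without volatile on finished, the spin loop could be optimized into an infinite loop and the read of result could observe a stale value; the volatile flag rules both out.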
According to the execution order above, ordering in the .NET memory model is preserved around volatile accesses!
Does this mean that earlier writes to non-volatile variables will be completed (probably yes) and made visible to other threads?
How does volatile influence operations on memory (especially the cache)? What if each core has its own cache, or a multiprocessor system is used?
Am I really asking about volatile vs. cache coherence?
Some references to this problem:
"The JIT is responsible to maintain correct semantics for
a given target processor by emitting the necessary instruction for that
processor, including processor-specific memory ordering ops like
load-acquire, fences ..." (from http://www.eggheadcafe.com/conversation.aspx?messageid=30644089&threadid=30643832)
"The Whidbey memory model (Framework V2) targets both IA-32 and IA-64, this
memory model assumes that every shared write (ordinary as well as
interlocked) becomes globally visible to all other processors
simultaneously. This is implicitly true because all writes have release
semantics on IA-32 and X64 CPU's, on IA-64 it's just a matter of emitting a
st.rel (store release) instruction for every write to perform each
processor's stores in order and to make them visible in the same order to
other processors (that is, the execution environment is Processor [Consistent]).
Above means that the JIT has to emit a st.rel for _name = name; when run on
IA-64 (64 bit managed code), so the other thread will actually see p.Name
pointing to the string.
Add to that that a thread creation implies a full barrier (fence), so in
this particular case it's not required to include a MemoryBarrier in the
thread procedure. Note that the CLR contains other services that implicitly
raise memory barriers, think of Monitor.Enter, Monitor.Exit, ThreadPool
services, IO services..... So I think that for all except the extreme cases,
you can live without thinking about MemoryBarriers in managed code even
when compiled for IA-64." (from http://www.eggheadcafe.com/conversation.aspx?messageid=30644255&threadid=30643832)
Well, it seems that everything is perfect :) We should rely on the JIT to generate the proper code for the underlying hardware.
Double-Checked Locking in C# using volatile (from http://msdn.microsoft.com/en-us/library/ms998558.aspx)
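The double-checked locking pattern that article describes relies on exactly the volatile semantics discussed above: the volatile write publishes the fully constructed instance, and the lock is taken only on the slow path. Here is a sketch in Java (where the pattern has been correct since JSR-133); this is an illustration of the technique, not the article's own code:

```java
// Double-checked locking: lock-free fast path via a volatile read,
// lock taken only while the singleton is still null.
public class Singleton {
    private static volatile Singleton instance; // volatile is essential
    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                  // 1st check: volatile read, no lock
            synchronized (Singleton.class) {     // slow path only
                if (instance == null) {          // 2nd check: under the lock
                    instance = new Singleton();  // volatile write publishes it
                }
            }
        }
        return instance;
    }

    public static void main(String[] args) {
        System.out.println(getInstance() == getInstance()); // prints true
    }
}
```

Without volatile, another thread could observe a non-null instance whose fields are not yet fully initialized; the volatile write/read pair is what makes the first, unlocked check safe.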
More about memory models and semantics at http://www.diag.com/ftp/Memory_Models.pdf
Article about volatile in C# at http://www.codeproject.com/KB/threads/volatile_look_inside.aspx