
src/share/vm/runtime/orderAccess.hpp


*** 1,7 ****
  /*
!  * Copyright (c) 2003, 2010, Oracle and/or its affiliates. All rights reserved.
   * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
   *
   * This code is free software; you can redistribute it and/or modify it
   * under the terms of the GNU General Public License version 2 only, as
   * published by the Free Software Foundation.
--- 1,7 ----
  /*
!  * Copyright (c) 2003, 2015, Oracle and/or its affiliates. All rights reserved.
   * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
   *
   * This code is free software; you can redistribute it and/or modify it
   * under the terms of the GNU General Public License version 2 only, as
   * published by the Free Software Foundation.
*** 35,45 ****
  // 'after', 'preceding' and 'succeeding' refer to program order. The
  // terms 'down' and 'below' refer to forward load or store motion
  // relative to program order, while 'up' and 'above' refer to backward
  // motion.
  //
- //
  // We define four primitive memory barrier operations.
  //
  // LoadLoad:   Load1(s); LoadLoad; Load2
  //
  // Ensures that Load1 completes (obtains the value it loads from memory)
--- 35,44 ----
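The Load1(s); LoadLoad; Load2 pattern can also be illustrated outside HotSpot. Below is a minimal standalone C++11 sketch, not part of this file: the acquire fence is used as a stand-in that is at least as strong as a pure LoadLoad barrier (it additionally orders LoadStore), and the flag/data names are illustrative.

    #include <atomic>

    std::atomic<int> flag{0};   // read by Load1 (illustrative name)
    std::atomic<int> data{0};   // read by Load2 (illustrative name)

    int reader() {
      int f = flag.load(std::memory_order_relaxed);   // Load1
      // Stand-in for a LoadLoad barrier: an acquire fence keeps the preceding
      // load from being reordered with any subsequent load (or store).
      std::atomic_thread_fence(std::memory_order_acquire);
      int d = data.load(std::memory_order_relaxed);   // Load2
      return f ? d : -1;
    }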
*** 102,128 ****
  // Acquire/release semantics essentially exploits this asynchronicity: when
  // the load(X) acquire[ observes the store of ]release store(X), the
  // accesses before the release must have happened before the accesses after
  // acquire.
  //
! // The API offers both stand-alone acquire() and release() as well as joined
  // load_acquire() and release_store(). It is guaranteed that these are
  // semantically equivalent w.r.t. the defined model. However, since
  // stand-alone acquire()/release() does not know which previous
  // load/subsequent store is considered the synchronizing load/store, they
! // may be more conservative in implementations. We advice using the joined
  // variants whenever possible.
  //
  // Finally, we define a "fence" operation, as a bidirectional barrier.
  // It guarantees that any memory access preceding the fence is not
  // reordered w.r.t. any memory accesses subsequent to the fence in program
  // order. This may be used to prevent sequences of loads from floating up
  // above sequences of stores.
  //
  // The following table shows the implementations on some architectures:
  //
! //                Constraint     x86          sparc              ppc
  // ---------------------------------------------------------------------------
  // fence          LoadStore  |   lock         membar #StoreLoad  sync
  //                StoreStore |   addl 0,(sp)
  //                LoadLoad   |
  //                StoreLoad
--- 101,127 ----
  // Acquire/release semantics essentially exploits this asynchronicity: when
  // the load(X) acquire[ observes the store of ]release store(X), the
  // accesses before the release must have happened before the accesses after
  // acquire.
  //
! // The API offers both stand-alone acquire() and release() as well as bound
  // load_acquire() and release_store(). It is guaranteed that these are
  // semantically equivalent w.r.t. the defined model. However, since
  // stand-alone acquire()/release() does not know which previous
  // load/subsequent store is considered the synchronizing load/store, they
! // may be more conservative in implementations. We advise using the bound
  // variants whenever possible.
  //
  // Finally, we define a "fence" operation, as a bidirectional barrier.
  // It guarantees that any memory access preceding the fence is not
  // reordered w.r.t. any memory accesses subsequent to the fence in program
  // order. This may be used to prevent sequences of loads from floating up
  // above sequences of stores.
  //
  // The following table shows the implementations on some architectures:
  //
! //                Constraint     x86          sparc TSO          ppc
  // ---------------------------------------------------------------------------
  // fence          LoadStore  |   lock         membar #StoreLoad  sync
  //                StoreStore |   addl 0,(sp)
  //                LoadLoad   |
  //                StoreLoad
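As a concrete illustration of the bound variants described in this hunk, the following standalone C++11 sketch pairs a release store with an acquire load; inside HotSpot the analogous calls would be OrderAccess::release_store() and OrderAccess::load_acquire(). The producer/consumer functions and the payload/ready variables are illustrative assumptions, not part of this file.

    #include <atomic>
    #include <cstdio>
    #include <thread>

    static int payload = 0;             // ordinary (non-atomic) data being published
    static std::atomic<int> ready{0};   // the synchronizing variable, the "X" above

    void producer() {
      payload = 42;                               // accesses before the release...
      ready.store(1, std::memory_order_release);  // ...become visible once this store is observed
    }

    void consumer() {
      while (ready.load(std::memory_order_acquire) == 0) {
        // spin until the release store is observed
      }
      std::printf("%d\n", payload);               // guaranteed to print 42
    }

    int main() {
      std::thread t1(producer), t2(consumer);
      t1.join();
      t2.join();
      return 0;
    }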
*** 155,164 ****
--- 154,174 ----
  // release_store_fence to update values like the thread state, where we
  // don't want the current thread to continue until all our prior memory
  // accesses (including the new thread state) are visible to other threads.
  // This is equivalent to the volatile semantics of the Java Memory Model.
  //
+ // C++ Volatile Semantics
+ //
+ // C++ volatile semantics prevent compiler re-ordering between
+ // volatile memory accesses. However, reordering between non-volatile
+ // and volatile memory accesses is in general undefined. For compiler
+ // reordering constraints taking non-volatile memory accesses into
+ // consideration, a compiler barrier has to be used instead. Some
+ // compiler implementations may choose to enforce additional
+ // constraints beyond those required by the language. Note also that
+ // both volatile semantics and compiler barrier do not prevent
+ // hardware reordering.
  //
  // os::is_MP Considered Redundant
  //
  // Callers of this interface do not need to test os::is_MP() before
  // issuing an operation. The test is taken care of by the implementation
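To make the compiler-barrier note added in this hunk concrete, here is a hedged standalone sketch. compiler_barrier() is a hypothetical helper (not defined in this file) built from the common GCC/Clang empty-asm idiom, with std::atomic_signal_fence as a portable fallback; it constrains compiler reordering only and emits no hardware fence instruction.

    #include <atomic>

    // Hypothetical helper: constrains compiler reordering of all memory
    // accesses (volatile and non-volatile) across this point, but does not
    // prevent hardware reordering.
    static inline void compiler_barrier() {
    #if defined(__GNUC__) || defined(__clang__)
      __asm__ volatile("" ::: "memory");
    #else
      std::atomic_signal_fence(std::memory_order_seq_cst);
    #endif
    }

    volatile int vflag = 0;   // illustrative volatile flag
    int plain_data = 0;       // illustrative non-volatile data

    void publish_without_barrier() {
      plain_data = 1;   // non-volatile store: the compiler may reorder it past vflag
      vflag = 1;        // volatile store: ordered only against other volatile accesses
    }

    void publish_with_barrier() {
      plain_data = 1;
      compiler_barrier();   // keeps the non-volatile store above the volatile store (compiler only)
      vflag = 1;            // hardware may still reorder on weakly ordered CPUs
    }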