src/hotspot/share/runtime/synchronizer.cpp
rev 56634 : imported patch 8230876.patch
rev 56635 : v2.00 -> v2.05 (CR5/v2.05/8-for-jdk13) patches combined into one; merge with 8229212.patch; merge with jdk-14+11; merge with 8230184.patch; merge with 8230876.patch; merge with jdk-14+15; merge with jdk-14+18.
rev 56636 : renames, comment cleanups and additions, whitespace and indent fixes; add PaddedObjectMonitor typedef to make 'PaddedEnd<ObjectMonitor' cleanups easier; add a couple of missing 'private' decls; delete unused next() function; merge pieces from dcubed.monitor_deflate_conc.v2.06d in dcubed.monitor_deflate_conc.v2.06[ac]; merge with 8229212.patch; merge with jdk-14+11; merge with 8230184.patch.
rev 56637 : Add OM_CACHE_LINE_SIZE so that ObjectMonitor cache line sizes can be experimented with independently of DEFAULT_CACHE_LINE_SIZE; for SPARC and X64 configs that use 128 for DEFAULT_CACHE_LINE_SIZE, we are experimenting with 64; move _previous_owner_tid and _allocation_state fields to share the cache line with ObjectMonitor::_header; put ObjectMonitor::_ref_count on its own cache line after _owner; add 'int* count_p' parameter to deflate_monitor_list() and deflate_monitor_list_using_JT() and push counter updates down to where the ObjectMonitors are actually removed from the in-use lists; monitors_iterate() async deflation check should use negative ref_count; add 'JavaThread* target' param to deflate_per_thread_idle_monitors_using_JT(); add deflate_common_idle_monitors_using_JT() to make it clear which JavaThread* is the target of the work and which is the calling JavaThread* (self); g_free_list, g_om_in_use_list and g_om_in_use_count are now static to synchronizer.cpp (reduce scope); add more diagnostic info to some assert()s; minor code cleanups and code motion; save_om_ptr() should detect a race with a deflating thread that is bailing out and cause a retry when the ref_count field is not positive; merge with jdk-14+11; add special GC support for TestHumongousClassLoader.java; merge with 8230184.patch; merge with jdk-14+14; merge with jdk-14+18.
rev 56638 : Merge the remainder of the lock-free monitor list changes from v2.06 with v2.06a and v2.06b after running the changes through the edit scripts; merge pieces from dcubed.monitor_deflate_conc.v2.06d in dcubed.monitor_deflate_conc.v2.06[ac]; merge pieces from dcubed.monitor_deflate_conc.v2.06e into dcubed.monitor_deflate_conc.v2.06c; merge with jdk-14+11; the test workaround for test/jdk/tools/jlink/multireleasejar/JLinkMultiReleaseJarTest.java should no longer be needed; merge with jdk-14+18.
rev 56639 : loosen a couple more counter checks due to races observed in testing; simplify om_release() extraction of mid since list head or cur_mid_in_use is marked; simplify deflate_monitor_list() extraction of mid since there are no parallel deleters due to the safepoint; simplify deflate_monitor_list_using_JT() extraction of mid since list head or cur_mid_in_use is marked; prepend_block_to_lists() - simplify based on David H's comments; does not need load_acquire() or release_store() because of the cmpxchg(); prepend_to_common() - simplify to use mark_next_loop() for m and use mark_list_head() and release_store() for the non-empty list case; add more debugging for "Non-balanced monitor enter/exit" failure mode; fix race in inflate() in the "CASE: neutral" code path; install_displaced_markword_in_object() does not need to clear the header field since that is handled when the ObjectMonitor is moved from the global free list; LSuccess should clear boxReg to set ICC.ZF=1 to avoid depending on existing boxReg contents; update fast_unlock() to detect when object no longer refers to the same ObjectMonitor and take fast path exit instead; clarify fast_lock() code where we detect when object no longer refers to the same ObjectMonitor; add/update comments for movptr() calls where we move a literal into an Address; remove set_owner(); refactor setting of owner field into set_owner_from() (2 versions), set_owner_from_BasicLock(), and try_set_owner_from(); the new functions include monitorinflation+owner logging; extract debug code from v2.06 and v2.07 and move to v2.07.debug; change 'jccb' -> 'jcc' and 'jmpb' -> 'jmp' as needed; checkpoint initial version of MacroAssembler::inc_om_ref_count(); update LP64 MacroAssembler::fast_lock() and fast_unlock() to use inc_om_ref_count(); fast_lock() return flag setting logic can use 'testptr(tmpReg, tmpReg)' instead of 'cmpptr(tmpReg, 0)' since that's more efficient; fast_unlock() LSuccess return flag setting logic can use 'testl(boxReg, 0)' instead of 'xorptr(boxReg, boxReg)' since that's more efficient; cleanup "fast-path" vs "fast path" and "slow-path" vs "slow path"; update MacroAssembler::rtm_inflated_locking() to use inc_om_ref_count(); update MacroAssembler::fast_lock() to preserve the flags before decrementing ref_count and restore the flags afterwards; this is cleaner than depending on the contents of rax/tmpReg; coleenp CR - refactor async monitor deflation work from ServiceThread::service_thread_entry() to ObjectSynchronizer::deflate_idle_monitors_using_JT(); rehn, eosterlund CR - add support for HandshakeAfterDeflateIdleMonitors for platforms that don't have ObjectMonitor ref_count support implemented in C2 fast_lock() and fast_unlock().
*** 35,44 ****
--- 35,45 ----
#include "oops/markWord.hpp"
#include "oops/oop.inline.hpp"
#include "runtime/atomic.hpp"
#include "runtime/biasedLocking.hpp"
#include "runtime/handles.inline.hpp"
+ #include "runtime/handshake.hpp"
#include "runtime/interfaceSupport.inline.hpp"
#include "runtime/mutexLocker.hpp"
#include "runtime/objectMonitor.hpp"
#include "runtime/objectMonitor.inline.hpp"
#include "runtime/osThread.hpp"
*** 126,139 ****
--- 127,147 ----
// ObjectMonitors are prepended here.
static ObjectMonitor* volatile g_free_list = NULL;
// Global ObjectMonitor in-use list. When a JavaThread is exiting,
// ObjectMonitors on its per-thread in-use list are prepended here.
static ObjectMonitor* volatile g_om_in_use_list = NULL;
+ // Global ObjectMonitor wait list. If HandshakeAfterDeflateIdleMonitors
+ // is true, deflated ObjectMonitors wait on this list until after a
+ // handshake (or a safepoint on platforms that don't support handshakes).
+ // After the handshake or safepoint, the deflated ObjectMonitors are
+ // prepended to g_free_list.
+ static ObjectMonitor* volatile g_wait_list = NULL;
static volatile int g_om_free_count = 0; // # on g_free_list
static volatile int g_om_in_use_count = 0; // # on g_om_in_use_list
static volatile int g_om_population = 0; // # Extant -- in circulation
+ static volatile int g_om_wait_count = 0; // # on g_wait_list
#define CHAINMARKER (cast_to_oop<intptr_t>(-1))
// =====================> List Management functions
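
For orientation, the lifecycle that the new g_wait_list enables can be modeled outside of HotSpot in a few lines. This is a minimal sketch using std::atomic in place of HotSpot's Atomic/OrderAccess wrappers, with hypothetical names; the real code splices a whole pre-built chain in one prepend_list_to_common() call rather than re-prepending node by node:

    #include <atomic>

    struct Monitor { Monitor* next; };

    // Treiber-stack style heads, analogous to g_om_in_use_list,
    // g_wait_list and g_free_list (the counters are omitted).
    static std::atomic<Monitor*> in_use_head{nullptr};
    static std::atomic<Monitor*> wait_head{nullptr};
    static std::atomic<Monitor*> free_head{nullptr};

    static void prepend(std::atomic<Monitor*>& head, Monitor* m) {
      Monitor* cur = head.load();
      do {
        m->next = cur;                       // link before publishing
      } while (!head.compare_exchange_weak(cur, m));
    }

    // A freshly deflated monitor parks on the wait list ...
    static void after_deflation(Monitor* m) { prepend(wait_head, m); }

    // ... and is only recycled after every JavaThread has passed a
    // handshake (or a safepoint), at which point the wait list is
    // drained onto the free list.
    static void after_handshake() {
      Monitor* m = wait_head.exchange(nullptr);
      while (m != nullptr) {
        Monitor* next = m->next;
        prepend(free_head, m);
        m = next;
      }
    }
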
*** 210,299 ****
// field may or may not have been marked originally.
static ObjectMonitor* unmarked_next(ObjectMonitor* om) {
return (ObjectMonitor*)((intptr_t)OrderAccess::load_acquire(&om->_next_om) & ~0x1);
}
- #if 0
- // XXX - this is unused
- // Unmark the next field in an ObjectMonitor. Requires that the next
- // field be marked.
- static void unmark_next(ObjectMonitor* om) {
- ADIM_guarantee(is_next_marked(om), "next field must be marked: next=" INTPTR_FORMAT, p2i(om->_next_om));
-
- ObjectMonitor* next = unmarked_next(om);
- set_next(om, next);
- }
- #endif
-
- volatile int visit_counter = 42;
- static void chk_for_list_loop(ObjectMonitor* list, int count) {
- if (!CheckMonitorLists) {
- return;
- }
- int l_visit_counter = Atomic::add(1, &visit_counter);
- int l_count = 0;
- ObjectMonitor* prev = NULL;
- for (ObjectMonitor* mid = list; mid != NULL; mid = unmarked_next(mid)) {
- if (mid->visit_marker == l_visit_counter) {
- log_error(monitorinflation)("ERROR: prev=" INTPTR_FORMAT ", l_count=%d"
- " refers to an ObjectMonitor that has"
- " already been visited: mid=" INTPTR_FORMAT,
- p2i(prev), l_count, p2i(mid));
- fatal("list=" INTPTR_FORMAT " of %d items has a loop.", p2i(list), count);
- }
- mid->visit_marker = l_visit_counter;
- prev = mid;
- if (++l_count > count + 1024 * 1024) {
- fatal("list=" INTPTR_FORMAT " of %d items may have a loop; l_count=%d",
- p2i(list), count, l_count);
- }
- }
- }
-
- static void chk_om_not_on_list(ObjectMonitor* om, ObjectMonitor* list, int count) {
- if (!CheckMonitorLists) {
- return;
- }
- guarantee(list != om, "ERROR: om=" INTPTR_FORMAT " must not be head of the "
- "list=" INTPTR_FORMAT ", count=%d", p2i(om), p2i(list), count);
- int l_count = 0;
- for (ObjectMonitor* mid = list; mid != NULL; mid = unmarked_next(mid)) {
- if (unmarked_next(mid) == om) {
- log_error(monitorinflation)("ERROR: mid=" INTPTR_FORMAT ", l_count=%d"
- " next_om refers to om=" INTPTR_FORMAT,
- p2i(mid), l_count, p2i(om));
- fatal("list=" INTPTR_FORMAT " of %d items has bad next_om value.",
- p2i(list), count);
- }
- if (++l_count > count + 1024 * 1024) {
- fatal("list=" INTPTR_FORMAT " of %d items may have a loop; l_count=%d",
- p2i(list), count, l_count);
- }
- }
- }
-
- static void chk_om_elems_not_on_list(ObjectMonitor* elems, int elems_count,
- ObjectMonitor* list, int list_count) {
- if (!CheckMonitorLists) {
- return;
- }
- chk_for_list_loop(elems, elems_count);
- for (ObjectMonitor* mid = elems; mid != NULL; mid = unmarked_next(mid)) {
- chk_om_not_on_list(mid, list, list_count);
- }
- }
-
// Prepend a list of ObjectMonitors to the specified *list_p. 'tail' is
// the last ObjectMonitor in the list and there are 'count' on the list.
// Also updates the specified *count_p.
static void prepend_list_to_common(ObjectMonitor* list, ObjectMonitor* tail,
int count, ObjectMonitor* volatile* list_p,
volatile int* count_p) {
- chk_for_list_loop(OrderAccess::load_acquire(list_p),
- OrderAccess::load_acquire(count_p));
- chk_om_elems_not_on_list(list, count, OrderAccess::load_acquire(list_p),
- OrderAccess::load_acquire(count_p));
while (true) {
ObjectMonitor* cur = OrderAccess::load_acquire(list_p);
// Prepend list to *list_p.
ObjectMonitor* next = NULL;
if (!mark_next(tail, &next)) {
--- 218,233 ----
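
unmarked_next() near the top of this hunk strips a low-order "mark" bit from the next pointer; marking a next field is how a thread claims exclusive rights to update that link. A minimal model of the bit manipulation (hypothetical names; valid because ObjectMonitors are well aligned, so bit 0 of a real pointer is always zero):

    #include <cstdint>

    struct Node { Node* next; };

    static bool is_marked(Node* p) {
      return (reinterpret_cast<std::uintptr_t>(p) & 0x1) != 0;
    }
    static Node* mark(Node* p) {
      return reinterpret_cast<Node*>(reinterpret_cast<std::uintptr_t>(p) | 0x1);
    }
    static Node* unmark(Node* p) {
      return reinterpret_cast<Node*>(reinterpret_cast<std::uintptr_t>(p) & ~std::uintptr_t(0x1));
    }
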
*** 330,343 ****
// Prepend a newly allocated block of ObjectMonitors to g_block_list and
// g_free_list. Also updates g_om_population and g_om_free_count.
void ObjectSynchronizer::prepend_block_to_lists(PaddedObjectMonitor* new_blk) {
// First we handle g_block_list:
while (true) {
! PaddedObjectMonitor* cur = OrderAccess::load_acquire(&g_block_list);
// Prepend new_blk to g_block_list. The first ObjectMonitor in
// a block is reserved for use as linkage to the next block.
! OrderAccess::release_store(&new_blk[0]._next_om, cur);
if (Atomic::cmpxchg(new_blk, &g_block_list, cur) == cur) {
// Successfully switched g_block_list to the new_blk value.
Atomic::add(_BLOCKSIZE - 1, &g_om_population);
break;
}
--- 264,277 ----
// Prepend a newly allocated block of ObjectMonitors to g_block_list and
// g_free_list. Also updates g_om_population and g_om_free_count.
void ObjectSynchronizer::prepend_block_to_lists(PaddedObjectMonitor* new_blk) {
// First we handle g_block_list:
while (true) {
! PaddedObjectMonitor* cur = g_block_list;
// Prepend new_blk to g_block_list. The first ObjectMonitor in
// a block is reserved for use as linkage to the next block.
! new_blk[0]._next_om = cur;
if (Atomic::cmpxchg(new_blk, &g_block_list, cur) == cur) {
// Successfully switched g_block_list to the new_blk value.
Atomic::add(_BLOCKSIZE - 1, &g_om_population);
break;
}
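
The before/after pair above is a memory-ordering simplification: the load_acquire()/release_store() around new_blk[0]._next_om are dropped because the Atomic::cmpxchg() that publishes new_blk already orders the preceding plain store (HotSpot's cmpxchg is conservatively full-fenced). The same pattern expressed with std::atomic, where release ordering on the successful exchange is what publishes the link:

    #include <atomic>

    struct Block { Block* next; };
    static std::atomic<Block*> block_list{nullptr};

    static void prepend_block(Block* new_blk) {
      Block* cur = block_list.load(std::memory_order_relaxed);
      do {
        new_blk->next = cur;  // plain store; published by the CAS below
      } while (!block_list.compare_exchange_weak(
                   cur, new_blk,
                   std::memory_order_release,    // success: new_blk->next visible
                   std::memory_order_relaxed));  // failure: retry with fresh cur
    }
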
*** 355,364 ****
--- 289,307 ----
static void prepend_list_to_g_free_list(ObjectMonitor* list,
ObjectMonitor* tail, int count) {
prepend_list_to_common(list, tail, count, &g_free_list, &g_om_free_count);
}
+ // Prepend a list of ObjectMonitors to g_wait_list. 'tail' is the last
+ // ObjectMonitor in the list and there are 'count' on the list. Also
+ // updates g_om_wait_count.
+ static void prepend_list_to_g_wait_list(ObjectMonitor* list,
+ ObjectMonitor* tail, int count) {
+ assert(HandshakeAfterDeflateIdleMonitors, "sanity check");
+ prepend_list_to_common(list, tail, count, &g_wait_list, &g_om_wait_count);
+ }
+
// Prepend a list of ObjectMonitors to g_om_in_use_list. 'tail' is the last
// ObjectMonitor in the list and there are 'count' on the list. Also
// updates g_om_in_use_count.
static void prepend_list_to_g_om_in_use_list(ObjectMonitor* list,
ObjectMonitor* tail, int count) {
*** 367,413 ****
// Prepend an ObjectMonitor to the specified list. Also updates
// the specified counter.
static void prepend_to_common(ObjectMonitor* m, ObjectMonitor* volatile * list_p,
int volatile * count_p) {
- chk_for_list_loop(OrderAccess::load_acquire(list_p),
- OrderAccess::load_acquire(count_p));
- chk_om_not_on_list(m, OrderAccess::load_acquire(list_p),
- OrderAccess::load_acquire(count_p));
-
while (true) {
! ObjectMonitor* cur = OrderAccess::load_acquire(list_p);
! // Prepend ObjectMonitor to *list_p.
ObjectMonitor* next = NULL;
! if (!mark_next(m, &next)) {
! continue; // failed to mark next field so try it all again
! }
set_next(m, cur); // m now points to cur (and unmarks m)
! if (cur == NULL) {
! // No potential race with other prependers since *list_p is empty.
if (Atomic::cmpxchg(m, list_p, cur) == cur) {
! // Successfully switched *list_p to 'm'.
! Atomic::inc(count_p);
break;
}
// Implied else: try it all again
- } else {
- // Try to mark next field to guard against races:
- if (!mark_next(cur, &next)) {
- continue; // failed to mark next field so try it all again
- }
- // We marked the next field so try to switch *list_p to 'm'.
- if (Atomic::cmpxchg(m, list_p, cur) != cur) {
- // The list head has changed so unmark the next field and try again:
- set_next(cur, next);
- continue;
}
Atomic::inc(count_p);
- set_next(cur, next); // unmark next field
- break;
- }
- }
}
// Prepend an ObjectMonitor to a per-thread om_free_list.
// Also updates the per-thread om_free_count.
static void prepend_to_om_free_list(Thread* self, ObjectMonitor* m) {
--- 310,341 ----
// Prepend an ObjectMonitor to the specified list. Also updates
// the specified counter.
static void prepend_to_common(ObjectMonitor* m, ObjectMonitor* volatile * list_p,
int volatile * count_p) {
while (true) {
! (void)mark_next_loop(m); // mark m so we can safely update its next field
! ObjectMonitor* cur = NULL;
ObjectMonitor* next = NULL;
! // Mark the list head to guard against A-B-A race:
! if (mark_list_head(list_p, &cur, &next)) {
! // List head is now marked so we can safely switch it.
set_next(m, cur); // m now points to cur (and unmarks m)
! OrderAccess::release_store(list_p, m); // Switch list head to unmarked m.
! set_next(cur, next); // Unmark the previous list head.
! break;
! }
! // The list is empty so try to set the list head.
! assert(cur == NULL, "cur must be NULL: cur=" INTPTR_FORMAT, p2i(cur));
! set_next(m, cur); // m now points to NULL (and unmarks m)
if (Atomic::cmpxchg(m, list_p, cur) == cur) {
! // List head is now unmarked m.
break;
}
// Implied else: try it all again
}
Atomic::inc(count_p);
}
// Prepend an ObjectMonitor to a per-thread om_free_list.
// Also updates the per-thread om_free_count.
static void prepend_to_om_free_list(Thread* self, ObjectMonitor* m) {
*** 422,434 ****
// Take an ObjectMonitor from the start of the specified list. Also
// decrements the specified counter. Returns NULL if none are available.
static ObjectMonitor* take_from_start_of_common(ObjectMonitor* volatile * list_p,
int volatile * count_p) {
- chk_for_list_loop(OrderAccess::load_acquire(list_p),
- OrderAccess::load_acquire(count_p));
-
ObjectMonitor* next = NULL;
ObjectMonitor* take = NULL;
// Mark the list head to guard against A-B-A race:
if (!mark_list_head(list_p, &take, &next)) {
return NULL; // None are available.
--- 350,359 ----
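
Both the new prepend_to_common() and take_from_start_of_common() funnel through mark_list_head(), which pins the current head by marking its next field and then re-verifying that the head has not moved. A sketch of that A-B-A guard in a std::atomic node model (hypothetical names; dereferencing a possibly detached node is safe only because ObjectMonitors are type-stable and never freed):

    #include <atomic>
    #include <cstdint>

    struct ANode { std::atomic<ANode*> next{nullptr}; };

    static bool a_is_marked(ANode* p) { return (reinterpret_cast<std::uintptr_t>(p) & 0x1) != 0; }
    static ANode* a_mark(ANode* p)    { return reinterpret_cast<ANode*>(reinterpret_cast<std::uintptr_t>(p) | 0x1); }

    // Mark the next field of the list head. On success, *take_p is the
    // pinned head and *next_p its pre-mark next value. Returns false
    // only when the list is empty.
    static bool mark_list_head(std::atomic<ANode*>& head,
                               ANode** take_p, ANode** next_p) {
      while (true) {
        ANode* h = head.load();
        if (h == nullptr) return false;       // none available
        ANode* next = h->next.load();
        if (a_is_marked(next)) continue;      // pinned by another thread; retry
        if (!h->next.compare_exchange_strong(next, a_mark(next))) continue;
        if (head.load() != h) {
          h->next.store(next);                // head moved underneath us:
          continue;                           // unpin and retry
        }
        *take_p = h;                          // pinned and verified as head
        *next_p = next;
        return true;
      }
    }
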
*** 570,586 ****
// stack-locking in the object's header, the third check is for
// recursive stack-locking in the displaced header in the BasicLock,
// and last are the inflated Java Monitor (ObjectMonitor) checks.
lock->set_displaced_header(markWord::unused_mark());
! if (owner == NULL && Atomic::replace_if_null(self, &(m->_owner))) {
assert(m->_recursions == 0, "invariant");
return true;
}
if (AsyncDeflateIdleMonitors &&
! Atomic::cmpxchg(self, &m->_owner, DEFLATER_MARKER) == DEFLATER_MARKER) {
// The deflation protocol finished the first part (setting owner),
// but it failed the second part (making ref_count negative) and
// bailed. Or the ObjectMonitor was async deflated and reused.
// Acquired the monitor.
assert(m->_recursions == 0, "invariant");
--- 495,511 ----
// stack-locking in the object's header, the third check is for
// recursive stack-locking in the displaced header in the BasicLock,
// and last are the inflated Java Monitor (ObjectMonitor) checks.
lock->set_displaced_header(markWord::unused_mark());
! if (owner == NULL && m->try_set_owner_from(self, NULL) == NULL) {
assert(m->_recursions == 0, "invariant");
return true;
}
if (AsyncDeflateIdleMonitors &&
! m->try_set_owner_from(self, DEFLATER_MARKER) == DEFLATER_MARKER) {
// The deflation protocol finished the first part (setting owner),
// but it failed the second part (making ref_count negative) and
// bailed. Or the ObjectMonitor was async deflated and reused.
// Acquired the monitor.
assert(m->_recursions == 0, "invariant");
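
try_set_owner_from(new_value, old_value), introduced on ObjectMonitor by this patch series, wraps the raw replace_if_null()/cmpxchg() calls it replaces: it CASes the _owner field and returns the witnessed prior owner (and does monitorinflation+owner logging). A rough stand-alone model of its contract:

    #include <atomic>

    struct MonitorModel {
      std::atomic<void*> owner{nullptr};

      // CAS owner from old_value to new_value; return what was observed.
      // The caller took ownership iff the return value == old_value.
      void* try_set_owner_from(void* new_value, void* old_value) {
        void* witnessed = old_value;
        owner.compare_exchange_strong(witnessed, new_value);
        return witnessed;
      }
    };

    // Usage mirroring quick_enter() above (DEFLATER_MARKER is a sentinel):
    //   m->try_set_owner_from(self, NULL) == NULL
    //       -> monitor acquired from the unowned state
    //   m->try_set_owner_from(self, DEFLATER_MARKER) == DEFLATER_MARKER
    //       -> monitor taken over from a bailing/finished deflater
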
*** 1317,1326 ****
--- 1242,1254 ----
return false;
}
if (MonitorUsedDeflationThreshold > 0) {
int monitors_used = OrderAccess::load_acquire(&g_om_population) -
OrderAccess::load_acquire(&g_om_free_count);
+ if (HandshakeAfterDeflateIdleMonitors) {
+ monitors_used -= OrderAccess::load_acquire(&g_om_wait_count);
+ }
int monitor_usage = (monitors_used * 100LL) /
OrderAccess::load_acquire(&g_om_population);
return monitor_usage > MonitorUsedDeflationThreshold;
}
return false;
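
A worked example of the adjusted threshold test: with g_om_population=10000, g_om_free_count=2000 and g_om_wait_count=1000, usage is (10000 - 2000 - 1000) * 100 / 10000 = 70%, so MonitorUsedDeflationThreshold=50 triggers deflation while 90 does not. A hypothetical stand-alone version (pass wait_count=0 when HandshakeAfterDeflateIdleMonitors is false):

    // The 100LL widens the multiply to 64 bits; monitors_used * 100 in
    // 32-bit int arithmetic would overflow once monitors_used exceeds
    // roughly 21.4 million.
    static bool used_above_threshold(int population, int free_count,
                                     int wait_count, int threshold) {
      int monitors_used = population - free_count - wait_count;
      int monitor_usage = (int)((monitors_used * 100LL) / population);
      return monitor_usage > threshold;
    }
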
*** 1349,1360 ****
// than AsyncDeflationInterval (unless is_async_deflation_requested)
// in order to not swamp the ServiceThread.
_last_async_deflation_time_ns = os::javaTimeNanos();
return true;
}
! if (is_MonitorBound_exceeded(OrderAccess::load_acquire(&g_om_population) -
! OrderAccess::load_acquire(&g_om_free_count))) {
// Not enough ObjectMonitors on the global free list.
return true;
}
return false;
}
--- 1277,1292 ----
// than AsyncDeflationInterval (unless is_async_deflation_requested)
// in order to not swamp the ServiceThread.
_last_async_deflation_time_ns = os::javaTimeNanos();
return true;
}
! int monitors_used = OrderAccess::load_acquire(&g_om_population) -
! OrderAccess::load_acquire(&g_om_free_count);
! if (HandshakeAfterDeflateIdleMonitors) {
! monitors_used -= OrderAccess::load_acquire(&g_om_wait_count);
! }
! if (is_MonitorBound_exceeded(monitors_used)) {
// Not enough ObjectMonitors on the global free list.
return true;
}
return false;
}
*** 1395,1405 ****
list_oops_do(OrderAccess::load_acquire(&thread->om_in_use_list), OrderAccess::load_acquire(&thread->om_in_use_count), f);
}
void ObjectSynchronizer::list_oops_do(ObjectMonitor* list, int count, OopClosure* f) {
assert(SafepointSynchronize::is_at_safepoint(), "must be at safepoint");
- chk_for_list_loop(list, count);
// The oops_do() phase does not overlap with monitor deflation
// so no need to update the ObjectMonitor's ref_count for this
// ObjectMonitor* use.
for (ObjectMonitor* mid = list; mid != NULL; mid = unmarked_next(mid)) {
if (mid->object() != NULL) {
--- 1327,1336 ----
*** 1537,1546 ****
--- 1468,1480 ----
assert(take->ref_count() >= 0, "must not be negative: ref_count=%d",
take->ref_count());
}
}
take->Recycle();
+ // Since we're taking from the global free-list, take must be Free.
+ // om_release() also sets the allocation state to Free because it
+ // is called from other code paths.
assert(take->is_free(), "invariant");
om_release(self, take, false);
}
self->om_free_provision += 1 + (self->om_free_provision/2);
if (self->om_free_provision > MAXPRIVATE) self->om_free_provision = MAXPRIVATE;
*** 1636,1663 ****
fatal("thread=" INTPTR_FORMAT " in-use list must not be empty.", p2i(self));
}
while (true) {
if (m == mid) {
// We found 'm' on the per-thread in-use list so try to extract it.
! // First try the list head:
! if (Atomic::cmpxchg(next, &self->om_in_use_list, mid) != mid) {
! // We could not switch the list head to next.
! ObjectMonitor* marked_mid = mark_om_ptr(mid);
! // Switch cur_mid_in_use's next field to next (which also
! // unmarks cur_mid_in_use):
! ADIM_guarantee(cur_mid_in_use != NULL, "must not be NULL");
! if (Atomic::cmpxchg(next, &cur_mid_in_use->_next_om, marked_mid)
! != marked_mid) {
! // We could not switch cur_mid_in_use's next field. This
! // should not be possible since it was marked so we:
! fatal("mid=" INTPTR_FORMAT " must be referred to by the list "
! "head: &om_in_use_list=" INTPTR_FORMAT " or by "
! "cur_mid_in_use's next field: cur_mid_in_use=" INTPTR_FORMAT
! ", next_om=" INTPTR_FORMAT, p2i(mid),
! p2i((ObjectMonitor**)&self->om_in_use_list),
! p2i(cur_mid_in_use), p2i(cur_mid_in_use->_next_om));
! }
}
extracted = true;
Atomic::dec(&self->om_in_use_count);
// Unmark mid, but leave the next value for any lagging list
// walkers. It will get cleaned up when mid is prepended to
--- 1570,1588 ----
fatal("thread=" INTPTR_FORMAT " in-use list must not be empty.", p2i(self));
}
while (true) {
if (m == mid) {
// We found 'm' on the per-thread in-use list so try to extract it.
! if (cur_mid_in_use == NULL) {
! // mid is the list head and it is marked. Switch the list head
! // to next which unmarks the list head, but leaves mid marked:
! OrderAccess::release_store(&self->om_in_use_list, next);
! } else {
! // mid and cur_mid_in_use are marked. Switch cur_mid_in_use's
! // next field to next which unmarks cur_mid_in_use, but leaves
! // mid marked:
! OrderAccess::release_store(&cur_mid_in_use->_next_om, next);
}
extracted = true;
Atomic::dec(&self->om_in_use_count);
// Unmark mid, but leave the next value for any lagging list
// walkers. It will get cleaned up when mid is prepended to
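
The v2.07 side of om_release() can replace the cmpxchg-or-fatal() dance with two unconditional release_store() cases because, at this point, mid's next field is marked and so is cur_mid_in_use's (when there is one), so no competing thread can touch either link. The unlink reduced to its essentials, reusing the ANode model from the mark_list_head sketch above:

    // Unlink 'mid' (pinned via its marked next field) from the list.
    // 'cur_mid_in_use' is the pinned predecessor, or nullptr when mid
    // is the list head; 'next' is mid's unmarked successor.
    static void unlink_pinned(std::atomic<ANode*>& head,
                              ANode* cur_mid_in_use, ANode* mid, ANode* next) {
      if (cur_mid_in_use == nullptr) {
        head.store(next, std::memory_order_release);  // new unmarked head
      } else {
        // Bypass mid; this also unpins cur_mid_in_use.
        cur_mid_in_use->next.store(next, std::memory_order_release);
      }
      // mid's next field stays marked so lagging list walkers don't
      // follow a stale link; it is cleaned up when mid is prepended
      // to a free list.
    }
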
*** 1718,1728 ****
// An async deflation thread checks to see if the target thread
// is exiting, but if it has made it past that check before we
// started exiting, then it is racing to get to the in-use list.
if (mark_list_head(&self->om_in_use_list, &in_use_list, &next)) {
- chk_for_list_loop(in_use_list, OrderAccess::load_acquire(&self->om_in_use_count));
// At this point, we have marked the in-use list head so an
// async deflation thread cannot come in after us. If an async
// deflation thread is ahead of us, then we'll detect that and
// wait for it to finish its work.
//
--- 1643,1652 ----
*** 1774,1784 ****
int free_count = 0;
ObjectMonitor* free_list = OrderAccess::load_acquire(&self->om_free_list);
ObjectMonitor* free_tail = NULL;
if (free_list != NULL) {
- chk_for_list_loop(free_list, OrderAccess::load_acquire(&self->om_free_count));
// The thread is going away. Set 'free_tail' to the last per-thread free
// monitor which will be linked to g_free_list below.
stringStream ss;
for (ObjectMonitor* s = free_list; s != NULL; s = unmarked_next(s)) {
free_count++;
--- 1698,1707 ----
*** 1927,1936 ****
--- 1850,1860 ----
m->_Responsible = NULL;
m->_SpinDuration = ObjectMonitor::Knob_SpinLimit; // Consider: maintain by type/class
markWord cmp = object->cas_set_mark(markWord::INFLATING(), mark);
if (cmp != mark) {
+ // om_release() will reset the allocation state from New to Free.
om_release(self, m, true);
continue; // Interference -- just retry
}
// We've successfully installed INFLATING (0) into the mark-word.
*** 1974,1996 ****
// Optimization: if the mark.locker stack address is associated
// with this thread we could simply set m->_owner = self.
// Note that a thread can inflate an object
// that it has stack-locked -- as might happen in wait() -- directly
// with CAS. That is, we can avoid the xchg-NULL .... ST idiom.
! m->set_owner(mark.locker());
m->set_object(object);
// TODO-FIXME: assert BasicLock->dhw != 0.
omh_p->set_om_ptr(m);
- assert(m->is_new(), "freshly allocated monitor must be new");
- m->set_allocation_state(ObjectMonitor::Old);
// Must preserve store ordering. The monitor state must
// be stable at the time of publishing the monitor address.
guarantee(object->mark() == markWord::INFLATING(), "invariant");
object->release_set_mark(markWord::encode(m));
// Hopefully the performance counters are allocated on distinct cache lines
// to avoid false sharing on MP systems ...
OM_PERFDATA_OP(Inflations, inc());
if (log_is_enabled(Trace, monitorinflation)) {
ResourceMark rm(self);
--- 1898,1927 ----
// Optimization: if the mark.locker stack address is associated
// with this thread we could simply set m->_owner = self.
// Note that a thread can inflate an object
// that it has stack-locked -- as might happen in wait() -- directly
// with CAS. That is, we can avoid the xchg-NULL .... ST idiom.
! if (AsyncDeflateIdleMonitors) {
! m->set_owner_from(mark.locker(), NULL, DEFLATER_MARKER);
! } else {
! m->set_owner_from(mark.locker(), NULL);
! }
m->set_object(object);
// TODO-FIXME: assert BasicLock->dhw != 0.
omh_p->set_om_ptr(m);
// Must preserve store ordering. The monitor state must
// be stable at the time of publishing the monitor address.
guarantee(object->mark() == markWord::INFLATING(), "invariant");
object->release_set_mark(markWord::encode(m));
+ // Once the ObjectMonitor is configured and the object is associated
+ // with the ObjectMonitor, it is safe to allow async deflation:
+ assert(m->is_new(), "freshly allocated monitor must be new");
+ m->set_allocation_state(ObjectMonitor::Old);
+
// Hopefully the performance counters are allocated on distinct cache lines
// to avoid false sharing on MP systems ...
OM_PERFDATA_OP(Inflations, inc());
if (log_is_enabled(Trace, monitorinflation)) {
ResourceMark rm(self);
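
The code motion above is the important part of this hunk: the New -> Old flip now happens only after object->release_set_mark() publishes the monitor, because the async deflater only considers is_old() monitors and must never examine one that is still being configured. The publish-then-enable ordering in miniature (hypothetical names):

    #include <atomic>

    struct InflMonitor {
      void* object = nullptr;                      // plain field, set first
      std::atomic<bool> deflation_allowed{false};  // models New -> Old
    };

    static std::atomic<InflMonitor*> mark_word{nullptr};  // models obj->mark()

    static void publish(InflMonitor* m, void* obj) {
      m->object = obj;                                  // 1. configure
      mark_word.store(m, std::memory_order_release);    // 2. publish
      m->deflation_allowed.store(true);                 // 3. allow deflation
    }
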
*** 2027,2057 ****
m->set_object(object);
m->_Responsible = NULL;
m->_SpinDuration = ObjectMonitor::Knob_SpinLimit; // consider: keep metastats by type/class
omh_p->set_om_ptr(m);
- assert(m->is_new(), "freshly allocated monitor must be new");
- m->set_allocation_state(ObjectMonitor::Old);
if (object->cas_set_mark(markWord::encode(m), mark) != mark) {
- guarantee(!m->owner_is_DEFLATER_MARKER() || m->ref_count() >= 0,
- "race between deflation and om_release() with m=" INTPTR_FORMAT
- ", _owner=" INTPTR_FORMAT ", ref_count=%d", p2i(m),
- p2i(m->_owner), m->ref_count());
m->set_header(markWord::zero());
m->set_object(NULL);
m->Recycle();
omh_p->set_om_ptr(NULL);
! // om_release() will reset the allocation state
om_release(self, m, true);
m = NULL;
continue;
// interference - the markword changed - just retry.
// The state-transitions are one-way, so there's no chance of
// live-lock -- "Inflated" is an absorbing state.
}
// Hopefully the performance counters are allocated on distinct
// cache lines to avoid false sharing on MP systems ...
OM_PERFDATA_OP(Inflations, inc());
if (log_is_enabled(Trace, monitorinflation)) {
ResourceMark rm(self);
--- 1958,1987 ----
m->set_object(object);
m->_Responsible = NULL;
m->_SpinDuration = ObjectMonitor::Knob_SpinLimit; // consider: keep metastats by type/class
omh_p->set_om_ptr(m);
if (object->cas_set_mark(markWord::encode(m), mark) != mark) {
m->set_header(markWord::zero());
m->set_object(NULL);
m->Recycle();
omh_p->set_om_ptr(NULL);
! // om_release() will reset the allocation state from New to Free.
om_release(self, m, true);
m = NULL;
continue;
// interference - the markword changed - just retry.
// The state-transitions are one-way, so there's no chance of
// live-lock -- "Inflated" is an absorbing state.
}
+ // Once the ObjectMonitor is configured and the object is associated
+ // with the ObjectMonitor, it is safe to allow async deflation:
+ assert(m->is_new(), "freshly allocated monitor must be new");
+ m->set_allocation_state(ObjectMonitor::Old);
+
// Hopefully the performance counters are allocated on distinct
// cache lines to avoid false sharing on MP systems ...
OM_PERFDATA_OP(Inflations, inc());
if (log_is_enabled(Trace, monitorinflation)) {
ResourceMark rm(self);
*** 2151,2162 ****
// Restore the header back to obj
obj->release_set_mark(dmw);
if (AsyncDeflateIdleMonitors) {
// clear() expects the owner field to be NULL and we won't race
// with the simple C2 ObjectMonitor enter optimization since
! // we're at a safepoint.
! mid->set_owner(NULL);
}
mid->clear();
assert(mid->object() == NULL, "invariant: object=" INTPTR_FORMAT,
p2i(mid->object()));
--- 2081,2093 ----
// Restore the header back to obj
obj->release_set_mark(dmw);
if (AsyncDeflateIdleMonitors) {
// clear() expects the owner field to be NULL and we won't race
// with the simple C2 ObjectMonitor enter optimization since
! // we're at a safepoint. DEFLATER_MARKER is the only non-NULL
! // value we should see here.
! mid->try_set_owner_from(NULL, DEFLATER_MARKER);
}
mid->clear();
assert(mid->object() == NULL, "invariant: object=" INTPTR_FORMAT,
p2i(mid->object()));
*** 2217,2238 ****
// Easy checks are first - the ObjectMonitor is busy or ObjectMonitor*
// is in use so no deflation.
return false;
}
! if (Atomic::replace_if_null(DEFLATER_MARKER, &(mid->_owner))) {
// ObjectMonitor is not owned by another thread. Our setting
// owner to DEFLATER_MARKER forces any contending thread through
// the slow path. This is just the first part of the async
// deflation dance.
if (mid->_contentions != 0 || mid->_waiters != 0) {
// Another thread has raced to enter the ObjectMonitor after
// mid->is_busy() above or has already entered and waited on
// it which makes it busy so no deflation. Restore owner to
// NULL if it is still DEFLATER_MARKER.
! Atomic::cmpxchg((void*)NULL, &mid->_owner, DEFLATER_MARKER);
return false;
}
if (Atomic::cmpxchg(-max_jint, &mid->_ref_count, (jint)0) == 0) {
// Make ref_count negative to force any contending threads or
--- 2148,2169 ----
// Easy checks are first - the ObjectMonitor is busy or ObjectMonitor*
// is in use so no deflation.
return false;
}
! if (mid->try_set_owner_from(DEFLATER_MARKER, NULL) == NULL) {
// ObjectMonitor is not owned by another thread. Our setting
// owner to DEFLATER_MARKER forces any contending thread through
// the slow path. This is just the first part of the async
// deflation dance.
if (mid->_contentions != 0 || mid->_waiters != 0) {
// Another thread has raced to enter the ObjectMonitor after
// mid->is_busy() above or has already entered and waited on
// it which makes it busy so no deflation. Restore owner to
// NULL if it is still DEFLATER_MARKER.
! mid->try_set_owner_from(NULL, DEFLATER_MARKER);
return false;
}
if (Atomic::cmpxchg(-max_jint, &mid->_ref_count, (jint)0) == 0) {
// Make ref_count negative to force any contending threads or
*** 2320,2330 ****
}
// The ref_count was no longer 0 so we lost the race since the
// ObjectMonitor is now busy or the ObjectMonitor* is now in use.
// Restore owner to NULL if it is still DEFLATER_MARKER:
! Atomic::cmpxchg((void*)NULL, &mid->_owner, DEFLATER_MARKER);
}
// The owner field is no longer NULL so we lost the race since the
// ObjectMonitor is now busy.
return false;
--- 2251,2261 ----
}
// The ref_count was no longer 0 so we lost the race since the
// ObjectMonitor is now busy or the ObjectMonitor* is now in use.
// Restore owner to NULL if it is still DEFLATER_MARKER:
! mid->try_set_owner_from(NULL, DEFLATER_MARKER);
}
// The owner field is no longer NULL so we lost the race since the
// ObjectMonitor is now busy.
return false;
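
Taken together, the hunks above are the two CAS gates of the async deflation protocol, now routed through try_set_owner_from(): claim _owner with DEFLATER_MARKER, re-check for contenders and waiters, then drive ref_count from 0 to -max_jint; failing either gate restores _owner to NULL (if it is still DEFLATER_MARKER) and bails. A condensed stand-alone model (DEFLATER_MARKER and the field types are stand-ins):

    #include <atomic>
    #include <climits>
    #include <cstdint>

    static void* const DEFLATER_MARKER =
        reinterpret_cast<void*>(static_cast<std::intptr_t>(-1));

    struct DeflMonitor {
      std::atomic<void*> owner{nullptr};
      std::atomic<int>   ref_count{0};
      int contentions = 0, waiters = 0;
    };

    static bool try_deflate(DeflMonitor* m) {
      void* expected = nullptr;
      // Gate 1: force contending threads through the slow path.
      if (!m->owner.compare_exchange_strong(expected, DEFLATER_MARKER)) {
        return false;                 // owned by another thread
      }
      if (m->contentions != 0 || m->waiters != 0) {
        expected = DEFLATER_MARKER;   // raced with enter/wait: undo gate 1
        m->owner.compare_exchange_strong(expected, nullptr);
        return false;
      }
      // Gate 2: a negative ref_count makes ObjectMonitor* users retry.
      int zero = 0;
      if (!m->ref_count.compare_exchange_strong(zero, INT_MIN + 1)) {
        expected = DEFLATER_MARKER;   // an ObjectMonitor* user won: undo gate 1
        m->owner.compare_exchange_strong(expected, nullptr);
        return false;
      }
      return true;  // deflated: safe to restore the header and recycle
    }
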
*** 2361,2392 ****
oop obj = (oop) mid->object();
if (obj != NULL && deflate_monitor(mid, obj, free_head_p, free_tail_p)) {
// Deflation succeeded and already updated free_head_p and
// free_tail_p as needed. Finish the move to the local free list
// by unlinking mid from the global or per-thread in-use list.
! if (Atomic::cmpxchg(next, list_p, mid) != mid) {
! // We could not switch the list head to next.
! ADIM_guarantee(cur_mid_in_use != NULL, "must not be NULL");
! if (Atomic::cmpxchg(next, &cur_mid_in_use->_next_om, mid) != mid) {
! // deflate_monitor_list() is called at a safepoint so the
! // global or per-thread in-use list should not be modified
! // in parallel so we:
! fatal("mid=" INTPTR_FORMAT " must be referred to by the list head: "
! "list_p=" INTPTR_FORMAT " or by cur_mid_in_use's next field: "
! "cur_mid_in_use=" INTPTR_FORMAT ", next_om=" INTPTR_FORMAT,
! p2i(mid), p2i((ObjectMonitor**)list_p), p2i(cur_mid_in_use),
! p2i(cur_mid_in_use->_next_om));
! }
}
// At this point mid is disconnected from the in-use list so
// its marked next field no longer has any effects.
deflated_count++;
Atomic::dec(count_p);
- chk_for_list_loop(OrderAccess::load_acquire(list_p),
- OrderAccess::load_acquire(count_p));
- chk_om_not_on_list(mid, OrderAccess::load_acquire(list_p),
- OrderAccess::load_acquire(count_p));
// mid is current tail in the free_head_p list so NULL terminate it
// (which also unmarks it):
set_next(mid, NULL);
// All the list management is done so move on to the next one:
--- 2292,2315 ----
oop obj = (oop) mid->object();
if (obj != NULL && deflate_monitor(mid, obj, free_head_p, free_tail_p)) {
// Deflation succeeded and already updated free_head_p and
// free_tail_p as needed. Finish the move to the local free list
// by unlinking mid from the global or per-thread in-use list.
! if (cur_mid_in_use == NULL) {
! // mid is the list head and it is marked. Switch the list head
! // to next which unmarks the list head, but leaves mid marked:
! OrderAccess::release_store(list_p, next);
! } else {
! // mid is marked. Switch cur_mid_in_use's next field to next
! // which is safe because we have no parallel list deletions,
! // but we leave mid marked:
! OrderAccess::release_store(&cur_mid_in_use->_next_om, next);
}
// At this point mid is disconnected from the in-use list so
// its marked next field no longer has any effects.
deflated_count++;
Atomic::dec(count_p);
// mid is current tail in the free_head_p list so NULL terminate it
// (which also unmarks it):
set_next(mid, NULL);
// All the list management is done so move on to the next one:
*** 2456,2466 ****
while (true) {
// The current mid's next field is marked at this point. If we have
// a cur_mid_in_use, then its next field is also marked at this point.
if (next != NULL) {
! // We mark the next -> next field so that an om_flush()
// thread that is behind us cannot pass us when we
// unmark the current mid's next field.
next_next = mark_next_loop(next);
}
--- 2379,2389 ----
while (true) {
// The current mid's next field is marked at this point. If we have
// a cur_mid_in_use, then its next field is also marked at this point.
if (next != NULL) {
! // We mark next's next field so that an om_flush()
// thread that is behind us cannot pass us when we
// unmark the current mid's next field.
next_next = mark_next_loop(next);
}
*** 2469,2503 ****
if (mid->object() != NULL && mid->is_old() &&
deflate_monitor_using_JT(mid, free_head_p, free_tail_p)) {
// Deflation succeeded and already updated free_head_p and
// free_tail_p as needed. Finish the move to the local free list
// by unlinking mid from the global or per-thread in-use list.
! if (Atomic::cmpxchg(next, list_p, mid) != mid) {
! // We could not switch the list head to next.
! ObjectMonitor* marked_mid = mark_om_ptr(mid);
ObjectMonitor* marked_next = mark_om_ptr(next);
! // Switch cur_mid_in_use's next field to marked next:
! ADIM_guarantee(cur_mid_in_use != NULL, "must not be NULL");
! if (Atomic::cmpxchg(marked_next, &cur_mid_in_use->_next_om,
! marked_mid) != marked_mid) {
! // We could not switch cur_mid_in_use's next field. This
! // should not be possible since it was marked so we:
! fatal("mid=" INTPTR_FORMAT " must be referred to by the list head: "
! "&list_p=" INTPTR_FORMAT " or by cur_mid_in_use's next field: "
! "cur_mid_in_use=" INTPTR_FORMAT ", next_om=" INTPTR_FORMAT,
! p2i(mid), p2i((ObjectMonitor**)list_p), p2i(cur_mid_in_use),
! p2i(cur_mid_in_use->_next_om));
! }
}
// At this point mid is disconnected from the in-use list so
// its marked next field no longer has any effects.
deflated_count++;
Atomic::dec(count_p);
- chk_for_list_loop(OrderAccess::load_acquire(list_p),
- OrderAccess::load_acquire(count_p));
- chk_om_not_on_list(mid, OrderAccess::load_acquire(list_p),
- OrderAccess::load_acquire(count_p));
// mid is current tail in the free_head_p list so NULL terminate it
// (which also unmarks it):
set_next(mid, NULL);
// All the list management is done so move on to the next one:
--- 2392,2416 ----
if (mid->object() != NULL && mid->is_old() &&
deflate_monitor_using_JT(mid, free_head_p, free_tail_p)) {
// Deflation succeeded and already updated free_head_p and
// free_tail_p as needed. Finish the move to the local free list
// by unlinking mid from the global or per-thread in-use list.
! if (cur_mid_in_use == NULL) {
! // mid is the list head and it is marked. Switch the list head
! // to next which is also marked (if not NULL) and also leave
! // mid marked:
! OrderAccess::release_store(list_p, next);
! } else {
ObjectMonitor* marked_next = mark_om_ptr(next);
! // mid and cur_mid_in_use are marked. Switch cur_mid_in_use's
! // next field to marked_next and also leave mid marked:
! OrderAccess::release_store(&cur_mid_in_use->_next_om, marked_next);
}
// At this point mid is disconnected from the in-use list so
// its marked next field no longer has any effects.
deflated_count++;
Atomic::dec(count_p);
// mid is current tail in the free_head_p list so NULL terminate it
// (which also unmarks it):
set_next(mid, NULL);
// All the list management is done so move on to the next one:
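
Note the asymmetry with the safepoint variant earlier: here next is re-marked (mark_om_ptr(next)) before being stored into cur_mid_in_use, because om_flush() and other walkers can run concurrently and must keep seeing a marked link until the unlink settles. Walkers cope because they only ever follow unmarked_next(). A sketch of such a mark-tolerant walker in the ANode model above (again relying on type-stable storage):

    static ANode* a_unmark(ANode* p) {
      return reinterpret_cast<ANode*>(reinterpret_cast<std::uintptr_t>(p) & ~std::uintptr_t(0x1));
    }

    // A lagging walker: strips the mark bit from the head value and
    // from each link it follows, so it can traverse while unlink and
    // prepend operations are in flight.
    static void walk(std::atomic<ANode*>& head, void (*visit)(ANode*)) {
      for (ANode* n = a_unmark(head.load()); n != nullptr;
           n = a_unmark(n->next.load())) {
        visit(n);
      }
    }
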
*** 2618,2627 ****
--- 2531,2601 ----
if (ls != NULL) {
ls->print_cr("deflating global idle monitors, %3.7f secs, %d monitors", timer.seconds(), deflated_count);
}
}
+ class HandshakeForDeflation : public ThreadClosure {
+ public:
+ void do_thread(Thread* thread) {
+ log_trace(monitorinflation)("HandshakeForDeflation::do_thread: thread="
+ INTPTR_FORMAT, p2i(thread));
+ }
+ };
+
+ void ObjectSynchronizer::deflate_idle_monitors_using_JT() {
+ assert(AsyncDeflateIdleMonitors, "sanity check");
+
+ // Deflate any global idle monitors.
+ deflate_global_idle_monitors_using_JT();
+
+ int count = 0;
+ for (JavaThreadIteratorWithHandle jtiwh; JavaThread *jt = jtiwh.next(); ) {
+ if (jt->om_in_use_count > 0 && !jt->is_exiting()) {
+ // This JavaThread is using ObjectMonitors so deflate any that
+ // are idle unless this JavaThread is exiting; do not race with
+ // ObjectSynchronizer::om_flush().
+ deflate_per_thread_idle_monitors_using_JT(jt);
+ count++;
+ }
+ }
+ if (count > 0) {
+ log_debug(monitorinflation)("did async deflation of idle monitors for %d thread(s).", count);
+ }
+ // The ServiceThread's async deflation request has been processed.
+ set_is_async_deflation_requested(false);
+
+ if (HandshakeAfterDeflateIdleMonitors && g_om_wait_count > 0) {
+ // There are deflated ObjectMonitors waiting for a handshake
+ // (or a safepoint) for safety.
+
+ // g_wait_list and g_om_wait_count are only updated by the calling
+ // thread so no need for load_acquire() or release_store().
+ ObjectMonitor* list = g_wait_list;
+ ADIM_guarantee(list != NULL, "g_wait_list must not be NULL");
+ int count = g_om_wait_count;
+ g_wait_list = NULL;
+ g_om_wait_count = 0;
+
+ // Find the tail for prepend_list_to_common().
+ int l_count = 0;
+ ObjectMonitor* tail = NULL;
+ for (ObjectMonitor* n = list; n != NULL; n = unmarked_next(n)) {
+ tail = n;
+ l_count++;
+ }
+ ADIM_guarantee(count == l_count, "count=%d != l_count=%d", count, l_count);
+
+ // Will execute a safepoint if !ThreadLocalHandshakes:
+ HandshakeForDeflation hfd_tc;
+ Handshake::execute(&hfd_tc);
+
+ prepend_list_to_common(list, tail, count, &g_free_list, &g_om_free_count);
+
+ log_info(monitorinflation)("moved %d idle monitors from global waiting list to global free list", count);
+ }
+ }
+
// Deflate global idle ObjectMonitors using a JavaThread.
//
void ObjectSynchronizer::deflate_global_idle_monitors_using_JT() {
assert(AsyncDeflateIdleMonitors, "sanity check");
assert(Thread::current()->is_Java_thread(), "precondition");
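
Why the handshake works as a reclamation barrier: a JavaThread can hold a raw ObjectMonitor* without a ref_count (e.g. in C2 fast paths on platforms without ref_count support), but the design relies on it not holding one across a handshake poll. Once every JavaThread has executed the (empty) HandshakeForDeflation closure, no stale ObjectMonitor* to a wait-listed monitor can remain, so the wait list may be drained to the free list. A toy model of that barrier, not the Handshake API:

    #include <atomic>
    #include <vector>

    struct ThreadModel { std::atomic<bool> passed{false}; };

    // Each thread sets 'passed' at its next poll point; the deflater
    // waits for all of them before recycling wait-listed monitors.
    // (!ThreadLocalHandshakes degrades this to a global safepoint.)
    static void wait_for_all(const std::vector<ThreadModel*>& threads) {
      for (ThreadModel* t : threads) {
        while (!t->passed.load(std::memory_order_acquire)) { /* spin */ }
      }
    }
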
*** 2682,2692 ****
--- 2656,2670 ----
// and then unmarked while prepend_to_common() is sorting it
// all out.
assert(unmarked_next(free_tail_p) == NULL, "must be NULL: _next_om="
INTPTR_FORMAT, p2i(unmarked_next(free_tail_p)));
+ if (HandshakeAfterDeflateIdleMonitors) {
+ prepend_list_to_g_wait_list(free_head_p, free_tail_p, local_deflated_count);
+ } else {
prepend_list_to_g_free_list(free_head_p, free_tail_p, local_deflated_count);
+ }
OM_PERFDATA_OP(Deflations, inc(local_deflated_count));
}
if (saved_mid_in_use_p != NULL) {
*** 2752,2765 ****
// exit_globals()'s call to audit_and_print_stats() is done
// at the Info level.
ObjectSynchronizer::audit_and_print_stats(false /* on_exit */);
} else if (log_is_enabled(Info, monitorinflation)) {
log_info(monitorinflation)("g_om_population=%d, g_om_in_use_count=%d, "
! "g_om_free_count=%d",
OrderAccess::load_acquire(&g_om_population),
OrderAccess::load_acquire(&g_om_in_use_count),
! OrderAccess::load_acquire(&g_om_free_count));
}
ForceMonitorScavenge = 0; // Reset
GVars.stw_random = os::random();
GVars.stw_cycle++;
--- 2730,2744 ----
// exit_globals()'s call to audit_and_print_stats() is done
// at the Info level.
ObjectSynchronizer::audit_and_print_stats(false /* on_exit */);
} else if (log_is_enabled(Info, monitorinflation)) {
log_info(monitorinflation)("g_om_population=%d, g_om_in_use_count=%d, "
! "g_om_free_count=%d, g_om_wait_count=%d",
OrderAccess::load_acquire(&g_om_population),
OrderAccess::load_acquire(&g_om_in_use_count),
! OrderAccess::load_acquire(&g_om_free_count),
! OrderAccess::load_acquire(&g_om_wait_count));
}
ForceMonitorScavenge = 0; // Reset
GVars.stw_random = os::random();
GVars.stw_cycle++;
*** 2922,2944 ****
if (OrderAccess::load_acquire(&g_om_population) == chk_om_population) {
ls->print_cr("g_om_population=%d equals chk_om_population=%d",
OrderAccess::load_acquire(&g_om_population),
chk_om_population);
} else {
! ls->print_cr("ERROR: g_om_population=%d is not equal to "
"chk_om_population=%d",
OrderAccess::load_acquire(&g_om_population),
chk_om_population);
- error_cnt++;
}
// Check g_om_in_use_list and g_om_in_use_count:
chk_global_in_use_list_and_count(ls, &error_cnt);
// Check g_free_list and g_om_free_count:
chk_global_free_list_and_count(ls, &error_cnt);
ls->print_cr("Checking per-thread lists:");
for (JavaThreadIteratorWithHandle jtiwh; JavaThread *jt = jtiwh.next(); ) {
// Check om_in_use_list and om_in_use_count:
chk_per_thread_in_use_list_and_count(jt, ls, &error_cnt);
--- 2901,2931 ----
if (OrderAccess::load_acquire(&g_om_population) == chk_om_population) {
ls->print_cr("g_om_population=%d equals chk_om_population=%d",
OrderAccess::load_acquire(&g_om_population),
chk_om_population);
} else {
! // With lock free access to the monitor lists, it is possible for
! // log_monitor_list_counts() to return a value that doesn't match
! // g_om_population. So far a higher value has been seen in testing
! // so something is being double counted by log_monitor_list_counts().
! ls->print_cr("WARNING: g_om_population=%d is not equal to "
"chk_om_population=%d",
OrderAccess::load_acquire(&g_om_population),
chk_om_population);
}
// Check g_om_in_use_list and g_om_in_use_count:
chk_global_in_use_list_and_count(ls, &error_cnt);
// Check g_free_list and g_om_free_count:
chk_global_free_list_and_count(ls, &error_cnt);
+ if (HandshakeAfterDeflateIdleMonitors) {
+ // Check g_wait_list and g_om_wait_count:
+ chk_global_wait_list_and_count(ls, &error_cnt);
+ }
+
ls->print_cr("Checking per-thread lists:");
for (JavaThreadIteratorWithHandle jtiwh; JavaThread *jt = jtiwh.next(); ) {
// Check om_in_use_list and om_in_use_count:
chk_per_thread_in_use_list_and_count(jt, ls, &error_cnt);
*** 3032,3041 ****
--- 3019,3050 ----
OrderAccess::load_acquire(&g_om_free_count),
chk_om_free_count);
}
}
+ // Check the global wait list and count; log the results of the checks.
+ void ObjectSynchronizer::chk_global_wait_list_and_count(outputStream * out,
+ int *error_cnt_p) {
+ int chk_om_wait_count = 0;
+ for (ObjectMonitor* n = OrderAccess::load_acquire(&g_wait_list); n != NULL; n = unmarked_next(n)) {
+ // Rules for g_wait_list are the same as for g_free_list:
+ chk_free_entry(NULL /* jt */, n, out, error_cnt_p);
+ chk_om_wait_count++;
+ }
+ if (OrderAccess::load_acquire(&g_om_wait_count) == chk_om_wait_count) {
+ out->print_cr("g_om_wait_count=%d equals chk_om_wait_count=%d",
+ OrderAccess::load_acquire(&g_om_wait_count),
+ chk_om_wait_count);
+ } else {
+ out->print_cr("ERROR: g_om_wait_count=%d is not equal to "
+ "chk_om_wait_count=%d",
+ OrderAccess::load_acquire(&g_om_wait_count),
+ chk_om_wait_count);
+ *error_cnt_p = *error_cnt_p + 1;
+ }
+ }
+
// Check the global in-use list and count; log the results of the checks.
void ObjectSynchronizer::chk_global_in_use_list_and_count(outputStream * out,
int *error_cnt_p) {
int chk_om_in_use_count = 0;
for (ObjectMonitor* n = OrderAccess::load_acquire(&g_om_in_use_list); n != NULL; n = unmarked_next(n)) {
*** 3045,3058 ****
if (OrderAccess::load_acquire(&g_om_in_use_count) == chk_om_in_use_count) {
out->print_cr("g_om_in_use_count=%d equals chk_om_in_use_count=%d",
OrderAccess::load_acquire(&g_om_in_use_count),
chk_om_in_use_count);
} else {
! out->print_cr("ERROR: g_om_in_use_count=%d is not equal to chk_om_in_use_count=%d",
OrderAccess::load_acquire(&g_om_in_use_count),
chk_om_in_use_count);
- *error_cnt_p = *error_cnt_p + 1;
}
}
// Check an in-use monitor entry; log any errors.
void ObjectSynchronizer::chk_in_use_entry(JavaThread* jt, ObjectMonitor* n,
--- 3054,3069 ----
if (OrderAccess::load_acquire(&g_om_in_use_count) == chk_om_in_use_count) {
out->print_cr("g_om_in_use_count=%d equals chk_om_in_use_count=%d",
OrderAccess::load_acquire(&g_om_in_use_count),
chk_om_in_use_count);
} else {
! // With lock free access to the monitor lists, it is possible for
! // an exiting JavaThread to put its in-use ObjectMonitors on the
! // global in-use list after chk_om_in_use_count is calculated above.
! out->print_cr("WARNING: g_om_in_use_count=%d is not equal to chk_om_in_use_count=%d",
OrderAccess::load_acquire(&g_om_in_use_count),
chk_om_in_use_count);
}
}
// Check an in-use monitor entry; log any errors.
void ObjectSynchronizer::chk_in_use_entry(JavaThread* jt, ObjectMonitor* n,
*** 3213,3231 ****
// Log counts for the global and per-thread monitor lists and return
// the population count.
int ObjectSynchronizer::log_monitor_list_counts(outputStream * out) {
int pop_count = 0;
! out->print_cr("%18s %10s %10s %10s",
! "Global Lists:", "InUse", "Free", "Total");
! out->print_cr("================== ========== ========== ==========");
! out->print_cr("%18s %10d %10d %10d", "",
OrderAccess::load_acquire(&g_om_in_use_count),
OrderAccess::load_acquire(&g_om_free_count),
OrderAccess::load_acquire(&g_om_population));
pop_count += OrderAccess::load_acquire(&g_om_in_use_count) +
OrderAccess::load_acquire(&g_om_free_count);
out->print_cr("%18s %10s %10s %10s",
"Per-Thread Lists:", "InUse", "Free", "Provision");
out->print_cr("================== ========== ========== ==========");
--- 3224,3246 ----
// Log counts for the global and per-thread monitor lists and return
// the population count.
int ObjectSynchronizer::log_monitor_list_counts(outputStream * out) {
int pop_count = 0;
! out->print_cr("%18s %10s %10s %10s %10s",
! "Global Lists:", "InUse", "Free", "Wait", "Total");
! out->print_cr("================== ========== ========== ========== ==========");
! out->print_cr("%18s %10d %10d %10d %10d", "",
OrderAccess::load_acquire(&g_om_in_use_count),
OrderAccess::load_acquire(&g_om_free_count),
+ OrderAccess::load_acquire(&g_om_wait_count),
OrderAccess::load_acquire(&g_om_population));
pop_count += OrderAccess::load_acquire(&g_om_in_use_count) +
OrderAccess::load_acquire(&g_om_free_count);
+ if (HandshakeAfterDeflateIdleMonitors) {
+ pop_count += OrderAccess::load_acquire(&g_om_wait_count);
+ }
out->print_cr("%18s %10s %10s %10s",
"Per-Thread Lists:", "InUse", "Free", "Provision");
out->print_cr("================== ========== ========== ==========");