# HG changeset patch # User rehn # Date 1558342487 -7200 # Mon May 20 10:54:47 2019 +0200 # Node ID 254bfd2cccf9b62b24dbb789d114bf2f8e9f7e79 # Parent cb80f2adf35c62b4d058ceba3b6c5d0865f87988 imported patch 8221734-v3 diff --git a/src/hotspot/share/aot/aotCodeHeap.cpp b/src/hotspot/share/aot/aotCodeHeap.cpp --- a/src/hotspot/share/aot/aotCodeHeap.cpp +++ b/src/hotspot/share/aot/aotCodeHeap.cpp @@ -38,6 +38,7 @@ #include "memory/universe.hpp" #include "oops/compressedOops.hpp" #include "oops/method.inline.hpp" +#include "runtime/deoptimization.hpp" #include "runtime/handles.inline.hpp" #include "runtime/os.hpp" #include "runtime/safepointVerifiers.hpp" @@ -733,8 +734,7 @@ } } if (marked > 0) { - VM_Deoptimize op; - VMThread::execute(&op); + Deoptimization::deoptimize_all_marked(); } } diff --git a/src/hotspot/share/aot/aotCompiledMethod.cpp b/src/hotspot/share/aot/aotCompiledMethod.cpp --- a/src/hotspot/share/aot/aotCompiledMethod.cpp +++ b/src/hotspot/share/aot/aotCompiledMethod.cpp @@ -165,7 +165,7 @@ { // Enter critical section. Does not block for safepoint. - MutexLocker pl(Patching_lock, Mutex::_no_safepoint_check_flag); + MutexLocker pl(CompiledMethod_lock, Mutex::_no_safepoint_check_flag); if (*_state_adr == new_state) { // another thread already performed this transition so nothing @@ -188,12 +188,10 @@ #endif // Remove AOTCompiledMethod from method. - if (method() != NULL && (method()->code() == this || - method()->from_compiled_entry() == verified_entry_point())) { - HandleMark hm; - method()->clear_code(false /* already owns Patching_lock */); + if (method() != NULL) { + method()->unlink_code(this); } - } // leave critical region under Patching_lock + } // leave critical region under CompiledMethod_lock if (TraceCreateZombies) { @@ -216,7 +214,7 @@ { // Enter critical section. Does not block for safepoint. - MutexLocker pl(Patching_lock, Mutex::_no_safepoint_check_flag); + MutexLocker pl(CompiledMethod_lock, Mutex::_no_safepoint_check_flag); if (*_state_adr == in_use) { // another thread already performed this transition so nothing @@ -230,7 +228,7 @@ // Log the transition once log_state_change(); - } // leave critical region under Patching_lock + } // leave critical region under CompiledMethod_lock if (TraceCreateZombies) { diff --git a/src/hotspot/share/aot/aotCompiledMethod.hpp b/src/hotspot/share/aot/aotCompiledMethod.hpp --- a/src/hotspot/share/aot/aotCompiledMethod.hpp +++ b/src/hotspot/share/aot/aotCompiledMethod.hpp @@ -175,6 +175,7 @@ state() == not_used; } virtual bool is_alive() const { return _is_alive(); } virtual bool is_in_use() const { return state() == in_use; } + virtual bool is_not_installed() const { return state() == not_installed; } virtual bool is_unloading() { return false; } diff --git a/src/hotspot/share/code/codeCache.cpp b/src/hotspot/share/code/codeCache.cpp --- a/src/hotspot/share/code/codeCache.cpp +++ b/src/hotspot/share/code/codeCache.cpp @@ -1142,28 +1142,25 @@ // At least one nmethod has been marked for deoptimization - // All this already happens inside a VM_Operation, so we'll do all the work here. - // Stuff copied from VM_Deoptimize and modified slightly. 
- - // We do not want any GCs to happen while we are in the middle of this VM operation - ResourceMark rm; - DeoptimizationMarker dm; - - // Deoptimize all activations depending on marked nmethods - Deoptimization::deoptimize_dependents(); - - // Make the dependent methods not entrant - make_marked_nmethods_not_entrant(); + Deoptimization::deoptimize_all_marked(); } #endif // INCLUDE_JVMTI -// Deoptimize all methods +// Deoptimize all (most) methods void CodeCache::mark_all_nmethods_for_deoptimization() { MutexLocker mu(CodeCache_lock, Mutex::_no_safepoint_check_flag); CompiledMethodIterator iter(CompiledMethodIterator::only_alive_and_not_unloading); while(iter.next()) { CompiledMethod* nm = iter.method(); - if (!nm->method()->is_method_handle_intrinsic()) { + // Not-installed nmethods are unsafe to mark for deopt and are normally never deopted. + // A not_entrant method may become a zombie at any time, + // since we don't know on which side of the last safepoint it became not_entrant + // (the state must be in_use). + // Native methods are unsafe to mark for deopt and are normally never deopted. + if (!nm->method()->is_method_handle_intrinsic() && + !nm->is_not_installed() && + nm->is_in_use() && + !nm->is_native_method()) { nm->mark_for_deoptimization(); } } @@ -1191,7 +1188,12 @@ CompiledMethodIterator iter(CompiledMethodIterator::only_alive_and_not_unloading); while(iter.next()) { CompiledMethod* nm = iter.method(); - if (nm->is_marked_for_deoptimization() && !nm->is_not_entrant()) { + // only_alive_and_not_unloading also returns not_entrant nmethods. + // A not_entrant nmethod can become a zombie at any time + // if it was made not_entrant before the previous safepoint/handshake. + // We rule out both not_entrant and zombie nmethods + // by checking is_in_use(). + if (nm->is_marked_for_deoptimization() && nm->is_in_use()) { nm->make_not_entrant(); } } @@ -1203,17 +1205,12 @@ if (number_of_nmethods_with_dependencies() == 0) return; - // CodeCache can only be updated by a thread_in_VM and they will all be - // stopped during the safepoint so CodeCache will be safe to update without - // holding the CodeCache_lock. - KlassDepChange changes(dependee); // Compute the dependent nmethods if (mark_for_deoptimization(changes) > 0) { // At least one nmethod has been marked for deoptimization - VM_Deoptimize op; - VMThread::execute(&op); + Deoptimization::deoptimize_all_marked(); } } @@ -1222,26 +1219,9 @@ // --- Compile_lock is not held. However we are at a safepoint. assert_locked_or_safepoint(Compile_lock); - // CodeCache can only be updated by a thread_in_VM and they will all be - // stopped dring the safepoint so CodeCache will be safe to update without - // holding the CodeCache_lock. - // Compute the dependent nmethods if (mark_for_deoptimization(m_h()) > 0) { - // At least one nmethod has been marked for deoptimization - - // All this already happens inside a VM_Operation, so we'll do all the work here. - // Stuff copied from VM_Deoptimize and modified slightly.
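Taken together, the two passes above form a small state machine over the nmethod life cycle. The following standalone C++ sketch is illustrative only, not HotSpot code; all type and function names are invented for the model. It shows the marking filter and the follow-up not-entrant pass:

#include <cassert>

// Illustrative nmethod life cycle; the real state lives in
// nmethod::_state and is protected by CompiledMethod_lock.
enum State { not_installed, in_use, not_entrant, zombie, unloaded };

struct NMethodModel {
  State state;
  bool  is_native;
  bool  is_method_handle_intrinsic;
  bool  marked_for_deopt;
};

// First pass: only fully installed, in_use, non-native, non-intrinsic
// methods are safe to mark (the in_use test already excludes
// not_installed; the patch spells out both checks for clarity).
bool safe_to_mark(const NMethodModel& nm) {
  return !nm.is_method_handle_intrinsic &&
         nm.state != not_installed &&
         nm.state == in_use &&
         !nm.is_native;
}

int main() {
  NMethodModel nm = { in_use, false, false, false };
  if (safe_to_mark(nm)) nm.marked_for_deopt = true;
  // Second pass: only in_use nmethods are made not entrant; a
  // not_entrant one may already be on its way to becoming a zombie.
  if (nm.marked_for_deopt && nm.state == in_use) nm.state = not_entrant;
  assert(nm.state == not_entrant);
  return 0;
}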
- - // We do not want any GCs to happen while we are in the middle of this VM operation - ResourceMark rm; - DeoptimizationMarker dm; - - // Deoptimize all activations depending on marked nmethods - Deoptimization::deoptimize_dependents(); - - // Make the dependent methods not entrant - make_marked_nmethods_not_entrant(); + Deoptimization::deoptimize_all_marked(); } } diff --git a/src/hotspot/share/code/compiledMethod.hpp b/src/hotspot/share/code/compiledMethod.hpp --- a/src/hotspot/share/code/compiledMethod.hpp +++ b/src/hotspot/share/code/compiledMethod.hpp @@ -214,6 +214,7 @@ }; virtual bool is_in_use() const = 0; + virtual bool is_not_installed() const = 0; virtual int comp_level() const = 0; virtual int compile_id() const = 0; diff --git a/src/hotspot/share/code/nmethod.cpp b/src/hotspot/share/code/nmethod.cpp --- a/src/hotspot/share/code/nmethod.cpp +++ b/src/hotspot/share/code/nmethod.cpp @@ -49,6 +49,7 @@ #include "oops/oop.inline.hpp" #include "prims/jvmtiImpl.hpp" #include "runtime/atomic.hpp" +#include "runtime/deoptimization.hpp" #include "runtime/flags/flagSetting.hpp" #include "runtime/frame.inline.hpp" #include "runtime/handles.inline.hpp" @@ -1121,11 +1122,7 @@ // have the Method* live here, in case we unload the nmethod because // it is pointing to some oop (other than the Method*) being unloaded. if (_method != NULL) { - // OSR methods point to the Method*, but the Method* does not - // point back! - if (_method->code() == this) { - _method->clear_code(); // Break a cycle - } + _method->unlink_code(this); } // Make the class unloaded - i.e., change state and notify sweeper @@ -1207,16 +1204,9 @@ } } -void nmethod::unlink_from_method(bool acquire_lock) { - // We need to check if both the _code and _from_compiled_code_entry_point - // refer to this nmethod because there is a race in setting these two fields - // in Method* as seen in bugid 4947125. - // If the vep() points to the zombie nmethod, the memory for the nmethod - // could be flushed and the compiler and vtable stubs could still call - // through it. - if (method() != NULL && (method()->code() == this || - method()->from_compiled_entry() == verified_entry_point())) { - method()->clear_code(acquire_lock); +void nmethod::unlink_from_method() { + if (method() != NULL) { + method()->unlink_code(); } } @@ -1243,24 +1233,24 @@ // during patching, depending on the nmethod state we must notify the GC that // code has been unloaded, unregistering it. We cannot do this right while - // holding the Patching_lock because we need to use the CodeCache_lock. This + // holding the CompiledMethod_lock because we need to use the CodeCache_lock. This // would be prone to deadlocks. // This flag is used to remember whether we need to later lock and unregister. bool nmethod_needs_unregister = false; + // invalidate osr nmethod before acquiring the patching lock since + // they both acquire leaf locks and we don't want a deadlock. + // This logic is equivalent to the logic below for patching the + // verified entry point of regular methods. We check that the + // nmethod is in use to ensure that it is invalidated only once. + if (is_osr_method() && is_in_use()) { + // this effectively makes the osr nmethod not entrant + invalidate_osr_method(); + } + { - // invalidate osr nmethod before acquiring the patching lock since - // they both acquire leaf locks and we don't want a deadlock. - // This logic is equivalent to the logic below for patching the - // verified entry point of regular methods. 
We check that the - // nmethod is in use to ensure that it is invalidated only once. - if (is_osr_method() && is_in_use()) { - // this effectively makes the osr nmethod not entrant - invalidate_osr_method(); - } - // Enter critical section. Does not block for safepoint. - MutexLocker pl(Patching_lock, Mutex::_no_safepoint_check_flag); + MutexLocker pl(CompiledMethod_lock, Mutex::_no_safepoint_check_flag); if (_state == state) { // another thread already performed this transition so nothing @@ -1304,8 +1294,9 @@ log_state_change(); // Remove nmethod from method. - unlink_from_method(false /* already owns Patching_lock */); - } // leave critical region under Patching_lock + unlink_from_method(); + + } // leave critical region under CompiledMethod_lock #if INCLUDE_JVMCI // Invalidate can't occur while holding the Patching lock diff --git a/src/hotspot/share/code/nmethod.hpp b/src/hotspot/share/code/nmethod.hpp --- a/src/hotspot/share/code/nmethod.hpp +++ b/src/hotspot/share/code/nmethod.hpp @@ -119,7 +119,7 @@ // used by jvmti to track if an unload event has been posted for this nmethod. bool _unload_reported; - // Protected by Patching_lock + // Protected by CompiledMethod_lock volatile signed char _state; // {not_installed, in_use, not_entrant, zombie, unloaded} #ifdef ASSERT @@ -386,7 +386,7 @@ int comp_level() const { return _comp_level; } - void unlink_from_method(bool acquire_lock); + void unlink_from_method(); // Support for oops in scopes and relocs: // Note: index 0 is reserved for null. diff --git a/src/hotspot/share/gc/z/zBarrierSetNMethod.cpp b/src/hotspot/share/gc/z/zBarrierSetNMethod.cpp --- a/src/hotspot/share/gc/z/zBarrierSetNMethod.cpp +++ b/src/hotspot/share/gc/z/zBarrierSetNMethod.cpp @@ -1,5 +1,5 @@ /* - * Copyright (c) 2018, Oracle and/or its affiliates. All rights reserved. + * Copyright (c) 2018, 2019, Oracle and/or its affiliates. All rights reserved. * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. * * This code is free software; you can redistribute it and/or modify it @@ -45,7 +45,7 @@ // We don't need to take the lock when unlinking nmethods from // the Method, because it is only concurrently unlinked by // the entry barrier, which acquires the per nmethod lock. - nm->unlink_from_method(false /* acquire_lock */); + nm->unlink_from_method(); // We can end up calling nmethods that are unloading // since we clear compiled ICs lazily. Returning false diff --git a/src/hotspot/share/gc/z/zNMethod.cpp b/src/hotspot/share/gc/z/zNMethod.cpp --- a/src/hotspot/share/gc/z/zNMethod.cpp +++ b/src/hotspot/share/gc/z/zNMethod.cpp @@ -1,5 +1,5 @@ /* - * Copyright (c) 2017, 2018, Oracle and/or its affiliates. All rights reserved. + * Copyright (c) 2017, 2019, Oracle and/or its affiliates. All rights reserved. * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. * * This code is free software; you can redistribute it and/or modify it @@ -285,7 +285,7 @@ // We don't need to take the lock when unlinking nmethods from // the Method, because it is only concurrently unlinked by // the entry barrier, which acquires the per nmethod lock. 
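The CompiledMethod_lock critical sections above (in nmethod and in the AOT variant earlier) follow a check-then-transition pattern: the first thread in performs the one-time work, and racers observe the new state and bail out. A minimal standalone sketch of that pattern, with a std::mutex standing in for CompiledMethod_lock and invented names:

#include <mutex>

enum NMState { in_use, not_entrant };

std::mutex compiled_method_lock_model;  // stands in for CompiledMethod_lock
NMState state_model = in_use;

// Returns true only for the thread that actually performed the
// transition; a racing thread sees the state already changed and
// returns false, so the one-time work runs exactly once.
bool make_not_entrant_model() {
  std::lock_guard<std::mutex> guard(compiled_method_lock_model);
  if (state_model == not_entrant) {
    return false;  // another thread already performed this transition
  }
  // ...patch the verified entry point, unlink from the Method,
  //    log the state change...
  state_model = not_entrant;
  return true;
}

int main() { return make_not_entrant_model() ? 0 : 1; }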
- nm->unlink_from_method(false /* acquire_lock */); + nm->unlink_from_method(); return; } diff --git a/src/hotspot/share/jvmci/jvmciEnv.cpp b/src/hotspot/share/jvmci/jvmciEnv.cpp --- a/src/hotspot/share/jvmci/jvmciEnv.cpp +++ b/src/hotspot/share/jvmci/jvmciEnv.cpp @@ -31,6 +31,7 @@ #include "memory/universe.hpp" #include "oops/objArrayKlass.hpp" #include "oops/typeArrayOop.inline.hpp" +#include "runtime/deoptimization.hpp" #include "runtime/jniHandles.inline.hpp" #include "runtime/javaCalls.hpp" #include "jvmci/jniAccessMark.inline.hpp" @@ -1496,8 +1497,7 @@ // Invalidating the HotSpotNmethod means we want the nmethod // to be deoptimized. nm->mark_for_deoptimization(); - VM_Deoptimize op; - VMThread::execute(&op); + Deoptimization::deoptimize_all_marked(); } // A HotSpotNmethod instance can only reference a single nmethod diff --git a/src/hotspot/share/oops/method.cpp b/src/hotspot/share/oops/method.cpp --- a/src/hotspot/share/oops/method.cpp +++ b/src/hotspot/share/oops/method.cpp @@ -103,7 +103,7 @@ // Fix and bury in Method* set_interpreter_entry(NULL); // sets i2i entry and from_int set_adapter_entry(NULL); - clear_code(false /* don't need a lock */); // from_c/from_i get set to c2i/i2i + Method::clear_code(); // from_c/from_i get set to c2i/i2i if (access_flags.is_native()) { clear_native_function(); @@ -815,7 +815,7 @@ set_native_function( SharedRuntime::native_method_throw_unsatisfied_link_error_entry(), !native_bind_event_is_interesting); - clear_code(); + this->unlink_code(); } address Method::critical_native_function() { @@ -938,8 +938,7 @@ } // Revert to using the interpreter and clear out the nmethod -void Method::clear_code(bool acquire_lock /* = true */) { - MutexLocker pl(acquire_lock ? Patching_lock : NULL, Mutex::_no_safepoint_check_flag); +void Method::clear_code() { // this may be NULL if c2i adapters have not been made yet // Only should happen at allocate time. if (adapter() == NULL) { @@ -953,6 +952,25 @@ _code = NULL; } +void Method::unlink_code(CompiledMethod *compare) { + MutexLocker ml(CompiledMethod_lock->owned_by_self() ? NULL : CompiledMethod_lock, Mutex::_no_safepoint_check_flag); + // We need to check if both the _code and _from_compiled_code_entry_point + // refer to this nmethod because there is a race in setting these two fields + // in Method* as seen in bugid 4947125. + // If the vep() points to the zombie nmethod, the memory for the nmethod + // could be flushed and the compiler and vtable stubs could still call + // through it. + if (code() == compare || + from_compiled_entry() == compare->verified_entry_point()) { + clear_code(); + } +} + +void Method::unlink_code() { + MutexLocker ml(CompiledMethod_lock->owned_by_self() ? NULL : CompiledMethod_lock, Mutex::_no_safepoint_check_flag); + clear_code(); +} + #if INCLUDE_CDS // Called by class data sharing to remove any entry points (which are not shared) void Method::unlink_method() { @@ -1179,7 +1197,7 @@ // Install compiled code. Instantly it can execute. 
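Method::unlink_code above takes CompiledMethod_lock only when the calling thread does not already own it, via MutexLocker ml(CompiledMethod_lock->owned_by_self() ? NULL : CompiledMethod_lock, ...). A toy standalone version of that idiom follows; the owner tracking here is deliberately simplified and not production-safe:

#include <mutex>
#include <thread>

// Toy lock that can answer owned_by_self(), mimicking HotSpot's Mutex.
class OwnedMutex {
  std::mutex _m;
  std::thread::id _owner;
 public:
  void lock()   { _m.lock(); _owner = std::this_thread::get_id(); }
  void unlock() { _owner = std::thread::id(); _m.unlock(); }
  bool owned_by_self() const { return _owner == std::this_thread::get_id(); }
};

OwnedMutex compiled_method_lock;  // stands in for CompiledMethod_lock

// Acquire only if not already held, so the function can be called
// both with and without the lock, like Method::unlink_code.
void unlink_code_model() {
  bool take = !compiled_method_lock.owned_by_self();
  if (take) compiled_method_lock.lock();
  // ...clear_code(): reset _code and the from-compiled entry point...
  if (take) compiled_method_lock.unlock();
}

int main() {
  unlink_code_model();            // lock not held: taken and released
  compiled_method_lock.lock();
  unlink_code_model();            // lock already held: no re-acquisition
  compiled_method_lock.unlock();
  return 0;
}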
void Method::set_code(const methodHandle& mh, CompiledMethod *code) { - MutexLocker pl(Patching_lock, Mutex::_no_safepoint_check_flag); + MutexLocker pl(CompiledMethod_lock, Mutex::_no_safepoint_check_flag); assert( code, "use clear_code to remove code" ); assert( mh->check_code(), "" ); diff --git a/src/hotspot/share/oops/method.hpp b/src/hotspot/share/oops/method.hpp --- a/src/hotspot/share/oops/method.hpp +++ b/src/hotspot/share/oops/method.hpp @@ -463,7 +463,17 @@ address verified_code_entry(); bool check_code() const; // Not inline to avoid circular ref CompiledMethod* volatile code() const; - void clear_code(bool acquire_lock = true); // Clear out any compiled code + + // Locks CompiledMethod_lock if not held. + void unlink_code(CompiledMethod *compare); + // Locks CompiledMethod_lock if not held. + void unlink_code(); + +private: + // Either called with CompiledMethod_lock held or from constructor. + void clear_code(); + +public: static void set_code(const methodHandle& mh, CompiledMethod* code); void set_adapter_entry(AdapterHandlerEntry* adapter) { constMethod()->set_adapter_entry(adapter); diff --git a/src/hotspot/share/prims/jvmtiEventController.cpp b/src/hotspot/share/prims/jvmtiEventController.cpp --- a/src/hotspot/share/prims/jvmtiEventController.cpp +++ b/src/hotspot/share/prims/jvmtiEventController.cpp @@ -32,6 +32,7 @@ #include "prims/jvmtiExport.hpp" #include "prims/jvmtiImpl.hpp" #include "prims/jvmtiThreadState.inline.hpp" +#include "runtime/deoptimization.hpp" #include "runtime/frame.hpp" #include "runtime/thread.inline.hpp" #include "runtime/threadSMR.hpp" @@ -239,8 +240,7 @@ } } if (num_marked > 0) { - VM_Deoptimize op; - VMThread::execute(&op); + Deoptimization::deoptimize_all_marked(); } } } diff --git a/src/hotspot/share/prims/methodHandles.cpp b/src/hotspot/share/prims/methodHandles.cpp --- a/src/hotspot/share/prims/methodHandles.cpp +++ b/src/hotspot/share/prims/methodHandles.cpp @@ -42,6 +42,7 @@ #include "oops/typeArrayOop.inline.hpp" #include "prims/methodHandles.hpp" #include "runtime/compilationPolicy.hpp" +#include "runtime/deoptimization.hpp" #include "runtime/fieldDescriptor.inline.hpp" #include "runtime/handles.inline.hpp" #include "runtime/interfaceSupport.inline.hpp" @@ -1109,8 +1110,7 @@ } if (marked > 0) { // At least one nmethod has been marked for deoptimization. 
- VM_Deoptimize op; - VMThread::execute(&op); + Deoptimization::deoptimize_all_marked(); } } @@ -1506,8 +1506,7 @@ } if (marked > 0) { // At least one nmethod has been marked for deoptimization - VM_Deoptimize op; - VMThread::execute(&op); + Deoptimization::deoptimize_all_marked(); } } } diff --git a/src/hotspot/share/prims/whitebox.cpp b/src/hotspot/share/prims/whitebox.cpp --- a/src/hotspot/share/prims/whitebox.cpp +++ b/src/hotspot/share/prims/whitebox.cpp @@ -822,10 +822,8 @@ WB_END WB_ENTRY(void, WB_DeoptimizeAll(JNIEnv* env, jobject o)) - MutexLocker mu(Compile_lock); CodeCache::mark_all_nmethods_for_deoptimization(); - VM_Deoptimize op; - VMThread::execute(&op); + Deoptimization::deoptimize_all_marked(); WB_END WB_ENTRY(jint, WB_DeoptimizeMethod(JNIEnv* env, jobject o, jobject method, jboolean is_osr)) @@ -842,8 +840,7 @@ } result += CodeCache::mark_for_deoptimization(mh()); if (result > 0) { - VM_Deoptimize op; - VMThread::execute(&op); + Deoptimization::deoptimize_all_marked(); } return result; WB_END diff --git a/src/hotspot/share/runtime/biasedLocking.cpp b/src/hotspot/share/runtime/biasedLocking.cpp --- a/src/hotspot/share/runtime/biasedLocking.cpp +++ b/src/hotspot/share/runtime/biasedLocking.cpp @@ -628,14 +628,11 @@ event->commit(); } -BiasedLocking::Condition BiasedLocking::revoke_and_rebias(Handle obj, bool attempt_rebias, TRAPS) { - assert(!SafepointSynchronize::is_at_safepoint(), "must not be called while at safepoint"); - +BiasedLocking::Condition fast_revoke(Handle obj, markOop mark, bool attempt_rebias, JavaThread* thread = NULL) { // We can revoke the biases of anonymously-biased objects // efficiently enough that we should not cause these revocations to // update the heuristics because doing so may cause unwanted bulk // revocations (which are expensive) to occur. - markOop mark = obj->mark(); if (mark->is_biased_anonymously() && !attempt_rebias) { // We are probably trying to revoke the bias of this object due to // an identity hash code computation. Try to revoke the bias @@ -647,7 +644,7 @@ markOop unbiased_prototype = markOopDesc::prototype()->set_age(mark->age()); markOop res_mark = obj->cas_set_mark(unbiased_prototype, mark); if (res_mark == biased_value) { - return BIAS_REVOKED; + return BiasedLocking::BIAS_REVOKED; } } else if (mark->has_bias_pattern()) { Klass* k = obj->klass(); @@ -662,7 +659,7 @@ markOop biased_value = mark; markOop res_mark = obj->cas_set_mark(prototype_header, mark); assert(!obj->mark()->has_bias_pattern(), "even if we raced, should still be revoked"); - return BIAS_REVOKED; + return BiasedLocking::BIAS_REVOKED; } else if (prototype_header->bias_epoch() != mark->bias_epoch()) { // The epoch of this biasing has expired indicating that the // object is effectively unbiased. Depending on whether we need @@ -672,23 +669,61 @@ // can reach this point due to various points in the runtime // needing to revoke biases. 
if (attempt_rebias) { - assert(THREAD->is_Java_thread(), ""); markOop biased_value = mark; - markOop rebiased_prototype = markOopDesc::encode((JavaThread*) THREAD, mark->age(), prototype_header->bias_epoch()); + markOop rebiased_prototype = markOopDesc::encode(thread, mark->age(), prototype_header->bias_epoch()); markOop res_mark = obj->cas_set_mark(rebiased_prototype, mark); if (res_mark == biased_value) { - return BIAS_REVOKED_AND_REBIASED; + return BiasedLocking::BIAS_REVOKED_AND_REBIASED; } } else { markOop biased_value = mark; markOop unbiased_prototype = markOopDesc::prototype()->set_age(mark->age()); markOop res_mark = obj->cas_set_mark(unbiased_prototype, mark); if (res_mark == biased_value) { - return BIAS_REVOKED; + return BiasedLocking::BIAS_REVOKED; } } } } + return BiasedLocking::NOT_REVOKED; +} + +BiasedLocking::Condition BiasedLocking::revoke_own_locks_in_handshake(Handle obj, TRAPS) { + markOop mark = obj->mark(); + BiasedLocking::Condition bc = fast_revoke(obj, mark, false); + if (bc != NOT_REVOKED) { + return bc; + } + + if (!mark->has_bias_pattern()) { + return NOT_BIASED; + } + + Klass *k = obj->klass(); + markOop prototype_header = k->prototype_header(); + guarantee(mark->biased_locker() == THREAD && + prototype_header->bias_epoch() == mark->bias_epoch(), "Revoke failed, unhandled biased lock state"); + ResourceMark rm; + log_info(biasedlocking)("Revoking bias by walking my own stack:"); + EventBiasedLockSelfRevocation event; + BiasedLocking::Condition cond = revoke_bias(obj(), false, false, (JavaThread*) THREAD, NULL); + ((JavaThread*) THREAD)->set_cached_monitor_info(NULL); + assert(cond == BIAS_REVOKED, "why not?"); + if (event.should_commit()) { + post_self_revocation_event(&event, k); + } + return cond; +} + +BiasedLocking::Condition BiasedLocking::revoke_and_rebias(Handle obj, bool attempt_rebias, TRAPS) { + assert(!SafepointSynchronize::is_at_safepoint(), "must not be called while at safepoint"); + assert(!attempt_rebias || THREAD->is_Java_thread(), ""); + + markOop mark = obj->mark(); + BiasedLocking::Condition bc = fast_revoke(obj, mark, attempt_rebias, (JavaThread*) THREAD); + if (bc != NOT_REVOKED) { + return bc; + } HeuristicsResult heuristics = update_heuristics(obj(), attempt_rebias); if (heuristics == HR_NOT_BIASED) { diff --git a/src/hotspot/share/runtime/biasedLocking.hpp b/src/hotspot/share/runtime/biasedLocking.hpp --- a/src/hotspot/share/runtime/biasedLocking.hpp +++ b/src/hotspot/share/runtime/biasedLocking.hpp @@ -159,6 +159,7 @@ static int* slow_path_entry_count_addr(); enum Condition { + NOT_REVOKED = 0, NOT_BIASED = 1, BIAS_REVOKED = 2, BIAS_REVOKED_AND_REBIASED = 3 @@ -175,6 +176,7 @@ // This should be called by JavaThreads to revoke the bias of an object static Condition revoke_and_rebias(Handle obj, bool attempt_rebias, TRAPS); + static Condition revoke_own_locks_in_handshake(Handle obj, TRAPS); // These do not allow rebiasing; they are used by deoptimization to // ensure that monitors on the stack can be migrated diff --git a/src/hotspot/share/runtime/deoptimization.cpp b/src/hotspot/share/runtime/deoptimization.cpp --- a/src/hotspot/share/runtime/deoptimization.cpp +++ b/src/hotspot/share/runtime/deoptimization.cpp @@ -776,10 +776,35 @@ return bt; JRT_END +class DeoptimizeMarkedTC : public ThreadClosure { + bool _in_handshake; + public: + DeoptimizeMarkedTC(bool in_handshake) : _in_handshake(in_handshake) {} + virtual void do_thread(Thread* thread) { + assert(thread->is_Java_thread(), "must be"); + JavaThread* jt = (JavaThread*)thread; + 
jt->deoptimize_marked_methods(_in_handshake); + } +}; -int Deoptimization::deoptimize_dependents() { - Threads::deoptimized_wrt_marked_nmethods(); - return 0; +void Deoptimization::deoptimize_all_marked() { + ResourceMark rm; + DeoptimizationMarker dm; + + if (SafepointSynchronize::is_at_safepoint()) { + DeoptimizeMarkedTC deopt(false); + // Make the dependent methods not entrant + CodeCache::make_marked_nmethods_not_entrant(); + Threads::java_threads_do(&deopt); + } else { + // Make the dependent methods not entrant + { + MutexLocker mu(CodeCache_lock, Mutex::_no_safepoint_check_flag); + CodeCache::make_marked_nmethods_not_entrant(); + } + DeoptimizeMarkedTC deopt(true); + Handshake::execute(&deopt); + } } Deoptimization::DeoptAction Deoptimization::_unloaded_action @@ -1243,14 +1268,7 @@ } } - -void Deoptimization::revoke_biases_of_monitors(JavaThread* thread, frame fr, RegisterMap* map) { - if (!UseBiasedLocking) { - return; - } - - GrowableArray<Handle>* objects_to_revoke = new GrowableArray<Handle>(); - +static void get_monitors_from_stack(GrowableArray<Handle>* objects_to_revoke, JavaThread* thread, frame fr, RegisterMap* map) { // Unfortunately we don't have a RegisterMap available in most of // the places we want to call this routine so we need to walk the // stack again to update the register map. @@ -1274,6 +1292,14 @@ cvf = compiledVFrame::cast(cvf->sender()); } collect_monitors(cvf, objects_to_revoke); +} + +void Deoptimization::revoke_safepoint(JavaThread* thread, frame fr, RegisterMap* map) { + if (!UseBiasedLocking) { + return; + } + GrowableArray<Handle>* objects_to_revoke = new GrowableArray<Handle>(); + get_monitors_from_stack(objects_to_revoke, thread, fr, map); if (SafepointSynchronize::is_at_safepoint()) { BiasedLocking::revoke_at_safepoint(objects_to_revoke); @@ -1282,6 +1308,23 @@ } } +void Deoptimization::revoke_handshake(JavaThread* thread, frame fr, RegisterMap* map) { + if (!UseBiasedLocking) { + return; + } + GrowableArray<Handle>* objects_to_revoke = new GrowableArray<Handle>(); + get_monitors_from_stack(objects_to_revoke, thread, fr, map); + + int len = objects_to_revoke->length(); + for (int i = 0; i < len; i++) { + oop obj = (objects_to_revoke->at(i))(); + markOop mark = obj->mark(); + assert(!mark->has_bias_pattern() || mark->biased_locker() == thread, "Can't revoke"); + BiasedLocking::revoke_own_locks_in_handshake(objects_to_revoke->at(i), thread); + assert(!obj->mark()->has_bias_pattern(), "biases should be revoked by now"); + } +} + void Deoptimization::deoptimize_single_frame(JavaThread* thread, frame fr, Deoptimization::DeoptReason reason) { assert(fr.can_be_deoptimized(), "checking frame type"); @@ -1310,11 +1353,16 @@ fr.deoptimize(thread); } -void Deoptimization::deoptimize(JavaThread* thread, frame fr, RegisterMap *map) { - deoptimize(thread, fr, map, Reason_constraint); +void Deoptimization::deoptimize(JavaThread* thread, frame fr, RegisterMap *map, bool in_handshake) { + deopt_thread(in_handshake, thread, fr, map, Reason_constraint); } void Deoptimization::deoptimize(JavaThread* thread, frame fr, RegisterMap *map, DeoptReason reason) { + deopt_thread(false, thread, fr, map, reason); +} + +void Deoptimization::deopt_thread(bool in_handshake, JavaThread* thread, + frame fr, RegisterMap *map, DeoptReason reason) { // Deoptimize only if the frame comes from compile code. // Do not deoptimize the frame which is already patched // during the execution of the loops below.
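Deoptimization::deoptimize_all_marked above dispatches the same per-thread work in two modes: at a safepoint the requesting thread processes every JavaThread directly, otherwise each thread runs the closure on itself at a handshake poll. A rough standalone C++ sketch of that shape, with invented names and handshake semantics reduced to comments:

#include <vector>

struct JavaThreadModel { /* stack of frames, etc. */ };

struct DeoptimizeMarkedModel {
  bool in_handshake;
  void do_thread(JavaThreadModel& t) {
    // Walk t's stack and deoptimize every frame owned by a marked
    // nmethod; with in_handshake == true, biased-lock revocation may
    // only touch biases owned by t itself.
  }
};

void deoptimize_all_marked_model(bool at_safepoint,
                                 std::vector<JavaThreadModel>& threads) {
  if (at_safepoint) {
    // Everyone is stopped: flip the entry points, then process all
    // JavaThreads directly (Threads::java_threads_do in the patch).
    DeoptimizeMarkedModel tc{false};
    for (JavaThreadModel& t : threads) tc.do_thread(t);
  } else {
    // No global stop: flip the entry points under CodeCache_lock,
    // then each thread runs the closure on itself at its next poll
    // (Handshake::execute in the patch); here the loop is only a
    // stand-in for that per-thread execution.
    DeoptimizeMarkedModel tc{true};
    for (JavaThreadModel& t : threads) tc.do_thread(t);
  }
}

int main() {
  std::vector<JavaThreadModel> threads(3);
  deoptimize_all_marked_model(false, threads);
  return 0;
}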
@@ -1324,7 +1372,11 @@ ResourceMark rm; DeoptimizationMarker dm; if (UseBiasedLocking) { - revoke_biases_of_monitors(thread, fr, map); + if (in_handshake) { + revoke_handshake(thread, fr, map); + } else { + revoke_safepoint(thread, fr, map); + } } deoptimize_single_frame(thread, fr, reason); diff --git a/src/hotspot/share/runtime/deoptimization.hpp b/src/hotspot/share/runtime/deoptimization.hpp --- a/src/hotspot/share/runtime/deoptimization.hpp +++ b/src/hotspot/share/runtime/deoptimization.hpp @@ -135,12 +135,19 @@ Unpack_LIMIT = 4 }; + static void deoptimize_all_marked(); + + private: // Checks all compiled methods. Invalid methods are deleted and // corresponding activations are deoptimized. static int deoptimize_dependents(); + static void revoke_handshake(JavaThread* thread, frame fr, RegisterMap* map); + static void revoke_safepoint(JavaThread* thread, frame fr, RegisterMap* map); + static void deopt_thread(bool in_handshake, JavaThread* thread, frame fr, RegisterMap *map, DeoptReason reason); + public: // Deoptimizes a frame lazily. nmethod gets patched deopt happens on return to the frame - static void deoptimize(JavaThread* thread, frame fr, RegisterMap *reg_map); + static void deoptimize(JavaThread* thread, frame fr, RegisterMap *map, bool in_handshake = false); static void deoptimize(JavaThread* thread, frame fr, RegisterMap *reg_map, DeoptReason reason); #if INCLUDE_JVMCI @@ -153,7 +160,9 @@ // Helper function to revoke biases of all monitors in frame if UseBiasedLocking // is enabled - static void revoke_biases_of_monitors(JavaThread* thread, frame fr, RegisterMap* map); + static void revoke_biases_of_monitors(JavaThread* thread, frame fr, RegisterMap* map) { + revoke_safepoint(thread, fr, map); + } #if COMPILER2_OR_JVMCI JVMCI_ONLY(public:) diff --git a/src/hotspot/share/runtime/mutex.hpp b/src/hotspot/share/runtime/mutex.hpp --- a/src/hotspot/share/runtime/mutex.hpp +++ b/src/hotspot/share/runtime/mutex.hpp @@ -62,7 +62,7 @@ event, access = event + 1, tty = access + 2, - special = tty + 1, + special = tty + 2, suspend_resume = special + 1, vmweak = suspend_resume + 2, leaf = vmweak + 2, diff --git a/src/hotspot/share/runtime/mutexLocker.cpp b/src/hotspot/share/runtime/mutexLocker.cpp --- a/src/hotspot/share/runtime/mutexLocker.cpp +++ b/src/hotspot/share/runtime/mutexLocker.cpp @@ -39,6 +39,7 @@ // Consider using GCC's __read_mostly. Mutex* Patching_lock = NULL; +Mutex* CompiledMethod_lock = NULL; Monitor* SystemDictionary_lock = NULL; Mutex* ProtectionDomainSet_lock = NULL; Mutex* SharedDictionary_lock = NULL; @@ -261,6 +262,8 @@ def(ClassLoaderDataGraph_lock , PaddedMutex , nonleaf, true, Monitor::_safepoint_check_always); def(Patching_lock , PaddedMutex , special, true, Monitor::_safepoint_check_never); // used for safepointing and code patching. + def(OsrList_lock , PaddedMutex , special-1, true, Monitor::_safepoint_check_never); + def(CompiledMethod_lock , PaddedMutex , special-1, true, Monitor::_safepoint_check_never); def(Service_lock , PaddedMonitor, special, true, Monitor::_safepoint_check_never); // used for service thread operations def(JmethodIdCreation_lock , PaddedMutex , leaf, true, Monitor::_safepoint_check_always); // used for creating jmethodIDs. 
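The rank arithmetic above is what makes room for the new locks: special moves from tty + 1 to tty + 2 so that special - 1 becomes a valid slot for CompiledMethod_lock and OsrList_lock. A standalone sketch of the simplified rank rule this relies on; the real checking in HotSpot's mutex.cpp has additional exemptions, and the absolute values below are invented, only the relative order matters:

#include <cassert>
#include <vector>

const int tty_rank             = 3;
const int special_rank         = tty_rank + 2;
const int compiled_method_rank = special_rank - 1;

// Simplified rule: a thread may only acquire a lock whose rank is
// strictly lower than every lock it already holds, which rules out
// cyclic (deadlocking) acquisition orders.
void acquire(std::vector<int>& held, int rank) {
  for (int h : held) {
    assert(rank < h && "lock rank order violated");
  }
  held.push_back(rank);
}

int main() {
  std::vector<int> held;
  acquire(held, special_rank);           // e.g. a rank-special lock
  acquire(held, compiled_method_rank);   // special - 1 nests below it
  // acquire(held, special_rank);        // would assert: not a lower rank
  return 0;
}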
@@ -276,7 +279,6 @@ def(SymbolArena_lock , PaddedMutex , leaf+2, true, Monitor::_safepoint_check_never); def(ProfilePrint_lock , PaddedMutex , leaf, false, Monitor::_safepoint_check_always); // serial profile printing def(ExceptionCache_lock , PaddedMutex , leaf, false, Monitor::_safepoint_check_always); // serial profile printing - def(OsrList_lock , PaddedMutex , leaf, true, Monitor::_safepoint_check_never); def(Debug1_lock , PaddedMutex , leaf, true, Monitor::_safepoint_check_never); #ifndef PRODUCT def(FullGCALot_lock , PaddedMutex , leaf, false, Monitor::_safepoint_check_always); // a lock to make FullGCALot MT safe diff --git a/src/hotspot/share/runtime/mutexLocker.hpp b/src/hotspot/share/runtime/mutexLocker.hpp --- a/src/hotspot/share/runtime/mutexLocker.hpp +++ b/src/hotspot/share/runtime/mutexLocker.hpp @@ -32,6 +32,7 @@ // Mutexes used in the VM. extern Mutex* Patching_lock; // a lock used to guard code patching of compiled code +extern Mutex* CompiledMethod_lock; // a lock used to guard a compiled method extern Monitor* SystemDictionary_lock; // a lock on the system dictionary extern Mutex* ProtectionDomainSet_lock; // a lock on the pd_set list in the system dictionary extern Mutex* SharedDictionary_lock; // a lock on the CDS shared dictionary diff --git a/src/hotspot/share/runtime/thread.cpp b/src/hotspot/share/runtime/thread.cpp --- a/src/hotspot/share/runtime/thread.cpp +++ b/src/hotspot/share/runtime/thread.cpp @@ -2833,18 +2833,17 @@ #endif // PRODUCT -void JavaThread::deoptimized_wrt_marked_nmethods() { +void JavaThread::deoptimize_marked_methods(bool in_handshake) { if (!has_last_Java_frame()) return; // BiasedLocking needs an updated RegisterMap for the revoke monitors pass StackFrameStream fst(this, UseBiasedLocking); for (; !fst.is_done(); fst.next()) { if (fst.current()->should_be_deoptimized()) { - Deoptimization::deoptimize(this, *fst.current(), fst.register_map()); + Deoptimization::deoptimize(this, *fst.current(), fst.register_map(), in_handshake); } } } - // If the caller is a NamedThread, then remember, in the current scope, // the given JavaThread in its _processed_thread field. class RememberProcessedThread: public StackObj { @@ -4598,13 +4597,6 @@ threads_do(&handles_closure); } -void Threads::deoptimized_wrt_marked_nmethods() { - ALL_JAVA_THREADS(p) { - p->deoptimized_wrt_marked_nmethods(); - } -} - - // Get count Java threads that are waiting to enter the specified monitor. 
GrowableArray<JavaThread*>* Threads::get_pending_threads(ThreadsList * t_list, int count, diff --git a/src/hotspot/share/runtime/thread.hpp b/src/hotspot/share/runtime/thread.hpp --- a/src/hotspot/share/runtime/thread.hpp +++ b/src/hotspot/share/runtime/thread.hpp @@ -1923,7 +1923,7 @@ void deoptimize(); void make_zombies(); - void deoptimized_wrt_marked_nmethods(); + void deoptimize_marked_methods(bool in_handshake); public: // Returns the running thread as a JavaThread diff --git a/src/hotspot/share/runtime/vmOperations.cpp b/src/hotspot/share/runtime/vmOperations.cpp --- a/src/hotspot/share/runtime/vmOperations.cpp +++ b/src/hotspot/share/runtime/vmOperations.cpp @@ -118,18 +118,6 @@ } } -void VM_Deoptimize::doit() { - // We do not want any GCs to happen while we are in the middle of this VM operation - ResourceMark rm; - DeoptimizationMarker dm; - - // Deoptimize all activations depending on marked nmethods - Deoptimization::deoptimize_dependents(); - - // Make the dependent methods not entrant - CodeCache::make_marked_nmethods_not_entrant(); -} - void VM_MarkActiveNMethods::doit() { NMethodSweeper::mark_active_nmethods(); } diff --git a/src/hotspot/share/runtime/vmOperations.hpp b/src/hotspot/share/runtime/vmOperations.hpp --- a/src/hotspot/share/runtime/vmOperations.hpp +++ b/src/hotspot/share/runtime/vmOperations.hpp @@ -49,7 +49,6 @@ template(ClearICs) \ template(ForceSafepoint) \ template(ForceAsyncSafepoint) \ - template(Deoptimize) \ template(DeoptimizeFrame) \ template(DeoptimizeAll) \ template(ZombieAll) \ @@ -318,14 +317,6 @@ VM_GTestExecuteAtSafepoint() {} }; -class VM_Deoptimize: public VM_Operation { - public: - VM_Deoptimize() {} - VMOp_Type type() const { return VMOp_Deoptimize; } - void doit(); - bool allow_nested_vm_operations() const { return true; } -}; - class VM_MarkActiveNMethods: public VM_Operation { public: VM_MarkActiveNMethods() {} diff --git a/src/hotspot/share/services/dtraceAttacher.cpp b/src/hotspot/share/services/dtraceAttacher.cpp --- a/src/hotspot/share/services/dtraceAttacher.cpp +++ b/src/hotspot/share/services/dtraceAttacher.cpp @@ -1,5 +1,5 @@ /* - * Copyright (c) 2006, 2018, Oracle and/or its affiliates. All rights reserved. + * Copyright (c) 2006, 2019, Oracle and/or its affiliates. All rights reserved. * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
* * This code is free software; you can redistribute it and/or modify it @@ -33,23 +33,6 @@ #ifdef SOLARIS -class VM_DeoptimizeTheWorld : public VM_Operation { - public: - VMOp_Type type() const { - return VMOp_DeoptimizeTheWorld; - } - void doit() { - CodeCache::mark_all_nmethods_for_deoptimization(); - ResourceMark rm; - DeoptimizationMarker dm; - // Deoptimize all activations depending on marked methods - Deoptimization::deoptimize_dependents(); - - // Mark the dependent methods non entrant - CodeCache::make_marked_nmethods_not_entrant(); - } -}; - static void set_bool_flag(const char* flag, bool value) { JVMFlag::boolAtPut((char*)flag, strlen(flag), &value, JVMFlag::ATTACH_ON_DEMAND); @@ -74,8 +57,8 @@ if (changed) { // one or more flags changed, need to deoptimize - VM_DeoptimizeTheWorld op; - VMThread::execute(&op); + CodeCache::mark_all_nmethods_for_deoptimization(); + Deoptimization::deoptimize_all_marked(); } } @@ -97,8 +80,8 @@ } if (changed) { // one or more flags changed, need to deoptimize - VM_DeoptimizeTheWorld op; - VMThread::execute(&op); + CodeCache::mark_all_nmethods_for_deoptimization(); + Deoptimization::deoptimize_all_marked(); } } diff --git a/test/hotspot/jtreg/compiler/codecache/stress/UnexpectedDeoptimizationAllTest.java b/test/hotspot/jtreg/compiler/codecache/stress/UnexpectedDeoptimizationAllTest.java new file mode 100644 --- /dev/null +++ b/test/hotspot/jtreg/compiler/codecache/stress/UnexpectedDeoptimizationAllTest.java @@ -0,0 +1,64 @@ +/* + * Copyright (c) 2014, 2016, Oracle and/or its affiliates. All rights reserved. + * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. + * + * This code is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 only, as + * published by the Free Software Foundation. + * + * This code is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License + * version 2 for more details (a copy is included in the LICENSE file that + * accompanied this code). + * + * You should have received a copy of the GNU General Public License version + * 2 along with this work; if not, write to the Free Software Foundation, + * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. + * + * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA + * or visit www.oracle.com if you need additional information or have any + * questions. + */ + +/* + * @test UnexpectedDeoptimizationAllTest + * @key stress + * @summary stressing code cache by forcing unexpected deoptimizations of all methods + * @library /test/lib / + * @modules java.base/jdk.internal.misc + * java.management + * + * @build sun.hotspot.WhiteBox compiler.codecache.stress.Helper compiler.codecache.stress.TestCaseImpl + * @run driver ClassFileInstaller sun.hotspot.WhiteBox + * sun.hotspot.WhiteBox$WhiteBoxPermission + * @run main/othervm -Xbootclasspath/a:. -XX:+UnlockDiagnosticVMOptions + * -XX:+WhiteBoxAPI -XX:-DeoptimizeRandom + * -XX:CompileCommand=dontinline,compiler.codecache.stress.Helper$TestCase::method + * -XX:-SegmentedCodeCache + * compiler.codecache.stress.UnexpectedDeoptimizationAllTest + * @run main/othervm -Xbootclasspath/a:. 
-XX:+UnlockDiagnosticVMOptions + * -XX:+WhiteBoxAPI -XX:-DeoptimizeRandom + * -XX:CompileCommand=dontinline,compiler.codecache.stress.Helper$TestCase::method + * -XX:+SegmentedCodeCache + * compiler.codecache.stress.UnexpectedDeoptimizationAllTest + */ + +package compiler.codecache.stress; + +public class UnexpectedDeoptimizationAllTest implements Runnable { + + public static void main(String[] args) { + new CodeCacheStressRunner(new UnexpectedDeoptimizationAllTest()).runTest(); + } + + @Override + public void run() { + Helper.WHITE_BOX.deoptimizeAll(); + try { + Thread.currentThread().sleep(10); + } catch (Exception e) { + } + } + +} # HG changeset patch # User rehn # Date 1558342489 -7200 # Mon May 20 10:54:49 2019 +0200 # Node ID c1a67d81029758ef09613748edabc926524cae13 # Parent 254bfd2cccf9b62b24dbb789d114bf2f8e9f7e79 imported patch 8221734-v4 diff --git a/src/hotspot/share/runtime/biasedLocking.cpp b/src/hotspot/share/runtime/biasedLocking.cpp --- a/src/hotspot/share/runtime/biasedLocking.cpp +++ b/src/hotspot/share/runtime/biasedLocking.cpp @@ -628,72 +628,8 @@ event->commit(); } -BiasedLocking::Condition fast_revoke(Handle obj, markOop mark, bool attempt_rebias, JavaThread* thread = NULL) { - // We can revoke the biases of anonymously-biased objects - // efficiently enough that we should not cause these revocations to - // update the heuristics because doing so may cause unwanted bulk - // revocations (which are expensive) to occur. - if (mark->is_biased_anonymously() && !attempt_rebias) { - // We are probably trying to revoke the bias of this object due to - // an identity hash code computation. Try to revoke the bias - // without a safepoint. This is possible if we can successfully - // compare-and-exchange an unbiased header into the mark word of - // the object, meaning that no other thread has raced to acquire - // the bias of the object. - markOop biased_value = mark; - markOop unbiased_prototype = markOopDesc::prototype()->set_age(mark->age()); - markOop res_mark = obj->cas_set_mark(unbiased_prototype, mark); - if (res_mark == biased_value) { - return BiasedLocking::BIAS_REVOKED; - } - } else if (mark->has_bias_pattern()) { - Klass* k = obj->klass(); - markOop prototype_header = k->prototype_header(); - if (!prototype_header->has_bias_pattern()) { - // This object has a stale bias from before the bulk revocation - // for this data type occurred. It's pointless to update the - // heuristics at this point so simply update the header with a - // CAS. If we fail this race, the object's bias has been revoked - // by another thread so we simply return and let the caller deal - // with it. - markOop biased_value = mark; - markOop res_mark = obj->cas_set_mark(prototype_header, mark); - assert(!obj->mark()->has_bias_pattern(), "even if we raced, should still be revoked"); - return BiasedLocking::BIAS_REVOKED; - } else if (prototype_header->bias_epoch() != mark->bias_epoch()) { - // The epoch of this biasing has expired indicating that the - // object is effectively unbiased. Depending on whether we need - // to rebias or revoke the bias of this object we can do it - // efficiently enough with a CAS that we shouldn't update the - // heuristics. This is normally done in the assembly code but we - // can reach this point due to various points in the runtime - // needing to revoke biases. 
- if (attempt_rebias) { - markOop biased_value = mark; - markOop rebiased_prototype = markOopDesc::encode(thread, mark->age(), prototype_header->bias_epoch()); - markOop res_mark = obj->cas_set_mark(rebiased_prototype, mark); - if (res_mark == biased_value) { - return BiasedLocking::BIAS_REVOKED_AND_REBIASED; - } - } else { - markOop biased_value = mark; - markOop unbiased_prototype = markOopDesc::prototype()->set_age(mark->age()); - markOop res_mark = obj->cas_set_mark(unbiased_prototype, mark); - if (res_mark == biased_value) { - return BiasedLocking::BIAS_REVOKED; - } - } - } - } - return BiasedLocking::NOT_REVOKED; -} - BiasedLocking::Condition BiasedLocking::revoke_own_locks_in_handshake(Handle obj, TRAPS) { markOop mark = obj->mark(); - BiasedLocking::Condition bc = fast_revoke(obj, mark, false); - if (bc != NOT_REVOKED) { - return bc; - } if (!mark->has_bias_pattern()) { return NOT_BIASED; @@ -701,7 +637,7 @@ Klass *k = obj->klass(); markOop prototype_header = k->prototype_header(); - guarantee(mark->biased_locker() == THREAD && + assert(mark->biased_locker() == THREAD && prototype_header->bias_epoch() == mark->bias_epoch(), "Revoke failed, unhandled biased lock state"); ResourceMark rm; log_info(biasedlocking)("Revoking bias by walking my own stack:"); @@ -717,12 +653,64 @@ BiasedLocking::Condition BiasedLocking::revoke_and_rebias(Handle obj, bool attempt_rebias, TRAPS) { assert(!SafepointSynchronize::is_at_safepoint(), "must not be called while at safepoint"); - assert(!attempt_rebias || THREAD->is_Java_thread(), ""); + // We can revoke the biases of anonymously-biased objects + // efficiently enough that we should not cause these revocations to + // update the heuristics because doing so may cause unwanted bulk + // revocations (which are expensive) to occur. markOop mark = obj->mark(); - BiasedLocking::Condition bc = fast_revoke(obj, mark, attempt_rebias, (JavaThread*) THREAD); - if (bc != NOT_REVOKED) { - return bc; + if (mark->is_biased_anonymously() && !attempt_rebias) { + // We are probably trying to revoke the bias of this object due to + // an identity hash code computation. Try to revoke the bias + // without a safepoint. This is possible if we can successfully + // compare-and-exchange an unbiased header into the mark word of + // the object, meaning that no other thread has raced to acquire + // the bias of the object. + markOop biased_value = mark; + markOop unbiased_prototype = markOopDesc::prototype()->set_age(mark->age()); + markOop res_mark = obj->cas_set_mark(unbiased_prototype, mark); + if (res_mark == biased_value) { + return BIAS_REVOKED; + } + } else if (mark->has_bias_pattern()) { + Klass* k = obj->klass(); + markOop prototype_header = k->prototype_header(); + if (!prototype_header->has_bias_pattern()) { + // This object has a stale bias from before the bulk revocation + // for this data type occurred. It's pointless to update the + // heuristics at this point so simply update the header with a + // CAS. If we fail this race, the object's bias has been revoked + // by another thread so we simply return and let the caller deal + // with it. + markOop biased_value = mark; + markOop res_mark = obj->cas_set_mark(prototype_header, mark); + assert(!obj->mark()->has_bias_pattern(), "even if we raced, should still be revoked"); + return BIAS_REVOKED; + } else if (prototype_header->bias_epoch() != mark->bias_epoch()) { + // The epoch of this biasing has expired indicating that the + // object is effectively unbiased. 
Depending on whether we need + // to rebias or revoke the bias of this object we can do it + // efficiently enough with a CAS that we shouldn't update the + // heuristics. This is normally done in the assembly code but we + // can reach this point due to various points in the runtime + // needing to revoke biases. + if (attempt_rebias) { + assert(THREAD->is_Java_thread(), ""); + markOop biased_value = mark; + markOop rebiased_prototype = markOopDesc::encode((JavaThread*) THREAD, mark->age(), prototype_header->bias_epoch()); + markOop res_mark = obj->cas_set_mark(rebiased_prototype, mark); + if (res_mark == biased_value) { + return BIAS_REVOKED_AND_REBIASED; + } + } else { + markOop biased_value = mark; + markOop unbiased_prototype = markOopDesc::prototype()->set_age(mark->age()); + markOop res_mark = obj->cas_set_mark(unbiased_prototype, mark); + if (res_mark == biased_value) { + return BIAS_REVOKED; + } + } + } } HeuristicsResult heuristics = update_heuristics(obj(), attempt_rebias); diff --git a/src/hotspot/share/runtime/biasedLocking.hpp b/src/hotspot/share/runtime/biasedLocking.hpp --- a/src/hotspot/share/runtime/biasedLocking.hpp +++ b/src/hotspot/share/runtime/biasedLocking.hpp @@ -159,7 +159,6 @@ static int* slow_path_entry_count_addr(); enum Condition { - NOT_REVOKED = 0, NOT_BIASED = 1, BIAS_REVOKED = 2, BIAS_REVOKED_AND_REBIASED = 3 diff --git a/src/hotspot/share/runtime/deoptimization.cpp b/src/hotspot/share/runtime/deoptimization.cpp --- a/src/hotspot/share/runtime/deoptimization.cpp +++ b/src/hotspot/share/runtime/deoptimization.cpp @@ -1319,7 +1319,6 @@ for (int i = 0; i < len; i++) { oop obj = (objects_to_revoke->at(i))(); markOop mark = obj->mark(); - assert(!mark->has_bias_pattern() || mark->biased_locker() == thread, "Can't revoke"); BiasedLocking::revoke_own_locks_in_handshake(objects_to_revoke->at(i), thread); assert(!obj->mark()->has_bias_pattern(), "biases should be revoked by now"); } diff --git a/test/hotspot/jtreg/compiler/codecache/stress/UnexpectedDeoptimizationAllTest.java b/test/hotspot/jtreg/compiler/codecache/stress/UnexpectedDeoptimizationAllTest.java --- a/test/hotspot/jtreg/compiler/codecache/stress/UnexpectedDeoptimizationAllTest.java +++ b/test/hotspot/jtreg/compiler/codecache/stress/UnexpectedDeoptimizationAllTest.java @@ -56,7 +56,7 @@ public void run() { Helper.WHITE_BOX.deoptimizeAll(); try { - Thread.currentThread().sleep(10); + Thread.sleep(10); } catch (Exception e) { } }
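The CAS fast paths that the v4 patch inlines back into revoke_and_rebias all share one shape: read the mark word once, build the desired header, attempt a single compare-and-exchange, and report whether this thread won the race. A standalone C++ sketch of that shape using std::atomic; the mark-word layout here is a toy invented for the illustration, the real markOop also carries age, epoch, and hash bits:

#include <atomic>
#include <cassert>
#include <cstdint>

// Toy mark word: bit 0 set = biased pattern, upper bits = owner id.
constexpr uint64_t biased_bit = 1;
constexpr uint64_t unbiased   = 0;

uint64_t make_biased(uint64_t owner) { return (owner << 1) | biased_bit; }

// Same shape as obj->cas_set_mark(unbiased_prototype, mark) compared
// against the old value: losing the race means another thread already
// revoked (or changed) the bias, and the caller re-examines the mark.
bool try_revoke(std::atomic<uint64_t>& mark_word) {
  uint64_t biased_value = mark_word.load();
  if (!(biased_value & biased_bit)) {
    return false;  // nothing to revoke
  }
  return mark_word.compare_exchange_strong(biased_value, unbiased);
}

int main() {
  std::atomic<uint64_t> mark(make_biased(42));
  assert(try_revoke(mark));    // we won the race: BIAS_REVOKED
  assert(!try_revoke(mark));   // already unbiased; caller handles it
  return 0;
}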