# HG changeset patch # User rehn # Date 1557827590 -7200 # Tue May 14 11:53:10 2019 +0200 # Node ID 2534b19714ebff0dd2263bc26475dedb1906df4a # Parent 6b06de11e78e004cf768748cce0a398cdeb68121 [mq]: 8221734-v2 diff --git a/src/hotspot/share/aot/aotCodeHeap.cpp b/src/hotspot/share/aot/aotCodeHeap.cpp --- a/src/hotspot/share/aot/aotCodeHeap.cpp +++ b/src/hotspot/share/aot/aotCodeHeap.cpp @@ -38,6 +38,7 @@ #include "memory/universe.hpp" #include "oops/compressedOops.hpp" #include "oops/method.inline.hpp" +#include "runtime/deoptimization.hpp" #include "runtime/handles.inline.hpp" #include "runtime/os.hpp" #include "runtime/safepointVerifiers.hpp" @@ -733,8 +734,7 @@ } } if (marked > 0) { - VM_Deoptimize op; - VMThread::execute(&op); + Deoptimization::deoptimize_all_marked(); } } diff --git a/src/hotspot/share/aot/aotCompiledMethod.cpp b/src/hotspot/share/aot/aotCompiledMethod.cpp --- a/src/hotspot/share/aot/aotCompiledMethod.cpp +++ b/src/hotspot/share/aot/aotCompiledMethod.cpp @@ -165,7 +165,7 @@ { // Enter critical section. Does not block for safepoint. - MutexLocker pl(Patching_lock, Mutex::_no_safepoint_check_flag); + MutexLocker pl(CompiledMethod_lock, Mutex::_no_safepoint_check_flag); if (*_state_adr == new_state) { // another thread already performed this transition so nothing @@ -188,12 +188,8 @@ #endif // Remove AOTCompiledMethod from method. - if (method() != NULL && (method()->code() == this || - method()->from_compiled_entry() == verified_entry_point())) { - HandleMark hm; - method()->clear_code(false /* already owns Patching_lock */); - } - } // leave critical region under Patching_lock + Method::unlink_code(method(), this); + } // leave critical region under CompiledMethod_lock if (TraceCreateZombies) { @@ -216,7 +212,7 @@ { // Enter critical section. Does not block for safepoint. - MutexLocker pl(Patching_lock, Mutex::_no_safepoint_check_flag); + MutexLocker pl(CompiledMethod_lock, Mutex::_no_safepoint_check_flag); if (*_state_adr == in_use) { // another thread already performed this transition so nothing @@ -230,7 +226,7 @@ // Log the transition once log_state_change(); - } // leave critical region under Patching_lock + } // leave critical region under CompiledMethod_lock if (TraceCreateZombies) { diff --git a/src/hotspot/share/code/codeCache.cpp b/src/hotspot/share/code/codeCache.cpp --- a/src/hotspot/share/code/codeCache.cpp +++ b/src/hotspot/share/code/codeCache.cpp @@ -1138,18 +1138,7 @@ // At least one nmethod has been marked for deoptimization - // All this already happens inside a VM_Operation, so we'll do all the work here. - // Stuff copied from VM_Deoptimize and modified slightly. - - // We do not want any GCs to happen while we are in the middle of this VM operation - ResourceMark rm; - DeoptimizationMarker dm; - - // Deoptimize all activations depending on marked nmethods - Deoptimization::deoptimize_dependents(); - - // Make the dependent methods not entrant - make_marked_nmethods_not_entrant(); + Deoptimization::deoptimize_all_marked(); } #endif // INCLUDE_JVMTI @@ -1208,8 +1197,7 @@ // Compute the dependent nmethods if (mark_for_deoptimization(changes) > 0) { // At least one nmethod has been marked for deoptimization - VM_Deoptimize op; - VMThread::execute(&op); + Deoptimization::deoptimize_all_marked(); } } @@ -1224,20 +1212,7 @@ // Compute the dependent nmethods if (mark_for_deoptimization(m_h()) > 0) { - // At least one nmethod has been marked for deoptimization - - // All this already happens inside a VM_Operation, so we'll do all the work here. 
- // Stuff copied from VM_Deoptimize and modified slightly. - - // We do not want any GCs to happen while we are in the middle of this VM operation - ResourceMark rm; - DeoptimizationMarker dm; - - // Deoptimize all activations depending on marked nmethods - Deoptimization::deoptimize_dependents(); - - // Make the dependent methods not entrant - make_marked_nmethods_not_entrant(); + Deoptimization::deoptimize_all_marked(); } } diff --git a/src/hotspot/share/code/nmethod.cpp b/src/hotspot/share/code/nmethod.cpp --- a/src/hotspot/share/code/nmethod.cpp +++ b/src/hotspot/share/code/nmethod.cpp @@ -49,6 +49,7 @@ #include "oops/oop.inline.hpp" #include "prims/jvmtiImpl.hpp" #include "runtime/atomic.hpp" +#include "runtime/deoptimization.hpp" #include "runtime/flags/flagSetting.hpp" #include "runtime/frame.inline.hpp" #include "runtime/handles.inline.hpp" @@ -1120,13 +1121,7 @@ // so we don't have to break the cycle. Note that it is possible to // have the Method* live here, in case we unload the nmethod because // it is pointing to some oop (other than the Method*) being unloaded. - if (_method != NULL) { - // OSR methods point to the Method*, but the Method* does not - // point back! - if (_method->code() == this) { - _method->clear_code(); // Break a cycle - } - } + Method::unlink_code(_method, this); // Break a cycle // Make the class unloaded - i.e., change state and notify sweeper assert(SafepointSynchronize::is_at_safepoint() || Thread::current()->is_ConcurrentGC_thread(), @@ -1208,17 +1203,14 @@ } } -void nmethod::unlink_from_method(bool acquire_lock) { +void nmethod::unlink_from_method() { // We need to check if both the _code and _from_compiled_code_entry_point // refer to this nmethod because there is a race in setting these two fields // in Method* as seen in bugid 4947125. // If the vep() points to the zombie nmethod, the memory for the nmethod // could be flushed and the compiler and vtable stubs could still call // through it. - if (method() != NULL && (method()->code() == this || - method()->from_compiled_entry() == verified_entry_point())) { - method()->clear_code(acquire_lock); - } + Method::unlink_code(method(), this); } /** @@ -1244,24 +1236,24 @@ // during patching, depending on the nmethod state we must notify the GC that // code has been unloaded, unregistering it. We cannot do this right while - // holding the Patching_lock because we need to use the CodeCache_lock. This + // holding the CompiledMethod_lock because we need to use the CodeCache_lock. This // would be prone to deadlocks. // This flag is used to remember whether we need to later lock and unregister. bool nmethod_needs_unregister = false; + // invalidate osr nmethod before acquiring the CompiledMethod_lock since + // they both acquire leaf locks and we don't want a deadlock. + // This logic is equivalent to the logic below for patching the + // verified entry point of regular methods. We check that the + // nmethod is in use to ensure that it is invalidated only once. + if (is_osr_method() && is_in_use()) { + // this effectively makes the osr nmethod not entrant + invalidate_osr_method(); + } + { - // invalidate osr nmethod before acquiring the patching lock since - // they both acquire leaf locks and we don't want a deadlock. - // This logic is equivalent to the logic below for patching the - // verified entry point of regular methods. We check that the - // nmethod is in use to ensure that it is invalidated only once.
- if (is_osr_method() && is_in_use()) { - // this effectively makes the osr nmethod not entrant - invalidate_osr_method(); - } - // Enter critical section. Does not block for safepoint. - MutexLocker pl(Patching_lock, Mutex::_no_safepoint_check_flag); + MutexLocker pl(CompiledMethod_lock, Mutex::_no_safepoint_check_flag); if (_state == state) { // another thread already performed this transition so nothing @@ -1305,8 +1297,9 @@ log_state_change(); // Remove nmethod from method. - unlink_from_method(false /* already owns Patching_lock */); - } // leave critical region under Patching_lock + unlink_from_method(); + + } // leave critical region under CompiledMethod_lock #if INCLUDE_JVMCI // Invalidate can't occur while holding the Patching lock diff --git a/src/hotspot/share/code/nmethod.hpp b/src/hotspot/share/code/nmethod.hpp --- a/src/hotspot/share/code/nmethod.hpp +++ b/src/hotspot/share/code/nmethod.hpp @@ -119,7 +119,7 @@ // used by jvmti to track if an unload event has been posted for this nmethod. bool _unload_reported; - // Protected by Patching_lock + // Protected by CompiledMethod_lock volatile signed char _state; // {not_installed, in_use, not_entrant, zombie, unloaded} #ifdef ASSERT @@ -386,7 +386,7 @@ int comp_level() const { return _comp_level; } - void unlink_from_method(bool acquire_lock); + void unlink_from_method(); // Support for oops in scopes and relocs: // Note: index 0 is reserved for null. diff --git a/src/hotspot/share/gc/z/zBarrierSetNMethod.cpp b/src/hotspot/share/gc/z/zBarrierSetNMethod.cpp --- a/src/hotspot/share/gc/z/zBarrierSetNMethod.cpp +++ b/src/hotspot/share/gc/z/zBarrierSetNMethod.cpp @@ -1,5 +1,5 @@ /* - * Copyright (c) 2018, Oracle and/or its affiliates. All rights reserved. + * Copyright (c) 2018, 2019, Oracle and/or its affiliates. All rights reserved. * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. * * This code is free software; you can redistribute it and/or modify it @@ -45,7 +45,7 @@ // We don't need to take the lock when unlinking nmethods from // the Method, because it is only concurrently unlinked by // the entry barrier, which acquires the per nmethod lock. - nm->unlink_from_method(false /* acquire_lock */); + nm->unlink_from_method(); // We can end up calling nmethods that are unloading // since we clear compiled ICs lazily. Returning false diff --git a/src/hotspot/share/gc/z/zNMethod.cpp b/src/hotspot/share/gc/z/zNMethod.cpp --- a/src/hotspot/share/gc/z/zNMethod.cpp +++ b/src/hotspot/share/gc/z/zNMethod.cpp @@ -1,5 +1,5 @@ /* - * Copyright (c) 2017, 2018, Oracle and/or its affiliates. All rights reserved. + * Copyright (c) 2017, 2019, Oracle and/or its affiliates. All rights reserved. * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. * * This code is free software; you can redistribute it and/or modify it @@ -285,7 +285,7 @@ // We don't need to take the lock when unlinking nmethods from // the Method, because it is only concurrently unlinked by // the entry barrier, which acquires the per nmethod lock. 
- nm->unlink_from_method(false /* acquire_lock */); + nm->unlink_from_method(); return; } diff --git a/src/hotspot/share/jvmci/jvmciEnv.cpp b/src/hotspot/share/jvmci/jvmciEnv.cpp --- a/src/hotspot/share/jvmci/jvmciEnv.cpp +++ b/src/hotspot/share/jvmci/jvmciEnv.cpp @@ -31,6 +31,7 @@ #include "memory/universe.hpp" #include "oops/objArrayKlass.hpp" #include "oops/typeArrayOop.inline.hpp" +#include "runtime/deoptimization.hpp" #include "runtime/jniHandles.inline.hpp" #include "runtime/javaCalls.hpp" #include "jvmci/jniAccessMark.inline.hpp" @@ -1496,8 +1497,7 @@ // Invalidating the HotSpotNmethod means we want the nmethod // to be deoptimized. nm->mark_for_deoptimization(); - VM_Deoptimize op; - VMThread::execute(&op); + Deoptimization::deoptimize_all_marked(); } // A HotSpotNmethod instance can only reference a single nmethod diff --git a/src/hotspot/share/oops/method.cpp b/src/hotspot/share/oops/method.cpp --- a/src/hotspot/share/oops/method.cpp +++ b/src/hotspot/share/oops/method.cpp @@ -103,7 +103,7 @@ // Fix and bury in Method* set_interpreter_entry(NULL); // sets i2i entry and from_int set_adapter_entry(NULL); - clear_code(false /* don't need a lock */); // from_c/from_i get set to c2i/i2i + Method::clear_code(); // from_c/from_i get set to c2i/i2i if (access_flags.is_native()) { clear_native_function(); @@ -815,7 +815,7 @@ set_native_function( SharedRuntime::native_method_throw_unsatisfied_link_error_entry(), !native_bind_event_is_interesting); - clear_code(); + Method::unlink_code(this); } address Method::critical_native_function() { @@ -938,8 +938,7 @@ } // Revert to using the interpreter and clear out the nmethod -void Method::clear_code(bool acquire_lock /* = true */) { - MutexLocker pl(acquire_lock ? Patching_lock : NULL, Mutex::_no_safepoint_check_flag); +void Method::clear_code() { // this may be NULL if c2i adapters have not been made yet // Only should happen at allocate time. if (adapter() == NULL) { @@ -953,6 +952,24 @@ _code = NULL; } +void Method::unlink_code(Method *method, CompiledMethod *compare) { + if (method == NULL) { + return; + } + MutexLocker ml(CompiledMethod_lock->owned_by_self() ? NULL : CompiledMethod_lock, Mutex::_no_safepoint_check_flag); + if (method->code() == compare || + method->from_compiled_entry() == compare->verified_entry_point()) { + method->clear_code(); + } +} + +void Method::unlink_code(Method *method) { + if (method != NULL) { + MutexLocker ml(CompiledMethod_lock->owned_by_self() ? NULL : CompiledMethod_lock, Mutex::_no_safepoint_check_flag); + method->clear_code(); + } +} + #if INCLUDE_CDS // Called by class data sharing to remove any entry points (which are not shared) void Method::unlink_method() { @@ -1172,7 +1189,7 @@ // Install compiled code. Instantly it can execute. 
void Method::set_code(const methodHandle& mh, CompiledMethod *code) { - MutexLocker pl(Patching_lock, Mutex::_no_safepoint_check_flag); + MutexLocker pl(CompiledMethod_lock, Mutex::_no_safepoint_check_flag); assert( code, "use clear_code to remove code" ); assert( mh->check_code(), "" ); diff --git a/src/hotspot/share/oops/method.hpp b/src/hotspot/share/oops/method.hpp --- a/src/hotspot/share/oops/method.hpp +++ b/src/hotspot/share/oops/method.hpp @@ -463,7 +463,15 @@ address verified_code_entry(); bool check_code() const; // Not inline to avoid circular ref CompiledMethod* volatile code() const; - void clear_code(bool acquire_lock = true); // Clear out any compiled code + + static void unlink_code(Method *method, CompiledMethod *compare); + static void unlink_code(Method *method); + +private: + // Either called with CompiledMethod_lock held or from constructor. + void clear_code(); + +public: static void set_code(const methodHandle& mh, CompiledMethod* code); void set_adapter_entry(AdapterHandlerEntry* adapter) { constMethod()->set_adapter_entry(adapter); diff --git a/src/hotspot/share/prims/jvmtiEventController.cpp b/src/hotspot/share/prims/jvmtiEventController.cpp --- a/src/hotspot/share/prims/jvmtiEventController.cpp +++ b/src/hotspot/share/prims/jvmtiEventController.cpp @@ -32,6 +32,7 @@ #include "prims/jvmtiExport.hpp" #include "prims/jvmtiImpl.hpp" #include "prims/jvmtiThreadState.inline.hpp" +#include "runtime/deoptimization.hpp" #include "runtime/frame.hpp" #include "runtime/thread.inline.hpp" #include "runtime/threadSMR.hpp" @@ -239,8 +240,7 @@ } } if (num_marked > 0) { - VM_Deoptimize op; - VMThread::execute(&op); + Deoptimization::deoptimize_all_marked(); } } } diff --git a/src/hotspot/share/prims/methodHandles.cpp b/src/hotspot/share/prims/methodHandles.cpp --- a/src/hotspot/share/prims/methodHandles.cpp +++ b/src/hotspot/share/prims/methodHandles.cpp @@ -42,6 +42,7 @@ #include "oops/typeArrayOop.inline.hpp" #include "prims/methodHandles.hpp" #include "runtime/compilationPolicy.hpp" +#include "runtime/deoptimization.hpp" #include "runtime/fieldDescriptor.inline.hpp" #include "runtime/handles.inline.hpp" #include "runtime/interfaceSupport.inline.hpp" @@ -1109,8 +1110,7 @@ } if (marked > 0) { // At least one nmethod has been marked for deoptimization. 
- VM_Deoptimize op; - VMThread::execute(&op); + Deoptimization::deoptimize_all_marked(); } } @@ -1506,8 +1506,7 @@ } if (marked > 0) { // At least one nmethod has been marked for deoptimization - VM_Deoptimize op; - VMThread::execute(&op); + Deoptimization::deoptimize_all_marked(); } } } diff --git a/src/hotspot/share/prims/whitebox.cpp b/src/hotspot/share/prims/whitebox.cpp --- a/src/hotspot/share/prims/whitebox.cpp +++ b/src/hotspot/share/prims/whitebox.cpp @@ -823,8 +823,7 @@ WB_ENTRY(void, WB_DeoptimizeAll(JNIEnv* env, jobject o)) MutexLocker mu(Compile_lock); CodeCache::mark_all_nmethods_for_deoptimization(); - VM_Deoptimize op; - VMThread::execute(&op); + Deoptimization::deoptimize_all_marked(); WB_END WB_ENTRY(jint, WB_DeoptimizeMethod(JNIEnv* env, jobject o, jobject method, jboolean is_osr)) @@ -841,8 +840,7 @@ } result += CodeCache::mark_for_deoptimization(mh()); if (result > 0) { - VM_Deoptimize op; - VMThread::execute(&op); + Deoptimization::deoptimize_all_marked(); } return result; WB_END diff --git a/src/hotspot/share/runtime/biasedLocking.cpp b/src/hotspot/share/runtime/biasedLocking.cpp --- a/src/hotspot/share/runtime/biasedLocking.cpp +++ b/src/hotspot/share/runtime/biasedLocking.cpp @@ -628,9 +628,7 @@ event->commit(); } -BiasedLocking::Condition BiasedLocking::revoke_and_rebias(Handle obj, bool attempt_rebias, TRAPS) { - assert(!SafepointSynchronize::is_at_safepoint(), "must not be called while at safepoint"); - +BiasedLocking::Condition fast_revoke(Handle obj, bool attempt_rebias, JavaThread* thread = NULL) { // We can revoke the biases of anonymously-biased objects // efficiently enough that we should not cause these revocations to // update the heuristics because doing so may cause unwanted bulk @@ -647,7 +645,7 @@ markOop unbiased_prototype = markOopDesc::prototype()->set_age(mark->age()); markOop res_mark = obj->cas_set_mark(unbiased_prototype, mark); if (res_mark == biased_value) { - return BIAS_REVOKED; + return BiasedLocking::BIAS_REVOKED; } } else if (mark->has_bias_pattern()) { Klass* k = obj->klass(); @@ -662,7 +660,7 @@ markOop biased_value = mark; markOop res_mark = obj->cas_set_mark(prototype_header, mark); assert(!obj->mark()->has_bias_pattern(), "even if we raced, should still be revoked"); - return BIAS_REVOKED; + return BiasedLocking::BIAS_REVOKED; } else if (prototype_header->bias_epoch() != mark->bias_epoch()) { // The epoch of this biasing has expired indicating that the // object is effectively unbiased. Depending on whether we need @@ -672,31 +670,74 @@ // can reach this point due to various points in the runtime // needing to revoke biases. 
if (attempt_rebias) { - assert(THREAD->is_Java_thread(), ""); markOop biased_value = mark; - markOop rebiased_prototype = markOopDesc::encode((JavaThread*) THREAD, mark->age(), prototype_header->bias_epoch()); + markOop rebiased_prototype = markOopDesc::encode(thread, mark->age(), prototype_header->bias_epoch()); markOop res_mark = obj->cas_set_mark(rebiased_prototype, mark); if (res_mark == biased_value) { - return BIAS_REVOKED_AND_REBIASED; + return BiasedLocking::BIAS_REVOKED_AND_REBIASED; } } else { markOop biased_value = mark; markOop unbiased_prototype = markOopDesc::prototype()->set_age(mark->age()); markOop res_mark = obj->cas_set_mark(unbiased_prototype, mark); if (res_mark == biased_value) { - return BIAS_REVOKED; + return BiasedLocking::BIAS_REVOKED; } } } } + return BiasedLocking::NOT_REVOKED; +} + +BiasedLocking::Condition BiasedLocking::revoke_and_rebias_in_handshake(Handle obj, TRAPS) { + BiasedLocking::Condition bc = fast_revoke(obj, false); + if (bc != NOT_REVOKED) { + return bc; + } + + markOop mark = obj->mark(); + if (!mark->has_bias_pattern()) { + return NOT_BIASED; + } + + Klass *k = obj->klass(); + markOop prototype_header = k->prototype_header(); + if (mark->biased_locker() == THREAD && prototype_header->bias_epoch() == mark->bias_epoch()) { + ResourceMark rm; + log_info(biasedlocking)("Revoking bias by walking my own stack:"); + EventBiasedLockSelfRevocation event; + BiasedLocking::Condition cond = revoke_bias(obj(), false, false, (JavaThread*) THREAD, NULL); + ((JavaThread*) THREAD)->set_cached_monitor_info(NULL); + assert(cond == BIAS_REVOKED, "why not?"); + if (event.should_commit()) { + post_self_revocation_event(&event, k); + } + return cond; + } + + ShouldNotReachHere(); + + return NOT_REVOKED; +} + +BiasedLocking::Condition BiasedLocking::revoke_and_rebias(Handle obj, bool attempt_rebias, TRAPS) { + assert(!SafepointSynchronize::is_at_safepoint(), "must not be called while at safepoint"); + assert(!attempt_rebias || THREAD->is_Java_thread(), ""); + + BiasedLocking::Condition bc = fast_revoke(obj, attempt_rebias, (JavaThread*) THREAD); + if (bc != NOT_REVOKED) { + return bc; + } HeuristicsResult heuristics = update_heuristics(obj(), attempt_rebias); if (heuristics == HR_NOT_BIASED) { return NOT_BIASED; } else if (heuristics == HR_SINGLE_REVOKE) { + markOop mark = obj->mark(); Klass *k = obj->klass(); markOop prototype_header = k->prototype_header(); - if (mark->biased_locker() == THREAD && + if (mark->has_bias_pattern() && + mark->biased_locker() == ((JavaThread*) THREAD) && prototype_header->bias_epoch() == mark->bias_epoch()) { // A thread is trying to revoke the bias of an object biased // toward it, again likely due to an identity hash code diff --git a/src/hotspot/share/runtime/biasedLocking.hpp b/src/hotspot/share/runtime/biasedLocking.hpp --- a/src/hotspot/share/runtime/biasedLocking.hpp +++ b/src/hotspot/share/runtime/biasedLocking.hpp @@ -159,6 +159,7 @@ static int* slow_path_entry_count_addr(); enum Condition { + NOT_REVOKED = 0, NOT_BIASED = 1, BIAS_REVOKED = 2, BIAS_REVOKED_AND_REBIASED = 3 @@ -175,6 +176,7 @@ // This should be called by JavaThreads to revoke the bias of an object static Condition revoke_and_rebias(Handle obj, bool attempt_rebias, TRAPS); + static Condition revoke_and_rebias_in_handshake(Handle obj, TRAPS); // These do not allow rebiasing; they are used by deoptimization to // ensure that monitors on the stack can be migrated diff --git a/src/hotspot/share/runtime/deoptimization.cpp b/src/hotspot/share/runtime/deoptimization.cpp 
--- a/src/hotspot/share/runtime/deoptimization.cpp +++ b/src/hotspot/share/runtime/deoptimization.cpp @@ -776,10 +776,35 @@ return bt; JRT_END +class DeoptimizeMarkedTC : public ThreadClosure { + bool _in_handshake; + public: + DeoptimizeMarkedTC(bool in_handshake) : _in_handshake(in_handshake) {} + virtual void do_thread(Thread* thread) { + assert(thread->is_Java_thread(), "must be"); + JavaThread* jt = (JavaThread*)thread; + jt->deoptimize_marked_methods(_in_handshake); + } +}; -int Deoptimization::deoptimize_dependents() { - Threads::deoptimized_wrt_marked_nmethods(); - return 0; +void Deoptimization::deoptimize_all_marked() { + ResourceMark rm; + DeoptimizationMarker dm; + + if (SafepointSynchronize::is_at_safepoint()) { + DeoptimizeMarkedTC deopt(false); + // Make the dependent methods not entrant + CodeCache::make_marked_nmethods_not_entrant(); + Threads::java_threads_do(&deopt); + } else { + // Make the dependent methods not entrant + { + MutexLocker mu(CodeCache_lock, Mutex::_no_safepoint_check_flag); + CodeCache::make_marked_nmethods_not_entrant(); + } + DeoptimizeMarkedTC deopt(true); + Handshake::execute(&deopt); + } } Deoptimization::DeoptAction Deoptimization::_unloaded_action @@ -1243,14 +1268,7 @@ } } - -void Deoptimization::revoke_biases_of_monitors(JavaThread* thread, frame fr, RegisterMap* map) { - if (!UseBiasedLocking) { - return; - } - - GrowableArray<Handle>* objects_to_revoke = new GrowableArray<Handle>(); - +static void get_monitors_from_stack(GrowableArray<Handle>* objects_to_revoke, JavaThread* thread, frame fr, RegisterMap* map) { // Unfortunately we don't have a RegisterMap available in most of // the places we want to call this routine so we need to walk the // stack again to update the register map. @@ -1274,6 +1292,14 @@ cvf = compiledVFrame::cast(cvf->sender()); } collect_monitors(cvf, objects_to_revoke); +} + +void Deoptimization::inflate_monitors(JavaThread* thread, frame fr, RegisterMap* map) { + if (!UseBiasedLocking) { + return; + } + GrowableArray<Handle>* objects_to_revoke = new GrowableArray<Handle>(); + get_monitors_from_stack(objects_to_revoke, thread, fr, map); if (SafepointSynchronize::is_at_safepoint()) { BiasedLocking::revoke_at_safepoint(objects_to_revoke); @@ -1282,6 +1308,24 @@ } } +void Deoptimization::inflate_monitors_handshake(JavaThread* thread, frame fr, RegisterMap* map) { + if (!UseBiasedLocking) { + return; + } + GrowableArray<Handle>* objects_to_revoke = new GrowableArray<Handle>(); + get_monitors_from_stack(objects_to_revoke, thread, fr, map); + + int len = objects_to_revoke->length(); + for (int i = 0; i < len; i++) { + oop obj = (objects_to_revoke->at(i))(); + markOop mark = obj->mark(); + assert(!mark->has_bias_pattern() || mark->biased_locker() == thread, "Can't revoke"); + BiasedLocking::revoke_and_rebias_in_handshake(objects_to_revoke->at(i), thread); + assert(!obj->mark()->has_bias_pattern(), "biases should be revoked by now"); + ObjectSynchronizer::inflate(thread, obj, ObjectSynchronizer::inflate_cause_vm_internal); + } +} + void Deoptimization::deoptimize_single_frame(JavaThread* thread, frame fr, Deoptimization::DeoptReason reason) { assert(fr.can_be_deoptimized(), "checking frame type"); @@ -1310,11 +1354,16 @@ fr.deoptimize(thread); } -void Deoptimization::deoptimize(JavaThread* thread, frame fr, RegisterMap *map) { - deoptimize(thread, fr, map, Reason_constraint); +void Deoptimization::deoptimize(JavaThread* thread, frame fr, RegisterMap *map, bool in_handshake) { + deopt_thread(in_handshake, thread, fr, map, Reason_constraint); } void Deoptimization::deoptimize(JavaThread* thread, frame fr, RegisterMap *map, DeoptReason reason) { + deopt_thread(false, thread, fr, map, reason); +} + +void Deoptimization::deopt_thread(bool in_handshake, JavaThread* thread, + frame fr, RegisterMap *map, DeoptReason reason) { // Deoptimize only if the frame comes from compile code. // Do not deoptimize the frame which is already patched // during the execution of the loops below. @@ -1324,7 +1373,11 @@ ResourceMark rm; DeoptimizationMarker dm; if (UseBiasedLocking) { - revoke_biases_of_monitors(thread, fr, map); + if (in_handshake) { + inflate_monitors_handshake(thread, fr, map); + } else { + inflate_monitors(thread, fr, map); + } } deoptimize_single_frame(thread, fr, reason); @@ -1489,7 +1542,7 @@ ResourceMark rm; // Revoke biases of any monitors in the frame to ensure we can migrate them - revoke_biases_of_monitors(thread, fr, &reg_map); + fix_monitors(thread, fr, &reg_map); DeoptReason reason = trap_request_reason(trap_request); DeoptAction action = trap_request_action(trap_request); diff --git a/src/hotspot/share/runtime/deoptimization.hpp b/src/hotspot/share/runtime/deoptimization.hpp --- a/src/hotspot/share/runtime/deoptimization.hpp +++ b/src/hotspot/share/runtime/deoptimization.hpp @@ -135,12 +135,19 @@ Unpack_LIMIT = 4 }; + static void deoptimize_all_marked(); + + private: // Checks all compiled methods. Invalid methods are deleted and // corresponding activations are deoptimized. static int deoptimize_dependents(); + static void inflate_monitors_handshake(JavaThread* thread, frame fr, RegisterMap* map); + static void inflate_monitors(JavaThread* thread, frame fr, RegisterMap* map); + static void deopt_thread(bool in_handshake, JavaThread* thread, frame fr, RegisterMap *map, DeoptReason reason); + public: // Deoptimizes a frame lazily. nmethod gets patched deopt happens on return to the frame - static void deoptimize(JavaThread* thread, frame fr, RegisterMap *reg_map); + static void deoptimize(JavaThread* thread, frame fr, RegisterMap *map, bool in_handshake = false); static void deoptimize(JavaThread* thread, frame fr, RegisterMap *reg_map, DeoptReason reason); #if INCLUDE_JVMCI @@ -153,7 +160,8 @@ // Helper function to revoke biases of all monitors in frame if UseBiasedLocking // is enabled - static void revoke_biases_of_monitors(JavaThread* thread, frame fr, RegisterMap* map); + static void fix_monitors(JavaThread* thread, frame fr, RegisterMap* map) + { inflate_monitors(thread, fr, map); } #if COMPILER2_OR_JVMCI JVMCI_ONLY(public:) diff --git a/src/hotspot/share/runtime/mutex.hpp b/src/hotspot/share/runtime/mutex.hpp --- a/src/hotspot/share/runtime/mutex.hpp +++ b/src/hotspot/share/runtime/mutex.hpp @@ -62,7 +62,7 @@ event, access = event + 1, tty = access + 2, - special = tty + 1, + special = tty + 2, suspend_resume = special + 1, vmweak = suspend_resume + 2, leaf = vmweak + 2, diff --git a/src/hotspot/share/runtime/mutexLocker.cpp b/src/hotspot/share/runtime/mutexLocker.cpp --- a/src/hotspot/share/runtime/mutexLocker.cpp +++ b/src/hotspot/share/runtime/mutexLocker.cpp @@ -39,6 +39,7 @@ // Consider using GCC's __read_mostly. Mutex* Patching_lock = NULL; +Mutex* CompiledMethod_lock = NULL; Monitor* SystemDictionary_lock = NULL; Mutex* ProtectionDomainSet_lock = NULL; Mutex* SharedDictionary_lock = NULL; @@ -260,6 +261,8 @@ def(ClassLoaderDataGraph_lock , PaddedMutex , nonleaf, true, Monitor::_safepoint_check_always); def(Patching_lock , PaddedMutex , special, true, Monitor::_safepoint_check_never); // used for safepointing and code patching.
+ def(OsrList_lock , PaddedMutex , special-1, true, Monitor::_safepoint_check_never); + def(CompiledMethod_lock , PaddedMutex , special-1, true, Monitor::_safepoint_check_never); def(Service_lock , PaddedMonitor, special, true, Monitor::_safepoint_check_never); // used for service thread operations def(JmethodIdCreation_lock , PaddedMutex , leaf, true, Monitor::_safepoint_check_always); // used for creating jmethodIDs. @@ -275,7 +278,6 @@ def(SymbolArena_lock , PaddedMutex , leaf+2, true, Monitor::_safepoint_check_never); def(ProfilePrint_lock , PaddedMutex , leaf, false, Monitor::_safepoint_check_always); // serial profile printing def(ExceptionCache_lock , PaddedMutex , leaf, false, Monitor::_safepoint_check_always); // serial profile printing - def(OsrList_lock , PaddedMutex , leaf, true, Monitor::_safepoint_check_never); def(Debug1_lock , PaddedMutex , leaf, true, Monitor::_safepoint_check_never); #ifndef PRODUCT def(FullGCALot_lock , PaddedMutex , leaf, false, Monitor::_safepoint_check_always); // a lock to make FullGCALot MT safe diff --git a/src/hotspot/share/runtime/mutexLocker.hpp b/src/hotspot/share/runtime/mutexLocker.hpp --- a/src/hotspot/share/runtime/mutexLocker.hpp +++ b/src/hotspot/share/runtime/mutexLocker.hpp @@ -32,6 +32,7 @@ // Mutexes used in the VM. extern Mutex* Patching_lock; // a lock used to guard code patching of compiled code +extern Mutex* CompiledMethod_lock; // a lock used to guard a compiled method extern Monitor* SystemDictionary_lock; // a lock on the system dictionary extern Mutex* ProtectionDomainSet_lock; // a lock on the pd_set list in the system dictionary extern Mutex* SharedDictionary_lock; // a lock on the CDS shared dictionary diff --git a/src/hotspot/share/runtime/synchronizer.cpp b/src/hotspot/share/runtime/synchronizer.cpp --- a/src/hotspot/share/runtime/synchronizer.cpp +++ b/src/hotspot/share/runtime/synchronizer.cpp @@ -1315,7 +1315,7 @@ // Inflate mutates the heap ... // Relaxing assertion for bug 6320749. assert(Universe::verify_in_progress() || - !SafepointSynchronize::is_at_safepoint(), "invariant"); + !Universe::heap()->is_gc_active(), "invariant"); EventJavaMonitorInflate event; @@ -1444,7 +1444,7 @@ // to avoid false sharing on MP systems ... OM_PERFDATA_OP(Inflations, inc()); if (log_is_enabled(Trace, monitorinflation)) { - ResourceMark rm(Self); + ResourceMark rm; lsh.print_cr("inflate(has_locker): object=" INTPTR_FORMAT ", mark=" INTPTR_FORMAT ", type='%s'", p2i(object), p2i(object->mark()), object->klass()->external_name()); @@ -1494,7 +1494,7 @@ // cache lines to avoid false sharing on MP systems ... 
OM_PERFDATA_OP(Inflations, inc()); if (log_is_enabled(Trace, monitorinflation)) { - ResourceMark rm(Self); + ResourceMark rm; lsh.print_cr("inflate(neutral): object=" INTPTR_FORMAT ", mark=" INTPTR_FORMAT ", type='%s'", p2i(object), p2i(object->mark()), object->klass()->external_name()); diff --git a/src/hotspot/share/runtime/thread.cpp b/src/hotspot/share/runtime/thread.cpp --- a/src/hotspot/share/runtime/thread.cpp +++ b/src/hotspot/share/runtime/thread.cpp @@ -2833,18 +2833,17 @@ #endif // PRODUCT -void JavaThread::deoptimized_wrt_marked_nmethods() { +void JavaThread::deoptimize_marked_methods(bool in_handshake) { if (!has_last_Java_frame()) return; // BiasedLocking needs an updated RegisterMap for the revoke monitors pass StackFrameStream fst(this, UseBiasedLocking); for (; !fst.is_done(); fst.next()) { if (fst.current()->should_be_deoptimized()) { - Deoptimization::deoptimize(this, *fst.current(), fst.register_map()); + Deoptimization::deoptimize(this, *fst.current(), fst.register_map(), in_handshake); } } } - // If the caller is a NamedThread, then remember, in the current scope, // the given JavaThread in its _processed_thread field. class RememberProcessedThread: public StackObj { @@ -4600,13 +4599,6 @@ threads_do(&handles_closure); } -void Threads::deoptimized_wrt_marked_nmethods() { - ALL_JAVA_THREADS(p) { - p->deoptimized_wrt_marked_nmethods(); - } -} - - // Get count Java threads that are waiting to enter the specified monitor. GrowableArray<JavaThread*>* Threads::get_pending_threads(ThreadsList * t_list, int count, diff --git a/src/hotspot/share/runtime/thread.hpp b/src/hotspot/share/runtime/thread.hpp --- a/src/hotspot/share/runtime/thread.hpp +++ b/src/hotspot/share/runtime/thread.hpp @@ -1923,7 +1923,7 @@ void deoptimize(); void make_zombies(); - void deoptimized_wrt_marked_nmethods(); + void deoptimize_marked_methods(bool in_handshake); public: // Returns the running thread as a JavaThread diff --git a/src/hotspot/share/runtime/vmOperations.cpp b/src/hotspot/share/runtime/vmOperations.cpp --- a/src/hotspot/share/runtime/vmOperations.cpp +++ b/src/hotspot/share/runtime/vmOperations.cpp @@ -118,18 +118,6 @@ } } -void VM_Deoptimize::doit() { - // We do not want any GCs to happen while we are in the middle of this VM operation - ResourceMark rm; - DeoptimizationMarker dm; - - // Deoptimize all activations depending on marked nmethods - Deoptimization::deoptimize_dependents(); - - // Make the dependent methods not entrant - CodeCache::make_marked_nmethods_not_entrant(); -} - void VM_MarkActiveNMethods::doit() { NMethodSweeper::mark_active_nmethods(); } diff --git a/src/hotspot/share/runtime/vmOperations.hpp b/src/hotspot/share/runtime/vmOperations.hpp --- a/src/hotspot/share/runtime/vmOperations.hpp +++ b/src/hotspot/share/runtime/vmOperations.hpp @@ -49,7 +49,6 @@ template(ClearICs) \ template(ForceSafepoint) \ template(ForceAsyncSafepoint) \ - template(Deoptimize) \ template(DeoptimizeFrame) \ template(DeoptimizeAll) \ template(ZombieAll) \ @@ -318,14 +317,6 @@ VM_GTestExecuteAtSafepoint() {} }; -class VM_Deoptimize: public VM_Operation { - public: - VM_Deoptimize() {} - VMOp_Type type() const { return VMOp_Deoptimize; } - void doit(); - bool allow_nested_vm_operations() const { return true; } -}; - class VM_MarkActiveNMethods: public VM_Operation { public: VM_MarkActiveNMethods() {} diff --git a/src/hotspot/share/services/dtraceAttacher.cpp b/src/hotspot/share/services/dtraceAttacher.cpp --- a/src/hotspot/share/services/dtraceAttacher.cpp +++
b/src/hotspot/share/services/dtraceAttacher.cpp @@ -1,5 +1,5 @@ /* - * Copyright (c) 2006, 2018, Oracle and/or its affiliates. All rights reserved. + * Copyright (c) 2006, 2019, Oracle and/or its affiliates. All rights reserved. * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. * * This code is free software; you can redistribute it and/or modify it @@ -33,23 +33,6 @@ #ifdef SOLARIS -class VM_DeoptimizeTheWorld : public VM_Operation { - public: - VMOp_Type type() const { - return VMOp_DeoptimizeTheWorld; - } - void doit() { - CodeCache::mark_all_nmethods_for_deoptimization(); - ResourceMark rm; - DeoptimizationMarker dm; - // Deoptimize all activations depending on marked methods - Deoptimization::deoptimize_dependents(); - - // Mark the dependent methods non entrant - CodeCache::make_marked_nmethods_not_entrant(); - } -}; - static void set_bool_flag(const char* flag, bool value) { JVMFlag::boolAtPut((char*)flag, strlen(flag), &value, JVMFlag::ATTACH_ON_DEMAND); @@ -74,8 +57,8 @@ if (changed) { // one or more flags changed, need to deoptimize - VM_DeoptimizeTheWorld op; - VMThread::execute(&op); + CodeCache::mark_all_nmethods_for_deoptimization(); + Deoptimization::deoptimize_all_marked(); } } @@ -97,8 +80,8 @@ } if (changed) { // one or more flags changed, need to deoptimize - VM_DeoptimizeTheWorld op; - VMThread::execute(&op); + CodeCache::mark_all_nmethods_for_deoptimization(); + Deoptimization::deoptimize_all_marked(); } } # HG changeset patch # User rehn # Date 1557829036 -7200 # Tue May 14 12:17:16 2019 +0200 # Node ID 017a2fa651de7a3efe744402956c19b1101bb23c # Parent 2534b19714ebff0dd2263bc26475dedb1906df4a [mq]: 8221734-v3 diff --git a/src/hotspot/share/aot/aotCompiledMethod.cpp b/src/hotspot/share/aot/aotCompiledMethod.cpp --- a/src/hotspot/share/aot/aotCompiledMethod.cpp +++ b/src/hotspot/share/aot/aotCompiledMethod.cpp @@ -188,7 +188,9 @@ #endif // Remove AOTCompiledMethod from method. - Method::unlink_code(method(), this); + if (method() != NULL) { + method()->unlink_code(this); + } } // leave critical region under CompiledMethod_lock diff --git a/src/hotspot/share/code/codeCache.cpp b/src/hotspot/share/code/codeCache.cpp --- a/src/hotspot/share/code/codeCache.cpp +++ b/src/hotspot/share/code/codeCache.cpp @@ -1188,10 +1188,6 @@ if (number_of_nmethods_with_dependencies() == 0) return; - // CodeCache can only be updated by a thread_in_VM and they will all be - // stopped during the safepoint so CodeCache will be safe to update without - // holding the CodeCache_lock. - KlassDepChange changes(dependee); // Compute the dependent nmethods @@ -1206,10 +1202,6 @@ // --- Compile_lock is not held. However we are at a safepoint. assert_locked_or_safepoint(Compile_lock); - // CodeCache can only be updated by a thread_in_VM and they will all be - // stopped dring the safepoint so CodeCache will be safe to update without - // holding the CodeCache_lock. - // Compute the dependent nmethods if (mark_for_deoptimization(m_h()) > 0) { Deoptimization::deoptimize_all_marked(); diff --git a/src/hotspot/share/code/nmethod.cpp b/src/hotspot/share/code/nmethod.cpp --- a/src/hotspot/share/code/nmethod.cpp +++ b/src/hotspot/share/code/nmethod.cpp @@ -1121,7 +1121,9 @@ // so we don't have to break the cycle. Note that it is possible to // have the Method* live here, in case we unload the nmethod because // it is pointing to some oop (other than the Method*) being unloaded. 
- Method::unlink_code(_method, this); // Break a cycle + if (_method != NULL) { + _method->unlink_code(this); + } // Make the class unloaded - i.e., change state and notify sweeper assert(SafepointSynchronize::is_at_safepoint() || Thread::current()->is_ConcurrentGC_thread(), @@ -1204,13 +1206,9 @@ } void nmethod::unlink_from_method() { - // We need to check if both the _code and _from_compiled_code_entry_point - // refer to this nmethod because there is a race in setting these two fields - // in Method* as seen in bugid 4947125. - // If the vep() points to the zombie nmethod, the memory for the nmethod - // could be flushed and the compiler and vtable stubs could still call - // through it. - Method::unlink_code(method(), this); + if (method() != NULL) { + method()->unlink_code(); + } } /** diff --git a/src/hotspot/share/oops/method.cpp b/src/hotspot/share/oops/method.cpp --- a/src/hotspot/share/oops/method.cpp +++ b/src/hotspot/share/oops/method.cpp @@ -815,7 +815,7 @@ set_native_function( SharedRuntime::native_method_throw_unsatisfied_link_error_entry(), !native_bind_event_is_interesting); - Method::unlink_code(this); + this->unlink_code(); } address Method::critical_native_function() { @@ -952,22 +952,23 @@ _code = NULL; } -void Method::unlink_code(Method *method, CompiledMethod *compare) { - if (method == NULL) { - return; - } +void Method::unlink_code(CompiledMethod *compare) { MutexLocker ml(CompiledMethod_lock->owned_by_self() ? NULL : CompiledMethod_lock, Mutex::_no_safepoint_check_flag); - if (method->code() == compare || - method->from_compiled_entry() == compare->verified_entry_point()) { - method->clear_code(); + // We need to check if both the _code and _from_compiled_code_entry_point + // refer to this nmethod because there is a race in setting these two fields + // in Method* as seen in bugid 4947125. + // If the vep() points to the zombie nmethod, the memory for the nmethod + // could be flushed and the compiler and vtable stubs could still call + // through it. + if (code() == compare || + from_compiled_entry() == compare->verified_entry_point()) { + clear_code(); } } -void Method::unlink_code(Method *method) { - if (method != NULL) { - MutexLocker ml(CompiledMethod_lock->owned_by_self() ? NULL : CompiledMethod_lock, Mutex::_no_safepoint_check_flag); - method->clear_code(); - } +void Method::unlink_code() { + MutexLocker ml(CompiledMethod_lock->owned_by_self() ? NULL : CompiledMethod_lock, Mutex::_no_safepoint_check_flag); + clear_code(); } #if INCLUDE_CDS diff --git a/src/hotspot/share/oops/method.hpp b/src/hotspot/share/oops/method.hpp --- a/src/hotspot/share/oops/method.hpp +++ b/src/hotspot/share/oops/method.hpp @@ -464,8 +464,10 @@ bool check_code() const; // Not inline to avoid circular ref CompiledMethod* volatile code() const; - static void unlink_code(Method *method, CompiledMethod *compare); - static void unlink_code(Method *method); + // Locks CompiledMethod_lock if not held. + void unlink_code(CompiledMethod *compare); + // Locks CompiledMethod_lock if not held. + void unlink_code(); private: // Either called with CompiledMethod_lock held or from constructor. 
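The "CompiledMethod_lock->owned_by_self() ? NULL : CompiledMethod_lock" pattern above works because HotSpot's MutexLocker treats a NULL mutex as "nothing to lock", which is what lets unlink_code() be called both with and without the lock already held. A minimal standalone sketch of that idiom, using simplified stand-in types (Mutex, ScopedLocker, CompiledMethod_lock_sketch, unlink_code_shape are illustrative names, not the real HotSpot declarations):

// Sketch only: "lock unless this thread already owns it", mirroring
// Method::unlink_code() above.
#include <mutex>
#include <thread>

class Mutex {
  std::mutex _m;
  std::thread::id _owner; // simplified owner tracking, not production-safe
 public:
  void lock()   { _m.lock(); _owner = std::this_thread::get_id(); }
  void unlock() { _owner = std::thread::id(); _m.unlock(); }
  bool owned_by_self() const { return _owner == std::this_thread::get_id(); }
};

class ScopedLocker {
  Mutex* _mutex; // nullptr means "caller already holds the lock": no-op
 public:
  explicit ScopedLocker(Mutex* m) : _mutex(m) { if (_mutex) _mutex->lock(); }
  ~ScopedLocker() { if (_mutex) _mutex->unlock(); }
};

Mutex CompiledMethod_lock_sketch;

void unlink_code_shape() {
  // Lock only if not already owned, so the function is safe to call both
  // from paths that hold the lock and from paths that do not.
  ScopedLocker ml(CompiledMethod_lock_sketch.owned_by_self()
                  ? nullptr : &CompiledMethod_lock_sketch);
  // ... clear_code() would run here, under the lock either way ...
}

int main() { unlink_code_shape(); return 0; }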
diff --git a/src/hotspot/share/runtime/biasedLocking.cpp b/src/hotspot/share/runtime/biasedLocking.cpp --- a/src/hotspot/share/runtime/biasedLocking.cpp +++ b/src/hotspot/share/runtime/biasedLocking.cpp @@ -628,12 +628,11 @@ event->commit(); } -BiasedLocking::Condition fast_revoke(Handle obj, bool attempt_rebias, JavaThread* thread = NULL) { +BiasedLocking::Condition fast_revoke(Handle obj, markOop mark, bool attempt_rebias, JavaThread* thread = NULL) { // We can revoke the biases of anonymously-biased objects // efficiently enough that we should not cause these revocations to // update the heuristics because doing so may cause unwanted bulk // revocations (which are expensive) to occur. - markOop mark = obj->mark(); if (mark->is_biased_anonymously() && !attempt_rebias) { // We are probably trying to revoke the bias of this object due to // an identity hash code computation. Try to revoke the bias @@ -689,42 +688,39 @@ return BiasedLocking::NOT_REVOKED; } -BiasedLocking::Condition BiasedLocking::revoke_and_rebias_in_handshake(Handle obj, TRAPS) { - BiasedLocking::Condition bc = fast_revoke(obj, false); +BiasedLocking::Condition BiasedLocking::revoke_own_locks_in_handshake(Handle obj, TRAPS) { + markOop mark = obj->mark(); + BiasedLocking::Condition bc = fast_revoke(obj, mark, false); if (bc != NOT_REVOKED) { return bc; } - markOop mark = obj->mark(); if (!mark->has_bias_pattern()) { return NOT_BIASED; } Klass *k = obj->klass(); markOop prototype_header = k->prototype_header(); - if (mark->biased_locker() == THREAD && prototype_header->bias_epoch() == mark->bias_epoch()) { - ResourceMark rm; - log_info(biasedlocking)("Revoking bias by walking my own stack:"); - EventBiasedLockSelfRevocation event; - BiasedLocking::Condition cond = revoke_bias(obj(), false, false, (JavaThread*) THREAD, NULL); - ((JavaThread*) THREAD)->set_cached_monitor_info(NULL); - assert(cond == BIAS_REVOKED, "why not?"); - if (event.should_commit()) { - post_self_revocation_event(&event, k); - } - return cond; + guarantee(mark->biased_locker() == THREAD && + prototype_header->bias_epoch() == mark->bias_epoch(), "Revoke failed, unhandled biased lock state"); + ResourceMark rm; + log_info(biasedlocking)("Revoking bias by walking my own stack:"); + EventBiasedLockSelfRevocation event; + BiasedLocking::Condition cond = revoke_bias(obj(), false, false, (JavaThread*) THREAD, NULL); + ((JavaThread*) THREAD)->set_cached_monitor_info(NULL); + assert(cond == BIAS_REVOKED, "why not?"); + if (event.should_commit()) { + post_self_revocation_event(&event, k); } - - ShouldNotReachHere(); - - return NOT_REVOKED; + return cond; } BiasedLocking::Condition BiasedLocking::revoke_and_rebias(Handle obj, bool attempt_rebias, TRAPS) { assert(!SafepointSynchronize::is_at_safepoint(), "must not be called while at safepoint"); assert(!attempt_rebias || THREAD->is_Java_thread(), ""); - BiasedLocking::Condition bc = fast_revoke(obj, attempt_rebias, (JavaThread*) THREAD); + markOop mark = obj->mark(); + BiasedLocking::Condition bc = fast_revoke(obj, mark, attempt_rebias, (JavaThread*) THREAD); if (bc != NOT_REVOKED) { return bc; } @@ -733,11 +729,9 @@ if (heuristics == HR_NOT_BIASED) { return NOT_BIASED; } else if (heuristics == HR_SINGLE_REVOKE) { - markOop mark = obj->mark(); Klass *k = obj->klass(); markOop prototype_header = k->prototype_header(); - if (mark->has_bias_pattern() && - mark->biased_locker() == ((JavaThread*) THREAD) && + if (mark->biased_locker() == THREAD && prototype_header->bias_epoch() == mark->bias_epoch()) { // A thread is 
trying to revoke the bias of an object biased // toward it, again likely due to an identity hash code diff --git a/src/hotspot/share/runtime/biasedLocking.hpp b/src/hotspot/share/runtime/biasedLocking.hpp --- a/src/hotspot/share/runtime/biasedLocking.hpp +++ b/src/hotspot/share/runtime/biasedLocking.hpp @@ -176,7 +176,7 @@ // This should be called by JavaThreads to revoke the bias of an object static Condition revoke_and_rebias(Handle obj, bool attempt_rebias, TRAPS); - static Condition revoke_and_rebias_in_handshake(Handle obj, TRAPS); + static Condition revoke_own_locks_in_handshake(Handle obj, TRAPS); // These do not allow rebiasing; they are used by deoptimization to // ensure that monitors on the stack can be migrated diff --git a/src/hotspot/share/runtime/deoptimization.cpp b/src/hotspot/share/runtime/deoptimization.cpp --- a/src/hotspot/share/runtime/deoptimization.cpp +++ b/src/hotspot/share/runtime/deoptimization.cpp @@ -777,7 +777,7 @@ JRT_END class DeoptimizeMarkedTC : public ThreadClosure { - bool _in_handshake; + bool _in_handshake; public: DeoptimizeMarkedTC(bool in_handshake) : _in_handshake(in_handshake) {} virtual void do_thread(Thread* thread) { @@ -1294,7 +1294,7 @@ collect_monitors(cvf, objects_to_revoke); } -void Deoptimization::inflate_monitors(JavaThread* thread, frame fr, RegisterMap* map) { +void Deoptimization::revoke_safepoint(JavaThread* thread, frame fr, RegisterMap* map) { if (!UseBiasedLocking) { return; } @@ -1308,7 +1308,7 @@ } } -void Deoptimization::inflate_monitors_handshake(JavaThread* thread, frame fr, RegisterMap* map) { +void Deoptimization::revoke_handshake(JavaThread* thread, frame fr, RegisterMap* map) { if (!UseBiasedLocking) { return; } @@ -1320,9 +1320,8 @@ oop obj = (objects_to_revoke->at(i))(); markOop mark = obj->mark(); assert(!mark->has_bias_pattern() || mark->biased_locker() == thread, "Can't revoke"); - BiasedLocking::revoke_and_rebias_in_handshake(objects_to_revoke->at(i), thread); + BiasedLocking::revoke_own_locks_in_handshake(objects_to_revoke->at(i), thread); assert(!obj->mark()->has_bias_pattern(), "biases should be revoked by now"); - ObjectSynchronizer::inflate(thread, obj, ObjectSynchronizer::inflate_cause_vm_internal); } } @@ -1374,9 +1373,9 @@ DeoptimizationMarker dm; if (UseBiasedLocking) { if (in_handshake) { - inflate_monitors_handshake(thread, fr, map); + revoke_handshake(thread, fr, map); } else { - inflate_monitors(thread, fr, map); + revoke_safepoint(thread, fr, map); } } deoptimize_single_frame(thread, fr, reason); @@ -1542,7 +1541,7 @@ ResourceMark rm; // Revoke biases of any monitors in the frame to ensure we can migrate them - fix_monitors(thread, fr, &reg_map); + revoke_biases_of_monitors(thread, fr, &reg_map); DeoptReason reason = trap_request_reason(trap_request); DeoptAction action = trap_request_action(trap_request); diff --git a/src/hotspot/share/runtime/deoptimization.hpp b/src/hotspot/share/runtime/deoptimization.hpp --- a/src/hotspot/share/runtime/deoptimization.hpp +++ b/src/hotspot/share/runtime/deoptimization.hpp @@ -141,11 +141,11 @@ // Checks all compiled methods. Invalid methods are deleted and // corresponding activations are deoptimized.
static int deoptimize_dependents(); - static void inflate_monitors_handshake(JavaThread* thread, frame fr, RegisterMap* map); - static void inflate_monitors(JavaThread* thread, frame fr, RegisterMap* map); + static void revoke_handshake(JavaThread* thread, frame fr, RegisterMap* map); + static void revoke_safepoint(JavaThread* thread, frame fr, RegisterMap* map); static void deopt_thread(bool in_handshake, JavaThread* thread, frame fr, RegisterMap *map, DeoptReason reason); + public: - // Deoptimizes a frame lazily. nmethod gets patched deopt happens on return to the frame static void deoptimize(JavaThread* thread, frame fr, RegisterMap *map, bool in_handshake = false); static void deoptimize(JavaThread* thread, frame fr, RegisterMap *reg_map, DeoptReason reason); @@ -160,8 +160,9 @@ // Helper function to revoke biases of all monitors in frame if UseBiasedLocking // is enabled - static void fix_monitors(JavaThread* thread, frame fr, RegisterMap* map) - { inflate_monitors(thread, fr, map); } + static void revoke_biases_of_monitors(JavaThread* thread, frame fr, RegisterMap* map) { + revoke_safepoint(thread, fr, map); + } #if COMPILER2_OR_JVMCI JVMCI_ONLY(public:) diff --git a/src/hotspot/share/runtime/synchronizer.cpp b/src/hotspot/share/runtime/synchronizer.cpp --- a/src/hotspot/share/runtime/synchronizer.cpp +++ b/src/hotspot/share/runtime/synchronizer.cpp @@ -1315,7 +1315,7 @@ // Inflate mutates the heap ... // Relaxing assertion for bug 6320749. assert(Universe::verify_in_progress() || - !Universe::heap()->is_gc_active(), "invariant"); + !SafepointSynchronize::is_at_safepoint(), "invariant"); EventJavaMonitorInflate event; @@ -1444,7 +1444,7 @@ // to avoid false sharing on MP systems ... OM_PERFDATA_OP(Inflations, inc()); if (log_is_enabled(Trace, monitorinflation)) { - ResourceMark rm; + ResourceMark rm(Self); lsh.print_cr("inflate(has_locker): object=" INTPTR_FORMAT ", mark=" INTPTR_FORMAT ", type='%s'", p2i(object), p2i(object->mark()), object->klass()->external_name()); @@ -1494,7 +1494,7 @@ // cache lines to avoid false sharing on MP systems ... 
OM_PERFDATA_OP(Inflations, inc()); if (log_is_enabled(Trace, monitorinflation)) { - ResourceMark rm; + ResourceMark rm(Self); lsh.print_cr("inflate(neutral): object=" INTPTR_FORMAT ", mark=" INTPTR_FORMAT ", type='%s'", p2i(object), p2i(object->mark()), object->klass()->external_name()); # HG changeset patch # User rehn # Date 1557848805 -7200 # Tue May 14 17:46:45 2019 +0200 # Node ID 551d47a3c1e22726c881806047988ce07f951ad1 # Parent 017a2fa651de7a3efe744402956c19b1101bb23c [mq]: 8221734-v3-stress-test diff --git a/src/hotspot/share/aot/aotCompiledMethod.hpp b/src/hotspot/share/aot/aotCompiledMethod.hpp --- a/src/hotspot/share/aot/aotCompiledMethod.hpp +++ b/src/hotspot/share/aot/aotCompiledMethod.hpp @@ -175,6 +175,7 @@ state() == not_used; } virtual bool is_alive() const { return _is_alive(); } virtual bool is_in_use() const { return state() == in_use; } + virtual bool is_not_installed() const { return state() == not_installed; } virtual bool is_unloading() { return false; } diff --git a/src/hotspot/share/code/codeCache.cpp b/src/hotspot/share/code/codeCache.cpp --- a/src/hotspot/share/code/codeCache.cpp +++ b/src/hotspot/share/code/codeCache.cpp @@ -1142,13 +1142,21 @@ } #endif // INCLUDE_JVMTI -// Deoptimize all methods +// Deoptimize all (or almost all) methods void CodeCache::mark_all_nmethods_for_deoptimization() { MutexLocker mu(CodeCache_lock, Mutex::_no_safepoint_check_flag); CompiledMethodIterator iter(CompiledMethodIterator::only_alive_and_not_unloading); while(iter.next()) { CompiledMethod* nm = iter.method(); - if (!nm->method()->is_method_handle_intrinsic()) { + // Not-installed nmethods are unsafe to mark for deopt and are normally never deopted. + // A not_entrant method may become a zombie at any time, + // since we don't know on which side of the last safepoint it became not_entrant + // (state must be in_use). + // Native methods are unsafe to mark for deopt and are normally never deopted. + if (!nm->method()->is_method_handle_intrinsic() && + !nm->is_not_installed() && + nm->is_in_use() && + !nm->is_native_method()) { nm->mark_for_deoptimization(); } } @@ -1176,7 +1184,12 @@ CompiledMethodIterator iter(CompiledMethodIterator::only_alive_and_not_unloading); while(iter.next()) { CompiledMethod* nm = iter.method(); - if (nm->is_marked_for_deoptimization() && !nm->is_not_entrant()) { + // only_alive_and_not_unloading returns not_entrant nmethods. + // A not_entrant nmethod can become a zombie at any time, + // if it was made not_entrant before the previous safepoint/handshake. + // We check that it is neither not_entrant nor zombie + // by checking is_in_use().
+ if (nm->is_marked_for_deoptimization() && nm->is_in_use()) { nm->make_not_entrant(); } } diff --git a/src/hotspot/share/code/compiledMethod.hpp b/src/hotspot/share/code/compiledMethod.hpp --- a/src/hotspot/share/code/compiledMethod.hpp +++ b/src/hotspot/share/code/compiledMethod.hpp @@ -214,6 +214,7 @@ }; virtual bool is_in_use() const = 0; + virtual bool is_not_installed() const = 0; virtual int comp_level() const = 0; virtual int compile_id() const = 0; diff --git a/src/hotspot/share/prims/whitebox.cpp b/src/hotspot/share/prims/whitebox.cpp --- a/src/hotspot/share/prims/whitebox.cpp +++ b/src/hotspot/share/prims/whitebox.cpp @@ -821,7 +821,6 @@ WB_END WB_ENTRY(void, WB_DeoptimizeAll(JNIEnv* env, jobject o)) - MutexLocker mu(Compile_lock); CodeCache::mark_all_nmethods_for_deoptimization(); Deoptimization::deoptimize_all_marked(); WB_END diff --git a/test/hotspot/jtreg/compiler/codecache/stress/UnexpectedDeoptimizationAllTest.java b/test/hotspot/jtreg/compiler/codecache/stress/UnexpectedDeoptimizationAllTest.java new file mode 100644 --- /dev/null +++ b/test/hotspot/jtreg/compiler/codecache/stress/UnexpectedDeoptimizationAllTest.java @@ -0,0 +1,64 @@ +/* + * Copyright (c) 2014, 2016, Oracle and/or its affiliates. All rights reserved. + * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. + * + * This code is free software; you can redistribute it and/or modify it + * under the terms of the GNU General Public License version 2 only, as + * published by the Free Software Foundation. + * + * This code is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License + * version 2 for more details (a copy is included in the LICENSE file that + * accompanied this code). + * + * You should have received a copy of the GNU General Public License version + * 2 along with this work; if not, write to the Free Software Foundation, + * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. + * + * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA + * or visit www.oracle.com if you need additional information or have any + * questions. + */ + +/* + * @test UnexpectedDeoptimizationAllTest + * @key stress + * @summary stressing code cache by forcing unexpected deoptimizations of all methods + * @library /test/lib / + * @modules java.base/jdk.internal.misc + * java.management + * + * @build sun.hotspot.WhiteBox compiler.codecache.stress.Helper compiler.codecache.stress.TestCaseImpl + * @run driver ClassFileInstaller sun.hotspot.WhiteBox + * sun.hotspot.WhiteBox$WhiteBoxPermission + * @run main/othervm -Xbootclasspath/a:. -XX:+UnlockDiagnosticVMOptions + * -XX:+WhiteBoxAPI -XX:-DeoptimizeRandom + * -XX:CompileCommand=dontinline,compiler.codecache.stress.Helper$TestCase::method + * -XX:-SegmentedCodeCache + * compiler.codecache.stress.UnexpectedDeoptimizationAllTest + * @run main/othervm -Xbootclasspath/a:. 
-XX:+UnlockDiagnosticVMOptions + * -XX:+WhiteBoxAPI -XX:-DeoptimizeRandom + * -XX:CompileCommand=dontinline,compiler.codecache.stress.Helper$TestCase::method + * -XX:+SegmentedCodeCache + * compiler.codecache.stress.UnexpectedDeoptimizationAllTest + */ + +package compiler.codecache.stress; + +public class UnexpectedDeoptimizationAllTest implements Runnable { + + public static void main(String[] args) { + new CodeCacheStressRunner(new UnexpectedDeoptimizationAllTest()).runTest(); + } + + @Override + public void run() { + Helper.WHITE_BOX.deoptimizeAll(); + try { + Thread.sleep(10); + } catch (InterruptedException e) { + } + } + +}
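Taken together, the series replaces the VM_Deoptimize safepoint operation with a thread-local handshake. Condensed from the deoptimization.cpp changes above (assertions and biased-locking details elided, so this is an orientation sketch rather than the exact committed code), the new control flow is:

// Condensed shape of Deoptimization::deoptimize_all_marked() from this series.
class DeoptimizeMarkedTC : public ThreadClosure {
  bool _in_handshake;
 public:
  DeoptimizeMarkedTC(bool in_handshake) : _in_handshake(in_handshake) {}
  virtual void do_thread(Thread* thread) {
    // Each JavaThread patches the return pcs of its own frames whose
    // nmethods were marked for deoptimization.
    ((JavaThread*) thread)->deoptimize_marked_methods(_in_handshake);
  }
};

void Deoptimization::deoptimize_all_marked() {
  ResourceMark rm;
  DeoptimizationMarker dm;

  if (SafepointSynchronize::is_at_safepoint()) {
    // Already stopped: make the marked nmethods not entrant and walk
    // all JavaThreads directly.
    CodeCache::make_marked_nmethods_not_entrant();
    DeoptimizeMarkedTC deopt(false);
    Threads::java_threads_do(&deopt);
  } else {
    // First make the marked nmethods not entrant so no new activations
    // appear, then handshake each JavaThread to deoptimize the frames
    // already on its stack.
    {
      MutexLocker mu(CodeCache_lock, Mutex::_no_safepoint_check_flag);
      CodeCache::make_marked_nmethods_not_entrant();
    }
    DeoptimizeMarkedTC deopt(true);
    Handshake::execute(&deopt);
  }
}

The is_in_use() filters added in the v3-stress-test patch exist because this flow is no longer atomic with respect to safepoints: an nmethod marked on one side of a safepoint may already have become not_entrant or zombie by the time make_marked_nmethods_not_entrant() runs.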