@@ -262,12 +262,12 @@
       // Blocking or yielding incur their own penalties in the form of context switching
       // and the resultant loss of $ residency.
       // Further complicating matters is that yield() does not work as naively expected
       // on many platforms -- yield() does not guarantee that any other ready threads
-      // will run.   As such we revert yield_all() after some number of iterations.
-      // Yield_all() is implemented as a short unconditional sleep on some platforms.
+      // will run.   As such we revert to naked_short_sleep() after some number of iterations.
+      // naked_short_sleep() is implemented as a short unconditional sleep.
       // Typical operating systems round a "short" sleep period up to 10 msecs, so sleeping
       // can actually increase the time it takes the VM thread to detect that a system-wide
       // stop-the-world safepoint has been reached.  In a pathological scenario such as that
       // described in CR6415670 the VMthread may sleep just before the mutator(s) become safe.
       // In that case the mutators will be stalled waiting for the safepoint to complete and the

@@ -320,13 +320,11 @@
         SpinPause() ;     // MP-Polite spin
       } else
       if (steps < DeferThrSuspendLoopCount) {
         os::NakedYield() ;
       } else {
-        os::yield_all() ;
-        // Alternately, the VM thread could transiently depress its scheduling priority or
-        // transiently increase the priority of the tardy mutator(s).
+        os::naked_short_sleep(1);
       iterations ++ ;
     assert(iterations < (uint)max_jint, "We have been iterating in the safepoint loop too long");