/*
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 only, as
 * published by the Free Software Foundation.  Oracle designates this
 * particular file as subject to the "Classpath" exception as provided
 * by Oracle in the LICENSE file that accompanied this code.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * You should have received a copy of the GNU General Public License version
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
 * or visit www.oracle.com if you need additional information or have any
 * questions.
 */

/*
 * This file is available under and governed by the GNU General Public
 * License version 2 only, as published by the Free Software Foundation.
 * However, the following notice accompanied the original version of this
 * file:
 *
 * Written by Doug Lea, Bill Scherer, and Michael Scott with
 * assistance from members of JCP JSR-166 Expert Group and released to
 * the public domain, as explained at
 * http://creativecommons.org/licenses/publicdomain
 */

package java.util.concurrent;
import java.util.concurrent.locks.*;
import java.util.concurrent.atomic.*;
import java.util.*;

/**
 * A {@linkplain BlockingQueue blocking queue} in which each insert
 * operation must wait for a corresponding remove operation by another
 * thread, and vice versa.
 * A synchronous queue does not have any
 * internal capacity, not even a capacity of one.  You cannot
 * <tt>peek</tt> at a synchronous queue because an element is only
 * present when you try to remove it; you cannot insert an element
 * (using any method) unless another thread is trying to remove it;
 * you cannot iterate as there is nothing to iterate.  The
 * <em>head</em> of the queue is the element that the first queued
 * inserting thread is trying to add to the queue; if there is no such
 * queued thread then no element is available for removal and
 * <tt>poll()</tt> will return <tt>null</tt>.  For purposes of other
 * <tt>Collection</tt> methods (for example <tt>contains</tt>), a
 * <tt>SynchronousQueue</tt> acts as an empty collection.  This queue
 * does not permit <tt>null</tt> elements.
 *
 * <p>Synchronous queues are similar to rendezvous channels used in
 * CSP and Ada. They are well suited for handoff designs, in which an
 * object running in one thread must sync up with an object running
 * in another thread in order to hand it some information, event, or
 * task.
 *
 * <p>This class supports an optional fairness policy for ordering
 * waiting producer and consumer threads.  By default, this ordering
 * is not guaranteed. However, a queue constructed with fairness set
 * to <tt>true</tt> grants threads access in FIFO order.
 *
 * <p>This class and its iterator implement all of the
 * <em>optional</em> methods of the {@link Collection} and {@link
 * Iterator} interfaces.
 *
 * <p>This class is a member of the
 * <a href="{@docRoot}/../technotes/guides/collections/index.html">
 * Java Collections Framework</a>.
 *
 * @since 1.5
 * @author Doug Lea and Bill Scherer and Michael Scott
 * @param <E> the type of elements held in this collection
 */
public class SynchronousQueue<E> extends AbstractQueue<E>
    implements BlockingQueue<E>, java.io.Serializable {
    private static final long serialVersionUID = -3223113410248163686L;

    /*
     * This class implements extensions of the dual stack and dual
     * queue algorithms described in "Nonblocking Concurrent Objects
     * with Condition Synchronization", by W. N. Scherer III and
     * M. L. Scott.  18th Annual Conf. on Distributed Computing,
     * Oct. 2004 (see also
     * http://www.cs.rochester.edu/u/scott/synchronization/pseudocode/duals.html).
     * The (Lifo) stack is used for non-fair mode, and the (Fifo)
     * queue for fair mode. The performance of the two is generally
     * similar. Fifo usually supports higher throughput under
     * contention but Lifo maintains higher thread locality in common
     * applications.
     *
     * A dual queue (and similarly stack) is one that at any given
     * time either holds "data" -- items provided by put operations,
     * or "requests" -- slots representing take operations, or is
     * empty. A call to "fulfill" (i.e., a call requesting an item
     * from a queue holding data or vice versa) dequeues a
     * complementary node.  The most interesting feature of these
     * queues is that any operation can figure out which mode the
     * queue is in, and act accordingly without needing locks.
     *
     * Both the queue and stack extend abstract class Transferer
     * defining the single method transfer that does a put or a
     * take. These are unified into a single method because in dual
     * data structures, the put and take operations are symmetrical,
     * so nearly all code can be combined. The resulting transfer
     * methods are on the long side, but are easier to follow than
     * they would be if broken up into nearly-duplicated parts.
     *
     * The queue and stack data structures share many conceptual
     * similarities but very few concrete details. For simplicity,
     * they are kept distinct so that they can later evolve
     * separately.
     *
     * The algorithms here differ from the versions in the above paper
     * in extending them for use in synchronous queues, as well as
     * dealing with cancellation. The main differences include:
     *
     *  1. The original algorithms used bit-marked pointers, but
     *     the ones here use mode bits in nodes, leading to a number
     *     of further adaptations.
     *  2. SynchronousQueues must block threads waiting to become
     *     fulfilled.
     *  3. Support for cancellation via timeout and interrupts,
     *     including cleaning out cancelled nodes/threads
     *     from lists to avoid garbage retention and memory depletion.
     *
     * Blocking is mainly accomplished using LockSupport park/unpark,
     * except that nodes that appear to be the next ones to become
     * fulfilled first spin a bit (on multiprocessors only). On very
     * busy synchronous queues, spinning can dramatically improve
     * throughput. And on less busy ones, the amount of spinning is
     * small enough not to be noticeable.
     *
     * Cleaning is done in different ways in queues vs stacks.  For
     * queues, we can almost always remove a node immediately in O(1)
     * time (modulo retries for consistency checks) when it is
     * cancelled. But if it may be pinned as the current tail, it must
     * wait until some subsequent cancellation. For stacks, we need a
     * potentially O(n) traversal to be sure that we can remove the
     * node, but this can run concurrently with other threads
     * accessing the stack.
     *
     * While garbage collection takes care of most node reclamation
     * issues that otherwise complicate nonblocking algorithms, care
     * is taken to "forget" references to data, other nodes, and
     * threads that might be held on to long-term by blocked
     * threads. In cases where setting to null would otherwise
     * conflict with main algorithms, this is done by changing a
     * node's link to now point to the node itself. This doesn't arise
     * much for Stack nodes (because blocked threads do not hang on to
     * old head pointers), but references in Queue nodes must be
     * aggressively forgotten to avoid reachability of everything any
     * node has ever referred to since arrival.
     */

    /**
     * Shared internal API for dual stacks and queues.
     */
    abstract static class Transferer {
        /**
         * Performs a put or take.
         *
         * @param e if non-null, the item to be handed to a consumer;
         *          if null, requests that transfer return an item
         *          offered by producer.
         * @param timed if this operation should timeout
         * @param nanos the timeout, in nanoseconds
         * @return if non-null, the item provided or received; if null,
         *         the operation failed due to timeout or interrupt --
         *         the caller can distinguish which of these occurred
         *         by checking Thread.interrupted.
         */
        abstract Object transfer(Object e, boolean timed, long nanos);
    }

    /** The number of CPUs, for spin control */
    static final int NCPUS = Runtime.getRuntime().availableProcessors();

    /**
     * The number of times to spin before blocking in timed waits.
     * The value is empirically derived -- it works well across a
     * variety of processors and OSes. Empirically, the best value
     * seems not to vary with number of CPUs (beyond 2) so is just
     * a constant.
     */
    static final int maxTimedSpins = (NCPUS < 2) ? 0 : 32;

    /**
     * The number of times to spin before blocking in untimed waits.
     * This is greater than timed value because untimed waits spin
     * faster since they don't need to check times on each spin.
     */
    static final int maxUntimedSpins = maxTimedSpins * 16;

    /**
     * The number of nanoseconds for which it is faster to spin
     * rather than to use timed park. A rough estimate suffices.
     */
    static final long spinForTimeoutThreshold = 1000L;

    /** Dual stack */
    static final class TransferStack extends Transferer {
        /*
         * This extends Scherer-Scott dual stack algorithm, differing,
         * among other ways, by using "covering" nodes rather than
         * bit-marked pointers: Fulfilling operations push on marker
         * nodes (with FULFILLING bit set in mode) to reserve a spot
         * to match a waiting node.
         */

        /* Modes for SNodes, ORed together in node fields */
        /** Node represents an unfulfilled consumer */
        static final int REQUEST    = 0;
        /** Node represents an unfulfilled producer */
        static final int DATA       = 1;
        /** Node is fulfilling another unfulfilled DATA or REQUEST */
        static final int FULFILLING = 2;

        /** Return true if m has fulfilling bit set */
        static boolean isFulfilling(int m) { return (m & FULFILLING) != 0; }

        /** Node class for TransferStacks. */
        static final class SNode {
            volatile SNode next;        // next node in stack
            volatile SNode match;       // the node matched to this
            volatile Thread waiter;     // to control park/unpark
            Object item;                // data; or null for REQUESTs
            int mode;
            // Note: item and mode fields don't need to be volatile
            // since they are always written before, and read after,
            // other volatile/atomic operations.

            SNode(Object item) {
                this.item = item;
            }

            boolean casNext(SNode cmp, SNode val) {
                return cmp == next &&
                    UNSAFE.compareAndSwapObject(this, nextOffset, cmp, val);
            }

            /**
             * Tries to match node s to this node, if so, waking up thread.
             * Fulfillers call tryMatch to identify their waiters.
             * Waiters block until they have been matched.
             *
             * @param s the node to match
             * @return true if successfully matched to s
             */
            boolean tryMatch(SNode s) {
                if (match == null &&
                    UNSAFE.compareAndSwapObject(this, matchOffset, null, s)) {
                    Thread w = waiter;
                    if (w != null) {    // waiters need at most one unpark
                        waiter = null;
                        LockSupport.unpark(w);
                    }
                    return true;
                }
                return match == s;
            }

            /**
             * Tries to cancel a wait by matching node to itself.
             */
            void tryCancel() {
                UNSAFE.compareAndSwapObject(this, matchOffset, null, this);
            }

            boolean isCancelled() {
                return match == this;
            }

            // Unsafe mechanics
            private static final sun.misc.Unsafe UNSAFE = sun.misc.Unsafe.getUnsafe();
            private static final long nextOffset =
                objectFieldOffset(UNSAFE, "next", SNode.class);
            private static final long matchOffset =
                objectFieldOffset(UNSAFE, "match", SNode.class);

        }

        /** The head (top) of the stack */
        volatile SNode head;

        boolean casHead(SNode h, SNode nh) {
            return h == head &&
                UNSAFE.compareAndSwapObject(this, headOffset, h, nh);
        }

        /**
         * Creates or resets fields of a node. Called only from transfer
         * where the node to push on stack is lazily created and
         * reused when possible to help reduce intervals between reads
         * and CASes of head and to avoid surges of garbage when CASes
         * to push nodes fail due to contention.
         */
        static SNode snode(SNode s, Object e, SNode next, int mode) {
            if (s == null) s = new SNode(e);
            s.mode = mode;
            s.next = next;
            return s;
        }

        /**
         * Puts or takes an item.
         */
        Object transfer(Object e, boolean timed, long nanos) {
            /*
             * Basic algorithm is to loop trying one of three actions:
             *
             * 1.
             *    If apparently empty or already containing nodes of same
             *    mode, try to push node on stack and wait for a match,
             *    returning it, or null if cancelled.
             *
             * 2. If apparently containing node of complementary mode,
             *    try to push a fulfilling node on to stack, match
             *    with corresponding waiting node, pop both from
             *    stack, and return matched item. The matching or
             *    unlinking might not actually be necessary because of
             *    other threads performing action 3:
             *
             * 3. If top of stack already holds another fulfilling node,
             *    help it out by doing its match and/or pop
             *    operations, and then continue. The code for helping
             *    is essentially the same as for fulfilling, except
             *    that it doesn't return the item.
             */

            SNode s = null; // constructed/reused as needed
            int mode = (e == null) ? REQUEST : DATA;

            for (;;) {
                SNode h = head;
                if (h == null || h.mode == mode) {  // empty or same-mode
                    if (timed && nanos <= 0) {      // can't wait
                        if (h != null && h.isCancelled())
                            casHead(h, h.next);     // pop cancelled node
                        else
                            return null;
                    } else if (casHead(h, s = snode(s, e, h, mode))) {
                        SNode m = awaitFulfill(s, timed, nanos);
                        if (m == s) {               // wait was cancelled
                            clean(s);
                            return null;
                        }
                        if ((h = head) != null && h.next == s)
                            casHead(h, s.next);     // help s's fulfiller
                        return (mode == REQUEST) ?
                            m.item : s.item;
                    }
                } else if (!isFulfilling(h.mode)) { // try to fulfill
                    if (h.isCancelled())            // already cancelled
                        casHead(h, h.next);         // pop and retry
                    else if (casHead(h, s=snode(s, e, h, FULFILLING|mode))) {
                        for (;;) { // loop until matched or waiters disappear
                            SNode m = s.next;       // m is s's match
                            if (m == null) {        // all waiters are gone
                                casHead(s, null);   // pop fulfill node
                                s = null;           // use new node next time
                                break;              // restart main loop
                            }
                            SNode mn = m.next;
                            if (m.tryMatch(s)) {
                                casHead(s, mn);     // pop both s and m
                                return (mode == REQUEST) ? m.item : s.item;
                            } else                  // lost match
                                s.casNext(m, mn);   // help unlink
                        }
                    }
                } else {                            // help a fulfiller
                    SNode m = h.next;               // m is h's match
                    if (m == null)                  // waiter is gone
                        casHead(h, null);           // pop fulfilling node
                    else {
                        SNode mn = m.next;
                        if (m.tryMatch(h))          // help match
                            casHead(h, mn);         // pop both h and m
                        else                        // lost match
                            h.casNext(m, mn);       // help unlink
                    }
                }
            }
        }

        /**
         * Spins/blocks until node s is matched by a fulfill operation.
         *
         * @param s the waiting node
         * @param timed true if timed wait
         * @param nanos timeout value
         * @return matched node, or s if cancelled
         */
        SNode awaitFulfill(SNode s, boolean timed, long nanos) {
            /*
             * When a node/thread is about to block, it sets its waiter
             * field and then rechecks state at least one more time
             * before actually parking, thus covering race vs
             * fulfiller noticing that waiter is non-null so should be
             * woken.
             *
             * When invoked by nodes that appear at the point of call
             * to be at the head of the stack, calls to park are
             * preceded by spins to avoid blocking when producers and
             * consumers are arriving very close in time. This can
             * happen enough to bother only on multiprocessors.
             *
             * The order of checks for returning out of main loop
             * reflects fact that interrupts have precedence over
             * normal returns, which have precedence over
             * timeouts. (So, on timeout, one last check for match is
             * done before giving up.) Except that calls from untimed
             * SynchronousQueue.{poll/offer} don't check interrupts
             * and don't wait at all, so are trapped in transfer
             * method rather than calling awaitFulfill.
             */
            long lastTime = timed ? System.nanoTime() : 0;
            Thread w = Thread.currentThread();
            SNode h = head;
            int spins = (shouldSpin(s) ?
                         (timed ? maxTimedSpins : maxUntimedSpins) : 0);
            for (;;) {
                if (w.isInterrupted())
                    s.tryCancel();
                SNode m = s.match;
                if (m != null)
                    return m;
                if (timed) {
                    long now = System.nanoTime();
                    nanos -= now - lastTime;
                    lastTime = now;
                    if (nanos <= 0) {
                        s.tryCancel();
                        continue;
                    }
                }
                if (spins > 0)
                    spins = shouldSpin(s) ? (spins-1) : 0;
                else if (s.waiter == null)
                    s.waiter = w; // establish waiter so can park next iter
                else if (!timed)
                    LockSupport.park(this);
                else if (nanos > spinForTimeoutThreshold)
                    LockSupport.parkNanos(this, nanos);
            }
        }

        /**
         * Returns true if node s is at head or there is an active
         * fulfiller.
         */
        boolean shouldSpin(SNode s) {
            SNode h = head;
            return (h == s || h == null || isFulfilling(h.mode));
        }

        /**
         * Unlinks s from the stack.
         */
        void clean(SNode s) {
            s.item = null;   // forget item
            s.waiter = null; // forget thread

            /*
             * At worst we may need to traverse entire stack to unlink
             * s. If there are multiple concurrent calls to clean, we
             * might not see s if another thread has already removed
             * it. But we can stop when we see any node known to
             * follow s. We use s.next unless it too is cancelled, in
             * which case we try the node one past.
             * We don't check any
             * further because we don't want to doubly traverse just to
             * find sentinel.
             */

            SNode past = s.next;
            if (past != null && past.isCancelled())
                past = past.next;

            // Absorb cancelled nodes at head
            SNode p;
            while ((p = head) != null && p != past && p.isCancelled())
                casHead(p, p.next);

            // Unsplice embedded nodes
            while (p != null && p != past) {
                SNode n = p.next;
                if (n != null && n.isCancelled())
                    p.casNext(n, n.next);
                else
                    p = n;
            }
        }

        // Unsafe mechanics
        private static final sun.misc.Unsafe UNSAFE = sun.misc.Unsafe.getUnsafe();
        private static final long headOffset =
            objectFieldOffset(UNSAFE, "head", TransferStack.class);

    }

    /** Dual Queue */
    static final class TransferQueue extends Transferer {
        /*
         * This extends Scherer-Scott dual queue algorithm, differing,
         * among other ways, by using modes within nodes rather than
         * marked pointers. The algorithm is a little simpler than
         * that for stacks because fulfillers do not need explicit
         * nodes, and matching is done by CAS'ing QNode.item field
         * from non-null to null (for put) or vice versa (for take).
         */

        /** Node class for TransferQueue. */
        static final class QNode {
            volatile QNode next;          // next node in queue
            volatile Object item;         // CAS'ed to or from null
            volatile Thread waiter;       // to control park/unpark
            final boolean isData;

            QNode(Object item, boolean isData) {
                this.item = item;
                this.isData = isData;
            }

            boolean casNext(QNode cmp, QNode val) {
                return next == cmp &&
                    UNSAFE.compareAndSwapObject(this, nextOffset, cmp, val);
            }

            boolean casItem(Object cmp, Object val) {
                return item == cmp &&
                    UNSAFE.compareAndSwapObject(this, itemOffset, cmp, val);
            }

            /**
             * Tries to cancel by CAS'ing ref to this as item.
             */
            void tryCancel(Object cmp) {
                UNSAFE.compareAndSwapObject(this, itemOffset, cmp, this);
            }

            boolean isCancelled() {
                return item == this;
            }

            /**
             * Returns true if this node is known to be off the queue
             * because its next pointer has been forgotten due to
             * an advanceHead operation.
             */
            boolean isOffList() {
                return next == this;
            }

            // Unsafe mechanics
            private static final sun.misc.Unsafe UNSAFE = sun.misc.Unsafe.getUnsafe();
            private static final long nextOffset =
                objectFieldOffset(UNSAFE, "next", QNode.class);
            private static final long itemOffset =
                objectFieldOffset(UNSAFE, "item", QNode.class);
        }

        /** Head of queue */
        transient volatile QNode head;
        /** Tail of queue */
        transient volatile QNode tail;
        /**
         * Reference to a cancelled node that might not yet have been
         * unlinked from queue because it was the last inserted node
         * when it cancelled.
         */
        transient volatile QNode cleanMe;

        TransferQueue() {
            QNode h = new QNode(null, false); // initialize to dummy node.
            head = h;
            tail = h;
        }

        /**
         * Tries to cas nh as new head; if successful, unlink
         * old head's next node to avoid garbage retention.
         */
        void advanceHead(QNode h, QNode nh) {
            if (h == head &&
                UNSAFE.compareAndSwapObject(this, headOffset, h, nh))
                h.next = h; // forget old next
        }

        /**
         * Tries to cas nt as new tail.
         */
        void advanceTail(QNode t, QNode nt) {
            if (tail == t)
                UNSAFE.compareAndSwapObject(this, tailOffset, t, nt);
        }

        /**
         * Tries to CAS cleanMe slot.
         */
        boolean casCleanMe(QNode cmp, QNode val) {
            return cleanMe == cmp &&
                UNSAFE.compareAndSwapObject(this, cleanMeOffset, cmp, val);
        }

        /**
         * Puts or takes an item.
         */
        Object transfer(Object e, boolean timed, long nanos) {
            /* Basic algorithm is to loop trying to take either of
             * two actions:
             *
             * 1. If queue apparently empty or holding same-mode nodes,
             *    try to add node to queue of waiters, wait to be
             *    fulfilled (or cancelled) and return matching item.
             *
             * 2. If queue apparently contains waiting items, and this
             *    call is of complementary mode, try to fulfill by CAS'ing
             *    item field of waiting node and dequeuing it, and then
             *    returning matching item.
             *
             * In each case, along the way, check for and try to help
             * advance head and tail on behalf of other stalled/slow
             * threads.
             *
             * The loop starts off with a null check guarding against
             * seeing uninitialized head or tail values. This never
             * happens in current SynchronousQueue, but could if
             * callers held non-volatile/final ref to the
             * transferer. The check is here anyway because it places
             * null checks at top of loop, which is usually faster
             * than having them implicitly interspersed.
             */

            QNode s = null; // constructed/reused as needed
            boolean isData = (e != null);

            for (;;) {
                QNode t = tail;
                QNode h = head;
                if (t == null || h == null)         // saw uninitialized value
                    continue;                       // spin

                if (h == t || t.isData == isData) { // empty or same-mode
                    QNode tn = t.next;
                    if (t != tail)                  // inconsistent read
                        continue;
                    if (tn != null) {               // lagging tail
                        advanceTail(t, tn);
                        continue;
                    }
                    if (timed && nanos <= 0)        // can't wait
                        return null;
                    if (s == null)
                        s = new QNode(e, isData);
                    if (!t.casNext(null, s))        // failed to link in
                        continue;

                    advanceTail(t, s);              // swing tail and wait
                    Object x = awaitFulfill(s, e, timed, nanos);
                    if (x == s) {                   // wait was cancelled
                        clean(t, s);
                        return null;
                    }

                    if (!s.isOffList()) {           // not already unlinked
                        advanceHead(t, s);          // unlink if head
                        if (x != null)              // and forget fields
                            s.item = s;
                        s.waiter = null;
                    }
                    return (x != null) ? x : e;

                } else {                            // complementary-mode
                    QNode m = h.next;               // node to fulfill
                    if (t != tail || m == null || h != head)
                        continue;                   // inconsistent read

                    Object x = m.item;
                    if (isData == (x != null) ||    // m already fulfilled
                        x == m ||                   // m cancelled
                        !m.casItem(x, e)) {         // lost CAS
                        advanceHead(h, m);          // dequeue and retry
                        continue;
                    }

                    advanceHead(h, m);              // successfully fulfilled
                    LockSupport.unpark(m.waiter);
                    return (x != null) ? x : e;
                }
            }
        }

        /**
         * Spins/blocks until node s is fulfilled.
         *
         * @param s the waiting node
         * @param e the comparison value for checking match
         * @param timed true if timed wait
         * @param nanos timeout value
         * @return matched item, or s if cancelled
         */
        Object awaitFulfill(QNode s, Object e, boolean timed, long nanos) {
            /* Same idea as TransferStack.awaitFulfill */
            long lastTime = timed ?
                System.nanoTime() : 0;
            Thread w = Thread.currentThread();
            int spins = ((head.next == s) ?
                         (timed ? maxTimedSpins : maxUntimedSpins) : 0);
            for (;;) {
                if (w.isInterrupted())
                    s.tryCancel(e);
                Object x = s.item;
                if (x != e)
                    return x;
                if (timed) {
                    long now = System.nanoTime();
                    nanos -= now - lastTime;
                    lastTime = now;
                    if (nanos <= 0) {
                        s.tryCancel(e);
                        continue;
                    }
                }
                if (spins > 0)
                    --spins;
                else if (s.waiter == null)
                    s.waiter = w;
                else if (!timed)
                    LockSupport.park(this);
                else if (nanos > spinForTimeoutThreshold)
                    LockSupport.parkNanos(this, nanos);
            }
        }

        /**
         * Gets rid of cancelled node s with original predecessor pred.
         */
        void clean(QNode pred, QNode s) {
            s.waiter = null; // forget thread
            /*
             * At any given time, exactly one node on list cannot be
             * deleted -- the last inserted node. To accommodate this,
             * if we cannot delete s, we save its predecessor as
             * "cleanMe", deleting the previously saved version
             * first. At least one of node s or the node previously
             * saved can always be deleted, so this always terminates.
             */
            while (pred.next == s) { // Return early if already unlinked
                QNode h = head;
                QNode hn = h.next;   // Absorb cancelled first node as head
                if (hn != null && hn.isCancelled()) {
                    advanceHead(h, hn);
                    continue;
                }
                QNode t = tail;      // Ensure consistent read for tail
                if (t == h)
                    return;
                QNode tn = t.next;
                if (t != tail)
                    continue;
                if (tn != null) {
                    advanceTail(t, tn);
                    continue;
                }
                if (s != t) {        // If not tail, try to unsplice
                    QNode sn = s.next;
                    if (sn == s || pred.casNext(s, sn))
                        return;
                }
                QNode dp = cleanMe;
                if (dp != null) {    // Try unlinking previous cancelled node
                    QNode d = dp.next;
                    QNode dn;
                    if (d == null ||               // d is gone or
                        d == dp ||                 // d is off list or
                        !d.isCancelled() ||        // d not cancelled or
                        (d != t &&                 // d not tail and
                         (dn = d.next) != null &&  //   has successor
                         dn != d &&                //   that is on list
                         dp.casNext(d, dn)))       // d unspliced
                        casCleanMe(dp, null);
                    if (dp == pred)
                        return;      // s is already saved node
                } else if (casCleanMe(null, pred))
                    return;          // Postpone cleaning s
            }
        }

        // unsafe mechanics
        private static final sun.misc.Unsafe UNSAFE = sun.misc.Unsafe.getUnsafe();
        private static final long headOffset =
            objectFieldOffset(UNSAFE, "head", TransferQueue.class);
        private static final long tailOffset =
            objectFieldOffset(UNSAFE, "tail", TransferQueue.class);
        private static final long cleanMeOffset =
            objectFieldOffset(UNSAFE, "cleanMe", TransferQueue.class);

    }

    /**
     * The transferer. Set only in constructor, but cannot be declared
     * as final without further complicating serialization.  Since
     * this is accessed only at most once per public method, there
     * isn't a noticeable performance penalty for using volatile
     * instead of final here.
     */
    private transient volatile Transferer transferer;

    /**
     * Creates a <tt>SynchronousQueue</tt> with nonfair access policy.
     */
    public SynchronousQueue() {
        this(false);
    }

    /**
     * Creates a <tt>SynchronousQueue</tt> with the specified fairness policy.
     *
     * @param fair if true, waiting threads contend in FIFO order for
     *        access; otherwise the order is unspecified.
     */
    public SynchronousQueue(boolean fair) {
        transferer = fair ? new TransferQueue() : new TransferStack();
    }

    /**
     * Adds the specified element to this queue, waiting if necessary for
     * another thread to receive it.
     *
     * @throws InterruptedException {@inheritDoc}
     * @throws NullPointerException {@inheritDoc}
     */
    public void put(E o) throws InterruptedException {
        if (o == null) throw new NullPointerException();
        if (transferer.transfer(o, false, 0) == null) {
            Thread.interrupted();
            throw new InterruptedException();
        }
    }

    /**
     * Inserts the specified element into this queue, waiting if necessary
     * up to the specified wait time for another thread to receive it.
     *
     * @return <tt>true</tt> if successful, or <tt>false</tt> if the
     *         specified waiting time elapses before a consumer appears.
     * @throws InterruptedException {@inheritDoc}
     * @throws NullPointerException {@inheritDoc}
     */
    public boolean offer(E o, long timeout, TimeUnit unit)
        throws InterruptedException {
        if (o == null) throw new NullPointerException();
        if (transferer.transfer(o, true, unit.toNanos(timeout)) != null)
            return true;
        if (!Thread.interrupted())
            return false;
        throw new InterruptedException();
    }

    /**
     * Inserts the specified element into this queue, if another thread is
     * waiting to receive it.
     *
     * @param e the element to add
     * @return <tt>true</tt> if the element was added to this queue, else
     *         <tt>false</tt>
     * @throws NullPointerException if the specified element is null
     */
    public boolean offer(E e) {
        if (e == null) throw new NullPointerException();
        return transferer.transfer(e, true, 0) != null;
    }

    /**
     * Retrieves and removes the head of this queue, waiting if necessary
     * for another thread to insert it.
     *
     * @return the head of this queue
     * @throws InterruptedException {@inheritDoc}
     */
    public E take() throws InterruptedException {
        Object e = transferer.transfer(null, false, 0);
        if (e != null)
            return (E)e;
        Thread.interrupted();
        throw new InterruptedException();
    }

    /**
     * Retrieves and removes the head of this queue, waiting
     * if necessary up to the specified wait time, for another thread
     * to insert it.
     *
     * @return the head of this queue, or <tt>null</tt> if the
     *         specified waiting time elapses before an element is present.
     * @throws InterruptedException {@inheritDoc}
     */
    public E poll(long timeout, TimeUnit unit) throws InterruptedException {
        Object e = transferer.transfer(null, true, unit.toNanos(timeout));
        if (e != null || !Thread.interrupted())
            return (E)e;
        throw new InterruptedException();
    }

    /**
     * Retrieves and removes the head of this queue, if another thread
     * is currently making an element available.
     *
     * @return the head of this queue, or <tt>null</tt> if no
     *         element is available.
     */
    public E poll() {
        return (E)transferer.transfer(null, true, 0);
    }

    /**
     * Always returns <tt>true</tt>.
     * A <tt>SynchronousQueue</tt> has no internal capacity.
     *
     * @return <tt>true</tt>
     */
    public boolean isEmpty() {
        return true;
    }

    /**
     * Always returns zero.
     * A <tt>SynchronousQueue</tt> has no internal capacity.
934 * 935 * @return zero. 936 */ 937 public int size() { 938 return 0; 939 } 940 941 /** 942 * Always returns zero. 943 * A <tt>SynchronousQueue</tt> has no internal capacity. 944 * 945 * @return zero. 946 */ 947 public int remainingCapacity() { 948 return 0; 949 } 950 951 /** 952 * Does nothing. 953 * A <tt>SynchronousQueue</tt> has no internal capacity. 954 */ 955 public void clear() { 956 } 957 958 /** 959 * Always returns <tt>false</tt>. 960 * A <tt>SynchronousQueue</tt> has no internal capacity. 961 * 962 * @param o the element 963 * @return <tt>false</tt> 964 */ 965 public boolean contains(Object o) { 966 return false; 967 } 968 969 /** 970 * Always returns <tt>false</tt>. 971 * A <tt>SynchronousQueue</tt> has no internal capacity. 972 * 973 * @param o the element to remove 974 * @return <tt>false</tt> 975 */ 976 public boolean remove(Object o) { 977 return false; 978 } 979 980 /** 981 * Returns <tt>false</tt> unless the given collection is empty. 982 * A <tt>SynchronousQueue</tt> has no internal capacity. 983 * 984 * @param c the collection 985 * @return <tt>false</tt> unless given collection is empty 986 */ 987 public boolean containsAll(Collection<?> c) { 988 return c.isEmpty(); 989 } 990 991 /** 992 * Always returns <tt>false</tt>. 993 * A <tt>SynchronousQueue</tt> has no internal capacity. 994 * 995 * @param c the collection 996 * @return <tt>false</tt> 997 */ 998 public boolean removeAll(Collection<?> c) { 999 return false; 1000 } 1001 1002 /** 1003 * Always returns <tt>false</tt>. 1004 * A <tt>SynchronousQueue</tt> has no internal capacity. 1005 * 1006 * @param c the collection 1007 * @return <tt>false</tt> 1008 */ 1009 public boolean retainAll(Collection<?> c) { 1010 return false; 1011 } 1012 1013 /** 1014 * Always returns <tt>null</tt>. 1015 * A <tt>SynchronousQueue</tt> does not return elements 1016 * unless actively waited on. 
1017 * 1018 * @return <tt>null</tt> 1019 */ 1020 public E peek() { 1021 return null; 1022 } 1023 1024 /** 1025 * Returns an empty iterator in which <tt>hasNext</tt> always returns 1026 * <tt>false</tt>. 1027 * 1028 * @return an empty iterator 1029 */ 1030 public Iterator<E> iterator() { 1031 return Collections.emptyIterator(); 1032 } 1033 1034 /** 1035 * Returns a zero-length array. 1036 * @return a zero-length array 1037 */ 1038 public Object[] toArray() { 1039 return new Object[0]; 1040 } 1041 1042 /** 1043 * Sets the zeroeth element of the specified array to <tt>null</tt> 1044 * (if the array has non-zero length) and returns it. 1045 * 1046 * @param a the array 1047 * @return the specified array 1048 * @throws NullPointerException if the specified array is null 1049 */ 1050 public <T> T[] toArray(T[] a) { 1051 if (a.length > 0) 1052 a[0] = null; 1053 return a; 1054 } 1055 1056 /** 1057 * @throws UnsupportedOperationException {@inheritDoc} 1058 * @throws ClassCastException {@inheritDoc} 1059 * @throws NullPointerException {@inheritDoc} 1060 * @throws IllegalArgumentException {@inheritDoc} 1061 */ 1062 public int drainTo(Collection<? super E> c) { 1063 if (c == null) 1064 throw new NullPointerException(); 1065 if (c == this) 1066 throw new IllegalArgumentException(); 1067 int n = 0; 1068 E e; 1069 while ( (e = poll()) != null) { 1070 c.add(e); 1071 ++n; 1072 } 1073 return n; 1074 } 1075 1076 /** 1077 * @throws UnsupportedOperationException {@inheritDoc} 1078 * @throws ClassCastException {@inheritDoc} 1079 * @throws NullPointerException {@inheritDoc} 1080 * @throws IllegalArgumentException {@inheritDoc} 1081 */ 1082 public int drainTo(Collection<? 
super E> c, int maxElements) { 1083 if (c == null) 1084 throw new NullPointerException(); 1085 if (c == this) 1086 throw new IllegalArgumentException(); 1087 int n = 0; 1088 E e; 1089 while (n < maxElements && (e = poll()) != null) { 1090 c.add(e); 1091 ++n; 1092 } 1093 return n; 1094 } 1095 1096 /* 1097 * To cope with serialization strategy in the 1.5 version of 1098 * SynchronousQueue, we declare some unused classes and fields 1099 * that exist solely to enable serializability across versions. 1100 * These fields are never used, so are initialized only if this 1101 * object is ever serialized or deserialized. 1102 */ 1103 1104 static class WaitQueue implements java.io.Serializable { } 1105 static class LifoWaitQueue extends WaitQueue { 1106 private static final long serialVersionUID = -3633113410248163686L; 1107 } 1108 static class FifoWaitQueue extends WaitQueue { 1109 private static final long serialVersionUID = -3623113410248163686L; 1110 } 1111 private ReentrantLock qlock; 1112 private WaitQueue waitingProducers; 1113 private WaitQueue waitingConsumers; 1114 1115 /** 1116 * Save the state to a stream (that is, serialize it). 
1117 * 1118 * @param s the stream 1119 */ 1120 private void writeObject(java.io.ObjectOutputStream s) 1121 throws java.io.IOException { 1122 boolean fair = transferer instanceof TransferQueue; 1123 if (fair) { 1124 qlock = new ReentrantLock(true); 1125 waitingProducers = new FifoWaitQueue(); 1126 waitingConsumers = new FifoWaitQueue(); 1127 } 1128 else { 1129 qlock = new ReentrantLock(); 1130 waitingProducers = new LifoWaitQueue(); 1131 waitingConsumers = new LifoWaitQueue(); 1132 } 1133 s.defaultWriteObject(); 1134 } 1135 1136 private void readObject(final java.io.ObjectInputStream s) 1137 throws java.io.IOException, ClassNotFoundException { 1138 s.defaultReadObject(); 1139 if (waitingProducers instanceof FifoWaitQueue) 1140 transferer = new TransferQueue(); 1141 else 1142 transferer = new TransferStack(); 1143 } 1144 1145 // Unsafe mechanics 1146 static long objectFieldOffset(sun.misc.Unsafe UNSAFE, 1147 String field, Class<?> klazz) { 1148 try { 1149 return UNSAFE.objectFieldOffset(klazz.getDeclaredField(field)); 1150 } catch (NoSuchFieldException e) { 1151 // Convert Exception to corresponding Error 1152 NoSuchFieldError error = new NoSuchFieldError(field); 1153 error.initCause(e); 1154 throw error; 1155 } 1156 } 1157 1158 }
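The zero-capacity semantics documented above (a non-blocking <tt>offer</tt> failing when no consumer waits, <tt>poll</tt>/<tt>peek</tt> returning <tt>null</tt>, <tt>size</tt> always zero, and a blocking handoff only when producer and consumer rendezvous) can be exercised with a small standalone driver. This is a sketch, not part of this source file; <tt>SynchronousQueueDemo</tt> is a hypothetical class name chosen for illustration:

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;

public class SynchronousQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // Fair mode: waiting threads are granted access in FIFO order.
        SynchronousQueue<String> q = new SynchronousQueue<>(true);

        // No consumer is waiting, so a non-blocking offer fails and the
        // Collection views behave as an empty collection.
        System.out.println(q.offer("lost"));   // false
        System.out.println(q.poll());          // null
        System.out.println(q.peek());          // null
        System.out.println(q.size());          // 0

        // A real handoff: put() blocks until another thread takes.
        Thread producer = new Thread(() -> {
            try {
                q.put("handoff");
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        // The timed poll rendezvouses with the blocked producer.
        String got = q.poll(5, TimeUnit.SECONDS);
        producer.join();
        System.out.println(got);               // handoff
    }
}
```

Note that the failed <tt>offer</tt> simply discards the element: with no internal capacity there is nowhere to park it, which is why handoff designs pair each insert with a concurrently waiting remover.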