1 /* 2 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. 3 * 4 * This code is free software; you can redistribute it and/or modify it 5 * under the terms of the GNU General Public License version 2 only, as 6 * published by the Free Software Foundation. Oracle designates this 7 * particular file as subject to the "Classpath" exception as provided 8 * by Oracle in the LICENSE file that accompanied this code. 9 * 10 * This code is distributed in the hope that it will be useful, but WITHOUT 11 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License 13 * version 2 for more details (a copy is included in the LICENSE file that 14 * accompanied this code). 15 * 16 * You should have received a copy of the GNU General Public License version 17 * 2 along with this work; if not, write to the Free Software Foundation, 18 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. 19 * 20 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA 21 * or visit www.oracle.com if you need additional information or have any 22 * questions. 23 */ 24 25 /* 26 * This file is available under and governed by the GNU General Public 27 * License version 2 only, as published by the Free Software Foundation. 28 * However, the following notice accompanied the original version of this 29 * file: 30 * 31 * Written by Doug Lea with assistance from members of JCP JSR-166 32 * Expert Group and released to the public domain, as explained at 33 * http://creativecommons.org/publicdomain/zero/1.0/ 34 */ 35 36 package java.util.concurrent; 37 38 import java.util.ArrayList; 39 import java.util.ConcurrentModificationException; 40 import java.util.HashSet; 41 import java.util.Iterator; 42 import java.util.List; 43 import java.util.concurrent.atomic.AtomicInteger; 44 import java.util.concurrent.locks.AbstractQueuedSynchronizer; 45 import java.util.concurrent.locks.Condition; 46 import java.util.concurrent.locks.ReentrantLock; 47 48 /** 49 * An {@link ExecutorService} that executes each submitted task using 50 * one of possibly several pooled threads, normally configured 51 * using {@link Executors} factory methods. 52 * 53 * <p>Thread pools address two different problems: they usually 54 * provide improved performance when executing large numbers of 55 * asynchronous tasks, due to reduced per-task invocation overhead, 56 * and they provide a means of bounding and managing the resources, 57 * including threads, consumed when executing a collection of tasks. 58 * Each {@code ThreadPoolExecutor} also maintains some basic 59 * statistics, such as the number of completed tasks. 60 * 61 * <p>To be useful across a wide range of contexts, this class 62 * provides many adjustable parameters and extensibility 63 * hooks. However, programmers are urged to use the more convenient 64 * {@link Executors} factory methods {@link 65 * Executors#newCachedThreadPool} (unbounded thread pool, with 66 * automatic thread reclamation), {@link Executors#newFixedThreadPool} 67 * (fixed size thread pool) and {@link 68 * Executors#newSingleThreadExecutor} (single background thread), that 69 * preconfigure settings for the most common usage 70 * scenarios. Otherwise, use the following guide when manually 71 * configuring and tuning this class: 72 * 73 * <dl> 74 * 75 * <dt>Core and maximum pool sizes</dt> 76 * 77 * <dd>A {@code ThreadPoolExecutor} will automatically adjust the 78 * pool size (see {@link #getPoolSize}) 79 * according to the bounds set by 80 * corePoolSize (see {@link #getCorePoolSize}) and 81 * maximumPoolSize (see {@link #getMaximumPoolSize}). 82 * 83 * When a new task is submitted in method {@link #execute(Runnable)}, 84 * if fewer than corePoolSize threads are running, a new thread is 85 * created to handle the request, even if other worker threads are 86 * idle. Else if fewer than maximumPoolSize threads are running, a 87 * new thread will be created to handle the request only if the queue 88 * is full. By setting corePoolSize and maximumPoolSize the same, you 89 * create a fixed-size thread pool. By setting maximumPoolSize to an 90 * essentially unbounded value such as {@code Integer.MAX_VALUE}, you 91 * allow the pool to accommodate an arbitrary number of concurrent 92 * tasks. Most typically, core and maximum pool sizes are set only 93 * upon construction, but they may also be changed dynamically using 94 * {@link #setCorePoolSize} and {@link #setMaximumPoolSize}. </dd> 95 * 96 * <dt>On-demand construction</dt> 97 * 98 * <dd>By default, even core threads are initially created and 99 * started only when new tasks arrive, but this can be overridden 100 * dynamically using method {@link #prestartCoreThread} or {@link 101 * #prestartAllCoreThreads}. You probably want to prestart threads if 102 * you construct the pool with a non-empty queue. </dd> 103 * 104 * <dt>Creating new threads</dt> 105 * 106 * <dd>New threads are created using a {@link ThreadFactory}. If not 107 * otherwise specified, a {@link Executors#defaultThreadFactory} is 108 * used, that creates threads to all be in the same {@link 109 * ThreadGroup} and with the same {@code NORM_PRIORITY} priority and 110 * non-daemon status. By supplying a different ThreadFactory, you can 111 * alter the thread's name, thread group, priority, daemon status, 112 * etc. If a {@code ThreadFactory} fails to create a thread when asked 113 * by returning null from {@code newThread}, the executor will 114 * continue, but might not be able to execute any tasks. Threads 115 * should possess the "modifyThread" {@code RuntimePermission}. If 116 * worker threads or other threads using the pool do not possess this 117 * permission, service may be degraded: configuration changes may not 118 * take effect in a timely manner, and a shutdown pool may remain in a 119 * state in which termination is possible but not completed.</dd> 120 * 121 * <dt>Keep-alive times</dt> 122 * 123 * <dd>If the pool currently has more than corePoolSize threads, 124 * excess threads will be terminated if they have been idle for more 125 * than the keepAliveTime (see {@link #getKeepAliveTime(TimeUnit)}). 126 * This provides a means of reducing resource consumption when the 127 * pool is not being actively used. If the pool becomes more active 128 * later, new threads will be constructed. This parameter can also be 129 * changed dynamically using method {@link #setKeepAliveTime(long, 130 * TimeUnit)}. Using a value of {@code Long.MAX_VALUE} {@link 131 * TimeUnit#NANOSECONDS} effectively disables idle threads from ever 132 * terminating prior to shut down. By default, the keep-alive policy 133 * applies only when there are more than corePoolSize threads, but 134 * method {@link #allowCoreThreadTimeOut(boolean)} can be used to 135 * apply this time-out policy to core threads as well, so long as the 136 * keepAliveTime value is non-zero. </dd> 137 * 138 * <dt>Queuing</dt> 139 * 140 * <dd>Any {@link BlockingQueue} may be used to transfer and hold 141 * submitted tasks. The use of this queue interacts with pool sizing: 142 * 143 * <ul> 144 * 145 * <li>If fewer than corePoolSize threads are running, the Executor 146 * always prefers adding a new thread 147 * rather than queuing. 148 * 149 * <li>If corePoolSize or more threads are running, the Executor 150 * always prefers queuing a request rather than adding a new 151 * thread. 152 * 153 * <li>If a request cannot be queued, a new thread is created unless 154 * this would exceed maximumPoolSize, in which case, the task will be 155 * rejected. 156 * 157 * </ul> 158 * 159 * There are three general strategies for queuing: 160 * <ol> 161 * 162 * <li><em> Direct handoffs.</em> A good default choice for a work 163 * queue is a {@link SynchronousQueue} that hands off tasks to threads 164 * without otherwise holding them. Here, an attempt to queue a task 165 * will fail if no threads are immediately available to run it, so a 166 * new thread will be constructed. This policy avoids lockups when 167 * handling sets of requests that might have internal dependencies. 168 * Direct handoffs generally require unbounded maximumPoolSizes to 169 * avoid rejection of new submitted tasks. This in turn admits the 170 * possibility of unbounded thread growth when commands continue to 171 * arrive on average faster than they can be processed. 172 * 173 * <li><em> Unbounded queues.</em> Using an unbounded queue (for 174 * example a {@link LinkedBlockingQueue} without a predefined 175 * capacity) will cause new tasks to wait in the queue when all 176 * corePoolSize threads are busy. Thus, no more than corePoolSize 177 * threads will ever be created. (And the value of the maximumPoolSize 178 * therefore doesn't have any effect.) This may be appropriate when 179 * each task is completely independent of others, so tasks cannot 180 * affect each others execution; for example, in a web page server. 181 * While this style of queuing can be useful in smoothing out 182 * transient bursts of requests, it admits the possibility of 183 * unbounded work queue growth when commands continue to arrive on 184 * average faster than they can be processed. 185 * 186 * <li><em>Bounded queues.</em> A bounded queue (for example, an 187 * {@link ArrayBlockingQueue}) helps prevent resource exhaustion when 188 * used with finite maximumPoolSizes, but can be more difficult to 189 * tune and control. Queue sizes and maximum pool sizes may be traded 190 * off for each other: Using large queues and small pools minimizes 191 * CPU usage, OS resources, and context-switching overhead, but can 192 * lead to artificially low throughput. If tasks frequently block (for 193 * example if they are I/O bound), a system may be able to schedule 194 * time for more threads than you otherwise allow. Use of small queues 195 * generally requires larger pool sizes, which keeps CPUs busier but 196 * may encounter unacceptable scheduling overhead, which also 197 * decreases throughput. 198 * 199 * </ol> 200 * 201 * </dd> 202 * 203 * <dt>Rejected tasks</dt> 204 * 205 * <dd>New tasks submitted in method {@link #execute(Runnable)} will be 206 * <em>rejected</em> when the Executor has been shut down, and also when 207 * the Executor uses finite bounds for both maximum threads and work queue 208 * capacity, and is saturated. In either case, the {@code execute} method 209 * invokes the {@link 210 * RejectedExecutionHandler#rejectedExecution(Runnable, ThreadPoolExecutor)} 211 * method of its {@link RejectedExecutionHandler}. Four predefined handler 212 * policies are provided: 213 * 214 * <ol> 215 * 216 * <li>In the default {@link ThreadPoolExecutor.AbortPolicy}, the handler 217 * throws a runtime {@link RejectedExecutionException} upon rejection. 218 * 219 * <li>In {@link ThreadPoolExecutor.CallerRunsPolicy}, the thread 220 * that invokes {@code execute} itself runs the task. This provides a 221 * simple feedback control mechanism that will slow down the rate that 222 * new tasks are submitted. 223 * 224 * <li>In {@link ThreadPoolExecutor.DiscardPolicy}, a task that cannot 225 * be executed is simply dropped. This policy is designed only for 226 * those rare cases in which task completion is never relied upon. 227 * 228 * <li>In {@link ThreadPoolExecutor.DiscardOldestPolicy}, if the 229 * executor is not shut down, the task at the head of the work queue 230 * is dropped, and then execution is retried (which can fail again, 231 * causing this to be repeated.) This policy is rarely acceptable. In 232 * nearly all cases, you should also cancel the task to cause an 233 * exception in any component waiting for its completion, and/or log 234 * the failure, as illustrated in {@link 235 * ThreadPoolExecutor.DiscardOldestPolicy} documentation. 236 * 237 * </ol> 238 * 239 * It is possible to define and use other kinds of {@link 240 * RejectedExecutionHandler} classes. Doing so requires some care 241 * especially when policies are designed to work only under particular 242 * capacity or queuing policies. </dd> 243 * 244 * <dt>Hook methods</dt> 245 * 246 * <dd>This class provides {@code protected} overridable 247 * {@link #beforeExecute(Thread, Runnable)} and 248 * {@link #afterExecute(Runnable, Throwable)} methods that are called 249 * before and after execution of each task. These can be used to 250 * manipulate the execution environment; for example, reinitializing 251 * ThreadLocals, gathering statistics, or adding log entries. 252 * Additionally, method {@link #terminated} can be overridden to perform 253 * any special processing that needs to be done once the Executor has 254 * fully terminated. 255 * 256 * <p>If hook, callback, or BlockingQueue methods throw exceptions, 257 * internal worker threads may in turn fail, abruptly terminate, and 258 * possibly be replaced.</dd> 259 * 260 * <dt>Queue maintenance</dt> 261 * 262 * <dd>Method {@link #getQueue()} allows access to the work queue 263 * for purposes of monitoring and debugging. Use of this method for 264 * any other purpose is strongly discouraged. Two supplied methods, 265 * {@link #remove(Runnable)} and {@link #purge} are available to 266 * assist in storage reclamation when large numbers of queued tasks 267 * become cancelled.</dd> 268 * 269 * <dt>Reclamation</dt> 270 * 271 * <dd>A pool that is no longer referenced in a program <em>AND</em> 272 * has no remaining threads may be reclaimed (garbage collected) 273 * without being explicitly shutdown. You can configure a pool to 274 * allow all unused threads to eventually die by setting appropriate 275 * keep-alive times, using a lower bound of zero core threads and/or 276 * setting {@link #allowCoreThreadTimeOut(boolean)}. </dd> 277 * 278 * </dl> 279 * 280 * <p><b>Extension example.</b> Most extensions of this class 281 * override one or more of the protected hook methods. For example, 282 * here is a subclass that adds a simple pause/resume feature: 283 * 284 * <pre> {@code 285 * class PausableThreadPoolExecutor extends ThreadPoolExecutor { 286 * private boolean isPaused; 287 * private ReentrantLock pauseLock = new ReentrantLock(); 288 * private Condition unpaused = pauseLock.newCondition(); 289 * 290 * public PausableThreadPoolExecutor(...) { super(...); } 291 * 292 * protected void beforeExecute(Thread t, Runnable r) { 293 * super.beforeExecute(t, r); 294 * pauseLock.lock(); 295 * try { 296 * while (isPaused) unpaused.await(); 297 * } catch (InterruptedException ie) { 298 * t.interrupt(); 299 * } finally { 300 * pauseLock.unlock(); 301 * } 302 * } 303 * 304 * public void pause() { 305 * pauseLock.lock(); 306 * try { 307 * isPaused = true; 308 * } finally { 309 * pauseLock.unlock(); 310 * } 311 * } 312 * 313 * public void resume() { 314 * pauseLock.lock(); 315 * try { 316 * isPaused = false; 317 * unpaused.signalAll(); 318 * } finally { 319 * pauseLock.unlock(); 320 * } 321 * } 322 * }}</pre> 323 * 324 * @since 1.5 325 * @author Doug Lea 326 */ 327 public class ThreadPoolExecutor extends AbstractExecutorService { 328 /** 329 * The main pool control state, ctl, is an atomic integer packing 330 * two conceptual fields 331 * workerCount, indicating the effective number of threads 332 * runState, indicating whether running, shutting down etc 333 * 334 * In order to pack them into one int, we limit workerCount to 335 * (2^29)-1 (about 500 million) threads rather than (2^31)-1 (2 336 * billion) otherwise representable. If this is ever an issue in 337 * the future, the variable can be changed to be an AtomicLong, 338 * and the shift/mask constants below adjusted. But until the need 339 * arises, this code is a bit faster and simpler using an int. 340 * 341 * The workerCount is the number of workers that have been 342 * permitted to start and not permitted to stop. The value may be 343 * transiently different from the actual number of live threads, 344 * for example when a ThreadFactory fails to create a thread when 345 * asked, and when exiting threads are still performing 346 * bookkeeping before terminating. The user-visible pool size is 347 * reported as the current size of the workers set. 348 * 349 * The runState provides the main lifecycle control, taking on values: 350 * 351 * RUNNING: Accept new tasks and process queued tasks 352 * SHUTDOWN: Don't accept new tasks, but process queued tasks 353 * STOP: Don't accept new tasks, don't process queued tasks, 354 * and interrupt in-progress tasks 355 * TIDYING: All tasks have terminated, workerCount is zero, 356 * the thread transitioning to state TIDYING 357 * will run the terminated() hook method 358 * TERMINATED: terminated() has completed 359 * 360 * The numerical order among these values matters, to allow 361 * ordered comparisons. The runState monotonically increases over 362 * time, but need not hit each state. The transitions are: 363 * 364 * RUNNING -> SHUTDOWN 365 * On invocation of shutdown() 366 * (RUNNING or SHUTDOWN) -> STOP 367 * On invocation of shutdownNow() 368 * SHUTDOWN -> TIDYING 369 * When both queue and pool are empty 370 * STOP -> TIDYING 371 * When pool is empty 372 * TIDYING -> TERMINATED 373 * When the terminated() hook method has completed 374 * 375 * Threads waiting in awaitTermination() will return when the 376 * state reaches TERMINATED. 377 * 378 * Detecting the transition from SHUTDOWN to TIDYING is less 379 * straightforward than you'd like because the queue may become 380 * empty after non-empty and vice versa during SHUTDOWN state, but 381 * we can only terminate if, after seeing that it is empty, we see 382 * that workerCount is 0 (which sometimes entails a recheck -- see 383 * below). 384 */ 385 private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0)); 386 private static final int COUNT_BITS = Integer.SIZE - 3; 387 private static final int COUNT_MASK = (1 << COUNT_BITS) - 1; 388 389 // runState is stored in the high-order bits 390 private static final int RUNNING = -1 << COUNT_BITS; 391 private static final int SHUTDOWN = 0 << COUNT_BITS; 392 private static final int STOP = 1 << COUNT_BITS; 393 private static final int TIDYING = 2 << COUNT_BITS; 394 private static final int TERMINATED = 3 << COUNT_BITS; 395 396 // Packing and unpacking ctl 397 private static int runStateOf(int c) { return c & ~COUNT_MASK; } 398 private static int workerCountOf(int c) { return c & COUNT_MASK; } 399 private static int ctlOf(int rs, int wc) { return rs | wc; } 400 401 /* 402 * Bit field accessors that don't require unpacking ctl. 403 * These depend on the bit layout and on workerCount being never negative. 404 */ 405 406 private static boolean runStateLessThan(int c, int s) { 407 return c < s; 408 } 409 410 private static boolean runStateAtLeast(int c, int s) { 411 return c >= s; 412 } 413 414 private static boolean isRunning(int c) { 415 return c < SHUTDOWN; 416 } 417 418 /** 419 * Attempts to CAS-increment the workerCount field of ctl. 420 */ 421 private boolean compareAndIncrementWorkerCount(int expect) { 422 return ctl.compareAndSet(expect, expect + 1); 423 } 424 425 /** 426 * Attempts to CAS-decrement the workerCount field of ctl. 427 */ 428 private boolean compareAndDecrementWorkerCount(int expect) { 429 return ctl.compareAndSet(expect, expect - 1); 430 } 431 432 /** 433 * Decrements the workerCount field of ctl. This is called only on 434 * abrupt termination of a thread (see processWorkerExit). Other 435 * decrements are performed within getTask. 436 */ 437 private void decrementWorkerCount() { 438 ctl.addAndGet(-1); 439 } 440 441 /** 442 * The queue used for holding tasks and handing off to worker 443 * threads. We do not require that workQueue.poll() returning 444 * null necessarily means that workQueue.isEmpty(), so rely 445 * solely on isEmpty to see if the queue is empty (which we must 446 * do for example when deciding whether to transition from 447 * SHUTDOWN to TIDYING). This accommodates special-purpose 448 * queues such as DelayQueues for which poll() is allowed to 449 * return null even if it may later return non-null when delays 450 * expire. 451 */ 452 private final BlockingQueue<Runnable> workQueue; 453 454 /** 455 * Lock held on access to workers set and related bookkeeping. 456 * While we could use a concurrent set of some sort, it turns out 457 * to be generally preferable to use a lock. Among the reasons is 458 * that this serializes interruptIdleWorkers, which avoids 459 * unnecessary interrupt storms, especially during shutdown. 460 * Otherwise exiting threads would concurrently interrupt those 461 * that have not yet interrupted. It also simplifies some of the 462 * associated statistics bookkeeping of largestPoolSize etc. We 463 * also hold mainLock on shutdown and shutdownNow, for the sake of 464 * ensuring workers set is stable while separately checking 465 * permission to interrupt and actually interrupting. 466 */ 467 private final ReentrantLock mainLock = new ReentrantLock(); 468 469 /** 470 * Set containing all worker threads in pool. Accessed only when 471 * holding mainLock. 472 */ 473 private final HashSet<Worker> workers = new HashSet<>(); 474 475 /** 476 * Wait condition to support awaitTermination. 477 */ 478 private final Condition termination = mainLock.newCondition(); 479 480 /** 481 * Tracks largest attained pool size. Accessed only under 482 * mainLock. 483 */ 484 private int largestPoolSize; 485 486 /** 487 * Counter for completed tasks. Updated only on termination of 488 * worker threads. Accessed only under mainLock. 489 */ 490 private long completedTaskCount; 491 492 /* 493 * All user control parameters are declared as volatiles so that 494 * ongoing actions are based on freshest values, but without need 495 * for locking, since no internal invariants depend on them 496 * changing synchronously with respect to other actions. 497 */ 498 499 /** 500 * Factory for new threads. All threads are created using this 501 * factory (via method addWorker). All callers must be prepared 502 * for addWorker to fail, which may reflect a system or user's 503 * policy limiting the number of threads. Even though it is not 504 * treated as an error, failure to create threads may result in 505 * new tasks being rejected or existing ones remaining stuck in 506 * the queue. 507 * 508 * We go further and preserve pool invariants even in the face of 509 * errors such as OutOfMemoryError, that might be thrown while 510 * trying to create threads. Such errors are rather common due to 511 * the need to allocate a native stack in Thread.start, and users 512 * will want to perform clean pool shutdown to clean up. There 513 * will likely be enough memory available for the cleanup code to 514 * complete without encountering yet another OutOfMemoryError. 515 */ 516 private volatile ThreadFactory threadFactory; 517 518 /** 519 * Handler called when saturated or shutdown in execute. 520 */ 521 private volatile RejectedExecutionHandler handler; 522 523 /** 524 * Timeout in nanoseconds for idle threads waiting for work. 525 * Threads use this timeout when there are more than corePoolSize 526 * present or if allowCoreThreadTimeOut. Otherwise they wait 527 * forever for new work. 528 */ 529 private volatile long keepAliveTime; 530 531 /** 532 * If false (default), core threads stay alive even when idle. 533 * If true, core threads use keepAliveTime to time out waiting 534 * for work. 535 */ 536 private volatile boolean allowCoreThreadTimeOut; 537 538 /** 539 * Core pool size is the minimum number of workers to keep alive 540 * (and not allow to time out etc) unless allowCoreThreadTimeOut 541 * is set, in which case the minimum is zero. 542 * 543 * Since the worker count is actually stored in COUNT_BITS bits, 544 * the effective limit is {@code corePoolSize & COUNT_MASK}. 545 */ 546 private volatile int corePoolSize; 547 548 /** 549 * Maximum pool size. 550 * 551 * Since the worker count is actually stored in COUNT_BITS bits, 552 * the effective limit is {@code maximumPoolSize & COUNT_MASK}. 553 */ 554 private volatile int maximumPoolSize; 555 556 /** 557 * The default rejected execution handler. 558 */ 559 private static final RejectedExecutionHandler defaultHandler = 560 new AbortPolicy(); 561 562 /** 563 * Permission required for callers of shutdown and shutdownNow. 564 * We additionally require (see checkShutdownAccess) that callers 565 * have permission to actually interrupt threads in the worker set 566 * (as governed by Thread.interrupt, which relies on 567 * ThreadGroup.checkAccess, which in turn relies on 568 * SecurityManager.checkAccess). Shutdowns are attempted only if 569 * these checks pass. 570 * 571 * All actual invocations of Thread.interrupt (see 572 * interruptIdleWorkers and interruptWorkers) ignore 573 * SecurityExceptions, meaning that the attempted interrupts 574 * silently fail. In the case of shutdown, they should not fail 575 * unless the SecurityManager has inconsistent policies, sometimes 576 * allowing access to a thread and sometimes not. In such cases, 577 * failure to actually interrupt threads may disable or delay full 578 * termination. Other uses of interruptIdleWorkers are advisory, 579 * and failure to actually interrupt will merely delay response to 580 * configuration changes so is not handled exceptionally. 581 */ 582 private static final RuntimePermission shutdownPerm = 583 new RuntimePermission("modifyThread"); 584 585 /** 586 * Class Worker mainly maintains interrupt control state for 587 * threads running tasks, along with other minor bookkeeping. 588 * This class opportunistically extends AbstractQueuedSynchronizer 589 * to simplify acquiring and releasing a lock surrounding each 590 * task execution. This protects against interrupts that are 591 * intended to wake up a worker thread waiting for a task from 592 * instead interrupting a task being run. We implement a simple 593 * non-reentrant mutual exclusion lock rather than use 594 * ReentrantLock because we do not want worker tasks to be able to 595 * reacquire the lock when they invoke pool control methods like 596 * setCorePoolSize. Additionally, to suppress interrupts until 597 * the thread actually starts running tasks, we initialize lock 598 * state to a negative value, and clear it upon start (in 599 * runWorker). 600 */ 601 private final class Worker 602 extends AbstractQueuedSynchronizer 603 implements Runnable 604 { 605 /** 606 * This class will never be serialized, but we provide a 607 * serialVersionUID to suppress a javac warning. 608 */ 609 private static final long serialVersionUID = 6138294804551838833L; 610 611 /** Thread this worker is running in. Null if factory fails. */ 612 @SuppressWarnings("serial") // Unlikely to be serializable 613 final Thread thread; 614 /** Initial task to run. Possibly null. */ 615 @SuppressWarnings("serial") // Not statically typed as Serializable 616 Runnable firstTask; 617 /** Per-thread task counter */ 618 volatile long completedTasks; 619 620 // TODO: switch to AbstractQueuedLongSynchronizer and move 621 // completedTasks into the lock word. 622 623 /** 624 * Creates with given first task and thread from ThreadFactory. 625 * @param firstTask the first task (null if none) 626 */ 627 Worker(Runnable firstTask) { 628 setState(-1); // inhibit interrupts until runWorker 629 this.firstTask = firstTask; 630 this.thread = getThreadFactory().newThread(this); 631 } 632 633 /** Delegates main run loop to outer runWorker. */ 634 public void run() { 635 runWorker(this); 636 } 637 638 // Lock methods 639 // 640 // The value 0 represents the unlocked state. 641 // The value 1 represents the locked state. 642 643 protected boolean isHeldExclusively() { 644 return getState() != 0; 645 } 646 647 protected boolean tryAcquire(int unused) { 648 if (compareAndSetState(0, 1)) { 649 setExclusiveOwnerThread(Thread.currentThread()); 650 return true; 651 } 652 return false; 653 } 654 655 protected boolean tryRelease(int unused) { 656 setExclusiveOwnerThread(null); 657 setState(0); 658 return true; 659 } 660 661 public void lock() { acquire(1); } 662 public boolean tryLock() { return tryAcquire(1); } 663 public void unlock() { release(1); } 664 public boolean isLocked() { return isHeldExclusively(); } 665 666 void interruptIfStarted() { 667 Thread t; 668 if (getState() >= 0 && (t = thread) != null && !t.isInterrupted()) { 669 try { 670 t.interrupt(); 671 } catch (SecurityException ignore) { 672 } 673 } 674 } 675 } 676 677 /* 678 * Methods for setting control state 679 */ 680 681 /** 682 * Transitions runState to given target, or leaves it alone if 683 * already at least the given target. 684 * 685 * @param targetState the desired state, either SHUTDOWN or STOP 686 * (but not TIDYING or TERMINATED -- use tryTerminate for that) 687 */ 688 private void advanceRunState(int targetState) { 689 // assert targetState == SHUTDOWN || targetState == STOP; 690 for (;;) { 691 int c = ctl.get(); 692 if (runStateAtLeast(c, targetState) || 693 ctl.compareAndSet(c, ctlOf(targetState, workerCountOf(c)))) 694 break; 695 } 696 } 697 698 /** 699 * Transitions to TERMINATED state if either (SHUTDOWN and pool 700 * and queue empty) or (STOP and pool empty). If otherwise 701 * eligible to terminate but workerCount is nonzero, interrupts an 702 * idle worker to ensure that shutdown signals propagate. This 703 * method must be called following any action that might make 704 * termination possible -- reducing worker count or removing tasks 705 * from the queue during shutdown. The method is non-private to 706 * allow access from ScheduledThreadPoolExecutor. 707 */ 708 final void tryTerminate() { 709 for (;;) { 710 int c = ctl.get(); 711 if (isRunning(c) || 712 runStateAtLeast(c, TIDYING) || 713 (runStateLessThan(c, STOP) && ! workQueue.isEmpty())) 714 return; 715 if (workerCountOf(c) != 0) { // Eligible to terminate 716 interruptIdleWorkers(ONLY_ONE); 717 return; 718 } 719 720 final ReentrantLock mainLock = this.mainLock; 721 mainLock.lock(); 722 try { 723 if (ctl.compareAndSet(c, ctlOf(TIDYING, 0))) { 724 try { 725 terminated(); 726 } finally { 727 ctl.set(ctlOf(TERMINATED, 0)); 728 termination.signalAll(); 729 } 730 return; 731 } 732 } finally { 733 mainLock.unlock(); 734 } 735 // else retry on failed CAS 736 } 737 } 738 739 /* 740 * Methods for controlling interrupts to worker threads. 741 */ 742 743 /** 744 * If there is a security manager, makes sure caller has 745 * permission to shut down threads in general (see shutdownPerm). 746 * If this passes, additionally makes sure the caller is allowed 747 * to interrupt each worker thread. This might not be true even if 748 * first check passed, if the SecurityManager treats some threads 749 * specially. 750 */ 751 private void checkShutdownAccess() { 752 // assert mainLock.isHeldByCurrentThread(); 753 SecurityManager security = System.getSecurityManager(); 754 if (security != null) { 755 security.checkPermission(shutdownPerm); 756 for (Worker w : workers) 757 security.checkAccess(w.thread); 758 } 759 } 760 761 /** 762 * Interrupts all threads, even if active. Ignores SecurityExceptions 763 * (in which case some threads may remain uninterrupted). 764 */ 765 private void interruptWorkers() { 766 // assert mainLock.isHeldByCurrentThread(); 767 for (Worker w : workers) 768 w.interruptIfStarted(); 769 } 770 771 /** 772 * Interrupts threads that might be waiting for tasks (as 773 * indicated by not being locked) so they can check for 774 * termination or configuration changes. Ignores 775 * SecurityExceptions (in which case some threads may remain 776 * uninterrupted). 777 * 778 * @param onlyOne If true, interrupt at most one worker. This is 779 * called only from tryTerminate when termination is otherwise 780 * enabled but there are still other workers. In this case, at 781 * most one waiting worker is interrupted to propagate shutdown 782 * signals in case all threads are currently waiting. 783 * Interrupting any arbitrary thread ensures that newly arriving 784 * workers since shutdown began will also eventually exit. 785 * To guarantee eventual termination, it suffices to always 786 * interrupt only one idle worker, but shutdown() interrupts all 787 * idle workers so that redundant workers exit promptly, not 788 * waiting for a straggler task to finish. 789 */ 790 private void interruptIdleWorkers(boolean onlyOne) { 791 final ReentrantLock mainLock = this.mainLock; 792 mainLock.lock(); 793 try { 794 for (Worker w : workers) { 795 Thread t = w.thread; 796 if (!t.isInterrupted() && w.tryLock()) { 797 try { 798 t.interrupt(); 799 } catch (SecurityException ignore) { 800 } finally { 801 w.unlock(); 802 } 803 } 804 if (onlyOne) 805 break; 806 } 807 } finally { 808 mainLock.unlock(); 809 } 810 } 811 812 /** 813 * Common form of interruptIdleWorkers, to avoid having to 814 * remember what the boolean argument means. 815 */ 816 private void interruptIdleWorkers() { 817 interruptIdleWorkers(false); 818 } 819 820 private static final boolean ONLY_ONE = true; 821 822 /* 823 * Misc utilities, most of which are also exported to 824 * ScheduledThreadPoolExecutor 825 */ 826 827 /** 828 * Invokes the rejected execution handler for the given command. 829 * Package-protected for use by ScheduledThreadPoolExecutor. 830 */ 831 final void reject(Runnable command) { 832 handler.rejectedExecution(command, this); 833 } 834 835 /** 836 * Performs any further cleanup following run state transition on 837 * invocation of shutdown. A no-op here, but used by 838 * ScheduledThreadPoolExecutor to cancel delayed tasks. 839 */ 840 void onShutdown() { 841 } 842 843 /** 844 * Drains the task queue into a new list, normally using 845 * drainTo. But if the queue is a DelayQueue or any other kind of 846 * queue for which poll or drainTo may fail to remove some 847 * elements, it deletes them one by one. 848 */ 849 private List<Runnable> drainQueue() { 850 BlockingQueue<Runnable> q = workQueue; 851 ArrayList<Runnable> taskList = new ArrayList<>(); 852 q.drainTo(taskList); 853 if (!q.isEmpty()) { 854 for (Runnable r : q.toArray(new Runnable[0])) { 855 if (q.remove(r)) 856 taskList.add(r); 857 } 858 } 859 return taskList; 860 } 861 862 /* 863 * Methods for creating, running and cleaning up after workers 864 */ 865 866 /** 867 * Checks if a new worker can be added with respect to current 868 * pool state and the given bound (either core or maximum). If so, 869 * the worker count is adjusted accordingly, and, if possible, a 870 * new worker is created and started, running firstTask as its 871 * first task. This method returns false if the pool is stopped or 872 * eligible to shut down. It also returns false if the thread 873 * factory fails to create a thread when asked. If the thread 874 * creation fails, either due to the thread factory returning 875 * null, or due to an exception (typically OutOfMemoryError in 876 * Thread.start()), we roll back cleanly. 877 * 878 * @param firstTask the task the new thread should run first (or 879 * null if none). Workers are created with an initial first task 880 * (in method execute()) to bypass queuing when there are fewer 881 * than corePoolSize threads (in which case we always start one), 882 * or when the queue is full (in which case we must bypass queue). 883 * Initially idle threads are usually created via 884 * prestartCoreThread or to replace other dying workers. 885 * 886 * @param core if true use corePoolSize as bound, else 887 * maximumPoolSize. (A boolean indicator is used here rather than a 888 * value to ensure reads of fresh values after checking other pool 889 * state). 890 * @return true if successful 891 */ 892 private boolean addWorker(Runnable firstTask, boolean core) { 893 retry: 894 for (int c = ctl.get();;) { 895 // Check if queue empty only if necessary. 896 if (runStateAtLeast(c, SHUTDOWN) 897 && (runStateAtLeast(c, STOP) 898 || firstTask != null 899 || workQueue.isEmpty())) 900 return false; 901 902 for (;;) { 903 if (workerCountOf(c) 904 >= ((core ? corePoolSize : maximumPoolSize) & COUNT_MASK)) 905 return false; 906 if (compareAndIncrementWorkerCount(c)) 907 break retry; 908 c = ctl.get(); // Re-read ctl 909 if (runStateAtLeast(c, SHUTDOWN)) 910 continue retry; 911 // else CAS failed due to workerCount change; retry inner loop 912 } 913 } 914 915 boolean workerStarted = false; 916 boolean workerAdded = false; 917 Worker w = null; 918 try { 919 w = new Worker(firstTask); 920 final Thread t = w.thread; 921 if (t != null) { 922 final ReentrantLock mainLock = this.mainLock; 923 mainLock.lock(); 924 try { 925 // Recheck while holding lock. 926 // Back out on ThreadFactory failure or if 927 // shut down before lock acquired. 928 int c = ctl.get(); 929 930 if (isRunning(c) || 931 (runStateLessThan(c, STOP) && firstTask == null)) { 932 if (t.getState() != Thread.State.NEW) 933 throw new IllegalThreadStateException(); 934 workers.add(w); 935 workerAdded = true; 936 int s = workers.size(); 937 if (s > largestPoolSize) 938 largestPoolSize = s; 939 } 940 } finally { 941 mainLock.unlock(); 942 } 943 if (workerAdded) { 944 t.start(); 945 workerStarted = true; 946 } 947 } 948 } finally { 949 if (! workerStarted) 950 addWorkerFailed(w); 951 } 952 return workerStarted; 953 } 954 955 /** 956 * Rolls back the worker thread creation. 957 * - removes worker from workers, if present 958 * - decrements worker count 959 * - rechecks for termination, in case the existence of this 960 * worker was holding up termination 961 */ 962 private void addWorkerFailed(Worker w) { 963 final ReentrantLock mainLock = this.mainLock; 964 mainLock.lock(); 965 try { 966 if (w != null) 967 workers.remove(w); 968 decrementWorkerCount(); 969 tryTerminate(); 970 } finally { 971 mainLock.unlock(); 972 } 973 } 974 975 /** 976 * Performs cleanup and bookkeeping for a dying worker. Called 977 * only from worker threads. Unless completedAbruptly is set, 978 * assumes that workerCount has already been adjusted to account 979 * for exit. This method removes thread from worker set, and 980 * possibly terminates the pool or replaces the worker if either 981 * it exited due to user task exception or if fewer than 982 * corePoolSize workers are running or queue is non-empty but 983 * there are no workers. 984 * 985 * @param w the worker 986 * @param completedAbruptly if the worker died due to user exception 987 */ 988 private void processWorkerExit(Worker w, boolean completedAbruptly) { 989 if (completedAbruptly) // If abrupt, then workerCount wasn't adjusted 990 decrementWorkerCount(); 991 992 final ReentrantLock mainLock = this.mainLock; 993 mainLock.lock(); 994 try { 995 completedTaskCount += w.completedTasks; 996 workers.remove(w); 997 } finally { 998 mainLock.unlock(); 999 } 1000 1001 tryTerminate(); 1002 1003 int c = ctl.get(); 1004 if (runStateLessThan(c, STOP)) { 1005 if (!completedAbruptly) { 1006 int min = allowCoreThreadTimeOut ? 0 : corePoolSize; 1007 if (min == 0 && ! workQueue.isEmpty()) 1008 min = 1; 1009 if (workerCountOf(c) >= min) 1010 return; // replacement not needed 1011 } 1012 addWorker(null, false); 1013 } 1014 } 1015 1016 /** 1017 * Performs blocking or timed wait for a task, depending on 1018 * current configuration settings, or returns null if this worker 1019 * must exit because of any of: 1020 * 1. There are more than maximumPoolSize workers (due to 1021 * a call to setMaximumPoolSize). 1022 * 2. The pool is stopped. 1023 * 3. The pool is shutdown and the queue is empty. 1024 * 4. This worker timed out waiting for a task, and timed-out 1025 * workers are subject to termination (that is, 1026 * {@code allowCoreThreadTimeOut || workerCount > corePoolSize}) 1027 * both before and after the timed wait, and if the queue is 1028 * non-empty, this worker is not the last thread in the pool. 1029 * 1030 * @return task, or null if the worker must exit, in which case 1031 * workerCount is decremented 1032 */ 1033 private Runnable getTask() { 1034 boolean timedOut = false; // Did the last poll() time out? 1035 1036 for (;;) { 1037 int c = ctl.get(); 1038 1039 // Check if queue empty only if necessary. 1040 if (runStateAtLeast(c, SHUTDOWN) 1041 && (runStateAtLeast(c, STOP) || workQueue.isEmpty())) { 1042 decrementWorkerCount(); 1043 return null; 1044 } 1045 1046 int wc = workerCountOf(c); 1047 1048 // Are workers subject to culling? 1049 boolean timed = allowCoreThreadTimeOut || wc > corePoolSize; 1050 1051 if ((wc > maximumPoolSize || (timed && timedOut)) 1052 && (wc > 1 || workQueue.isEmpty())) { 1053 if (compareAndDecrementWorkerCount(c)) 1054 return null; 1055 continue; 1056 } 1057 1058 try { 1059 Runnable r = timed ? 1060 workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) : 1061 workQueue.take(); 1062 if (r != null) 1063 return r; 1064 timedOut = true; 1065 } catch (InterruptedException retry) { 1066 timedOut = false; 1067 } 1068 } 1069 } 1070 1071 /** 1072 * Main worker run loop. Repeatedly gets tasks from queue and 1073 * executes them, while coping with a number of issues: 1074 * 1075 * 1. We may start out with an initial task, in which case we 1076 * don't need to get the first one. Otherwise, as long as pool is 1077 * running, we get tasks from getTask. If it returns null then the 1078 * worker exits due to changed pool state or configuration 1079 * parameters. Other exits result from exception throws in 1080 * external code, in which case completedAbruptly holds, which 1081 * usually leads processWorkerExit to replace this thread. 1082 * 1083 * 2. Before running any task, the lock is acquired to prevent 1084 * other pool interrupts while the task is executing, and then we 1085 * ensure that unless pool is stopping, this thread does not have 1086 * its interrupt set. 1087 * 1088 * 3. Each task run is preceded by a call to beforeExecute, which 1089 * might throw an exception, in which case we cause thread to die 1090 * (breaking loop with completedAbruptly true) without processing 1091 * the task. 1092 * 1093 * 4. Assuming beforeExecute completes normally, we run the task, 1094 * gathering any of its thrown exceptions to send to afterExecute. 1095 * We separately handle RuntimeException, Error (both of which the 1096 * specs guarantee that we trap) and arbitrary Throwables. 1097 * Because we cannot rethrow Throwables within Runnable.run, we 1098 * wrap them within Errors on the way out (to the thread's 1099 * UncaughtExceptionHandler). Any thrown exception also 1100 * conservatively causes thread to die. 1101 * 1102 * 5. After task.run completes, we call afterExecute, which may 1103 * also throw an exception, which will also cause thread to 1104 * die. According to JLS Sec 14.20, this exception is the one that 1105 * will be in effect even if task.run throws. 1106 * 1107 * The net effect of the exception mechanics is that afterExecute 1108 * and the thread's UncaughtExceptionHandler have as accurate 1109 * information as we can provide about any problems encountered by 1110 * user code. 1111 * 1112 * @param w the worker 1113 */ 1114 final void runWorker(Worker w) { 1115 Thread wt = Thread.currentThread(); 1116 Runnable task = w.firstTask; 1117 w.firstTask = null; 1118 w.unlock(); // allow interrupts 1119 boolean completedAbruptly = true; 1120 try { 1121 while (task != null || (task = getTask()) != null) { 1122 w.lock(); 1123 // If pool is stopping, ensure thread is interrupted; 1124 // if not, ensure thread is not interrupted. This 1125 // requires a recheck in second case to deal with 1126 // shutdownNow race while clearing interrupt 1127 if ((runStateAtLeast(ctl.get(), STOP) || 1128 (Thread.interrupted() && 1129 runStateAtLeast(ctl.get(), STOP))) && 1130 !wt.isInterrupted()) 1131 wt.interrupt(); 1132 try { 1133 beforeExecute(wt, task); 1134 try { 1135 task.run(); 1136 afterExecute(task, null); 1137 } catch (Throwable ex) { 1138 afterExecute(task, ex); 1139 throw ex; 1140 } 1141 } finally { 1142 task = null; 1143 w.completedTasks++; 1144 w.unlock(); 1145 } 1146 } 1147 completedAbruptly = false; 1148 } finally { 1149 processWorkerExit(w, completedAbruptly); 1150 } 1151 } 1152 1153 // Public constructors and methods 1154 1155 /** 1156 * Creates a new {@code ThreadPoolExecutor} with the given initial 1157 * parameters, the 1158 * {@linkplain Executors#defaultThreadFactory default thread factory} 1159 * and the {@linkplain ThreadPoolExecutor.AbortPolicy 1160 * default rejected execution handler}. 1161 * 1162 * <p>It may be more convenient to use one of the {@link Executors} 1163 * factory methods instead of this general purpose constructor. 1164 * 1165 * @param corePoolSize the number of threads to keep in the pool, even 1166 * if they are idle, unless {@code allowCoreThreadTimeOut} is set 1167 * @param maximumPoolSize the maximum number of threads to allow in the 1168 * pool 1169 * @param keepAliveTime when the number of threads is greater than 1170 * the core, this is the maximum time that excess idle threads 1171 * will wait for new tasks before terminating. 1172 * @param unit the time unit for the {@code keepAliveTime} argument 1173 * @param workQueue the queue to use for holding tasks before they are 1174 * executed. This queue will hold only the {@code Runnable} 1175 * tasks submitted by the {@code execute} method. 1176 * @throws IllegalArgumentException if one of the following holds:<br> 1177 * {@code corePoolSize < 0}<br> 1178 * {@code keepAliveTime < 0}<br> 1179 * {@code maximumPoolSize <= 0}<br> 1180 * {@code maximumPoolSize < corePoolSize} 1181 * @throws NullPointerException if {@code workQueue} is null 1182 */ 1183 public ThreadPoolExecutor(int corePoolSize, 1184 int maximumPoolSize, 1185 long keepAliveTime, 1186 TimeUnit unit, 1187 BlockingQueue<Runnable> workQueue) { 1188 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue, 1189 Executors.defaultThreadFactory(), defaultHandler); 1190 } 1191 1192 /** 1193 * Creates a new {@code ThreadPoolExecutor} with the given initial 1194 * parameters and the {@linkplain ThreadPoolExecutor.AbortPolicy 1195 * default rejected execution handler}. 1196 * 1197 * @param corePoolSize the number of threads to keep in the pool, even 1198 * if they are idle, unless {@code allowCoreThreadTimeOut} is set 1199 * @param maximumPoolSize the maximum number of threads to allow in the 1200 * pool 1201 * @param keepAliveTime when the number of threads is greater than 1202 * the core, this is the maximum time that excess idle threads 1203 * will wait for new tasks before terminating. 1204 * @param unit the time unit for the {@code keepAliveTime} argument 1205 * @param workQueue the queue to use for holding tasks before they are 1206 * executed. This queue will hold only the {@code Runnable} 1207 * tasks submitted by the {@code execute} method. 1208 * @param threadFactory the factory to use when the executor 1209 * creates a new thread 1210 * @throws IllegalArgumentException if one of the following holds:<br> 1211 * {@code corePoolSize < 0}<br> 1212 * {@code keepAliveTime < 0}<br> 1213 * {@code maximumPoolSize <= 0}<br> 1214 * {@code maximumPoolSize < corePoolSize} 1215 * @throws NullPointerException if {@code workQueue} 1216 * or {@code threadFactory} is null 1217 */ 1218 public ThreadPoolExecutor(int corePoolSize, 1219 int maximumPoolSize, 1220 long keepAliveTime, 1221 TimeUnit unit, 1222 BlockingQueue<Runnable> workQueue, 1223 ThreadFactory threadFactory) { 1224 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue, 1225 threadFactory, defaultHandler); 1226 } 1227 1228 /** 1229 * Creates a new {@code ThreadPoolExecutor} with the given initial 1230 * parameters and the 1231 * {@linkplain Executors#defaultThreadFactory default thread factory}. 1232 * 1233 * @param corePoolSize the number of threads to keep in the pool, even 1234 * if they are idle, unless {@code allowCoreThreadTimeOut} is set 1235 * @param maximumPoolSize the maximum number of threads to allow in the 1236 * pool 1237 * @param keepAliveTime when the number of threads is greater than 1238 * the core, this is the maximum time that excess idle threads 1239 * will wait for new tasks before terminating. 1240 * @param unit the time unit for the {@code keepAliveTime} argument 1241 * @param workQueue the queue to use for holding tasks before they are 1242 * executed. This queue will hold only the {@code Runnable} 1243 * tasks submitted by the {@code execute} method. 1244 * @param handler the handler to use when execution is blocked 1245 * because the thread bounds and queue capacities are reached 1246 * @throws IllegalArgumentException if one of the following holds:<br> 1247 * {@code corePoolSize < 0}<br> 1248 * {@code keepAliveTime < 0}<br> 1249 * {@code maximumPoolSize <= 0}<br> 1250 * {@code maximumPoolSize < corePoolSize} 1251 * @throws NullPointerException if {@code workQueue} 1252 * or {@code handler} is null 1253 */ 1254 public ThreadPoolExecutor(int corePoolSize, 1255 int maximumPoolSize, 1256 long keepAliveTime, 1257 TimeUnit unit, 1258 BlockingQueue<Runnable> workQueue, 1259 RejectedExecutionHandler handler) { 1260 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue, 1261 Executors.defaultThreadFactory(), handler); 1262 } 1263 1264 /** 1265 * Creates a new {@code ThreadPoolExecutor} with the given initial 1266 * parameters. 1267 * 1268 * @param corePoolSize the number of threads to keep in the pool, even 1269 * if they are idle, unless {@code allowCoreThreadTimeOut} is set 1270 * @param maximumPoolSize the maximum number of threads to allow in the 1271 * pool 1272 * @param keepAliveTime when the number of threads is greater than 1273 * the core, this is the maximum time that excess idle threads 1274 * will wait for new tasks before terminating. 1275 * @param unit the time unit for the {@code keepAliveTime} argument 1276 * @param workQueue the queue to use for holding tasks before they are 1277 * executed. This queue will hold only the {@code Runnable} 1278 * tasks submitted by the {@code execute} method. 1279 * @param threadFactory the factory to use when the executor 1280 * creates a new thread 1281 * @param handler the handler to use when execution is blocked 1282 * because the thread bounds and queue capacities are reached 1283 * @throws IllegalArgumentException if one of the following holds:<br> 1284 * {@code corePoolSize < 0}<br> 1285 * {@code keepAliveTime < 0}<br> 1286 * {@code maximumPoolSize <= 0}<br> 1287 * {@code maximumPoolSize < corePoolSize} 1288 * @throws NullPointerException if {@code workQueue} 1289 * or {@code threadFactory} or {@code handler} is null 1290 */ 1291 public ThreadPoolExecutor(int corePoolSize, 1292 int maximumPoolSize, 1293 long keepAliveTime, 1294 TimeUnit unit, 1295 BlockingQueue<Runnable> workQueue, 1296 ThreadFactory threadFactory, 1297 RejectedExecutionHandler handler) { 1298 if (corePoolSize < 0 || 1299 maximumPoolSize <= 0 || 1300 maximumPoolSize < corePoolSize || 1301 keepAliveTime < 0) 1302 throw new IllegalArgumentException(); 1303 if (workQueue == null || threadFactory == null || handler == null) 1304 throw new NullPointerException(); 1305 this.corePoolSize = corePoolSize; 1306 this.maximumPoolSize = maximumPoolSize; 1307 this.workQueue = workQueue; 1308 this.keepAliveTime = unit.toNanos(keepAliveTime); 1309 this.threadFactory = threadFactory; 1310 this.handler = handler; 1311 } 1312 1313 /** 1314 * Executes the given task sometime in the future. The task 1315 * may execute in a new thread or in an existing pooled thread. 1316 * 1317 * If the task cannot be submitted for execution, either because this 1318 * executor has been shutdown or because its capacity has been reached, 1319 * the task is handled by the current {@link RejectedExecutionHandler}. 1320 * 1321 * @param command the task to execute 1322 * @throws RejectedExecutionException at discretion of 1323 * {@code RejectedExecutionHandler}, if the task 1324 * cannot be accepted for execution 1325 * @throws NullPointerException if {@code command} is null 1326 */ 1327 public void execute(Runnable command) { 1328 if (command == null) 1329 throw new NullPointerException(); 1330 /* 1331 * Proceed in 3 steps: 1332 * 1333 * 1. If fewer than corePoolSize threads are running, try to 1334 * start a new thread with the given command as its first 1335 * task. The call to addWorker atomically checks runState and 1336 * workerCount, and so prevents false alarms that would add 1337 * threads when it shouldn't, by returning false. 1338 * 1339 * 2. If a task can be successfully queued, then we still need 1340 * to double-check whether we should have added a thread 1341 * (because existing ones died since last checking) or that 1342 * the pool shut down since entry into this method. So we 1343 * recheck state and if necessary roll back the enqueuing if 1344 * stopped, or start a new thread if there are none. 1345 * 1346 * 3. If we cannot queue task, then we try to add a new 1347 * thread. If it fails, we know we are shut down or saturated 1348 * and so reject the task. 1349 */ 1350 int c = ctl.get(); 1351 if (workerCountOf(c) < corePoolSize) { 1352 if (addWorker(command, true)) 1353 return; 1354 c = ctl.get(); 1355 } 1356 if (isRunning(c) && workQueue.offer(command)) { 1357 int recheck = ctl.get(); 1358 if (! isRunning(recheck) && remove(command)) 1359 reject(command); 1360 else if (workerCountOf(recheck) == 0) 1361 addWorker(null, false); 1362 } 1363 else if (!addWorker(command, false)) 1364 reject(command); 1365 } 1366 1367 /** 1368 * Initiates an orderly shutdown in which previously submitted 1369 * tasks are executed, but no new tasks will be accepted. 1370 * Invocation has no additional effect if already shut down. 1371 * 1372 * <p>This method does not wait for previously submitted tasks to 1373 * complete execution. Use {@link #awaitTermination awaitTermination} 1374 * to do that. 1375 * 1376 * @throws SecurityException {@inheritDoc} 1377 */ 1378 public void shutdown() { 1379 final ReentrantLock mainLock = this.mainLock; 1380 mainLock.lock(); 1381 try { 1382 checkShutdownAccess(); 1383 advanceRunState(SHUTDOWN); 1384 interruptIdleWorkers(); 1385 onShutdown(); // hook for ScheduledThreadPoolExecutor 1386 } finally { 1387 mainLock.unlock(); 1388 } 1389 tryTerminate(); 1390 } 1391 1392 /** 1393 * Attempts to stop all actively executing tasks, halts the 1394 * processing of waiting tasks, and returns a list of the tasks 1395 * that were awaiting execution. These tasks are drained (removed) 1396 * from the task queue upon return from this method. 1397 * 1398 * <p>This method does not wait for actively executing tasks to 1399 * terminate. Use {@link #awaitTermination awaitTermination} to 1400 * do that. 1401 * 1402 * <p>There are no guarantees beyond best-effort attempts to stop 1403 * processing actively executing tasks. This implementation 1404 * interrupts tasks via {@link Thread#interrupt}; any task that 1405 * fails to respond to interrupts may never terminate. 1406 * 1407 * @throws SecurityException {@inheritDoc} 1408 */ 1409 public List<Runnable> shutdownNow() { 1410 List<Runnable> tasks; 1411 final ReentrantLock mainLock = this.mainLock; 1412 mainLock.lock(); 1413 try { 1414 checkShutdownAccess(); 1415 advanceRunState(STOP); 1416 interruptWorkers(); 1417 tasks = drainQueue(); 1418 } finally { 1419 mainLock.unlock(); 1420 } 1421 tryTerminate(); 1422 return tasks; 1423 } 1424 1425 public boolean isShutdown() { 1426 return runStateAtLeast(ctl.get(), SHUTDOWN); 1427 } 1428 1429 /** Used by ScheduledThreadPoolExecutor. */ 1430 boolean isStopped() { 1431 return runStateAtLeast(ctl.get(), STOP); 1432 } 1433 1434 /** 1435 * Returns true if this executor is in the process of terminating 1436 * after {@link #shutdown} or {@link #shutdownNow} but has not 1437 * completely terminated. This method may be useful for 1438 * debugging. A return of {@code true} reported a sufficient 1439 * period after shutdown may indicate that submitted tasks have 1440 * ignored or suppressed interruption, causing this executor not 1441 * to properly terminate. 1442 * 1443 * @return {@code true} if terminating but not yet terminated 1444 */ 1445 public boolean isTerminating() { 1446 int c = ctl.get(); 1447 return runStateAtLeast(c, SHUTDOWN) && runStateLessThan(c, TERMINATED); 1448 } 1449 1450 public boolean isTerminated() { 1451 return runStateAtLeast(ctl.get(), TERMINATED); 1452 } 1453 1454 public boolean awaitTermination(long timeout, TimeUnit unit) 1455 throws InterruptedException { 1456 long nanos = unit.toNanos(timeout); 1457 final ReentrantLock mainLock = this.mainLock; 1458 mainLock.lock(); 1459 try { 1460 while (runStateLessThan(ctl.get(), TERMINATED)) { 1461 if (nanos <= 0L) 1462 return false; 1463 nanos = termination.awaitNanos(nanos); 1464 } 1465 return true; 1466 } finally { 1467 mainLock.unlock(); 1468 } 1469 } 1470 1471 // Override without "throws Throwable" for compatibility with subclasses 1472 // whose finalize method invokes super.finalize() (as is recommended). 1473 // Before JDK 11, finalize() had a non-empty method body. 1474 1475 /** 1476 * @implNote Previous versions of this class had a finalize method 1477 * that shut down this executor, but in this version, finalize 1478 * does nothing. 1479 */ 1480 @Deprecated(since="9") 1481 protected void finalize() {} 1482 1483 /** 1484 * Sets the thread factory used to create new threads. 1485 * 1486 * @param threadFactory the new thread factory 1487 * @throws NullPointerException if threadFactory is null 1488 * @see #getThreadFactory 1489 */ 1490 public void setThreadFactory(ThreadFactory threadFactory) { 1491 if (threadFactory == null) 1492 throw new NullPointerException(); 1493 this.threadFactory = threadFactory; 1494 } 1495 1496 /** 1497 * Returns the thread factory used to create new threads. 1498 * 1499 * @return the current thread factory 1500 * @see #setThreadFactory(ThreadFactory) 1501 */ 1502 public ThreadFactory getThreadFactory() { 1503 return threadFactory; 1504 } 1505 1506 /** 1507 * Sets a new handler for unexecutable tasks. 1508 * 1509 * @param handler the new handler 1510 * @throws NullPointerException if handler is null 1511 * @see #getRejectedExecutionHandler 1512 */ 1513 public void setRejectedExecutionHandler(RejectedExecutionHandler handler) { 1514 if (handler == null) 1515 throw new NullPointerException(); 1516 this.handler = handler; 1517 } 1518 1519 /** 1520 * Returns the current handler for unexecutable tasks. 1521 * 1522 * @return the current handler 1523 * @see #setRejectedExecutionHandler(RejectedExecutionHandler) 1524 */ 1525 public RejectedExecutionHandler getRejectedExecutionHandler() { 1526 return handler; 1527 } 1528 1529 /** 1530 * Sets the core number of threads. This overrides any value set 1531 * in the constructor. If the new value is smaller than the 1532 * current value, excess existing threads will be terminated when 1533 * they next become idle. If larger, new threads will, if needed, 1534 * be started to execute any queued tasks. 1535 * 1536 * @param corePoolSize the new core size 1537 * @throws IllegalArgumentException if {@code corePoolSize < 0} 1538 * or {@code corePoolSize} is greater than the {@linkplain 1539 * #getMaximumPoolSize() maximum pool size} 1540 * @see #getCorePoolSize 1541 */ 1542 public void setCorePoolSize(int corePoolSize) { 1543 if (corePoolSize < 0 || maximumPoolSize < corePoolSize) 1544 throw new IllegalArgumentException(); 1545 int delta = corePoolSize - this.corePoolSize; 1546 this.corePoolSize = corePoolSize; 1547 if (workerCountOf(ctl.get()) > corePoolSize) 1548 interruptIdleWorkers(); 1549 else if (delta > 0) { 1550 // We don't really know how many new threads are "needed". 1551 // As a heuristic, prestart enough new workers (up to new 1552 // core size) to handle the current number of tasks in 1553 // queue, but stop if queue becomes empty while doing so. 1554 int k = Math.min(delta, workQueue.size()); 1555 while (k-- > 0 && addWorker(null, true)) { 1556 if (workQueue.isEmpty()) 1557 break; 1558 } 1559 } 1560 } 1561 1562 /** 1563 * Returns the core number of threads. 1564 * 1565 * @return the core number of threads 1566 * @see #setCorePoolSize 1567 */ 1568 public int getCorePoolSize() { 1569 return corePoolSize; 1570 } 1571 1572 /** 1573 * Starts a core thread, causing it to idly wait for work. This 1574 * overrides the default policy of starting core threads only when 1575 * new tasks are executed. This method will return {@code false} 1576 * if all core threads have already been started. 1577 * 1578 * @return {@code true} if a thread was started 1579 */ 1580 public boolean prestartCoreThread() { 1581 return workerCountOf(ctl.get()) < corePoolSize && 1582 addWorker(null, true); 1583 } 1584 1585 /** 1586 * Same as prestartCoreThread except arranges that at least one 1587 * thread is started even if corePoolSize is 0. 1588 */ 1589 void ensurePrestart() { 1590 int wc = workerCountOf(ctl.get()); 1591 if (wc < corePoolSize) 1592 addWorker(null, true); 1593 else if (wc == 0) 1594 addWorker(null, false); 1595 } 1596 1597 /** 1598 * Starts all core threads, causing them to idly wait for work. This 1599 * overrides the default policy of starting core threads only when 1600 * new tasks are executed. 1601 * 1602 * @return the number of threads started 1603 */ 1604 public int prestartAllCoreThreads() { 1605 int n = 0; 1606 while (addWorker(null, true)) 1607 ++n; 1608 return n; 1609 } 1610 1611 /** 1612 * Returns true if this pool allows core threads to time out and 1613 * terminate if no tasks arrive within the keepAlive time, being 1614 * replaced if needed when new tasks arrive. When true, the same 1615 * keep-alive policy applying to non-core threads applies also to 1616 * core threads. When false (the default), core threads are never 1617 * terminated due to lack of incoming tasks. 1618 * 1619 * @return {@code true} if core threads are allowed to time out, 1620 * else {@code false} 1621 * 1622 * @since 1.6 1623 */ 1624 public boolean allowsCoreThreadTimeOut() { 1625 return allowCoreThreadTimeOut; 1626 } 1627 1628 /** 1629 * Sets the policy governing whether core threads may time out and 1630 * terminate if no tasks arrive within the keep-alive time, being 1631 * replaced if needed when new tasks arrive. When false, core 1632 * threads are never terminated due to lack of incoming 1633 * tasks. When true, the same keep-alive policy applying to 1634 * non-core threads applies also to core threads. To avoid 1635 * continual thread replacement, the keep-alive time must be 1636 * greater than zero when setting {@code true}. This method 1637 * should in general be called before the pool is actively used. 1638 * 1639 * @param value {@code true} if should time out, else {@code false} 1640 * @throws IllegalArgumentException if value is {@code true} 1641 * and the current keep-alive time is not greater than zero 1642 * 1643 * @since 1.6 1644 */ 1645 public void allowCoreThreadTimeOut(boolean value) { 1646 if (value && keepAliveTime <= 0) 1647 throw new IllegalArgumentException("Core threads must have nonzero keep alive times"); 1648 if (value != allowCoreThreadTimeOut) { 1649 allowCoreThreadTimeOut = value; 1650 if (value) 1651 interruptIdleWorkers(); 1652 } 1653 } 1654 1655 /** 1656 * Sets the maximum allowed number of threads. This overrides any 1657 * value set in the constructor. If the new value is smaller than 1658 * the current value, excess existing threads will be 1659 * terminated when they next become idle. 1660 * 1661 * @param maximumPoolSize the new maximum 1662 * @throws IllegalArgumentException if the new maximum is 1663 * less than or equal to zero, or 1664 * less than the {@linkplain #getCorePoolSize core pool size} 1665 * @see #getMaximumPoolSize 1666 */ 1667 public void setMaximumPoolSize(int maximumPoolSize) { 1668 if (maximumPoolSize <= 0 || maximumPoolSize < corePoolSize) 1669 throw new IllegalArgumentException(); 1670 this.maximumPoolSize = maximumPoolSize; 1671 if (workerCountOf(ctl.get()) > maximumPoolSize) 1672 interruptIdleWorkers(); 1673 } 1674 1675 /** 1676 * Returns the maximum allowed number of threads. 1677 * 1678 * @return the maximum allowed number of threads 1679 * @see #setMaximumPoolSize 1680 */ 1681 public int getMaximumPoolSize() { 1682 return maximumPoolSize; 1683 } 1684 1685 /** 1686 * Sets the thread keep-alive time, which is the amount of time 1687 * that threads may remain idle before being terminated. 1688 * Threads that wait this amount of time without processing a 1689 * task will be terminated if there are more than the core 1690 * number of threads currently in the pool, or if this pool 1691 * {@linkplain #allowsCoreThreadTimeOut() allows core thread timeout}. 1692 * This overrides any value set in the constructor. 1693 * 1694 * @param time the time to wait. A time value of zero will cause 1695 * excess threads to terminate immediately after executing tasks. 1696 * @param unit the time unit of the {@code time} argument 1697 * @throws IllegalArgumentException if {@code time} less than zero or 1698 * if {@code time} is zero and {@code allowsCoreThreadTimeOut} 1699 * @see #getKeepAliveTime(TimeUnit) 1700 */ 1701 public void setKeepAliveTime(long time, TimeUnit unit) { 1702 if (time < 0) 1703 throw new IllegalArgumentException(); 1704 if (time == 0 && allowsCoreThreadTimeOut()) 1705 throw new IllegalArgumentException("Core threads must have nonzero keep alive times"); 1706 long keepAliveTime = unit.toNanos(time); 1707 long delta = keepAliveTime - this.keepAliveTime; 1708 this.keepAliveTime = keepAliveTime; 1709 if (delta < 0) 1710 interruptIdleWorkers(); 1711 } 1712 1713 /** 1714 * Returns the thread keep-alive time, which is the amount of time 1715 * that threads may remain idle before being terminated. 1716 * Threads that wait this amount of time without processing a 1717 * task will be terminated if there are more than the core 1718 * number of threads currently in the pool, or if this pool 1719 * {@linkplain #allowsCoreThreadTimeOut() allows core thread timeout}. 1720 * 1721 * @param unit the desired time unit of the result 1722 * @return the time limit 1723 * @see #setKeepAliveTime(long, TimeUnit) 1724 */ 1725 public long getKeepAliveTime(TimeUnit unit) { 1726 return unit.convert(keepAliveTime, TimeUnit.NANOSECONDS); 1727 } 1728 1729 /* User-level queue utilities */ 1730 1731 /** 1732 * Returns the task queue used by this executor. Access to the 1733 * task queue is intended primarily for debugging and monitoring. 1734 * This queue may be in active use. Retrieving the task queue 1735 * does not prevent queued tasks from executing. 1736 * 1737 * @return the task queue 1738 */ 1739 public BlockingQueue<Runnable> getQueue() { 1740 return workQueue; 1741 } 1742 1743 /** 1744 * Removes this task from the executor's internal queue if it is 1745 * present, thus causing it not to be run if it has not already 1746 * started. 1747 * 1748 * <p>This method may be useful as one part of a cancellation 1749 * scheme. It may fail to remove tasks that have been converted 1750 * into other forms before being placed on the internal queue. 1751 * For example, a task entered using {@code submit} might be 1752 * converted into a form that maintains {@code Future} status. 1753 * However, in such cases, method {@link #purge} may be used to 1754 * remove those Futures that have been cancelled. 1755 * 1756 * @param task the task to remove 1757 * @return {@code true} if the task was removed 1758 */ 1759 public boolean remove(Runnable task) { 1760 boolean removed = workQueue.remove(task); 1761 tryTerminate(); // In case SHUTDOWN and now empty 1762 return removed; 1763 } 1764 1765 /** 1766 * Tries to remove from the work queue all {@link Future} 1767 * tasks that have been cancelled. This method can be useful as a 1768 * storage reclamation operation, that has no other impact on 1769 * functionality. Cancelled tasks are never executed, but may 1770 * accumulate in work queues until worker threads can actively 1771 * remove them. Invoking this method instead tries to remove them now. 1772 * However, this method may fail to remove tasks in 1773 * the presence of interference by other threads. 1774 */ 1775 public void purge() { 1776 final BlockingQueue<Runnable> q = workQueue; 1777 try { 1778 Iterator<Runnable> it = q.iterator(); 1779 while (it.hasNext()) { 1780 Runnable r = it.next(); 1781 if (r instanceof Future<?> && ((Future<?>)r).isCancelled()) 1782 it.remove(); 1783 } 1784 } catch (ConcurrentModificationException fallThrough) { 1785 // Take slow path if we encounter interference during traversal. 1786 // Make copy for traversal and call remove for cancelled entries. 1787 // The slow path is more likely to be O(N*N). 1788 for (Object r : q.toArray()) 1789 if (r instanceof Future<?> && ((Future<?>)r).isCancelled()) 1790 q.remove(r); 1791 } 1792 1793 tryTerminate(); // In case SHUTDOWN and now empty 1794 } 1795 1796 /* Statistics */ 1797 1798 /** 1799 * Returns the current number of threads in the pool. 1800 * 1801 * @return the number of threads 1802 */ 1803 public int getPoolSize() { 1804 final ReentrantLock mainLock = this.mainLock; 1805 mainLock.lock(); 1806 try { 1807 // Remove rare and surprising possibility of 1808 // isTerminated() && getPoolSize() > 0 1809 return runStateAtLeast(ctl.get(), TIDYING) ? 0 1810 : workers.size(); 1811 } finally { 1812 mainLock.unlock(); 1813 } 1814 } 1815 1816 /** 1817 * Returns the approximate number of threads that are actively 1818 * executing tasks. 1819 * 1820 * @return the number of threads 1821 */ 1822 public int getActiveCount() { 1823 final ReentrantLock mainLock = this.mainLock; 1824 mainLock.lock(); 1825 try { 1826 int n = 0; 1827 for (Worker w : workers) 1828 if (w.isLocked()) 1829 ++n; 1830 return n; 1831 } finally { 1832 mainLock.unlock(); 1833 } 1834 } 1835 1836 /** 1837 * Returns the largest number of threads that have ever 1838 * simultaneously been in the pool. 1839 * 1840 * @return the number of threads 1841 */ 1842 public int getLargestPoolSize() { 1843 final ReentrantLock mainLock = this.mainLock; 1844 mainLock.lock(); 1845 try { 1846 return largestPoolSize; 1847 } finally { 1848 mainLock.unlock(); 1849 } 1850 } 1851 1852 /** 1853 * Returns the approximate total number of tasks that have ever been 1854 * scheduled for execution. Because the states of tasks and 1855 * threads may change dynamically during computation, the returned 1856 * value is only an approximation. 1857 * 1858 * @return the number of tasks 1859 */ 1860 public long getTaskCount() { 1861 final ReentrantLock mainLock = this.mainLock; 1862 mainLock.lock(); 1863 try { 1864 long n = completedTaskCount; 1865 for (Worker w : workers) { 1866 n += w.completedTasks; 1867 if (w.isLocked()) 1868 ++n; 1869 } 1870 return n + workQueue.size(); 1871 } finally { 1872 mainLock.unlock(); 1873 } 1874 } 1875 1876 /** 1877 * Returns the approximate total number of tasks that have 1878 * completed execution. Because the states of tasks and threads 1879 * may change dynamically during computation, the returned value 1880 * is only an approximation, but one that does not ever decrease 1881 * across successive calls. 1882 * 1883 * @return the number of tasks 1884 */ 1885 public long getCompletedTaskCount() { 1886 final ReentrantLock mainLock = this.mainLock; 1887 mainLock.lock(); 1888 try { 1889 long n = completedTaskCount; 1890 for (Worker w : workers) 1891 n += w.completedTasks; 1892 return n; 1893 } finally { 1894 mainLock.unlock(); 1895 } 1896 } 1897 1898 /** 1899 * Returns a string identifying this pool, as well as its state, 1900 * including indications of run state and estimated worker and 1901 * task counts. 1902 * 1903 * @return a string identifying this pool, as well as its state 1904 */ 1905 public String toString() { 1906 long ncompleted; 1907 int nworkers, nactive; 1908 final ReentrantLock mainLock = this.mainLock; 1909 mainLock.lock(); 1910 try { 1911 ncompleted = completedTaskCount; 1912 nactive = 0; 1913 nworkers = workers.size(); 1914 for (Worker w : workers) { 1915 ncompleted += w.completedTasks; 1916 if (w.isLocked()) 1917 ++nactive; 1918 } 1919 } finally { 1920 mainLock.unlock(); 1921 } 1922 int c = ctl.get(); 1923 String runState = 1924 isRunning(c) ? "Running" : 1925 runStateAtLeast(c, TERMINATED) ? "Terminated" : 1926 "Shutting down"; 1927 return super.toString() + 1928 "[" + runState + 1929 ", pool size = " + nworkers + 1930 ", active threads = " + nactive + 1931 ", queued tasks = " + workQueue.size() + 1932 ", completed tasks = " + ncompleted + 1933 "]"; 1934 } 1935 1936 /* Extension hooks */ 1937 1938 /** 1939 * Method invoked prior to executing the given Runnable in the 1940 * given thread. This method is invoked by thread {@code t} that 1941 * will execute task {@code r}, and may be used to re-initialize 1942 * ThreadLocals, or to perform logging. 1943 * 1944 * <p>This implementation does nothing, but may be customized in 1945 * subclasses. Note: To properly nest multiple overridings, subclasses 1946 * should generally invoke {@code super.beforeExecute} at the end of 1947 * this method. 1948 * 1949 * @param t the thread that will run task {@code r} 1950 * @param r the task that will be executed 1951 */ 1952 protected void beforeExecute(Thread t, Runnable r) { } 1953 1954 /** 1955 * Method invoked upon completion of execution of the given Runnable. 1956 * This method is invoked by the thread that executed the task. If 1957 * non-null, the Throwable is the uncaught {@code RuntimeException} 1958 * or {@code Error} that caused execution to terminate abruptly. 1959 * 1960 * <p>This implementation does nothing, but may be customized in 1961 * subclasses. Note: To properly nest multiple overridings, subclasses 1962 * should generally invoke {@code super.afterExecute} at the 1963 * beginning of this method. 1964 * 1965 * <p><b>Note:</b> When actions are enclosed in tasks (such as 1966 * {@link FutureTask}) either explicitly or via methods such as 1967 * {@code submit}, these task objects catch and maintain 1968 * computational exceptions, and so they do not cause abrupt 1969 * termination, and the internal exceptions are <em>not</em> 1970 * passed to this method. If you would like to trap both kinds of 1971 * failures in this method, you can further probe for such cases, 1972 * as in this sample subclass that prints either the direct cause 1973 * or the underlying exception if a task has been aborted: 1974 * 1975 * <pre> {@code 1976 * class ExtendedExecutor extends ThreadPoolExecutor { 1977 * // ... 1978 * protected void afterExecute(Runnable r, Throwable t) { 1979 * super.afterExecute(r, t); 1980 * if (t == null 1981 * && r instanceof Future<?> 1982 * && ((Future<?>)r).isDone()) { 1983 * try { 1984 * Object result = ((Future<?>) r).get(); 1985 * } catch (CancellationException ce) { 1986 * t = ce; 1987 * } catch (ExecutionException ee) { 1988 * t = ee.getCause(); 1989 * } catch (InterruptedException ie) { 1990 * // ignore/reset 1991 * Thread.currentThread().interrupt(); 1992 * } 1993 * } 1994 * if (t != null) 1995 * System.out.println(t); 1996 * } 1997 * }}</pre> 1998 * 1999 * @param r the runnable that has completed 2000 * @param t the exception that caused termination, or null if 2001 * execution completed normally 2002 */ 2003 protected void afterExecute(Runnable r, Throwable t) { } 2004 2005 /** 2006 * Method invoked when the Executor has terminated. Default 2007 * implementation does nothing. Note: To properly nest multiple 2008 * overridings, subclasses should generally invoke 2009 * {@code super.terminated} within this method. 2010 */ 2011 protected void terminated() { } 2012 2013 /* Predefined RejectedExecutionHandlers */ 2014 2015 /** 2016 * A handler for rejected tasks that runs the rejected task 2017 * directly in the calling thread of the {@code execute} method, 2018 * unless the executor has been shut down, in which case the task 2019 * is discarded. 2020 */ 2021 public static class CallerRunsPolicy implements RejectedExecutionHandler { 2022 /** 2023 * Creates a {@code CallerRunsPolicy}. 2024 */ 2025 public CallerRunsPolicy() { } 2026 2027 /** 2028 * Executes task r in the caller's thread, unless the executor 2029 * has been shut down, in which case the task is discarded. 2030 * 2031 * @param r the runnable task requested to be executed 2032 * @param e the executor attempting to execute this task 2033 */ 2034 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) { 2035 if (!e.isShutdown()) { 2036 r.run(); 2037 } 2038 } 2039 } 2040 2041 /** 2042 * A handler for rejected tasks that throws a 2043 * {@link RejectedExecutionException}. 2044 * 2045 * This is the default handler for {@link ThreadPoolExecutor} and 2046 * {@link ScheduledThreadPoolExecutor}. 2047 */ 2048 public static class AbortPolicy implements RejectedExecutionHandler { 2049 /** 2050 * Creates an {@code AbortPolicy}. 2051 */ 2052 public AbortPolicy() { } 2053 2054 /** 2055 * Always throws RejectedExecutionException. 2056 * 2057 * @param r the runnable task requested to be executed 2058 * @param e the executor attempting to execute this task 2059 * @throws RejectedExecutionException always 2060 */ 2061 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) { 2062 throw new RejectedExecutionException("Task " + r.toString() + 2063 " rejected from " + 2064 e.toString()); 2065 } 2066 } 2067 2068 /** 2069 * A handler for rejected tasks that silently discards the 2070 * rejected task. 2071 */ 2072 public static class DiscardPolicy implements RejectedExecutionHandler { 2073 /** 2074 * Creates a {@code DiscardPolicy}. 2075 */ 2076 public DiscardPolicy() { } 2077 2078 /** 2079 * Does nothing, which has the effect of discarding task r. 2080 * 2081 * @param r the runnable task requested to be executed 2082 * @param e the executor attempting to execute this task 2083 */ 2084 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) { 2085 } 2086 } 2087 2088 /** 2089 * A handler for rejected tasks that discards the oldest unhandled 2090 * request and then retries {@code execute}, unless the executor 2091 * is shut down, in which case the task is discarded. This policy is 2092 * rarely useful in cases where other threads may be waiting for 2093 * tasks to terminate, or failures must be recorded. Instead consider 2094 * using a handler of the form: 2095 * <pre> {@code 2096 * new RejectedExecutionHandler() { 2097 * public void rejectedExecution(Runnable r, ThreadPoolExecutor e) { 2098 * Runnable dropped = e.getQueue().poll(); 2099 * if (dropped instanceof Future<?>) { 2100 * ((Future<?>)dropped).cancel(false); 2101 * // also consider logging the failure 2102 * } 2103 * e.execute(r); // retry 2104 * }}}</pre> 2105 */ 2106 public static class DiscardOldestPolicy implements RejectedExecutionHandler { 2107 /** 2108 * Creates a {@code DiscardOldestPolicy} for the given executor. 2109 */ 2110 public DiscardOldestPolicy() { } 2111 2112 /** 2113 * Obtains and ignores the next task that the executor 2114 * would otherwise execute, if one is immediately available, 2115 * and then retries execution of task r, unless the executor 2116 * is shut down, in which case task r is instead discarded. 2117 * 2118 * @param r the runnable task requested to be executed 2119 * @param e the executor attempting to execute this task 2120 */ 2121 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) { 2122 if (!e.isShutdown()) { 2123 e.getQueue().poll(); 2124 e.execute(r); 2125 } 2126 } 2127 } 2128 }