--- old/src/share/classes/java/util/concurrent/ThreadPoolExecutor.java
+++ new/src/share/classes/java/util/concurrent/ThreadPoolExecutor.java
1 1 /*
2 2 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
3 3 *
4 4 * This code is free software; you can redistribute it and/or modify it
5 5 * under the terms of the GNU General Public License version 2 only, as
6 6 * published by the Free Software Foundation. Oracle designates this
7 7 * particular file as subject to the "Classpath" exception as provided
8 8 * by Oracle in the LICENSE file that accompanied this code.
9 9 *
10 10 * This code is distributed in the hope that it will be useful, but WITHOUT
11 11 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
12 12 * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
13 13 * version 2 for more details (a copy is included in the LICENSE file that
14 14 * accompanied this code).
15 15 *
16 16 * You should have received a copy of the GNU General Public License version
17 17 * 2 along with this work; if not, write to the Free Software Foundation,
18 18 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
19 19 *
20 20 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
21 21 * or visit www.oracle.com if you need additional information or have any
22 22 * questions.
23 23 */
24 24
25 25 /*
26 26 * This file is available under and governed by the GNU General Public
27 27 * License version 2 only, as published by the Free Software Foundation.
28 28 * However, the following notice accompanied the original version of this
29 29 * file:
30 30 *
31 31 * Written by Doug Lea with assistance from members of JCP JSR-166
32 32 * Expert Group and released to the public domain, as explained at
33 33 * http://creativecommons.org/licenses/publicdomain
34 34 */
35 35
36 36 package java.util.concurrent;
37 37 import java.util.concurrent.locks.*;
38 38 import java.util.concurrent.atomic.*;
39 39 import java.util.*;
40 40
41 41 /**
42 42 * An {@link ExecutorService} that executes each submitted task using
43 43 * one of possibly several pooled threads, normally configured
44 44 * using {@link Executors} factory methods.
45 45 *
46 46 * <p>Thread pools address two different problems: they usually
47 47 * provide improved performance when executing large numbers of
48 48 * asynchronous tasks, due to reduced per-task invocation overhead,
49 49 * and they provide a means of bounding and managing the resources,
50 50 * including threads, consumed when executing a collection of tasks.
51 51 * Each {@code ThreadPoolExecutor} also maintains some basic
52 52 * statistics, such as the number of completed tasks.
53 53 *
54 54 * <p>To be useful across a wide range of contexts, this class
55 55 * provides many adjustable parameters and extensibility
56 56 * hooks. However, programmers are urged to use the more convenient
57 57 * {@link Executors} factory methods {@link
58 58 * Executors#newCachedThreadPool} (unbounded thread pool, with
59 59 * automatic thread reclamation), {@link Executors#newFixedThreadPool}
60 60 * (fixed size thread pool) and {@link
61 61 * Executors#newSingleThreadExecutor} (single background thread), that
62 62 * preconfigure settings for the most common usage
63 63 * scenarios. Otherwise, use the following guide when manually
64 64 * configuring and tuning this class:
65 65 *
66 66 * <dl>
67 67 *
68 68 * <dt>Core and maximum pool sizes</dt>
69 69 *
70 70 * <dd>A {@code ThreadPoolExecutor} will automatically adjust the
71 71 * pool size (see {@link #getPoolSize})
72 72 * according to the bounds set by
73 73 * corePoolSize (see {@link #getCorePoolSize}) and
74 74 * maximumPoolSize (see {@link #getMaximumPoolSize}).
75 75 *
76 76 * When a new task is submitted in method {@link #execute}, and fewer
77 77 * than corePoolSize threads are running, a new thread is created to
78 78 * handle the request, even if other worker threads are idle. If
79 79 * there are more than corePoolSize but fewer than maximumPoolSize
80 80 * threads running, a new thread will be created only if the queue is
81 81 * full. By setting corePoolSize and maximumPoolSize the same, you
82 82 * create a fixed-size thread pool. By setting maximumPoolSize to an
83 83 * essentially unbounded value such as {@code Integer.MAX_VALUE}, you
84 84 * allow the pool to accommodate an arbitrary number of concurrent
85 85 * tasks. Most typically, core and maximum pool sizes are set only
86 86 * upon construction, but they may also be changed dynamically using
87 87 * {@link #setCorePoolSize} and {@link #setMaximumPoolSize}. </dd>
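 *
 * <p>As a purely illustrative sketch (the sizes and the {@code pool}
 * variable below are arbitrary, not prescribed by this class), a pool with
 * a small core and a larger maximum could be configured and later retuned:
 *
 * <pre> {@code
 * ThreadPoolExecutor pool = new ThreadPoolExecutor(
 *     2, 8,                              // corePoolSize, maximumPoolSize
 *     60L, TimeUnit.SECONDS,             // keepAliveTime for excess threads
 *     new ArrayBlockingQueue<Runnable>(100));
 *
 * // Later, adjust the bounds dynamically:
 * pool.setCorePoolSize(4);
 * pool.setMaximumPoolSize(16);}</pre>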
88 88 *
89 89 * <dt>On-demand construction</dt>
90 90 *
91 91 * <dd> By default, even core threads are initially created and
92 92 * started only when new tasks arrive, but this can be overridden
93 93 * dynamically using method {@link #prestartCoreThread} or {@link
94 94 * #prestartAllCoreThreads}. You probably want to prestart threads if
95 95 * you construct the pool with a non-empty queue. </dd>
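 *
 * <p>For example (an illustrative sketch only; the task and sizes shown
 * are arbitrary), a pool built around an already-populated queue can start
 * its core threads immediately:
 *
 * <pre> {@code
 * BlockingQueue<Runnable> preloaded = new LinkedBlockingQueue<Runnable>();
 * preloaded.add(new Runnable() {
 *   public void run() { System.out.println("queued before the pool started"); }
 * });
 * ThreadPoolExecutor pool = new ThreadPoolExecutor(
 *     4, 4, 0L, TimeUnit.MILLISECONDS, preloaded);
 * pool.prestartAllCoreThreads(); // core threads begin draining the pre-filled queue}</pre>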
96 96 *
97 97 * <dt>Creating new threads</dt>
98 98 *
99 99 * <dd>New threads are created using a {@link ThreadFactory}. If not
100 100 * otherwise specified, a {@link Executors#defaultThreadFactory} is
101 101 * used, which creates threads all in the same {@link
102 102 * ThreadGroup} and with the same {@code NORM_PRIORITY} priority and
103 103 * non-daemon status. By supplying a different ThreadFactory, you can
104 104 * alter the thread's name, thread group, priority, daemon status,
105 105 * etc. If a {@code ThreadFactory} fails to create a thread when asked
106 106 * by returning null from {@code newThread}, the executor will
107 107 * continue, but might not be able to execute any tasks. Threads
108 108 * should possess the "modifyThread" {@code RuntimePermission}. If
109 109 * worker threads or other threads using the pool do not possess this
110 110 * permission, service may be degraded: configuration changes may not
111 111 * take effect in a timely manner, and a shutdown pool may remain in a
112 112 * state in which termination is possible but not completed.</dd>
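 *
 * <p>For illustration (the factory and thread names below are arbitrary
 * examples, not part of this API), a custom ThreadFactory might produce
 * named daemon threads:
 *
 * <pre> {@code
 * ThreadFactory daemonFactory = new ThreadFactory() {
 *   private final AtomicInteger counter = new AtomicInteger();
 *   public Thread newThread(Runnable r) {
 *     Thread t = new Thread(r, "pool-worker-" + counter.incrementAndGet());
 *     t.setDaemon(true);
 *     return t;
 *   }
 * };
 * ExecutorService pool = Executors.newFixedThreadPool(4, daemonFactory);}</pre>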
113 113 *
114 114 * <dt>Keep-alive times</dt>
115 115 *
116 116 * <dd>If the pool currently has more than corePoolSize threads,
117 117 * excess threads will be terminated if they have been idle for more
118 118 * than the keepAliveTime (see {@link #getKeepAliveTime}). This
119 119 * provides a means of reducing resource consumption when the pool is
120 120 * not being actively used. If the pool becomes more active later, new
121 121 * threads will be constructed. This parameter can also be changed
122 122 * dynamically using method {@link #setKeepAliveTime}. Using a value
123 123 * of {@code Long.MAX_VALUE} {@link TimeUnit#NANOSECONDS} effectively
124 124 * disables idle threads from ever terminating prior to shut down. By
125 125 * default, the keep-alive policy applies only when there are more
126 126 * than corePoolSize threads. But method {@link
127 127 * #allowCoreThreadTimeOut(boolean)} can be used to apply this
128 128 * time-out policy to core threads as well, so long as the
129 129 * keepAliveTime value is non-zero. </dd>
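 *
 * <p>As a small illustrative sketch (the values are arbitrary, and
 * {@code pool} is assumed to be an existing {@code ThreadPoolExecutor}),
 * the idle-thread policy can be retuned at run time:
 *
 * <pre> {@code
 * pool.setKeepAliveTime(30, TimeUnit.SECONDS); // idle excess threads exit after 30s
 * pool.allowCoreThreadTimeOut(true);           // let idle core threads time out as well}</pre>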
130 130 *
131 131 * <dt>Queuing</dt>
132 132 *
133 133 * <dd>Any {@link BlockingQueue} may be used to transfer and hold
134 134 * submitted tasks. The use of this queue interacts with pool sizing:
135 135 *
136 136 * <ul>
137 137 *
138 138 * <li> If fewer than corePoolSize threads are running, the Executor
139 139 * always prefers adding a new thread
140 140 * rather than queuing.</li>
141 141 *
142 142 * <li> If corePoolSize or more threads are running, the Executor
143 143 * always prefers queuing a request rather than adding a new
144 144 * thread.</li>
145 145 *
146 146 * <li> If a request cannot be queued, a new thread is created unless
147 147 * this would exceed maximumPoolSize, in which case, the task will be
148 148 * rejected.</li>
149 149 *
150 150 * </ul>
151 151 *
152 152 * There are three general strategies for queuing:
153 153 * <ol>
154 154 *
155 155 * <li> <em> Direct handoffs.</em> A good default choice for a work
156 156 * queue is a {@link SynchronousQueue} that hands off tasks to threads
157 157 * without otherwise holding them. Here, an attempt to queue a task
158 158 * will fail if no threads are immediately available to run it, so a
159 159 * new thread will be constructed. This policy avoids lockups when
160 160 * handling sets of requests that might have internal dependencies.
161 161 * Direct handoffs generally require unbounded maximumPoolSizes to
162 162 * avoid rejection of newly submitted tasks. This in turn admits the
163 163 * possibility of unbounded thread growth when commands continue to
164 164 * arrive on average faster than they can be processed. </li>
165 165 *
166 166 * <li><em> Unbounded queues.</em> Using an unbounded queue (for
167 167 * example a {@link LinkedBlockingQueue} without a predefined
168 168 * capacity) will cause new tasks to wait in the queue when all
169 169 * corePoolSize threads are busy. Thus, no more than corePoolSize
170 170 * threads will ever be created. (And the value of the maximumPoolSize
171 171 * therefore doesn't have any effect.) This may be appropriate when
172 172 * each task is completely independent of others, so tasks cannot
173 173 * affect each other's execution; for example, in a web page server.
174 174 * While this style of queuing can be useful in smoothing out
175 175 * transient bursts of requests, it admits the possibility of
176 176 * unbounded work queue growth when commands continue to arrive on
177 177 * average faster than they can be processed. </li>
178 178 *
179 179 * <li><em>Bounded queues.</em> A bounded queue (for example, an
180 180 * {@link ArrayBlockingQueue}) helps prevent resource exhaustion when
181 181 * used with finite maximumPoolSizes, but can be more difficult to
182 182 * tune and control. Queue sizes and maximum pool sizes may be traded
183 183 * off for each other: Using large queues and small pools minimizes
184 184 * CPU usage, OS resources, and context-switching overhead, but can
185 185 * lead to artificially low throughput. If tasks frequently block (for
186 186 * example if they are I/O bound), a system may be able to schedule
187 187 * time for more threads than you otherwise allow. Use of small queues
188 188 * generally requires larger pool sizes, which keeps CPUs busier but
189 189 * may encounter unacceptable scheduling overhead, which also
190 190 * decreases throughput. </li>
191 191 *
192 192 * </ol>
193 193 *
194 194 * </dd>
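 *
 * <p>For instance (an illustrative sketch; the capacities and sizes are
 * arbitrary), the bounded-queue and direct-handoff strategies above might
 * be configured as:
 *
 * <pre> {@code
 * // Bounded queue with a finite maximum pool size:
 * ThreadPoolExecutor bounded = new ThreadPoolExecutor(
 *     4, 16, 60L, TimeUnit.SECONDS,
 *     new ArrayBlockingQueue<Runnable>(100));
 *
 * // Direct handoff; maximum is effectively unbounded so submissions are not rejected:
 * ThreadPoolExecutor handoff = new ThreadPoolExecutor(
 *     0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS,
 *     new SynchronousQueue<Runnable>());}</pre>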
195 195 *
196 196 * <dt>Rejected tasks</dt>
197 197 *
198 198 * <dd> New tasks submitted in method {@link #execute} will be
199 199 * <em>rejected</em> when the Executor has been shut down, and also
200 200 * when the Executor uses finite bounds for both maximum threads and
201 201 * work queue capacity, and is saturated. In either case, the {@code
202 202 * execute} method invokes the {@link
203 203 * RejectedExecutionHandler#rejectedExecution} method of its {@link
204 204 * RejectedExecutionHandler}. Four predefined handler policies are
205 205 * provided:
206 206 *
207 207 * <ol>
208 208 *
209 209 * <li> In the default {@link ThreadPoolExecutor.AbortPolicy}, the
210 210 * handler throws a runtime {@link RejectedExecutionException} upon
211 211 * rejection. </li>
212 212 *
213 213 * <li> In {@link ThreadPoolExecutor.CallerRunsPolicy}, the thread
214 214 * that invokes {@code execute} itself runs the task. This provides a
215 215 * simple feedback control mechanism that will slow down the rate that
216 216 * new tasks are submitted. </li>
217 217 *
218 218 * <li> In {@link ThreadPoolExecutor.DiscardPolicy}, a task that
219 219 * cannot be executed is simply dropped. </li>
220 220 *
221 221 * <li>In {@link ThreadPoolExecutor.DiscardOldestPolicy}, if the
222 222 * executor is not shut down, the task at the head of the work queue
223 223 * is dropped, and then execution is retried (which can fail again,
224 224 * causing this to be repeated.) </li>
225 225 *
226 226 * </ol>
227 227 *
228 228 * It is possible to define and use other kinds of {@link
229 229 * RejectedExecutionHandler} classes. Doing so requires some care
230 230 * especially when policies are designed to work only under particular
231 231 * capacity or queuing policies. </dd>
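 *
 * <p>As an illustrative sketch (the sizes here are arbitrary), a saturation
 * policy can be chosen at construction or installed later:
 *
 * <pre> {@code
 * ThreadPoolExecutor pool = new ThreadPoolExecutor(
 *     2, 4, 60L, TimeUnit.SECONDS,
 *     new ArrayBlockingQueue<Runnable>(10),
 *     new ThreadPoolExecutor.CallerRunsPolicy()); // run in the submitting thread when saturated
 *
 * // Or swap the policy on a live pool:
 * pool.setRejectedExecutionHandler(new ThreadPoolExecutor.DiscardOldestPolicy());}</pre>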
232 232 *
233 233 * <dt>Hook methods</dt>
234 234 *
235 235 * <dd>This class provides {@code protected} overridable {@link
236 236 * #beforeExecute} and {@link #afterExecute} methods that are called
237 237 * before and after execution of each task. These can be used to
238 238 * manipulate the execution environment; for example, reinitializing
239 239 * ThreadLocals, gathering statistics, or adding log
240 240 * entries. Additionally, method {@link #terminated} can be overridden
241 241 * to perform any special processing that needs to be done once the
242 242 * Executor has fully terminated.
243 243 *
244 244 * <p>If hook or callback methods throw exceptions, internal worker
245 245 * threads may in turn fail and abruptly terminate.</dd>
246 246 *
247 247 * <dt>Queue maintenance</dt>
248 248 *
249 249 * <dd> Method {@link #getQueue} allows access to the work queue for
250 250 * purposes of monitoring and debugging. Use of this method for any
251 251 * other purpose is strongly discouraged. Two supplied methods,
252 252 * {@link #remove} and {@link #purge} are available to assist in
253 253 * storage reclamation when large numbers of queued tasks become
254 254 * cancelled.</dd>
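 *
 * <p>For example (an illustrative sketch; the pool configuration and the
 * no-op task are arbitrary):
 *
 * <pre> {@code
 * ThreadPoolExecutor pool = new ThreadPoolExecutor(
 *     1, 1, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
 * Future<?> queued = pool.submit(new Runnable() { public void run() { } });
 * queued.cancel(false);                 // cancelled, but may still occupy the queue
 * pool.purge();                         // reclaim storage used by cancelled tasks
 * int backlog = pool.getQueue().size(); // monitoring and debugging only}</pre>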
255 255 *
256 256 * <dt>Finalization</dt>
257 257 *
258 258 * <dd> A pool that is no longer referenced in a program <em>AND</em>
259 259 * has no remaining threads will be {@code shutdown} automatically. If
260 260 * you would like to ensure that unreferenced pools are reclaimed even
261 261 * if users forget to call {@link #shutdown}, then you must arrange
262 262 * that unused threads eventually die, by setting appropriate
263 263 * keep-alive times, using a lower bound of zero core threads and/or
264 264 * setting {@link #allowCoreThreadTimeOut(boolean)}. </dd>
265 265 *
266 266 * </dl>
267 267 *
268 268 * <p> <b>Extension example</b>. Most extensions of this class
269 269 * override one or more of the protected hook methods. For example,
270 270 * here is a subclass that adds a simple pause/resume feature:
271 271 *
272 272 * <pre> {@code
273 273 * class PausableThreadPoolExecutor extends ThreadPoolExecutor {
274 274 * private boolean isPaused;
275 275 * private ReentrantLock pauseLock = new ReentrantLock();
276 276 * private Condition unpaused = pauseLock.newCondition();
277 277 *
278 278 * public PausableThreadPoolExecutor(...) { super(...); }
279 279 *
280 280 * protected void beforeExecute(Thread t, Runnable r) {
281 281 * super.beforeExecute(t, r);
282 282 * pauseLock.lock();
283 283 * try {
284 284 * while (isPaused) unpaused.await();
285 285 * } catch (InterruptedException ie) {
286 286 * t.interrupt();
287 287 * } finally {
288 288 * pauseLock.unlock();
289 289 * }
290 290 * }
291 291 *
292 292 * public void pause() {
293 293 * pauseLock.lock();
294 294 * try {
295 295 * isPaused = true;
296 296 * } finally {
297 297 * pauseLock.unlock();
298 298 * }
299 299 * }
300 300 *
301 301 * public void resume() {
302 302 * pauseLock.lock();
303 303 * try {
304 304 * isPaused = false;
305 305 * unpaused.signalAll();
306 306 * } finally {
307 307 * pauseLock.unlock();
308 308 * }
309 309 * }
310 310 * }}</pre>
311 311 *
312 312 * @since 1.5
313 313 * @author Doug Lea
314 314 */
315 315 public class ThreadPoolExecutor extends AbstractExecutorService {
316 316 /**
317 317 * The main pool control state, ctl, is an atomic integer packing
318 318 * two conceptual fields
319 319 * workerCount, indicating the effective number of threads
320 320 * runState, indicating whether running, shutting down etc
321 321 *
322 322 * In order to pack them into one int, we limit workerCount to
323 323 * (2^29)-1 (about 500 million) threads rather than (2^31)-1 (2
324 324 * billion) otherwise representable. If this is ever an issue in
325 325 * the future, the variable can be changed to be an AtomicLong,
326 326 * and the shift/mask constants below adjusted. But until the need
327 327 * arises, this code is a bit faster and simpler using an int.
328 328 *
329 329 * The workerCount is the number of workers that have been
330 330 * permitted to start and not permitted to stop. The value may be
331 331 * transiently different from the actual number of live threads,
332 332 * for example when a ThreadFactory fails to create a thread when
333 333 * asked, and when exiting threads are still performing
334 334 * bookkeeping before terminating. The user-visible pool size is
335 335 * reported as the current size of the workers set.
336 336 *
337 337 * The runState provides the main lifecycle control, taking on values:
338 338 *
339 339 * RUNNING: Accept new tasks and process queued tasks
340 340 * SHUTDOWN: Don't accept new tasks, but process queued tasks
341 341 * STOP: Don't accept new tasks, don't process queued tasks,
342 342 * and interrupt in-progress tasks
343 343 * TIDYING: All tasks have terminated, workerCount is zero,
344 344 * the thread transitioning to state TIDYING
345 345 * will run the terminated() hook method
346 346 * TERMINATED: terminated() has completed
347 347 *
348 348 * The numerical order among these values matters, to allow
349 349 * ordered comparisons. The runState monotonically increases over
350 350 * time, but need not hit each state. The transitions are:
351 351 *
352 352 * RUNNING -> SHUTDOWN
353 353 * On invocation of shutdown(), perhaps implicitly in finalize()
354 354 * (RUNNING or SHUTDOWN) -> STOP
355 355 * On invocation of shutdownNow()
356 356 * SHUTDOWN -> TIDYING
357 357 * When both queue and pool are empty
358 358 * STOP -> TIDYING
359 359 * When pool is empty
360 360 * TIDYING -> TERMINATED
361 361 * When the terminated() hook method has completed
362 362 *
363 363 * Threads waiting in awaitTermination() will return when the
364 364 * state reaches TERMINATED.
365 365 *
366 366 * Detecting the transition from SHUTDOWN to TIDYING is less
367 367 * straightforward than you'd like because the queue may become
368 368 * empty after non-empty and vice versa during SHUTDOWN state, but
369 369 * we can only terminate if, after seeing that it is empty, we see
370 370 * that workerCount is 0 (which sometimes entails a recheck -- see
371 371 * below).
372 372 */
373 373 private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));
374 374 private static final int COUNT_BITS = Integer.SIZE - 3;
375 375 private static final int CAPACITY = (1 << COUNT_BITS) - 1;
376 376
377 377 // runState is stored in the high-order bits
378 378 private static final int RUNNING = -1 << COUNT_BITS;
379 379 private static final int SHUTDOWN = 0 << COUNT_BITS;
380 380 private static final int STOP = 1 << COUNT_BITS;
381 381 private static final int TIDYING = 2 << COUNT_BITS;
382 382 private static final int TERMINATED = 3 << COUNT_BITS;
383 383
384 384 // Packing and unpacking ctl
385 385 private static int runStateOf(int c) { return c & ~CAPACITY; }
386 386 private static int workerCountOf(int c) { return c & CAPACITY; }
387 387 private static int ctlOf(int rs, int wc) { return rs | wc; }
388 388
389 389 /*
390 390 * Bit field accessors that don't require unpacking ctl.
391 391 * These depend on the bit layout and on workerCount being never negative.
392 392 */
393 393
394 394 private static boolean runStateLessThan(int c, int s) {
395 395 return c < s;
396 396 }
397 397
398 398 private static boolean runStateAtLeast(int c, int s) {
399 399 return c >= s;
400 400 }
401 401
402 402 private static boolean isRunning(int c) {
403 403 return c < SHUTDOWN;
404 404 }
405 405
406 406 /**
407 407 * Attempt to CAS-increment the workerCount field of ctl.
408 408 */
409 409 private boolean compareAndIncrementWorkerCount(int expect) {
410 410 return ctl.compareAndSet(expect, expect + 1);
411 411 }
412 412
413 413 /**
414 414 * Attempt to CAS-decrement the workerCount field of ctl.
415 415 */
416 416 private boolean compareAndDecrementWorkerCount(int expect) {
417 417 return ctl.compareAndSet(expect, expect - 1);
418 418 }
419 419
420 420 /**
421 421 * Decrements the workerCount field of ctl. This is called only on
422 422 * abrupt termination of a thread (see processWorkerExit). Other
423 423 * decrements are performed within getTask.
424 424 */
425 425 private void decrementWorkerCount() {
426 426 do {} while (! compareAndDecrementWorkerCount(ctl.get()));
427 427 }
428 428
429 429 /**
430 430 * The queue used for holding tasks and handing off to worker
431 431 * threads. We do not require that workQueue.poll() returning
432 432 * null necessarily means that workQueue.isEmpty(), so we rely
433 433 * solely on isEmpty to see if the queue is empty (which we must
434 434 * do for example when deciding whether to transition from
435 435 * SHUTDOWN to TIDYING). This accommodates special-purpose
436 436 * queues such as DelayQueues for which poll() is allowed to
437 437 * return null even if it may later return non-null when delays
438 438 * expire.
439 439 */
440 440 private final BlockingQueue<Runnable> workQueue;
441 441
442 442 /**
443 443 * Lock held on access to workers set and related bookkeeping.
444 444 * While we could use a concurrent set of some sort, it turns out
445 445 * to be generally preferable to use a lock. Among the reasons is
446 446 * that this serializes interruptIdleWorkers, which avoids
447 447 * unnecessary interrupt storms, especially during shutdown.
448 448 * Otherwise exiting threads would concurrently interrupt those
449 449 * that have not yet interrupted. It also simplifies some of the
450 450 * associated statistics bookkeeping of largestPoolSize etc. We
451 451 * also hold mainLock on shutdown and shutdownNow, for the sake of
452 452 * ensuring workers set is stable while separately checking
453 453 * permission to interrupt and actually interrupting.
454 454 */
455 455 private final ReentrantLock mainLock = new ReentrantLock();
456 456
457 457 /**
458 458 * Set containing all worker threads in pool. Accessed only when
459 459 * holding mainLock.
460 460 */
461 461 private final HashSet<Worker> workers = new HashSet<Worker>();
462 462
463 463 /**
464 464 * Wait condition to support awaitTermination
465 465 */
466 466 private final Condition termination = mainLock.newCondition();
467 467
468 468 /**
469 469 * Tracks largest attained pool size. Accessed only under
470 470 * mainLock.
471 471 */
472 472 private int largestPoolSize;
473 473
474 474 /**
475 475 * Counter for completed tasks. Updated only on termination of
476 476 * worker threads. Accessed only under mainLock.
477 477 */
478 478 private long completedTaskCount;
479 479
480 480 /*
481 481 * All user control parameters are declared as volatiles so that
482 482 * ongoing actions are based on freshest values, but without need
483 483 * for locking, since no internal invariants depend on them
484 484 * changing synchronously with respect to other actions.
485 485 */
486 486
487 487 /**
488 488 * Factory for new threads. All threads are created using this
489 489 * factory (via method addWorker). All callers must be prepared
490 490 * for addWorker to fail, which may reflect a system or user's
491 491 * policy limiting the number of threads. Even though it is not
492 492 * treated as an error, failure to create threads may result in
493 493 * new tasks being rejected or existing ones remaining stuck in
494 494 * the queue. On the other hand, no special precautions exist to
495 495 * handle OutOfMemoryErrors that might be thrown while trying to
496 496 * create threads, since there is generally no recourse from
497 497 * within this class.
498 498 */
499 499 private volatile ThreadFactory threadFactory;
500 500
501 501 /**
502 502 * Handler called when saturated or shutdown in execute.
503 503 */
504 504 private volatile RejectedExecutionHandler handler;
505 505
506 506 /**
507 507 * Timeout in nanoseconds for idle threads waiting for work.
508 508 * Threads use this timeout when there are more than corePoolSize
509 509 * present or if allowCoreThreadTimeOut. Otherwise they wait
510 510 * forever for new work.
511 511 */
512 512 private volatile long keepAliveTime;
513 513
514 514 /**
515 515 * If false (default), core threads stay alive even when idle.
516 516 * If true, core threads use keepAliveTime to time out waiting
517 517 * for work.
518 518 */
519 519 private volatile boolean allowCoreThreadTimeOut;
520 520
521 521 /**
522 522 * Core pool size is the minimum number of workers to keep alive
523 523 * (and not allow to time out etc) unless allowCoreThreadTimeOut
524 524 * is set, in which case the minimum is zero.
525 525 */
526 526 private volatile int corePoolSize;
527 527
528 528 /**
529 529 * Maximum pool size. Note that the actual maximum is internally
530 530 * bounded by CAPACITY.
531 531 */
532 532 private volatile int maximumPoolSize;
533 533
534 534 /**
535 535 * The default rejected execution handler
536 536 */
537 537 private static final RejectedExecutionHandler defaultHandler =
538 538 new AbortPolicy();
539 539
540 540 /**
541 541 * Permission required for callers of shutdown and shutdownNow.
542 542 * We additionally require (see checkShutdownAccess) that callers
543 543 * have permission to actually interrupt threads in the worker set
544 544 * (as governed by Thread.interrupt, which relies on
545 545 * ThreadGroup.checkAccess, which in turn relies on
546 546 * SecurityManager.checkAccess). Shutdowns are attempted only if
547 547 * these checks pass.
548 548 *
549 549 * All actual invocations of Thread.interrupt (see
550 550 * interruptIdleWorkers and interruptWorkers) ignore
551 551 * SecurityExceptions, meaning that the attempted interrupts
552 552 * silently fail. In the case of shutdown, they should not fail
553 553 * unless the SecurityManager has inconsistent policies, sometimes
554 554 * allowing access to a thread and sometimes not. In such cases,
555 555 * failure to actually interrupt threads may disable or delay full
556 556 * termination. Other uses of interruptIdleWorkers are advisory,
557 557 * and failure to actually interrupt will merely delay response to
558 558 * configuration changes so is not handled exceptionally.
559 559 */
560 560 private static final RuntimePermission shutdownPerm =
561 561 new RuntimePermission("modifyThread");
562 562
563 563 /**
564 564 * Class Worker mainly maintains interrupt control state for
565 565 * threads running tasks, along with other minor bookkeeping.
566 566 * This class opportunistically extends AbstractQueuedSynchronizer
567 567 * to simplify acquiring and releasing a lock surrounding each
568 568 * task execution. This protects against interrupts that are
569 569 * intended to wake up a worker thread waiting for a task from
570 570 * instead interrupting a task being run. We implement a simple
571 571 * non-reentrant mutual exclusion lock rather than use ReentrantLock
572 572 * because we do not want worker tasks to be able to reacquire the
573 573 * lock when they invoke pool control methods like setCorePoolSize.
574 574 */
575 575 private final class Worker
576 576 extends AbstractQueuedSynchronizer
577 577 implements Runnable
578 578 {
579 579 /**
580 580 * This class will never be serialized, but we provide a
581 581 * serialVersionUID to suppress a javac warning.
582 582 */
583 583 private static final long serialVersionUID = 6138294804551838833L;
584 584
585 585 /** Thread this worker is running in. Null if factory fails. */
586 586 final Thread thread;
587 587 /** Initial task to run. Possibly null. */
588 588 Runnable firstTask;
589 589 /** Per-thread task counter */
590 590 volatile long completedTasks;
591 591
592 592 /**
593 593 * Creates with given first task and thread from ThreadFactory.
594 594 * @param firstTask the first task (null if none)
595 595 */
596 596 Worker(Runnable firstTask) {
597 597 this.firstTask = firstTask;
598 598 this.thread = getThreadFactory().newThread(this);
599 599 }
600 600
601 601 /** Delegates main run loop to outer runWorker */
602 602 public void run() {
603 603 runWorker(this);
604 604 }
605 605
606 606 // Lock methods
607 607 //
608 608 // The value 0 represents the unlocked state.
609 609 // The value 1 represents the locked state.
610 610
611 611 protected boolean isHeldExclusively() {
612 612 return getState() == 1;
613 613 }
614 614
615 615 protected boolean tryAcquire(int unused) {
616 616 if (compareAndSetState(0, 1)) {
617 617 setExclusiveOwnerThread(Thread.currentThread());
618 618 return true;
619 619 }
620 620 return false;
621 621 }
622 622
623 623 protected boolean tryRelease(int unused) {
624 624 setExclusiveOwnerThread(null);
625 625 setState(0);
626 626 return true;
627 627 }
628 628
629 629 public void lock() { acquire(1); }
630 630 public boolean tryLock() { return tryAcquire(1); }
631 631 public void unlock() { release(1); }
632 632 public boolean isLocked() { return isHeldExclusively(); }
633 633 }
634 634
635 635 /*
636 636 * Methods for setting control state
637 637 */
638 638
639 639 /**
640 640 * Transitions runState to given target, or leaves it alone if
641 641 * already at least the given target.
642 642 *
643 643 * @param targetState the desired state, either SHUTDOWN or STOP
644 644 * (but not TIDYING or TERMINATED -- use tryTerminate for that)
645 645 */
646 646 private void advanceRunState(int targetState) {
647 647 for (;;) {
648 648 int c = ctl.get();
649 649 if (runStateAtLeast(c, targetState) ||
650 650 ctl.compareAndSet(c, ctlOf(targetState, workerCountOf(c))))
651 651 break;
652 652 }
653 653 }
654 654
655 655 /**
656 656 * Transitions to TERMINATED state if either (SHUTDOWN and pool
657 657 * and queue empty) or (STOP and pool empty). If otherwise
658 658 * eligible to terminate but workerCount is nonzero, interrupts an
659 659 * idle worker to ensure that shutdown signals propagate. This
660 660 * method must be called following any action that might make
661 661 * termination possible -- reducing worker count or removing tasks
662 662 * from the queue during shutdown. The method is non-private to
663 663 * allow access from ScheduledThreadPoolExecutor.
664 664 */
665 665 final void tryTerminate() {
666 666 for (;;) {
667 667 int c = ctl.get();
668 668 if (isRunning(c) ||
669 669 runStateAtLeast(c, TIDYING) ||
670 670 (runStateOf(c) == SHUTDOWN && ! workQueue.isEmpty()))
671 671 return;
672 672 if (workerCountOf(c) != 0) { // Eligible to terminate
673 673 interruptIdleWorkers(ONLY_ONE);
674 674 return;
675 675 }
676 676
677 677 final ReentrantLock mainLock = this.mainLock;
678 678 mainLock.lock();
679 679 try {
680 680 if (ctl.compareAndSet(c, ctlOf(TIDYING, 0))) {
681 681 try {
682 682 terminated();
683 683 } finally {
684 684 ctl.set(ctlOf(TERMINATED, 0));
685 685 termination.signalAll();
686 686 }
687 687 return;
688 688 }
689 689 } finally {
690 690 mainLock.unlock();
691 691 }
692 692 // else retry on failed CAS
693 693 }
694 694 }
695 695
696 696 /*
697 697 * Methods for controlling interrupts to worker threads.
698 698 */
699 699
700 700 /**
701 701 * If there is a security manager, makes sure caller has
702 702 * permission to shut down threads in general (see shutdownPerm).
703 703 * If this passes, additionally makes sure the caller is allowed
704 704 * to interrupt each worker thread. This might not be true even if
705 705 * first check passed, if the SecurityManager treats some threads
706 706 * specially.
707 707 */
708 708 private void checkShutdownAccess() {
709 709 SecurityManager security = System.getSecurityManager();
710 710 if (security != null) {
711 711 security.checkPermission(shutdownPerm);
712 712 final ReentrantLock mainLock = this.mainLock;
713 713 mainLock.lock();
714 714 try {
715 715 for (Worker w : workers)
716 716 security.checkAccess(w.thread);
717 717 } finally {
718 718 mainLock.unlock();
719 719 }
720 720 }
721 721 }
722 722
723 723 /**
724 724 * Interrupts all threads, even if active. Ignores SecurityExceptions
725 725 * (in which case some threads may remain uninterrupted).
726 726 */
727 727 private void interruptWorkers() {
728 728 final ReentrantLock mainLock = this.mainLock;
729 729 mainLock.lock();
730 730 try {
731 731 for (Worker w : workers) {
732 732 try {
733 733 w.thread.interrupt();
734 734 } catch (SecurityException ignore) {
735 735 }
736 736 }
737 737 } finally {
738 738 mainLock.unlock();
739 739 }
740 740 }
741 741
742 742 /**
743 743 * Interrupts threads that might be waiting for tasks (as
744 744 * indicated by not being locked) so they can check for
745 745 * termination or configuration changes. Ignores
746 746 * SecurityExceptions (in which case some threads may remain
747 747 * uninterrupted).
748 748 *
749 749 * @param onlyOne If true, interrupt at most one worker. This is
750 750 * called only from tryTerminate when termination is otherwise
751 751 * enabled but there are still other workers. In this case, at
752 752 * most one waiting worker is interrupted to propagate shutdown
753 753 * signals in case all threads are currently waiting.
754 754 * Interrupting any arbitrary thread ensures that newly arriving
755 755 * workers since shutdown began will also eventually exit.
756 756 * To guarantee eventual termination, it suffices to always
757 757 * interrupt only one idle worker, but shutdown() interrupts all
758 758 * idle workers so that redundant workers exit promptly, not
759 759 * waiting for a straggler task to finish.
760 760 */
761 761 private void interruptIdleWorkers(boolean onlyOne) {
762 762 final ReentrantLock mainLock = this.mainLock;
763 763 mainLock.lock();
764 764 try {
765 765 for (Worker w : workers) {
766 766 Thread t = w.thread;
767 767 if (!t.isInterrupted() && w.tryLock()) {
768 768 try {
769 769 t.interrupt();
770 770 } catch (SecurityException ignore) {
771 771 } finally {
772 772 w.unlock();
773 773 }
774 774 }
775 775 if (onlyOne)
776 776 break;
777 777 }
778 778 } finally {
779 779 mainLock.unlock();
780 780 }
781 781 }
782 782
783 783 /**
784 784 * Common form of interruptIdleWorkers, to avoid having to
785 785 * remember what the boolean argument means.
786 786 */
787 787 private void interruptIdleWorkers() {
788 788 interruptIdleWorkers(false);
789 789 }
790 790
791 791 private static final boolean ONLY_ONE = true;
792 792
793 793 /**
794 794 * Ensures that unless the pool is stopping, the current thread
795 795 * does not have its interrupt set. This requires a double-check
796 796 * of state in case the interrupt was cleared concurrently with a
797 797 * shutdownNow -- if so, the interrupt is re-enabled.
798 798 */
799 799 private void clearInterruptsForTaskRun() {
800 800 if (runStateLessThan(ctl.get(), STOP) &&
801 801 Thread.interrupted() &&
802 802 runStateAtLeast(ctl.get(), STOP))
803 803 Thread.currentThread().interrupt();
804 804 }
805 805
806 806 /*
807 807 * Misc utilities, most of which are also exported to
808 808 * ScheduledThreadPoolExecutor
809 809 */
810 810
811 811 /**
812 812 * Invokes the rejected execution handler for the given command.
813 813 * Package-protected for use by ScheduledThreadPoolExecutor.
814 814 */
815 815 final void reject(Runnable command) {
816 816 handler.rejectedExecution(command, this);
817 817 }
818 818
819 819 /**
820 820 * Performs any further cleanup following run state transition on
821 821 * invocation of shutdown. A no-op here, but used by
822 822 * ScheduledThreadPoolExecutor to cancel delayed tasks.
823 823 */
824 824 void onShutdown() {
825 825 }
826 826
827 827 /**
828 828 * State check needed by ScheduledThreadPoolExecutor to
829 829 * enable running tasks during shutdown.
830 830 *
831 831 * @param shutdownOK true if this method should return true when in SHUTDOWN state
832 832 */
833 833 final boolean isRunningOrShutdown(boolean shutdownOK) {
834 834 int rs = runStateOf(ctl.get());
835 835 return rs == RUNNING || (rs == SHUTDOWN && shutdownOK);
836 836 }
837 837
838 838 /**
839 839 * Drains the task queue into a new list, normally using
840 840 * drainTo. But if the queue is a DelayQueue or any other kind of
841 841 * queue for which poll or drainTo may fail to remove some
842 842 * elements, it deletes them one by one.
843 843 */
844 844 private List<Runnable> drainQueue() {
845 845 BlockingQueue<Runnable> q = workQueue;
846 846 List<Runnable> taskList = new ArrayList<Runnable>();
847 847 q.drainTo(taskList);
848 848 if (!q.isEmpty()) {
849 849 for (Runnable r : q.toArray(new Runnable[0])) {
850 850 if (q.remove(r))
851 851 taskList.add(r);
852 852 }
853 853 }
854 854 return taskList;
855 855 }
856 856
857 857 /*
858 858 * Methods for creating, running and cleaning up after workers
859 859 */
860 860
861 861 /**
862 862 * Checks if a new worker can be added with respect to current
863 863 * pool state and the given bound (either core or maximum). If so,
864 864 * the worker count is adjusted accordingly, and, if possible, a
865 865 * new worker is created and started running firstTask as its
866 866 * first task. This method returns false if the pool is stopped or
867 867 * eligible to shut down. It also returns false if the thread
868 868 * factory fails to create a thread when asked, which requires a
869 869 * backout of workerCount, and a recheck for termination, in case
870 870 * the existence of this worker was holding up termination.
871 871 *
872 872 * @param firstTask the task the new thread should run first (or
873 873 * null if none). Workers are created with an initial first task
874 874 * (in method execute()) to bypass queuing when there are fewer
875 875 * than corePoolSize threads (in which case we always start one),
876 876 * or when the queue is full (in which case we must bypass queue).
877 877 * Initially idle threads are usually created via
878 878 * prestartCoreThread or to replace other dying workers.
879 879 *
880 880 * @param core if true use corePoolSize as bound, else
881 881 * maximumPoolSize. (A boolean indicator is used here rather than a
882 882 * value to ensure reads of fresh values after checking other pool
883 883 * state).
884 884 * @return true if successful
885 885 */
886 886 private boolean addWorker(Runnable firstTask, boolean core) {
887 887 retry:
888 888 for (;;) {
889 889 int c = ctl.get();
890 890 int rs = runStateOf(c);
891 891
892 892 // Check if queue empty only if necessary.
893 893 if (rs >= SHUTDOWN &&
894 894 ! (rs == SHUTDOWN &&
895 895 firstTask == null &&
896 896 ! workQueue.isEmpty()))
897 897 return false;
898 898
899 899 for (;;) {
900 900 int wc = workerCountOf(c);
901 901 if (wc >= CAPACITY ||
902 902 wc >= (core ? corePoolSize : maximumPoolSize))
903 903 return false;
904 904 if (compareAndIncrementWorkerCount(c))
905 905 break retry;
906 906 c = ctl.get(); // Re-read ctl
907 907 if (runStateOf(c) != rs)
908 908 continue retry;
909 909 // else CAS failed due to workerCount change; retry inner loop
910 910 }
911 911 }
912 912
913 913 Worker w = new Worker(firstTask);
914 914 Thread t = w.thread;
915 915
916 916 final ReentrantLock mainLock = this.mainLock;
917 917 mainLock.lock();
918 918 try {
919 919 // Recheck while holding lock.
920 920 // Back out on ThreadFactory failure or if
921 921 // shut down before lock acquired.
922 922 int c = ctl.get();
923 923 int rs = runStateOf(c);
924 924
925 925 if (t == null ||
926 926 (rs >= SHUTDOWN &&
927 927 ! (rs == SHUTDOWN &&
928 928 firstTask == null))) {
929 929 decrementWorkerCount();
930 930 tryTerminate();
931 931 return false;
932 932 }
933 933
934 934 workers.add(w);
935 935
936 936 int s = workers.size();
937 937 if (s > largestPoolSize)
938 938 largestPoolSize = s;
939 939 } finally {
940 940 mainLock.unlock();
941 941 }
942 942
943 943 t.start();
944 944 // It is possible (but unlikely) for a thread to have been
945 945 // added to workers, but not yet started, during transition to
946 946 // STOP, which could result in a rare missed interrupt,
947 947 // because Thread.interrupt is not guaranteed to have any effect
948 948 * // on a not-yet-started Thread (see Thread#interrupt).
949 949 if (runStateOf(ctl.get()) == STOP && ! t.isInterrupted())
950 950 t.interrupt();
951 951
952 952 return true;
953 953 }
954 954
955 955 /**
956 956 * Performs cleanup and bookkeeping for a dying worker. Called
957 957 * only from worker threads. Unless completedAbruptly is set,
958 958 * assumes that workerCount has already been adjusted to account
959 959 * for exit. This method removes thread from worker set, and
960 960 * possibly terminates the pool or replaces the worker if either
961 961 * it exited due to user task exception or if fewer than
962 962 * corePoolSize workers are running or queue is non-empty but
963 963 * there are no workers.
964 964 *
965 965 * @param w the worker
966 966 * @param completedAbruptly if the worker died due to user exception
967 967 */
968 968 private void processWorkerExit(Worker w, boolean completedAbruptly) {
969 969 if (completedAbruptly) // If abrupt, then workerCount wasn't adjusted
970 970 decrementWorkerCount();
971 971
972 972 final ReentrantLock mainLock = this.mainLock;
973 973 mainLock.lock();
974 974 try {
975 975 completedTaskCount += w.completedTasks;
976 976 workers.remove(w);
977 977 } finally {
978 978 mainLock.unlock();
979 979 }
980 980
981 981 tryTerminate();
982 982
983 983 int c = ctl.get();
984 984 if (runStateLessThan(c, STOP)) {
985 985 if (!completedAbruptly) {
986 986 int min = allowCoreThreadTimeOut ? 0 : corePoolSize;
987 987 if (min == 0 && ! workQueue.isEmpty())
988 988 min = 1;
989 989 if (workerCountOf(c) >= min)
990 990 return; // replacement not needed
991 991 }
992 992 addWorker(null, false);
993 993 }
994 994 }
995 995
996 996 /**
997 997 * Performs blocking or timed wait for a task, depending on
998 998 * current configuration settings, or returns null if this worker
999 999 * must exit because of any of:
1000 1000 * 1. There are more than maximumPoolSize workers (due to
1001 1001 * a call to setMaximumPoolSize).
1002 1002 * 2. The pool is stopped.
1003 1003 * 3. The pool is shutdown and the queue is empty.
1004 1004 * 4. This worker timed out waiting for a task, and timed-out
1005 1005 * workers are subject to termination (that is,
1006 1006 * {@code allowCoreThreadTimeOut || workerCount > corePoolSize})
1007 1007 * both before and after the timed wait.
1008 1008 *
1009 1009 * @return task, or null if the worker must exit, in which case
1010 1010 * workerCount is decremented
1011 1011 */
1012 1012 private Runnable getTask() {
1013 1013 boolean timedOut = false; // Did the last poll() time out?
1014 1014
1015 1015 retry:
1016 1016 for (;;) {
1017 1017 int c = ctl.get();
1018 1018 int rs = runStateOf(c);
1019 1019
1020 1020 // Check if queue empty only if necessary.
1021 1021 if (rs >= SHUTDOWN && (rs >= STOP || workQueue.isEmpty())) {
1022 1022 decrementWorkerCount();
1023 1023 return null;
1024 1024 }
1025 1025
1026 1026 boolean timed; // Are workers subject to culling?
1027 1027
1028 1028 for (;;) {
1029 1029 int wc = workerCountOf(c);
1030 1030 timed = allowCoreThreadTimeOut || wc > corePoolSize;
1031 1031
1032 1032 if (wc <= maximumPoolSize && ! (timedOut && timed))
1033 1033 break;
1034 1034 if (compareAndDecrementWorkerCount(c))
1035 1035 return null;
1036 1036 c = ctl.get(); // Re-read ctl
1037 1037 if (runStateOf(c) != rs)
1038 1038 continue retry;
1039 1039 // else CAS failed due to workerCount change; retry inner loop
1040 1040 }
1041 1041
1042 1042 try {
1043 1043 Runnable r = timed ?
1044 1044 workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) :
1045 1045 workQueue.take();
1046 1046 if (r != null)
1047 1047 return r;
1048 1048 timedOut = true;
1049 1049 } catch (InterruptedException retry) {
1050 1050 timedOut = false;
1051 1051 }
1052 1052 }
1053 1053 }
1054 1054
1055 1055 /**
1056 1056 * Main worker run loop. Repeatedly gets tasks from queue and
1057 1057 * executes them, while coping with a number of issues:
1058 1058 *
1059 1059 * 1. We may start out with an initial task, in which case we
1060 1060 * don't need to get the first one. Otherwise, as long as pool is
1061 1061 * running, we get tasks from getTask. If it returns null then the
1062 1062 * worker exits due to changed pool state or configuration
1063 1063 * parameters. Other exits result from exception throws in
1064 1064 * external code, in which case completedAbruptly holds, which
1065 1065 * usually leads processWorkerExit to replace this thread.
1066 1066 *
1067 1067 * 2. Before running any task, the lock is acquired to prevent
1068 1068 * other pool interrupts while the task is executing, and
1069 1069 * clearInterruptsForTaskRun called to ensure that unless pool is
1070 1070 * stopping, this thread does not have its interrupt set.
1071 1071 *
1072 1072 * 3. Each task run is preceded by a call to beforeExecute, which
1073 1073 * might throw an exception, in which case we cause thread to die
1074 1074 * (breaking loop with completedAbruptly true) without processing
1075 1075 * the task.
1076 1076 *
1077 1077 * 4. Assuming beforeExecute completes normally, we run the task,
1078 1078 * gathering any of its thrown exceptions to send to
1079 1079 * afterExecute. We separately handle RuntimeException, Error
1080 1080 * (both of which the specs guarantee that we trap) and arbitrary
1081 1081 * Throwables. Because we cannot rethrow Throwables within
1082 1082 * Runnable.run, we wrap them within Errors on the way out (to the
1083 1083 * thread's UncaughtExceptionHandler). Any thrown exception also
1084 1084 * conservatively causes thread to die.
1085 1085 *
1086 1086 * 5. After task.run completes, we call afterExecute, which may
1087 1087 * also throw an exception, which will also cause thread to
1088 1088 * die. According to JLS Sec 14.20, this exception is the one that
1089 1089 * will be in effect even if task.run throws.
1090 1090 *
1091 1091 * The net effect of the exception mechanics is that afterExecute
1092 1092 * and the thread's UncaughtExceptionHandler have as accurate
1093 1093 * information as we can provide about any problems encountered by
1094 1094 * user code.
1095 1095 *
1096 1096 * @param w the worker
1097 1097 */
1098 1098 final void runWorker(Worker w) {
1099 1099 Runnable task = w.firstTask;
1100 1100 w.firstTask = null;
1101 1101 boolean completedAbruptly = true;
1102 1102 try {
1103 1103 while (task != null || (task = getTask()) != null) {
1104 1104 w.lock();
1105 1105 clearInterruptsForTaskRun();
1106 1106 try {
1107 1107 beforeExecute(w.thread, task);
1108 1108 Throwable thrown = null;
1109 1109 try {
1110 1110 task.run();
1111 1111 } catch (RuntimeException x) {
1112 1112 thrown = x; throw x;
1113 1113 } catch (Error x) {
1114 1114 thrown = x; throw x;
1115 1115 } catch (Throwable x) {
1116 1116 thrown = x; throw new Error(x);
1117 1117 } finally {
1118 1118 afterExecute(task, thrown);
1119 1119 }
1120 1120 } finally {
1121 1121 task = null;
1122 1122 w.completedTasks++;
1123 1123 w.unlock();
1124 1124 }
1125 1125 }
1126 1126 completedAbruptly = false;
1127 1127 } finally {
1128 1128 processWorkerExit(w, completedAbruptly);
1129 1129 }
1130 1130 }
1131 1131
1132 1132 // Public constructors and methods
1133 1133
1134 1134 /**
1135 1135 * Creates a new {@code ThreadPoolExecutor} with the given initial
1136 1136 * parameters and default thread factory and rejected execution handler.
1137 1137 * It may be more convenient to use one of the {@link Executors} factory
1138 1138 * methods instead of this general purpose constructor.
1139 1139 *
1140 1140 * @param corePoolSize the number of threads to keep in the pool, even
1141 1141 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1142 1142 * @param maximumPoolSize the maximum number of threads to allow in the
1143 1143 * pool
1144 1144 * @param keepAliveTime when the number of threads is greater than
1145 1145 * the core, this is the maximum time that excess idle threads
1146 1146 * will wait for new tasks before terminating.
1147 1147 * @param unit the time unit for the {@code keepAliveTime} argument
1148 1148 * @param workQueue the queue to use for holding tasks before they are
1149 1149 * executed. This queue will hold only the {@code Runnable}
1150 1150 * tasks submitted by the {@code execute} method.
1151 1151 * @throws IllegalArgumentException if one of the following holds:<br>
1152 1152 * {@code corePoolSize < 0}<br>
1153 1153 * {@code keepAliveTime < 0}<br>
1154 1154 * {@code maximumPoolSize <= 0}<br>
1155 1155 * {@code maximumPoolSize < corePoolSize}
1156 1156 * @throws NullPointerException if {@code workQueue} is null
1157 1157 */
1158 1158 public ThreadPoolExecutor(int corePoolSize,
1159 1159 int maximumPoolSize,
1160 1160 long keepAliveTime,
1161 1161 TimeUnit unit,
1162 1162 BlockingQueue<Runnable> workQueue) {
1163 1163 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1164 1164 Executors.defaultThreadFactory(), defaultHandler);
1165 1165 }
1166 1166
1167 1167 /**
1168 1168 * Creates a new {@code ThreadPoolExecutor} with the given initial
1169 1169 * parameters and default rejected execution handler.
1170 1170 *
1171 1171 * @param corePoolSize the number of threads to keep in the pool, even
1172 1172 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1173 1173 * @param maximumPoolSize the maximum number of threads to allow in the
1174 1174 * pool
1175 1175 * @param keepAliveTime when the number of threads is greater than
1176 1176 * the core, this is the maximum time that excess idle threads
1177 1177 * will wait for new tasks before terminating.
1178 1178 * @param unit the time unit for the {@code keepAliveTime} argument
1179 1179 * @param workQueue the queue to use for holding tasks before they are
1180 1180 * executed. This queue will hold only the {@code Runnable}
1181 1181 * tasks submitted by the {@code execute} method.
1182 1182 * @param threadFactory the factory to use when the executor
1183 1183 * creates a new thread
1184 1184 * @throws IllegalArgumentException if one of the following holds:<br>
1185 1185 * {@code corePoolSize < 0}<br>
1186 1186 * {@code keepAliveTime < 0}<br>
1187 1187 * {@code maximumPoolSize <= 0}<br>
1188 1188 * {@code maximumPoolSize < corePoolSize}
1189 1189 * @throws NullPointerException if {@code workQueue}
1190 1190 * or {@code threadFactory} is null
1191 1191 */
1192 1192 public ThreadPoolExecutor(int corePoolSize,
1193 1193 int maximumPoolSize,
1194 1194 long keepAliveTime,
1195 1195 TimeUnit unit,
1196 1196 BlockingQueue<Runnable> workQueue,
1197 1197 ThreadFactory threadFactory) {
1198 1198 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1199 1199 threadFactory, defaultHandler);
1200 1200 }
1201 1201
1202 1202 /**
1203 1203 * Creates a new {@code ThreadPoolExecutor} with the given initial
1204 1204 * parameters and default thread factory.
1205 1205 *
1206 1206 * @param corePoolSize the number of threads to keep in the pool, even
1207 1207 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1208 1208 * @param maximumPoolSize the maximum number of threads to allow in the
1209 1209 * pool
1210 1210 * @param keepAliveTime when the number of threads is greater than
1211 1211 * the core, this is the maximum time that excess idle threads
1212 1212 * will wait for new tasks before terminating.
1213 1213 * @param unit the time unit for the {@code keepAliveTime} argument
1214 1214 * @param workQueue the queue to use for holding tasks before they are
1215 1215 * executed. This queue will hold only the {@code Runnable}
1216 1216 * tasks submitted by the {@code execute} method.
1217 1217 * @param handler the handler to use when execution is blocked
1218 1218 * because the thread bounds and queue capacities are reached
1219 1219 * @throws IllegalArgumentException if one of the following holds:<br>
1220 1220 * {@code corePoolSize < 0}<br>
1221 1221 * {@code keepAliveTime < 0}<br>
1222 1222 * {@code maximumPoolSize <= 0}<br>
1223 1223 * {@code maximumPoolSize < corePoolSize}
1224 1224 * @throws NullPointerException if {@code workQueue}
1225 1225 * or {@code handler} is null
1226 1226 */
1227 1227 public ThreadPoolExecutor(int corePoolSize,
1228 1228 int maximumPoolSize,
1229 1229 long keepAliveTime,
1230 1230 TimeUnit unit,
1231 1231 BlockingQueue<Runnable> workQueue,
1232 1232 RejectedExecutionHandler handler) {
1233 1233 this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
1234 1234 Executors.defaultThreadFactory(), handler);
1235 1235 }
1236 1236
1237 1237 /**
1238 1238 * Creates a new {@code ThreadPoolExecutor} with the given initial
1239 1239 * parameters.
1240 1240 *
1241 1241 * @param corePoolSize the number of threads to keep in the pool, even
1242 1242 * if they are idle, unless {@code allowCoreThreadTimeOut} is set
1243 1243 * @param maximumPoolSize the maximum number of threads to allow in the
1244 1244 * pool
1245 1245 * @param keepAliveTime when the number of threads is greater than
1246 1246 * the core, this is the maximum time that excess idle threads
1247 1247 * will wait for new tasks before terminating.
1248 1248 * @param unit the time unit for the {@code keepAliveTime} argument
1249 1249 * @param workQueue the queue to use for holding tasks before they are
1250 1250 * executed. This queue will hold only the {@code Runnable}
1251 1251 * tasks submitted by the {@code execute} method.
1252 1252 * @param threadFactory the factory to use when the executor
1253 1253 * creates a new thread
1254 1254 * @param handler the handler to use when execution is blocked
1255 1255 * because the thread bounds and queue capacities are reached
1256 1256 * @throws IllegalArgumentException if one of the following holds:<br>
1257 1257 * {@code corePoolSize < 0}<br>
1258 1258 * {@code keepAliveTime < 0}<br>
1259 1259 * {@code maximumPoolSize <= 0}<br>
1260 1260 * {@code maximumPoolSize < corePoolSize}
1261 1261 * @throws NullPointerException if {@code workQueue}
1262 1262 * or {@code threadFactory} or {@code handler} is null
1263 1263 */
1264 1264 public ThreadPoolExecutor(int corePoolSize,
1265 1265 int maximumPoolSize,
1266 1266 long keepAliveTime,
1267 1267 TimeUnit unit,
1268 1268 BlockingQueue<Runnable> workQueue,
1269 1269 ThreadFactory threadFactory,
1270 1270 RejectedExecutionHandler handler) {
1271 1271 if (corePoolSize < 0 ||
1272 1272 maximumPoolSize <= 0 ||
1273 1273 maximumPoolSize < corePoolSize ||
1274 1274 keepAliveTime < 0)
1275 1275 throw new IllegalArgumentException();
1276 1276 if (workQueue == null || threadFactory == null || handler == null)
1277 1277 throw new NullPointerException();
1278 1278 this.corePoolSize = corePoolSize;
1279 1279 this.maximumPoolSize = maximumPoolSize;
1280 1280 this.workQueue = workQueue;
1281 1281 this.keepAliveTime = unit.toNanos(keepAliveTime);
1282 1282 this.threadFactory = threadFactory;
1283 1283 this.handler = handler;
1284 1284 }
1285 1285
1286 1286 /**
1287 1287 * Executes the given task sometime in the future. The task
1288 1288 * may execute in a new thread or in an existing pooled thread.
1289 1289 *
1290 1290 * If the task cannot be submitted for execution, either because this
1291 1291 * executor has been shutdown or because its capacity has been reached,
1292 1292 * the task is handled by the current {@code RejectedExecutionHandler}.
1293 1293 *
1294 1294 * @param command the task to execute
1295 1295 * @throws RejectedExecutionException at discretion of
1296 1296 * {@code RejectedExecutionHandler}, if the task
1297 1297 * cannot be accepted for execution
1298 1298 * @throws NullPointerException if {@code command} is null
1299 1299 */
1300 1300 public void execute(Runnable command) {
1301 1301 if (command == null)
1302 1302 throw new NullPointerException();
1303 1303 /*
1304 1304 * Proceed in 3 steps:
1305 1305 *
1306 1306 * 1. If fewer than corePoolSize threads are running, try to
1307 1307 * start a new thread with the given command as its first
1308 1308 * task. The call to addWorker atomically checks runState and
1309 1309 * workerCount, and so prevents false alarms that would add
1310 1310 * threads when it shouldn't, by returning false.
1311 1311 *
1312 1312 * 2. If a task can be successfully queued, then we still need
1313 1313 * to double-check whether we should have added a thread
1314 1314 * (because existing ones died since last checking) or that
1315 1315 * the pool shut down since entry into this method. So we
1316 1316 * recheck state and if necessary roll back the enqueuing if
1317 1317 * stopped, or start a new thread if there are none.
1318 1318 *
1319 1319 * 3. If we cannot queue task, then we try to add a new
1320 1320 * thread. If it fails, we know we are shut down or saturated
1321 1321 * and so reject the task.
1322 1322 */
1323 1323 int c = ctl.get();
1324 1324 if (workerCountOf(c) < corePoolSize) {
1325 1325 if (addWorker(command, true))
1326 1326 return;
1327 1327 c = ctl.get();
1328 1328 }
1329 1329 if (isRunning(c) && workQueue.offer(command)) {
1330 1330 int recheck = ctl.get();
1331 1331 if (! isRunning(recheck) && remove(command))
1332 1332 reject(command);
1333 1333 else if (workerCountOf(recheck) == 0)
1334 1334 addWorker(null, false);
1335 1335 }
1336 1336 else if (!addWorker(command, false))
1337 1337 reject(command);
1338 1338 }
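As a usage sketch (reusing the pool variable from the constructor example above, which installs the default AbortPolicy), a caller submits work through execute and sees a RejectedExecutionException only if the installed handler chooses to throw one:

    Runnable task = new Runnable() {
        public void run() {
            System.out.println("ran on " + Thread.currentThread().getName());
        }
    };
    try {
        pool.execute(task);
    } catch (RejectedExecutionException rex) {
        // Reached only with a throwing handler such as AbortPolicy;
        // other handlers may run, discard, or retry the task instead.
        System.err.println("task rejected: " + rex);
    }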
1339 1339
1340 1340 /**
1341 1341 * Initiates an orderly shutdown in which previously submitted
1342 1342 * tasks are executed, but no new tasks will be accepted.
1343 1343 * Invocation has no additional effect if already shut down.
1344 1344 *
1345 1345 * <p>This method does not wait for previously submitted tasks to
1346 1346 * complete execution. Use {@link #awaitTermination awaitTermination}
1347 1347 * to do that.
1348 1348 *
1349 1349 * @throws SecurityException {@inheritDoc}
1350 1350 */
1351 1351 public void shutdown() {
1352 1352 final ReentrantLock mainLock = this.mainLock;
1353 1353 mainLock.lock();
1354 1354 try {
1355 1355 checkShutdownAccess();
1356 1356 advanceRunState(SHUTDOWN);
1357 1357 interruptIdleWorkers();
1358 1358 onShutdown(); // hook for ScheduledThreadPoolExecutor
1359 1359 } finally {
1360 1360 mainLock.unlock();
1361 1361 }
1362 1362 tryTerminate();
1363 1363 }
1364 1364
1365 1365 /**
1366 1366 * Attempts to stop all actively executing tasks, halts the
1367 1367 * processing of waiting tasks, and returns a list of the tasks
1368 1368 * that were awaiting execution. These tasks are drained (removed)
1369 1369 * from the task queue upon return from this method.
1370 1370 *
1371 1371 * <p>This method does not wait for actively executing tasks to
1372 1372 * terminate. Use {@link #awaitTermination awaitTermination} to
1373 1373 * do that.
1374 1374 *
1375 1375 * <p>There are no guarantees beyond best-effort attempts to stop
1376 1376 * processing actively executing tasks. This implementation
1377 1377 * cancels tasks via {@link Thread#interrupt}, so any task that
1378 1378 * fails to respond to interrupts may never terminate.
1379 1379 *
1380 1380 * @throws SecurityException {@inheritDoc}
1381 1381 */
1382 1382 public List<Runnable> shutdownNow() {
1383 1383 List<Runnable> tasks;
1384 1384 final ReentrantLock mainLock = this.mainLock;
1385 1385 mainLock.lock();
1386 1386 try {
1387 1387 checkShutdownAccess();
1388 1388 advanceRunState(STOP);
1389 1389 interruptWorkers();
1390 1390 tasks = drainQueue();
1391 1391 } finally {
1392 1392 mainLock.unlock();
1393 1393 }
1394 1394 tryTerminate();
1395 1395 return tasks;
1396 1396 }
1397 1397
1398 1398 public boolean isShutdown() {
1399 1399 return ! isRunning(ctl.get());
1400 1400 }
1401 1401
1402 1402 /**
1403 1403 * Returns true if this executor is in the process of terminating
1404 1404 * after {@link #shutdown} or {@link #shutdownNow} but has not
1405 1405 * completely terminated. This method may be useful for
1406 1406 * debugging. A return of {@code true} reported a sufficient
1407 1407 * period after shutdown may indicate that submitted tasks have
1408 1408 * ignored or suppressed interruption, causing this executor not
1409 1409 * to properly terminate.
1410 1410 *
1411 1411 * @return true if terminating but not yet terminated
1412 1412 */
1413 1413 public boolean isTerminating() {
1414 1414 int c = ctl.get();
1415 1415 return ! isRunning(c) && runStateLessThan(c, TERMINATED);
1416 1416 }
1417 1417
1418 1418 public boolean isTerminated() {
1419 1419 return runStateAtLeast(ctl.get(), TERMINATED);
1420 1420 }
1421 1421
1422 1422 public boolean awaitTermination(long timeout, TimeUnit unit)
1423 1423 throws InterruptedException {
1424 1424 long nanos = unit.toNanos(timeout);
1425 1425 final ReentrantLock mainLock = this.mainLock;
1426 1426 mainLock.lock();
1427 1427 try {
1428 1428 for (;;) {
1429 1429 if (runStateAtLeast(ctl.get(), TERMINATED))
1430 1430 return true;
1431 1431 if (nanos <= 0)
1432 1432 return false;
1433 1433 nanos = termination.awaitNanos(nanos);
1434 1434 }
1435 1435 } finally {
1436 1436 mainLock.unlock();
1437 1437 }
1438 1438 }
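The methods above combine into the usual two-phase termination pattern (essentially the one suggested in the ExecutorService documentation); the 60-second grace periods are arbitrary illustrative choices:

    static void shutdownAndAwaitTermination(ExecutorService pool) {
        pool.shutdown();                      // stop accepting new tasks
        try {
            if (!pool.awaitTermination(60, TimeUnit.SECONDS)) {
                pool.shutdownNow();           // interrupt lingering tasks
                if (!pool.awaitTermination(60, TimeUnit.SECONDS))
                    System.err.println("Pool did not terminate");
            }
        } catch (InterruptedException ie) {
            pool.shutdownNow();               // re-cancel if this thread is interrupted
            Thread.currentThread().interrupt();
        }
    }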
1439 1439
1440 1440 /**
1441 1441 * Invokes {@code shutdown} when this executor is no longer
1442 1442 * referenced and it has no threads.
1443 1443 */
1444 1444 protected void finalize() {
1445 1445 shutdown();
1446 1446 }
1447 1447
1448 1448 /**
1449 1449 * Sets the thread factory used to create new threads.
1450 1450 *
1451 1451 * @param threadFactory the new thread factory
1452 1452 * @throws NullPointerException if threadFactory is null
1453 1453 * @see #getThreadFactory
1454 1454 */
1455 1455 public void setThreadFactory(ThreadFactory threadFactory) {
1456 1456 if (threadFactory == null)
1457 1457 throw new NullPointerException();
1458 1458 this.threadFactory = threadFactory;
1459 1459 }
1460 1460
1461 1461 /**
1462 1462 * Returns the thread factory used to create new threads.
1463 1463 *
1464 1464 * @return the current thread factory
1465 1465 * @see #setThreadFactory
1466 1466 */
1467 1467 public ThreadFactory getThreadFactory() {
1468 1468 return threadFactory;
1469 1469 }
1470 1470
1471 1471 /**
1472 1472 * Sets a new handler for unexecutable tasks.
1473 1473 *
1474 1474 * @param handler the new handler
1475 1475 * @throws NullPointerException if handler is null
1476 1476 * @see #getRejectedExecutionHandler
1477 1477 */
1478 1478 public void setRejectedExecutionHandler(RejectedExecutionHandler handler) {
1479 1479 if (handler == null)
1480 1480 throw new NullPointerException();
1481 1481 this.handler = handler;
1482 1482 }
1483 1483
1484 1484 /**
1485 1485 * Returns the current handler for unexecutable tasks.
1486 1486 *
1487 1487 * @return the current handler
1488 1488 * @see #setRejectedExecutionHandler
1489 1489 */
1490 1490 public RejectedExecutionHandler getRejectedExecutionHandler() {
1491 1491 return handler;
1492 1492 }
1493 1493
1494 1494 /**
1495 1495 * Sets the core number of threads. This overrides any value set
1496 1496 * in the constructor. If the new value is smaller than the
1497 1497 * current value, excess existing threads will be terminated when
1498 1498 * they next become idle. If larger, new threads will, if needed,
1499 1499 * be started to execute any queued tasks.
1500 1500 *
1501 1501 * @param corePoolSize the new core size
1502 1502 * @throws IllegalArgumentException if {@code corePoolSize < 0}
1503 1503 * @see #getCorePoolSize
1504 1504 */
1505 1505 public void setCorePoolSize(int corePoolSize) {
1506 1506 if (corePoolSize < 0)
1507 1507 throw new IllegalArgumentException();
1508 1508 int delta = corePoolSize - this.corePoolSize;
1509 1509 this.corePoolSize = corePoolSize;
1510 1510 if (workerCountOf(ctl.get()) > corePoolSize)
1511 1511 interruptIdleWorkers();
1512 1512 else if (delta > 0) {
1513 1513 // We don't really know how many new threads are "needed".
1514 1514 // As a heuristic, prestart enough new workers (up to new
1515 1515 // core size) to handle the current number of tasks in
1516 1516 // queue, but stop if queue becomes empty while doing so.
1517 1517 int k = Math.min(delta, workQueue.size());
1518 1518 while (k-- > 0 && addWorker(null, true)) {
1519 1519 if (workQueue.isEmpty())
1520 1520 break;
1521 1521 }
1522 1522 }
1523 1523 }
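A brief sketch of resizing a live pool (the sizes are arbitrary); the maximum is raised before the core size when growing, and lowered after it when shrinking, so that setMaximumPoolSize never observes a maximum smaller than the core size:

    // Grow: raise the maximum first, then the core size.
    pool.setMaximumPoolSize(16);
    pool.setCorePoolSize(8);

    // Shrink later: lower the core size first, then the maximum.
    pool.setCorePoolSize(2);
    pool.setMaximumPoolSize(4);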
1524 1524
1525 1525 /**
1526 1526 * Returns the core number of threads.
1527 1527 *
1528 1528 * @return the core number of threads
1529 1529 * @see #setCorePoolSize
1530 1530 */
1531 1531 public int getCorePoolSize() {
1532 1532 return corePoolSize;
1533 1533 }
1534 1534
1535 1535 /**
1536 1536 * Starts a core thread, causing it to idly wait for work. This
1537 1537 * overrides the default policy of starting core threads only when
1538 1538 * new tasks are executed. This method will return {@code false}
1539 1539 * if all core threads have already been started.
1540 1540 *
1541 1541 * @return {@code true} if a thread was started
1542 1542 */
1543 1543 public boolean prestartCoreThread() {
1544 1544 return workerCountOf(ctl.get()) < corePoolSize &&
1545 1545 addWorker(null, true);
1546 1546 }
1547 1547
1548 1548 /**
1549 1549 * Starts all core threads, causing them to idly wait for work. This
1550 1550 * overrides the default policy of starting core threads only when
1551 1551 * new tasks are executed.
1552 1552 *
1553 1553 * @return the number of threads started
1554 1554 */
1555 1555 public int prestartAllCoreThreads() {
1556 1556 int n = 0;
1557 1557 while (addWorker(null, true))
1558 1558 ++n;
1559 1559 return n;
1560 1560 }
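A small sketch of warming up a fixed-size pool at startup rather than on first use (the size of 4 is an arbitrary illustration):

    ThreadPoolExecutor warm = new ThreadPoolExecutor(
        4, 4, 0L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<Runnable>());
    int started = warm.prestartAllCoreThreads();   // normally 4 on a fresh pool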
1561 1561
1562 1562 /**
1563 1563 * Returns true if this pool allows core threads to time out and
1564 1564 * terminate if no tasks arrive within the keepAlive time, being
1565 1565 * replaced if needed when new tasks arrive. When true, the same
1566 1566 * keep-alive policy applying to non-core threads applies also to
1567 1567 * core threads. When false (the default), core threads are never
1568 1568 * terminated due to lack of incoming tasks.
1569 1569 *
1570 1570 * @return {@code true} if core threads are allowed to time out,
1571 1571 * else {@code false}
1572 1572 *
1573 1573 * @since 1.6
1574 1574 */
1575 1575 public boolean allowsCoreThreadTimeOut() {
1576 1576 return allowCoreThreadTimeOut;
1577 1577 }
1578 1578
1579 1579 /**
1580 1580 * Sets the policy governing whether core threads may time out and
1581 1581 * terminate if no tasks arrive within the keep-alive time, being
1582 1582 * replaced if needed when new tasks arrive. When false, core
1583 1583 * threads are never terminated due to lack of incoming
1584 1584 * tasks. When true, the same keep-alive policy applying to
1585 1585 * non-core threads applies also to core threads. To avoid
1586 1586 * continual thread replacement, the keep-alive time must be
1587 1587 * greater than zero when setting {@code true}. This method
1588 1588 * should in general be called before the pool is actively used.
1589 1589 *
1590 1590 * @param value {@code true} if should time out, else {@code false}
1591 1591 * @throws IllegalArgumentException if value is {@code true}
1592 1592 * and the current keep-alive time is not greater than zero
1593 1593 *
1594 1594 * @since 1.6
1595 1595 */
1596 1596 public void allowCoreThreadTimeOut(boolean value) {
1597 1597 if (value && keepAliveTime <= 0)
1598 1598 throw new IllegalArgumentException("Core threads must have nonzero keep alive times");
1599 1599 if (value != allowCoreThreadTimeOut) {
1600 1600 allowCoreThreadTimeOut = value;
1601 1601 if (value)
1602 1602 interruptIdleWorkers();
1603 1603 }
1604 1604 }
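A minimal sketch of enabling core-thread timeout on a pool referenced as pool; the keep-alive time must already be positive, which is why it is set first here (30 seconds is an arbitrary choice):

    pool.setKeepAliveTime(30, TimeUnit.SECONDS);   // must be > 0 before enabling
    pool.allowCoreThreadTimeOut(true);             // idle core threads may now exit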
1605 1605
1606 1606 /**
1607 1607 * Sets the maximum allowed number of threads. This overrides any
1608 1608 * value set in the constructor. If the new value is smaller than
1609 1609 * the current value, excess existing threads will be
1610 1610 * terminated when they next become idle.
1611 1611 *
1612 1612 * @param maximumPoolSize the new maximum
1613 1613 * @throws IllegalArgumentException if the new maximum is
1614 1614 * less than or equal to zero, or
1615 1615 * less than the {@linkplain #getCorePoolSize core pool size}
1616 1616 * @see #getMaximumPoolSize
1617 1617 */
1618 1618 public void setMaximumPoolSize(int maximumPoolSize) {
1619 1619 if (maximumPoolSize <= 0 || maximumPoolSize < corePoolSize)
1620 1620 throw new IllegalArgumentException();
1621 1621 this.maximumPoolSize = maximumPoolSize;
1622 1622 if (workerCountOf(ctl.get()) > maximumPoolSize)
1623 1623 interruptIdleWorkers();
1624 1624 }
1625 1625
1626 1626 /**
1627 1627 * Returns the maximum allowed number of threads.
1628 1628 *
1629 1629 * @return the maximum allowed number of threads
1630 1630 * @see #setMaximumPoolSize
1631 1631 */
1632 1632 public int getMaximumPoolSize() {
1633 1633 return maximumPoolSize;
1634 1634 }
1635 1635
1636 1636 /**
1637 1637 * Sets the time limit for which threads may remain idle before
1638 1638 * being terminated. If there are more than the core number of
1639 1639 * threads currently in the pool, after waiting this amount of
1640 1640 * time without processing a task, excess threads will be
1641 1641 * terminated. This overrides any value set in the constructor.
1642 1642 *
1643 1643 * @param time the time to wait. A time value of zero will cause
1644 1644 * excess threads to terminate immediately after executing tasks.
1645 1645 * @param unit the time unit of the {@code time} argument
1646 1646      * @throws IllegalArgumentException if {@code time} is less than zero or
1647 1647      *         if {@code time} is zero and {@code allowsCoreThreadTimeOut} is enabled
1648 1648 * @see #getKeepAliveTime
1649 1649 */
1650 1650 public void setKeepAliveTime(long time, TimeUnit unit) {
1651 1651 if (time < 0)
1652 1652 throw new IllegalArgumentException();
1653 1653 if (time == 0 && allowsCoreThreadTimeOut())
1654 1654 throw new IllegalArgumentException("Core threads must have nonzero keep alive times");
1655 1655 long keepAliveTime = unit.toNanos(time);
1656 1656 long delta = keepAliveTime - this.keepAliveTime;
1657 1657 this.keepAliveTime = keepAliveTime;
1658 1658 if (delta < 0)
1659 1659 interruptIdleWorkers();
1660 1660 }
1661 1661
1662 1662 /**
1663 1663 * Returns the thread keep-alive time, which is the amount of time
1664 1664 * that threads in excess of the core pool size may remain
1665 1665 * idle before being terminated.
1666 1666 *
1667 1667 * @param unit the desired time unit of the result
1668 1668 * @return the time limit
1669 1669 * @see #setKeepAliveTime
1670 1670 */
1671 1671 public long getKeepAliveTime(TimeUnit unit) {
1672 1672 return unit.convert(keepAliveTime, TimeUnit.NANOSECONDS);
1673 1673 }
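A short sketch of reading and adjusting the keep-alive time through these methods, again assuming a pool referenced as pool; note that shortening the keep-alive also wakes currently idle workers so that they re-evaluate their timeout:

    long secs = pool.getKeepAliveTime(TimeUnit.SECONDS);
    System.out.println("keep-alive = " + secs + "s");
    pool.setKeepAliveTime(10, TimeUnit.SECONDS);   // takes effect on idle threads too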
1674 1674
1675 1675 /* User-level queue utilities */
1676 1676
1677 1677 /**
1678 1678 * Returns the task queue used by this executor. Access to the
1679 1679 * task queue is intended primarily for debugging and monitoring.
1680 1680 * This queue may be in active use. Retrieving the task queue
1681 1681 * does not prevent queued tasks from executing.
1682 1682 *
1683 1683 * @return the task queue
1684 1684 */
1685 1685 public BlockingQueue<Runnable> getQueue() {
1686 1686 return workQueue;
1687 1687 }
1688 1688
1689 1689 /**
1690 1690 * Removes this task from the executor's internal queue if it is
1691 1691 * present, thus causing it not to be run if it has not already
1692 1692 * started.
1693 1693 *
1694 1694 * <p> This method may be useful as one part of a cancellation
1695 1695 * scheme. It may fail to remove tasks that have been converted
1696 1696 * into other forms before being placed on the internal queue. For
1697 1697 * example, a task entered using {@code submit} might be
1698 1698 * converted into a form that maintains {@code Future} status.
1699 1699 * However, in such cases, method {@link #purge} may be used to
1700 1700 * remove those Futures that have been cancelled.
1701 1701 *
1702 1702 * @param task the task to remove
1703 1703 * @return true if the task was removed
1704 1704 */
1705 1705 public boolean remove(Runnable task) {
1706 1706 boolean removed = workQueue.remove(task);
1707 1707 tryTerminate(); // In case SHUTDOWN and now empty
1708 1708 return removed;
1709 1709 }
1710 1710
1711 1711 /**
1712 1712 * Tries to remove from the work queue all {@link Future}
1713 1713 * tasks that have been cancelled. This method can be useful as a
1714 1714      * storage reclamation operation that has no other impact on
1715 1715 * functionality. Cancelled tasks are never executed, but may
1716 1716 * accumulate in work queues until worker threads can actively
1717 1717 * remove them. Invoking this method instead tries to remove them now.
1718 1718 * However, this method may fail to remove tasks in
1719 1719 * the presence of interference by other threads.
1720 1720 */
1721 1721 public void purge() {
1722 1722 final BlockingQueue<Runnable> q = workQueue;
1723 1723 try {
1724 1724 Iterator<Runnable> it = q.iterator();
1725 1725 while (it.hasNext()) {
1726 1726 Runnable r = it.next();
1727 1727 if (r instanceof Future<?> && ((Future<?>)r).isCancelled())
1728 1728 it.remove();
1729 1729 }
1730 1730 } catch (ConcurrentModificationException fallThrough) {
1731 1731 // Take slow path if we encounter interference during traversal.
1732 1732 // Make copy for traversal and call remove for cancelled entries.
1733 1733 // The slow path is more likely to be O(N*N).
1734 1734 for (Object r : q.toArray())
1735 1735 if (r instanceof Future<?> && ((Future<?>)r).isCancelled())
1736 1736 q.remove(r);
1737 1737 }
1738 1738
1739 1739 tryTerminate(); // In case SHUTDOWN and now empty
1740 1740 }
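A sketch of the caveat noted in remove above: a task handed to submit is wrapped (for example in a FutureTask), so removing the original Runnable fails; cancelling the returned Future and then calling purge reclaims the queue slot instead:

    Runnable work = new Runnable() { public void run() { /* ... */ } };
    Future<?> f = pool.submit(work);   // the queue holds a wrapper, not 'work'

    pool.remove(work);                 // typically returns false
    f.cancel(false);                   // mark the wrapper cancelled
    pool.purge();                      // drop cancelled Futures from the queue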
1741 1741
1742 1742 /* Statistics */
1743 1743
1744 1744 /**
1745 1745 * Returns the current number of threads in the pool.
1746 1746 *
1747 1747 * @return the number of threads
1748 1748 */
1749 1749 public int getPoolSize() {
1750 1750 final ReentrantLock mainLock = this.mainLock;
1751 1751 mainLock.lock();
1752 1752 try {
1753 1753 // Remove rare and surprising possibility of
1754 1754 // isTerminated() && getPoolSize() > 0
1755 1755 return runStateAtLeast(ctl.get(), TIDYING) ? 0
1756 1756 : workers.size();
1757 1757 } finally {
1758 1758 mainLock.unlock();
1759 1759 }
1760 1760 }
1761 1761
1762 1762 /**
1763 1763 * Returns the approximate number of threads that are actively
1764 1764 * executing tasks.
1765 1765 *
1766 1766 * @return the number of threads
1767 1767 */
1768 1768 public int getActiveCount() {
1769 1769 final ReentrantLock mainLock = this.mainLock;
1770 1770 mainLock.lock();
1771 1771 try {
1772 1772 int n = 0;
1773 1773 for (Worker w : workers)
1774 1774 if (w.isLocked())
1775 1775 ++n;
1776 1776 return n;
1777 1777 } finally {
1778 1778 mainLock.unlock();
1779 1779 }
1780 1780 }
1781 1781
1782 1782 /**
1783 1783 * Returns the largest number of threads that have ever
1784 1784 * simultaneously been in the pool.
1785 1785 *
1786 1786 * @return the number of threads
1787 1787 */
1788 1788 public int getLargestPoolSize() {
1789 1789 final ReentrantLock mainLock = this.mainLock;
1790 1790 mainLock.lock();
1791 1791 try {
1792 1792 return largestPoolSize;
1793 1793 } finally {
1794 1794 mainLock.unlock();
1795 1795 }
1796 1796 }
1797 1797
1798 1798 /**
1799 1799 * Returns the approximate total number of tasks that have ever been
1800 1800 * scheduled for execution. Because the states of tasks and
1801 1801 * threads may change dynamically during computation, the returned
1802 1802 * value is only an approximation.
1803 1803 *
1804 1804 * @return the number of tasks
1805 1805 */
1806 1806 public long getTaskCount() {
1807 1807 final ReentrantLock mainLock = this.mainLock;
1808 1808 mainLock.lock();
1809 1809 try {
1810 1810 long n = completedTaskCount;
1811 1811 for (Worker w : workers) {
1812 1812 n += w.completedTasks;
1813 1813 if (w.isLocked())
1814 1814 ++n;
1815 1815 }
1816 1816 return n + workQueue.size();
1817 1817 } finally {
1818 1818 mainLock.unlock();
1819 1819 }
1820 1820 }
1821 1821
1822 1822 /**
1823 1823 * Returns the approximate total number of tasks that have
1824 1824 * completed execution. Because the states of tasks and threads
1825 1825 * may change dynamically during computation, the returned value
1826 1826 * is only an approximation, but one that does not ever decrease
1827 1827 * across successive calls.
1828 1828 *
1829 1829 * @return the number of tasks
1830 1830 */
1831 1831 public long getCompletedTaskCount() {
1832 1832 final ReentrantLock mainLock = this.mainLock;
1833 1833 mainLock.lock();
1834 1834 try {
1835 1835 long n = completedTaskCount;
1836 1836 for (Worker w : workers)
1837 1837 n += w.completedTasks;
1838 1838 return n;
1839 1839 } finally {
1840 1840 mainLock.unlock();
1841 1841 }
1842 1842 }
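A monitoring sketch that samples these statistics for a pool referenced as pool; as documented above, the values are approximations, so they suit logging and dashboards rather than control logic:

    System.out.println("pool size         = " + pool.getPoolSize());
    System.out.println("active threads    = " + pool.getActiveCount());
    System.out.println("largest pool size = " + pool.getLargestPoolSize());
    System.out.println("queued tasks      = " + pool.getQueue().size());
    System.out.println("scheduled tasks   = " + pool.getTaskCount());
    System.out.println("completed tasks   = " + pool.getCompletedTaskCount());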
1843 1843
1844 + /**
1845 + * Returns a string identifying this pool, as well as its state,
1846 + * including indications of run state and estimated worker and
1847 + * task counts.
1848 + *
1849 + * @return a string identifying this pool, as well as its state
1850 + */
1851 + public String toString() {
1852 + long ncompleted;
1853 + int nworkers, nactive;
1854 + final ReentrantLock mainLock = this.mainLock;
1855 + mainLock.lock();
1856 + try {
1857 + ncompleted = completedTaskCount;
1858 + nactive = 0;
1859 + nworkers = workers.size();
1860 + for (Worker w : workers) {
1861 + ncompleted += w.completedTasks;
1862 + if (w.isLocked())
1863 + ++nactive;
1864 + }
1865 + } finally {
1866 + mainLock.unlock();
1867 + }
1868 + int c = ctl.get();
1869 + String rs = (runStateLessThan(c, SHUTDOWN) ? "Running" :
1870 + (runStateAtLeast(c, TERMINATED) ? "Terminated" :
1871 + "Shutting down"));
1872 + return super.toString() +
1873 + "[" + rs +
1874 + ", pool size = " + nworkers +
1875 + ", active threads = " + nactive +
1876 + ", queued tasks = " + workQueue.size() +
1877 + ", completed tasks = " + ncompleted +
1878 + "]";
1879 + }
1880 +
1844 1881 /* Extension hooks */
1845 1882
1846 1883 /**
1847 1884 * Method invoked prior to executing the given Runnable in the
1848 1885 * given thread. This method is invoked by thread {@code t} that
1849 1886 * will execute task {@code r}, and may be used to re-initialize
1850 1887 * ThreadLocals, or to perform logging.
1851 1888 *
1852 1889 * <p>This implementation does nothing, but may be customized in
1853 1890 * subclasses. Note: To properly nest multiple overridings, subclasses
1854 1891 * should generally invoke {@code super.beforeExecute} at the end of
1855 1892 * this method.
1856 1893 *
1857 1894 * @param t the thread that will run task {@code r}
1858 1895 * @param r the task that will be executed
1859 1896 */
1860 1897 protected void beforeExecute(Thread t, Runnable r) { }
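For symmetry with the afterExecute sample below, a minimal sketch of a subclass (the name LoggingExecutor is hypothetical) that overrides beforeExecute for simple logging, invoking super.beforeExecute last as the note above recommends:

    import java.util.concurrent.*;

    class LoggingExecutor extends ThreadPoolExecutor {
        LoggingExecutor(int core, int max, long keep, TimeUnit unit,
                        BlockingQueue<Runnable> queue) {
            super(core, max, keep, unit, queue);
        }
        protected void beforeExecute(Thread t, Runnable r) {
            System.out.println(t.getName() + " about to run " + r);
            super.beforeExecute(t, r);   // keep nested overridings well-behaved
        }
    }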
1861 1898
1862 1899 /**
1863 1900 * Method invoked upon completion of execution of the given Runnable.
1864 1901 * This method is invoked by the thread that executed the task. If
1865 1902 * non-null, the Throwable is the uncaught {@code RuntimeException}
1866 1903 * or {@code Error} that caused execution to terminate abruptly.
1867 1904 *
1868 1905 * <p>This implementation does nothing, but may be customized in
1869 1906 * subclasses. Note: To properly nest multiple overridings, subclasses
1870 1907 * should generally invoke {@code super.afterExecute} at the
1871 1908 * beginning of this method.
1872 1909 *
1873 1910 * <p><b>Note:</b> When actions are enclosed in tasks (such as
1874 1911 * {@link FutureTask}) either explicitly or via methods such as
1875 1912 * {@code submit}, these task objects catch and maintain
1876 1913 * computational exceptions, and so they do not cause abrupt
1877 1914 * termination, and the internal exceptions are <em>not</em>
1878 1915 * passed to this method. If you would like to trap both kinds of
1879 1916 * failures in this method, you can further probe for such cases,
1880 1917 * as in this sample subclass that prints either the direct cause
1881 1918 * or the underlying exception if a task has been aborted:
1882 1919 *
1883 1920 * <pre> {@code
1884 1921 * class ExtendedExecutor extends ThreadPoolExecutor {
1885 1922 * // ...
1886 1923 * protected void afterExecute(Runnable r, Throwable t) {
1887 1924 * super.afterExecute(r, t);
1888 1925 * if (t == null && r instanceof Future<?>) {
1889 1926 * try {
1890 1927 * Object result = ((Future<?>) r).get();
1891 1928 * } catch (CancellationException ce) {
1892 1929 * t = ce;
1893 1930 * } catch (ExecutionException ee) {
1894 1931 * t = ee.getCause();
1895 1932 * } catch (InterruptedException ie) {
1896 1933 * Thread.currentThread().interrupt(); // ignore/reset
1897 1934 * }
1898 1935 * }
1899 1936 * if (t != null)
1900 1937 * System.out.println(t);
1901 1938 * }
1902 1939 * }}</pre>
1903 1940 *
1904 1941 * @param r the runnable that has completed
1905 1942 * @param t the exception that caused termination, or null if
1906 1943 * execution completed normally
1907 1944 */
1908 1945 protected void afterExecute(Runnable r, Throwable t) { }
1909 1946
1910 1947 /**
1911 1948 * Method invoked when the Executor has terminated. Default
1912 1949 * implementation does nothing. Note: To properly nest multiple
1913 1950 * overridings, subclasses should generally invoke
1914 1951 * {@code super.terminated} within this method.
1915 1952 */
1916 1953 protected void terminated() { }
1917 1954
1918 1955 /* Predefined RejectedExecutionHandlers */
1919 1956
1920 1957 /**
1921 1958 * A handler for rejected tasks that runs the rejected task
1922 1959 * directly in the calling thread of the {@code execute} method,
1923 1960 * unless the executor has been shut down, in which case the task
1924 1961 * is discarded.
1925 1962 */
1926 1963 public static class CallerRunsPolicy implements RejectedExecutionHandler {
1927 1964 /**
1928 1965 * Creates a {@code CallerRunsPolicy}.
1929 1966 */
1930 1967 public CallerRunsPolicy() { }
1931 1968
1932 1969 /**
1933 1970 * Executes task r in the caller's thread, unless the executor
1934 1971 * has been shut down, in which case the task is discarded.
1935 1972 *
1936 1973 * @param r the runnable task requested to be executed
1937 1974 * @param e the executor attempting to execute this task
1938 1975 */
1939 1976 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
1940 1977 if (!e.isShutdown()) {
1941 1978 r.run();
1942 1979 }
1943 1980 }
1944 1981 }
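A common way to use this handler, sketched with arbitrary sizes: pairing a bounded queue with CallerRunsPolicy provides a simple form of backpressure, because once the pool and queue are saturated the submitting thread runs the task itself and therefore slows down:

    ThreadPoolExecutor throttled = new ThreadPoolExecutor(
        4, 4, 0L, TimeUnit.MILLISECONDS,
        new ArrayBlockingQueue<Runnable>(32),
        new ThreadPoolExecutor.CallerRunsPolicy());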
1945 1982
1946 1983 /**
1947 1984 * A handler for rejected tasks that throws a
1948 1985 * {@code RejectedExecutionException}.
1949 1986 */
1950 1987 public static class AbortPolicy implements RejectedExecutionHandler {
1951 1988 /**
1952 1989 * Creates an {@code AbortPolicy}.
1953 1990 */
1954 1991 public AbortPolicy() { }
1955 1992
1956 1993 /**
1957 1994 * Always throws RejectedExecutionException.
1958 1995 *
1959 1996 * @param r the runnable task requested to be executed
1960 1997 * @param e the executor attempting to execute this task
1961 1998 * @throws RejectedExecutionException always.
1962 1999 */
1963 2000 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
1964 - throw new RejectedExecutionException();
2001 + throw new RejectedExecutionException("Task " + r.toString() +
2002 + " rejected from " +
2003 + e.toString());
1965 2004 }
1966 2005 }
1967 2006
1968 2007 /**
1969 2008 * A handler for rejected tasks that silently discards the
1970 2009 * rejected task.
1971 2010 */
1972 2011 public static class DiscardPolicy implements RejectedExecutionHandler {
1973 2012 /**
1974 2013 * Creates a {@code DiscardPolicy}.
1975 2014 */
1976 2015 public DiscardPolicy() { }
1977 2016
1978 2017 /**
1979 2018 * Does nothing, which has the effect of discarding task r.
1980 2019 *
1981 2020 * @param r the runnable task requested to be executed
1982 2021 * @param e the executor attempting to execute this task
1983 2022 */
1984 2023 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
1985 2024 }
1986 2025 }
1987 2026
1988 2027 /**
1989 2028 * A handler for rejected tasks that discards the oldest unhandled
1990 2029 * request and then retries {@code execute}, unless the executor
1991 2030 * is shut down, in which case the task is discarded.
1992 2031 */
1993 2032 public static class DiscardOldestPolicy implements RejectedExecutionHandler {
1994 2033 /**
1995 2034          * Creates a {@code DiscardOldestPolicy}.
1996 2035 */
1997 2036 public DiscardOldestPolicy() { }
1998 2037
1999 2038 /**
2000 2039 * Obtains and ignores the next task that the executor
2001 2040 * would otherwise execute, if one is immediately available,
2002 2041 * and then retries execution of task r, unless the executor
2003 2042 * is shut down, in which case task r is instead discarded.
2004 2043 *
2005 2044 * @param r the runnable task requested to be executed
2006 2045 * @param e the executor attempting to execute this task
2007 2046 */
2008 2047 public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
2009 2048 if (!e.isShutdown()) {
2010 2049 e.getQueue().poll();
2011 2050 e.execute(r);
2012 2051 }
2013 2052 }
2014 2053 }
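Installing this handler is a one-liner, as sketched below for a pool referenced as pool. One caveat worth keeping in mind: if the discarded head of the queue is a Future, that Future is dropped without ever completing, so callers waiting on it without a timeout could block indefinitely.

    pool.setRejectedExecutionHandler(new ThreadPoolExecutor.DiscardOldestPolicy());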
2015 2054 }