// GenCollectedHeap(ParNew,DefNew,Tenured) and
// ParallelScavengeHeap(ParallelGC, ParallelOldGC)
// need the card-mark if and only if the region is
// in the old gen, and do not care if the card-mark
// succeeds or precedes the initializing stores themselves,
// so long as the card-mark is completed before the next
// scavenge. For all these cases, we can do a card mark
// at the point at which we do a slow path allocation
// in the old gen, i.e. in this call.
// (b) GenCollectedHeap(ConcurrentMarkSweepGeneration) requires
// in addition that the card-mark for an old gen allocated
// object strictly follow any associated initializing stores.
// In these cases, the memRegion remembered below is
// used to card-mark the entire region either just before the next
// slow-path allocation by this thread or just before the next scavenge or
// CMS-associated safepoint, whichever of these events happens first.
// (The implicit assumption is that the object has been fully
// initialized by this point, a fact that we assert when doing the
// card-mark.)
// (c) G1CollectedHeap(G1) uses two kinds of write barriers. When
// G1 concurrent marking is in progress, an SATB (pre-write-)barrier
// is used to remember the pre-value of any store. Initializing
// stores will not need this barrier, so we need not worry about
// compensating for the missing pre-barrier here. Turning now
// to the post-barrier, we note that G1 needs an RS update barrier
// which simply enqueues a (sequence of) dirty cards which may
// optionally be refined by the concurrent update threads. Note
// that this barrier need only be applied to a non-young write,
// but, like in CMS, because of the presence of concurrent refinement
// (much like CMS' precleaning), must strictly follow the oop-store.
// Thus the protocol for maintaining the intended invariants turns
// out, serendipitously, to be the same for both G1 and CMS.
//
// For any future collector, this code should be reexamined with
// that specific collector in mind, and the documentation above suitably
// extended and updated.
oop CollectedHeap::new_store_pre_barrier(JavaThread* thread, oop new_obj) {
  // If a previous card-mark was deferred, flush it now.
  flush_deferred_store_barrier(thread);
  if (can_elide_initializing_store_barrier(new_obj) ||
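// The deferred card-mark protocol described in cases (b) and (c) above --
// remember a region at slow-path allocation time, card-mark it only after
// the initializing stores are known complete, flushing at the next
// slow-path allocation -- can be sketched as a self-contained toy model.
// All names below (ToyCardTable, ToyThread, etc.) are hypothetical
// illustrations, not HotSpot's actual types or APIs.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy model of the deferred card-mark protocol (hypothetical names;
// not HotSpot's actual types). A "card" covers CARD_SIZE words of the
// old gen; a write into an old-gen object must dirty the spanned cards
// before the next scavenge scans the card table.
constexpr std::size_t CARD_SIZE = 512;

struct ToyCardTable {
  std::vector<bool> dirty;
  explicit ToyCardTable(std::size_t heap_words)
      : dirty(heap_words / CARD_SIZE + 1, false) {}
  // Dirty every card spanned by [start, start + len) -- the analogue of
  // applying the card-mark write barrier over a whole memory region.
  void mark_region(std::size_t start, std::size_t len) {
    for (std::size_t c = start / CARD_SIZE;
         c <= (start + len - 1) / CARD_SIZE; ++c) {
      dirty[c] = true;
    }
  }
};

struct ToyThread {
  // Analogue of the per-thread remembered region in case (b): a region
  // noted now and card-marked later, once the object's initializing
  // stores are known to be complete.
  std::size_t deferred_start = 0;
  std::size_t deferred_len = 0;
  bool has_deferred() const { return deferred_len != 0; }
};

// Flush a previously deferred card-mark, if any. In the protocol above
// this happens at the next slow-path allocation by this thread, or at
// the next scavenge or safepoint, whichever comes first.
void flush_deferred(ToyThread& t, ToyCardTable& ct) {
  if (t.has_deferred()) {
    ct.mark_region(t.deferred_start, t.deferred_len);
    t.deferred_len = 0;
  }
}

// Slow-path old-gen allocation: flush any earlier deferral first, then
// remember the new region so its card-mark strictly follows the
// initializing stores (the requirement in cases (b) and (c)).
void allocate_old(ToyThread& t, ToyCardTable& ct,
                  std::size_t start, std::size_t len) {
  flush_deferred(t, ct);
  t.deferred_start = start;
  t.deferred_len = len;
}
```

// Usage: allocating at word 1000 with length 100 spans cards 1 and 2;
// neither card is dirtied until the deferred mark is flushed, which is
// exactly the "strictly follows the initializing stores" ordering the
// comment requires for CMS and G1.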