
src/cpu/x86/vm/macroAssembler_x86.cpp

rev 8961 : [mq]: diff-shenandoah.patch
rev 8928 : Merge
rev 8925 : 8072817: CardTableExtension kind() should be BarrierSet::CardTableExtension
Summary: Use BarrierSet::CardTableForRS where needed, and update concrete bs tags.
Reviewed-by: jwilhelm, jmasa
rev 8879 : 8076373: In 32-bit VM interpreter and compiled code process NaN values differently
Summary: Change interpreter to use XMM registers on x86_32 if they are available. Add stubs for methods transforming from/to int/long float/double.
Reviewed-by: kvn, mcberg
rev 8830 : 8131682: C1 should use multibyte nops everywhere
Reviewed-by: dlong, goetz, adinn, aph, vlivanov
rev 8715 : 8131344: Missing klass.inline.hpp include in compiler files
Reviewed-by: kvn
rev 8698 : 8130448: thread dump improvements, comment additions, new diagnostics inspired by 8077392
Reviewed-by: dholmes, coleenp
rev 8638 : 8081202: Hotspot compile warning: "Invalid suffix on literal; C++11 requires a space between literal and identifier"
Summary: Need to add a space between macro identifier and string literal
Reviewed-by: stefank, dholmes, kbarrett
rev 8567 : 8079315: UseCondCardMark broken in conjunction with CMS precleaning on x86
Summary: Add the necessary StoreLoad barrier in interpreter, C1 and C2 for x86
Reviewed-by: tschatzl
rev 8566 : 8078438: Interpreter should support conditional card marks (UseCondCardMark) on x86 and aarch64
Summary: Add interpreter support for conditional card marks on x86 and aarch64
Reviewed-by: tschatzl, aph
rev 8504 : 8081778: Use Intel x64 CPU instructions for RSA acceleration
Summary: Add intrinsics for BigInteger squareToLen and mulAdd methods.
Reviewed-by: kvn, jrose
rev 8422 : Merge
rev 8417 : Merge
rev 8413 : 8079792: GC directory structure cleanup
Reviewed-by: brutisso, stefank, david
rev 8400 : Merge
rev 8398 : 6811960: x86 biasedlocking epoch expired rare bug
Summary: It is now guaranteed that biased_locking_enter will be passed a valid tmp_reg.
Reviewed-by: coleenp, dcubed, kvn
rev 8379 : 8076276: Add support for AVX512
Reviewed-by: kvn, roland
Contributed-by: michael.c.berg@intel.com
rev 8295 : Merge
rev 8290 : 8068945: Use RBP register as proper frame pointer in JIT compiled code on x86
Summary: Introduce the PreserveFramePointer flag to control if RBP is used as the frame pointer or as a general purpose register.
Reviewed-by: kvn, roland, dlong, enevill, shade
rev 8284 : 8078113: 8011102 changes may cause incorrect results
Summary: replace Vzeroupper instruction in stubs with zeroing only used ymm registers.
Reviewed-by: kvn
Contributed-by: sandhya.viswanathan@intel.com
rev 8229 : 8073165: Contended Locking fast exit bucket
Summary: JEP-143/JDK-8073165 Contended Locking fast exit bucket
Reviewed-by: dholmes, acorn, dice, dcubed
Contributed-by: dave.dice@oracle.com, karen.kinnear@oracle.com, daniel.daugherty@oracle.com
rev 7923 : 8069016: Add BarrierSet downcast support
Summary: Add FakeRttiSupport utility and use to provide barrier_set_cast.
Reviewed-by: jmasa, sangheki
rev 7853 : 8061553: Contended Locking fast enter bucket
Summary: JEP-143/JDK-8061553 Contended Locking fast enter bucket
Reviewed-by: dholmes, acorn
Contributed-by: dave.dice@oracle.com, karen.kinnear@oracle.com, daniel.daugherty@oracle.com
rev 7747 : 8069580: String intrinsic related cleanups
Summary: Small cleanup of string intrinsic related code.
Reviewed-by: kvn, roland
rev 7678 : 8063086: Math.pow yields different results upon repeated calls
Summary: C2 treats x^2 as a special case and computes x * x while the interpreter and c1 don't have special case code for X^2.
Reviewed-by: kvn
rev 7386 : Merge
rev 7379 : 8062950: Bug in locking code when UseOptoBiasInlining is disabled: assert(dmw->is_neutral()) failed: invariant
Reviewed-by: dholmes, kvn
rev 7377 : Merge
rev 7366 : 8061308: Remove iCMS
Reviewed-by: mgerdin, jmasa
rev 7349 : 8062851: cleanup ObjectMonitor offset adjustments
Summary: JEP-143/JDK-8046133 - cleanup computation of ObjectMonitor field pointers
Reviewed-by: dholmes, redestad, coleenp
rev 6997 : 8055494: Add C2 x86 intrinsic for BigInteger::multiplyToLen() method
Summary: Add new C2 intrinsic for BigInteger::multiplyToLen() on x86 in 64-bit VM.
Reviewed-by: roland
rev 6839 : 8052081: Optimize generated by C2 code for Intel's Atom processor
Summary: Allow to execute vectorization and crc32 optimization on Atom. Enable UseFPUForSpilling by default on x86.
Reviewed-by: roland
rev 6412 : 8037816: Fix for 8036122 breaks build with Xcode5/clang
Summary: Repaired or selectively disabled offending formats; future-proofed with additional checking
Reviewed-by: kvn, jrose, stefank
rev 6365 : 8029302: Performance regression in Math.pow intrinsic
Summary: Added special case for x^y where y == 2
Reviewed-by: kvn, roland
rev 6307 : 8032410: compiler/uncommontrap/TestStackBangRbp.java times out on Solaris-Sparc V9
Summary: make compiled code bang the stack by the worst case size of the interpreter frame at deoptimization points.
Reviewed-by: twisti, kvn
rev 6249 : 8038939: Some options related to RTM locking optimization works inconsistently
Summary: Switch UseRTMXendForLockBusy flag ON by default and change code to retry RTM locking on lock busy condition by default.
Reviewed-by: roland
rev 6182 : 8031320: Use Intel RTM instructions for locks
Summary: Use RTM for inflated locks and stack locks.
Reviewed-by: iveresov, twisti, roland, dcubed
rev 6048 : 8033805: Move Fast_Lock/Fast_Unlock code from .ad files to macroassembler
Summary: Consolidated C2 x86 locking code in one place in macroAssembler_x86.cpp.
Reviewed-by: roland
rev 5720 : 8028109: compiler/codecache/CheckReservedInitialCodeCacheSizeArgOrder.java crashes in RT_Baseline
Summary: Use non-relocatable code to load byte_map_base
Reviewed-by: kvn, roland
rev 5637 : 8026775: nsk/jvmti/RedefineClasses/StressRedefine crashes due to EXCEPTION_ACCESS_VIOLATION
Summary: Uncommon trap blob did not bang all the stack shadow pages
Reviewed-by: kvn, twisti, iveresov, jrose
rev 5594 : 8024927: Nashorn performance regression with CompressedOops
Summary: Allocate compressed class space at end of Java heap.  For small heap sizes, without CDS, save some space so compressed classes can have the same favorable compression as oops
Reviewed-by: stefank, hseigel, goetz
rev 5425 : 8014555: G1: Memory ordering problem with Conc refinement and card marking
Summary: Add a StoreLoad barrier in the G1 post-barrier to fix a race with concurrent refinement. Also-reviewed-by: martin.doerr@sap.com
Reviewed-by: iveresov, tschatzl, brutisso, roland, kvn
rev 5259 : 8015107: NPG: Use consistent naming for metaspace concepts
Reviewed-by: coleenp, mgerdin, hseigel
rev 5093 : 8003424: Enable Class Data Sharing for CompressedOops
8016729: ObjectAlignmentInBytes=16 now forces the use of heap based compressed oops
8005933: The -Xshare:auto option is ignored for -server
Summary: Move klass metaspace above the heap and support CDS with compressed klass ptrs.
Reviewed-by: coleenp, kvn, mgerdin, tschatzl, stefank
rev 4918 : 7088419: Use x86 Hardware CRC32 Instruction with java.util.zip.CRC32
Summary: add intrinsics using new instruction to interpreter, C1, C2, for suitable x86; add test
Reviewed-by: kvn, twisti
rev 4438 : 8011102: Clear AVX registers after return from JNI call
Summary: Execute vzeroupper instruction after JNI call and on exits in jit compiled code which use 256bit vectors.
Reviewed-by: roland
rev 4332 : 8008555: Debugging code in compiled method sometimes leaks memory
Summary: support for strings that have same life-time as code that uses them.
Reviewed-by: kvn, twisti
rev 4148 : 8007708: compiler/6855215 assert(VM_Version::supports_sse4_2())
Summary: Added missing UseSSE42 check. Also added missing avx2 assert for vpermq instruction.
Reviewed-by: roland, twisti
rev 4108 : Merge
rev 4107 : 8005915: Unify SERIALGC and INCLUDE_ALTERNATE_GCS
Summary: Rename INCLUDE_ALTERNATE_GCS to INCLUDE_ALL_GCS and replace SERIALGC with INCLUDE_ALL_GCS.
Reviewed-by: coleenp, stefank
rev 4044 : 6896617: Optimize sun.nio.cs.ISO_8859_1$Encode.encodeArrayLoop() on x86
Summary: Use SSE4.2 and AVX2 instructions for encodeArray intrinsic.
Reviewed-by: roland
rev 3978 : 8005419: Improve intrinsics code performance on x86 by using AVX2
Summary: use 256bit vpxor,vptest instructions in String.compareTo() and equals() intrinsics.
Reviewed-by: twisti
rev 3977 : 8004537: replace AbstractAssembler emit_long with emit_int32
Reviewed-by: jrose, kvn, twisti
Contributed-by: Morris Meyer <morris.meyer@oracle.com>
rev 3976 : 8005544: Use 256bit YMM registers in arraycopy stubs on x86
Summary: Use YMM registers in arraycopy and array_fill stubs.
Reviewed-by: roland, twisti
rev 3975 : 8005522: use fast-string instructions on x86 for zeroing
Summary: use 'rep stosb' instead of 'rep stosq' when fast-string operations are available.
Reviewed-by: twisti, roland
rev 3931 : 8004250: replace AbstractAssembler a_byte/a_long with emit_int8/emit_int32
Reviewed-by: jrose, kvn, twisti
Contributed-by: Morris Meyer <morris.meyer@oracle.com>
rev 3928 : 8004835: Improve AES intrinsics on x86
Summary: Enable AES intrinsics on non-AVX cpus, group together aes instructions in crypto stubs.
Reviewed-by: roland, twisti
rev 3888 : 8003250: SPARC: move MacroAssembler into separate file
Reviewed-by: jrose, kvn
rev 3883 : 8003240: x86: move MacroAssembler into separate file
Reviewed-by: kvn


3712   } else {
3713     lea(rscratch1, src);
3714     Assembler::mulsd(dst, Address(rscratch1, 0));
3715   }
3716 }
3717 
3718 void MacroAssembler::mulss(XMMRegister dst, AddressLiteral src) {
3719   if (reachable(src)) {
3720     Assembler::mulss(dst, as_Address(src));
3721   } else {
3722     lea(rscratch1, src);
3723     Assembler::mulss(dst, Address(rscratch1, 0));
3724   }
3725 }
3726 
3727 void MacroAssembler::null_check(Register reg, int offset) {
3728   if (needs_explicit_null_check(offset)) {
3729     // provoke OS NULL exception if reg = NULL by
3730     // accessing M[reg] w/o changing any (non-CC) registers
3731     // NOTE: cmpl is plenty here to provoke a segv





3732     cmpptr(rax, Address(reg, 0));
3733     // Note: should probably use testl(rax, Address(reg, 0));
3734     //       may be shorter code (however, this version of
3735     //       testl needs to be implemented first)
3736   } else {
3737     // nothing to do, (later) access of M[reg + offset]
3738     // will provoke OS NULL exception if reg = NULL
3739   }
3740 }
3741 
3742 void MacroAssembler::os_breakpoint() {
3743   // instead of directly emitting a breakpoint, call os::breakpoint for better debuggability
3744   // (e.g., MSVC can't call ps() otherwise)
3745   call(RuntimeAddress(CAST_FROM_FN_PTR(address, os::breakpoint)));
3746 }
3747 
3748 void MacroAssembler::pop_CPU_state() {
3749   pop_FPU_state();
3750   pop_IU_state();
3751 }
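
The mulsd/mulss helpers above choose between two encodings: when the literal's address is reachable it is used directly as a memory operand, otherwise lea materializes the address into rscratch1 and the instruction uses a zero-displacement address off that register. A minimal standalone sketch of the underlying reachability idea, assuming "reachable" means the target lies within a signed 32-bit displacement of the code being emitted (the helper below is illustrative, not HotSpot's):

#include <cstdint>

// Sketch only: an operand at 'target' can be encoded with a 32-bit
// displacement only if its distance from the emitting code fits in a
// signed 32-bit value; otherwise the address must first be loaded into a
// scratch register, which is the lea(rscratch1, src) fallback above.
static bool fits_int32_displacement(uint64_t target, uint64_t code_pos) {
  int64_t disp = static_cast<int64_t>(target) - static_cast<int64_t>(code_pos);
  return disp == static_cast<int64_t>(static_cast<int32_t>(disp));
}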


4210   if (pre_val != rax)
4211     pop(pre_val);
4212 
4213   if (obj != noreg && obj != rax)
4214     pop(obj);
4215 
4216   if(tosca_live) pop(rax);
4217 
4218   bind(done);
4219 }
4220 
4221 void MacroAssembler::g1_write_barrier_post(Register store_addr,
4222                                            Register new_val,
4223                                            Register thread,
4224                                            Register tmp,
4225                                            Register tmp2) {
4226 #ifdef _LP64
4227   assert(thread == r15_thread, "must be");
4228 #endif // _LP64
4229 







4230   Address queue_index(thread, in_bytes(JavaThread::dirty_card_queue_offset() +
4231                                        PtrQueue::byte_offset_of_index()));
4232   Address buffer(thread, in_bytes(JavaThread::dirty_card_queue_offset() +
4233                                        PtrQueue::byte_offset_of_buf()));
4234 
4235   CardTableModRefBS* ct =
4236     barrier_set_cast<CardTableModRefBS>(Universe::heap()->barrier_set());
4237   assert(sizeof(*ct->byte_map_base) == sizeof(jbyte), "adjust this code");
4238 
4239   Label done;
4240   Label runtime;
4241 
4242   // Does store cross heap regions?
4243 
4244   movptr(tmp, store_addr);
4245   xorptr(tmp, new_val);
4246   shrptr(tmp, HeapRegion::LogOfHRGrainBytes);
4247   jcc(Assembler::equal, done);
4248 
4249   // crosses regions, storing NULL?


4394 void MacroAssembler::testptr(Register dst, Register src) {
4395   LP64_ONLY(testq(dst, src)) NOT_LP64(testl(dst, src));
4396 }
4397 
4398 // Defines obj, preserves var_size_in_bytes, okay for t2 == var_size_in_bytes.
4399 void MacroAssembler::tlab_allocate(Register obj,
4400                                    Register var_size_in_bytes,
4401                                    int con_size_in_bytes,
4402                                    Register t1,
4403                                    Register t2,
4404                                    Label& slow_case) {
4405   assert_different_registers(obj, t1, t2);
4406   assert_different_registers(obj, var_size_in_bytes, t1);
4407   Register end = t2;
4408   Register thread = NOT_LP64(t1) LP64_ONLY(r15_thread);
4409 
4410   verify_tlab();
4411 
4412   NOT_LP64(get_thread(thread));
4413 


4414   movptr(obj, Address(thread, JavaThread::tlab_top_offset()));
4415   if (var_size_in_bytes == noreg) {
4416     lea(end, Address(obj, con_size_in_bytes));
4417   } else {



4418     lea(end, Address(obj, var_size_in_bytes, Address::times_1));
4419   }
4420   cmpptr(end, Address(thread, JavaThread::tlab_end_offset()));
4421   jcc(Assembler::above, slow_case);
4422 
4423   // update the tlab top pointer
4424   movptr(Address(thread, JavaThread::tlab_top_offset()), end);
4425 


4426   // recover var_size_in_bytes if necessary
4427   if (var_size_in_bytes == end) {
4428     subptr(var_size_in_bytes, obj);
4429   }
4430   verify_tlab();
4431 }
4432 
4433 // Preserves rbx, and rdx.
4434 Register MacroAssembler::tlab_refill(Label& retry,
4435                                      Label& try_eden,
4436                                      Label& slow_case) {
4437   Register top = rax;
4438   Register t1  = rcx;
4439   Register t2  = rsi;
4440   Register thread_reg = NOT_LP64(rdi) LP64_ONLY(r15_thread);
4441   assert_different_registers(top, thread_reg, t1, t2, /* preserve: */ rbx, rdx);
4442   Label do_refill, discard_tlab;
4443 
4444   if (!Universe::heap()->supports_inline_contig_alloc()) {
4445     // No allocation in the shared eden.


5673     } else if (CheckJNICalls) {
5674       call(RuntimeAddress(StubRoutines::x86::verify_mxcsr_entry()));
5675     }
5676   }
5677   if (VM_Version::supports_avx()) {
5678     // Clear upper bits of YMM registers to avoid SSE <-> AVX transition penalty.
5679     vzeroupper();
5680   }
5681 
5682 #ifndef _LP64
5683   // Either restore the x87 floating point control word after returning
5684   // from the JNI call or verify that it wasn't changed.
5685   if (CheckJNICalls) {
5686     call(RuntimeAddress(StubRoutines::x86::verify_fpu_cntrl_wrd_entry()));
5687   }
5688 #endif // _LP64
5689 }
5690 
5691 
5692 void MacroAssembler::load_klass(Register dst, Register src) {



5693 #ifdef _LP64
5694   if (UseCompressedClassPointers) {
5695     movl(dst, Address(src, oopDesc::klass_offset_in_bytes()));
5696     decode_klass_not_null(dst);
5697   } else
5698 #endif
5699     movptr(dst, Address(src, oopDesc::klass_offset_in_bytes()));
5700 }
5701 
5702 void MacroAssembler::load_prototype_header(Register dst, Register src) {
5703   load_klass(dst, src);
5704   movptr(dst, Address(dst, Klass::prototype_header_offset()));
5705 }
5706 
5707 void MacroAssembler::store_klass(Register dst, Register src) {
5708 #ifdef _LP64
5709   if (UseCompressedClassPointers) {
5710     encode_klass_not_null(src);
5711     movl(Address(dst, oopDesc::klass_offset_in_bytes()), src);
5712   } else




3712   } else {
3713     lea(rscratch1, src);
3714     Assembler::mulsd(dst, Address(rscratch1, 0));
3715   }
3716 }
3717 
3718 void MacroAssembler::mulss(XMMRegister dst, AddressLiteral src) {
3719   if (reachable(src)) {
3720     Assembler::mulss(dst, as_Address(src));
3721   } else {
3722     lea(rscratch1, src);
3723     Assembler::mulss(dst, Address(rscratch1, 0));
3724   }
3725 }
3726 
3727 void MacroAssembler::null_check(Register reg, int offset) {
3728   if (needs_explicit_null_check(offset)) {
3729     // provoke OS NULL exception if reg = NULL by
3730     // accessing M[reg] w/o changing any (non-CC) registers
3731     // NOTE: cmpl is plenty here to provoke a segv
3732 
3733     if (ShenandoahVerifyReadsToFromSpace) {
3734       oopDesc::bs()->interpreter_read_barrier(this, reg);
3735     }
3736 
3737     cmpptr(rax, Address(reg, 0));
3738     // Note: should probably use testl(rax, Address(reg, 0));
3739     //       may be shorter code (however, this version of
3740     //       testl needs to be implemented first)
3741   } else {
3742     // nothing to do, (later) access of M[reg + offset]
3743     // will provoke OS NULL exception if reg = NULL
3744   }
3745 }
3746 
3747 void MacroAssembler::os_breakpoint() {
3748   // instead of directly emitting a breakpoint, call os::breakpoint for better debuggability
3749   // (e.g., MSVC can't call ps() otherwise)
3750   call(RuntimeAddress(CAST_FROM_FN_PTR(address, os::breakpoint)));
3751 }
3752 
3753 void MacroAssembler::pop_CPU_state() {
3754   pop_FPU_state();
3755   pop_IU_state();
3756 }
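
In the Shenandoah variant of null_check above, ShenandoahVerifyReadsToFromSpace first routes the object through the interpreter read barrier, so the dummy cmpptr that provokes the SIGSEGV touches the resolved copy rather than a possibly stale from-space reference. A sketch of a Brooks-style resolve, assuming the forwarding pointer is stored one heap word before the object; the layout and names below are assumptions for illustration, not taken from this patch:

// Sketch only: models a forwarding-pointer ("Brooks pointer") read barrier.
struct ObjModel;  // opaque stand-in for an object

static ObjModel* resolve_read_barrier(ObjModel* obj) {
  if (obj == nullptr) return nullptr;        // nothing to resolve
  // Assumed layout: the forwarding pointer occupies the word just before
  // the object; it points to the object itself unless it has been copied.
  ObjModel** fwd_slot = reinterpret_cast<ObjModel**>(obj) - 1;
  return *fwd_slot;
}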


4215   if (pre_val != rax)
4216     pop(pre_val);
4217 
4218   if (obj != noreg && obj != rax)
4219     pop(obj);
4220 
4221   if(tosca_live) pop(rax);
4222 
4223   bind(done);
4224 }
4225 
4226 void MacroAssembler::g1_write_barrier_post(Register store_addr,
4227                                            Register new_val,
4228                                            Register thread,
4229                                            Register tmp,
4230                                            Register tmp2) {
4231 #ifdef _LP64
4232   assert(thread == r15_thread, "must be");
4233 #endif // _LP64
4234 
4235   if (UseShenandoahGC) {
4236     // No need for this in Shenandoah.
4237     return;
4238   }
4239 
4240   assert(UseG1GC, "expect G1 GC");
4241 
4242   Address queue_index(thread, in_bytes(JavaThread::dirty_card_queue_offset() +
4243                                        PtrQueue::byte_offset_of_index()));
4244   Address buffer(thread, in_bytes(JavaThread::dirty_card_queue_offset() +
4245                                        PtrQueue::byte_offset_of_buf()));
4246 
4247   CardTableModRefBS* ct =
4248     barrier_set_cast<CardTableModRefBS>(Universe::heap()->barrier_set());
4249   assert(sizeof(*ct->byte_map_base) == sizeof(jbyte), "adjust this code");
4250 
4251   Label done;
4252   Label runtime;
4253 
4254   // Does store cross heap regions?
4255 
4256   movptr(tmp, store_addr);
4257   xorptr(tmp, new_val);
4258   shrptr(tmp, HeapRegion::LogOfHRGrainBytes);
4259   jcc(Assembler::equal, done);
4260 
4261   // crosses regions, storing NULL?


4406 void MacroAssembler::testptr(Register dst, Register src) {
4407   LP64_ONLY(testq(dst, src)) NOT_LP64(testl(dst, src));
4408 }
4409 
4410 // Defines obj, preserves var_size_in_bytes, okay for t2 == var_size_in_bytes.
4411 void MacroAssembler::tlab_allocate(Register obj,
4412                                    Register var_size_in_bytes,
4413                                    int con_size_in_bytes,
4414                                    Register t1,
4415                                    Register t2,
4416                                    Label& slow_case) {
4417   assert_different_registers(obj, t1, t2);
4418   assert_different_registers(obj, var_size_in_bytes, t1);
4419   Register end = t2;
4420   Register thread = NOT_LP64(t1) LP64_ONLY(r15_thread);
4421 
4422   verify_tlab();
4423 
4424   NOT_LP64(get_thread(thread));
4425 
4426   uint oop_extra_words = Universe::heap()->oop_extra_words();
4427 
4428   movptr(obj, Address(thread, JavaThread::tlab_top_offset()));
4429   if (var_size_in_bytes == noreg) {
4430     lea(end, Address(obj, con_size_in_bytes + oop_extra_words * HeapWordSize));
4431   } else {
4432     if (oop_extra_words > 0) {
4433       addq(var_size_in_bytes, oop_extra_words * HeapWordSize);
4434     }
4435     lea(end, Address(obj, var_size_in_bytes, Address::times_1));
4436   }
4437   cmpptr(end, Address(thread, JavaThread::tlab_end_offset()));
4438   jcc(Assembler::above, slow_case);
4439 
4440   // update the tlab top pointer
4441   movptr(Address(thread, JavaThread::tlab_top_offset()), end);
4442 
4443   Universe::heap()->compile_prepare_oop(this, obj);
4444 
4445   // recover var_size_in_bytes if necessary
4446   if (var_size_in_bytes == end) {
4447     subptr(var_size_in_bytes, obj);
4448   }
4449   verify_tlab();
4450 }
4451 
4452 // Preserves rbx, and rdx.
4453 Register MacroAssembler::tlab_refill(Label& retry,
4454                                      Label& try_eden,
4455                                      Label& slow_case) {
4456   Register top = rax;
4457   Register t1  = rcx;
4458   Register t2  = rsi;
4459   Register thread_reg = NOT_LP64(rdi) LP64_ONLY(r15_thread);
4460   assert_different_registers(top, thread_reg, t1, t2, /* preserve: */ rbx, rdx);
4461   Label do_refill, discard_tlab;
4462 
4463   if (!Universe::heap()->supports_inline_contig_alloc()) {
4464     // No allocation in the shared eden.


5692     } else if (CheckJNICalls) {
5693       call(RuntimeAddress(StubRoutines::x86::verify_mxcsr_entry()));
5694     }
5695   }
5696   if (VM_Version::supports_avx()) {
5697     // Clear upper bits of YMM registers to avoid SSE <-> AVX transition penalty.
5698     vzeroupper();
5699   }
5700 
5701 #ifndef _LP64
5702   // Either restore the x87 floating point control word after returning
5703   // from the JNI call or verify that it wasn't changed.
5704   if (CheckJNICalls) {
5705     call(RuntimeAddress(StubRoutines::x86::verify_fpu_cntrl_wrd_entry()));
5706   }
5707 #endif // _LP64
5708 }
5709 
5710 
5711 void MacroAssembler::load_klass(Register dst, Register src) {
5712   if (ShenandoahVerifyReadsToFromSpace) {
5713     oopDesc::bs()->interpreter_read_barrier(this, src);
5714   }
5715 #ifdef _LP64
5716   if (UseCompressedClassPointers) {
5717     movl(dst, Address(src, oopDesc::klass_offset_in_bytes()));
5718     decode_klass_not_null(dst);
5719   } else
5720 #endif
5721     movptr(dst, Address(src, oopDesc::klass_offset_in_bytes()));
5722 }
5723 
5724 void MacroAssembler::load_prototype_header(Register dst, Register src) {
5725   load_klass(dst, src);
5726   movptr(dst, Address(dst, Klass::prototype_header_offset()));
5727 }
5728 
5729 void MacroAssembler::store_klass(Register dst, Register src) {
5730 #ifdef _LP64
5731   if (UseCompressedClassPointers) {
5732     encode_klass_not_null(src);
5733     movl(Address(dst, oopDesc::klass_offset_in_bytes()), src);
5734   } else

