Fix the load64 and store64 macros, created to handle 8-byte unaligned
loads and stores (resp.). The ldq_u and stq_u instructions mask off the
lower 3 bits of the final address before loading from or storing to
that address, precisely so as to avoid unaligned loads and stores. They
do not themselves perform loads from or stores to unaligned addresses.
Replace the macro definitions with a packed struct dereference.

Submitted by: Richard Henderson (rth at twiddle dot net)
Author: Marcel Moolenaar 2005-06-02 05:34:08 +00:00
parent b50c8bde99
commit d4337d869f
Notes: svn2git 2020-12-20 02:59:44 +00:00
svn path=/head/; revision=146886


@@ -58,13 +58,13 @@ extern Elf_Dyn _GOT_END_;
  * We don't use these when relocating jump slots and GOT entries,
  * since they are guaranteed to be aligned.
  */
-#define load64(p) ({ \
-	Elf_Addr __res; \
-	__asm__("ldq_u %0,%1" : "=r"(__res) : "m"(*(p))); \
-	__res; })
-#define store64(p, v) \
-	__asm__("stq_u %1,%0" : "=m"(*(p)) : "r"(v))
+struct ualong {
+	Elf_Addr x __attribute__((packed));
+};
+
+#define load64(p) (((struct ualong *)(p))->x)
+#define store64(p,v) (((struct ualong *)(p))->x = (v))
 
 /* Relocate a non-PLT object with addend. */
 static int