Instead of calling the code object directly, call it indirectly and
pass the code object in a register. The object pool is then loaded from
the code object. This is another preparation step for making generated code
relocatable.
All non-ia32 platforms:
No entry patching.
ARM:
PC marker (now code object) moves to the same place as on x64 (below saved PP, above saved FP).
R9 is now used as PP, R10 as CODE_REG.
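The calling convention described above can be sketched in C++ (illustrative names only, not the VM's actual types): the call site no longer embeds the target address, but passes the callee's code object in a register, and the callee loads its object pool from that code object.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical stand-in for the VM's Code object: it carries both the
// entry point and the object pool the callee will use.
struct Code {
  // The entry point receives the code object itself, mirroring a calling
  // convention where CODE_REG holds the callee's Code object.
  int64_t (*entry)(const Code* self, int64_t arg);
  std::vector<int64_t> object_pool;
};

// A callee that reads a constant from its own object pool (the PP load).
int64_t AddPoolConstant(const Code* self, int64_t arg) {
  return arg + self->object_pool[0];
}

// Indirect call: the "register" is modeled as an argument. No target
// address is embedded at the call site, so the code stays relocatable.
int64_t Invoke(const Code* code, int64_t arg) {
  return code->entry(code, arg);
}
```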
BUG=
R=koda@google.com, rmacnak@google.com
Committed: 1d343e5a7b
Review URL: https://codereview.chromium.org//1192103004 .
Instead of calling the code object directly, call it indirectly and
pass the code object in a register. The object pool is then loaded from
the code object. This is another preparation step for making generated code
relocatable.
All non-ia32 platforms:
No entry patching.
ARM:
PC marker (now code object) moves to the same place as on x64 (below saved PP, above saved FP).
R9 is now used as PP, R10 as CODE_REG.
BUG=
R=rmacnak@google.com
Review URL: https://codereview.chromium.org//1192103004 .
This makes the code in the code generator independent from how stubs
are actually called (i.e. directly embedding the target address, or
indirectly by loading the target address from the code object).
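The indirection might look like the following sketch (names are illustrative, not the VM's real API): the code generator asks the assembler to call a stub by name, and only the assembler knows whether that means a direct call or a load through the stub's code object.

```cpp
#include <cassert>
#include <string>

// Sketch: the code generator never spells out the call mechanism; it is
// chosen in one place, behind CallStub.
class Assembler {
 public:
  explicit Assembler(bool call_through_code_object)
      : call_through_code_object_(call_through_code_object) {}

  void CallStub(const std::string& stub_name) {
    if (call_through_code_object_) {
      // Indirect: load the target address from the stub's code object.
      emitted_ = "load " + stub_name + ".code; call [code.entry]";
    } else {
      // Direct: embed the target address at the call site.
      emitted_ = "call " + stub_name + ".address";
    }
  }

  const std::string& emitted() const { return emitted_; }

 private:
  bool call_through_code_object_;
  std::string emitted_;
};
```

Switching the calling strategy then touches only the assembler, not every code-generator call site.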
BUG=
R=rmacnak@google.com
Review URL: https://codereview.chromium.org//1270803003 .
This allows making the last explicitly named stubs shared between isolates.
When sharing code stubs, we can no longer patch their entries.
Therefore, I had to remove patching support for the array allocation stub.
Is this a functionality we want to keep?
The change is mostly performance-neutral because optimized code has an inlined fast
path for array allocation and only uses the stub for the slow case.
The only isolate-specific stubs left are the object allocation stubs, which are
associated with their Class and are therefore per-isolate.
Since this CL removes any isolate-specific stubs from StubCode, it becomes AllStatic.
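The AllStatic shape can be sketched like this (illustrative, not the VM's actual class): once no stub is isolate-specific, the stub table holds only process-global state, so the class is never instantiated and exposes static accessors only.

```cpp
#include <cassert>

// Stand-in for the VM's Code object.
struct Code { int id; };

// Sketch of an AllStatic stub table: deleted constructor, static state,
// static accessors. All isolates share the same stubs.
class StubCode {
 public:
  StubCode() = delete;  // AllStatic: no instances, ever
  static void Init() { allocate_array_ = Code{1}; }
  static const Code& AllocateArray() { return allocate_array_; }
 private:
  static Code allocate_array_;
};

Code StubCode::allocate_array_{0};
```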
BUG=
R=koda@google.com
Review URL: https://codereview.chromium.org//1247783002 .
Instead of going back and forth from stub code to C++, perform
only the lookup in C++ and call target functions only from
stub code.
noSuchMethod and implicit closure invocations now also work with
the megamorphic cache; before, they would take the slow path in the megamorphic case.
This CL eliminates the InstanceFunctionLookup stub that was previously
used to handle noSuchMethod and implicit closure invocations.
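The split of responsibilities can be sketched as follows (hypothetical names; function pointers stand in for compiled code): the C++ runtime only resolves and caches the target, while the "stub" side performs the actual call, so a cache hit never enters C++ at all.

```cpp
#include <cassert>
#include <unordered_map>

using Target = int (*)(int);

// Stand-in for a resolved method.
int Twice(int x) { return 2 * x; }

// Hypothetical megamorphic cache: receiver class id -> target.
std::unordered_map<int, Target> megamorphic_cache;

// "C++ runtime" side: performs only the lookup and fills the cache.
// It does NOT call the target; the call stays in stub code.
Target MegamorphicLookup(int cid) {
  Target target = Twice;  // stand-in for real method resolution
  megamorphic_cache[cid] = target;
  return target;
}

// "Stub" side: probe the cache, fall back to the C++ lookup on a miss,
// then perform the call here instead of bouncing back into C++.
int InvokeMegamorphic(int cid, int arg) {
  auto it = megamorphic_cache.find(cid);
  Target t = (it != megamorphic_cache.end()) ? it->second
                                             : MegamorphicLookup(cid);
  return t(arg);
}
```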
R=srdjan@google.com
Review URL: https://codereview.chromium.org//221173011
git-svn-id: https://dart.googlecode.com/svn/branches/bleeding_edge/dart@34774 260f80e4-7a28-3924-810f-c04153c831b5
Change executable pages to be read/execute but not writable by default.
All pages are made temporarily writable just before a full GC, because both
the mark and sweep phases write to the pages. When allocating in a page and
when patching code, the pages are made temporarily writable.
The order of allocation of Code and Instructions objects is changed so that
a GC will not occur after Instructions is allocated. (A full GC would
render the Instructions unwritable.) A scoped object is used to make memory
protection simpler.
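The scoped-writability idea can be sketched with POSIX mprotect (illustrative names; the VM uses its own platform abstraction): code pages stay read/execute, and the scope object flips them to read/write for its lifetime, restoring read/execute on destruction.

```cpp
#include <cassert>
#include <sys/mman.h>
#include <unistd.h>

// Sketch of a scoped object that makes a page range writable and restores
// read/execute when the scope ends.
class WritableCodePages {
 public:
  WritableCodePages(void* start, size_t size) : start_(start), size_(size) {
    mprotect(start_, size_, PROT_READ | PROT_WRITE);
  }
  ~WritableCodePages() {
    mprotect(start_, size_, PROT_READ | PROT_EXEC);
  }
 private:
  void* start_;
  size_t size_;
};

// Allocate one page as read/execute, then patch it under a scope,
// mimicking code patching on a non-writable code page.
unsigned char* PatchFirstByte(unsigned char value) {
  size_t page = static_cast<size_t>(sysconf(_SC_PAGESIZE));
  void* mem = mmap(nullptr, page, PROT_READ | PROT_EXEC,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  {
    WritableCodePages scope(mem, page);  // writable only inside this scope
    static_cast<unsigned char*>(mem)[0] = value;
  }  // protection is back to read/execute here
  return static_cast<unsigned char*>(mem);
}
```

Writing to the page outside the scope would fault, which is exactly the property the crash test below checks for.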
Original CL: https://codereview.chromium.org/106593002/
I added a cc test that is expected to crash.
R=srdjan@google.com
Review URL: https://codereview.chromium.org//136563002
git-svn-id: https://dart.googlecode.com/svn/branches/bleeding_edge/dart@32493 260f80e4-7a28-3924-810f-c04153c831b5