For conditional jumps whose displacement is larger than 32 bits, invert the
branch condition so that the code jumps around an unconditional 64-bit
branch to the target.
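Conceptually, the rewrite looks like the sketch below (illustrative only:
the emit/invert/bind helpers and the displacement check are hypothetical
stand-ins for the backend's own primitives):
    if (fitsInS32(target - nextPC)) {
        emitJcc(cc, target);           // near form: rel32 conditional jump
    } else {
        Label skip;
        emitJcc(invert(cc), skip);     // inverted condition hops over...
        emitJmp64(target);             // ...an unconditional 64-bit branch
        bind(skip);                    // not-taken path falls through here
    }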
--HG--
extra : convert_revision : ada7f685d84394abc19d909a021957e25043a722
Remove unnecessary masking of shift count.
Patch submitted by Chris Dearman (chris@mips.com).
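For context, a minimal sketch of the pattern being simplified, assuming the
usual MIPS behaviour that variable shifts use only the low five bits of the
count register (the emit helpers are illustrative, not the backend's real
interface):
    // Before: mask the count, then shift.
    emitANDI(tmp, rs, 31);     // redundant: SLLV already ignores upper bits
    emitSLLV(rd, rt, tmp);
    // After: shift directly; the hardware masks the count to 5 bits itself.
    emitSLLV(rd, rt, rs);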
--HG--
extra : convert_revision : 8986dba933c63d68c3b0498af53b9cdd6c99c69d
1) The "register" FST0 is the sole member of the x87regs register
class. In many places, however, the code is written so as to strongly
suggest that there might be multiple such registers. This patch removes
such conceits, replacing expressions such as (rmask(r) & x87regs)
with (r == FST0), etc.; a before/after sketch follows the list.
2) prepareResultReg() has been slightly refactored to make the x87
stack fiddling a bit easier to follow and to remove a fragile assumption.
3) Do not pass the "pop" argument to asm_spill() on non-IA32 platforms.
4) Remove redundant normalization of boolean values.
5) Comment the FPU stack depth consistency check.
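A before/after sketch of the point-1 simplification (condensed; the
surrounding code is illustrative):
    // Before: written as if x87regs could contain several registers.
    if (rmask(r) & x87regs) {
        /* x87-specific handling */
    }
    // After: FST0 is the only x87 "register", so test for it directly.
    if (r == FST0) {
        /* x87-specific handling */
    }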
--HG--
extra : convert_revision : 04a3292575e6af31578914f7f3b9478b5cad2a1c
Attach script objects immediately in all JSAPI script-creating functions;
have JS_NewScriptObject simply return the already-allocated object; and
make JS_DestroyScript a no-op.
Verify that all scripts given to JSAPI script-consuming functions have
objects, or are the canonical empty script object.
All scripts produced using JSAPI functions should be able to have
JS_NewScriptObject applied to them. However, JS_CompileFile and
JS_CompileFileHandleForPrincipals fail to pass TCF_NEED_MUTABLE_SCRIPT, and
thus will occasionally return JSScript::emptyScript(); applying
JS_NewScriptObject to that causes a crash.
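For illustration only, the intended embedder pattern with the JSAPI of this
era looks roughly like the sketch below (cx, global, and the filename are
assumed to exist; signatures are quoted from memory, so treat the details
as approximate):
    JSScript *script = JS_CompileFile(cx, global, "example.js");
    if (script) {
        // With this change the object is already attached at compile time;
        // JS_NewScriptObject just returns it instead of allocating one.
        JSObject *scriptObj = JS_NewScriptObject(cx, script);
        /* ... root scriptObj, then JS_ExecuteScript(cx, global, script, &rval) ... */
    }
    // JS_DestroyScript(cx, script) is now a no-op; the script is freed when
    // the GC collects its object.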
Changed all the register iteration loops to use lsbSet/msbSet functions
that use fast find-first-bit intrinsics when available. Typical loops of
the form:
    for (Register r = FirstReg; r <= LastReg; r = nextReg(r))
        if (predicate(r))
            /* use r */
were transformed by replacing the per-iteration predicate with a single
mask calculation, then iterating through only the 1 bits in the mask:
    RegisterMask set = /* calculate predicate with bitmask ops */;
    for (Register r = lsReg(set); set; r = lsNextReg(set))
        /* use r */
Iteration can be low-to-high with lsReg/lsNextReg, or high-to-low with
msReg/msNextReg. Primitives are provided for 32- and 64-bit masks; PPC and
MIPS, for example, need a 64-bit mask even on 32-bit systems.
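As a rough sketch of what the 32-bit primitive can look like when a fast
find-first-bit intrinsic is available (the GCC builtin below is an
assumption about one possible configuration, not necessarily what every
backend uses; msbSet32 would use __builtin_clz analogously):
    #include <stdint.h>
    // lsbSet32(x): index of the least-significant 1 bit; x must be non-zero.
    static inline uint32_t lsbSet32(uint32_t x) {
    #if defined(__GNUC__)
        return __builtin_ctz(x);            // fast find-first-bit intrinsic
    #else
        uint32_t i = 0;                     // portable fallback
        while (!(x & 1)) { x >>= 1; i++; }
        return i;
    #endif
    }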
Refactoring details:
I renamed msbSet() to msbSet32() as part of adding [msb|lsb]Set[32|64], which
affected the AccSet code trivially.
I used if (sizeof(RegisterMask) == 4) to choose between the 32- and 64-bit
implementations, counting on a sane compiler to strip out the provably dead
path. An alternative would be to move the definitions of lsReg() and msReg() to
NativeXXX.h, after the RegisterMask typedef, allowing backends to hardcode the
choice. Given we have six backends and one more on the way, it seemed better
to centralize the code and also avoid more ifdefs.
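The dispatch itself is a one-liner along these lines (a sketch only:
lsbSet32 as above, lsbSet64 as its __builtin_ctzll counterpart, and
Register/RegisterMask coming from the backend headers; the real code may
differ in detail):
    static inline Register lsReg(RegisterMask set) {
        // sizeof() is a compile-time constant, so a sane compiler removes
        // the provably dead branch.
        if (sizeof(RegisterMask) == 4)
            return (Register)lsbSet32((uint32_t)set);
        else
            return (Register)lsbSet64((uint64_t)set);
    }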
I moved the definitions of msbSet/lsbSet to nanojit.h, where other such helpers
already live. It didn't seem appropriate to keep adding to LIR.h since the
helpers will now be used in several places in nanojit.
RegAlloc::managed is now set in Assembler.cpp instead of each backend; six
lines of code replaced by one.
prevreg() was dead after these changes. Additionally, I hand-inlined nextreg()
in the other backends, because the usage was highly specialized -- those call
sites depended on nextreg being reg+1 (or reg+2), not some generic iteration.
I removed RegAlloc::countActive() since its only use was testing countActive()
== 0, which is equivalent to activeMask() == 0.
--HG--
extra : convert_revision : c7009f5cd83ea028b98f59e1f8830a76ba27c1dd