Commit Graph

75 Commits

Author SHA1 Message Date
Stephen Rothwell
40b313608a Finally eradicate CONFIG_HOTPLUG
Ever since commit 45f035ab9b ("CONFIG_HOTPLUG should be always on"),
it has been basically impossible to build a kernel with CONFIG_HOTPLUG
turned off.  Remove all the remaining references to it.

Cc: Russell King <linux@arm.linux.org.uk>
Cc: Doug Thompson <dougthompson@xmission.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Steven Whitehouse <swhiteho@redhat.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Acked-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-06-03 14:20:18 -07:00
Marc Zyngier
0394e1f605 ARM: KVM: enforce maximum size for identity mapped code
We're about to move to an init procedure where we rely on the
fact that the init code fits in a single page. Make sure we
align the idmap text to the vector alignment, and that the code is
not bigger than a single page.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
2013-04-28 22:23:09 -07:00
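A minimal GNU ld sketch of the kind of size check described above; the boundary symbols, the 32-byte vector alignment and the use of PAGE_SIZE via the preprocessor are assumptions made for illustration:

    . = ALIGN(32);                          /* assumed vector alignment */
    .hyp.idmap.text : {
            __hyp_idmap_text_start = .;
            *(.hyp.idmap.text)
            __hyp_idmap_text_end = .;
    }

    /* refuse to link if the init/idmap code ever outgrows a single page */
    ASSERT((__hyp_idmap_text_end - __hyp_idmap_text_start) <= PAGE_SIZE,
           "HYP init code does not fit in a single page")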
Christoffer Dall
9e9a367c29 ARM: Section based HYP idmap
Add a method (hyp_idmap_setup) to populate a hyp pgd with an
identity mapping of the code contained in the .hyp.idmap.text
section.

Offer a method to drop this identity mapping through
hyp_idmap_teardown.

Make all the above depend on CONFIG_ARM_VIRT_EXT and CONFIG_ARM_LPAE.

Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
2013-01-23 13:29:09 -05:00
Pawel Moll
dad5451a32 ARM: 7605/1: vmlinux.lds: Move .notes section next to the rodata
The .notes section, read-only data by nature, was placed between
the read-write .data and .bss sections. This was harmful for XIP
kernels: being placed in the RAM range, most likely far from the
ROM address, it inflated the XIP images.

Moving .notes to the end of the read-only section
(consisting of .text, .rodata and the unwind info) fixes the problem.

Reported-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Pawel Moll <pawel.moll@arm.com>
Tested-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-12-16 10:04:24 +00:00
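A simplified GNU ld sketch of the layout described above; the NOTES macro comes from the generic linker script headers, and the alignment value and section contents are illustrative:

    .text   : { *(.text) }
    .rodata : { *(.rodata) }
    /* ARM unwind tables (.ARM.exidx/.ARM.extab) also belong to this read-only area */

    NOTES                           /* read-only, so it stays in the ROM range of an XIP image */

    . = ALIGN(4096);                /* illustrative data alignment */
    __data_loc = .;
    .data : { *(.data) }            /* from here on we are in the RAM range of an XIP image */
    .bss  : { *(.bss) }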
Stephen Boyd
ee951c630c ARM: 7568/1: Sort exception table at compile time
Add the ARM machine identifier to sortextable and select the
config option so that we can sort the exception table at compile
time. sortextable relies on a section named __ex_table existing
in the vmlinux, but ARM's linker script places the exception
table in the data section. Give the exception table its own
section so that sortextable can find it.

This allows us to skip the sorting step during boot.

Cc: David Daney <david.daney@cavium.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-11-04 10:31:16 +00:00
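A sketch of what giving the exception table its own output section looks like in GNU ld terms, using the usual __start_/__stop_ boundary convention (shown as an illustration rather than the exact ARM script):

    . = ALIGN(4);
    __ex_table : {                  /* no longer buried inside .data, so sortextable can find it */
            __start___ex_table = .;
            *(__ex_table)
            __stop___ex_table = .;
    }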
David Brown
9973290ce2 ARM: 7428/1: Prevent KALLSYM size mismatch on ARM.
ARM builds seem to be plagued by an occasional build error:

    Inconsistent kallsyms data
    This is a bug - please report about it
    Try "make KALLSYMS_EXTRA_PASS=1" as a workaround

The problem has to do with alignment of some sections by the linker.
The kallsyms data is built in two passes by first linking the kernel
without it, and then linking the kernel again with the symbols
included.  Normally, this just shifts the symbols, without changing
their order, and the compression used by the kallsyms gives the same
result.

On non-SMP, the per-CPU data is empty.  Depending on where the
alignment ends up, it can come out as either:

   +-------------------+
   | last text segment |
   +-------------------+
   /* padding */
   +-------------------+     <- L1_CACHE_BYTES alignment
   | per cpu (empty)   |
   +-------------------+
__per_cpu_end:
   /* padding */
__data_loc:
   +-------------------+     <- THREAD_SIZE alignment
   | data              |
   +-------------------+

or

   +-------------------+
   | last text segment |
   +-------------------+
   /* padding */
   +-------------------+     <- L1_CACHE_BYTES alignment
   | per cpu (empty)   |
   +-------------------+
__per_cpu_end:
   /* no padding */
__data_loc:
   +-------------------+     <- THREAD_SIZE alignment
   | data              |
   +-------------------+

if the alignment satisfies both.  Because symbols that have the same
address are sorted by 'nm -n', the second case will be in a different
order than the first case.  This changes the compression, changing the
size of the kallsym data, causing the build failure.

The KALLSYMS_EXTRA_PASS=1 workaround usually works, but it is still
possible to have the alignment change between the second and third
pass.  It's probably even possible for it to never reach a fixed point.

The problem only occurs on non-SMP, when the per-cpu data is empty,
and when the data segment has alignment (and immediately follows the
text segments).  Fix this by only including the per_cpu section on
SMP, when it is not empty.

Signed-off-by: David Brown <davidb@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-06-22 22:54:18 +01:00
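A sketch of the shape of the fix in the (preprocessed) linker script, assuming the generic PERCPU_SECTION() helper is what defines the per-cpu output section at this point:

    #ifdef CONFIG_SMP
            PERCPU_SECTION(L1_CACHE_BYTES)  /* only emit the per-cpu section when it has content */
    #endif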
Marc Zyngier
b8b9987ffd ARM: 7320/1: Fix proc_info table alignment
With an admittedly exotic choice of configuration options
(CC_OPTIMIZE_FOR_SIZE, THUMB2, some other size-minimizing ones)
and compiler, the proc_info table can end up misaligned, leaving
the kernel unbootable (Error: unrecognized/unsupported processor
variant).

Forcing the alignment to 4 bytes in the linker script fixes the
issue.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-02-09 16:25:37 +00:00
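A GNU ld sketch of forcing that alignment; the section and symbol names follow the usual ARM proc_info conventions but are shown here purely as an illustration:

    . = ALIGN(4);                   /* proc_info entries are scanned as word-aligned records */
    .proc.info.init : {
            __proc_info_begin = .;
            *(.proc.info.init)
            __proc_info_end = .;
    }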
Will Deacon
972da06470 ARM: 7290/1: vmlinux.lds.S: align the exception fixup table to a 4-byte boundary
The exception fixup table is currently aligned to a 32-byte boundary.
Whilst this won't cause any problems, the exception_table_entry
structures contain only a pair of unsigned longs, so 4-byte alignment
is all that is required. If the table was walked from start to end,
cacheline alignment may bring some performance benefits, but since a
binary search is used, the access pattern is random and will not benefit
from a stricter alignment.

Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-01-23 10:20:04 +00:00
Will Deacon
f0d5375e3c ARM: 7289/1: vmlinux.lds.S: do not hardcode cacheline size as 32 bytes
The linker script assumes a cacheline size of 32 bytes when aligning
the .data..cacheline_aligned and .data..percpu sections.

This patch updates the script to use L1_CACHE_BYTES, which should be set
to 64 on platforms that require it.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-01-23 10:20:04 +00:00
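A sketch of the change for the cacheline-aligned data; the only point is that the alignment argument becomes L1_CACHE_BYTES instead of a hard-coded 32:

    . = ALIGN(L1_CACHE_BYTES);      /* previously a literal ALIGN(32) */
    *(.data..cacheline_aligned)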
Will Deacon
8903826d0c ARM: idmap: populate identity map pgd at init time using .init.text
When disabling and re-enabling the MMU, it is necessary to take out an
identity mapping for the code that manipulates the SCTLR in order to
avoid it disappearing from under our feet. This is useful when soft
rebooting and returning from CPU suspend.

This patch allocates a set of page tables during boot and populates them
with an identity mapping for the .idmap.text section. This means that
users of the identity map do not need to manage their own pgd and can
instead annotate their functions with __idmap or, in the case of assembly
code, place them in the correct section.

Acked-by: Dave Martin <dave.martin@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2011-12-06 14:04:14 +00:00
Russell King
bdf4e94823 Merge branch 'misc' into for-linus
Conflicts:
	arch/arm/mach-integrator/integrator_ap.c
2011-10-25 08:19:59 +01:00
Simon Glass
87e040b645 ARM: 7017/1: Use generic BUG() handler
ARM uses its own BUG() handler, which makes its output slightly different
from other architectures.

One of the problems is that the ARM implementation doesn't report the function
containing the BUG(), but always reports the PC as being in __bug(). The generic
implementation doesn't have this problem.

Currently we get something like:

kernel BUG at fs/proc/breakme.c:35!
Unable to handle kernel NULL pointer dereference at virtual address 00000000
...
PC is at __bug+0x20/0x2c

With this patch it displays:

kernel BUG at fs/proc/breakme.c:35!
Internal error: Oops - undefined instruction: 0 [#1] PREEMPT SMP
...
PC is at write_breakme+0xd0/0x1b4

This implementation uses an undefined instruction to implement BUG, and sets up
a bug table containing the relevant information. Many versions of gcc do not
support %c properly for ARM (inserting a # when they shouldn't) so we work
around this using distasteful macro magic.

v1: Initial version to replace existing ARM BUG() implementation with something
more similar to other architectures.

v2: Add Thumb support, remove backtrace whitespace output changes. Change to
use macros instead of requiring the asm %d flag to work (thanks to
Dave Martin <dave.martin@linaro.org>)

v3: Remove old BUG() implementation in favor of this one.
Remove the Backtrace: message (will submit this separately).
Use ARM_EXIT_KEEP() so that some architectures can dump exit text at link time
thanks to Stephen Boyd <sboyd@codeaurora.org> (although since we always
define GENERIC_BUG this might be academic.)
Rebase to linux-2.6.git master.

v4: Allow BUGS in modules (these were not reported correctly in v3)
(thanks to Stephen Boyd <sboyd@codeaurora.org> for suggesting that.)
Remove __bug() as this is no longer needed.

v5: Add %progbits as the section flags.

Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-10-17 09:13:41 +01:00
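On the linker-script side, the generic BUG() machinery collects one record per BUG site into a bug table; a sketch of that output section, with the boundary symbols assumed for illustration:

    . = ALIGN(8);
    __bug_table : {
            __start___bug_table = .;
            KEEP(*(__bug_table))    /* one bug_entry per BUG()/WARN() site */
            __stop___bug_table = .;
    }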
Russell King
6760b10960 ARM: fix vmlinux.lds.S discarding sections
We are seeing linker errors caused by sections being discarded, despite
the linker script trying to keep them.  The result is (eg):

`.exit.text' referenced in section `.alt.smp.init' of drivers/built-in.o: defined in discarded section `.exit.text' of drivers/built-in.o
`.exit.text' referenced in section `.alt.smp.init' of net/built-in.o: defined in discarded section `.exit.text' of net/built-in.o

This is the relevant part of the linker script (reformatted to make it
clearer):
| SECTIONS
| {
|         /*
|          * unwind exit sections must be discarded before the rest of the
|          * unwind sections get included.
|          */
|         /DISCARD/ : {
|                 *(.ARM.exidx.exit.text)
|                 *(.ARM.extab.exit.text)
|         }
|         ...
|         .exit.text : {
|                 *(.exit.text)
|                 *(.memexit.text)
|         }
|         ...
|         /DISCARD/ : {
|                 *(.exit.text)
|                 *(.memexit.text)
|                 *(.exit.data)
|                 *(.memexit.data)
|                 *(.memexit.rodata)
|                 *(.exitcall.exit)
|                 *(.discard)
|                 *(.discard.*)
|         }
| }

Now, this is what the linker manual says about discarded output sections:

|    The special output section name `/DISCARD/' may be used to discard
| input sections.  Any input sections which are assigned to an output
| section named `/DISCARD/' are not included in the output file.

No questions, no exceptions. It doesn't say "unless they are listed
before the /DISCARD/ section." Now, this is what asm-generic/vmlinux.lds.h
says:
| /*
|  * Default discarded sections.
|  *
|  * Some archs want to discard exit text/data at runtime rather than
|  * link time due to cross-section references such as alt instructions,
|  * bug table, eh_frame, etc. DISCARDS must be the last of output
|  * section definitions so that such archs put those in earlier section
|  * definitions.
|  */

And guess what - the list _always_ includes .exit.text etc.

Now, what's actually happening is that the linker reads the script
and finds the first /DISCARD/ output section at the beginning of the
script. It continues reading and finds the 'DISCARDS' macro at the
end, which, once preprocessed, results in another /DISCARD/ output
section. As the linker already has the earlier /DISCARD/ output
section, it merges the new entries into that existing section, so
they are effectively placed at the start. This can be seen by using
the -M option to ld:

| Linker script and memory map
|
|                 0xc037c080                jiffies = jiffies_64
|
| /DISCARD/
|  *(.ARM.exidx.exit.text)
|  *(.ARM.extab.exit.text)
|  *(.exit.text)
|  *(.memexit.text)
|  *(.exit.data)
|  *(.memexit.data)
|  *(.memexit.rodata)
|  *(.exitcall.exit)
|  *(.discard)
|  *(.discard.*)
|
|                 0xc0008000                . = 0xc0008000
|
| .head.text      0xc0008000      0x1d0
|                 0xc0008000                _text = .
|  *(.head.text)
|  .head.text     0xc0008000      0x1d0 arch/arm/kernel/head.o
|                 0xc0008000                stext
|
| .text           0xc0008200   0x2d78d0
|                 0xc0008200                _stext = .
|                 0xc0008200                __exception_text_start = .
|  *(.exception.text)
|  .exception.text
| ...

As you can see, all the discarded sections are grouped together - and
as a result of it being the first output section, they all appear before
any other section.

The result is that not only is the unwind information discarded (as
intended), but also the .exit.text, despite us wanting to have the
.exit.text preserved.

We can't move the unwind information elsewhere, because it'll then be
included even when we do actually discard the .exit.text (and similar)
sections.

So, work around this by avoiding the generic DISCARDS macro, and instead
conditionalize the sections to be discarded ourselves.  This avoids the
ambiguity in how the linker assigns input sections to output sections,
making our script less dependent on undocumented linker behaviour.

Reported-by: Rob Herring <robherring2@gmail.com>
Tested-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-09-20 23:42:31 +01:00
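A sketch of the shape of the workaround: a single hand-written /DISCARD/ list up front, with the exit sections dropped only under configurations where nothing references them. The guards shown are illustrative (vmlinux.lds.S is run through the preprocessor, so #ifndef works here):

    /DISCARD/ : {
            *(.ARM.exidx.exit.text)
            *(.ARM.extab.exit.text)
    #ifndef CONFIG_SMP_ON_UP
            *(.alt.smp.init)
    #endif
            *(.exitcall.exit)
            *(.discard)
            *(.discard.*)
            /* .exit.text/.exit.data are deliberately not listed while
             * something (e.g. .alt.smp.init) may still reference them */
    }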
Russell King
e2f81844ef ARM: vmlinux.lds: use _text and _stext the same way as x86
x86 uses _text to mark the start of the kernel image including the
head text, and _stext to mark the start of the .text section.  Change
our vmlinux.lds to conform.  An audit of the places which use _stext
and _text in arch/arm indicates no users of either symbol are impacted
by this change.  It does mean a slight change to /proc/iomem output.

Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-07 23:36:34 +01:00
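In linker-script terms the convention looks roughly like this (illustrative sketch):

    .head.text : {
            _text = .;              /* start of the kernel image, head text included */
            *(.head.text)
    }

    .text : {
            _stext = .;             /* start of the .text section proper */
            *(.text)
    }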
Russell King
3835d69a6c ARM: vmlinux.lds: move init sections between text and data sections
Place the init sections between the text and data sections.  This
means all code is grouped together at the beginning of the kernel
image, and all data is at the end of the image.  This avoids problems
with the 24-bit branch instruction relocations becoming invalid with
large initramfs images.

Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-07 23:36:31 +01:00
Russell King
43fc9d2fa5 ARM: vmlinux.lds: remove .rodata/.rodata1 from main .text segment
RODATA() already handles these sections, so allow it to take care
of them for us.

Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-07 23:36:28 +01:00
Russell King
1604d79d37 ARM: vmlinux.lds: rearrange .init output section
Keep the various linker tables as separate output sections rather
than combining them into one big .init section.  This makes it
easier to see in the resulting vmlinux what is placed where.

Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-07 23:35:41 +01:00
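A sketch of the idea for two of the ARM init tables, kept as separate named output sections instead of being folded into one big .init blob (names follow the usual ARM conventions, shown here as an illustration):

    .init.tagtable : {
            __tagtable_begin = .;
            *(.taglist.init)        /* ATAG parser table */
            __tagtable_end = .;
    }

    .init.arch.info : {
            __arch_info_begin = .;
            *(.arch.info.init)      /* machine descriptor table */
            __arch_info_end = .;
    }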
Russell King
39df88872f ARM: vmlinux.lds: move discarded sections to beginning
Rather than scattering the discarded sections throughout the linker
script, move them to the start.

Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-07 23:35:38 +01:00
Tejun Heo
0415b00d17 percpu: Always align percpu output section to PAGE_SIZE
The percpu allocator honors alignment requests up to PAGE_SIZE, and both
the percpu addresses in the percpu address space and the translated
kernel addresses should be aligned accordingly.  The calculation of the
former depends on the alignment of the percpu output section in the
kernel image.

The linker script macros PERCPU_VADDR() and PERCPU() are used to
define this output section, and the latter takes an @align parameter.
Several architectures use an @align smaller than PAGE_SIZE, breaking
percpu memory alignment.

This patch removes the @align parameter from PERCPU(), renames it to
PERCPU_SECTION() and makes it always align to PAGE_SIZE.  While at it,
it adds PCPU_SETUP_BUG_ON() checks so that alignment problems are
reliably detected, and removes the percpu alignment comment recently
added in workqueue.c, as the condition would trigger a BUG well before
reaching there.

For um, this patch raises the alignment of the percpu area.  As the
area is in .init, there shouldn't be any noticeable difference.

This problem was discovered by David Howells while debugging boot
failure on mn10300.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Cc: uclinux-dist-devel@blackfin.uclinux.org
Cc: David Howells <dhowells@redhat.com>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: user-mode-linux-devel@lists.sourceforge.net
2011-03-24 18:50:09 +01:00
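Roughly what the renamed macro guarantees, written out as a plain sketch of the output section (illustrative; PAGE_SIZE is assumed to be visible via the preprocessor, and the real PERCPU_SECTION() definition carries more subsections):

    . = ALIGN(PAGE_SIZE);           /* the allocator may hand out up-to-PAGE_SIZE alignment,
                                     * so the section base must honour it as well */
    .data..percpu : {
            __per_cpu_start = .;
            *(.data..percpu)
            __per_cpu_end = .;
    }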
Linus Torvalds
16d8775700 Merge branch 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm
* 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm: (91 commits)
  ARM: 6806/1: irq: introduce entry and exit functions for chained handlers
  ARM: 6781/1: Thumb-2: Work around buggy Thumb-2 short branch relocations in gas
  ARM: 6747/1: P2V: Thumb2 support
  ARM: 6798/1: aout-core: zero thread debug registers in a.out core dump
  ARM: 6796/1: Footbridge: Fix I/O mappings for NOMMU mode
  ARM: 6784/1: errata: no automatic Store Buffer drain on Cortex-A9
  ARM: 6772/1: errata: possible fault MMU translations following an ASID switch
  ARM: 6776/1: mach-ux500: activate fix for errata 753970
  ARM: 6794/1: SPEAr: Append UL to device address macros.
  ARM: 6793/1: SPEAr: Remove unused *_SIZE macros from spear*.h files
  ARM: 6792/1: SPEAr: Replace SIZE macro's with SZ_4K macros
  ARM: 6791/1: SPEAr3xx: Declare device structures after shirq code
  ARM: 6790/1: SPEAr: Clock Framework: Rename usbd clock and align apb_clk entry
  ARM: 6789/1: SPEAr3xx: Rename sdio to sdhci
  ARM: 6788/1: SPEAr: Include mach/hardware.h instead of mach/spear.h
  ARM: 6787/1: SPEAr: Reorder #includes in .h & .c files.
  ARM: 6681/1: SPEAr: add debugfs support to clk API
  ARM: 6703/1: SPEAr: update clk API support
  ARM: 6679/1: SPEAr: make clk API functions more generic
  ARM: 6737/1: SPEAr: formalized timer support
  ...
2011-03-16 19:03:06 -07:00
Russell King
05e3475451 Merge branch 'p2v' into devel
Conflicts:
	arch/arm/kernel/module.c
	arch/arm/mach-s5pv210/sleep.S
2011-03-16 23:35:27 +00:00
Linus Torvalds
79d8a8f736 Merge branch 'for-2.6.39' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu
* 'for-2.6.39' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu:
  percpu, x86: Add arch-specific this_cpu_cmpxchg_double() support
  percpu: Generic support for this_cpu_cmpxchg_double()
  alpha: use L1_CACHE_BYTES for cacheline size in the linker script
  percpu: align percpu readmostly subsection to cacheline

Fix up trivial conflict in arch/x86/kernel/vmlinux.lds.S due to the
percpu alignment having changed ("x86: Reduce back the alignment of the
per-CPU data section")
2011-03-16 08:22:41 -07:00
Russell King
a9ad21fed0 ARM: Keep exit text/data around for SMP_ON_UP
When SMP_ON_UP is used and the spinlocks are inlined, we end up with
inline spinlocks in the exit code, with references from the SMP
alternatives section to the exit sections.  This causes link time
errors.  Avoid this by placing the exit sections in the init-discarded
region.

Cc: <stable@kernel.org>
Tested-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-02-21 19:29:27 +00:00
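A sketch of the approach: a helper macro keeps the exit sections resident (inside the init region) whenever the SMP alternatives may reference them, and expands to nothing otherwise. The guard and placement are simplified for illustration; ARM_EXIT_KEEP(), INIT_TEXT and EXIT_TEXT are names that appear in the ARM/generic linker script helpers:

    #ifdef CONFIG_SMP_ON_UP
    #define ARM_EXIT_KEEP(x)    x   /* .alt.smp.init may point into exit code: keep it */
    #else
    #define ARM_EXIT_KEEP(x)        /* nothing references it: let it be discarded */
    #endif

    .init.text : {
            INIT_TEXT
            ARM_EXIT_KEEP(EXIT_TEXT)
    }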
Pawel Moll
dc810efb0c ARM: 6740/1: Place the notes section correctly in the linker script
Commit 18991197b4 added the --build-id
linker option when the toolchain supports it. The ARM one does, but for
some reason places the section at 0 when the linker script doesn't
mention it explicitly.

Commit 1e621a8e37 worked around the problem
by removing this section from the binary image with explicit objcopy
options, but it still exists in vmlinux, confusing tools like debuggers
and perf.

This problem was discussed here:
http://lists.infradead.org/pipermail/linux-arm-kernel/2010-May/015994.html
but the proposed changes to the linker script were substantial.

This patch simply places NOTES (36 bytes long, at least when compiled
with the CodeSourcery toolchain) between data and bss, which seems to
be the right place (and is suggested by the sample linker script in
include/asm-generic/vmlinux.lds.h).

It is enough to place it correctly in vmlinux (so debuggers are happy):

Section Headers:
  [11] .data             PROGBITS        c07ce000 7ce000 020fc0 00  WA  0   0 32
  [12] .notes            NOTE            c07eefc0 7eefc0 000024 00  AX  0   0  4
  [13] .bss              NOBITS          c07ef000 7eefe4 01e628 00  WA  0   0 32
Program Headers:
  LOAD           0x008000 0xc0008000 0xc0008000 0x7e6fe4 0x805628 RWE 0x8000
  NOTE           0x7eefc0 0xc07eefc0 0xc07eefc0 0x00024 0x00024 R E 0x4
Section to Segment mapping:
  Segment Sections...
   00     <...> .data .notes .bss
   01     .notes

and to get it exposed as /sys/kernel/notes used by perf tools.

Signed-off-by: Pawel Moll <pawel.moll@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-02-21 19:29:25 +00:00
Russell King
dc21af99fa ARM: P2V: introduce phys_to_virt/virt_to_phys runtime patching
This idea came from Nicolas; Eric Miao produced an initial version,
which was then rewritten into this.

Patch the physical-to-virtual translations at runtime.  As we modify
the code, this is incompatible with XIP kernels, but it lets us support
a runtime-determined PHYS_OFFSET with minimal loss of performance.

As many translations are of the form:

	physical = virtual + (PHYS_OFFSET - PAGE_OFFSET)
	virtual = physical - (PHYS_OFFSET - PAGE_OFFSET)

we generate an 'add' instruction for __virt_to_phys(), and a 'sub'
instruction for __phys_to_virt().  We calculate (PHYS_OFFSET - PAGE_OFFSET)
at run time by comparing the address prior to MMU initialization with
where it should be once the MMU has been initialized, and place this
constant into the above add/sub instructions.

Once we have (PHYS_OFFSET - PAGE_OFFSET), we can calculate the real
PHYS_OFFSET as PAGE_OFFSET is a build-time constant, and save this for
the C-mode PHYS_OFFSET variable definition to use.

At present, we are unable to support Realview with sparsemem enabled,
as it uses a complex mapping function, or MSM, as it requires a
constant which will not fit in our math instruction.

Add a module version magic string for this feature to prevent
incompatible modules being loaded.

Tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-02-17 23:27:32 +00:00
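The mechanism in outline: each __virt_to_phys()/__phys_to_virt() site emits its add/sub instruction together with a pointer to that instruction in a patch table, and early boot code walks the table once it has computed the real (PHYS_OFFSET - PAGE_OFFSET). A linker-script sketch of such a table, with illustrative section and symbol names:

    . = ALIGN(4);
    .init.pv_table : {
            __pv_table_begin = .;
            *(.pv_table)            /* one entry per instruction to patch */
            __pv_table_end = .;
    }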