Commit 8112753bb2 made 44x ARCH=powerpc builds use the cpu setup
routines in cpu_setup_44x.S,
but didn't make a similar change for ARCH=ppc, and consequently
the ARCH=ppc builds fail with undefined symbols (since both use
the same cputable.c).
This fixes it by including cpu_setup_44x.S in the ARCH=ppc builds,
and by taking out the now-redundant FPU initialization in
arch/ppc/kernel/head_44x.S.
Signed-off-by: Paul Mackerras <paulus@samba.org>
This patch fixes arch/ppc kernels, at least for the prep subarch, after
the build-id addition. Without this, kernels were 3 times the size and
the bootloader refused to load them. Now they are back to normal again.
Tested only with Roland McGrath's "Use LDFLAGS_MODULE only for .ko
links" patch applied - boots and works fine.
Signed-off-by: Meelis Roos <mroos@linux.ee>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Current status of APUS:
- arch/powerpc/: removed in 2.6.23
- arch/ppc/: marked BROKEN since 2 years
This therefore removes the remaining parts of APUS support from
arch/ppc, include/asm-ppc, arch/powerpc and include/asm-powerpc.
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Instantiation of 8MB pages in the TLB cache for the kernel's static
mapping trashes the r3 register on !CONFIG_8xx_CPU6 configurations.
This ensures r3 gets saved and restored.
This has been posted to linuxppc-embedded by Marcelo Tosatti
<marcelo@kvack.org>, but only an incomplete version of the patch
has been applied in c51e078f82.
This patch adds the rest of the fix.
Signed-off-by: Jochen Friedrich <jochen@scram.de>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
The 440 family of processors doesn't have a tlbie instruction. So, we
implement TLB invalidates by explicitly searching the TLB with tlbsx.,
then clobbering the relevant entry, if any. Unfortunately the PID for
the search needs to be stored in the MMUCR register, which is also
used by the TLB miss handler. Interrupts were enabled in _tlbie(), so
an interrupt between loading the MMUCR and the tlbsx could cause
incorrect search results, and thus a failure to invalidate TLB entries
which needed to be invalidated.
This fixes the problem in both arch/ppc and arch/powerpc by inhibiting
interrupts (even critical and debug interrupts) across the relevant
instructions.
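A rough C-level sketch of the idea follows; the real fix lives in the
_tlbie assembly, so the helper name and the exact MSR handling here are
illustrative only, not the actual code:

    static inline void tlbie_44x_sketch(unsigned long ea, unsigned int pid)
    {
            unsigned long msr = mfmsr();

            /* Mask external (EE), critical (CE) and debug (DE) interrupts
             * so that no handler can overwrite MMUCR between the PID write
             * and the tlbsx. search. */
            mtmsr(msr & ~(MSR_EE | MSR_CE | MSR_DE));
            mtspr(SPRN_MMUCR, pid);
            /* ... tlbsx. searches the TLB for 'ea'; tlbwe clobbers the
             * matching entry, if any ... */
            mtmsr(msr);
    }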
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The per cpu data section contains two types of data: one set that is
accessed exclusively by the local cpu, and another set that is per cpu
but also shared with remote cpus. In the current kernel these two sets
are not clearly separated out. This can cause the same data cacheline
to be shared between the two sets of data, which results in unnecessary
bouncing of the cacheline between cpus.
One way to fix the problem is to cacheline align the remotely accessed per
cpu data, both at the beginning and at the end. Because of the padding at
both ends, this will likely cause some memory wastage, and the
interface to achieve it is not clean.
This patch:
Moves the remotely accessed per cpu data (currently marked
____cacheline_aligned_in_smp) into a different section in which all the
data elements are cacheline aligned, thus cleanly separating the
local-only data from the remotely accessed data.
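A sketch of the kind of interface this introduces (the macro and section
names below illustrate the approach and may differ in detail from the
final patch):

    /* Remotely accessed per-cpu variables get their own, cacheline-
     * aligned section so they never share a cacheline with cpu-local
     * per-cpu data. */
    #define DEFINE_PER_CPU_SHARED_ALIGNED(type, name)                      \
            __attribute__((__section__(".data.percpu.shared_aligned")))    \
            __typeof__(type) per_cpu__##name                               \
            ____cacheline_aligned_in_smp

A variable that remote cpus touch is then declared with this macro
instead of being marked ____cacheline_aligned_in_smp by hand.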
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: <linux-arch@vger.kernel.org>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
If the kernel OOPSed or BUGed then it probably should be considered
tainted. Thus, all subsequent OOPSes and SysRq dumps will report the
tainted kernel. This saves a lot of time explaining oddities in the
call traces.
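What this amounts to in the die()/BUG paths is essentially one call; a
sketch, assuming the taint flag is called TAINT_DIE as in mainline:

    /* in the arch oops / generic BUG handling path */
    add_taint(TAINT_DIE);   /* subsequent oopses and SysRq dumps show 'D' */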
Signed-off-by: Pavel Emelianov <xemul@openvz.org>
Acked-by: Randy Dunlap <randy.dunlap@oracle.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
[ Added parisc patch from Matthew Wilson -Linus ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc: (209 commits)
[POWERPC] Create add_rtc() function to enable the RTC CMOS driver
[POWERPC] Add H_ILLAN_ATTRIBUTES hcall number
[POWERPC] xilinxfb: Parameterize xilinxfb platform device registration
[POWERPC] Oprofile support for Power 5++
[POWERPC] Enable arbitary speed tty ioctls and split input/output speed
[POWERPC] Make drivers/char/hvc_console.c:khvcd() static
[POWERPC] Remove dead code for preventing pread() and pwrite() calls
[POWERPC] Remove unnecessary #undef printk from prom.c
[POWERPC] Fix typo in Ebony default DTS
[POWERPC] Check for NULL ppc_md.init_IRQ() before calling
[POWERPC] Remove extra return statement
[POWERPC] pasemi: Don't auto-select CONFIG_EMBEDDED
[POWERPC] pasemi: Rename platform
[POWERPC] arch/powerpc/kernel/sysfs.c: Move NUMA exports
[POWERPC] Add __read_mostly support for powerpc
[POWERPC] Modify sched_clock() to make CONFIG_PRINTK_TIME more sane
[POWERPC] Create a dummy zImage if no valid platform has been selected
[POWERPC] PS3: Bootwrapper support.
[POWERPC] powermac i2c: Use mutex
[POWERPC] Schedule removal of arch/ppc
...
Fixed up conflicts manually in:
Documentation/feature-removal-schedule.txt
arch/powerpc/kernel/pci_32.c
arch/powerpc/kernel/pci_64.c
include/asm-powerpc/pci.h
and asked the powerpc people to double-check the result.
The current generic bug implementation has a call to dump_stack() in case a
WARN_ON(whatever) gets hit. Since report_bug(), which calls dump_stack(),
gets called from an exception handler, we can do better: just pass the
pt_regs structure to report_bug() and pass it to show_regs() in case of a
warning. This will give more debug information, such as register
contents. In addition, this avoids some pointless lines that dump_stack()
emits, since it includes a stack backtrace of the exception handler which
is of no interest in case of a warning. E.g. on s390 the following lines
are currently always present in a stack backtrace if dump_stack() gets
called from report_bug():
[<000000000001517a>] show_trace+0x92/0xe8)
[<0000000000015270>] show_stack+0xa0/0xd0
[<00000000000152ce>] dump_stack+0x2e/0x3c
[<0000000000195450>] report_bug+0x98/0xf8
[<0000000000016cc8>] illegal_op+0x1fc/0x21c
[<00000000000227d6>] sysc_return+0x0/0x10
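In effect the interface change is roughly the following (simplified
sketch of the generic BUG handling side):

    /* report_bug() now takes the exception's register set ... */
    enum bug_trap_type report_bug(unsigned long bugaddr, struct pt_regs *regs);

    /* ... and the warning case dumps registers instead of a backtrace: */
    if (warning) {
            show_regs(regs);        /* replaces dump_stack() */
            return BUG_TRAP_TYPE_WARN;
    }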
Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
Acked-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Kyle McMartin <kyle@parisc-linux.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
I'm not sure if this is going to fly; weak symbols work on the compilers I'm
using, but whether they work for all of the affected architectures I can't say.
I've cc'ed as many arch maintainers/lists as I could find.
But assuming they do, we can use a weak empty definition of
pcibios_add_platform_entries() to avoid having an empty definition on every
arch.
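The pattern is just a weak, empty default in generic code that an
architecture overrides with a strong definition when it needs real
sysfs entries (the signature shown here is illustrative):

    void __attribute__((weak)) pcibios_add_platform_entries(struct pci_dev *dev)
    {
            /* empty default; an arch providing its own (strong)
             * definition wins at link time */
    }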
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
do_signal is never used in modular code (obviously), and no other
architecture exports it either.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Currently the powerpc kernel has a 64-bit-only feature, COHERENT_ICACHE,
used for those CPUs which maintain icache/dcache coherency in hardware
(POWER5, essentially). It also has a feature, SPLIT_ID_CACHE, which is
used on CPUs that have separate i- and d-caches, which is to say
everything except the 601 and Freescale E200.
In nearly all the places we check the SPLIT_ID_CACHE, what we actually
care about is whether the i and d-caches are coherent (which they will
be, trivially, if they're the same cache).
This tries to clarify the situation a little. The COHERENT_ICACHE
feature becomes available on 32-bit and is set for all CPUs where the i-
and d-caches are effectively coherent, whether this is due to special
logic (POWER5) or because they're unified. We check this, instead of
SPLIT_ID_CACHE, nearly everywhere.
The SPLIT_ID_CACHE feature itself is replaced by a UNIFIED_ID_CACHE
feature with reversed sense, set only on 601 and Freescale E200. In
the two places (one Freescale BookE specific) where we really care
whether it's a unified cache, not whether they're coherent, we check
this feature. The CPUs with unified caches are so few that we could
consider replacing this feature bit with explicit checks against the
PVR.
This will make unifying the 32-bit and 64-bit cache flush code a
little more straightforward.
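A typical call site then reads as a coherency check rather than a
cache-topology check, roughly as follows (a sketch; the flush routine
shown is illustrative):

    void flush_icache_range_sketch(unsigned long start, unsigned long stop)
    {
            /* i- and d-caches effectively coherent (POWER5-style snooping
             * or a unified cache): no explicit flush needed */
            if (cpu_has_feature(CPU_FTR_COHERENT_ICACHE))
                    return;

            /* ... otherwise clean dcache lines and invalidate icache ... */
    }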
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
We get the following warnings in various ARCH=ppc builds:
WARNING: "ee_restarts" [arch/ppc/kernel/built-in] is COMMON symbol
WARNING: "fee_restarts" [arch/ppc/kernel/built-in] is COMMON symbol
WARNING: "htab_hash_searches" [arch/ppc/mm/built-in] is COMMON symbol
WARNING: "next_slot" [arch/ppc/mm/built-in] is COMMON symbol
WARNING: "mmu_hash_lock" [arch/ppc/mm/built-in] is COMMON symbol
WARNING: "primary_pteg_full" [arch/ppc/mm/built-in] is COMMON symbol
WARNING: "global_dbcr0" [arch/ppc/kernel/built-in] is COMMON symbol
Switch ee_restarts, fee_restarts and global_dbcr0 to local symbols, and
mmu_hash_lock, next_slot, primary_pteg_full and htab_hash_searches to
global symbols, defining them with a .space directive instead of leaving
them as COMMON symbols.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
This finally renames the thread_info field in the task structure to stack, so that
the assumptions about this field are gone and archs have more freedom about
placing the thread_info structure.
Non-broken archs which have a proper thread pointer can access both the
current thread and the task structure via a single pointer.
It'll allow for a few more cleanups of the fork code, from which e.g. ia64
could benefit.
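Concretely, the field becomes an opaque pointer and generic code goes
through accessors keyed off the new name, along these lines:

    /* task_struct's "struct thread_info *thread_info" member becomes
     * "void *stack"; generic code uses accessors rather than the field: */
    #define task_thread_info(tsk)   ((struct thread_info *)(tsk)->stack)
    #define task_stack_page(tsk)    ((tsk)->stack)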
Signed-off-by: Roman Zippel <zippel@linux-m68k.org>
[akpm@linux-foundation.org: build fix]
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Ian Molton <spyro@f2s.com>
Cc: Haavard Skinnemoen <hskinnemoen@atmel.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Roman Zippel <zippel@linux-m68k.org>
Cc: Greg Ungerer <gerg@uclinux.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Kazumoto Kojima <kkojima@rr.iij4u.or.jp>
Cc: Richard Curnow <rc@rc0.org.uk>
Cc: William Lee Irwin III <wli@holomorphy.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Cc: Miles Bader <uclinux-v850@lsi.nec.co.jp>
Cc: Andi Kleen <ak@muc.de>
Cc: Chris Zankel <chris@zankel.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Remove includes of <linux/smp_lock.h> where it is not used/needed.
Suggested by Al Viro.
Builds cleanly on x86_64, i386, alpha, ia64, powerpc, sparc,
sparc64, and arm (all 59 defconfigs).
Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Let's allow page-alignment in general for per-cpu data (wanted by Xen, and
Ingo suggested KVM as well).
Because larger alignments can use more room, we increase the max per-cpu
memory to 64k rather than 32k: it's getting a little tight.
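With that in place, a per-cpu variable can simply request page
alignment; a hypothetical example (the variable name and use are made up
purely for illustration):

    /* hypothetical: a page-sized, page-aligned per-cpu area, e.g. a
     * per-cpu page that a hypervisor needs to map */
    static DEFINE_PER_CPU(unsigned char [PAGE_SIZE], percpu_page)
            __attribute__((__aligned__(PAGE_SIZE)));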
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Remove last_syscall from 32-bit powerpc; it's been gone from 64-bit for years.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>