Virtually indexed, physically tagged cache architectures can get away
without cache flushing when forking. This patch adds a new cache
flushing function, flush_cache_dup_mm(struct mm_struct *), which for the
moment I've implemented to do the same thing as flush_cache_mm() on all
architectures except MIPS, where it's a no-op.
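For illustration, the default case boils down to a trivial delegation to
flush_cache_mm(), while MIPS defines it away (sketch only; the per-arch
wiring lives in each architecture's cacheflush.h):

  /* Most architectures: */
  static inline void flush_cache_dup_mm(struct mm_struct *mm)
  {
          flush_cache_mm(mm);
  }

  /* MIPS, where no flush is needed on fork: */
  #define flush_cache_dup_mm(mm)  do { } while (0)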
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Currently, to tell a task that it should go to the refrigerator, we set the
PF_FREEZE flag for it and send a fake signal to it. Unfortunately there
are two SMP-related problems with this approach. First, a task running on
another CPU may be updating its flags while the freezer attempts to set
PF_FREEZE for it and this may leave the task's flags in an inconsistent
state. Second, there is a potential race between freeze_process() and
refrigerator() in which freeze_process() running on one CPU is reading a
task's PF_FREEZE flag while refrigerator() running on another CPU has just
set PF_FROZEN for the same task and attempts to reset PF_FREEZE for it. If
the refrigerator wins the race, freeze_process() will observe that PF_FREEZE
hasn't been set for the task and will set it unnecessarily, so the task
will go to the refrigerator once again after it's been thawed.
To solve the first of these problems we need to stop using PF_FREEZE to tell
tasks that they should go to the refrigerator. Instead, we can introduce a
special TIF_*** flag and use it for this purpose, since it is legitimate to
change other tasks' TIF_*** flags and there are dedicated calls for doing so.
To avoid the freeze_process()-refrigerator() race we can make
freeze_process() always check the task's PF_FROZEN flag after it has read
the task's "freeze" flag. We should also make sure that refrigerator() will
always reset the task's "freeze" flag after it has set PF_FROZEN for it.
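Roughly, the intended ordering looks like this (TIF_FREEZE is only an
illustrative name for the TIF_*** flag above; locking and the rest of the
logic are elided):

  /* refrigerator(): mark the task frozen first, then drop the request */
  current->flags |= PF_FROZEN;
  clear_tsk_thread_flag(current, TIF_FREEZE);

  /* freeze_process(): after reading the freeze flag, also check
   * PF_FROZEN so an already-frozen task isn't asked to freeze again */
  if (!test_tsk_thread_flag(p, TIF_FREEZE) && !(p->flags & PF_FROZEN))
          set_tsk_thread_flag(p, TIF_FREEZE);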
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Pavel Machek <pavel@ucw.cz>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: David Howells <dhowells@redhat.com>
Cc: Andi Kleen <ak@muc.de>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Previously we haven't been doing anything with verbose BUG() reporting,
and we've been relying on the oops path for handling BUG()s, which is
rather sub-optimal.
This switches BUG handling to use a fixed trapa vector (#0x3e), where we
construct a small bug frame immediately after the trapa instruction to get
the context right. This also makes it trivial to wire up a DIE_BUG for the
atomic die chain, which we couldn't really do before.
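The resulting trap path is, roughly (the bug_frame layout and the handler
name below are only a sketch, not the exact implementation):

  /* Data emitted right after the trapa; the saved PC points at it. */
  struct bug_frame {
          unsigned short  line;   /* __LINE__ at the BUG() site */
          const char      *file;  /* __FILE__ at the BUG() site */
  };

  /* Handler for trapa vector 0x3e. */
  void handle_BUG(struct pt_regs *regs)
  {
          struct bug_frame *bf = (struct bug_frame *)regs->pc;

          notify_die(DIE_BUG, "bug", regs, 0, 0x3e, SIGTRAP);
          printk(KERN_CRIT "kernel BUG at %s:%d!\n", bf->file, bf->line);
          die("BUG trap", regs, 0);
  }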
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
We have a few different ways to do the atomic operations, so split
them out into different headers rather than bloating atomic.h.
Kernelspace gUSA will take this up to a third implementation.
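The result is that atomic.h just selects an implementation (the header
names here are illustrative):

  #ifdef CONFIG_CPU_SH4A
  #include <asm/atomic-llsc.h>    /* movli.l/movco.l (LL/SC) based */
  #else
  #include <asm/atomic-irq.h>     /* IRQ-disable based */
  #endif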
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
In the 64-bit PTE case there's no point in restricting the encoding
to the low bits of the PTE; we can instead bump all of this up to
the high 32 bits and extend PTE_FILE_MAX_BITS to 32, adopting the
same convention used by x86 PAE.
There's a minor discrepancy between the number of bits used for the
swap type encoding between 32 and 64-bit PTEs, but this is unlikely
to cause any problem given the extended offset.
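Roughly, the file-offset encoding then becomes (names follow the x86 PAE
style and are illustrative for the 64-bit PTE layout):

  #define PTE_FILE_MAX_BITS       32

  /* the non-linear file offset lives entirely in the high word */
  #define pte_to_pgoff(pte)       ((pte).pte_high)
  #define pgoff_to_pte(off)       ((pte_t) { _PAGE_FILE, (off) })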
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
In order to sort out our struct termios and add proper speed control we need
to separate the kernel and user termios structures. Glibc is fine, but the
other libraries rely on the kernel-exported struct termios, and we need to
extend it without breaking the ABI/API.
To do so we add a struct ktermios, which is the kernel view of a termios
structure and, for now, overlaps the struct termios with extra fields on the
end. (That limitation will go away in later patches.) Some platforms (e.g.
alpha) planned ahead and thus use the same struct for both; others did not.
This just adds the structures but does not use them; it seems a sensible
splitting point for bisection if there are compile failures (not that I
expect any).
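On a platform whose termios carries no speed fields (i386, say), the
kernel-side structure ends up looking roughly like this, with the two
speed fields appended:

  struct ktermios {
          tcflag_t c_iflag;       /* input mode flags */
          tcflag_t c_oflag;       /* output mode flags */
          tcflag_t c_cflag;       /* control mode flags */
          tcflag_t c_lflag;       /* local mode flags */
          cc_t c_line;            /* line discipline */
          cc_t c_cc[NCCS];        /* control characters */
          speed_t c_ispeed;       /* input speed */
          speed_t c_ospeed;       /* output speed */
  };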
Signed-off-by: Alan Cox <alan@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Make the contents of the userspace asm/setup.h header consistent on all
architectures:
- export setup.h to userspace on all architectures
- export only COMMAND_LINE_SIZE to userspace
- frv: move COMMAND_LINE_SIZE from param.h
- i386: remove duplicate COMMAND_LINE_SIZE from param.h
- arm:
  - export ATAGs to userspace
  - change u8/u16/u32 to __u8/__u16/__u32
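For i386, for example, the userspace-visible part of the header then
reduces to roughly this (kernel-only material stays behind __KERNEL__):

  #ifndef _i386_SETUP_H
  #define _i386_SETUP_H

  #define COMMAND_LINE_SIZE 256

  #ifdef __KERNEL__
  /* ... kernel-only definitions ... */
  #endif

  #endif /* _i386_SETUP_H */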
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Acked-by: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Pass struct device pointer to dma_cache_sync()
dma_cache_sync() is ill-designed in that it does not take a struct device
pointer argument, which makes proper support for systems that consist of a
mix of coherent and non-coherent DMA devices hard. Change dma_cache_sync()
to take a struct device pointer as its first argument and fix all of its
callers to pass it.
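The new prototype, and a typical call-site update (the pdev in the
example is just illustrative):

  void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
                      enum dma_data_direction direction);

  /* before */
  dma_cache_sync(cpu_addr, size, DMA_BIDIRECTIONAL);
  /* after  */
  dma_cache_sync(&pdev->dev, cpu_addr, size, DMA_BIDIRECTIONAL);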
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Cc: James Bottomley <James.Bottomley@steeleye.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Greg KH <greg@kroah.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The last thing we agreed on was to remove the macros entirely for 2.6.19,
on all architectures. Unfortunately, I think nobody actually _did_ that,
so they are still there.
[akpm@osdl.org: x86_64 fix]
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Greg Schafer <gschafer@zip.com.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The following moves the creation of IPR interrupts into setup-7750.c
and updates a few other things to make it all work after the "Drop
CPU subtype IRQ headers" commit. It boots and runs fine on my titan
board.
- adds an ipr_idx to the ipr_data and uses a function in the subtype
  code to calculate the address of the IPR registers (a rough sketch
  of the result follows below)
- adds a function to enable individual interrupt mode for externals
  in the subtype code and calls that from the titan board code
  instead of doing it directly.
- I changed the shift in the ipr_data to be the actual number of bits
  to shift, instead of the number / 4, which makes it easier to match
  against the manual.
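Roughly, the ipr_data/register mapping looks like this (the names are
illustrative, not necessarily the exact ones used):

  struct ipr_data {
          unsigned int    irq;
          int             ipr_idx;        /* index into the subtype's IPR table */
          int             shift;          /* actual number of bits to shift */
          int             priority;
  };

  /* setup-7750.c maps ipr_idx to the real IPR register address: */
  static unsigned long ipr_offsets[] = {
          0xffd00004UL,   /* IPRA */
          0xffd00008UL,   /* IPRB */
          0xffd0000cUL,   /* IPRC */
          0xffd00010UL,   /* IPRD */
  };

  unsigned long map_ipridx_to_addr(int idx)
  {
          return ipr_offsets[idx];
  }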
Signed-off-by: Jamie Lenehan <lenehan@twibble.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
When hugetlbpage support isn't enabled, this can be bogus.
Wrap it back in _PAGE_FLAGS_HARD to avoid changes to the
base PTE when not aiming for larger sizes.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
gcc4 gets a bit pissy about the outputs:
include/asm/atomic.h: In function 'atomic_add':
include/asm/atomic.h:37: error: invalid lvalue in asm statement
include/asm/atomic.h:30: error: invalid lvalue in asm output 1
...
This ended up being a thinko anyway, so just fix it up.
Verified for proper behaviour with the older toolchains, too.
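For reference, the class of construct gcc 4 rejects here is an asm output
operand that isn't an lvalue (a generic illustration, not the actual
atomic.h code):

  static void example(void)
  {
          int x;

          /* rejected by gcc 4 -- a cast expression is not an lvalue: */
          asm("" : "=r" ((long)x));

          /* accepted -- a plain object is: */
          asm("" : "=r" (x));
  }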
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds basic NO_IDLE_HZ support to the SH timer API so timers
are able to wire it up. Taken from the ARM version, as it fitted into
our API with very few changes needed.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This syncs up the SH clock framework with the linux/clk.h API,
for which only some minor changes were required, namely the
clk_get() dev_id argument and its subsequent call sites.
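In practice that means call sites now match the standard prototype (the
device and clock name in the example are illustrative):

  struct clk *clk_get(struct device *dev, const char *id);

  clk = clk_get(&pdev->dev, "module_clk");
  if (IS_ERR(clk))
          return PTR_ERR(clk);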
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
There were a number of places that made evil PAGE_SIZE == 4k
assumptions that ended up breaking when trying to play with
8k and 64k page sizes; this fixes those up.
The most significant change is the way we load THREAD_SIZE;
previously this was done via:
mov #(THREAD_SIZE >> 8), reg
shll8 reg
to avoid a memory access and allow the immediate load. With
a 64k PAGE_SIZE, we're out of range for the immediate load
size without resorting to special instructions available in
later ISAs (movi20s and so on). The "workaround" for this is
to bump up the shift to 10 and insert a shll2, which gives a
bit more flexibility while still being much cheaper than a
memory access.
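With the shift bumped to 10, the load becomes:
mov #(THREAD_SIZE >> 10), reg
shll8 reg
shll2 reg
which stays within the immediate range even with a 64k PAGE_SIZE.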
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This extends the SH DMA API to allow handling of DMA
channels based on their respective capabilities.
A couple of functions are added to the existing API;
the core bits are register_chan_caps() for registering
channel capabilities, and request_dma_bycap() for fetching
a channel dynamically based on a capability set.
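The rough shape of the new entry points, and a caller (the prototypes and
the capability strings here are an assumption for illustration):

  int register_chan_caps(const char *dmac, struct dma_chan_caps *caps);
  int request_dma_bycap(const char **dmac, const char **caps,
                        const char *dev_id);

  static int my_probe(void)
  {
          static const char *dmac[] = { "DMAC", NULL };
          static const char *caps[] = { "mem-to-mem", NULL };

          /* grab any channel on "DMAC" advertising the capability */
          return request_dma_bycap(dmac, caps, "my-driver");
  }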
Signed-off-by: Mark Glaisher <mark.glaisher@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Two of the fields in /proc/[number]/stat are documented in
proc(5) as:
kstkesp %lu
The current value of esp (stack pointer), as
found in the kernel stack page for the process.
kstkeip %lu
The current EIP (instruction pointer).
The SH currently prints the last SP and PC of the process
inside the kernel, while most other archs use the last user
space values.
This patch modifies the SH to display the user space values.
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Handle simple TLB miss faults (those which can be resolved completely
from the page table) in assembler.
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adds support for a generic push switch framework, adaptable for
various switches, including GPIO switches and the push switches commonly
found on Renesas debug boards.
This allows switch states to be trivially reported through sysfs.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>