Replace an irrelevant include of bug.h with the more appropriate
includes of slab.h and module.h.
It's not that the original inclusion is an error; it's simply not related
to the contents of that source file, while the other two are.
Compile-tested on i386.
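Schematically, the change amounts to nothing more than a header swap of this
shape (the exact path of the bug.h include in the affected file is not named
above, so take the first line as illustrative):

    -#include <linux/bug.h>
    +#include <linux/slab.h>
    +#include <linux/module.h>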
Signed-off-by: Robert P. J. Day <rpjday@mindspring.com>
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Fix the various misspellings of "system", "controller", "interrupt" and
"[un]necessary".
Signed-off-by: Robert P. J. Day <rpjday@mindspring.com>
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Some of the per-cpu counters, and thus their locks, are accessed from IRQ
contexts. This can cause a deadlock if an IRQ interrupts a cpu-offline thread
which is transferring a dead cpu's counts to the global counter.
Add appropriate IRQ protection in the cpu-hotplug callback path.
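A minimal sketch of the kind of change this implies (modelled on the
percpu_counter hotplug callback in lib/percpu_counter.c; treat the surrounding
details as illustrative rather than the exact patch):

    static int percpu_counter_hotcpu_callback(struct notifier_block *nb,
                                              unsigned long action, void *hcpu)
    {
            unsigned int cpu = (unsigned long)hcpu;
            struct percpu_counter *fbc;

            if (action != CPU_DEAD)
                    return NOTIFY_OK;

            /* walk all counters; locking of the list itself is omitted here */
            list_for_each_entry(fbc, &percpu_counters, list) {
                    s32 *pcount;
                    unsigned long flags;

                    /*
                     * A plain spin_lock() can deadlock: an IRQ on this CPU
                     * may also try to take fbc->lock.  Disable IRQs for the
                     * duration of the transfer.
                     */
                    spin_lock_irqsave(&fbc->lock, flags);
                    pcount = per_cpu_ptr(fbc->counters, cpu);
                    fbc->count += *pcount;
                    *pcount = 0;
                    spin_unlock_irqrestore(&fbc->lock, flags);
            }
            return NOTIFY_OK;
    }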
Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The task_struct->pid member is going to be deprecated, so start
using the helpers (task_pid_nr/task_pid_vnr/task_pid_nr_ns) in
the kernel.
The first thing to start with is the pid printed to dmesg - in this case we
may safely use task_pid_nr(). Besides, printks account for more (much more)
than half of all the explicit pid usage.
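An illustrative conversion at a typical call site (the printk itself is made
up for the example):

    -    printk(KERN_DEBUG "%s: pid %d exiting\n", __func__, current->pid);
    +    printk(KERN_DEBUG "%s: pid %d exiting\n", __func__, task_pid_nr(current));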
[akpm@linux-foundation.org: git-drm went and changed lots of stuff]
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Cc: Dave Airlie <airlied@linux.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Remove asm/bitops.h includes.
Including asm/bitops.h directly may cause compile errors. Don't include it;
include linux/bitops.h instead. The next patch will forbid including the asm
header directly.
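The per-file change is simply:

    -#include <asm/bitops.h>
    +#include <linux/bitops.h>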
Cc: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cause writes to cpuset "cpus" file to update cpus_allowed for member tasks:
- collect batches of tasks under tasklist_lock and then call
  set_cpus_allowed() on them outside the lock (since this can sleep); the
  overall pattern is sketched after this list.
- add a simple generic priority heap type to allow efficient collection
of batches of tasks to be processed without duplicating or missing any
tasks in subsequent batches.
- make "cpus" file update a no-op if the mask hasn't changed
- fix race between update_cpumask() and sched_setaffinity() by making
sched_setaffinity() post-check that it's not running on any cpus outside
cpuset_cpus_allowed().
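A rough sketch of that batching pattern (the cpuset-side helpers here are
hypothetical stand-ins; the real patch uses the generic priority heap
described above to avoid duplicating or missing tasks across batches):

    #define BATCH_SIZE 32

    static void update_tasks_cpus_allowed(struct cpuset *cs, cpumask_t newmask)
    {
            struct task_struct *batch[BATCH_SIZE], *p;
            int i, n;

            do {
                    n = 0;
                    read_lock(&tasklist_lock);
                    for_each_process(p) {
                            if (!task_in_cpuset(p, cs))     /* hypothetical test */
                                    continue;
                            if (task_already_updated(p))    /* the heap's job in the real patch */
                                    continue;
                            get_task_struct(p);
                            batch[n++] = p;
                            if (n == BATCH_SIZE)
                                    break;
                    }
                    read_unlock(&tasklist_lock);

                    /* set_cpus_allowed() can sleep, so call it unlocked */
                    for (i = 0; i < n; i++) {
                            set_cpus_allowed(batch[i], newmask);
                            put_task_struct(batch[i]);
                    }
            } while (n == BATCH_SIZE);
    }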
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Paul Menage <menage@google.com>
Cc: Paul Jackson <pj@sgi.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Balbir Singh <balbir@in.ibm.com>
Cc: Cedric Le Goater <clg@fr.ibm.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Serge Hallyn <serue@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Negative shifts are not allowed in C (the result is undefined). Same thing
with full-width shifts.
It works on most platforms but not on the VAX with gcc 4.0.1 (it results in an
"operand reserved" fault).
Shifting by more than the width of the value on the left is also not
allowed. I think the extra '>> 1' tacked on at the end in the original
code was an attempt to work around that. Getting rid of that is an extra
feature of this patch.
Here's the chapter and verse, taken from the final draft of the C99
standard ("6.5.7 Bitwise shift operators", paragraph 3):
"The integer promotions are performed on each of the operands. The
type of the result is that of the promoted left operand. If the
value of the right operand is negative or is greater than or equal
to the width of the promoted left operand, the behavior is
undefined."
Thank you to Jan-Benedict Glaw, Christoph Hellwig, Maciej Rozycki, Pekka
Enberg, Andreas Schwab, and Christoph Lameter for review. Special thanks
to Andreas for spotting that my fix only removed half the undefined
behaviour.
Signed-off-by: Peter Lund <firefly@vax64.dk>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: "Maciej W. Rozycki" <macro@linux-mips.org>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Andreas Schwab <schwab@suse.de>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: WU Fengguang <wfg@mail.ustc.edu.cn>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Various architectures may call bust_spinlocks() recursively; the function
itself, however, doesn't appear to be meant to be called in this manner.
Nevertheless, this doesn't appear to be a problem as long as
bust_spinlocks(0) doesn't get called twice in a row (otherwise,
unblank_screen() may enter the scheduler). However, at least on i386, die()
can return (and on other architectures it really should be able to as well)
when notify_die() returns NOTIFY_STOP.
Short of getting a reply to a query about this, this patch makes
bust_spinlocks() increment/decrement oops_in_progress, and wake klogd only
when the count drops back to zero.
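A sketch of the resulting behaviour (condensed; the real function lives in
lib/bust_spinlocks.c and may be overridden per architecture):

    void bust_spinlocks(int yes)
    {
            if (yes) {
                    /* Nesting is fine now: every caller bumps the count. */
                    ++oops_in_progress;
            } else {
    #ifdef CONFIG_VT
                    unblank_screen();
    #endif
                    /* Only the outermost bust_spinlocks(0) wakes klogd. */
                    if (--oops_in_progress == 0)
                            wake_up_klogd();
            }
    }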
Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Fix (and test) a small bug in the heap sort function in lib/sort.c.
The fix avoids an unnecessary swap of contents when i is 0 (saving a few
loads and stores), which happens every time the sort function is called. The
fix seemed worth bringing to your attention given the importance and frequent
use of the sort function.
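To make the point concrete (a simplified view of the sort phase, not the
exact diff): when the extraction loop runs all the way down to i == 0, the
swap at the top of the loop exchanges the heap root with itself and the
following sift-down does nothing useful, so the loop can stop one iteration
earlier:

    /* sort phase: repeatedly move the current heap root to the end */
    for (i = n - size; i > 0; i -= size) {  /* previously: i >= 0 */
            swap(base, base + i, size);
            /* ... sift the new root back down over the first i bytes ... */
    }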
Acked-by: Matt Mackall <mpm@selenic.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
It would be nice if the argv_split() library function gracefully handled a
NULL pointer in the argcp parameter, so that callers which do not care about
the value of argc need not declare a useless variable. This patch
accomplishes that. Tested by me, with successful results.
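With this in place a caller that only wants the vector can do, for example
(illustrative use, not taken from an in-tree caller):

    char **argv;

    argv = argv_split(GFP_KERNEL, "modprobe -q foo", NULL);
    if (argv) {
            /* argv is NULL-terminated; use argv[0], argv[1], ... */
            argv_free(argv);
    }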
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Satyam Sharma <satyam@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Slab constructors currently have a flags parameter that is never used, and
the order of the arguments is the opposite of other slab functions: the
object pointer is placed before the kmem_cache pointer.
Convert
	ctor(void *object, struct kmem_cache *s, unsigned long flags)
to
	ctor(struct kmem_cache *s, void *object)
throughout the kernel.
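For an individual constructor the conversion looks like this (names are made
up for the example):

    -static void my_ctor(void *object, struct kmem_cache *s, unsigned long flags)
    +static void my_ctor(struct kmem_cache *s, void *object)
     {
            /* body unchanged: initialize the object as before */
     }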
[akpm@linux-foundation.org: coupla fixes]
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Given a set of objects, floating proportions aims to efficiently give the
proportional 'activity' of a single item as compared to the whole set, where
'activity' is a measure of a temporal property of the items.
It is efficient in that it need not inspect any other item of the set in
order to provide the answer. It does not even need to know how many other
items there are.
It has one parameter, and that is the period of 'time' over which the
'activity' is measured.
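Conceptually (a deliberately simplified model, not the API this patch adds):
every event on an item bumps both the item's own counter and a shared total,
and both decay once per period, so an item's proportion is just its own count
over the total, with no need to look at any other item:

    struct prop_global { unsigned long events; };
    struct prop_local  { unsigned long events; };

    /* record one event against this item */
    static void prop_event(struct prop_global *g, struct prop_local *l)
    {
            g->events++;
            l->events++;
    }

    /* fraction of recent activity attributable to this item */
    static void prop_fraction(struct prop_global *g, struct prop_local *l,
                              unsigned long *num, unsigned long *den)
    {
            *num = l->events;
            *den = g->events;
    }

    /*
     * Called once per 'period': halve everything so that old activity decays.
     * The kernel implementation does this lazily, catching a local counter up
     * with the missed periods the next time it is touched.
     */
    static void prop_age(struct prop_global *g, struct prop_local *l)
    {
            g->events /= 2;
            l->events /= 2;
    }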
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>