This patch combines Greg Banks' dprintk() work with the existing dynamic
printk patchset; we are now calling it 'dynamic debug'.
The new feature of this patchset is a richer debugfs control file interface
(an example output from my system is at the bottom), which allows fine-grained
control over the debug output. The output can be controlled by function,
file, module, format string, and line number.
For example, to enable all debug messages in module 'nf_conntrack':
echo -n 'module nf_conntrack +p' > /mnt/debugfs/dynamic_debug/control
To disable them:
echo -n 'module nf_conntrack -p' > /mnt/debugfs/dynamic_debug/control
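For illustration, the call sites being switched on and off are ordinary
pr_debug()/dev_dbg() statements; a minimal, hypothetical module sketch
(all names are made up):

  /* Hypothetical module: its pr_debug() call sites are compiled in but
   * silent until enabled through the dynamic debug control file, e.g.
   * with "module ddebug_example +p". */
  #include <linux/kernel.h>
  #include <linux/module.h>

  static int __init ddebug_example_init(void)
  {
      pr_debug("dynamic debug example loaded\n");
      return 0;
  }

  static void __exit ddebug_example_exit(void)
  {
      pr_debug("dynamic debug example unloaded\n");
  }

  module_init(ddebug_example_init);
  module_exit(ddebug_example_exit);
  MODULE_LICENSE("GPL");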
A further explanation can be found in the documentation patch.
Signed-off-by: Greg Banks <gnb@sgi.com>
Signed-off-by: Jason Baron <jbaron@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Right now, the kobject_uevent code blocks for each uevent that's being
generated, due to using (for historic reasons) UMH_WAIT_EXEC as the flag
to call_usermodehelper(). Specifically, the effect is that each uevent
that is sent causes the code to wake up keventd and then block until
keventd has processed the work. Needless to say, this happens many times
during system boot.
This patch changes that to UMH_NO_WAIT (a brilliant name for a constant,
btw) so that we only schedule the work to fire the uevent message, but
do not wait for keventd to process the work.
This removes one of the bottlenecks during boot; each one has only a
small effect, but the sum of them does add up.
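For illustration, a minimal sketch of the flag change, assuming the
call_usermodehelper() signature of this era (path, argv, envp, wait flag);
the wrapper function here is made up, the wait flag is the only point:

  /* Sketch only: the argv/envp setup done by kobject_uevent_env() is
   * elided; what changes is the wait flag passed to the helper call. */
  #include <linux/kmod.h>

  static int example_spawn_uevent_helper(char *path, char **argv, char **envp)
  {
      /* Before: UMH_WAIT_EXEC woke keventd and blocked until it ran. */
      /* After: UMH_NO_WAIT only queues the work and returns at once. */
      return call_usermodehelper(path, argv, envp, UMH_NO_WAIT);
  }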
[Note, distros that need this are broken, they should be setting
CONFIG_UEVENT_HELPER_PATH to "", that way this code path will never be
executed at all -- gregkh]
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This patch implements uevent suppression in kobject and removes it
from struct device, based on the following ideas:
1) Uevent sending should be an attribute of the kobject, so suppressing it
at the kobject layer is more natural than at the device layer. This way,
we can do the same for other objects that embed a kobject.
2) It may save several bytes for each instance of struct device. (On my
omap3 (32-bit ARM) based box, it saves 8 bytes per device object.)
This patch also introduces dev_set|get_uevent_suppress() helpers to
set and query the uevent_suppress attribute; they will also help make
the kobject a private part of struct device in the future.
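A sketch of what such helpers can look like, assuming the uevent_suppress
flag now lives in the embedded kobject as described above; treat the
details as illustrative rather than as the exact patch:

  /* Sketch: the helpers simply forward to the flag in the embedded
   * kobject, so drivers do not have to touch dev->kobj directly.
   * They would sit in a header next to struct device. */
  static inline void dev_set_uevent_suppress(struct device *dev, int val)
  {
      dev->kobj.uevent_suppress = val;
  }

  static inline int dev_get_uevent_suppress(const struct device *dev)
  {
      return dev->kobj.uevent_suppress;
  }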
[This version is against the latest driver-core patch set of Greg; please
ignore the last version.]
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Now that all users of bus_id are gone, we can remove it from struct
device.
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Impact: allow architectures to monitor busses for dma mem leakage
This patch adds checking code to detect if a device has pending DMA
operations when it is about to be unbound from its device driver.
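One plausible shape for such a check is a bus notifier that inspects the
device when its driver is unbound; a rough sketch, where
device_has_pending_dma_mappings() is a hypothetical stand-in for the real
mapping bookkeeping:

  /* Rough sketch: warn on driver unbind if the debugging code still
   * tracks DMA mappings for the device.  The pending-mappings helper
   * is a stub standing in for the real per-device accounting. */
  #include <linux/device.h>
  #include <linux/notifier.h>

  static bool device_has_pending_dma_mappings(struct device *dev)
  {
      return false;    /* placeholder for the real lookup */
  }

  static int dma_debug_unbind_notify(struct notifier_block *nb,
                                     unsigned long action, void *data)
  {
      struct device *dev = data;

      if (action == BUS_NOTIFY_UNBOUND_DRIVER &&
          device_has_pending_dma_mappings(dev))
          dev_warn(dev, "driver unbound with pending DMA mappings\n");

      return NOTIFY_DONE;
  }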
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Impact: get notified if a device dma maps illegal areas
This patch adds a check to print a warning message when a device driver
tries to map a memory area from the kernel text segment or rodata.
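Conceptually, the check is an overlap test of the mapped range against the
text and rodata section boundaries exported via <asm/sections.h>; a
simplified sketch (function names are illustrative, not the exact patch):

  /* Simplified sketch: warn if a to-be-mapped range overlaps the kernel
   * text or rodata sections.  Section symbols are the standard ones
   * from <asm/sections.h>. */
  #include <linux/device.h>
  #include <linux/kernel.h>
  #include <asm/sections.h>

  static bool range_overlaps(void *addr, unsigned long len,
                             void *start, void *end)
  {
      void *addr_end = (char *)addr + len;

      return addr < end && addr_end > start;
  }

  static void warn_on_illegal_area(struct device *dev, void *addr,
                                   unsigned long len)
  {
      if (range_overlaps(addr, len, _stext, _etext) ||
          range_overlaps(addr, len, __start_rodata, __end_rodata))
          dev_warn(dev, "device driver maps memory from kernel text or rodata\n");
  }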
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Impact: saves stacktrace of a dma mapping and prints it if there is an error
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
This adds a function to dump the DMA mappings that the debugging code is
aware of -- either for a single device, or for _all_ devices.
This can be useful for debugging -- sticking a call to it in the DMA
page fault handler, for example, to see if the faulting address _should_
be mapped or not, and hence work out whether it's IOMMU bugs we're
seeing, or driver bugs.
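For instance, a fault-report path could use it along these lines (the
surrounding function is made up; debug_dma_dump_mappings() is the
interface added here):

  /* Illustrative only: when an IOMMU fault is reported, dump what the
   * DMA debugging code knows, to see whether the faulting address was
   * ever supposed to be mapped for this device. */
  #include <linux/device.h>
  #include <linux/dma-debug.h>

  static void example_report_iommu_fault(struct device *dev)
  {
      /* Dump the known mappings of the faulting device ... */
      debug_dma_dump_mappings(dev);

      /* ... or pass NULL to dump the mappings of all devices. */
      debug_dma_dump_mappings(NULL);
  }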
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Impact: avoid recursive kfree calls, less slab activity on heavy load
debugobjects checks on kfree whether tracked objects are freed. When a
tracked object is freed, debugobjects frees the internal reference
object as well. The debug object slab cache is marked to not recurse
into debugobjects when a slab object is freed, but the recursive call
can be problematic with respect to locking in the memory allocator.
Defer the freeing of debug slab objects via schedule_work (a generic
sketch of the pattern follows the list below). The reasons not to use
RCU are:
1) RCU makes the data structure larger.
2) There is no real need for RCU, as nothing references the object after
we have freed it.
3) Under heavy load it is easier to reuse the to-be-freed objects than to
allocate new objects from the slab. This lowered the slab activity
significantly in a heavy-load networking test where lots of timers are
created/destroyed. The workqueue-based delayed free allows us to simply
put the to-be-freed objects back into the object pool and reuse them
right away.
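A generic sketch of the workqueue-based deferred free described above
(names and the pool handling are simplified stand-ins, not the actual
debugobjects code):

  /* Generic sketch: instead of kfree()ing tracker objects directly from
   * the kfree path, queue them on a list and let a work item recycle or
   * free them later, outside the allocator's locking context. */
  #include <linux/list.h>
  #include <linux/slab.h>
  #include <linux/spinlock.h>
  #include <linux/workqueue.h>

  struct tracker_obj {
      struct list_head node;
      /* ... tracking data ... */
  };

  static LIST_HEAD(to_free_list);
  static DEFINE_SPINLOCK(pool_lock);

  static void free_obj_work(struct work_struct *work)
  {
      struct tracker_obj *obj;
      unsigned long flags;

      spin_lock_irqsave(&pool_lock, flags);
      while (!list_empty(&to_free_list)) {
          obj = list_first_entry(&to_free_list, struct tracker_obj, node);
          list_del(&obj->node);
          /* The real code refills its object pool first and only
           * kfree()s what the pool cannot take back. */
          spin_unlock_irqrestore(&pool_lock, flags);
          kfree(obj);
          spin_lock_irqsave(&pool_lock, flags);
      }
      spin_unlock_irqrestore(&pool_lock, flags);
  }

  static DECLARE_WORK(debug_obj_work, free_obj_work);

  /* Called from the tracking code instead of kfree(obj). */
  static void defer_free(struct tracker_obj *obj)
  {
      unsigned long flags;

      spin_lock_irqsave(&pool_lock, flags);
      list_add(&obj->node, &to_free_list);
      spin_unlock_irqrestore(&pool_lock, flags);
      schedule_work(&debug_obj_work);
  }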
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
LKML-Reference: <200903162049.58058.nickpiggin@yahoo.com.au>
Impact: refactor/consolidate object management, prepare for delayed free
debugobjects allocates static reference objects to track objects which
are initialized or activated before the slab cache becomes
available. These static reference objects have to be handled
separately in free_object(). The handling of these objects gets in the
way of implementing delayed free functionality, which is required to
avoid callbacks into the mm code from debug_check_no_obj_freed().
Replace the static object references with dynamic ones after the slab
cache has been initialized. The static objects are now marked initdata.
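Roughly, the replacement step looks like the following heavily simplified
sketch; the names, sizes and list handling are made up, and the real code
re-links the dynamic copies into its hash buckets:

  /* Simplified sketch: once the slab cache exists, allocate a dynamic
   * copy for every static boot-time tracker object and carry its state
   * over; the static array is then discarded along with __initdata. */
  #include <linux/errno.h>
  #include <linux/init.h>
  #include <linux/list.h>
  #include <linux/slab.h>

  struct tracker_obj {
      struct list_head node;
      unsigned long state;    /* placeholder for the tracked state */
  };

  #define NR_STATIC_OBJS 64

  static struct tracker_obj static_objs[NR_STATIC_OBJS] __initdata;
  static LIST_HEAD(tracked_objs);
  static struct kmem_cache *tracker_cache;

  static int __init replace_static_objects(void)
  {
      struct tracker_obj *new;
      int i;

      tracker_cache = kmem_cache_create("tracker_obj",
                                        sizeof(struct tracker_obj),
                                        0, 0, NULL);
      if (!tracker_cache)
          return -ENOMEM;

      for (i = 0; i < NR_STATIC_OBJS; i++) {
          new = kmem_cache_alloc(tracker_cache, GFP_KERNEL);
          if (!new)
              return -ENOMEM;
          /* Carry the tracked state over and link the dynamic copy
           * where the static one used to be tracked. */
          new->state = static_objs[i].state;
          list_add(&new->node, &tracked_objs);
      }
      return 0;
  }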
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
LKML-Reference: <200903162049.58058.nickpiggin@yahoo.com.au>
Jeremy Fitzhardinge reported:
> Change fef20d9c13, "vsprintf:
> unify the format decoding layer for its 3 users", causes a
> regression in xenbus which results in no devices getting
> attached to a new domain.
%.*s is broken - fix it.
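For reference, %.*s takes its precision (the maximum number of characters
to print) from an int argument preceding the string pointer, so both
arguments have to be decoded; a trivial example with made-up names:

  /* %.*s consumes an int (the precision, i.e. the maximum number of
   * characters printed) followed by the string pointer. */
  #include <linux/kernel.h>

  static void example_print_node(const char *name, int len)
  {
      /* Prints at most 'len' characters of 'name'. */
      printk(KERN_DEBUG "node: %.*s\n", len, name);
  }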
Reported-by: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Guennadi Liakhovetski noticed that the end condition for the loop in
bitmap_find_free_region() is wrong, and that the "return if error" check
used the wrong conditional, one that would only trigger if the bitmap was
an exact multiple of the allocation size, which is not necessarily the
case with dma_alloc_from_coherent().
Such a failure would result in bitmap_find_free_region() accessing
beyond the end of the bitmap.
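For context, a hedged usage sketch: the caller asks for power-of-two
regions from a bitmap whose length is not a multiple of the region size,
which is exactly the case where the loop bound matters (the names and
sizes are made up):

  /* Usage sketch: a 20-bit bitmap searched for order-3 (8-bit) regions.
   * Only two full regions fit, so the search must stop cleanly instead
   * of scanning past the end of the bitmap. */
  #include <linux/bitmap.h>

  #define EXAMPLE_BITS 20

  static DECLARE_BITMAP(example_map, EXAMPLE_BITS);

  static int example_alloc_region(void)
  {
      int pos;

      /* Find and claim a free, aligned 8-bit region. */
      pos = bitmap_find_free_region(example_map, EXAMPLE_BITS, 3);
      if (pos < 0)
          return pos;    /* no region free; must not overrun the bitmap */

      /* ... use bits pos .. pos + 7 ... */
      bitmap_release_region(example_map, pos, 3);
      return 0;
  }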
Reported-by: Guennadi Liakhovetski <lg@denx.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>