uprobe_unregister() removes the breakpoints only if the last consumer
goes away. To support filtering it should do this every time; we want
to remove the breakpoints which nobody else wants to keep.
Note: given that filter_chain() is not actually implemented yet, this
patch itself doesn't change the behaviour; register_for_each_vma(false)
is a heavy "nop" unless there are no more consumers.
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Add the new helper filter_chain(). Currently it is only a placeholder;
the comment explains what it should do. We will change it later to
consult every consumer to decide whether we need to install the swbp.
Until then it works as if any consumer returns true, which matches the
current behavior.
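A minimal sketch of the placeholder described above (simplified; the
eventual consumer iteration and its locking rules are only hinted at in
the TODO comment):

    static bool filter_chain(struct uprobe *uprobe)
    {
            /*
             * TODO:
             *      for_each_consumer(uc)
             *              if (uc->filter(...))
             *                      return true;
             *      return false;
             *
             * Until then, behave as if any consumer returns true.
             */
            return uprobe->consumers != NULL;
    }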
Change install_breakpoint() to call filter_chain() instead of checking
uprobe->consumers != NULL. We obviously need this, and this equally
closes the race with _unregister().
Change remove_breakpoint() to call this helper too. Currently this is
pointless because remove_breakpoint() is only called when the last
consumer goes away, but we will change this.
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
uprobe_consumer->filter() is pointless in its current form; kill it.
We will add it back, but with a different signature/semantics. Perhaps
we will even re-introduce the callsite in handler_chain(), but not just
to skip uc->handler().
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
register/unregister verifies that inode/uc != NULL. For what?
This really looks like "hide the potential problem"; the caller
should pass valid data.
register() also checks uc->next == NULL, probably to prevent
double-register, but the caller can do other stupid/wrong things.
If we do this check, then we should document that uc->next must
be cleared before register() and add a BUG_ON().
Also add a small comment about the i_size_read() check.
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cosmetic. __set_bit(UPROBE_SKIP_SSTEP) is part of initialization;
it is not clear why it is set in insert_uprobe().
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:
. Check for flex and bison before continuing building, from Borislav Petkov.
. Make event_copy local to mmaps, fixing buffer wrap around problems, from
David Ahern.
. Add option for runtime switching perf data file in perf report, just press
's' and a menu with the valid files found in the current directory will be
presented, from Feng Tang.
. Add support to display whole group data for raw columns, from Jiri Olsa.
. Fix SIGALRM and pipe read race for the rwtop perl script, from Jiri Olsa.
. Fix perf_evsel::exclude_GH handling and add a test to catch regressions, from
Jiri Olsa.
. Error checking fixes, from Namhyung Kim.
. Fix calloc argument ordering, from Paul Gortmaker.
. Fix set event list leader, from Stephane Eranian.
. Add per processor socket count aggregation in perf stat, from Stephane Eranian.
. Fix perf python binding breakage.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
A sweep of the kernel for the regex "kcalloc(sizeof" turned up 2
reversed-args instances, fixed in commit d3d09e1820 ("EDAC: Fix kcalloc
argument order") and also in the networking commit a1b1add07f ("gro:
Fix kcalloc argument order").
I know that was the regex used because, on seeing the 1st of these
changes, I wondered "how many other instances of this are there?" and
happened to use "calloc(sizeof" as a regex, which in turn found these
additional reversed-args instances in the perf code.
In the kcalloc cases, the changes are cosmetic, since the numbers are
simply multiplied. I had no desire to go data mining in userspace to
see if the same thing held true there, however.
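For reference, a minimal standalone illustration of the argument-order
issue (hypothetical struct and count, not taken from the patch):

    #include <stdlib.h>

    struct entry { int key; int val; };

    int main(void)
    {
            size_t nelems = 16;

            /* calloc(3) takes (nmemb, size): element count first, then size. */
            struct entry *good = calloc(nelems, sizeof(struct entry));
            /* Reversed args allocate the same number of bytes, so this is
             * only cosmetic, but readers and checkers expect (nmemb, size). */
            struct entry *bad = calloc(sizeof(struct entry), nelems);

            free(good);
            free(bad);
            return 0;
    }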
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Joe Perches <joe@perches.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1359594349-25912-1-git-send-email-paul.gortmaker@windriver.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Currently we don't display group members' values for raw columns like
'Samples' and 'Period' when in group report mode.
Unite the '__hpp__percent_fmt' and '__hpp__raw_fmt' functions under a
new function, __hpp__fmt. It's basically the '__hpp__percent_fmt' code
with a new 'fmt_percent' bool parameter saying whether a raw number or
a percentage should be printed.
This way raw columns print out all the group members when
in group report mode, like:
$ perf record -e '{cycles,cache-misses}' ls
...
$ perf report --group --show-total-period --stdio
...
# Overhead Period Command Shared Object Symbol
# ................ ........................ ....... ................. .................................
#
23.63% 11.24% 3331335 317 ls [kernel.kallsyms] [k] __lock_acquire
12.72% 0.00% 1793100 0 ls [kernel.kallsyms] [k] native_sched_clock
9.72% 0.00% 1369920 0 ls libc-2.14.90.so [.] _nl_find_locale
0.03% 0.07% 4476 2 ls [kernel.kallsyms] [k] intel_pmu_enable_all
0.00% 11.73% 0 331 ls ld-2.14.90.so [.] _dl_cache_libcmp
0.00% 11.06% 0 312 ls [kernel.kallsyms] [k] vma_interval_tree_insert
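The idea behind the unified formatter, shown as a standalone sketch
(hypothetical helper name and values; not the actual perf signatures):

    #include <stdio.h>

    /* One formatter handles both the percentage columns (Overhead) and the
     * raw columns (Period, Samples), selected by the fmt_percent flag. */
    static int fmt_field(char *buf, size_t size, double val, double total,
                         int fmt_percent)
    {
            if (fmt_percent)
                    return snprintf(buf, size, "%6.2f%%",
                                    total ? 100.0 * val / total : 0.0);
            return snprintf(buf, size, "%12.0f", val);
    }

    int main(void)
    {
            char buf[32];

            fmt_field(buf, sizeof(buf), 250.0, 1000.0, 1);
            printf("percent column: %s\n", buf);
            fmt_field(buf, sizeof(buf), 250.0, 1000.0, 0);
            printf("raw column:     %s\n", buf);
            return 0;
    }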
Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1359981185-16819-2-git-send-email-jolsa@redhat.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
This patch adds per-processor socket count aggregation for system-wide
mode measurements. This is a useful mode to detect imbalance between
sockets.
To enable this mode, use --aggr-socket in addition to -a (system-wide).
The output includes the socket number and the number of online
processors on that socket. This is useful to gauge the amount of
aggregation.
# ./perf stat -I 1000 -a --aggr-socket -e cycles sleep 2
# time socket cpus counts events
1.000097680 S0 4 5,788,785 cycles
2.000379943 S0 4 27,361,546 cycles
2.001167808 S0 4 818,275 cycles
Signed-off-by: Stephane Eranian <eranian@google.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung.kim@lge.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1360161962-9675-3-git-send-email-eranian@google.com
[ committer note: Added missing man page entry based on above comments ]
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
I am getting segfaults *after* the time sorting of perf samples where
the event type is off the charts:
(gdb) bt
#0  0x0807b1b2 in hists__inc_nr_events (hists=0x80a99c4, type=1163281902) at util/hist.c:1225
#1  0x08070795 in perf_session_deliver_event (session=0x80a9b90, event=0xf7a6aff8, sample=0xffffc318, tool=0xffffc520,
    file_offset=0) at util/session.c:884
#2  0x0806f9b9 in flush_sample_queue (s=0x80a9b90, tool=0xffffc520) at util/session.c:555
#3  0x0806fc53 in process_finished_round (tool=0xffffc520, event=0x0, session=0x80a9b90) at util/session.c:645
This is bizarre because the event has already been processed once --
before it was added to the samples queue -- and the event was found to
be sane at that time.
There seem to be 2 causes:
1. perf_evlist__mmap_read updates the read location even though there
are outstanding references to events sitting in the mmap buffers via the
ordered samples queue.
2. There is a single evlist->event_copy for all evlist entries.
event_copy is used to handle an event wrapping at the mmap buffer
boundary.
This patch addresses the second problem - making event_copy local to
each perf_mmap. With this change my highly repeatable use case no longer
fails.
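A rough sketch of the structural change being described (field names
and types are approximations, not the exact perf definitions):

    /* Stand-in for union perf_event, only to make the sketch self-contained. */
    union perf_event_buf {
            char bytes[4096];
    };

    /* Before: a single event_copy lived in the evlist and was shared by every
     * mmap'ed ring buffer, so a wrapped event copied from one buffer could be
     * clobbered while an event from another buffer was still referenced.
     *
     * After: each mmap owns its own copy buffer. */
    struct perf_mmap_sketch {
            void *base;                      /* mmap'ed ring buffer       */
            int mask;                        /* size mask for wrap-around */
            unsigned int prev;               /* last read position        */
            union perf_event_buf event_copy; /* per-mmap copy buffer      */
    };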
The first problem is much more complicated and will be the subject of a
future patch.
Signed-off-by: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1360098762-61827-1-git-send-email-dsahern@gmail.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The current _sort__sym_cmp() function is used for comparing symbols
between two hist entries on the symbol, symbol_from and symbol_to sort
keys. Those callers pass addresses of symbols, but that is meaningless
since the address gets over-written inside _sort__sym_cmp() with the
start address of the symbol. So just get rid of those arguments.
This might cause the output to differ from before for branch stacks,
since they seem to use the branch address rather than the start address
of the symbol. However, AFAICS it'd be the same since the address gets
overwritten anyway.
Also remove a redundant part of the code in sort__sym_cmp().
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Acked-by: Jiri Olsa <jolsa@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1360130237-9963-1-git-send-email-namhyung@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Based on perf report/top/scripts browser integration idea from acme.
This will enable the user to switch the data file at runtime. When this
option is selected, it pops up all the legal data files in the current
working directory; the filename selected by the user is saved in the
global variable "input_name", and a new key 'K_SWITCH_INPUT_DATA' is
passed back to the built-in command, which performs the switch.
This initial version only enables it for 'perf report'.
v2: rebase to latest 'perf/core' branch (6e1d4dd) of acme's perf tree
Signed-off-by: Feng Tang <feng.tang@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1359873501-24541-1-git-send-email-feng.tang@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>