The topology test case of 'perf test' seems to be broken on my x86
system, due to the comparison of a "core-id" with the number of CPUs online.
There are 8 online CPUs:
$ cat /sys/devices/system/cpu/online
0-7
but core-ids are not sequential and some core-ids exceed the number
of online CPUs.
$ cat /sys/devices/system/cpu/cpu?/topology/core_id
0
1
9
10
0
1
9
10
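The check in question compares each core_id read from the header
against the online CPU count. A minimal standalone sketch of that
flawed logic (hypothetical names and structure, not the actual perf
source):

  #include <stdio.h>

  /*
   * Sketch of the flawed check: a core_id read from the CPU_TOPOLOGY
   * section was rejected when it exceeded the online CPU count, even
   * though core_ids need not be sequential.
   */
  static int check_core_id(unsigned int core_id, unsigned int cpu_nr)
  {
          if (core_id > cpu_nr) {
                  fprintf(stderr, "core_id number is too big."
                          "You may need to upgrade the perf tool.\n");
                  return -1;      /* spurious failure on a valid topology */
          }
          return 0;
  }

  int main(void)
  {
          /* core_ids seen per socket on the 8-CPU system above */
          unsigned int ids[] = { 0, 1, 9, 10 };
          unsigned int i, cpu_nr = 8;

          for (i = 0; i < sizeof(ids) / sizeof(ids[0]); i++)
                  if (check_core_id(ids[i], cpu_nr))
                          printf("core_id %u wrongly rejected\n", ids[i]);
          return 0;
  }

On the topology above, core_ids 9 and 10 get rejected even though the
file is perfectly valid.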
Looks like we can safely remove the check. Output before:
$ perf --version
perf version 4.4.rc1.g34258a
$ perf test -v topo
36: Test topology in session :
--- start ---
test child forked, pid 5906
templ file: /tmp/perf-test-vCwWG3
core_id number is too big.You may need to upgrade the perf tool.
test child interrupted
---- end ----
Test topology in session: FAILED!
and after:
$ perf test -v topo
36: Test topology in session :
--- start ---
test child forked, pid 6532
templ file: /tmp/perf-test-y10wFJ
CPU 0, core 0, socket 0
CPU 1, core 1, socket 0
CPU 2, core 9, socket 0
CPU 3, core 10, socket 0
CPU 4, core 0, socket 1
CPU 5, core 1, socket 1
CPU 6, core 9, socket 1
CPU 7, core 10, socket 1
test child finished with 0
---- end ----
Test topology in session: Ok
Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Cc: Jan Stancek <jstancek@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Kan Liang <kan.liang@intel.com>
Link: http://lkml.kernel.org/r/20151203233219.GA27696@us.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The 'struct machine' represents the machine where the samples were/are
being collected, and we also have a 'struct perf_env' with extra details
about that machine, which we were collecting at 'perf.data' creation time
but which we also need when no perf.data file is being used, such as in
'perf top'.
So, get those structs closer together, as they provide a bigger picture
of the sample's environment.
In 'perf session', when the file argument is NULL, we can assume that
the tool is sampling the running machine, so point machine->env to
the global perf_env put in place in previous patches, while setting it to
the perf_header.env one when reading from a file.
This paves the way for machine->env to be used in
perf_event__preprocess_sample to populate addr_location.socket.
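A minimal sketch of that decision, using stand-in types and a
hypothetical helper rather than the real perf structs:

  #include <stdbool.h>

  /* Stand-ins for the real perf structs, just enough to show the
   * wiring; not the actual definitions. */
  struct perf_env { int placeholder; };
  struct machine  { struct perf_env *env; };

  /* Global environment describing the running machine, put in place
   * by the previous patches (placeholder here). */
  static struct perf_env perf_env;

  /* Sketch of the choice made at session setup: no file argument
   * means we are sampling the running machine. */
  static void machine__set_env(struct machine *machine, bool have_file,
                               struct perf_env *header_env)
  {
          machine->env = have_file ? header_env : &perf_env;
  }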
Tested-by: Wang Nan <wangnan0@huawei.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/n/tip-2ajotl0khscutm68exictoy9@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
This patch stores the cpu socket_id and core_id in a perf.data header,
and reads them into the perf_env struct when processing perf.data files.
The change modifies the CPU_TOPOLOGY section, making sure it is
backward/forward compatible.
The patch checks the section size before reading the core and socket ids.
It never reads data crossing the section boundary. An old perf binary
without this patch can also correctly read a perf.data file written by a
new perf with this patch.
Because the new info is added at the end of the cpu_topology section, an
old perf tool ignores the extra data.
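A sketch of the bounded read, with hypothetical names (the real
handlers live in tools/perf/util/header.c):

  #include <stddef.h>
  #include <string.h>

  /* Sketch of the compatibility rule: the per-CPU core/socket ids are
   * appended at the end of the section, and the reader consumes them
   * only when the section is big enough, never reading past its
   * boundary. */
  struct cpu_topo_ids { int core_id, socket_id; };

  static int read_cpu_ids(const char *section, size_t section_size,
                          size_t old_fields_size,
                          struct cpu_topo_ids *ids, size_t nr_cpus)
  {
          size_t need = old_fields_size + nr_cpus * sizeof(*ids);

          if (section_size < need)
                  return 0;       /* old file: ids absent, not an error */

          memcpy(ids, section + old_fields_size, nr_cpus * sizeof(*ids));
          return 1;               /* ids fully inside the section */
  }

An old reader simply stops after the old fields and never looks at the
appended bytes, which is why it stays compatible in both directions.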
Examples:
1. New perf with this patch reads perf.data from an old perf without the
patch:
$ perf_new report -i perf_old.data --header-only -I
......
# sibling threads : 33
# sibling threads : 34
# sibling threads : 35
# Core ID and Socket ID information is not available
# node0 meminfo : total = 32823872 kB, free = 29315548 kB
# node0 cpu list : 0-17,36-53
......
2. Old perf without the patch reads perf.data from a new perf with the
patch:
$ perf_old report -i perf_new.data --header-only -I
......
# sibling threads : 33
# sibling threads : 34
# sibling threads : 35
# node0 meminfo : total = 32823872 kB, free = 29190932 kB
# node0 cpu list : 0-17,36-53
......
3. New perf reads new perf.data:
$ perf_new report -i perf_new.data --header-only -I
......
# sibling threads : 33
# sibling threads : 34
# sibling threads : 35
# CPU 0: Core ID 0, Socket ID 0
# CPU 1: Core ID 1, Socket ID 0
......
# CPU 61: Core ID 10, Socket ID 1
# CPU 62: Core ID 11, Socket ID 1
# CPU 63: Core ID 16, Socket ID 1
# node0 meminfo : total = 32823872 kB, free = 29190932 kB
# node0 cpu list : 0-17,36-53
Signed-off-by: Kan Liang <kan.liang@intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Link: http://lkml.kernel.org/r/1441115893-22006-2-git-send-email-kan.liang@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
This has a different model than the 'thread' and 'map' struct lifetimes:
there is no definitive "don't use this DSO anymore" event, i.e. we may
get many 'struct map' holding references to the '/usr/lib64/libc-2.20.so'
DSO, but then at some point some DSO may end up with no references, yet we
still don't want to release its resources straight away, because "soon" we may
get a new 'struct map' that needs it and we want to reuse its symtab or
other resources.
So we need some way to garbage collect DSOs when crossing some memory
usage threshold, which is left for another patch; for now it is
sufficient to release them when calling dsos__exit(), i.e. when deleting
the whole list as part of deleting the 'struct machine' containing it,
which will leave only referenced objects in use.
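A simplified sketch of that lifetime rule, with illustrative types in
place of the real list_head/refcount machinery:

  #include <stdlib.h>

  /* Illustrative types; the real perf code uses list_head and atomic
   * reference counts. */
  struct dso {
          int refcnt;
          struct dso *next;
  };

  static void dso__put(struct dso *dso)
  {
          if (--dso->refcnt == 0)
                  free(dso);      /* last user gone: release resources */
  }

  static void dsos__exit(struct dso **head)
  {
          struct dso *pos = *head, *next;

          while (pos) {
                  next = pos->next;
                  dso__put(pos);  /* drop the list's own reference */
                  pos = next;
          }
          *head = NULL;
  }

Only the list's own reference is dropped here; a DSO still pinned by
some 'struct map' survives until its last user puts it.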
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/n/tip-majzgz07cm90t2tejrjy4clf@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>