mirror of
https://github.com/Dasharo/linux.git
synced 2026-03-06 15:25:10 -08:00
Merge tag 'trace-v6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace
Pull tracing updates from Steven Rostedt:
- Add options to the osnoise tracer:
- 'panic_on_stop' option that panics the kernel if osnoise is
greater than some user defined threshold.
- 'preempt' option, to test noise while preemption is disabled
- 'irq' option, to test noise when interrupts are disabled
- Add .percent and .graph suffix to histograms to give different
outputs
- Add nohitcount to disable showing hitcount in histogram output
- Add new __cpumask() to trace event fields to annotate that an unsigned
  long array is a cpumask to user space and should be treated as one.
- Add trace_trigger kernel command line parameter to enable trace event
triggers at boot up. Useful to trace stack traces, disable tracing
and take snapshots.
- Fix x86/kmmio mmio tracer to work with the updates to lockdep
- Unify the panic and die notifiers
- Add back ftrace_expect reference that is used to extract more
information in the ftrace_bug() code.
- Have trigger filter parsing errors show up in the tracing error log.
- Updated MAINTAINERS file to add kernel tracing mailing list and
patchwork info
- Use IDA to keep track of event type numbers.
- And minor fixes and clean ups
* tag 'trace-v6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace: (44 commits)
tracing: Fix cpumask() example typo
tracing: Improve panic/die notifiers
ftrace: Prevent RCU stall on PREEMPT_VOLUNTARY kernels
tracing: Do not synchronize freeing of trigger filter on boot up
tracing: Remove pointer (asterisk) and brackets from cpumask_t field
tracing: Have trigger filter parsing errors show up in error_log
x86/mm/kmmio: Remove redundant preempt_disable()
tracing: Fix infinite loop in tracing_read_pipe on overflowed print_trace_line
Documentation/osnoise: Add osnoise/options documentation
tracing/osnoise: Add preempt and/or irq disabled options
tracing/osnoise: Add PANIC_ON_STOP option
Documentation/osnoise: Escape underscore of NO_ prefix
tracing: Fix some checker warnings
tracing/osnoise: Make osnoise_options static
tracing: remove unnecessary trace_trigger ifdef
ring-buffer: Handle resize in early boot up
tracing/hist: Fix issue of losting command info in error_log
tracing: Fix issue of missing one synthetic field
tracing/hist: Fix out-of-bound write on 'action_data.var_ref_idx'
tracing/hist: Fix wrong return value in parse_action_params()
...
@@ -6266,6 +6266,25 @@
 	See also Documentation/trace/ftrace.rst "trace options"
 	section.

+	trace_trigger=[trigger-list]
+			[FTRACE] Add an event trigger on specific events.
+			Set a trigger on top of a specific event, with an optional
+			filter.
+
+			The format is "trace_trigger=<event>.<trigger>[ if <filter>],..."
+			Where more than one trigger may be specified that are comma delimited.
+
+			For example:
+
+			  trace_trigger="sched_switch.stacktrace if prev_state == 2"
+
+			The above will enable the "stacktrace" trigger on the "sched_switch"
+			event but only trigger it if the "prev_state" of the "sched_switch"
+			event is "2" (TASK_UNINTERRUPTIBLE).
+
+			See also "Event triggers" in Documentation/trace/events.rst
+
 	traceoff_on_warning
 			[FTRACE] enable this option to disable tracing when a
 			warning is hit. This turns off "tracing_on". Tracing can
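A hedged illustration of the new boot parameter described above, as a kernel command-line fragment; the trigger, event, and filter come straight from the example in the text, while the tracefs mount point (/sys/kernel/tracing) is the conventional location and is an assumption here:

```shell
# Kernel command line fragment: dump a stack trace whenever a task
# switches out in TASK_UNINTERRUPTIBLE state (prev_state == 2).
trace_trigger="sched_switch.stacktrace if prev_state == 2"

# After boot, the installed trigger can be inspected (requires root):
cat /sys/kernel/tracing/events/sched/sched_switch/trigger
```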
@@ -25,7 +25,7 @@ Documentation written by Tom Zanussi

   hist:keys=<field1[,field2,...]>[:values=<field1[,field2,...]>]
     [:sort=<field1[,field2,...]>][:size=#entries][:pause][:continue]
-    [:clear][:name=histname1][:<handler>.<action>] [if <filter>]
+    [:clear][:name=histname1][:nohitcount][:<handler>.<action>] [if <filter>]

   When a matching event is hit, an entry is added to a hash table
   using the key(s) and value(s) named. Keys and values correspond to
@@ -79,6 +79,8 @@ Documentation written by Tom Zanussi
         .log2         display log2 value rather than raw number
         .buckets=size display grouping of values rather than raw number
         .usecs        display a common_timestamp in microseconds
+        .percent      display a number of percentage value
+        .graph        display a bar-graph of a value
         ============= =================================================

   Note that in general the semantics of a given field aren't
@@ -137,6 +139,12 @@ Documentation written by Tom Zanussi
   existing trigger, rather than via the '>' operator, which will cause
   the trigger to be removed through truncation.

+  The 'nohitcount' (or NOHC) parameter will suppress display of
+  raw hitcount in the histogram. This option requires at least one
+  value field which is not a 'raw hitcount'. For example,
+  'hist:...:vals=hitcount:nohitcount' is rejected, but
+  'hist:...:vals=hitcount.percent:nohitcount' is OK.
+
   - enable_hist/disable_hist

   The enable_hist and disable_hist triggers can be used to have one
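A sketch of how the new '.percent' suffix and 'nohitcount' might be exercised from tracefs (assumes root, CONFIG_HIST_TRIGGERS, and tracefs at /sys/kernel/tracing; the 'prev_pid' key is illustrative — any key field of the event works):

```shell
cd /sys/kernel/tracing

# Show percentages instead of raw counts; suppress the raw hitcount column.
# This is the accepted form from the text above.
echo 'hist:keys=prev_pid:vals=hitcount.percent:nohitcount' \
    > events/sched/sched_switch/trigger
cat events/sched/sched_switch/hist

# The rejected form from the text: nohitcount with only a raw hitcount value.
echo 'hist:keys=prev_pid:vals=hitcount:nohitcount' \
    > events/sched/sched_switch/trigger 2>/dev/null \
    || echo "rejected, as documented"
```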
@@ -92,8 +92,8 @@ Note that the example above shows a high number of HW noise samples.
 The reason being is that this sample was taken on a virtual machine,
 and the host interference is detected as a hardware interference.

-Tracer options
----------------------
+Tracer Configuration
+--------------------

 The tracer has a set of options inside the osnoise directory, they are:

@@ -109,6 +109,27 @@ The tracer has a set of options inside the osnoise directory, they are:
 - tracing_threshold: the minimum delta between two time() reads to be
   considered as noise, in us. When set to 0, the default value will
   be used, which is currently 5 us.
+- osnoise/options: a set of on/off options that can be enabled by
+  writing the option name to the file or disabled by writing the option
+  name preceded with the 'NO\_' prefix. For example, writing
+  NO_OSNOISE_WORKLOAD disables the OSNOISE_WORKLOAD option. The
+  special DEFAULTS option resets all options to the default value.
+
+Tracer Options
+--------------
+
+The osnoise/options file exposes a set of on/off configuration options for
+the osnoise tracer. These options are:
+
+- DEFAULTS: reset the options to the default value.
+- OSNOISE_WORKLOAD: do not dispatch osnoise workload (see dedicated
+  section below).
+- PANIC_ON_STOP: call panic() if the tracer stops. This option serves to
+  capture a vmcore.
+- OSNOISE_PREEMPT_DISABLE: disable preemption while running the osnoise
+  workload, allowing only IRQ and hardware-related noise.
+- OSNOISE_IRQ_DISABLE: disable IRQs while running the osnoise workload,
+  allowing only NMIs and hardware-related noise, like the hwlat tracer.

 Additional Tracing
 ------------------

@@ -150,3 +171,10 @@ tracepoints is smaller than eight us reported in the sample_threshold.
 The reason roots in the overhead of the entry and exit code that happens
 before and after any interference execution. This justifies the dual
 approach: measuring thread and tracing.
+
+Running osnoise tracer without workload
+---------------------------------------
+
+By enabling the osnoise tracer with the NO_OSNOISE_WORKLOAD option set,
+the osnoise: tracepoints serve to measure the execution time of
+any type of Linux task, free from the interference of other tasks.
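The option names above can be exercised as a short tracefs session; a minimal sketch, assuming root and tracefs mounted at /sys/kernel/tracing (all option names are taken from the documentation text above):

```shell
cd /sys/kernel/tracing
echo osnoise > current_tracer

# Enable an option by writing its name:
echo PANIC_ON_STOP > osnoise/options

# Disable one by prefixing NO_:
echo NO_OSNOISE_WORKLOAD > osnoise/options

# Reset everything to defaults:
echo DEFAULTS > osnoise/options
cat osnoise/options
```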
@@ -8528,6 +8528,9 @@ FUNCTION HOOKS (FTRACE)
 M:	Steven Rostedt <rostedt@goodmis.org>
 M:	Masami Hiramatsu <mhiramat@kernel.org>
 R:	Mark Rutland <mark.rutland@arm.com>
 L:	linux-kernel@vger.kernel.org
+L:	linux-trace-kernel@vger.kernel.org
+Q:	https://patchwork.kernel.org/project/linux-trace-kernel/list/
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
 F:	Documentation/trace/ftrace*

@@ -11606,6 +11609,9 @@ M: Naveen N. Rao <naveen.n.rao@linux.ibm.com>
 M:	Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
 M:	"David S. Miller" <davem@davemloft.net>
 M:	Masami Hiramatsu <mhiramat@kernel.org>
 L:	linux-kernel@vger.kernel.org
+L:	linux-trace-kernel@vger.kernel.org
+Q:	https://patchwork.kernel.org/project/linux-trace-kernel/list/
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
 F:	Documentation/trace/kprobes.rst

@@ -21079,6 +21085,9 @@ F: drivers/hwmon/pmbus/tps546d24.c
 TRACING
 M:	Steven Rostedt <rostedt@goodmis.org>
 M:	Masami Hiramatsu <mhiramat@kernel.org>
 L:	linux-kernel@vger.kernel.org
+L:	linux-trace-kernel@vger.kernel.org
+Q:	https://patchwork.kernel.org/project/linux-trace-kernel/list/
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git
 F:	Documentation/trace/*
@@ -221,7 +221,9 @@ void ftrace_replace_code(int enable)

 		ret = ftrace_verify_code(rec->ip, old);
 		if (ret) {
+			ftrace_expected = old;
 			ftrace_bug(ret, rec);
+			ftrace_expected = NULL;
 			return;
 		}
 	}
@@ -62,7 +62,13 @@ struct kmmio_context {
 	int active;
 };

-static DEFINE_SPINLOCK(kmmio_lock);
+/*
+ * The kmmio_lock is taken in int3 context, which is treated as NMI context.
+ * This causes lockdep to complain about it being in both NMI and normal
+ * context. Hide it from lockdep, as it should not have any other locks
+ * taken under it, and this is only enabled for debugging mmio anyway.
+ */
+static arch_spinlock_t kmmio_lock = __ARCH_SPIN_LOCK_UNLOCKED;

 /* Protected by kmmio_lock */
 unsigned int kmmio_count;

@@ -240,15 +246,14 @@ int kmmio_handler(struct pt_regs *regs, unsigned long addr)
 	page_base &= page_level_mask(l);

 	/*
-	 * Preemption is now disabled to prevent process switch during
-	 * single stepping. We can only handle one active kmmio trace
+	 * Hold the RCU read lock over single stepping to avoid looking
+	 * up the probe and kmmio_fault_page again. The rcu_read_lock_sched()
+	 * also disables preemption and prevents process switch during
+	 * the single stepping. We can only handle one active kmmio trace
 	 * per cpu, so ensure that we finish it before something else
-	 * gets to run. We also hold the RCU read lock over single
-	 * stepping to avoid looking up the probe and kmmio_fault_page
-	 * again.
+	 * gets to run.
 	 */
-	preempt_disable();
-	rcu_read_lock();
+	rcu_read_lock_sched_notrace();

 	faultpage = get_kmmio_fault_page(page_base);
 	if (!faultpage) {

@@ -317,8 +322,7 @@ int kmmio_handler(struct pt_regs *regs, unsigned long addr)
 	return 1; /* fault handled */

 no_kmmio:
-	rcu_read_unlock();
-	preempt_enable_no_resched();
+	rcu_read_unlock_sched_notrace();
 	return ret;
 }

@@ -346,10 +350,10 @@ static int post_kmmio_handler(unsigned long condition, struct pt_regs *regs)
 		ctx->probe->post_handler(ctx->probe, condition, regs);

 	/* Prevent racing against release_kmmio_fault_page(). */
-	spin_lock(&kmmio_lock);
+	arch_spin_lock(&kmmio_lock);
 	if (ctx->fpage->count)
 		arm_kmmio_fault_page(ctx->fpage);
-	spin_unlock(&kmmio_lock);
+	arch_spin_unlock(&kmmio_lock);

 	regs->flags &= ~X86_EFLAGS_TF;
 	regs->flags |= ctx->saved_flags;

@@ -357,8 +361,7 @@ static int post_kmmio_handler(unsigned long condition, struct pt_regs *regs)
 	/* These were acquired in kmmio_handler(). */
 	ctx->active--;
 	BUG_ON(ctx->active);
-	rcu_read_unlock();
-	preempt_enable_no_resched();
+	rcu_read_unlock_sched_notrace();

 	/*
 	 * if somebody else is singlestepping across a probe point, flags

@@ -440,7 +443,8 @@ int register_kmmio_probe(struct kmmio_probe *p)
 	unsigned int l;
 	pte_t *pte;

-	spin_lock_irqsave(&kmmio_lock, flags);
+	local_irq_save(flags);
+	arch_spin_lock(&kmmio_lock);
 	if (get_kmmio_probe(addr)) {
 		ret = -EEXIST;
 		goto out;

@@ -460,7 +464,9 @@ int register_kmmio_probe(struct kmmio_probe *p)
 		size += page_level_size(l);
 	}
 out:
-	spin_unlock_irqrestore(&kmmio_lock, flags);
+	arch_spin_unlock(&kmmio_lock);
+	local_irq_restore(flags);

 	/*
 	 * XXX: What should I do here?
 	 * Here was a call to global_flush_tlb(), but it does not exist

@@ -494,7 +500,8 @@ static void remove_kmmio_fault_pages(struct rcu_head *head)
 	struct kmmio_fault_page **prevp = &dr->release_list;
 	unsigned long flags;

-	spin_lock_irqsave(&kmmio_lock, flags);
+	local_irq_save(flags);
+	arch_spin_lock(&kmmio_lock);
 	while (f) {
 		if (!f->count) {
 			list_del_rcu(&f->list);

@@ -506,7 +513,8 @@ static void remove_kmmio_fault_pages(struct rcu_head *head)
 		}
 		f = *prevp;
 	}
-	spin_unlock_irqrestore(&kmmio_lock, flags);
+	arch_spin_unlock(&kmmio_lock);
+	local_irq_restore(flags);

 	/* This is the real RCU destroy call. */
 	call_rcu(&dr->rcu, rcu_free_kmmio_fault_pages);

@@ -540,14 +548,16 @@ void unregister_kmmio_probe(struct kmmio_probe *p)
 	if (!pte)
 		return;

-	spin_lock_irqsave(&kmmio_lock, flags);
+	local_irq_save(flags);
+	arch_spin_lock(&kmmio_lock);
 	while (size < size_lim) {
 		release_kmmio_fault_page(addr + size, &release_list);
 		size += page_level_size(l);
 	}
 	list_del_rcu(&p->list);
 	kmmio_count--;
-	spin_unlock_irqrestore(&kmmio_lock, flags);
+	arch_spin_unlock(&kmmio_lock);
+	local_irq_restore(flags);

 	if (!release_list)
 		return;
@@ -113,8 +113,7 @@ void ring_buffer_change_overwrite(struct trace_buffer *buffer, int val);

 struct ring_buffer_event *ring_buffer_lock_reserve(struct trace_buffer *buffer,
 						   unsigned long length);
-int ring_buffer_unlock_commit(struct trace_buffer *buffer,
-			      struct ring_buffer_event *event);
+int ring_buffer_unlock_commit(struct trace_buffer *buffer);
 int ring_buffer_write(struct trace_buffer *buffer,
 		      unsigned long length, void *data);
@@ -136,7 +136,6 @@ struct trace_event_functions {

 struct trace_event {
 	struct hlist_node		node;
-	struct list_head		list;
 	int				type;
 	struct trace_event_functions	*funcs;
 };

@@ -235,7 +234,8 @@ void tracing_record_taskinfo_sched_switch(struct task_struct *prev,
 void tracing_record_cmdline(struct task_struct *task);
 void tracing_record_tgid(struct task_struct *task);

-int trace_output_call(struct trace_iterator *iter, char *name, char *fmt, ...);
+int trace_output_call(struct trace_iterator *iter, char *name, char *fmt, ...)
+	__printf(3, 4);

 struct event_filter;
@@ -97,7 +97,8 @@ extern int trace_seq_hex_dump(struct trace_seq *s, const char *prefix_str,
			      const void *buf, size_t len, bool ascii);

 #else /* CONFIG_TRACING */
-static inline void trace_seq_printf(struct trace_seq *s, const char *fmt, ...)
+static inline __printf(2, 3)
+void trace_seq_printf(struct trace_seq *s, const char *fmt, ...)
 {
 }
 static inline void
@@ -21,6 +21,9 @@
 #undef __get_bitmask
 #define __get_bitmask(field) (char *)__get_dynamic_array(field)

+#undef __get_cpumask
+#define __get_cpumask(field) (char *)__get_dynamic_array(field)
+
 #undef __get_sockaddr
 #define __get_sockaddr(field) ((struct sockaddr *)__get_dynamic_array(field))

@@ -40,6 +43,9 @@
 #undef __get_rel_bitmask
 #define __get_rel_bitmask(field) (char *)__get_rel_dynamic_array(field)

+#undef __get_rel_cpumask
+#define __get_rel_cpumask(field) (char *)__get_rel_dynamic_array(field)
+
 #undef __get_rel_sockaddr
 #define __get_rel_sockaddr(field) ((struct sockaddr *)__get_rel_dynamic_array(field))
@@ -21,6 +21,9 @@
 #undef __get_bitmask
 #define __get_bitmask(field) (char *)__get_dynamic_array(field)

+#undef __get_cpumask
+#define __get_cpumask(field) (char *)__get_dynamic_array(field)
+
 #undef __get_sockaddr
 #define __get_sockaddr(field) ((struct sockaddr *)__get_dynamic_array(field))

@@ -41,6 +44,9 @@
 #undef __get_rel_bitmask
 #define __get_rel_bitmask(field) (char *)__get_rel_dynamic_array(field)

+#undef __get_rel_cpumask
+#define __get_rel_cpumask(field) (char *)__get_rel_dynamic_array(field)
+
 #undef __get_rel_sockaddr
 #define __get_rel_sockaddr(field) ((struct sockaddr *)__get_rel_dynamic_array(field))
@@ -32,6 +32,9 @@
 #undef __bitmask
 #define __bitmask(item, nr_bits) __dynamic_array(char, item, -1)

+#undef __cpumask
+#define __cpumask(item) __dynamic_array(char, item, -1)
+
 #undef __sockaddr
 #define __sockaddr(field, len) __dynamic_array(u8, field, len)

@@ -47,6 +50,9 @@
 #undef __rel_bitmask
 #define __rel_bitmask(item, nr_bits) __rel_dynamic_array(char, item, -1)

+#undef __rel_cpumask
+#define __rel_cpumask(item) __rel_dynamic_array(char, item, -1)
+
 #undef __rel_sockaddr
 #define __rel_sockaddr(field, len) __rel_dynamic_array(u8, field, len)
@@ -38,6 +38,9 @@
 #undef __bitmask
 #define __bitmask(item, nr_bits) __dynamic_array(unsigned long, item, -1)

+#undef __cpumask
+#define __cpumask(item) __dynamic_array(unsigned long, item, -1)
+
 #undef __sockaddr
 #define __sockaddr(field, len) __dynamic_array(u8, field, len)

@@ -53,5 +56,8 @@
 #undef __rel_bitmask
 #define __rel_bitmask(item, nr_bits) __rel_dynamic_array(unsigned long, item, -1)

+#undef __rel_cpumask
+#define __rel_cpumask(item) __rel_dynamic_array(unsigned long, item, -1)
+
 #undef __rel_sockaddr
 #define __rel_sockaddr(field, len) __rel_dynamic_array(u8, field, len)
@@ -42,6 +42,9 @@
 	trace_print_bitmask_seq(p, __bitmask, __bitmask_size);		\
 })

+#undef __get_cpumask
+#define __get_cpumask(field) __get_bitmask(field)
+
 #undef __get_rel_bitmask
 #define __get_rel_bitmask(field)						\
 ({									\

@@ -51,6 +54,9 @@
 	trace_print_bitmask_seq(p, __bitmask, __bitmask_size);		\
 })

+#undef __get_rel_cpumask
+#define __get_rel_cpumask(field) __get_rel_bitmask(field)
+
 #undef __get_sockaddr
 #define __get_sockaddr(field)	((struct sockaddr *)__get_dynamic_array(field))
@@ -46,6 +46,12 @@
 #undef __bitmask
 #define __bitmask(item, nr_bits) __dynamic_array(unsigned long, item, -1)

+#undef __cpumask
+#define __cpumask(item) {					\
+	.type = "__data_loc cpumask_t", .name = #item,		\
+	.size = 4, .align = 4,					\
+	.is_signed = 0, .filter_type = FILTER_OTHER },
+
 #undef __sockaddr
 #define __sockaddr(field, len) __dynamic_array(u8, field, len)

@@ -64,5 +70,11 @@
 #undef __rel_bitmask
 #define __rel_bitmask(item, nr_bits) __rel_dynamic_array(unsigned long, item, -1)

+#undef __rel_cpumask
+#define __rel_cpumask(item) {					\
+	.type = "__rel_loc cpumask_t", .name = #item,		\
+	.size = 4, .align = 4,					\
+	.is_signed = 0, .filter_type = FILTER_OTHER },
+
 #undef __rel_sockaddr
 #define __rel_sockaddr(field, len) __rel_dynamic_array(u8, field, len)
@@ -82,10 +82,16 @@
 #define __bitmask(item, nr_bits) __dynamic_array(unsigned long, item,	\
					 __bitmask_size_in_longs(nr_bits))

+#undef __cpumask
+#define __cpumask(item) __bitmask(item, nr_cpumask_bits)
+
 #undef __rel_bitmask
 #define __rel_bitmask(item, nr_bits) __rel_dynamic_array(unsigned long, item,	\
					 __bitmask_size_in_longs(nr_bits))

+#undef __rel_cpumask
+#define __rel_cpumask(item) __rel_bitmask(item, nr_cpumask_bits)
+
 #undef __sockaddr
 #define __sockaddr(field, len) __dynamic_array(u8, field, len)
@@ -57,6 +57,16 @@
 #define __assign_bitmask(dst, src, nr_bits)					\
	memcpy(__get_bitmask(dst), (src), __bitmask_size_in_bytes(nr_bits))

+#undef __cpumask
+#define __cpumask(item) __dynamic_array(unsigned long, item, -1)
+
+#undef __get_cpumask
+#define __get_cpumask(field) (char *)__get_dynamic_array(field)
+
+#undef __assign_cpumask
+#define __assign_cpumask(dst, src)					\
+	memcpy(__get_cpumask(dst), (src), __bitmask_size_in_bytes(nr_cpumask_bits))
+
 #undef __sockaddr
 #define __sockaddr(field, len) __dynamic_array(u8, field, len)

@@ -98,6 +108,16 @@
 #define __assign_rel_bitmask(dst, src, nr_bits)				\
	memcpy(__get_rel_bitmask(dst), (src), __bitmask_size_in_bytes(nr_bits))

+#undef __rel_cpumask
+#define __rel_cpumask(item) __rel_dynamic_array(unsigned long, item, -1)
+
+#undef __get_rel_cpumask
+#define __get_rel_cpumask(field) (char *)__get_rel_dynamic_array(field)
+
+#undef __assign_rel_cpumask
+#define __assign_rel_cpumask(dst, src)					\
+	memcpy(__get_rel_cpumask(dst), (src), __bitmask_size_in_bytes(nr_cpumask_bits))
+
 #undef __rel_sockaddr
 #define __rel_sockaddr(field, len) __rel_dynamic_array(u8, field, len)
@@ -13,11 +13,13 @@
 #undef __get_dynamic_array_len
 #undef __get_str
 #undef __get_bitmask
+#undef __get_cpumask
 #undef __get_sockaddr
 #undef __get_rel_dynamic_array
 #undef __get_rel_dynamic_array_len
 #undef __get_rel_str
 #undef __get_rel_bitmask
+#undef __get_rel_cpumask
 #undef __get_rel_sockaddr
 #undef __print_array
 #undef __print_hex_dump
@@ -375,6 +375,7 @@ config SCHED_TRACER
 config HWLAT_TRACER
	bool "Tracer to detect hardware latencies (like SMIs)"
	select GENERIC_TRACER
+	select TRACER_MAX_TRACE
	help
	  This tracer, when enabled will create one or more kernel threads,
	  depending on what the cpumask file is set to, which each thread

@@ -410,6 +411,7 @@ config HWLAT_TRACER
 config OSNOISE_TRACER
	bool "OS Noise tracer"
	select GENERIC_TRACER
+	select TRACER_MAX_TRACE
	help
	  In the context of high-performance computing (HPC), the Operating
	  System Noise (osnoise) refers to the interference experienced by an
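The Kconfig changes above can be captured as a kernel .config fragment; a sketch, with symbol names taken from the entries shown (TRACER_MAX_TRACE need not be set by hand, since both tracers now pull it in via 'select'):

```shell
# .config fragment enabling both latency tracers:
CONFIG_HWLAT_TRACER=y
CONFIG_OSNOISE_TRACER=y
```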
@@ -163,7 +163,7 @@ static void ftrace_sync_ipi(void *data)
 static ftrace_func_t ftrace_ops_get_list_func(struct ftrace_ops *ops)
 {
	/*
-	 * If this is a dynamic, RCU, or per CPU ops, or we force list func,
+	 * If this is a dynamic or RCU ops, or we force list func,
	 * then it needs to call the list anyway.
	 */
	if (ops->flags & (FTRACE_OPS_FL_DYNAMIC | FTRACE_OPS_FL_RCU) ||

@@ -2762,6 +2762,19 @@ void __weak ftrace_arch_code_modify_post_process(void)
 {
 }

+static int update_ftrace_func(ftrace_func_t func)
+{
+	static ftrace_func_t save_func;
+
+	/* Avoid updating if it hasn't changed */
+	if (func == save_func)
+		return 0;
+
+	save_func = func;
+
+	return ftrace_update_ftrace_func(func);
+}
+
 void ftrace_modify_all_code(int command)
 {
	int update = command & FTRACE_UPDATE_TRACE_FUNC;

@@ -2782,7 +2795,7 @@ void ftrace_modify_all_code(int command)
	 * traced.
	 */
	if (update) {
-		err = ftrace_update_ftrace_func(ftrace_ops_list_func);
+		err = update_ftrace_func(ftrace_ops_list_func);
		if (FTRACE_WARN_ON(err))
			return;
	}

@@ -2798,7 +2811,7 @@ void ftrace_modify_all_code(int command)
		/* If irqs are disabled, we are in stop machine */
		if (!irqs_disabled())
			smp_call_function(ftrace_sync_ipi, NULL, 1);
-		err = ftrace_update_ftrace_func(ftrace_trace_function);
+		err = update_ftrace_func(ftrace_trace_function);
		if (FTRACE_WARN_ON(err))
			return;
	}

@@ -3070,8 +3083,6 @@ out:
	/*
	 * Dynamic ops may be freed, we must make sure that all
	 * callers are done before leaving this function.
-	 * The same goes for freeing the per_cpu data of the per_cpu
-	 * ops.
	 */
	if (ops->flags & FTRACE_OPS_FL_DYNAMIC) {
		/*

@@ -4192,6 +4203,7 @@ match_records(struct ftrace_hash *hash, char *func, int len, char *mod)
			}
			found = 1;
		}
+		cond_resched();
	} while_for_each_ftrace_rec();
 out_unlock:
	mutex_unlock(&ftrace_lock);

@@ -7518,8 +7530,6 @@ __ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
	/*
	 * Check the following for each ops before calling their func:
	 *  if RCU flag is set, then rcu_is_watching() must be true
-	 *  if PER_CPU is set, then ftrace_function_local_disable()
-	 *  must be false
	 *  Otherwise test if the ip matches the ops filter
	 *
	 * If any of the above fails then the op->func() is not executed.

@@ -7569,8 +7579,8 @@ NOKPROBE_SYMBOL(arch_ftrace_ops_list_func);

 /*
  * If there's only one function registered but it does not support
- * recursion, needs RCU protection and/or requires per cpu handling, then
- * this function will be called by the mcount trampoline.
+ * recursion, needs RCU protection, then this function will be called
+ * by the mcount trampoline.
  */
 static void ftrace_ops_assist_func(unsigned long ip, unsigned long parent_ip,
				   struct ftrace_ops *op, struct ftrace_regs *fregs)
Some files were not shown because too many files have changed in this diff.