Merge branch 'rcu/next' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu into core/rcu
The major features of this series are:

- making RCU more aggressive about entering dyntick-idle mode in order to improve energy efficiency
- converting a few more call_rcu()s to kfree_rcu()s
- applying a number of rcutree fixes and cleanups to rcutiny
- removing CONFIG_SMP #ifdefs from treercu
- allowing RCU CPU stall times to be set via sysfs
- adding CPU-stall capability to rcutorture
- adding more RCU-abuse diagnostics
- updating documentation
- fixing yet more issues located by the still-ongoing top-to-bottom inspection of RCU, this time with a special focus on the CPU-hotplug code path

Signed-off-by: Ingo Molnar <mingo@elte.hu>
@@ -180,6 +180,20 @@ over a rather long period of time, but improvements are always welcome!

operations that would not normally be undertaken while a real-time
workload is running.

In particular, if you find yourself invoking one of the expedited
primitives repeatedly in a loop, please do everyone a favor:
Restructure your code so that it batches the updates, allowing
a single non-expedited primitive to cover the entire batch.
This will very likely be faster than the loop containing the
expedited primitive, and will be much, much easier on the rest
of the system, especially for real-time workloads running on
the rest of the system.
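
To make that restructuring concrete, here is a minimal sketch assuming a hypothetical RCU-protected list; struct foo, mylist, mylock, should_remove(), pick_victim(), NBATCH, p, and q are illustrative names, not kernel APIs:

	struct foo {
		struct list_head entry;
		/* ... payload ... */
	};

	/* Costly: one expedited grace period per removed element. */
	while ((p = pick_victim(&mylist)) != NULL) {	/* unlinks under mylock */
		synchronize_rcu_expedited();
		kfree(p);
	}

	/* Better: unlink the whole batch, then wait once. */
	struct foo *victims[NBATCH];
	int i, n = 0;

	spin_lock(&mylock);
	list_for_each_entry_safe(p, q, &mylist, entry)
		if (should_remove(p) && n < NBATCH) {
			list_del_rcu(&p->entry);
			victims[n++] = p;
		}
	spin_unlock(&mylock);

	synchronize_rcu();	/* one non-expedited grace period covers every removal */
	for (i = 0; i < n; i++)
		kfree(victims[i]);

The batched form pays for a single grace period no matter how many elements are removed, which is exactly why it is so much easier on the rest of the system.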

In addition, it is illegal to call the expedited forms from
a CPU-hotplug notifier, or while holding a lock that is acquired
by a CPU-hotplug notifier. Failing to observe this restriction
will result in deadlock.

7. If the updater uses call_rcu() or synchronize_rcu(), then the
corresponding readers must use rcu_read_lock() and
rcu_read_unlock(). If the updater uses call_rcu_bh() or

@@ -12,14 +12,38 @@ CONFIG_RCU_CPU_STALL_TIMEOUT

This kernel configuration parameter defines the period of time
that RCU will wait from the beginning of a grace period until it
issues an RCU CPU stall warning. This time period is normally
-ten seconds.
+sixty seconds.

-RCU_SECONDS_TILL_STALL_RECHECK
+This configuration parameter may be changed at runtime via the
+/sys/module/rcutree/parameters/rcu_cpu_stall_timeout, however
+this parameter is checked only at the beginning of a cycle.
+So if you are 30 seconds into a 70-second stall, setting this
+sysfs parameter to (say) five will shorten the timeout for the
+-next- stall, or the following warning for the current stall
+(assuming the stall lasts long enough). It will not affect the
+timing of the next warning for the current stall.

-This macro defines the period of time that RCU will wait after
-issuing a stall warning until it issues another stall warning
-for the same stall. This time period is normally set to three
-times the check interval plus thirty seconds.
+Stall-warning messages may be enabled and disabled completely via
+/sys/module/rcutree/parameters/rcu_cpu_stall_suppress.
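
For example, from a root shell (the sysfs paths are the ones named above; the values are only illustrative):

	# Use a 30-second timeout starting with the next grace-period cycle:
	echo 30 > /sys/module/rcutree/parameters/rcu_cpu_stall_timeout

	# Suppress stall warnings entirely, then re-enable them:
	echo 1 > /sys/module/rcutree/parameters/rcu_cpu_stall_suppress
	echo 0 > /sys/module/rcutree/parameters/rcu_cpu_stall_suppress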

CONFIG_RCU_CPU_STALL_VERBOSE

This kernel configuration parameter causes the stall warning to
also dump the stacks of any tasks that are blocking the current
RCU-preempt grace period.

RCU_CPU_STALL_INFO

This kernel configuration parameter causes the stall warning to
print out additional per-CPU diagnostic information, including
information on scheduling-clock ticks and RCU's idle-CPU tracking.

RCU_STALL_DELAY_DELTA

Although the lockdep facility is extremely useful, it does add
some overhead. Therefore, under CONFIG_PROVE_RCU, the
RCU_STALL_DELAY_DELTA macro allows five extra seconds before
giving an RCU CPU stall warning message.

RCU_STALL_RAT_DELAY

@@ -64,6 +88,54 @@ INFO: rcu_bh_state detected stalls on CPUs/tasks: { } (detected by 4, 2502 jiffi

This is rare, but does happen from time to time in real life.

If the CONFIG_RCU_CPU_STALL_INFO kernel configuration parameter is set,
more information is printed with the stall-warning message, for example:

INFO: rcu_preempt detected stall on CPU
0: (63959 ticks this GP) idle=241/3fffffffffffffff/0
(t=65000 jiffies)

In kernels with CONFIG_RCU_FAST_NO_HZ, even more information is
printed:

INFO: rcu_preempt detected stall on CPU
0: (64628 ticks this GP) idle=dd5/3fffffffffffffff/0 drain=0 . timer=-1
(t=65000 jiffies)

The "(64628 ticks this GP)" indicates that this CPU has taken more
than 64,000 scheduling-clock interrupts during the current stalled
grace period. If the CPU was not yet aware of the current grace
period (for example, if it was offline), then this part of the message
indicates how many grace periods behind the CPU is.

The "idle=" portion of the message prints the dyntick-idle state.
The hex number before the first "/" is the low-order 12 bits of the
dynticks counter, which will have an even-numbered value if the CPU is
in dyntick-idle mode and an odd-numbered value otherwise. The hex
number between the two "/"s is the value of the nesting, which will
be a small positive number if in the idle loop and a very large positive
number (as shown above) otherwise.

For CONFIG_RCU_FAST_NO_HZ kernels, the "drain=0" indicates that the
CPU is not in the process of trying to force itself into dyntick-idle
state, the "." indicates that the CPU has not given up forcing RCU
into dyntick-idle mode (it would be "H" otherwise), and the "timer=-1"
indicates that the CPU has not recently forced RCU into dyntick-idle
mode (it would otherwise indicate the number of microseconds remaining
in this forced state).


Multiple Warnings From One Stall

If a stall lasts long enough, multiple stall-warning messages will be
printed for it. The second and subsequent messages are printed at
longer intervals, so that the time between (say) the first and second
message will be about three times the interval between the beginning
of the stall and the first message.


What Causes RCU CPU Stall Warnings?

So your kernel printed an RCU CPU stall warning. The next question is
"What caused it?" The following problems can result in RCU CPU stall
warnings:

@@ -128,4 +200,5 @@ is occurring, which will usually be in the function nearest the top of

that portion of the stack which remains the same from trace to trace.
If you can reliably trigger the stall, ftrace can be quite helpful.

-RCU bugs can often be debugged with the help of CONFIG_RCU_TRACE.
+RCU bugs can often be debugged with the help of CONFIG_RCU_TRACE
+and with RCU's event tracing.

@@ -69,6 +69,13 @@ onoff_interval

CPU-hotplug operations regardless of what value is
specified for onoff_interval.

onoff_holdoff	The number of seconds to wait until starting CPU-hotplug
operations. This would normally only be used when
rcutorture was built into the kernel and started
automatically at boot time, in which case it is useful
in order to avoid confusing boot-time code with CPUs
coming and going.

shuffle_interval
The number of seconds to keep the test threads affinitied
to a particular subset of the CPUs, defaults to 3 seconds.

@@ -79,6 +86,24 @@ shutdown_secs The number of seconds to run the test before terminating

zero, which disables test termination and system shutdown.
This capability is useful for automated testing.

stall_cpu	The number of seconds that a CPU should be stalled while
within both an rcu_read_lock() and a preempt_disable().
This stall happens only once per rcutorture run.
If you need multiple stalls, use modprobe and rmmod to
repeatedly run rcutorture. The default for stall_cpu
is zero, which prevents rcutorture from stalling a CPU.

Note that attempts to rmmod rcutorture while the stall
is ongoing will hang, so be careful what value you
choose for this module parameter! In addition, too-large
values for stall_cpu might well induce failures and
warnings in other parts of the kernel. You have been
warned!

stall_cpu_holdoff
The number of seconds to wait after rcutorture starts
before stalling a CPU. Defaults to 10 seconds.
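
For example, the following would stall one CPU for 20 seconds, starting 30 seconds after rcutorture loads (parameter semantics exactly as described above):

	modprobe rcutorture stall_cpu=20 stall_cpu_holdoff=30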

stat_interval	The number of seconds between output of torture
statistics (via printk()). Regardless of the interval,
statistics are printed when the module is unloaded.

@@ -271,11 +296,13 @@ The following script may be used to torture RCU:

#!/bin/sh

modprobe rcutorture
-sleep 100
+sleep 3600
rmmod rcutorture
dmesg | grep torture:

The output can be manually inspected for the error flag of "!!!".
One could of course create a more elaborate script that automatically
-checked for such errors. The "rmmod" command forces a "SUCCESS" or
-"FAILURE" indication to be printk()ed.
+checked for such errors. The "rmmod" command forces a "SUCCESS",
+"FAILURE", or "RCU_HOTPLUG" indication to be printk()ed. The first
+two are self-explanatory, while the last indicates that while there
+were no RCU failures, CPU-hotplug problems were detected.
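
A sketch of such a more elaborate script (the "!!!" flag and "torture:" tag are as described above; the log-file path and run time are arbitrary):

	#!/bin/sh

	modprobe rcutorture
	sleep 3600
	rmmod rcutorture
	dmesg | grep torture: > /tmp/rcutorture.log
	if grep -q '!!!' /tmp/rcutorture.log
	then
		echo rcutorture: FAILURE
		exit 1
	fi
	echo rcutorture: SUCCESS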
@@ -33,23 +33,23 @@ rcu/rcuboost:

The output of "cat rcu/rcudata" looks as follows:

rcu_sched:
-0 c=20972 g=20973 pq=1 pgp=20973 qp=0 dt=545/1/0 df=50 of=0 ri=0 ql=163 qs=NRW. kt=0/W/0 ktl=ebc3 b=10 ci=153737 co=0 ca=0
-1 c=20972 g=20973 pq=1 pgp=20973 qp=0 dt=967/1/0 df=58 of=0 ri=0 ql=634 qs=NRW. kt=0/W/1 ktl=58c b=10 ci=191037 co=0 ca=0
-2 c=20972 g=20973 pq=1 pgp=20973 qp=0 dt=1081/1/0 df=175 of=0 ri=0 ql=74 qs=N.W. kt=0/W/2 ktl=da94 b=10 ci=75991 co=0 ca=0
-3 c=20942 g=20943 pq=1 pgp=20942 qp=1 dt=1846/0/0 df=404 of=0 ri=0 ql=0 qs=.... kt=0/W/3 ktl=d1cd b=10 ci=72261 co=0 ca=0
-4 c=20972 g=20973 pq=1 pgp=20973 qp=0 dt=369/1/0 df=83 of=0 ri=0 ql=48 qs=N.W. kt=0/W/4 ktl=e0e7 b=10 ci=128365 co=0 ca=0
-5 c=20972 g=20973 pq=1 pgp=20973 qp=0 dt=381/1/0 df=64 of=0 ri=0 ql=169 qs=NRW. kt=0/W/5 ktl=fb2f b=10 ci=164360 co=0 ca=0
-6 c=20972 g=20973 pq=1 pgp=20973 qp=0 dt=1037/1/0 df=183 of=0 ri=0 ql=62 qs=N.W. kt=0/W/6 ktl=d2ad b=10 ci=65663 co=0 ca=0
-7 c=20897 g=20897 pq=1 pgp=20896 qp=0 dt=1572/0/0 df=382 of=0 ri=0 ql=0 qs=.... kt=0/W/7 ktl=cf15 b=10 ci=75006 co=0 ca=0
+0 c=20972 g=20973 pq=1 pgp=20973 qp=0 dt=545/1/0 df=50 of=0 ql=163 qs=NRW. kt=0/W/0 ktl=ebc3 b=10 ci=153737 co=0 ca=0
+1 c=20972 g=20973 pq=1 pgp=20973 qp=0 dt=967/1/0 df=58 of=0 ql=634 qs=NRW. kt=0/W/1 ktl=58c b=10 ci=191037 co=0 ca=0
+2 c=20972 g=20973 pq=1 pgp=20973 qp=0 dt=1081/1/0 df=175 of=0 ql=74 qs=N.W. kt=0/W/2 ktl=da94 b=10 ci=75991 co=0 ca=0
+3 c=20942 g=20943 pq=1 pgp=20942 qp=1 dt=1846/0/0 df=404 of=0 ql=0 qs=.... kt=0/W/3 ktl=d1cd b=10 ci=72261 co=0 ca=0
+4 c=20972 g=20973 pq=1 pgp=20973 qp=0 dt=369/1/0 df=83 of=0 ql=48 qs=N.W. kt=0/W/4 ktl=e0e7 b=10 ci=128365 co=0 ca=0
+5 c=20972 g=20973 pq=1 pgp=20973 qp=0 dt=381/1/0 df=64 of=0 ql=169 qs=NRW. kt=0/W/5 ktl=fb2f b=10 ci=164360 co=0 ca=0
+6 c=20972 g=20973 pq=1 pgp=20973 qp=0 dt=1037/1/0 df=183 of=0 ql=62 qs=N.W. kt=0/W/6 ktl=d2ad b=10 ci=65663 co=0 ca=0
+7 c=20897 g=20897 pq=1 pgp=20896 qp=0 dt=1572/0/0 df=382 of=0 ql=0 qs=.... kt=0/W/7 ktl=cf15 b=10 ci=75006 co=0 ca=0
rcu_bh:
-0 c=1480 g=1480 pq=1 pgp=1480 qp=0 dt=545/1/0 df=6 of=0 ri=1 ql=0 qs=.... kt=0/W/0 ktl=ebc3 b=10 ci=0 co=0 ca=0
-1 c=1480 g=1480 pq=1 pgp=1480 qp=0 dt=967/1/0 df=3 of=0 ri=1 ql=0 qs=.... kt=0/W/1 ktl=58c b=10 ci=151 co=0 ca=0
-2 c=1480 g=1480 pq=1 pgp=1480 qp=0 dt=1081/1/0 df=6 of=0 ri=1 ql=0 qs=.... kt=0/W/2 ktl=da94 b=10 ci=0 co=0 ca=0
-3 c=1480 g=1480 pq=1 pgp=1480 qp=0 dt=1846/0/0 df=8 of=0 ri=1 ql=0 qs=.... kt=0/W/3 ktl=d1cd b=10 ci=0 co=0 ca=0
-4 c=1480 g=1480 pq=1 pgp=1480 qp=0 dt=369/1/0 df=6 of=0 ri=1 ql=0 qs=.... kt=0/W/4 ktl=e0e7 b=10 ci=0 co=0 ca=0
-5 c=1480 g=1480 pq=1 pgp=1480 qp=0 dt=381/1/0 df=4 of=0 ri=1 ql=0 qs=.... kt=0/W/5 ktl=fb2f b=10 ci=0 co=0 ca=0
-6 c=1480 g=1480 pq=1 pgp=1480 qp=0 dt=1037/1/0 df=6 of=0 ri=1 ql=0 qs=.... kt=0/W/6 ktl=d2ad b=10 ci=0 co=0 ca=0
-7 c=1474 g=1474 pq=1 pgp=1473 qp=0 dt=1572/0/0 df=8 of=0 ri=1 ql=0 qs=.... kt=0/W/7 ktl=cf15 b=10 ci=0 co=0 ca=0
+0 c=1480 g=1480 pq=1 pgp=1480 qp=0 dt=545/1/0 df=6 of=0 ql=0 qs=.... kt=0/W/0 ktl=ebc3 b=10 ci=0 co=0 ca=0
+1 c=1480 g=1480 pq=1 pgp=1480 qp=0 dt=967/1/0 df=3 of=0 ql=0 qs=.... kt=0/W/1 ktl=58c b=10 ci=151 co=0 ca=0
+2 c=1480 g=1480 pq=1 pgp=1480 qp=0 dt=1081/1/0 df=6 of=0 ql=0 qs=.... kt=0/W/2 ktl=da94 b=10 ci=0 co=0 ca=0
+3 c=1480 g=1480 pq=1 pgp=1480 qp=0 dt=1846/0/0 df=8 of=0 ql=0 qs=.... kt=0/W/3 ktl=d1cd b=10 ci=0 co=0 ca=0
+4 c=1480 g=1480 pq=1 pgp=1480 qp=0 dt=369/1/0 df=6 of=0 ql=0 qs=.... kt=0/W/4 ktl=e0e7 b=10 ci=0 co=0 ca=0
+5 c=1480 g=1480 pq=1 pgp=1480 qp=0 dt=381/1/0 df=4 of=0 ql=0 qs=.... kt=0/W/5 ktl=fb2f b=10 ci=0 co=0 ca=0
+6 c=1480 g=1480 pq=1 pgp=1480 qp=0 dt=1037/1/0 df=6 of=0 ql=0 qs=.... kt=0/W/6 ktl=d2ad b=10 ci=0 co=0 ca=0
+7 c=1474 g=1474 pq=1 pgp=1473 qp=0 dt=1572/0/0 df=8 of=0 ql=0 qs=.... kt=0/W/7 ktl=cf15 b=10 ci=0 co=0 ca=0

The first section lists the rcu_data structures for rcu_sched, the second
for rcu_bh. Note that CONFIG_TREE_PREEMPT_RCU kernels will have an

@@ -119,10 +119,6 @@ o "of" is the number of times that some other CPU has forced a

CPU is offline when it is really alive and kicking) is a fatal
error, so it makes sense to err conservatively.

-o "ri" is the number of times that RCU has seen fit to send a
-reschedule IPI to this CPU in order to get it to report a
-quiescent state.

o "ql" is the number of RCU callbacks currently residing on
this CPU. This is the total number of callbacks, regardless
of what state they are in (new, waiting for grace period to
@@ -165,13 +165,6 @@ static inline int ext_hash(u16 code)

return (code + (code >> 9)) & 0xff;
}

-static void ext_int_hash_update(struct rcu_head *head)
-{
-struct ext_int_info *p = container_of(head, struct ext_int_info, rcu);
-
-kfree(p);
-}

int register_external_interrupt(u16 code, ext_int_handler_t handler)
{
struct ext_int_info *p;

@@ -202,7 +195,7 @@ int unregister_external_interrupt(u16 code, ext_int_handler_t handler)

list_for_each_entry_rcu(p, &ext_int_hash[index], entry)
if (p->code == code && p->handler == handler) {
list_del_rcu(&p->entry);
-call_rcu(&p->rcu, ext_int_hash_update);
+kfree_rcu(p, rcu);
}
spin_unlock_irqrestore(&ext_int_hash_lock, flags);
return 0;
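
This conversion (repeated in the SCSI-target hunk below) is what the commit message calls "converting call_rcu()s to kfree_rcu()s"; in sketch form, with a hypothetical structure:

	struct thing {
		/* ... payload ... */
		struct rcu_head rcu;	/* rcu_head embedded in the structure */
	};

	/* Before: a callback whose only job is to kfree() the structure. */
	static void thing_free_rcu(struct rcu_head *head)
	{
		kfree(container_of(head, struct thing, rcu));
	}
		/* ... */
		call_rcu(&p->rcu, thing_free_rcu);

	/* After: kfree_rcu() frees the enclosing structure once a grace
	 * period has elapsed, with no hand-written callback needed. */
		kfree_rcu(p, rcu);

The transformation is valid only when the callback does nothing but kfree() the enclosing structure.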

@@ -85,16 +85,6 @@ static struct ft_tport *ft_tport_create(struct fc_lport *lport)

return tport;
}

-/*
- * Free tport via RCU.
- */
-static void ft_tport_rcu_free(struct rcu_head *rcu)
-{
-struct ft_tport *tport = container_of(rcu, struct ft_tport, rcu);
-
-kfree(tport);
-}

/*
* Delete a target local port.
* Caller holds ft_lport_lock.

@@ -114,7 +104,7 @@ static void ft_tport_delete(struct ft_tport *tport)

tpg->tport = NULL;
tport->tpg = NULL;
}
-call_rcu(&tport->rcu, ft_tport_rcu_free);
+kfree_rcu(tport, rcu);
}

/*
@@ -190,6 +190,33 @@ extern void rcu_idle_exit(void);

extern void rcu_irq_enter(void);
extern void rcu_irq_exit(void);

/**
* RCU_NONIDLE - Indicate idle-loop code that needs RCU readers
* @a: Code that RCU needs to pay attention to.
*
* RCU, RCU-bh, and RCU-sched read-side critical sections are forbidden
* in the inner idle loop, that is, between the rcu_idle_enter() and
* the rcu_idle_exit() -- RCU will happily ignore any such read-side
* critical sections. However, things like powertop need tracepoints
* in the inner idle loop.
*
* This macro provides the way out: RCU_NONIDLE(do_something_with_RCU())
* will tell RCU that it needs to pay attention, invoke its argument
* (in this example, a call to the do_something_with_RCU() function),
* and then tell RCU to go back to ignoring this CPU. It is permissible
* to nest RCU_NONIDLE() wrappers, but the nesting level is currently
* quite limited. If deeper nesting is required, it will be necessary
* to adjust DYNTICK_TASK_NESTING_VALUE accordingly.
*
* This macro may be used from process-level code only.
*/
#define RCU_NONIDLE(a) \
do { \
rcu_idle_exit(); \
do { a; } while (0); \
rcu_idle_enter(); \
} while (0)
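
As a usage sketch (the call site and message are hypothetical; trace_printk() stands in for whatever tracepoint the idle loop needs):

	/* In the inner idle loop, between rcu_idle_enter() and rcu_idle_exit(),
	 * where RCU would otherwise ignore this CPU: */
	RCU_NONIDLE(trace_printk("entering deepest idle state\n"));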

/*
* Infrastructure to implement the synchronize_() primitives in
* TREE_RCU and rcu_barrier_() primitives in TINY_RCU.

@@ -226,6 +253,15 @@ static inline void destroy_rcu_head_on_stack(struct rcu_head *head)

}
#endif /* #else !CONFIG_DEBUG_OBJECTS_RCU_HEAD */

#if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_PROVE_RCU)
bool rcu_lockdep_current_cpu_online(void);
#else /* #if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_PROVE_RCU) */
static inline bool rcu_lockdep_current_cpu_online(void)
{
return 1;
}
#endif /* #else #if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_PROVE_RCU) */

#ifdef CONFIG_DEBUG_LOCK_ALLOC

#ifdef CONFIG_PROVE_RCU

@@ -239,13 +275,11 @@ static inline int rcu_is_cpu_idle(void)

static inline void rcu_lock_acquire(struct lockdep_map *map)
{
-WARN_ON_ONCE(rcu_is_cpu_idle());
lock_acquire(map, 0, 0, 2, 1, NULL, _THIS_IP_);
}

static inline void rcu_lock_release(struct lockdep_map *map)
{
-WARN_ON_ONCE(rcu_is_cpu_idle());
lock_release(map, 1, _THIS_IP_);
}

@@ -270,6 +304,9 @@ extern int debug_lockdep_rcu_enabled(void);

* occur in the same context, for example, it is illegal to invoke
* rcu_read_unlock() in process context if the matching rcu_read_lock()
* was invoked from within an irq handler.
*
* Note that rcu_read_lock() is disallowed if the CPU is either idle or
* offline from an RCU perspective, so check for those as well.
*/
static inline int rcu_read_lock_held(void)
{

@@ -277,6 +314,8 @@ static inline int rcu_read_lock_held(void)

return 1;
if (rcu_is_cpu_idle())
return 0;
if (!rcu_lockdep_current_cpu_online())
return 0;
return lock_is_held(&rcu_lock_map);
}

@@ -313,6 +352,9 @@ extern int rcu_read_lock_bh_held(void);

* notice an extended quiescent state to other CPUs that started a grace
* period. Otherwise we would delay any grace period as long as we run in
* the idle task.
*
* Similarly, we avoid claiming an SRCU read lock held if the current
* CPU is offline.
*/
#ifdef CONFIG_PREEMPT_COUNT
static inline int rcu_read_lock_sched_held(void)

@@ -323,6 +365,8 @@ static inline int rcu_read_lock_sched_held(void)

return 1;
if (rcu_is_cpu_idle())
return 0;
if (!rcu_lockdep_current_cpu_online())
return 0;
if (debug_locks)
lockdep_opinion = lock_is_held(&rcu_sched_lock_map);
return lockdep_opinion || preempt_count() != 0 || irqs_disabled();

@@ -381,8 +425,22 @@ extern int rcu_my_thread_group_empty(void);

} \
} while (0)

#if defined(CONFIG_PROVE_RCU) && !defined(CONFIG_PREEMPT_RCU)
static inline void rcu_preempt_sleep_check(void)
{
rcu_lockdep_assert(!lock_is_held(&rcu_lock_map),
"Illegal context switch in RCU read-side "
"critical section");
}
#else /* #ifdef CONFIG_PROVE_RCU */
static inline void rcu_preempt_sleep_check(void)
{
}
#endif /* #else #ifdef CONFIG_PROVE_RCU */

#define rcu_sleep_check() \
do { \
rcu_preempt_sleep_check(); \
rcu_lockdep_assert(!lock_is_held(&rcu_bh_lock_map), \
"Illegal context switch in RCU-bh" \
" read-side critical section"); \

@@ -470,6 +528,13 @@ extern int rcu_my_thread_group_empty(void);

* NULL. Although rcu_access_pointer() may also be used in cases where
* update-side locks prevent the value of the pointer from changing, you
* should instead use rcu_dereference_protected() for this use case.
*
* It is also permissible to use rcu_access_pointer() when read-side
* access to the pointer was removed at least one grace period ago, as
* is the case in the context of the RCU callback that is freeing up
* the data, or after a synchronize_rcu() returns. This can be useful
* when tearing down multi-linked structures after a grace period
* has elapsed.
*/
#define rcu_access_pointer(p) __rcu_access_pointer((p), __rcu)
@@ -659,6 +724,8 @@ static inline void rcu_read_lock(void)

__rcu_read_lock();
__acquire(RCU);
rcu_lock_acquire(&rcu_lock_map);
rcu_lockdep_assert(!rcu_is_cpu_idle(),
"rcu_read_lock() used illegally while idle");
}

/*

@@ -678,6 +745,8 @@ static inline void rcu_read_lock(void)

*/
static inline void rcu_read_unlock(void)
{
rcu_lockdep_assert(!rcu_is_cpu_idle(),
"rcu_read_unlock() used illegally while idle");
rcu_lock_release(&rcu_lock_map);
__release(RCU);
__rcu_read_unlock();

@@ -705,6 +774,8 @@ static inline void rcu_read_lock_bh(void)

local_bh_disable();
__acquire(RCU_BH);
rcu_lock_acquire(&rcu_bh_lock_map);
rcu_lockdep_assert(!rcu_is_cpu_idle(),
"rcu_read_lock_bh() used illegally while idle");
}

/*

@@ -714,6 +785,8 @@ static inline void rcu_read_lock_bh(void)

*/
static inline void rcu_read_unlock_bh(void)
{
rcu_lockdep_assert(!rcu_is_cpu_idle(),
"rcu_read_unlock_bh() used illegally while idle");
rcu_lock_release(&rcu_bh_lock_map);
__release(RCU_BH);
local_bh_enable();

@@ -737,6 +810,8 @@ static inline void rcu_read_lock_sched(void)

preempt_disable();
__acquire(RCU_SCHED);
rcu_lock_acquire(&rcu_sched_lock_map);
rcu_lockdep_assert(!rcu_is_cpu_idle(),
"rcu_read_lock_sched() used illegally while idle");
}

/* Used by lockdep and tracing: cannot be traced, cannot call lockdep. */

@@ -753,6 +828,8 @@ static inline notrace void rcu_read_lock_sched_notrace(void)

*/
static inline void rcu_read_unlock_sched(void)
{
rcu_lockdep_assert(!rcu_is_cpu_idle(),
"rcu_read_unlock_sched() used illegally while idle");
rcu_lock_release(&rcu_sched_lock_map);
__release(RCU_SCHED);
preempt_enable();

@@ -841,7 +918,7 @@ void __kfree_rcu(struct rcu_head *head, unsigned long offset)

/* See the kfree_rcu() header comment. */
BUILD_BUG_ON(!__is_kfree_rcu_offset(offset));

-call_rcu(head, (rcu_callback)offset);
+kfree_call_rcu(head, (rcu_callback)offset);
}

/**

@@ -27,13 +27,9 @@

#include <linux/cache.h>

#ifdef CONFIG_RCU_BOOST
static inline void rcu_init(void)
{
}
#else /* #ifdef CONFIG_RCU_BOOST */
void rcu_init(void);
#endif /* #else #ifdef CONFIG_RCU_BOOST */

static inline void rcu_barrier_bh(void)
{

@@ -83,6 +79,12 @@ static inline void synchronize_sched_expedited(void)

synchronize_sched();
}

static inline void kfree_call_rcu(struct rcu_head *head,
void (*func)(struct rcu_head *rcu))
{
call_rcu(head, func);
}

#ifdef CONFIG_TINY_RCU

static inline void rcu_preempt_note_context_switch(void)
@@ -61,6 +61,24 @@ extern void synchronize_rcu_bh(void);

extern void synchronize_sched_expedited(void);
extern void synchronize_rcu_expedited(void);

void kfree_call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *rcu));

/**
* synchronize_rcu_bh_expedited - Brute-force RCU-bh grace period
*
* Wait for an RCU-bh grace period to elapse, but use a "big hammer"
* approach to force the grace period to end quickly. This consumes
* significant time on all CPUs and is unfriendly to real-time workloads,
* so is thus not recommended for any sort of common-case code. In fact,
* if you are using synchronize_rcu_bh_expedited() in a loop, please
* restructure your code to batch your updates, and then use a single
* synchronize_rcu_bh() instead.
*
* Note that it is illegal to call this function while holding any lock
* that is acquired by a CPU-hotplug notifier. And yes, it is also illegal
* to call this function from a CPU-hotplug notifier. Failing to observe
* these restrictions will result in deadlock.
*/
static inline void synchronize_rcu_bh_expedited(void)
{
synchronize_sched_expedited();

@@ -83,6 +101,7 @@ extern void rcu_sched_force_quiescent_state(void);

/* A context switch is a grace period for RCU-sched and RCU-bh. */
static inline int rcu_blocking_is_gp(void)
{
might_sleep(); /* Check for RCU read-side critical section. */
return num_online_cpus() == 1;
}
@@ -1864,8 +1864,7 @@ extern void task_clear_jobctl_pending(struct task_struct *task,

#ifdef CONFIG_PREEMPT_RCU

#define RCU_READ_UNLOCK_BLOCKED (1 << 0) /* blocked while in RCU read-side. */
-#define RCU_READ_UNLOCK_BOOSTED (1 << 1) /* boosted while in RCU read-side. */
-#define RCU_READ_UNLOCK_NEED_QS (1 << 2) /* RCU core needs CPU response. */
+#define RCU_READ_UNLOCK_NEED_QS (1 << 1) /* RCU core needs CPU response. */

static inline void rcu_copy_process(struct task_struct *p)
{
@@ -99,15 +99,18 @@ long srcu_batches_completed(struct srcu_struct *sp);

* power mode. This way we can notice an extended quiescent state to
* other CPUs that started a grace period. Otherwise we would delay any
* grace period as long as we run in the idle task.
*
+* Similarly, we avoid claiming an SRCU read lock held if the current
+* CPU is offline.
*/
static inline int srcu_read_lock_held(struct srcu_struct *sp)
{
-if (rcu_is_cpu_idle())
-return 0;
-
if (!debug_lockdep_rcu_enabled())
return 1;
+if (rcu_is_cpu_idle())
+return 0;
+if (!rcu_lockdep_current_cpu_online())
+return 0;
return lock_is_held(&sp->dep_map);
}

@@ -169,6 +172,8 @@ static inline int srcu_read_lock(struct srcu_struct *sp) __acquires(sp)

int retval = __srcu_read_lock(sp);

rcu_lock_acquire(&(sp)->dep_map);
+rcu_lockdep_assert(!rcu_is_cpu_idle(),
+"srcu_read_lock() used illegally while idle");
return retval;
}

@@ -182,6 +187,8 @@ static inline int srcu_read_lock(struct srcu_struct *sp) __acquires(sp)

static inline void srcu_read_unlock(struct srcu_struct *sp, int idx)
__releases(sp)
{
+rcu_lockdep_assert(!rcu_is_cpu_idle(),
+"srcu_read_unlock() used illegally while idle");
rcu_lock_release(&(sp)->dep_map);
__srcu_read_unlock(sp, idx);
}
@@ -313,19 +313,22 @@ TRACE_EVENT(rcu_prep_idle,

/*
* Tracepoint for the registration of a single RCU callback function.
* The first argument is the type of RCU, the second argument is
-* a pointer to the RCU callback itself, and the third element is the
-* new RCU callback queue length for the current CPU.
+* a pointer to the RCU callback itself, the third element is the
+* number of lazy callbacks queued, and the fourth element is the
+* total number of callbacks queued.
*/
TRACE_EVENT(rcu_callback,

-TP_PROTO(char *rcuname, struct rcu_head *rhp, long qlen),
+TP_PROTO(char *rcuname, struct rcu_head *rhp, long qlen_lazy,
+long qlen),

-TP_ARGS(rcuname, rhp, qlen),
+TP_ARGS(rcuname, rhp, qlen_lazy, qlen),

TP_STRUCT__entry(
__field(char *, rcuname)
__field(void *, rhp)
__field(void *, func)
+__field(long, qlen_lazy)
__field(long, qlen)
),

@@ -333,11 +336,13 @@ TRACE_EVENT(rcu_callback,

__entry->rcuname = rcuname;
__entry->rhp = rhp;
__entry->func = rhp->func;
+__entry->qlen_lazy = qlen_lazy;
__entry->qlen = qlen;
),

-TP_printk("%s rhp=%p func=%pf %ld",
-__entry->rcuname, __entry->rhp, __entry->func, __entry->qlen)
+TP_printk("%s rhp=%p func=%pf %ld/%ld",
+__entry->rcuname, __entry->rhp, __entry->func,
+__entry->qlen_lazy, __entry->qlen)
);

/*

@@ -345,20 +350,21 @@ TRACE_EVENT(rcu_callback,

* kfree() form. The first argument is the RCU type, the second argument
* is a pointer to the RCU callback, the third argument is the offset
* of the callback within the enclosing RCU-protected data structure,
-* and the fourth argument is the new RCU callback queue length for the
-* current CPU.
+* the fourth argument is the number of lazy callbacks queued, and the
+* fifth argument is the total number of callbacks queued.
*/
TRACE_EVENT(rcu_kfree_callback,

TP_PROTO(char *rcuname, struct rcu_head *rhp, unsigned long offset,
-long qlen),
+long qlen_lazy, long qlen),

-TP_ARGS(rcuname, rhp, offset, qlen),
+TP_ARGS(rcuname, rhp, offset, qlen_lazy, qlen),

TP_STRUCT__entry(
__field(char *, rcuname)
__field(void *, rhp)
__field(unsigned long, offset)
+__field(long, qlen_lazy)
__field(long, qlen)
),

@@ -366,41 +372,45 @@ TRACE_EVENT(rcu_kfree_callback,

__entry->rcuname = rcuname;
__entry->rhp = rhp;
__entry->offset = offset;
+__entry->qlen_lazy = qlen_lazy;
__entry->qlen = qlen;
),

-TP_printk("%s rhp=%p func=%ld %ld",
+TP_printk("%s rhp=%p func=%ld %ld/%ld",
__entry->rcuname, __entry->rhp, __entry->offset,
-__entry->qlen)
+__entry->qlen_lazy, __entry->qlen)
);

/*
* Tracepoint for marking the beginning rcu_do_batch, performed to start
* RCU callback invocation. The first argument is the RCU flavor,
-* the second is the total number of callbacks (including those that
-* are not yet ready to be invoked), and the third argument is the
-* current RCU-callback batch limit.
+* the second is the number of lazy callbacks queued, the third is
+* the total number of callbacks queued, and the fourth argument is
+* the current RCU-callback batch limit.
*/
TRACE_EVENT(rcu_batch_start,

-TP_PROTO(char *rcuname, long qlen, int blimit),
+TP_PROTO(char *rcuname, long qlen_lazy, long qlen, int blimit),

-TP_ARGS(rcuname, qlen, blimit),
+TP_ARGS(rcuname, qlen_lazy, qlen, blimit),

TP_STRUCT__entry(
__field(char *, rcuname)
+__field(long, qlen_lazy)
__field(long, qlen)
__field(int, blimit)
),

TP_fast_assign(
__entry->rcuname = rcuname;
+__entry->qlen_lazy = qlen_lazy;
__entry->qlen = qlen;
__entry->blimit = blimit;
),

-TP_printk("%s CBs=%ld bl=%d",
-__entry->rcuname, __entry->qlen, __entry->blimit)
+TP_printk("%s CBs=%ld/%ld bl=%d",
+__entry->rcuname, __entry->qlen_lazy, __entry->qlen,
+__entry->blimit)
);

/*

@@ -531,16 +541,21 @@ TRACE_EVENT(rcu_torture_read,

#else /* #ifdef CONFIG_RCU_TRACE */

#define trace_rcu_grace_period(rcuname, gpnum, gpevent) do { } while (0)
-#define trace_rcu_grace_period_init(rcuname, gpnum, level, grplo, grphi, qsmask) do { } while (0)
+#define trace_rcu_grace_period_init(rcuname, gpnum, level, grplo, grphi, \
+qsmask) do { } while (0)
#define trace_rcu_preempt_task(rcuname, pid, gpnum) do { } while (0)
#define trace_rcu_unlock_preempted_task(rcuname, gpnum, pid) do { } while (0)
-#define trace_rcu_quiescent_state_report(rcuname, gpnum, mask, qsmask, level, grplo, grphi, gp_tasks) do { } while (0)
+#define trace_rcu_quiescent_state_report(rcuname, gpnum, mask, qsmask, level, \
+grplo, grphi, gp_tasks) do { } \
+while (0)
#define trace_rcu_fqs(rcuname, gpnum, cpu, qsevent) do { } while (0)
#define trace_rcu_dyntick(polarity, oldnesting, newnesting) do { } while (0)
#define trace_rcu_prep_idle(reason) do { } while (0)
-#define trace_rcu_callback(rcuname, rhp, qlen) do { } while (0)
-#define trace_rcu_kfree_callback(rcuname, rhp, offset, qlen) do { } while (0)
-#define trace_rcu_batch_start(rcuname, qlen, blimit) do { } while (0)
+#define trace_rcu_callback(rcuname, rhp, qlen_lazy, qlen) do { } while (0)
+#define trace_rcu_kfree_callback(rcuname, rhp, offset, qlen_lazy, qlen) \
+do { } while (0)
+#define trace_rcu_batch_start(rcuname, qlen_lazy, qlen, blimit) \
+do { } while (0)
#define trace_rcu_invoke_callback(rcuname, rhp) do { } while (0)
#define trace_rcu_invoke_kfree_callback(rcuname, rhp, offset) do { } while (0)
#define trace_rcu_batch_end(rcuname, callbacks_invoked, cb, nr, iit, risk) \

@@ -438,15 +438,6 @@ config PREEMPT_RCU

This option enables preemptible-RCU code that is common between
the TREE_PREEMPT_RCU and TINY_PREEMPT_RCU implementations.

-config RCU_TRACE
-bool "Enable tracing for RCU"
-help
-This option provides tracing in RCU which presents stats
-in debugfs for debugging RCU implementation.
-
-Say Y here if you want to enable RCU tracing
-Say N if you are unsure.

config RCU_FANOUT
int "Tree-based hierarchical RCU fanout value"
range 2 64 if 64BIT
@@ -4176,7 +4176,13 @@ void lockdep_rcu_suspicious(const char *file, const int line, const char *s)

printk("-------------------------------\n");
printk("%s:%d %s!\n", file, line, s);
printk("\nother info that might help us debug this:\n\n");
-printk("\nrcu_scheduler_active = %d, debug_locks = %d\n", rcu_scheduler_active, debug_locks);
+printk("\n%srcu_scheduler_active = %d, debug_locks = %d\n",
+!rcu_lockdep_current_cpu_online()
+? "RCU used illegally from offline CPU!\n"
+: rcu_is_cpu_idle()
+? "RCU used illegally from idle CPU!\n"
+: "",
+rcu_scheduler_active, debug_locks);

/*
* If a CPU is in the RCU-free window in idle (ie: in the section
@@ -33,8 +33,27 @@

* Process-level increment to ->dynticks_nesting field. This allows for
* architectures that use half-interrupts and half-exceptions from
* process context.
*
* DYNTICK_TASK_NEST_MASK defines a field of width DYNTICK_TASK_NEST_WIDTH
* that counts the number of process-based reasons why RCU cannot
* consider the corresponding CPU to be idle, and DYNTICK_TASK_NEST_VALUE
* is the value used to increment or decrement this field.
*
* The rest of the bits could in principle be used to count interrupts,
* but this would mean that a negative-one value in the interrupt
* field could incorrectly zero out the DYNTICK_TASK_NEST_MASK field.
* We therefore provide a two-bit guard field defined by DYNTICK_TASK_MASK
* that is set to DYNTICK_TASK_FLAG upon initial exit from idle.
* The DYNTICK_TASK_EXIT_IDLE value is thus the combined value used upon
* initial exit from idle.
*/
-#define DYNTICK_TASK_NESTING (LLONG_MAX / 2 - 1)
+#define DYNTICK_TASK_NEST_WIDTH 7
+#define DYNTICK_TASK_NEST_VALUE ((LLONG_MAX >> DYNTICK_TASK_NEST_WIDTH) + 1)
+#define DYNTICK_TASK_NEST_MASK (LLONG_MAX - DYNTICK_TASK_NEST_VALUE + 1)
+#define DYNTICK_TASK_FLAG ((DYNTICK_TASK_NEST_VALUE / 8) * 2)
+#define DYNTICK_TASK_MASK ((DYNTICK_TASK_NEST_VALUE / 8) * 3)
+#define DYNTICK_TASK_EXIT_IDLE (DYNTICK_TASK_NEST_VALUE + \
+DYNTICK_TASK_FLAG)
/*
* debug_rcu_head_queue()/debug_rcu_head_unqueue() are used internally

@@ -50,7 +69,6 @@ extern struct debug_obj_descr rcuhead_debug_descr;

static inline void debug_rcu_head_queue(struct rcu_head *head)
{
-WARN_ON_ONCE((unsigned long)head & 0x3);
debug_object_activate(head, &rcuhead_debug_descr);
debug_object_active_state(head, &rcuhead_debug_descr,
STATE_RCU_HEAD_READY,

@@ -76,16 +94,18 @@ static inline void debug_rcu_head_unqueue(struct rcu_head *head)

extern void kfree(const void *);

-static inline void __rcu_reclaim(char *rn, struct rcu_head *head)
+static inline bool __rcu_reclaim(char *rn, struct rcu_head *head)
{
unsigned long offset = (unsigned long)head->func;

if (__is_kfree_rcu_offset(offset)) {
RCU_TRACE(trace_rcu_invoke_kfree_callback(rn, head, offset));
kfree((void *)head - offset);
+return 1;
} else {
RCU_TRACE(trace_rcu_invoke_callback(rn, head));
head->func(head);
+return 0;
}
}

@@ -88,6 +88,9 @@ EXPORT_SYMBOL_GPL(debug_lockdep_rcu_enabled);

* section.
*
* Check debug_lockdep_rcu_enabled() to prevent false positives during boot.
*
* Note that rcu_read_lock() is disallowed if the CPU is either idle or
* offline from an RCU perspective, so check for those as well.
*/
int rcu_read_lock_bh_held(void)
{

@@ -95,6 +98,8 @@ int rcu_read_lock_bh_held(void)

return 1;
if (rcu_is_cpu_idle())
return 0;
+if (!rcu_lockdep_current_cpu_online())
+return 0;
return in_softirq() || irqs_disabled();
}
EXPORT_SYMBOL_GPL(rcu_read_lock_bh_held);
@@ -53,7 +53,7 @@ static void __call_rcu(struct rcu_head *head,

#include "rcutiny_plugin.h"

-static long long rcu_dynticks_nesting = DYNTICK_TASK_NESTING;
+static long long rcu_dynticks_nesting = DYNTICK_TASK_EXIT_IDLE;

/* Common code for rcu_idle_enter() and rcu_irq_exit(), see kernel/rcutree.c. */
static void rcu_idle_enter_common(long long oldval)

@@ -88,10 +88,16 @@ void rcu_idle_enter(void)

local_irq_save(flags);
oldval = rcu_dynticks_nesting;
-rcu_dynticks_nesting = 0;
+WARN_ON_ONCE((rcu_dynticks_nesting & DYNTICK_TASK_NEST_MASK) == 0);
+if ((rcu_dynticks_nesting & DYNTICK_TASK_NEST_MASK) ==
+DYNTICK_TASK_NEST_VALUE)
+rcu_dynticks_nesting = 0;
+else
+rcu_dynticks_nesting -= DYNTICK_TASK_NEST_VALUE;
rcu_idle_enter_common(oldval);
local_irq_restore(flags);
}
EXPORT_SYMBOL_GPL(rcu_idle_enter);

/*
* Exit an interrupt handler towards idle.

@@ -140,11 +146,15 @@ void rcu_idle_exit(void)

local_irq_save(flags);
oldval = rcu_dynticks_nesting;
-WARN_ON_ONCE(oldval != 0);
-rcu_dynticks_nesting = DYNTICK_TASK_NESTING;
+WARN_ON_ONCE(rcu_dynticks_nesting < 0);
+if (rcu_dynticks_nesting & DYNTICK_TASK_NEST_MASK)
+rcu_dynticks_nesting += DYNTICK_TASK_NEST_VALUE;
+else
+rcu_dynticks_nesting = DYNTICK_TASK_EXIT_IDLE;
rcu_idle_exit_common(oldval);
local_irq_restore(flags);
}
EXPORT_SYMBOL_GPL(rcu_idle_exit);

/*
* Enter an interrupt handler, moving away from idle.

@@ -258,7 +268,7 @@ static void __rcu_process_callbacks(struct rcu_ctrlblk *rcp)

/* If no RCU callbacks ready to invoke, just return. */
if (&rcp->rcucblist == rcp->donetail) {
-RCU_TRACE(trace_rcu_batch_start(rcp->name, 0, -1));
+RCU_TRACE(trace_rcu_batch_start(rcp->name, 0, 0, -1));
RCU_TRACE(trace_rcu_batch_end(rcp->name, 0,
ACCESS_ONCE(rcp->rcucblist),
need_resched(),

@@ -269,7 +279,7 @@ static void __rcu_process_callbacks(struct rcu_ctrlblk *rcp)

/* Move the ready-to-invoke callbacks to a local list. */
local_irq_save(flags);
-RCU_TRACE(trace_rcu_batch_start(rcp->name, 0, -1));
+RCU_TRACE(trace_rcu_batch_start(rcp->name, 0, rcp->qlen, -1));
list = rcp->rcucblist;
rcp->rcucblist = *rcp->donetail;
*rcp->donetail = NULL;

@@ -319,6 +329,10 @@ static void rcu_process_callbacks(struct softirq_action *unused)

*/
void synchronize_sched(void)
{
+rcu_lockdep_assert(!lock_is_held(&rcu_bh_lock_map) &&
+!lock_is_held(&rcu_lock_map) &&
+!lock_is_held(&rcu_sched_lock_map),
+"Illegal synchronize_sched() in RCU read-side critical section");
cond_resched();
}
EXPORT_SYMBOL_GPL(synchronize_sched);
@@ -132,6 +132,7 @@ static struct rcu_preempt_ctrlblk rcu_preempt_ctrlblk = {

RCU_TRACE(.rcb.name = "rcu_preempt")
};

+static void rcu_read_unlock_special(struct task_struct *t);
static int rcu_preempted_readers_exp(void);
static void rcu_report_exp_done(void);

@@ -146,6 +147,16 @@ static int rcu_cpu_blocking_cur_gp(void)

/*
* Check for a running RCU reader. Because there is only one CPU,
* there can be but one running RCU reader at a time. ;-)
*
-* Returns zero if there are no running readers. Returns a positive
-* number if there is at least one reader within its RCU read-side
-* critical section. Returns a negative number if an outermost reader
-* is in the midst of exiting from its RCU read-side critical section
+* Returns zero if there are no running readers. Returns a positive
+* number if there is at least one reader within its RCU read-side
+* critical section. Returns a negative number if an outermost reader
+* is in the midst of exiting from its RCU read-side critical section.
*/
static int rcu_preempt_running_reader(void)
{

@@ -307,7 +318,6 @@ static int rcu_boost(void)

t = container_of(tb, struct task_struct, rcu_node_entry);
rt_mutex_init_proxy_locked(&mtx, t);
t->rcu_boost_mutex = &mtx;
-t->rcu_read_unlock_special |= RCU_READ_UNLOCK_BOOSTED;
raw_local_irq_restore(flags);
rt_mutex_lock(&mtx);
rt_mutex_unlock(&mtx); /* Keep lockdep happy. */

@@ -475,7 +485,7 @@ void rcu_preempt_note_context_switch(void)

unsigned long flags;

local_irq_save(flags); /* must exclude scheduler_tick(). */
-if (rcu_preempt_running_reader() &&
+if (rcu_preempt_running_reader() > 0 &&
(t->rcu_read_unlock_special & RCU_READ_UNLOCK_BLOCKED) == 0) {

/* Possibly blocking in an RCU read-side critical section. */

@@ -494,6 +504,13 @@ void rcu_preempt_note_context_switch(void)

list_add(&t->rcu_node_entry, &rcu_preempt_ctrlblk.blkd_tasks);
if (rcu_cpu_blocking_cur_gp())
rcu_preempt_ctrlblk.gp_tasks = &t->rcu_node_entry;
+} else if (rcu_preempt_running_reader() < 0 &&
+t->rcu_read_unlock_special) {
+/*
+* Complete exit from RCU read-side critical section on
+* behalf of preempted instance of __rcu_read_unlock().
+*/
+rcu_read_unlock_special(t);
}

/*

@@ -526,12 +543,15 @@ EXPORT_SYMBOL_GPL(__rcu_read_lock);

* notify RCU core processing or task having blocked during the RCU
* read-side critical section.
*/
-static void rcu_read_unlock_special(struct task_struct *t)
+static noinline void rcu_read_unlock_special(struct task_struct *t)
{
int empty;
int empty_exp;
unsigned long flags;
struct list_head *np;
+#ifdef CONFIG_RCU_BOOST
+struct rt_mutex *rbmp = NULL;
+#endif /* #ifdef CONFIG_RCU_BOOST */
int special;

/*

@@ -552,7 +572,7 @@ static void rcu_read_unlock_special(struct task_struct *t)

rcu_preempt_cpu_qs();

/* Hardware IRQ handlers cannot block. */
-if (in_irq()) {
+if (in_irq() || in_serving_softirq()) {
local_irq_restore(flags);
return;
}

@@ -597,10 +617,10 @@ static void rcu_read_unlock_special(struct task_struct *t)

}
#ifdef CONFIG_RCU_BOOST
/* Unboost self if was boosted. */
-if (special & RCU_READ_UNLOCK_BOOSTED) {
-t->rcu_read_unlock_special &= ~RCU_READ_UNLOCK_BOOSTED;
-rt_mutex_unlock(t->rcu_boost_mutex);
+if (t->rcu_boost_mutex != NULL) {
+rbmp = t->rcu_boost_mutex;
+t->rcu_boost_mutex = NULL;
+rt_mutex_unlock(rbmp);
}
#endif /* #ifdef CONFIG_RCU_BOOST */
local_irq_restore(flags);

@@ -618,13 +638,22 @@ void __rcu_read_unlock(void)

struct task_struct *t = current;

barrier(); /* needed if we ever invoke rcu_read_unlock in rcutiny.c */
---t->rcu_read_lock_nesting;
-barrier(); /* decrement before load of ->rcu_read_unlock_special */
-if (t->rcu_read_lock_nesting == 0 &&
-unlikely(ACCESS_ONCE(t->rcu_read_unlock_special)))
-rcu_read_unlock_special(t);
+if (t->rcu_read_lock_nesting != 1)
+--t->rcu_read_lock_nesting;
+else {
+t->rcu_read_lock_nesting = INT_MIN;
+barrier(); /* assign before ->rcu_read_unlock_special load */
+if (unlikely(ACCESS_ONCE(t->rcu_read_unlock_special)))
+rcu_read_unlock_special(t);
+barrier(); /* ->rcu_read_unlock_special load before assign */
+t->rcu_read_lock_nesting = 0;
+}
#ifdef CONFIG_PROVE_LOCKING
-WARN_ON_ONCE(t->rcu_read_lock_nesting < 0);
+{
+int rrln = ACCESS_ONCE(t->rcu_read_lock_nesting);
+
+WARN_ON_ONCE(rrln < 0 && rrln > INT_MIN / 2);
+}
#endif /* #ifdef CONFIG_PROVE_LOCKING */
}
EXPORT_SYMBOL_GPL(__rcu_read_unlock);
@@ -649,7 +678,7 @@ static void rcu_preempt_check_callbacks(void)

invoke_rcu_callbacks();
if (rcu_preempt_gp_in_progress() &&
rcu_cpu_blocking_cur_gp() &&
-rcu_preempt_running_reader())
+rcu_preempt_running_reader() > 0)
t->rcu_read_unlock_special |= RCU_READ_UNLOCK_NEED_QS;
}

@@ -706,6 +735,11 @@ EXPORT_SYMBOL_GPL(call_rcu);

*/
void synchronize_rcu(void)
{
+rcu_lockdep_assert(!lock_is_held(&rcu_bh_lock_map) &&
+!lock_is_held(&rcu_lock_map) &&
+!lock_is_held(&rcu_sched_lock_map),
+"Illegal synchronize_rcu() in RCU read-side critical section");

#ifdef CONFIG_DEBUG_LOCK_ALLOC
if (!rcu_scheduler_active)
return;

@@ -882,7 +916,8 @@ static void rcu_preempt_process_callbacks(void)

static void invoke_rcu_callbacks(void)
{
have_rcu_kthread_work = 1;
-wake_up(&rcu_kthread_wq);
+if (rcu_kthread_task != NULL)
+wake_up(&rcu_kthread_wq);
}

#ifdef CONFIG_RCU_TRACE

@@ -943,12 +978,16 @@ early_initcall(rcu_spawn_kthreads);

#else /* #ifdef CONFIG_RCU_BOOST */

+/* Hold off callback invocation until early_initcall() time. */
+static int rcu_scheduler_fully_active __read_mostly;
+
/*
* Start up softirq processing of callbacks.
*/
void invoke_rcu_callbacks(void)
{
-raise_softirq(RCU_SOFTIRQ);
+if (rcu_scheduler_fully_active)
+raise_softirq(RCU_SOFTIRQ);
}

#ifdef CONFIG_RCU_TRACE

@@ -963,10 +1002,14 @@ static bool rcu_is_callbacks_kthread(void)

#endif /* #ifdef CONFIG_RCU_TRACE */

-void rcu_init(void)
+static int __init rcu_scheduler_really_started(void)
{
+rcu_scheduler_fully_active = 1;
open_softirq(RCU_SOFTIRQ, rcu_process_callbacks);
+raise_softirq(RCU_SOFTIRQ); /* Invoke any callbacks from early boot. */
+return 0;
}
+early_initcall(rcu_scheduler_really_started);

#endif /* #else #ifdef CONFIG_RCU_BOOST */
@@ -65,7 +65,10 @@ static int fqs_duration; /* Duration of bursts (us), 0 to disable. */

static int fqs_holdoff; /* Hold time within burst (us). */
static int fqs_stutter = 3; /* Wait time between bursts (s). */
static int onoff_interval; /* Wait time between CPU hotplugs, 0=disable. */
+static int onoff_holdoff; /* Seconds after boot before CPU hotplugs. */
static int shutdown_secs; /* Shutdown time (s). <=0 for no shutdown. */
+static int stall_cpu; /* CPU-stall duration (s). 0 for no stall. */
+static int stall_cpu_holdoff = 10; /* Time to wait until stall (s). */
static int test_boost = 1; /* Test RCU prio boost: 0=no, 1=maybe, 2=yes. */
static int test_boost_interval = 7; /* Interval between boost tests, seconds. */
static int test_boost_duration = 4; /* Duration of each boost test, seconds. */

@@ -95,8 +98,14 @@ module_param(fqs_stutter, int, 0444);

MODULE_PARM_DESC(fqs_stutter, "Wait time between fqs bursts (s)");
module_param(onoff_interval, int, 0444);
MODULE_PARM_DESC(onoff_interval, "Time between CPU hotplugs (s), 0=disable");
+module_param(onoff_holdoff, int, 0444);
+MODULE_PARM_DESC(onoff_holdoff, "Time after boot before CPU hotplugs (s)");
module_param(shutdown_secs, int, 0444);
MODULE_PARM_DESC(shutdown_secs, "Shutdown time (s), zero to disable.");
+module_param(stall_cpu, int, 0444);
+MODULE_PARM_DESC(stall_cpu, "Stall duration (s), zero to disable.");
+module_param(stall_cpu_holdoff, int, 0444);
+MODULE_PARM_DESC(stall_cpu_holdoff, "Time to wait before starting stall (s).");
module_param(test_boost, int, 0444);
MODULE_PARM_DESC(test_boost, "Test RCU prio boost: 0=no, 1=maybe, 2=yes.");
module_param(test_boost_interval, int, 0444);

@@ -129,6 +138,7 @@ static struct task_struct *shutdown_task;

#ifdef CONFIG_HOTPLUG_CPU
static struct task_struct *onoff_task;
#endif /* #ifdef CONFIG_HOTPLUG_CPU */
+static struct task_struct *stall_task;

#define RCU_TORTURE_PIPE_LEN 10

@@ -990,12 +1000,12 @@ static void rcu_torture_timer(unsigned long unused)

rcu_read_lock_bh_held() ||
rcu_read_lock_sched_held() ||
srcu_read_lock_held(&srcu_ctl));
-do_trace_rcu_torture_read(cur_ops->name, &p->rtort_rcu);
if (p == NULL) {
/* Leave because rcu_torture_writer is not yet underway */
cur_ops->readunlock(idx);
return;
}
+do_trace_rcu_torture_read(cur_ops->name, &p->rtort_rcu);
if (p->rtort_mbtest == 0)
atomic_inc(&n_rcu_torture_mberror);
spin_lock(&rand_lock);

@@ -1053,13 +1063,13 @@ rcu_torture_reader(void *arg)

rcu_read_lock_bh_held() ||
rcu_read_lock_sched_held() ||
srcu_read_lock_held(&srcu_ctl));
-do_trace_rcu_torture_read(cur_ops->name, &p->rtort_rcu);
if (p == NULL) {
/* Wait for rcu_torture_writer to get underway */
cur_ops->readunlock(idx);
schedule_timeout_interruptible(HZ);
continue;
}
+do_trace_rcu_torture_read(cur_ops->name, &p->rtort_rcu);
if (p->rtort_mbtest == 0)
atomic_inc(&n_rcu_torture_mberror);
cur_ops->read_delay(&rand);

@@ -1300,13 +1310,13 @@ rcu_torture_print_module_parms(struct rcu_torture_ops *cur_ops, char *tag)

"fqs_duration=%d fqs_holdoff=%d fqs_stutter=%d "
"test_boost=%d/%d test_boost_interval=%d "
"test_boost_duration=%d shutdown_secs=%d "
-"onoff_interval=%d\n",
+"onoff_interval=%d onoff_holdoff=%d\n",
torture_type, tag, nrealreaders, nfakewriters,
stat_interval, verbose, test_no_idle_hz, shuffle_interval,
stutter, irqreader, fqs_duration, fqs_holdoff, fqs_stutter,
test_boost, cur_ops->can_boost,
test_boost_interval, test_boost_duration, shutdown_secs,
-onoff_interval);
+onoff_interval, onoff_holdoff);
}
static struct notifier_block rcutorture_shutdown_nb = {

@@ -1410,6 +1420,11 @@ rcu_torture_onoff(void *arg)

for_each_online_cpu(cpu)
maxcpu = cpu;
WARN_ON(maxcpu < 0);
+if (onoff_holdoff > 0) {
+VERBOSE_PRINTK_STRING("rcu_torture_onoff begin holdoff");
+schedule_timeout_interruptible(onoff_holdoff * HZ);
+VERBOSE_PRINTK_STRING("rcu_torture_onoff end holdoff");
+}
while (!kthread_should_stop()) {
cpu = (rcu_random(&rand) >> 4) % (maxcpu + 1);
if (cpu_online(cpu) && cpu_is_hotpluggable(cpu)) {

@@ -1450,12 +1465,15 @@ rcu_torture_onoff(void *arg)

static int __cpuinit
rcu_torture_onoff_init(void)
{
+int ret;
+
if (onoff_interval <= 0)
return 0;
onoff_task = kthread_run(rcu_torture_onoff, NULL, "rcu_torture_onoff");
if (IS_ERR(onoff_task)) {
+ret = PTR_ERR(onoff_task);
+onoff_task = NULL;
-return PTR_ERR(onoff_task);
+return ret;
}
return 0;
}
@@ -1481,6 +1499,63 @@ static void rcu_torture_onoff_cleanup(void)

#endif /* #else #ifdef CONFIG_HOTPLUG_CPU */

/*
* CPU-stall kthread. It waits as specified by stall_cpu_holdoff, then
* induces a CPU stall for the time specified by stall_cpu.
*/
static int __cpuinit rcu_torture_stall(void *args)
{
unsigned long stop_at;

VERBOSE_PRINTK_STRING("rcu_torture_stall task started");
if (stall_cpu_holdoff > 0) {
VERBOSE_PRINTK_STRING("rcu_torture_stall begin holdoff");
schedule_timeout_interruptible(stall_cpu_holdoff * HZ);
VERBOSE_PRINTK_STRING("rcu_torture_stall end holdoff");
}
if (!kthread_should_stop()) {
stop_at = get_seconds() + stall_cpu;
/* RCU CPU stall is expected behavior in following code. */
printk(KERN_ALERT "rcu_torture_stall start.\n");
rcu_read_lock();
preempt_disable();
while (ULONG_CMP_LT(get_seconds(), stop_at))
continue; /* Induce RCU CPU stall warning. */
preempt_enable();
rcu_read_unlock();
printk(KERN_ALERT "rcu_torture_stall end.\n");
}
rcutorture_shutdown_absorb("rcu_torture_stall");
while (!kthread_should_stop())
schedule_timeout_interruptible(10 * HZ);
return 0;
}

/* Spawn CPU-stall kthread, if stall_cpu specified. */
static int __init rcu_torture_stall_init(void)
{
int ret;

if (stall_cpu <= 0)
return 0;
stall_task = kthread_run(rcu_torture_stall, NULL, "rcu_torture_stall");
if (IS_ERR(stall_task)) {
ret = PTR_ERR(stall_task);
stall_task = NULL;
return ret;
}
return 0;
}

/* Clean up after the CPU-stall kthread, if one was spawned. */
static void rcu_torture_stall_cleanup(void)
{
if (stall_task == NULL)
return;
VERBOSE_PRINTK_STRING("Stopping rcu_torture_stall_task.");
kthread_stop(stall_task);
}

static int rcutorture_cpu_notify(struct notifier_block *self,
unsigned long action, void *hcpu)
{

@@ -1523,6 +1598,7 @@ rcu_torture_cleanup(void)

fullstop = FULLSTOP_RMMOD;
mutex_unlock(&fullstop_mutex);
unregister_reboot_notifier(&rcutorture_shutdown_nb);
rcu_torture_stall_cleanup();
if (stutter_task) {
VERBOSE_PRINTK_STRING("Stopping rcu_torture_stutter task");
kthread_stop(stutter_task);

@@ -1602,6 +1678,10 @@ rcu_torture_cleanup(void)

cur_ops->cleanup();
if (atomic_read(&n_rcu_torture_error))
rcu_torture_print_module_parms(cur_ops, "End of test: FAILURE");
else if (n_online_successes != n_online_attempts ||
n_offline_successes != n_offline_attempts)
rcu_torture_print_module_parms(cur_ops,
"End of test: RCU_HOTPLUG");
else
rcu_torture_print_module_parms(cur_ops, "End of test: SUCCESS");
}

@@ -1819,6 +1899,7 @@ rcu_torture_init(void)

}
rcu_torture_onoff_init();
register_reboot_notifier(&rcutorture_shutdown_nb);
rcu_torture_stall_init();
rcutorture_record_test_transition();
mutex_unlock(&fullstop_mutex);
return 0;
Some files were not shown because too many files have changed in this diff.