kernel/include/linux
Peter Zijlstra ffda12a17a sched: optimize group load balancer
I noticed that tg_shares_up() unconditionally takes rq-locks for all cpus
in the sched_domain. This hurts.

We need the rq-locks whenever we change the weight of the per-cpu group sched
entities. To alleviate this a little, only change the weight when the new
weight is at least shares_thresh away from the old value.

This avoids the rq-lock for the top level entries, since those will never
be re-weighted, and fuzzes the lower level entries a little to gain performance
in semi-stable situations.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-10-20 14:05:02 +02:00