rcu: apply RCU protection to wake_affine()
The task_group() function returns a pointer that must be protected
by one of RCU, the ->alloc_lock, or the cgroup lock (see the
rcu_dereference_check() in task_subsys_state(), which is invoked by
task_group()).  The wake_affine() function currently holds none of these,
which means that a concurrent update would be within its rights to free
the structure returned by task_group().  Because wake_affine() uses this
structure only to compute load-balancing heuristics, there is no reason
to acquire either of the two locks.

Therefore, this commit introduces an RCU read-side critical section that
starts before the first call to task_group() and ends after the last use
of the "tg" pointer returned from task_group().  Thanks to Li Zefan for
pointing out the need to extend the RCU read-side critical section from
that proposed by the original patch.
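
For illustration, a minimal sketch of the access pattern this commit
establishes; the surrounding load-balancing arithmetic is elided, and the
snippet merely mirrors the variable names used in wake_affine():

	struct task_group *tg;
	unsigned long weight;

	rcu_read_lock();	/* task_group()'s result cannot be freed here */
	tg = task_group(current);
	weight = current->se.load.weight;
	/* ... read-only load-balancing heuristics using tg ... */
	rcu_read_unlock();	/* tg must not be used past this point */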

Signed-off-by: Daniel J Blueman <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
Daniel J Blueman authored and paulmck committed Jun 23, 2010
1 parent 7e27d6e commit f3b577d
Showing 1 changed file with 2 additions and 0 deletions.
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1240,6 +1240,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 	 * effect of the currently running task from the load
 	 * of the current CPU:
 	 */
+	rcu_read_lock();
 	if (sync) {
 		tg = task_group(current);
 		weight = current->se.load.weight;
@@ -1275,6 +1276,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 		balanced = this_eff_load <= prev_eff_load;
 	} else
 		balanced = true;
+	rcu_read_unlock();
 
 	/*
 	 * If the currently running task will sleep within
