tracing/sched: Make preempt_schedule() notrace

The function tracer code uses ftrace_preempt_disable() to disable
preemption instead of the normal preempt_disable(). But there is a slight
race condition that may cause it to lose a preemption check.
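
For context, ftrace_preempt_disable()/ftrace_preempt_enable() lived in
kernel/trace/trace.h at this time and looked roughly like the sketch below
(reconstructed for illustration; treat the exact bodies as approximate).
They sample need_resched() before disabling preemption with the notrace
variant, and the enable side skips the reschedule check when the flag was
already seen set; the window around that sample is the slight race
mentioned above.

	/* Approximate sketch of the helpers, for illustration only */
	static inline int ftrace_preempt_disable(void)
	{
		int resched;

		/* remember whether a reschedule was already pending */
		resched = need_resched();
		preempt_disable_notrace();

		return resched;
	}

	static inline void ftrace_preempt_enable(int resched)
	{
		if (resched)
			/* reschedule already pending: skip the resched check */
			preempt_enable_no_resched_notrace();
		else
			preempt_enable_notrace();
	}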

This helper was introduced to keep the function tracer from recursing on
itself: with a normal preempt_disable(), the matching preempt_enable()
could end up calling the function tracer again, causing infinite recursion.

The recursion was assumed to happen only via a direct call into schedule(),
but this is incorrect. It is actually caused by preempt_schedule(), which
is called by preempt_enable(): calling preempt_enable() while NEED_RESCHED
is set invokes preempt_schedule(), which in turn calls the function tracer
again.
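
To make that path concrete, this is roughly what preempt_enable() expanded
to in include/linux/preempt.h of that era (simplified and reconstructed
for illustration; exact macro bodies may differ):

	/* Simplified sketch of preempt_enable() under CONFIG_PREEMPT */
	#define preempt_enable() \
	do { \
		preempt_enable_no_resched(); \
		barrier(); \
		preempt_check_resched(); \
	} while (0)

	#define preempt_check_resched() \
	do { \
		if (unlikely(test_thread_flag(TIF_NEED_RESCHED))) \
			preempt_schedule(); /* traced before this patch */ \
	} while (0)

So any preempt_enable() reached inside the function tracer while
NEED_RESCHED is set lands in preempt_schedule(), and if preempt_schedule()
itself is traced, the tracer is entered again.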

Making preempt_schedule() notrace, and switching it to the notrace variant
of add_preempt_count(), prevents this infinite recursion. This is because
the add_preempt_count() of PREEMPT_ACTIVE stops the preempt_enable() in
the function tracer from calling preempt_schedule() again.

The sub_preempt_count() is also made notrace just to keep it
symmetric.
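
For reference, notrace here is the kernel's standard no-instrumentation
attribute from include/linux/compiler.h; with the -pg build used by the
function tracer it keeps gcc from emitting the mcount call for the
annotated function:

	/* include/linux/compiler.h (as of this era) */
	#define notrace __attribute__((no_instrument_function))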

Signed-off-by: Steven Rostedt <[email protected]>
Steven Rostedt authored and rostedt committed Jun 3, 2010
1 parent 9dda696 commit d1f74e2
1 changed file: kernel/sched.c (3 additions & 3 deletions)
@@ -3730,7 +3730,7 @@ int mutex_spin_on_owner(struct mutex *lock, struct thread_info *owner)
  * off of preempt_enable. Kernel preemptions off return from interrupt
  * occur there and call schedule directly.
  */
-asmlinkage void __sched preempt_schedule(void)
+asmlinkage void __sched notrace preempt_schedule(void)
 {
         struct thread_info *ti = current_thread_info();
 
@@ -3742,9 +3742,9 @@ asmlinkage void __sched preempt_schedule(void)
                 return;
 
         do {
-                add_preempt_count(PREEMPT_ACTIVE);
+                add_preempt_count_notrace(PREEMPT_ACTIVE);
                 schedule();
-                sub_preempt_count(PREEMPT_ACTIVE);
+                sub_preempt_count_notrace(PREEMPT_ACTIVE);
 
                 /*
                  * Check again in case we missed a preemption opportunity
