oom-kill: remove boost_dying_task_prio()
This is an almost-revert of commit 93b43fa ("oom: give the dying task a
higher priority").

That commit dramatically improved oom killer behavior when a fork bomb
occurs.  But I've found that it has a nasty corner case: the cpu cgroup
has a strange default RT runtime of 0.  That means a process inside a
cpu cgroup that is promoted to an RT scheduling class never gets to run
at all.

If an admin puts a !RT process into a cpu cgroup whose rt_runtime is
left at 0, it usually runs fine, because a !RT task is not affected by
the rt_runtime knob.  But if the task is promoted to RT, either by an
explicit setscheduler() syscall or by the OOM killer, it cannot run at
all.  In short, the oom killer does not work at all when admins use the
cpu cgroup and never touch the rt_runtime knob.
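
To make the corner case concrete, below is a minimal userspace sketch
(not part of this patch) of the sequence described above.  It assumes a
cgroup-v1 cpu controller mounted at /sys/fs/cgroup/cpu and a pre-created
group named "demo" whose cpu.rt_runtime_us has been left at its default
of 0; the paths and group name are hypothetical.

/*
 * Minimal sketch of the scenario above (hypothetical paths, cgroup v1).
 * Run as root, with the "demo" cpu cgroup already created and its
 * cpu.rt_runtime_us left at 0.
 */
#include <errno.h>
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        struct sched_param param = { .sched_priority = 1 };
        char pid_str[16];
        int fd;

        /* Step 1: move this (!RT) task into the cpu cgroup.  It keeps
         * running normally, because CFS tasks ignore the rt_runtime knob. */
        fd = open("/sys/fs/cgroup/cpu/demo/tasks", O_WRONLY);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        snprintf(pid_str, sizeof(pid_str), "%d\n", getpid());
        if (write(fd, pid_str, strlen(pid_str)) < 0)
                perror("write");
        close(fd);

        /* Step 2: promote the task to SCHED_FIFO, as boost_dying_task_prio()
         * did from the OOM path.  With rt_runtime_us == 0 the group has no
         * RT bandwidth, so the promotion is either rejected or, if it goes
         * through, the task gets no CPU time at all -- the hang described
         * in this changelog. */
        if (sched_setscheduler(0, SCHED_FIFO, &param) < 0)
                fprintf(stderr, "sched_setscheduler: %s\n", strerror(errno));

        return 0;
}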

Eventually, the kernel may hang when an oom kill occurs.  The original
author Luis and I agreed to disable this logic.

Signed-off-by: KOSAKI Motohiro <[email protected]>
Acked-by: Luis Claudio R. Goncalves <[email protected]>
Acked-by: KAMEZAWA Hiroyuki <[email protected]>
Reviewed-by: Minchan Kim <[email protected]>
Acked-by: David Rientjes <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
kosaki authored and torvalds committed Apr 14, 2011
1 parent 929bea7 commit 341aea2
Showing 1 changed file with 0 additions and 28 deletions.
28 changes: 0 additions & 28 deletions mm/oom_kill.c
@@ -83,24 +83,6 @@ static bool has_intersects_mems_allowed(struct task_struct *tsk,
 }
 #endif /* CONFIG_NUMA */
 
-/*
- * If this is a system OOM (not a memcg OOM) and the task selected to be
- * killed is not already running at high (RT) priorities, speed up the
- * recovery by boosting the dying task to the lowest FIFO priority.
- * That helps with the recovery and avoids interfering with RT tasks.
- */
-static void boost_dying_task_prio(struct task_struct *p,
-                                  struct mem_cgroup *mem)
-{
-        struct sched_param param = { .sched_priority = 1 };
-
-        if (mem)
-                return;
-
-        if (!rt_task(p))
-                sched_setscheduler_nocheck(p, SCHED_FIFO, &param);
-}
-
 /*
  * The process p may have detached its own ->mm while exiting or through
  * use_mm(), but one or more of its subthreads may still have a valid
@@ -452,13 +434,6 @@ static int oom_kill_task(struct task_struct *p, struct mem_cgroup *mem)
         set_tsk_thread_flag(p, TIF_MEMDIE);
         force_sig(SIGKILL, p);
 
-        /*
-         * We give our sacrificial lamb high priority and access to
-         * all the memory it needs. That way it should be able to
-         * exit() and clear out its resources quickly...
-         */
-        boost_dying_task_prio(p, mem);
-
         return 0;
 }
 #undef K
@@ -482,7 +457,6 @@ static int oom_kill_process(struct task_struct *p, gfp_t gfp_mask, int order,
         */
        if (p->flags & PF_EXITING) {
                set_tsk_thread_flag(p, TIF_MEMDIE);
-               boost_dying_task_prio(p, mem);
                return 0;
        }
 
@@ -556,7 +530,6 @@ void mem_cgroup_out_of_memory(struct mem_cgroup *mem, gfp_t gfp_mask)
         */
        if (fatal_signal_pending(current)) {
                set_thread_flag(TIF_MEMDIE);
-               boost_dying_task_prio(current, NULL);
                return;
        }
 
@@ -712,7 +685,6 @@ void out_of_memory(struct zonelist *zonelist, gfp_t gfp_mask,
         */
        if (fatal_signal_pending(current)) {
                set_thread_flag(TIF_MEMDIE);
-               boost_dying_task_prio(current, NULL);
                return;
        }
 