io_uring: fix __io_iopoll_check deadlock in io_sq_thread

Since commit a3a0e43 ("io_uring: don't enter poll loop if we have
CQEs pending"), we don't enter the poll loop if we already have events
pending. When SETUP_IOPOLL and SETUP_SQPOLL are both enabled, if the app
has been terminated and never reaps the events already sitting in the
CQ ring while some reqs remain on poll_list, io_sq_thread enters
__io_iopoll_check(), finds the pending events, and returns immediately,
so this loop never gets a chance to exit.
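
For illustration, a condensed sketch of the pre-fix flow under
IOPOLL|SQPOLL (not the literal kernel code: surrounding logic and helper
signatures are simplified, and the early-CQE check shown is the one
introduced by commit a3a0e43):

        /* io_sq_thread(), simplified: while reqs are inflight, keep polling */
        mutex_lock(&ctx->uring_lock);
        if (!list_empty(&ctx->poll_list))
                __io_iopoll_check(ctx, &nr_events, 0);
        else
                inflight = 0;
        mutex_unlock(&ctx->uring_lock);

        /* __io_iopoll_check(), pre-fix loop, simplified */
        do {
                if (io_cqring_events(ctx))  /* CQEs the dead app never reaps */
                        break;              /* bail out before reaping poll_list */
                ret = io_iopoll_getevents(ctx, nr_events, tmin);
                if (ret <= 0)
                        break;
        } while (min && !*nr_events && !need_resched());

Because the early break fires on every pass, nothing is ever reaped off
poll_list, inflight never drops to zero, and io_sq_thread keeps retaking
the lock and repeating the same check forever.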

I have seen this issue in fio stress tests. To fix it, let io_sq_thread
call io_iopoll_getevents() with 'min' set to zero, and remove
__io_iopoll_check().
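
Why 'min' == 0 is enough: io_iopoll_getevents() reaps whatever has
already completed even when CQEs are still pending in the CQ ring, so
the polled reqs come off poll_list and inflight can eventually reach
zero. Roughly (a simplified sketch of that helper, not a verbatim copy):

        /* io_iopoll_getevents(), simplified: reap completed polled reqs;
         * with min == 0 a single pass is enough to make progress */
        while (!list_empty(&ctx->poll_list)) {
                ret = io_do_iopoll(ctx, nr_events, min);
                if (ret < 0)
                        return ret;
                if (!min || *nr_events >= min)
                        return 0;
        }
        return 1;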

Fixes: a3a0e43 ("io_uring: don't enter poll loop if we have CQEs pending")
Signed-off-by: Xiaoguang Wang <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
Xiaoguang Wang authored and axboe committed Feb 22, 2020
1 parent 7143b5a commit c7849be
Showing 1 changed file with 9 additions and 18 deletions.
fs/io_uring.c: 27 changes (9 additions & 18 deletions)

--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -1672,11 +1672,17 @@ static void io_iopoll_reap_events(struct io_ring_ctx *ctx)
         mutex_unlock(&ctx->uring_lock);
 }
 
-static int __io_iopoll_check(struct io_ring_ctx *ctx, unsigned *nr_events,
-                             long min)
+static int io_iopoll_check(struct io_ring_ctx *ctx, unsigned *nr_events,
+                           long min)
 {
         int iters = 0, ret = 0;
 
+        /*
+         * We disallow the app entering submit/complete with polling, but we
+         * still need to lock the ring to prevent racing with polled issue
+         * that got punted to a workqueue.
+         */
+        mutex_lock(&ctx->uring_lock);
         do {
                 int tmin = 0;
 
@@ -1712,21 +1718,6 @@ static int __io_iopoll_check(struct io_ring_ctx *ctx, unsigned *nr_events,
                 ret = 0;
         } while (min && !*nr_events && !need_resched());
 
-        return ret;
-}
-
-static int io_iopoll_check(struct io_ring_ctx *ctx, unsigned *nr_events,
-                           long min)
-{
-        int ret;
-
-        /*
-         * We disallow the app entering submit/complete with polling, but we
-         * still need to lock the ring to prevent racing with polled issue
-         * that got punted to a workqueue.
-         */
-        mutex_lock(&ctx->uring_lock);
-        ret = __io_iopoll_check(ctx, nr_events, min);
         mutex_unlock(&ctx->uring_lock);
         return ret;
 }
@@ -5118,7 +5109,7 @@ static int io_sq_thread(void *data)
                                  */
                                 mutex_lock(&ctx->uring_lock);
                                 if (!list_empty(&ctx->poll_list))
-                                        __io_iopoll_check(ctx, &nr_events, 0);
+                                        io_iopoll_getevents(ctx, &nr_events, 0);
                                 else
                                         inflight = 0;
                                 mutex_unlock(&ctx->uring_lock);
