epoll: clean up ep_modify
ep_modify() doesn't need to set event.data from within the ep->lock
spinlock as the comment suggests.  The only place event.data is used is
ep_send_events_proc(), and this is protected by ep->mtx instead of
ep->lock.  Also update the comment for mutex_lock() at the top of
ep_scan_ready_list(), which mentions epoll_ctl(EPOLL_CTL_DEL) but not
epoll_ctl(EPOLL_CTL_MOD).
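
For context, the event.data in question is the cookie that userspace supplies with epoll_ctl(EPOLL_CTL_MOD); ep_modify() is the kernel-side handler for that call. A minimal, hypothetical userspace sketch of the call that delivers the new events/data pair (epfd, sockfd and cookie are illustrative names, not part of this patch):

/* Illustrative only: re-arm an already-registered descriptor. */
#include <stdio.h>
#include <stdint.h>
#include <sys/epoll.h>

static int rearm(int epfd, int sockfd, uint64_t cookie)
{
	struct epoll_event ev;

	ev.events = EPOLLIN | EPOLLOUT;	/* new interest mask -> epi->event.events */
	ev.data.u64 = cookie;		/* new cookie        -> epi->event.data */

	/* EPOLL_CTL_MOD is routed to ep_modify() in fs/eventpoll.c. */
	if (epoll_ctl(epfd, EPOLL_CTL_MOD, sockfd, &ev) == -1) {
		perror("epoll_ctl(EPOLL_CTL_MOD)");
		return -1;
	}
	return 0;
}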

ep_modify() can also use spin_lock_irq() instead of spin_lock_irqsave().
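
The irqsave-to-irq change works because ep_modify() is reached only from the epoll_ctl() syscall, i.e. process context with interrupts enabled, so there is no caller interrupt state to preserve. A schematic of the two idioms (not verbatim kernel code; the "..." bodies stand for the ready-list manipulation):

/* Before: safe even if the caller had already disabled interrupts,
 * at the cost of carrying a flags word. */
unsigned long flags;

spin_lock_irqsave(&ep->lock, flags);
/* ... manipulate ep->rdllist ... */
spin_unlock_irqrestore(&ep->lock, flags);

/* After: interrupts are known to be enabled on entry, so disabling and
 * re-enabling them unconditionally is correct and slightly cheaper. */
spin_lock_irq(&ep->lock);
/* ... manipulate ep->rdllist ... */
spin_unlock_irq(&ep->lock);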

Signed-off-by: Tony Battersby <[email protected]>
Acked-by: Davide Libenzi <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
abattersby authored and torvalds committed Apr 1, 2009
1 parent d1bc90d commit e057e15
Showing 1 changed file with 7 additions and 12 deletions.
fs/eventpoll.c
@@ -435,7 +435,7 @@ static int ep_scan_ready_list(struct eventpoll *ep,

 	/*
 	 * We need to lock this because we could be hit by
-	 * eventpoll_release_file() and epoll_ctl(EPOLL_CTL_DEL).
+	 * eventpoll_release_file() and epoll_ctl().
 	 */
 	mutex_lock(&ep->mtx);

@@ -972,32 +972,27 @@ static int ep_modify(struct eventpoll *ep, struct epitem *epi, struct epoll_event *event)
 {
 	int pwake = 0;
 	unsigned int revents;
-	unsigned long flags;

 	/*
-	 * Set the new event interest mask before calling f_op->poll(), otherwise
-	 * a potential race might occur. In fact if we do this operation inside
-	 * the lock, an event might happen between the f_op->poll() call and the
-	 * new event set registering.
+	 * Set the new event interest mask before calling f_op->poll();
+	 * otherwise we might miss an event that happens between the
+	 * f_op->poll() call and the new event set registering.
 	 */
 	epi->event.events = event->events;
+	epi->event.data = event->data; /* protected by mtx */

 	/*
 	 * Get current event bits. We can safely use the file* here because
 	 * its usage count has been increased by the caller of this function.
 	 */
 	revents = epi->ffd.file->f_op->poll(epi->ffd.file, NULL);

-	spin_lock_irqsave(&ep->lock, flags);
-
-	/* Copy the data member from inside the lock */
-	epi->event.data = event->data;
-
 	/*
 	 * If the item is "hot" and it is not registered inside the ready
 	 * list, push it inside.
 	 */
 	if (revents & event->events) {
+		spin_lock_irq(&ep->lock);
 		if (!ep_is_linked(&epi->rdllink)) {
 			list_add_tail(&epi->rdllink, &ep->rdllist);

@@ -1007,8 +1002,8 @@ static int ep_modify(struct eventpoll *ep, struct epitem *epi, struct epoll_event *event)
 			if (waitqueue_active(&ep->poll_wait))
 				pwake++;
 		}
+		spin_unlock_irq(&ep->lock);
 	}
-	spin_unlock_irqrestore(&ep->lock, flags);

 	/* We have to call this outside the lock */
 	if (pwake)
