Fix synchronize_irq races with IRQ handler

As it is, some callers of synchronize_irq() rely on memory barriers
to provide synchronisation against the IRQ handlers.  For example,
the tg3 driver does

	tp->irq_sync = 1;
	smp_mb();
	synchronize_irq();

and then in the IRQ handler:

	if (!tp->irq_sync)
		netif_rx_schedule(dev, &tp->napi);

Unfortunately memory barriers only work well when they come in
pairs.  Because we don't actually have memory barriers on the
IRQ path, the memory barrier before the synchronize_irq() doesn't
actually protect us.

In particular, synchronize_irq() may return first, with the result
of netif_rx_schedule() becoming visible only afterwards.
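
To see why, recall the pairing rule with a generic sketch (purely
illustrative; data and flag are hypothetical variables, not fields
from tg3 or the IRQ core):

	/* CPU 0 (writer) */
	data = 1;
	smp_mb();	/* orders the data store before the flag store */
	flag = 1;

	/* CPU 1 (reader) */
	if (flag) {
		smp_mb();	/* pairs with the writer's barrier */
		/* only now is data == 1 guaranteed visible */
	}

The IRQ path contains no equivalent of the reader-side barrier, so
the smp_mb() in tg3 has nothing to pair with.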

This patch (mostly written by Linus) fixes this by using spin
locks instead of memory barriers on the synchronize_irq() path.
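
The lock works where the lone barrier did not because the hardirq
path already manipulates IRQ_INPROGRESS under desc->lock.  Roughly,
the generic IRQ code of this era does something like the following
(a simplified sketch; spurious-IRQ and flag handling omitted):

	spin_lock(&desc->lock);
	desc->status |= IRQ_INPROGRESS;
	spin_unlock(&desc->lock);

	handle_IRQ_event(irq, action);	/* runs the driver's handler */

	spin_lock(&desc->lock);
	desc->status &= ~IRQ_INPROGRESS;
	spin_unlock(&desc->lock);

The unlock after clearing IRQ_INPROGRESS is a release, and the
spin_lock_irqsave() added to synchronize_irq() below is the matching
acquire, so the handler's stores are visible by the time
synchronize_irq() sees the flag clear under the lock.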

Signed-off-by: Herbert Xu <[email protected]>
Acked-by: Benjamin Herrenschmidt <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
herbertx authored and Linus Torvalds committed Oct 23, 2007
1 parent 48d2268 commit a98ce5c
Showing 1 changed file with 18 additions and 2 deletions.
kernel/irq/manage.c
@@ -29,12 +29,28 @@
 void synchronize_irq(unsigned int irq)
 {
 	struct irq_desc *desc = irq_desc + irq;
+	unsigned int status;
 
 	if (irq >= NR_IRQS)
 		return;
 
-	while (desc->status & IRQ_INPROGRESS)
-		cpu_relax();
+	do {
+		unsigned long flags;
+
+		/*
+		 * Wait until we're out of the critical section. This might
+		 * give the wrong answer due to the lack of memory barriers.
+		 */
+		while (desc->status & IRQ_INPROGRESS)
+			cpu_relax();
+
+		/* Ok, that indicated we're done: double-check carefully. */
+		spin_lock_irqsave(&desc->lock, flags);
+		status = desc->status;
+		spin_unlock_irqrestore(&desc->lock, flags);
+
+		/* Oops, that failed? */
+	} while (status & IRQ_INPROGRESS);
 }
 EXPORT_SYMBOL(synchronize_irq);
 
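With this change the tg3-style quiesce pattern quoted above becomes
safe.  A hypothetical caller (tp and irq_sync follow the tg3 naming
used earlier; the irq argument shown is illustrative):

	tp->irq_sync = 1;
	smp_mb();
	synchronize_irq(tp->pdev->irq);

	/*
	 * Any handler instance that might still have seen irq_sync == 0
	 * ran with IRQ_INPROGRESS set and dropped desc->lock afterwards,
	 * so the locked re-check in synchronize_irq() orders all of that
	 * handler's stores before this point.
	 */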
