kernel-fxtec-pro1x/kernel/irq/migration.c
Eric W. Biederman e7b946e98a [PATCH] genirq: irq: add move_masked_irq
Currently move_native_irq disables and re-enables the irq we are migrating, to
ensure we don't take that irq while we are actually doing the migration
operation.  Disabling the irq needs to happen, but sometimes doing the work in
move_native_irq is too late.

On x86 with ioapics the irq move sequence needs to be:
edge_triggered:
  mask irq.
  move irq.
  unmask irq.
  ack irq.
level_triggered:
  mask irq.
  ack irq.
  move irq.
  unmask irq.

We can easily perform the edge triggered sequence with the current definition
of move_native_irq.  However, the level triggered case does not map well.  For
that I have added move_masked_irq, to allow me to keep the irq disabled around
both the ack and the move.
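
In terms of the generic irq_chip operations, the two sequences come out
roughly as follows (a sketch only; the real call sites live in the arch
ioapic code, and desc->lock handling is omitted):

edge_triggered:
  desc->chip->mask(irq);
  move_masked_irq(irq);
  desc->chip->unmask(irq);
  desc->chip->ack(irq);
level_triggered:
  desc->chip->mask(irq);
  desc->chip->ack(irq);
  move_masked_irq(irq);
  desc->chip->unmask(irq);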

Q: Why have we not seen this problem earlier?

A: The only symptom I have been able to reproduce is that if we change
   the vector before acknowledging an irq, the wrong irq is acknowledged.
   Since we are not currently reprogramming the irq vector during
   migration, no problems show up.

   We have to mask the irq before we acknowledge it, or else we could
   hit a window where an irq is asserted just before we acknowledge it.

   Edge triggered irqs do not have this problem because acknowledgements
   do not propagate in the same way.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Rajesh Shah <rajesh.shah@intel.com>
Cc: Andi Kleen <ak@muc.de>
Cc: "Protasevich, Natalie" <Natalie.Protasevich@UNISYS.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-10-04 07:55:26 -07:00

#include <linux/irq.h>

/*
 * Record a pending affinity change for @irq; the actual migration is
 * performed later, from the interrupt path, by move_native_irq() or
 * move_masked_irq().
 */
void set_pending_irq(unsigned int irq, cpumask_t mask)
{
	struct irq_desc *desc = irq_desc + irq;
	unsigned long flags;

	spin_lock_irqsave(&desc->lock, flags);
	desc->status |= IRQ_MOVE_PENDING;
	irq_desc[irq].pending_mask = mask;
	spin_unlock_irqrestore(&desc->lock, flags);
}
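
/*
 * Usage sketch (hypothetical caller, for illustration only): an affinity
 * update on a live irq just records the new mask and lets the interrupt
 * path do the reprogramming.  The real callers live in the arch code.
 */
#if 0
static void example_request_irq_move(unsigned int irq, cpumask_t new_mask)
{
	set_pending_irq(irq, new_mask);	/* applied at the next interrupt */
}
#endif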
void move_masked_irq(int irq)
{
	struct irq_desc *desc = irq_desc + irq;
	cpumask_t tmp;

	if (likely(!(desc->status & IRQ_MOVE_PENDING)))
		return;

	/*
	 * Paranoia: cpu-local interrupts shouldn't be calling in here anyway.
	 */
	if (CHECK_IRQ_PER_CPU(desc->status)) {
		WARN_ON(1);
		return;
	}

	desc->status &= ~IRQ_MOVE_PENDING;

	if (unlikely(cpus_empty(irq_desc[irq].pending_mask)))
		return;

	if (!desc->chip->set_affinity)
		return;

	assert_spin_locked(&desc->lock);

	cpus_and(tmp, irq_desc[irq].pending_mask, cpu_online_map);

	/*
	 * If there was a valid mask to work with, please
	 * do the disable, re-program, enable sequence.
	 * This is *not* particularly important for level triggered
	 * but in an edge trigger case, we might be setting the RTE
	 * when an active trigger is coming in.  This could
	 * cause some ioapics to malfunction.
	 * Being paranoid, I guess!
	 *
	 * For correct operation this depends on the caller
	 * masking the irqs.
	 */
	if (likely(!cpus_empty(tmp)))
		desc->chip->set_affinity(irq, tmp);

	cpus_clear(irq_desc[irq].pending_mask);
}
void move_native_irq(int irq)
{
	struct irq_desc *desc = irq_desc + irq;

	if (likely(!(desc->status & IRQ_MOVE_PENDING)))
		return;

	/* Mask around the move, unless the irq is already disabled. */
	if (likely(!(desc->status & IRQ_DISABLED)))
		desc->chip->disable(irq);

	move_masked_irq(irq);

	if (likely(!(desc->status & IRQ_DISABLED)))
		desc->chip->enable(irq);
}
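
/*
 * Usage sketch (hypothetical caller, for illustration only): from the
 * interrupt path, with desc->lock held as move_masked_irq() asserts,
 * one call performs the whole masked move.  It is a no-op unless
 * IRQ_MOVE_PENDING has been set by set_pending_irq().
 */
#if 0
static void example_finish_irq_move(unsigned int irq, struct irq_desc *desc)
{
	spin_lock(&desc->lock);
	move_native_irq(irq);	/* disable, set_affinity, enable */
	spin_unlock(&desc->lock);
}
#endif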