KVM: eventfd: Fix lock order inversion.
When registering a new irqfd, we call its ->poll method to collect any event that might have previously been pending so that we can trigger it. This is done under the kvm->irqfds.lock, which means the eventfd's ctx lock is taken under it.

However, if we get a POLLHUP in irqfd_wakeup, we will be called with the ctx lock held before getting the irqfds.lock to deactivate the irqfd, causing lockdep to complain.

Calling the ->poll method does not really need the irqfds.lock, so let's just move it after we've given up the irqfds.lock in kvm_irqfd_assign().

Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
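The inversion described above is the classic AB-BA pattern: one path takes kvm->irqfds.lock and then the eventfd ctx lock (via ->poll), while the POLLHUP path already holds the ctx lock when it goes for irqfds.lock. The following is a minimal user-space sketch of those two acquisition orders, not kernel code: irqfds_lock and ctx_lock are stand-in pthread mutexes for kvm->irqfds.lock and the eventfd context lock, and assign_path()/pollhup_path() are hypothetical helpers that only mirror the lock ordering of kvm_irqfd_assign() and irqfd_wakeup().

/* Sketch of the AB-BA lock ordering lockdep complains about (user-space analogue). */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t irqfds_lock = PTHREAD_MUTEX_INITIALIZER; /* stands in for kvm->irqfds.lock */
static pthread_mutex_t ctx_lock    = PTHREAD_MUTEX_INITIALIZER; /* stands in for the eventfd ctx lock */

/* Pre-patch kvm_irqfd_assign() order: irqfds_lock, then ctx_lock via ->poll. */
static void *assign_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&irqfds_lock);
	pthread_mutex_lock(&ctx_lock);		/* ->poll takes the eventfd lock */
	pthread_mutex_unlock(&ctx_lock);
	pthread_mutex_unlock(&irqfds_lock);
	return NULL;
}

/* irqfd_wakeup() on POLLHUP: ctx_lock is already held, then irqfds_lock is taken. */
static void *pollhup_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&ctx_lock);
	pthread_mutex_lock(&irqfds_lock);	/* deactivate the irqfd */
	pthread_mutex_unlock(&irqfds_lock);
	pthread_mutex_unlock(&ctx_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;
	pthread_create(&a, NULL, assign_path, NULL);
	pthread_create(&b, NULL, pollhup_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	puts("done (the two opposite orders can deadlock if they interleave)");
	return 0;
}

With the patch applied, the assign path drops irqfds.lock before calling ->poll, so each path only ever holds one of the two locks while acquiring the other and the cycle disappears.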
parent 93c4adc7af
commit 684a0b719d

1 changed file with 4 additions and 4 deletions
@@ -391,19 +391,19 @@ kvm_irqfd_assign(struct kvm *kvm, struct kvm_irqfd *args)
 					   lockdep_is_held(&kvm->irqfds.lock));
 	irqfd_update(kvm, irqfd, irq_rt);
 
-	events = f.file->f_op->poll(f.file, &irqfd->pt);
-
 	list_add_tail(&irqfd->list, &kvm->irqfds.items);
 
+	spin_unlock_irq(&kvm->irqfds.lock);
+
 	/*
 	 * Check if there was an event already pending on the eventfd
 	 * before we registered, and trigger it as if we didn't miss it.
 	 */
+	events = f.file->f_op->poll(f.file, &irqfd->pt);
+
 	if (events & POLLIN)
 		schedule_work(&irqfd->inject);
 
-	spin_unlock_irq(&kvm->irqfds.lock);
-
 	/*
 	 * do not drop the file until the irqfd is fully initialized, otherwise
 	 * we might race against the POLLHUP