From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: david.vrabel@citrix.com, konrad.wilk@oracle.com
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, stable@vger.kernel.org
Subject: [PATCH] xen/events: Mask a moving irq
Date: Thu, 17 Mar 2016 08:45:50 -0400
Message-Id: <1458218750-5202-1-git-send-email-boris.ostrovsky@oracle.com>

Moving an unmasked irq may result in the irq handler being invoked on
both the source and target CPUs.

With 2-level event channels this can happen as follows:

On the source CPU:
        evtchn_2l_handle_events() ->
            generic_handle_irq() ->
                handle_edge_irq() ->
                    eoi_pirq():
                        irq_move_irq(data);

                        /***** WE ARE HERE *****/

                        if (VALID_EVTCHN(evtchn))
                            clear_evtchn(evtchn);

If at this moment the target processor is handling an unrelated event
in evtchn_2l_handle_events()'s loop, it may pick up our event since the
target's cpu_evtchn_mask claims that this event belongs to it *and* the
event is unmasked and still pending. At the same time, the source CPU
will continue executing its own handle_edge_irq().

With FIFO event channels the scenario is similar: irq_move_irq() may
result in an EVTCHNOP_unmask hypercall which, in turn, may make the
event pending on the target CPU.

We can avoid this situation by clearing and moving the event while
keeping it masked.
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: stable@vger.kernel.org
---
 drivers/xen/events/events_base.c |   26 ++++++++++++++++++++++++--
 1 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 524c221..c5725ee 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -483,12 +483,23 @@ static void eoi_pirq(struct irq_data *data)
 	int evtchn = evtchn_from_irq(data->irq);
 	struct physdev_eoi eoi = { .irq = pirq_from_irq(data->irq) };
 	int rc = 0;
+	int need_unmask = 0;
 
-	irq_move_irq(data);
+	if (unlikely(irqd_is_setaffinity_pending(data))) {
+		if (VALID_EVTCHN(evtchn))
+			need_unmask = !test_and_set_mask(evtchn);
+	}
 
 	if (VALID_EVTCHN(evtchn))
 		clear_evtchn(evtchn);
 
+	irq_move_irq(data);
+
+	if (VALID_EVTCHN(evtchn)) {
+		if (unlikely(need_unmask))
+			unmask_evtchn(evtchn);
+	}
+
 	if (pirq_needs_eoi(data->irq)) {
 		rc = HYPERVISOR_physdev_op(PHYSDEVOP_eoi, &eoi);
 		WARN_ON(rc);
@@ -1356,11 +1367,22 @@ static void disable_dynirq(struct irq_data *data)
 static void ack_dynirq(struct irq_data *data)
 {
 	int evtchn = evtchn_from_irq(data->irq);
+	int need_unmask = 0;
 
-	irq_move_irq(data);
+	if (unlikely(irqd_is_setaffinity_pending(data))) {
+		if (VALID_EVTCHN(evtchn))
+			need_unmask = !test_and_set_mask(evtchn);
+	}
 
 	if (VALID_EVTCHN(evtchn))
 		clear_evtchn(evtchn);
+
+	irq_move_irq(data);
+
+	if (VALID_EVTCHN(evtchn)) {
+		if (unlikely(need_unmask))
+			unmask_evtchn(evtchn);
+	}
 }
 
 static void mask_ack_dynirq(struct irq_data *data)
-- 
1.7.7.6