From: Juergen Gross
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: Juergen Gross, Boris Ostrovsky, Stefano Stabellini, stable@vger.kernel.org, Julien Grall, Julien Grall
Subject: [PATCH v4 3/3] xen/events: avoid handling the same event on two cpus at the same time
Date: Sat, 6 Mar 2021 17:18:33 +0100
Message-Id:
<20210306161833.4552-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210306161833.4552-1-jgross@suse.com>
References: <20210306161833.4552-1-jgross@suse.com>

When changing the cpu affinity of an event it can happen today that
(with some unlucky timing) the same event will be handled on the old
and the new cpu at the same time.

Avoid that by adding an "event active" flag to the per-event data and
call the handler only if this flag isn't set.

Cc: stable@vger.kernel.org
Reported-by: Julien Grall
Signed-off-by: Juergen Gross
Reviewed-by: Julien Grall
---
V2:
- new patch
V3:
- use common helper for end of handler action (Julien Grall)
- move setting is_active to 0 for lateeoi (Boris Ostrovsky)
---
 drivers/xen/events/events_base.c | 32 +++++++++++++++++++++-----------
 1 file changed, 21 insertions(+), 11 deletions(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index b27c012c86b5..8236e2364eeb 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -103,6 +103,7 @@ struct irq_info {
 #define EVT_MASK_REASON_EXPLICIT	0x01
 #define EVT_MASK_REASON_TEMPORARY	0x02
 #define EVT_MASK_REASON_EOI_PENDING	0x04
+	u8 is_active;		/* Is event just being handled? */
 	unsigned irq;
 	evtchn_port_t evtchn;	/* event channel */
 	unsigned short cpu;	/* cpu bound */
@@ -810,6 +811,12 @@ static void xen_evtchn_close(evtchn_port_t port)
 		BUG();
 }
 
+static void event_handler_exit(struct irq_info *info)
+{
+	smp_store_release(&info->is_active, 0);
+	clear_evtchn(info->evtchn);
+}
+
 static void pirq_query_unmask(int irq)
 {
 	struct physdev_irq_status_query irq_status;
@@ -828,14 +835,15 @@ static void pirq_query_unmask(int irq)
 
 static void eoi_pirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 	struct physdev_eoi eoi = { .irq = pirq_from_irq(data->irq) };
 	int rc = 0;
 
 	if (!VALID_EVTCHN(evtchn))
 		return;
 
-	clear_evtchn(evtchn);
+	event_handler_exit(info);
 
 	if (pirq_needs_eoi(data->irq)) {
 		rc = HYPERVISOR_physdev_op(PHYSDEVOP_eoi, &eoi);
@@ -1666,6 +1674,8 @@ void handle_irq_for_port(evtchn_port_t port, struct evtchn_loop_ctrl *ctrl)
 	}
 
 	info = info_for_irq(irq);
+	if (xchg_acquire(&info->is_active, 1))
+		return;
 
 	dev = (info->type == IRQT_EVTCHN) ? info->u.interdomain : NULL;
 	if (dev)
@@ -1853,12 +1863,11 @@ static void disable_dynirq(struct irq_data *data)
 
 static void ack_dynirq(struct irq_data *data)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(data->irq);
-
-	if (!VALID_EVTCHN(evtchn))
-		return;
+	struct irq_info *info = info_for_irq(data->irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
-	clear_evtchn(evtchn);
+	if (VALID_EVTCHN(evtchn))
+		event_handler_exit(info);
 }
 
 static void mask_ack_dynirq(struct irq_data *data)
@@ -1874,7 +1883,7 @@ static void lateeoi_ack_dynirq(struct irq_data *data)
 
 	if (VALID_EVTCHN(evtchn)) {
 		do_mask(info, EVT_MASK_REASON_EOI_PENDING);
-		clear_evtchn(evtchn);
+		event_handler_exit(info);
 	}
 }
 
@@ -1885,7 +1894,7 @@ static void lateeoi_mask_ack_dynirq(struct irq_data *data)
 
 	if (VALID_EVTCHN(evtchn)) {
 		do_mask(info, EVT_MASK_REASON_EXPLICIT);
-		clear_evtchn(evtchn);
+		event_handler_exit(info);
 	}
 }
 
@@ -1998,10 +2007,11 @@ static void restore_cpu_ipis(unsigned int cpu)
 /* Clear an irq's pending state, in preparation for polling on it */
 void xen_clear_irq_pending(int irq)
 {
-	evtchn_port_t evtchn = evtchn_from_irq(irq);
+	struct irq_info *info = info_for_irq(irq);
+	evtchn_port_t evtchn = info ? info->evtchn : 0;
 
 	if (VALID_EVTCHN(evtchn))
-		clear_evtchn(evtchn);
+		event_handler_exit(info);
 }
 EXPORT_SYMBOL(xen_clear_irq_pending);
 void xen_set_irq_pending(int irq)
-- 
2.26.2