From: Juergen Gross <jgross@suse.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, x86@kernel.org
Cc: boris.ostrovsky@oracle.com, sstabellini@kernel.org, hpa@zytor.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, Juergen Gross <jgross@suse.com>, stable@vger.kernel.org
Subject: [PATCH] xen: fix xen_qlock_wait()
Date: Fri, 9 Nov 2018 13:04:13 +0100
Message-Id: <20181109120413.9539-1-jgross@suse.com>
X-Mailer: git-send-email 2.16.4

Commit a856531951dc80 ("xen: make xen_qlock_wait() nestable") introduced
a regression for Xen guests running fully virtualized (HVM or PVH mode).
The Xen hypervisor doesn't return from the poll hypercall with interrupts
disabled when an interrupt occurs (for PV guests it does).
So instead of disabling interrupts in xen_qlock_wait(), use a nesting
counter to avoid calling xen_clear_irq_pending() in case xen_qlock_wait()
is nested.

Fixes: a856531951dc80 ("xen: make xen_qlock_wait() nestable")
Cc: stable@vger.kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/xen/spinlock.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 441c88262169..1c8a8816a402 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -9,6 +9,7 @@
 #include <linux/log2.h>
 #include <linux/gfp.h>
 #include <linux/slab.h>
+#include <linux/atomic.h>
 
 #include <asm/paravirt.h>
 #include <asm/qspinlock.h>
@@ -21,6 +22,7 @@
 
 static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
 static DEFINE_PER_CPU(char *, irq_name);
+static DEFINE_PER_CPU(atomic_t, xen_qlock_wait_nest);
 static bool xen_pvspin = true;
 
 static void xen_qlock_kick(int cpu)
@@ -39,25 +41,25 @@ static void xen_qlock_kick(int cpu)
  */
 static void xen_qlock_wait(u8 *byte, u8 val)
 {
-	unsigned long flags;
 	int irq = __this_cpu_read(lock_kicker_irq);
+	atomic_t *nest_cnt = this_cpu_ptr(&xen_qlock_wait_nest);
 
 	/* If kicker interrupts not initialized yet, just spin */
 	if (irq == -1 || in_nmi())
 		return;
 
-	/* Guard against reentry. */
-	local_irq_save(flags);
+	/* Detect reentry. */
+	atomic_inc(nest_cnt);
 
-	/* If irq pending already clear it. */
-	if (xen_test_irq_pending(irq)) {
+	/* If irq pending already and no nested call clear it. */
+	if (atomic_read(nest_cnt) == 1 && xen_test_irq_pending(irq)) {
 		xen_clear_irq_pending(irq);
 	} else if (READ_ONCE(*byte) == val) {
 		/* Block until irq becomes pending (or a spurious wakeup) */
 		xen_poll_irq(irq);
 	}
 
-	local_irq_restore(flags);
+	atomic_dec(nest_cnt);
 }
 
 static irqreturn_t dummy_handler(int irq, void *dev_id)
-- 
2.16.4
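
[Editorial note, not part of the patch] The diff above replaces interrupt
disabling with a per-CPU nesting counter, so that only the outermost
xen_qlock_wait() invocation may consume a pending kick event; a nested
invocation (entered from an interrupt handler) must leave the event alone,
or the outer xen_poll_irq() would sleep forever. Below is a minimal,
self-contained user-space C sketch of that idea. Every name in it
(qlock_wait_sim, fake_irq_pending, a plain atomic_int for the per-CPU
counter) is invented for illustration and does not exist in the kernel.

/*
 * Illustrative sketch only: a nesting counter lets the outermost caller
 * consume a stale pending event while nested callers preserve it.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int nest_cnt;           /* stands in for xen_qlock_wait_nest */
static bool fake_irq_pending = true;  /* stands in for xen_test_irq_pending(irq) */

static void qlock_wait_sim(int depth)
{
	/* Detect reentry instead of disabling interrupts. */
	atomic_fetch_add(&nest_cnt, 1);

	if (atomic_load(&nest_cnt) == 1 && fake_irq_pending) {
		/* Outermost call: safe to consume a stale pending event. */
		fake_irq_pending = false;
		printf("depth %d: cleared stale pending event\n", depth);
	} else {
		printf("depth %d: nested call, pending event preserved\n", depth);
	}

	if (depth == 0) {
		/*
		 * Pretend an interrupt fires here and re-enters the wait
		 * path while the kick meant for the outer waiter is pending.
		 */
		fake_irq_pending = true;
		qlock_wait_sim(depth + 1);
		/*
		 * The kick is still pending, so a poll at this point would
		 * return immediately instead of hanging.
		 */
		printf("outer: event still pending: %d\n", (int)fake_irq_pending);
	}

	atomic_fetch_sub(&nest_cnt, 1);
}

int main(void)
{
	qlock_wait_sim(0);
	return 0;
}

Running the sketch shows the nested call (nest_cnt == 2) taking the
"preserved" branch, which is exactly the behavior the patch needs: the
wakeup event for the outer waiter survives the nested invocation.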