From: Juergen Gross <jgross@suse.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, x86@kernel.org
Cc: boris.ostrovsky@oracle.com, hpa@zytor.com, tglx@linutronix.de,
    mingo@redhat.com, bp@alien8.de, stable@vger.kernel.org,
    Waiman.Long@hp.com, peterz@infradead.org
Subject: [PATCH 1/2] xen: fix race in xen_qlock_wait()
Date: Mon, 1 Oct 2018 09:16:40 +0200
Message-Id: <20181001071641.19282-2-jgross@suse.com>
In-Reply-To: <20181001071641.19282-1-jgross@suse.com>
References: <20181001071641.19282-1-jgross@suse.com>
List-ID: <linux-kernel.vger.kernel.org>

In the following situation a vcpu waiting for a lock might not be woken
up from xen_poll_irq():

CPU 1:                  CPU 2:                       CPU 3:
takes a spinlock
                        tries to get lock
                        -> xen_qlock_wait()
                           -> xen_clear_irq_pending()
frees the lock
-> xen_qlock_kick(cpu2)
takes lock again
                                                     tries to get lock
                                                     -> *lock = _Q_SLOW_VAL
                           -> *lock == _Q_SLOW_VAL ?
                           -> xen_poll_irq()
frees the lock
-> xen_qlock_kick(cpu3)

And cpu 2 will sleep forever.

This can be avoided easily by modifying xen_qlock_wait() to call
xen_poll_irq() only if the related irq was not pending and to call
xen_clear_irq_pending() only if it was pending.

Cc: stable@vger.kernel.org
Cc: Waiman.Long@hp.com
Cc: peterz@infradead.org
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/xen/spinlock.c | 15 +++++----------
 1 file changed, 5 insertions(+), 10 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 973f10e05211..cd210a4ba7b1 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -45,17 +45,12 @@ static void xen_qlock_wait(u8 *byte, u8 val)
 	if (irq == -1)
 		return;
 
-	/* clear pending */
-	xen_clear_irq_pending(irq);
-	barrier();
+	/* If irq pending already clear it and return. */
+	if (xen_test_irq_pending(irq)) {
+		xen_clear_irq_pending(irq);
+		return;
+	}
 
-	/*
-	 * We check the byte value after clearing pending IRQ to make sure
-	 * that we won't miss a wakeup event because of the clearing.
-	 *
-	 * The sync_clear_bit() call in xen_clear_irq_pending() is atomic.
-	 * So it is effectively a memory barrier for x86.
-	 */
 	if (READ_ONCE(*byte) != val)
 		return;
 
-- 
2.16.4