From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Ali Saidi, "Peter Zijlstra (Intel)", Steve Capper, Will Deacon, Waiman Long, Sasha Levin
Subject: [PATCH 5.4 05/20] locking/qrwlock: Fix ordering in queued_write_lock_slowpath()
Date: Mon, 26 Apr 2021 09:29:56 +0200
Message-Id: <20210426072816.857926428@linuxfoundation.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210426072816.686976183@linuxfoundation.org>
References: <20210426072816.686976183@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

From: Ali Saidi

[ Upstream commit 84a24bf8c52e66b7ac89ada5e3cfbe72d65c1896 ]

While this code is executed with the wait_lock held, a reader can
acquire the lock without holding wait_lock. The writer side loops
checking the value with the atomic_cond_read_acquire(), but only truly
acquires the lock when the compare-and-exchange is completed
successfully, which isn't ordered. This exposes the window between the
acquire and the cmpxchg to an A-B-A problem which allows reads
following the lock acquisition to observe values speculatively before
the write lock is truly acquired.

We've seen a problem in epoll where the reader does an xchg while
holding the read lock, but the writer can see a value change out from
under it.

  Writer                                | Reader
  --------------------------------------------------------------------------------
  ep_scan_ready_list()                  |
  |- write_lock_irq()                   |
     |- queued_write_lock_slowpath()    |
        |- atomic_cond_read_acquire()   |
                                        | read_lock_irqsave(&ep->lock, flags);
     --> (observes value before unlock) |  chain_epi_lockless()
     |                                  |    epi->next = xchg(&ep->ovflist, epi);
     |                                  | read_unlock_irqrestore(&ep->lock, flags);
     |                                  |
     |     atomic_cmpxchg_relaxed()     |
     |-- READ_ONCE(ep->ovflist);        |

A core can order the read of the ovflist ahead of the
atomic_cmpxchg_relaxed(). Switching the cmpxchg to use acquire
semantics addresses this issue, at which point the atomic_cond_read
can be switched to use relaxed semantics.
Fixes: b519b56e378ee ("locking/qrwlock: Use atomic_cond_read_acquire() when spinning in qrwlock")
Signed-off-by: Ali Saidi
[peterz: use try_cmpxchg()]
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Steve Capper
Acked-by: Will Deacon
Acked-by: Waiman Long
Tested-by: Steve Capper
Signed-off-by: Sasha Levin
---
 kernel/locking/qrwlock.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/kernel/locking/qrwlock.c b/kernel/locking/qrwlock.c
index fe9ca92faa2a..909b0bf22a1e 100644
--- a/kernel/locking/qrwlock.c
+++ b/kernel/locking/qrwlock.c
@@ -61,6 +61,8 @@ EXPORT_SYMBOL(queued_read_lock_slowpath);
  */
 void queued_write_lock_slowpath(struct qrwlock *lock)
 {
+	int cnts;
+
 	/* Put the writer into the wait queue */
 	arch_spin_lock(&lock->wait_lock);
 
@@ -74,9 +76,8 @@ void queued_write_lock_slowpath(struct qrwlock *lock)
 
 	/* When no more readers or writers, set the locked flag */
 	do {
-		atomic_cond_read_acquire(&lock->cnts, VAL == _QW_WAITING);
-	} while (atomic_cmpxchg_relaxed(&lock->cnts, _QW_WAITING,
-					_QW_LOCKED) != _QW_WAITING);
+		cnts = atomic_cond_read_relaxed(&lock->cnts, VAL == _QW_WAITING);
+	} while (!atomic_try_cmpxchg_acquire(&lock->cnts, &cnts, _QW_LOCKED));
 unlock:
 	arch_spin_unlock(&lock->wait_lock);
 }
-- 
2.30.2