Date: Tue, 30 Nov 2021 12:44:27 +0100
In-Reply-To: <20211130114433.2580590-1-elver@google.com>
Message-Id: <20211130114433.2580590-20-elver@google.com>
References: <20211130114433.2580590-1-elver@google.com>
Subject: [PATCH v3 19/25] x86/qspinlock, kcsan: Instrument barrier of pv_queued_spin_unlock()
From: Marco Elver <elver@google.com>
To: elver@google.com, "Paul E. McKenney"
McKenney" Cc: Alexander Potapenko , Boqun Feng , Borislav Petkov , Dmitry Vyukov , Ingo Molnar , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Waiman Long , Will Deacon , kasan-dev@googlegroups.com, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, llvm@lists.linux.dev, x86@kernel.org Content-Type: text/plain; charset="UTF-8" Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org If CONFIG_PARAVIRT_SPINLOCKS=y, queued_spin_unlock() is implemented using pv_queued_spin_unlock() which is entirely inline asm based. As such, we do not receive any KCSAN barrier instrumentation via regular atomic operations. Add the missing KCSAN barrier instrumentation for the CONFIG_PARAVIRT_SPINLOCKS case. Signed-off-by: Marco Elver --- arch/x86/include/asm/qspinlock.h | 1 + 1 file changed, 1 insertion(+) diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h index d86ab942219c..d87451df480b 100644 --- a/arch/x86/include/asm/qspinlock.h +++ b/arch/x86/include/asm/qspinlock.h @@ -53,6 +53,7 @@ static inline void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val) static inline void queued_spin_unlock(struct qspinlock *lock) { + kcsan_release(); pv_queued_spin_unlock(lock); } -- 2.34.0.rc2.393.gf8c9666880-goog