From: Mark Rutland <mark.rutland@arm.com>
To: linux-kernel@vger.kernel.org
Cc: benh@kernel.crashing.org, boqun.feng@gmail.com, bp@alien8.de,
    catalin.marinas@arm.com, dvyukov@google.com, elver@google.com,
    ink@jurassic.park.msu.ru, jonas@southpole.se, juri.lelli@redhat.com,
    linux@armlinux.org.uk, luto@kernel.org, mark.rutland@arm.com,
    mattst88@gmail.com, mingo@redhat.com, monstr@monstr.eu,
    mpe@ellerman.id.au, paulmck@kernel.org, paulus@samba.org,
    peterz@infradead.org, rth@twiddle.net, shorne@gmail.com,
    stefan.kristiansson@saunalahti.fi, tglx@linutronix.de,
    vincent.guittot@linaro.org, will@kernel.org
Subject: [RFC PATCH 02/10] entry: snapshot thread flags
Date: Wed, 9 Jun 2021 13:19:53 +0100
Message-Id: <20210609122001.18277-3-mark.rutland@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210609122001.18277-1-mark.rutland@arm.com>
References: <20210609122001.18277-1-mark.rutland@arm.com>

Some thread flags can be set remotely, and so even when IRQs are
disabled, the flags can change under our feet. Generally this is
unlikely to cause a problem in practice, but it is somewhat unsound,
and KCSAN will legitimately warn that there is a data race.

To avoid such issues, we should snapshot the flags prior to using
them. Let's use the new helpers to do so in the common entry code.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
---
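[ Note: the read_thread_flags() helper used below is introduced earlier
  in this series, so its definition is not part of this diff. Judging
  by the READ_ONCE() pattern it replaces, a minimal sketch of such a
  helper (relying on <linux/thread_info.h> for current_thread_info()
  and READ_ONCE()) would look like:

	/*
	 * Take one marked (READ_ONCE) snapshot of the thread flags, so
	 * that remotely-set flags cannot change between checks and
	 * KCSAN sees an intentional marked access rather than a data
	 * race.
	 */
	static inline unsigned long read_thread_flags(void)
	{
		return READ_ONCE(current_thread_info()->flags);
	}

  Callers take a single snapshot into a local variable (ti_work below)
  and test all work bits against that stable copy, rather than
  re-reading the flags word for each check. ]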
 include/linux/entry-kvm.h | 2 +-
 kernel/entry/common.c     | 4 ++--
 kernel/entry/kvm.c        | 4 ++--
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/entry-kvm.h b/include/linux/entry-kvm.h
index 8b2b1d68b954..bc487dffb803 100644
--- a/include/linux/entry-kvm.h
+++ b/include/linux/entry-kvm.h
@@ -70,7 +70,7 @@ static inline void xfer_to_guest_mode_prepare(void)
  */
 static inline bool __xfer_to_guest_mode_work_pending(void)
 {
-	unsigned long ti_work = READ_ONCE(current_thread_info()->flags);
+	unsigned long ti_work = read_thread_flags();
 
 	return !!(ti_work & XFER_TO_GUEST_MODE_WORK);
 }
diff --git a/kernel/entry/common.c b/kernel/entry/common.c
index a0b3b04fb596..3147a1f2ed74 100644
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -188,7 +188,7 @@ static unsigned long exit_to_user_mode_loop(struct pt_regs *regs,
 		/* Check if any of the above work has queued a deferred wakeup */
 		rcu_nocb_flush_deferred_wakeup();
 
-		ti_work = READ_ONCE(current_thread_info()->flags);
+		ti_work = read_thread_flags();
 	}
 
 	/* Return the latest work state for arch_exit_to_user_mode() */
@@ -197,7 +197,7 @@ static unsigned long exit_to_user_mode_loop(struct pt_regs *regs,
 
 static void exit_to_user_mode_prepare(struct pt_regs *regs)
 {
-	unsigned long ti_work = READ_ONCE(current_thread_info()->flags);
+	unsigned long ti_work = read_thread_flags();
 
 	lockdep_assert_irqs_disabled();
 
diff --git a/kernel/entry/kvm.c b/kernel/entry/kvm.c
index 49972ee99aff..96d476e06c77 100644
--- a/kernel/entry/kvm.c
+++ b/kernel/entry/kvm.c
@@ -26,7 +26,7 @@ static int xfer_to_guest_mode_work(struct kvm_vcpu *vcpu, unsigned long ti_work)
 		if (ret)
 			return ret;
 
-		ti_work = READ_ONCE(current_thread_info()->flags);
+		ti_work = read_thread_flags();
 	} while (ti_work & XFER_TO_GUEST_MODE_WORK || need_resched());
 	return 0;
 }
@@ -43,7 +43,7 @@ int xfer_to_guest_mode_handle_work(struct kvm_vcpu *vcpu)
 	 * disabled in the inner loop before going into guest mode. No need
 	 * to disable interrupts here.
 	 */
-	ti_work = READ_ONCE(current_thread_info()->flags);
+	ti_work = read_thread_flags();
 
 	if (!(ti_work & XFER_TO_GUEST_MODE_WORK))
 		return 0;
-- 
2.11.0