From: Andy Lutomirski
To: x86@kernel.org
Cc: Dave Hansen, LKML, linux-mm@kvack.org, Andrew Morton, Andy Lutomirski, Mathieu Desnoyers, Nicholas Piggin, Peter Zijlstra
Subject: [PATCH 5/8] membarrier, kthread: Use _ONCE accessors for task->mm
Date: Tue, 15 Jun 2021 20:21:10 -0700
Message-Id: <74ace142f48db7d0e71b05b5ace72bfe8e0a2652.1623813516.git.luto@kernel.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To:
References:
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

membarrier reads cpu_rq(remote cpu)->curr->mm without locking.  Use
READ_ONCE() and WRITE_ONCE() to remove the data races.
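For readers unfamiliar with the pattern, below is a minimal user-space
sketch of the idea (illustration only, not part of the patch).  The
WRITE_ONCE()/READ_ONCE() stand-ins are simplified volatile-cast versions
of the kernel macros, and struct task, struct mm, set_task_mm() and
task_uses_mm() are hypothetical names invented for the example:

#include <stdio.h>

/*
 * Simplified stand-ins for the kernel's READ_ONCE()/WRITE_ONCE():
 * the volatile cast forces the compiler to emit exactly one load or
 * store for the marked access instead of tearing, merging or
 * refetching it, which is what makes a lockless read well defined.
 */
#define WRITE_ONCE(x, val) (*(volatile __typeof__(x) *)&(x) = (val))
#define READ_ONCE(x)       (*(volatile __typeof__(x) *)&(x))

struct mm   { int id; };          /* hypothetical stand-in for mm_struct   */
struct task { struct mm *mm; };   /* hypothetical stand-in for task_struct */

/* Writer side (cf. exec_mmap()/kthread_use_mm() in the diff below):
 * publish the task's new mm with a single, non-torn store. */
static void set_task_mm(struct task *tsk, struct mm *mm)
{
	WRITE_ONCE(tsk->mm, mm);
}

/* Reader side (cf. membarrier_private_expedited() in the diff below):
 * inspect a task's mm without holding a lock; the read observes either
 * the old or the new pointer, never a half-written value. */
static int task_uses_mm(struct task *p, struct mm *mm)
{
	return p && READ_ONCE(p->mm) == mm;
}

int main(void)
{
	struct mm mm = { .id = 1 };
	struct task tsk = { .mm = NULL };

	set_task_mm(&tsk, &mm);
	printf("task uses mm: %d\n", task_uses_mm(&tsk, &mm));
	return 0;
}

The real kernel macros add type checking and other details, but the
volatile access is the part that makes the unlocked read/write pair a
single, well-defined access.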
Cc: Mathieu Desnoyers
Cc: Nicholas Piggin
Cc: Peter Zijlstra
Signed-off-by: Andy Lutomirski
---
 fs/exec.c                 | 2 +-
 kernel/kthread.c          | 4 ++--
 kernel/sched/membarrier.c | 6 +++---
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/fs/exec.c b/fs/exec.c
index 18594f11c31f..2e63dea83411 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -1007,7 +1007,7 @@ static int exec_mmap(struct mm_struct *mm)
 	local_irq_disable();
 	active_mm = tsk->active_mm;
 	tsk->active_mm = mm;
-	tsk->mm = mm;
+	WRITE_ONCE(tsk->mm, mm);	/* membarrier reads this without locks */
 	/*
 	 * This prevents preemption while active_mm is being loaded and
 	 * it and mm are being updated, which could cause problems for
diff --git a/kernel/kthread.c b/kernel/kthread.c
index 8275b415acec..4962794e02d5 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -1322,7 +1322,7 @@ void kthread_use_mm(struct mm_struct *mm)
 		mmgrab(mm);
 		tsk->active_mm = mm;
 	}
-	tsk->mm = mm;
+	WRITE_ONCE(tsk->mm, mm);	/* membarrier reads this without locks */
 	membarrier_update_current_mm(mm);
 	switch_mm_irqs_off(active_mm, mm, tsk);
 	membarrier_finish_switch_mm(atomic_read(&mm->membarrier_state));
@@ -1363,7 +1363,7 @@ void kthread_unuse_mm(struct mm_struct *mm)
 	smp_mb__after_spinlock();
 	sync_mm_rss(mm);
 	local_irq_disable();
-	tsk->mm = NULL;
+	WRITE_ONCE(tsk->mm, NULL);	/* membarrier reads this without locks */
 	membarrier_update_current_mm(NULL);
 	/* active_mm is still 'mm' */
 	enter_lazy_tlb(mm, tsk);
diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
index 3173b063d358..c32c32a2441e 100644
--- a/kernel/sched/membarrier.c
+++ b/kernel/sched/membarrier.c
@@ -410,7 +410,7 @@ static int membarrier_private_expedited(int flags, int cpu_id)
 			goto out;
 		rcu_read_lock();
 		p = rcu_dereference(cpu_rq(cpu_id)->curr);
-		if (!p || p->mm != mm) {
+		if (!p || READ_ONCE(p->mm) != mm) {
 			rcu_read_unlock();
 			goto out;
 		}
@@ -423,7 +423,7 @@ static int membarrier_private_expedited(int flags, int cpu_id)
 			struct task_struct *p;
 
 			p = rcu_dereference(cpu_rq(cpu)->curr);
-			if (p && p->mm == mm)
+			if (p && READ_ONCE(p->mm) == mm)
 				__cpumask_set_cpu(cpu, tmpmask);
 		}
 		rcu_read_unlock();
@@ -521,7 +521,7 @@ static int sync_runqueues_membarrier_state(struct mm_struct *mm)
 			struct task_struct *p;
 
 			p = rcu_dereference(rq->curr);
-			if (p && p->mm == mm)
+			if (p && READ_ONCE(p->mm) == mm)
 				__cpumask_set_cpu(cpu, tmpmask);
 		}
 		rcu_read_unlock();
-- 
2.31.1