From: Steven Price
To: Catalin Marinas, Marc Zyngier, Will Deacon
Cc: Steven Price, James Morse, Julien Thierry, Suzuki K Poulose,
    kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, Dave Martin, Mark Rutland,
    Thomas Gleixner, qemu-devel@nongnu.org, Juan Quintela,
    "Dr. David Alan Gilbert", Richard Henderson, Peter Maydell, Andrew Jones
Subject: [PATCH v16 1/7] arm64: mte: Handle race when synchronising tags
Date: Fri, 18 Jun 2021 14:28:20 +0100
Message-Id: <20210618132826.54670-2-steven.price@arm.com>
In-Reply-To: <20210618132826.54670-1-steven.price@arm.com>
References: <20210618132826.54670-1-steven.price@arm.com>

mte_sync_tags() used test_and_set_bit() to set the PG_mte_tagged flag
before restoring/zeroing the MTE tags. However, if another thread were
to race and attempt to sync the tags on the same page before the first
thread had completed the restore/zeroing, it would see the flag already
set and continue without waiting. This could expose the previous
contents of the tags to user space, and cause any updates that user
space makes before the restore/zeroing has completed to be lost.

Since this code is run from atomic contexts we can't just lock the page
during the process. Instead implement a new (global) spinlock to protect
the mte_sync_page_tags() function.

Fixes: 34bfeea4a9e9 ("arm64: mte: Clear the tags when a page is mapped in user-space with PROT_MTE")
Reviewed-by: Catalin Marinas
Signed-off-by: Steven Price
---
 arch/arm64/kernel/mte.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 125a10e413e9..a3583a7fd400 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -25,6 +25,7 @@
 u64 gcr_kernel_excl __ro_after_init;
 
 static bool report_fault_once = true;
+static DEFINE_SPINLOCK(tag_sync_lock);
 
 #ifdef CONFIG_KASAN_HW_TAGS
 /* Whether the MTE asynchronous mode is enabled. */
@@ -34,13 +35,22 @@ EXPORT_SYMBOL_GPL(mte_async_mode);
 
 static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap)
 {
+	unsigned long flags;
 	pte_t old_pte = READ_ONCE(*ptep);
 
+	spin_lock_irqsave(&tag_sync_lock, flags);
+
+	/* Recheck with the lock held */
+	if (test_bit(PG_mte_tagged, &page->flags))
+		goto out;
+
 	if (check_swap && is_swap_pte(old_pte)) {
 		swp_entry_t entry = pte_to_swp_entry(old_pte);
 
-		if (!non_swap_entry(entry) && mte_restore_tags(entry, page))
-			return;
+		if (!non_swap_entry(entry) && mte_restore_tags(entry, page)) {
+			set_bit(PG_mte_tagged, &page->flags);
+			goto out;
+		}
 	}
 
 	page_kasan_tag_reset(page);
@@ -53,6 +63,10 @@ static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap)
 	 */
 	smp_wmb();
 	mte_clear_page_tags(page_address(page));
+	set_bit(PG_mte_tagged, &page->flags);
+
+out:
+	spin_unlock_irqrestore(&tag_sync_lock, flags);
 }
 
 void mte_sync_tags(pte_t *ptep, pte_t pte)
@@ -63,7 +77,7 @@ void mte_sync_tags(pte_t *ptep, pte_t pte)
 
 	/* if PG_mte_tagged is set, tags have already been initialised */
 	for (i = 0; i < nr_pages; i++, page++) {
-		if (!test_and_set_bit(PG_mte_tagged, &page->flags))
+		if (!test_bit(PG_mte_tagged, &page->flags))
 			mte_sync_page_tags(page, ptep, check_swap);
 	}
 }
-- 
2.20.1
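
For readers who want to see the ordering problem in isolation, the sketch
below is a minimal user-space analogue of the pattern the patch switches to:
the flag is published only after the tags have been initialised, and the
check is repeated under a spinlock so at most one thread does the work while
late arrivals wait for it to finish. It is an illustration only, not the
kernel code: the names page_tagged, tags_initialised, init_tags() and
sync_page_tags() are hypothetical stand-ins for PG_mte_tagged, the tag
contents and mte_sync_page_tags().

/*
 * Minimal user-space sketch of the check-then-recheck-under-lock pattern
 * described above; not the kernel code.  Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static pthread_spinlock_t tag_sync_lock; /* stands in for the new global lock */
static atomic_bool page_tagged;          /* stands in for PG_mte_tagged */
static int tags_initialised;             /* the state the flag is meant to guard */

static void init_tags(void)
{
	/* The real code restores the tags from swap or zeroes them here. */
	tags_initialised = 1;
}

static void sync_page_tags(void)
{
	pthread_spin_lock(&tag_sync_lock);

	/* Recheck with the lock held, as the patch does. */
	if (!atomic_load(&page_tagged)) {
		init_tags();
		/* Publish the flag only after the tags are in a known state. */
		atomic_store(&page_tagged, true);
	}

	pthread_spin_unlock(&tag_sync_lock);
}

static void *racer(void *arg)
{
	/* Cheap unlocked check first, mirroring the patched mte_sync_tags(). */
	if (!atomic_load(&page_tagged))
		sync_page_tags();

	/*
	 * With the old order (set the flag first, initialise afterwards) a
	 * second thread could reach this point before initialisation finished.
	 */
	printf("thread %ld sees tags_initialised=%d\n",
	       (long)(intptr_t)arg, tags_initialised);
	return NULL;
}

int main(void)
{
	pthread_t threads[2];

	pthread_spin_init(&tag_sync_lock, PTHREAD_PROCESS_PRIVATE);
	for (long i = 0; i < 2; i++)
		pthread_create(&threads[i], NULL, racer, (void *)(intptr_t)i);
	for (long i = 0; i < 2; i++)
		pthread_join(threads[i], NULL);
	return 0;
}

The key property, as in the patch, is that the flag is set only after the
work is done, and every path that skips the work has first observed the flag
under the same lock, so no thread can see a half-initialised page.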