Date: Fri, 13 Nov 2020 23:15:54 +0100
Message-Id: <18bca1ff61bf6605289e7213153b3fd5b8f81e27.1605305705.git.andreyknvl@google.com>
X-Mailer: git-send-email 2.29.2.299.gdc1121823c-goog
Subject: [PATCH mm v10 26/42] arm64: mte: Reset the page tag in page->flags
From: Andrey Konovalov
To: Andrew Morton
Cc: Catalin Marinas, Will Deacon, Vincenzo Frascino, Dmitry Vyukov,
	Andrey Ryabinin, Alexander Potapenko, Marco Elver, Evgenii Stepanov,
	Branislav Rankov, Kevin Brodsky, kasan-dev@googlegroups.com,
	linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Andrey Konovalov

From: Vincenzo Frascino

Hardware tag-based KASAN, for compatibility with the other modes, stores
the tag associated with a page in page->flags. Because of this, the kernel
faults on access when it allocates a page with an initial tag and the user
then changes the tags.
Reset the tag associated with a page by the kernel in all the relevant
places to prevent kernel faults on access.

Note: an alternative approach would be to modify page_to_virt(). However,
that could end up being racy: if one CPU checks the PG_mte_tagged bit and
decides that the page is not tagged, while another CPU maps the same page
with PROT_MTE so that it becomes tagged, the subsequent kernel access
would fail.

Signed-off-by: Vincenzo Frascino
Signed-off-by: Andrey Konovalov
---
Change-Id: I8451d438bb63364de2a3e68041e3a27866921d4e
---
 arch/arm64/kernel/hibernate.c | 5 +++++
 arch/arm64/kernel/mte.c       | 9 +++++++++
 arch/arm64/mm/copypage.c      | 9 +++++++++
 arch/arm64/mm/mteswap.c       | 9 +++++++++
 4 files changed, 32 insertions(+)

diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 42003774d261..9c9f47e9f7f4 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -371,6 +371,11 @@ static void swsusp_mte_restore_tags(void)
 		unsigned long pfn = xa_state.xa_index;
 		struct page *page = pfn_to_online_page(pfn);
 
+		/*
+		 * It is not required to invoke page_kasan_tag_reset(page)
+		 * at this point since the tags stored in page->flags are
+		 * already restored.
+		 */
 		mte_restore_page_tags(page_address(page), tags);
 
 		mte_free_tag_storage(tags);
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 8f99c65837fd..86d554ce98b6 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -34,6 +34,15 @@ static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap)
 		return;
 	}
 
+	page_kasan_tag_reset(page);
+	/*
+	 * We need smp_wmb() in between setting the flags and clearing the
+	 * tags because if another thread reads page->flags and builds a
+	 * tagged address out of it, there is an actual dependency to the
+	 * memory access, but on the current thread we do not guarantee that
+	 * the new page->flags are visible before the tags were updated.
+	 */
+	smp_wmb();
 	mte_clear_page_tags(page_address(page));
 }
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index 70a71f38b6a9..b5447e53cd73 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -23,6 +23,15 @@ void copy_highpage(struct page *to, struct page *from)
 
 	if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) {
 		set_bit(PG_mte_tagged, &to->flags);
+		page_kasan_tag_reset(to);
+		/*
+		 * We need smp_wmb() in between setting the flags and clearing the
+		 * tags because if another thread reads page->flags and builds a
+		 * tagged address out of it, there is an actual dependency to the
+		 * memory access, but on the current thread we do not guarantee that
+		 * the new page->flags are visible before the tags were updated.
+		 */
+		smp_wmb();
 		mte_copy_page_tags(kto, kfrom);
 	}
 }
diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
index c52c1847079c..7c4ef56265ee 100644
--- a/arch/arm64/mm/mteswap.c
+++ b/arch/arm64/mm/mteswap.c
@@ -53,6 +53,15 @@ bool mte_restore_tags(swp_entry_t entry, struct page *page)
 	if (!tags)
 		return false;
 
+	page_kasan_tag_reset(page);
+	/*
+	 * We need smp_wmb() in between setting the flags and clearing the
+	 * tags because if another thread reads page->flags and builds a
+	 * tagged address out of it, there is an actual dependency to the
+	 * memory access, but on the current thread we do not guarantee that
+	 * the new page->flags are visible before the tags were updated.
+	 */
+	smp_wmb();
 	mte_restore_page_tags(page_address(page), tags);
 
 	return true;
-- 
2.29.2.299.gdc1121823c-goog