Date: Mon, 8 Mar 2021 16:55:16 +0100
Message-Id: <731edf3341d50e863a658689c184eb16abda70e6.1615218180.git.andreyknvl@google.com>
X-Mailer: git-send-email 2.30.1.766.gb4fecdf3b7-goog
Subject: [PATCH v2 3/5] kasan, mm: integrate page_alloc init with HW_TAGS
From: Andrey Konovalov
To: Catalin Marinas, Vincenzo Frascino, Alexander Potapenko, Marco Elver
Cc: Andrew Morton, Will Deacon, Dmitry Vyukov, Andrey Ryabinin,
    Peter Collingbourne, Evgenii Stepanov, Branislav Rankov, Kevin Brodsky,
    kasan-dev@googlegroups.com, linux-arm-kernel@lists.infradead.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrey Konovalov

This change uses the memory initialization feature previously added to
the HW_TAGS KASAN routines to initialize page_alloc memory when
init_on_alloc/free is enabled.

With this change, kernel_init_free_pages() is no longer called when
both HW_TAGS KASAN and init_on_alloc/free are enabled. Instead, memory
is initialized by the KASAN runtime.

To avoid discrepancies in which memory gets initialized that could be
introduced by future changes, the KASAN and kernel_init_free_pages()
hooks are kept next to each other, and a warning comment is added.

This patch changes the order in which the memory initialization and
page poisoning hooks are called. This has no side effects: whenever
page poisoning is enabled, memory initialization is disabled.

Combining the setting of allocation tags with memory initialization
improves HW_TAGS KASAN performance when init_on_alloc/free is enabled.
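In pseudocode, the resulting hook ordering on the allocation path looks
roughly like this (a simplified sketch of the post_alloc_hook() change
below, with unrelated hooks omitted):

	/* Decide whether this allocation wants initialized memory. */
	init = !want_init_on_free() && want_init_on_alloc(gfp_flags);
	/* With HW_TAGS KASAN, setting tags and zeroing happen in one pass. */
	kasan_alloc_pages(page, order, init);
	/* Otherwise, fall back to a separate initialization pass. */
	if (init && !kasan_has_integrated_init())
		kernel_init_free_pages(page, 1 << order);

The free path mirrors this pattern with want_init_on_free() and
kasan_free_nondeferred_pages().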
Signed-off-by: Andrey Konovalov
---
 include/linux/kasan.h | 30 ++++++++++++++++++++++--------
 mm/kasan/common.c     |  8 ++++----
 mm/mempool.c          |  4 ++--
 mm/page_alloc.c       | 37 ++++++++++++++++++++++++++-----------
 4 files changed, 54 insertions(+), 25 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 1d89b8175027..c89613caa8cf 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -96,6 +96,11 @@ static __always_inline bool kasan_enabled(void)
 	return static_branch_likely(&kasan_flag_enabled);
 }
 
+static inline bool kasan_has_integrated_init(void)
+{
+	return kasan_enabled();
+}
+
 #else /* CONFIG_KASAN_HW_TAGS */
 
 static inline bool kasan_enabled(void)
@@ -103,6 +108,11 @@ static inline bool kasan_enabled(void)
 	return true;
 }
 
+static inline bool kasan_has_integrated_init(void)
+{
+	return false;
+}
+
 #endif /* CONFIG_KASAN_HW_TAGS */
 
 slab_flags_t __kasan_never_merge(void);
@@ -120,20 +130,20 @@ static __always_inline void kasan_unpoison_range(const void *addr, size_t size)
 		__kasan_unpoison_range(addr, size);
 }
 
-void __kasan_alloc_pages(struct page *page, unsigned int order);
+void __kasan_alloc_pages(struct page *page, unsigned int order, bool init);
 static __always_inline void kasan_alloc_pages(struct page *page,
-						unsigned int order)
+						unsigned int order, bool init)
 {
 	if (kasan_enabled())
-		__kasan_alloc_pages(page, order);
+		__kasan_alloc_pages(page, order, init);
 }
 
-void __kasan_free_pages(struct page *page, unsigned int order);
+void __kasan_free_pages(struct page *page, unsigned int order, bool init);
 static __always_inline void kasan_free_pages(struct page *page,
-						unsigned int order)
+						unsigned int order, bool init)
 {
 	if (kasan_enabled())
-		__kasan_free_pages(page, order);
+		__kasan_free_pages(page, order, init);
 }
 
 void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
@@ -277,13 +287,17 @@ static inline bool kasan_enabled(void)
 {
 	return false;
 }
+static inline bool kasan_has_integrated_init(void)
+{
+	return false;
+}
 static inline slab_flags_t kasan_never_merge(void)
 {
 	return 0;
 }
 static inline void kasan_unpoison_range(const void *address, size_t size) {}
-static inline void kasan_alloc_pages(struct page *page, unsigned int order) {}
-static inline void kasan_free_pages(struct page *page, unsigned int order) {}
+static inline void kasan_alloc_pages(struct page *page, unsigned int order, bool init) {}
+static inline void kasan_free_pages(struct page *page, unsigned int order, bool init) {}
 static inline void kasan_cache_create(struct kmem_cache *cache,
 				      unsigned int *size,
 				      slab_flags_t *flags) {}
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 316f7f8cd8e6..6107c795611f 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -97,7 +97,7 @@ slab_flags_t __kasan_never_merge(void)
 	return 0;
 }
 
-void __kasan_alloc_pages(struct page *page, unsigned int order)
+void __kasan_alloc_pages(struct page *page, unsigned int order, bool init)
 {
 	u8 tag;
 	unsigned long i;
@@ -108,14 +108,14 @@ void __kasan_alloc_pages(struct page *page, unsigned int order)
 	tag = kasan_random_tag();
 	for (i = 0; i < (1 << order); i++)
 		page_kasan_tag_set(page + i, tag);
-	kasan_unpoison(page_address(page), PAGE_SIZE << order, false);
+	kasan_unpoison(page_address(page), PAGE_SIZE << order, init);
 }
 
-void __kasan_free_pages(struct page *page, unsigned int order)
+void __kasan_free_pages(struct page *page, unsigned int order, bool init)
 {
 	if (likely(!PageHighMem(page)))
 		kasan_poison(page_address(page), PAGE_SIZE << order,
-			     KASAN_FREE_PAGE, false);
+			     KASAN_FREE_PAGE, init);
 }
 
 /*
diff --git a/mm/mempool.c b/mm/mempool.c
index 79959fac27d7..fe19d290a301 100644
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -106,7 +106,7 @@ static __always_inline void kasan_poison_element(mempool_t *pool, void *element)
 	if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
 		kasan_slab_free_mempool(element);
 	else if (pool->alloc == mempool_alloc_pages)
-		kasan_free_pages(element, (unsigned long)pool->pool_data);
+		kasan_free_pages(element, (unsigned long)pool->pool_data, false);
 }
 
 static void kasan_unpoison_element(mempool_t *pool, void *element)
@@ -114,7 +114,7 @@ static void kasan_unpoison_element(mempool_t *pool, void *element)
 	if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
 		kasan_unpoison_range(element, __ksize(element));
 	else if (pool->alloc == mempool_alloc_pages)
-		kasan_alloc_pages(element, (unsigned long)pool->pool_data);
+		kasan_alloc_pages(element, (unsigned long)pool->pool_data, false);
 }
 
 static __always_inline void add_element(mempool_t *pool, void *element)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0efb07b5907c..aba9cd673eac 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -396,14 +396,14 @@ static DEFINE_STATIC_KEY_TRUE(deferred_pages);
  * initialization is done, but this is not likely to happen.
  */
 static inline void kasan_free_nondeferred_pages(struct page *page, int order,
-						fpi_t fpi_flags)
+						bool init, fpi_t fpi_flags)
 {
 	if (static_branch_unlikely(&deferred_pages))
 		return;
 	if (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
 	    (fpi_flags & FPI_SKIP_KASAN_POISON))
 		return;
-	kasan_free_pages(page, order);
+	kasan_free_pages(page, order, init);
 }
 
 /* Returns true if the struct page for the pfn is uninitialised */
@@ -455,12 +455,12 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 }
 #else
 static inline void kasan_free_nondeferred_pages(struct page *page, int order,
-						fpi_t fpi_flags)
+						bool init, fpi_t fpi_flags)
 {
 	if (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
 	    (fpi_flags & FPI_SKIP_KASAN_POISON))
 		return;
-	kasan_free_pages(page, order);
+	kasan_free_pages(page, order, init);
 }
 
 static inline bool early_page_uninitialised(unsigned long pfn)
@@ -1242,6 +1242,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 					unsigned int order, bool check_free, fpi_t fpi_flags)
 {
 	int bad = 0;
+	bool init;
 
 	VM_BUG_ON_PAGE(PageTail(page), page);
 
@@ -1299,16 +1300,21 @@ static __always_inline bool free_pages_prepare(struct page *page,
 		debug_check_no_obj_freed(page_address(page),
 					   PAGE_SIZE << order);
 	}
-	if (want_init_on_free())
-		kernel_init_free_pages(page, 1 << order);
 
 	kernel_poison_pages(page, 1 << order);
 
 	/*
+	 * As memory initialization might be integrated into KASAN,
+	 * kasan_free_pages and kernel_init_free_pages must be
+	 * kept together to avoid discrepancies in behavior.
+	 *
 	 * With hardware tag-based KASAN, memory tags must be set before the
 	 * page becomes unavailable via debug_pagealloc or arch_free_page.
 	 */
-	kasan_free_nondeferred_pages(page, order, fpi_flags);
+	init = want_init_on_free();
+	if (init && !kasan_has_integrated_init())
+		kernel_init_free_pages(page, 1 << order);
+	kasan_free_nondeferred_pages(page, order, init, fpi_flags);
 
 	/*
 	 * arch_free_page() can make the page's contents inaccessible.  s390
@@ -2315,17 +2321,26 @@ static bool check_new_pages(struct page *page, unsigned int order)
 inline void post_alloc_hook(struct page *page, unsigned int order,
 				gfp_t gfp_flags)
 {
+	bool init;
+
 	set_page_private(page, 0);
 	set_page_refcounted(page);
 
 	arch_alloc_page(page, order);
 	debug_pagealloc_map_pages(page, 1 << order);
-	kasan_alloc_pages(page, order);
-	kernel_unpoison_pages(page, 1 << order);
-	set_page_owner(page, order, gfp_flags);
 
-	if (!want_init_on_free() && want_init_on_alloc(gfp_flags))
+	/*
+	 * As memory initialization might be integrated into KASAN,
+	 * kasan_alloc_pages and kernel_init_free_pages must be
+	 * kept together to avoid discrepancies in behavior.
+	 */
+	init = !want_init_on_free() && want_init_on_alloc(gfp_flags);
+	kasan_alloc_pages(page, order, init);
+	if (init && !kasan_has_integrated_init())
 		kernel_init_free_pages(page, 1 << order);
+
+	kernel_unpoison_pages(page, 1 << order);
+	set_page_owner(page, order, gfp_flags);
 }
 
 static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
-- 
2.30.1.766.gb4fecdf3b7-goog