From: Andrey Konovalov <[email protected]>
Currently, kernel_init_free_pages() serves two purposes: it either only
zeroes memory or zeroes both memory and memory tags via a different
code path. As this function has only two callers, each using only one
of these code paths, this behaviour is confusing.
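
For reference, the pre-patch function interleaves the two code paths
roughly as follows (a condensed sketch; the body of the zero-only loop
is elided, see the diff below for the exact code):

static void kernel_init_free_pages(struct page *page, int numpages, bool zero_tags)
{
	int i;

	if (zero_tags) {
		/* Zero both the memory and the memory tags. */
		for (i = 0; i < numpages; i++)
			tag_clear_highpage(page + i);
		return;
	}

	/* s390's use of memset() could override KASAN redzones. */
	kasan_disable_current();
	for (i = 0; i < numpages; i++) {
		/* ... zero the page contents only ... */
	}
	kasan_enable_current();
}
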
This patch pulls the code that zeroes both memory and tags out of
kernel_init_free_pages().
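
For illustration only (this helper is not part of the patch, and its
name is made up): after the change, an allocation that asks for zeroed,
tagged memory takes the tag_clear_highpage() branch in
post_alloc_hook(), while plain zero-on-alloc allocations still go
through kernel_init_free_pages():

#include <linux/gfp.h>

/* Hypothetical caller, purely illustrative. */
static struct page *alloc_zeroed_tagged_page(void)
{
	/*
	 * Assuming init_on_free is not enabled: __GFP_ZERO makes
	 * want_init_on_alloc() return true, so post_alloc_hook() runs its
	 * init branch; because __GFP_ZEROTAGS is also set, the page is
	 * cleared with tag_clear_highpage() rather than
	 * kernel_init_free_pages().
	 */
	return alloc_pages(GFP_KERNEL | __GFP_ZERO | __GFP_ZEROTAGS, 0);
}
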
As a result of this change, the code in post_alloc_hook() starts to
look complicated, but this is improved in the following patches. Those
improvements are kept out of this patch to make the diffs easier to
read.
This patch makes no functional changes.

Signed-off-by: Andrey Konovalov <[email protected]>
---
mm/page_alloc.c | 24 +++++++++++++-----------
1 file changed, 13 insertions(+), 11 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c99566a3b67e..3589333b5b77 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1269,16 +1269,10 @@ static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags)
PageSkipKASanPoison(page);
}
-static void kernel_init_free_pages(struct page *page, int numpages, bool zero_tags)
+static void kernel_init_free_pages(struct page *page, int numpages)
{
int i;
- if (zero_tags) {
- for (i = 0; i < numpages; i++)
- tag_clear_highpage(page + i);
- return;
- }
-
/* s390's use of memset() could override KASAN redzones. */
kasan_disable_current();
for (i = 0; i < numpages; i++) {
@@ -1372,7 +1366,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
bool init = want_init_on_free();
if (init)
- kernel_init_free_pages(page, 1 << order, false);
+ kernel_init_free_pages(page, 1 << order);
if (!skip_kasan_poison)
kasan_poison_pages(page, order, init);
}
@@ -2415,9 +2409,17 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags);
kasan_unpoison_pages(page, order, init);
- if (init)
- kernel_init_free_pages(page, 1 << order,
- gfp_flags & __GFP_ZEROTAGS);
+
+ if (init) {
+ if (gfp_flags & __GFP_ZEROTAGS) {
+ int i;
+
+ for (i = 0; i < 1 << order; i++)
+ tag_clear_highpage(page + i);
+ } else {
+ kernel_init_free_pages(page, 1 << order);
+ }
+ }
}
set_page_owner(page, order, gfp_flags);
--
2.25.1
On Tue, Nov 30, 2021 at 10:40 PM <[email protected]> wrote:
>
> From: Andrey Konovalov <[email protected]>
>
> Currently, kernel_init_free_pages() serves two purposes: either only
Nit: "it either"
> zeroes memory or zeroes both memory and memory tags via a different
> code path. As this function has only two callers, each using only one
> code path, this behaviour is confusing.
>
> This patch pulls the code that zeroes both memory and tags out of
> kernel_init_free_pages().
>
> As a result of this change, the code in free_pages_prepare() starts to
> look complicated, but this is improved in the few following patches.
> Those improvements are not integrated into this patch to make diffs
> easier to read.
>
> This patch does no functional changes.
>
> Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Alexander Potapenko <[email protected]>
> [...]
--
Alexander Potapenko
Software Engineer
Google Germany GmbH