From: Sergei Trofimovich
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Sergei Trofimovich, Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Daniel Bristot de Oliveira, Andrew Morton
Subject: [PATCH] mm: page_owner: detect page_owner recursion via task_struct
Date: Thu, 1 Apr 2021 23:30:10 +0100
Message-Id: <20210401223010.3580480-1-slyfox@gentoo.org>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Before the change, page_owner recursion was detected by fetching a
backtrace and inspecting it for the current instruction pointer. That
approach has a few problems:

- it is slightly slow, as it requires an extra backtrace and a linear
  stack scan of the result
- the check happens too late to help when fetching the backtrace itself
  requires memory allocation (ia64's unwinder does).
To simplify recursion tracking, let's use a page_owner recursion depth
counter in 'struct task_struct'. The change makes page_owner=on work on
ia64 by avoiding infinite recursion in:

  kmalloc()
  -> __set_page_owner()
  -> save_stack()
  -> unwind() [ia64-specific]
  -> build_script()
  -> kmalloc()
  -> __set_page_owner() [we short-circuit here]
  -> save_stack()
  -> unwind() [recursion]

CC: Ingo Molnar
CC: Peter Zijlstra
CC: Juri Lelli
CC: Vincent Guittot
CC: Dietmar Eggemann
CC: Steven Rostedt
CC: Ben Segall
CC: Mel Gorman
CC: Daniel Bristot de Oliveira
CC: Andrew Morton
CC: linux-mm@kvack.org
Signed-off-by: Sergei Trofimovich
---
 include/linux/sched.h |  9 +++++++++
 init/init_task.c      |  3 +++
 mm/page_owner.c       | 41 +++++++++++++++++------------------------
 3 files changed, 29 insertions(+), 24 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index ef00bb22164c..35771703fd89 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1371,6 +1371,15 @@ struct task_struct {
 	struct llist_head		kretprobe_instances;
 #endif
 
+#ifdef CONFIG_PAGE_OWNER
+	/*
+	 * Used by page_owner=on to detect recursion in page tracking.
+	 * Is it fine to have non-atomic ops here if we ever access
+	 * this variable via current->page_owner_depth?
+	 */
+	unsigned int page_owner_depth;
+#endif
+
 	/*
 	 * New fields for task_struct should be added above here, so that
 	 * they are included in the randomized portion of task_struct.
diff --git a/init/init_task.c b/init/init_task.c
index 3711cdaafed2..f579f2b2eca8 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -213,6 +213,9 @@ struct task_struct init_task
 #ifdef CONFIG_SECCOMP
 	.seccomp	= { .filter_count = ATOMIC_INIT(0) },
 #endif
+#ifdef CONFIG_PAGE_OWNER
+	.page_owner_depth = 0,
+#endif
 };
 EXPORT_SYMBOL(init_task);

diff --git a/mm/page_owner.c b/mm/page_owner.c
index 7147fd34a948..422558605fcc 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -20,6 +20,16 @@
  */
 #define PAGE_OWNER_STACK_DEPTH (16)
 
+/*
+ * How many reenters we allow to page_owner.
+ *
+ * Sometimes metadata allocation tracking requires more memory to be allocated:
+ * - when new stack trace is saved to stack depot
+ * - when backtrace itself is calculated (ia64)
+ * Instead of falling to infinite recursion give it a chance to recover.
+ */
+#define PAGE_OWNER_MAX_RECURSION_DEPTH (1)
+
 struct page_owner {
 	unsigned short order;
 	short last_migrate_reason;
@@ -97,42 +107,25 @@ static inline struct page_owner *get_page_owner(struct page_ext *page_ext)
 	return (void *)page_ext + page_owner_ops.offset;
 }
 
-static inline bool check_recursive_alloc(unsigned long *entries,
-					 unsigned int nr_entries,
-					 unsigned long ip)
-{
-	unsigned int i;
-
-	for (i = 0; i < nr_entries; i++) {
-		if (entries[i] == ip)
-			return true;
-	}
-	return false;
-}
-
 static noinline depot_stack_handle_t save_stack(gfp_t flags)
 {
 	unsigned long entries[PAGE_OWNER_STACK_DEPTH];
 	depot_stack_handle_t handle;
 	unsigned int nr_entries;
 
-	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 2);
-
-	/*
-	 * We need to check recursion here because our request to
-	 * stackdepot could trigger memory allocation to save new
-	 * entry. New memory allocation would reach here and call
-	 * stack_depot_save_entries() again if we don't catch it. There is
-	 * still not enough memory in stackdepot so it would try to
-	 * allocate memory again and loop forever.
-	 */
-	if (check_recursive_alloc(entries, nr_entries, _RET_IP_))
+	/* Avoid recursion. Used in stack trace generation code. */
+	if (current->page_owner_depth >= PAGE_OWNER_MAX_RECURSION_DEPTH)
 		return dummy_handle;
+	current->page_owner_depth++;
+
+	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 2);
+
 	handle = stack_depot_save(entries, nr_entries, flags);
 	if (!handle)
 		handle = failure_handle;
 
+	current->page_owner_depth--;
 	return handle;
 }
-- 
2.31.1