From: Sergei Trofimovich
To: Andrew Morton, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Sergei Trofimovich, Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Daniel Bristot de Oliveira
Subject: [PATCH v2] mm: page_owner: detect page_owner recursion via task_struct
Date: Fri, 2 Apr 2021 12:53:42 +0100
Message-Id: <20210402115342.1463781-1-slyfox@gentoo.org>
In-Reply-To: <20210402125039.671f1f40@sf>
References: <20210402125039.671f1f40@sf>
X-Mailer: git-send-email 2.31.1

Before the change, page_owner recursion was detected by fetching a
backtrace and inspecting it for the current instruction pointer.
It has a few problems:

- it is slightly slow, as it requires an extra backtrace and a linear
  stack scan of the result
- it is too late to check whether fetching the backtrace itself required
  a memory allocation (ia64's unwinder requires it)

To simplify recursion tracking, let's use a page_owner recursion flag
in 'struct task_struct'.

The change makes page_owner=on work on ia64 by avoiding infinite
recursion in:

  kmalloc()
  -> __set_page_owner()
  -> save_stack()
  -> unwind()            [ia64-specific]
  -> build_script()
  -> kmalloc()
  -> __set_page_owner()  [we short-circuit here]
  -> save_stack()
  -> unwind()            [recursion]

CC: Ingo Molnar
CC: Peter Zijlstra
CC: Juri Lelli
CC: Vincent Guittot
CC: Dietmar Eggemann
CC: Steven Rostedt
CC: Ben Segall
CC: Mel Gorman
CC: Daniel Bristot de Oliveira
CC: Andrew Morton
CC: linux-mm@kvack.org
Signed-off-by: Sergei Trofimovich
---
Change since v1:
- use a bit from task_struct instead of a new field
- track only one level of recursion depth so far

 include/linux/sched.h |  4 ++++
 mm/page_owner.c       | 32 ++++++++++----------------------
 2 files changed, 14 insertions(+), 22 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index ef00bb22164c..00986450677c 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -841,6 +841,10 @@ struct task_struct {
 	/* Stalled due to lack of memory */
 	unsigned			in_memstall:1;
 #endif
+#ifdef CONFIG_PAGE_OWNER
+	/* Used by page_owner=on to detect recursion in page tracking. */
+	unsigned			in_page_owner:1;
+#endif
 
 	unsigned long			atomic_flags; /* Flags requiring atomic access. */
 
diff --git a/mm/page_owner.c b/mm/page_owner.c
index 7147fd34a948..64b2e4c6afb7 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -97,42 +97,30 @@ static inline struct page_owner *get_page_owner(struct page_ext *page_ext)
 	return (void *)page_ext + page_owner_ops.offset;
 }
 
-static inline bool check_recursive_alloc(unsigned long *entries,
-					 unsigned int nr_entries,
-					 unsigned long ip)
-{
-	unsigned int i;
-
-	for (i = 0; i < nr_entries; i++) {
-		if (entries[i] == ip)
-			return true;
-	}
-	return false;
-}
-
 static noinline depot_stack_handle_t save_stack(gfp_t flags)
 {
 	unsigned long entries[PAGE_OWNER_STACK_DEPTH];
 	depot_stack_handle_t handle;
 	unsigned int nr_entries;
 
-	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 2);
-
 	/*
-	 * We need to check recursion here because our request to
-	 * stackdepot could trigger memory allocation to save new
-	 * entry. New memory allocation would reach here and call
-	 * stack_depot_save_entries() again if we don't catch it. There is
-	 * still not enough memory in stackdepot so it would try to
-	 * allocate memory again and loop forever.
+	 * Avoid recursion.
+	 *
+	 * Sometimes page metadata allocation tracking requires more
+	 * memory to be allocated:
+	 * - when new stack trace is saved to stack depot
+	 * - when backtrace itself is calculated (ia64)
 	 */
-	if (check_recursive_alloc(entries, nr_entries, _RET_IP_))
+	if (current->in_page_owner)
 		return dummy_handle;
+	current->in_page_owner = 1;
 
+	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 2);
 	handle = stack_depot_save(entries, nr_entries, flags);
 	if (!handle)
 		handle = failure_handle;
+	current->in_page_owner = 0;
 
 	return handle;
 }
-- 
2.31.1
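
The fix above is an instance of the generic "re-entrancy flag on the current
execution context" pattern. Purely for illustration, here is a minimal
userspace sketch of the same idea, with a thread-local flag standing in for
the task_struct bit; all names below are made up and are not part of the
patch or the kernel API:

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

/* Analogue of current->in_page_owner: one recursion flag per thread. */
static __thread bool in_tracker;

static void track_allocation(size_t size);

/* Allocation entry point that also records metadata about each request. */
static void *tracked_alloc(size_t size)
{
	track_allocation(size);         /* may allocate memory itself */
	return malloc(size);
}

static void track_allocation(size_t size)
{
	if (in_tracker)                 /* short-circuit the re-entrant call */
		return;
	in_tracker = true;

	/* Bookkeeping that itself allocates, re-entering tracked_alloc(). */
	void *meta = tracked_alloc(size / 2 + 1);
	free(meta);

	in_tracker = false;
}

int main(void)
{
	void *p = tracked_alloc(128);
	printf("allocated %p without recursing forever\n", p);
	free(p);
	return 0;
}

A single flag bit suffices because, as the changelog notes, only one level of
recursion needs to be suppressed; tracking nested re-entry would require a
counter instead.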