Date: Wed, 23 Oct 2019 15:37:20 +0200
From: Michal Hocko
To: Vlastimil Babka
Cc: Andrew Morton, Mel Gorman, Waiman Long, Johannes Weiner,
 Roman Gushchin, Konstantin Khlebnikov, Jann Horn, Song Liu,
 Greg Kroah-Hartman, Rafael Aquini, linux-mm@kvack.org, LKML
Subject: Re: [RFC PATCH 2/2] mm, vmstat: reduce zone->lock holding time by /proc/pagetypeinfo
Message-ID: <20191023133720.GA17610@dhcp22.suse.cz>
References: <20191023095607.GE3016@techsingularity.net>
 <20191023102737.32274-1-mhocko@kernel.org>
 <20191023102737.32274-3-mhocko@kernel.org>
 <30211965-8ad0-416d-0fe1-113270bd1ea8@suse.cz>
In-Reply-To: <30211965-8ad0-416d-0fe1-113270bd1ea8@suse.cz>
User-Agent: Mutt/1.10.1 (2018-07-13)
On Wed 23-10-19 15:32:05, Vlastimil Babka wrote:
> On 10/23/19 12:27 PM, Michal Hocko wrote:
> > From: Michal Hocko
> >
> > pagetypeinfo_showfree_print is called with zone->lock held in irq mode.
> > This is not really nice because it blocks both any interrupts on that
> > cpu and the page allocator. On large machines this might even trigger
> > the hard lockup detector.
> >
> > Considering pagetypeinfo is a debugging tool, we do not really need
> > exact numbers here. The primary reason to look at the output is to see
> > how pageblocks are spread among different migratetypes, so putting a
> > bound on the number of pages on the free_list sounds like a reasonable
> > tradeoff.
> >
> > The new output will simply tell
> > [...]
> > Node 6, zone Normal, type Movable >100000 >100000 >100000 >100000 41019 31560 23996 10054 3229 983 648
> >
> > instead of
> > Node 6, zone Normal, type Movable 399568 294127 221558 102119 41019 31560 23996 10054 3229 983 648
> >
> > The limit has been chosen arbitrarily and is subject to future change
> > should there be a need for that.
> >
> > Suggested-by: Andrew Morton
> > Signed-off-by: Michal Hocko
>
> Hmm, dunno. I would rather e.g. hide the file behind some config or boot
> option than do this. Or move it to /sys/kernel/debug?

But those wouldn't really help to prevent the lockup, right? Besides
that, who would enable that config, and how much of a difference would
root-only vs. debugfs make? Is the incomplete value a real problem?
> > ---
> >  mm/vmstat.c | 19 ++++++++++++++++++-
> >  1 file changed, 18 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/vmstat.c b/mm/vmstat.c
> > index 4e885ecd44d1..762034fc3b83 100644
> > --- a/mm/vmstat.c
> > +++ b/mm/vmstat.c
> > @@ -1386,8 +1386,25 @@ static void pagetypeinfo_showfree_print(struct seq_file *m,
> >
> >  		area = &(zone->free_area[order]);
> >
> > -		list_for_each(curr, &area->free_list[mtype])
> > +		list_for_each(curr, &area->free_list[mtype]) {
> >  			freecount++;
> > +			/*
> > +			 * Cap the free_list iteration because it might
> > +			 * be really large and we are under a spinlock
> > +			 * so a long time spent here could trigger a
> > +			 * hard lockup detector. Anyway this is a
> > +			 * debugging tool so knowing there is a handful
> > +			 * of pages in this order should be more than
> > +			 * sufficient
> > +			 */
> > +			if (freecount > 100000) {
> > +				seq_printf(m, ">%6lu ", freecount);
> > +				spin_unlock_irq(&zone->lock);
> > +				cond_resched();
> > +				spin_lock_irq(&zone->lock);
> > +				continue;
> > +			}
> > +		}
> >  		seq_printf(m, "%6lu ", freecount);
> >  	}
> >  	seq_putc(m, '\n');

-- 
Michal Hocko
SUSE Labs