Date: Mon, 5 Jun 2017 14:35:11 -0400
From: Johannes Weiner
To: Michael Ellerman
Cc: Yury Norov, Heiko Carstens, Josef Bacik, Michal Hocko,
    Vladimir Davydov, Andrew Morton, Rik van Riel,
    linux-mm@kvack.org, cgroups@vger.kernel.org,
    linux-kernel@vger.kernel.org, kernel-team@fb.com,
    linux-s390@vger.kernel.org
Subject: Re: [PATCH 2/6] mm: vmstat: move slab statistics from zone to node counters
Message-ID: <20170605183511.GA8915@cmpxchg.org>
References: <20170530181724.27197-1-hannes@cmpxchg.org>
 <20170530181724.27197-3-hannes@cmpxchg.org>
 <20170531091256.GA5914@osiris>
 <20170531113900.GB5914@osiris>
 <20170531171151.e4zh7ffzbl4w33gd@yury-thinkpad>
 <87mv9s2f8f.fsf@concordia.ellerman.id.au>
In-Reply-To: <87mv9s2f8f.fsf@concordia.ellerman.id.au>

On Thu, Jun 01, 2017 at 08:07:28PM +1000, Michael Ellerman wrote:
> Yury Norov writes:
>
> > On Wed, May 31, 2017 at 01:39:00PM +0200, Heiko Carstens wrote:
> >> On Wed, May 31, 2017 at 11:12:56AM +0200, Heiko Carstens wrote:
> >> > On Tue, May 30, 2017 at 02:17:20PM -0400, Johannes Weiner wrote:
> >> > > To re-implement slab cache vs. page cache balancing, we'll need the
> >> > > slab counters at the lruvec level, which, ever since lru reclaim was
> >> > > moved from the zone to the node, is the intersection of the node, not
> >> > > the zone, and the memcg.
> >> > >
> >> > > We could retain the per-zone counters for when the page allocator
> >> > > dumps its memory information on failures, and have counters on both
> >> > > levels - which on all but NUMA node 0 is usually redundant. But let's
> >> > > keep it simple for now and just move them. If anybody complains we can
> >> > > restore the per-zone counters.
> >> > >
> >> > > Signed-off-by: Johannes Weiner
> >> >
> >> > This patch causes an early boot crash on s390 (linux-next as of today).
> >> > CONFIG_NUMA on/off doesn't make any difference. I haven't looked any
> >> > further into this yet, maybe you have an idea?
> >
> > The same on arm64.
>
> And powerpc.

It looks like we need the following on top. I can't reproduce the
crash, but it's verifiable with WARN_ONs in the vmstat functions that
the nodestat array isn't properly initialized when slab bootstraps:

---
From 89ed86b5b538d8debd3c29567d7e1d31257fa577 Mon Sep 17 00:00:00 2001
From: Johannes Weiner
Date: Mon, 5 Jun 2017 14:12:15 -0400
Subject: [PATCH] mm: vmstat: move slab statistics from zone to node counters fix

Unable to handle kernel paging request at virtual address 2e116007
pgd = c0004000
[2e116007] *pgd=00000000
Internal error: Oops: 5 [#1] SMP ARM
Modules linked in:
CPU: 0 PID: 0 Comm: swapper Not tainted 4.12.0-rc3-00153-gb6bc6724488a #200
Hardware name: Generic DRA74X (Flattened Device Tree)
task: c0d0adc0 task.stack: c0d00000
PC is at __mod_node_page_state+0x2c/0xc8
LR is at __per_cpu_offset+0x0/0x8
pc : []    lr : []    psr: 600000d3
sp : c0d01eec  ip : 00000000  fp : c15782f4
r10: 00000000  r9 : c1591280  r8 : 00004000
r7 : 00000001  r6 : 00000006  r5 : 2e116000  r4 : 00000007
r3 : 00000007  r2 : 00000001  r1 : 00000006  r0 : c0dc27c0
Flags: nZCv  IRQs off  FIQs off  Mode SVC_32  ISA ARM  Segment none
Control: 10c5387d  Table: 8000406a  DAC: 00000051
Process swapper (pid: 0, stack limit = 0xc0d00218)
Stack: (0xc0d01eec to 0xc0d02000)
1ee0:                            600000d3 c0dc27c0 c0271efc 00000001 c0d58864
1f00: ef470000 00008000 00004000 c029fbb0 01000000 c1572b5c 00002000 00000000
1f20: 00000001 00000001 00008000 c029f584 00000000 c0d58864 00008000 00008000
1f40: 01008000 c0c23790 c15782f4 a00000d3 c0d58864 c02a0364 00000000 c0819388
1f60: c0d58864 000000c0 01000000 c1572a58 c0aa57a4 00000080 00002000 c0dca000
1f80: efffe980 c0c53a48 00000000 c0c23790 c1572a58 c0c59e48 c0c59de8 c1572b5c
1fa0: c0dca000 c0c257a4 00000000 ffffffff c0dca000 c0d07940 c0dca000 c0c00a9c
1fc0: ffffffff ffffffff 00000000 c0c00680 00000000 c0c53a48 c0dca214 c0d07958
1fe0: c0c53a44 c0d0caa4 8000406a 412fc0f2 00000000 8000807c 00000000 00000000
[] (__mod_node_page_state) from [] (mod_node_page_state+0x2c/0x4c)
[] (mod_node_page_state) from [] (cache_alloc_refill+0x5b8/0x828)
[] (cache_alloc_refill) from [] (kmem_cache_alloc+0x24c/0x2d0)
[] (kmem_cache_alloc) from [] (create_kmalloc_cache+0x20/0x8c)
[] (create_kmalloc_cache) from [] (kmem_cache_init+0xac/0x11c)
[] (kmem_cache_init) from [] (start_kernel+0x1b8/0x3c0)
[] (start_kernel) from [<8000807c>] (0x8000807c)
Code: e79e5103 e28c3001 e0833001 e1a04003 (e19440d5)
---[ end trace 0000000000000000 ]---

The zone counters work earlier than the node counters because the
zones have special boot pagesets, whereas the nodes do not. Add boot
nodestats against which we account until the dynamic per-cpu
allocator is available.
Signed-off-by: Johannes Weiner
---
 mm/page_alloc.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5f89cfaddc4b..7f341f84b587 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5107,6 +5107,7 @@ static void build_zonelists(pg_data_t *pgdat)
  */
 static void setup_pageset(struct per_cpu_pageset *p, unsigned long batch);
 static DEFINE_PER_CPU(struct per_cpu_pageset, boot_pageset);
+static DEFINE_PER_CPU(struct per_cpu_nodestat, boot_nodestats);
 static void setup_zone_pageset(struct zone *zone);
 
 /*
@@ -6010,6 +6011,8 @@ static void __paginginit free_area_init_core(struct pglist_data *pgdat)
 	spin_lock_init(&pgdat->lru_lock);
 	lruvec_init(node_lruvec(pgdat));
 
+	pgdat->per_cpu_nodestats = &boot_nodestats;
+
 	for (j = 0; j < MAX_NR_ZONES; j++) {
 		struct zone *zone = pgdat->node_zones + j;
 		unsigned long size, realsize, freesize, memmap_pages;
-- 
2.13.0