Subject: Re: CONFIG_HOLES_IN_ZONE and memory hot plug code on x86_64
From: Yinghai Lu
To: Steffen Persvold
Cc: x86, LKML
Date: Fri, 28 Aug 2015 08:57:44 -0700

On Thu, Aug 27, 2015 at 11:42 PM, Steffen Persvold wrote:
>> Can you post whole log with SRAT related info?
>
> I can probably reproduce again and get full logs when I get run time on the system again, but here's some output that we saved in our internal Jira case:
>
> [ 0.000000] NUMA: Initialized distance table, cnt=6
> [ 0.000000] NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xd7ffffff] -> [mem 0x00000000-0xd7ffffff]
> [ 0.000000] NUMA: Node 0 [mem 0x00000000-0xd7ffffff] + [mem 0x100000000-0x427ffffff] -> [mem 0x00000000-0x427ffffff]
> [ 0.000000] NODE_DATA(0) allocated [mem 0x407fe3000-0x407ffffff]
> [ 0.000000] NODE_DATA(1) allocated [mem 0x807fe3000-0x807ffffff]
> [ 0.000000] NODE_DATA(2) allocated [mem 0xc07fe3000-0xc07ffffff]
> [ 0.000000] NODE_DATA(3) allocated [mem 0x1007fe3000-0x1007ffffff]
> [ 0.000000] NODE_DATA(4) allocated [mem 0x1407fe3000-0x1407ffffff]
> [ 0.000000] NODE_DATA(5) allocated [mem 0x1807fdd000-0x1807ff9fff]
> [ 0.000000] [ffffea0000000000-ffffea00101fffff] PMD -> [ffff8803f8600000-ffff880407dfffff] on node 0
> [ 0.000000] [ffffea0010a00000-ffffea00201fffff] PMD -> [ffff8807f8600000-ffff880807dfffff] on node 1
> [ 0.000000] [ffffea0020a00000-ffffea00301fffff] PMD -> [ffff880bf8600000-ffff880c07dfffff] on node 2
> [ 0.000000] [ffffea0030a00000-ffffea00401fffff] PMD -> [ffff880ff8600000-ffff881007dfffff] on node 3
> [ 0.000000] [ffffea0040a00000-ffffea00501fffff] PMD -> [ffff8813f8600000-ffff881407dfffff] on node 4
> [ 0.000000] [ffffea0050a00000-ffffea00601fffff] PMD -> [ffff8817f7e00000-ffff8818075fffff] on node 5
>
> If I remember correctly there was a mix of 4 GB and 8 GB DIMMs populated on this system. In addition, the firmware reserved 512 MB at the end of each memory controller's physical range (hence the reserved ranges in the e820 map).
>
> Note: this was with 4.1.0 vanilla, so it could be obsolete now with 4.2-rc. I have not yet tested with the latest patches that you and Tony discussed.
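The "NUMA: Node 0 [mem A] + [mem B] -> [mem C]" lines above are the x86 NUMA setup code coalescing adjacent ranges that belong to the same node; in the kernel this happens in numa_cleanup_meminfo() in arch/x86/mm/numa.c. Here is a minimal user-space sketch of just that merge step; struct memblk and merge_blk are simplified stand-ins for the kernel's struct numa_memblk handling, not the real code:

#include <stdio.h>

/* Simplified stand-in for struct numa_memblk; ranges are inclusive
 * [start, end] to match the dmesg format above. */
struct memblk {
	unsigned long long start, end;
	int nid;			/* owning NUMA node */
};

/* Grow *a to also cover *b.  The kernel only does this after checking
 * that both blocks have the same nid and that no other node owns
 * memory in the gap between them. */
static void merge_blk(struct memblk *a, const struct memblk *b)
{
	if (b->start < a->start)
		a->start = b->start;
	if (b->end > a->end)
		a->end = b->end;
}

int main(void)
{
	/* Node 0's two ranges from the quoted log: low RAM plus the
	 * block remapped above 4 GB. */
	struct memblk lo = { 0x00000000ULL, 0xd7ffffffULL, 0 };
	struct memblk hi = { 0x100000000ULL, 0x427ffffffULL, 0 };

	merge_blk(&lo, &hi);
	/* Prints "node 0: [mem 0-0x427ffffff]", matching the dmesg line;
	 * note the 0xd8000000-0xffffffff PCI hole is absorbed into the
	 * node span, which is where holes inside a zone come from. */
	printf("node %d: [mem %#llx-%#llx]\n", lo.nid, lo.start, lo.end);
	return 0;
}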
We still need to see your SRAT table layout, like the one from Tony's setup:

[ 0.000000] SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
[ 0.000000] SRAT: Node 0 PXM 0 [mem 0x100000000-0xfffffffff]
[ 0.000000] SRAT: Node 0 PXM 0 [mem 0x1000000000-0x1d6fffffff]
[ 0.000000] SRAT: Node 1 PXM 1 [mem 0x1d70000000-0x2c17ffffff]
[ 0.000000] SRAT: Node 1 PXM 1 [mem 0x2c18000000-0x3abfffffff]
[ 0.000000] SRAT: Node 2 PXM 2 [mem 0x3ac0000000-0x4967ffffff]
[ 0.000000] SRAT: Node 2 PXM 2 [mem 0x4968000000-0x580fffffff]
[ 0.000000] SRAT: Node 3 PXM 3 [mem 0x5810000000-0x66b7ffffff]
[ 0.000000] SRAT: Node 3 PXM 3 [mem 0x66b8000000-0x755fffffff]
[ 0.000000] NUMA: Initialized distance table, cnt=4
[ 0.000000] NUMA: Node 0 [mem 0x00000000-0x7fffffff] + [mem 0x100000000-0xfffffffff] -> [mem 0x00000000-0xfffffffff]
[ 0.000000] NUMA: Node 0 [mem 0x00000000-0xfffffffff] + [mem 0x1000000000-0x1d6fffffff] -> [mem 0x00000000-0x1d6fffffff]
[ 0.000000] NUMA: Node 1 [mem 0x1d70000000-0x2c17ffffff] + [mem 0x2c18000000-0x3abfffffff] -> [mem 0x1d70000000-0x3abfffffff]
[ 0.000000] NUMA: Node 2 [mem 0x3ac0000000-0x4967ffffff] + [mem 0x4968000000-0x580fffffff] -> [mem 0x3ac0000000-0x580fffffff]
[ 0.000000] NUMA: Node 3 [mem 0x5810000000-0x66b7ffffff] + [mem 0x66b8000000-0x755fffffff] -> [mem 0x5810000000-0x755fffffff]
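For reference on the subject line: CONFIG_HOLES_IN_ZONE makes the page allocator re-validate every pfn inside a MAX_ORDER-aligned block instead of assuming zones are hole-free, which is what tolerates e820 gaps like the 512 MB per-controller firmware reservations described above. Paraphrased from include/linux/mmzone.h as of the v4.1-era kernels discussed here:

/* With CONFIG_HOLES_IN_ZONE, each pfn inside a MAX_ORDER block is
 * re-checked with pfn_valid(); without it, mm code assumes the whole
 * block is valid memory, which is cheaper but breaks when SRAT/e820
 * leave holes inside a zone. */
#ifdef CONFIG_HOLES_IN_ZONE
#define pfn_valid_within(pfn)	pfn_valid(pfn)
#else
#define pfn_valid_within(pfn)	(1)
#endif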