Subject: [PATCH] x86/mm: avoid truncating memblocks for SGX memory
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Dave Hansen, fan.du@intel.com,
    reinette.chatre@intel.com, jarkko@kernel.org, dan.j.williams@intel.com,
    dave.hansen@intel.com, x86@kernel.org, linux-sgx@vger.kernel.org,
    luto@kernel.org, peterz@infradead.org
From: Dave Hansen
Date: Thu, 17 Jun 2021 12:46:57 -0700
Message-Id: <20210617194657.0A99CB22@viggo.jf.intel.com>

From: Fan Du

tl;dr: Several SGX users reported seeing the following message on
NUMA systems:

  sgx: [Firmware Bug]: Unable to map EPC section to online node. Fallback to the NUMA node 0.

This turned out to be the 'memblock' code mistakenly throwing away
SGX memory.

=== Full Changelog ===

The 'max_pfn' variable represents the highest known RAM address.  It
can be used, for instance, to quickly determine for which physical
addresses there is mem_map[] space allocated.  The numa_meminfo code
makes an effort to throw out ("trim") all memory blocks which are
above 'max_pfn'.

SGX memory is not considered RAM (it is marked as "Reserved" in the
e820) and is not taken into account by max_pfn.  Despite this, SGX
memory areas have NUMA affinity and are enumerated in the ACPI SRAT.

The existing SGX code uses the numa_meminfo mechanism to look up the
NUMA affinity for its memory areas.  In cases where SGX memory was
above max_pfn (usually just the one EPC section in the last, highest
NUMA node), the numa_memblock is truncated at 'max_pfn', which is
below the SGX memory.  When the SGX code tries to look up the
affinity of this memory, it fails and produces an error message:

  sgx: [Firmware Bug]: Unable to map EPC section to online node. Fallback to the NUMA node 0.

and assigns the memory to NUMA node 0.

Instead of silently truncating the memory block at 'max_pfn' and
dropping the SGX memory, add the truncated portion to
'numa_reserved_meminfo'.  This allows the SGX code to later determine
the NUMA affinity of its 'Reserved' area.

Without this patch, numa_meminfo looks like this (from 'crash'):

  blk = { start =          0x0, end = 0x2080000000, nid = 0x0 }
        { start = 0x2080000000, end = 0x4000000000, nid = 0x1 }

numa_reserved_meminfo is empty.

After the patch, numa_meminfo looks like this:

  blk = { start =          0x0, end = 0x2080000000, nid = 0x0 }
        { start = 0x2080000000, end = 0x4000000000, nid = 0x1 }

and numa_reserved_meminfo has an entry for node 1's SGX memory:

  blk = { start = 0x4000000000, end = 0x4080000000, nid = 0x1 }

	[ daveh: completely rewrote/reworked changelog ]

Signed-off-by: Fan Du
Reported-by: Reinette Chatre
Reviewed-by: Jarkko Sakkinen
Reviewed-by: Dan Williams
Reviewed-by: Dave Hansen
Fixes: 5d30f92e7631 ("x86/NUMA: Provide a range-to-target_node lookup facility")
Cc: x86@kernel.org
Cc: linux-sgx@vger.kernel.org
Cc: Andy Lutomirski
Cc: Peter Zijlstra
---

 b/arch/x86/mm/numa.c |    8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff -puN arch/x86/mm/numa.c~sgx-srat arch/x86/mm/numa.c
--- a/arch/x86/mm/numa.c~sgx-srat	2021-06-17 11:23:05.116159990 -0700
+++ b/arch/x86/mm/numa.c	2021-06-17 11:55:46.117155100 -0700
@@ -254,7 +254,13 @@ int __init numa_cleanup_meminfo(struct n
 
 		/* make sure all non-reserved blocks are inside the limits */
 		bi->start = max(bi->start, low);
-		bi->end = min(bi->end, high);
+
+		/* preserve info for non-RAM areas above 'max_pfn': */
+		if (bi->end > high) {
+			numa_add_memblk_to(bi->nid, high, bi->end,
+					   &numa_reserved_meminfo);
+			bi->end = high;
+		}
 
 		/* and there's no empty block */
 		if (bi->start >= bi->end)
_
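
For illustration, below is a standalone userspace sketch of the
trim-and-preserve logic in the hunk above.  This is not kernel code:
the types are toy stand-ins for the kernel's numa_meminfo structures,
HIGH plays the role of 'max_pfn' as an address, and the in-place
stash replaces the kernel's numa_add_memblk_to().  The addresses are
taken from the 'crash' dumps in the changelog.

	/*
	 * Toy model of numa_cleanup_meminfo()'s trim step, for
	 * illustration only; compiles and runs as plain userspace C.
	 */
	#include <stdio.h>
	#include <inttypes.h>

	struct blk { uint64_t start, end; int nid; };

	#define HIGH 0x4000000000ULL	/* 'max_pfn' as an address */

	static struct blk meminfo[] = {
		{ 0x0,             0x2080000000ULL, 0 },
		/* node 1's SRAT block includes SGX EPC above 'max_pfn': */
		{ 0x2080000000ULL, 0x4080000000ULL, 1 },
	};
	static struct blk reserved[4];
	static int nr_reserved;

	int main(void)
	{
		for (size_t i = 0; i < sizeof(meminfo) / sizeof(meminfo[0]); i++) {
			struct blk *bi = &meminfo[i];

			/*
			 * Old behavior was effectively
			 * bi->end = min(bi->end, HIGH), silently dropping
			 * the SGX tail.  New behavior stashes the tail so
			 * its NUMA affinity can be looked up later.
			 */
			if (bi->end > HIGH) {
				reserved[nr_reserved++] =
					(struct blk){ HIGH, bi->end, bi->nid };
				bi->end = HIGH;
			}
			printf("meminfo:  { %#" PRIx64 " - %#" PRIx64 " } nid %d\n",
			       bi->start, bi->end, bi->nid);
		}
		for (int i = 0; i < nr_reserved; i++)
			printf("reserved: { %#" PRIx64 " - %#" PRIx64 " } nid %d\n",
			       reserved[i].start, reserved[i].end, reserved[i].nid);
		return 0;
	}

Running it reproduces the before/after split shown in the changelog:
node 1's meminfo block ends at 0x4000000000 and the truncated
0x4000000000-0x4080000000 SGX range lands in reserved[] with nid 1
instead of being thrown away.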