From: Wei Yang <richard.weiyang@linux.alibaba.com>
To: akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, david@redhat.com,
	Wei Yang <richard.weiyang@linux.alibaba.com>
Subject: [Patch v2] mm/sparse: only sub-section aligned range would be populated
Date: Fri, 3 Jul 2020 11:18:28 +0800
Message-Id: <20200703031828.14645-1-richard.weiyang@linux.alibaba.com>
X-Mailer: git-send-email 2.20.1 (Apple Git-117)

There are two code paths which invoke __populate_section_memmap():

  * sparse_init_nid()
  * sparse_add_section()

For both cases, we are sure the memory range is sub-section aligned:
  * we pass PAGES_PER_SECTION to sparse_init_nid()
  * we check the range with check_pfn_span() before calling
    sparse_add_section()

Also, in the counterpart of __populate_section_memmap(), we don't do such
calculation and check, since the range is already checked by
check_pfn_span() in __remove_pages().

Remove the calculation and check to keep it simple and consistent with its
counterpart.

Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>

---
v2:
  * add a WARN_ON_ONCE() for an unaligned range, suggested by David
---
 mm/sparse-vmemmap.c | 20 ++++++--------------
 1 file changed, 6 insertions(+), 14 deletions(-)

diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 0db7738d76e9..8d3a1b6287c5 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -247,20 +247,12 @@ int __meminit vmemmap_populate_basepages(unsigned long start,
 struct page * __meminit __populate_section_memmap(unsigned long pfn,
 		unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
 {
-	unsigned long start;
-	unsigned long end;
-
-	/*
-	 * The minimum granularity of memmap extensions is
-	 * PAGES_PER_SUBSECTION as allocations are tracked in the
-	 * 'subsection_map' bitmap of the section.
-	 */
-	end = ALIGN(pfn + nr_pages, PAGES_PER_SUBSECTION);
-	pfn &= PAGE_SUBSECTION_MASK;
-	nr_pages = end - pfn;
-
-	start = (unsigned long) pfn_to_page(pfn);
-	end = start + nr_pages * sizeof(struct page);
+	unsigned long start = (unsigned long) pfn_to_page(pfn);
+	unsigned long end = start + nr_pages * sizeof(struct page);
+
+	if (WARN_ON_ONCE(!IS_ALIGNED(pfn, PAGES_PER_SUBSECTION) ||
+			 !IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION)))
+		return NULL;
 
 	if (vmemmap_populate(start, end, nid, altmap))
 		return NULL;
-- 
2.20.1 (Apple Git-117)