Subject: Re: [PATCH V6 3/3] arm64/mm: Enable memory hot remove
To: Mark Rutland, Steve Capper
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, akpm@linux-foundation.org,
    Catalin Marinas, Will Deacon, mhocko@suse.com, ira.weiny@intel.com,
    david@redhat.com, cai@lca.pw, logang@deltatee.com, James Morse,
    cpandya@codeaurora.org, arunks@codeaurora.org, dan.j.williams@intel.com,
    mgorman@techsingularity.net, osalvador@suse.de, Ard Biesheuvel, nd
References: <1560917860-26169-1-git-send-email-anshuman.khandual@arm.com>
    <1560917860-26169-4-git-send-email-anshuman.khandual@arm.com>
    <20190621143540.GA3376@capper-debian.cambridge.arm.com>
    <20190624165148.GA9847@lakrids.cambridge.arm.com>
From: Anshuman Khandual
Message-ID: <48f39fa1-c369-c8e2-4572-b7e016dca2d6@arm.com>
Date: Tue, 25 Jun 2019 10:57:07 +0530
In-Reply-To: <20190624165148.GA9847@lakrids.cambridge.arm.com>

On 06/24/2019 10:22 PM, Mark Rutland wrote:
> On Fri, Jun 21, 2019 at 03:35:53PM +0100, Steve Capper wrote:
>> Hi Anshuman,
>>
>> On Wed, Jun 19, 2019 at 09:47:40AM +0530, Anshuman Khandual wrote:
>>> The arch code for hot-remove must tear down portions of the linear map
>>> and vmemmap corresponding to memory being removed. In both cases the
>>> page tables mapping these regions must be freed, and when sparse
>>> vmemmap is in use the memory backing the vmemmap must also be freed.
>>>
>>> This patch adds a new remove_pagetable() helper which can be used to
>>> tear down either region, and calls it from vmemmap_free() and
>>> ___remove_pgd_mapping(). The sparse_vmap argument determines whether
>>> the backing memory will be freed.
>>>
>>> remove_pagetable() makes two distinct passes over the kernel page
>>> table. In the first pass it unmaps, invalidates applicable TLB cache
>>> and frees backing memory if required (vmemmap) for each mapped leaf
>>> entry. In the second pass it looks for empty page table sections whose
>>> page table page can be unmapped, TLB invalidated and freed.
>>>
>>> While freeing intermediate level page table pages bail out if any of
>>> its entries are still valid. This can happen for partially filled
>>> kernel page table either from a previously attempted failed memory hot
>>> add or while removing an address range which does not span the entire
>>> page table page range.
>>>
>>> The vmemmap region may share levels of table with the vmalloc region.
>>> There can be conflicts between hot remove freeing page table pages
>>> with a concurrent vmalloc() walking the kernel page table. This
>>> conflict can not just be solved by taking the init_mm ptl because of
>>> existing locking scheme in vmalloc(). Hence unlike linear mapping,
>>> skip freeing page table pages while tearing down vmemmap mapping.
>>>
>>> While here update arch_add_memory() to handle __add_pages() failures
>>> by just unmapping recently added kernel linear mapping. Now enable
>>> memory hot remove on arm64 platforms by default with
>>> ARCH_ENABLE_MEMORY_HOTREMOVE.
>>>
>>> This implementation is overall inspired from kernel page table tear
>>> down procedure on X86 architecture.
>>>
>>> Acked-by: David Hildenbrand
>>> Signed-off-by: Anshuman Khandual
>>> ---
>>
>> FWIW:
>> Acked-by: Steve Capper
>>
>> One minor comment below though.
>>
>>>  arch/arm64/Kconfig  |   3 +
>>>  arch/arm64/mm/mmu.c | 290 ++++++++++++++++++++++++++++++++++++++++++++++++++--
>>>  2 files changed, 284 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>>> index 6426f48..9375f26 100644
>>> --- a/arch/arm64/Kconfig
>>> +++ b/arch/arm64/Kconfig
>>> @@ -270,6 +270,9 @@ config HAVE_GENERIC_GUP
>>>  config ARCH_ENABLE_MEMORY_HOTPLUG
>>>  	def_bool y
>>>
>>> +config ARCH_ENABLE_MEMORY_HOTREMOVE
>>> +	def_bool y
>>> +
>>>  config SMP
>>>  	def_bool y
>>>
>>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>>> index 93ed0df..9e80a94 100644
>>> --- a/arch/arm64/mm/mmu.c
>>> +++ b/arch/arm64/mm/mmu.c
>>> @@ -733,6 +733,250 @@ int kern_addr_valid(unsigned long addr)
>>>
>>>  	return pfn_valid(pte_pfn(pte));
>>>  }
>>> +
>>> +#ifdef CONFIG_MEMORY_HOTPLUG
>>> +static void free_hotplug_page_range(struct page *page, size_t size)
>>> +{
>>> +	WARN_ON(!page || PageReserved(page));
>>> +	free_pages((unsigned long)page_address(page), get_order(size));
>>> +}
>>
>> We are dealing with a power-of-2 number of pages, so it makes a lot more
>> sense (to me) to replace the size parameter with order.
>>
>> Also, all the callers are for known compile-time sizes, so we could just
>> translate the size parameter as follows to remove any usage of
>> get_order()?
>>
>>   PAGE_SIZE -> 0
>>   PMD_SIZE  -> PMD_SHIFT - PAGE_SHIFT
>>   PUD_SIZE  -> PUD_SHIFT - PAGE_SHIFT
>
> Now that I look at this again, the above makes sense to me.
>
> I'd requested the current form (which I now realise is broken), since
> back in v2 the code looked like:
>
>   static void free_pagetable(struct page *page, int order)
>   {
>           ...
>           free_pages((unsigned long)page_address(page), order);
>           ...
>   }
>
> ... with callsites looking like:
>
>   free_pagetable(pud_page(*pud), get_order(PUD_SIZE));
>
> ... which I now see is off by PAGE_SHIFT, and we inherited that bug in
> the current code, so the calculated order is vastly larger than it
> should be. It's worrying that this doesn't seem to be caught by anything
> in testing. :/

get_order() returns the minimum page allocation order for a given size and
already accounts for PAGE_SHIFT, i.e. get_order(PAGE_SIZE) returns 0, so the
current code is not actually off by PAGE_SHIFT.

> Anshuman, could you please fold in Steve's suggested change? I'll look
> at the rest of the series shortly, so no need to resend that right away,
> but it would be worth sorting out.

get_order() is already optimized for built-in constants, but I will switch to
the explicit order constants Steve suggested if that is preferred.
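For reference, a minimal sketch of the order-based variant Steve suggests
could look like the code below; the include line and the example_callers()
helper are illustrative assumptions, not part of the actual patch:

  #include <linux/mm.h>

  /*
   * Sketch only: callers pass a compile-time order instead of a size,
   * so get_order() is no longer needed.
   */
  static void free_hotplug_page_range(struct page *page, int order)
  {
  	WARN_ON(!page || PageReserved(page));
  	free_pages((unsigned long)page_address(page), order);
  }

  /* Hypothetical callers, one per page table level: */
  static void example_callers(pte_t pte, pmd_t pmd, pud_t pud)
  {
  	free_hotplug_page_range(pte_page(pte), 0);                       /* PAGE_SIZE */
  	free_hotplug_page_range(pmd_page(pmd), PMD_SHIFT - PAGE_SHIFT);  /* PMD_SIZE  */
  	free_hotplug_page_range(pud_page(pud), PUD_SHIFT - PAGE_SHIFT);  /* PUD_SIZE  */
  }

For these compile-time sizes the two forms are equivalent, since
get_order(PMD_SIZE) evaluates to PMD_SHIFT - PAGE_SHIFT and
get_order(PUD_SIZE) to PUD_SHIFT - PAGE_SHIFT; the suggested change is about
readability rather than correctness.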