From: Anshuman Khandual <anshuman.khandual@arm.com>
To: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-mm@kvack.org, akpm@linux-foundation.org, will.deacon@arm.com,
	catalin.marinas@arm.com
Cc: mhocko@suse.com, mgorman@techsingularity.net, james.morse@arm.com,
	mark.rutland@arm.com, robin.murphy@arm.com, cpandya@codeaurora.org,
	arunks@codeaurora.org, dan.j.williams@intel.com, osalvador@suse.de,
	logang@deltatee.com, david@redhat.com, cai@lca.pw
Subject: [RFC 2/2] arm64/mm: Enable ZONE_DEVICE for all page configs
Date: Thu, 4 Apr 2019 15:16:50 +0530
Message-Id: <1554371210-24736-2-git-send-email-anshuman.khandual@arm.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1554371210-24736-1-git-send-email-anshuman.khandual@arm.com>
References: <1554265806-11501-1-git-send-email-anshuman.khandual@arm.com>
	<1554371210-24736-1-git-send-email-anshuman.khandual@arm.com>

Now that vmemmap_populate_basepages() supports struct vmem_altmap based
allocations, ZONE_DEVICE can be functional across all page size configs.
vmemmap_populate_basepages() now takes the actual struct vmem_altmap for
the allocation, and remove_pagetable() must accommodate the resulting PTE
level vmemmap mappings. Just drop the ARM64_4K_PAGES dependency from the
ARCH_HAS_ZONE_DEVICE selection.

Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
 arch/arm64/Kconfig  |  2 +-
 arch/arm64/mm/mmu.c | 10 +++++-----
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index b5d8cf57e220..4a37a33a4fe5 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -31,7 +31,7 @@ config ARM64
 	select ARCH_HAS_SYSCALL_WRAPPER
 	select ARCH_HAS_TEARDOWN_DMA_OPS if IOMMU_SUPPORT
 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
-	select ARCH_HAS_ZONE_DEVICE if ARM64_4K_PAGES
+	select ARCH_HAS_ZONE_DEVICE
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
 	select ARCH_INLINE_READ_LOCK if !PREEMPT
 	select ARCH_INLINE_READ_LOCK_BH if !PREEMPT
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 2859aa89cc4a..509ed7e547a3 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -818,8 +818,8 @@ static void __meminit free_pud_table(pud_t *pud_start, pgd_t *pgd, bool direct)
 #endif
 
 static void __meminit
-remove_pte_table(pte_t *pte_start, unsigned long addr,
-		 unsigned long end, bool direct)
+remove_pte_table(pte_t *pte_start, unsigned long addr, unsigned long end,
+		 bool direct, struct vmem_altmap *altmap)
 {
 	pte_t *pte;
 
@@ -829,7 +829,7 @@ remove_pte_table(pte_t *pte_start, unsigned long addr,
 			continue;
 
 		if (!direct)
-			free_pagetable(pte_page(*pte), 0);
+			free_huge_pagetable(pte_page(*pte), 0, altmap);
 		spin_lock(&init_mm.page_table_lock);
 		pte_clear(&init_mm, addr, pte);
 		spin_unlock(&init_mm.page_table_lock);
@@ -860,7 +860,7 @@ remove_pmd_table(pmd_t *pmd_start, unsigned long addr, unsigned long end,
 			continue;
 		}
 		pte_base = pte_offset_kernel(pmd, 0UL);
-		remove_pte_table(pte_base, addr, next, direct);
+		remove_pte_table(pte_base, addr, next, direct, altmap);
 		free_pte_table(pte_base, pmd, direct);
 	}
 }
@@ -921,7 +921,7 @@ remove_pagetable(unsigned long start, unsigned long end,
 int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 		struct vmem_altmap *altmap)
 {
-	return vmemmap_populate_basepages(start, end, node, NULL);
+	return vmemmap_populate_basepages(start, end, node, altmap);
 }
 #else	/* !ARM64_SWAPPER_USES_SECTION_MAPS */
 int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
-- 
2.20.1
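
For reference, the sketch below illustrates how the PTE level vmemmap
allocation is expected to consume the altmap once vmemmap_populate_basepages()
is handed a real struct vmem_altmap; the actual plumbing lives in the
companion [RFC 1/2] patch, which is not shown here. The function name
vmemmap_pte_populate_altmap() is hypothetical and used only for illustration,
while altmap_alloc_block_buf() and vmemmap_alloc_block_buf() are the existing
generic sparse-vmemmap allocators. Treat this as a sketch of the intended
allocation path, not as the code added by the series.

/*
 * Illustrative sketch only, not part of this patch. When an altmap is
 * supplied, the vmemmap (struct page array) backing a ZONE_DEVICE range
 * is carved out of the device memory itself rather than system RAM.
 */
#include <linux/mm.h>
#include <linux/memremap.h>
#include <asm/pgtable.h>

static pte_t * __meminit
vmemmap_pte_populate_altmap(pmd_t *pmd, unsigned long addr, int node,
			    struct vmem_altmap *altmap)
{
	pte_t *pte = pte_offset_kernel(pmd, addr);

	if (pte_none(*pte)) {
		void *p;

		/* Prefer the altmap reservation when one has been provided */
		if (altmap)
			p = altmap_alloc_block_buf(PAGE_SIZE, altmap);
		else
			p = vmemmap_alloc_block_buf(PAGE_SIZE, node);
		if (!p)
			return NULL;

		set_pte_at(&init_mm, addr, pte,
			   pfn_pte(__pa(p) >> PAGE_SHIFT, PAGE_KERNEL));
	}
	return pte;
}

This mirrors the generic vmemmap_pte_populate() path. The hot remove side
in the patch above is the counterpart: remove_pte_table() must hand such
PTE level vmemmap pages back via free_huge_pagetable() with the altmap
(so altmap backed pages are returned to the device reservation rather
than the buddy allocator), which is why it now carries the altmap argument.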