From: Joerg Roedel
To: x86@kernel.org
Cc: hpa@zytor.com, Dave Hansen, Andy Lutomirski, Peter Zijlstra,
    rjw@rjwysocki.net, Arnd Bergmann, Andrew Morton, Steven Rostedt,
    Vlastimil Babka, Michal Hocko, Matthew Wilcox, Joerg Roedel,
    joro@8bytes.org, linux-kernel@vger.kernel.org, linux-acpi@vger.kernel.org,
    linux-arch@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3 1/7] mm: Add functions to track page directory modifications
Date: Fri, 15 May 2020 16:00:17 +0200
Message-Id: <20200515140023.25469-2-joro@8bytes.org>
In-Reply-To: <20200515140023.25469-1-joro@8bytes.org>
References: <20200515140023.25469-1-joro@8bytes.org>

From: Joerg Roedel

Add page-table allocation functions which will keep track of changed
directory entries.
They are needed for new PGD, P4D, PUD, and PMD entries and will be used
in vmalloc and ioremap code to decide whether any changes in the kernel
mappings need to be synchronized between page-tables in the system.

Signed-off-by: Joerg Roedel
Acked-by: Andy Lutomirski
---
 include/asm-generic/5level-fixup.h |  5 ++--
 include/asm-generic/pgtable.h      | 23 +++++++++++++++
 include/linux/mm.h                 | 46 ++++++++++++++++++++++++++++++
 3 files changed, 72 insertions(+), 2 deletions(-)

diff --git a/include/asm-generic/5level-fixup.h b/include/asm-generic/5level-fixup.h
index 4c74b1c1d13b..58046ddc08d0 100644
--- a/include/asm-generic/5level-fixup.h
+++ b/include/asm-generic/5level-fixup.h
@@ -17,8 +17,9 @@
 	((unlikely(pgd_none(*(p4d))) && __pud_alloc(mm, p4d, address)) ? \
 		NULL : pud_offset(p4d, address))
 
-#define p4d_alloc(mm, pgd, address)	(pgd)
-#define p4d_offset(pgd, start)		(pgd)
+#define p4d_alloc(mm, pgd, address)		(pgd)
+#define p4d_alloc_track(mm, pgd, address, mask)	(pgd)
+#define p4d_offset(pgd, start)			(pgd)
 
 #ifndef __ASSEMBLY__
 static inline int p4d_none(p4d_t p4d)
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 329b8c8ca703..bf1418ae91a2 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -1209,6 +1209,29 @@ static inline bool arch_has_pfn_modify_check(void)
 # define PAGE_KERNEL_EXEC PAGE_KERNEL
 #endif
 
+/*
+ * Page Table Modification bits for pgtbl_mod_mask.
+ *
+ * These are used by the p?d_alloc_track*() set of functions and in the
+ * generic vmalloc/ioremap code to track at which page-table levels entries
+ * have been modified. Based on that the code can better decide when vmalloc
+ * and ioremap mapping changes need to be synchronized to other page-tables
+ * in the system.
+ */
+#define __PGTBL_PGD_MODIFIED	0
+#define __PGTBL_P4D_MODIFIED	1
+#define __PGTBL_PUD_MODIFIED	2
+#define __PGTBL_PMD_MODIFIED	3
+#define __PGTBL_PTE_MODIFIED	4
+
+#define PGTBL_PGD_MODIFIED	BIT(__PGTBL_PGD_MODIFIED)
+#define PGTBL_P4D_MODIFIED	BIT(__PGTBL_P4D_MODIFIED)
+#define PGTBL_PUD_MODIFIED	BIT(__PGTBL_PUD_MODIFIED)
+#define PGTBL_PMD_MODIFIED	BIT(__PGTBL_PMD_MODIFIED)
+#define PGTBL_PTE_MODIFIED	BIT(__PGTBL_PTE_MODIFIED)
+
+/* Page-Table Modification Mask */
+typedef unsigned int pgtbl_mod_mask;
+
 #endif /* !__ASSEMBLY__ */
 
 #ifndef io_remap_pfn_range
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 5a323422d783..022fe682af9e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2078,13 +2078,54 @@ static inline pud_t *pud_alloc(struct mm_struct *mm, p4d_t *p4d,
 	return (unlikely(p4d_none(*p4d)) && __pud_alloc(mm, p4d, address)) ?
 		NULL : pud_offset(p4d, address);
 }
+
+static inline p4d_t *p4d_alloc_track(struct mm_struct *mm, pgd_t *pgd,
+				     unsigned long address,
+				     pgtbl_mod_mask *mod_mask)
+
+{
+	if (unlikely(pgd_none(*pgd))) {
+		if (__p4d_alloc(mm, pgd, address))
+			return NULL;
+		*mod_mask |= PGTBL_PGD_MODIFIED;
+	}
+
+	return p4d_offset(pgd, address);
+}
+
 #endif /* !__ARCH_HAS_5LEVEL_HACK */
 
+static inline pud_t *pud_alloc_track(struct mm_struct *mm, p4d_t *p4d,
+				     unsigned long address,
+				     pgtbl_mod_mask *mod_mask)
+{
+	if (unlikely(p4d_none(*p4d))) {
+		if (__pud_alloc(mm, p4d, address))
+			return NULL;
+		*mod_mask |= PGTBL_P4D_MODIFIED;
+	}
+
+	return pud_offset(p4d, address);
+}
+
 static inline pmd_t *pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
 {
 	return (unlikely(pud_none(*pud)) && __pmd_alloc(mm, pud, address))?
 		NULL: pmd_offset(pud, address);
 }
+
+static inline pmd_t *pmd_alloc_track(struct mm_struct *mm, pud_t *pud,
+				     unsigned long address,
+				     pgtbl_mod_mask *mod_mask)
+{
+	if (unlikely(pud_none(*pud))) {
+		if (__pmd_alloc(mm, pud, address))
+			return NULL;
+		*mod_mask |= PGTBL_PUD_MODIFIED;
+	}
+
+	return pmd_offset(pud, address);
+}
 #endif /* CONFIG_MMU */
 
 #if USE_SPLIT_PTE_PTLOCKS
@@ -2200,6 +2241,11 @@ static inline void pgtable_pte_page_dtor(struct page *page)
 	((unlikely(pmd_none(*(pmd))) && __pte_alloc_kernel(pmd))? \
 		NULL: pte_offset_kernel(pmd, address))
 
+#define pte_alloc_kernel_track(pmd, address, mask)			\
+	((unlikely(pmd_none(*(pmd))) &&					\
+	  (__pte_alloc_kernel(pmd) || ({*(mask)|=PGTBL_PMD_MODIFIED;0;})))?\
+		NULL: pte_offset_kernel(pmd, address))
+
 #if USE_SPLIT_PMD_PTLOCKS
 
 static struct page *pmd_to_page(pmd_t *pmd)
-- 
2.17.1
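
As a usage sketch (not part of this patch): the following shows how a caller
in generic kernel-mapping code might use the *_alloc_track() helpers added
above. The function name example_map_kernel_page() and the direct use of
init_mm/set_pte_at() are assumptions for illustration only; the real
vmalloc/ioremap callers are introduced in later patches of this series.

/*
 * Usage sketch (hypothetical): populate the page-table hierarchy for one
 * kernel address and record in 'mask' which directory levels were newly
 * allocated on the way down.
 */
static int example_map_kernel_page(unsigned long addr, pte_t entry,
				   pgtbl_mod_mask *mask)
{
	pgd_t *pgd = pgd_offset_k(addr);
	p4d_t *p4d;
	pud_t *pud;
	pmd_t *pmd;
	pte_t *pte;

	p4d = p4d_alloc_track(&init_mm, pgd, addr, mask);
	if (!p4d)
		return -ENOMEM;

	pud = pud_alloc_track(&init_mm, p4d, addr, mask);
	if (!pud)
		return -ENOMEM;

	pmd = pmd_alloc_track(&init_mm, pud, addr, mask);
	if (!pmd)
		return -ENOMEM;

	pte = pte_alloc_kernel_track(pmd, addr, mask);
	if (!pte)
		return -ENOMEM;

	set_pte_at(&init_mm, addr, pte, entry);
	return 0;
}

After mapping a range this way, a caller could trigger synchronization of
the kernel page-tables only when the relevant bits are set, for example
when (*mask & (PGTBL_PGD_MODIFIED | PGTBL_P4D_MODIFIED)) is non-zero;
which levels actually require syncing is architecture-specific.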