From: Weixi Zhu <weixi.zhu@huawei.com>
Subject: [RFC PATCH 2/6] mm/gmem: add arch-independent abstraction to track address mapping status
Date: Tue, 28 Nov 2023 20:50:21 +0800
Message-ID: <20231128125025.4449-3-weixi.zhu@huawei.com>
In-Reply-To: <20231128125025.4449-1-weixi.zhu@huawei.com>
References: <20231128125025.4449-1-weixi.zhu@huawei.com>

This patch adds an abstraction layer, struct vm_object, that maintains
per-process virtual-to-physical mapping status stored in struct gm_mapping.
For example, a virtual page may be mapped to a CPU physical page or to a
device physical page. Struct vm_object effectively maintains an
arch-independent page table, defined here as the "logical page table",
while the arch-dependent page table used by a real MMU is called the
"physical page table".
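As an illustration, a unified fault path could consult the logical page
table roughly as in the sketch below, using the gm_mapping helpers added by
this patch. handle_cpu_fault() and migrate_back_from_device() are
hypothetical helpers used only for illustration; they are not added by this
series.

	/* Illustrative only: consult the logical page table on a fault. */
	static vm_fault_t gm_fault_sketch(struct vm_fault *vmf)
	{
		struct vm_object *obj = vmf->vma->vm_mm->vm_obj;
		struct gm_mapping *gm;

		if (!obj)
			return VM_FAULT_SIGBUS;

		gm = vm_object_lookup(obj, vmf->address);
		if (!gm || gm_mapping_nomap(gm))
			return handle_cpu_fault(vmf);		/* hypothetical */
		if (gm_mapping_device(gm))
			return migrate_back_from_device(vmf, gm);	/* hypothetical */

		/* gm_mapping_cpu(gm): already backed by a CPU page. */
		return VM_FAULT_NOPAGE;
	}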
The logical page table is useful if Linux core MM is extended to handle a
unified virtual address space with external accelerators using customized
MMUs. In this patch, struct vm_object utilizes a radix tree (xarray) to
track where a virtual page is mapped. This adds extra memory consumption
for the xarray, but provides a clean abstraction that isolates mapping
status from the machine-dependent layer (PTEs). Besides supporting
accelerators with external MMUs, struct vm_object is planned to be further
unioned with i_pages in struct address_space for file-backed memory.

The idea of struct vm_object originates from the FreeBSD VM design, which
provides a unified abstraction for anonymous memory, file-backed memory,
the page cache, etc. [1]

Currently, Linux utilizes a set of hierarchical page-walk functions to
abstract page table manipulations across CPU architectures. The problem
arises when a device wants to reuse Linux MM code to manage its page
table -- the device page table may not be accessible to the CPU. Existing
solutions like Linux HMM utilize the MMU notifier mechanism to invoke
device-specific MMU functions, but rely on encoding the mapping status in
the CPU page table entries. This entangles machine-independent code with
machine-dependent code, and also brings unnecessary restrictions: the PTE
size and format vary from arch to arch, which harms extensibility.

[1] https://docs.freebsd.org/en/articles/vm-design/

Signed-off-by: Weixi Zhu <weixi.zhu@huawei.com>
---
 include/linux/gmem.h     | 120 +++++++++++++++++++++++++
 include/linux/mm_types.h |   4 +
 mm/Makefile              |   2 +-
 mm/vm_object.c           | 184 +++++++++++++++++++++++++++++++++++++++
 4 files changed, 309 insertions(+), 1 deletion(-)
 create mode 100644 mm/vm_object.c

diff --git a/include/linux/gmem.h b/include/linux/gmem.h
index fff877873557..529ff6755a99 100644
--- a/include/linux/gmem.h
+++ b/include/linux/gmem.h
@@ -9,11 +9,131 @@
 #ifndef _GMEM_H
 #define _GMEM_H
 
+#include <linux/xarray.h>
+
 #ifdef CONFIG_GMEM
+
+#define GM_PAGE_CPU	0x10 /* Determines whether page is a pointer or a pfn number. */
+#define GM_PAGE_DEVICE	0x20
+#define GM_PAGE_NOMAP	0x40
+#define GM_PAGE_WILLNEED	0x80
+
+#define GM_PAGE_TYPE_MASK	(GM_PAGE_CPU | GM_PAGE_DEVICE | GM_PAGE_NOMAP)
+
+struct gm_mapping {
+	unsigned int flag;
+
+	union {
+		struct page *page;	/* CPU node */
+		struct gm_dev *dev;	/* hetero-node. TODO: support multiple devices */
+		unsigned long pfn;
+	};
+
+	struct mutex lock;
+};
+
+static inline void gm_mapping_flags_set(struct gm_mapping *gm_mapping, int flags)
+{
+	if (flags & GM_PAGE_TYPE_MASK)
+		gm_mapping->flag &= ~GM_PAGE_TYPE_MASK;
+
+	gm_mapping->flag |= flags;
+}
+
+static inline void gm_mapping_flags_clear(struct gm_mapping *gm_mapping, int flags)
+{
+	gm_mapping->flag &= ~flags;
+}
+
+static inline bool gm_mapping_cpu(struct gm_mapping *gm_mapping)
+{
+	return !!(gm_mapping->flag & GM_PAGE_CPU);
+}
+
+static inline bool gm_mapping_device(struct gm_mapping *gm_mapping)
+{
+	return !!(gm_mapping->flag & GM_PAGE_DEVICE);
+}
+
+static inline bool gm_mapping_nomap(struct gm_mapping *gm_mapping)
+{
+	return !!(gm_mapping->flag & GM_PAGE_NOMAP);
+}
+
+static inline bool gm_mapping_willneed(struct gm_mapping *gm_mapping)
+{
+	return !!(gm_mapping->flag & GM_PAGE_WILLNEED);
+}
+
 /* h-NUMA topology */
 void __init hnuma_init(void);
+
+/* vm object */
+/*
+ * Each per-process vm_object tracks the mapping status of virtual pages from
+ * all VMAs mmap()-ed with MAP_PRIVATE | MAP_PEER_SHARED.
+ */
+struct vm_object {
+	spinlock_t lock;
+
+	/*
+	 * The logical_page_table is a container that holds the mapping
+	 * information between a VA and a struct page.
+	 */
+	struct xarray *logical_page_table;
+	atomic_t nr_pages;
+};
+
+int __init vm_object_init(void);
+struct vm_object *vm_object_create(struct mm_struct *mm);
+void vm_object_drop_locked(struct mm_struct *mm);
+
+struct gm_mapping *alloc_gm_mapping(void);
+void free_gm_mappings(struct vm_area_struct *vma);
+struct gm_mapping *vm_object_lookup(struct vm_object *obj, unsigned long va);
+void vm_object_mapping_create(struct vm_object *obj, unsigned long start);
+void unmap_gm_mappings_range(struct vm_area_struct *vma, unsigned long start,
+			     unsigned long end);
+void munmap_in_peer_devices(struct mm_struct *mm, unsigned long start,
+			    unsigned long end);
 #else
 static inline void hnuma_init(void) {}
+static inline void __init vm_object_init(void)
+{
+}
+static inline struct vm_object *vm_object_create(struct mm_struct *mm)
+{
+	return NULL;
+}
+static inline void vm_object_drop_locked(struct mm_struct *mm)
+{
+}
+static inline struct gm_mapping *alloc_gm_mapping(void)
+{
+	return NULL;
+}
+static inline void free_gm_mappings(struct vm_area_struct *vma)
+{
+}
+static inline struct gm_mapping *vm_object_lookup(struct vm_object *obj,
+						  unsigned long va)
+{
+	return NULL;
+}
+static inline void vm_object_mapping_create(struct vm_object *obj,
+					    unsigned long start)
+{
+}
+static inline void unmap_gm_mappings_range(struct vm_area_struct *vma,
+					   unsigned long start,
+					   unsigned long end)
+{
+}
+static inline void munmap_in_peer_devices(struct mm_struct *mm,
+					  unsigned long start,
+					  unsigned long end)
+{
+}
 #endif
 #endif /* _GMEM_H */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 957ce38768b2..4e50dc019d75 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -31,6 +31,7 @@
 
 struct address_space;
 struct mem_cgroup;
+struct vm_object;
 
 /*
  * Each physical page in the system has a struct page associated with
@@ -974,6 +975,9 @@ struct mm_struct {
 #endif
 	} lru_gen;
 #endif /* CONFIG_LRU_GEN */
+#ifdef CONFIG_GMEM
+	struct vm_object *vm_obj;
+#endif
 } __randomize_layout;
 
 /*
diff --git a/mm/Makefile b/mm/Makefile
index f48ea2eb4a44..d2dfab012c96 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -138,4 +138,4 @@ obj-$(CONFIG_IO_MAPPING) += io-mapping.o
 obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o
 obj-$(CONFIG_GENERIC_IOREMAP) += ioremap.o
 obj-$(CONFIG_SHRINKER_DEBUG) += shrinker_debug.o
-obj-$(CONFIG_GMEM) += gmem.o
+obj-$(CONFIG_GMEM) += gmem.o vm_object.o
diff --git a/mm/vm_object.c b/mm/vm_object.c
new file mode 100644
index 000000000000..4e76737e0ca1
--- /dev/null
+++ b/mm/vm_object.c
@@ -0,0 +1,184 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * mm/vm_object.c - arch-independent logical page table (vm_object)
+ *
+ * Maintains the per-mm mapping status (struct gm_mapping) of virtual pages
+ * for GMEM.
+ */
+#include <linux/mm.h>
+#include <linux/gmem.h>
+
+/*
+ * Since the VM_OBJECT maintains the logical page table under each VMA, and
+ * each VMA points to a VM_OBJECT,
+ * VM_OBJECTs must be maintained whenever a VMA is changed: merged, split or
+ * adjusted.
+ */
+static struct kmem_cache *vm_object_cachep;
+static struct kmem_cache *gm_mapping_cachep;
+
+static inline void release_gm_mapping(struct gm_mapping *mapping)
+{
+	kmem_cache_free(gm_mapping_cachep, mapping);
+}
+
+static inline struct gm_mapping *lookup_gm_mapping(struct vm_object *obj,
+						   unsigned long pindex)
+{
+	return xa_load(obj->logical_page_table, pindex);
+}
+
+int __init vm_object_init(void)
+{
+	vm_object_cachep = KMEM_CACHE(vm_object, 0);
+	if (!vm_object_cachep)
+		goto out;
+
+	gm_mapping_cachep = KMEM_CACHE(gm_mapping, 0);
+	if (!gm_mapping_cachep)
+		goto free_vm_object;
+
+	return 0;
+free_vm_object:
+	kmem_cache_destroy(vm_object_cachep);
+out:
+	return -ENOMEM;
+}
+
+/*
+ * Create a VM_OBJECT and attach it to a mm_struct.
+ * This should be called when a task_struct is created.
+ */
+struct vm_object *vm_object_create(struct mm_struct *mm)
+{
+	struct vm_object *obj = kmem_cache_alloc(vm_object_cachep, GFP_KERNEL);
+
+	if (!obj)
+		return NULL;
+
+	spin_lock_init(&obj->lock);
+
+	/*
+	 * The logical page table maps va >> PAGE_SHIFT
+	 * to pointers of struct gm_mapping.
+	 */
+	obj->logical_page_table = kmalloc(sizeof(struct xarray), GFP_KERNEL);
+	if (!obj->logical_page_table) {
+		kmem_cache_free(vm_object_cachep, obj);
+		return NULL;
+	}
+
+	xa_init(obj->logical_page_table);
+	atomic_set(&obj->nr_pages, 0);
+
+	return obj;
+}
+
+/* This should be called when a mm no longer refers to a VM_OBJECT. */
+void vm_object_drop_locked(struct mm_struct *mm)
+{
+	struct vm_object *obj = mm->vm_obj;
+
+	if (!obj)
+		return;
+
+	/*
+	 * We must enter here with the mmap lock held for write, which is
+	 * unfortunately a giant lock.
+	 */
+	mmap_assert_write_locked(mm);
+	mm->vm_obj = NULL;
+
+	xa_destroy(obj->logical_page_table);
+	kfree(obj->logical_page_table);
+	kmem_cache_free(vm_object_cachep, obj);
+}
+
+/*
+ * Given a VA, the page index is computed by
+ * page_index = address >> PAGE_SHIFT
+ */
+struct gm_mapping *vm_object_lookup(struct vm_object *obj, unsigned long va)
+{
+	return lookup_gm_mapping(obj, va >> PAGE_SHIFT);
+}
+EXPORT_SYMBOL_GPL(vm_object_lookup);
+
+void vm_object_mapping_create(struct vm_object *obj, unsigned long start)
+{
+	unsigned long index = start >> PAGE_SHIFT;
+	struct gm_mapping *gm_mapping;
+
+	if (!obj)
+		return;
+
+	gm_mapping = alloc_gm_mapping();
+	if (!gm_mapping)
+		return;
+
+	/* xa_store() takes the xa_lock internally; __xa_store() would require it held. */
+	xa_store(obj->logical_page_table, index, gm_mapping, GFP_KERNEL);
+}
+
+/* gm_mappings will not be released dynamically. */
+struct gm_mapping *alloc_gm_mapping(void)
+{
+	struct gm_mapping *gm_mapping = kmem_cache_zalloc(gm_mapping_cachep, GFP_KERNEL);
+
+	if (!gm_mapping)
+		return NULL;
+
+	gm_mapping_flags_set(gm_mapping, GM_PAGE_NOMAP);
+	mutex_init(&gm_mapping->lock);
+
+	return gm_mapping;
+}
+
+/* This should be called when a PEER_SHARED vma is freed. */
+void free_gm_mappings(struct vm_area_struct *vma)
+{
+	struct gm_mapping *gm_mapping;
+	struct vm_object *obj;
+
+	obj = vma->vm_mm->vm_obj;
+	if (!obj)
+		return;
+
+	XA_STATE(xas, obj->logical_page_table, vma->vm_start >> PAGE_SHIFT);
+
+	xa_lock(obj->logical_page_table);
+	xas_for_each(&xas, gm_mapping, vma->vm_end >> PAGE_SHIFT) {
+		release_gm_mapping(gm_mapping);
+		xas_store(&xas, NULL);
+	}
+	xa_unlock(obj->logical_page_table);
+}
+
+void unmap_gm_mappings_range(struct vm_area_struct *vma, unsigned long start,
+			     unsigned long end)
+{
+	struct xarray *logical_page_table;
+	struct gm_mapping *gm_mapping;
+	struct page *page = NULL;
+
+	if (!vma->vm_mm->vm_obj)
+		return;
+
+	logical_page_table = vma->vm_mm->vm_obj->logical_page_table;
+	if (!logical_page_table)
+		return;
+
+	XA_STATE(xas, logical_page_table, start >> PAGE_SHIFT);
+
+	xa_lock(logical_page_table);
+	xas_for_each(&xas, gm_mapping, end >> PAGE_SHIFT) {
+		page = gm_mapping->page;
+		if (page && (page_ref_count(page) != 0)) {
+			put_page(page);
+			gm_mapping->page = NULL;
+		}
+	}
+	xa_unlock(logical_page_table);
+}
-- 
2.25.1
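Note: a rough sketch of the intended lifecycle of mm->vm_obj, for clarity.
The call sites are not added by this patch, and the helper names below are
hypothetical, shown only to illustrate how vm_object_create() and
vm_object_drop_locked() are meant to pair up.

	/* Illustrative only: where mm->vm_obj is expected to be managed. */
	static int gmem_attach_vm_object(struct mm_struct *mm)	/* hypothetical */
	{
		mm->vm_obj = vm_object_create(mm);
		return mm->vm_obj ? 0 : -ENOMEM;
	}

	static void gmem_detach_vm_object(struct mm_struct *mm)	/* hypothetical */
	{
		mmap_write_lock(mm);
		vm_object_drop_locked(mm);	/* requires mmap_lock held for write */
		mmap_write_unlock(mm);
	}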