From: Christoph Hellwig
To: Andrew Morton
Cc: Peter Zijlstra, Boris Ostrovsky, Juergen Gross, Stefano Stabellini,
    Jani Nikula, Joonas Lahtinen, Tvrtko Ursulin, Chris Wilson,
    Matthew Auld, Rodrigo Vivi, Minchan Kim, Matthew Wilcox, Nitin Gupta,
    x86@kernel.org, xen-devel@lists.xenproject.org,
    linux-kernel@vger.kernel.org, intel-gfx@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, linux-mm@kvack.org
Subject: [PATCH 03/11] mm: add a vmap_pfn function
Date: Thu, 24 Sep 2020 15:58:45 +0200
Message-Id: <20200924135853.875294-4-hch@lst.de>
In-Reply-To: <20200924135853.875294-1-hch@lst.de>
References: <20200924135853.875294-1-hch@lst.de>

Add a proper helper to remap PFNs into kernel virtual space so that
drivers don't have to abuse alloc_vm_area and open coded PTE
manipulation for it.

Signed-off-by: Christoph Hellwig
---
 include/linux/vmalloc.h |  1 +
 mm/Kconfig              |  3 +++
 mm/vmalloc.c            | 45 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 49 insertions(+)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index b899681e3ff9f0..c77efeac242514 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -122,6 +122,7 @@ extern void vfree_atomic(const void *addr);
 
 extern void *vmap(struct page **pages, unsigned int count,
 			unsigned long flags, pgprot_t prot);
+void *vmap_pfn(unsigned long *pfns, unsigned int count, pgprot_t prot);
 extern void vunmap(const void *addr);
 
 extern int remap_vmalloc_range_partial(struct vm_area_struct *vma,
diff --git a/mm/Kconfig b/mm/Kconfig
index 6c974888f86f97..6fa7ba1199eb1e 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -815,6 +815,9 @@ config DEVICE_PRIVATE
 	  memory; i.e., memory that is only accessible from the device (or
 	  group of devices).  You likely also want to select HMM_MIRROR.
 
+config VMAP_PFN
+	bool
+
 config FRAME_VECTOR
 	bool
 
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index ffad65f052c3f9..e2a2ded8d93478 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2413,6 +2413,51 @@ void *vmap(struct page **pages, unsigned int count,
 }
 EXPORT_SYMBOL(vmap);
 
+#ifdef CONFIG_VMAP_PFN
+struct vmap_pfn_data {
+	unsigned long	*pfns;
+	pgprot_t	prot;
+	unsigned int	idx;
+};
+
+static int vmap_pfn_apply(pte_t *pte, unsigned long addr, void *private)
+{
+	struct vmap_pfn_data *data = private;
+
+	if (WARN_ON_ONCE(pfn_valid(data->pfns[data->idx])))
+		return -EINVAL;
+	*pte = pte_mkspecial(pfn_pte(data->pfns[data->idx++], data->prot));
+	return 0;
+}
+
+/**
+ * vmap_pfn - map an array of PFNs into virtually contiguous space
+ * @pfns: array of PFNs
+ * @count: number of pages to map
+ * @prot: page protection for the mapping
+ *
+ * Maps @count PFNs from @pfns into contiguous kernel virtual space and returns
+ * the start address of the mapping.
+ */
+void *vmap_pfn(unsigned long *pfns, unsigned int count, pgprot_t prot)
+{
+	struct vmap_pfn_data data = { .pfns = pfns, .prot = pgprot_nx(prot) };
+	struct vm_struct *area;
+
+	area = get_vm_area_caller(count * PAGE_SIZE, VM_IOREMAP,
+			__builtin_return_address(0));
+	if (!area)
+		return NULL;
+	if (apply_to_page_range(&init_mm, (unsigned long)area->addr,
+			count * PAGE_SIZE, vmap_pfn_apply, &data)) {
+		free_vm_area(area);
+		return NULL;
+	}
+	return area->addr;
+}
+EXPORT_SYMBOL_GPL(vmap_pfn);
+#endif /* CONFIG_VMAP_PFN */
+
 static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 				 pgprot_t prot, int node)
 {
-- 
2.28.0