From: Mathieu Desnoyers
McKenney" , Boqun Feng , Andy Lutomirski , Dave Watson Cc: linux-kernel@vger.kernel.org, linux-api@vger.kernel.org, Paul Turner , Andrew Morton , Russell King , Thomas Gleixner , Ingo Molnar , "H . Peter Anvin" , Andrew Hunter , Andi Kleen , Chris Lameter , Ben Maurer , Steven Rostedt , Josh Triplett , Linus Torvalds , Catalin Marinas , Will Deacon , Michael Kerrisk , Mathieu Desnoyers Subject: [RFC PATCH for 4.18 10/23] mm: Introduce vm_map_user_ram, vm_unmap_user_ram Date: Thu, 12 Apr 2018 15:27:47 -0400 Message-Id: <20180412192800.15708-11-mathieu.desnoyers@efficios.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20180412192800.15708-1-mathieu.desnoyers@efficios.com> References: <20180412192800.15708-1-mathieu.desnoyers@efficios.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Create and destroy mappings aliased to a user-space mapping with the same cache coloring as the userspace mapping. Allow the kernel to load from and store to pages shared with user-space through its own mapping in kernel virtual addresses while ensuring cache conherency between kernel and userspace mappings for virtually aliased architectures. Signed-off-by: Mathieu Desnoyers Reviewed-by: Matthew Wilcox CC: "Paul E. McKenney" CC: Peter Zijlstra CC: Paul Turner CC: Thomas Gleixner CC: Andrew Hunter CC: Andy Lutomirski CC: Andi Kleen CC: Dave Watson CC: Chris Lameter CC: Ingo Molnar CC: "H. Peter Anvin" CC: Ben Maurer CC: Steven Rostedt CC: Josh Triplett CC: Linus Torvalds CC: Andrew Morton CC: Russell King CC: Catalin Marinas CC: Will Deacon CC: Michael Kerrisk CC: Boqun Feng --- include/linux/vmalloc.h | 4 +++ mm/vmalloc.c | 66 +++++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 70 insertions(+) diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h index 1e5d8c392f15..d5e5c11ba947 100644 --- a/include/linux/vmalloc.h +++ b/include/linux/vmalloc.h @@ -58,6 +58,10 @@ struct vmap_area { extern void vm_unmap_ram(const void *mem, unsigned int count); extern void *vm_map_ram(struct page **pages, unsigned int count, int node, pgprot_t prot); +extern void vm_unmap_user_ram(const void *mem, unsigned int count); +extern void *vm_map_user_ram(struct page **pages, unsigned int count, + unsigned long uaddr, int node, pgprot_t prot); + extern void vm_unmap_aliases(void); #ifdef CONFIG_MMU diff --git a/mm/vmalloc.c b/mm/vmalloc.c index ebff729cc956..ae033b825e45 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -1199,6 +1199,72 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node, pgprot_t pro } EXPORT_SYMBOL(vm_map_ram); +/** + * vm_unmap_user_ram - unmap linear kernel address space set up by vm_map_user_ram + * @mem: the pointer returned by vm_map_user_ram + * @count: the count passed to that vm_map_user_ram call (cannot unmap partial) + */ +void vm_unmap_user_ram(const void *mem, unsigned int count) +{ + unsigned long size = (unsigned long)count << PAGE_SHIFT; + unsigned long addr = (unsigned long)mem; + struct vmap_area *va; + + might_sleep(); + BUG_ON(!addr); + BUG_ON(addr < VMALLOC_START); + BUG_ON(addr > VMALLOC_END); + BUG_ON(!PAGE_ALIGNED(addr)); + + debug_check_no_locks_freed(mem, size); + vmap_debug_free_range(addr, addr+size); + + va = find_vmap_area(addr); + BUG_ON(!va); + free_unmap_vmap_area(va); +} +EXPORT_SYMBOL(vm_unmap_user_ram); + +/** + * vm_map_user_ram - map user space pages linearly into kernel virtual address + * @pages: an array of pointers to the virtually contiguous pages to be mapped + * 
 include/linux/vmalloc.h |  4 +++
 mm/vmalloc.c            | 66 +++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 70 insertions(+)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 1e5d8c392f15..d5e5c11ba947 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -58,6 +58,10 @@ struct vmap_area {
 extern void vm_unmap_ram(const void *mem, unsigned int count);
 extern void *vm_map_ram(struct page **pages, unsigned int count,
 				int node, pgprot_t prot);
+extern void vm_unmap_user_ram(const void *mem, unsigned int count);
+extern void *vm_map_user_ram(struct page **pages, unsigned int count,
+		unsigned long uaddr, int node, pgprot_t prot);
+
 extern void vm_unmap_aliases(void);
 
 #ifdef CONFIG_MMU
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index ebff729cc956..ae033b825e45 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1199,6 +1199,72 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node, pgprot_t pro
 }
 EXPORT_SYMBOL(vm_map_ram);
 
+/**
+ * vm_unmap_user_ram - unmap linear kernel address space set up by vm_map_user_ram
+ * @mem: the pointer returned by vm_map_user_ram
+ * @count: the count passed to that vm_map_user_ram call (cannot unmap partial)
+ */
+void vm_unmap_user_ram(const void *mem, unsigned int count)
+{
+	unsigned long size = (unsigned long)count << PAGE_SHIFT;
+	unsigned long addr = (unsigned long)mem;
+	struct vmap_area *va;
+
+	might_sleep();
+	BUG_ON(!addr);
+	BUG_ON(addr < VMALLOC_START);
+	BUG_ON(addr > VMALLOC_END);
+	BUG_ON(!PAGE_ALIGNED(addr));
+
+	debug_check_no_locks_freed(mem, size);
+	vmap_debug_free_range(addr, addr+size);
+
+	va = find_vmap_area(addr);
+	BUG_ON(!va);
+	free_unmap_vmap_area(va);
+}
+EXPORT_SYMBOL(vm_unmap_user_ram);
+
+/**
+ * vm_map_user_ram - map user space pages linearly into kernel virtual address
+ * @pages: an array of pointers to the virtually contiguous pages to be mapped
+ * @count: number of pages
+ * @uaddr: address within the first page in the userspace mapping
+ * @node: prefer to allocate data structures on this node
+ * @prot: memory protection to use. PAGE_KERNEL for regular RAM
+ *
+ * Create a mapping aliased to a user-space mapping with the same cache
+ * coloring as the userspace mapping. Allow the kernel to load from and
+ * store to pages shared with user-space through its own mapping in kernel
+ * virtual addresses while ensuring cache coherency between kernel and
+ * userspace mappings for virtually aliased architectures.
+ *
+ * Returns: a pointer to the address that has been mapped, or %NULL on failure
+ */
+void *vm_map_user_ram(struct page **pages, unsigned int count,
+		unsigned long uaddr, int node, pgprot_t prot)
+{
+	unsigned long size = (unsigned long)count << PAGE_SHIFT;
+	unsigned long va_offset = ALIGN_DOWN(uaddr, PAGE_SIZE) & (SHMLBA - 1);
+	unsigned long alloc_size = ALIGN(va_offset + size, SHMLBA);
+	struct vmap_area *va;
+	unsigned long addr;
+	void *mem;
+
+	va = alloc_vmap_area(alloc_size, SHMLBA, VMALLOC_START, VMALLOC_END,
+			node, GFP_KERNEL);
+	if (IS_ERR(va))
+		return NULL;
+	addr = va->va_start + va_offset;
+	mem = (void *)addr;
+	if (vmap_page_range(addr, addr + size, prot, pages) < 0) {
+		vm_unmap_user_ram(mem, count);
+		return NULL;
+	}
+	return mem;
+}
+EXPORT_SYMBOL(vm_map_user_ram);
+
 static struct vm_struct *vmlist __initdata;
 
 /**
  * vm_area_add_early - add vmap area early during boot
-- 
2.11.0
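For readers unfamiliar with SHMLBA cache coloring, the arithmetic at the
top of vm_map_user_ram() can be exercised on its own. The standalone
sketch below assumes 4 KiB pages and an SHMLBA of four pages (as on
several VIPT data-cache machines); the user address and page count are
arbitrary, and ALIGN()/ALIGN_DOWN() are re-implemented here only for
illustration.

  #include <stdio.h>

  #define PAGE_SHIFT	12
  #define PAGE_SIZE	(1UL << PAGE_SHIFT)
  #define SHMLBA	(4 * PAGE_SIZE)		/* assumed: four cache colors */
  #define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))
  #define ALIGN(x, a)		(((x) + (a) - 1) & ~((a) - 1))

  int main(void)
  {
  	unsigned long uaddr = 0x7f2a13c06b40UL;	/* arbitrary user address */
  	unsigned long count = 3;		/* pages to alias */
  	unsigned long size = count << PAGE_SHIFT;
  	unsigned long va_offset = ALIGN_DOWN(uaddr, PAGE_SIZE) & (SHMLBA - 1);
  	unsigned long alloc_size = ALIGN(va_offset + size, SHMLBA);

  	/*
  	 * alloc_vmap_area() hands back an SHMLBA-aligned va_start, so
  	 * va_start + va_offset shares uaddr's offset within an SHMLBA
  	 * window, i.e. the same cache color.
  	 */
  	printf("va_offset  = %#lx\n", va_offset);
  	printf("alloc_size = %#lx (%lu pages reserved for %lu mapped)\n",
  	       alloc_size, alloc_size >> PAGE_SHIFT, count);
  	return 0;
  }

With these inputs it prints va_offset = 0x2000 and alloc_size = 0x8000:
eight pages of kernel virtual address space are reserved in order to map
three pages at the matching color.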