From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Andy Lutomirski,
    Kees Cook, Andrew Morton, Andy Lutomirski, Borislav Petkov,
    Dave Hansen, Fenghua Yu, "H. Peter Anvin", Linus Torvalds,
    Oleg Nesterov, Peter Zijlstra, Quentin Casasnovas,
    Thomas Gleixner, Ingo Molnar, Guenter Roeck
Subject: [PATCH 4.4 30/43] mm: Add vm_insert_pfn_prot()
Date: Tue, 14 Aug 2018 19:18:06 +0200
Message-Id: <20180814171519.080414264@linuxfoundation.org>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180814171517.014285600@linuxfoundation.org>
References: <20180814171517.014285600@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

4.4-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Andy Lutomirski

commit 1745cbc5d0dee0749a6bc0ea8e872c5db0074061 upstream

The x86 vvar vma contains pages with differing cacheability flags.
x86 currently implements this by manually inserting all the ptes
using (io_)remap_pfn_range when the vma is set up.

x86 wants to move to using .fault with VM_FAULT_NOPAGE to set up the
mappings as needed.  The correct API to use to insert a pfn in .fault
is vm_insert_pfn(), but vm_insert_pfn() can't override the vma's cache
mode, and the HPET page in particular needs to be uncached despite the
fact that the rest of the VMA is cached.

Add vm_insert_pfn_prot() to support varying cacheability within the
same non-COW VMA in a more sane manner.

x86 could alternatively use multiple VMAs, but that's messy, would
break CRIU, and would create unnecessary VMAs that would waste memory.

Signed-off-by: Andy Lutomirski
Reviewed-by: Kees Cook
Acked-by: Andrew Morton
Cc: Andy Lutomirski
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: Fenghua Yu
Cc: H. Peter Anvin
Cc: Linus Torvalds
Cc: Oleg Nesterov
Cc: Peter Zijlstra
Cc: Quentin Casasnovas
Cc: Thomas Gleixner
Link: http://lkml.kernel.org/r/d2938d1eb37be7a5e4f86182db646551f11e45aa.1451446564.git.luto@kernel.org
Signed-off-by: Ingo Molnar
Signed-off-by: Guenter Roeck
Signed-off-by: Greg Kroah-Hartman

---
 include/linux/mm.h |    2 ++
 mm/memory.c        |   25 +++++++++++++++++++++++--
 2 files changed, 25 insertions(+), 2 deletions(-)

--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2083,6 +2083,8 @@ int remap_pfn_range(struct vm_area_struc
 int vm_insert_page(struct vm_area_struct *, unsigned long addr, struct page *);
 int vm_insert_pfn(struct vm_area_struct *vma, unsigned long addr,
 			unsigned long pfn);
+int vm_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
+			unsigned long pfn, pgprot_t pgprot);
 int vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
 			unsigned long pfn);
 int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len);
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1605,8 +1605,29 @@ out:
 int vm_insert_pfn(struct vm_area_struct *vma, unsigned long addr,
 			unsigned long pfn)
 {
+	return vm_insert_pfn_prot(vma, addr, pfn, vma->vm_page_prot);
+}
+EXPORT_SYMBOL(vm_insert_pfn);
+
+/**
+ * vm_insert_pfn_prot - insert single pfn into user vma with specified pgprot
+ * @vma: user vma to map to
+ * @addr: target user address of this page
+ * @pfn: source kernel pfn
+ * @pgprot: pgprot flags for the inserted page
+ *
+ * This is exactly like vm_insert_pfn, except that it allows drivers to
+ * to override pgprot on a per-page basis.
+ *
+ * This only makes sense for IO mappings, and it makes no sense for
+ * cow mappings.  In general, using multiple vmas is preferable;
+ * vm_insert_pfn_prot should only be used if using multiple VMAs is
+ * impractical.
+ */
+int vm_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
+			unsigned long pfn, pgprot_t pgprot)
+{
 	int ret;
-	pgprot_t pgprot = vma->vm_page_prot;
 	/*
 	 * Technically, architectures with pte_special can avoid all these
 	 * restrictions (same for remap_pfn_range).  However we would like
@@ -1628,7 +1649,7 @@ int vm_insert_pfn(struct vm_area_struct
 
 	return ret;
 }
-EXPORT_SYMBOL(vm_insert_pfn);
+EXPORT_SYMBOL(vm_insert_pfn_prot);
 
 int vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
 			unsigned long pfn)
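
For reference only, and not part of the patch: a minimal sketch of how a
driver's .fault handler (4.4-era fault signature) might use the new helper to
map a single page uncached while the rest of the VMA keeps vma->vm_page_prot,
along the lines of the vvar/HPET case described in the changelog.  The names
my_dev_fault(), MY_UC_PGOFF and my_mmio_phys are hypothetical, and the mmap
handler is assumed to have already set VM_IO | VM_PFNMAP on the vma.

#include <linux/mm.h>
#include <asm/pgtable.h>

#define MY_UC_PGOFF	1		/* hypothetical page offset that must be uncached */

static phys_addr_t my_mmio_phys;	/* hypothetical MMIO page, filled in at probe time */

static int my_dev_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	unsigned long addr = (unsigned long)vmf->virtual_address;
	int ret = -EFAULT;

	if (vmf->pgoff == MY_UC_PGOFF)
		/* Override the VMA's cache mode for just this page. */
		ret = vm_insert_pfn_prot(vma, addr,
					 my_mmio_phys >> PAGE_SHIFT,
					 pgprot_noncached(vma->vm_page_prot));

	/* 0: pte installed; -EBUSY: another thread raced and installed it first. */
	if (ret == 0 || ret == -EBUSY)
		return VM_FAULT_NOPAGE;
	return VM_FAULT_SIGBUS;
}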