From: Daniel Vetter
To: DRI Development, LKML
Cc: kvm@vger.kernel.org, linux-mm@kvack.org,
    linux-arm-kernel@lists.infradead.org, linux-samsung-soc@vger.kernel.org,
    linux-media@vger.kernel.org, Daniel Vetter, Christoph Hellwig,
    Jason Gunthorpe, Kees Cook, Dan Williams, Andrew Morton, John Hubbard,
    Jérôme Glisse, Jan Kara
Subject: [PATCH v5 08/15] mm: Add unsafe_follow_pfn
Date: Fri, 30 Oct 2020 11:08:08 +0100
Message-Id: <20201030100815.2269-9-daniel.vetter@ffwll.ch>
In-Reply-To: <20201030100815.2269-1-daniel.vetter@ffwll.ch>
References: <20201030100815.2269-1-daniel.vetter@ffwll.ch>

Way back it was a reasonable assumption that iomem mappings never
change the pfn range they point at. But this has changed:

- gpu drivers dynamically manage their memory nowadays, invalidating
  ptes with unmap_mapping_range when buffers get moved

- contiguous dma allocations have moved from dedicated carveouts to
  cma regions.
  This means if we miss the unmap the pfn might contain pagecache or
  anon memory (well, anything allocated with GFP_MOVABLE)

- even /dev/mem now invalidates mappings when the kernel requests that
  iomem region when CONFIG_IO_STRICT_DEVMEM is set, see 3234ac664a87
  ("/dev/mem: Revoke mappings when a driver claims the region")

Accessing pfns obtained from ptes without holding all the locks is
therefore no longer a good idea.

Unfortunately there are some users where this is not fixable (like v4l
userptr of iomem mappings) or involves a pile of work (vfio type1
iommu). For now, annotate these as unsafe and splat appropriately.

This patch adds an unsafe_follow_pfn, which later patches will then
roll out to all appropriate places.

Also mark up follow_pfn as EXPORT_SYMBOL_GPL. The only safe way for
drivers/modules to use it is together with an mmu_notifier, and that's
all _GPL stuff.

Signed-off-by: Daniel Vetter
Cc: Christoph Hellwig
Cc: Jason Gunthorpe
Cc: Kees Cook
Cc: Dan Williams
Cc: Andrew Morton
Cc: John Hubbard
Cc: Jérôme Glisse
Cc: Jan Kara
Cc: linux-mm@kvack.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-samsung-soc@vger.kernel.org
Cc: linux-media@vger.kernel.org
Cc: kvm@vger.kernel.org
Signed-off-by: Daniel Vetter
--
v5: Suggestions from Christoph
- reindent for less weirdness
- use IS_ENABLED instead of #ifdef
- same checks for nommu, for consistency
- EXPORT_SYMBOL_GPL for follow_pfn.
- kerneldoc was already updated in previous versions to explain when
  follow_pfn can be used safely
---
 include/linux/mm.h |  2 ++
 mm/memory.c        | 34 ++++++++++++++++++++++++++++++++--
 mm/nommu.c         | 27 ++++++++++++++++++++++++++-
 security/Kconfig   | 13 +++++++++++++
 4 files changed, 73 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 83d0be101a38..d0fe8bf46a9d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1661,6 +1661,8 @@ int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
 		   pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp);
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
 	unsigned long *pfn);
+int unsafe_follow_pfn(struct vm_area_struct *vma, unsigned long address,
+		      unsigned long *pfn);
 int follow_phys(struct vm_area_struct *vma, unsigned long address,
 		unsigned int flags, unsigned long *prot,
 		resource_size_t *phys);
 int generic_access_phys(struct vm_area_struct *vma, unsigned long addr,
diff --git a/mm/memory.c b/mm/memory.c
index ac32039ce941..0db0c5e233fd 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4795,7 +4795,12 @@ EXPORT_SYMBOL(follow_pte_pmd);
  * @address: user virtual address
  * @pfn: location to store found PFN
  *
- * Only IO mappings and raw PFN mappings are allowed.
+ * Only IO mappings and raw PFN mappings are allowed. Note that callers must
+ * ensure coherency with pte updates by using a &mmu_notifier to follow
+ * updates. If this is not feasible, or the access to the @pfn is only very
+ * short term, use follow_pte_pmd() and hold the pagetable lock for the
+ * duration of the access instead. Any caller not following these
+ * requirements must use unsafe_follow_pfn() instead.
  *
  * Return: zero and the pfn at @pfn on success, -ve otherwise.
  */
@@ -4816,7 +4821,32 @@ int follow_pfn(struct vm_area_struct *vma, unsigned long address,
 	pte_unmap_unlock(ptep, ptl);
 	return 0;
 }
-EXPORT_SYMBOL(follow_pfn);
+EXPORT_SYMBOL_GPL(follow_pfn);
+
+/**
+ * unsafe_follow_pfn - look up PFN at a user virtual address
+ * @vma: memory mapping
+ * @address: user virtual address
+ * @pfn: location to store found PFN
+ *
+ * Only IO mappings and raw PFN mappings are allowed.
+ *
+ * Returns zero and the pfn at @pfn on success, -ve otherwise.
+ */
+int unsafe_follow_pfn(struct vm_area_struct *vma, unsigned long address,
+		      unsigned long *pfn)
+{
+	if (IS_ENABLED(CONFIG_STRICT_FOLLOW_PFN)) {
+		pr_info("unsafe follow_pfn usage rejected, see CONFIG_STRICT_FOLLOW_PFN\n");
+		return -EINVAL;
+	}
+
+	WARN_ONCE(1, "unsafe follow_pfn usage\n");
+	add_taint(TAINT_USER, LOCKDEP_STILL_OK);
+
+	return follow_pfn(vma, address, pfn);
+}
+EXPORT_SYMBOL(unsafe_follow_pfn);
 
 #ifdef CONFIG_HAVE_IOREMAP_PROT
 int follow_phys(struct vm_area_struct *vma,
diff --git a/mm/nommu.c b/mm/nommu.c
index 0faf39b32cdb..79fc98a6c94a 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -130,7 +130,32 @@ int follow_pfn(struct vm_area_struct *vma, unsigned long address,
 	*pfn = address >> PAGE_SHIFT;
 	return 0;
 }
-EXPORT_SYMBOL(follow_pfn);
+EXPORT_SYMBOL_GPL(follow_pfn);
+
+/**
+ * unsafe_follow_pfn - look up PFN at a user virtual address
+ * @vma: memory mapping
+ * @address: user virtual address
+ * @pfn: location to store found PFN
+ *
+ * Only IO mappings and raw PFN mappings are allowed.
+ *
+ * Returns zero and the pfn at @pfn on success, -ve otherwise.
+ */
+int unsafe_follow_pfn(struct vm_area_struct *vma, unsigned long address,
+		      unsigned long *pfn)
+{
+	if (IS_ENABLED(CONFIG_STRICT_FOLLOW_PFN)) {
+		pr_info("unsafe follow_pfn usage rejected, see CONFIG_STRICT_FOLLOW_PFN\n");
+		return -EINVAL;
+	}
+
+	WARN_ONCE(1, "unsafe follow_pfn usage\n");
+	add_taint(TAINT_USER, LOCKDEP_STILL_OK);
+
+	return follow_pfn(vma, address, pfn);
+}
+EXPORT_SYMBOL(unsafe_follow_pfn);
 
 LIST_HEAD(vmap_area_list);
 
diff --git a/security/Kconfig b/security/Kconfig
index 7561f6f99f1d..48945402e103 100644
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -230,6 +230,19 @@ config STATIC_USERMODEHELPER_PATH
 	  If you wish for all usermode helper programs to be disabled,
 	  specify an empty string here (i.e. "").
 
+config STRICT_FOLLOW_PFN
+	bool "Disable unsafe use of follow_pfn"
+	depends on MMU
+	help
+	  Some functionality in the kernel follows userspace mappings to iomem
+	  ranges in an unsafe manner. An example is v4l userptr for zero-copy
+	  buffer sharing.
+
+	  If this option is switched on, such access is rejected. Only disable
+	  this option if you must run userspace which requires this access.
+
+	  If in doubt, say Y.
+
 source "security/selinux/Kconfig"
 source "security/smack/Kconfig"
 source "security/tomoyo/Kconfig"
-- 
2.28.0