From: Daniel Vetter
To: DRI Development, LKML
Cc: kvm@vger.kernel.org, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
    linux-samsung-soc@vger.kernel.org, linux-media@vger.kernel.org,
    linux-s390@vger.kernel.org, Daniel Vetter, Jason Gunthorpe, Dan Williams,
    Kees Cook, Rik van Riel, Benjamin Herrenschmidt, Dave Airlie,
    Andrew Morton, John Hubbard, Jérôme Glisse, Jan Kara, Daniel Vetter
Subject: [PATCH v2 07/17] mm: Close race in generic_access_phys
Date: Fri, 9 Oct 2020 09:59:24 +0200
Message-Id: <20201009075934.3509076-8-daniel.vetter@ffwll.ch>
In-Reply-To: <20201009075934.3509076-1-daniel.vetter@ffwll.ch>
References: <20201009075934.3509076-1-daniel.vetter@ffwll.ch>

Way back it was a reasonable assumption that iomem mappings never change
the pfn range they point at. But this has changed:

- gpu drivers dynamically manage their memory nowadays, invalidating
  ptes with unmap_mapping_range when buffers get moved

- contiguous dma allocations have moved from dedicated carveouts to
  cma regions.
  This means if we miss the unmap the pfn might contain
  pagecache or anon memory (well anything allocated with __GFP_MOVABLE)

- even /dev/mem now invalidates mappings when the kernel requests that
  iomem region when CONFIG_IO_STRICT_DEVMEM is set, see 3234ac664a87
  ("/dev/mem: Revoke mappings when a driver claims the region")

Accessing pfns obtained from ptes without holding all the locks is
therefore no longer a good idea. Fix this.

Since ioremap might need to manipulate pagetables too we need to drop
the pt lock and have a retry loop if we raced.

While at it, also add kerneldoc and improve the comment for the
vma_ops->access function. It's for accessing, not for moving the memory
from iomem to system memory, as the old comment seemed to suggest.

References: 28b2ee20c7cb ("access_process_vm device memory infrastructure")
Cc: Jason Gunthorpe
Cc: Dan Williams
Cc: Kees Cook
Cc: Rik van Riel
Cc: Benjamin Herrenschmidt
Cc: Dave Airlie
Cc: Andrew Morton
Cc: John Hubbard
Cc: Jérôme Glisse
Cc: Jan Kara
Cc: Dan Williams
Cc: linux-mm@kvack.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-samsung-soc@vger.kernel.org
Cc: linux-media@vger.kernel.org
Signed-off-by: Daniel Vetter
--
v2: Fix inversion in the retry check (John).
---
 include/linux/mm.h |  3 ++-
 mm/memory.c        | 44 ++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 44 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index acd60fbf1a5a..2a16631c1fda 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -566,7 +566,8 @@ struct vm_operations_struct {
 	vm_fault_t (*pfn_mkwrite)(struct vm_fault *vmf);
 
 	/* called by access_process_vm when get_user_pages() fails, typically
-	 * for use by special VMAs that can switch between memory and hardware
+	 * for use by special VMAs. See also generic_access_phys() for a generic
+	 * implementation useful for any iomem mapping.
 	 */
 	int (*access)(struct vm_area_struct *vma, unsigned long addr,
 		      void *buf, int len, int write);
diff --git a/mm/memory.c b/mm/memory.c
index fcfc4ca36eba..f7cbc4dde0ef 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4873,28 +4873,68 @@ int follow_phys(struct vm_area_struct *vma,
 	return ret;
 }
 
+/**
+ * generic_access_phys - generic implementation for iomem mmap access
+ * @vma: the vma to access
+ * @addr: userspace address, not relative offset within @vma
+ * @buf: buffer to read/write
+ * @len: length of transfer
+ * @write: set to FOLL_WRITE when writing, otherwise reading
+ *
+ * This is a generic implementation for &vm_operations_struct.access for an
+ * iomem mapping. This callback is used by access_process_vm() when the @vma is
+ * not page based.
+ */
 int generic_access_phys(struct vm_area_struct *vma, unsigned long addr,
 			void *buf, int len, int write)
 {
 	resource_size_t phys_addr;
 	unsigned long prot = 0;
 	void __iomem *maddr;
+	pte_t *ptep, pte;
+	spinlock_t *ptl;
 	int offset = addr & (PAGE_SIZE-1);
+	int ret = -EINVAL;
+
+	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
+		return -EINVAL;
+
+retry:
+	if (follow_pte(vma->vm_mm, addr, &ptep, &ptl))
+		return -EINVAL;
+	pte = *ptep;
+	pte_unmap_unlock(ptep, ptl);
 
-	if (follow_phys(vma, addr, write, &prot, &phys_addr))
+	prot = pgprot_val(pte_pgprot(pte));
+	phys_addr = (resource_size_t)pte_pfn(pte) << PAGE_SHIFT;
+
+	if ((write & FOLL_WRITE) && !pte_write(pte))
 		return -EINVAL;
 
 	maddr = ioremap_prot(phys_addr, PAGE_ALIGN(len + offset), prot);
 	if (!maddr)
 		return -ENOMEM;
 
+	if (follow_pte(vma->vm_mm, addr, &ptep, &ptl))
+		goto out_unmap;
+
+	if (!pte_same(pte, *ptep)) {
+		pte_unmap_unlock(ptep, ptl);
+		iounmap(maddr);
+
+		goto retry;
+	}
+
 	if (write)
 		memcpy_toio(maddr + offset, buf, len);
 	else
 		memcpy_fromio(buf, maddr + offset, len);
+	ret = len;
+	pte_unmap_unlock(ptep, ptl);
+out_unmap:
 	iounmap(maddr);
 
-	return len;
+	return ret;
 }
 EXPORT_SYMBOL_GPL(generic_access_phys);
 #endif
-- 
2.28.0