Date: Thu, 20 Jan 2022 16:37:01 +0000
Subject: Re: [PATCH] vmap(): don't allow invalid pages
From: Robin Murphy
To: "Russell King (Oracle)"
Cc: Matthew Wilcox, Yury Norov, Catalin Marinas, Will Deacon,
 Andrew Morton, Nicholas Piggin, Ding Tianhong, Anshuman Khandual,
 Alexey Klimov, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org
References: <20220118235244.540103-1-yury.norov@gmail.com>
 <319b09bc-56a2-207f-6180-3cc7d8cd43d1@arm.com>
X-Mailing-List:
linux-kernel@vger.kernel.org

On 2022-01-20 13:03, Russell King (Oracle) wrote:
> On Thu, Jan 20, 2022 at 12:22:35PM +0000, Robin Murphy wrote:
>> On 2022-01-19 19:12, Russell King (Oracle) wrote:
>>> On Wed, Jan 19, 2022 at 06:43:10PM +0000, Robin Murphy wrote:
>>>> Indeed, my impression is that the only legitimate way to get hold
>>>> of a page pointer without assumed provenance is via pfn_to_page(),
>>>> which is where pfn_valid() comes in. Thus pfn_valid(page_to_pfn())
>>>> really *should* be a tautology.
>>>
>>> That can only be true if pfn == page_to_pfn(pfn_to_page(pfn)) for
>>> all values of pfn.
>>>
>>> Given how pfn_to_page() is defined in the sparsemem case:
>>>
>>> #define __pfn_to_page(pfn)					\
>>> ({	unsigned long __pfn = (pfn);				\
>>> 	struct mem_section *__sec = __pfn_to_section(__pfn);	\
>>> 	__section_mem_map_addr(__sec) + __pfn;			\
>>> })
>>> #define page_to_pfn __page_to_pfn
>>>
>>> that isn't the case, especially when looking at page_to_pfn():
>>>
>>> #define __page_to_pfn(pg)					\
>>> ({	const struct page *__pg = (pg);				\
>>> 	int __sec = page_to_section(__pg);			\
>>> 	(unsigned long)(__pg - __section_mem_map_addr(__nr_to_section(__sec)));	\
>>> })
>>>
>>> Where:
>>>
>>> static inline unsigned long page_to_section(const struct page *page)
>>> {
>>> 	return (page->flags >> SECTIONS_PGSHIFT) & SECTIONS_MASK;
>>> }
>>>
>>> So if page_to_section() returns something that is, e.g. zero for an
>>> invalid page in a non-zero section, you're not going to end up with
>>> the right pfn from page_to_pfn().
>>
>> Right, I emphasised "should" in an attempt to imply "in the absence
>> of serious bugs that have further-reaching consequences anyway".
>>
>>> As I've said now a couple of times, trying to determine if a struct
>>> page pointer is valid is the wrong question to be asking.
>>
>> And doing so in one single place, on the justification of avoiding
>> an incredibly niche symptom, is even more so.
>> Not to mention that an
>> address size fault is one of the best possible outcomes anyway, vs.
>> the untold damage that may stem from accesses actually going through
>> to random parts of the physical memory map.
>
> I don't see it as a "niche" symptom.

The commit message specifically cites a Data Abort "at address
translation later". Broadly speaking, a Data Abort due to an address
size fault only occurs if you've been lucky enough that the bogus PA
which got mapped is so spectacularly wrong that it's beyond the range
configured in TCR.IPS. How many other architectures even have a
mechanism like that?

> If we start off with the struct page being invalid, then the result
> of page_to_pfn() cannot be relied upon to produce something that is
> meaningful - which is exactly why the vmap() issue arises.
>
> With a pfn_valid() check, we at least know that the PFN points at
> memory.

No, we know it points to some PA space which has a struct page to
represent it. pfn_valid() only says that pfn_to_page() will yield a
valid result. That also includes things like reserved pages covering
non-RAM areas, where a kernel VA mapping existing at all could
potentially be fatal to the system even if it's never explicitly
accessed - for all we know it might be a carveout belonging to
overly-aggressive Secure software such that even a speculative
prefetch might trigger an instant system reset.

> However, that memory could be _anything_ in the system - it could be
> the kernel image, and it could give userspace access to change
> kernel code.
>
> So, while it is useful to do a pfn_valid() check in vmap(), as I said
> to willy, this must _not_ be the primary check. It should IMHO use
> WARN_ON() to make it blatantly obvious that it should be something we
> expect _not_ to trigger under normal circumstances, but is there to
> catch programming errors elsewhere.
Rather, "to partially catch unrelated programming errors elsewhere,
provided the buggy code happens to call vmap() rather than any of the
many other functions with a struct page * argument." That's where it
stretches my definition of "useful" just a bit too far.

It's not about perfect being the enemy of good, it's about why vmap()
should be special, and death by a thousand "useful" cuts - if we don't
trust the pointer, why not check its alignment for basic plausibility
first? If it seems valid, why not check if the page flags look
sensible to make sure? How many useful little checks is too many?
Every bit of code footprint and execution overhead imposed
unconditionally on all end users to theoretically save developers'
debugging time still adds up.

Although on that note, it looks like arch/arm's pfn_valid() is still a
linear scan of the memblock array, so the overhead of adding that for
every page in every vmap() might not even be so small...

Robin.