Date: Mon, 10 Jun 2019 15:30:55 -0700
From: Eric Biggers
To: Kees Cook
Cc: Matthew Wilcox, Laura Abbott, Andrew Morton, Rik van Riel,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] usercopy: Remove HARDENED_USERCOPY_PAGESPAN
Message-ID: <20190610223054.GT63833@gmail.com>
References: <201905101341.A17DDD7@keescook>
	<201905111657.76FE0BDCEC@keescook>
	<20190512041142.GA24542@bombadil.infradead.org>
	<201905131430.541A76A6FE@keescook>
In-Reply-To: <201905131430.541A76A6FE@keescook>

On Mon, May 13, 2019 at 02:32:43PM -0700, Kees Cook wrote:
> On Sat, May 11, 2019 at 09:11:42PM -0700, Matthew Wilcox wrote:
> > On Sat, May 11, 2019 at 05:03:08PM -0700, Kees Cook wrote:
> > > On Fri, May 10, 2019 at 08:41:43PM -0400, Laura Abbott wrote:
> > > > On 5/10/19 3:43 PM, Kees Cook wrote:
> > > > > This feature continues to cause more problems than it solves[1]. Its
> > > > > intention was to check the bounds of page-allocator allocations by using
> > > > > __GFP_COMP, for which we would need to find all missing __GFP_COMP
> > > > > markings. This work has been on hold and there is an argument[2]
> > > > > that such markings are not even the correct signal for checking for
> > > > > same-allocation pages. Instead of depending on BROKEN, this just removes
> > > > > it entirely. It can be trivially reverted if/when a better solution for
> > > > > tracking page allocator sizes is found.
> > > > >
> > > > > [1] https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg37479.html
> > > > > [2] https://lkml.kernel.org/r/20190415022412.GA29714@bombadil.infradead.org
> > > >
> > > > I agree the page spanning is broken, but is it worth keeping the
> > > > checks against __rodata, __bss, etc.?
> > >
> > > They're all just white-listing later checks (except RODATA, which is
> > > doing a cheap RO test that is redundant on any architecture with actual
> > > rodata...), so they don't have any value in staying without the rest of
> > > the page allocator logic.
> > >
> > > > > -	/* Is the object wholly within one base page? */
> > > > > -	if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
> > > > > -		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
> > > > > -		return;
> > > > > -
> > > > > -	/* Allow if fully inside the same compound (__GFP_COMP) page. */
> > > > > -	endpage = virt_to_head_page(end);
> > > > > -	if (likely(endpage == page))
> > > > > -		return;
> > >
> > > We _could_ keep the mixed CMA/reserved/neither check if we really wanted
> > > to, but that's such a corner case of a corner case that I'm not sure it's
> > > worth doing the virt_to_head_page() across the whole span to figure
> > > it out.
> >
> > I'd delete that first check, because it's a subset of the second check,
>
> It seemed easier to short-circuit with a math test before doing the
> slightly more expensive virt_to_head_page(end) call. Do you think that's
> sensible?
>
> >	/* Is the object wholly within a single (base or compound) page? */
> >	endpage = virt_to_head_page(end);
> >	if (likely(endpage == page))
> >		return;
> >
> >	/*
> >	 * If the start and end are more than MAX_ORDER apart, they must
> >	 * be from separate allocations.
> >	 */
> >	if (n >= (PAGE_SIZE << MAX_ORDER))
> >		usercopy_abort("spans too many pages", NULL, to_user, 0, n);
> >
> >	/*
> >	 * If neither page is compound, we can't tell whether the object is
> >	 * within a single allocation or not.
> >	 */
> >	if (!PageCompound(page) && !PageCompound(endpage))
> >		return;
> >
> > > I really wish we had the size of the allocation reliably held somewhere.
> > > We'll need it for doing memory tagging of the page allocator too...
> >
> > I think we'll need to store those allocations in a separate data structure
> > on the side. As far as the rest of the kernel is concerned, those struct
> > pages belong to them once the page allocator has given them.
>
> Okay, let me work up a page-type refactoring while allocation size can
> stay back-burnered.
>
> --
> Kees Cook

Any progress on this patch?

- Eric
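
A minimal sketch of the "separate data structure on the side" idea discussed
above, kept only as an illustration: an xarray keyed by PFN that remembers the
order of each page-allocator allocation. Everything below is assumed rather
than taken from the thread or from mainline. The hook points and the names
alloc_size_xa, record_alloc_size(), lookup_alloc_size(), and forget_alloc_size()
are invented, and the unconditional GFP_KERNEL glosses over atomic-context
allocations and the bookkeeping overhead a real implementation would have to
address.

/*
 * Hypothetical sketch only: remember page-allocator allocation sizes
 * in a side xarray keyed by PFN.  Names and hook points are invented
 * for illustration; this is not part of the patch under discussion.
 */
#include <linux/xarray.h>
#include <linux/mm.h>

static DEFINE_XARRAY(alloc_size_xa);	/* pfn -> allocation order */

/* Imagined hook called right after alloc_pages() succeeds. */
static int record_alloc_size(struct page *page, unsigned int order)
{
	/* GFP_KERNEL is a simplification; real callers may be atomic. */
	return xa_err(xa_store(&alloc_size_xa, page_to_pfn(page),
			       xa_mk_value(order), GFP_KERNEL));
}

/* What a usercopy-style check could ask: how big is this allocation? */
static unsigned long lookup_alloc_size(struct page *page)
{
	void *entry = xa_load(&alloc_size_xa, page_to_pfn(page));

	/* Unknown allocation: return 0 so the caller can skip the check. */
	if (!entry)
		return 0;
	return PAGE_SIZE << xa_to_value(entry);
}

/* Imagined hook called from the corresponding free path. */
static void forget_alloc_size(struct page *page)
{
	xa_erase(&alloc_size_xa, page_to_pfn(page));
}

The sketch only makes the trade-off concrete: the bookkeeping itself is
simple, but every allocation and free would pay for an extra xarray update,
which is presumably part of why the thread leaves the idea back-burnered.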