Subject: Re: *alloc API changes
To: Kees Cook, Matthew Wilcox
Cc: Matthew Wilcox, Linux-MM, LKML, Rasmus Villemoes
References: <20180505034646.GA20495@bombadil.infradead.org>
 <20180507113902.GC18116@bombadil.infradead.org>
 <20180507201945.GB15604@bombadil.infradead.org>
From: John Johansen
Organization: Canonical
Message-ID: <45a048cc-6f80-113f-a508-b23e60251237@canonical.com>
Date: Mon, 7 May 2018 14:48:43 -0700
X-Mailing-List: linux-kernel@vger.kernel.org

On 05/07/2018 01:27 PM, Kees Cook wrote:
> On Mon, May 7, 2018 at 1:19 PM, Matthew Wilcox wrote:
>> On Mon, May 07, 2018 at 09:03:54AM -0700, Kees Cook wrote:
>>> On Mon, May 7, 2018 at 4:39 AM, Matthew Wilcox wrote:
>>>> On Fri, May 04, 2018 at 09:24:56PM -0700, Kees Cook wrote:
>>>>> On Fri, May 4, 2018 at 8:46 PM, Matthew Wilcox wrote:
>>>>> The only fear I have with the saturating helpers is that we'll end up
>>>>> using them in places that don't recognize SIZE_MAX. Like, say:
>>>>>
>>>>>     size = mul(a, b) + 1;
>>>>>
>>>>> then *poof* size == 0. Now, I'd hope that code would use
>>>>> add(mul(a, b), 1), but still... it makes me nervous.
>>>>
>>>> That's reasonable. So let's add:
>>>>
>>>> #define ALLOC_TOO_BIG (PAGE_SIZE << MAX_ORDER)
>>>>
>>>> (there's a presumably somewhat obsolete CONFIG_FORCE_MAX_ZONEORDER on
>>>> some architectures which allows people to configure MAX_ORDER all the
>>>> way up to 64. That config option needs to go away, or at least be
>>>> limited to a much lower value).
>>>>
>>>> On x86, that's 4k << 11 = 8MB. On PPC, that might be 64k << 9 == 32MB.
>>>> Those values should be relatively immune to further arithmetic causing
>>>> an additional overflow.
>>>
>>> But we can do larger than 8MB allocations with vmalloc, can't we?
>>
>> Yes. And today with kvmalloc. However, I proposed to Linus that
>> kvmalloc() shouldn't allow it -- we should have kvmalloc_large() which
>> would, but kvmalloc wouldn't. He liked that idea, so I'm going with it.
>
> How would we handle size calculations for _large?
>
>> There are very, very few places which should need kvmalloc_large.
>> That's one million 8-byte pointers. If you need more than that inside
>> the kernel, you're doing something really damn weird and should do
>> something that looks obviously different.
>
> I'm CCing John since I remember long ago running into problems loading
> the AppArmor DFA with kmalloc and switching it to kvmalloc. John, how
> large can the DFAs for AppArmor get? Would an 8MB limit be a problem?
Theoretically yes, and I have done tests with policy larger than that, but
in practice I have never seen it. The largest I have seen in practice is
about 1.5MB. The policy container that wraps the DFA could be larger if
it's wrapping multiple policy sets (think pre-loading policy for multiple
containers in one go), but we don't do that currently, and there is no
requirement for that to be handled with a single allocation.

We have some improvements coming that will reduce our policy size and make
it possible to split some of the larger DFAs into multiple allocations, so
I really don't expect this to be a problem. If it does become an issue, we
know the size of the allocation needed and can just add a condition that
calls kvmalloc_large() when needed.