Date: Tue, 3 Sep 2019 14:19:52 +0200
From: Michal Hocko
To: Matthew Wilcox
Cc: William Kucharski, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, Dave Hansen, Song Liu, Bob Kasten,
	Mike Kravetz, Chad Mynhier, "Kirill A. Shutemov", Johannes Weiner
Subject: Re: [PATCH v5 1/2] mm: Allow the page cache to allocate large pages
Message-ID: <20190903121952.GU14028@dhcp22.suse.cz>
References: <20190902092341.26712-1-william.kucharski@oracle.com>
	<20190902092341.26712-2-william.kucharski@oracle.com>
	<20190903115748.GS14028@dhcp22.suse.cz>
	<20190903121155.GD29434@bombadil.infradead.org>
In-Reply-To: <20190903121155.GD29434@bombadil.infradead.org>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue 03-09-19 05:11:55, Matthew Wilcox wrote:
> On Tue, Sep 03, 2019 at 01:57:48PM +0200, Michal Hocko wrote:
> > On Mon 02-09-19 03:23:40, William Kucharski wrote:
> > > Add an 'order' argument to __page_cache_alloc() and
> > > do_read_cache_page(). Ensure the allocated pages are compound pages.
> >
> > Why do we need to touch all the existing callers and change them to use
> > order 0 when none is actually converted to a different order? This just
> > seems to add a lot of code churn without a good reason. If anything I
> > would simply add __page_cache_alloc_order and make __page_cache_alloc
> > call it with an order 0 argument.
>
> Patch 2/2 uses a non-zero order.

It is a new caller, so it can use a new function, right?

> I agree it's a lot of churn without good reason; that's why I tried to
> add GFP_ORDER flags a few months ago. Unfortunately, you didn't like
> that approach either.

Is there any future plan that all/most __page_cache_alloc callers will
get a non-zero order argument?

> > Also, is it so much to ask callers to provide __GFP_COMP explicitly?
>
> Yes, it's an unreasonable burden on the callers.

Care to explain why? __GFP_COMP is already used quite extensively in the
kernel.

> Those that pass 0 will
> have the test optimised away by the compiler (for the non-NUMA case).
> For the NUMA case, passing zero is going to be only a couple of extra
> instructions to not set the GFP_COMP flag.
>
> > >  #ifdef CONFIG_NUMA
> > > -extern struct page *__page_cache_alloc(gfp_t gfp);
> > > +extern struct page *__page_cache_alloc(gfp_t gfp, unsigned int order);
> > >  #else
> > > -static inline struct page *__page_cache_alloc(gfp_t gfp)
> > > +static inline struct page *__page_cache_alloc(gfp_t gfp, unsigned int order)
> > >  {
> > > -	return alloc_pages(gfp, 0);
> > > +	if (order > 0)
> > > +		gfp |= __GFP_COMP;
> > > +	return alloc_pages(gfp, order);
> > >  }
> > >  #endif

-- 
Michal Hocko
SUSE Labs