Date: Tue, 3 Sep 2019 05:11:55 -0700
From: Matthew Wilcox
To: Michal Hocko
Cc: William Kucharski, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, Dave Hansen, Song Liu, Bob Kasten,
	Mike Kravetz, Chad Mynhier, "Kirill A. Shutemov", Johannes Weiner
Subject: Re: [PATCH v5 1/2] mm: Allow the page cache to allocate large pages
Message-ID: <20190903121155.GD29434@bombadil.infradead.org>
References: <20190902092341.26712-1-william.kucharski@oracle.com>
	<20190902092341.26712-2-william.kucharski@oracle.com>
	<20190903115748.GS14028@dhcp22.suse.cz>
In-Reply-To: <20190903115748.GS14028@dhcp22.suse.cz>

On Tue, Sep 03, 2019 at 01:57:48PM +0200, Michal Hocko wrote:
> On Mon 02-09-19 03:23:40, William Kucharski wrote:
> > Add an 'order' argument to __page_cache_alloc() and
> > do_read_cache_page(). Ensure the allocated pages are compound pages.
>
> Why do we need to touch all the existing callers and change them to use
> order 0 when none is actually converted to a different order? This just
> seems to add a lot of code churn without a good reason. If anything I
> would simply add __page_cache_alloc_order and make __page_cache_alloc
> call it with an order 0 argument.

Patch 2/2 uses a non-zero order.  I agree it's a lot of churn without
good reason; that's why I tried to add GFP_ORDER flags a few months ago.
Unfortunately, you didn't like that approach either.

> Also, is it so much to ask callers to provide __GFP_COMP explicitly?

Yes, it's an unreasonable burden on the callers.  Those that pass 0
will have the test optimised away by the compiler (in the non-NUMA
case).  In the NUMA case, passing zero costs only a couple of extra
instructions to avoid setting the __GFP_COMP flag.

> >  #ifdef CONFIG_NUMA
> > -extern struct page *__page_cache_alloc(gfp_t gfp);
> > +extern struct page *__page_cache_alloc(gfp_t gfp, unsigned int order);
> >  #else
> > -static inline struct page *__page_cache_alloc(gfp_t gfp)
> > +static inline struct page *__page_cache_alloc(gfp_t gfp, unsigned int order)
> >  {
> > -	return alloc_pages(gfp, 0);
> > +	if (order > 0)
> > +		gfp |= __GFP_COMP;
> > +	return alloc_pages(gfp, order);
> >  }
> >  #endif
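
For reference, the alternative Michal describes would look roughly like
the sketch below (non-NUMA case only; illustrative, not code from either
patch in this series):

/* Hypothetical: a new helper takes the order, the existing helper wraps it. */
static inline struct page *__page_cache_alloc_order(gfp_t gfp,
		unsigned int order)
{
	if (order > 0)
		gfp |= __GFP_COMP;	/* large pages must be compound */
	return alloc_pages(gfp, order);
}

static inline struct page *__page_cache_alloc(gfp_t gfp)
{
	return __page_cache_alloc_order(gfp, 0);
}

That shape leaves the existing callers untouched, at the cost of a second
entry point into the page cache allocation path.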