Date: Mon, 12 Apr 2021 12:19:51 +0100
From: Mel Gorman
To: Vlastimil Babka
Cc: Andrew Morton, Chuck Lever, Jesper Dangaard Brouer, Christoph Hellwig,
	Alexander Duyck, Matthew Wilcox, Ilias Apalodimas, LKML, Linux-Net,
	Linux-MM, Linux-NFS
Subject: Re: [PATCH 2/9] mm/page_alloc: Add a bulk page allocator
Message-ID: <20210412111951.GW3697@techsingularity.net>
References: <20210325114228.27719-1-mgorman@techsingularity.net>
	<20210325114228.27719-3-mgorman@techsingularity.net>
	<28729c76-4e09-f860-0db1-9c79c8220683@suse.cz>
	<20210412105938.GU3697@techsingularity.net>
In-Reply-To: <20210412105938.GU3697@techsingularity.net>

On Mon, Apr 12, 2021 at 11:59:38AM +0100, Mel Gorman wrote:
> > I don't understand this comment. Only alloc_flags_nofragment() sets this
> > flag and we don't use it here?
> >
>
> It's there as a reminder that there are non-obvious consequences to
> ALLOC_NOFRAGMENT that may affect the bulk allocation success rate.
> __rmqueue_fallback() will only select pageblock_order pages and, if that
> fails, we fall into the slow path that allocates a single page. I didn't
> deal with it because it was not obvious that it's even relevant, but I
> bet that in 6 months' time I'll have forgotten that ALLOC_NOFRAGMENT may
> affect success rates without the comment. I'm waiting for a bug that can
> be trivially triggered with a meaningful workload where the success rate
> is poor enough to affect latency before adding complexity. Ideally by
> then, the allocation paths would be unified a bit better.
>

So this needs better clarification. ALLOC_NOFRAGMENT is not a problem at
the moment, but at one point during development it was a non-obvious
potential problem. If the paths are unified, ALLOC_NOFRAGMENT *potentially*
becomes a problem depending on how the unification is done, and it needs
careful consideration. For example, the paths could be partly unified by
moving the alloc_flags_nofragment() call into prepare_alloc_pages(): the
call always happens in __alloc_pages(), so it looks like an obvious partial
unification (see the sketch after my signature). Hence the "May set
ALLOC_NOFRAGMENT" comment; I wanted a reminder in case I "fixed" this in 6
months' time and forgot the downside.

-- 
Mel Gorman
SUSE Labs
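
For anyone reading this later, here is a rough illustration of that
"obvious" partial unification and why it would bite the bulk allocator.
This is an untested sketch only, not a proposal; the surrounding code and
parameter names are approximations of the mm tree this series is based on:

	/* Hypothetical tail of prepare_alloc_pages() after the move */
	ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
					ac->highest_zoneidx, ac->nodemask);

	/*
	 * Moved here from __alloc_pages(). It looks like a clean
	 * unification because __alloc_pages() always does it right after
	 * prepare_alloc_pages(), but now every caller of
	 * prepare_alloc_pages(), including __alloc_pages_bulk(), silently
	 * picks up ALLOC_NOFRAGMENT. With ALLOC_NOFRAGMENT,
	 * __rmqueue_fallback() will only steal pageblock_order pages
	 * before we punt to the single-page slow path, which could hurt
	 * the bulk allocation success rate.
	 */
	*alloc_flags |= alloc_flags_nofragment(ac->preferred_zoneref->zone,
					       gfp_mask);

	return true;

That silent behaviour change in the bulk path is the downside the "May set
ALLOC_NOFRAGMENT" comment is there to guard against.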